This is incredible. Any civil being able to reach this kind of information in just minutes is indescribable, priceless.
Thanks. Glad you enjoyed the content.
Originally, I thought you were extremely skilled at writing backwards, but then I realized that you can film it backwards and then flip the video. Good effect.
One giveaway is that the video shows you using your left hand to write, but thumbnail images show you using your right hand.
That's great John. How great of you to point that out. Ya figured it out!
Well done catching that. You must have a huge neural network in your noggin :D
@@untitled746 I did notice one time when I was young that if you put your left hand out the door of a glasshouse and write on the outside with your finger in the dampness, it's easy to write in a way that reads properly inside the glasshouse.
Way easier than trying to write on paper with your left hand.
Chirality?
Luv and Peace.
lol was wondering about that exact thing
Thanks IBM for this series of videos. It's been very useful.
Thanks for this video. You held my attention for the full duration!
I, as a neural network, look at this video and think, 'Yes, this is what we've always talked about!'
Why do you enjoy lying to people on the internet?
Thank you so much for this ... the regression example has really helped me understand how decisions are made in AI
This presenter, Martin Keen, embodies Einstein's quote "If you can't explain it simply, you don't understand it well enough." Keen certainly understands AI and is a master at explaining things simply.
My worlds have collided. Martin is now helping me on my AI journey AND providing me with interesting information on beer brewing experiments. Awesome!
I normally watch this guy do homebrew videos, and now my mind is blown
Thank you... This is really great and explained the actual nuances very clearly
I'm interested in applying ANNs to generate synthetic data to feed and calibrate an options pricing model that incorporates stochastic volatility, so thank you for this brief introductory video on ANNs.
Hello, I am building a similar ANN but have run into some problems regarding dropout regularisation. I have also researched a bit about neural networks for my project. Do you think we could get in touch and share some ideas?
How do neural networks help computers recognize patterns?
Hi, I have a question regarding the threshold used in the equation to calculate the y-hat value. What is a threshold, and why did you choose 3 specifically? Is it related to the number of factors taken into consideration? Thanks
Only 45 comments under a video like this is a sin. I need to get addicted to this stuff.
Writing backwards showcases his talent.
This is incredible.
I think I am going to use that formula for making decisions in my daily activities
Thanks that was truly helpful for new starters
Can you recommend any books about the topic "Artificial Neural Networks" for beginners ?
absolutely fun to learn from you, big thank you!
In the surfer example, how did you select "3" as the threshold value?
That's how many items are being measured. There are 3 x values with corresponding weights.
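As a concrete sketch of that weighted-sum decision (the input names, weights, and threshold below are illustrative stand-ins, not necessarily the exact numbers from the video):

```python
# A perceptron-style decision: output 1 ("go surf") when the
# weighted sum of the inputs reaches the threshold, else 0.
def decide(x, w, threshold):
    weighted_sum = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if weighted_sum >= threshold else 0

# Three yes/no inputs, each with its own importance weight
# (hypothetical values): good waves, empty lineup, no sharks.
inputs = [1, 0, 1]
weights = [5, 2, 4]
print(decide(inputs, weights, 3))  # 5 + 0 + 4 = 9 >= 3, so prints 1
```

The threshold just sets how much positive evidence the inputs must accumulate before the output flips from 0 to 1.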
is the example of wind surfing based on a perceptron or on a more complex neural network?
Very well done.
simple and precise Thanks
incredible bro
thank you
Thanks
Good explanation, and yeah, I have subscribed. Thanks!
Is IBM still a thing?
Sharks you said? Sharks are always a 5. But yeah otherwise good quick intro.
I just learned about artificial brains (pretty much) in five minutes. Wow.
Are those "neurons" simple chips that transfer and process the information?
In neural networks, can weights be negative values?
Yes, it's possible, but it depends on how you build your architecture.
If, instead of all positive values, you have a data type that can be represented by both positive and negative values, it may be useful to use negative values as well.
For example, if you have a conversational AI that represents happy words or notions with positive values and negative words or notions with negative values, it could prove useful to have some negatively weighted neurons that may produce a negative output.
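A tiny sketch of that idea (the names and numbers are made up for illustration): a negative weight lets an input push the total down instead of up.

```python
def weighted_sum(x, w):
    # Each input is multiplied by its weight; negative weights
    # subtract from the total instead of adding to it.
    return sum(xi * wi for xi, wi in zip(x, w))

# "sharks present" carries a negative weight, so it argues
# against a positive ("go surf") decision.
x = [1, 1]           # [waves are good, sharks present]
w = [4.0, -5.0]      # sharks outweigh good waves
print(weighted_sum(x, w))  # prints -1.0: below any positive threshold
```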
I don't understand what the threshold value is referring to, for example the 3.
How is it connected to the graphs?
Thanks sir
I would have thought that the presence of sharks was probably a bigger consideration than the quality of the waves, but perhaps that's just me...
amazing
A lot was hidden behind 'cost function' and 'gradient descent' which left me feeling like the kernel of understanding was incomplete.
Is it possible to split individuals into training and test samples, instead of observations, when training ML models?
Interesting! But what for? Any examples?... M
How does one decide on the Threshold level?
That is actually what training changes! A training program will adjust the weights based on the difference between the expected output and the actual output, slowly nudging them up or down until the model fits.
Keep in mind that sometimes there are extra details (which I'm not well versed in myself), like activation functions, and I'll stop there because I'm out of my depth in that area.
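A minimal sketch of that adjust-until-it-fits loop, using the classic perceptron update rule on a made-up AND-gate dataset (all names and numbers here are illustrative):

```python
# Perceptron learning rule: nudge each weight by (expected - actual)
# times the input, and the bias by (expected - actual).
def train(samples, epochs=10, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for x, expected in samples:
            actual = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            error = expected - actual
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy AND-gate data: output should be 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print(w, b)  # learned weights now classify all four cases correctly
```

Real networks use a cost function and gradient descent rather than this simple rule, but the idea is the same: compare output to expectation and adjust.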
Isn't the threshold 5?
This is great! But can you write bigger so we can read it too?
I believe if you can hook up to a monitor, it will appear bigger. The easiest I've found is to actually run an HDMI from my laptop to a TV, but with modern features, a phone or laptop can screencast to a smart tv.
what is a longap?
So… if you trained your language prediction model on, say, academic libraries instead of Twitter you might get a more reliable tool?
Like a medical assistant trained purely on peer-reviewed medical libraries? Is anyone doing that?
But you didn't really explain what the nodes do. You explained the progression from input to hidden to output, and then you showed us how an algorithm works, but I didn't gain any understanding of the individual nodes, how they interact, and what they do. Did I just miss it?
Perhaps you should consider software engineering school
Lack of sharks is definitely more important than the waves in my (non-surfing) opinion.
I wonder where you are writing?
See ibm.biz/write-backwards
Very good short video. But your voice occasionally drops to an inaudible level.
Thank you !
How did you come up with -3 as the threshold ?
I think the threshold is 5, because it's the maximum weight
Randomly, I believe. You randomly select initial weight and bias values, and through training the model finds the optimal values, using the cost function to minimize the error.
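A minimal sketch of that process for a single linear neuron (the toy data, seed, and learning rate are invented for illustration): start from random values, then let gradient descent on a mean-squared-error cost pull them toward the optimum.

```python
import random

random.seed(0)

# Random starting point for the weight and bias.
w = random.uniform(-1, 1)
b = random.uniform(-1, 1)

# Toy data generated by y = 2x + 1, so training should
# drive w toward 2 and b toward 1.
data = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

lr = 0.1
for _ in range(2000):
    # Gradients of the mean-squared-error cost w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # close to 2.0 and 1.0
```

The random start just gives the descent somewhere to begin; the cost function is what steers it to good values.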
Most Aussie/New Zealand thing I think I have ever witnessed is weighing the quality of the waves heavier than whether or not there are sharks out there...
Where to know more about this. Any web link or course?
Thanks for the question. Here are a couple links that may be of use to you. developer.ibm.com/articles/l-neural/ www.ibm.com/products/spss-neural-networks
3blue1brown has a very good series on neural networks. The network they show is a simple one, and architectures have improved over the decades, but it is a good primer for understanding the basic ideas.
Isn't there already a symbol for "hat", namely "^"? Why write out the symbol's full name?
Because he’s teaching. Why do you care?
Ooooooh my God it's the homebrew guy
It would take years of hard training for me to be able to write backwards like the IBM dude
See ibm.biz/write-backwards
why is my beer guy on this side of the algorithm???
I was far too distracted thinking about the cons of the speaker having to write backwards... Looks cool, but is mostly illegible. :)
His writing was poor (same as mine), and I assume a simple mirror/flip superimposition was used. Very effective, but I too was distracted by this simple effect.
Bro, make it six things. Activation functions are important and you haven't mentioned them.
Spiking neural networks do not have activation functions. A spiking neuron has an update function instead, which calculates its state at time t.
how is he writing backwards????
OK, hold on. So you're saying that if the neural network searches the entire internet and there have not been any shark attacks, then it would be safe to go swimming?
Perhaps
It is not like the human brain.
My brain functions differently from what you are explaining now
Concord effect i interrupt
Like no 5K
Lebsack Corners
Wow, he literally wrote "y hat" :p instead of drawing a hat on top of the y
I WANT IT UNDER 50 SECONDS, NOBODY GOT TIME FOR THIS
Good, but some parts were poorly explained and rushed. You can't just say "we leverage supervised learning on labelled datasets" without explaining it and expect people to understand 🤣
So it's basically trying to simulate the way the brain processes data
He did not explain anything!
Why are these videos mostly garbage? Because they don't really explain why it works.
A very poor explanation of Neural Networks
Another unhelpful video. Copy and paste.
What markers are you using ?