One thing you should definitely do is try your Perceptron with new data: you can create another 100 Points, exclude these Points from training, and test the Perceptron with them... also a very cool way to test your Perceptron is to add a Point at the click of the mouse and let the Perceptron label it (this way, if it's wrong, you are gonna see a black point inside the white blob). Anyway, really good work. I love your way of teaching, I love programming, I love machine learning, and out of all the videos and blog posts and slides that I've read, you are the one that can really make something easy to understand. Never stop being like this! ;)
P.s. if you need it, I made a port to p5.js with these suggestions implemented. :)
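A minimal Processing sketch of that mouse-click test, assuming the video's trained Perceptron is called ptron and its inputs are pixel coordinates plus a bias of 1 (these names are guesses and may differ from the actual code):

void mousePressed() {
  float[] inputs = {mouseX, mouseY, 1};   // a brand-new point the Perceptron never trained on
  int guess = ptron.guess(inputs);        // let the Perceptron label it
  stroke(0);
  fill(guess == 1 ? 255 : 0);             // white for +1, black for -1
  ellipse(mouseX, mouseY, 16, 16);        // a wrong label shows up as a black point
}                                         // inside the white blob (or the reverse)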
Thanks, I'll do this on tomorrow's live stream! (Hope I remember). If you like, you can pull request the p5 version here: github.com/CodingTrain/Rainbow-Code/tree/master/CodingChallenges
The Coding Train Thanks for your great way of teaching! ;)
Paolo Ricciuti can you help me out! I am trying to make a game using simple code but I just can't seem to finish it. I have a ball controlled by the keyboard, and I am trying to keep it from ever moving outside the window. Also, I am trying to detect lines (so if the ball hits a line it loops back to specific coordinates). Thank you
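A hedged Processing-style sketch of both ideas, assuming the ball's position is stored in x and y with radius r, and the line is horizontal at lineY (all of these names are hypothetical):

// keep the keyboard-controlled ball inside the window
x = constrain(x, r, width - r);
y = constrain(y, r, height - r);
// detect a horizontal line at lineY: if the ball touches it, send it back
if (abs(y - lineY) < r) {
  x = startX;   // startX / startY: hypothetical respawn coordinates
  y = startY;
}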
Paolo Ricciuti How do you detect a color in an if statement? I am trying to have it so that if ellipse/x is greater than (color), the ball loops back to 30.
Cool... can I get a hold of that?
your speaking skills, your interactiveness, your teaching ability...best online teacher I've ever met
*mated
"How to train your Perceptron" (2017)
IMDb 8.7/10
I'd watch.
Efferto93 only an 8.7? You better change that to a 10/10
haters gonna hate man
@Toochilledtocare-_- it will be in a bit
Has a little something for everyone
Also followed along, but doing it in JavaScript. Cool little project. I trained it on 500 circles and then did a test set of 100. I used frameRate instead of mouse clicks to slow the animation. Occasionally I got some weird oscillating behaviour at around 470-480 correct on the training data, but it usually got there quickly. Also, once trained, it normally got all 100 test points in one epoch, but sometimes got stuck at around 98/99 correct (that only happened in around 10% of attempts). I watched the self-driving car video and that sent me back to Flappy Bird, and now neural nets, before doing Flappy Bird neuroevolution and then finally back to self-driving cars. Once I've got a handle on this in JavaScript I want to do it in something like Unity using C#, which will mean learning Blender to make my circuit. Much more fun than watching endless hours of Netflix whilst in lockdown! Happy coding all.
First impression: 44 minutes? Are you crazy? I'm not going to waste my time on a single video!
After watching: (quietly clicks subscribe button) ....
Haha that was me.
I didn't even realise that his video was 44 min .. I just checked after reading your comment
True 😂
Love that you focus more on the conceptual side of programming than the nitty-gritty details. It's really annoying to see all these other videos that do something like "we have A, then we have B, then voila! life!" I've honestly been waiting for a video like this for a long time.
Appreciate the feedback, thank you!
There are a ton of videos that pick some complex topic and spend 90% of the time explaining something you should have learned in school or something that's rather obvious.
Thank you so much, Daniel. I have never studied coding formally. Started with watching your coding challenges and I'm happy to say that this amazing and simple explanation is exactly what I needed to start my journey into machine learning. You've inspired me and taught me. Thank you.
I can't believe this video was released today! I just started working on a problem at work (internship) that needs an (albeit more complicated) neural network to solve! This was a perfect primer to help me really understand the basics! The fit(), train(), and activate() functions in scikit-learn seem far less magical and way more accessible to me now!
THANK YOU SO MUCH!
Keyboard Bandit - Did you solve your problem? 😁
18:05 that pen flip was so smooth lmfao
I have become accustomed to listening long lecture videos at ~2x speed. And watching your videos at 1.75x hits the sweet spot.
I'd recommend this to everyone over any movie
MukulNegi Awesome comment
I agree w jahmahent. This would
The suspense built by the code and then watching it all work so well was amazing. It was also a very good idea to show the progress with each click.
hah, glad to hear this feedback!
This is the clearest explanation of machine learning that I have ever watched.
Whoever you are, thank you very much... I love watching all your videos and learning new stuff.
hmnnn....who r u?
who
Don't tell this secret to anyone, he is Batman
I love this series so much! Most machine learning / neural network explanations and tutorials are either designed for 5th graders or for people with a college degree. The mathematical parts and coding are perfect for a high school CS student. Thanks so much for finally making me understand backpropagation!
You are the only person on YouTube who actually, practically shows how this all works: actually coding it.
That was a brilliant intro to neural networks, #The Coding Train.... For those who are confused about how the multiple iterations happen during the learning process, the key is the draw() function, which runs in an infinite loop. It took me a while to figure this out. So if you are using something other than Processing, you need to run a loop in your training algo.
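For anyone porting this, here is roughly what that implicit loop looks like written out, assuming a Perceptron called ptron with a train() method and an array of training points as in the video (exact names may differ):

void draw() {
  // draw() is called over and over by Processing; that repetition is
  // what gives the perceptron its many training iterations
  Point pt = points[int(random(points.length))];   // pick one training point
  float[] inputs = {pt.x, pt.y, pt.bias};
  ptron.train(inputs, pt.label);                   // nudge the weights once per frame
}
// Outside Processing, the equivalent is an explicit loop, e.g.
// for (int epoch = 0; epoch < 100; epoch++) { train on every point }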
I'm watching this a year later and I'm still having plus fun. Not negative fun.
This is how powerful processing can be in understanding a concept. Great video Dan as always.
Dude, I cannot thank you enough for diving into this concept in such an engaging way. It beats trying to break it down from a completely algebraic standpoint
I don't understand Java but I still got 100% of the content from your video.
That is the power of a skillful teacher.
Really nice to see someone writing out the necessary code on the spot, with minimal bells and whistles. Too often it's presented alongside formalism that's much more intimidating than it needs to be.
I am highly motivated by your videos... the way you speak, the humour you have, the knowledge you've earned is really... appreciable 👌👏👏
I try to give PPTs in front of the mirror like you...😊
OMG - I am going through the Coursera ML course and your video is simply amazing. I cannot thank you enough for offering these videos for free. Love your fun-filled way of going through NN concepts one at a time.
This is the most diverse course, man: he starts off with JS, now Java, and I believe he'll show Python, all on the same concept. This is actually great.
Good day - I noticed your Nick Cave t-shirt. I have loved Nick Cave ever since I was a young guy - early 1980s.
Thanks for your videos - very helpful.
💯 Someone needs to offer this man his own show to reach more people.
This is exactly what I was looking for! Explaining what is actually happening behind the scenes of ml5 and TensorFlow, and how it works.
Thank you so much for this!
You are the best teacher I've ever seen. Thanks for sharing your knowledge
Hey! A big thanks for these lectures. I was always struggling to understand what a neural network really is. You made my life simple. Although I code in Python, I love watching your p5 videos.
Thank you, whoever you are!!!
This is your first video that I've seen and I must say, I am enthralled by the way you made me understand the Perceptron. Never have I ever seen anyone explain it in such an intuitive, easy, and clear manner. I subscribed immediately and I am gonna go through all your machine learning videos. Thank you very, very much. I can't possibly explain how much I'm spellbound and impressed by this video. I wish you were my college professor.
Such nice feedback, thank you!
Thank you Daniel, that really helped me to get into the topic. I like the idea "if you can code it, you kind of understand it".
@23:20 +Fun increases your fun potential, -Fun spends it. It's like climbing a slide: you have fun climbing, and you get even more fun sliding down!
I am taking a ML course now with the most horrible lectures imaginable. This is a godsend!
I don't understand Java but this is still the best video, and clearer than the ones for languages I've used before.
I am 17 and only started self-teaching Java 2 weeks ago. This helped a lot to get a better understanding of how the language works and also how ML operates (which is something I am very interested in). The fact that you were somehow 100% understandable to me, considering the amount of experience I have, is actually phenomenal. Also, I was surprised to see you are still interacting with comments.
Subbed and please make more of these
Glad to hear!
Finally somebody who explained it the right way
Starting to really dig your videos. This is the first perceptron tutorial to point out that X_0 and X_1 represent x and y, a data point. I've been looking at neural networks for 2 weeks now, and finally I know this distinction! Really helps! Thank you.
If [X_0, X_1] is [x, y], we are inputting a data point from a two-dimensional space, correct? If we input [X_0, X_1, X_2], we can say this is equivalent to [x, y, z], which is a data point from a three-dimensional data space?
Why stop there? We can input many data points at a time, right? For example, we can input two vectors, i.e. a two-dimensional array: [X_0, X_1, X_2] and [X_3, X_4, X_5]. So now we are inspecting two data points from a three-dimensional space. Is this right? Am I using the correct vocabulary here? Thanks!
Yes, you can send any vector of any length into a perceptron, and have that vector represent any data you like. Whether it will do what you want it to do in terms of the intended outputs is another story, and it's why you might need a "multi-layered perceptron"! (Coming in the next videos if you keep watching.)
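A minimal sketch of that idea (not the exact class from the video): the perceptron does not care whether the input vector has 2, 3, or 300 components, as long as there is one weight per input. Note that each call still processes one data point (one vector); two points simply means two calls.

class Perceptron {
  float[] weights;

  Perceptron(int n) {                    // n = number of inputs (2, 3, ...)
    weights = new float[n];
    for (int i = 0; i < n; i++) {
      weights[i] = random(-1, 1);        // start with random weights
    }
  }

  int guess(float[] inputs) {
    float sum = 0;
    for (int i = 0; i < weights.length; i++) {
      sum += inputs[i] * weights[i];     // weighted sum of the inputs
    }
    return (sum >= 0) ? 1 : -1;          // activation: the sign of the sum
  }
}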
Thanks! I'm taking the Udacity Machine Learning Engineer course, but I find supplementing it with your videos clears up a lot of questions.
Congratulations!! You have explained the basics of the perceptron model very well.
Amazing explanation. Saved me time searching for all this stuff. Really a great masterpiece. I need to make a project on deep learning in just 3 weeks. Your lectures are helping me. God bless you.
watching "The Coding Train" videos and "Andrew Ng" Courses really helps.
whenever i get lost i come here to understand the concept better
even if he is using Js and Andrew using python , the idea is the same
Amazing video! You are a genius at kick-starting the idle minds of those of us with average minds who wish to take on the most challenging things. KUDOS!!!
Great video! Neural networks can be confusing but I like that you started with a simple example.
Mark Jay spotted
Love your videos
You have a very engaging instruction style and explained this concept very well. Great video.
Thank you!
Thank you sir, the best place to learn NNs explained in the simplest way. Keep up the good work. Looking forward to more videos, sir.
i can't stop watching your videossss, you're a great teacher !!
Thank you so much for explaining what is going on with a neural network. So many people dive straight into code or go off into their own math world. I'm a little confused about when the looping stops.
I watched some videos way back when the channel started, must say you've definitely improved the format and pacing. Great work!
All i had was positive fun the whole video.
It's hard to watch this man's videos without smiling.
Wow, that is an amazing video that just lit me up to get into the machine learning flow. I am definitely going to try this. I am more interested in watching how you could make its path based on 360° rotation, with some pros and cons implemented like obstacles, or food to eat, or stuff like that. I need more insights before I jump into writing bots for my own game that are hard to beat.
Here's my version in p5.js with errors visualization + displaying what the perceptron thinks the solution is during the learning process: pascalguyon.org/lets-train-a-perceptron
Both you and Siraj are great in so many ways. Both charismatic, both very intelligent, both very funny... But Siraj doesn't have your talent for teaching and making a solid bridge of information between you as the teacher and us as students, with YouTube as the medium.
While I was doing this with you, almost at the end of the video, Processing crashed and I lost the code. Good thing it's the first video and easy to recreate.
That's incredible, it took a couple of tries lol. "Guesses a -1 or 1 as labels", "learns to label the two more accurately for each point, each loop"... trippy. It steers us to a better solution, I agree on that one.
Very interesting use of random in your code examples. Thank you.
18:08 *successfully throws and catches marker* 10/10
43:06 you can do this particular example in just 15 lines of MATLAB code with the linprog function:
A1 = rand(100,2);                              % candidate points in [0,1]^2
A2 = 2*rand(120,2);                            % candidate points in [0,2]^2
A1 = A1(sqrt(sum((A1').^2)) < 1.4,:);          % keep points with norm < 1.4 (inner class)
A2 = A2(sqrt(sum((A2').^2)) > 1.4,:);          % keep points with norm > 1.4 (outer class)
N1 = length(A1);
N2 = length(A2);
scatter(A1(:,1), A1(:,2), 'b')                 % plot the two classes
hold on
scatter(A2(:,1), A2(:,2), 'r')
axis equal
c = zeros(1,3);                                % zero objective: pure feasibility problem
A = [-ones(N1,1) -A1; ones(N2,1) A2];          % constraints: x1 + A1*x(2:3) >= 1, x1 + A2*x(2:3) <= -1
b = -[ones(N1,1); ones(N2,1)];
[x,fval,exitflag] = linprog(c,A,b);            % solve for a separating line
ezplot(@(a1,a2) x(1)+a1*x(2)+a2*x(3),[0 2]);   % plot the line x(1) + a1*x(2) + a2*x(3) = 0
You deserve a gold medal...
Really cooooooooooool, I love it. I understood many things for my project in January. Later I'll look up the sigmoid neuron model. You're a great explainer!
4:28 Saying the point is at coordinate (x0, x1) is valid and solves the issue.
39:00 I think a cool way to visualize the learning process in this particular example would have been to color every pixel of the canvas according to whether the perceptron gets it right (roughly sketched below).
Nice video again though, I'm really looking forward to the next video on this topic =)
Cheers
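A hedged Processing sketch of that pixel-coloring idea, assuming the trained Perceptron is called ptron, the target line is f(x), and training points map directly to pixel coordinates (all of these names are assumptions; the real sketch may use a different coordinate space):

loadPixels();
for (int px = 0; px < width; px++) {
  for (int py = 0; py < height; py++) {
    float[] inputs = {px, py, 1};              // 1 = bias input
    int guess = ptron.guess(inputs);
    int answer = (py < f(px)) ? 1 : -1;        // "true" label from the line
    pixels[px + py * width] = (guess == answer)
        ? color(200, 255, 200)                 // pale green where it is right
        : color(255, 200, 200);                // pale red where it is wrong
  }
}
updatePixels();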
If I had 15ft of pure white snow I’d watch every single video you have published
Thoroughly enjoyed it, thank you! I used to learn Processing from you and now neural networks 👍
Great! I just started this topic in my AI course at university.
Thank you!
Is it just me, or does everyone else LOL while watching this guy but still learn a ton from listening? Coolest teaching style.
I love how you naturally write p5 instead of pt
How can I be so proud of a cell that is doing the job OF AN IF STATEMENT ?
Neural Netw... Perceptron Power !
To convert a positive number to one and a negative number to minus one, you could just do n/abs(n), aka n divided by the absolute value of itself.
That would throw an error at n=0. Plus, it's doing quite a bit of unnecessary work behind the scenes.
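A tiny alternative that avoids both issues, since it never divides and never returns 0 (it simply treats 0 as positive, which is one common convention):

int sign(float n) {
  return (n >= 0) ? 1 : -1;   // no 0/0 case and no third output value
}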
I'm having +fun!
As always, great video. Your energy and enthusiasm is contagious! Please never stop being so amazing.
You have the best tutorial for neural networks!!
Soooooo cool: Professor Shiffman is wearing a Nick Cave and the Bad Seeds t-shirt! Yeah!!!
Great work as always Dan, can't wait for full Neural Networks!
Actually the first AI tutorial that I understood
If anybody still reads the comments, I have a question:
27:12 Why exactly is deltaWeight = error * input? The error is obvious of course, because it's the difference between the correct answer and the guess, but why does it need to be multiplied by the input?
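For context, here is roughly what the update from the video looks like in code (names assumed; a sketch rather than the exact source). One intuition: the input scales the correction, so a weight attached to a large input, which contributed a lot to the wrong sum, gets nudged more than a weight attached to a tiny input, and the sign of the input makes the nudge push the weighted sum toward the correct side.

void train(float[] inputs, int target) {
  int guess = guess(inputs);                // sign of the weighted sum
  float error = target - guess;             // 0 when right, +2 or -2 when wrong
  for (int i = 0; i < weights.length; i++) {
    weights[i] += error * inputs[i] * lr;   // lr = a small learning rate
  }
}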
Best explanation I found on NNs. Thank you!
I love at 35:00 the horror on his face when he realizes his code works :D We've all been there before, buddy, believe me.
Hi Dan! If you use an activation function that returns non zero, in principle, you don't need a bias. Cheers!
The sign() function will return 0 if the input is zero though, right?...
Thanks for this clarification!
Hi. I'm currently trying to make sense of LSTMs and implement them in C++, but for the life of me it's a struggle! ^_^
Thank you so much for the step-by-step, simple explanation. It was very descriptive.
Thank you for the nice feedback!
Excellent explanation and demonstration. Keep it up 🙏👌
I think nobody noticed, but you were wearing a Nick Cave and the Bad Seeds t-shirt hahah, great artist.
Code font effects: like, if someone types "bomb" it would be listed or mapped to a hashtag news link, with the colors coded on a 10-day active cycle back to black/normal if the word is not in the news cycle. Have it cast a colored shadow so that a tap of the word would make it explode, or a swipe would liven it up with sparks or flames, etc.
Long time no see, sign function! I didn't remember that it tells whether a number is negative or positive (as those are signs, so yeah, sign()). I really like this, it's easy for me to convert to JavaScript. Thank you so much!
This is great. Thanks for breaking it down into simple form!
You are really good at the art of teaching :)
I think the reason your neural net did so well is that you gave it as input the same data you tested it with.
You are the best, end of the discussion.
Finally some processing, much appreciated
I'm actually having +fun because I love neural networks and all !
PS : and all = and coding
Another great video 🤗👌
Thank you!
^^^
The Coding Train How do you detect a color in an if statement? I am trying to have it so that if ellipse/x is greater than (color), the ball loops back to 30.
You could make the ellipse an object and make its colour a this. property, and then use a comparator like
if (ellipse.colour > something) {
//do something
}
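One thing to keep in mind: in Processing a color is packed into a single int, so comparing it with > compares that packed number, not "how red" or "how bright" it is. A hedged sketch of checking an actual channel instead (variable names hypothetical):

color c = get(int(ballX), int(ballY));                 // sample the pixel under the ball
if (red(c) > 200 && green(c) < 60 && blue(c) < 60) {   // "is it mostly red?"
  ballX = 30;                                          // send the ball back
}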
Why are the new weights multiplied by the value of the input in the train() function?
If you pass it a point very close to (0,0), it will be harder to train it, right?
Damn it! I watched it till the end. Very good explanation 👍
Just a question: when you are continuously tuning the weights whenever the perceptron's guess does not match the expected output, can this delta change of the weights negatively impact previous inputs for which the error was zero and the guess matched the expected output? In other words, is it impossible to have an ideal set of weights where the error becomes zero for all the provided training data?
Thanks for giving me a start, I'll make it in pure java.
Finally found the YouTuber I was looking for..!!! 😄
Awesome as always Dan.....btw I LOVE Nick Cave and the Bad Seeds !!!
I like how you approach teaching neural networks through coding. Thumbs up! I think I missed your explanation of the weight modification (new_weight = error * weight). Could you give some intuition about it, or did you explain it in one of your previous videos? Good job :)
he did that in another video...Just browse through his Playlist
Hmmm, a coding video on Singular Value Decomposition or the Interior Point Method with constraints might be quite handy for real-world machine learning applications ;)
I'm working on a GA to estimate some parameters in geotechnical problems (not in Processing, although I love it), but this video totally inspired me to try a different focus. I'm dying to know what it would be like to use the GA only to optimize the NN and then have the NN do the dirty job!
I'll be getting to this soon!
Absolutely love this video. Jumps right into the code.
I love you so much for your tutorials!
Greetings, dear Mr. Coding Train. I really like this video of yours and I really like the way you explain the entire procedure. It's my humble request: is it possible for you to make a video on programming a neural network for a personal AI assistant, and show us how you interact with that AI assistant after programming its neural net? I would be highly thankful to you for the rest of my life. I hope you get this message of mine, and I am looking forward to seeing this video. Thank you.