You sir are a fantastic teacher. No fancy gimmicks, no catch phrases. Just pure talent. Hoping to collaborate with you!
As a beginner in ML a lot of this still went over my head but it's the most accessible video I've found yet on GANs! Thank you so much
This is one of the best explanations I have ever seen. You manage to cover the goal and the method intuitively, mathematically, and programmatically, and you did it with a concrete example that was simple enough to work out by hand. I also appreciate that you showed how we might code the rules for a solution, and then showed how we would program a machine learning approach to come up with a similar solution. I hope you continue to make more excellent videos like this!
Thank you for the clear explanation. Just a couple of comments:
a) At 6:35, I think it should be -1.5 (not -0.5)
b) When creating the discriminator, if the bias is -1 then the threshold between good/bad images should be lower than 1. Otherwise some of the real faces would be labelled as false…
You are right, I was having the same confusion.
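To make the arithmetic in this thread concrete, here is a tiny sketch of the kind of score the discriminator computes. The weights (+1 on the diagonal pixels, -1 off the diagonal) and the pixel values are assumptions for illustration, not numbers taken from the video; only the -1 bias comes from the discussion above.

```python
# Hypothetical single-neuron discriminator for a 2x2 image: +1 weights on the
# diagonal pixels, -1 on the off-diagonal pixels, and a bias of -1.
def discriminator_score(top_left, top_right, bottom_left, bottom_right, bias=-1.0):
    return top_left - top_right - bottom_left + bottom_right + bias

# A clean "slanted face" (strong diagonal) ends up above 0:
print(discriminator_score(1.0, 0.0, 0.0, 1.0))  # 1 - 0 - 0 + 1 - 1 = 1.0

# A noisy image (all pixels around 0.5) ends up below 0 once the bias is included:
print(discriminator_score(0.5, 0.5, 0.5, 0.5))  # 0.5 - 0.5 - 0.5 + 0.5 - 1 = -1.0
```

With the bias folded into the score like this, the natural cutoff between face and noise is 0 rather than 1, which is the point both commenters above are making.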
My fellow data scientists were all about GANs, so I went to learn something about them so I know where I stand with regard to synthetic data. And I'm glad I stumbled upon your video. What a great introduction to the topic! I feel I understand a lot more of what has been said and done about GANs now. Thank you!
You provide by far some of the most descriptive explanations of Neural Network architecture, Machine Learning & statistics out there! Thank you!
By the way, I think you forgot to subtract the bias from the result of the second, noisy image at 6:32. It should be -1.5 instead of -0.5 :)
Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!
This channel’s been a gem of a find, always a go to source to refresh seemingly complex algorithms in an absurdly intuitive way. Thank you, Luis.
The best video I have come across so far. It is very annoying that so many videos that teach ML use some API (e.g. Keras) to give examples. That teaches how to use the API, but it does not teach you how it really works. I wish more people used this awesome method to teach concepts in ML.
If you know the basics of CNNs or ML and you are looking to learn the basics of GANs, this video is for you... very well explained, thank you.
Amazing summary of GANs with the simplest but concise explanations. Thank you!
Very nice explanation... I started learning GANs from zero, with only a basic understanding of CNNs, and from this video I now understand how GANs work. Thank you.
Mad respect to you for explaining neural networks so clearly in 20 minutes, actually amazing.
Very nice and clear explanation. Every time I need a recap on GANs, I come to this video. Having simple math rather than library code makes it more meaningful -- compared to other channels' videos that use ML libraries, which become obstacles on our way to understanding!
I love his teaching, he makes complex things seem simple.
No words can do justice to this rich explanation. I do like it when the visuals, mathematics, and code come together. Also, your language was easy and smooth. You made the complex topic so easy to comprehend. Great thanks.
By far, the best explanation of GANs ever! Well done, sir. Many, many thanks!!!!
One of the best explanations of the subject I have ever seen, congratulations, you are an excellent teacher!
Finally, GAN math explained in the most elegant way... Thank you, sir.
You really have a special talent for generalizing and explaining complex concepts. You go through every question we could think of during your explanations, and all of them are answered with such pedagogy.
Señor Luis Serrano: You have given an excellent pedagogical demonstration of a subject that others have turned into mystification. I work on the interpretation of Oedipus Rex, by Sophocles, and I am trying to translate my knowledge of how a classic literary text communicates when it "paints" the traits of its characters mainly through opposed relations, analogous to those of the slanted portraits in the "pictures" you used. Many of those "pictures" come close to the images you would call "noisy," to the point of being discarded by your police... and by me. What I mean is that, not only in this but in many other respects, I find a closeness between the mathematical tool you present and the one I want to use... but I am not a mathematician. Even so, thanks to my serendipitous search, I have managed to find your Serrano.Academy, and I will keep guiding myself by your lessons, with gratitude.
The most informative and intuitive explanation of GANs. For a beginner this video is priceless, as all the other resources aren't so patient with the critical details and steps, which doesn't help the learning process.
Getting to see this after a heavy day at work is refreshing.
Thank you so much for sharing
Thank you, Luis, for the simplified and very clear explanation. Finally, I feel that I can confidently understand how GANs work. I also really liked the simple toy examples that you usually start with to explain complicated concepts.
BRAVO! BRAVO! You are really the Grand Master of Explaining Machine Learning! Let me tell you why: about 2 years ago, when I was looking to learn Python, I was also looking to learn the process of coding a machine learning Python script. I remember searching and watching a lot of machine learning videos at that time, but things were very fuzzy, since most people just talk and talk and repeat what others said, and no one explained the real roots of the process until I found your videos explaining the real process of calculating the weights; only then was my head able to understand the REAL process of creating the code. And here I am 2 years later, breaking my head over how a GAN really works and how it is made, and BANG! I just finished watching about 40 videos and NO ONE BUT YOU EXPLAINED IT SO WELL. That is why you are the MASTER of explaining Machine Learning! BRAVO! BRAVO! God bless you, Luis!!!! What would my life be without you!!! Thank you a million times!
Thank you Alex, that's so nice to hear! It's an honor to be part of your machine learning journey!
Brilliance is the ability to take the complex and reduce it to simplicity. Brilliant work!
I am now watching this while waiting for the laundry. Just great! I also realized that I learned a lot from you about pytorch! Thank you sir! Keep up the great job!
Why would anyone dislike this? Seriously, why? Thank you sir!
Love that you broke the concepts down to the micro level. It made the understanding of GANs so simple and yet detailed. Appreciate it.
I had to watch this video a few times, pause and rewind etc. but it did help me understand the intricacies of GANs. Thank you very much.
This was a really good video! I'm happy my school included it in the suggested resources
Mr Serrano thank you for existing.
I was struggling to understand this.... Your video made it so clear and easy to understand... Thank you soooooooo much...... ☺
The simplest but most effective explanation I've seen on GAN...Thank you :)
This video is really helpful for understanding GANs. The way of teaching is really awesome. I liked it a lot.
I didn't expect to stay until the end of the video. This is really helpful! Thank you
Thank you very much, Luis. This is the first video I've watched about GANs and I already understand the concept. Impeccable!
This is a really great video! You are a very good teacher, the video quality is great, and you offer both intuition and an applied example - the code. Good job!
Best video on GAN explanation hands down
So well explained that it makes GANs easy to understand. Thank you. Big thumbs up.
Luis truly keeps machine learning "KISS" (keep it sweet and simple) and re-ignites my love of working out the math from scratch!!! Whenever I find anything hard to crack, I just search for it on your channel... Minor typos at the Generator's Derivatives (00:19:00), where the D weights are missing and the notations for the G weights and D weights get mixed up, though everything is correct in your code. Kind of cool to work through the simple math and catch those typos... Thanks again, Luis, for democratizing complex knowledge!! =)
What a great explanation! It's amazing how you teach complex concepts in such an easy way. I have learned a lot from your videos. Mil gracias!
This is one of the best explanations I have ever read/watched.
What an amazing video. I am really impressed by the example you provide to explain such a complex concept.
Your explanations are so simple that anyone can understand them!!!
Thanks so much
This is a wonderful video, very easy to understand. You have presented such a complex part of deep learning in such a way that it looks so easy! Thank you so much.
THIS IS SOOOOOOOOOOO DAMN GOOD. Thanks a lot, man. Totally understood GANs without even a computer vision background.
Mr. Luis Serrano, your lessons are extremely precious. Please keep making Spanish-language versions. Thank you very much.
Thanks for the slow transitions; yeah, I am not really good at math and not yet much into harder Python. It's really more useful than other videos out there.
Loved the Slanting people demo... Thanks Luis.
Excellent video. Teaches the basics in a very clear manner. Thank you very much!
Interesting example of a GAN. Really enjoyed your video. Keep up the good work.
Thank you so much for this video! I haven't found any other video explaining GANs as well as you did!
Love your explanations and visuals! Thanks again for all you do here Luis, you sir are a gentleman and a scholar🎓🍻
Thanks for sharing, crystal clear explanation.
I remember everyone requested this from you, and I never imagined the request would be granted so soon.
If you are reading this, kindly give us an opportunity to talk with you in a YouTube live session.
Thank you! Hoping to have another live session soon, will post an announcement when I know when!
Excellent video - a great way to explain quite a complex concept! I learned about your video and GitHub from the MIT class "Designing and Building AI Products and Services." Hope that you are getting proper credit from MIT ;-)
Please sir, make more videos frequently; your way of teaching is mind-blowing, it is awesome...
Thank you Chetan! Trying to make them more often :)
Really helpful! Explains GANs very simply for beginners!
Great job, man. I understood the basics of GANs and now I can work on my StackGAN project.
Wow! Amazing! Thanks for the simplest explanation I've ever seen 🙂
Love your video. I have a question about what you say at 16:32: shouldn't the discriminator network only be updated with the real images? In other words: why do we use backpropagation to update the weights after feeding it an image from the generator? Isn't the generator making a fake image, and therefore, if you update the discriminator network's weights on it, shouldn't the discriminator network then be learning how to detect a fake image?
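For what it's worth, the answer hinges on the discriminator being trained on both kinds of samples with opposite labels; the update on the generated image is exactly what teaches it to detect fakes. Below is a minimal sketch of that idea, assuming a single-layer discriminator with a sigmoid output and log-loss; the function and variable names are made up for illustration, not taken from the video's notebook.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminator_step(weights, bias, image, label, lr=0.01):
    # Forward pass: one sigmoid neuron over the flattened pixel values
    prediction = sigmoid(np.dot(weights, image) + bias)
    # Gradient of the log-loss -[label*ln(p) + (1-label)*ln(1-p)] for a sigmoid unit
    error = prediction - label
    weights = weights - lr * error * image
    bias = bias - lr * error
    return weights, bias

# One training round interleaves the two sources:
# weights, bias = discriminator_step(weights, bias, real_image, label=1)
# weights, bias = discriminator_step(weights, bias, fake_image, label=0)
```

Updating only on real images would let the discriminator call everything a face; the label-0 update on generated images is what gives the generator something to fight against.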
Wow, I never understood the error functions until now! Thank you Luis!
Amazing presentation with high quality slides!
Awesome, you make neural networks easy and interesting.
Thank you 🙏🏻
Is there a bit of confusion in the explanation regarding the discriminator (around 5:34)? Once the bias (-1) is added to the 4 other values, shouldn't the explanation be: everything with a value > 0 is a face and everything lower than zero is just noise? LMK?
It's really great to understand a complex concept like GANs in a simple way.
You are the best AI teacher I have ever seen!
Wow, good job! You gave me a very good sense of how it works and explained the loss function really well. I finally understand. Thank you!
Dear Serrano. Thank you for your very interesting video. In the generator equation (image generator_math.png), I think there is a missing W_i. Note in your code in function "def derivatives(self, z, discriminator)", you have the line: "factor = -(1-y) * discriminator_weights * x *(1-x)". Parameter "discriminator_weights" represents W_i in the equation, although I believe it is missing in the generator equation. Please, let me know if I'm wrong. Thanks.
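For readers following along, here is a minimal sketch of how that quoted line fits into the chain rule, assuming the single-layer generator and discriminator from the video; the function signature and variable names (gen_weights, disc_weights, etc.) are made up for illustration and are not the repository's actual code.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def generator_derivatives(z, gen_weights, gen_biases, disc_weights, disc_bias):
    # Forward pass through both single-layer networks
    x = sigmoid(gen_weights * z + gen_biases)          # generated "image" G(z), one value per pixel
    y = sigmoid(np.dot(disc_weights, x) + disc_bias)   # discriminator output D(G(z))
    # Chain rule for the generator error E = -ln(y):
    # dE/dv_i = -(1 - y) * w_i * x_i * (1 - x_i) * z   (this is where w_i appears)
    factor = -(1 - y) * disc_weights * x * (1 - x)
    return factor * z, factor                          # gradients w.r.t. gen_weights, gen_biases
```

The disc_weights factor in the gradient is the W_i term the comment refers to; it shows up because the generator's error is measured through the discriminator's output.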
Thank you, sir, your content is the best; the visuals you put into the videos make it so much easier to understand the concepts. Keep it going, sir.
Simple and easy narration. Thank you sir
A bit confused at 19:01: how are the weights v1, v2, v3, v4, c1, c2, c3 and c4 adjusted by backpropagation? Their derivatives do not show up in the derivation.
Great explanation! Cheers to all slantland people!!
Ooooh, I knew your voice sounded familiar! I'm doing your PyTorch course.
where is the course? I can't find it in the channel
@@chawza8402 try on google then
@@gumikebbap turns out he is the lead instructor on pytorch Udemy Course. and its free!
@@chawza8402 Link Please
Hi all! The course is here: www.udacity.com/course/deep-learning-pytorch--ud188
This is how teachers should explain, thanks a lot
Thank you very much for this awesome explanation, Luis. Very, very well done!
This is the best explanation on the internet. Thanks a lot!
Very informative for learning GANs! Congratulations!
the best in GAN tutorials
Hi Luis! Great videos! I'm very impressed and happy to see my old friend on YouTube!
Thanks Hyosang!! I also checked out your videos, they’re great! Happy to see you over here after so long, hope all is going well in your side, my friend!
Amazing explanation....
Love from India.....
Good video, a good starting point that doesn't dumb things down 👍👍
Thank you very much for this nice and very helpful explanation of GANs.
I find it a great vid. But a question: shouldn't the bias be the same for all the weights of the same neuron? I would imagine that you have 1.7 for the diagonal values and 0.3 for the non-diagonal ones, since I would use an equal bias. Where am I wrong?
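One note that may help with this question: in the standard setup a neuron has exactly one bias, added once after the weighted sum, so values like 1.7 and 0.3 would be per-pixel weights rather than biases. A minimal sketch of the single-neuron score, with generic symbols rather than the video's exact numbers:

$$\text{score} = w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 x_4 + b$$

Here the $x_i$ are the four pixel values, the $w_i$ are four weights that may all differ from each other, and $b$ is the single bias shared by the whole neuron.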
The best tutor! Excellent tutorial sir.
Absolutely stunning explanation!
This is a wonderful introduction to GANs. Many thanks for this - TRIPLE BAM!!!!!!!!!
No wait, this is not a StatQuest video, my bad 😁😁😁😁
LOL! :D I showed this to Josh, he found it hilarious too. BAM!
OMG this is sooooooo friendly and easy to understand!!!!!! Thank you so much!!!!
You're the best, man. You deserve a million subscribers.
Such a simple and great explanation. Thank you!
Best ML videos in the Net for beginners
WOW! best tutorial on machine learning ever! Thank you
So good and easy to understand. Ty! So generous with your knowledge.
Sir, a bit confused by the derivative part of the generator at 18:59: in the del E / del w_i equation, I don't follow the term del G / del z. Please clear up the doubt.
Thanks for your question, Jararddan! These derivatives get a bit complex, so I just put them on the screen, but they're not necessary to understand the material. However, the idea is that you use the chain rule, so something like del E/del w_i = del E / del D * del D / del G * del G / del w_i. Then I worked each one out separately based on the formulas above and the expansion for sigmoid(x). Let me know if there's anything that needs more clarification!
@@SerranoAcademy thanks a lot sir. I got it
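For anyone who wants the chain rule from the reply above written out, here is a sketch under the assumption of the video's setup: a single-layer generator $G_i(z)=\sigma(v_i z + c_i)$, a single-layer discriminator $D(x)=\sigma\!\left(\sum_i w_i x_i + b\right)$, and generator error $E=-\ln D(G(z))$ (the exact notation in the video may differ):

$$\frac{\partial E}{\partial v_i} = \frac{\partial E}{\partial D}\cdot\frac{\partial D}{\partial G_i}\cdot\frac{\partial G_i}{\partial v_i} = \left(-\frac{1}{D}\right)\bigl(D(1-D)\,w_i\bigr)\bigl(G_i(1-G_i)\,z\bigr) = -(1-D)\,w_i\,G_i(1-G_i)\,z$$

and likewise $\partial E/\partial c_i = -(1-D)\,w_i\,G_i(1-G_i)$, using $\sigma'(t)=\sigma(t)(1-\sigma(t))$ at both layers. This is where the $z$ enters (through $\partial G_i/\partial v_i$) and where the discriminator weights $w_i$ enter, which also speaks to the earlier comments about the missing $w_i$ and about how the $v_i$ and $c_i$ get updated.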
What exactly is the sentence saying at 6:17? Can anyone please help? Cannot make out the words...
Really nice illustrations!! I understand GANs now.
Truly fantastic video that explains GANs in only 20 minutes. Question: you mentioned that the generator and discriminator neural networks should both be trained with samples from the generator, and the discriminator should also be trained with labeled data (16:48). Should the training from these different sources interleave?
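In most GAN training loops the two kinds of updates are indeed interleaved within every round. Below is a minimal sketch of that pattern, with hypothetical Generator/Discriminator classes and method names standing in for whatever the video's notebook actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(generator, discriminator, real_faces, epochs=1000):
    for _ in range(epochs):
        z = rng.random()                                    # random noise for the generator
        fake_image = generator.forward(z)                   # generated sample
        real_image = real_faces[rng.integers(len(real_faces))]

        # Discriminator sees one real face (label 1) and one fake (label 0) each round
        discriminator.update_from_image(real_image, label=1)
        discriminator.update_from_image(fake_image, label=0)

        # Generator is then nudged to fool the discriminator's current weights
        generator.update_from_discriminator(z, discriminator)
```

Interleaving keeps the two networks roughly in balance, which is usually what you want; letting one get far ahead of the other tends to make the game much harder to train.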
19:21 The line in the plot is so thick, is this because of the x-axis density? Too dense?
It's because the weights are bigger for the thick edges.
Best explanation of GANs on YouTube
I finally found the lost key to GANs! Thanks a lot!