Absolutely gold content. It took me two weeks to wrap my head around this and I was still struggling, hopping from videos to articles, and you made it seem so effortless. No amount of appreciation is enough.
This is so beautifully done! You emphasize why the methods work, based on data-internal correlation, not just the steps by which the machines you describe, DAE and VAE, calculate. Thank you!!
Brilliant explanations and huge work on the video! Really appreciated.
Very intuitive explanation, thank you Luis for making this topic easy to understand.🙂
Thanks!
@@PedroTrujilloV Thank you very much, Pedro! Big hug.
@ Thanks to you, professor. 🙏🏽 Likewise.
Thank you so much! So clear and helpful! Really great job.
Really beautiful explanations and examples, very well illustrated with animations.
Love your videos, I was just about to look into this subject!
This channel would go from great to exceptional if Luis shared the PyTorch code for implementation & usage of such models
Brilliant explanation
As always your video is a masterpiece !
Thank you so much, glad you liked it! :)
In the part where you explained the training of a variational autoencoder, where do the terms reparametrization, variational inference, and ELBO come in? And the Bayesian distributions (p(x|z) and p(z|x))? I know that you explained it in an easier way, and I was simply awestruck; I thought it was impossible to explain this difficult topic in a simple way.
If you have any resources that also explain the underlying math, that would be very helpful.
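Editor's aside for readers of this thread: the reparameterization trick and the KL part of the ELBO asked about above can be sketched in a few lines of NumPy. This is an illustrative sketch of the standard Gaussian-posterior VAE setup, not code from the video; the function names are my own.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling is moved outside
    # the network, so gradients can flow through mu and log_var
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # closed-form KL(N(mu, sigma^2) || N(0, 1)): the regularization term
    # of the ELBO; the other term is the reconstruction likelihood
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.array([0.0, 1.0]), np.array([0.0, 0.0])
z = reparameterize(mu, log_var, rng)      # one latent sample
kl = kl_to_standard_normal(mu, log_var)   # 0.5 for this mu, log_var
```

The KL term is zero exactly when the encoder outputs mu = 0 and log_var = 0, i.e. the standard normal itself, and grows as the posterior drifts away from it.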
Thank you for your great video. Just a quick note that at 13:08, we get the value of -1, not 1.
Yikes, you're right! Thank you for the correction!
Very nice explanation for beginners!
Thank you! :)
Definitely the best!!
amazing explanation, thank you
great as usual!
Thank you Samir!
Thanks, that was really helpful !!!
It can also be thought of as a sketch artist (the kind used to draw faces of criminal suspects): they get information about a person's facial features and are able to draw the entire face accurately.
Great video!
So good. Thank you.
Hello there, Mr. Serrano. This is a great, intuitive video, but I have a question. At 28:26, I believe the mean and variance may vary from input to input, right? So doesn't that mean the purple and green distributions should become dissimilar for each sample inserted? Thank you, Mr. Serrano.
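Editor's note on the question above: yes, each input gets its own mean and variance from the encoder, but the KL term in the loss pulls every per-input Gaussian toward the same standard normal prior, which keeps them from drifting apart. A tiny NumPy illustration with hypothetical encoder outputs (not numbers from the video):

```python
import numpy as np

def kl_penalty(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, 1)): the pull toward the shared prior
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var)

# hypothetical per-input encoder outputs for two different inputs
near_prior = kl_penalty(np.array([0.1]), np.array([0.0]))     # close to N(0, 1)
far_from_prior = kl_penalty(np.array([3.0]), np.array([2.0])) # far from N(0, 1)

# the farther an input's Gaussian drifts from N(0, 1), the larger the
# penalty, so training keeps all the per-input distributions similar
assert far_from_prior > near_prior >= 0.0
```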
At 25:00, shouldn't it be a "large/small loss" instead of a "large/small loss function"? I'm not really sure if I understood this right.
3:25 states that VAEs are good at generating high-resolution images. In some other videos, I found that this method is not so good for high-resolution images.
Total nit:
2:47 "The shrunk data is called the latent space" --> The shrunken data is called the latent (a vector), which exists in whatever d-dimensional latent space you have 👍, right?
🙂Excellent!
30:05 has a typo on the graph: q3 should be 0.4, not 0.3, in order to sum to 1 and also to be in line with the calculations (where 0.4 is used). Just a note :) Great content, by the way!
Ahh you're right, thank you so much for the correction! I can't change the video but will add a comment below.
super nice! Thanks a lot!
Thank you, glad you liked it!
Please make videos on graph neural networks.
Hi, great video! How do you make your amazing illustrations and animations?
@SerranoAcademy Where did the first idea/intuition occur to the researcher who proposed that equal diagonal values have the property to denoise? You're building on that idea; it would be better if you could share more information on its origin.
I understand how a VAE is trained in an unsupervised way. How is a DAE trained? Is it done in the same way?
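Editor's note on the question above: yes, a DAE is also trained without labels; the clean input serves as its own target, and the only difference is that the input is corrupted with noise before being fed in. A minimal sketch using a linear least-squares map in place of the network (purely illustrative; a real DAE is a nonlinear encoder/decoder trained by gradient descent on the same reconstruction loss):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                 # clean data = training target
X_noisy = X + 0.1 * rng.standard_normal(X.shape)  # corrupted input

# one least-squares "training step": map noisy inputs back to clean ones
W, *_ = np.linalg.lstsq(X_noisy, X, rcond=None)
loss = np.mean((X_noisy @ W - X) ** 2)            # reconstruction error
baseline = np.mean((X_noisy - X) ** 2)            # doing nothing (W = identity)
```

Since the identity map is one of the candidates the least-squares fit considers, the learned map can do no worse than leaving the noise in, i.e. loss <= baseline.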
you're saving my graduate research lmao
million claps from me
2:14