Thank you for such a no math video. It's very rare to find videos with clear explanation for the intuition of the problem. Once we grab the idea, then the math seems more manageable. Thank you so much!
As Einstein said “If you can't explain it to a six-year-old, you don't understand it yourself.”
Very easy to understand, thank you.
dude your channel is a gold mine, keep up the great work
Glad you enjoy it!
Incredible. Can't wait for future videos. Big fan as always.
More to come!
I have never heard a better explanation than this.
Ohh, finally! I'm so glad you made a tutorial on AAEs. Please cover all aspects of AAEs in image processing. Thanks so much, you're the best YouTuber for image processing and deep learning. I'm your biggest fan. Just a small request: please take a bit more time explaining the code, as I'm a biologist but interested in deep learning and image analysis. Thanks once again.
Please make a tutorial on image generation with AAE as well. 😊
Well, the next video covers image generation, where we generate MNIST images. AAEs are slow, so to generate real images you will need a lot more resources, but the concepts I cover should definitely help.
@@DigitalSreeni Can we use a VAE to augment spectrum data?
Thank you for your great video! I've seen a lot of notes that go deep into the mathematical parts but never explain why and how we use VAEs. Your video helped me understand why we need to learn a desirable distribution for the latent vector.
You're very welcome!
Awesome stuff! I like the hesitant pause at "backpropagation" - oh, I guess you know it if you're watching this, hahaha.
How can you explain this topic so elegantly and clearly? Thank you!
Thanks
Thank you very much, please keep learning.
I was working my way through MIT Deep Learning Generative Models 2024 and was stuck on the introduction of epsilon in the loss calculation. Your instruction helped clarify many things; however, I'm still trying to get my head around all of this.
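For anyone else stuck on that epsilon step, here is a minimal NumPy sketch of the reparameterization trick as I understand it (the values below are toy placeholders): sampling z ~ N(mu, sigma^2) directly isn't differentiable, so the trick moves the randomness into epsilon ~ N(0, 1) and computes z = mu + sigma * epsilon, which keeps mu and sigma differentiable for backpropagation.

```python
import numpy as np

def reparameterize(mu, log_var):
    # epsilon carries all the randomness; mu and log_var stay differentiable
    epsilon = np.random.normal(size=mu.shape)
    sigma = np.exp(0.5 * log_var)  # encoder predicts log-variance so sigma stays positive
    return mu + sigma * epsilon

# Toy encoder outputs for a 2-dimensional latent space
mu = np.zeros(2)
log_var = np.zeros(2)
z = reparameterize(mu, log_var)
print(z)  # one latent vector drawn from N(mu, sigma^2)
```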
Thank you for the great intuitive explanation!
Thank you for the great intuitive explanation! I was looking for a video of exactly this kind!
Thank you! Looking forward to the application!
Any time!
How do I figure out how many neurons to put in each layer? I'm super confused.
Awesome! A really useful explanation!
such a great explanation! Thank you so much :)
Amazing video, thank you
Very informative video Sir. Thank you very much.
GOLD, nice explanation
Very, very useful, thanks a lot. Keep it up!
Great explanation! Thank you so much.
What are the x and y axes of the cluster plot?
amazing work!
How many types of autoencoder are there in total?
Really great video!
Is there code for other datasets, like MRI images?
QUESTION CONCERNING VAE! With a VAE on images, we currently start by compressing an image into the latent space and then reconstructing it from the latent space.
QUESTION: What if we start with a photo of an adult, say a man or woman 25 years old (young adult), and rebuild it into an image of the same person at a younger age, say 14 years old (mid-teen)? Do you see where I'm going with this? Can we create a VAE that makes the face younger, from 25 years (young adult) to 14 years (mid-teen)?
In more general terms, can a VAE be used to learn a non-identity function?
For what you are proposing you can even use 'standard' autoencoders. Check this video: th-cam.com/video/9zKuYvjFFS8/w-d-xo.html
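To illustrate the reply above with a rough sketch: the same encoder-decoder architecture, but trained with the older face as input and the younger face as target, instead of the usual identity mapping (input = target). Everything here is a hypothetical placeholder; the random arrays stand in for a paired dataset of the same people at two ages.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Encoder-decoder with a non-identity target: x = older face, y = younger face
inp = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

# Random stand-ins for paired images of the same person at 25 and 14
older_faces = np.random.rand(8, 64, 64, 3)
younger_faces = np.random.rand(8, 64, 64, 3)
model.fit(older_faces, younger_faces, epochs=1)
```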
This is all great; I think my one quibble is that you are perhaps using a slightly nonstandard definition of "generative". Usually it means that we are modelling the distribution of the input space and can therefore sample ("generate") new realistic inputs. For exactly the reasons you state, standard autoencoders don't do this, and therefore by definition are not generative models. Yes, they can "generate" things, but those things don't represent the input space and will probably be a "meaningless" mess. Variational autoencoders, on the other hand, do model the input space and can therefore generate "realistic" inputs, so they are generative models.
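To make that distinction concrete, here is a minimal sketch (the decoder below is an untrained stand-in for the decoder half of a trained VAE): because training pulls the latent distribution toward N(0, I), you can sample from that prior and decode the samples into new inputs, which is exactly what a plain autoencoder's unstructured latent space doesn't support.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2

# Stand-in decoder; in practice this is the trained decoder half of a VAE
z_in = keras.Input(shape=(latent_dim,))
h = layers.Dense(64, activation="relu")(z_in)
h = layers.Dense(28 * 28, activation="sigmoid")(h)
img = layers.Reshape((28, 28))(h)
decoder = keras.Model(z_in, img)

# Sample from the N(0, I) prior and decode into new 28x28 images
z = np.random.normal(size=(5, latent_dim))
generated_images = decoder.predict(z)
print(generated_images.shape)  # (5, 28, 28)
```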
Thank you. Very nicely explained. Go Packers :)
Omg so helpful, thank you.
Great work man!!
Thank you! Cheers!
Very good!
Thanks!
Thank you!
You're welcome!