This is the most concise, simplest and best explanation of latent space that I've found so far!
Thanks a lot! Glad you enjoyed it! 🙏🏽
Excellent clarification of multiple concepts in one pass. Helped me relate the encoders to the latent space through a much more accessible metaphor. Thank you.
10 min felt like 30 min, I had so many rewinds during the vid. The video is so full of info, thanks a lot.
Thank you for the first video that has allowed me to understand the latent space on some intuitional level
This video is clear and concise, amazing work!
Great job! I have used your videos to help several recent grads, who had some gaps in understanding. You are a very good teacher.
Awesome! Thank you!
Thanks Neural Breakdown with AVB! I always wanted to know how "latent space arithmetic" works.
This video is so fascinating. Amazing work.
Thanks! Glad you enjoyed it!
Amazing Explanation
Thanks a lot!
Great explanations! Getting a better understanding of how some parts of Stable Diffusion work without any effort :)
This video is super cool. It's good to see those concepts visualized.
Thanks!😊
A great description of interpreting deep learning models. Well done!
I hope you can fix the quiet audio!
Unfortunately YT doesn't allow increasing the volume after posting videos. I could re-upload it, but idk if it'll really be worth it. I'll just take this as a lesson for future videos, thanks for the comment!
Truly Amazing
Which tools did you use to be able to change each principal component and see its effect on the output image?
If I remember correctly, I just used ipywidgets inside a jupyter notebook to do the UI and display. I also wrote the logic for the PCA (sklearn), encoding/decoding, and interpolating the latent vectors.
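(For anyone wondering what that might look like, here is a minimal sketch of that kind of setup, not the exact notebook from the video. The random `latents` array and the linear `decoder` below are toy stand-ins; swap in real encoder outputs and a trained decoder.)

```python
# Minimal sketch: ipywidgets sliders over principal components of a latent space.
# `latents` and `decoder` below are toy stand-ins so the cell runs on its own.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from ipywidgets import interact, FloatSlider

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 32))       # pretend encoder outputs (N x latent_dim)
W = rng.normal(size=(32, 28 * 28))         # pretend "decoder": linear map to pixels

def decoder(z):
    # Stand-in for a trained decoder: latent vector -> 28x28 image.
    return (z @ W).reshape(28, 28)

n_components = 8
pca = PCA(n_components=n_components).fit(latents)   # directions of largest variation

def show(**coords):
    # Rebuild a latent vector from the slider values and decode it to an image.
    z_pca = np.array([[coords[f"pc{i}"] for i in range(n_components)]])
    z = pca.inverse_transform(z_pca)[0]
    plt.imshow(decoder(z), cmap="gray")
    plt.axis("off")
    plt.show()

interact(show, **{f"pc{i}": FloatSlider(min=-3.0, max=3.0, step=0.1, value=0.0)
                  for i in range(n_components)})
```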
Great video!
Excellent video! Thanks for your work!
QQ: Is there a repo for the real-time image manipulation software you used as your demo?
Excellent video
How does the latent space work exactly? Does it learn just a global representation (like the smile typically being in the center) or also a local representation? I mean, if we perform the same operation with randomly rotated images, does the latent space arithmetic stop working? Do you know of any papers that explore this in depth? Thanks in advance.
Great video, thanks
This was an insanely good video and explanation, ty!
Thanks! Awesome to hear that! 😊
@avb_fj Thank you! I'd like to see one where you code the encoder and decoder. I'm coding an autoencoder and it's a little tough trying to find a good balance of reduction while also keeping the important details.
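(In case it helps, below is a minimal generic autoencoder sketch in PyTorch; it is not code from the video. The `latent_dim` argument is the knob that trades compression against how much detail the reconstruction keeps. Sweeping it, e.g. 8 / 32 / 128, and comparing held-out reconstruction loss is one simple way to find that balance.)

```python
# Minimal fully-connected autoencoder sketch (PyTorch).
# `latent_dim` controls the bottleneck: smaller = stronger compression,
# larger = more detail preserved in reconstructions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),               # bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder(latent_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                    # stand-in batch of flattened 28x28 images
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction error drives training
loss.backward()
optimizer.step()
```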
Thanks for the great video! Very well explained!
Can we do the same with the pixels to enhance the image?
Can you clarify what you meant by “doing the same with pixels”?
I really appreciate your lucid explanation. Superb. I'd like to request that you enhance the sound quality a bit. Good wishes and thanks for such videos.
Thanks! I’ll keep that in mind going forward…
I think of it like this: CNNs were good at pattern recognition and identification, so they were well suited to classification-type tasks, but generation needs a more nuanced understanding (more information is required that cannot be picked up as a pattern, or is more abstract), and so the concept of latent space was introduced.
Is my thinking in the right direction, or is it wrong? Please do tell.
Thanks for the comment!
It’s a bit more nuanced though, coz CNNs have been successfully used for both discriminative tasks (like classification) and generative tasks (GANs, diffusion). At its heart, convolution is a general-purpose operation for processing spatial information within images. The rest of its power really comes from specific architectural choices, the quality and diversity of training data, loss functions, etc. The concept of "latent space" has pretty deep roots in deep learning - it probably all came from the terms "representation learning" and "manifold learning", which don't get mentioned as often these days. As discussed in the video, the basic idea is that neural networks form their own numerical representations within their weights when we train them on any kind of data, and these representations are often meaningful and interpretable in the right experimental settings.
I'd suggest checking out these two videos, where I try to explain convolution and CNNs from the ground up:
th-cam.com/video/kebSR2Ph7zg/w-d-xo.html
th-cam.com/video/N_PocrMHWbw/w-d-xo.html
@avb_fj Thanks for this.
I do think CNNs are good at that, but in the case of GANs the generator-discriminator pair was there to find those hidden or more nuanced features that are more than just patterns.
Seems like both of them (GANs and VAEs) were two different methods for solving a similar problem of capturing more nuanced features.
I am just a beginner in this field, so I could be completely wrong. But what I have observed is that a major part of deep learning theory revolves around finding more nuanced features and how they are interrelated.
Great content!
Oh man. I was a bit lost when you were saying encoder this, decoder that, but the smile example at 6:50 hit the nail right on the head. It's indeed mind-blowing. I'd love to know more about AI for outsiders, subscribed.
PS: A concept I picked up from Ezra is that AI turns semantics into geometry. So you can do king - man + woman and get queen! (paraphrasing). If you could expand on this and give more examples in different modalities... that'd be awesome.
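(That "semantics into geometry" trick is easy to try on word embeddings. Here is a small sketch using gensim's pretrained GloVe vectors; the model name and the download step are assumptions about your own setup, not something from the video.)

```python
# Word-vector arithmetic: king - man + woman ~= queen.
# Downloads a small pretrained GloVe model (~66 MB) via gensim's downloader.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# 'queen' typically shows up at or near the top of the results
```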
Nice…glad you enjoyed it and stuck around for the whole thing. The semantic example is pretty awesome yeah… I’ve brought it up in the channel in my History of NLP video, but more examples on different modalities seems like a nice idea for a video!
Thank you sir! You cleared up the concept of latent space for me! And I can't wait to click on your multimodal video on this channel.
Just subscribed after your NeRF video, and this one is awesome too! You, Yannic, and Two Minute Papers are great at making AI content relatable and interesting and freaking cool :) What a time to be alive! lol
Wow that’s high praise! Those two are definitely an inspiration, so I’m kinda feeling surreal reading this! Thank you so much!! 🙌🏼🙌🏼
Awesome content!
🙏🏽🙌🏼
This is a fantastic breakdown. Great pacing, wonderful examples with easy-to-follow metaphors. Fix your audio and keep 'em coming!
Really great overview of topics. Subbed and liked each one I'm watching!
wow
Great information. The voice is considerably low, though. Please take care of that next time.
Great content!
Great content! Just as an FYI, you might want to turn up the mic volume. It's easier to lower the volume than to turn it up, from the consumer's POV.
Thanks for the feedback! Will keep it in mind for the next one…
Great video, but the audio level is way too low. Also, the video and audio are not in sync.