Since these videos take an enormous amount of time (this one took about 300 hours), would you like to see, additionally, paper explanations in the style of Yannic Kilcher (www.youtube.com/@YannicKilcher)? I could cover papers very quickly after they are released and also cover topics I wouldn’t do an animated video for. Let me know what you think :)
1000% yessssss ❤❤❤🎉
Sure 👍🏻
Sure! But I would prefer a deep dive once in a while to many simple paper explanations. There aren't many (video) resources for diffusion that go in such depth. So this is really great, thanks a lot for doing the video!
@@suraj7984 gotcha, yea I will keep doing normal videos. Was just wondering if other formats are also interesting
I think you should do both ... sorry. You explain things in such a better way. Thanks a lot for doing this.
Wow! I did not expect this video to go this deep. But this is awesome! Please make more in-depth explanations like this. It’s clear a lot of hard work went into it, and the animation is sooo elegant
I absolutely love how you started from scratch, as in what the underlying PDF was. I'm working on a project on diffusion models and I don't know anything about it, and all the resources available are catered towards those with prerequisites I don't have yet, until this one. I haven't yet watched the whole thing, but I'm going to keep coming back to this till I understand everything in this video. Cheers mate!
love your mathematics explanation and visualization, no fancy transitions needed, just slow, simple, and clear English phrases
I have watched this video three times, and I may watch it again. Thank you.
Thank you for your work! I have started to learn about diffusion models and found that this is a more complex idea than the VAE and GAN ideas. The people who manage to explain such complex concepts to others are very impressive!
Most of the diffusion model videos I've watched so far mainly use images for sampling. This video is really great in terms of understanding the fundamentals. Would love to see more in-depth explanations from zero to hero.
Your videos are somehow simultaneously timely and timeless. Your content is absolutely appreciated and I wish you the best in your endeavors.
Amazing video, thank you. I learned most of it a year ago in university, but this was a great refresher which also gave me new insights into some of the stuff. I really liked the conclusion of the Denoising Score Matching part, very beautiful.
thank you for such a great video! i would definitely want more videos like this, and more with code! using PyTorch to implement the equations!
Regarding your pinned comment: no offense to Yannic, but your explanations are 10x better. The topics you've covered, you actually understand; you explain not only what is going on but also why. That, and your going into the mathematical explanations, is really appreciated. Don't worry about quantity: it's easy to read a paper and put out surface-level explanations for more views. What you're doing is more valuable. Your videos are a treasure for amateur Deep Learning hobbyists like me who want to dig deeper into this field.
A series on topics like this would be a gold mine. Great work!!
I used Score-SDE in my thesis and I have my defense next week :D what a timing
One year, and you're back with a really easy-to-understand explanation. Thank you!
Will be more active!
This is the best explanation of score-based models; I imagine I will be rewatching this video over and over. I have also always struggled to understand where some of the maths results in the big papers come from, and you do a very good job demystifying that. I can say I have a much more intuitive understanding of score-based models now. I hope to see more deep dives on similar topics (can I suggest "Flow Matching for Generative Modeling", arXiv:2210.02747? I would love to see your take on it). Also very interested in more regular Yannic Kilcher style paper journal club videos (and also a discussion group to go along with it?).
@@wolfeinstien313 love to hear that! Already started working on a video about Flow Matching! Might share progress on Twitter if you wanna follow along there :)
Your videos are great. You do well at taking very complex maths topics and walking through them. The summary at the end also helps.
Hi. Thank you so much for providing this incredibly great video. I've found this to be the best resource for understanding the derivation of score functions. I would love to see you cover model-based diffusion as your next topic!
this is such a helpful video!! thanks so much!
youtube giving good content??? i’ve been looking for exactly this lmao, thanks for your work
This is a brilliant video!!!!!!!! It addressed almost all the questions I had about score matching and how it relates to diffusion models.
The mathematical derivation and explanation is such a lifesaver, I also never really understood the underlying meaning when reading the diffusion models but now everything clicked. Thank you so much for the videos, really enjoyed it. Please make more of such videos. Liked and subscribed : ).
Great video! Would be great to see a video on flow matching in the same style!
@@nicolasdufour315 That actually is my plan to do for the next video haha
@@outliier I really want that video bro, awesome job!
thanks, thanks, thanks! you finally gave me the missing explanations from those diffusion papers!
Excellent explanation, thank you for making this.
Thank you so much for making this video! Hats off to this elegant explanation!
Thank you for this amazing explanation! keep going...
32:38 To correct myself here: the paper does give an explanation of how to derive the sampler. I personally just find that approach much harder to understand, and generally the papers don’t go into much detail in their derivations.
u finally came back! love ur video 🎉
I haven't seen it yet, but I'm pretty sure it's an awesome video. Keep it up man!
Thank you for the video, love it!
This is a great video, explaining things in depth. Really enjoyed it. Would it also be possible for you to make implementation videos as well, like what you did for DDPM? Particularly, I am interested in videos explaining how to condition DDPM, for example in engineering domains that require the model to be conditioned on physics.
Very nice introduction to the topic!
excellent work, thanks for your explanation!
A full series in generative diffusion models would be awesome
What an amazing video! I did not expect the video to contain the derivations, which I have personally struggled to find. If it's not too much, can you do a PyTorch implementation of VP-SDE or SDE-DDPM/DDIM? Your previous DDPM-in-PyTorch video was extremely useful, and I would appreciate a similar video for this. Finally, love the work you put into this. This channel is a gem for AI enthusiasts.
@@talhaahmed6488 thank you so much for the nice comment! I will do an implementation video after the next one!
Thank you so much for such an informative video
Had an epiphany watching you explain so many things that I never fully grasped, thank you so much
Wow! Great job. Many thanks for sharing =)
This is a staggering amount of work, do you have a patreon where you can be supported?
nice video for diffusion models!
great video man! thank you so much
more videos on diffusion models would be great
Really nice explanation, intuitive but also math-oriented. Now I am looking forward to the implementation
@@vinc6966 My plan is to do Flow Matching next and then an implementation tutorial :)
@@outliier ah yes, GANs, diffusion, score-based models, and flow matching, the four horsemen of generative AI, keep up the good work! :))
@@outliier Yeah, Flow Matching sounds interesting. There are not a lot of explanations on the internet. An implementation tutorial would also be very cool
Every time you say theta I hear feta. Very nice video.
@@ihmejakki2731 bon appetit
Nice to see you again \o/
@@NoahElRhandour hehe
thank you for the awesome video!!
Suuuuuuper Helpful!
I'd love to see a video on training video models cheaply, like you did for image models with Würstchen
@@swaystar1235 Unfortunately even doing Würstchen style video models is still super expensive and there are many things that you have to solve first outside the model :/
I have a question. In the last two lines of the formula at 7:30, why did the sign change from negative to positive between the second step and the third step? Will this affect the subsequent optimization process? Thank you for your excellent work, it really helps me a lot!
Actually, if you scroll down in the comments, someone asked this question, and it was answered by someone else with this comment: "There was another mistake with a sign, which cancels this one out. He was wrong with a sign after integrating by parts (after that it should have changed and be plus instead of minus)."
Sorry about this
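For anyone tracking the signs, here is the step written out in one dimension (a sketch of the standard Hyvärinen score-matching argument, assuming the boundary term vanishes, i.e. $p(x)\, s_\theta(x) \to 0$ as $|x| \to \infty$): using $p(x)\, \partial_x \log p(x) = \partial_x p(x)$, the cross term is
$-\mathbb{E}_{p(x)}[\, s_\theta(x)\, \partial_x \log p(x) \,] = -\int s_\theta(x)\, \partial_x p(x)\, dx = -\big[\, s_\theta(x)\, p(x) \,\big]_{-\infty}^{\infty} + \int p(x)\, \partial_x s_\theta(x)\, dx = +\,\mathbb{E}_{p(x)}[\, \partial_x s_\theta(x) \,],$
so integration by parts is exactly where the sign flips to plus, which is how the two slips mentioned above cancel.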
great work
Thank you so much. Wonderful Wonderful Wonderful
8:11 when the gradient s_{\theta}(x) = 0, x can be a local maximum or a local minimum, so why do you assume it's a local maximum and not a minimum?
11:45 summary
33:58 summary again
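A sketch of why only the maxima matter here (my own expansion, not from the video): near a stationary point $x^*$ with $\nabla_x \log p(x^*) = 0$, a second-order expansion gives $\log p(x) \approx \log p(x^*) + \tfrac{1}{2}(x - x^*)^\top H\,(x - x^*)$ with $H = \nabla_x^2 \log p(x^*)$, so the noise-free part of the Langevin update is approximately $\tfrac{\epsilon}{2} H\,(x - x^*)$. This pulls samples toward $x^*$ only when $H$ is negative definite, i.e. at a local maximum; at a local minimum or saddle it pushes them away, and the injected Gaussian noise guarantees a sample never sits exactly on such an unstable point. Both kinds of stationary points have zero score, but only maxima attract.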
Thanks for your hard work! Amazing explanation! Just want to check the squared equation at 5:55. Can you explain why $\mathbb{E}[p(x)] = \int p(x) dx$? I feel like the equation has something missing...
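In case it helps others with the same question (a general notation note, not a transcription of the video): the subscript names the density being averaged over, so for any function $f$, $\mathbb{E}_{p(x)}[f(x)] = \int p(x)\, f(x)\, dx$, with the special case $f \equiv 1$ giving $\mathbb{E}_{p(x)}[1] = \int p(x)\, dx = 1$. Presumably the step at 5:55 is this identity applied to the bracketed term, rather than an expectation of $p(x)$ itself.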
i know it's a short video, but some of the syntax may be confusing, e.g. the subscript p(x) on the \mathbb{E}. In a financial context we often use things such as \mathbb{E}_t[h(X_T)] for the conditional expectation of h(X_T), where X is a stochastic process generating a filtration, so it is equal to \mathbb{E}[h(X_T) | \mathcal{F}_t].
I know it's a totally different domain, but oftentimes notation like this can be dripping with meaning. So, what is the _meaning_ of the subscript p(x), and what is the _meaning_ of the double bar (||·||_2^2) in the expectation? Is that the L2 norm? timestamp 8:17
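For what it's worth (a notation note, not from the video itself): the subscript here simply names the distribution that x is drawn from, $\mathbb{E}_{x \sim p(x)}[f(x)] = \int f(x)\, p(x)\, dx$ (no filtration or conditioning is involved), and yes, $\|v\|_2^2 = \sum_i v_i^2$ is the squared L2 norm, so the objective $\mathbb{E}_{p(x)}\big[\|s_\theta(x) - \nabla_x \log p(x)\|_2^2\big]$ is the expected squared L2 distance between the model's score and the true score.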
This is epic!
7:35 i have a question: the second line should be $-\mathbb{E}_{p(x)}[\nabla_x s_\theta(x)] = -\int p(x)\, \nabla_x s_\theta(x)\, dx$, but you wrote a positive sign?
There was another mistake with a sign, which cancels this one out. He was wrong with a sign after integrating by parts (after that it should have changed and be plus instead of minus)
@@Topakhok thanks for this clarification
Can you make an implementation video for Score SDEs?
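Until such a video exists, here is a minimal PyTorch sketch of what the sampling loop could look like (annealed Langevin dynamics in the style of Song & Ermon; score_model, the noise scales sigmas, and the step size are placeholder assumptions, not a tested implementation):

import torch

@torch.no_grad()
def annealed_langevin_sample(score_model, shape, sigmas, n_steps=100, base_lr=2e-5):
    # score_model(x, sigma) is assumed to approximate the score
    # grad_x log p_sigma(x) of the sigma-perturbed data density.
    x = torch.rand(shape)                            # initialize from noise
    for sigma in sigmas:                             # anneal sigma from large to small
        step = base_lr * (sigma / sigmas[-1]) ** 2   # rescale step per noise level
        for _ in range(n_steps):
            z = torch.randn_like(x)
            # Langevin update: climb the score, then inject Gaussian noise
            x = x + step * score_model(x, sigma) + (2 * step) ** 0.5 * z
    return x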
good video!!!
Excellent video! I'm kind of stuck at a step at 33:05. Could you please explain why the score function equals a constant times s_theta? (I get from the video that s_theta should follow the direction of the log probability, but I don't know why the constant is 1 over the square root of 1 - \bar{\alpha}_t.)
I actually encountered this equation several times when reading papers, like in the famous Yang Song 2020 paper. But they seem to just take it for granted, and it is not so apparent to me.
@@XinzeLi-j7h I think it is an approximation you have to make in order to view DDPM this way. Like, you know how the DDPM update looks, and when you rearrange terms to get there, this is the only thing possible. Not a good answer, but do you get the idea?
@@outliier I guess I understand what you mean. I will try the derivation later. Thank you very much!
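For anyone else stuck on the same step, the constant can be read off from the forward process (a sketch in the usual DDPM notation, which I am assuming matches the video's): since $q(x_t \mid x_0) = \mathcal{N}\big(x_t;\, \sqrt{\bar\alpha_t}\, x_0,\, (1 - \bar\alpha_t) I\big)$, i.e. $x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, the score of this Gaussian is
$\nabla_{x_t} \log q(x_t \mid x_0) = -\frac{x_t - \sqrt{\bar\alpha_t}\, x_0}{1 - \bar\alpha_t} = -\frac{\epsilon}{\sqrt{1 - \bar\alpha_t}}.$
Because the noise-prediction network (written $\epsilon_\theta$ here, whatever the video calls it) is trained to predict $\epsilon$, the learned score is $-\epsilon_\theta(x_t, t) / \sqrt{1 - \bar\alpha_t}$, which is where the $1/\sqrt{1 - \bar\alpha_t}$ constant (with its minus sign) comes from.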
awesome!
If you want to make videos with quicker production, maybe you could use a white screen and write everything out, so you can still explain things intuitively but more quickly.
Bro also explained why - (a - b) = (b - a) 😂😂
@@tejomaypadole4392 no details left out haha
What about a story visualization video?
thx
Love the background music, very relaxing when learning, pls don’t change! Thx!
It appears that the minus sign in the integration by parts was mistakenly written as a plus
can you give the source for the math? i want to try a hands-on approach
Take a look at the papers I linked. The math in the video is taken from all of them together; however, some of the things are not really found anywhere in them, unfortunately. So this took a while
Can you explain more about the classifier-free guidance code implementation during training? 😂
❤
I wonder: did you study only this for the past year? Because I really wonder whether being able to go this deep takes a year?
@@oguzhanercan4701 no, I was just doing a bunch of other things too and didn't always spend that much time on the video.
@@outliier To ask more clearly: have you been working on the basics of score matching and diffusion models for the last year? Assuming you are using diffusion models at Luma, you must also have studied advanced topics on the subject.
@@oguzhanercan4701 yea I have been mostly working with diffusion models over the last 2 years
Calling ∇s stretches terminology a bit, right? Given s is a gradient vector field itself.
Cool effort, thanks for going through all the manipulations. As for the red thread of the video, I'm not fully sure why you work for 10 minutes on the E[s^2]+... term, but then it doesn't really show up anymore in the denoising approach you explain.
Last note: Unlike Lagrang-ian dynamics, Langevin dynamics is not Langev-ian dynamics. But I think Langevin is still on the easier side to pronounce - don't be afraid.
Music is unhelpful and distracting.
Please don't do piano background it is super annoying and distracting. Thanks
@@madrooky1398 interesting. I found it much more comforting and giving 3B1B vibes. Will consider
@@outliier I second this. but also you've done a wonderful job.
@@amortalbeing thanks for the feedback. Should do a poll at some point I guess
+1 , the piano music is distracting. If one likes it, he can overlay it himself.
FWIW I liked the piano because it calms me down when I get frustrated from not understanding a step 😃
This technology is obnoxiously abstracted beyond usefulness. The mathematical approach is also likely flawed and misses nuance. AMI is better.
@@Suro_One what is AMI?