Since these videos take an enormous amount of time (this one took about 300 hours), would you like to see, additionally, paper explanations in the style of Yannic Kilcher (www.youtube.com/@YannicKilcher) ? I could cover papers very quickly after they are released and also cover topics I wouldn’t do an animated video for. Let me know what you think :)
1000% yessssss ❤❤❤🎉
Sure 👍🏻
Sure! But I would prefer a deep dive once in a while to many simple paper explanations. There aren't many (video) resources for diffusion that go in such depth. So this is really great, thanks a lot for doing the video!
@@suraj7984 gotcha, yea I will keep doing normal videos. Was just wondering if other formats are also interesting
I think you should do both ... sorry. You explain in such a better way. Thanks a lot for doing this.
This video was absolutely fantastic; I feel like I’ve finally learned about diffusion models the right way! I really appreciated how you started from the basics, gradually building up concepts and intuition, while clearly explaining the math at every step. It took me a few hours to get through the entire video, but the length and pace were perfect; there’s nothing I would change. Everything was covered so thoroughly. Thank you for the effort you put into this, and I’m excited to see more videos from you in the future!
Wow! I did not expect this video to go this deep. But this is awesome! Please make more in-depth explanations like this. It’s clear a lot of hard work went into it and the animation is sooo elegant
Awesome explanation! Thanks for the hard work, it makes something far away and mathematical seem 10 times more intuitive
Thanks for posting again. Looking forward to the next one
Your videos are somehow simultaneously timely and timeless. Your content is absolutely appreciated and I wish you the best in your endeavors.
I absolutely love how you started from scratch, as in what the underlying PDF was. I'm working on a project on diffusion models and I don't know anything about it, and all the resources available are catered towards those with prerequisites I don't have yet, until this one. I haven't yet watched the whole thing, but I'm going to keep coming back to this till I understand everything in this video. Cheers mate!
this is a really good video. thank you for making it! i'd love to see a similar video for Flow Matching.
Well explained video, shout out to your hard work man, you are doing fabulous work, keep it up. We definitely want more videos on diffusion models like this, explaining the in-depth concepts.
Thank you for your wonderful explanation. Yes, I am very interested in learning about diffusion models, especially text to image.
I have watched this video three times, and may watch it again. Thank you.
love your mathematics explanation and visualization, no fancy transitions were needed, just slow, simple, and clear English phrases
This is a brilliant video!!!!!!!! It addressed almost all the questions I have about score matching and how it is related to diffusion models.
this is such a helpful video!! thanks so much!
OMG. This is really amazing. I am a PhD student, and I also struggle with a lot of papers, their origins and intuitions. It felt like the authors were getting these from another world. This video made a lot of sense of other papers. If possible, please provide a reading map for the entire generative models field? And your explanation and derivation is spot on.
You really are a genius. To get the derivation done on your own and to connect the dots.
Good job. ❤ 🎊
@@salmank.h2676 thank you so much!
@ is it possible to create a mind map or reading order for flow-based models and diffusion models?
Your videos are great. You do well at taking very complex maths topics and walking through them. The summary at the end also helps.
Thank you for this amazing explanation! keep going...
A series on topics like this would be a gold mine. Great work!!
Amazing video, thank you. I learned most of it a year ago in university but this was a great refresher which also provided me with new insights to some of the stuff. I really liked the conclusion of the Denoising Score Matching part, very beautiful.
Hi. Thank you so much for providing this incredibly great video. I've found this to be the best resource for understanding the derivation of score functions. I would love to see you cover model-based diffusion as your next topic!
u finally come back! love ur video 🎉
Most of the diffusion model videos I've watched so far mainly use images for sampling. This video is really great in terms of understanding the fundamentals. Would love to see more in-depth explanations from zero to hero.
Thank you for your work! I have started to learn about diffusion models and found that this is a more complex idea than the VAE and GAN ideas. However, the people who try to explain these complex concepts to others are very impressive!
The mathematical derivation and explanation is such a lifesaver. I also never really understood the underlying meaning when reading the diffusion model papers, but now everything clicked. Thank you so much for the videos, really enjoyed it. Please make more videos like this. Liked and subscribed : ).
1 year. See you back with a really easy to understand explanation. Thank you!
Will be more active!
thanks, thanks, thanks! you finally gave me the missing explanations from those diffusion papers!
Thank you so much for making this video! Hats off to this elegant explanation!
youtube giving good content??? i’ve been looking for exactly this lmao, thanks for your work
I used Score-SDE in my thesis and I have my defense next week :D what a timing
Thank you so much for such an informative video
Excellent explanation, thank you for making this.
great video man! thank you so much
Very nice introduction to the topic!
nice video for diffusion models!
32:38 To correct myself here: the paper gives an explanation of how to derive the sampler. I personally just find that approach much harder to understand, and generally the papers don’t go into too much detail in their derivations.
excellent work done by you, thanks for your explaining!
I haven't seen it yet, but pretty sure is an awesome video. Keep it up man!
Great video! Would be great to see a video on flow matching in the same style!
@@nicolasdufour315 That actually is my plan to do for the next video haha
@@outliier I really want that video bro, awesome job!
Suuuuuuper Helpful!
7:35 I have a question: in the second line, $-\mathbb{E}_{p(x)}[\nabla_x s_\theta(x)] = -\int p(x) \nabla_x s_\theta(x)\, dx$, but you wrote a positive sign?
There was another mistake with a sign, which cancels this one out: after integrating by parts, the sign should have changed and been plus instead of minus.
@@Topakhok thanks for this clarification
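For anyone following this thread, here is a sketch of the integration-by-parts step in 1D (the standard Hyvärinen score-matching trick), under the usual assumption that $p(x)\,s_\theta(x) \to 0$ as $x \to \pm\infty$:

```latex
\mathbb{E}_{p(x)}\!\left[\partial_x s_\theta(x)\right]
  = \int p(x)\,\partial_x s_\theta(x)\,dx
  = \underbrace{\bigl[p(x)\,s_\theta(x)\bigr]_{-\infty}^{\infty}}_{=\,0}
    \;-\; \int \partial_x p(x)\,s_\theta(x)\,dx
  = -\,\mathbb{E}_{p(x)}\!\left[\partial_x \log p(x)\cdot s_\theta(x)\right]
```

using $\partial_x p(x) = p(x)\,\partial_x \log p(x)$. The boundary term vanishes and integration by parts flips the sign, which is the sign flip discussed above.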
Awesome! Thank you!
This is the best explanation of score based models, I imagine I will be rewatching this video over and over. I have also always struggled to understand where some of the Maths results in the big papers come from, you do a very good job demystifying that. I can say I have a much more intuitive understanding of score based models now. I hope to see more deep dives on similar topics (can I suggest "Flow Matching for Generative Modeling", arXiv:2210.02747? I would love to see your take on it). Also very interested in more regular Yannic Kilcher style paper journal club videos (and also a discussion group to go along with it?).
@@wolfeinstien313 love to hear that! Already started working on a video about Flow Matching ! Might share progress on twitter if you wanna follow around there :)
Wow! Great job. Many thanks for sharing =)
Had an epiphany watching you explain so many things that I never fully grasped, thank you so much
Thank you for the video, love it!
A full series in generative diffusion models would be awesome
thank you for the awesome video!!
What an amazing video! I did not expect the video to contain the derivations which I have personally struggled to search for. If it's not too much, can you do a PyTorch implementation of VP-SDE or SDE-DDPM/DDIM? Your previous video of DDPM in PyTorch was extremely useful, and I would appreciate it if a similar video for this is possible. Finally, love the work you put into this. This channel is a gem for AI enthusiasts.
@@talhaahmed6488 thank you so much for the nice comment! I will do an implementation video after the next one!
Every time you say theta I hear feta. Very nice video.
@@ihmejakki2731 bon appetit
This is epic!
Really nice explanation, intuitive but also math oriented. Now I am looking forward to the implementation
@@vinc6966 My plan is to do Flow Matching next and then an implementation tutorial :)
@@outliier ah yes, GANs, diffusion, score-based models, and flow matching, the four horsemen of generative AI, keep up the good work! :))
@@outliier Yeah, Flow Matching sounds interesting. There are not a lot of explanations on the internet. An implementation tutorial would also be very cool
thank you for such a great video! I would definitely want more videos like this, and more with code, using PyTorch to implement the equations!
I have a question. In the last two lines of the formula at 7:30, why did the sign change to positive from the second step to the third step? Will this affect the subsequent optimization process? Thank you for your excellent work, it really helps me a lot!
Actually if you scroll down in the comments there was someone asking this question, which was answered by someone else with this comment: "There was another mistake with a sign, which cancels this one out. He was wrong with a sign after integrating by parts (after that it should have changed and be plus instead of minus)."
Sorry about this
@@outliier Oh, I didn’t notice that someone had already asked. Thanks, this is the best video explanation I could find so far! Looking forward to the next videos!
8:11 when gradient of s_{\theta}(x) = 0, x can be a local maximum or minimum, why do you think it's a local maximum and not minimum?
11:45 summary
33:58 summary again
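On the sampling side these summaries cover, a minimal sketch of (unadjusted) Langevin dynamics using a known analytic score; the toy 1D Gaussian target and all parameter values here are illustrative assumptions, not taken from the video:

```python
import numpy as np

# Toy target: N(mu, sigma^2). Its score is the gradient of the log-density:
# d/dx log p(x) = -(x - mu) / sigma^2.
mu, sigma = 3.0, 1.5

def score(x):
    return -(x - mu) / sigma**2

# Langevin update: x <- x + (eps/2) * score(x) + sqrt(eps) * z,  z ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)  # arbitrary initialization
eps = 0.01                     # illustrative step size
for _ in range(3000):
    z = rng.standard_normal(x.shape)
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * z

# The chain's stationary distribution approximates the target:
print(x.mean(), x.std())  # close to mu = 3.0 and sigma = 1.5
```

Note the noise term: it is what keeps the samples distributed according to $p(x)$ rather than collapsing onto the local maxima of the density, which relates to the local-maximum question above.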
This is a great video explaining in depth. Really enjoyed it. Would it also be possible for you to make implementation videos as well, like what you did for DDPM? Particularly, I am interested in videos explaining how to condition DDPM, for example, in engineering domain that requires the model to be conditioned with physics.
Excellent video! I'm kind of stuck at a step at time 33:05. Could you please explain why the score function equals a constant times s_theta? (I can get from the video that s_theta should follow the direction of the log-probability gradient, but I don't know why the constant is $1/\sqrt{1-\bar\alpha_t}$.)
I actually encountered this equation several times when reading papers, like in the famous Yang Song 2020 paper. But they seem to just take it for granted, which is not so apparent to me.
@@XinzeLi-j7h I think it is an approximation you have to make in order to view DDPM this way. You know how the DDPM update looks, and by rearranging terms to get there, this is the only thing possible. Not a good answer, but do you get the idea?
@@outliier I guess I understand what you mean. I will try the derivation later. Thank you very much!
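For reference, the constant can be read off from the standard DDPM forward kernel $q(x_t \mid x_0) = \mathcal{N}\bigl(\sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\mathbf{I}\bigr)$: write $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ with $\epsilon \sim \mathcal{N}(0,\mathbf{I})$ and differentiate the Gaussian log-density:

```latex
\nabla_{x_t} \log q(x_t \mid x_0)
  = -\frac{x_t - \sqrt{\bar\alpha_t}\,x_0}{1-\bar\alpha_t}
  = -\frac{\epsilon}{\sqrt{1-\bar\alpha_t}}
```

So a network trained to predict the noise $\epsilon$ estimates the (conditional) score up to the factor $-1/\sqrt{1-\bar\alpha_t}$, which is where that constant comes from; denoising score matching then carries this over to the marginal score.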
Regarding your pinned comment. No offense to Yannic, but your explanations are 10x better. The topics you've covered you actually understand, you explain not only what is going on, but also why. That, and you going into mathematical explanations are really appreciated. Don't worry about the quantity, it's easy to read a paper, and put surface level explanations out for more views, what you're doing is more valuable. Your videos are a treasure for amateur Deep Learning hobbyists like me who want to dig deeper into this field.
great work
Hi, @outlier!
Thank you for such a large number of great tutorials! I'm wondering what tools do you use to make math animations in your videos?
@@romanschutski4948 i use manim community :3 the python library created by 3blue1brown
i know it's a short video but some of the syntax may be confusing, e.g. the subscript on the \mathbb{E} that is p(x). In a financial context we often use things such as \mathbb{E}_t[h(X_T)] = the conditional expectation of h(X_T), where X is a stochastic process generating a filtration, so that it is equal to \mathbb{E}[h(X_T) | \mathcal{F}_t].
I know it's a totally different domain, but oftentimes notation like this can be dripping with meaning. So, what is the _meaning_ of the subscript p(x), and what is the _meaning_ of the double bar ( ||_2^2 ) in the expectation? Is that the L2 norm? timestamp 8:17
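For what it's worth, in the score-matching literature the subscript simply names the distribution the expectation is taken over (no filtration or conditioning involved), and the double bar is indeed the squared Euclidean (L2) norm:

```latex
\mathbb{E}_{p(x)}\bigl[f(x)\bigr] := \int f(x)\,p(x)\,dx,
\qquad
\lVert v \rVert_2^2 := \sum_i v_i^2
```

So $\mathbb{E}_{p(x)}\bigl[\lVert s_\theta(x) - \nabla_x \log p(x)\rVert_2^2\bigr]$ reads as the average squared Euclidean distance between the model's score and the true score, averaged over data drawn from $p$.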
more videos on diffusion models would be great
Love the music background, very relaxing when learning, pls don’t change! Thx!
pure gold , love from china❤❤❤
This is a staggering amount of work, do you have a patreon where you can be supported?
Nice to see you again \o/
@@NoahElRhandour hehe
Thanks for your hard work! Amazing explanation! Just want to check the squared equation at 5:55. Can you explain why $\mathbb{E}[p(x)] = \int p(x) dx$? I feel like the equation has something missing...
good video!!!
Thank you so much. Wonderful Wonderful Wonderful
Very nice video.
Was struggling with Anderson's equations and score matching for a long time. Initially I thought the Gaussian noise description (2020 DDPM) was easier than Song's SDE (2021), but it turns out to be more fundamental and intuitive.
Also, can you make videos on how diffusion models can fuse / inpaint images in SDXL? (Like in Brownian bridge, cold diffusion, and Palette, or in general img2img translation?)
Thanks a lot for the video.
awesome!
I'd love to see a video on training video models cheaply, like you did for image models with Würstchen
@@swaystar1235 Unfortunately even doing Würstchen style video models is still super expensive and there are many things that you have to solve first outside the model :/
Can you make an implementation video for Score SDEs?
It appears that the minus sign in the integration by parts was mistakenly written as a plus
What about story visualization video?
Bro also explained why - (a - b) = (b - a) 😂😂
@@tejomaypadole4392 no details left out haha
If you want to make videos with quicker production, maybe you could use a whitescreen and write everything out, so you can still explain it intuitively but quicker.
Can you give the source for the math? I want to try a hands-on approach
Take a look at the papers I linked. The math in the video is taken from all of them together, however some of the things are not really found anywhere in them unfortunately. So this took a while
waiting for implementation video
Can you explain more about classifier free guidance code implementation during training? 😂
thx
I wonder, did you study only this for a year? Because I really wonder whether being able to go this deep takes a year.
@@oguzhanercan4701 no, I was just doing a bunch of other things too and didn't always spend so much time on the video.
@@outliier To ask more clearly, have you been working on the basics of score matching and diffusion models for the last year? Assuming that you are using diffusion models at Luma, you also studied advanced topics on the related subject.
@@oguzhanercan4701 yea I have been mostly working with diffusion models over the last 2 years
love it btw can you pls provide the github code tks!
❤
Calling ∇s stretches terminology a bit, right? Given s is a gradient vector field itself.
Cool effort, thanks for going through all the manipulations. As for the red thread of the video, I'm not fully sure why you work 10 minutes on the E[s^2]+... term, but then in the explained denoising approach it doesn't really show up anymore.
Last note: Unlike Lagrang-ian dynamics, Langevin dynamics is not Langev-ian dynamics. But I think Langevin is still on the easier side to pronounce - don't be afraid.
Music is unhelpful and distracting.
Please don't do piano background it is super annoying and distracting. Thanks
@@madrooky1398 interesting. I found it much more comforting and giving 3B1B vibes. Will consider
@@outliier I second this. but also you've done a wonderful job.
@@amortalbeing thanks for the feedback. Should do a poll at some point I guess
+1 , the piano music is distracting. If one likes it, he can overlay it himself.
FWIW I liked the piano because it calms me down when I get frustrated from not understanding a step 😃
This technology is obnoxiously abstracted beyond usefulness. The mathematical approach is also likely flawed and misses nuance. AMI is better.
@@Suro_One what is AMI?