Diffusion and Score-Based Generative Models

  • Published Jun 19, 2024
  • Yang Song, Stanford University
    Generating data with complex patterns, such as images, audio, and molecular structures, requires fitting very flexible statistical models to the data distribution. Even in the age of deep neural networks, building such models is difficult because they typically require an intractable normalization procedure to represent a probability distribution. To address this challenge, we consider modeling the vector field of gradients of the data distribution (known as the score function), which does not require normalization and therefore can take full advantage of the flexibility of deep neural networks. I will show how to (1) estimate the score function from data with flexible deep neural networks and efficient statistical methods, (2) generate new data using stochastic differential equations and Markov chain Monte Carlo, and even (3) evaluate probability values accurately as in a traditional statistical model. The resulting method, called score-based generative modeling or diffusion modeling, achieves record performance in applications including image synthesis, text-to-speech generation, time series prediction, and point cloud generation, challenging the long-time dominance of generative adversarial networks (GANs) on many of these tasks. Furthermore, score-based generative models are particularly suitable for Bayesian reasoning tasks such as solving ill-posed inverse problems, yielding superior performance on several tasks in medical image reconstruction.
  • Science & Technology
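To make the abstract's normalization point concrete: for an energy-based parameterization (the setup referenced at 9:52 in the talk), taking the gradient of the log-density removes the intractable normalizing constant, since it does not depend on x. A one-line derivation:

```latex
p_\theta(x) = \frac{e^{f_\theta(x)}}{Z_\theta}, \qquad
s_\theta(x) = \nabla_x \log p_\theta(x)
            = \nabla_x f_\theta(x) - \underbrace{\nabla_x \log Z_\theta}_{=\,0}
            = \nabla_x f_\theta(x).
```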

Comments • 43

  • @Blueshockful 8 months ago +25

    recommended to anyone who wants to understand beyond the mere "noising/denoising" type explanations on diffusion models

  • @moaidali874 9 months ago +14

    This is one of the best presentations I have ever attended

  • @opinions8731 11 months ago +36

    What an amazing explanation. Wish there were an AI or authors explaining their papers this clearly.

  • @binjianxin7830 1 year ago +6

    What a pleasant insight to think of the gradients of the log-density as the score function! Thank you for sharing the great idea.

  • @StratosFair 1 year ago +21

    Thank you guys for making this talk available on your YouTube channel. This is pure gold

  • @LifeCodeGame 1 year ago +14

    Amazing insights into generative models! Thanks for sharing this valuable knowledge!

  • @jacksonyan7346 7 months ago +3

    It really shows how good the explanation is when even I can follow along. Thanks for sharing!

  • @xuezhixie1640 10 months ago +1

    Really amazing explanation for the entire diffusion model. Clear, great, wonderful work.

  • @simong1666 1 month ago

    This is the best summarizing resource I have found on the topic. The visual aids are really helpful, and the nature of the problem, the series of steps leading to improved models, and the sequence of logic are all so clear. What an inspiration!

  • @user-ux3wg1xj9s 7 months ago +3

    9:52 it is intractable to compute the integral of the exponential of a neural network (the normalizing constant)
    12:00 desiderata of deep generative models
    19:00 the goal is to minimize the Fisher divergence between the data score ∇_x log p_data(x) and the score model s(x); we don't know the ground-truth p_data(x), but score matching is equivalent to the Fisher divergence up to a constant, hence the same from the optimization perspective.
    23:00 however, score matching is not scalable, largely due to the Jacobian term, which requires many backpropagation passes. Thus, before computing the Fisher divergence, project both terms onto a random vector v, turning the full Jacobian into a cheap Jacobian-vector product; this is called sliced score matching.
    29:00 denoising score matching. The objective is tractable because we design the perturbation kernel by hand (so the kernel is easily computable). However, because of the added noise, denoising score matching cannot estimate noise-free distributions. Also, the variance of the denoising score matching objective grows and eventually explodes as the noise magnitude shrinks.
    31:20 with a Gaussian perturbation kernel, the denoising score matching problem takes a much simpler form. Optimize the objective with stochastic gradient descent, and choose the magnitude of sigma carefully.
    36:00 sampling with Langevin dynamics: initialize x0 from a simple (Gaussian, uniform) distribution, sample z from N(0, I), and repeat the update.
    37:20 the naive version of Langevin dynamics sampling does not work well in practice because of low-density regions
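A minimal code sketch of the 31:20 and 36:00 steps above, under stated assumptions: toy two-dimensional data, a small fully connected score network, and illustrative values of sigma and the step size, none of which come from the talk.

```python
import torch
import torch.nn as nn

# Toy score network s_theta(x): R^2 -> R^2 (dimensions are an assumption).
score_net = nn.Sequential(nn.Linear(2, 128), nn.SiLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
sigma = 0.5  # noise magnitude; must be chosen with care (see 31:20)

def dsm_loss(x, sigma):
    """Denoising score matching with a Gaussian perturbation kernel.
    The kernel score is grad log N(x_t; x, sigma^2 I) = -(x_t - x)/sigma^2."""
    z = torch.randn_like(x)
    x_t = x + sigma * z                       # perturbed sample
    target = -z / sigma                       # = -(x_t - x) / sigma^2
    return ((score_net(x_t) - target) ** 2).sum(dim=1).mean()

# Train by stochastic gradient descent on toy data:
# a mixture of two Gaussians (an assumption).
for step in range(2000):
    comp = torch.randint(0, 2, (256, 1)).float()
    x = (2 * comp - 1) * 2.0 + 0.3 * torch.randn(256, 2)  # means at +/-2
    opt.zero_grad()
    dsm_loss(x, sigma).backward()
    opt.step()

@torch.no_grad()
def langevin_sample(n_steps=500, eps=1e-2):
    # 36:00: initialize from a simple distribution, then repeat
    # x <- x + eps/2 * s_theta(x) + sqrt(eps) * z, with z ~ N(0, I).
    x = torch.randn(64, 2)
    for _ in range(n_steps):
        z = torch.randn_like(x)
        x = x + 0.5 * eps * score_net(x) + eps ** 0.5 * z
    return x

samples = langevin_sample()
```

As the 37:20 note warns, this single-noise-level sampler can mix poorly when the data has well-separated modes; the annealed variant discussed further down addresses that.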

  • @ck1847 1 year ago +4

    Extremely insightful lecture that is worth every minute of it. Thanks for sharing it.

  • @peterpan1874 7 months ago

    a very accessible and amazing tutorial that explained everything clearly and thoroughly!

  • @mm_Tesla_mm 10 months ago +1

    I love this talk! amazing and clear explanation!

  • @xiaotongxu1528 5 months ago

    Very clear! Thanks for this amazing lecture!

  • @coolguy69235 3 months ago

    What a formidable guy!!
    Very good explanation !!!
    Together with everyone, progress for everyone.

  • @hasesukkt 5 months ago

    Amazing explanation that helped me understand the diffusion model!

  • @tomjiang6831 2 months ago

    literally the best explanation!!!

  • @dy8576 5 months ago

    Such an incredible talk! I was just curious how everyone here keeps track of all this knowledge; would love to hear from you all

  • @johnini 1 month ago

    I am 44 seconds into the talk and already wanna say thank you! :)

  • @CohenSu 1 year ago +4

    16:54 all papers referenced... This man is amazing

  • @user-zr4ns3hu6y 1 month ago

    Such a great explanation

  • @hoang_minh_thanh 7 months ago

    This is amazing!

  • @huseyintemiz5249 9 months ago +1

    Amazing tutorial

  • @chartingwithliv 1 year ago +3

    Crazy this is available for free!! ty

  • @gaspell 5 months ago

    Thank you!

  • @uxirofkgor 1 year ago +11

    damn straight from yang song...

  • @twistedsector 4 months ago +1

    actually such a fire talk

  • @adrienforbu5165 5 months ago

    Thank you so much, Song

  • @YashBhalgat 1 year ago +3

    46:30 was a true mic-drop moment from Yang Song 😄

  • @user-ki1jl1fb5d 9 months ago +1

    Oh, very clear explanation! Would it be possible to share these slides?

  • @JuliusSmith 3 months ago

    Excellent overview of excellent work, thank you! I am worried about simplified CT scans, however. Wouldn't we get bias toward priors when we're looking instead for anomalies? There needs to be a way to detect all abnormalities with 100% reliability while still reducing radiation. Is this possible?

  • @beluga.314 9 months ago

    If we can obtain an equivalence between DDPM (training the network to predict the noise) and score-based training, shouldn't both give the same kind of results?

  • @RoboticusMusic 10 months ago +1

    What is utilizing this now, and is it still SoTA? Did this improve OpenAI, MJ, Stability, etc.? It sounds promising, but I need updated information.

  • @mariuswuyo8742 3 months ago

    I have a question about the "prob. evaluation" part: does it mean using the ODE to calculate the likelihood p_theta(x_0)? But then how do we feed the original data x_0 into the diffusion model?

  • @julienblanchon6082 7 months ago

    Wow, that was crystal clear

  • @pengjunlu 1 month ago

    Why is solving the "maximize likelihood" problem equivalent to solving the "explicit score matching" problem? For example, once you get s(x; theta), you do get a corresponding p(x); but is it the same p(x; theta) whose theta maximizes the likelihood?

  • @yuxinzhang4228 11 months ago

    Why run annealed Langevin dynamics from the highest noise level to the lowest instead of running Langevin dynamics directly on the score model at the lowest noise level?

    • @DrumsBah 6 months ago

      You use the perturbed distributions to traverse toward and converge on high-density areas via Langevin dynamics. Due to the manifold hypothesis, large areas of the data space have no density and thus no gradient for Langevin dynamics to follow. The large noise levels let you traverse these empty regions; once the samples are closer to high-density areas, the noise level can be decreased and the process repeated.
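A minimal sketch of that annealing loop. The geometric noise schedule and the step-size rule alpha_i = eps * sigma_i^2 / sigma_L^2 follow common NCSN-style practice; the analytic toy score below is an assumption so the snippet runs without a trained model.

```python
import torch

def annealed_langevin(score_fn, sigmas, n_steps=100, eps=2e-5, n=64, dim=2):
    """Annealed Langevin dynamics: run Langevin sampling at each noise
    level, moving from the largest sigma down to the smallest.
    score_fn(x, sigma) should approximate grad_x log p_sigma(x)."""
    x = torch.randn(n, dim)                      # start in a simple distribution
    for sigma in sigmas:                         # sigmas sorted high -> low
        alpha = eps * (sigma / sigmas[-1]) ** 2  # per-level step size
        for _ in range(n_steps):
            z = torch.randn_like(x)
            x = x + 0.5 * alpha * score_fn(x, sigma) + alpha ** 0.5 * z
    return x

# Toy example: the score of N(0, (1 + sigma^2) I), i.e. a standard Gaussian
# blurred by sigma-noise, is available in closed form.
toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)
samples = annealed_langevin(toy_score, sigmas=[10.0, 3.0, 1.0, 0.3, 0.1])
```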

  • @jimmylovesyouall 5 months ago +1

    Stanford is still awesome, thanks.

  • @timandersen8030 3 months ago

    How do you get as good at math as Dr. Song?

  • @johnini 1 month ago +1

    Now I need some GPUs

  • @user-xb8xz2oo5c 1 month ago

    27:44

  • @clairewang8370 10 days ago