Lecture 19: Generative Models I

  • Published Sep 20, 2024

Comments • 20

  • @syedhasany1809 · 3 years ago · +7

    I am extremely glad that Generative Models were spread over 2 lectures; excellent lecture as always!

  • @matato2932 · 1 year ago · +6

    26:00, "we could ask a model: give me an image of a cat with a purple tail, but I don't think it'll work". Amazing how within just a few years we have already reached the point where we can synthesize images from arbitrary input.

  • @40yearoldman · 1 year ago · +3

    This professor is an excellent communicator.

  • @ZinzinsIA · 1 year ago · +2

    Extremely thankful for this lecture, finally getting the intuition behind generative models. Very valuable, thanks again, awesome lecture.

  • @frommarkham424 · 2 months ago

    I gained a better understanding of generative models as soon as I saw the thumbnail, without even watching the video. Thanks!

  • @AkshayRoyal · 3 years ago · +4

    My left ear enjoyed this lecture a lot :P

  • @heejuneAhn · 1 year ago

    For PixelRNN, why not mention the sampling methods (greedy, stochastic, temperature control, and maybe even beam search), which are quite related to current GPT generation methods?
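
    A minimal sketch of what greedy vs. temperature-scaled sampling could look like for a PixelRNN-style model that outputs 256 logits per pixel (the helper name and shapes are illustrative assumptions, not code from the lecture):

    ```python
    import torch

    def sample_pixel(logits, temperature=1.0, greedy=False):
        """Pick one pixel value in [0, 255] from a vector of 256 logits."""
        if greedy:
            return logits.argmax(dim=-1)                      # most likely value
        probs = torch.softmax(logits / temperature, dim=-1)   # temperature-scaled softmax
        return torch.multinomial(probs, num_samples=1).squeeze(-1)

    # Lower temperature gives sharper, more deterministic samples.
    logits = torch.randn(256)
    print(sample_pixel(logits, temperature=0.7))
    ```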

  • @heejuneAhn · 1 year ago

    If you just assume z is Gaussian, does it really become Gaussian? In principle, the latent vector could have any distribution. So we have to add one more constraint (that the latent should be a diagonal-covariance multivariate Gaussian) to the autoencoder when we train it.

    • @mrfli24 · 1 year ago

      The Gaussian and diagonal-covariance assumptions are design choices of the model that is used to approximate p(x). Gaussian distributions are nice to work with, and the diagonal covariance is used for efficient computation, as the lecture mentioned.
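
      A minimal sketch of why the diagonal-covariance Gaussian is convenient, assuming an encoder that outputs a per-dimension mean and log-variance (names and shapes are illustrative, not code from the lecture):

      ```python
      import torch

      def reparameterize(mu, log_var):
          """Sample z ~ N(mu, diag(exp(log_var))) with the reparameterization trick."""
          eps = torch.randn_like(mu)                 # standard normal noise
          return mu + torch.exp(0.5 * log_var) * eps

      def kl_to_standard_normal(mu, log_var):
          """Closed-form KL(q(z|x) || N(0, I)); cheap because the covariance is diagonal."""
          return 0.5 * torch.sum(torch.exp(log_var) + mu**2 - 1.0 - log_var, dim=-1)
      ```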

  • @heejuneAhn · 1 year ago

    Can you explain why PixelRNN is an explicit pdf model? Can you write down the pdf of the model? What do you mean by "explicit"? To be explicit, the probability should have the form p(x1, x2, x3, ..., xn), where xi is the value of each pixel. Can you express it like that? And can you explain how we train PixelRNN? E.g., the output is a probability over the 0-255 values; is an L1 or L2 loss applied with the training images?

    • @mrfli24 · 1 year ago

      1. Your understanding of "explicit" is correct. The pdf is written as a product of conditional pdfs, p(x1, ..., xn) = p(x1) p(x2 | x1) ... p(xn | x1, ..., x(n-1)). At test time, we can sequentially compute the conditional pdfs and multiply them together. 2. I think the standard way to handle probability-distribution outputs (with a softmax) is the cross-entropy loss. The training paradigm is essentially the same as training a language model.
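
      A minimal sketch of that training objective, assuming the model outputs 256 logits per pixel (function name and shapes are illustrative assumptions):

      ```python
      import torch
      import torch.nn.functional as F

      def pixel_nll(logits, images):
          """logits: (batch, num_pixels, 256); images: (batch, num_pixels), integer values 0-255."""
          log_probs = F.log_softmax(logits, dim=-1)
          # Negative log-likelihood of each observed pixel value, i.e. per-pixel cross entropy.
          nll = -log_probs.gather(-1, images.long().unsqueeze(-1)).squeeze(-1)
          return nll.sum(dim=-1).mean()              # sum over pixels, average over the batch
      ```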

  • @bibiworm · 3 years ago

    I am quite confused at 1:08, where q(z|x) is called the posterior of the decoder. But actually, we are using the encoder to estimate q(z|x). So what is the implication of the terminology here? I'd really appreciate it if anyone could shed some light on this.

  • @francescos7361 · 1 year ago

    Thanks, interesting.

  • @kyungryunlee9626 · 3 years ago · +1

    Can somebody help me with the concept of probability? At 31:44 he talks about how to train a model on a given dataset. It says the goal in an unsupervised model is to find the value of W that maximizes the probability of the training data. I am confused by this "probability of the training data". Does it mean the probability that, when a training example x(i) is given, the output is the same x(i)? Like the cost function of an autoencoder (the square of x_hat - x)?
    My background knowledge is not good enough to look up papers or math textbooks, so please help me!

    • @bibiworm · 3 years ago · +3

      For autoregressive models, finding the optimal weights W that maximize the probability of the training data is the same as maximum likelihood estimation. The probability of the training data is essentially the product of the probabilities of the individual training datapoints, p(x_i), each of which is by definition f(x_i, W).
      Autoencoders do not explicitly model or estimate the probability density function. Rather, they find optimal weights by minimizing the reconstruction error, that is ||x - x_hat||^2.
      Different algorithms adopt different methods to find the optimal weights. Hopefully this helps shed some light for you.
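
      A tiny sketch contrasting the two objectives, assuming a hypothetical model object with a log_prob method (illustrative, not code from the lecture):

      ```python
      import torch

      def mle_objective(model, x):
          """Maximize the probability of the data == minimize -sum_i log p(x_i; W)."""
          return -model.log_prob(x).sum()

      def autoencoder_objective(encoder, decoder, x):
          """Minimize the reconstruction error ||x - x_hat||^2; no explicit p(x) is modeled."""
          x_hat = decoder(encoder(x))
          return ((x - x_hat) ** 2).sum(dim=-1).mean()
      ```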

    • @kyungryunlee9626 · 3 years ago

      @@bibiworm thanks for your help!!

  • @kolokolo2365 · 2 years ago

    Very nice lecture, as always.

  • @erniechu3254 · 1 year ago · +2

    26:03 was already realized in 2022, haha.

    • @frankiethou7366 · 4 months ago

      How time flies! Sora even makes video generation possible.