Understand the Math and Theory of GANs in ~ 10 minutes

  • Published on Dec 10, 2024

Comments • 84

  • @jlee-mp4
    @jlee-mp4 10 months ago +12

    Holy sh*t, this guy is diabolically, criminally, offensively underrated. THE best explanation of GANs I have ever seen, somehow rooting it deeply in the mathematics while keeping it surface level enough to fit in a 12 min video. Wow

    • @doyoonkim4187
      @doyoonkim4187 4 months ago

      It's hard to find this kind of precise, sophisticated material on the internet. I really like this video

  • @elliotha6827
    @elliotha6827 1 year ago +8

    The hallmark of a good teacher is when they can explain complex topics simply and intuitively. And your presentation on GANs in this video truly marks you as a phenomenal one. Thanks!

    • @fidaeharchli4590
      @fidaeharchli4590 8 months ago

      I agreeeeeeeee, you are the best, thank you sooo much

  • @luisr1421
    @luisr1421 4 years ago +13

    Didn't think in a million years I'd get the math behind GANs. Thank you man

  • @shivammehta007
    @shivammehta007 4 years ago +28

    This is Gold!!! Pure Gold!!

  • @shaoxuanchen2052
    @shaoxuanchen2052 4 years ago +12

    OMG, that is the best explanation of GANs I've found in ages!!!!! Thank you so much and I'm so lucky to have found this video!!!!!!

  • @spandanpadhi8275
    @spandanpadhi8275 5 months ago

    This was the best 12 minutes of my month. Great explanation of GANs.

  • @bikrammajhi3020
    @bikrammajhi3020 7 months ago

    Best mathematical explanation of GANs on the internet so far

  • @deblu118
    @deblu118 11 months ago +1

    This video is amazing! You make things intuitive and really dig down to the core idea. Thank you! And I also subscribed to your blog!

  • @janaosea6020
    @janaosea6020 1 year ago +1

    Wow. This video is so well explained and well presented!! The perfect amount of detail and explanation. Thank you so much for demystifying GANs. I wish I could like this video multiple times.

  • @Daniel-ed7lt
    @Daniel-ed7lt 5 years ago +5

    I have no idea how I found this video, but it has been very helpful.
    Thanks a lot and please continue making videos.

    • @welcomeaioverlords
      @welcomeaioverlords  5 years ago +2

      That's awesome, glad it helped. I'll definitely be making more videos. If there are any particular ML topics you'd like to see, please let me know!

    • @Daniel-ed7lt
      @Daniel-ed7lt 5 years ago +4

      @@welcomeaioverlords
      I'm currently interested in CNNs, and I think it would be really useful if you described their base architecture, the same as you did for GANs, while simultaneously explaining the underlying math from a relevant paper.

  • @TheTakenKing999
    @TheTakenKing999 3 years ago +12

    Awesome explanation. The original GAN paper isn't too hard to read, but the "maximize" step for the discriminator always irked me. My understanding was correct, but I would always have trouble explaining it to someone else; this is a really well-put-together video. Clean, concise, and well explained. I think that because Goodfellow et al. phrased it as "ascending the gradient", many people get stuck here, since beginners like us have gradient "descent" stuck in our heads lol.

  • @alaayoussef315
    @alaayoussef315 4 years ago +8

    Brilliant! Never thought I could understand the math behind GANs

  • @shashanktomar9940
    @shashanktomar9940 4 years ago +2

    I have lost count of how many times I have paused the video to take notes. You're a lifesaver man!!

  • @tusharkantirouth5605
    @tusharkantirouth5605 1 year ago

    Simply the best... short and crisp... thanks, and keep uploading such beautiful videos.

  • @wenhuiwang4439
    @wenhuiwang4439 11 months ago

    Great learning resource for GANs. Thank you.

  • @caiomelo756
    @caiomelo756 2 years ago

    Four years ago I spent more than a month reading the original GAN paper and could not understand what I was reading, and now it makes sense

  • @やみくも-q6d
    @やみくも-q6d 1 year ago

    Nice explanation! The argument at 7:13 once felt like a jump to me, but I found it similar to the 'calculus of variations' I learned in a classical physics class.

  • @EB3103
    @EB3103 3 years ago

    Best explainer of deep learning!

  • @dipayanbhadra8332
    @dipayanbhadra8332 10 months ago

    Great Explanation! Nice and clean! All the best

  • @dingusagar
    @dingusagar 4 years ago +1

    Best video explaining the math of GANs. Thanks!!

  • @adeebmdislam4593
    @adeebmdislam4593 1 year ago

    Man, I immediately knew you listen to prog and play guitar when I heard the intro hahaha! Great explanation

  • @gianfrancodemarco8065
    @gianfrancodemarco8065 2 years ago

    Short, concise, clear. Perfect!

  • @muneebhashmi1037
    @muneebhashmi1037 3 years ago

    Tbh, couldn't have asked for a better explanation!

  • @siddhantbashisth5486
    @siddhantbashisth5486 9 months ago

    Awesome explanation man.. I loved it!!

  • @walidb4551
    @walidb4551 5 years ago +2

    THANK GOD I FOUND THIS ONE THANK YOU

  • @tarunreddy7
    @tarunreddy7 1 year ago

    Lovely explanation.

  • @superaluis
    @superaluis 4 years ago +1

    Thanks for the detailed video.

  • @toheebadura
    @toheebadura 2 years ago

    Many thanks, dude! This is awesome.

  • @williamrich3909
    @williamrich3909 4 years ago +1

    Thank you. This was very clear and easy to follow.

  • @architsrivastava8196
    @architsrivastava8196 3 years ago +1

    You're a blessing.

  • @DavesTechChannel
    @DavesTechChannel 4 years ago +1

    Great explanation man, I've read your article on Medium!

  • @jovanasavic4357
    @jovanasavic4357 3 years ago +1

    This is awesome. Thank you so much!

  • @paichethan
    @paichethan 3 years ago

    Fantastic explanation

  • @dman8776
    @dman8776 4 years ago +1

    Best explanation I've seen. Thanks a lot!

  • @ishanweerakoon9838
    @ishanweerakoon9838 2 years ago +1

    Thanks, very clear

  • @bernardoolisan1010
    @bernardoolisan1010 2 years ago

    I have a question. At 4:49, where do we take the real samples from? For example, say we want to generate "faces": in the generator, the m samples are just random vectors with the dimensions of a face image, so they can be super ugly, blurry pictures, right? But what about the real samples? Are they just face images that were taken off the internet?

  • @friedrichwilhelmhufnagel3577
    @friedrichwilhelmhufnagel3577 1 year ago

    CANNOT UPVOTE ENOUGH. EVERY STATISTICS OR ML MATH VIDEO SHOULD BE AS CLEAR AS THIS. YOU DEMONSTRATE THAT EXPLAINING MATH AND THEORY IS ONLY A MATTER OF AN ABLE TEACHER

  • @koen199
    @koen199 4 years ago +1

    @7:20 Why are p_data(x) and p_g(x) assumed to be constant over x in the integral (a and b)? In my mind the probability changes for each sample...

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +2

      Hi Koen. When I say "at any particular point" I mean "at any particular value of x". So p_data(x) and p_g(x) change with x. Those are, for example, the probabilities of seeing any particular image either in the real or generated data. The analysis that follows is for any particular x, for which p_data and p_g have a single value, here called "a" and "b" respectively. The logical argument is that if you can find the D that maximizes the quantity under the integral for every choice of x, then you have found the D that maximizes the integral itself. For example: imagine you're integrating over two different curves and the first curve is always larger in value than the second. You can safely claim the integral of the first curve is larger than the integral of the second curve. I hope this helps.

    • @koen199
      @koen199 4 years ago

      @@welcomeaioverlords Oh wow it makes sense now! Thanks man.. keep up the good work
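
    To spell out the pointwise argument in the reply above, with a = p_data(x) and b = p_g(x) at one fixed x (a quick sketch in LaTeX notation of the standard derivation from the original GAN paper, not a new result):

        f(D) = a \log D + b \log(1 - D), \qquad
        f'(D) = \frac{a}{D} - \frac{b}{1 - D} = 0
        \;\Longrightarrow\;
        D^*(x) = \frac{a}{a + b} = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}

    Since f''(D) < 0 on (0, 1), this critical point is a maximum; choosing this D at every x maximizes the whole integral, which is exactly the argument made in the reply.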

  • @symnshah
    @symnshah 4 years ago

    Such a great explanation.

  • @StickDoesCS
    @StickDoesCS 4 years ago +2

    Really great video! I have a little question, however, since I'm new to this field and I'm a little confused. Why is it that at 5:02 you mention ascending the gradient to maximize the cost function? I'd like to know exactly why this is the case, because I initially thought the cost function generally has to be minimized, so ideally the smaller the cost, the better the model. Maybe it's because of how I'm looking at cost functions in general? Like, is there a notion of it already being referred to as something we want to be small, so now we'd simply treat it as the negative of a number, where that number is the one you're referring to as the one we want to maximize? Subscribed by the way, keep up the good work! :>

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +4

      In most ML, you optimize such that the cost is minimized. In this case, we have two *adversaries* working in opposition to one another: one is trying to increase that quantity (the discriminator, via gradient ascent) and one is working to decrease it (the generator, via gradient descent).
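
    For reference, the shared objective the two adversaries play over (the minimax value function from the original GAN paper, written here in LaTeX notation):

        \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

    The discriminator takes gradient ascent steps on V, while the generator takes gradient descent steps on the same V.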

  • @anilsarode6164
    @anilsarode6164 4 years ago

    God bless you, man!! Great job!! Excellent!!!

  • @psychotropicalfunk
    @psychotropicalfunk 2 years ago

    Very well explained!

  • @manikantansrinivasan5261
    @manikantansrinivasan5261 1 year ago

    Thanks a ton for this!

  • @maedehzarvandi3773
    @maedehzarvandi3773 3 years ago +1

    You helped a lot, a lot 👏🏻🙌🏻👍🏻

  • @ramiismael7502
    @ramiismael7502 4 years ago +1

    Great video

  • @shourabhpayal1198
    @shourabhpayal1198 3 years ago +1

    Good one

  • @Darkev77
    @Darkev77 3 years ago +2

    This was really good! Though could someone explain to me what he means by maximizing the loss function for the discriminator? Shouldn't you also train your discriminator via gradient descent to improve classification accuracy?

    • @welcomeaioverlords
      @welcomeaioverlords  3 years ago +1

      To minimize the loss, you use gradient descent. You walk down the hill. To maximize the loss, you use gradient ASCENT. You calculate the same gradient, but walk up the hill. The discriminator walks up, the generator walks down. That’s why it’s adversarial. You could multiply everything by -1 and get the same result.

    • @sunnydial1509
      @sunnydial1509 3 years ago +1

      I am not sure, but in this case I think we maximise the discriminator's objective because it is written as log(1 - D(G(z))), and maximising that is equivalent to minimising log(D(G(z))), as happens in normal neural networks... so the discriminator is learning by maximising the loss in this case
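
    A minimal sketch of the alternating updates discussed in this thread, assuming PyTorch is available (the toy 1-D data, network sizes, and learning rates are illustrative choices, not taken from the video). The discriminator ascends V(D, G) by minimizing -V, and the generator descends V directly:

        import torch
        import torch.nn as nn

        # Toy generator and discriminator; the architectures are arbitrary illustrative choices.
        G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise z -> fake sample
        D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> estimated P(real)
        opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)
        opt_G = torch.optim.SGD(G.parameters(), lr=1e-3)

        def value_fn(real, fake):
            # V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]; the epsilon guards against log(0)
            eps = 1e-8
            return torch.log(D(real) + eps).mean() + torch.log(1 - D(fake) + eps).mean()

        for step in range(1000):
            real = 3.0 + 0.5 * torch.randn(64, 1)   # toy "real" data: Gaussian centered at 3, std 0.5
            z = torch.randn(64, 8)

            # Discriminator step: ASCEND V (walk up the hill) by descending -V.
            loss_D = -value_fn(real, G(z).detach())
            opt_D.zero_grad()
            loss_D.backward()
            opt_D.step()

            # Generator step: DESCEND V (walk down the same hill).
            loss_G = value_fn(real, G(z))
            opt_G.zero_grad()
            loss_G.backward()
            opt_G.step()

    Multiplying V by -1 and having both networks minimize would produce exactly the same updates, which is the point made in the reply above.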

  • @kathanvakharia
    @kathanvakharia 5 months ago

    Nailed it!

  • @bernardoolisan1010
    @bernardoolisan1010 2 years ago

    When the training process is done, do we only use the generator model? Or what? How do we use it in production?

  • @bernardoolisan1010
    @bernardoolisan1010 2 years ago

    Also, where it says "theory alert", does that mean it is only for proving that the model is kind of good? Like, that the min value is a good value?

  • @jrt6722
    @jrt6722 1 year ago

    Would the loss function work the same if I switched the labels of the real samples and fake samples (0 for real samples and 1 for fake samples)?

  • @123epsilon
    @123epsilon 3 years ago

    Does anyone know any good resources to learn more ML theory like how it's explained in this video? Specifically, content covering proofs and convergence guarantees.

  • @goodn1051
    @goodn1051 5 years ago +2

    Thaaaaaaank youuuuuu

    • @welcomeaioverlords
      @welcomeaioverlords  5 years ago +1

      I'm glad you got value from this!

    • @goodn1051
      @goodn1051 5 years ago

      @@welcomeaioverlords Yup... when you're self-taught, it's videos like this that really help so much

  • @adityarajora7219
    @adityarajora7219 3 years ago

    The cost function isn't the difference between the true and predicted values, right? It's the actual predicted value in the range [0,1], right??

    • @welcomeaioverlords
      @welcomeaioverlords  3 years ago

      It's structured as a classification problem where the discriminator estimates the probability of the sample being real or fake, which is then compared against the ground truth of whether the sample is real, or was faked by the generator.

    • @adityarajora7219
      @adityarajora7219 3 years ago

      @@welcomeaioverlords Thank you, sir, for your reply. Got it.
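
    One way to make the classification framing in the reply above concrete (a sketch of the standard connection, in LaTeX notation, not a quote from the video): with label y = 1 for real samples and y = 0 for generated ones, the per-sample quantity the discriminator pushes up is

        y \log D(x) + (1 - y) \log(1 - D(x))

    Summed over real (y = 1) and generated (y = 0) samples, this is the value function \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]; maximizing it is the same as minimizing the usual binary cross-entropy loss.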

  • @abdulaziztarhuni
    @abdulaziztarhuni 2 years ago

    This was hard for me to follow. Where should I get more resources?

  • @adityarajora7219
    @adityarajora7219 3 years ago

    what do you do for a living?

  • @saigeeta1993
    @saigeeta1993 4 years ago

    PLEASE EXPLAIN TEXT TO SPEECH SYNTHESIS EXAMPLE USING GAN

  • @theepicguy6575
    @theepicguy6575 2 years ago +1

    Found a gold mine

  • @sarrae100
    @sarrae100 5 years ago +1

    What the fuck, u explained it like it's a toy story, u beauty 😍

  • @kelixoderamirez
    @kelixoderamirez 4 years ago

    permission to learn sir

  • @jorgecelis8459
    @jorgecelis8459 4 years ago

    Very good explanation. One question: if we know the form of the optimal discriminator, don't we only need to get p_g(x), as we have all the statistics of P(x) in advance? And wouldn't that be 'just' sampling from z?

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +1

      Thanks for the question, Jorge. I would point out that knowing the statistics of P(x) is very different than knowing P(x) itself. For instance, I could tell you the mean (and higher-order moments) of a sample from an arbitrary distribution and that wouldn't be sufficient for you to recreate it. The whole point is to model P(x) (the probability that a particular pixel configuration is of a face), because then we could just sample from it to get new faces. Our real-life sample, which is the training dataset, is obviously a small portion of all possible faces. The generator effectively becomes our sampler of P(x) and the discriminator provides the training signal. I hope this helps.

    • @jorgecelis8459
      @jorgecelis8459 4 years ago

      @@welcomeaioverlords Right... the statistics of P(x) =/= the distribution P(x). If we knew P(x) we could just generate images and there would be no problem left for the GAN to solve. Thanks.

  • @samowarow
    @samowarow 2 years ago

    th-cam.com/video/J1aG12dLo4I/w-d-xo.html
    How exactly did you do this variable substitution? It doesn't seem legit to me.

    • @JoesMarineRush
      @JoesMarineRush 2 years ago

      I also stopped at this step. I think it is valid.
      Remember that the transformer g is fixed. In the second term, the distributions of z and g(z) are the same, so we can set x = g(z) and replace z with x. Then we can merge the first and second integrals together, with the main difference being that the first and second terms have different probabilities for x, since they are sampled from different distributions.

    • @samowarow
      @samowarow 2 years ago

      @@JoesMarineRush It's not in general legit to say that the distributions of Z and g(Z) are the same. Z is a random variable. A non-linear function of Z changes its distribution.

    • @JoesMarineRush
      @JoesMarineRush 2 years ago

      @@samowarow I looked at it again the other day. Yes, you are right: g can change the distribution of z.
      There is a clarification step missing when setting x = g(z) and swapping out z for x: the distribution of x is the one induced by pushing z through g. There is a link between the distributions of z and g(z) that needs clarification. I'll try to think on it.
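
    The substitution debated in this thread is an instance of the law of the unconscious statistician: define p_g as the distribution of x = G(z) (the pushforward of p_z through G), and the expectation over z becomes an expectation over x without requiring G to preserve the distribution of z (written here in LaTeX notation):

        \mathbb{E}_{z \sim p_z}\big[\log(1 - D(G(z)))\big]
        = \int p_z(z) \log(1 - D(G(z)))\, dz
        = \int p_g(x) \log(1 - D(x))\, dx
        = \mathbb{E}_{x \sim p_g}\big[\log(1 - D(x))\big]

    With both terms now written as integrals over x, the real-data and generated-data terms can be merged into a single integral, each weighted by its own density, which is the step used in the paper and the video.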