[Classic] Generative Adversarial Networks (Paper Explained)

  • Published May 13, 2024
  • #ai #deeplearning #gan
    GANs are one of the main model families in modern deep learning. This is the paper that started it all! While image classification was making steady progress, image generation was still cumbersome and prone to artifacts. The main idea behind GANs is to pit two competing networks against each other, thereby creating a generative model that only ever has implicit access to the data through a second, discriminative model. The paper combines architecture, experiments, and theoretical analysis beautifully.
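    For reference, the competition described above is formalized in the paper (its Eq. 1) as a two-player minimax game over a value function; in the paper's notation:

    \min_G \max_D V(D, G) =
      \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
      + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]

    For a fixed generator, the optimal discriminator is D*_G(x) = p_data(x) / (p_data(x) + p_g(x)), so once the generator recovers the data distribution, D outputs 1/2 everywhere.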
    OUTLINE:
    0:00 - Intro & Overview
    3:50 - Motivation
    8:40 - Minimax Loss Function
    13:20 - Intuition Behind the Loss
    19:30 - GAN Algorithm
    22:05 - Theoretical Analysis
    27:00 - Experiments
    33:10 - Advantages & Disadvantages
    35:00 - Conclusion
    Paper: arxiv.org/abs/1406.2661
    Abstract:
    We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
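    As a concrete illustration of "the entire system can be trained with backpropagation", here is a minimal sketch of the alternating updates in the paper's Algorithm 1, assuming PyTorch and toy 2-D data; the architectures, optimizer, and hyperparameters are illustrative, not the paper's:

    import torch
    import torch.nn as nn

    # Toy sketch of GAN training: alternate a discriminator step and a
    # generator step, as in Algorithm 1 of the paper.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.SGD(G.parameters(), lr=0.05)
    opt_d = torch.optim.SGD(D.parameters(), lr=0.05)
    bce = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(64, 2) + 3.0   # stand-in for samples from p_data
        z = torch.randn(64, 8)            # noise drawn from the prior p_z
        fake = G(z)

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: the non-saturating variant the paper suggests,
        # maximizing log D(G(z)) instead of minimizing log(1 - D(G(z))).
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()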
    Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
    Links:
    YouTube: / yannickilcher
    Twitter: / ykilcher
    Discord: / discord
    BitChute: www.bitchute.com/channel/yann...
    Minds: www.minds.com/ykilcher
    Parler: parler.com/profile/YannicKilcher
    LinkedIn: / yannic-kilcher-488534136
    If you want to support me, the best thing to do is to share out the content :)
    If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
    SubscribeStar: www.subscribestar.com/yannick...
    Patreon: / yannickilcher
    Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
    Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
    Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
    Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
  • Science & Technology

Comments • 68

  • @Youtoober6947 • 2 years ago +22

    I don't know if you realize it, but I believe you have NO idea how helpful (especially for time management) the Paper Explained series you're doing is for me. These are SERIOUSLY invaluable, thank you so much.

  • @TheInfinix • 3 years ago +90

    I think that such an initiative will be useful for fresh researchers and beginners.

  • @Aniket7Tomar • 3 years ago +103

    I am loving these classic paper videos. More of these, please.

  • @MinecraftLetstime • 3 years ago +14

    These are absolutely amazing, please keep them coming.

  • @kateyurkova6384 • 3 years ago +10

    These reviews are priceless, you add so much more value than just reading the paper would bring, thank you for your work.

  • @datamlistic • 3 years ago +3

    The classic papers are amazing! Please continue making them!

  • @sulavojha8322 • 3 years ago +5

    The classic papers are really good. Hope you upload more such videos. Thank you!

  • @fulin3397 • 3 years ago +6

    Classic paper and a very awesome explanation. Thank you!

  • @maltejensen7392 • 3 years ago +6

    It's extremely helpful to hear your thoughts on what the authors were thinking, and things like researchers trying to put MCMC somewhere it was not intended to be. This gives a better idea of how machine learning in academia works. Please continue this, and thanks!

  • @SallyZhang-vt2oi • 3 years ago

    Thank you very much. I really appreciate your understanding of these papers. Please keep on releasing these kinds of videos. They helped me a lot. Thanks again!

  • @andresfernandoaranda5498 • 3 years ago +5

    Thank you for making these resources free to the community ))

  • @benjaminbenjamin8834 • 3 years ago +1

    @Yannic, this is such a great initiative and you are doing a great, great job. Please carry on.

  • @bjornhansen9659 • 3 years ago +1

    I like these videos on the papers. It is very helpful to hear how another person views the ideas discussed in these papers. Thanks!

  • @agbeliemmanuel6023 • 3 years ago +2

    It's great to have the origin of most of today's ML models. Good work!

  • @aa-xn5hc • 3 years ago +3

    I love these historical videos of yours!!

  • @narinpratap8790 • 2 years ago +1

    This was awesome! I am currently a graduate student, and I have to write a paper review for my Deep Learning course. Loved your explainer on GANs. This has helped me understand so much of the intuition behind GANs, and also the developments in Generative Models since the paper's release. Thank you for making this.

  • @YtongT • 3 years ago +3

    very useful, thank you for such quality content!

  • @ambujmittal6824 • 3 years ago +1

    You're truly a godsend for people who are comparatively new to the field (maybe even for experienced ones). Thanks a lot and keep up the good work!

  • @falachl • 2 years ago

    Yannic, thank you. In this information-overloaded ML world, you are providing a critical, informative service. Please keep it up!

  • @frankd1156 • 3 years ago

    Wow... this is gold. Keep it up, man. Be blessed.

  • @avishvj • 2 years ago

    Brilliant, would love more of these!

  • @herp_derpingson • 3 years ago +15

    12:00 I never quite liked the min-max analogy. I think a better analogy would be a teacher-student one. The discriminator says, "The image you generated does not look like a real image, and here are the gradients that tell you why. Use the gradients to improve yourself."
    .
    32:30 I am pretty sure these interpolations already existed in the auto-encoder literature.
    .
    Mode collapse is pretty common for human teachers and students. Teachers often say that you need to solve the problems the way they taught them in class. "My way or the highway" XD

    • @YannicKilcher • 3 years ago +8

      Yes, the teacher-student phrasing would make more sense. I think the min-max is just the formal way of expressing the optimization problem to be solved, and from there people go into game theory, etc.
      The mode collapse could also be the student who knows exactly what to write in any essay to make that one particular teacher happy :D

  • @kristiantorres1080 • 3 years ago

    Beautiful paper and superb review!

  • @AnassHARMAL • 1 year ago

    This is amazing, thank you! As a materials scientist trying to utilize machine learning, this just hits the spot!

  • @aman6089 • 2 years ago

    Thank you for the explanation.
    It is a great resource for beginners like myself!

  • @sergiomanuel2206 • 3 years ago +3

    Very good paper!! Can you please cover the paper for the next big step toward the state of the art in GANs? Thank you!

  • @bosepukur • 3 years ago

    Great initiative... would love to see some classic NLP papers.

  • @goldfishjy95 • 2 years ago

    Hi this is incredibly useful, thank you so much!

  • @AltafHussain-gk2xe • 2 years ago

    Sir, I'm a big fan of yours. I've been following you for the last year, and I find every one of your videos full of information and really useful. Sir, I request that you please make a few videos on segmentation as well; I shall be thankful to you.

  • @DasGrosseFressen • 3 years ago +3

    "Historical" in ML : 6 years :D
    The series is nice, thanks! One question though: you said that the objective is to minimize the expectations in (1), but the minimax is already performed to get to the equality, right? How does V look?
    Edit: oh, never mind. In (3) you see that (1) is in the typical CS-sloppy notation...

  • @westcott2204 • 8 months ago

    Thank you for providing your insights and current point of view on the paper. It was very helpful.

  • @flyagaric23 • 3 years ago

    Thank you. Excellent.

  • @utku_yucel • 3 years ago

    YES! THANKS!

  • @alexandravalavanis2282 • 2 years ago

    Damn. I’m enjoying this video very much. Very helpful. Thank you!

  • @Throwingness • 2 years ago

    I'd appreciate more explanation of the math in the future. This kind of math is rarely encountered by most programmers.

  • @Notshife • 3 years ago +1

    Hey @Yannic, I followed up on the BYOL paper you covered. While I'm not super familiar with machine learning, I do feel I implemented something that is mechanically the same as what was presented, and I thought it might interest you that, for me, it converged to a constant every time. The exponential-moving-average-weighted network and the separate augmentations did not prevent it. I will be going back through to see if I have made a mistake, but I have been trying a bit of everything, and so far nothing has been able to prevent the trivial solution. Maybe I'm missing something, which I hope, because I liked the idea. My experimentation with parameters and network architecture has not been exhaustive... But yeah, so far: no magic.

    • @YannicKilcher • 3 years ago +1

      Yes, I was expecting most people to have your experience and then apparently someone else can somehow make it work sometimes.
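      For readers who haven't implemented BYOL, here is a minimal sketch of the exponential-moving-average target-network update the comment above refers to, assuming PyTorch; the function name and the 0.99 decay are illustrative assumptions:

      import copy
      import torch

      # EMA update: target weights drift slowly toward the online weights,
      # so the target network lags behind rather than being trained directly.
      def update_target(online_net, target_net, tau=0.99):
          with torch.no_grad():
              for p_o, p_t in zip(online_net.parameters(), target_net.parameters()):
                  p_t.mul_(tau).add_(p_o, alpha=1 - tau)

      online = torch.nn.Linear(8, 8)
      target = copy.deepcopy(online)  # target starts as a copy of the online net
      update_target(online, target)   # called once per optimization step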

  • @rameshravula8340 • 3 years ago

    Yannic, could you give application examples at the end of each paper you review?

  • @kvawnmartin1562 • 3 years ago

    Best GAN explanation ever

  • @lcslima45 • 2 years ago

    This channel is awesome

  • @dl569 • 1 year ago

    Thanks a lot!

  • @robo2.069 • 3 years ago

    Nicely explained, thank you... Can you make a video on Dual Motion GAN (DMGAN)?

  • @dandy-lions5788 • 3 years ago

    Thank you so much!! Can you do a paper on UNet?

  • @TheKoreanfavorites • 2 years ago

    Great!!!

  • @jintaoren6755 • 3 years ago +1

    Why hasn't YouTube recommended this channel to me earlier?

  • @ehza • 2 years ago

    Thanks

  • @shivombhargava2166 • 3 years ago +1

    Please make a video on pix2pix GANs

  • @vigneshbalaji21 • 1 year ago

    Can you please post a video on GAIL?

  • @jeromeblanchet3827 • 3 years ago +1

    Most people tell stories with data insights and model predictions. Yannic tells stories with papers.
    An image is worth a thousand words, and a good story is worth a thousand images.

  • @hahawadda • 3 years ago +3

    Funny how we can now say the original GAN paper is a classic.

  • @sweatobertrinderknecht3480 • 3 years ago +2

    I'd like to see a mix of papers and actual (Python) code.

  • @paulijzermans7637 • 7 months ago

    I'm writing my thesis on GANs atm. Would enjoy an interesting conversation with an expert :)

  • @DANstudiosable • 3 years ago +1

    What do you mean by a prior on the input distribution?

    • @YannicKilcher • 3 years ago

      It's the way the inputs are distributed.
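      Concretely, in the GAN setting the prior is the fixed noise distribution p_z(z) that the generator's inputs are drawn from; a minimal sketch, assuming PyTorch and an illustrative 100-dimensional latent space:

      import torch

      # The prior p_z: here a standard Gaussian over the latent space
      # (a uniform prior is also common).
      z = torch.randn(64, 100)  # 64 latent codes drawn from N(0, I)
      # a generator G then maps prior samples into data space: x_fake = G(z)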

  • @jithendrayenugula7137 • 3 years ago

    Very awesome explanation! Thanks, man!
    Is it too late, or a waste of time, to play with and explore GANs in 2020, when BERT/GPT are hot and trending in the AI community?

    • @ssshukla26 • 3 years ago +1

      Is it too late to learn something? No... Is it too late to research GANs? Absolutely not... Nothing is perfect, GANs included; there will be decades of research on these same topics. Whether you can make money out of knowing GANs... Ummm, debatable...

  • @XOPOIIIO • 3 years ago +3

    In the future there'll be an algorithm to transform scientific papers into your videos.

    • @adamantidus • 3 years ago +1

      No matter how efficient this algorithm might be, Yannic will still be faster

  • @aishwaryadhumale1278 • 3 years ago

    Can I please have more content on GANs?

  • @chinbold • 3 years ago

    I'm only inspired by watching your videos 😢😢😢

  • @timothyschollux • 3 years ago

    The famous Schmidhuber-Goodfellow moment: th-cam.com/video/HGYYEUSm-0Q/w-d-xo.html

  • @sadface7457 • 3 years ago

    Revisit "Attention Is All You Need" because that is now a classic paper.

    • @audrius0810 • 3 years ago

      He's done the actual paper already