Deep RL Bootcamp Lecture 4A: Policy Gradients

  • Published on Dec 3, 2024

Comments • 43

  • @naeemajilforoushan5784
    @naeemajilforoushan5784 7 months ago +4

    Even after 5 years, this lecture is still a great video. Thank you a lot.

  • @bhargav975
    @bhargav975 6 years ago +35

    This is the best lecture I have seen on policy gradient methods. Thanks a lot.

  • @jony7779
    @jony7779 4 years ago +17

    Every time I forget how policy gradients work exactly, I just come back here and watch starting at 9:30

    • @andreasf.3930
      @andreasf.3930 4 years ago +3

      And every time you visited this video, you forgot where to start watching. That's why you posted this comment. Smart guy!

  • @auggiewilliams3565
    @auggiewilliams3565 5 years ago +1

    I must say that in more than 6 months, this is by far the best lecture/material I have come across that was able to make me understand what the policy gradient method actually is. I really praise this work. :) Thank you.

  • @ericsteinberger4101
    @ericsteinberger4101 6 years ago +10

    Amazing lecture! Love how Pieter explains the math. Super easy to understand.

  • @marloncajamarca2793
    @marloncajamarca2793 6 years ago +3

    Great Lecture!!!! Pieter's explanations are just a gem!

  • @ashishj2358
    @ashishj2358 3 years ago

    Best lecture on Policy Gradients, hands down. It also covers noteworthy details from many papers.

  • @johnnylima1337
    @johnnylima1337 6 years ago +5

    It's such a good lecture that I keep stopping to ask myself why it was so easy to take in such significant information with full understanding.

  • @Рамиль-ц5о
    @Рамиль-ц5о 4 years ago

    Very good lecture about the policy gradient method. I have looked through a lot of articles and understood almost everything, but your derivation explanation is really the best. It just opened my eyes and showed the whole picture. Thank you very much!!

  • @synthetic_paul
    @synthetic_paul 4 years ago +5

    Honestly I can’t keep up without seeing what he’s pointing at. Gotta pause and search around the screen each time he says “this over here”

    • @akarshrastogi3682
      @akarshrastogi3682 4 years ago +2

      Exactly. "This over here" has got to be the most uttered phrase in this lecture. So frustrating.

  • @bobsmithy3103
    @bobsmithy3103 2 years ago

    Amazing work. Super understandable, concise, and information-dense.

  • @norabelrose198
    @norabelrose198 2 years ago

    The explanation of the derivation of policy gradient is really nice and understandable here

  • @dustinandrews89019
    @dustinandrews89019 7 years ago +1

    I got a lot out of this lecture in particular. Thank you.

  • @JadtheProdigy
    @JadtheProdigy 5 years ago

    Best lecturer in the series.

  • @DhruvMetha
    @DhruvMetha 3 years ago

    Wow, this is beautiful!

  • @keqiaoli4617
    @keqiaoli4617 4 years ago +1

    Why would a good "R" increase the probability of the path??? Please help me
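
    A short worked identity may help with this question (a minimal sketch of the likelihood-ratio argument from the lecture, written out here rather than quoted from the slides):

        \nabla_\theta U(\theta)
            = \nabla_\theta \, \mathbb{E}_{\tau \sim P(\tau;\theta)}\!\left[ R(\tau) \right]
            = \mathbb{E}_{\tau}\!\left[ \nabla_\theta \log P(\tau;\theta) \, R(\tau) \right]
            \approx \hat{g} = \frac{1}{m} \sum_{i=1}^{m} R(\tau^{(i)}) \, \nabla_\theta \log P(\tau^{(i)};\theta)

    A gradient-ascent step \theta \leftarrow \theta + \alpha \hat{g} moves \theta along each \nabla_\theta \log P(\tau^{(i)};\theta) scaled by its return: a trajectory with a large positive R has its log-probability pushed up, and one with a negative R has it pushed down. The return acts as a per-trajectory weight on the log-likelihood gradient, which is why a good R increases the probability of that path.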

  • @faizanintech1909
    @faizanintech1909 6 years ago

    Awesome instructor.

  • @JyoPari
    @JyoPari 5 years ago +1

    Instead of having a baseline, why not make your reward function be negative for undesired scenarios and positive for good ones? Great lecture!
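
    One way to see why an explicit baseline is still useful even when the reward is already signed (a sketch under the standard policy-gradient setup, not taken from the slides): subtracting any action-independent baseline b leaves the gradient estimator unbiased, because

        \mathbb{E}_{\tau}\!\left[ \nabla_\theta \log P(\tau;\theta) \, b \right]
            = b \int \nabla_\theta P(\tau;\theta) \, d\tau
            = b \, \nabla_\theta \int P(\tau;\theta) \, d\tau
            = b \, \nabla_\theta 1
            = 0

    so \nabla_\theta U(\theta) = \mathbb{E}_{\tau}\!\left[ \nabla_\theta \log P(\tau;\theta) \, (R(\tau) - b) \right] has the same expectation but, for a well-chosen b (e.g. the average return or a learned value function), much lower variance. Hand-crafting the reward to be negative for bad outcomes and positive for good ones amounts to picking a fixed baseline by hand; an estimated baseline adapts automatically to whatever returns the current policy actually achieves.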

  • @ethanjyx
    @ethanjyx 5 years ago

    Wow, damn, this is so well explained, and the last video is very entertaining.

  • @sharmakartikeya
    @sharmakartikeya 11 months ago

    I might be missing a simple concept here, but how are we increasing/decreasing the grad log probability of the actions using the gradient of U(theta)? I get that a positive return for a trajectory will make the gradient of U positive, so theta will be updated in favour of those trajectories, but how is it increasing grad log prob?
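
    To make the mechanism concrete, here is a tiny tabular-softmax REINFORCE sketch (a hypothetical toy example, not code from the bootcamp labs). The update never edits "grad log pi" itself; it moves theta along R(tau) * grad log pi, and a first-order step along grad log pi(a|s) raises log pi(a|s) of the sampled actions when R(tau) > 0 and lowers it when R(tau) < 0:

        import numpy as np

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        n_states, n_actions = 4, 2
        theta = np.zeros((n_states, n_actions))      # policy parameters, pi(.|s) = softmax(theta[s])

        def grad_log_pi(theta, s, a):
            # d/dtheta log pi(a|s): one-hot(a) minus pi(.|s), nonzero only in row s
            g = np.zeros_like(theta)
            g[s] = -softmax(theta[s])
            g[s, a] += 1.0
            return g

        def reinforce_step(theta, trajectory, R, lr=0.1):
            # trajectory: list of (state, action) pairs; R: total return of that trajectory
            g = sum(grad_log_pi(theta, s, a) for s, a in trajectory)
            return theta + lr * R * g                # gradient ascent on U(theta)

        traj = [(0, 1), (1, 0)]                      # two sampled (state, action) pairs
        before = np.log(softmax(theta[0])[1])        # log pi(a=1 | s=0) before the update
        theta = reinforce_step(theta, traj, R=+1.0)  # positive return
        after = np.log(softmax(theta[0])[1])         # log pi(a=1 | s=0) after the update
        print(before, "->", after)                   # log-probability of the taken action went up

    Running the same step with R=-1.0 would instead decrease that log-probability, which is exactly the "in favour of / against those trajectories" behaviour described in the comment above.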

  • @ishfaqhaque1993
    @ishfaqhaque1993 5 years ago

    23:20 - The gradient of an expectation is the expectation of the gradient "under mild assumptions". What are those assumptions?

    • @joaogui1
      @joaogui1 4 years ago +2

      math.stackexchange.com/questions/12909/will-moving-differentiation-from-inside-to-outside-an-integral-change-the-resu
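
      For reference, the "mild assumptions" are the usual conditions for differentiating under the integral sign (Leibniz rule / dominated convergence); a sketch of the statement rather than a quote from the lecture:

          \nabla_\theta \int P(\tau;\theta) \, R(\tau) \, d\tau = \int \nabla_\theta P(\tau;\theta) \, R(\tau) \, d\tau

      holds when P(\tau;\theta) is differentiable in \theta and \lVert \nabla_\theta P(\tau;\theta) \, R(\tau) \rVert is dominated by some \theta-independent integrable function. For finite (tabular) trajectory spaces the integral is a finite sum and the swap is always valid; with bounded rewards and smooth policy classes such as softmax or Gaussian policies, the domination condition also holds, which is why the lecture treats it as a technicality.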

  • @muratcan__22
    @muratcan__22 4 years ago +3

    Nice, but hard to follow without knowing what "this" refers to. I hope my guesses were right :)

  • @isupeene
    @isupeene 4 years ago +2

    The guy in the background at 51:30

  • @nathanbittner8307
    @nathanbittner8307 7 years ago

    excellent lecture. Thank you for sharing.

  • @biggeraaron
    @biggeraaron 5 years ago +1

    Where can I buy his T-shirt?

  • @emilterman6924
    @emilterman6924 5 years ago

    It would be nice to see what labs they had (what exercises).

    • @Procuste34iOSh
      @Procuste34iOSh 4 years ago

      Don't know if you're still interested, but the labs are on the bootcamp website.

  • @suertem1
    @suertem1 5 years ago

    Great lecture, thanks

  • @richardteubner7364
    @richardteubner7364 7 years ago +1

    1:11 Why are DQNs and friends Dynamic Programming methods? I mean, the neural network works as a function approximator to satisfy Bellman's equation, but backprop is still the workhorse. In my opinion DQNs are much more similar to PG methods than to Bellman updates?! And another issue with the RL Landscape slide: where the heck are the model-based RL algos?? This slide should be renamed the model-free RL landscape.

  • @karthik-ex4dm
    @karthik-ex4dm 6 years ago

    PG is awesome!!!
    It doesn't really depend on the environment dynamics?? Wow
    All the pain and stress just goes away when we see our algorithms working😇😇

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    great talk!

  • @Diablothegeek
    @Diablothegeek 7 years ago

    Awesome!! Thanks

  • @arpitgarg5172
    @arpitgarg5172 5 years ago +11

    If you can't explain it like Pieter Abbeel or Andrew Ng, then you don't understand it well enough.

  • @elzilcho222
    @elzilcho222 6 years ago +1

    Could you train a robot for 2 weeks in the real world, then use those trained parameters to optimize a virtual environment? You know, making the virtual environment very close to the real world?

    • @OfficialYunas
      @OfficialYunas 6 years ago +1

      Of course you could. It's the opposite of what OpenAI does when they train a model in a virtual environment and deploy it in reality.

    • @soutrikband
      @soutrikband 5 years ago

      The real world is very complicated, with model uncertainties, friction, wear and tear, and what have you...
      Simulators can come close, but we cannot expect them to fully mimic real-world phenomena.

  • @shaz7163
    @shaz7163 7 years ago

    very nice :)

  • @piyushjaininventor
    @piyushjaininventor 6 years ago

    Can you share the PPT??

    • @luxorska5143
      @luxorska5143 5 years ago +3

      You can find all the slides and the other lectures here:
      sites.google.com/view/deep-rl-bootcamp/lectures

  • @MarkoTintor
    @MarkoTintor 4 years ago

    ... you can use "a", and the math will be the same. :)