Proximal Policy Optimization Explained

  • Published Jan 23, 2025

Comments • 24

  • @sordesderisor 2 years ago +11

    If you have also read the TRPO and PPO papers, this video provides the perfect concise summary of PPO!

  • @aramvanbergen4489 3 years ago +40

    Thank you for the clear explanation! But next time please use screenshots of the actual formulas; that way it is much more readable.

  • @GnuSnu 1 year ago +19

    4:25 "let me write it real quick" 💀💀

  • @alph4b3th 1 year ago +2

    Sensational! Dude, you explain things in such a simple way! I was wondering what the difference was between deep Q-learning and PPO, and this is exactly the video I was looking for. Congratulations on your great didactic way of explaining the basic mathematical concepts and abstracting them into a more intuitive approach; you are really very good at this! Excellent video!

  • @James-qv1lh 1 year ago +2

    Insanely good video! Simple and straight to the point - thanks so much! :)

  • @carloscampo9119 1 year ago

    That was very, very well done. Thank you for the clear explanation.

  • @sayyidj6406 10 months ago

    I wish I had known about this channel sooner. Thanks for the video!

  • @alexkonopatski429 2 years ago +5

    I really love your vids and how you explain things! Could you please make a video about TRPO? It is a really complex thing to understand, in my opinion, and the lack of available resources doesn't make the situation any better. Therefore, I (and, I think, a lot of others) would be really glad to have a good explanation!
    Thanks in advance

  • @datonefaridze1503 2 years ago +1

    Thank you for your effort, I really appreciate it. You are putting in this work so we can learn, thanks!

  • @crwhhx 2 months ago

    When you say DQN is offline, were you trying to say it is off-policy?

  • @boldizsarszabo883 2 years ago

    This video was super helpful and informative! Thank you so much for your effort!

  • @marcotroster8247 1 year ago

    To really understand why PPO works, evaluate the derivative of the policy gradient objective.
    PPO adds the policy ratio as a factor to the derivative of the vanilla policy gradient. The clipping effectively erases samples with bad policy ratios from the dataset, because the derivative of a constant is zero.
    You also need to understand, from advantage actor-critic, that the sign of the advantage determines whether the action's probability increases or decreases: given the same training data, a positive advantage increases the probability of the taken action and a negative advantage decreases it.
    And the min always picks the clipped objective for bad policy ratios, so those gradients become constants. Otherwise the two terms are identical and the update only moves the policy ratio within the epsilon bound. And because the policy gradient is multiplied by the policy ratio, this works as expected and gives PPO its stability.
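
A minimal PyTorch-style sketch of the clipped surrogate this comment describes (not from the video; the names and the eps=0.2 default are illustrative) makes the mechanism concrete: wherever min() selects the clipped term, the objective is constant in the policy parameters, so those samples contribute zero gradient.

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantage, eps=0.2):
    """Negated clipped PPO surrogate for a batch of actions (negated
    so it can be minimized with gradient descent)."""
    # Policy ratio r = pi_new(a|s) / pi_old(a|s); old_logp and advantage
    # are detached, so only new_logp carries gradients.
    ratio = torch.exp(new_logp - old_logp.detach())
    unclipped = ratio * advantage.detach()
    # clamp() has zero derivative outside [1 - eps, 1 + eps], so any
    # sample whose clipped term is picked by min() contributes no gradient.
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage.detach()
    return -torch.min(unclipped, clipped).mean()
```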

  • @canoksuzoglu6540 4 months ago

    Thanks, dude. That was a perfect explanation.

  • @ivanwong863 3 years ago +6

    DQN is not an offline method, is it?

    • @EdanMeyer 3 years ago +9

      My bad, I meant to say it's an off-policy method; Q-learning performs very poorly in an offline setting.

  • @hemanthvemuluri9997 1 year ago +1

    For DQN you mean an off-policy method, right? DQN is not an offline method.
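
To make the distinction in this thread concrete, here is a minimal Python sketch (all names hypothetical): off-policy methods such as DQN keep interacting with the environment but train on replay data collected by older versions of the policy, whereas offline methods train on a fixed dataset with no further interaction at all.

```python
import random
from collections import deque

# Off-policy (e.g., DQN): interaction continues, but updates use
# transitions collected by older versions of the policy.
replay_buffer = deque(maxlen=100_000)

def off_policy_batch(new_transition, batch_size=32):
    replay_buffer.append(new_transition)   # fresh experience keeps arriving
    if len(replay_buffer) < batch_size:    # not enough data to train on yet
        return None
    return random.sample(replay_buffer, batch_size)  # mostly stale-policy data

# Offline RL: no environment interaction; the dataset is fixed
# up front and never grows.
def offline_batch(fixed_dataset, batch_size=32):
    return random.sample(fixed_dataset, batch_size)
```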

  • @anibus1106 10 months ago

    Thank you so much, you saved my day!

  • @FlapcakeFortress 2 years ago

    Much appreciated. Cheers!

  • @vadimavkhimenia5806 3 years ago

    Can you make a video on MADDPG with code?

  • @LatpateShubhamManikrao 2 years ago

    Nicely explained, man.

  • @awaisahmad5908 10 months ago

    Thanks

  • @labreynth 5 months ago

    Damn. I learned nothing.