Policy and Value Iteration

Comments • 65

  • @kiranmurphy3887
    @kiranmurphy3887 2 years ago +12

    Great video! Walking through the first few iterations of VI on a gridworld problem helped me to understand the algorithm much better!

  • @TheClockmister
    @TheClockmister 10 months ago +7

    My bald teacher will talk about this for 2 hours and I won’t understand anything. This helps a lot

    • @Moch117
      @Moch117 3 months ago

      lmfaooo

  • @studyaccount9662
    @studyaccount9662 1 year ago +20

    this is a better explanation than my MIT teacher's, thanks

    • @allantourin
      @allantourin 9 days ago +2

      you're not from MIT lol

  • @kkyars
    @kkyars 1 year ago +5

    For v1, would the two terminal states not be 0.8, since you have to multiply by the probability to get the expected value?

  • @furkanbaldir
    @furkanbaldir 3 years ago +3

    I searched many many times to find this solution, and finally I found it. Thank you.

  • @harbaapkabaap2040
    @harbaapkabaap2040 2 months ago

    Best video on the topic I have seen so far, to the point and well explained! Kudos to you brother!

  • @ellyjessy5044
    @ellyjessy5044 1 year ago

    I see the values at V3 are for gamma only, shouldn't they be for gamma squared?

  • @sayedmortazakazemi1280
    @sayedmortazakazemi1280 3 years ago +3

    nice explanation

  • @aymanadam7825
    @aymanadam7825 1 year ago +1

    great video!! thanks!!

  • @anishreddyanam8617
    @anishreddyanam8617 3 months ago

    Thank you so much! My professor explained this part a bit too fast so I got confused, but this makes a lot of sense!

  • @ziki5993
    @ziki5993 1 year ago

    great explanation ! thanks.

  • @daved1113
    @daved1113 1 year ago

    Helped me learn it. Thank you.

  • @kyrohrs
    @kyrohrs 5 months ago

    Great video, but how can we use policy iteration for an MDP when the state space grows considerably with each action? I know there are various methods of approximation for policy iteration, but I just haven't been able to find anything. Do you have any resources on this?

  • @jlopezll
    @jlopezll 1 month ago +1

    9:06 Why, when iterating v2, are the values of all the other squares 0? Shouldn't the squares near the terminal states have non-zero values?

  • @stevecarson7031
    @stevecarson7031 2 years ago

    Nice job, thanks

  • @newtonsnexus
    @newtonsnexus 8 months ago

    fantastic video, man. I was so confused for some reason when my lecturer was talking about it; it's not supposed to be hard, I guess, just how exactly it worked. This video helped fill in the details.

  • @sunnygla4323
    @sunnygla4323 2 years ago

    This is helpful, thank you

  • @abdullah.montasheri
    @abdullah.montasheri 2 months ago

    The state-value Bellman equation includes the policy's action probabilities at the beginning of the equation, which you did not consider in your equation. Any reason why?
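A note on the question above (a sketch of the standard definitions, not taken from the video): value iteration uses the Bellman optimality backup, which maximizes over actions, so the policy probabilities pi(a|s) that appear in the Bellman expectation equation drop out. With transition model T, reward R, and discount gamma:

    V_{k+1}(s) = max_a sum_{s'} T(s,a,s') [ R(s,a,s') + gamma * V_k(s') ]

The expectation form, V^pi(s) = sum_a pi(a|s) sum_{s'} T(s,a,s') [ R(s,a,s') + gamma * V^pi(s') ], is the one used when evaluating a fixed policy, for example in the policy-evaluation step of policy iteration.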

  • @parul821
    @parul821 1 year ago +6

    Can you provide an example of policy iteration too?

  • @yjw8958
    @yjw8958 1 year ago +2

    If you also suffer from the vague explanation in GT's ML course, here comes Upenn to rescue you!

    • @711tornado
      @711tornado 2 months ago

      Literally why I'm here. CS7641 has been pretty good so far but the RL section was honestly crap in the lectures IMO.

  • @Leo-di9fq
    @Leo-di9fq 1 year ago

    In the second iteration, is V = 0.09?

  • @huachengli1786
    @huachengli1786 3 years ago

    Quick question: at 6:10, is R(s, a, s_prime) always 0 in the example?

    • @cssanchit
      @cssanchit 2 years ago +2

      Yes, it is fixed to zero.

    • @Leo-di9fq
      @Leo-di9fq 1 year ago

      @@cssanchit except in terminal states

  • @jemtyjose7088
    @jemtyjose7088 1 year ago

    In V2, why is it that there is no value for (2,3)? Doesn't the presence of -1 give it a value of 0.09? I am confused there.

    • @Leo-di9fq
      @Leo-di9fq 1 year ago

      lol same. any confirmations?

    • @stuartgill6060
      @stuartgill6060 1 year ago

      I think there is a value V for (2,3) at V2: it is 0. You get that value by taking the "left" action and bumping into the wall, thereby avoiding the -1 terminal state. What action could you take that would result in a value of .09?

    • @citricitygo
      @citricitygo 1 year ago

      Remember you are taking the max of action values. So for (2,3), the max action is to move left, which may result in (2, 3) or (3,3) or (1, 3). Those values are all 0.
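A worked check of the replies above, assuming the usual 4x3 gridworld setup implied by this thread (wall in the middle of the second row, +1 and -1 terminals on the right, 0.8/0.1/0.1 transition noise, gamma = 0.9, living reward 0). For the square directly left of the -1 terminal, the V2 backup uses V1, in which every non-terminal square is still 0:

    Left (bumps the wall): 0.8[0 + 0.9(0)] + 0.1[0 + 0.9(0)] + 0.1[0 + 0.9(0)]  = 0
    Up:                    0.8[0 + 0.9(0)] + 0.1[0 + 0.9(0)] + 0.1[0 + 0.9(-1)] = -0.09

Since value iteration takes the max over actions, that square stays at 0 in V2; a 0.09 term only appears (with a minus sign) for actions that risk sliding into the -1 state.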

  • @yottalynn776
    @yottalynn776 2 years ago +2

    Thanks for the video. In v3, how do you get 0.52 and 0.43?

    • @HonduranHunk
      @HonduranHunk 1 year ago +7

      Instead of starting in square (3, 3), you start in squares (3, 2) and (2, 3). After that, you do the same calculations used to get 0.78. The optimal action in square (3, 2) would be to go up, so the equation will look like: 0.8[0 + 0.9(0.72)] + 0.1[0 + 0.9(0)] + 0.1[0 + 0.9(-1)] = 0.43. The optimal action in square (2, 3) would be to go right, so the equation will look like: 0.8[0 + 0.9(0.72)] + 0.1[0 + 0.9(0)] + 0.1[0 + 0.9(0)] = 0.52.

    • @ThinhTran-ys4mr
      @ThinhTran-ys4mr 11 months ago +4

      @@HonduranHunk How can we calculate to get 0.78? Please help me, sir.
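For the 0.78 asked about here, a worked version under the same assumptions (0.8/0.1/0.1 noise, gamma = 0.9, living reward 0, and V2 = 0.72 for the square next to the +1 terminal): going right from that square at the third iteration,

    0.8[0 + 0.9(1)] + 0.1[0 + 0.9(0.72)] + 0.1[0 + 0.9(0)] = 0.72 + 0.0648 + 0 ≈ 0.78

The 0.8 branch enters the +1 terminal, the first 0.1 branch bumps the top wall and stays put (in a square whose previous value is 0.72), and the last 0.1 branch slides into the square below, which is still 0 at V2.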

  • @nwudochikaeze6309
    @nwudochikaeze6309 1 year ago +2

    Please can you explain how you got the 0.78 in V3?

    • @ThinhTran-ys4mr
      @ThinhTran-ys4mr 11 months ago

      Do you understand? :( If yes, please explain it for me.

  • @prengbiba3474
    @prengbiba3474 3 years ago

    nice

  • @abhrantapanigrahi3475
    @abhrantapanigrahi3475 2 years ago

    Yes! Finally found such a video! Yay!

  • @user-bk3tl7ke1r
    @user-bk3tl7ke1r 8 months ago

    According to the Bellman equation, I got the value 0.8 * (0.72 + 0.9 * 1) + 0.1 * (0.72 + 0.9 * 0) + 0.1 * (0.72 + 0.9 * 0) = 1.62. Please correct me where I went wrong.

    • @mghaynes24
      @mghaynes24 7 months ago +1

      The living reward is 0, not 0.72. 0.72 is the V at time 2 for grid square (3,3). Use the 0.72 value to update grid squares (2,3) and (3,2) at time step 3.

  • @puneet2509704
    @puneet2509704 1 year ago

    Thank God, get RL videos from an Indian....

  • @eklavyaattar1810
    @eklavyaattar1810 1 year ago +1

    Why would you substitute the value of +1 in the equation in green? The formula says it should be V(S') and not the reward value!!!

  • @user-canon031
    @user-canon031 2 months ago

    Good!

  • @don-ju8ck
    @don-ju8ck 4 months ago

    🙏🙏🏿

  • @pietjan2409
    @pietjan2409 1 year ago +2

    Seriously, people can't explain this in an easy way. Same for this video.

  • @user-ls3bi6jk8u
    @user-ls3bi6jk8u 1 month ago

    Can't understand how it is 0.52.

  • @tower1990
    @tower1990 11 months ago

    There shouldn’t be any value for the terminal state… my god…

  • @anoushkagade8091
    @anoushkagade8091 2 years ago +7

    Hi, thank you for the explanation. Can you please explain how you got 0.78 for (3,3) in the 3rd iteration (V3)? According to the Bellman equation, I got the value 0.8 * (0.72 + 0.9 * 1) + 0.1 * (0.72 + 0.9 * 0) + 0.1 * (0.72 + 0.9 * 0) = 1.62. Please correct me where I went wrong. Assignment due tomorrow :(

    • @ankurparmar5414
      @ankurparmar5414 2 years ago

      +1

    • @anoushkagade8091
      @anoushkagade8091 2 years ago +1

      @@maiadeutsch4424 Thank you so much for the detailed explanation. This was really helpful. I was not considering the agent's own discounted value when going towards the wall and coming back.

    • @donzhu4996
      @donzhu4996 2 years ago +11

      @@maiadeutsch4424 We don't need to multiply by 0.1??

    • @Ishu7287
      @Ishu7287 1 year ago

      @@maiadeutsch4424 Hey! Nice explanation, but can you tell me whether we will get a table of probabilities like 0.8, 0.1, 0.3, etc. for going right, left, up, and vice versa?

    • @vangelismathioudis3891
      @vangelismathioudis3891 1 year ago

      @@maiadeutsch4424 Hey there, nice explanation, but for the cases with a 10% chance it should be 0.1*(0 + 0.9*V_previter).
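To tie this thread together, here is a minimal Python sketch of the value-iteration sweep being discussed. The grid layout, transition noise, and coordinate convention are assumptions reconstructed from the numbers in the comments (a 4x3 gridworld with a wall at (2,2), +1 at (4,3), -1 at (4,2), 0.8/0.1/0.1 transitions, gamma = 0.9, living reward 0); it is not the video author's code.

    # One short value-iteration run on the assumed 4x3 gridworld.
    # Coordinates are (column, row), with (1, 1) at the bottom-left.
    GAMMA = 0.9                                  # discount factor (assumed)
    WALL = (2, 2)                                # blocked square (assumed layout)
    TERMINALS = {(4, 3): 1.0, (4, 2): -1.0}      # terminal values
    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    PERP = {"up": ("left", "right"), "down": ("left", "right"),
            "left": ("up", "down"), "right": ("up", "down")}

    def slide(s, a):
        # Where the agent lands if it moves in direction a from square s.
        nxt = (s[0] + MOVES[a][0], s[1] + MOVES[a][1])
        if nxt == WALL or not (1 <= nxt[0] <= 4 and 1 <= nxt[1] <= 3):
            return s                             # bumping a wall or the edge: stay put
        return nxt

    def backup(V, s):
        # Bellman optimality backup: max over actions, living reward 0.
        return max(
            0.8 * GAMMA * V[slide(s, a)]
            + sum(0.1 * GAMMA * V[slide(s, p)] for p in PERP[a])
            for a in MOVES
        )

    states = [(c, r) for c in range(1, 5) for r in range(1, 4) if (c, r) != WALL]
    V = {s: TERMINALS.get(s, 0.0) for s in states}   # the video's V1: terminals only
    for k in (2, 3):                                 # compute V2 and V3
        V = {s: TERMINALS[s] if s in TERMINALS else backup(V, s) for s in states}
        print(f"V{k}: (3,3)={V[(3,3)]:.2f}  (2,3)={V[(2,3)]:.2f}  (3,2)={V[(3,2)]:.2f}")

Running this prints V2: (3,3)=0.72, (2,3)=0.00, (3,2)=0.00 and then V3: (3,3)=0.78, (2,3)=0.52, (3,2)=0.43, matching the values worked out in the replies above.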

  • @alialho7309
    @alialho7309 2 years ago

    For the first iteration, you do not need to calculate the terminal states and get +1, -1 for them. It's wrong!
    We have things like terminal states in grid world; use them.

    • @Leo-di9fq
      @Leo-di9fq 1 year ago

      what do you mean?