Markov Decision Processes for Planning under Uncertainty (Cyrill Stachniss)

Comments • 8

  • @abhinavgupta9990 · 3 years ago

    Absolutely marvellous introduction, Professor. Thank you so much for these insightful lectures.

  • @TheProblembaer2 · 6 months ago · +1

    Thank you!

  • @michaellosh1851 · 3 years ago

    Great introduction!

  • @oldcowbb · 2 years ago

    Does the MDP framework work for continuous states and continuous actions, e.g., on the R^2 plane instead of a finite grid?

  • @vvyogi · 3 years ago

    23:27 Very helpful explanation. One question: will discounting affect the behavior that we observe here? Will the agent prefer a faster, although riskier, route?

    • @CyrillStachniss · 3 years ago · +1

      It will affect the behavior. The agent will prefer policies that yield rewards earlier in time (such policies can be, but are not necessarily, riskier).

  • @dushkoklincharov9099 · 3 years ago · +2

    Why not just move the charging station to the upper left corner :D Great lecture btw

    • @oldcowbb · 2 years ago · +1

      Your boss is gonna give you a really big negative reward for changing the infrastructure.
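
To illustrate the discounting question raised in the thread above (whether a smaller discount factor pushes the agent toward a faster but riskier route), here is a minimal value-iteration sketch in Python. The toy MDP below, including its state names, rewards, and transition probabilities, is an assumption made up for illustration and is not taken from the lecture.

```python
# Minimal sketch: how the discount factor gamma can flip the preference between
# a short risky route and a longer safe route. The MDP below is a made-up toy
# example (not the lecture's grid world).
#
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "S":  {"risky": [(1.0, "R", 0.0)],      # start: take the shortcut ...
           "safe":  [(1.0, "L1", 0.0)]},    # ... or the long detour
    "R":  {"go": [(0.8, "G", 10.0),         # shortcut usually reaches the goal
                  (0.2, "B", -10.0)]},      # but sometimes ends in a crash
    "L1": {"go": [(1.0, "L2", 0.0)]},
    "L2": {"go": [(1.0, "L3", 0.0)]},
    "L3": {"go": [(1.0, "G", 10.0)]},       # safe route reaches the goal later
    "G":  {},                               # terminal states: no actions
    "B":  {},
}

def value_iteration(gamma, iters=200):
    """Run value iteration and return the start-state value and greedy action."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s, actions in transitions.items():
            if actions:  # skip terminal states
                V[s] = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values()
                )
    best = max(
        transitions["S"],
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions["S"][a]),
    )
    return V["S"], best

for gamma in (0.99, 0.5):
    v, a = value_iteration(gamma)
    print(f"gamma={gamma}: V(start)={v:.2f}, preferred first action: {a}")
```

With gamma = 0.99 the long, safe route has the higher value at the start state; with gamma = 0.5 the discounted payoff of the distant goal shrinks enough that the risky shortcut wins, which matches the point made in the reply: discounting favors rewards obtained earlier, and those policies can (but need not) be riskier.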