Markov Decision Processes for Planning under Uncertainty (Cyrill Stachniss)

  • Published: 25 Jan 2025

Comments •

  • @Assault137 · 3 years ago

    Absolutely marvellous introduction, Professor. Thank you so much for these insightful lectures.

  • @vvyogi · 4 years ago

    23:27 Very helpful explanation. One question: will discounting affect the behavior we observe here? Will the agent prefer a faster, albeit riskier, route?

    • @CyrillStachniss · 4 years ago · +1

      It will affect the behavior. The agent will prefer policies that obtain rewards earlier in time (these can be, but are not necessarily, riskier).
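      The effect described in this reply can be sketched with a toy calculation (the route lengths, rewards, and success probability below are made-up assumptions for illustration, not values from the lecture):

      ```python
      # Toy illustration of how the discount factor gamma can flip route
      # preference. All numbers are invented for illustration.
      #
      # Risky route: reach the goal in 2 steps; succeeds with prob. 0.8 (+10),
      #              otherwise ends in a penalty state (-10).
      # Safe route:  reach the goal in 5 steps with certainty (+10).

      def expected_return_risky(gamma, p=0.8, r_goal=10.0, r_trap=-10.0, steps=2):
          """Expected discounted return of the short but risky route."""
          return gamma**steps * (p * r_goal + (1 - p) * r_trap)

      def expected_return_safe(gamma, r_goal=10.0, steps=5):
          """Discounted return of the long but certain route."""
          return gamma**steps * r_goal

      for gamma in (1.0, 0.9, 0.5):
          risky = expected_return_risky(gamma)
          safe = expected_return_safe(gamma)
          better = "risky" if risky > safe else "safe"
          print(f"gamma={gamma}: risky={risky:.3f}, safe={safe:.3f} -> prefer {better}")
      ```

      With no discounting (gamma = 1) the safe route wins (expected 10 vs 6), but with heavy discounting the earlier, riskier payoff dominates, matching the point that discounting favors earlier rewards without making risk-taking automatic.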

  • @michaellosh1851 · 4 years ago

    Great introduction!

  • @TheProblembaer2 · 1 year ago · +1

    Thank you!

  • @oldcowbb · 3 years ago

    Does an MDP work for continuous states and continuous actions, e.g. on the R^2 plane instead of a finite grid?

  • @dushkoklincharov9099 · 3 years ago · +2

    Why not just move the charging station to the upper left corner? :D Great lecture, btw.

    • @oldcowbb · 3 years ago · +1

      Your boss is going to give you a really big negative reward for changing the infrastructure.