Markov Decision Processes 2 - Reinforcement Learning | Stanford CS221: AI (Autumn 2019)

  • Published: 11 Jan 2025

Comments • 9

  • @albert2266
    @albert2266 8 months ago +2

    Just to clarify a concept: I think the statement at 7:29 is not quite right, because the value function shouldn't simply equal the Q-value. The value function is the expected utility over all possible actions at a given state, so it should be an expectation over Q_pi rather than just equal to Q_pi, since Q_pi is the expected utility for a given action at a given state. Please correct me if I'm wrong.
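
For reference, a hedged statement of the textbook relations between the two quantities (the notation below is assumed, not copied from the slide): with a deterministic policy the value does equal the Q-value at the action the policy picks, while with a stochastic policy it is an expectation of Q_pi over actions, which is the reading the comment above suggests.

```latex
% Deterministic policy: the value equals the Q-value at the action the policy chooses
V_\pi(s) = Q_\pi\bigl(s, \pi(s)\bigr)
% Stochastic policy: the value is an expectation of Q_pi over actions
V_\pi(s) = \sum_a \pi(a \mid s)\, Q_\pi(s, a)
```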

  • @aojing
    @aojing 10 months ago +2

    A question left over from the last lecture (MDP-1) is still hovering around: what is the transition function for this class? Is it a function of the action?

    • @inventwithdean
      @inventwithdean 7 months ago +1

      It is a function of both the state and the action.
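
A minimal sketch of that in code, assuming a toy dice-game-style MDP (the state names, actions, and probabilities below are illustrative, not the course's starter code): the transition function maps a (state, action) pair to a distribution over successor states.

```python
# Hypothetical transition model: T[(state, action)] is a list of
# (next_state, probability) pairs, so transitions depend on both
# the current state and the chosen action.
T = {
    ("in", "stay"): [("in", 2/3), ("end", 1/3)],
    ("in", "quit"): [("end", 1.0)],
}

def transition(state, action):
    """Return the distribution over successor states for (state, action)."""
    return T[(state, action)]

# Sanity check: probabilities sum to 1 for every (state, action) pair.
for (s, a), dist in T.items():
    assert abs(sum(p for _, p in dist) - 1.0) < 1e-9

print(transition("in", "stay"))  # [('in', 0.666...), ('end', 0.333...)]
```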

  • @black-sci
    @black-sci 10 months ago +2

    Somehow the lecture left me confused in the end. Maybe I should rewatch it.

  • @JumbyG
    @JumbyG 2 years ago +3

    I think there may be a typo at 28:27: it states that Q_pi is (4+8+16)/3, but I believe it should be (4+8+12)/3. Please correct me if I am wrong.

    • @seaotterlabs1685
      @seaotterlabs1685 2 years ago +2

      I think it should be (4+8+16)/3, as I believe their last run has four rewards of 4.

    • @endoumamoru3835
      @endoumamoru3835 1 year ago

      He is calculating the sum of all the rewards collected in each run. The first time the sum was 4 because only one reward was present, the next was 8 because there were two rewards, and the last was 16 because there were four rewards.
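
For reference, a minimal sketch of the model-free Monte Carlo averaging being debated in this thread: the estimate of Q_pi(s, a) is just the average of the observed utilities of episodes that start with (s, a). The reward sequences below are an illustrative reconstruction of the three runs quoted above (one reward of 4, two of 4, four of 4), not copied from the slide.

```python
def episode_utility(rewards, gamma=1.0):
    """Discounted sum of rewards along one episode (gamma=1 matches the arithmetic quoted above)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def monte_carlo_q_estimate(episodes, gamma=1.0):
    """Model-free Monte Carlo: average the utilities of the episodes observed from (s, a)."""
    utilities = [episode_utility(rewards, gamma) for rewards in episodes]
    return sum(utilities) / len(utilities)

# Illustrative reconstruction of the three runs discussed in this thread.
episodes = [[4], [4, 4], [4, 4, 4, 4]]
print(monte_carlo_q_estimate(episodes))  # (4 + 8 + 16) / 3 = 9.33...
```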

  • @henkjekel4081
    @henkjekel4081 2 years ago +1

    Yeah, you really need to have full episodes to play this game.

  • @Moriadin
    @Moriadin 7 months ago +3

    Not as good as the previous lecture; harder to follow.