Markov Decision Process (MDP) - 5 Minutes with Cyrill

  • Published: 22 Oct 2024

Comments • 13

  • @krizh289
    @krizh289 7 months ago +13

    boutta fail my AI exam but with your videos at least I can cram

  • @flecko5
    @flecko5 A month ago

    Thanks for the easy explanation!

  • @tianyazhang7065
    @tianyazhang7065 A year ago +2

    Like all your videos

  • @gunjanshinde396
    @gunjanshinde396 9 months ago

    At 2:53, it is mentioned that "the policy converges". What exactly do we mean by 'converges' here?

    • @flecko5
      @flecko5 A month ago

      My understanding is that if you keep updating the policy, at some point it stops changing, which means the policy has "converged" (see the sketch after this thread).

    • @gunjanshinde396
      @gunjanshinde396 A month ago

      Hey @flecko5, thanks! Lately I've realised the same.

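    The reply above describes convergence in policy iteration: once the improvement
    step leaves the policy unchanged, further updates cannot change it either, so the
    loop can stop. Below is a minimal Python sketch on a made-up 2-state, 2-action MDP
    (the transition probabilities P, rewards R, and discount factor are illustrative,
    not taken from the video):

        import numpy as np

        # Hypothetical toy MDP: P[s, a, s'] are transition probabilities,
        # R[s, a] are immediate rewards (made-up numbers), gamma is the discount.
        P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                      [[0.9, 0.1], [0.2, 0.8]]])
        R = np.array([[1.0, 0.0],
                      [0.0, 2.0]])
        gamma = 0.9
        policy = np.zeros(2, dtype=int)   # start from an arbitrary policy

        for iteration in range(100):
            # Policy evaluation: solve V = R_pi + gamma * P_pi @ V exactly.
            P_pi = P[np.arange(2), policy]   # transitions under the current policy
            R_pi = R[np.arange(2), policy]   # rewards under the current policy
            V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)

            # Policy improvement: act greedily with respect to the Q-values.
            Q = R + gamma * P @ V
            new_policy = Q.argmax(axis=1)

            # Convergence test: the policy no longer changes between iterations.
            if np.array_equal(new_policy, policy):
                print(f"Policy converged after {iteration + 1} iterations: {policy}")
                break
            policy = new_policy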

  • @WhereToClick
    @WhereToClick A month ago +1

    Thank you!

  • @IvanIvanov-dk6sm
    @IvanIvanov-dk6sm A year ago

    Thank you for the short lecture, but it is too short. Maybe you could add more equations to make it a real 5 minutes, not 3.4 minutes.

    • @CyrillStachniss
      @CyrillStachniss  A year ago +5

      Thanks for your opinion

    • @DDeathdealer007
      @DDeathdealer007 A year ago

      @Ivan Ivanov - it is also not 5 minutes, but here is a 50-minute lecture of Cyrill's on this topic: ruclips.net/video/72QwRnSNY88/видео.html

    • @TheProblembaer2
      @TheProblembaer2 10 months ago +3

      That sounds a little bit entitled. I would rather thank the Professor for taking the time to share his knowledge with the world. Much appreciated.