Dynamic Deep Learning | Richard Sutton

  • Published: 16 Jan 2025

Comments • 6

  • @williamjmccartan8879 · a month ago · +5

    Good presentation on the current state of RL and its limitations in continual learning. It is very relevant to what we are hearing about the dynamics across the large LLM companies: regardless of how much information they can acquire, it sounds as though they need more ongoing engagement in their processes rather than reliance on static information. Thank you very much, Professor Sutton, for sharing your time, work, experience, and knowledge. Cheers

  • @CemlynWaters · a month ago

    Thank you, Richard Sutton, for giving this talk; very interesting! Also thanks to the ICARL team for setting up this presentation!

  • @webgpu · 2 months ago · +2

    Thank you very much! Great topic discussed in this presentation 🍻

  • @DanielKang-t6v · a month ago · +1

    Thanks for the wonderful lesson!

  • @Crack-tt2dh · a month ago

    Regarding the segment from 27:00 to 29:00: I personally believe it is not so much about slow learning as about slow forgetting. The red line, because of its high learning rate, forgets previous tasks, which causes a sharp decline in accuracy. In contrast, the yellow and brown lines, which forget more slowly, see less impact on their accuracy. Solving the forgetting problem might be related to the scale of the neural network.
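    The trade-off this comment describes can be seen in a toy sketch (not from the talk; the scalar model, targets, and learning rates below are illustrative assumptions): a model is fit to task A, then trained on task B with either a high or a low learning rate, and the error it re-incurs on task A measures how much it "forgot".

    ```python
    def sgd_scalar(w, target, lr, steps):
        # Plain SGD on the squared error (w - target)^2 for a scalar "model" w.
        for _ in range(steps):
            w -= lr * 2.0 * (w - target)
        return w

    def forgetting_demo(lr, steps_b=20):
        # Phase 1: assume the model has already fit task A (target weight 1.0).
        w = 1.0
        # Phase 2: a few SGD steps on task B (target weight -1.0).
        w = sgd_scalar(w, -1.0, lr, steps_b)
        # "Forgetting" = how far the weight has drifted from the task-A solution.
        return abs(w - 1.0)

    high = forgetting_demo(lr=0.4)   # fast learner: converges to task B, forgets A
    low = forgetting_demo(lr=0.01)   # slow learner: drifts little, retains most of A
    print(high, low)  # the high-lr run shows much larger task-A error
    ```

    The high learning rate overwrites the task-A solution within a handful of steps, while the low learning rate barely moves it, which matches the comment's reading of the red vs. yellow/brown curves.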

  • @tylermoore4429 · a month ago · +3

    Why are all the questions like "what is the advantage of continual learning over frozen models?" That is an extraordinarily dumb question. What am I missing?