RL Course by David Silver - Lecture 1: Introduction to Reinforcement Learning

  • Published: 25 Aug 2024
  • Reinforcement Learning Course by David Silver - Lecture 1: Introduction to Reinforcement Learning
    Slides and more info about the course: goo.gl/vUiyjq

Comments • 328

  • @jordanburgess
    @jordanburgess 8 years ago +1303

    Just finished lecture 10 and I've come back to write a review for anyone starting. *Excellent course*. Well paced, enough examples to provide a good intuition, and taught by someone who's leading the field in applying RL to games. Thank you David and Karolina for sharing these online.

    • @Gabahulk
      @Gabahulk 8 years ago +22

      I've finished both of them, and I'd say that this one has better and much more solid content, although the one from Udacity is much lighter and easier to follow, so it really depends on what you want :)

    • @adarshmcool
      @adarshmcool 8 years ago +17

      This course is more thorough; if you are looking to make a career in Machine Learning, you should put in the work and do this course.

    • @TheAdithya1991
      @TheAdithya1991 8 years ago +6

      Thanks for the review!

    • @devonk298
      @devonk298 8 years ago +6

      One of the best, if not the best, courses I've watched!

    • @saltcheese
      @saltcheese 7 years ago +3

      thanks for the review

  • @zingg7203
    @zingg7203 7 years ago +455

    0:01 Outline
    1:10 Admin
    6:13 About Reinforcement Learning
    22:00 The Reinforcement Learning problem
    57:00 Inside an RL agent
    Problems within Reinforcement Learning

    • @user-sf5ig4sz6p
      @user-sf5ig4sz6p 7 years ago +1

      Good job. Very thankful :)

    • @enochsit
      @enochsit 7 years ago +1

      thanks

    • @trdngy8230
      @trdngy8230 6 years ago +2

      You made the world much easier! Thanks!

    • @michaelc2406
      @michaelc2406 6 years ago +6

      Problems within Reinforcement Learning 1:15:53

    • @mairajamil001
      @mairajamil001 3 years ago

      Thank you for this.

  • @passerby4278
    @passerby4278 4 years ago +23

    What a wonderful time to be alive!!
    Thank God we have the opportunity to study a full module from one of the best unis in the world, taught by one of the leaders of its field.

  • @tanmaygangwani3534
    @tanmaygangwani3534 7 years ago +56

    The complete set of 10 lectures is brilliant. David's an excellent teacher. Highly recommended!

  • @tylersnard
    @tylersnard 4 years ago +33

    I love that David is one of the foremost minds in Reinforcement Learning, but he can explain it in ways that even a novice can understand.

    • @DEVRAJ-np2og
      @DEVRAJ-np2og 1 month ago

      Hello, can you please suggest a roadmap for RL?

  • @nguyenduy-sb4ue
    @nguyenduy-sb4ue 4 years ago +160

    How lucky we are to have access to this kind of knowledge at the push of a button! Thank you to everyone at DeepMind for making this course public.

    • @BhuwanBhatta
      @BhuwanBhatta 4 years ago +5

      I was going to say the same. Technology has really made our life easier and better in a lot of ways. But a lot of times we take it for granted.

  • @zhongchuxiong
    @zhongchuxiong 1 year ago +13

    1:10 Admin
    6:13 About Reinforcement Learning
    6:22 Sits at the intersection of many fields of science: solving the decision-making problem that appears in each of them.
    9:10 Branches of machine learning.
    9:37 Characteristics of RL: no correct answer, delayed feedback, sequence matters, agent influences environment.
    12:30 Example of RL
    21:57 The Reinforcement Learning Problem
    22:57 Reward
    27:53 Sequential Decision Making. Action
    29:36 Agent & Environment. Observation (see the code sketch after this outline)
    33:52 History & State: stream of actions, observations & rewards.
    37:13 Environment state
    40:35 Agent State
    42:00 Information State (Markov State). Contains all useful information from history.
    51:13 Fully observable environment
    52:26 Partially observable environment
    57:04 Inside an RL Agent
    58:42 Policy
    59:51 Value Function: prediction of the expected future reward.
    1:06:29 Model: transition model, reward model.
    1:08:02 Maze example to explain these 3 key components.
    1:10:53 Taxonomy of RL agents based on these 3 key components:
    policy-based, value-based, actor-critic (which combines both policy & value function), model-free, model-based
    1:15:52 Problems within Reinforcement Learning.
    1:16:14 Learning vs. Planning: partially known environment vs. fully known environment.
    1:20:38 Exploration vs. Exploitation.
    1:24:25 Prediction vs. Control.
    1:26:42 Course Overview
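
    As a concrete companion to the agent/environment items above, here is a minimal Python sketch of the observation-action-reward loop. The RandomWalkEnv and RandomAgent classes are made up for illustration and are not from the lecture:

    import random

    class RandomWalkEnv:
        """Toy environment: reach position +3 for reward +1, or -3 for reward -1."""
        def __init__(self):
            self.pos = 0
        def reset(self):
            self.pos = 0
            return self.pos                      # initial observation O_t
        def step(self, action):                  # action A_t in {-1, +1}
            self.pos += action
            done = abs(self.pos) >= 3
            reward = 1.0 if self.pos >= 3 else (-1.0 if self.pos <= -3 else 0.0)
            return self.pos, reward, done        # O_{t+1}, R_{t+1}, terminal flag

    class RandomAgent:
        """Policy pi(a|s): picks -1 or +1 uniformly, ignoring the observation."""
        def act(self, observation):
            return random.choice([-1, +1])

    env, agent = RandomWalkEnv(), RandomAgent()
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:                              # the agent-environment interaction loop
        action = agent.act(obs)                  # agent executes action A_t
        obs, reward, done = env.step(action)     # environment emits O_{t+1}, R_{t+1}
        total_reward += reward
    print("return from this episode:", total_reward)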

  • @eyeofhorus1301
    @eyeofhorus1301 5 years ago +32

    Just finished lecture 1 and can already tell this is going to be one of the absolute best courses 👌

  • @socat9311
    @socat9311 5 years ago +24

    I am a simple man. I see a great course, I press like

  • @ethanlyon8824
    @ethanlyon8824 7 years ago +33

    Wow, this is incredible. I'm currently going through Udacity and this lecture series blows their material from GT out of the water. Excellent examples, great explanation of theory, just wow. This actually helped me understand RL. THANK YOU!!!!!

    • @JousefM
      @JousefM 4 years ago

      How do you find the RL course from Udacity? Thinking about doing it after the DL Nanodegree.

    • @pratikd5882
      @pratikd5882 3 years ago +4

      @@JousefM I agree. The explanations by the GT professors were confusing and less clear; the entire DS nanodegree, which had ML, DL and RL, was painful to watch and understand.

  • @Dynamyalo
    @Dynamyalo 1 month ago +1

    Right now I am sitting in my pajamas in the comfort of my home, eating a peanut butter and jelly sandwich and I have the ability to watch an entire course about an advanced topic online for free. What a time to be alive

  • @NganVu
    @NganVu 4 years ago +20

    1:10 Admin
    6:13 About Reinforcement Learning
    21:57 The Reinforcement Learning Problem
    57:04 Inside an RL Agent
    1:15:52 Problems within Reinforcement Learning

  • @guupser
    @guupser 6 years ago +17

    Thank you so much for repeating the questions each time.

  • @ShalabhBhatnagar-vn4he
    @ShalabhBhatnagar-vn4he 4 years ago +5

    Mr. Silver covers in 90 minutes what most books do not in 99 pages. Cheers and thanks!

  • @vipulsharma3846
    @vipulsharma3846 4 years ago +1

    I am taking a Deep Learning course right now, but seriously, the comments here are motivating me to get into this one right away.

  • @dbdg8405
    @dbdg8405 7 days ago

    This is a superb course on so many levels. Thank you

  • @user-hb9wc7sx9h
    @user-hb9wc7sx9h 1 year ago +2

    David is awesome at explaining a complex topic! Great lecture. The examples really helped in understanding the concepts.

  • @tristanlouthrobins
    @tristanlouthrobins 7 months ago

    This is one of the clearest and most illuminating introductions I've watched on RL and its practical applications. Really looking forward to the following instalments.

  • @ImtithalSaeed
    @ImtithalSaeed 6 years ago +79

    I can say that I've found a treasure... really.

  • @AndreiMuntean0
    @AndreiMuntean0 8 years ago +38

    The lecturer is great!

  • @Abhi-wl5yt
    @Abhi-wl5yt 2 years ago

    I just finished the course, and the people in this comment section are not exaggerating. This is one of the best courses on Reinforcement learning. Thank you very much DeepMind, for making this free and available to everyone!

  • @DrTune
    @DrTune 2 years ago

    Excellent moment around 24:10 when David makes it crystal clear that there needs to be a metric to train by (better/worse) and that it's possible - and necessary - to come up with a scalar metric that roughly approximates success or failure in a field. When you train something to optimize for a metric, it's important to be clear up-front what that metric is.

  • @deviljin6217
    @deviljin6217 1 year ago +1

    the legend of all RL courses

  • @vorushin
    @vorushin 7 months ago

    Thanks a lot for the great lectures! I enjoyed watching every one of them (even #7). This is a great complement to reading Sutton/Barto and the seminal papers in RL.
    I remember looking at the Atari paper in late 2013 and having a hard time understanding why everyone was going completely crazy about it. A few years later the trend was absolutely clear. Reinforcement Learning is the key to pushing the performance of AI systems past the threshold where humans can serve as wise supervisors, to the point where different kinds of intelligence help each other improve via self-play.

  • @lauriehartley9808
    @lauriehartley9808 4 years ago +1

    I have never heard a punishment described as a negative reward at any point during my 71 orbits of the Sun. You can indeed learn something new every day.

  • @mdoIsm771
    @mdoIsm771 1 year ago

    I took this playlist as a reference for my thesis in "RL for green radio".

  • @mgonetwo
    @mgonetwo 1 year ago +1

    Rare opportunity to listen to Christian Bale after he is finished with dealing with criminals as Batman.
    On a serious note, overall great series of lectures! Thanks, prof. David Silver!

  • @TheAIEpiphany
    @TheAIEpiphany 3 years ago +1

    His name should be David Gold or Platinum I dunno. Best intro to RL on YT, thank you!

  • @johntanchongmin
    @johntanchongmin 3 years ago +3

    Really love this video series. Watching it for the fifth time:)

  • @nirajabcd
    @nirajabcd 4 years ago

    Just completed Coursera's Reinforcement Learning Specialization and this is a nice addition to reinforce the concept I am learning.

  • @hassan-ali-
    @hassan-ali- 7 years ago +19

    lecture starts at 6:30

  • @Edin12n
    @Edin12n 5 years ago +5

    That was brilliant. Really helping me to get my head around the subject. Thanks David

  • @Newascap
    @Newascap 3 years ago +3

    I actually prefer this 2015 class over the more recent 2019 one. Nothing wrong with the other lecturer, but David somehow makes the course flow more smoothly.

  • @dalcimar
    @dalcimar 5 years ago +26

    Can you enable the automatic captioning to this content?

  • @aam1819
    @aam1819 8 months ago

    Thank you for sharing your knowledge online. Enjoying your videos, and loving every minute of it.

  • @43SunSon
    @43SunSon 6 months ago +1

    I'm back again, watching the whole video again.

  • @linglingfan8138
    @linglingfan8138 3 years ago +1

    This is really the best RL course I have seen!

  • @yuwuxiong1165
    @yuwuxiong1165 4 years ago

    Take swimming as an example: learning is the part where you jump directly into the water and learn to swim in order to survive; planning is the part where, before jumping into the water, you read books/instructions on how to swim (obviously sometimes planning helps, sometimes not, and sometimes it even hurts).

  • @aaronvr_
    @aaronvr_ 4 years ago +2

    Really high quality. I'm impressed by David Silver's (or somebody else's?) choice to offer this content to the general public free of charge... what an age we're living in :DDDDDDDDDDD

  • @kiuhnmmnhuik2627
    @kiuhnmmnhuik2627 7 years ago +2

    @1:07:00. Instead of defining P_{ss'}^a and R_s^a, it's better to define p(s',r|s,a), which gives the joint probability of the next state and reward. The latter is the approach followed by the 2nd edition of Sutton & Barto's book.
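
    For reference, assuming Sutton & Barto's conventions, the two notations are related by
        P_{ss'}^a = Pr[S_{t+1} = s' | S_t = s, A_t = a] = \sum_r p(s', r | s, a)
        R_s^a = E[R_{t+1} | S_t = s, A_t = a] = \sum_{s'} \sum_r r \, p(s', r | s, a)
    so both the transition model and the expected reward can be recovered from the joint form p(s', r | s, a).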

  • @wireghost897
    @wireghost897 1 year ago

    It's really nice that he gives examples.

  • @erichuang2009
    @erichuang2009 4 years ago +4

    5 days to train per game back then; now it's 5 minutes to complete training, according to recent papers. Things evolve fast!

  • @asavu
    @asavu 2 years ago

    David is awesome at explaining a complex topic!

  • @yehu7944
    @yehu7944 7 years ago +69

    Could you please turn on the auto-generated subtitles?

    • @user-gw5hx9wm9i
      @user-gw5hx9wm9i 6 years ago +1

      Plz..

    • @Zebra745
      @Zebra745 5 years ago +9

      As a learner of reinforcement learning, you should become an agent and improve yourself by getting rewards in this environment.

  • @43SunSon
    @43SunSon 4 years ago +21

    I have to admit, david silver is slightly smarter than me.

  • @zhichaochen7732
    @zhichaochen7732 7 years ago

    RL could be the killer app in ML. Nice lectures to bring people up to speed!

  • @rohitsaka
    @rohitsaka 4 years ago +5

    For me: David Silver is God ❤️ What a man! What an explanation. One of the greatest minds who changed the dynamics of RL in the past few years. Thanks DeepMind for uploading this valuable course for free 🤍

  • @viscaelbarca4381
    @viscaelbarca4381 2 years ago +4

    Would be great if you guys could add subtitles!

  • @Esaens
    @Esaens 4 years ago

    Superb David - you are one of the giants I am standing on to see a little further - thank you

  • @rossheaton7383
    @rossheaton7383 5 years ago +5

    Silver is a boss.

  • @tianmingdu8022
    @tianmingdu8022 7 years ago

    The UCL lecturer is awesome. Thx for the excellent course.

  • @dhrumilbarot1431
    @dhrumilbarot1431 6 years ago

    Thank you for sharing. It kind of inspires me to always remember that I have to pass it on too.

  • @Delta19G
    @Delta19G 10 months ago

    This is my first taste of DeepMind.

  • @alpsahin4340
    @alpsahin4340 5 years ago

    Great lecture, great starting point. Helped me to understand the basics of Reinforcement Learning. Thanks for great content.

  • @sng5192
    @sng5192 8 years ago +1

    Thanks for a great lecture. I got to grasp the point of reinforcement learning!

  • @iblaliftw
    @iblaliftw 2 years ago

    Thank you very much, I recently got a good grade in RL thanks to your great teaching skills!!

  • @sachinramsuran7372
    @sachinramsuran7372 5 years ago +1

    Great lecture. The examples really helped in understanding the concepts.

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 7 days ago +1

    1:09:43 value function example

  • @ajibolashodipo8911
    @ajibolashodipo8911 3 years ago

    Silver is Gold!

  • @saranggawane4719
    @saranggawane4719 2 years ago

    42:00 - 47:55 : Information State/Markov State
    57:13 RL Agent

  • @bennog8902
    @bennog8902 6 years ago +1

    awesome course and awesome teacher

  • @filippomiatto1289
    @filippomiatto1289 7 years ago +1

    Amazing video, a very well-designed and well-delivered lecture! I'm going to enjoy this course, good job! 👍

  • @AhmedThabit99
    @AhmedThabit99 5 years ago +5

    If you could activate the subtitles on YouTube, it would be great. Thanks!

  • @HazemAzim
    @HazemAzim 3 years ago

    just amazing and different than any intro to RL

  • @donamincorleone
    @donamincorleone 8 years ago +6

    Great video. Thanks. I really needed something like this :)

  • @yuxinzhang9403
    @yuxinzhang9403 3 years ago

    Any observation and reward could be wrapped up into an abstract data structure in an object for sorting.

  • @taherhabib3180
    @taherhabib3180 3 years ago

    His 2021 "Reward is Enough" paper makes us agree with the Reward Hypothesis @ 24:18. :D

  • @AntrianiStylianou
    @AntrianiStylianou 2 years ago +2

    Can anyone confirm whether this is still relevant in 2022? I would like to study RL. It seems that there is a more recent series on this channel, but with a different professor.

  • @prashanthduvvuri7845
    @prashanthduvvuri7845 4 years ago +2

    The future is independent of the past given the present
    - David Silver

    • @utsabshrestha277
      @utsabshrestha277 4 years ago

      Only if the state is a Markov state (see the definition after this thread).

    • @prashanthduvvuri7845
      @prashanthduvvuri7845 4 years ago

      The above comment was meant to be in the context of your life. Your brain is a cumulation of all your prior experiences, and the choices/decisions you make are actions taken by your brain (which is a Markov state). So what I took from that statement was: "you need to forget your past and move on".
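
      For reference, the definition used in the lecture (as I recall the slide): a state S_t is Markov if and only if
          P[S_{t+1} | S_t] = P[S_{t+1} | S_1, ..., S_t],
      i.e. the future is independent of the past given the present; once the state is known, the history can be thrown away.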

  • @jamesr141
    @jamesr141 3 years ago

    What a GIFT.

  • @lcswillems
    @lcswillems 6 years ago

    A really good introduction course!! Thank you very much!!

  • @MGO2012
    @MGO2012 7 years ago

    Excellent explanation. Thank you.

  • @einemailadressenbesitzerei8816
    @einemailadressenbesitzerei8816 3 years ago

    I want to discuss:
    "All goals can be described by the maximisation of expected cumulative reward"
    "Do you agree with this statement?"
    My thought on why it could be controversial is that you can never specify the reward in such a way that you will never get unexpected side effects/behaviour from the agent.
    Any other inputs/thoughts?
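
    For context, the "expected cumulative reward" in that statement is usually formalised as the expected return, assuming a discount factor gamma in [0, 1]:
        G_t = R_{t+1} + gamma * R_{t+2} + gamma^2 * R_{t+3} + ... = \sum_{k=0}^{\infty} gamma^k R_{t+k+1}
    The reward hypothesis says that any goal can be expressed as maximising E[G_t] for some scalar reward signal R_t; the debate here is about whether such an R_t can always be specified without unintended side effects.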

  • @abhijeetghodgaonkar
    @abhijeetghodgaonkar 6 years ago +1

    Excellent Indeed!

  • @ABHINAVGANDHI09
    @ABHINAVGANDHI09 5 years ago

    Thanks for the question at 19:48!

  • @umountable
    @umountable 6 years ago

    46:20 This also means that it doesn't matter how you got into this state; it will always mean the same thing.

  • @smilylife7515
    @smilylife7515 3 years ago

    Please add subtitles to make it more helpful for those of us from countries where English is not the native language.

  • @kozzuli
    @kozzuli 7 years ago +1

    Ty for sharing, Great Lecture!!

  • @jy2883
    @jy2883 5 years ago +8

    Is it possible to add subtitles or autogenerated captions to these lecture videos?

  • @AwesomeLemur
    @AwesomeLemur 3 years ago

    We can't thank you enough!

  • @life42theuniverse
    @life42theuniverse 2 years ago

    The environment state S^e_t is Markov ... though it's unknowable.

  • @AlessandroOrlandi83
    @AlessandroOrlandi83 4 years ago

    Amazing teacher, I wish I could participate in this course! I did a course on Coursera, but it explained very complex things too quickly.

    • @pratikd5882
      @pratikd5882 3 years ago

      Are you referring to the RL specialization by Alberta university? If so, then how good was it on the programming/practical aspects?

    • @AlessandroOrlandi83
      @AlessandroOrlandi83 3 years ago

      @@pratikd5882 Yes, I did that. The exercises were good, but I'm not an AI guy, just a simple programmer. I managed to do the exercises, but the explanations were very concise: in 15 minutes they explain what you get in an hour in these lectures, so it's very condensed. It's good that they have exercises, though. Still, I don't think that after doing it I'm actually able to do much.

    • @satishrapol3650
      @satishrapol3650 2 years ago

      Do you have any suggestions about which one to start with, the lecture series here or the RL specialization by Alberta University (on Coursera)? I need to apply RL to my own project work. By the way, I did the Machine Learning course by Andrew Ng and could follow the pace; it was good enough for me, and the programming exercises helped me more than I could have imagined. But I am not sure whether the same would be the case with the RL course on Coursera. Can you guide me on this?

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    amazing introduction and very cool

  • @weiw1028
    @weiw1028 3 years ago +1

    Begging for subtitles

  • @deepschoolai
    @deepschoolai 7 years ago +13

    Err have you disabled captions on this video?

  • @RahulSharma-yx5uf
    @RahulSharma-yx5uf 2 years ago

    Thank you very much!!

  • @florentinrieger5306
    @florentinrieger5306 1 year ago

    This is so good!

  • @vballworldcom
    @vballworldcom 5 years ago +1

    Captions would really help here!

  • @mehershrishtinigam5449
    @mehershrishtinigam5449 1 year ago

    Important point at 1:00:30.
    1:00:22 gamma's value is less than 1.

  • @legorative
    @legorative 6 years ago

    Too good :) Best analogies.

  • @wentingwang883
    @wentingwang883 1 year ago

    Thanks so much!

  • @MimJim6784
    @MimJim6784 3 years ago

    Please enable the auto subtitle generator!

  • @VishalKumarTech
    @VishalKumarTech 7 years ago

    Thank you David!!

  • @robbyrayrab
    @robbyrayrab 3 years ago +1

    What was the bit about 15 Hz?

  • @dashingrahulable
    @dashingrahulable 7 years ago

    On Slide "History and State" @ 34:34, does the order of Actions, Observations and Rewards matter? If yes, then why the order isn't Observations, Rewards and Actions; the reasoning is that the agent sees the observations first, assesses the reward for actions and then takes a particular action? Please clarify if the chain-of-thought went awry at any place.
    Thanks.
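
    For reference, the slide (as I recall it) writes the history as
        H_t = O_1, R_1, A_1, ..., A_{t-1}, O_t, R_t
    i.e. at each step the agent first receives an observation and a reward and then selects an action, which matches the ordering the question suggests.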

  • @vovos00
    @vovos00 7 years ago

    Thank you for the nice lecture.

  • @rz4413
    @rz4413 5 years ago

    brilliant course

  • @lazini
    @lazini 4 years ago +1

    Thanks very much, but I need English subtitles. Could you change the settings of these videos? :)

  • @mechanicalmonk2020
    @mechanicalmonk2020 4 years ago

    Lecture 1 has half a million views, 10 has 36k.
    I'm surprised it's even 36k

  • @ZNE1323
    @ZNE1323 1 month ago

    Goated thanks g