MIT 6.S191 (2023): Reinforcement Learning
- Published: 12 Jun 2024
- MIT Introduction to Deep Learning 6.S191: Lecture 5
Deep Reinforcement Learning
Lecturer: Alexander Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline:
0:00 - Introduction
3:49 - Classes of learning problems
6:48 - Definitions
12:24 - The Q function
17:06 - Deeper into the Q function
21:32 - Deep Q Networks
29:15 - Atari results and limitations
32:42 - Policy learning algorithms
36:42 - Discrete vs continuous actions
39:48 - Training policy gradients
47:17 - RL in real life
49:55 - VISTA simulator
52:04 - AlphaGo and AlphaZero and MuZero
56:34 - Summary
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Excellent slides and explanations!
Very good work. Seen many lectures on the topic but this is by far the best one and very intuitive. Thank you for sharing.
Great as always, thanks for being consistent
Thank you very much. 😊
Amazing lecture delivery. No words to thank you for sharing this wonderful resource for free. Thanks, MIT as well.
Wow, thank you very much )) 🥰🥰😊
Haha, at 19:50 William Lin, the competitive programming legend, is answering the question :D
It's so weird: I'm not even from the US, nor do I study there, but I recognize a student at MIT by his voice in an MIT online lecture :D
Great video! 🙏
Thank you so much
Great, thanks for the course! ❤
Thank you so much! I loved the lecture, and I'm learning so much!
I'm only 16 now, but I hope I can one day get into MIT or another great university that teaches this well!
Thanks!
It is so clear. Thank you very much!
Great!
Great video!
Great lecture. To be precise, at 24:37, you propose the 'target' as a function of the best action a' in some state s', but you don't explicitly define where this s' comes from. I may be mistaken, but I believe that this s' essentially represents the state s in the next step (t+1), as demonstrated in ruclips.net/video/wDVteayWWvU/видео.html (at 14:45). I hope this information is useful to someone.
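Not the lecturer, but to make that concrete: the sketch below spells out the target under the usual Q-learning convention, where s' is indeed the state observed at the next step (t+1) after taking action a in state s. The tabular Q here is a toy stand-in for the lecture's deep Q-network.

```python
import numpy as np

# Toy tabular stand-in for the deep Q-network in the lecture.
num_states, num_actions = 5, 2
Q = np.zeros((num_states, num_actions))
gamma = 0.99  # discount factor

def q_target(reward, next_state, done):
    """Bellman target: r + gamma * max_a' Q(s', a'),
    where s' is the state at the next time step (t+1)."""
    if done:
        return reward
    return reward + gamma * np.max(Q[next_state])
```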
Thanks for explaining complex deep learning and reinforcement learning principles in a simple manner 🙌👍
Wow, my favorite area of AI =]
Can't wait to finish the lecture!
Thank you so much :)
Great lecture from a great instructor.
Thanks for sharing!
Dude, this guy did such a good job!!!!
This is so great! But unfortunately, due to my limited English, I didn't understand some parts. Hopefully in the future there will be subtitles in Indonesian or other languages. Thank you very much!
You can use subtitles if you want.
Glad to see ML can figure out what I did as an 8-year-old with a stack of quarters :D
Great lecture
Thank you, Ostad Amini. But how can I find some code examples for policy learning, like PPO?
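Not the course staff, but for reference PPO code, libraries such as Stable-Baselines3 are a common starting point, alongside the materials at introtodeeplearning.com. As a minimal sketch of the simpler REINFORCE-style policy gradient the lecture describes (not PPO itself; the network sizes and learning rate are illustrative):

```python
import tensorflow as tf

# Policy network: observations -> logits over a discrete action space.
obs_dim, num_actions = 4, 2  # illustrative sizes
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(obs_dim,)),
    tf.keras.layers.Dense(num_actions),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(observations, actions, returns):
    """One REINFORCE update: minimize -log pi(a|s) * discounted return."""
    with tf.GradientTape() as tape:
        logits = policy(observations)
        neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=actions, logits=logits)
        loss = tf.reduce_mean(neg_log_prob * returns)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
    return loss
```

PPO adds a clipped surrogate objective on top of this same log-probability machinery.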
@ 50:00 Very impressive work, VISTA!
Once again, a great lecture. I have a challenge, and I wonder if you can help me. I'm currently implementing an NN to determine customer satisfaction from a set of inputs that capture behavioural patterns (think # of complaints to our customer service, rate of usage of our services, etc.), and I'd like to know how much each input contributes to the overall satisfaction score. I imagine this would involve computing the gradient of the output node (a single one in this case) with respect to each input. Is there any lecture where you go into the details of this, both the math and the TensorFlow code? Thanks in advance!
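Not the lecturer, but the gradient you describe is straightforward with tf.GradientTape; a minimal saliency-style sketch, with the model shape and data as placeholders:

```python
import tensorflow as tf

# Placeholder satisfaction model: 8 behavioural features -> one score.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

x = tf.random.normal((1, 8))  # one customer's feature vector
with tf.GradientTape() as tape:
    tape.watch(x)             # watch the inputs, not just the weights
    score = model(x)
saliency = tape.gradient(score, x)  # d(score)/d(feature_i)
print(saliency.numpy())  # larger |value| => stronger local influence
```

For something more robust than raw gradients, integrated gradients or SHAP are the usual next step.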
Can you teach AI to play Cities: Skylines?
7:00
RL is so good for optimizing trading strategies.
I recommend Sutton and Barto's "Reinforcement Learning: An Introduction", 1st edition; way, way better than the newer 2nd edition.
14:25
Thanks for the thorough vid! I'm a bit lost @ 39:31 on where the "-0.8" velocity comes from. The closest interpretation I can find is: given the mean = -1 and var = 0.5, the prob of the normal distribution at the mean would be about 0.8... and since you're going in the negative direction for action a, it becomes -0.8?? But this interpretation seems wrong, since the mean should indicate the direction and velocity of action a, while the prob is for computing the loss. So... what am I missing here? Thanks!
When you say "the prob of the normal distribution at the mean would be around 0.8", where did you get 0.8 from? (The maximum value of this distribution is 0.564, at the mean.) And secondly, I think he is using 0.8 m/s as an example (it's a random value which you might get after mapping it back to a speed variable in your game).
@@gnikhil335 Good call! I misused the variance as the std; my mistake. And I really should've said likelihood there. But yeah, I was just trying to figure out why he said the mean is centered at -0.8 while also showing a mean of -1 for the predicted parameters of the pdf. As in, are they just separate random examples, or are we using a pdf with mean = -1, var = 0.5 to determine the probability when the speed is -0.8? That also doesn't seem likely, since I thought we would use the velocity with the max likelihood (i.e., the mean).
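To check the numbers in this thread (a sketch, assuming the slide's predicted parameters of mean = -1 and variance = 0.5, and a sampled action of -0.8):

```python
import numpy as np

mean, var = -1.0, 0.5   # the policy network's predicted parameters
a = -0.8                # a sampled action, not the distribution's mean

std = np.sqrt(var)
density = np.exp(-(a - mean) ** 2 / (2 * var)) / (std * np.sqrt(2 * np.pi))
print(density)  # ~0.542; the peak of this pdf (at the mean) is ~0.564
```

This supports the reading above: -0.8 is an action sampled from the predicted distribution, and its density under that distribution is what enters the -log pi(a|s) loss; the predicted mean itself stays at -1.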
54:38
👏👏
Hi Alex,
Could you please suggest a good online coding platform that works properly for reinforcement learning? On our local systems, we are getting errors (system dependencies).
Even Google Colab is showing an error when using the gym library.
Thanks,
Your YouTube follower
Have you tried to solve those errors by installing the correct versions of the packages?
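If it helps, pinning versions is usually the first thing to try in Colab; a sketch, with the version numbers as examples rather than a guaranteed working combination. The maintained fork Gymnasium may also sidestep these dependency issues.

```python
# In a Colab cell (version numbers are examples, not a guaranteed combination):
# !pip install "gym==0.26.2" pygame

import gym

env = gym.make("CartPole-v1")
obs, info = env.reset()  # in gym >= 0.26, reset() returns (obs, info)
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
env.close()
```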
An apple with a byte❤
✒️ fellow August 13th🤳🏿
Oh my God, he is so handsome. And your speech, lecture delivery, and fluency in RL are as awesome as your looks....🤩 I'm focusing on the speaker more than the slides. May Allah Almighty bless you, man.
You say state-action-pear but show an apple, I AM CONFUSION! AMERICA EXPRAIN! :) Loved the lecture, really well done.
Lecture 7?
Lecture 7 is having some technical difficulties so it will be published tomorrow same time (10am ET) -- sorry for the delay!
@@AAmini I am very happy to get a reply within a few minutes.
Today I feel the power of MIT.
Thank you for your understanding :)
Now, he knows that Q-values can be converted into probabilities?
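One common way to do that conversion, for anyone curious, is a softmax (Boltzmann) policy over the Q-values; a minimal sketch, not something the lecture itself derives:

```python
import numpy as np

def softmax_policy(q_values, temperature=1.0):
    """Convert Q-values to action probabilities (Boltzmann policy)."""
    z = q_values / temperature
    z -= np.max(z)            # for numerical stability
    p = np.exp(z)
    return p / p.sum()

print(softmax_policy(np.array([1.0, 2.0, 0.5])))  # sums to 1
```

The temperature controls how greedy the resulting distribution is: low values concentrate probability on the highest-value action.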
Once again, the audio is super quiet. Had to turn the volume to 100. Fire the audio guy lol
Y'all need to incorporate hard-coded trajectories, like political views, in deep learning... the system dynamics change based on political modalities.
33:13
16:03