Amazing, high-quality lectures. Especially enjoyed the attention, memory, and AlphaZero talks.
Shouldn't there also be a reward present in the TD error at 42:30 and 50:25?
edit: ok, it's explained a bit more in the 2015 lecture that this version assumes no intermediate reward
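For anyone else puzzled by the missing reward term: the standard TD error is

$$\delta_t = r_{t+1} + \gamma\, v(s_{t+1}) - v(s_t)$$

and if the only reward is the terminal win/loss (so $r_{t+1} = 0$ on every intermediate step, with $\gamma = 1$ in the episodic game setting), it reduces to

$$\delta_t = v(s_{t+1}) - v(s_t)$$

which is presumably the simplified form used at those timestamps.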
A Nash equilibrium sounds like what happens on roads where traffic evens itself out amongst all the roads towards some destination. When a new road is built, nothing really changes because the traffic just redistributes itself to a new equilibrium.
great real life analogy
Thank you so much for this series of lectures!
Two lectures (CNN and RNN) are missing from this series. Can anyone tell if they are available online?
Did David do another RL course in 2018? Or just one lecture?
I was thinking the same and searched a lot, but I think he did just one lecture in 2018
Is it possible to access the course slides?
@@Sigmav0 Link not working
@@TuhinChattopadhyay The slides have been moved to www.davidsilver.uk/wp-content/uploads/2020/03/games.pdf
Hope this helps !
@@Sigmav0 Got it... many thanks
@@TuhinChattopadhyay No problem ! 👍
@@Sigmav0 these slides are from an older UCLxDeepMind lecture series led primarily by David Silver. They do not include content on the newer AlphaZero models. Do you by any chance know if these updated slides are available online?
The level of computer play in Scrabble is not superhuman. Quackle beats Maven, and the best humans can roughly split a long series 50-50 with Quackle.
Anyone know what exact hardware was used to train AlphaGo Zero?
deepmind.com/blog/alphago-zero-learning-scratch/
I can comment now. See you again David.
I love him, how sad the room is empty
I would love to see how DeepMind would build a city on its own in Cities: Skylines. See how its optimization would create the best and most efficient layout in real time. Maybe we could learn a lot from that.
Why so many empty seats?
This stuff not on the exam
The number of people is lower with later lectures for some reason.
@@matveyshishov stupid ppl
Here's a link to the same video but with slides visible ruclips.net/video/N1LKLc6ufGY/видео.html
Despite the success of AlphaZero nets in several games, I feel a better starting point would be playing some number of games with humans. Only then, when it has grasped some basics (by itself, not forcibly inserted by hand), let it play against itself. This way it could accomplish in thousands of self-play games what from scratch would take millions of self-play games, due to the total randomness and cluelessness of the first games. It's not the absolute zero approach, but it has no "artificial" handcrafted parameters either. It learns from its own games all the way.
Playing with humans takes considerably more time than running simulations - so actually, playing millions of games by itself is still faster than playing 100 games against humans. Knowing that a game of Go takes around 1h, you'd have finished 3 games with a human in the time it took AlphaZero to reach human-level play.
Same for chess, when you realise it took AlphaZero 4 hours to reach a level higher than Stockfish...
It should be clear from these examples that one of the particularities of AlphaZero is the speed at which it learns. Playing humans here both defeats the purpose of self-learning and actually wastes time.
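To make the speed argument concrete, here is a rough, self-contained sketch of a self-play training loop on a toy game (my own illustration, not DeepMind's actual code; the game, the value table, and the Monte Carlo update are all simplifications of what AlphaZero really does). The point is that nothing in the loop ever waits on a human, so games can be generated as fast as the hardware allows.

```python
import random
from collections import defaultdict

# Toy game: single-pile Nim. Players alternate taking 1 or 2 stones;
# whoever takes the last stone wins. Purely illustrative.
PILE = 10

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def self_play_game(values, epsilon=0.1):
    """Play one full game where both sides use the same value table (self-play)."""
    history = []                      # (player, pile_after_their_move) pairs
    pile, player = PILE, 0
    while pile > 0:
        moves = legal_moves(pile)
        if random.random() < epsilon:     # explore occasionally
            move = random.choice(moves)
        else:                             # otherwise pick the best-valued successor state
            move = max(moves, key=lambda m: values[(player, pile - m)])
        pile -= move
        history.append((player, pile))
        player = 1 - player
    winner = 1 - player                   # the player who just moved took the last stone
    return history, winner

def train(num_games=50_000, lr=0.05):
    """Generate self-play games and nudge visited-state values toward the final outcome."""
    values = defaultdict(float)
    for _ in range(num_games):
        history, winner = self_play_game(values)
        for player, state in history:
            outcome = 1.0 if player == winner else -1.0
            values[(player, state)] += lr * (outcome - values[(player, state)])
    return values

if __name__ == "__main__":
    v = train()
    # States that leave the opponent a multiple of 3 should tend to score well,
    # which is the known winning strategy for this Nim variant.
    print({pile: round(v[(0, pile)], 2) for pile in range(10)})
```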
why does nobody take notes?
Not on the exam
@William Davis Sure... In primary school...
Because the slides and the lectures are both available online, I'd rather just listen carefully in class first.