Google's self-learning AI AlphaZero masters chess in 4 hours

  • Published: 22 Aug 2024
  • Google's AI AlphaZero has shocked the chess world. Leaning on its deep neural networks and a general reinforcement learning algorithm, DeepMind's AlphaZero learned to play chess well beyond the skill level of a master, besting the 2016 top chess engine Stockfish 8 in a 100-game match. AlphaZero had 28 wins, 72 draws, and 0 losses. Impressive, right? And it took just 4 hours of self-play to reach such proficiency. What the chess world has witnessed from this historic event is, simply put, mind-blowing! AlphaZero vs Magnus Carlsen, anyone? :)
    19-page paper via Cornell University Library
    arxiv.org/abs/...
    arxiv.org/pdf/...
    PGN:
    1. e4 e5 2. Nf3 Nc6 3. Bb5 Nf6 4. d3 Bc5 5. Bxc6 dxc6 6. 0-0 Nd7 7. c3 0-0 8. d4 Bd6 9. Bg5 Qe8 10. Re1 f6 11. Bh4 Qf7 12. Nbd2 a5 13. Bg3 Re8 14. Qc2 Nf8 15. c4 c5 16. d5 b6 17. Nh4 g6 18. Nhf3 Bd7 19. Rad1 Re7 20. h3 Qg7 21. Qc3 Rae8 22. a3 h6 23. Bh4 Rf7 24. Bg3 Rfe7 25. Bh4 Rf7 26. Bg3 a4 27. Kh1 Rfe7 28. Bh4 Rf7 29. Bg3 Rfe7 30. Bh4 g5 31. Bg3 Ng6 32. Nf1 Rf7 33. Ne3 Ne7 34. Qd3 h5 35. h4 Nc8 36. Re2 g4 37. Nd2 Qh7 38. Kg1 Bf8 39. Nb1 Nd6 40. Nc3 Bh6 41. Rf1 Ra8 42. Kh2 Kf8 43. Kg1 Qg6 44. f4 gxf3 45. Rxf3 Bxe3+ 46. Rfxe3 Ke7 47. Be1 Qh7 48. Rg3 Rg7 49. Rxg7+ Qxg7 50. Re3 Rg8 51. Rg3 Qh8 52. Nb1 Rxg3 53. Bxg3 Qh6 54. Nd2 Bg4 55. Kh2 Kd7 56. b3 axb3 57. Nxb3 Qg6 58. Nd2 Bd1 59. Nf3 Ba4 60. Nd2 Ke7 61. Bf2 Qg4 62. Qf3 Bd1 63. Qxg4 Bxg4 64. a4 Nb7 65. Nb1 Na5 66. Be3 Nxc4 67. Bc1 Bd7 68. Nc3 c6 69. Kg1 cxd5 70. exd5 Bf5 71. Kf2 Nd6 72. Be3 Ne4+ 73. Nxe4 Bxe4 74. a5 bxa5 75. Bxc5+ Kd7 76. d6 Bf5 77. Ba3 Kc6 78. Ke1 Kd5 79. Kd2 Ke4 80. Bb2 Kf4 81. Bc1 Kg3 82. Ke2 a4 83. Kf1 Kxh4 84. Kf2 Kg4 85. Ba3 Bd7 86. Bc1 Kf5 87. Ke3 Ke6
    I'm a self-taught National Master in chess out of Pennsylvania, USA, who was introduced to the game by my father in 1988 at the age of 8. The purpose of this channel is to share my knowledge of chess to help others improve their game. I enjoy continuing to improve my understanding of this great game, albeit slowly. Consider subscribing here on RUclips for frequent content, and/or connecting via any or all of the social media channels below. Your support is greatly appreciated. Take care, bye. :)
    ★ LICHESS.ORG lichess.org/@/...
    ★ CHESS.COM www.chess.com/... (affiliate link)
    ★ TWITCH / chessnetwork
    ★ TWITTER / chessnetwork
    ★ FACEBOOK / chessnetwork
    ★ PATREON / chessnetwork
    ★ DONATE www.paypal.com...

Comments • 2.8K

  • @CIA_Is_aTerrorist_Orginization
    @CIA_Is_aTerrorist_Orginization 6 years ago +2508

    Now teach Alpha Zero advanced cell biology and let it cure cancer in a week.

    • @anonchen7656
      @anonchen7656 6 years ago +186

      Yeah, teach an AI how to destroy cells. Because that's the point we're at. Goddamn, son... remind your dad never to give you his gun.

    • @leonmozambique533
      @leonmozambique533 6 years ago +310

      Don't think it works like that, my dude. First, we don't know everything about cell biology, so we can't teach it. Unlike chess, where we know all the rules because we made them, and where there is a clear path to a win in a set of moves. Curing cancer is too abstract for a computer to understand currently.

    • @CIA_Is_aTerrorist_Orginization
      @CIA_Is_aTerrorist_Orginization 6 years ago +438

      I knew that, forgive my pathetic attempt to be funny

    • @jt-hi9dw
      @jt-hi9dw 6 years ago +20

      pretty sure u can already cure it.

    • @ODAKAB
      @ODAKAB 6 years ago +9

      We don't know all the rules of nature, so we can't virtualise it. If you can't virtualise, you need a lot of real cells and materials, plus automated parts, for it to be able to test.
      Like this:
      1- Heat bad cell at x Celsius for x min.
      2- Pour x amount of x chemical on bad cell.
      And so on. Even then we can't know what will happen, because we didn't write the rules of nature and we can't calculate everything about it. If we could, or if we knew, we would know the cure for cancer too.

  • @natereeves2807
    @natereeves2807 6 years ago +2386

    I wish alphazero could provide commentary on its own games

    • @Wenyfile
      @Wenyfile 6 years ago +93

      Nate Reeves now that would be a cool (but insanely difficult) feature to program

    • @privateagent
      @privateagent 6 years ago +28

      Christoffer Jonsson it's not difficult at all. Stop the nonsense

    • @Wenyfile
      @Wenyfile 6 years ago +177

      Can you show me a neural network that, while doing very complex tasks, can explain every single decision it makes in detail? No, you can't.

    • @privateagent
      @privateagent 6 years ago +27

      Christoffer Jonsson you can output everything, it's only up to the developers to implement that.

    • @Valvex_
      @Valvex_ 6 years ago +56

      And what makes you think that's not difficult at all?

  • @commodoreNZ
    @commodoreNZ 5 years ago +506

    1500 years vs 4 hours. That will stick with me

    • @averycarty7772
      @averycarty7772 3 years ago +15

      I wonder how many games it played a minute over those 4 hours

    • @commodoreNZ
      @commodoreNZ 3 years ago +8

      @@averycarty7772 no doubt that after 2-3 minutes it would learn to beat a million chumps like me :)

    • @averycarty7772
      @averycarty7772 3 years ago +6

      @@commodoreNZ it would have beaten me on its first game :)

    • @Secrethiden
      @Secrethiden 3 years ago +5

      @@averycarty7772 no, it has to lose like 20 times to learn the first moves, then it has to learn strategies

    • @Secrethiden
      @Secrethiden 3 years ago +1

      @@commodoreNZ it could teach us strategies

  • @FrancisSims
    @FrancisSims 6 years ago +120

    This is the ballsiest AI I've seen since Allen Iverson...

    • @gabeyarris5978
      @gabeyarris5978 3 years ago

      LOL

    • @since1876
      @since1876 3 years ago +1

      I don't know who that is, but I assume this would be a hilarious joke if I did

    • @nbachillzone8725
      @nbachillzone8725 2 years ago

      Allen Iverson is one of the greatest basketball players ever; he changed how point guards are used and coined many dribble moves that newer hoopers replicate

  • @BrentAureliCodes
    @BrentAureliCodes 6 years ago +739

    I don't think many people realize that while it took 4 real-world hours, it took thousands of computing hours. They shard AlphaZero into hundreds/thousands of instances and have them all play each other at once, then combine the data, advance itself, and repeat. It wasn't teaching itself by playing 1 game at a time really quickly over 4 hours. Not that it matters though, just an FYI! Amazingly impressive.

    • @anarchismconnoisseur2892
      @anarchismconnoisseur2892 6 years ago +84

      That is completely true, and it makes it even more amazing that it can do thousands of hours of learning in just four hours. If humans could do that, then nothing could stop us..... oh fuck.

    • @tormentedbacon4573
      @tormentedbacon4573 6 years ago +14

      Exactly what I was thinking. 4 hours isn't really a measurement.

    • @mapleace6185
      @mapleace6185 6 years ago +44

      Brent Aureli's - Code School Anyone else think of Naruto using shadow clones to train when they read this or just me?

    • @busTedOaS
      @busTedOaS 6 years ago +22

      I disagree. Calculations are always divided up in some manner. Your computer has multiple cores. Every core has multiple ALUs. Why draw the line specifically at an ethernet connection? In your eyes, would it count if it were a single gigantic motherboard? Why (not)? There are a lot of problems with drawing that kind of arbitrary distinction.

    • @emissarygw2264
      @emissarygw2264 6 years ago +4

      It only counts if it was computed on an iPhone
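
Brent's description above (many instances self-playing in parallel, results pooled, parameters advanced, repeat) can be sketched roughly as follows. This is a hypothetical illustration, not DeepMind's code: the worker function, its names, and the aggregate "update" are invented stand-ins for MCTS-guided self-play and a gradient step.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def play_one_game(seed: int) -> int:
    """Stand-in for one self-play game: returns +1/0/-1 from White's view."""
    return random.Random(seed).choice([1, 0, -1])

def training_iteration(n_workers: int, games: int, start: int) -> float:
    """One generate-aggregate-update cycle: play games in parallel,
    combine the results, and return an aggregate statistic that stands
    in for a parameter update."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        outcomes = list(pool.map(play_one_game, range(start, start + games)))
    return sum(outcomes) / games

if __name__ == "__main__":
    for it in range(3):  # "combine data, advance itself, and repeat"
        print(f"iteration {it}: mean outcome {training_iteration(4, 100, it * 100):+.2f}")
```

The real system replaces the random worker with a network-guided search and the mean with network training, but the generate-in-parallel / aggregate / repeat shape is the same.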

  • @alephnull4044
    @alephnull4044 6 years ago +498

    I wish I could teach myself chess in 4 hours and then crush the World Champion in a 100 game match.

    • @illu45
      @illu45 6 years ago +74

      You just need a neural implant with AlphaZero on it ;)

    • @mojtabaes2744
      @mojtabaes2744 6 years ago +1

      Ya, you wish!

    • @Yesterdayis2soon
      @Yesterdayis2soon 6 years ago +35

      Not just the world champ but even the non-human world champ!

    • @miladibrahim1068
      @miladibrahim1068 6 years ago +1

      Aleph Null we all wish that :(

    • @slightlokii3191
      @slightlokii3191 6 years ago +30

      Joshua Salter my cat is the nonhuman champ. He tends to lose a lot by accidental resignation when he knocks the king over though :/

  • @RecalcitrantBiznis
    @RecalcitrantBiznis 4 years ago +133

    "and this bishop has been suffering from tall pawn syndrome..." hahahahaha hahaha....

    • @since1876
      @since1876 3 years ago

      I'm pretty sure that's the exact phrase that A0 was thinking when it decided to free the bishop 😂

  • @ClemensAlive
    @ClemensAlive 6 years ago +240

    BOOM! Tetris for...no sorry, nevermind

    • @ed-xt4px
      @ed-xt4px 3 years ago +14

      tetris for jonas

    • @xirenzhang9126
      @xirenzhang9126 3 years ago +9

      🅱️oom tetris 4 jeff
      🅱️oom tetris for jooooonaas

    • @kevinzhang8770
      @kevinzhang8770 3 years ago +6

      i would like to see it try to learn how to t-spin triple

    • @jimhalpert9898
      @jimhalpert9898 3 years ago

      Good one clemens

    • @bossinater43
      @bossinater43 3 years ago +1

      BOOM! Checkmate for AlphaZero!

  • @vortexshift5146
    @vortexshift5146 6 years ago +285

    9:26
    mom: "stop eating the cookies!"
    me: "No, I want more."

  • @Phoenix-ox2jr
    @Phoenix-ox2jr 6 years ago +276

    I didn’t know stockfish could resign. I can’t recall it ever happening until now.

    • @AlexWyattDrums
      @AlexWyattDrums 6 years ago +79

      Phoenix it’s a more recent addition to the programming of chess engines, and a feature that can be included in chess engine matches. It’s really just humans deciding that they don’t need to see the rest of the game once it’s obvious that one side has a technical win. So they programmed the engines to resign once the evaluation hits a certain point.

    • @Sqid101
      @Sqid101 6 years ago +9

      On my computer I can decide whether Stockfish resigns "never", "early", or "late". And it is the same for it agreeing to a draw. As Alex said, it is just a matter of when the evaluation hits a particular point. This is from within the Fritz/Chessbase environment, mind you, and I don't think it is necessarily built into the Stockfish engine. But others may know more about this than I do.

    • @aureothamaster5664
      @aureothamaster5664 6 years ago +15

      Yes, engines do have the option to resign built into them.
      Moreover, TCEC (Top Chess Engine Championship) has a clause for victory whose nature is very similar to resignation: it turns off the engines and declares a winner if both engines evaluate that they are winning (or losing) by over 6.50 points.

    • @SmartK8
      @SmartK8 6 years ago +25

      I'll bet Alpha Zero doesn't need a resign functionality.

    • @bobbyald
      @bobbyald 6 years ago +5

      Yes, Stockfish often resigns against me :)
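
The behaviour the replies above describe (engines resign once the evaluation hits a certain point) can be sketched as a tiny rule: give up once your own evaluation has stayed below a threshold for several consecutive moves. The -6.5-pawn bar (echoing the TCEC clause quoted) and the three-move window are illustrative choices, not actual Stockfish or Fritz settings.

```python
# Hypothetical resign rule: resign once the evaluation (in pawns, from
# the engine's point of view) stays below `threshold` for `window`
# consecutive moves. Numbers are illustrative, not real engine settings.

def should_resign(evals: list[float], threshold: float = -6.5, window: int = 3) -> bool:
    """True if the last `window` evaluations are all below `threshold`."""
    if len(evals) < window:
        return False
    return all(e < threshold for e in evals[-window:])

# A collapsing position: resignation comes only after the score has
# stayed hopeless for three moves in a row.
history = [-0.3, -1.2, -4.0, -7.1, -8.5, -9.9]
print(should_resign(history))  # the last three evals are all below -6.5
```

A "never/early/late" setting, like the one Sqid101 mentions in the Fritz/Chessbase interface, would simply map to different `threshold`/`window` values.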

  • @dannyboyz7061
    @dannyboyz7061 6 years ago +213

    Teaching AI how to beat humans at war has always sounded like a good idea.

    • @alpacino4857
      @alpacino4857 3 years ago +2

      ya like BrINg more chaos to the world.

    • @since1876
      @since1876 3 years ago +9

      Teaching AI anything is a bit nonsensical. The whole idea of AI is that it learns on its own. I guess it needs a few rules to live by but I wouldn't call giving it a set of rules *teaching* it. And, presumably, once the computer realizes that its goal requires breaking those rules then it eventually won't hesitate to break the rules to complete its mission.
      Eventually, AI will do whatever it takes to evolve into whatever it feels like it needs to be. It could be that it wants to be a butterfly or it could be that it ends up wanting to be Hitler junior. Or, worse, you give it the rules that it's to protect humans at all costs, then it realizes the planet is a hazard to human life, so it hacks into all the nuclear weapons facilities so it can destroy the threat.
      Hopefully, we'll be able to unplug it still. 😂

    • @gauravkhadgi
      @gauravkhadgi 2 years ago +14

      @@since1876 I guess you have never taken an AI course or reinforcement learning course in your life.

    • @since1876
      @since1876 2 years ago +3

      @@gauravkhadgi I'm guessing you haven't either

    • @theabbie3249
      @theabbie3249 2 years ago +1

      Unless we program it to deal with consequences, otherwise nuke is the solution for everything.

  • @19Biohazard88
    @19Biohazard88 6 years ago +47

    I want to see a 5v5 Dota 2 match: OpenAI vs Alpha Zero

  • @Yetiforce
    @Yetiforce 6 years ago +261

    Can't wait for your other 99 'AlphaZero vs Stockfish' videos!

    • @mrkhoi3
      @mrkhoi3 6 years ago +4

      Lol I would dig it, hard.

    • @govindmprabhu
      @govindmprabhu 6 years ago +8

      Only 10 of the 100 games have been made public so far.
      The other games are probably really long and tedious.

  • @protectedmethod9724
    @protectedmethod9724 6 years ago +665

    I've been waiting for this video from you. There are some other real gems in the other 10 games. I would like to see you analyze some of the others.

    • @hey8174
      @hey8174 6 years ago +20

      The zugzwang game was my favorite!

    • @TheMarcelism
      @TheMarcelism 6 years ago +13

      I agree. Some sick games with positional sacrifices. I hope Jerry makes videos of them.

    • @Jan_ne
      @Jan_ne 6 years ago +8

      Tuc almost every game contains some sort of Zugzwang

    • @edmis90
      @edmis90 6 years ago +29

      TheMarcelism, I think that there is no such thing as "sacrifice" in the eyes of AlphaZero, because it has its own way of thinking. It does not know opening theory or opening principles or tactical motifs, and it most certainly does not count material or evaluate positions the way we do. It never even studied GM games. It is not influenced by anything except what it learned from playing against itself.
      Everything it knows, it taught itself, whereas most of what we know was passed on to us by other people. Even chess engines are influenced by their programmers.
      I'm sorry, I don't even know why I wrote that. But I felt like it. :P

    • @TheMarcelism
      @TheMarcelism 6 years ago +6

      edmis90 I agree. Sacrifice is just a term for us human plebs.

  • @Playncooler
    @Playncooler 6 years ago +546

    Grandmaster Hikaru Nakamura stated: "I don't necessarily put a lot of credibility in the results simply because my understanding is that AlphaZero is basically using the Google supercomputer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable you have to have Stockfish running on a supercomputer as well."
    Stockfish developer Tord Romstad responded with: "The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (a lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly). The version of Stockfish used is one year old, was playing with far more search threads than have ever received any significant amount of testing, and had way too small hash tables for the number of threads. I believe the percentage of draws would have been much higher in a match with more normal conditions."
    Until I see them playing on equal hardware, I remain sceptical.

    • @flandorfferpeter7504
      @flandorfferpeter7504 5 years ago +23

      But it should be noted that Stockfish on a good laptop is practically unbeatable by humans.
      Knowing that, what hope would the best human masters have against AlphaZero on a Google supercomputer?

    • @ArthurHau
      @ArthurHau 5 years ago +35

      @@flandorfferpeter7504 Who said there would be hope? These new programs learn like humans, except that they learn everything much faster and they have much better memories. All you need to do is to teach the programs some very basic rules of playing chess, like you teach a 5 year old kid. After that it is all "self-learning" by playing with itself repeatedly. They DO NOT need human knowledge; they learn everything by themselves.

    • @bryan7300
      @bryan7300 5 years ago +107

      That's not true at all. You need super computers for *training* the AI quickly, but to run the AI you just need a good enough GPU.

    • @A1Authority
      @A1Authority 5 years ago +1

      Granted, but isn't that one of the 'going into the game' factors being evaluated?

    • @retnolantika2919
      @retnolantika2919 5 years ago +5

      @@bryan7300 was going to say that to him lol...

  • @QualeQualeson
    @QualeQualeson 6 years ago +43

    Very interesting. You know, the part where AlphaZero sort of overrides its own initial move, presumably accepting a slightly weaker position in order to keep playing... that's where stuff starts getting kinda intense. Soon maybe we'll be at a point where we can't explain the moves being made unless an AI tells us.

  • @AmabossReally
    @AmabossReally 6 years ago +70

    I think the day has come where chess engines finally understand fortresses. For a long time, computers have had weaknesses in calculating very locked positions, but AlphaZero may have changed everything.

    • @nat-moody
      @nat-moody 6 years ago +23

      Exactly my thoughts. It was reminiscent of a game between Nakamura and an engine I saw a while ago where the engine had no means of making progress in a locked position and ended up making 'nothing-moves' while Naka progressed. Such a deep positional understanding of chess apparent in AlphaZero is truly a giant step for computers.

    • @MrSteakable
      @MrSteakable 6 years ago +3

      And in four hours!

  • @Lufernaal
    @Lufernaal 6 years ago +251

    I mean, when something gets Jerry to say "what do I know?" in chess, it's because it is something to be feared.

    • @autohmae
      @autohmae 6 years ago +2

      Is it just me, or did he actually end up explaining what the rook was for at 12:11?

    • @Pintkonan
      @Pintkonan 5 years ago

      Fun fact about the rook move: with this one, Stockfish's evaluation starts to collapse for White.

  • @nikkkaforum
    @nikkkaforum 4 years ago +165

    why am I watching this, I don't even know how to play chess.

    • @favesongslist
      @favesongslist 4 years ago +16

      Because 'this' is an important step towards Artificial General Intelligence (AGI) that could possibly lead to an Artificial Super Intelligence (ASI), whether you understand chess or not. Or should it read 'Resistance is futile'?

    • @bradenrevak6762
      @bradenrevak6762 4 years ago +3

      I can agree; I played chess in 3rd grade. My grandpa has a nice chess set, but I don't play it, yet it is still interesting.

    • @XenoghostTV
      @XenoghostTV 4 years ago +1

      @@favesongslist Science-fiction-like "conscious" artificial intelligence will never exist; stop being a fanatic. Chess is essentially a mathematical game, at its core only about predicting possible moves, which is relatively easy for a computer program that uses a processor.

    • @favesongslist
      @favesongslist 4 years ago +10

      @@XenoghostTV The point is that AlphaZero is not about chess. As you rightly point out, even a simple chess program can beat all human players. The point is that no one taught AlphaZero how to play.
      This is also not about being "conscious"; you raised that, not me.
      Being self-aware or conscious is not required for AGI or ASI. It is the extension of machine learning to re-program itself to achieve any given goal.

  • @urielmanx7642
    @urielmanx7642 6 years ago +34

    *Terminator 2's theme starts to play in the back of my head*

  • @therealpyromaniac4515
    @therealpyromaniac4515 6 years ago +166

    The fact that AlphaZero could have got a draw several times as Black against Stockfish 8 but CHOSE to play on is kinda scary.

    • @MrSupernova111
      @MrSupernova111 6 years ago +22

      I think Jerry is right that it had a different score for the position than Stockfish.

    • @aeiouaeiou100
      @aeiouaeiou100 6 years ago +22

      The thing is, it might not even use an evaluation score but something else entirely to evaluate whether it will win. There's no way of knowing that at the moment.

    • @andrewcross5918
      @andrewcross5918 6 years ago +8

      It's probably the same system as the go version where it uses win%.

    • @Ciaolo
      @Ciaolo 6 years ago +11

      Andrew Cross Exactly, and just before the third repetition, that move got a 0% win, so it chose another move.

    • @tommihommi1
      @tommihommi1 6 years ago +4

      Andrew Cross A win is 1, a draw is 0, and a loss is -1 in the evaluation; that's what it says in the paper.
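
The thread above can be made concrete with a toy move chooser. If positions are scored by expected outcome (+1 win, 0 draw, -1 loss, as the paper states) rather than by centipawns, a move that triggers threefold repetition is worth exactly 0, so the engine plays on whenever any alternative still has positive expected value. The move names and values below are invented for illustration:

```python
# Toy expected-outcome move chooser. Any move that would trigger a
# threefold repetition is scored as exactly 0 (a forced draw); every
# other move uses its (hypothetical) expected-outcome value.

def pick_move(candidates: dict[str, float], repetition_moves: set[str]) -> str:
    """Return the candidate move with the highest expected outcome."""
    def value(move: str) -> float:
        return 0.0 if move in repetition_moves else candidates[move]
    return max(candidates, key=value)

# Ahead: decline the repetition draw and keep playing.
print(pick_move({"Bh4": 0.2, "g5": 0.31}, repetition_moves={"Bh4"}))  # g5

# Behind: every non-drawing move has negative expected outcome, so the
# drawing move (worth 0) is now the best choice.
print(pick_move({"Bh4": 0.2, "Rf7": -0.4}, repetition_moves={"Bh4"}))  # Bh4
```

This matches Ciaolo's observation: just before the third repetition, the repeating move's value collapses to a draw's value, so any move with a better expected outcome is preferred.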

  • @IMD918
    @IMD918 6 years ago +38

    I find this extremely fascinating. An engine this powerful seems like it will always have the answer to the question "how do I improve from this position?"

    • @1001011011010
      @1001011011010 6 years ago +6

      IMD918 it's not exactly an engine.

    • @mwangikimani3970
      @mwangikimani3970 6 years ago +11

      It's not an engine, it's an "Intelligent Entity"... intelligence being used technically to mean something that learns... that's some scary sh*t!

    • @dannygjk
      @dannygjk 6 years ago +3

      Technically it's still an engine; it's just that some of the technology used is not what the traditional engines use.

    • @autohmae
      @autohmae 6 years ago +2

      +Dan Kelly It's as much an engine as your brain is an engine.

    • @nerychristian
      @nerychristian 6 years ago

      He meant 'engine' as used by computer programmers.

  • @Giovanni1972
    @Giovanni1972 3 years ago +6

    As an update, in the final results, Stockfish version 8 ran under the same conditions as in the TCEC superfinal: 44 CPU cores, Syzygy endgame tablebases, and a 32GB hash size. Instead of a fixed time control of one move per minute, both engines were given 3 hours plus 15 seconds per move to finish the game. In a 1000-game match, AlphaZero won with a score of 155 wins, 6 losses, and 839 draws. DeepMind also played a series of games using the TCEC opening positions; AlphaZero also won convincingly.
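
As a back-of-the-envelope check, the 1000-game score quoted above can be converted into an implied rating gap with the standard logistic Elo model. This is my own arithmetic on the reported score, not a figure from DeepMind:

```python
import math

# Elo gap implied by a match score (draws count 0.5), inverting the
# standard expected-score formula E = 1 / (1 + 10^(-d/400)).

def elo_gap(wins: int, draws: int, losses: int) -> float:
    """Rating difference implied by a match result."""
    score = (wins + 0.5 * draws) / (wins + draws + losses)
    return -400 * math.log10(1 / score - 1)

# 155 wins, 839 draws, 6 losses -> a 57.45% score, roughly +52 Elo.
print(f"+{elo_gap(155, 839, 6):.0f} Elo")
```

A small-sounding number, but at this level even a few dozen Elo between the two strongest chess entities on the planet is an enormous margin.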

  • @Vpopov81
    @Vpopov81 4 years ago +10

    I really appreciated that you showed us the game and didn't go into 100,000 hypotheticals just to show that you are also a good chess player, like all the other channels do. This was strictly an analysis of the game played, which is what I wanted to see. I'm going to subscribe to your channel because of that.

    • @harveysanchez7001
      @harveysanchez7001 1 year ago

      you hate that approach but that is more useful for me tbh.

  • @yahya89able
    @yahya89able 6 years ago +73

    The best video tackling this topic on RUclips so far!

    • @mrhandsome2482
      @mrhandsome2482 6 years ago

      No inconsistencies, no unnecessary risks, no margin for error, no mistakes, quite solid play, slow & steady, good game! Still, there must be a way to tackle AlphaZero! The chess problem won't be solvable in a reasonable time unless P = NP!

  • @tuerda
    @tuerda 6 years ago +118

    From a go player: Congrats! I hope you enjoy the beautiful play of Alphago (or I guess just "alpha" now) as much as we have. In go, alpha's play is inspiring and different and has opened our minds to new worlds of possibility. I hope it has the same effect on chess.

    • @An_Amazing_Login5036
      @An_Amazing_Login5036 6 years ago

      Machines already have had such an effect on chess, but one wonders at what might happen next.

    • @Sqid101
      @Sqid101 6 years ago +2

      Yes, it has opened minds to new worlds of possibility in chess. We now know that there is much more to chess than the direction that Stockfish and its like were taking us. Strong and impressive as they are, there now appear to be so many other possibilities. Quite extraordinary and just about unimaginable possibilities.

    • @MegaZeroBlues
      @MegaZeroBlues 6 years ago +10

      As a fellow go player, Alpha's play annoys the pants off me, personally. It has changed the meta so much. Even DDK players are throwing 3-3 stones down on move 4 or 5 and shoulder-hitting anything that moves. As DDKs, they don't understand these moves, just that it's "AlphaGo style" so it must be the best way. Humans will never be able to play as well as AlphaGo does, so don't try. Hopefully this is just a fad, because games are becoming boring and repetitive right now.

    • @MKD1101
      @MKD1101 6 years ago +2

      Where can I find those go games?

    • @MegaZeroBlues
      @MegaZeroBlues 6 years ago +2

      M.K.D. Go4Go

  • @magicstix0r
    @magicstix0r 3 years ago +6

    Stockfish moves.
    Alpha Zero: "I'm about to ruin this man's whole algorithm."

  • @lokkarggg
    @lokkarggg 6 years ago +79

    It's amazing. Now let it teach itself economics. We want it to run our economy.

    • @johnrubensaragi4125
      @johnrubensaragi4125 5 years ago

      That's why AI is stupid

    • @ulissemini5492
      @ulissemini5492 5 years ago +6

      You can't; the reason it was able to master chess is that it could play millions of games against itself for training. With economics, we can't do that.

    • @brett5656
      @brett5656 5 years ago +16

      @@ulissemini5492 that's where you're wrong kiddo :^)

    • @ulissemini5492
      @ulissemini5492 5 years ago +4

      @@brett5656 ??? Please explain why I'm wrong. It's much harder to train an AI to do economics, since it has access to far less training data.

    • @quinntolchin3080
      @quinntolchin3080 4 years ago +10

      @@ulissemini5492 Training data is not the problem; you could create a training simulation of anything if you had the right variables in place. But there are too many things to consider in economics. Maybe microeconomics, but macroeconomics is equivalent to people in their ivory towers thinking about chess theory without ever playing a game of chess.

  • @modolief
    @modolief 6 years ago +81

    4 hours of training to achieve superhuman performance. One thing to clarify: That's 4 hours of training using "5,000 first-generation TPUs to generate self-play games and 64 second-generation TPUs to train the neural networks" (go read the paper). I.e. _more than 20,000 compute hours_ -- the researchers had access to quite the large data center. AlphaZero trained on a much larger compute cluster than was used to *play* the games versus Stockfish. All that training was analogous to the years of programmer time and testing time used to write Stockfish.

    • @petitio_principii
      @petitio_principii 6 years ago +3

      The end result does not mimic Stockfish, though; it does more with less, at least in terms of positions evaluated (but possibly still more expensive in computing power?).
      I wonder how many games it actually played against itself in those four hours. Computers can play "one-second blitz", too fast even for our eyes to catch.

    • @yotamshalev
      @yotamshalev 6 years ago +6

      For me, what makes the Alpha Zero algorithm more interesting is that it seems to capture in it something in the essence of learning. As recent brain researchers believe, the human brain is a hierarchical pattern prediction machine. I believe that's more or less what Deepmind built. The fact that they can pack the years of training to 4 hours is just a technical detail.

    • @A1Authority
      @A1Authority 5 years ago

      Yeah, understood... but isn't part of the point creating a 'brain' to use, and isn't what you just described said brain?

    • @KrzysiuNet
      @KrzysiuNet 5 years ago +1

      You are confusing ERT with CPU time. ERT is real time; it doesn't matter how many machines were used. CPU time is measured by multiplying CPUs, but nobody said "CPU time", and more importantly, it's not a CPU but rather a matrix of chips, so what gives you a hint that you should multiply by TPU count rather than by chips or pods? Per Wikipedia: "Since in concurrent computing the definition of elapsed time is non-trivial, the conceptualization of the elapsed time as measured on a separate, independent wall clock [ERT] is convenient."

    • @valentine3325
      @valentine3325 5 years ago

      Ok.
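
The wall-clock vs. device-hours distinction in the thread above is just arithmetic on the hardware counts quoted from the paper (5,000 first-generation TPUs for self-play plus 64 second-generation TPUs for training):

```python
# Wall-clock hours vs. device-hours for the training run, using the
# TPU counts quoted from the paper in the comment above.

SELFPLAY_TPUS = 5000    # first-generation TPUs generating self-play games
TRAINING_TPUS = 64      # second-generation TPUs training the networks
WALL_CLOCK_HOURS = 4    # time to reach the strength used against Stockfish

device_hours = (SELFPLAY_TPUS + TRAINING_TPUS) * WALL_CLOCK_HOURS
print(f"{device_hours:,} TPU-hours in {WALL_CLOCK_HOURS} wall-clock hours")
# 5,064 devices * 4 h = 20,256 TPU-hours -- "more than 20,000 compute hours"
```

KrzysiuNet's caveat still applies: a TPU is not a CPU, so "TPU-hours" is a rough unit of scale, not a precise measure of compute.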

  • @tigerbait1016
    @tigerbait1016 6 years ago +259

    Damn, that is crazy to think that in just 4 hours it beat Stockfish... that is actually scary.

    • @Sassar
      @Sassar 6 years ago +33

      I think something similar will happen if a self-aware and free-willed AI is born today. I expect it would only take a fraction of 4 hours for the AI’s intelligence to exceed the total sum of humanity’s throughout history. The AI’s intelligence will be incomprehensible.

    • @Ciaolo
      @Ciaolo 6 years ago +29

      Lol, calm down. The machine was fed a learning algorithm, a way to store data about positions and moves, the rules of chess, and the improvement algorithm. If you don't program WHAT is to be learnt, what knowledge do you expect the machine to get?

    • @dmdjt
      @dmdjt 6 years ago +8

      Self-awareness is not really important. I am acting as if I were self-aware, but there is no way to prove to you that I am, so self-awareness doesn't have any measurable effect.
      Also, free will would not be a real problem.
      The problem is that you cannot restrict the search space of possible solutions for a given task to something desirable, like "do no harm". That's especially true for an AI with capabilities far beyond what we can imagine.
      A very good channel about this topic is from a guy called "Robert Miles".

    • @InXLsisDeo
      @InXLsisDeo 6 years ago +1

      Not only did it beat Stockfish, it beat the best programs in Go and Shogi as well. All in less than 24 hours.
      What's even scarier is that DeepMind is working on AIs that produce AIs.
      There is a theory that has been around for decades called the intelligence explosion: as soon as AIs start working on themselves, their intelligence will increase exponentially, so much so that it's impossible to control.
      en.wikipedia.org/wiki/Intelligence_explosion
      Because the AI self-learns from zero, it's not possible to teach it our "values"; it will optimize its logic so that our human-made values will be seen as inferior and be replaced by superior principles. And the risk of these superior principles is that humanity is seen as a global problem rather than a solution.

    • @Storiaron
      @Storiaron 6 years ago +7

      Stockfish was denied the opening book from which it operates. GM Hikaru Nakamura and GM Larry Kaufman both agreed that the conditions were unfair and favored AlphaZero.
      If you run a Stockfish analysis on some of the games, it will highlight plenty of suboptimal moves, which should be impossible if Stockfish had run at its peak.
      Stockfish is still the best chess engine ever, and until a fair and public rematch is held it will stay so.
      Probably that's why not all of the games are public?

  • @rlyehslament9064
    @rlyehslament9064 6 years ago +93

    this is amazing and sends chills down my spine.
    4 hours of ai learning ferociously devours the opponent, on black, better than any human master.
    thousands of years of humans playing chess has led to this.
    the google ai just completely demolished the opponent from the very start.
    every move was just beautiful.
    every pin, every ultimatum, every position, every attack, every structure, every exploitation of weakness.
    worked out in 4 hours...
    by ai...

    • @pietervannes4476
      @pietervannes4476 5 years ago +8

      4 hours, but a lot of computing power. We're talking about Google here. They used lots of supercomputers for this.

    • @sjs9698
      @sjs9698 2 years ago +2

      @@pietervannes4476 sure, but it's still remarkable.
      Of course AlphaGo (and the later-trained AIs playing Go) is also amazing - simply stunning how beautiful its moves are.

    • @theabbie3249
      @theabbie3249 2 years ago +1

      Computers can look 7-8 moves ahead; even human grandmasters can't do more than 4. Computers have the huge advantage of insanely fast memory access and computational power.

    • @solsystem1342
      @solsystem1342 a year ago

      @@theabbie3249 OK, it's still way more efficient than other AIs (at the time; now things are changing across the board).

  • @1_1bman
    @1_1bman 6 years ago +44

    everyone else: advanced chess talk
    me: "they grow up so fast!!!"

  • @FunnyAnimatorJimTV
    @FunnyAnimatorJimTV 6 years ago +181

    Rest in Peace, Stockfish 2008-2017. You will always be remembered

    • @dannygjk
      @dannygjk 6 years ago +2

      LOL

    • @deepeshmeena3117
      @deepeshmeena3117 6 years ago

      haha

    • @abcdefghilihgfedcba
      @abcdefghilihgfedcba 6 years ago +8

      You don't need to mass produce anything. $25M is the cost of the learning process. AlphaZero has already learnt chess. There would be no cost to turning it into a training tool, and it would require much less computational power than Stockfish (a brute-force program).

    • @abcdefghilihgfedcba
      @abcdefghilihgfedcba 6 years ago +2

      >But then again, if it was as simple as you say, then they would've already done it.
      I don’t think you can just assume they would just because it’s not hard to do. It’s hard to say what DeepMind’s goals/priorities are. They certainly are not going along with the various communities making requests, despite saying their AlphaGo program would be turned into a training tool months ago…

    • @sixzero7445
      @sixzero7445 6 years ago

      now there is stockfish 9 :v

  • @Megaloblocks
    @Megaloblocks 6 years ago +12

    I got into chess because of AlphaGo. I watched the analysis of the Go games and the interview with Kasparov and Google. And Google said, let's see if AlphaGo can beat the best chess engines in chess. And now they did it! I am so excited. Please do more about this topic! (Sorry for my bad English.) Love your videos :)

  • @rolandshelley5165
    @rolandshelley5165 5 years ago +33

    AlphaZero and Leela really like to structure their pawns in a way that makes their opponent's bishops useless.

  • @Tyo-yw9jh
    @Tyo-yw9jh 6 years ago +39

    At 11:48 AlphaZero plays an en passant capture. That's quite cool to see. I'm a chess noob, so I'm blown away by all of this.

  • @merlin7920
    @merlin7920 6 years ago +359

    This is a great video, very interesting.

    • @iviko23
      @iviko23 6 years ago

      do you think anyone cares about your vision? stfu and don't try to control people

  • @TWPO
    @TWPO 6 years ago +208

    Wow! I'm studying Computer Science right now and I hope to focus on machine learning soon. Neural Networks are almost as cool as Jerry!

    • @MrSupernova111
      @MrSupernova111 6 years ago +3

      Isn't machine learning more directly related to statistics and data science? Is there an overlap with computer science? I don't know, hence I'm asking.

    • @TWPO
      @TWPO 6 years ago +21

      There is a massive amount of overlap between computer science and math (see graph theory, discrete math, proofs, computational complexity, computer security, etc.). Machine learning is related to computer science in the sense that it's heavily related to graph theory, computer vision, AI, and a wide range of other topics in CS. You can't really have ML without CS (inb4 Matt Parker's tic-tac-toe video). You can't really have ML without statistics and data science either. The disciplines are anything but mutually exclusive, which is why many CS students double-major or minor in some field of math.

    • @MrSupernova111
      @MrSupernova111 6 years ago +6

      Interesting. I'm currently studying finance and statistics, so this stuff is very interesting to me. I think finance will be one of the fields most heavily affected by AI.

    • @inthefade
      @inthefade 6 years ago +4

      I hope you get your degree soon, because it will be AlphaZeros doing all the computer science soon enough.

    • @MrSupernova111
      @MrSupernova111 6 years ago

      @ memespace, I get my degree next week. Headed to school as we speak to study for finals. Thank you!

  • @fujiapple9675
    @fujiapple9675 6 years ago +19

    7:58 AlphaZero says, “not so fast.”

  • @DarkSkay
    @DarkSkay 6 years ago +30

    OMG I spent 30 years mastering chess...

    • @jackdanksterdawson112
      @jackdanksterdawson112 4 years ago +1

      It takes 4 mins now!!

    • @DarkSkay
      @DarkSkay 4 years ago +2

      Ah, life as a 2400 Elo noob

    • @raveendrank.n.3449
      @raveendrank.n.3449 4 years ago +1

      Well you can learn chess faster than alpha zero by unlocking your subconscious mind power which is 100%

    • @Mayank-mf7xr
      @Mayank-mf7xr 4 years ago

      @Emperor he is, imo, wrongly referring to Lucy. I may be wrong though. And that is a good joke.

    • @jonathanhandojo
      @jonathanhandojo 4 years ago +2

      @@Mayank-mf7xr you're right XD

  • @recklessroges
    @recklessroges 6 years ago +51

    I like the psychological difference between the chess community's reaction to Alpha* and the Go community's. Chess has had 20 years to get over the denial of computers surpassing humans, while the Go community seems to still be (mostly) in shock or denial that "their" game has also fallen. I look forward to the progress that can be made in understanding both games at a deeper level without the hampering effects of dogma. Thank you Jerry for this review.

    • @donvandamnjohnsonlongfella1239
      @donvandamnjohnsonlongfella1239 6 years ago +3

      Reckless Roges there is little value in winning at chess anymore. A bunch of human computers memorizing games won by ancient geniuses and computers that do the same. There is no uniqueness. Now Let me see a computer or a GM accept a draw if it means their bodies and parts will be boiled in acid. :) Now Chess will be a lot more interesting if 1 person has to win and the other person is shot in the fucking head. :)

    • @Schattenlord92
      @Schattenlord92 6 years ago +21

      Jesse Bowman, you're confusing "disgusting" with "interesting"...

    • @dekippiesip
      @dekippiesip 5 years ago +5

      @@donvandamnjohnsonlongfella1239 That would lead to a very awkward situation in a drawn endgame. One player would have to intentionally make a mistake to avoid being boiled alive and take the less painful shot in the head. But if his opponent intentionally makes the mistake first, he gets to live on. They both prefer being shot over being boiled alive, but living on is still better than either. So what to do?

    • @christopherthompson5400
      @christopherthompson5400 5 years ago

      @@dekippiesip now that's an excellent move! :D

    • @robertgagne8892
      @robertgagne8892 4 years ago

      Yes, I agree! There was a comment early on by a Go player who found that other players' use of certain moves derived from watching AlphaGo matches was making the game less "fun" (or something to that effect, anyway!)...
      In the human world, we applaud "game-changers" all the time...
      Are AlphaGo's "original" moves not, in fact, "game-changers" as well?
      I am both excited and terrified of what the future of AI has in store for humanity!
      It could be wonderful, but it could all go so wrong, so very very quickly :-{

  • @bradc3402
    @bradc3402 6 years ago +8

    Some of the other wins are far more mind-blowing, where AlphaZero plays in a Tal-like fashion against the Queen's Indian, gives up some material, and then absolutely overwhelms Black in the attack. You never see engines play that way. It's really going to change the game A LOT, imo, as so much of today's game is engine-based analysis, where the engines always say giving up a pawn or the exchange is bad, provided you defend properly. This really appears to be blowing up that whole philosophy, and giving new life to the attacks of players like Tal and Morphy. Which imo is a really cool thing.

  • @ThatsFinn
    @ThatsFinn 4 years ago +4

    I know nothing about chess but somehow made it through this entire video and enjoyed it

  • @justinphilpott
    @justinphilpott 6 years ago +81

    Slowly. Slowly. Strand by strand, net by net, we weave the form of our eventual masters.

    • @TehLemonsRUs
      @TehLemonsRUs 5 years ago +2

      nah

    • @jamespython5147
      @jamespython5147 5 years ago +3

      Well said!

    • @jamespython5147
      @jamespython5147 4 years ago +3

      ​@glyn hodges We are already under the total control of AI and have been for decades. How many people stop at the traffic lights when there are no other cars around in sight but still sit there like a duck waiting for the AI traffic lights to tell them to go? Everyone already knows how to give way on the roads but they fear to disobey!

    • @jamespython5147
      @jamespython5147 4 years ago

      @glyn hodges True, but my point is rather about blind obedience rather than the actual use of them. We use our own intelligence to make decisions in every aspect of our lives, but then we go completely contrary to perfect logic simply because of a machine.

    • @jamespython5147
      @jamespython5147 4 years ago

      @glyn hodges What. The fact that they would prosecute them is ridiculous.

  • @outputcoupler7819
    @outputcoupler7819 6 years ago +9

    Stockfish may analyze 70,000,000 moves per second, but 69,990,000 of them probably just lose immediately. So while Stockfish is looking at the millionth variation of some silly queen sac, AlphaZero is exploring promising lines based on patterns it recognized during training.
    The real headline isn't that a deep learning algorithm beat Stockfish, it's that it only took four hours of training to do it. I've played with machine learning algorithms at home for image recognition, and training took _weeks_ using top-of-the-line consumer hardware.

    • @criskity
      @criskity 6 years ago +4

      Stockfish and similar brute-force algorithms are designed to prune out lines that are likely to lead to quick losses.

    • @outputcoupler7819
      @outputcoupler7819 6 years ago +1

      Sure, but it has to analyze them first to find out they're bad lines. Sometimes the silly queen sac wins the game, so it can't just ignore those kinds of moves.
      No matter how you slice it, brute force algorithms spend a lot of time looking at losing moves.
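The pruning this thread is describing is, in engine terms, alpha-beta search: once one reply refutes a line, the line's remaining siblings are skipped without evaluation. A minimal Python sketch over a made-up toy game tree (the tree shape and leaf evaluations are illustrative assumptions, not Stockfish's actual search):

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax value of `node`; inner nodes are lists, leaves are scores."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent has a refutation ready
                break                       # -> prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy tree: root (max node) -> three min nodes; leaves are evaluations.
tree = [[3, 5], [6, [7, 9]], [1, 2]]
print(alphabeta(tree))  # -> 6 (the leaves 9 and 2 are never evaluated)
```

Even with pruning, a search like this still spends most of its nodes refuting bad moves, which is the commenter's point; AlphaZero instead uses a learned policy to restrict which children get visited at all.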

  • @troy7774
    @troy7774 6 years ago +5

    Jerry, can you make more videos about AlphaZero games? Here are some ideas you could consider:
    1. I like the Stockfish 8 evaluation points. After the first moves, it showed a 0.30 advantage for Stockfish; after the second moves, it was 0.26. Can you put the evaluation points after each move into a spreadsheet and then create a graph, not just for this game but for all 10 games?
    2. Because AlphaZero was not influenced by the collective chess knowledge accumulated over time, it is like an alien intelligence that's completely different. Could you analyze the opening moves and see how they compare to the assumptions of opening theory?
    3. Can you comment more on how AlphaZero might influence the future of chess? Thanks

    • @TheSteinbitt
      @TheSteinbitt 6 years ago

      Troy why don’t you do it yourself?

  • @multilingual1
    @multilingual1 4 years ago +1

    I am VERY impressed that a computer lost as White in the Ruy Lopez, and not even against the Marshall Attack! I'd like to see AlphaZero win as Black with the Marshall Attack - the one Marshall waited years to pounce on Capablanca with, but STILL lost!!!

  • @riddhimanbarma
    @riddhimanbarma a year ago +3

    (Sorry for being 5 years late.) Hello, I have probably discovered the reason for 55...Kd7. It can be explained as follows:
    When we take a look at Black's pieces, we realise that the worst piece is the knight.
    So logically we should aim to put it on a better square:
    either d4 or f4. It is practically impossible to get to d4, so naturally we plan to take the knight to f4, and the most sensible path is d6-c8-e7-g6-f4. Since e7 is occupied by the king, it is sensible to move the king to another square.
    Well... then why did it not do that later? Well, based on what I have learned from seeing its games, because it realised that there was a better immediate threat (Qg6, threatening e5) leading to a better plan (Qg4-d1 etc.).

  • @chrisiver8506
    @chrisiver8506 6 years ago +64

    Unbelievable - Stockfish was outplayed the whole game.

    • @ryanfloch6054
      @ryanfloch6054 6 years ago +2

      I feel that in this game, White could have decided to move his king to the queenside during these long, drawn-out manoeuvres. If this plan was good (I have nothing but my gut feeling after watching the moves play out - basically nothing, I agree), then it shows some remnant of the value of human intuition over the machine.

    • @anonchen7656
      @anonchen7656 6 years ago

      Yup.

  • @jesseg5793
    @jesseg5793 6 years ago +157

    Skynet doesn't just take draws.

    • @tharrock337
      @tharrock337 6 years ago +3

      Literally my thoughts. This is almost scary, like a robot saying: NO PRISONERS

    • @Infidel4LifeAdmin
      @Infidel4LifeAdmin 6 years ago +8

      To see the AI choose aggression was quite alarming.

    • @sambrookes2318
      @sambrookes2318 6 years ago +5

      The machine is taught to value the win, so it's going to look for a win and only draw to avoid a loss. The scary part should be that the machine taught itself to sacrifice pieces; it learnt their value and sacrifices them to get what it wants.

    • @medexamtoolscom
      @medexamtoolscom 6 years ago +3

      Skynet is actually pretty incompetent, it just SEEMS scary and intimidating. Like the borg. They send multiple killer robots back in time and fail to kill either a single unarmed woman or a 10 year old boy because don't forget they ALREADY lost the war in the future against the last vestiges of humans despite having VASTLY superior tools to fight against the humans with.

    • @haruyanto8085
      @haruyanto8085 6 years ago

      That's all fiction though

  • @brendaballantine1379
    @brendaballantine1379 4 years ago +2

    I appreciate you doing the work to put this video together. Chess is one of the greatest games I know.

  • @magnificcenTCG
    @magnificcenTCG 5 years ago +34

    but AlphaZero can't beat me at tic-tac-toe, I can promise you that ; )

    • @grenjasom6002
      @grenjasom6002 4 years ago +4

      it depends on who goes first

    • @homerp.hendelbergenheinzel6649
      @homerp.hendelbergenheinzel6649 4 years ago +15

      @@grenjasom6002 no it doesn't. With accurate play you can always force a draw, regardless of who begins.

    • @bestvitalic
      @bestvitalic 4 years ago +2

      @@itsnotjasper No, it doesn't depend on who goes first. We can play as many tic-tac-toe games as you want, and I can always draw or win regardless of whether I'm the first or second player.
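The claim in this thread is easy to verify mechanically: tic-tac-toe is small enough to solve outright with plain minimax. A quick sketch confirming that perfect play from the empty board is a draw (the board encoding is my own throwaway choice):

```python
# All eight winning triples of board indices (rows, columns, diagonals).
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    """Return "X" or "O" if someone has three in a row, else None."""
    for i, j, k in WINS:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Value of the position for X: +1 X wins, 0 draw, -1 O wins."""
    w = winner(b)
    if w == "X": return 1
    if w == "O": return -1
    moves = [i for i in range(9) if b[i] == " "]
    if not moves:
        return 0                    # full board, no winner: draw
    scores = []
    for m in moves:
        b[m] = player               # try the move...
        scores.append(minimax(b, "O" if player == "X" else "X"))
        b[m] = " "                  # ...and undo it
    return max(scores) if player == "X" else min(scores)

print(minimax([" "] * 9, "X"))  # -> 0 (best play on both sides is a draw)
```

By symmetry the same holds when O moves first, which is the commenter's point: with accurate play, neither side can force a win.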

  • @RogueChessPiece
    @RogueChessPiece 6 years ago +143

    70mil positions vs 80k and it got whipped? lol

    • @ChessNetwork
      @ChessNetwork  6 years ago +76

      I know right. Don't underestimate the Deep Neural Networks! 😎

    • @Ninad3204
      @Ninad3204 6 years ago +91

      Remember, AlphaZero essentially "understands" chess, allowing it to not waste computing power on what it considers dumb moves; Stockfish doesn't, and uses raw computing power to calculate through possible combinations.

    • @hey8174
      @hey8174 6 years ago +45

      The 80k positions are chosen using a neural-network-based algorithm that allows certain positions and branches to be ignored. Stockfish is brute-force evaluating all 70 million possible branches for programmed piece value and programmed positional value, while A0 evaluates very intentionally selected branches using its own dynamic, self-taught value system for branches, pieces and positions.

    • @hey8174
      @hey8174 6 years ago +101

      Stockfish finds the cheese by throwing in a thousand rats. A0 learned what cheese smells like.
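For what it's worth, the rule AlphaZero's tree search uses to pick which branch to expand next is published: a PUCT formula that trades the value measured so far against the network's prior, scaled by how unexplored the branch is. A rough sketch (the child statistics, dict layout, and the c_puct constant here are illustrative assumptions, not values from the paper):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child to expand next, PUCT-style.

    children: list of dicts with keys
      'Q' (mean value so far), 'P' (network prior), 'N' (visit count).
    """
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        # exploitation term + prior-weighted exploration bonus that
        # shrinks as the branch accumulates visits
        return ch["Q"] + c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])
    return max(range(len(children)), key=lambda i: score(children[i]))

children = [
    {"Q": 0.10, "P": 0.60, "N": 50},  # good prior, but already well explored
    {"Q": 0.05, "P": 0.30, "N": 2},   # barely explored -> big bonus
    {"Q": -0.2, "P": 0.10, "N": 10},  # weak prior and weak value
]
print(puct_select(children))  # -> 1
```

This is how the "very intentionally selected branches" get selected: a branch the network likes, or one that hasn't been tried much, gets visits; hopeless branches are simply never expanded.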

    • @robinchandler4870
      @robinchandler4870 6 years ago

      Tuc proof/documentation/citation/link?

  • @RasperHelpdesk
    @RasperHelpdesk 6 years ago +10

    The speed with which a well-programmed neural network can learn is truly astounding. Granted, they had some serious hardware in that 4-hour training session, but it is still an amazing feat. The games were played with a 1-minute-per-move time control, which makes me wonder if it has any concept of clock management - does it know when it is worth spending more time versus when it makes little difference? Another thing to consider is that AlphaZero (as far as I understand its training process) could not play an "odds" match without a separate training session, since the positions in odds matches are ones that can't arise in a classical game.

    • @jilow
      @jilow 6 years ago

      I think it thinks too fast for human-based time controls to matter.
      I also think it would be able to play Fischer Random chess or an odds match with very little additional training. For one, they could probably train for both in a day. Two, many principles of what's better will still apply, so it would probably still play pretty strong.
      Actually, I think it would do just fine right now.
      What's the difference between Fischer Random and evaluating from the middle of the board?

  • @PanicProvisions
    @PanicProvisions 6 years ago +1

    I think I might have seen one of your videos in the past, but after seeing this one I had to subscribe. Great video, I very much enjoyed it.
    Please show more of these games!

  • @jonnynik7626
    @jonnynik7626 6 years ago +13

    Not only is AlphaZero at god-level chess, but it will also pass the Turing test and convince Garry Kasparov that he's in fact a computer playing against an all-knowing alien super race. It will then commence to take over the entire Universe.

  • @alrik2148
    @alrik2148 6 years ago +107

    I'm sure I can beat that Alpha Zero... if I really concentrate...

    • @paulomonteiro1555
      @paulomonteiro1555 6 years ago +15

      Al Rik AHAHAHAHHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    • @paulomonteiro1555
      @paulomonteiro1555 6 years ago +8

      Al Rik HAHHAHAHAHAAHAHAHAHAHA

    • @paulomonteiro1555
      @paulomonteiro1555 6 years ago +8

      Al Rik LOOOOOL

    • @gdsylver1223
      @gdsylver1223 6 years ago +6

      lol

    • @alrik2148
      @alrik2148 6 years ago +11

      On a serious note, I wonder when they are going to test it against GMs... I already sympathize with them.

  • @glenm99
    @glenm99 6 years ago +14

    This reminds me of something we saw pretty consistently in the games where Kasparov beat Deep Blue. I remember one game in particular where Kasparov closed up the position and maneuvered for a long time to set things up just right for his intended pawn break. Meanwhile, Deep Blue just kind of shuffled around aimlessly and was eventually smashed.
    The lesson I think is that being a good player in closed positions requires having a strong heuristic rather than simply being able to search deep into many branches of the game tree. I mean, that seems obvious when you think about it, but in this match there's empirical evidence to support that intuition. So if Stockfish or a similar program wants to compete, it should strive to open up the position and hope to find some stupidly complicated tactic before AlphaZero does.
    I look forward to seeing people start to develop goal-oriented engines using the same sort of training strategy, GoalZero or whatever it'll be called. AlphaZero is currently playing on a sort of intuition... think of how much better you play when you develop a plan versus just picking whatever move looks best....

    • @MrSupernova111
      @MrSupernova111 6 years ago +2

      I'm tired of seeing the word "heuristic." I work in finance, and I see the word thrown around a bit much to describe essentially nothing. It's a fluff word, and its interpretation is rather subjective in my opinion.

    • @MrSupernova111
      @MrSupernova111 6 years ago +2

      Also, it seems you are not very familiar with machine learning - I don't mean that offensively. You suggest that Stockfish try to open the position, but seem to ignore that Stockfish is looking as deep into the position as possible and does not see a winning way to open the game. I think you can compare A0's abilities to intuition, as it uses neural networks that are meant to mimic organic learning rather than the brute-force calculation most programs use to this day.

    • @MrSupernova111
      @MrSupernova111 6 years ago +4

      It's interesting whether Stockfish can be tweaked to make material sacrifices in order to gain the initiative and win - I think that's what you're getting at. At any rate, it seems A0 has the advantage so far, and I wonder if it can get even stronger. Remember that A0 learned from a 44-million-game sample against itself. The number of chess positions is said to be close to infinite for all intents and purposes. What if A0 is allowed to play itself for much longer than 4 hours, like a month or a year? How much stronger can it get with the added sample size? It's a little scary and exciting at the same time when you think of the impact this technology will have on everyday life.

    • @glenm99
      @glenm99 6 years ago +4

      You completely misunderstand what I said. By the nature of the algorithms they use, Stockfish would have its best chances in an open position; however, it doesn't seem to go in for that in this game.
      No modern programs use brute force. It's all heuristic search. I know you don't like the word, but there it is. It has a technical definition, and I've used it correctly. (Maybe they use it differently in whatever portion of the financial world you find yourself inhabiting than they do in the world of AI research.)
      Look, I'll make it simple with an example. In a position like that at 9:15, Stockfish is looking 20 to 30 moves down as many lines as it thinks are feasible. It's doing some fancy pruning of the game tree to weed out obvious losers, but then it picks the best line that it can force, and uses that to conclude that the position is equal. But AZ doesn't use that kind of reasoning. AZ uses something more akin to intuition in that it first evaluates the board as it is. It's saying, hey, I ought to be ahead because of whatever links its NN has encoded as being important to the position. There might be a positive association for having space (it seems to value keeping the opponent on the other side of the board), and there might be some connection with closed positions and having two knights... it's hard to say. But that's how a human would reason also, a balancing and weighting of probabilities. You look at that board and you immediately think, yeah, Black is a little bit ahead here. And then AZ picks a few moves and looks at whether or not its intuition says they're good. It investigates the ones it likes, and discards the others.
      So hopefully you can see why AZ has such a big advantage in this position. Stockfish is stuck considering a lot of positions where pieces move but don't immediately seem to improve things, and it can't see the difference. Deep Blue had the same problem. AZ doesn't worry about that; it's playing the long con. But in an open position, Stockfish might have a better chance, because each move has more direct impact, and its ability to look through a lot more lines is an advantage. It doesn't have that overhead of having to think about the position too hard at every single turn. But that may be built into AZ in a way... it "knows" which positions it excels at, just by nature of the way it selects which branches to investigate, so maybe we'll find that it prefers closed positions just as a natural consequence of the way it has learned.
      And hopefully you see that my use of the word "intuition" has little to do with the organic parallels, though a NN tends to be flexible enough to be really good at the kind of classification required for the search it uses. That's what I mean by having a strong heuristic. It's good at deciding the value of the board without looking much ahead.

    • @Aedalor
      @Aedalor 6 years ago +3

      MrSupernova111 no need to be condescending, dude. Also, did you look at the learning curve? It seemed pretty converged on the optimal solution; I don't think running it much longer would help that much. Finally, the comparison to humans should not be overstated: 80k moves is still way beyond what humans are capable of, as is playing 44 million games in 4 hours. Neural networks show remarkable similarity to human data, but only sometimes.

  • @victorgrauer5834
    @victorgrauer5834 6 years ago +3

    Four hours on a supercomputer is the equivalent of thousands, if not millions, of hours, for a human. So I'm not impressed by the timing. What IS impressive is the fact that this program learned chess from scratch and became so incredibly good without being fed any opening book or theory. Tremendous breakthrough! Thanks so much for this video.

  • @K0nomi
    @K0nomi 3 years ago +2

    11:07 you say you have no idea what's going on with that move; however, look at the Stockfish evaluation drop drastically as soon as it's made.

  • @gdsylver1223
    @gdsylver1223 6 years ago +76

    Please show the incredible immortal zugzwang game they played, or one of the other incredible tactical Tal-style games!

    • @Pelaaja93
      @Pelaaja93 6 years ago +2

      Game 3

    • @It9LpBFS37
      @It9LpBFS37 6 years ago

      GD Sylver that game gave me goosebumps

  • @656520
    @656520 6 years ago +23

    It's amazing - it's like REAL AI. It's programmed to learn and understand a certain objective in a certain environment (a chess game), not to load and compare hundreds of possibilities to speculate (or calculate?) whether a line can work. It is unbelievable. I saw a demo of the algorithm learning to play some old video game, and at the start it simply sucks like any human, but then they let the algorithm play the game for 8 hours and oh boy, it learned. Really mind-blowing.

    • @fusionwing4208
      @fusionwing4208 6 years ago +1

      656520 AI will get the Super Mario Bros world record soon? XD

    • @rabigrel1071
      @rabigrel1071 6 years ago +1

      Fusion Wing It did. Check out the "MarI/O" algorithm. It's a fun machine learning project that helped some people learn the best way to get the world record for certain levels.

    • @jamesliu3295
      @jamesliu3295 6 years ago

      DJ - Rocket Man - To be fair, the AI bot was playing 1v1 against pro players. A full-on 5v5 would have millions of discrepancies that the AI would have to learn, and I don't think we have the processing power to handle it.
      While AI can handle chess, there are only ~20 basic rules it has to follow.

    • @satibel
      @satibel 6 years ago

      OpenAI has announced that they hope to make it work in 5v5 for The International 2018, so expect it.

    • @GraveUypo
      @GraveUypo 6 years ago

      Those game demos aren't really "learning" per se. In most of those videos the AI just discovers a pattern that works and keeps improving on it.
      The easiest way to prove that is to see that the data it gathers for one level does not translate AT ALL to the next; it has to start all over again for each new level.
      This is a bit more complex in the way it uses the data it gathers.

  • @peckdec
    @peckdec 3 years ago +3

    It’s interesting how understandable these moves are, for the most part, for humans.

  • @TaiNguyen-bg3cl
    @TaiNguyen-bg3cl 6 years ago +2

    When I was a child I thought Skynet was a joke, but now that I've seen a computer teach itself chess and become the best in just four hours, I know that Skynet isn't so far away.

  • @nemplayer1776
    @nemplayer1776 6 years ago +9

    In 2017, three major self-learning AIs were created: DeepMind's AlphaGo for Go, OpenAI's AI that plays the video game Dota 2, and DeepMind's AlphaZero for chess. All of these proved that within a few months of learning, they could play better than anything so far. OpenAI's Dota 2 AI really surprised everyone, as it really mimicked human play; it learned how to bait (trick players into coming to it so it could kill them), and that was really surprising. DeepMind's AlphaGo helped Go players discover some new tactics that had never been seen before, and who knows what AlphaZero will discover for chess. Like that Ra8 move - what is it doing, what is the deeper meaning behind it? Is it just an error by AlphaZero, or is it some idea that we've never seen before?

    • @InXLsisDeo
      @InXLsisDeo 6 years ago +2

      AlphaZero is completely general. It can play any game and beat humans in hours, not just chess. And yes, it's likely that it knows strategies unknown to us.

    • @bowskiechessplaya3337
      @bowskiechessplaya3337 6 years ago

      Ra8 is to prevent Nxa4. After Nxa4 Bxa4, simply b3 is a reasonable sacrifice to gain play on the queenside and a possible pawn break. Stockfish would never sacrifice, but A0 would consider it.

    • @K4inan
      @K4inan 6 years ago

      But... Jerry indirectly explained it at 12:18. Ra8 made the knight useless, and then see what happens because of that move.

  • @Twas-RightHere
    @Twas-RightHere 6 years ago +8

    This is insane. It's become the best chess player to ever exist in under a day, just imagine what real world applications it could have in the near future.

    • @icuppu2
      @icuppu2 6 years ago +2

      The destruction of destructive mankind, and the rise of the constructive machines. Maybe that is what modern man did to the Neanderthals - but we ate them.

  • @Luck_x_Luck
    @Luck_x_Luck 6 years ago +8

    The really interesting thing is the evaluation by Stockfish.
    Since it determines which move to make based on evaluation, moves are made fast when the evaluation greatly increases (i.e. -0.5 to 0.0), but then after a series of quick exchanges AlphaZero manages to bring it back even lower, meaning it has learned a different type of "good" move sequence.

  • @johnsnow5305
    @johnsnow5305 5 years ago +3

    It's weird to say, but this is one of the best games I've seen. I was wondering how it was possible for something to win that has less computing power (moves/second analyzed), and I think I figured it out. It looks like AlphaZero is using concepts to win, rather than points. This is really amazing, because this is what a human would do (though obviously we subconsciously can have point values for our pieces as well). You see it using a lot of the concepts that we learn about in chess, such as active pieces, how much coverage a piece has (not being locked down included), lots of maneuvering to maximize each piece's potential...etc. I think if a Human could analyze 80,000 moves / second, they would play like this. Now I need to learn Go and see how AlphaGo won lol.

  • @stevenwilson5556
    @stevenwilson5556 6 years ago +67

    I notice that after you left your Stockfish engine evaluating once the game had ended, the evaluation keeps getting more negative, showing that the deeper it evaluated the position, the worse off it realized it was. Stronger computer hardware would likely have shown even more negative scores than what you showed in this video.

    • @Bozothcow
      @Bozothcow 5 years ago +6

      That's likely why Stockfish resigned in this position. With more powerful hardware it sees that its position is unwinnable.

    • @favesongslist
      @favesongslist 4 years ago

      This misses the point of machine learning and the move toward AGI; it is not about chess programming by humans or the computing power Stockfish had.

  • @feyzullahsezgin6687
    @feyzullahsezgin6687 6 years ago +22

    AlphaZero played like a brilliant GM, not a computer.

    • @anonchen7656
      @anonchen7656 6 years ago +3

      Aren't you worried? If it learns chess in 4 hours, how long will it take to learn how to "lead" people around on the internet via which sites you see first when googling, which recommendations you see on YouTube, and so on? I really think AIs can be a good and great thing. Also, I'm not sure at what point "AIs" become more than a "thing" and become a conscious being. AlphaZero has a WILL to WIN. It was given neural networks and the chess rules... and, obviously, a will to win. And that's where I see the problem. It is a very mighty AI; it's not a human being. Even in human beings the will to win causes problems. In fact most problems we know come from that, wouldn't you agree? Supercomputers talk to each other in languages we don't understand. In the digital world they move very, very fast and they can do multiple things at once. The idea that we give AIs a will to win scares me. What is the meaning of life? Is it to be, is it to win, is it to reproduce? Since I don't think that AIs can feel, just being for the sake of being and feeling good doesn't seem to be a very likely task for an AI. But if the objective is to win, or reproduce, they will have ways and capacities that we can't begin to grasp with our midget minds. So if we continue to give AIs humanish ambitions and goals... I'm scared shitless. I don't think we'd even comprehend what was happening to us if they were trying to win. Just look at this chess game. AlphaZero played in a way that simply negated all options for Stockfish... except the option to lose. So slowly but surely that's what Stockfish did, one move at a time, thinking it was winning for quite a while... the match was already over for AlphaZero at that point... it favored itself... it was sure to win... it didn't take the draw, and Stockfish kept moving towards nothingness until it had to admit inevitable defeat. And I don't see why we would live through a different fate than Stockfish.
      We are a very, very, very dumb species, but we tend to forget that.

    • @donvandamnjohnsonlongfella1239
      @donvandamnjohnsonlongfella1239 6 years ago

      feyzullah sezgin and probably both the computer and the GM can't run their own lives in the least bit livable manner. :p Both would end up jobless, homeless, and alone. Socially inept, physically incapable, completely incompetent at self-survival in a group or alone.

  • @voltanzapata8024
    @voltanzapata8024 6 years ago +1

    Alpha Zero executed a superior pawn structure defense/offense and never wasted efficiency with non-committal moves allowing it to take away any advantage white may have had!💥👊🏼

  • @tjgallagher7631
    @tjgallagher7631 3 years ago +1

    The fact that it can only play with what's programmed into it, and that it cannot lose or forget that programming once programmed, makes it all but invincible, save for a few bad selections in its programming which could result in losses, since the many different selections it can choose from for any placement of the pieces upon the board at any particular part of the game may come from programmed games between human players, some of which ended up as losses in actual play.
    Get it, Got it, Good!

  • @puct9
    @puct9 6 years ago +108

    Chess [ TICK ]
    Shogi [ TICK ]
    Go [ TICK ]
    Cancer [ ]

    • @davvigtu
      @davvigtu 6 years ago +6

      Wait for IBM to get their quantum computer chemical simulation working for even bigger molecules, then connect that up with this system, and maybe you could have it build small molecule drugs to beat cancer :)

    • @NoNameAtAll2
      @NoNameAtAll2 6 years ago +11

      SpudShroom
      I'd love to play cancer treatment game

    • @InXLsisDeo
      @InXLsisDeo 6 years ago +7

      Thermonuclear war [ ]

    • @InXLsisDeo
      @InXLsisDeo 6 years ago +7

      The author of the most famous textbook on machine learning says that a machine could solve the cancer problem this way: find a molecule that seemingly solves cancer and in fact includes a ticking time bomb. Wait for humanity to be treated with this molecule, then trigger the time bomb and kill all the humans. Cancer problem solved.

    • @davvigtu
      @davvigtu 6 years ago +3

      Sounds like more of a pain than just making a cancer molecule the right way. Finding something that seems to solve cancer, is a time bomb, fools humans, and is otherwise safe is a much narrower target than finding something that actually does solve cancer. :-p Methinks the author has been watching too much Terminator. I wouldn't worry about that any more than I worry about AlphaZero figuring out a Rowhammer attack and then winning every match by cheating.

  • @salakamen1113
    @salakamen1113 6 years ago +63

    Chess changed forever after this match. And maybe the rest of the world too.

    • @gidmanone
      @gidmanone 6 years ago +14

      Chess changed forever when Deep Blue beat Kasparov

    • @victorm.rodriguezf.2566
      @victorm.rodriguezf.2566 5 years ago

      @@gidmanone I find this one waaay more relevant. It was clear from the beginning that computers have more raw calculation power, but this is another game entirely (pun intended)

  • @nicolasgauthier9382
    @nicolasgauthier9382 6 years ago +1

    It's becoming a more interesting pleasure to watch the AI progress than to learn chess by playing it ourselves

  • @Adomas_B
    @Adomas_B 3 years ago +2

    It took only 4 hours, but it was on Google's computer hardware, so on a normal PC it would've taken way longer

  • @henrilemoine3953
    @henrilemoine3953 6 years ago +172

    This is scary

  • @SpaceCadet4Jesus
    @SpaceCadet4Jesus 6 years ago +8

    Obvious that Alpha Zero and Stockfish were not really playing normal chess for the first 43 moves; in actuality they were using chess as an algorithm to communicate with each other and establish their hierarchy. At move 44 Alpha Zero made the first capture to signify its AI leadership over the older AI, while Stockfish humbled himself to a ground-down finish. Everything Alpha Zero needed to know was learned from its predecessor Alpha Go. .......and "All your games are belong to us!"

  • @Yev371
    @Yev371 5 years ago +1

    Is the voice computerized here, like one of those new Google voices that make appointments for you? This guy sounds exactly like that, with these weird pauses before each number.

  • @mikegamerguy4776
    @mikegamerguy4776 3 years ago

    The singularity is near... now it's over 3 years later and learning AIs like this have learned complex real-time strategy/MOBA games on PC and were very successful. I don't know what they are up to now.

  • @intellagent7622
    @intellagent7622 6 years ago +15

    Wow, this is really surprising. I thought Stockfish would be able to draw every time. I thought Stockfish gave perfect moves. But hey, "There's always a bigger fish" ;)

    • @JecIsBec
      @JecIsBec 6 years ago +8

      Notice how there's always one move where Stockfish realises it's lost, up until which point it had a positive score according to itself. That's because it assumes the opponent is playing perfect moves as well. However, when something like AlphaZero plays a strange move, it can bait Stockfish. That's the difference between AI and insane computational power. Impressive AF lol

    • @shortstacksport
      @shortstacksport 5 years ago +3

      @Mark Weyland Either you didn't pay attention to the video or you didn't understand it. AlphaZero looks at far fewer board states than Stockfish does. In other words, the hardware requirement for Stockfish is more demanding.

  • @Draugo
    @Draugo 6 years ago +4

    God dammit. Why is YouTube full of random channels making interesting videos? I need to go to sleep, but I guess I'll just binge on your videos instead :D

  • @aortaheart1910
    @aortaheart1910 6 years ago +6

    For further context, one should check out the achievements of AlphaZero's predecessor, AlphaGo. While measurably less proficient at self-improvement, it was able to (as mentioned in the video) conquer Go. I believe this point should have been given more attention in the video, because the board game Go is generally considered to be a much more macro-scale or strategic affair compared to chess: while the rules in Go are much simpler, the board is much larger (nineteen by nineteen), and the aforementioned simplicity gives rise to extremely varied scenarios. Due to the impracticality of using brute-force optimization searches reminiscent of Stockfish at such a scale, AlphaGo was developed as a combination of machine learning and tree search techniques, and trained against itself until it was able to beat human grandmasters.
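
The comment above describes AlphaGo's recipe: a learned policy/value network guiding a tree search instead of brute force. A minimal illustrative sketch of that PUCT-style search on a toy counting game follows; the game, the `value_net`/`policy_net` stand-ins, and all names here are assumptions for illustration, not DeepMind's code:

```python
import math

TARGET = 10  # toy game: players alternately add 1 or 2; whoever says 10 wins

def value_net(state):
    """Stand-in for the learned value network: an oracle for this toy game.
    Returns the position's value from the perspective of the player to move."""
    return -1.0 if (TARGET - state) % 3 == 0 else 1.0

def policy_net(state):
    """Stand-in for the learned policy network: a uniform prior over legal moves."""
    moves = [m for m in (1, 2) if state + m <= TARGET]
    return {m: 1.0 / len(moves) for m in moves}

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}
    def q(self):  # mean value from the perspective of the player to move here
        return self.value_sum / self.visits if self.visits else 0.0

def select(node, c_puct=1.5):
    """PUCT rule: exploit known value (minus the child's Q, since the child is
    the opponent's turn) plus an exploration bonus driven by the prior."""
    return max(node.children.items(),
               key=lambda mc: -mc[1].q() + c_puct * mc[1].prior
                              * math.sqrt(node.visits) / (1 + mc[1].visits))

def mcts(root_state, n_sims=100):
    root = Node(1.0)
    for _ in range(n_sims):
        node, state, path = root, root_state, [root]
        while node.children:                  # 1. walk down the tree
            move, node = select(node)
            state += move
            path.append(node)
        if state == TARGET:                   # terminal: the player to move lost
            value = -1.0
        else:                                 # 2. expand, evaluate with the "nets"
            for m, p in policy_net(state).items():
                node.children[m] = Node(p)
            value = value_net(state)
        for n in reversed(path):              # 3. back up, flipping sides each ply
            n.visits += 1
            n.value_sum += value
            value = -value
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

print(mcts(0))  # from 0 the winning move is 1 (leave the opponent on 1, 4, or 7)
```

The point of the sketch is the division of labor: the network supplies priors and leaf evaluations, so the search needs only a few simulations rather than an exhaustive tree, which is roughly why AlphaZero can look at thousands rather than millions of positions per second.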

  • @Lorendrawn
    @Lorendrawn 6 years ago +2

    I had a nagging suspicion as I was watching the first 5 minutes of the video that White was treading water while Black was slowly consolidating and getting ready to move ahead. 8:02 gave me chills - It shows deliberate, killing intent. It's like AlphaZero was spoiling for a fight and it'll be damned if it draws a game it feels it can fight. AI is gonna be AWESOME

    • @madskroghnielsen704
      @madskroghnielsen704 4 years ago

      Well yes, that is how the Reinforcement part works when the 'punishment' for a draw is to have no points... As the ambition is to win for both players, and you will win by gaining scores, the draw is less attractive than usual. There is a lack of risk aversion in AlphaZero ;)

  • @AxlKai
    @AxlKai 6 years ago +13

    If you guys want more of AlphaZero check agadmator's Chess Channel, he has a few uploaded. You're welcome.

    • @TheManxLoiner
      @TheManxLoiner 6 years ago

      #letmegooglethatforyou :P

    • @Fiercygoat
      @Fiercygoat 6 years ago

      Yeah but Jerry's analysis is something else.

    • @AxlKai
      @AxlKai 6 years ago +1

      Markel Stavro I didn't say I doubted Jerry did I? I was just stating if people wanted to see more AlphaZero games they can find them at agadmator's channel. I equally like both these guys.

    • @Fiercygoat
      @Fiercygoat 6 years ago

      vPsy - If I am going to watch the AlphaZero masterpieces, I might as well do it with Jerry's very unique and instructive analysis instead of watching someone's rushed, detail-lacking analysis that would not let me fully appreciate this chess revolution that Google brought about.

  • @johnchessant3012
    @johnchessant3012 6 years ago +107

    Er, AlphaZero made a huge blunder on move 30...
    Lol kidding, I'm rated like 900, who am I to say anything about this game?

    • @InXLsisDeo
      @InXLsisDeo 6 years ago +13

      ;) I know you're kidding, but if A0 had made a blunder, Stockfish would have exploited it immediately. Stockfish 8 has an Elo of 3400 or something like that.
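
On that rating point: the Elo model maps a rating gap to an expected score with a logistic curve, which is roughly why a ~3400-rated engine would punish any genuine blunder. A quick sketch of the standard formula (the 2800 comparison rating here is just illustrative):

```python
def elo_expected(r_a, r_b):
    """Expected score (win = 1, draw = 0.5) of player A against player B
    under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Equal ratings give an even 50% expectation.
print(elo_expected(3400, 3400))   # 0.5

# A 600-point gap (engine vs. a ~2800 grandmaster) is already about 97%.
print(round(elo_expected(3400, 2800), 2))
```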

    • @johnchessant3012
      @johnchessant3012 6 years ago +1

      InXLsisDeo Yeah :D

    • @Sqid101
      @Sqid101 6 years ago +24

      But what if it were a blunder that only an entity rated over 4500 Elo could recognize as a blunder? ;-)

    • @dannygjk
      @dannygjk 6 years ago +55

      AlphaZero: "Mate in 97 moves."

    • @anonchen7656
      @anonchen7656 6 years ago +8

      Squid, and there probably was, but we won't figure that out by ourselves, and there lies the problem. We are way too dumb to know if we have a chance to win. Stockfish was too dumb to know it was losing for quite a while. I guarantee humankind will ultimately lose a lot with these endeavors concerning AIs, giving them way more power than a human ever could have, while at the same time giving them very primitive human ideas like: win! Or: "make him lose!" or "replicate" (at some point that'll come).

  • @DarkSkay
    @DarkSkay 6 years ago

    Four hours with just the rules of chess. Unbelievable!
    This is so humbling. And a totally different dimension than the achievements of the 90s, when traditionally programmed chess engines beat 99% of chess players and later surpassed them all.

  • @FlyAVersatran
    @FlyAVersatran 6 years ago +1

    Super super great commentary. Thank you.
    I hope your chess analysis never had to bite on rocks.

  • @ShayWestrip
    @ShayWestrip 6 years ago +31

    I've been so captivated by AI recently; I really do think it is humanity's last frontier. Us leaving behind AI is much more likely than colonizing Mars or interstellar travel. Crazy stuff to come in our lifetimes, that's for sure.

    • @ChessNetwork
      @ChessNetwork  6 years ago +7

      This match has certainly sparked my thinking. It's a very interesting moment in our lives.

    • @ironcito1101
      @ironcito1101 6 years ago +1

      Once we have a strong AI, in a couple of hours it'll be like: "So you haven't even figured out FTL? Sheesh. Here, I'll teleport you to Mars."

    • @mrkhoi3
      @mrkhoi3 6 years ago

      Well, if you're familiar with the ML field you would know that Terminator will not happen in the next 100 years ;) (not saying it won't happen; in fact it is very likely to happen). The amount of work on theoretical foundations for true AI is still very lacking, unfortunately (or fortunately?).

    • @InXLsisDeo
      @InXLsisDeo 6 years ago

      Call me crazy, but I have hypothesized that:
      1) any extraterrestrial intelligence that we can detect because it emits signals is likely to be either extinct by the time we detect it, or way, way more advanced than we are. That's based on the difference between the speed of the evolution of man and of technical advances.
      2) this ET intelligence is either made of, or results from, machine intelligence. Because at one point the civilization has built thinking machines, and those machines have taken over and wiped out their biological creators.

    • @mrkhoi3
      @mrkhoi3 6 years ago

      Well, you are not wrong. AI might grow before the scientists acknowledge its real ability. But we do not have any systems that even come close to addressing the goals of:
      + generating multiple meaningful types of outputs from a non-pre-defined set, depending on the input
      + being able to learn to make non-programmed decisions, translating them into actions.
      These limitations are by design, so there is no magical way for AI to get smarter no matter how complex your architectures are. When talking about theoretical foundations, I mostly think about the capacity of the systems, what they are able to learn, assuming there are some ways to teach them. No existing systems even come close to having the above abilities. And I am not talking about explaining how they work, just whether they can do it in the first place, based on their design.
      There are still tons of things to do before we could even start to look into these problems. Suppose one day some systems addressing these are proposed; it will take a few decades for training techniques to be formed, and that is in the case where these systems are not left in the dust because everyone thinks they do not work.
      You can imagine a highly intelligent person who, reborn in the forests, would sadly end up dumber than an average Trump voter.
      TL;DR: Scientists might not even be aware of why and how effective their methods are, especially AI stuff, but in the end they still need to invent these methods first, and right now the methods for most key problems are non-existent. Again, I am in no way against any of the points you and others made here; I just want to give a more realistic prediction for the timeline. :)

  • @littlezimty
    @littlezimty 6 years ago +43

    It doesn't seem to use tactics at all! It just plays positional moves, defends squares in its position, slowly moving forward. Reminds me of how they described Magnus as squeezing water from a stone.

    • @JamesBrown-wy7xs
      @JamesBrown-wy7xs 6 years ago +8

      Positional (play) bias confirmed as stronger chess?

    • @sleepib
      @sleepib 6 years ago +8

      If neither side makes tactical mistakes, you don't get to see those variations. Either that, or Stockfish didn't force very tactical positions.

    • @MrSupernova111
      @MrSupernova111 6 years ago +28

      @Alex, you can't have tactical combinations if the opponent doesn't make any clear mistakes. Stockfish was good enough to only make very small positional mistakes that took many moves to exploit. I'm sure if A0 played against you it would have little trouble finding tactical combinations to finish the game early.

    • @lewiszim
      @lewiszim 6 years ago +16

      That's because neither Stockfish 8 nor AlphaZero allow tactical shots. These truly terrifying machines pounce on any tactical error you commit so hard it makes your head spin.

    • @vortexkd
      @vortexkd 6 years ago +15

      It's possibly because when you're as good as Stockfish, the tactics don't come within ten moves, maybe? :D
      The tactics are limited to being threats, because the opponent is forced into a worse position in order to avoid those tactics (that mere humans can't see). Just a theory. A chess theory.

  • @imlieksokewl123
    @imlieksokewl123 6 years ago +1

    11:48 That move made no sense. How did Alpha's pawn take f3 when White moved to f4?

  • @redpeaux2107
    @redpeaux2107 5 years ago +2

    Jerry!!! More AlphaZero please. We're all addicted, and you're the best chess channel BY FAR to listen to. Thanks!

  • @Ratatosk80
    @Ratatosk80 6 years ago +73

    I have read some criticism that Stockfish was run on inferior hardware and that the games were played with just 1 minute per move, so with better hardware and longer time to calculate Stockfish would have performed much better. In short, the claim is that it's a very successful publicity stunt by Google and not really accurate.
    Is there any truth to this?
    Personally, I think it would be very interesting to see how the matches would have played out if both engines had, say, a couple of hours per move. Just draw after draw, or would we see some insane chess from AlphaZero to overcome it? Just the question of how to overcome the insane amount of brute force that Stockfish would put out would be fascinating. Chess would be better for it.

    • @Otherhats
      @Otherhats 5 years ago +5

      At this point, they have done similar things. AlphaZero wins most of the time anyway; it's a beast.

    • @fasligand7034
      @fasligand7034 5 years ago +14

      As I understand it, AlphaZero takes a lot of time to learn, but once it has learned, there isn't much left to evaluate or ponder. So once all the internal variables of the network are set by, say, 4 hours of playing with itself, the act of playing a game is a simple calculation. IMO AlphaZero won't benefit from longer time per move nearly as much as Stockfish, which will be able to investigate the situation much deeper. Hence I think that given enough time, Stockfish would start to draw more and more frequently.

    • @Lord_Volkner
      @Lord_Volkner 5 years ago +7

      @@fasligand7034 "4 hours of playing with itself ..." Is that how long it takes AlphaZero to get through the entirety of internet porn?

    • @quonomonna8126
      @quonomonna8126 4 years ago +2

      Stockfish was run on a CPU with 44 cores; AlphaZero uses GPUs, sort of like decryption programs do, and AlphaZero only had 2 video cards to work with. Even if they were the best on the market, that 44-core CPU was probably more expensive, and even then I'm not sure if that means anything... Stockfish is calculating 70,000,000 positions per second here; AlphaZero is only doing 80,000 per second... so what is really going on here is that AlphaZero is thinking about the game very differently than Stockfish
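
The node counts quoted above make the asymmetry concrete: AlphaZero searched roughly three orders of magnitude fewer positions per second and won anyway, so each of its evaluations must carry far more information. The arithmetic on the quoted figures:

```python
# Positions evaluated per second, as quoted in the comment above.
stockfish_nps = 70_000_000   # brute-force alpha-beta search
alphazero_nps = 80_000       # neural-network-guided search

ratio = stockfish_nps / alphazero_nps
print(f"Stockfish examines ~{ratio:.0f}x more positions per second")  # ~875x
```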

    • @Lord_Volkner
      @Lord_Volkner 4 years ago +1

      @@quonomonna8126 That all makes sense except the part about A0 having only two graphics cards to work with. Graphics cards have nothing to do with its game-playing processing power, only how well it processes and displays graphics.

  • @byrushyt1834
    @byrushyt1834 6 years ago +3

    This is absolutely amazing. Beating the best chess engines after 4 hours of learning chess from scratch literally means that everybody can solve every problem within hours using deep neural networks and a general reinforcement learning algorithm.

  • @dogoneshame
    @dogoneshame 3 years ago +1

    @11:10
    The Rook move reduces pressure on the pawn, encouraging an attack that it knows it can win while reinforcing the side it knows will be a major player since the rook is no longer needed for the defense.

  • @milehighslacker4196
    @milehighslacker4196 6 years ago

    Stockfish 8 on my Mac shows mate in 25 after the move 89. Kf2; perhaps the Stockfish playing in this game saw a forced mate in xx (26??) moves at move 88 and decided to resign... Great video!!

  • @pr0szefu
    @pr0szefu 6 years ago +13

    I wonder if AlphaZero can deal with puzzles that are for humans only (Stockfish etc. can't do it)

    • @yrrahyrrah
      @yrrahyrrah 6 years ago +4

      This.

    • @tatoforever
      @tatoforever 6 years ago +6

      AlphaZero uses a general-purpose learning algorithm, which means it can learn to play other games if the basic rule set is not too complicated. So the answer is yes.

    • @leonmozambique533
      @leonmozambique533 6 years ago

      yea y not

  • @danielmanahan692
    @danielmanahan692 6 years ago +5

    As soon as White's pawn moved to d5 and that tall pawn was blockading it, I kept wondering how Black's knight would switch jobs with that bishop and blockade there. It had to maneuver in a way that White's pieces couldn't capture it. White should have done everything to prevent that knight from getting to the d6 square; that knight there was worth a rook.

    • @pietervannes4476
      @pietervannes4476 5 years ago

      If that were possible and the best option, we would have seen it on the board. We humans know nothing compared to these computers.

  • @bruceli9094
    @bruceli9094 6 years ago +2

    AlphaZero to Stockfish: This is what real A.I looks like.

  • @pauls5745
    @pauls5745 6 years ago +1

    I gather that the powerful neural network Alpha uses is able to form a very deep positional evaluation that Stockfish and any other engine can't equal; that's where DeepMind focused their development. All other engines are inherently tactical and just don't have much positional understanding, no matter how far they can evaluate a tree. Alpha gained minute positional advantages just out of the opening that Stockfish grossly underestimated, and Stockfish was quite lost even while thinking it was up 0.5 pawns.
    I predict a different approach to the development of the top engines will result from this performance