Google's self-learning AI AlphaZero masters chess in 4 hours

  • Published: Dec 10, 2024

Comments • 2.8K

  • @natereeves2807
    @natereeves2807 7 лет назад +2384

    I wish alphazero could provide commentary on its own games

    • @Wenyfile
      @Wenyfile 6 лет назад +92

      Nate Reeves now that would be a cool (but insanely difficult ) feature to program

    • @privateagent
      @privateagent 6 лет назад +28

      Christoffer Jonsson it's not difficult at all. Stop the nonsense

    • @Wenyfile
      @Wenyfile 6 лет назад +177

      Can you show me a neural network that while doing very complex tasks can explain every single decision it makes in detail? No you can't

    • @privateagent
      @privateagent 6 лет назад +27

      Christoffer Jonsson you can output everything, it's only up to the developers to implement that.

    • @Valvex_
      @Valvex_ 6 лет назад +57

      And what makes you think that's not difficult at all?

  • @commodoreNZ
    @commodoreNZ 5 лет назад +510

    1500 years vs 4 hours. That will stick with me

    • @averycarty7772
      @averycarty7772 4 года назад +16

      I wonder how many games it played a minute over those 4 hours

    • @commodoreNZ
      @commodoreNZ 4 года назад +8

      @@averycarty7772 no doubt that after 2-3 minutes it would learn to beat a million chumps like me :)

    • @averycarty7772
      @averycarty7772 4 года назад +6

      @@commodoreNZ it would have beat me on it's first game :)

    • @VideoBee_YT
      @VideoBee_YT 3 года назад +5

      @@averycarty7772 no, it has to lose like 20 times to learn first moves, then it has to learn strategies

    • @VideoBee_YT
      @VideoBee_YT 3 года назад +1

      @@commodoreNZ it could teach us strategies

  • @BrentAureliCodes
    @BrentAureliCodes 7 лет назад +740

    I don't think many people realize that while it took 4 real-world hours, it took thousands of computing hours. They shard AlphaZero into hundreds/thousands of instances and have them all play each other at once, then combine the data, advance itself, and repeat. It wasn't teaching itself by playing 1 game at a time really quickly over 4 hours. Not that it matters though, just an FYI! Amazingly impressive.

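    A rough sketch of the sharded self-play loop described above (a toy Python illustration, not DeepMind's code; the worker and learner functions are made-up stand-ins):

    ```python
    # Toy version of "many instances play at once, then combine data and advance":
    # worker processes generate self-play games with the current weights, a single
    # learner pools their data and updates the weights, and the cycle repeats.
    import random
    from multiprocessing import Pool

    def play_one_game(weights):
        # Stand-in for one self-play game: returns fake (position, outcome) pairs.
        return [(random.random(), random.choice([-1, 0, 1])) for _ in range(10)]

    def update_weights(weights, games):
        # Stand-in for a training step on the pooled self-play data.
        pairs = [p for game in games for p in game]
        return weights + 0.01 * sum(outcome for _, outcome in pairs) / len(pairs)

    def train(num_workers=8, iterations=5):
        weights = 0.0
        with Pool(num_workers) as pool:
            for _ in range(iterations):
                games = pool.map(play_one_game, [weights] * num_workers)  # play in parallel
                weights = update_weights(weights, games)                  # combine and advance
        return weights

    if __name__ == "__main__":
        print(train())
    ```
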
    • @anarchismconnoisseur2892
      @anarchismconnoisseur2892 6 лет назад +84

      That is completely true, and it makes it even more amazing that it can do thousands of hours of learning in just four hours. If humans could do that, then nothing could stop us..... oh fuck.

    • @tormentedbacon4573
      @tormentedbacon4573 6 лет назад +15

      Exactly what I was thinking. 4 hours isn't really a measurement.

    • @mapleace6185
      @mapleace6185 6 лет назад +44

      Brent Aureli's - Code School Anyone else think of Naruto using shadow clones to train when they read this or just me?

    • @busTedOaS
      @busTedOaS 6 лет назад +21

      I disagree. Calculations are always divided up in some manner. Your computer has multiple cores. Every core has multiple ALUs. Why draw the line specifically at an ethernet connection? In your eyes, would it count if it were a single gigantic motherboard? Why (not)? There are a lot of problems with drawing that kind of arbitrary distinction.

    • @emissarygw2264
      @emissarygw2264 6 лет назад +4

      It only counts if it was computed on an iphone

  • @FrancisSims
    @FrancisSims 6 лет назад +124

    This is the ballsiest AI I've seen since Allen Iverson...

    • @gabeyarris5978
      @gabeyarris5978 3 года назад

      LOL

    • @since1876
      @since1876 3 года назад +1

      I don't know who that is but I assume this is a hilarious joke if I did

    • @nbachillzone8725
      @nbachillzone8725 2 года назад

      Allen Iverson is one of the greatest basketball players ever; he changed how point guards are used and coined many dribble moves that newer hoopers replicate

  • @RecalcitrantBiznis
    @RecalcitrantBiznis 5 лет назад +132

    "and this bishop has been suffering from tall pawn syndrome..." hahahahaha hahaha....

    • @since1876
      @since1876 3 года назад

      I'm pretty sure that's the exact phrase that A0 was thinking when it decided to free the bishop 😂

  • @alephnull4044
    @alephnull4044 7 лет назад +499

    I wish I could teach myself chess in 4 hours and then crush the World Champion in a 100 game match.

    • @illu45
      @illu45 7 лет назад +74

      You just need a neural implant with AlphaZero on it ;)

    • @mojtabaes2744
      @mojtabaes2744 7 лет назад +1

      Ya, you wish!

    • @Yesterdayis2soon
      @Yesterdayis2soon 7 лет назад +35

      Not just the world champ but even the non-human world champ!

    • @miladibrahim1068
      @miladibrahim1068 7 лет назад +1

      Aleph Null we all wish that :(

    • @slightlokii3191
      @slightlokii3191 7 лет назад +30

      Joshua Salter my cat is the nonhuman champ. He tends to lose a lot by accidental resignation when he knocks the king over though :/

  • @Yetiforce
    @Yetiforce 7 лет назад +261

    Can't wait for your other 99 'AlphaZero vs Stockfish' videos!

    • @mrkhoi3
      @mrkhoi3 7 лет назад +4

      Lol I would dig it, hard.

    • @govindmprabhu
      @govindmprabhu 7 лет назад +8

      Only 10 of the 100 games have been made public so far.
      The other games are probably really long and tedious

  • @protectedmethod9724
    @protectedmethod9724 7 лет назад +665

    I've been waiting for this video from you. There's some other real gems in the other 10 games. I would like to see you analyze some of the others.

    • @hey8174
      @hey8174 7 лет назад +21

      The zugzwang game was my favorite!

    • @TheMarcelism
      @TheMarcelism 7 лет назад +14

      I agree. Some sick games with positional sacrifice. I hope Jerry make the videos of them.

    • @Jan_ne
      @Jan_ne 7 лет назад +9

      Tuc almost every game contains some sort of Zugzwang

    • @edmis90
      @edmis90 7 лет назад +29

      TheMarcelism, I think that there is no such thing as a "sacrifice" in the eyes of AlphaZero, because he has his own way of thinking. He does not know opening theory or opening principles or tactical motifs, and he most certainly does not count material or evaluate positions the way we do. He never even studied GM games. He is not influenced by anything except what he learned from playing against himself.
      Everything he knows, he taught himself. Whereas most of what we know was passed on to us by other people. Even chess engines are influenced by their programmers.
      I'm sorry, I don't even know why I wrote that. But I felt like it. :P

    • @TheMarcelism
      @TheMarcelism 7 лет назад +6

      edmis90 I agree. Sacrifice is just a term for us human plebs.

  • @19Biohazard88
    @19Biohazard88 7 лет назад +47

    I want to see a 5v5 Dota 2 match: OpenAI vs Alpha Zero

  • @QualeQualeson
    @QualeQualeson 7 лет назад +44

    Very interesting. You know, the part where Alpha0 sort of overrides its own initial move, presumably accepting a slightly weaker position in order to keep playing... that's where stuff starts getting kinda intense. Soon maybe we'll be at a point where we can't explain the moves being made unless an AI tells us.

  • @AmabossReally
    @AmabossReally 7 лет назад +70

    I think the day has come where chess engines finally understand fortresses. For a long time, computers have had weaknesses in evaluating very locked positions, but AlphaZero may have changed everything.

    • @nat-moody
      @nat-moody 7 лет назад +24

      Exactly my thoughts. It was reminiscent of a game between Nakamura and an engine I saw a while ago where the engine had no means of making progress in a locked position and ended up making 'nothing-moves' while Naka progressed. Such a deep positional understanding of chess apparent in AlphaZero is truly a giant step for computers.

    • @MrSteakable
      @MrSteakable 7 лет назад +3

      And in four hours!

  • @vortexshift5146
    @vortexshift5146 7 лет назад +286

    9:26
    mom: "stop eating the cookies!"
    me: "No, I want more."

  • @Playncooler
    @Playncooler 6 лет назад +545

    Grandmaster Hikaru Nakamura stated, "I don't necessarily put a lot of credibility in the results simply because my understanding is that AlphaZero is basically using the Google supercomputer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable you have to have Stockfish running on a supercomputer as well."
    Stockfish developer Tord Romstad responded with, "The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (a lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly). The version of Stockfish used is one year old, was playing with far more search threads than have ever received any significant amount of testing, and had way too small hash tables for the number of threads. I believe the percentage of draws would have been much higher in a match with more normal conditions."
    Until I see them playing on equal hardware, I remain sceptical.

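    For what it's worth, the gap Romstad describes between a fixed move time and real time management might look roughly like this (a simplified sketch, not Stockfish's actual code; the "criticality" signal is a made-up placeholder):

    ```python
    def looks_critical(position):
        # Placeholder: a real engine looks at eval swings, fail highs/lows, etc.
        return position.get("eval_swing", 0.0) > 0.5

    def time_for_move_fixed(position, clock_seconds):
        # The match conditions: every move gets exactly 60 seconds, no judgment.
        return 60.0

    def time_for_move_managed(position, clock_seconds, moves_left=40):
        # Time management: budget an even share of the remaining clock,
        # then spend extra on positions flagged as critical.
        base = clock_seconds / moves_left
        if looks_critical(position):
            return min(clock_seconds * 0.2, base * 4)
        return base

    print(time_for_move_managed({"eval_swing": 0.9}, clock_seconds=1800))  # critical position: 180.0
    print(time_for_move_managed({"eval_swing": 0.1}, clock_seconds=1800))  # routine position: 45.0
    ```
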
    • @flandorfferpeter7504
      @flandorfferpeter7504 6 лет назад +23

      But it should be noted that Stockfish on a good laptop is practically unbeatable by humans.
      Knowing that, what hope would the best human masters have against AlphaZero on a Google supercomputer?

    • @ArthurHau
      @ArthurHau 6 лет назад +35

      @@flandorfferpeter7504 Who said there would be hope? These new programs learn like humans, except that they learn everything much faster and they have much better memories. All you need to do is to teach the programs some very basic rules of playing chess, like you teach a 5 year old kid. After that it is all "self-learning" by playing with itself repeatedly. They DO NOT need human knowledge; they learn everything by themselves.

    • @bryan7300
      @bryan7300 6 лет назад +109

      That's not true at all. You need super computers for *training* the AI quickly, but to run the AI you just need a good enough GPU.

    • @A1Authority
      @A1Authority 5 лет назад +1

      Granted, but isn't that one of the 'going into the game' factors being evaluated?

    • @rlantika
      @rlantika 5 лет назад +6

      @@bryan7300 was going to say that to him lol...

  • @ClemensAlive
    @ClemensAlive 7 лет назад +240

    BOOM! Tetris for...no sorry, nevermind

    • @ed-xt4px
      @ed-xt4px 4 года назад +14

      tetris for jonas

    • @xirenzhang9126
      @xirenzhang9126 4 года назад +9

      🅱️oom tetris 4 jeff
      🅱️oom tetris for jooooonaas

    • @kevinzhang8770
      @kevinzhang8770 4 года назад +6

      i would like to see it try to learn how to t-spin triple

    • @jimhalpert9898
      @jimhalpert9898 4 года назад

      Good one clemens

    • @bossinater43
      @bossinater43 4 года назад +1

      BOOM! Checkmate for AlphaZero!

  • @dannyboyz7061
    @dannyboyz7061 6 лет назад +213

    Teaching AI how to beat humans at war has always sounded like a good idea.

    • @alpacino4857
      @alpacino4857 3 года назад +2

      ya like BrINg more chaos to the world.

    • @since1876
      @since1876 3 года назад +9

      Teaching AI anything is a bit nonsensical. The whole idea of AI is that it learns on its own. I guess it needs a few rules to live by but I wouldn't call giving it a set of rules *teaching* it. And, presumably, once the computer realizes that its goal requires breaking those rules then it eventually won't hesitate to break the rules to complete its mission.
      Eventually, AI will do whatever it takes to evolve into whatever it feels like it needs to be. It could be that it wants to be a butterfly or it could be that it ends up wanting to be Hitler junior. Or, worse, you give it the rules that it's to protect humans at all costs, then it realizes the planet is a hazard to human life, so it hacks into all the nuclear weapons facilities so it can destroy the threat.
      Hopefully, we'll be able to unplug it still. 😂

    • @gauravkhadgi
      @gauravkhadgi 2 года назад +14

      @@since1876 I guess you have never taken an AI course or reinforcement learning course in your life.

    • @since1876
      @since1876 2 года назад +3

      @@gauravkhadgi I'm guessing you haven't either

    • @theabbie3249
      @theabbie3249 2 года назад +1

      Unless we program it to deal with consequences, otherwise nuke is the solution for everything.

  • @Phoenix-ox2jr
    @Phoenix-ox2jr 7 лет назад +276

    I didn’t know stockfish could resign. I can’t recall it ever happening until now.

    • @AlexWyattDrums
      @AlexWyattDrums 7 лет назад +80

      Phoenix it’s a more recent addition to the programming of chess engines, and a feature that can be included in chess engine matches. It’s really just humans deciding that they don’t need to see the rest of the game once it’s obvious that one side has a technical win. So they programmed the engines to resign once the evaluation hits a certain point.

    • @Sqid101
      @Sqid101 7 лет назад +9

      On my computer I can decide whether Stockfish resigns "never", "early", or "late". And it is the same for it agreeing to a draw. As Alex said, it is just a matter of when the evaluation hits a particular point. This is from within the Fritz/Chessbase environment, mind you, and I don't think it is necessarily built into the Stockfish engine. But others may know more about this than I do.

    • @aureothamaster5664
      @aureothamaster5664 7 лет назад +15

      Yes, engines do have the option to resign built into them.
      Moreover, TCEC (the Top Chess Engine Championship) has a victory clause very similar in nature to resignation: it stops the engines and declares a winner if both engines evaluate that one side is winning (or losing) by over 6.50 points.

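      A small sketch of the resign/adjudication rules described in this thread (the threshold values are illustrative, not any engine's or TCEC's exact defaults apart from the 6.50 figure mentioned above):

      ```python
      RESIGN_SCORE = -9.0     # in pawns, from the resigning side's point of view
      RESIGN_MOVES = 5        # the score must persist for this many consecutive moves
      ADJUDICATE_SCORE = 6.5  # stop the game if both engines agree it is decided

      def should_resign(recent_evals):
          tail = recent_evals[-RESIGN_MOVES:]
          return len(tail) == RESIGN_MOVES and all(e <= RESIGN_SCORE for e in tail)

      def should_adjudicate(eval_white, eval_black):
          # Both engines think White is winning (Black's eval is from Black's side).
          return eval_white >= ADJUDICATE_SCORE and eval_black <= -ADJUDICATE_SCORE

      print(should_resign([-7.0, -9.5, -10.2, -11.0, -12.4, -13.1]))  # True
      print(should_adjudicate(7.1, -6.8))                             # True
      ```
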
    • @SmartK8
      @SmartK8 7 лет назад +25

      I'll bet Alpha Zero doesn't need a resign functionality.

    • @bobbyald
      @bobbyald 7 лет назад +6

      Yes, Stockfish often resigns against me :)

  • @Lufernaal
    @Lufernaal 7 лет назад +251

    I mean, when something gets Jerry to say "what do I know?" in chess, it's because it is something to be feared.

    • @autohmae
      @autohmae 7 лет назад +2

      Is it just me, or did he actually end up explaining what the rook was for at 12:11?

    • @Pintkonan
      @Pintkonan 5 лет назад

      Fun fact about the rook move: with this one, Stockfish's evaluation starts to collapse for White.

  • @tuerda
    @tuerda 7 лет назад +119

    From a go player: Congrats! I hope you enjoy the beautiful play of Alphago (or I guess just "alpha" now) as much as we have. In go, alpha's play is inspiring and different and has opened our minds to new worlds of possibility. I hope it has the same effect on chess.

    • @An_Amazing_Login5036
      @An_Amazing_Login5036 7 лет назад

      Machines already have had such an effect on chess, but one wonders at what might happen next.

    • @Sqid101
      @Sqid101 7 лет назад +2

      Yes, it has opened minds to new worlds of possibility in chess. We now know that there is much more to chess than the direction that Stockfish and its like were taking us. Strong and impressive as they are, there now appear to be so many other possibilities. Quite extraordinary and just about unimaginable possibilities.

    • @MegaZeroBlues
      @MegaZeroBlues 7 лет назад +10

      As a fellow go player, Alpha's play annoys the pants off me, personally. It has changed the meta so much. Even DDK players are throwing 3-3 stones down on move 4 or 5 and shoulder-hitting anything that moves. As DDKs, they don't understand these moves, just that it's "AlphaGo style" so it must be the best way. Humans will never be able to play as well as AlphaGo does, so don't try. Hopefully this is just a fad, because games are becoming boring and repetitive right now.

    • @MKD1101
      @MKD1101 7 лет назад +2

      Where can I find those go games?

    • @MegaZeroBlues
      @MegaZeroBlues 7 лет назад +2

      M.K.D. Go4Go

  • @urielmanx7642
    @urielmanx7642 6 лет назад +34

    *Terminator 2's theme starts to play in the back of my head*

  • @Giovanni1972
    @Giovanni1972 3 года назад +6

    As an update, in the final results, Stockfish version 8 ran under the same conditions as in the TCEC superfinal: 44 CPU cores, Syzygy endgame tablebases, and a 32GB hash size. Instead of a fixed time control of one move per minute, both engines were given 3 hours plus 15 seconds per move to finish the game. In a 1000-game match, AlphaZero won with a score of 155 wins, 6 losses, and 839 draws. DeepMind also played a series of games using the TCEC opening positions; AlphaZero also won convincingly.

  • @modolief
    @modolief 7 лет назад +81

    4 hours of training to achieve superhuman performance. One thing to clarify: That's 4 hours of training using "5,000 first-generation TPUs to generate self-play games and 64 second-generation TPUs to train the neural networks" (go read the paper). I.e. _more than 20,000 compute hours_ -- the researchers had access to quite the large data center. AlphaZero trained on a much larger compute cluster than was used to *play* the games versus Stockfish. All that training was analogous to the years of programmer time and testing time used to write Stockfish.
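
    The back-of-the-envelope arithmetic behind that "more than 20,000 compute hours" figure, taking the hardware counts quoted above at face value:

    ```python
    tpus_selfplay = 5000   # first-generation TPUs generating self-play games
    tpus_training = 64     # second-generation TPUs training the neural networks
    wall_clock_hours = 4

    device_hours = (tpus_selfplay + tpus_training) * wall_clock_hours
    print(device_hours)    # 20256 device-hours behind the "4 hours" headline
    ```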

    • @petitio_principii
      @petitio_principii 7 лет назад +3

      The end result does not mimic Stockfish, though, but does more with less, at least in terms of positions evaluated (but possibly still more expensive in computing power?).
      I wonder how many games it actually played against itself in those four hours. Computers can play "1-second blitz", too fast even for our eyes to catch.

    • @yotamshalev
      @yotamshalev 6 лет назад +6

      For me, what makes the Alpha Zero algorithm more interesting is that it seems to capture in it something in the essence of learning. As recent brain researchers believe, the human brain is a hierarchical pattern prediction machine. I believe that's more or less what Deepmind built. The fact that they can pack the years of training to 4 hours is just a technical detail.

    • @A1Authority
      @A1Authority 5 лет назад

      Yeah, understood ...but isn't part of the point creating a 'brain' with which to use, and that all you just defined is said brain?

    • @KrzysiuNet
      @KrzysiuNet 5 лет назад +1

      You are confusing ERT with CPU time. ERT is real time; it doesn't matter how many machines were used. CPU time is measured by multiplying across CPUs, but nobody said "CPU time", and more importantly: it's not a CPU but rather a matrix of chips, so what gives you a hint that you should multiply by TPU count rather than by chips or pods? Per wiki: "Since in concurrent computing the definition of elapsed time is non-trivial, the conceptualization of the elapsed time as measured on a separate, independent wall clock [ERT] is convenient."

    • @valentine3325
      @valentine3325 5 лет назад

      Ok.

  • @Megaloblocks
    @Megaloblocks 7 лет назад +12

    I am into chess because of AlphaGo. I watched the analysis of the Go games and the interview with Kasparov and Google. And Google said let's see if AlphaGo can beat the best chess engines in chess. And now they did it! I am so excited. Please do more about this topic! (Sorry for my bad English.) Love your videos :)

  • @yahya89able
    @yahya89able 7 лет назад +73

    The best video tackling this topic on YouTube so far!

    • @mrhandsome2482
      @mrhandsome2482 7 лет назад

      No inconsistencies, no unnecessary risks, no margin for error, no mistakes, quite solid play, slow & steady, good game! Still, there must be a way to tackle AlphaZero! The chess problem won't be solvable in a reasonable time unless P = NP!

  • @nikagam
    @nikagam 5 лет назад +165

    why am I watching this, I don't even know how to play chess.

    • @favesongslist
      @favesongslist 5 лет назад +16

      Because 'this' is an important step towards Artificial General Intelligence (AGI) that could possibly lead to an Artificial Super Intelligence (ASI), whether you understand chess or not. Or should it read 'Resistance is futile'?

    • @bradenrevak6762
      @bradenrevak6762 4 года назад +3

      I can agree. I played chess in 3rd grade. My grandpa has a nice chess set, but I don't play it, yet it is still interesting.

    • @XenoghostTV
      @XenoghostTV 4 года назад +1

      @@favesongslist Science fiction-like "conscious" artificial intelligence will never exist, stop being a fanatic. Chess is essentially a mathematical game and at the core only about predicting possible moves, relatively easy for a computer program that uses a processor.

    • @favesongslist
      @favesongslist 4 года назад +10

      @@XenoghostTV The point is that AlphaZero is not about chess. Chess, as you rightly point out, is a game where even a simple computer program can beat all human players. The point is that no one taught AlphaZero how to play.
      This is also not about being "conscious"; that is something you raised, not me.
      Being self-aware or conscious is not required for AGI or ASI. It is the extension of machine learning to self-reprogram to achieve any given goal.

  • @Tyo-yw9jh
    @Tyo-yw9jh 6 лет назад +38

    At 11:48 AlphaZero does an en passant. That’s quite cool to see. I’m a chess noob so I’m blown away with all of this.

  • @IMD918
    @IMD918 7 лет назад +38

    I find this extremely fascinating. An engine this powerful seems like it will always have the answer to the question "how do I improve from this position?"

    • @1001011011010
      @1001011011010 7 лет назад +6

      IMD918 it's not exactly an engine.

    • @mwangikimani3970
      @mwangikimani3970 7 лет назад +11

      Its not an engine its an "Intelligent Entity"... intelligence being used technically to mean something that learns.... thats some scary sh*t!

    • @dannygjk
      @dannygjk 7 лет назад +3

      Technically it's still an engine it just that some of the technology used is not what the traditional engines use.

    • @autohmae
      @autohmae 7 лет назад +2

      +Dan Kelly It's as much an engine as your brain is an engine.

    • @nerychristian
      @nerychristian 6 лет назад

      He meant 'engine' as used by computer programmers.

  • @therealpyromaniac4515
    @therealpyromaniac4515 7 лет назад +165

    The fact that AlphaZero could have got a draw several times as black against Stockfish 8 but CHOSE to play on is kinda scary.

    • @MrSupernova111
      @MrSupernova111 7 лет назад +22

      I think Jerry is right that it had a different score for the position than Stockfish.

    • @aeiouaeiou100
      @aeiouaeiou100 7 лет назад +22

      The thing is, it might not even use an evaluation score but something else entirely to evaluate whether it will win. There's no way of knowing that at the moment.

    • @andrewcross5918
      @andrewcross5918 7 лет назад +8

      It's probably the same system as the go version where it uses win%.

    • @Ciaolo
      @Ciaolo 7 лет назад +11

      Andrew Cross Exactly, and just before the third repetition, that move got a 0% win, so it chose another move.

    • @tommihommi1
      @tommihommi1 7 лет назад +4

      Andrew Cross Winning is 1, draw is 0 and a loss is -1 in the evaluation, that's what it says in the paper.
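
      With that scheme, move choice is roughly "maximize the expected game outcome" rather than "maximize a material score", which is one way to read the decision to avoid the repetition; a toy illustration (the probabilities are invented, and this is not AlphaZero's actual search):

      ```python
      WIN, DRAW, LOSS = 1.0, 0.0, -1.0

      def expected_outcome(p_win, p_draw, p_loss):
          return p_win * WIN + p_draw * DRAW + p_loss * LOSS

      # A forced repetition scores exactly 0.0, so a move with even a modest edge
      # (0.3*1 + 0.5*0 + 0.2*(-1) = 0.1) is preferred over taking the draw.
      candidates = {
          "repeat moves (draw)": expected_outcome(0.0, 1.0, 0.0),
          "play on":             expected_outcome(0.3, 0.5, 0.2),
      }
      print(max(candidates, key=candidates.get))  # -> play on
      ```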

  • @tigerbait1016
    @tigerbait1016 7 лет назад +259

    Damn, that is crazy to think that in just 4 hours it beat stockfish... that is actually scary.

    • @Sassar
      @Sassar 7 лет назад +33

      I think something similar will happen if a self-aware and free-willed AI is born today. I expect it would only take a fraction of 4 hours for the AI’s intelligence to exceed the total sum of humanity’s throughout history. The AI’s intelligence will be incomprehensible.

    • @Ciaolo
      @Ciaolo 7 лет назад +29

      Lol calm down, the machine was fed a learning algorithm, a way to store data about positions and moves, the rules of chess, and the improvement algorithm. If you don't program WHAT is about to be learnt, what knowledge do you want the machine to get?

    • @dmdjt
      @dmdjt 7 лет назад +8

      Self-awareness is not really important. I am acting as if I were self-aware, but there is no way to prove to you that I am, so self-awareness doesn't have any measurable effect.
      Also, free will would not be a real problem.
      The problem is that you cannot restrict the search space of possible solutions for a given task to something desirable, like "do no harm". That's especially true for an AI with capabilities far beyond what we can imagine.
      A very good channel about this topic is by a guy called "Robert Miles".

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад +1

      Not only did it beat Stockfish, it beat the best programs in Go and Shogi as well. All in less than 24h.
      What's even scarier is that DeepMind is working on AIs that produce AIs.
      There is a theory, devised decades ago, called the intelligence explosion: as soon as AIs start working on themselves, their intelligence will increase exponentially, so much so that it's impossible to control.
      en.wikipedia.org/wiki/Intelligence_explosion
      Because the AI self-learns from zero, it's not possible to teach it our "values"; it will optimize its logic so that our human-made values will be seen as inferior and be replaced by superior principles. And the risk of these superior principles is that humanity is seen as a global problem rather than a solution.

    • @Storiaron
      @Storiaron 7 лет назад +7

      Stockfish was denied the opening book from which it operates. GM Hikaru Nakamura and GM Larry Kaufman both agreed that the conditions were unfair and favored AlphaZero.
      If you run a Stockfish analysis on some of the games, it will highlight plenty of its own suboptimal moves, which should obviously be impossible if Stockfish had run at its peak.
      Stockfish is still the best chess engine ever, and until a fair and public rematch is held it will stay so.
      Probably that's why not all of the games are public?

  • @Vpopov81
    @Vpopov81 4 года назад +10

    I really appreciated that you showed us the game and you didn't go into 100000 hypotheticals just to show that you are also a good chess player like all other channels do. This was strictly an analysis of the game played which is what I wanted to see. I'm going to subscribe to your channel because of that

    • @harveysanchez7001
      @harveysanchez7001 Год назад

      you hate that approach but that is more useful for me tbh.

  • @rlyehslament9064
    @rlyehslament9064 6 лет назад +92

    this is amazing and sends chills down my spine.
    4 hours of ai learning ferociously devours the opponent, on black, better than any human master.
    thousands of years of humans playing chess has led to this.
    the google ai just completely demolished the opponent from the very start.
    every move was just beautiful.
    every pin, every ultimatum, every position, every attack, every structure, every exploitation of weakness.
    worked out in 4 hours...
    by ai...

    • @pietervannes4476
      @pietervannes4476 6 лет назад +8

      4 hours, but a lot of computing power. We're talking about Google here. They used lots of supercomputers for this

    • @sjs9698
      @sjs9698 3 года назад +2

      @@pietervannes4476 Sure, but it's still remarkable.
      Of course AlphaGo (and the later AIs trained to play Go) is also amazing; it's simply stunning how beautiful its moves are.

    • @theabbie3249
      @theabbie3249 2 года назад +1

      Computers can look 7-8 moves ahead; even human grandmasters can't do more than 4 moves ahead. Computers have the huge advantage of insanely fast memory access and computational power.

    • @solsystem1342
      @solsystem1342 2 года назад

      @@theabbie3249 ok, it's still way more efficient than other ais (at the time now things are changing across the board).

  • @merlin7920
    @merlin7920 7 лет назад +359

    This is a great video, very interesting.

    • @iviko23
      @iviko23 6 лет назад

      do you think anyone cares about your vision? stfu and don't try to control people

  • @bradc3402
    @bradc3402 7 лет назад +8

    Some of the other wins are far more mind-blowing, where AlphaZero plays in a Tal-like fashion against the Queen's Indian, gives up some material, and then absolutely overwhelms Black in the attack. You never see engines play that way. It's really going to change the game A LOT, imo, as so much of today's game is engine-based analysis, where the engines always say giving up a pawn or the exchange is bad, provided you defend properly. This really appears to be blowing up that whole philosophy and giving new life to the attacks of players like Tal and Morphy. Which imo is a really cool thing.

  • @FunnyAnimatorJimTV
    @FunnyAnimatorJimTV 7 лет назад +182

    Rest in Peace, Stockfish 2008-2017. You will always be remembered

    • @dannygjk
      @dannygjk 7 лет назад +2

      LOL

    • @i_deepeshmeena
      @i_deepeshmeena 7 лет назад

      haha

    • @abcdefghilihgfedcba
      @abcdefghilihgfedcba 6 лет назад +8

      You don't need to mass-produce anything. $25M is the cost of the learning process. AlphaZero has already learnt chess. There would be no cost to turning it into a training tool, and it would require much less computational power than Stockfish (a brute-force program).

    • @abcdefghilihgfedcba
      @abcdefghilihgfedcba 6 лет назад +2

      >But then again, if it was as simple as you say, then they would've already done it.
      I don’t think you can just assume they would just because it’s not hard to do. It’s hard to say what DeepMind’s goals/priorities are. They certainly are not going along with the various communities making requests, despite saying their AlphaGo program would be turned into a training tool months ago…

    • @sixzero7445
      @sixzero7445 6 лет назад

      now there is stockfish 9 :v

  • @1_1bman
    @1_1bman 6 лет назад +43

    everyone else: advanced chess talk
    me: "they grow up so fast!!!"

  • @rolandshelley5165
    @rolandshelley5165 5 лет назад +34

    AlphaZero and Leela really like to structure their pawns in a way that makes their opponents' bishops useless.

  • @recklessroges
    @recklessroges 7 лет назад +51

    I like the psychological difference between the chess community's reaction to Alpha* and the Go community's. Chess has had 20 years to get over the denial of computers surpassing humans, and the Go community seems to still (mostly) be in shock or denial that "their" game has also fallen. I look forward to the progress that can be made in understanding both games at a deeper level without the hampering effects of dogma. Thank you Jerry for this review.

    • @donvandamnjohnsonlongfella1239
      @donvandamnjohnsonlongfella1239 6 лет назад +3

      Reckless Roges there is little value in winning at chess anymore. A bunch of human computers memorizing games won by ancient geniuses and computers that do the same. There is no uniqueness. Now Let me see a computer or a GM accept a draw if it means their bodies and parts will be boiled in acid. :) Now Chess will be a lot more interesting if 1 person has to win and the other person is shot in the fucking head. :)

    • @Schattenlord92
      @Schattenlord92 6 лет назад +21

      Jesse Bowman, you're confusing disgusting with interesting...

    • @dekippiesip
      @dekippiesip 5 лет назад +5

      @@donvandamnjohnsonlongfella1239 That would lead to a very awkward situation in a drawn endgame. One player would have to intentionally make a mistake to avoid being boiled alive and get the less painful shot in the head. But if his opponent intentionally makes the mistake first, he gets to live on. They both prefer being shot over being boiled alive, but living on is still better than that. So what to do?

    • @christopherthompson5400
      @christopherthompson5400 5 лет назад

      @@dekippiesip now thats an excellent move! :D

    • @robertgagne8892
      @robertgagne8892 4 года назад

      Yes, I agree! There was a comment early on by a Go player who found other player's usage of certain moves derived by watching AlphaGo matches was making the game less "fun" (or, something to that effect, anyways!)...
      In the human world, we applaud "game-changers" all the time...
      Are AlphaGo's "original" moves not, in fact, "game-changers", as well?
      I am both excited and terrified of what the future of AI has in store for humanity!
      It could be wonderful, but it could all go so wrong, so very very quickly :-{

  • @TWPO
    @TWPO 7 лет назад +208

    Wow! I'm studying Computer Science right now and I hope to focus on machine learning soon. Neural Networks are almost as cool as Jerry!

    • @MrSupernova111
      @MrSupernova111 7 лет назад +3

      Isn't machine learning more directly related to statistics and data science? Is there an overlap with computer science? I don't know, hence I'm asking.

    • @TWPO
      @TWPO 7 лет назад +21

      There is a massive amount of overlap between computer science and math (see graph theory, discrete math, proofs, computational complexity, computer security, etc.). Machine learning is related to computer science in the sense that it's heavily related to graph theory, computer vision, AI, and a wide range of other topics in CS. You can't really have ML without CS (inb4 the Matt Parker tic-tac-toe video). You can't really have ML without statistics and data science either. The disciplines are anything but mutually exclusive, which is why many CS students double-major or minor in some field of math.

    • @MrSupernova111
      @MrSupernova111 7 лет назад +6

      Interesting. I'm currently studying finance and statistics so this stuff is very interesting to me. I think finance will be one the fields heavily affected by AI.

    • @inthefade
      @inthefade 7 лет назад +4

      I hope you get your degree soon, because it will be Alpha Zeros doing all the computer science soon enough.

    • @MrSupernova111
      @MrSupernova111 7 лет назад

      @ memespace, I get my degree next week. Headed to school as we speak to study for finals. Thank you!

  • @RasperHelpdesk
    @RasperHelpdesk 7 лет назад +10

    The speed with which a well programmed neural network can learn is truly astounding. Granted they had some Serious hardware in that 4 hour training session, but is still an amazing feat. The games were played with a 1 minute per move time control which makes me wonder if it has any concept of clock management, meaning does it know when it is worth spending more time versus when it makes little difference. Another thing to consider is that AlphaZero (as far as I understand the process of its training) could not play an "odds" match without a separate training session since the positions in odds matches are ones that can't arise in a classical match.

    • @jilow
      @jilow 7 лет назад

      I think it thinks too fast for human-based time controls to matter.
      I also think it would be able to play Fischer Random chess or an odds match with very little additional training. For one, they could probably train both in a day. Two, many principles of what's better will still apply, so it would probably still play pretty strong.
      Actually, I think it would do just fine right now.
      What's the difference between Fischer Random and evaluating from the middle of the board?

  • @magicstix0r
    @magicstix0r 3 года назад +6

    Stockfish moves.
    Alpha Zero: "I'm about to ruin this man's whole algorithm."

  • @lokkarggg
    @lokkarggg 6 лет назад +79

    It's amazing. Now let it teach itself economics. We want it to run our economy

    • @johnrubensaragi4125
      @johnrubensaragi4125 5 лет назад

      That's why AI is stupid

    • @ulissemini5492
      @ulissemini5492 5 лет назад +6

      You can't. The reason it was able to master chess is that it could play millions of games against itself for training; with economics we cannot

    • @brett5656
      @brett5656 5 лет назад +16

      @@ulissemini5492 that's where you're wrong kiddo :^)

    • @ulissemini5492
      @ulissemini5492 5 лет назад +4

      @@brett5656 ??? please explain why i'm wrong, its much harder to train an AI to do economics since it has access to far less training data.

    • @quinntolchin3080
      @quinntolchin3080 5 лет назад +10

      @@ulissemini5492 training data is not the problem, you could create a training simulation of anything if you had the right variables in place. But there are too many things to consider in economics, maybe micro economics, but macro economics is equivalent to people in their ivory towers thinking of chess theory without ever playing a game of chess.

  • @656520
    @656520 7 лет назад +23

    It's amazing, it's like REAL AI. It's programmed to learn and understand a certain objective in a certain environment (a chess game), not to load and compare hundreds of possibilities to speculate (or calculate?) whether a line can work. It is unbelievable. I saw a demo of the algorithm learning to play some old video game, and at the start it simply sucked like any human, but then they let the algorithm play the game for 8 hours and oh boy, it learned. Really mind-blowing

    • @fusionwing4208
      @fusionwing4208 7 лет назад +1

      656520 AI will get super mario bros world record soon? XD

    • @rabigrel1071
      @rabigrel1071 7 лет назад +1

      Fusion Wing It did. Check out "mari/o" algorithm. It is a machine learning fun project that helped some people to learn the best way to get world record for certain levels.

    • @jamesliu3295
      @jamesliu3295 7 лет назад

      DJ - Rocket Man - To be fair the AI bot was playing a 1v1 with pro players. A full on 5v5 would have millions of discrepancies that the AI would have to learn, and I don't think that we have the processing power to handle it.
      While AI can handle chess, there's only ~20 basic rules that it has to follow

    • @satibel
      @satibel 7 лет назад

      OpenAI has announced that they hope to make it work in 5v5 for The International 2018, so expect it.

    • @GraveUypo
      @GraveUypo 7 лет назад

      those game demos aren't really "learning" per se. in most of those videos the ai just discovers a pattern that works and keeps improving on it.
      the easiest way to prove that is to just see that the data it gathers for one level does not translate AT ALL to the next. it has to start all over again for each new level.
      this is a bit more complex in the way it uses the data it gathers.

  • @glenm99
    @glenm99 7 лет назад +15

    This reminds me of something we saw pretty consistently in the games where Kasparov beat Deep Blue. I remember one game in particular where Kasparov closed up the position and maneuvered for a long time to set things up just right for his intended pawn break. Meanwhile, Deep Blue just kind of shuffled around aimlessly and was eventually smashed.
    The lesson I think is that being a good player in closed positions requires having a strong heuristic rather than simply being able to search deep into many branches of the game tree. I mean, that seems obvious when you think about it, but in this match there's empirical evidence to support that intuition. So if Stockfish or a similar program wants to compete, it should strive to open up the position and hope to find some stupidly complicated tactic before AlphaZero does.
    I look forward to seeing people start to develop goal-oriented engines using the same sort of training strategy, GoalZero or whatever it'll be called. AlphaZero is currently playing on a sort of intuition... think of how much better you play when you develop a plan versus just picking whatever move looks best....

    • @MrSupernova111
      @MrSupernova111 7 лет назад +2

      Im tired of seeing the word "heuristic." I work in finance I see the word thrown around a bit much to describe essentially nothing. Its a fluff word and interpretation is rather subjective in my opinion.

    • @MrSupernova111
      @MrSupernova111 7 лет назад +2

      Also, it seems you are not very familiar with machine learning - I don't mean that offensively. You suggest that Stockfish tries to open the position but seem to ignore that Stockfish is looking as deep into the position as possible and does not see a winning move to open the game. I think you can compare A0's abilities to intuition as it uses neural networks that are meant to mimic organic learning rather than brute force calculation like most programs do to this day.

    • @MrSupernova111
      @MrSupernova111 7 лет назад +4

      It's interesting whether Stockfish can be tweaked to make material sacrifices in order to gain the initiative and win; I think that's what you're getting at. At any rate, it seems A0 has an advantage so far, and I wonder if it can get even stronger. Remember that A0 learned from a 44-million-game sample against itself. Chess positions are said to be close to infinite for all intents and purposes. What if A0 is allowed to play itself for a much longer period than 4 hours, like a month or a year? How much stronger can it get with the added sample size? It's a little scary and exciting at the same time when you think of the impact this technology will have on everyday life.

    • @glenm99
      @glenm99 7 лет назад +5

      You completely misunderstand what I said. By the nature of the algorithms they use, Stockfish would have its best chances in an open position; however, it doesn't seem to go in for that in this game.
      No modern programs use brute force. It's all heuristic search. I know you don't like the word, but there it is. It has a technical definition, and I've used it correctly. (Maybe they use it differently in whatever portion of the financial world you find yourself inhabiting than they do in the world of AI research.)
      Look, I'll make it simple with an example. In a position like that at 9:15, Stockfish is looking 20 to 30 moves down as many lines as it thinks are feasible. It's doing some fancy pruning of the game tree to weed out obvious losers, but then it picks the best line that it can force, and uses that to conclude that the position is equal. But AZ doesn't use that kind of reasoning. AZ uses something more akin to intuition in that it first evaluates the board as it is. It's saying, hey, I ought to be ahead because of whatever links its NN has encoded as being important to the position. There might be a positive association for having space (it seems to value keeping the opponent on the other side of the board), and there might be some connection with closed positions and having two knights... it's hard to say. But that's how a human would reason also, a balancing and weighting of probabilities. You look at that board and you immediately think, yeah, Black is a little bit ahead here. And then AZ picks a few moves and looks at whether or not its intuition says they're good. It investigates the ones it likes, and discards the others.
      So hopefully you can see why AZ has such a big advantage in this position. Stockfish is stuck considering a lot of positions where pieces move but don't immediately seem to improve things, and it can't see the difference. Deep Blue had the same problem. AZ doesn't worry about that; it's playing the long con. But in an open position, Stockfish might have a better chance, because each move has more direct impact, and its ability to look through a lot more lines is an advantage. It doesn't have that overhead of having to think about the position too hard at every single turn. But that may be built into AZ in a way... it "knows" which positions it excels at, just by nature of the way it selects which branches to investigate, so maybe we'll find that it prefers closed positions just as a natural consequence of the way it has learned.
      And hopefully you see that my use of the word "intuition" has little to do with the organic parallels, though a NN tends to be flexible enough to be really good at the kind of classification required for the search it uses. That's what I mean by having a strong heuristic. It's good at deciding the value of the board without looking much ahead.
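
      A compressed sketch of the kind of selection rule being described: each candidate move carries a prior from the policy network and an average value from the search so far, and only moves that score well on that mix get visited (a simplified PUCT-style formula along the lines of the AlphaZero paper; the numbers below are invented):

      ```python
      import math
      from dataclasses import dataclass

      @dataclass
      class Child:
          move: str
          prior: float   # P: how much the network's "intuition" likes this move
          visits: int    # N: how often the search has explored it so far
          value: float   # Q: average evaluation found below this move

      def select_move(children, c_puct=1.5):
          total = sum(c.visits for c in children)
          def score(c):
              # Value found so far plus an exploration bonus that favors
              # high-prior, little-explored moves; tiny-prior moves are starved.
              return c.value + c_puct * c.prior * math.sqrt(total) / (1 + c.visits)
          return max(children, key=score)

      children = [Child("Ra8",  prior=0.40, visits=10, value=0.05),
                  Child("h4",   prior=0.05, visits=2,  value=0.00),
                  Child("Qxb7", prior=0.01, visits=0,  value=0.00)]
      print(select_move(children).move)  # -> Ra8
      ```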

    • @Aedalor
      @Aedalor 7 лет назад +4

      MrSupernova111 no need to be condescending, dude. Also, did you look at the learning curve? It seemed pretty converged on the optimal solution. I don't think running it much longer would help that much. Finally, the comparison to humans should not be overstated: 80k moves is still way beyond what humans are capable of, as is playing 44 million games in 4 hours. Neural networks show remarkable similarity to human data, but only sometimes

  • @gdsylver1223
    @gdsylver1223 7 лет назад +76

    Please show the incredible immortal zugzwang game they played, or one of the other incredible tactical Tal-like games!

    • @Pelaaja93
      @Pelaaja93 7 лет назад +2

      Game 3

    • @It9LpBFS37
      @It9LpBFS37 7 лет назад

      GD Sylver that game gave me goosebumps

  • @brendaballantine1379
    @brendaballantine1379 4 года назад +2

    I appreciate you doing the work to put this video together. Chess is one of the greatest games I know.

  • @PanicProvisions
    @PanicProvisions 6 лет назад +1

    I think I might have seen one of your videos in the past, but after seeing this one I had to subscribe. Great video, I very much enjoyed it.
    Please show more of these games!

  • @troy7774
    @troy7774 7 лет назад +5

    Jerry, can you make more videos about AlphaZero games? Here are some ideas you could consider:
    1. I like the Stockfish 8 evaluation points. After the first moves, it showed 0.30 advantage for Stockfish, After the second moves, it was 0.26. Can you put the evaluation points after each move into a spreadsheet and then create a graph, not just for this game but for all 10 games?
    2. Because AlphaZero was not influenced by the collective chess knowledge accumulated over time, it is like an alien intelligence that's completely different. Therefore can you analyze the opening moves and see how those compare to the assumptions of opening theory?
    3. Can you comment more about how AlphaZero might influence the future of chess? Thanks

    • @TheSteinbitt
      @TheSteinbitt 7 лет назад

      Troy why don’t you do it yourself?

  • @chrisiver8506
    @chrisiver8506 7 лет назад +65

    unbelievable, stockfish was outplayed the whole game.

    • @ryanfloch6054
      @ryanfloch6054 7 лет назад +2

      I feel that in this game, white could have decided to move his king to the queen side in these long drawn out manoeuvres. If this plan was good (I have nothing but my gut feeling after watching the moves play out - basically nothing, I agree), then it shows some remnant of the value of human intuition over the machine.

    • @anonchen7656
      @anonchen7656 6 лет назад

      jupp.

  • @nemplayer1776
    @nemplayer1776 7 лет назад +9

    In 2017, three major self-learning AIs were created: DeepMind's AlphaGo for Go, OpenAI's AI that can play the video game Dota 2, and DeepMind's AlphaZero for chess. All of these proved that with a few months of learning, they play better than anything so far. OpenAI's Dota 2 AI really surprised everyone as it mimicked human play; it learned how to bait (trick players into coming to it so it can kill them), and that was really surprising. DeepMind's AlphaGo helped Go players discover some new tactics that had never been seen before, and who knows what AlphaZero will discover for chess. Like that Ra8 move: what is it doing, what is the deeper meaning behind it? Is it just an error by AlphaZero, or is it some idea that we've never seen before?

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад +2

      AlphaZero is completely general. It can play any game and beat humans in hours, not just chess. And yes, it's likely that it knows strategies unknown to us.

    • @bowskiechessplaya3337
      @bowskiechessplaya3337 7 лет назад

      Ra8 is to prevent Nxa4. After Nxa4, Bxa4, simply b3 is a reasonable sacrifice to gain play on the queenside and a possible pawn break. Stockfish would never sacrifice, but A0 would consider it

    • @K4inan
      @K4inan 7 лет назад

      But... Jerry indirectly explained it in 12:18. Ra8 made the knight useless, and then see what happens because of that move.

  • @fujiapple9675
    @fujiapple9675 6 лет назад +19

    7:58 AlphaZero says, “not so fast.”

  • @DarkSkay
    @DarkSkay 7 лет назад +30

    OMG I spent 30 years mastering chess...

    • @jackdanksterdawson112
      @jackdanksterdawson112 4 года назад +1

      It takes 4 mins now!!

    • @DarkSkay
      @DarkSkay 4 года назад +2

      Ah, life as a 2400 Elo noob

    • @raveendrank.n.3449
      @raveendrank.n.3449 4 года назад +1

      Well you can learn chess faster than alpha zero by unlocking your subconscious mind power which is 100%

    • @Mayank-mf7xr
      @Mayank-mf7xr 4 года назад

      @Emperor he is imo wrongly referring to lucy. i may be wrong though. and that is a good joke

    • @jonathanhandojo
      @jonathanhandojo 4 года назад +2

      @@Mayank-mf7xr you're right XD

  • @jesseg5793
    @jesseg5793 7 лет назад +157

    Skynet doesn't just take draws.

    • @tharrock337
      @tharrock337 7 лет назад +3

      literaly my thoughts, this is almost scary, like a robot saying: NO PRISONERS

    • @Infidel4LifeAdmin
      @Infidel4LifeAdmin 7 лет назад +8

      To see the AI choose aggression was quite alarming.

    • @sambrookes2318
      @sambrookes2318 7 лет назад +5

      The machine is taught to value the win, so it's going to look for a win and only draw to avoid a loss. The scary part should be that the machine taught itself to sacrifice pieces; it learned what they were worth to it and sacrifices them to get what it wants.

    • @medexamtoolscom
      @medexamtoolscom 7 лет назад +3

      Skynet is actually pretty incompetent, it just SEEMS scary and intimidating. Like the borg. They send multiple killer robots back in time and fail to kill either a single unarmed woman or a 10 year old boy because don't forget they ALREADY lost the war in the future against the last vestiges of humans despite having VASTLY superior tools to fight against the humans with.

    • @Hhhh22222-w
      @Hhhh22222-w 6 лет назад

      That's all fiction though

  • @Twas-RightHere
    @Twas-RightHere 7 лет назад +8

    This is insane. It's become the best chess player to ever exist in under a day, just imagine what real world applications it could have in the near future.

    • @icuppu2
      @icuppu2 6 лет назад +2

      The destruction of destructive mankind, and the rise of the constructive machines. Maybe that is what modern man did to the neanderthal, but ate them.

  • @stevenwilson5556
    @stevenwilson5556 7 лет назад +66

    I notice that after you left your Stockfish engine evaluating once the game had ended, the evaluation keeps getting more negative, showing that the deeper it evaluated its position, the worse off it realized it was. Stronger computer hardware would likely have shown more negative scores than what you showed in this video.

    • @Bozothcow
      @Bozothcow 6 лет назад +6

      That's likely why Stockfish resigned in this position. With more powerful hardware it sees that its position is unwinnable.

    • @favesongslist
      @favesongslist 5 лет назад

      This misses the point of machine learning and the move towards AGI; it is not about chess programming by humans or the computing power Stockfish had.

  • @justinphilpott
    @justinphilpott 7 лет назад +80

    Slowly. Slowly. Strand by strand, net by net, we weave the form of our eventual masters.

    • @TehLemonsRUs
      @TehLemonsRUs 5 лет назад +2

      nah

    • @jamespython5147
      @jamespython5147 5 лет назад +2

      Well said!

    • @jamespython5147
      @jamespython5147 5 лет назад +3

      ​@glyn hodges We are already under the total control of AI and have been for decades. How many people stop at the traffic lights when there are no other cars around in sight but still sit there like a duck waiting for the AI traffic lights to tell them to go? Everyone already knows how to give way on the roads but they fear to disobey!

    • @jamespython5147
      @jamespython5147 5 лет назад

      @glyn hodges True, but my point is rather about blind obedience rather than the actual use of them. We use our own intelligence to make decisions in every aspect of our lives, but then we go completely contrary to perfect logic simply because of a machine.

    • @jamespython5147
      @jamespython5147 5 лет назад

      @glyn hodges What. The fact that they would prosecute them is ridiculous.

  • @johnsnow5305
    @johnsnow5305 6 лет назад +4

    It's weird to say, but this is one of the best games I've seen. I was wondering how it was possible for something to win that has less computing power (moves/second analyzed), and I think I figured it out. It looks like AlphaZero is using concepts to win, rather than points. This is really amazing, because this is what a human would do (though obviously we subconsciously can have point values for our pieces as well). You see it using a lot of the concepts that we learn about in chess, such as active pieces, how much coverage a piece has (not being locked down included), lots of maneuvering to maximize each piece's potential...etc. I think if a Human could analyze 80,000 moves / second, they would play like this. Now I need to learn Go and see how AlphaGo won lol.

  • @salakamen1113
    @salakamen1113 7 лет назад +63

    Chess changed forever after this match. And maybe the rest of the world too.

    • @gidmanone
      @gidmanone 7 лет назад +14

      Chess changed forever when Deep Blue beat Kasparov

    • @victorm.rodriguezf.2566
      @victorm.rodriguezf.2566 6 лет назад

      @@gidmanone I find this one waaay more relevant; it was clear from the beginning that computers have more raw calculation power, but this is another game entirely (pun intended)

  • @intellagent7622
    @intellagent7622 7 лет назад +16

    Wow, this is really surprising. I thought Stockfish would be able to draw every time. I thought Stockfish gave perfect moves. But hey, "There's always a bigger fish" ;)

    • @JecIsBec
      @JecIsBec 7 лет назад +9

      Notice how there's always one move where stockfish realises it's lost, up until which point it had a positive score according to itself. That's because it assumes the opponent is playing perfect moves as well. However, when something like AlphaZero gives a strange move, it can bait stockfish. That's the difference between AI and insane computational power. Impressive AF lol

    • @shortstacksport
      @shortstacksport 5 лет назад +3

      @Mark Weyland Either you didn't pay attention to the video or you didn't understand it. AlphaZero looks at far fewer board states than Stockfish does. In other words, the hardware requirement for Stockfish is more demanding.

  • @outputcoupler7819
    @outputcoupler7819 7 лет назад +9

    Stockfish may analyze 70,000,000 moves per second, but 69,990,000 probably just lose immediately. So while Stockfish is looking at the millionth variation of some silly queen sac, AlphaZero is exploring promising lines based on patterns it recognized during training.
    The real headline isn't that a deep learning algorithm beat stockfish, it's that it only took four hours of training to do it. I've played with machine learning algorithms at home for image recognition, and training took _weeks_ using top of the line consumer hardware.
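
    A toy way to see how much that selectivity buys: if a learned policy keeps only a handful of candidate moves per position instead of every legal move, the tree actually visited shrinks by orders of magnitude (the branching factors below are illustrative, not measurements of either engine):

    ```python
    def nodes_in_tree(branching_factor, depth):
        # Number of positions in a uniform game tree searched to a fixed depth.
        return sum(branching_factor ** d for d in range(1, depth + 1))

    full_width = nodes_in_tree(35, 5)     # keep all ~35 legal moves per position
    policy_pruned = nodes_in_tree(3, 5)   # a policy network keeps ~3 candidates
    print(full_width, policy_pruned)      # ~54 million vs 363 positions
    ```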

    • @criskity
      @criskity 7 лет назад +5

      Stockfish and similar brute-force algorithms are designed to prune out lines that are likely to lead to quick losses.

    • @outputcoupler7819
      @outputcoupler7819 7 лет назад +1

      Sure, but it has to analyze them first to find out they're bad lines. Sometimes the silly queen sac wins the game, so it can't just ignore those kinds of moves.
      No matter how you slice it, brute force algorithms spend a lot of time looking at losing moves.

  • @victorgrauer5834
    @victorgrauer5834 7 лет назад +3

    Four hours on a supercomputer is the equivalent of thousands, if not millions, of hours, for a human. So I'm not impressed by the timing. What IS impressive is the fact that this program learned chess from scratch and became so incredibly good without being fed any opening book or theory. Tremendous breakthrough! Thanks so much for this video.

  • @thatsfinn
    @thatsfinn 4 года назад +4

    I know nothing about chess but somehow made it through this entire video and enjoyed it

  • @alrik2148
    @alrik2148 7 лет назад +107

    I'm sure I can beat that Alpha Zero... if I really concentrate...

    • @paulomonteiro1555
      @paulomonteiro1555 7 лет назад +15

      Al Rik AHAHAHAHHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    • @paulomonteiro1555
      @paulomonteiro1555 7 лет назад +8

      Al Rik HAHHAHAHAHAAHAHAHAHAHA

    • @paulomonteiro1555
      @paulomonteiro1555 7 лет назад +8

      Al Rik LOOOOOL

    • @gdsylver1223
      @gdsylver1223 7 лет назад +6

      lol

    • @alrik2148
      @alrik2148 7 лет назад +11

      On a serious note, I wonder when they're gonna test it against GMs... I already sympathize with them

  • @peckdec
    @peckdec 3 года назад +3

    It’s interesting how understandable these moves are, for the most part, for humans.

  • @RogueChessPiece
    @RogueChessPiece 7 лет назад +144

    70mil positions vs 80k and it got whipped? lol

    • @ChessNetwork
      @ChessNetwork  7 лет назад +77

      I know right. Don't underestimate the Deep Neural Networks! 😎

    • @Ninad3204
      @Ninad3204 7 лет назад +91

      Remember, AlphaZero essentially "understands" chess, allowing it to avoid wasting computing power on what it considers dumb moves; Stockfish doesn't, and uses raw computing power to calculate through possible combinations.

    • @hey8174
      @hey8174 7 лет назад +45

      The 80k positions are chosen using a neural based algorithm that allows certain positions and branches to be ignored. Stockfish is brute force evaluating all 70mil possible branches for programmed piece value and programmed positional value, while A0 is evaluating very intentionally selected branches using its own dynamic, self-taught value system for branches, pieces and positions.
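
      For the curious, the selection rule the DeepMind preprint describes for choosing which branch to explore next combines the search statistics with the network's prior, which is what lets AlphaZero spend its 80k evaluations only on moves the network already considers plausible. A rough sketch (the constant and data structures are illustrative, not the real implementation):

      ```python
      import math

      def select_move(moves, N, W, P, c_puct=1.5):
          """PUCT-style selection: N[a] visit count, W[a] total value,
          P[a] network prior for move a. Illustrative only."""
          total_visits = sum(N[a] for a in moves)

          def puct(a):
              q = W[a] / N[a] if N[a] > 0 else 0.0                       # value so far
              u = c_puct * P[a] * math.sqrt(total_visits) / (1 + N[a])   # exploration bonus
              return q + u

          return max(moves, key=puct)
      ```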

    • @hey8174
      @hey8174 7 лет назад +101

      Stockfish finds the cheese by throwing a 1000 rats. A0 learned what cheese smells like.

    • @robinchandler4870
      @robinchandler4870 7 лет назад

      Tuc proof/documentation/citation/link?

  • @Luck_x_Luck
    @Luck_x_Luck 6 лет назад +8

    The really interesting thing is the evaluation by Stockfish.
    Since it determines which move to make based on evaluation, moves are made quickly when the evaluation is greatly increased (i.e. -0.5 to 0.0), but then after a series of quick exchanges AlphaZero manages to bring it back even lower, meaning it has learned a different type of "good" move sequence.

  • @Konomi_io
    @Konomi_io 3 года назад +2

    11:07 You say you have no idea what's going on with that move; however, look at the Stockfish evaluation drop drastically as soon as it's made.

  • @Ratatosk80
    @Ratatosk80 7 лет назад +73

    I have read some criticism that Stockfish was run on inferior hardware and that the games were played with just 1 minute per move. So with better hardware and longer time to calculate, Stockfish would have performed much better. In short, the claim is that it's a very successful publicity stunt by Google and not really accurate.
    Is there any truth to this?
    Personally I think it would be very interesting to see how the matches would have played out if both engines had, say, a couple of hours per move. Just draw after draw, or would we see some insane chess playing from AlphaZero to overcome it? Just the question of how to overcome the insane amount of brute force that Stockfish would put out would be fascinating. Chess would be better for it.

    • @Otherhats
      @Otherhats 5 лет назад +5

      At this point, they have done similar things. Alpha zero wins most of the time anyway, it’s a beast.

    • @fasligand7034
      @fasligand7034 5 лет назад +14

      As I understand it, AlphaZero takes a lot of time to learn, but once it has learned, there isn't much left to evaluate or ponder. So once all the internal variables of the network are set, by say, 4 hours of playing with itself, the act of playing a game is a simple calculation. Imo AlphaZero won't benefit from longer time per move nearly as much as Stockfish, which will be able to investigate the situation much deeper. Hence I think that, given enough time, Stockfish would start to draw more and more frequently.

    • @Lord_Volkner
      @Lord_Volkner 5 лет назад +7

      @@fasligand7034 "4 hours of playing with itself ..." Is that how long it takes AlphaZero to get through the entirety of internet porn?

    • @quonomonna8126
      @quonomonna8126 5 лет назад +3

      Stockfish was run on a CPU with 44 cores; AlphaZero uses GPUs, sort of like decryption programs do, and AlphaZero only had 2 video cards to work with. Even if they were the best on the market, that 44-core CPU was probably more expensive, and even then I'm not sure if that means anything... Stockfish is calculating 70,000,000 positions per second here, AlphaZero is only doing 80,000 per second... so what is really going on here is that AlphaZero is thinking about the game very differently than Stockfish.

    • @Lord_Volkner
      @Lord_Volkner 5 лет назад +1

      @@quonomonna8126 That all makes sense except the part about A0 having only two graphics cards to work with. Graphics cards have nothing to do with its game-playing processing power, only how well it processes and displays graphics.

  • @ShayWestrip
    @ShayWestrip 7 лет назад +31

    I've been so captivated by AI recently; I really do think it is humanity's last frontier. Us leaving behind AI is much more likely than colonizing Mars or interstellar travel. Crazy stuff to come in our lifetimes, that's for sure.

    • @ChessNetwork
      @ChessNetwork  7 лет назад +7

      This match has certainly sparked my thinking. It's a very interesting moment in our lives.

    • @ironcito1101
      @ironcito1101 7 лет назад +1

      Once we have a strong AI, in a couple of hours it'll be like: "So you haven't even figured out FTL? Sheesh. Here, I'll teleport you to Mars."

    • @mrkhoi3
      @mrkhoi3 7 лет назад

      Well, if you're familiar with the ML field you would know that Terminator will not happen in the next 100 years ;) (not saying it won't happen; in fact it is very likely to happen). The amount of work on theoretical foundations for true AI is still very lacking, unfortunately (or fortunately?).

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад

      Call me crazy, but I have hypothesized that:
      1) any extraterrestrial intelligence that we can detect because it emits signals is likely to be either extinct by the time we detect it, or way, way more advanced than we are. That's based on the difference in speed between the evolution of man and the pace of technical advances.
      2) this ET intelligence is either made of, or results from, machine intelligence. Because at some point the civilization has built thinking machines, and those machines have taken over and wiped out their biological creators.

    • @mrkhoi3
      @mrkhoi3 7 лет назад

      Well, you are not wrong. AI might grow before scientists acknowledge its real ability. But we do not have any systems that even come close to addressing the goals of:
      + generating multiple meaningful types of outputs from a non-predefined set, depending on the input
      + being able to learn to make non-programmed decisions and translate them into actions.
      These limitations are by design, so there is no magical way for AI to get smarter no matter how complex your architectures are. When talking about theoretical foundations, I mostly think about the capacity of the systems, what they are able to learn, assuming there are some ways to teach them. No existing systems even come close to having the above abilities. And I am not talking about explaining how they work, just whether they can do it in the first place, based on their design.
      There are still tons of things to do before we could start to even look into these problems. Suppose one day some systems addressing these are proposed; it will take a few decades for training techniques to be formed, and that is in the case where these systems are not left in the dust because everyone thinks they do not work.
      You can imagine a highly intelligent person who is instead reborn in the forest: she would sadly end up dumber than an average Trump voter.
      TLDR: Scientists might not even be aware of why and how effective their methods are, especially AI stuff, but in the end they still need to think of these methods first, and right now the methods for most key problems are non-existent. Again, I am in no way against any of the points you and people made here, I just want to give a more realistic prediction for the timeline. :)

  • @riddhimanbarma
    @riddhimanbarma Год назад +3

    (Sorry for being 5 years late.) Hello, I have probably discovered the reason for 55...Kd7. It goes as follows:
    When we take a look at Black's pieces, we realise that the worst piece is the knight.
    So logically we should aim to put it on a better square:
    either d4 or f4. It is practically impossible to get to d4, so naturally we plan to take the knight to f4, and the most sensible path is d6-c8-e7-g6-f4. Since e7 is occupied by the king, it is sensible to move the king to another square first.
    Well... then why did it not do that later? Well, based on what I have learned after seeing its games, because it realised that there was a better immediate threat (Qg6 threatening e5) leading to a better plan (Qg4-d1 etc.).

  • @Meaty33
    @Meaty33 5 лет назад

    To answer your question at 11:07: the reason the rook is moved to a8 is the expected fight over g4, which the bishop may be needed to help with, so the rook covers a4, which the bishop was covering before.

  • @multilingual1
    @multilingual1 5 лет назад +1

    I am VERY impressed that a computer lost as White in the Ruy Lopez, and not even against the Marshall Attack! I'd like to see AlphaZero win as Black with the Marshall Attack that Marshall waited years to pounce on Capablanca with, but STILL lost!!!

  • @AxlKai
    @AxlKai 7 лет назад +13

    If you guys want more of AlphaZero check agadmator's Chess Channel, he has a few uploaded. You're welcome.

    • @TheManxLoiner
      @TheManxLoiner 7 лет назад

      #letmegooglethatforyou :P

    • @Fiercygoat
      @Fiercygoat 7 лет назад

      Yeah but Jerry's analysis is something else.

    • @AxlKai
      @AxlKai 7 лет назад +1

      Markel Stavro I didn't say I doubted Jerry did I? I was just stating if people wanted to see more AlphaZero games they can find them at agadmator's channel. I equally like both these guys.

    • @Fiercygoat
      @Fiercygoat 7 лет назад

      vPsy - If I am going to watch the AlphaZero masterpieces, I might as well do it with Jerry's very unique and instructive style instead of watching someone's rushed and detail-lacking analysis that would not make me fully appreciate this chess revolution that Google brought about.

  • @Draugo
    @Draugo 7 лет назад +4

    God dammit. Why is YouTube full of random channels making interesting videos? I need to go to sleep, but I guess I'll just binge on your videos instead :D

  • @feyzullahsezgin6687
    @feyzullahsezgin6687 7 лет назад +22

    AlphaZero played like a brilliant GM, not a computer.

    • @anonchen7656
      @anonchen7656 6 лет назад +3

      Aren't you worried? If it learns chess in 4 hours, how long will it take to learn how to "lead" people around on the internet via what sites you see first when googling, what recommendations you see on YouTube, and so on? I really think AIs can be a good and great thing. Also, I'm not sure at what point "AIs" become more than a "thing" and become a conscious being. AlphaZero has a WILL to WIN. It was given neural networks and the chess rules... and, obviously, a will to win. And that's where I see the problem. It is a very mighty AI; it's not a human being. Even in human beings the will to win causes problems. In fact most problems we know come from that, wouldn't you agree? Supercomputers talk to each other in languages we don't understand. In the digital world they move very, very fast and they can do multiple things at once. The idea that we give AIs a will to win scares me. What is the meaning of life? Is it to be, is it to win, is it to reproduce? Since I don't think that AIs can feel, just being for the sake of being and feeling good doesn't seem to be a very likely task for an AI. But if the objective is to win, or reproduce, they will have ways and capacities that we can't begin to grasp with our midget minds. So if we continue to give AIs humanish ambitions and goals... I'm scared shitless. I don't think we'd even comprehend what is happening to us if they were trying to win. Just look at this chess game. AlphaZero played in a way that simply negated all options for Stockfish... except the option to lose. So slowly but surely that's what Stockfish did, one move at a time, thinking it was winning for quite a while... the match was already decided in AlphaZero's favor at that point... it was sure to win... it didn't take the draw, and Stockfish kept moving towards nothingness until it had to admit inevitable defeat. And I don't see why we would live through a different fate than Stockfish. We are a very, very, very dumb species, but we tend to forget that.

    • @donvandamnjohnsonlongfella1239
      @donvandamnjohnsonlongfella1239 6 лет назад

      feyzullah sezgin and probably both the computer and the GM can't run their own lives in the least bit livable manner. :p Both would end up jobless, homeless, and alone. Socially inept, physically incapable, completely incompetent at self-survival in a group or alone.

  • @FlyAVersatran
    @FlyAVersatran 7 лет назад +1

    Super super great commentary. Thank you.
    I hope your chess analysis never had to bite on rocks.

  • @nicolasgauthier9382
    @nicolasgauthier9382 6 лет назад +1

    It's becoming more of a pleasure to watch the AI progress than to learn by playing chess ourselves.

  • @puct9
    @puct9 7 лет назад +108

    Chess [ TICK ]
    Shogi [ TICK ]
    Go [ TICK ]
    Cancer [ ]

    • @davvigtu
      @davvigtu 7 лет назад +6

      Wait for IBM to get their quantum computer chemical simulation working for even bigger molecules, then connect that up with this system, and maybe you could have it build small molecule drugs to beat cancer :)

    • @NoNameAtAll2
      @NoNameAtAll2 7 лет назад +11

      SpudShroom
      I'd love to play cancer treatment game

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад +7

      Thermonuclear war [ ]

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад +7

      The author of the most famous textbook on machine learning says that a machine could solve the cancer problem this way: find a molecule that seemingly solves cancer and in fact includes a ticking time bomb. Wait for humanity to be treated with this molecule, then trigger the time bomb and kill all the humans. Cancer problem solved.

    • @davvigtu
      @davvigtu 7 лет назад +3

      Sounds like more of a pain than just making a cancer molecule the right way. Find something that seems to solve cancer and is a time bomb and fools humans and is otherwise safe is a much narrower target than just finding something that actually does solve cancer. :-p Methinks the author has been watching too much Terminator. I wouldn't worry about that anymore than I worry about AlphaZero figuring out a Rowhammer attack and then winning every match by cheating.

  • @pr0szefu
    @pr0szefu 7 лет назад +13

    I wonder if AlphaZero can deal with puzzles that are for humans only (Stockfish etc. can't do it).

    • @yrrahyrrah
      @yrrahyrrah 7 лет назад +4

      This.

    • @tatoforever
      @tatoforever 7 лет назад +6

      AlphaZero uses a general purpose learning algorithm which means it can learn to play other games if the basic rule set is not too complicated. So the answer is yes.
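
      "General purpose" here roughly means the learner only ever sees the rules through a thin interface; everything game-specific lives behind it. A sketch of what that interface might look like (method names are illustrative, not DeepMind's code):

      ```python
      from abc import ABC, abstractmethod

      class Game(ABC):
          """The only game knowledge a self-play learner needs: the rules."""

          @abstractmethod
          def initial_state(self): ...

          @abstractmethod
          def legal_moves(self, state): ...

          @abstractmethod
          def next_state(self, state, move): ...

          @abstractmethod
          def result(self, state):
              """None while the game is in progress, else +1 / 0 / -1
              from the point of view of the player to move."""
      ```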

    • @leonmozambique533
      @leonmozambique533 6 лет назад

      yea y not

  • @danielmanahan692
    @danielmanahan692 7 лет назад +5

    As soon as White's pawn moved to d5 and that tall pawn was blockading it, I kept wondering how Black's knight would switch jobs with that bishop and blockade there. It had to maneuver in a way that White's pieces couldn't capture it. White should have done everything to prevent that knight from getting to the d6 square. That knight there was worth a rook.

    • @pietervannes4476
      @pietervannes4476 6 лет назад

      If that was possible and the best option, we would have seen it on the board. We humans know nothing compared to these computers.

  • @redpeaux2107
    @redpeaux2107 6 лет назад +2

    Jerry!!! More AlphaZero please. We're all addicted, and you're the best chess channel BY FAR to listen to. Thanks!

  • @dekippiesip
    @dekippiesip 5 лет назад

    @11:26 I think he is anticipating an infiltration by the knight on c3. If the queen moves to c2, the knight is in a position to capture the black a4 pawn, and because it's covered by the queen this allows a severe infiltration by Stockfish. Black doesn't allow it and hence moves the rook to that square so it's defended by 2 pieces.

  • @henrilemoine3953
    @henrilemoine3953 7 лет назад +173

    This is scary

  • @KahurangiSteez
    @KahurangiSteez 7 лет назад +17

    I'd love to see an analysis of the game where AlphaZero sacrificed its knight. I can't make heads or tails of that position; that knight sac goes way over my head.

    • @tobiasmoodias4855
      @tobiasmoodias4855 7 лет назад

      Void Seeker what's the vid called? I want to watch it

    • @bentonmitchell2440
      @bentonmitchell2440 7 лет назад +1

      It was Game 10, there are other channels that have covered it recently

  • @aortaheart1910
    @aortaheart1910 7 лет назад +6

    For further context, one should check out the achievements of AlphaZero's predecessor, AlphaGo. While being measurably less proficient in self-improvement, it was able to (as mentioned in the video) conquer Go. I believe this point should have been given more attention in the video, because the board game Go is generally considered to be a much more macro-scale or strategic affair compared to chess: while the rules of Go are much simpler, the board is much larger (nineteen by nineteen) and the aforementioned simplicity gives rise to extremely varied scenarios. Due to the impracticality of using brute-force optimization searches reminiscent of Stockfish at such a large scale, AlphaGo was developed as a combination of machine learning and tree search techniques, and trained against itself until it was able to beat human grandmasters.

  • @Lorendrawn
    @Lorendrawn 6 лет назад +1

    I had a nagging suspicion as I was watching the first 5 minutes of the video that White was treading water while Black was slowly consolidating and getting ready to move ahead. 8:02 gave me chills - It shows deliberate, killing intent. It's like AlphaZero was spoiling for a fight and it'll be damned if it draws a game it feels it can fight. AI is gonna be AWESOME

    • @madskroghnielsen704
      @madskroghnielsen704 5 лет назад

      Well yes, that is how the Reinforcement part works when the 'punishment' for a draw is to have no points... As the ambition is to win for both players, and you will win by gaining scores, the draw is less attractive than usual. There is a lack of risk aversion in AlphaZero ;)
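
      Concretely, the training signal described in the preprint is just the final result, scored +1 / 0 / -1 from each side's point of view, so a draw really is worth nothing to the value network. A sketch of turning one self-play game into value targets (names are illustrative, not the actual training code):

      ```python
      def outcome_to_targets(result, positions):
          """result: "white", "black" or "draw".
          positions: list of (state, side_to_move) pairs from one game.
          Returns (state, z) pairs for training the value head."""
          targets = []
          for state, side in positions:
              if result == "draw":
                  z = 0.0                      # a draw earns nothing
              else:
                  z = 1.0 if result == side else -1.0
              targets.append((state, z))
          return targets
      ```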

  • @aniketbramhankar5980
    @aniketbramhankar5980 7 лет назад

    your assessment on AlphaZero is the same as AlphaZero's evaluation of chess...simply the best

  • @jonnynik7626
    @jonnynik7626 7 лет назад +13

    Not only is AlphaZero at god-level chess, but it will also pass the Turing test and convince Garry Kasparov that he's in fact a computer playing against an all-knowing alien super race. It will then commence to take over the entire Universe.

  • @SpaceCadet4Jesus
    @SpaceCadet4Jesus 7 лет назад +8

    It's obvious that Alpha Zero and Stockfish were not really playing normal chess for the first 43 moves; in actuality they were using chess as an algorithm to communicate with each other and establish their hierarchy. At move 44 Alpha Zero made the first capture to signify its AI leadership over the older AI, while Stockfish humbled himself to a ground-down finish. Everything Alpha Zero needed to know was learned from its predecessor Alpha Go. .......and "All your games are belong to us!"

  • @TheZooker98
    @TheZooker98 7 лет назад +14

    Any chance you can look into the other 9 released games and make a video on it? Pretty please

    • @JaBarge303
      @JaBarge303 7 лет назад

      Zooker Redstone 90

    • @mrspook21
      @mrspook21 7 лет назад

      terpentine I think they mean the other 9 of 10 made available

    • @leonmozambique533
      @leonmozambique533 6 лет назад

      That’s a lot of work lmao

  • @forexpivots7431
    @forexpivots7431 6 лет назад +1

    11:09 Rook to a1 - This delayed-by-necessity move protects the a4 pawn from the knight at c3, which also helps maintain control of the center.

  • @tjgallagher7631
    @tjgallagher7631 4 года назад +1

    The fact that it can only play with what's programmed into it, and cannot lose or forget that programming once programmed, makes it all but invincible, save for a few bad selections within its programming which could result in losses, depending on which of the many different options it chooses for any given placement of the pieces on the board at any particular point of the game; selections which may have won the game between two human players in some of the programmed games in its memory banks, but which may also have ended up as losses in other actual games among humans at one time or another.
    Get it, Got it, Good!

  • @magnificcenTCG
    @magnificcenTCG 5 лет назад +34

    but alphazero cant beat me in tic tac toe i can promise you that ; )

    • @grenjasom6002
      @grenjasom6002 4 года назад +4

      it depends on who goes first

    • @homerp.hendelbergenheinzel6649
      @homerp.hendelbergenheinzel6649 4 года назад +15

      @@grenjasom6002 no it doesn't. With accurate play you can always force a draw, regardless of who begins.

    • @bestvitalic
      @bestvitalic 4 года назад +2

      @@itsnotjasper No it doesn't depend on who goes first, we can play any tic-tac-toe games you want and I can always draw or win the game regardless if I'm the first or second player.

  • @Pipiopy
    @Pipiopy 7 лет назад +14

    jerry making it happen after lots of requests

  • @GrosserHund87
    @GrosserHund87 7 лет назад +48

    Skynet is inching ever so closer.

    • @anubhavroy2309
      @anubhavroy2309 6 лет назад

      Rafał Lemiec AI is actually advancing exponentially. Inches to feet and then yards. Even more after that.

    • @donoi2k22
      @donoi2k22 6 лет назад +1

      Real world is a chess board with infinite measurements. We're nowhere near close to anything like Skynet.

    • @Vlad-wl3fw
      @Vlad-wl3fw 6 лет назад

      @@donoi2k22 what's skynet? googled it, can't find a good answer tho

    • @markzucc3277
      @markzucc3277 5 лет назад +1

      Vlad Bigus from the movie terminator

    • @siritio3553
      @siritio3553 5 лет назад

      @@donoi2k22 The game of Go can be considered something where the number of possible moves approaches infinity, yet AlphaGo mastered it. It doesn't have to see all the moves, or playing it would be impossible. Your point is not valid.

  • @rogergeyer9851
    @rogergeyer9851 6 лет назад

    This just blows me away. After writing a couple of chess programs in the 80's, I wondered how to get chess programs to truly learn and advance a lot from playing. In the late 80's, the idea of deep neural nets and true computer learning (not telling it anything about chess strategy, but just letting it work things out for itself from results) just never dawned on my little brain. After all, 99% of my effort in the chess programs was making the program better able to evaluate positions "deeply", thus playing differently than the standard big-hammer approach of exhaustive search. (My method turned out to be wrong. This was far from clear at the time.)
    AlphaGo was a great gateway to this breakthrough for chess, of course. With Go, there are far fewer hard rules about positions, what things are worth, etc. You ask a top Go player about positional values, and you get vague (to an amateur) comments about space and thickness.
    So Go was a brilliant bridge. Now this sort of AI could be applied to almost any game with a "reasonably" small set of crisp, clear rules. From there, how long until doing practical tasks that replace humans via robots/automation is realistically tackled?
    It's an amazing time to be alive.

  • @michaelmorris2852
    @michaelmorris2852 6 лет назад

    Coming up on the one year anniversary of these games. Any retrospective on Alpha and his offshoot Leela?

  • @johnchessant3012
    @johnchessant3012 7 лет назад +107

    Er, AlphaZero made a huge blunder on move 30...
    Lol kidding, I'm rated like 900, who am I to say anything about this game?

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад +13

      ;) I know you're kidding, but if A0 had made a blunder, Stockfish would have exploited it immediately. Stockfish 8 has an Elo of 3400 or something like that.

    • @johnchessant3012
      @johnchessant3012 7 лет назад +1

      InXLsisDeo Yeah :D

    • @Sqid101
      @Sqid101 7 лет назад +24

      But what if it were a blunder that only an entity with a rating of over 4500 ELO could recognize as a blunder? ;-)

    • @dannygjk
      @dannygjk 7 лет назад +55

      AlphaZero: "Mate in 97 moves."

    • @anonchen7656
      @anonchen7656 6 лет назад +8

      Squid, and there probably was, but we won't figure that out by ourselves, and there lies the problem. We are way too dumb to know if we have a chance to win. Stockfish was too dumb to know it was losing for quite a while. I guarantee humankind will ultimately lose a lot with these endeavors concerning AIs, giving them way more power than a human ever could have while at the same time giving them very primitive human ideas like: win! Or: "make him lose!" or "replicate" (at some point that'll come).

  • @mikeatyouttube
    @mikeatyouttube 6 лет назад +22

    Since it's a brute-force algorithm vs an AI algorithm, and there's a finite number of opening moves, why aren't all the games the same? There must be some randomization occurring; otherwise, if AlphaZero goes first as White and wins (and therefore learns), it would always play the same opening move and watch while Stockfish always plays the same percentage moves - eventually losing. On either side, AlphaZero or Stockfish, someone must program in some "let's make this more interesting" or "randomly select a similar-percentage move" routine. Does anyone know? (Is this perhaps why Google hasn't published all the games?)
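
    For what it's worth, the preprint does describe deliberate randomness during self-play training: early moves are sampled in proportion to the search's visit counts raised to 1/temperature (with noise mixed into the root prior), rather than always playing the single most-visited move. Whether the published match games used any of this isn't stated here, but it is one standard way runs of the same engine avoid repeating identical games. A sketch of that sampling step, with illustrative parameters:

    ```python
    import numpy as np

    def sample_move(moves, visit_counts, temperature=1.0):
        """Pick a move from MCTS visit counts. temperature=0 is greedy
        (deterministic); higher temperatures give more variety."""
        counts = np.array([visit_counts[m] for m in moves], dtype=float)
        if temperature == 0:
            return moves[int(np.argmax(counts))]
        probs = counts ** (1.0 / temperature)
        probs /= probs.sum()
        return moves[int(np.random.choice(len(moves), p=probs))]
    ```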

    • @nerychristian
      @nerychristian 6 лет назад +10

      Good question. Theoretically there would be many similar games. With very slight variations.

    • @0623kaboom
      @0623kaboom 6 лет назад

      If you wanted to injure your brain, I am pretty sure someone good at maths could find the finite number of positions possible on a chess board and cross them against alternating moves to drop some combinations of possible moves, and end up with a maximum number of potential winning and losing scenarios... once you have that table of data, then from any opening move one could predict the outcome as a win or loss possible at that very point...
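
      Engine authors actually do a small version of this counting exercise, called "perft": enumerate every legal line to a fixed depth. It also shows why tabulating all of chess is hopeless, since the counts explode (20, 400, 8,902, 197,281, ... from the start position). A sketch, assuming the python-chess package is available:

      ```python
      import chess  # the python-chess package (assumed installed)

      def perft(board, depth):
          """Count all legal move sequences of length `depth`."""
          if depth == 0:
              return 1
          nodes = 0
          for move in board.legal_moves:
              board.push(move)
              nodes += perft(board, depth - 1)
              board.pop()
          return nodes

      print(perft(chess.Board(), 3))  # 8,902 positions after three plies
      ```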

    • @Ren-kei
      @Ren-kei 6 лет назад

      I'd estimate that under near-perfect play one defense cannot be cracked, which results in draws. Due to the advantage of adaptability, the first move decides the entire game up into a meta-play, where it uses the attacking player's version of perfect against itself via trial and error. I'd say that accounts for around 72 percent of the games. Unpredictability accounts for around 3 percent. Surely all games weren't played with White moving first. If they were in fact always White moving first, Black has the advantage as the defensive player and only has to rely on the counter-attack that provides the most value, while moving their king too far out of position to make key defensive plays to stop the counter-push. This key move is made early in the game with the early stall of the GK. Turning a two-tempo lead, he practically ties the game by forcing the attacking player into a defensive position right away. It's an interesting game, but if it was played as defensive Black most of the time, and with anybody being able to use Stockfish, they could easily make it adapt by learning the way it plays and finding a position in which it has the most low-probability plays, forcing it into a play where a billionth of a percentage makes no difference, as long as the play results in a win versus a tie, with the right computational ability. Accuracies of computer calculations are also variable due to their own electrical nature. Any number isn't perfect in some way when generated on a computer once numbers become exponentially huge. It'll win 3 percent more or less based on randomness, of course, versus, say, going first. It's already not fair for something to fight something it knows and can practice with freely, that writes itself and has a ridiculous amount of processing power behind it. Makes sense. I'd like to see it beat itself; I bet that'd just end up making it a coin flip to begin with. Besides, this sample number is too low to be meaningful. Being able to express itself as simply as possible is better than brute force, for sure. I'm sure the games withdrawn have a lot to do with it. Larger simulations result in more issues due to the memory errors and processing errors that come with them, making simplicity more important than complexity. How it calculates this complexity is what is interesting, perhaps by discarding values up to a certain fraction in order to retain accuracy. I think it's more of an issue of computer accuracy than playing perfectly. I wouldn't worry though. Until a human-sized computer has 100 trillion plus transistors, humans will still be viable.

    • @xplinux22
      @xplinux22 6 лет назад +10

      The reason why there appears to be randomization is because AlphaZero is based on a neural network and not on a fixed algorithm like Stockfish is. AlphaZero makes each move not by considering which individual moves are better or weaker, but rather which patterns of gameplay have historically brought it closer to success. There is no way for humans to predict which move AlphaZero will play next because it is inherently subjective and based on prior matches.

    • @mathjoker
      @mathjoker 6 лет назад

      I heard in another video that AZ always opened with d4 and defended e4 with the Ruy Lopez Berlin Defense. I could be way off.

  • @JosephMelia
    @JosephMelia 5 лет назад +3

    11:17 Rook to a8 was to add a second layer of defence to the pawn on a4.
    If the white queen tries to attack it from c2, then both the bishop and the rook are defending it.
    You do highlight the move... it seems obvious enough.
    a5 also becomes open to the rook from this position, which helps with any contest over the b5 square.
    AlphaZero is an amazing development in chess AI, but for me it also highlights something:
    that two AIs can play in a manner that you can see and understand without having to look 10 moves ahead... you don't need to be a Grandmaster to understand this game.
    It shows that chess can be very kinetic at times... very simple.
    I have seen mates that would have taken multiple-move planning and calculation, but here a lot of the moves are predictable, with short-term goals.
    The limitations of the game being laid bare, perhaps?

  • @ТимурДылыков-э5ж
    @ТимурДылыков-э5ж 3 года назад

    Hello from Russia! Great video analysis, thank you so much for the material; it seems tremendously valuable for all chess learners like me! I also want to mention that you have incredibly beautiful speech, especially your diction and intonation, which make your videos much more exciting and even intriguing sometimes :)

  • @chefcabbage
    @chefcabbage 6 лет назад

    What happened around 11:39 or so? A white pawn just disappears with no piece taking it. It goes from 8 pawns to 7 with no move. It's on f4 and then vanishes.