AlphaZero vs Stockfish Chess Match: Game 3

  • Published: Jan 1, 2025

Comments • 475

  • @chess
    @chess  7 лет назад +42

    Here's our news report with the background on this historic event in chess! Biggest since Kasparov vs Deep Blue? www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match

    • @InXLsisDeo
      @InXLsisDeo 7 лет назад

      AlphaZero comes from London, no ?

    • @felixknowloveyoudlovetokno2324
      @felixknowloveyoudlovetokno2324 7 лет назад

      how come only half of the alphazero vs stockfish games are up?

    • @champaigne16
      @champaigne16 6 лет назад

      +Felix Wouldn't you like to know! And why would the initial analysis begin at Game 3? This AlphaZero may be the future of computer chess, so why skip over its first 2 wins ever vs. Stockfish???

    • @George4943
      @George4943 6 лет назад

      What were the conditions of the contest?
      Was the hardware of equal power? If not, such a match should be arranged.
      Was the hardware of equal storage capacity? If not, such a match should be arranged.
      What was the time on the clock? One second per move? One second per match? One millisecond per move? One millisecond per match?
      How much time should there be for a computer blitz match? One nanosecond per move? One nanosecond per match?
      Can Alpha Zero start with a position it did not create and find the best move?
      Can we give it some human chess games and have it show where it would have done differently? The chess analysts would have something to talk about I bet.
      [How can I get a copy of AlphaZero? I want to turn it loose on the stock market.]

  • @leerobbo92
    @leerobbo92 7 лет назад +217

    The most amazing thing about Alpha Zero? It doesn't use opening databases (yet it plays some of the best-known lines in chess), it doesn't use endgame tables, and it "only" evaluates 80,000 positions a second compared to Stockfish's 70,000,000. That means it's using an entirely different strategy for evaluating positions, and whatever this strategy is, it's finding long-term positional value in its moves, despite Stockfish considering some of those moves to be inaccuracies or even mistakes. Stockfish is looking at nearly 1,000x as many moves as Alpha Zero during a game, yet it's being absolutely crushed. Computer chess just reached a whole new level.

    • @hddnnplnvw
      @hddnnplnvw 7 лет назад +16

      The early d-pawn sacrifice without the help of a database proves that new level very impressively.

    • @leerobbo92
      @leerobbo92 7 лет назад +1

      AB CD Crazy if true! I might've misread and missed a zero off.

    • @Kirbyoh
      @Kirbyoh 7 лет назад +30

      I have some knowledge of machine learning. Without seeing the exact code, I would describe the difference like this: when Stockfish evaluates a position, it evaluates the position on the board and where it can go from there. When AlphaZero evaluates a position, it has residual "memory" of all the similar positions it has seen in its training, and has an evaluation of that type of position. It already has an idea of what kind of moves have been successful in the past.

    • @leerobbo92
      @leerobbo92 7 лет назад +6

      Kirbyoh That's my (very limited) understanding too, so that's reassuring! It seems as though it values a space advantage out of the opening very highly, so I imagine that it recognises that certain pawn formations are weaker than others and goes from there. The rook moves at 6:40 suggest to me that it also favours static pawn structures, where white has all the moves and black has limited choices. Really fascinating that it has "taught" itself those concepts, even if it's just the result of pure statistics!

    • @Myrslokstok
      @Myrslokstok 7 лет назад +1

      It is Google, they probably cheated somehow.
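
A minimal sketch of the trade-off discussed in the thread above, assuming the python-chess library is available: a tiny fixed-depth negamax whose strength depends almost entirely on the evaluation callback it is given. Stockfish-style engines spend their minute per move on tens of millions of cheap handcrafted evaluations; AlphaZero spends a budget nearly 1,000x smaller on far more informative network evaluations. The material_eval below is a deliberately crude stand-in for either, not real engine code.

```python
# Toy illustration, not either engine: a depth-limited negamax whose strength
# comes almost entirely from the quality of the `evaluate` callback it is given.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_eval(board: chess.Board) -> float:
    """Crude handcrafted evaluation (positive = good for the side to move)."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth, evaluate):
    """Search a small tree; the evaluation function does the real work at the leaves."""
    if depth == 0 or board.is_game_over():
        return evaluate(board), None
    best_score, best_move = -float("inf"), None
    for move in board.legal_moves:
        board.push(move)
        score, _ = negamax(board, depth - 1, evaluate)
        board.pop()
        if -score > best_score:
            best_score, best_move = -score, move
    return best_score, best_move

score, move = negamax(chess.Board(), depth=3, evaluate=material_eval)
print(score, move)
```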

  • @intellagent7622
    @intellagent7622 7 лет назад +226

    Theres always a bigger fish Mr. Stockfish

    • @petergreen5337
      @petergreen5337 11 месяцев назад +1

      😂absolutely CORRECT and true well said

    • @intellagent7622
      @intellagent7622 11 месяцев назад

      thank you I cant believe I wrote this 6 years ago OMG hahaha@@petergreen5337

  • @chessbrainiac
    @chessbrainiac 7 лет назад +201

    This was an amazing game! Stockfish was really clueless until it was too late. Alpha Zero played with the creativity of Tal, the dynamism of Kasparov and the precision of Carlsen, but without any of their human flaws. If two humans had played this game, the player with the black pieces would ask "Where did I go wrong?" and the player with the white pieces would answer "I don't know!"

    • @hddnnplnvw
      @hddnnplnvw 7 лет назад +10

      Qh8 was a sign of pure despair. A0 is truly a game changer.

    • @sanantonio4828
      @sanantonio4828 6 лет назад +9

      No qualities of Anand in Alpha Zero ?

    • @garehnkalloghlian6052
      @garehnkalloghlian6052 4 года назад +5

      This comment has a nice rhythmic cadence

    • @georgejoseph7519
      @georgejoseph7519 4 года назад

      Pretty much sums it up !

    • @lukasvandewiel860
      @lukasvandewiel860 7 месяцев назад

      AlphaZero fitted a high-order polynomial to chess, and this polynomial is of such high order and such good fit that it can beat anything, except a higher-order polynomial. A bigger neural network, equally well trained, will defeat this AlphaZero.

  • @wfcyellow
    @wfcyellow 7 лет назад +336

    I don't know what all the fuss is about. I beat AlphaZero easily in a match recently.
    We were playing tennis to be fair. And it did actually take a set off of me. But still.

    • @chessbrainiac
      @chessbrainiac 7 лет назад +20

      You should have served more to his backhand, then you would have won in straight sets. But hey, you're human, so you're excused.

    • @valar_dohaeris7387
      @valar_dohaeris7387 7 лет назад +1

      yeah if you did that you would have gotten more aces etc

    • @wfcyellow
      @wfcyellow 7 лет назад +1

      OMG are you actually Ken West?? THE Ken West?!?!?!?

    • @felixknowloveyoudlovetokno2324
      @felixknowloveyoudlovetokno2324 7 лет назад

      No, he's Keanu Reeves... duh

    • @peteralainszpiriev4750
      @peteralainszpiriev4750 6 лет назад +1

      whyaremeninmyshoes In fact I liked your comment, but my comment has only got 1 plus so far, and that was from myself, so your comment clearly overshadows mine. You know how it goes :)

  • @cliffbartlett5799
    @cliffbartlett5799 7 лет назад +99

    I'd like to see Alpha Zero try to destroy the fortress Karjakin built vs Carlsen during the WCC. Something Stockfish evaluated as -1.28 would be very interesting.

    • @cliffbartlett5799
      @cliffbartlett5799 7 лет назад +18

      Of course! I'd also like to see Houdini's and Komodo's evaluations of Alpha Zero vs Stockfish; it would be interesting to find out which engine recognizes that Alpha Zero is winning first. In this game in particular, Stockfish didn't recognize defeat until it was far too late. Huge fan, thank you for responding :)

    • @sharpness7239
      @sharpness7239 7 лет назад +1

      probably a0 wouldnt have played f4 in the first place

    • @mineralfellow
      @mineralfellow 7 лет назад

      It would be interesting to see how it evaluates this position: www.chess.com/news/view/will-this-position-help-to-understand-human-consciousness-4298 . Although I am not entirely clear if it "evaluates" positions in the "X.XX" way that we are used to.

    • @Yoni123
      @Yoni123 6 лет назад +1

      Why aren't they doing this? "A0 plays famous positions", on YouTube!

    • @Kor88Di
      @Kor88Di 6 лет назад

      Can you specify which game that was? Please? Thank you

  • @Add9Sus4
    @Add9Sus4 7 лет назад +87

    this is absolute insanity. Carlsen said it's like watching how an advanced alien race would play chess... that's a very good description I think.

    • @hddnnplnvw
      @hddnnplnvw 7 лет назад

      Wow! Source for this quote?

    • @Add9Sus4
      @Add9Sus4 7 лет назад +12

      As it turns out I was wrong, it wasn't Carlsen who said that, it was Carlsen's coach Peter Heine Nielsen: www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
      Not sure if Magnus himself has said anything on the record about AlphaZero but it would be interesting to hear his thoughts.

    • @99growlithe99
      @99growlithe99 7 лет назад

      Add9Sus4 I'm not really knowledgeable about chess so this may sound ignorant, but how much use can the world's #1 chess player get out of a human chess coach? :P Does he help with things aside from just game training?

    • @Add9Sus4
      @Add9Sus4 7 лет назад +5

      Honestly I'm not entirely sure, but I think the coach helps with tournament preparation by doing things like researching opponents' strengths and weaknesses, helping Carlsen find particular lines or openings to either look for or avoid, developing an overall tournament strategy, that kind of thing.

    • @columbus8myhw
      @columbus8myhw 7 лет назад

      Two heads are greater than one, as long as they know how to work together, I think. It doesn't matter who those two heads happen to be.

  • @edwardtang3585
    @edwardtang3585 6 лет назад +10

    That Rxc5 exchange sacrifice deserves a triple exclamation mark, amazing how profound it was

  • @alpulley4894
    @alpulley4894 7 лет назад +21

    The positional insight of Magnus and tactics equal or greater than Stockfish, amazing.

  • @dinkleberg794
    @dinkleberg794 7 лет назад +173

    RIP Stockfish that was brutal

    • @MrSupernova111
      @MrSupernova111 7 лет назад +9

      This was not Stockfish against A0. It was a gutted version. Run this game in any chess engine and tell me what you think of Rf8 at 15:37. I'm sure any 1500+ player can see that was a stupid move. No decent engine would play Rf8 unless it was tweaked to play weak moves.

    • @Stockfish1511
      @Stockfish1511 7 лет назад +9

      I analysed their games on a strong Stockfish engine and it thought many of its own moves were pretty bad. Something went wrong in these tests. Surely the Stockfish developers will demand a fair and controlled rematch, and Stockfish will totally massacre A0.

    • @stagna1959
      @stagna1959 7 лет назад +6

      In the game both players had 1 minute per move. Apparently that is enough for the AI, but not enough for an engine that is programmed to consider and evaluate every stupid legal variation. Give your engine only 1 minute; I bet it suggests pretty stupid moves quite often.

    • @MrSupernova111
      @MrSupernova111 7 лет назад +3

      @stagna1959, many of us already did, and SF found the blunders instantly. Perhaps you should be the one doing the fact-checking before running your mouth loose.

    • @Stockfish1511
      @Stockfish1511 7 лет назад +2

      Stockfish didn't have its opening theory in these games either.

  • @dude157
    @dude157 7 лет назад +30

    I have been going through the released games. AlphaZero is the best I have ever seen at creating space to manoeuvre its pieces around while keeping its opponent contained. I think most of us deep down already had some inkling that strategic long-term positional play was the optimal way to play, but brute-force engines are too good at spotting tactical weaknesses and exposing our weak, fleshy human brains' limited ability to crunch the numbers. Alpha Zero can play the long positional game and control the space, but it also has the number-crunching ability to not fall into the same traps. This really is the biggest monumental step in chess since Kasparov lost to Deep Blue. It's exciting because it paves the way for some really instructive games where we can actually learn how to play from computers, and develop new strategy, rather than just being out-calculated by them.

    • @jackmuller5478
      @jackmuller5478 7 лет назад

      While I agree with most of what you said, we will never be able to learn from computers of any kind;
      we are simply inferior to them in brainpower.

    • @argschrecklich9704
      @argschrecklich9704 4 года назад

      @@jackmuller5478 A single human brain is superior to every computer that exists on this planet at the moment. Think of all the things you can do that a computer can't. Simply reading these words and truly understanding what I, a different mind, am trying to tell you is impossible for a computer.
      And yes, since AlphaGo plays like a real thinking brain and not like a calculator, we actually can learn from it. It's a neural network. Our brains are neural networks. It plays like a human, if a human could play perfectly, unlike Stockfish, which, according to Magnus, is "an idiot that beats you every time."

    • @smokedplus
      @smokedplus 3 года назад

      @@jackmuller5478 Computers are still slower than brains; one second of brain activity takes a computer 45 minutes to process.

    • @mattocardo1002
      @mattocardo1002 3 года назад

      @@smokedplus Yes, but not for long. Alpha Zero shows that computers can learn like humans do, through repeating the same thing over and over again. And with quantum computers we will soon be outperformed, probably by a very big margin. So it will be interesting to see what the next decades bring.

  • @eternalstudios4502
    @eternalstudios4502 4 месяца назад +1

    Stockfish: I’m the best bot at chess you are still a rookie.
    AlphaGo: let me cook.
    Stockfish: What are you going to cook?
    AlphaGo: Fish

  • @wesselconway3920
    @wesselconway3920 7 лет назад +20

    It feels human because rather than raw calculation it taught itself how to play chess, probably similar to the method used to learn Dota 2, which involved playing thousands of games against itself. This engine has a taught intuition.

  • @tiberiusvetus9113
    @tiberiusvetus9113 7 лет назад +28

    I thought "Zero" in AlphaZero meant that Google trained completely from scratch without studying any chess history.

    • @dannygjk
      @dannygjk Год назад +1

      That is correct.

  • @winstonsmithamm
    @winstonsmithamm 7 лет назад +12

    1:42 . . . "amateur chess players of all levels." No, every single 800-1100 range game ever played in history starts 1. e4 e5 2. Nf3 Nc6 3.Bc4 Bc5 4.O-O Nf6 5. Nc3 O-O

    • @milz7129
      @milz7129 7 лет назад +4

      Agamemnon Butterscotch
      That's not even amateur. That's beginner. I'm 1500 in 30 min chess and I still consider myself a beginner. Idk if there is some technical standard though.

    • @jackmuller5478
      @jackmuller5478 6 лет назад +1

      everything under 2000 is considered amateurish
      everything above is considered expert
      above 2500 is grandmaster level

    • @dannygjk
      @dannygjk 6 лет назад

      Many amateurs memorize lines and play them. They don't have to be strong players to do that.

    • @gamerawy2546
      @gamerawy2546 5 лет назад +1

      Please can anyone tell me why at 11:33 Stockfish didn't take the rook on d4?

    • @usuallinkinultimate
      @usuallinkinultimate 4 года назад +1

      @@gamerawy2546 Qxg6+, the pawn is pinned. Then, since the f3 rook is no longer pinned, it can move towards the king and it will be mate soon. That's also not a position that happened in the game.

  • @waynechen5639
    @waynechen5639 7 лет назад +4

    Stockfish looks more like the titanic than a chess titan.

  • @AAntichrisTT
    @AAntichrisTT 7 лет назад +2

    Listen, to the people saying 'this isn't under fair conditions' or 'it was only a weak version of Stockfish' (which would still be insanely powerful): the point is that it was only given the rules of the game, and in 4 hours it could take down an engine as powerful as Stockfish, something that took years to develop and refine. It didn't even beat it by beating it at its own game of who can see furthest down the search tree. It beat it by out-thinking it. Literally. It thought about how to defeat it and succeeded.

  • @yakovvidberg7035
    @yakovvidberg7035 10 месяцев назад +1

    When you look at this game with today's Stockfish, it seems to be very angry about how its younger self played this game

  • @wesselconway3920
    @wesselconway3920 7 лет назад +5

    NOOOO WAYYYY I'VE BEEN WAITING FOREVER TO HAVE THIS CHESS ENGINE.

  • @marcwordsmith
    @marcwordsmith 7 лет назад +1

    a line is given at about 11:31 in the video, featuring the move Rd4 by White, but it is never explained why Black cannot simply take the rook on d4 with his queen. All I see after ... Qxd4 is Qxg6+ Kh8, Qh6+ Kg8 and then various continuations in which White can force Black to sac his queen for the second rook in order to avert mate ... but then Black is still up material. Help, anyone?

    • @homerp.hendelbergenheinzel6649
      @homerp.hendelbergenheinzel6649 Год назад +1

      Despite watching the video for the 3rd or 4th time now, I only managed to spot this today. Your comment deserves more attention, since I also couldn't come up with a benefit for sacking an entire rook. But I'm not a good chess player ( and given your line of continuation, you are a way better player than me).
      I was glad I found your comment, though.
      As Danny said, he "spent the last 48h on the alphazero games", maybe it was just a mouse-slip in the variation or due to cognitive dissonance.

    • @marcwordsmith
      @marcwordsmith Год назад

      @@homerp.hendelbergenheinzel6649 thank you for your reply! I forgot all about this video. Now I'll rewatch it! :-)

  • @cyberneticbutterfly8506
    @cyberneticbutterfly8506 6 лет назад +2

    500 years into the future aliens pass by our solar system and are amazed to see an entire orbit filled with humans strapped into chairs serving alphazero as chess opponents

  • @lI1I1ll
    @lI1I1ll 7 лет назад

    Danny, you are so well-spoken. Fast thinker and talker, and you never say "Umm" or anything of the like, and you are straight to the point, and funny at the same time. Thank you.

  • @stoyankolev4728
    @stoyankolev4728 7 лет назад +1

    15:50 is black Re6 a move here, instead of a5? I haven't checked with engines, but it just looks like it solves some problems. Or maybe after Bxe6 dxe6 g4-g5 the zugzwang is still severe.

  • @Jarretman
    @Jarretman 7 лет назад +1

    Excellent commentary and analysis, Daniel. It's apparent you put a lot of effort into this video and I appreciate that. Cheers.

  • @ButcherTV
    @ButcherTV 7 лет назад +4

    It's amazing how AlphaZero didn't lose a single game... just insane

  • @mingming9604
    @mingming9604 4 года назад +2

    It's no longer a question of whether machines can easily beat the best human players. But the fact that the machine has reached the conclusion of selecting long-term positional advantage over immediate material gains just validates how well evolution has developed the human brain!

  • @Markus-Domanski
    @Markus-Domanski 7 лет назад +2

    Google did not re-invent the wheel. AlphaZero has a predecessor going by the name Giraffe (Elo 2410). Its developer now works for DeepMind. The real sensation is most likely the hardware that AlphaZero ran on. Nevertheless the games against Stockfish were absolutely amazing.

    • @dannygjk
      @dannygjk 6 лет назад

      A0 was not playing the match on a supercomputer; Nakamura was wrong.
      SF8 was doing 70,000 kNps... A0 was doing 80 kNps.

  • @carlosdealcantara_
    @carlosdealcantara_ 7 лет назад +9

    AlphaZero is probably making moves that differ from what Stockfish would consider the best moves, in the chess.com analysis engine for example. So what happens? Stockfish marks some moves as blunders/mistakes/inaccuracies and afterwards catches itself losing to the moves that it would consider the best? (I tried to make my thoughts as simple as I could, sorry)

    • @takatotakasui8307
      @takatotakasui8307 7 лет назад +4

      Carlos de Alcântara That is what's happening. Stockfish isn't as strong as A0, clearly.

    • @leerobbo92
      @leerobbo92 7 лет назад +9

      It looks like Alpha Zero is using an evaluation strategy that is completely different from other chess computers. The best example is around 6:40. A0 moves the rook from d1 to d6, which drops the evaluation in Stockfish's opinion, and then the rook goes from d6 to f6 straight after, which Stockfish also sees as an inaccuracy.
      Stockfish sees that rook as having very limited squares, pretty much trapping itself, so it doesn't value it highly and doesn't try to remove it. A0 sees that rook as paralyzing black's position, forcing black into lines where it can't improve its own position while white can freely maneuver and improve its pieces. Alpha Zero is basically forcing Stockfish into positions where Stockfish's best moves are to keep the position as it is, while White has many more moves to choose from and can move pieces however it likes, until it finds a winning strategy. As long as A0 can keep making improving moves, and Stockfish only has bad squares for its pieces, it will always eventually be better.

    • @larrydavid5260
      @larrydavid5260 7 лет назад +1

      I think you are giving too much of a human aspect to A0 by saying things like "sees that rook as paralyzing black's position, forcing black into lines where it can't improve its own position while white can freely maneuver and improve its pieces".
      I'm not sure the way A0 is designed allows it to learn any heuristics or strategy; rather it is a completely statistical approach based on masses of games. So your "paralyzing black's position" to A0 translates into: "if I move here, based on the current position of the pieces, I estimate, after consulting my neural network trained from self-play, that there is a higher probability of winning the game."

    • @leerobbo92
      @leerobbo92 7 лет назад +2

      Larry David I don't think I mentioned anything to do with human-like behaviour in my comment? If you notice, I also say "Stockfish sees", and everyone knows how un-humanlike regular chess computers are. It's more a figure of speech than anything.
      However, I will say that I know AlphaZero uses pattern recognition through machine learning on a massive scale, so in a way it really is "seeing" the position. It "knows" that positions like the one above, whose features we can describe as a large space advantage paired with no pawn moves for black and limited piece movement, lead to a massive advantage eventually. It may have reached that point by a purely statistical approach, but the result absolutely can be considered a strategy, because it plays the same concepts in all of its wins without fail: it goes straight for a space advantage, followed by limiting the opponent's piece movement further, and then improving piece activity, generally in that order. Most notably, it moves pieces all over the place multiple times in the opening, meaning that it values early piece activity below restricting the opponent.
      I suppose it depends on your definition, but if that doesn't count as some sort of strategy, then I don't know what does. Most strategies are created with the help of statistics, and the fact that a machine has reached this point purely by statistical means makes it no less of a strategy, at least in my opinion. I get your point that it's not actively thinking about all of the factors that I mentioned, but those factors are the reason that it is statistically appealing.

    • @larrydavid5260
      @larrydavid5260 7 лет назад +2

      Yes, I see what you mean, the outcome is the same, humans learn the concepts and employ them to create strategies which produce advantageous board positions, while the A0 doesn't have any concepts to start with, and uses statistics to detect the positions which contain the concepts a winning strategy would produce, then tries to play for those positions.
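
A hedged sketch of the "higher probability of winning" selection described in the exchange above: AlphaZero-style searches pick which move to explore next with a PUCT rule that blends the network's prior probability for a move with the average value the simulations have found so far. The moves, priors, and counts below are made-up illustrative numbers, not data from this game.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, as in AlphaZero-style MCTS.

    children: list of dicts with
      'prior'  - network's prior probability for the move
      'value'  - total value accumulated from simulations
      'visits' - number of simulations through this move
    """
    total_visits = sum(ch["visits"] for ch in children)
    best, best_score = None, -float("inf")
    for ch in children:
        q = ch["value"] / ch["visits"] if ch["visits"] else 0.0               # exploitation
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])  # exploration
        if q + u > best_score:
            best, best_score = ch, q + u
    return best

# Illustrative only: a move the network likes (high prior) keeps getting explored
# even if early simulations were lukewarm.
children = [
    {"move": "Rd6", "prior": 0.40, "value": 3.1, "visits": 6},
    {"move": "h4",  "prior": 0.15, "value": 2.4, "visits": 4},
    {"move": "Qe4", "prior": 0.05, "value": 0.9, "visits": 2},
]
print(puct_select(children)["move"])
```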

  • @unknow210
    @unknow210 7 лет назад +1

    It's still 2017, yet deep learning has already come so far... The future is really going to be fascinating

  • @DavenH
    @DavenH 7 лет назад +3

    Absolutely loved this. Thanks for the commentary Daniel!

  • @gluonpa6878
    @gluonpa6878 6 лет назад +1

    The fact that the position seems lost to a human master while SF still considers it roughly equal means that IMs/GMs still have a better understanding than an engine. A0, on the other hand... By the way, I'm not sure that engines such as SF really understand the concept of a "soft zugzwang" in the middlegame.

  • @mygmailaccount5068
    @mygmailaccount5068 4 года назад +4

    I'm a vegetarian but I LOVE STOCKFISH

  • @fernandodesouza5762
    @fernandodesouza5762 7 лет назад +3

    15:13 What would have happened if black Re6?

    • @1ngenu1ty
      @1ngenu1ty 3 года назад

      Bxe6 and black would be in a disadvantage

  • @adiadiadi333
    @adiadiadi333 7 лет назад +8

    11:30 is the white rook not hanging?

    • @kenkyuen
      @kenkyuen 7 лет назад

      aditya sai ya I saw that too, maybe we missed something, but if we did he didn't explain it

    • @misusa7
      @misusa7 7 лет назад

      why not RXf7 after check

    • @jalilcompaore
      @jalilcompaore 7 лет назад

      Was wondering the same

    • @mafelan10
      @mafelan10 7 лет назад +13

      After ...Qxd4 comes Qxg6+, and if 2...Kf8 then Rxf7+ Rxf7 Qxf7#; so 2...Kh8 Qh5+ Kg7 Bxf7+ Rxf7 Rxf7 and you can't stop checkmate

    • @chesslessonscom
      @chesslessonscom 7 лет назад +1

      I think it's because if the queen leaves to take the free rook, the f7 pawn is pinned, and queen takes the g6 pawn with check leads to checkmate.
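
For anyone who wants to check the "pinned f7 pawn" idea concretely, the python-chess library can report absolute pins directly. The FEN below is a simplified illustrative position (not the position from the game): a queen has just landed on g6 with check, and the f7 pawn is pinned by a bishop on c4, so fxg6 is illegal.

```python
import chess

# Simplified illustrative position (not the game position): white queen has just
# landed on g6 with check, and the f7 pawn is pinned by the bishop on c4.
board = chess.Board("6k1/5p2/6Q1/8/2B5/8/8/4K3 b - - 0 1")

print(board.is_check())                                   # True  - the queen on g6 gives check
print(board.is_pinned(chess.BLACK, chess.F7))             # True  - f7 is pinned to the king
print(chess.Move.from_uci("f7g6") in board.legal_moves)   # False - fxg6 is illegal
print([board.san(m) for m in board.legal_moves])          # only king moves remain
```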

  • @jimbo92107
    @jimbo92107 6 лет назад +1

    Maybe the main lesson of Alpha Zero isn't about book lines, but about learning chess by playing as many games as possible at breakneck speed, and then always playing the move that feels right on first impulse. Neural networks, right?

  • @thefallen250
    @thefallen250 7 лет назад +12

    Lol at engine on left that is actually weaker than alpha zero

  • @christopherjohnson1873
    @christopherjohnson1873 7 лет назад +1

    At 11:30, why can't black take the d4 rook?

  • @afbdreds
    @afbdreds 7 лет назад +24

    Danny is a great commentator!

  • @jeffwads
    @jeffwads 7 лет назад +4

    My favorite of the match games released so far.

  • @Kor88Di
    @Kor88Di 6 лет назад +2

    If AlphaZero was left playing (learning, or training) against itself from that day till now, how strong would it be?

  • @zeronothinghere9334
    @zeronothinghere9334 5 лет назад +1

    13:26 I don't know why you went over that queen move to h8 so fast, but I think this was kind of the first real mistake Stockfish made. The queen could also have gone to e4 or c6, right?

    • @바밤바바밤바-i9l
      @바밤바바밤바-i9l 5 лет назад

      That allows the white queen to control the long diagonal, and black would have to worry about mate on g7

    • @바밤바바밤바-i9l
      @바밤바바밤바-i9l 5 лет назад

      ChessNetwork's analysis showed that variation, so you can see it there

  • @noatrope
    @noatrope 3 года назад

    I wish there were some sort of visual indicator distinguishing possible moves and their outcomes from the moves which actually happened - tinting the board or pieces or something, maybe. As it is it’s kind of hard for me to follow.

  • @rishabhrakesh6045
    @rishabhrakesh6045 7 лет назад

    Are you sure about the evaluation at 15:28? It's showing +7.3 for the Rf8 move. The best move according to Stockfish is Kf8

    • @dannygjk
      @dannygjk 6 лет назад

      After 1 minute of thinking?

  • @yoloswaggins2161
    @yoloswaggins2161 7 лет назад +1

    Amazing that it figured out all of this from just playing itself.

  • @TheRiquelmeONE
    @TheRiquelmeONE 7 лет назад +5

    I'm not 100% sure, but I don't think machine learning means what you think it means. A0 is actually using it; the other top engines are not. This makes A0 fundamentally different from the established engines (is it even called an engine?). I think you used the term machine learning wrongly in the past. Correct me if I'm wrong.

    • @JordanMetroidManiac
      @JordanMetroidManiac 7 лет назад +1

      Bob Nob The main difference between Stockfish and AlphaZero is that AlphaZero makes decisions based on the past whereas Stockfish makes decisions based on the present and future. Both are good, but AlphaZero has the advantage of experience whereas Stockfish essentially starts with a clean slate. Long short-term memory is the central idea of machine learning.

    • @dragonlorder
      @dragonlorder 7 лет назад +2

      Stockfish has human-engineered heuristics to evaluate positions. AlphaZero uses a neural network, which is an evaluation-function approximator that can learn from experience and improve. Pretty much human-hardcoded heuristics vs a function approximator that approximates the "true evaluation function" that only God knows.

    • @JordanMetroidManiac
      @JordanMetroidManiac 7 лет назад +1

      It's amazing, really

    • @dannygjk
      @dannygjk 6 лет назад

      The traditional engines are also AI, just a different type.
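
A rough sketch of the contrast described in this thread, with a hand-written heuristic on one side and a tiny, untrained, randomly initialized network standing in for the learned approximator on the other. AlphaZero's real network is vastly larger and takes full board planes as input; the point here is only the structural difference between fixed human rules and a function whose behaviour comes entirely from trained weights (assumes numpy and python-chess).

```python
import numpy as np
import chess

def handcrafted_eval(board: chess.Board) -> float:
    """Stockfish-style idea in miniature: explicit, human-chosen terms."""
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}
    material = sum(values[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                   for p in board.piece_map().values())
    mobility = board.legal_moves.count() * (1 if board.turn == chess.WHITE else -1)
    return material + 0.05 * mobility

def features(board: chess.Board) -> np.ndarray:
    """Toy encoding: one slot per square, signed by piece value."""
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 10}
    x = np.zeros(64)
    for sq, p in board.piece_map().items():
        x[sq] = values[p.piece_type] * (1 if p.color == chess.WHITE else -1)
    return x

# Placeholder weights; a real system would learn these from self-play.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 64)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(1, 32)) * 0.1, np.zeros(1)

def learned_eval(board: chess.Board) -> float:
    """Neural-net style: a generic function approximator applied to features."""
    h = np.tanh(W1 @ features(board) + b1)
    return float(np.tanh(W2 @ h + b2).item())   # value in (-1, 1), white's perspective

board = chess.Board()
print(handcrafted_eval(board), learned_eval(board))
```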

  • @bikerfreak714
    @bikerfreak714 6 лет назад +3

    trapping the queen on h8!!! absolutely brutal! well, i guess if you're going down, might as well go with the queen by your side!

  • @ianjames537
    @ianjames537 7 лет назад

    You addressed why, after 15...Re8, White doesn't want to play 16. Rxd7, but what about 16. Qxd7? Is it a matter of preferring to keep pieces on the board in order to press his positional advantage?

  • @RadishAcceptable
    @RadishAcceptable 7 лет назад +1

    I, for one, will welcome our Google produced benevolent robot overlords once they are out of production.

  • @Snailman3516
    @Snailman3516 7 лет назад

    One thing I noticed was that inflicting weaknesses on your opponent seemed to be more important than maintaining a good position. Alpha would often have pieces behind pawns, bad pawn structure, and lousy development, but the long term weaknesses inflicted would be greater.

  • @sofia.eris.bauhaus
    @sofia.eris.bauhaus 7 лет назад

    are there any AlphaZero vs AlphaZero games out yet?

  • @michelangeloadamantiel7685
    @michelangeloadamantiel7685 7 лет назад +1

    Thank you for such direct and uncomplicated analysis. I like the way you explain things. Waiting an entire game for a file or a diagonal, and working slowly to get your pieces ready should it become available... this I am going to use in my own games. Good stuff!
    And of course, love your enthusiasm!

  • @SimA-pe1br
    @SimA-pe1br 7 лет назад +1

    Moar videos like this and less sleep, Danny! Go go

  • @vampireducks1622
    @vampireducks1622 6 лет назад

    What was wrong with the black pawn move f7-f5, attacking White's queen, at 2:45, when Stockfish plays rook f8-e8 instead?

    • @vampireducks1622
      @vampireducks1622 6 лет назад

      Oh, I think I see it. Because then White would check: Queen G4-C4. Yes?

  • @larshartvig3121
    @larshartvig3121 6 лет назад

    Why is it that I can't find any new games by AlphaZero?

  • @dannygjk
    @dannygjk 7 лет назад

    I'm thinking about the graphs of the opening preferences and how they changed over time as AlphaZero trained. AlphaZero has a low opinion of the French, for example, as did Fischer.

  • @dogfish2988
    @dogfish2988 7 лет назад +18

    I'd really like to know how the match progressed over these 100 games.
    Like, did AlphaZero win games in random order, or did it only start winning in the later games, say from game 50 onwards? Unfortunately this information is not written down in the paper.

    • @BarsDemirdelen
      @BarsDemirdelen 7 лет назад +28

      dog fish I’m pretty sure during the Stockfish matches the learning algorithm was not running for AlphaZero. So there is no reason to believe AlphaZero would play better in the later games.

    • @Uerdue
      @Uerdue 7 лет назад +14

      Imagine they would let AlphaZero play training games against Stockfish for 4 hours. It would discover holes in SF's openings, learn typical "anti-computer-tactics" like closing the position and possibly even learn to accurately predict SF's moves. My guess is that it would win like +100 =0 -0 after that.

    • @gilbertho2246
      @gilbertho2246 7 лет назад +7

      dog fish Considering that Alpha Zero mastered chess by playing against itself millions of times within the four hours, Alpha Zero wouldn't have learned or gotten better against Stockfish within those 100 games

    • @leerobbo92
      @leerobbo92 7 лет назад

      Gilbert Ho As Uerdue suggested, it wouldn't take long for Alpha Zero to spot patterns in Stockfish's play. Machine learning basically works through patterns and pattern recognition, so if it were to spot a style that seems to be working against Stockfish (let's say many of the games it wins happen to be with an isolated pawn and a static structure, it will see that), I imagine that 100 games would be plenty for Alpha Zero to show signs of being able to "solve" Stockfish. It won 28 games without the learning algorithm being turned on; I have no doubt that it would win over 30, if not more, if it had been. 28 games is enough for a pattern to start to emerge (especially given the millions of games it will have played against itself, and how strong it clearly already is), and it'll snowball from there.

    • @dogfish2988
      @dogfish2988 7 лет назад

      Yeah, you are right. In contrast, the OpenAI bot for Dota 2 did _directly_ learn from the input of professional players, and started animation-cancelling and things like that. But I assume Dota 2 is way easier to implement than chess, especially since it was only 1v1

  • @amransmanurung181
    @amransmanurung181 3 года назад

    Where can I get the latest Stockfish app?
    Please help.

  • @onetouchtwo
    @onetouchtwo 7 лет назад

    Loving the availability of AlphaZero commentary.

  • @edwardshowden5511
    @edwardshowden5511 7 лет назад +4

    It's like watching Nakamura vs Hansen.
    It was the final, right?

  • @keshavladha3108
    @keshavladha3108 3 года назад

    At 11:30 when the rook moves to d4, doesn't it just get taken? I'm confused, please clarify

  • @sheeplamb413
    @sheeplamb413 6 лет назад +2

    alpha zero had a fish dinner

  • @peterhans3495
    @peterhans3495 7 лет назад

    It is very interesting to see the engine evaluation bar on the left side. After the move h4 the engine considers the position much better for black (especially after Qe1+).

  • @kisgatyas
    @kisgatyas 7 лет назад +1

    There is a lot of confusion about the fairness of the match between AZ and SF. Some say that the match wasn't fair because AZ had access to far more computational power than SF had. Well, the thing is that this is not how things work in AI. AZ is "simply" just a neural net which was trained with a technique called reinforcement learning. This learning method indeed requires a lot of compute, but once the neural net is trained, it no longer needs that amount of resources. Theoretically you could run AZ on your mobile phone and still wipe the floor with SF.
    Still, I hope there will be a public rematch with clear rules and settings; the clash between these two is super-entertaining, even though SF wouldn't stand a chance against AZ. The published paper mentioned that AZ could easily be improved, but unfortunately DeepMind has other goals (to actually solve even more complicated problems).
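
A deliberately tiny, runnable skeleton of the point about training cost versus playing cost made above: the self-play/update loop is where nearly all the compute goes, while using the finished network at match time only requires querying it. Every name and number here is a placeholder, not DeepMind's code.

```python
# Hedged structural sketch: "net" is a dict of weights, self-play is random,
# and the "training" step is a trivial averaging update. The shape of the
# process, not its content, is the point.
import random

def self_play(net):
    """Stand-in for one self-play game; real systems record (position, policy, outcome)."""
    return [("position", random.choice([-1, 0, 1]))]

def train_step(net, games):
    """Stand-in for a gradient update on the policy/value targets from self-play."""
    outcomes = [z for game in games for _, z in game]
    net["bias"] = 0.99 * net["bias"] + 0.01 * (sum(outcomes) / len(outcomes))
    return net

def choose_move(net, position, simulations=800):
    """Match-time use: the trained net is only queried (cheaply), never updated."""
    return "best_move_placeholder"

net = {"bias": 0.0}
for _ in range(1000):                                   # training: this loop is where the compute lives
    net = train_step(net, [self_play(net) for _ in range(8)])
print(choose_move(net, "some position"))                # inference: one cheap call per move
```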

  • @toequantumspace
    @toequantumspace 7 лет назад +1

    Amazing. Thank you Danny for explaining so thoroughly and well.

  • @Juancris111
    @Juancris111 7 лет назад

    Danny's best video of 2017

  • @ПрикладнаЕкономіка

    Chess.com should send GM Simon Williams to dig up more about Alpha Zero: all the games instead of the 10 best, the engine itself, and so on

  • @Figgy20000
    @Figgy20000 7 лет назад +1

    I for one welcome our new Google overlords.

  • @julliosantoro
    @julliosantoro 7 лет назад

    Kudos to Stockfish for still being able to draw most of the games.

  • @lawldep
    @lawldep 4 года назад

    Where can I find these games online without someone breaking the games down?

  • @Chuculainn9
    @Chuculainn9 7 лет назад

    How much time was allowed for these machines? Or how did that work?

  • @Samuel-ni7vv
    @Samuel-ni7vv 7 лет назад

    Do the computers always start with the same opening as white?

  • @benmcm77
    @benmcm77 7 лет назад

    Can someone tell me why stockfish chooses queen to H8 at 13:30?

  • @dannygjk
    @dannygjk 7 лет назад +2

    Hmmm I predict Stockfish will be demoted to minion. AlphaZero's minion.

  • @guest_informant
    @guest_informant 7 лет назад +4

    It'd be worth seeing a Stockfish win amongst all this love for AlphaZero :-)

    • @BarsDemirdelen
      @BarsDemirdelen 7 лет назад +5

      Guest Informant But there were none; the match was 28 wins for AlphaZero and 72 draws.

    • @guest_informant
      @guest_informant 7 лет назад +2

      There were 100 x 10 thematic games for the "most popular" openings. Stockfish won some of those. They're in the paper arxiv.org/pdf/1712.01815.pdf. The score was still heavily in favour of AlphaZero though. White 242/353/5. Black 48/533/19.

    • @Cscuile
      @Cscuile 7 лет назад

      In the 1,000+ games prior to the 100-game event, SF won a few, but sadly Google hasn't released them.

  • @AleksandrHmel
    @AleksandrHmel 7 лет назад

    Thank you Danny! Great stuff! Keep up the excellent work

  • @someguyslastname8487
    @someguyslastname8487 4 года назад

    Kind of reminds me of Fischer's game 6 against Spassky

  • @skull123
    @skull123 7 лет назад +1

    Man that evaluation bar looks pretty confused.

  • @zeNUKEify
    @zeNUKEify 4 года назад

    The video: Two chess gods of perfect mechanical prudence clash while humanity watches in awe
    Me, a 1000elo casual: “interesting.”

  • @punpck
    @punpck 7 лет назад

    this is absolutely crazy ... and stockfish didn't see it coming!

  • @hello38207
    @hello38207 7 лет назад

    I have a question: how could the pawn take on d6 at 3:18 when black's pawn is on d5??

    • @dannygjk
      @dannygjk 6 лет назад

      en passant rule.

  • @NilsThylen
    @NilsThylen 7 лет назад

    What rating does this performance get?

  • @1RobertSmith
    @1RobertSmith 6 лет назад +1

    I don't understand why Daniel Rensch & Chess.com would begin its analysis of this battle between AlphaZero vs Stockfish at Game 3. For historical purposes, it would make sense for him to do a video about the initial confrontation first - Game 1. The results of Game 1 signals that Stockfish may soon be replaced as the top chess engine. Why skip over Game 1 and Game 2 to analyze Game 3?

  • @anotherlover6954
    @anotherlover6954 7 лет назад +7

    Maybe I'm just tired but JesusHChrist were you trying to win a speed-talking competition?

  • @columbus8myhw
    @columbus8myhw 7 лет назад

    Are there official tournaments between chess engines? If so, I'd love to see A0 enter, playing under official tournament rules. People have been complaining that Stockfish has been handicapped, and so a tournament would mitigate these concerns.
    Incidentally, I wonder what a human vs A0 game would look like, where the human is allowed to take back as many moves as it likes. (Or perhaps a human team vs A0.)

    • @columbus8myhw
      @columbus8myhw 7 лет назад

      (The team also being allowed to take back moves.)

  • @BrokenG-String
    @BrokenG-String 4 года назад

    11:29 the rook on D4 wasn't free?

  • @broadcasterpro
    @broadcasterpro 7 лет назад +1

    what was the time control?

    • @tgwnn
      @tgwnn 7 лет назад +1

      broadcasterpro 1 minute per move.

    • @Djorgal
      @Djorgal 7 лет назад

      On what hardware? Because just the time isn't really relevant for a computer.

    • @broadcasterpro
      @broadcasterpro 7 лет назад

      thank you

    • @user-ic4vu3ek9b
      @user-ic4vu3ek9b 7 лет назад

      AlphaZero ran on a Google supercomputer, while Stockfish did not...
      Nakamura said this. --> www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author

    • @Djorgal
      @Djorgal 7 лет назад +6

      No, it doesn't run on a supercomputer. It runs on specialised hardware designed for neural networks. Nakamura is not a computer scientist, nor is he really being very honest here. I just did a little bit of research, and AlphaZero used a single machine with 4 TPUs (that is to say, 4 specialized processors) while Stockfish was using 64 threads and a hash size of 1 GB.
      It's hard to compare hardware that is so different, but the former is very far from a supercomputer and the latter very far from a laptop. A0 and Stockfish are simply not designed to run on the same hardware.
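
For anyone who wants to reproduce a comparable Stockfish configuration locally, the thread and hash settings mentioned above are ordinary UCI options and can be set through python-chess. The binary path is machine-specific, and the 64-thread / 1 GB / one-minute values below simply mirror the figures quoted in this thread.

```python
import chess
import chess.engine

# Path is machine-specific; adjust to your local Stockfish binary.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

# "Threads" and "Hash" (in MB) are standard UCI options; 64 / 1024 mirrors the quoted setup.
engine.configure({"Threads": 64, "Hash": 1024})

board = chess.Board()
info = engine.analyse(board, chess.engine.Limit(time=60))   # roughly one minute per move
print(info["score"], info.get("pv", [])[:5])

engine.quit()
```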

  • @drumcircler
    @drumcircler 7 лет назад

    Is it true that Stockfish's opening book and endgame tables were disabled for this match?
    Was Stockfish wearing handcuffs?

    • @dannygjk
      @dannygjk 6 лет назад

      You don't understand how chess engines work. When their book and EGDB are turned off they will still destroy any human and play extremely high-level chess. AZ also had no book or EGDB.

  • @olivermrobertson
    @olivermrobertson 7 лет назад

    Great analysis! Thanks

  • @smartphonephone9675
    @smartphonephone9675 7 лет назад

    It would be nice to visualize somehow (a sign, a different background) when it is not the players' moves but a potential variation, because it changes so fast I can't tell whether it is a real move made by a player or just some potential move.
    PS. I am not a native English speaker

  • @prohz9129
    @prohz9129 3 года назад

    You use an engine to evaluate a potentially better engine...
    Wait, I'm actually curious: which engine was used to evaluate the positions?

  • @Virslimadar
    @Virslimadar 6 лет назад +1

    8:48 Stockfish is begging White to take the 'D' pawn
    i'm sorry what? :D

  • @julioandresgomez3201
    @julioandresgomez3201 6 лет назад

    What if black tries to untangle by counter-sacrificing the exchange with ...Re6?

  • @johnbouttell5827
    @johnbouttell5827 7 лет назад

    Good work. More please.

  • @JeMC47
    @JeMC47 7 лет назад +30

    That's not the latest version of Stockfish and the conditions of play weren't equal. However, it would be interesting to see if AlphaZero can beat a stronger Stockfish...

    • @chesslessonscom
      @chesslessonscom 7 лет назад +21

      They should ask Google to enter next year's TCEC event and see how it does against a range of engines.

    • @Stockfish1511
      @Stockfish1511 7 лет назад +22

      I've read their report and it's extremely flawed. Not the latest version of Stockfish, not the best computer for Stockfish, no opening book for Stockfish, hence why it ended up in bad positions. Surprisingly, I analysed a couple of the games the engines played on the strongest Stockfish engine, and Stockfish thought that only 65% of the moves it made against Alpha Zero were the best moves. It even thought a couple of its own moves were mistakes. I analysed with the fastest method, which means it analysed a game of 110+ moves in under two minutes, which is way less than it was given in the games they played. Something definitely went wrong in that match-up. I'm sure when the Stockfish developers demand a rematch it will be a massacre. I mean, when was the last time Stockfish on the hardest mode made only 65% best moves, with the rest being merely okay moves according to Stockfish itself and a couple of them mistakes? Super-grandmasters make 75-80% best moves in pretty much every game they play. There is no way Stockfish would score only 65% according to its own analysis. If it did make the best moves according to itself, why would analysis by Stockfish show that they were not the best moves? Besides, on paper, 80k moves per second for AZ versus 70 million for Stockfish speaks for itself.

    • @joi1794
      @joi1794 7 лет назад

      Well, yes, it was kind of a disadvantage for SF, but I think that AZ will still win in the end. Maybe not by far, but it will win. And if it doesn't win, then it will soon after the rematch SF demands. It only trained (from what I heard) for 4 hours; now imagine how good it will be after 4 months of learning. And there's also a difference between calculating 70 million moves ranging from really bad to really good and only calculating 80k moves ranging from good to really good.

    • @Stockfish1511
      @Stockfish1511 7 лет назад +4

      The 4 hours is just a statement to highball it. Chess is played on combinations of moves, and yeah, there are a lot of moves, but it doesn't mean it will get much stronger with "self-teaching" or whatever they claim it is. Stockfish doesn't need self-teaching when it can analyse 70 million moves in 1 second. Trust me, even GMs have spoken, and many of them don't believe it was played under fair conditions, and they put a 100 thousand dollar prize on it. I'm pretty sure they're just scammers trying to make a fortune by lying. If they were so good, why not enter an engine tournament? Pretty sure they know they would be tortured. I think the Stockfish developers will demand a rematch with proper time control and fair conditions against a proper Stockfish engine. Then I would bet my money Stockfish will massacre it

    • @Groger_12
      @Groger_12 7 лет назад +10

      2 minutes for the whole game is way too short for an accurate analysis, especially at this level. If you don't let Stockfish analyze for long enough, it'll incorrectly label moves as mistakes.
      The Stockfish build in the match calculated around 70,000,000 positions per second for a minute per move, which on a laptop corresponds to calculating for around 30-60 minutes.
      I'd suggest letting your computer analyse each move for at least a minute, and if it still says the move is a mistake, let it calculate for around 30 minutes.
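
A small sketch of the methodology being suggested above: query the same engine about the same position at very different time limits before trusting quick "mistake" labels. Assumes python-chess and a local Stockfish binary; the starting position stands in for whichever match position you want to re-check.

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")  # adjust path
board = chess.Board()   # substitute the FEN of the match position you want to re-check

for seconds in (1, 60, 1800):   # quick glance vs match-like minute vs deep 30-minute look
    info = engine.analyse(board, chess.engine.Limit(time=seconds))
    print(f"{seconds:>5}s  depth={info.get('depth')}  score={info['score']}")

engine.quit()
```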

  • @Grzegorz54321
    @Grzegorz54321 7 лет назад +20

    I analyzed all 10 games on lichess and it shows that Stockfish made many inaccuracies and mistakes in each game. The games should be played in the open, with longer time controls, on the same hardware. This looks impressive but it is not so reliable.

    • @Djorgal
      @Djorgal 7 лет назад +2

      That's a proof of concept, not an exhibition match. It's likely they will do one of those later on when their software is ready. But beating chess or building chess software is not their aim; it's more of a side project for them :)

    • @JMJF55
      @JMJF55 7 лет назад +3

      Max12345 I'm glad some people are actually noticing

    • @jaybingham3711
      @jaybingham3711 7 лет назад +3

      They approach chess fundamentally differently. It's the fact that the engine approach can be so thoroughly owned by a different approach (self-learning) that willingly restricted itself to a limited number of learning opportunities leading up to the match. Throw max resources at SF... absolutely everything that is possible. Now put all your net worth on the table in favor of SF. How many people pull their bet when they hear AZ is getting 1 more hour of training? How many are left after hearing it's 24 hours? That's one empty table. This has nothing to do with equalizing processing capabilities. It's the inequity in approach that is stunning.

    • @santiagopicco1397
      @santiagopicco1397 7 лет назад +16

      Dude, you should run Stockfish on lichess for an hour per move if you are using a regular i7 processor, in order to compare with the computer Google used. I don't think you did that.

    • @leerobbo92
      @leerobbo92 7 лет назад +3

      Jay Bingham Exactly. It comes down to 20 years of chess engines being developed and iteratively improved by humans being almost completely outclassed by machine learning after just 4 hours. That is astonishing. I mean, we're talking about the 2016 TCEC-winning version of Stockfish, running on some pretty decent hardware, not some random engine. AlphaZero only analyses 80,000 positions a second; this version of Stockfish would have been analysing 70,000,000 per second, yet it still lost.

  • @lukechavhunduka2970
    @lukechavhunduka2970 6 лет назад

    7:04 Why not knight to h5

  • @LedatorSchach
    @LedatorSchach 7 лет назад

    Thank you so much for this video! This is the first time that I've gotten to see the incredibly strong chess engine AlphaZero. It will probably "easily" become the world's strongest chess engine!
    Greetings from Germany!

  • @llindstad
    @llindstad 7 лет назад

    Great commentary! 👍🏻😊

  • @timbaldwin9951
    @timbaldwin9951 6 лет назад

    "R-g7# "pretty darn sexy" ?!
    Absolutely.

  • @jiujitsu5936
    @jiujitsu5936 6 лет назад

    Why do chess engines resign?.. I never understood that.