AlphaGo and the Hand of God (3/3)

  • Published: 7 Nov 2024

Comments • 30

  • @zsoltbihary3347
    @zsoltbihary3347 8 years ago +5

    I liked your analysis of this match. Touching. Thanks

    • @BradyDaniels1
      @BradyDaniels1  8 years ago +2

      +Zsolt Bihary This match made me more contemplative than any before it, and I suspect more than any will in the future. It changed the Go world forever. Thanks again.

  • @jasondoe2596
    @jasondoe2596 7 years ago +3

    Hello Brady! I found your channel today and binge-watched your "whatever you do is wrong" video along with this 3-part series - and I want to thank you for the excellent and thought-provoking presentation. I've subscribed and I'm looking forward to more AlphaGo-related videos!
    I don't even know the rules of Go (well, now I do thanks to you, but it's not very useful when I can't recognise the simplest of patterns), and I still found it fascinating. I'm a chess player myself (a weak amateur) with an interest in AI and Machine Learning, and the situation right now probably mirrors what the chess world went through a couple of decades ago, with the widely publicised Deep Blue vs. Kasparov matches. BTW, I watched part of the official AlphaGo live transmission a few months ago, and I was immediately impressed by how graceful Lee Sedol is - such an amusing contrast to Garry Kasparov (the greatest chess player of all time, and a notoriously bad loser!). And, as you said, he made no excuses. The Go community should be proud to have such a representative.
    Anyway, there's no doubt that this is a pivotal moment for the world of Go. And I have no doubt that the game will emerge stronger and richer than ever, just like chess did. What is now a temporary inconvenience for the pros (mostly the older generation, I suspect) will open up whole new areas of study and new play-styles. And this transitional period will take quite a while - after all, AlphaGo and Deep Learning fundamentally differ from the much more simplistic tree-searching algorithms employed by the strongest chess engines. Chess engine analysis is reliable and comparatively easy to understand; neural network output, not so much.
    Frankly, I'm a bit jealous of AlphaGo's "intuitive" play-style, and I wish there were similarly strong chess engines based on neural networks. I think there are a few efforts, but all the really strong ones are using much more mundane algorithms, with the focus being on clever tree-pruning heuristics and an efficient implementation. It's a shame ;)
    If you accept video suggestions: What about a video (or short series) on really basic stuff, to help people get started with Go? Things like basic patterns, play styles, tempo, initiative, what is considered "passive" or "active" play and when to use each, terminology (that's a big hurdle!), how to quickly evaluate a position, game notation, puzzles (are there any?), strategy vs tactics (is that even a thing in Go?), opening "principles", game servers (are there things analogous to ICC, FICS, and the excellent open-source lichess.org?) etc. etc. ...And things I haven't even considered (my apologies, but this list was partly based on my chess experience!) Phew, that's a bit overwhelming now that I think about it D:
    Anyway, thanks again for the great videos, and my apologies for the *huge* comment!

  • @ArthWoW
    @ArthWoW 7 years ago +4

    Don't play Go but I've been thinking about it. I loved hearing your take on these matches. Your AlphaGo match reviews and commentary just earned you a subscriber. Cheers!

  • @LonelyDriverTakeuchi
    @LonelyDriverTakeuchi 3 years ago

    Watched all three parts, brilliant! Very informative and well presented, thank you!

  • @AirIUnderwater
    @AirIUnderwater 7 years ago +2

    Thanks for this 3 part series. Amazing. :)

  • @MelindaGreen
    @MelindaGreen 7 years ago +3

    I too hope that DeepMind will leave us with a version of AlphaGo that we can all use, and that they leave a developer on the project, and that they build a version that teaches itself from scratch. But I *really* like that instead of complaining or making demands, you stopped to reflect on just how grateful you are for the gift that Google has given us, and said clearly to them "Thank you".

  • @sculchy
    @sculchy 6 years ago

    Thank you for a fascinating series of videos. Your passion and storytelling skills made the topic very accessible for a non-Go audience, and the whole subject is incredibly thought-provoking. As a non-Go player I would have liked a little more explanation of the tactics and strategy, so I could better understand some of the moves you highlighted. But perhaps realistically I just need to learn the game!

  • @BrettCastellanos
    @BrettCastellanos 8 years ago +2

    This is an excellent video. Thank you.

  • @ig2d
    @ig2d 7 years ago +1

    Really enjoyed the video. On an optimistic note, chess professionals don't seem to have been too adversely affected by the chess "engines" (interesting choice of word...). It would be interesting to know what AlphaGo thinks komi should be. Is there a YouTube clip of Sedol's gracious words?

  • @kelvinm560
    @kelvinm560 4 years ago +2

    [3:56] A year later (Oct 2017) came AlphaGo Zero, a version created without using data from human games and stronger than any previous version.

    • @OzoneTheLynx
      @OzoneTheLynx 4 years ago

      And AlphaStar mastered StarCraft 2.

  • @TheNeilChatelain
    @TheNeilChatelain 6 years ago +1

    Wow, you predicted AlphaZero.

  • @kevinpark5489
    @kevinpark5489 8 years ago +1

    It is interesting to hear that the 'wedge' does not work. Would be great if you could make a video to teach us how to play against the move.

  • @SonnyKnutson
    @SonnyKnutson 7 years ago

    +Brady Daniels
    Thank you for more brilliant videos! Do you think renaming AlphaGo to Sai would be more fitting, and a way to honor the Go community even more?
    Sai is a character from the anime Hikaru no Go, in case you aren't aware of it.

  • @Wreneagle
    @Wreneagle 8 years ago +3

    I too think it would be fascinating to see a completely self-taught bot. The challenge, as I understand it, is really just time. I imagine it would take years and years of self-play on data centers' worth of computers to get to the level it reached after studying those 150,000 pro games. I mean, basically AlphaGo got to learn from the institutional knowledge of 2500+ years of human study. That's a lot of really valuable information, even if it wasn't all 100% correct.

    • @u.v.s.5583
      @u.v.s.5583 6 years ago

      If it took a whole 4 hours to master the simple game of chess beyond what humanity or Stockfish ever achieved, it might take a century or a millennium to master Go at that level.

  • @gabrielgonzales5907
    @gabrielgonzales5907 5 years ago

    If I'm not mistaken, the new AlphaGo Zero has taught itself how to play from scratch.

  • @kp8752
    @kp8752 8 years ago +2

    The number of positions in Go is vast but still finite. There's only so much any one player can ever possibly know. Eventually even computers will have to plateau in terms of strength.
    If every human's ground-level knowledge of Go is the peak of the knowledge of the previous generation, then couldn't humans one day catch up to AlphaGo and other bots in terms of raw strength? (Obviously over a very long time.) Isn't it possible that humans will also eventually reach that plateau?

  • @mazertime150
    @mazertime150 7 years ago

    I really like your videos (I've watched about 6 by now, 5 of them on AlphaGo, because that's what the YouTube algorithm thinks I like :P, which is true, but I also like Go in general). Here you touched on so many aspects of AlphaGo's games that I'd had so much trouble explaining to people when trying to say why I'm so excited about it. Thank you so much! If someone ever asks again I'll just send them your videos :)
    I subscribed, so I hope to see your new videos soon :)
    If you read this, can you tell me how strong you are? I'm curious.

  • @Krellan
    @Krellan 7 years ago +1

    Fascinating. Anybody else disappointed that DeepMind is retiring AlphaGo from competition? Was looking forward to seeing it more widely available. I'm sure many more Go pros would love the chance to challenge it. Was hoping that DeepMind would make a paid service online, and that anybody could pay a nominal fee to play against AlphaGo.

    • @TheRamstoss
      @TheRamstoss 4 years ago

      AlphaGo runs on a 25-million-dollar computer, so I don't think so... haha

  • @danodet
    @danodet 8 years ago +6

    In a way, what DeepMind did with its policy network is first catch up with the collective knowledge that has culturally evolved over the 2500 years of Go history. Then, with reinforcement learning, they extrapolated that evolution and gave us a glimpse of what the Go elite could look like 100 years from now. About skipping the first part, I like to think of it in the following way. What if, 2500 years ago, humans had started a colony on Mars with people who played Go? Those people would experiment over time, there would be trends, some revolutions of style, and the whole process of cultural evolution would take place as it did on Earth. Now what if Earth people and Mars people came into contact and played Go? What would it be like? Would they have the same joseki? Would they have some joseki that we don't have, and would we have some that they don't? As Michael Redmond says, I think we could be on the verge of a revolution in Go playing style. I can't wait to see what's next!

  • @fleaz5325
    @fleaz5325 7 years ago +2

    Going after StarCraft next? I'm starting to think DeepMind just wants to break everything Korea cares about.

  • @bobbysnobby
    @bobbysnobby 7 years ago

    It's important to note that AlphaGo wasn't the goal of the project; machine learning was the goal. The concept can be generalized to other topics; it just so happens that they picked Go because it was a much easier interim goal, and it is easier to objectively determine how successful the project has been.
    This was made a bit sadder by the most recent updates, where AlphaGo went something like 50-0 online at short time controls, but we have no idea how good it really is, because in the programming winning is the top priority, not winning by the largest margin, with the most points, or the fastest. So many games were decided by 0.5 or 1.5 points that it really makes you wonder how strong it must actually be to reliably win by such narrow margins.
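
    A minimal sketch of the point above (toy move names and numbers, not AlphaGo's actual code): an agent that maximizes win probability will happily take a safe half-point win over a riskier large win, so narrow margins say little about its true strength.

        # Toy comparison of two objectives: maximize P(win) vs. maximize expected margin.
        # Both moves and all numbers are hypothetical, purely for illustration.
        candidate_moves = {
            "aggressive_invasion": {"p_win": 0.88, "expected_margin": 12.5},
            "solid_endgame_move":  {"p_win": 0.99, "expected_margin": 0.5},
        }

        best_by_win_prob = max(candidate_moves, key=lambda m: candidate_moves[m]["p_win"])
        best_by_margin   = max(candidate_moves, key=lambda m: candidate_moves[m]["expected_margin"])

        print(best_by_win_prob)  # solid_endgame_move -> wins by only half a point
        print(best_by_margin)    # aggressive_invasion -> bigger expected win, but riskier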

  • @soggz9190
    @soggz9190 7 years ago +3

    When a player meets a human player, they get the same time to think.
    But with a computer, if you speed it up then 2 minutes might equal 2 hours. To make it as fair as you can, maybe you should let the pro player have 1 day for each move, considering how fast computers are these days.
    Would AlphaGo always win with unlimited time, or would the human win with unlimited time? I'd like to know. Currently I feel like time limits for Go are based on human-vs-human play and make little sense against a computer.

    • @FewKinG
      @FewKinG 7 years ago +1

      I think the common principle of handicaps applies here. When two humans with different strengths play each other, the weaker one will make (more) mistakes, because in the time given he can't consider the same amount of variations (in part because his intuition is not as developed).
      One solution would be to slow down the stronger player's thinking, but because that's hardly possible with humans, you give the weaker player an advantage.
      The key thing with Go is that it cannot realistically be played well by anyone using just logic. Even at the speed modern computers can apply logic, they were not able to compete with the intuition of strong human players. That has changed now, not because computers are suddenly a lot faster, but because we managed to create a model that produces some kind of intuition itself (surely being fast still helps, but it's not sufficient).
      I don't think there is anything unfair here. Giving the human player more time for his moves would just be admitting that the AI is the stronger player and therefore you're giving it a handicap. Just as you would do in human-vs-human games.

    • @soggz9190
      @soggz9190 7 years ago +3

      Imagine that AlphaGo could win 100% of the time against any pro in a game of 1-second-per-move Go.
      Would you find that as impressive as these games against Sedol?
      If we follow Moore's law, every 2 years a computer doubles its processing power.
      So every 2 years a computer gets twice the amount of time to think, and 2 years later that is doubled again, so in 10 years it's (2, 4, 8, 16, 32) 32 times the amount of thinking time for the same computer.
      You can also just buy an extra CPU, or several.
      Surely you realize that processing power is a variable for a computer when it's a constant for a human.
      With the resources Google has, they can basically decide how much time AlphaGo needs to be able to win; they just buy more hardware if it needs more time.
      They used, what, 12 CPUs if I recall correctly in this game, and it was rented in a computer farm, so they could surely have gotten 24 CPUs had they thought that time would be the bottleneck for AlphaGo.
      I'm not saying AlphaGo isn't a brilliant program, but time is a variable for a computer; it's not a constant.
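
      A quick check of the doubling arithmetic in the comment above, assuming (as the comment does) that processing power doubles every 2 years:

        # Effective speedup after n years, assuming a doubling every 2 years.
        def speedup(years, doubling_period=2):
            return 2 ** (years / doubling_period)

        for y in (2, 4, 6, 8, 10):
            print(y, "years ->", int(speedup(y)), "x")  # 2x, 4x, 8x, 16x, 32x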

    • @davidberthelot7165
      @davidberthelot7165 7 years ago

      Or maybe limit the computer to 10 seconds per move, and keep reducing the time as long as computers are too strong.

  • @gabrielgonzales5907
    @gabrielgonzales5907 5 years ago

    I always hated that translation. I call it the "divine move" like in Hikaru no Go, not the "hand of God."