AlphaZero: An Introduction

  • Published: 6 Nov 2024

Comments • 94

  • @conando025
    @conando025 a year ago +54

    I started to look into implementing AlphaZero for a different game, and this is such a great overview. There are some details that don't quite seem to align with what I got from the paper and its materials, but they are so insignificant that they're not worth mentioning (besides, I'm not even sure I got it right). I commend you for getting things to run; it's a project that sounds easier than it is (or I'm just dumb). Right now I'm struggling with the NN part, especially since I cursed myself by deciding to write my implementation in Rust. Do you plan on publishing your source code?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +19

      This is definitely not an easy project, and it sounds like you're making it a bit more difficult by trying to code NNs from scratch, so give yourself lots of credit on that front.
      I definitely left a good chunk of information out... I don't really discuss self-play here, or adding Dirichlet noise to the prior, or compressing arrays and making them hashable so I can put them in a graph without destroying my computer! There's also some ambiguity in my mind as to whether playouts are used at all after training is complete, and that seems pretty important, too.
      I have my code here ( github.com/2ToTheNthPower/Pente-AI ), and I recommend looking up "Accelerating Self-Play Learning in Go" on arXiv. I think it's a better resource than the original paper was.
      Hope that helps!
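
      In case it's useful, the root-noise trick mentioned above looks roughly like this in Python (a minimal sketch; the alpha and epsilon values are illustrative, not necessarily the ones the paper or my code uses):

          import numpy as np

          def noisy_root_prior(prior, alpha=0.3, epsilon=0.25):
              # Mix Dirichlet noise into the root node's policy prior so that
              # self-play keeps exploring moves the network currently dislikes.
              noise = np.random.dirichlet([alpha] * len(prior))
              return (1 - epsilon) * prior + epsilon * noise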

    • @conando025
      @conando025 a year ago +3

      @@2ToTheNthPower It definitely will.
      Yeah, with the neural network I definitely shot myself in the foot, because TensorFlow in Rust is pretty bare bones.
      I'll also look into that paper, but I have to say what helped the most was not the actual paper but rather the pseudocode they provided; it just sadly skimps on the neural network front.

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +4

      @@conando025 I gotcha. From what I've read about the neural network, they primarily used ResNets, since residual connections allow very deep networks to train and get better results than shallow networks. I don't know if the KataGo paper has any pseudocode, so it may not be helpful on that front.
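
      To make the residual idea concrete, one block of an AlphaZero-style tower looks something like the sketch below (Keras, with a placeholder filter count; this is an illustration, not DeepMind's actual code):

          from tensorflow.keras import layers

          def residual_block(x, filters=64):
              # Two conv layers plus a skip connection; the skip is what lets very
              # deep towers keep training without vanishing gradients.
              # Assumes x already has `filters` channels so the addition is valid.
              y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
              y = layers.BatchNormalization()(y)
              y = layers.ReLU()(y)
              y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
              y = layers.BatchNormalization()(y)
              y = layers.add([y, x])   # residual connection
              return layers.ReLU()(y)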

  • @weldkhaltiiii9935
    @weldkhaltiiii9935 a year ago +77

    While watching the video I legit thought you'd have 200k subs minimum considering the quality. I'm pretty sure you'll get that in no time. Good luck and keep up the clean work!

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +10

      That's an incredible compliment! Thank you!

    • @Asterism_Desmos
      @Asterism_Desmos a year ago +3

      I saw this comment and checked and this is a criminally underrated channel. I am subscribing right now.

    • @yahoo5726
      @yahoo5726 a year ago

      @@Asterism_Desmos same here

    • @floridaman6982
      @floridaman6982 a year ago

      Just subbed, good luck buddy 👍

  • @dubsar
    @dubsar a year ago +7

    9:05
    "How will they be used tomorrow?"
    To keep the base of the social pyramid under control while the top 0.01% live as demigods.

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      There are definitely some interesting ethical, social, and political issues that will emerge as AI becomes more and more capable.

    • @avananana
      @avananana a year ago +1

      This is the sad state of machine learning technology: it has such great potential to do good, but the ones with the most power to utilise it will only use it for personal gain, which has been how humanity works for centuries upon centuries.

  • @georgevjose
    @georgevjose a year ago +4

    Definitely the start of a great channel. Good luck!!

  • @Joel_M
    @Joel_M 2 months ago +3

    One thing that you might have missed, and what likely resulted in the worsening performance of your own implementation, is that the researchers only replaced the best model if the newer one won at least 55% of the evaluation games; otherwise it was rejected and trained for another N games.
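
    Roughly, that gating step can be sketched like this (the game count and threshold are illustrative, and play_match is a hypothetical helper that returns 1 when the candidate wins):

        def maybe_promote(best, candidate, games=400, threshold=0.55):
            # Pit the freshly trained candidate against the current best model and
            # only promote it if it wins at least the required share of games.
            wins = sum(play_match(candidate, best) for _ in range(games))
            return candidate if wins / games >= threshold else best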

  • @cmnog2167
    @cmnog2167 a year ago +4

    Quality entertainment. Immediately subscribed after finding out you had less than 50k-100k subscribers.

  • @Dhirajkumar-ls1ws
    @Dhirajkumar-ls1ws a year ago +5

    Great ML video, hope your channel grows greatly.

  • @brady6968
    @brady6968 a year ago +2

    very glad YouTube recommended this to me, good video!

  • @Alex50969
    @Alex50969 a year ago +23

    You made it right into the YouTube algorithm. If you can produce a new video in the next few days, your channel will grow extremely fast.
    Quality animations, interesting topic and great commentary. Good job :)

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +7

      Thanks! If I can find time over the next week, I may start another video. Lots of ideas!

  • @josephmazor725
    @josephmazor725 a year ago +1

    I've been looking for a great explainer for AlphaZero. Wish you the best of luck with all future videos; I'll be there to continue watching them.

  • @kapilpoudel8452
    @kapilpoudel8452 10 months ago

    This is the best video I have ever seen about AlphaZero!

  • @xvolutionre3610
    @xvolutionre3610 a year ago +3

    Hope your channel grows!

  • @SinanAkkoyun
    @SinanAkkoyun a year ago +1

    More like this! Finally somebody painting the whole picture!

  • @VPSOUNDS
    @VPSOUNDS a year ago

    And I am excited to find out what your next video is gonna be.

  • @yousefsharaf1957
    @yousefsharaf1957 a year ago +3

    Awesome Video!

  • @etsequentia6765
    @etsequentia6765 a year ago

    Zarathustra in the background... nice touch. All hail our new overlord, HAL 9000!

  • @revimfadli4666
    @revimfadli4666 a year ago +1

    Fantastic vid, surprised this is your first video

  • @NoNTr1v1aL
    @NoNTr1v1aL a year ago +1

    Absolutely amazing video! Subscribed.

  • @fan5188
    @fan5188 a year ago +1

    Hey Aaron, I love your video. Please keep up the good work 👏

  • @timtrix1449
    @timtrix1449 a year ago +1

    Great work! That was a really nice intro. I especially liked the storytelling, which made it feel more like a movie than science education :)

  • @NewtonMD
    @NewtonMD a year ago

    I mean who knows what he is talking about?!
    Just love the vid for the quality!

  • @shmug8363
    @shmug8363 a year ago

    please make more videos man, the quality is great. Would you consider exploring some of the root concepts in this video in a bit more detail? Like maybe a dedicated video on convolutional neural networks? I'm a beginner programmer and I'm super interested in this stuff.

  • @martinsosmucnieks8515
    @martinsosmucnieks8515 a year ago +1

    Great video! Can't wait to see more

  • @furbyfubar
    @furbyfubar a year ago +2

    Really nice video!
    One minor gripe is that at 5:30 you introduce a bunch of terms for the layers/steps of what's done with the data that aren't explained beyond the abstract graphics on screen. This is sort of fine, but when the next section at 6:06 begins with "So now we understand all the pieces of the puzzle..." it feels more than a little hand-waved. I'd have liked either a few more sentences explaining each of those steps in slightly more detail, OR an acknowledgment that we're not going to get into the weeds of those things in this video. So it's more a disconnect in how your script is written than really a problem with the info itself. The video assumes that everyone gets what (for example) "a low-dimensional embedding of the game state" is and why it's needed here. I can sort of figure that out, but when it's thrown at me between other sentences that are also dense with technical terms, delivered at that speed? Well, I really didn't feel like I understood all the pieces of the puzzle after that slide.

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +2

      That's fair, and thank you for the feedback! I've put a lot of thought into balancing between explaining things and assuming people know things, and I don't think I've got the balance quite right yet. I'd like to make a series that starts all the way at algebra and works up to state-of-the-art ML, but the amount of time and effort that would take with animations is enormous.
      Maybe this will become my life's work :)

    • @furbyfubar
      @furbyfubar a year ago +2

      @@2ToTheNthPower You certainly have a talent for making videos that explain stuff. So continuing to make videos (while not biting off so much that it feels overwhelming to continue) is likely the best (only?) way forward to get better at it, and to see if there's a career in it for you in the long run.
      From what I've heard other science communicator YouTubers say, having people who *can* ask the stupid or at least less informed questions is important. For example, @Numberphile works so well in part because Brady is *not* a mathematician, so he's constantly asking the obvious next question that other non-mathematicians might have.
      One way to get those questions early enough in the process that you can still answer them in a video is to have some sort of small group or community that reads your draft scripts and gives feedback on anything unclear, and to ask after pretty much every paragraph, "Are there any obvious questions that this paragraph raises that maybe should be answered before we move on?". The tricky part is finding people who are both interested enough in the subject to read/listen to those early drafts and give feedback, but who *don't* already know most of what's covered in the script. Once your channel grows it's possible to crowdsource this, but at first, having some friends who are willing to do it might be easier; community building is important and all, but it also takes a lot of time and effort that, early on, might be better spent on making more videos instead.

  • @lachlanperrier2851
    @lachlanperrier2851 a year ago

    U are amazing thank you for existing

  • @Luredreier
    @Luredreier a year ago +1

    Almost 4k views with less than 200 subscribers?
    Jeez.
    Well, you got one now.

  • @michaelheal3600
    @michaelheal3600 a year ago

    Oh my friends are gonna love this

  • @jasonchiu272
    @jasonchiu272 a year ago +2

    Everyone is talking about AlphaZero but I am wondering when AlphaOne will be released.

  • @nossonweissman
    @nossonweissman 10 months ago

    This was amazing!

  • @insideblankoutside
    @insideblankoutside a year ago +1

    Loved the video!

  • @조셉0309
    @조셉0309 a year ago +1

    Amazing video!

  • @dr.kraemer
    @dr.kraemer a year ago +1

    great work - keep at it!
    I bet you're correct that you need more space and more training data. Are you using a trie to represent your graph?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago

      I used a networkx directed graph data structure. Some sort of tree could have worked too, though theoretically I think it's more appropriate to describe it as a graph. Could be wrong tho
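
      For the curious, keeping MCTS statistics in a networkx DiGraph keyed by hashable positions can look roughly like this (a sketch with assumed key and attribute names, not code lifted from the repo):

          import networkx as nx

          tree = nx.DiGraph()

          def state_key(board):
              # A numpy array isn't hashable, but its raw bytes are,
              # so they can serve as a node key.
              return board.tobytes()

          def add_child(parent, child, prior):
              p, c = state_key(parent), state_key(child)
              for k in (p, c):
                  if k not in tree:
                      tree.add_node(k, N=0, W=0.0)   # visit count, total value
              tree.add_edge(p, c, P=prior)           # prior from the policy head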

  • @karlbooklover
    @karlbooklover a year ago

    great video, activated notifications :)

  • @AA-gl1dr
    @AA-gl1dr a year ago

    Amazing video, thank you so much.

  • @hijeffhere
    @hijeffhere a year ago +2

    "The results of AlphaTensor speak for themselves"
    Why do I feel like this is throwing shade against a recent chess issue? 😂

  • @internetworlock
    @internetworlock a year ago +2

    This is a really nice video. Do you plan to make more?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago

      If this video performs super well, I'll definitely consider it! I have lots of ideas, but I need a good justification to pour time and energy into them.

  • @ofekshochat9920
    @ofekshochat9920 a year ago

    It has been shared in the lc0 Discord. Good stuff

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago

      Ooh! I bet there are a lot of people there who know more about this than I do. I'm looking forward to seeing their comments!

    • @ofekshochat9920
      @ofekshochat9920 a year ago

      @@2ToTheNthPower I see, what's your handle? May I introduce you?
      Oh I read it wrong, thought you meant that you're there, join along :)

    • @ofekshochat9920
      @ofekshochat9920 a year ago

      @aarondavis5609 I have some stuff I'd like to say: it's a really cool video, and I'd definitely use it to introduce people to the concept (thanks!), but some stuff was a little confusing, like the highlighting of different terms in UCB. Also, P(s) is the policy and shouldn't be there when we're talking about the stuff before the NN...
      But great video!
      Come join, we use transformers now as well

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      Yeah, that makes sense. P(s) probably should've been brought in after the network was introduced.
      Ooh! Transformers are cool. I'm thinking about making a video on visual intuition for the attention mechanism, at least for NLP. We'll see how long that takes!
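
      For anyone following along, the selection score being discussed is the PUCT rule, roughly (a sketch; the exploration constant is illustrative):

          import math

          def puct(Q, P, N_parent, N_child, c_puct=1.5):
              # Q: mean value of the child so far; P: policy prior for the move.
              # The exploration term favours rarely visited children with a high
              # prior and shrinks as the child's visit count grows.
              return Q + c_puct * P * math.sqrt(N_parent) / (1 + N_child)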

  • @kraketito8999
    @kraketito8999 a year ago +1

    I am amazed by the quality of the animation. Which software is used for making these animations?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +2

      Manim in Python! It was started by 3Blue1Brown. Definitely look it up!
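
      A minimal scene, assuming the community edition (pip install manim), just to show the flavour:

          from manim import Scene, Circle, Create

          class Demo(Scene):
              def construct(self):
                  # Render with: manim -pql demo.py Demo
                  self.play(Create(Circle()))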

  • @PixelPhobiac
    @PixelPhobiac a year ago +1

    Awesome

  • @yourcatboymaid
    @yourcatboymaid a year ago +3

    Great video, I hope you decide to make more!

  • @MrIndomit
    @MrIndomit a year ago +1

    Cool video! Leaving a comment here to help promote it :)

  • @applepaul
    @applepaul a year ago

    At 2:35 you mention that we visit the root node (of the subtree) 9 times. I don't get this. Don't we just visit it once and then continue our DFS (depth-first search) down the tree? So essentially, don't we visit it only once and not 9 times?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago

      We visit it 9 times in the sense that we've experienced 9 different game branches so far as a result of visiting that game state. If you're simulating games one at a time, then you will pass through that node 9 different times. If you simulate games in batches, then you can do what you're describing.
      For the sake of MCTS, though, I think the "visit count" essentially refers to the number of leaf nodes that result from visiting a particular node. In that sense, it doesn't matter if we simulate one game at a time, or if we simulate games in batches.
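
      In code, the backup step that produces those visit counts looks roughly like this (a sketch; representing nodes as dicts with N/W/Q fields is an assumption):

          def backpropagate(path, value):
              # path: nodes from the root to the leaf of one simulated playout.
              # Every node on the path is credited once per playout, which is why
              # the subtree's root above ends up with a visit count of 9.
              for node in reversed(path):
                  node["N"] += 1
                  node["W"] += value
                  node["Q"] = node["W"] / node["N"]
                  value = -value   # flip perspective between the two players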

  • @JosephTarun
    @JosephTarun a year ago

    Let's go, a Ludbud in the computer science YouTube space!!

  • @reh.4919
    @reh.4919 a year ago

    If it takes just one atom to represent each possible state of a game of Pente, would we run out of atoms?
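
    A rough back-of-the-envelope says yes, assuming a standard 19x19 Pente board with three states per intersection (an overcount that ignores legality) and about 10^80 atoms in the observable universe:

        upper_bound_states = 3 ** (19 * 19)   # about 1.7e172 board configurations
        atoms_in_universe = 10 ** 80
        print(upper_bound_states > atoms_in_universe)   # True: one atom per state isn't enough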

  • @taptox
    @taptox a year ago

    How about adding the #Some2 tag?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      I’m not sure I know how to add tags, but it’s a good idea and I tried. Thanks for the suggestion!

  • @studgaming6160
    @studgaming6160 a year ago

    Next video when?

  • @farpurple
    @farpurple a year ago

    When matrix multiplication with O(1)?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago

      I don't think that's possible? I think the Strassen algorithm is around O(n^2.8)
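
      A quick sanity check on that exponent: Strassen replaces the eight half-size multiplications of the naive block method with seven, so the recurrence T(n) = 7*T(n/2) + O(n^2) gives an exponent of log2(7):

          import math
          print(math.log2(7))   # ~2.807, so Strassen is about O(n^2.81), far from O(1)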

  • @nubrake802
    @nubrake802 a year ago +1

    nice but too many blank black screens

  • @IqweoR
    @IqweoR a year ago +1

    Only one complaint: too much emptiness. Even if you are talking about atoms in the universe, insert some pictures over your voice so we don't stare at a black screen for this long. For the first 5 seconds of the video I thought my YouTube app had frozen and was just outputting sound without video. But content-wise this is top notch, the composition is great, and the topic is good. Keep up the good work; there aren't enough channels of this quality talking about AI. You will be famous in no time :)

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      Thanks for your input and the compliment! I'll take them both to heart.

    • @MsHofmannsJut
      @MsHofmannsJut a year ago

      I disagree. At last someone not blasting us with multimodal excitation.

    • @favesongslist
      @favesongslist a year ago

      @@MsHofmannsJut There is a balance here; I thought there was a video issue with just that black screen. So glad I kept watching.

  • @yourfutureself4327
    @yourfutureself4327 a year ago

    💚

  • @thanapatrachartburut513
    @thanapatrachartburut513 9 months ago

    great

  • @JoelFazio
    @JoelFazio a year ago

    Good start, but holy shit there is way too much black screen time. Also, your mic clips quite a lot. Keep at it, you'll be big time on YT in no time.

  • @andrewrobison581
    @andrewrobison581 a year ago

    the same as it is large, so it is small. the only winning move is none at all. memento mori

  • @emmettdja
    @emmettdja a year ago

    Monte Carlo tree search

  • @mxn5132
    @mxn5132 a year ago

    Neat

  • @pananaOwO
    @pananaOwO a year ago

    I was the thousandth like

  • @millerdowneyy
    @millerdowneyy a year ago

    Bro sounds like jak from disrupt

  • @uku4171
    @uku4171 a year ago

    Isn't AlphaFold 2 an even more impressive feat than the matrix multiplication thing?

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      AlphaFold 2 is a very impressive domain-specific feat. The matrix multiplication advancement is more of a meta advancement... it has the potential to seriously improve the computational efficiency of training models like AlphaFold 2, so in my opinion the matrix multiplication improvement will have a much broader positive impact than AlphaFold 2 has had so far.

  • @swaruppaul6014
    @swaruppaul6014 a year ago

    Gr8

  • @MightyElemental
    @MightyElemental a year ago

    Deary me, "carbon emissions" being brought up in machine learning training 😩

    • @2ToTheNthPower
      @2ToTheNthPower  a year ago +1

      Where do you think the energy required to run an entire datacenter's worth of TPUs comes from, exactly? Ethics, climate science, and machine learning are inseparably linked when we're talking about a project of this scale, and we can't escape that.

  • @marcotroster8247
    @marcotroster8247 a year ago

    I don't know why people in AI don't admit that this field requires technical excellence in high-performance computing to make training even feasible. It's important to get the results within a reasonable timespan, unlike Hitchhiker's Guide 😂
    I've spent the past 3 months accelerating a training run from a month to a few hours. Let's make better use of our hardware instead of throwing money at the problem. Modern PC games can squeeze insane amounts of compute out of our machines; let's do the same in our training runs as well 😉

  • @stonefacehhr6638
    @stonefacehhr6638 a year ago

    What

  • @captain_crunk
    @captain_crunk a year ago +2

    I am sub #320. Wait, 320?