I started to look into implementing AlphaZero for a different game and this is such a great overview. There are some details that don't quite seem to align with what I got from the paper and its materials, but they're so insignificant that they're not worth mentioning (besides, I'm not even sure I got it right). I commend you for getting things to run; it's a project that sounds easier than it is (or I'm just dumb). Right now I'm struggling with the NN part, especially since I cursed myself by deciding to write my implementation in Rust. Do you plan on publishing your source code?
This is definitely not an easy project, and it sounds like you're making it a bit more difficult by trying to code NNs from scratch, so give yourself lots of credit on that front.
I definitely left a good chunk of information out... I don't really discuss self-play here, or adding Dirichlet noise to the prior, or compressing arrays and making them hashable so I can put them in a graph without destroying my computer! There's also some ambiguity in my mind as to whether playouts are used at all after training is complete, and that seems pretty important, too.
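If it helps, here's roughly what the Dirichlet noise step looks like (a minimal sketch; alpha = 0.3 and epsilon = 0.25 are the values I've seen reported for chess, and the helper name is mine, not from any library):

```python
import numpy as np

def add_exploration_noise(priors, alpha=0.3, epsilon=0.25):
    """Mix Dirichlet noise into the root prior so self-play keeps
    exploring. alpha=0.3 is the value reported for chess; the
    function name is my own, illustrative choice."""
    noise = np.random.dirichlet([alpha] * len(priors))
    return (1 - epsilon) * np.asarray(priors) + epsilon * noise

# Example: a prior over 4 legal moves, perturbed for exploration.
print(add_exploration_noise([0.7, 0.1, 0.1, 0.1]))
```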
I have my code here ( github.com/2ToTheNthPower/Pente-AI ), and I recommend looking up "Accelerating Self-Play Learning in Go" on arXiv. I think it's a better resource than the original paper was.
Hope that helps!
@@2ToTheNthPower It definitely will.
Yeah, with the neural network I definitely shot myself in the foot, because TensorFlow in Rust is pretty bare-bones.
I'll also look into that paper, but I have to say what helped the most was not the actual paper but rather the pseudocode they provided. It just sadly skimps on the neural network front.
@@conando025 I gotcha. From what I've read about the neural network, they primarily used ResNets, since residual connections allow very deep networks to train and get better results than shallow networks. I don't know if the KataGo paper has any pseudocode, so it may not be helpful on that front.
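For a concrete picture, here's a minimal sketch of one residual block in Keras (the filter count, block depth, and input planes below are illustrative, not the exact published configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=256):
    """One conv residual block: the skip connection lets gradients
    flow through very deep stacks. Filter count is illustrative."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, skip])  # the residual connection
    return layers.ReLU()(x)

# Example: stack a few blocks on a 19x19 board with 17 input planes.
inputs = tf.keras.Input(shape=(19, 19, 17))
x = layers.Conv2D(256, 3, padding="same")(inputs)
for _ in range(4):  # the paper used far deeper stacks; 4 keeps this small
    x = residual_block(x)
model = tf.keras.Model(inputs, x)
```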
While watching the video I legit thought you'd have 200k subs minimum considering the quality. I'm pretty sure you'll get that in no time. Good luck and keep up the clean work!
That's an incredible compliment! Thank you!
I saw this comment and checked, and this is a criminally underrated channel. I am subscribing right now.
@@Asterism_Desmos same here
Just subbed, good luck buddy 👍
One thing that you might have missed, and which likely caused the worsening performance in your own implementation, is that the researchers only replaced the best model if the newer one won at least 55% of its evaluation games; otherwise it was rejected and trained for another N games.
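If it helps, the gating loop is shaped roughly like this (a minimal sketch; `evaluate` here is a random stub standing in for a real match runner, and the 400 games / 55% threshold follow the AlphaGo Zero evaluation setup as I understand it):

```python
import random

def evaluate(candidate, best, n_games=400):
    """Hypothetical match runner: play n_games between the two models
    and return the candidate's win rate. Random stub for illustration."""
    wins = sum(random.random() < 0.5 for _ in range(n_games))
    return wins / n_games

def maybe_promote(best, candidate, threshold=0.55):
    """Gating: the candidate replaces the best model only if it wins
    at least 55% of evaluation games; otherwise it keeps training."""
    if evaluate(candidate, best) >= threshold:
        return candidate  # promote: self-play now uses the new model
    return best           # reject: keep the old model and train further
```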
9:05
"How will they be used tomorrow?"
To keep the base of the social pyramid under control while the top 0.01% live as demigods.
There are definitely some interesting ethical, social, and political issues that will emerge as AI becomes more and more capable.
This is the sad state of machine learning technology: it has such great potential to do good, but the ones with the most power to utilise it will only use it for personal gain, which has been how humanity works for centuries upon centuries.
Definitely the start of a great channel. Good luck!!
Quality entertainment. Immediately subscribed after finding out you had less than 50k-100k subscribers.
Very glad YouTube recommended this to me, good video!
Great ML video, hope your channel grows greatly.
Fantastic vid, surprised this is your first video
I’ve been looking for a great explainer for AlphaZero. Wish you the best of luck with all future videos; I’ll be there to continue watching them.
Hope your channel grows!
You made it right into the youtube algorithm, if you are able to produce a new video in the next days, your channel will grow extremely fast.
Quality animations, interesting topic and great commentary. Good job :)
Thanks! If I can find time over the next week, I may start another video. Lots of ideas!
Awesome Video!
Everyone is talking about AlphaZero, but I am wondering when AlphaOne will be released.
smh DeepMind needs to get it together
Really nice video!
One minor gripe is that at 5:30 you introduce a bunch of terms for the layers/steps of what's done with the data that aren't explained beyond the abstract graphics on screen. This is sort of fine, but when the next section at 6:06 begins with "So now we understand all the pieces of the puzzle..." it feels more than a little hand-waved. I'd have liked either a few more sentences explaining each of those steps in slightly more detail, OR an acknowledgment that we're not going to get into the weeds of those things in this video. So it's more a disconnect in how your script's written than really a problem with the info itself. The video assumes that everyone gets what (for example) "a low-dimensional embedding of the gamestate" is and why it's needed here. I can sort of figure that out, but when it's thrown at me in between other sentences that are also dense with technical terms, delivered at that speed? Well, I really didn't feel like I understood all the pieces of the puzzle after that slide.
That's fair, and thank you for the feedback! I've put a lot of thought into balancing between explaining things and assuming people know things, and I don't think I've got the balance quite right yet. I'd like to make a series that starts all the way at algebra and works up to state-of-the-art ML, but the amount of time and effort that would take with animations is enormous.
Maybe this will become my life's work :)
@@2ToTheNthPower You certainly have a talent for making videos that explain stuff. So just keeping on making videos (while not biting off so much that it feels overwhelming to continue) is likely the best (only?) way forward to get better at it, and to see if there's a career in it for you in the long run.
From what I've heard other science communicator YouTubers say, having people who *can* ask the stupid, or at least less informed, questions is important. For example, @Numberphile works so well in part because Brady is *not* a mathematician, so he's constantly asking the obvious next question that other non-mathematicians might have.
One way to get those questions early enough in the process that you can still answer them in a video is to have some sort of small group or community that reads your script drafts and gives feedback on anything unclear; after pretty much every paragraph, ask them "Are there any obvious questions that this paragraph raises that maybe should be answered before we move on?". The tricky part is to find people who are interested enough in the subject to want to read/listen to those early drafts and give feedback, but who *don't* already know most of what's covered in the script. Once your channel grows, it's possible to crowdsource this, but at first, having some friends who are willing to do it might be easier; community building is important and all, but it also takes a lot of time and effort that, early on, might be better spent on making more videos instead.
And I am excited to find out what your next video is gonna be.
This is the best video I have ever seen about AlphaZero!
More like this! Finally somebody painting the whole picture!
Great work! That was a really nice intro. I especially liked the storytelling, which made it feel more like a movie than science education :)
Absolutely amazing video! Subscribed.
Great video! Can't wait to see more
Hey Aaron, I love your video. Please keep the good work 👏
Loved the video!
I mean who knows what he is talking about?!
Just love the vid for the quality!
Oh my friends are gonna love this
This was amazing!
Zarathustra in the background... nice touch. All hail our new overlord, HAL 9000!
At 2:35 you mention that we visit the root node (of the subtree) 9 times. I don't get this. Don't we just visit it once and then continue our DFS (depth-first search) down the tree? So essentially, don't we visit it only once and not 9 times?
We visit it 9 times in the sense that we've experienced 9 different game branches so far as a result of visiting that game state. If you're simulating games one at a time, then you will pass through that node 9 different times. If you simulate games in batches, then you can do what you're describing.
For the sake of MCTS, though, I think the "visit count" essentially refers to the number of leaf nodes that result from visiting a particular node. In that sense, it doesn't matter if we simulate one game at a time, or if we simulate games in batches.
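Concretely, the bookkeeping looks something like this minimal sketch (the node structure is mine, not from the video):

```python
class Node:
    """Minimal MCTS node: visit count N and total value W."""
    def __init__(self):
        self.N = 0
        self.W = 0.0
        self.children = {}

def backpropagate(path, value):
    """After each simulated game, every node on the root-to-leaf path
    gets its visit count bumped; that's why the root's N grows by 1
    per simulation, i.e. 9 simulations -> N == 9 at the root."""
    for node in path:
        node.N += 1
        node.W += value
        value = -value  # flip the value between the two players' perspectives
```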
Please make more videos man, the quality is great. Would you consider exploring some of the root concepts in this video in a bit more detail? Like maybe a dedicated video on convolutional neural networks? I'm a beginner programmer and I'm super interested in this stuff.
Amazing video!
Great video, activated notifications :)
U are amazing thank you for existing
Almost 4k views with fewer than 200 subscribers?
Jeez.
Well, you got one now.
"The result of alpha tensor speak for themselves"
Why do I feel like this is throwing shade against a recent chess issue? 😂
😉
Amazing video, thank you so much.
Awesome
great work - keep at it!
I bet you're correct that you need more space and more training data. Are you using a trie to represent your graph?
I used a networkx directed graph data structure. Some sort of tree could have worked too, though theoretically I think it's more appropriate to describe it as a graph. Could be wrong tho
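For anyone curious, here's a minimal sketch of that setup with networkx (the byte-compression trick is roughly what I meant earlier by making arrays hashable):

```python
import networkx as nx
import numpy as np

G = nx.DiGraph()

def state_key(board):
    """Boards (numpy arrays) aren't hashable, so compress each one
    to bytes to use it as a graph node key."""
    return board.tobytes()

board = np.zeros((19, 19), dtype=np.int8)
child = board.copy()
child[3, 3] = 1  # one hypothetical move

# A DiGraph (rather than a tree) lets transpositions -- the same
# position reached by different move orders -- share a single node.
G.add_edge(state_key(board), state_key(child), move=(3, 3))
G.nodes[state_key(child)]["N"] = 0  # visit count stored on the node
```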
It's been shared in the lc0 Discord. Good stuff
Ooh! I bet there are a lot of people there who know more about this than I do. I'm looking forward to seeing their comments!
@@2ToTheNthPower I see, what's your handle? May I introduce you?
Oh, I read it wrong; I thought you meant that you're there. Join along :)
@aarondavis5609 I have some stuff I'd like to say. It's a really cool video, and I'd definitely use it to introduce people to the concept (thanks!), but some stuff was a little confusing, like the highlighting of different terms in UCB. Also, P(s) is the policy, and shouldn't be there when we're talking about the stuff before the NN...
But great video!
Come join, we use transformers now as well
Yeah, that makes sense. P(s) probably should've been brought in after the network was introduced.
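For anyone following along, the distinction is roughly this (textbook UCB1 versus AlphaZero's PUCT selection rule; these are standard formulas, not taken from the video):

```latex
% Plain MCTS (before any network) typically selects moves with UCB1:
\[
  \mathrm{UCB1}(s,a) = Q(s,a) + c\,\sqrt{\frac{\ln N(s)}{N(s,a)}}
\]
% AlphaZero's PUCT rule is where the policy prior P(s,a) enters:
\[
  \mathrm{PUCT}(s,a) = Q(s,a) + c_{\mathrm{puct}}\,P(s,a)\,\frac{\sqrt{N(s)}}{1 + N(s,a)}
\]
```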
Ooh! Transformers are cool. I'm thinking about making a video on visual intuition for the attention mechanism, at least for NLP. We'll see how long that takes!
Let's go, a Ludbud in the computer science YouTube space!!
😁
Great video, I hope you decide to make more!
If it took just one atom to represent each possible state of a game of Pente, would we run out of atoms?
This is a really nice video. Do you plan to make more?
If this video performs super well, I'll definitely consider it! I have lots of ideas, but I need a good justification to pour time and energy into them.
How about adding the #SoME2 tag?
I’m not sure I know how to add tags, but it’s a good idea and I tried. Thanks for the suggestion!
When matrix multiplication with O(1)?
I don't think that's possible? Strassen's algorithm is around O(n^2.81), and even the best known algorithms are still above O(n^2).
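For reference, here's the quick worked bound (standard analysis): Strassen replaces the 8 recursive half-size multiplications of the naive blocked method with 7, which gives

```latex
\[
  T(n) = 7\,T(n/2) + O(n^2)
  \quad\Longrightarrow\quad
  T(n) = O\!\left(n^{\log_2 7}\right) \approx O\!\left(n^{2.81}\right)
\]
```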
Cool video! I'm leaving a comment here to help promote it :)
Next video when?
nice but too many blank black screens
Thanks for the feedback!
I am amazed by the quality of the animation. Which software is used for making these animations?
Manim in Python! It was started by 3Blue1Brown. Definitely look it up!
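If you want to try it, a minimal Manim Community Edition scene looks something like this (file and class names are just examples):

```python
# Minimal Manim Community Edition scene.
# Render with: manim -pql scene.py HelloManim
from manim import Scene, Circle, Text, Create, Write, UP

class HelloManim(Scene):
    def construct(self):
        circle = Circle()            # a simple shape mobject
        label = Text("Hello, Manim!").next_to(circle, UP)
        self.play(Create(circle))    # animate drawing the circle
        self.play(Write(label))      # animate writing the text
```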
💚
great
the same as it is large, so it is small. the only winning move is none at all. memento mori
Neat
Monte Carlo tree search
I was the thousandth like
Only one complaint: too much emptiness. Even if you're talking about atoms in the universe, insert some pictures over your voice so we don't stare at a black screen for that long. For the first 5 seconds of the video I thought my YouTube app froze and was just outputting sound without video. But content-wise this is top-notch, the composition is great, and the topic is good. Keep up the good work; there aren't enough channels of this quality talking about AI. You'll be famous in no time :)
Thanks for your input and the compliment! I’ll take them both to heart
I disagree. At last someone not blasting us with multimodal excitation.
@@MsHofmannsJut There is a balance here; I thought there was a video issue with all that black screen. So glad I kept watching.
Bro sounds like Jak from Disrupt
Gr8
Good start, but holy shit there is way too much black screen time. Also, your mic clips quite a lot. Keep at it, you'll be big time on YT in no time.
Isn't AlphaFold 2 an even more impressive feat than the matrix multiplication thing?
AlphaFold 2 is a very impressive domain-specific feat. The matrix multiplication advancement is more of a meta advancement... it has the potential to seriously improve the computational efficiency of training models like AlphaFold 2, so in my opinion the matrix multiplication improvement will have a much broader positive impact than AlphaFold 2 has had so far.
Deary me, "carbon emissions" being brought up in machine learning training 😩
Where do you think the energy required to run an entire datacenter worth of TPUs comes from, exactly? Ethics, climate science, and machine learning are inseparably linked when we're talking about a project of this scale, and we can't escape that.
What
I don't know why people in AI don't admit that this field requires technical excellence in high-performance computing to make training runs even feasible. It's important to get results within a reasonable timespan, unlike in The Hitchhiker's Guide 😂
I've spent the past 3 months accelerating a training run from a month down to a few hours. Let's make better use of our hardware instead of throwing money at the problem. Modern PC games squeeze insane amounts of compute out of our machines; let's facilitate that in our training as well 😉
I am sub #320. Wait, 320?