probably "all my descendants have basically the same score, so it doesn't matter which one survives", since the fitness algorithm was basically at 99.9%, or maybe he programmed it so it would also try to get the white ball near a hole, and it decided it would be more beneficial to get it into the hole instead of the black one
Similar to the Snake it realized its whole existence is a fabrication designed to achieve one goal. It chooses to make us aware of it through a final 'Screw you!' on the last shot
Black ball on the black hole; it probably didn't realise it was still in play, so the last shot was the only shot it could make: dropping the only ball on the table, the white one.
The problem of the algorithm memorising instead of learning could be fixed if another player began the game. This gives you more control over the outcome of the game.
Nice vid! I would say try training based purely on the state of the table (ball positions, number of balls remaining, colors/stripes) as input and have your AI determine the best shot vector from that state. Start small with fewer balls and then increase from there. Penalize sunk balls of the wrong color, etc. Throw in some actual pool rules for your fitness function.
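A fitness function along those lines might look like this (a hypothetical Python sketch; the ball representation, rule set, and penalty weights are all assumptions for illustration, not taken from the video):

```python
# Hypothetical sketch of a rule-aware fitness function for a pool AI.
# The Ball fields and all weights below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Ball:
    color: str      # "solid", "striped", "black", or "cue"
    sunk: bool

def fitness(balls, my_color, cue_scratched, game_over):
    score = 0.0
    for ball in balls:
        if not ball.sunk:
            continue
        if ball.color == my_color:
            score += 10.0          # reward sinking our own balls
        elif ball.color == "black":
            # sinking the 8-ball early loses the game outright
            score += 100.0 if game_over else -100.0
        else:
            score -= 5.0           # penalize sinking the opponent's balls
    if cue_scratched:
        score -= 20.0              # scratching the cue ball is a foul
    return score

balls = [Ball("solid", True), Ball("striped", True), Ball("black", False)]
print(fitness(balls, "solid", cue_scratched=False, game_over=False))  # 10 - 5 = 5.0
```

Starting with fewer balls and gradually adding more, as suggested, would then just mean growing the `balls` list between training stages.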
It's a bit obscure but I would love to see an AI learn to play CoreWars. It's a game where you code a "warrior" to destroy another "warrior" by terminating its code. The code for the warrior programs is all in assembly so it wouldn't be that impossible to do. It's a really interesting game and the idea of an AI writing code would be pretty cool.
10:23 I actually think that the AI does this because it can't detect the black ball, as it is partly merged with the pot. So it sees no balls and doesn't know what to do next. Potting the white ball sounds logical for an AI to try at that point.
I kinda want to get started on AI, neural networks and all that stuff. However, I don't really know where I should start. Could you help me with this? Awesome video, even with MovieMaker x) Keep going!
ATP I already knew about 3B1B's videos, and my main problem was how to implement it in code. Since your comment, I've watched a lot of TheCodingTrain's videos, which gave me a better understanding of how all of this works. Thanks a lot!
I was expecting a neural network, but I got an AI that shoots with a random direction and force and takes the best values per step. Place the balls somewhere different and it doesn't work anymore. A real AI has one set of values it can use regardless of what situation it's in, like the Snake AI you made and many other AIs you made. Although you did of course admit that at the beginning. I like how you bought a microphone and started voice acting, because that really improved your videos.
Well that is true, but this is Pool, a game where randomness doesn't apply, so the fact that it will suck when balls are in different places isn't an issue. Of course, I can see where you're coming from. But even though this method may not be the best choice for educational purposes, I think it's actually a better choice for Pool itself. A neural network would need a huge number of input neurons just to read the positions of the balls correctly, as well as a huge number of output neurons just to shoot the ball in any direction and at various strengths. But yeah, you do have a point, which I do agree with. EDIT: Also, I realize my comment may be a bit misleading, since I said that this wasn't the best choice for educational purposes. What I meant was that a neural network would be a better choice, but I do think this is educational, especially on genetic algorithms.
@@KineticManiac If someone else shoots a ball first, the balls will be in a different place every match. Code Bullet explained in a different video, which I watched later, that an AI does not have to be a neural network. Games have AI enemies, but they are hardcoded AI like this pool game or his Pacman game or his AI code example with the dots moving to a target, which doesn't work with randomized levels.
@@NaudVanDalen True, this AI will only work for single-player Pool. One thing though, this is not hard-coded AI, the type that is common in games. Such AIs still do gather information about the environment and make a dynamic decision. However their decision making process is simply a code written by the programmers, they don't learn. This AI on the other hand does not gather any information about the environment, the decision making process is static. But this static decision making is not coded by Code Bullet by hand. Instead he developed a program that develops a static decision making process via genetic algorithms. So it is actually learning, even though the decision making itself is static. Not a huge deal, but I just wanted to explain the difference.
I actually liked movie maker. Simple, but works pretty good compared to most free software. Most other free programs don't even have a simple fade in/out option. XD Wish I could get it again. Still looking for a decent simple program that doesn't constantly freeze or lag on my laptop.
A static scenario from first to last shot; it's not pool. Just a slight difference in one or more object balls at any time and the whole thing falls apart. Not really an AI-ish thing, rather brute force based on a given static starting position.
anton ZIZIC It’s a genetic algorithm. Maybe he programmed the algorithm a bit poorly, but even then, in general, with genetic algorithms the AI adapts or “learns” in such a way that it only learns what must be learned to accomplish its goal. Given its limitations, it minimizes the time and attempts needed to become efficient at the one thing by only learning what it knows to be useful. In other words, he is teaching the AI like a human would teach itself, not like an AI would teach itself. It doesn’t require that the AI be good to eventually learn how to play a good game, whereas with any other learning algorithm, the AI could immediately be a monster, but it would learn and do unnecessary things in the process. It’s just circumstantial, and it depends on the goal
The game is broken if played perfectly and can be played perfectly by humans. But one of the players (first or second, I can't remember) will always win if no mistake is made so I think this would just be boring to watch as the result isn't as interesting as one might hope.
A greedy algorithm is basically an algorithm with a finite number of steps and some choice to make at the end of each step. At a high level, yes, this is a greedy algorithm: it is trying to maximize fitness, and at each step it chooses which species reproduce to make the next generation. But greedy algorithms apply to a lot more, and are pretty vaguely defined in that way. A lot (and I mean a LOT) of very famous and/or very frequently used algorithms can be argued to be greedy.
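For a concrete example of the pattern being described, here is a textbook greedy algorithm (coin change, largest coin first) in Python:

```python
# A classic greedy algorithm: make change using the largest coins first.
# Each step makes the locally best choice and never revisits it, which is
# what "greedy" means; with these US-style denominations it happens to be
# optimal, though greedy choice is not optimal for every coin system.
def greedy_change(amount, coins=(25, 10, 5, 1)):
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(68))  # [25, 25, 10, 5, 1, 1, 1]
```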
This is more of a case of exploring a variable space for a single solution with high reward, not really learning how to play pool at all. Of course, with a random pool table this would have no idea what to do - which would be the goal that I would refer to as "learning to play pool". Not an uninteresting video - I just wouldn't call it "AI learns to play pool"
Andrew Christensen It’s a genetic algorithm. I think it does count as learning in a broad sense. Just not what people usually mean when they use the term “AI learning”, which is a very technical term and refers to a specific type of learning.
Also, he did admit that the algorithm is a bad way to solve the problem of an AI playing pool. The reason he did it that way despite that, I’m fairly certain, is because that’s the focus of his videos and of his research anyway.
I kind of agree with this. The AI should take in the game state and decide on the direction to hit the ball. This is a random search for the ideal set of moves. Still cool though.
@Andrew Christensen credits to the uploader. I do agree with you. I think if there was a 2nd player (human or random shot) alternating with the learned agent, it will make it more interesting and a real test for learned AI agent.
Plot twist: they've captured all the pool tables in the world and are holding a minor populated island hostage for more pool tables, balls, and processing power.
That's been done a lot and it becomes a stalemate very quickly; the AIs basically keep their paddles in line with the ball at all times, and no points are scored ever again!
Yeah, pretty much, that's why it's often more interesting to do more complex games. To be fair for those who are just learning it could be a good testing bed for your first AIs.
I enjoy all your videos. I am sad that there was no hilarious banter and animations. But, oh well. I believe the funny stuff I love so much will appear in future videos. I can't code at all in these languages. I only know BASIC.
Which is exactly what machine learning is, and it’s what we call "AI" today. But you’re 100% correct, it’s brute force. It works by using a lot of energy to process a lot of data while first telling the machine what the result should look like.
@@MrBanarium Machine Learning can do more than just brute force. Plus AI is more things than just machine learning; games tend to have AI that are pre-programmed.
@@Liggliluff Of course AI isn't just ML, I never said otherwise. The initial comment was saying that it *wasn't* AI, so I pointed out that it was ML, which is a type of AI. As for ML, it's not about "doing more" than brute force: ML *is*, in a sense, brute force. Of course there's a whole lot more to it, but it wouldn't even be possible without a huge amount of computing time and energy to let the algorithm learn until it can achieve the desired result. It is, by its very design, brute-forcing its way through a dataset and toward a mathematical model solving that dataset.
Yes, this is a very brute-force way of going at things, since we are not really considering the current state of the table, just that doing X followed by Y and then Z gives us a desirable result. This isn't "artificial intelligence" but rather a randomized brute-force approach, which is likely slower at finding the optimal solution than a non-randomized brute-force approach (unless we actively note down which randomized versions we have tested already, but that will chew up data storage rather quickly, not to mention that generating random numbers is likely more intensive than just incrementing a value).
Source code for this is up, so check it out: github.com/Code-Bullet/Pool_AI
Hey, could you make a tutorial series on how to make these sort of AIs? We all find them so interesting and I'm sure there's others that want to see a tutorial!
which language is this
please make a tutorial on how to make an AI
I would love to see an AI that actually reacts to the situation. Instead of just selecting the best from random attempts you could use each of the ball's coordinates as inputs to a neural network with 2 outputs representing angle and power. You wouldn't have to train the shots one by one and the resulting network could actually play games against opponents which would be super interesting.
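As a rough sketch of that idea (pure Python, untrained random weights; the hidden-layer size and the way the two outputs are squashed into angle/power ranges are assumptions for illustration):

```python
# Minimal sketch of the suggested architecture: ball (x, y) coordinates in,
# (angle, power) out. One hidden layer with random, untrained weights; a real
# version would train the weights with a genetic algorithm or backprop.
import math
import random

random.seed(0)

N_BALLS = 16
IN, HID, OUT = N_BALLS * 2, 20, 2   # 2 outputs: angle and power

w1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HID)]
w2 = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(OUT)]

def forward(positions):
    """positions: flat list of 32 floats (x, y for each ball)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, positions))) for row in w1]
    out = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    angle = math.tanh(out[0]) * math.pi    # squash into [-pi, pi]
    power = 1 / (1 + math.exp(-out[1]))    # squash into (0, 1)
    return angle, power

angle, power = forward([random.random() for _ in range(IN)])
print(angle, power)
```

Because the whole table state is the input, the same trained network could respond to any ball layout, which is exactly what would let it play against an opponent.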
whats the name of the second song you used
Wow these old CB videos are strange.
Duke Skibbington can’t believe he went from this to his own animated character in what, less than a year.
@@a_andrade6939 Evan is AI confirmed
*sniff* They grow up so fast.
Yea
really CB is a sentient AI trying to make more sentient AI by informing humans on how to make AI that CB then can train into sentient AI
There is a simple reason ai will never become a professional pool player.
It doesn't put the powder on the end of the stick
alex xans he needs more gunpowder
It also isn't taking into account the effects on the next shot. It's just hitting the best shot it can from its current state. A pro player will be moving the cue ball into positions to make his later shots easier.
@@TheThunderwesel nah, its definitely just the lack of powder that's limiting its abilities
Jacob R lol that’s funny
But it’s chalk...
Man, the friction on that table is incredible. It's like there's a centimeter of water on it or something.
Fun fact, this pool game was designed with a foam pad instead of slate.
It’s called ‘pool’ for a reason.
@@AidenC2718 Badum tss
Black ball literally 1mm from going into the hole.
White Ball: I have decided that i want to die
its called cue ball lol
@@donoui1989 ok boomer
@DeShawn ok boomer
@DeShawn Ah yes. Gen D. My favorite gen.
it's called cum ball
It would be fascinating to see a retake of this from your modern perspective, with a few extra integers of consideration for the generational AI. Maybe sunk balls are worth 3 each, any ball in a line with the white ball and a hole is worth 2 but only one can trigger this bonus score, and any balls in front of a hole are worth 1 each? A fascinating proposition. I bet with those extra incentives, the AI would quickly add multi-shots as a priority.
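That proposed scoring scheme could be sketched like this (the table dimensions, pocket positions, and the distance/collinearity tolerances are all made-up assumptions):

```python
# Sketch of the scoring scheme proposed above: sunk balls are worth 3 each,
# at most one ball collinear with the cue ball and a pocket earns a bonus 2,
# and each ball sitting near a pocket earns 1. Geometry thresholds are
# arbitrary assumptions.
import math

POCKETS = [(0, 0), (50, 0), (100, 0), (0, 50), (50, 50), (100, 50)]

def near_pocket(ball, radius=5.0):
    return any(math.dist(ball, p) < radius for p in POCKETS)

def lined_up(cue, ball, tol=0.05):
    # ball lies on the segment from the cue ball to some pocket
    for p in POCKETS:
        d1, d2, d = math.dist(cue, ball), math.dist(ball, p), math.dist(cue, p)
        if abs(d1 + d2 - d) < tol:
            return True
    return False

def score(cue, on_table, sunk_count):
    s = 3 * sunk_count
    if any(lined_up(cue, b) for b in on_table):
        s += 2                               # bonus triggers at most once
    s += sum(near_pocket(b) for b in on_table)
    return s

print(score(cue=(50, 25), on_table=[(75, 37.5), (98, 2)], sunk_count=2))
# → 9 (3*2 for sunk balls + 2 for the lined-up ball + 1 for the ball near a pocket)
```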
"FINISH HIM"
_suicides_
I'm dead lmfao 😂
Maybe Cortana listened to the "finish" command and killed the AI
so was the ai
Genetic algorithms be like:
I looked at 200 different possible outcomes
In how many do we win?
One
Is this the one where we win?
If I tell you, it won’t happen .
noice
That's more or less how self driving vehicles work. :)
Cody Barkman except in the endgame he looks through like 16 million possibilities
@@ShitpostHeaven 14000605
Meet CB when he wasn't constantly yelling for 10 minutes
i honestly kinda like this better, he isn't trying so hard to be funny
@@Inversine omg exactly, its so try- hard and comes of as fake and just trying to be quirky
@@illosovic Also he doesn't swear
Gradual Growth Genetic Algorithm.
GGGA
Good Game, Generated Ai (Kappa)
Gorgeously Good Gradual Growth Genetic Game
GGGGG
:D
Plot twist: This video is actually called "Will the pool ball hit the corner?
I can feel the AAAAAAAHHHHHHHH! SOUND at Last of the video 😂😂
i didn't realise how old this was at first and i was like "wait hang on a sec why is there just a wall of text where is his sweet voice"
Dang
Same ! Did not realize at first I was watching an older video :D
the fail at the end XD i think it took the "black ball is last" too literally XD
The best part of code bullet videos is the commentary
Could you possibly remake this video? I enjoy the way you talk through stuff and explain things. This just isn't the same. Idk if you see these; hopefully you do.
Kino Uura
Yeah I skipped through to find the voice, didn’t bother watching.
I watched it because I though it was cool. And I agree that he should remake this with voice over it, or just revisit it again
And if he does it again, he should add an actual pool stick (idk what it’s called) and animate it
He should do a remake with q learning
And set it up so that it's 2 player, AI vs AI
Not only would he have to finally animate the stripes, it'd give him a valid reason to redo the video
Only thing that messes with me about this. Is that it looks to be chasing not just a perfect game, but the same perfect game.
Have you thought about taking two of these systems and alternating which one shoots each time one misses? Similar to real pool.
This would be similar to a GAN... and may be able to give you training for a more diverse set of ball positions.
LordDecapo that's the way the algorithm works, it finds one perfect game. It doesn't do any analyzing of the game board, it's just a series of shots to make a good game.
jetison333 I get that yes. This is a static function optimization system.
I was referring to future work by him, like a next step of this. Because personally, that would be an awesome next version: a pair of pool-playing ML systems that have some random nature to them to make each game unique, so they have to learn the game rather than a single function.
LordDecapo ah okay
The shot should have a small amount of error built in so it's not the exact same game every time
Joe Phillips agreed. That would be enough to make it a much more robust bot
Your code is good and all but lets be honest I came to hear you talk about your code your struggle your cockiness the celebration just to be torn down and then rebuilt again with more and more confidence. Your videos are just like the struggles of life and inspire me to keep pushing forward making changes and even when it is bad I know I'm constantly getting better just like your code.
Could an A.I. play a game like HL2 or Far Cry?
went unnoticed for way too long x)
Yes but it would be too confusing to code for one person
Kalendarmenn
A neural network wouldn’t be the best way to teach an AI to play a big and complicated game. There are a lot of easier ways to do it.
Google created an AI to play StarCraft 2. You can find videos of it on RUclips
I think there are way too many decisions to make every frame for any computer other than the human brain to be able to figure it out
I’ve just recently found you and I’m loving both your old and new videos. Keep up the amazing work!
This went from agonizing to satisfying really quickly
Could you try this again? Like, another video? I’d like to see how this might be done with Q-learning. Like the driving video: getting each ball in in the right order = points
The problem with Q-learning is that the number of states gets out of hand very quickly. Q-learning starts to become impossible rather quickly in any situation more complex than tic-tac-toe. To illustrate how Q-learning would not work in pool...
Imagine: in any valid state, there are anywhere between 2 (cue + 8 ball) and 16 (cue + 8 ball + 7 solid + 7 striped) balls on the table. For every combination of ball numbers, each ball can be anywhere on the table, with the exception that they can't overlap (I'll ignore this for now but we'll soon see that it doesn't even matter).
What table dimensions are necessary to capture a small enough change in position to be relevant? Real pool table dimensions are 100x50 inches or 88x44 inches (2:1 length-width).
In order to capture the possible positions between 0 and 1 inch, let's say we represent a 100x50 inch table with a 10,000x5,000 2D space. That's 50 million coordinates. Then you'd need to store EVERY POSSIBLE LOCATION of 16 balls in those 50,000,000 coordinates. Then you'd do the same for 15. Then 14. 13... all the way to 2.
By the way, there are 7.29 x 10^109 possible combinations for 16 balls across 50,000,000 locations (50,000,000 choose 16). Rounding, that's a 7 followed by 109 zeros. 70,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 states in the Q-learning matrix, JUST FOR BALL LOCATIONS. Also, because in each state you also need to keep track of each action that's possible in that state, you'd need to multiply that huge figure by every possible direction+power combination with which you could hit the cue ball.
Even if we enforce the most rigid, grid-locked 2D board possible with a 100x50 2D space (5,000 locations possible) then that leaves 7.119x10^45 possible locations JUST for 16 balls.
In summary: Q-learning is appropriate in environments that can be represented by very few, discrete states and actions. Q-learning does NOT scale well in environments with continuous state-action spaces, because the number of states and actions to keep track of is too big for any computer to realistically keep track of and process.
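The combinatorics quoted above are easy to double-check (Python's `math.comb` computes binomial coefficients exactly):

```python
# Double-checking the state counts quoted in the comment above.
import math

coarse = math.comb(5_000, 16)       # 100x50 grid: 5,000 possible cells
fine = math.comb(50_000_000, 16)    # 10,000x5,000 grid: 50 million cells

print(f"{coarse:.3e}")  # ~7.1e+45, matching the coarse-grid figure above
print(f"{fine:.3e}")    # ~7.3e+109, matching the fine-grid figure above
```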
Love your vids code bullet! Your hard work deserves more recognition keep it up :D
Above all I wanna congratulate you on your video editing and explanation: super clear and precise, yet not overdetailed
this channel is easily the most entertaining one
The end where the AI failed was hilarious
Math is pretty. Love the stuff CB, new vids with commentary are awesome too. Keep up the good work!
Fantastic work man! keep this up! i love the unique content :)
I was thinking about the game "Breakout", or "casse-brique" in French.
It could be fun to watch an AI struggle with this game =)
Try search for Atari DeepMind and you'll see it ;)
Perrty cool video! Also could you make a video about how these AI-s are being made and how can we make AI-s like this?
Edit: Can you maybe program an air hockey AI?
hey, you can watch carrykh if you want to learn more about evolved AI (he makes evolution simulators).
ruclips.net/video/C9tWr1WUTuI/видео.html
Neural networks are really interesting. You can make an AI with several inputs (like eyes or something), and then the numbers gained from the senses can be passed on to actions. If you randomly connect the actions to the senses you can get interesting behaviour, and in between you can have a sort of fake brain, where you multiply the inputs by random numbers.
worth a read! making your own AI is really satisfying.
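That sense-to-action idea can be sketched in a few lines (hypothetical senses and actions; the weights are random and untrained, exactly as the comment describes):

```python
# Sketch of the "fake brain" described above: sensor readings are multiplied
# by random weights to produce action strengths, and the strongest action
# wins. Untrained random weights already give (arbitrary) behaviour; evolution
# would then mutate the weights and keep whatever works.
import random

random.seed(1)

SENSES = ["food_left", "food_right", "wall_ahead"]
ACTIONS = ["turn_left", "turn_right", "move_forward"]

# one random weight for every (sense, action) connection
weights = {(s, a): random.uniform(-1, 1) for s in SENSES for a in ACTIONS}

def act(readings):
    totals = {a: sum(readings[s] * weights[(s, a)] for s in SENSES)
              for a in ACTIONS}
    return max(totals, key=totals.get)   # pick the strongest action

print(act({"food_left": 0.9, "food_right": 0.1, "wall_ahead": 0.0}))
```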
here's a playlist of really beautiful, excellently informative videos on how neural networks work... that's what's behind this :) ruclips.net/video/aircAruvnKk/видео.html
Thanks for the videos, I’ll check them out.
PERRTY?!
jellyberg Actually, I don't think this is a neural network, but genetic mutation code, since a neural network would be unlikely to output the same game each time (and therefore each generation would start differently). Looking at the code seems to support this, as it doesn't look like it's using a neural network of any kind that I know of. But I'm not a professional, so maybe I'm wrong and this is using some strange neural network that I have never seen before.
That was great, I enjoyed it! Keep doing these!
I really loved the ultimate fail at the end! Good work bud, that was awesome!
Could we get some videos on how to start creating your own AI, and also where is the best place to learn it?
Or some tutorials from you on getting started on AI.
Just google "neural network python/java tutorial" or "how to make your own neural network".
There are so many goddamn tutorials out there.
And there are a bunch of 'presets' out there.
Also Coding Train has some nice videos on it. (*ping*)
this is so cool
The shots of all 200 look like different possibilities splitting in the multiverse.
You are awesome I really respect this type of work.
In my opinion this isn't much different from brute forcing, and to solve this problem brute forcing would most likely be more efficient and easier anyway. I think machine learning AI is about making the AI adjust to any changing situation within the limits of the problem it's trying to solve. What you've done is simply brute forcing the shots for a single solo game of pool, except with extra steps, so I don't think it's a proper use of the algorithm for the AI.
I agree and he said himself that this is not the optimal scenario for a genetic algorithm. I'd like to see an implementation of a neural network, so that the GA could be set to work on teaching the ai to react to the current board - that would of course add tons of complexity and possibly change the whole spirit of this little experiment
Yeah, the algorithm he used here is lacking; he gave the AI no info on how it should be aiming. The snake/asteroids AI that he made was given the ability to look in 8 directions, which gave it some awareness of its surroundings and the power to deduce properly. This one is basically brute force: it goes through each combination until it finds one that works.
That's often how genetic algorithms work, and why I never use those. They are bad at adapting and generalizing.
That is literally how this type of AI works. Heck, that's how evolution works.
Just brute force your way and keep what works. That's why there are several generations, each one improving on the last.
You didn't understand. We were made by evolution like the AI in this video, yes, but we can live our own unique lives instead of us all living the exact same life every time. This AI can only live one exact same life and will not work if anything in the environment is changed. A well done genetic algorithm AI would be able to do that.
Tetris would be cool
nice job with this man.. i'm really enjoying ai problems lately... i guess i will get a grasp of them soon and start coding myself... really really nice to see people working and experimenting... cheers
wow, this popped up in my feed and I thought "great, new CB video"... Well, it was nice to see the progression from this to the CB of today!
Oh boy the fail attempt was beautiful
“Geoffrey? Break out Lucille.”
*Geoffrey barges in with a huge computer and asks the guy to challenge him in 8-ball instead*
Love this guy... a version of this with Mr. Cocky TV-head doing the audio narration would probably be one of the funniest videos.
😂 loved the ending. Another fun and very interesting lesson!
Perhaps it is my imagination, but in your final good run I thought I could see many missed opportunities to pocket multiple balls with one shot. It might be very entertaining to start with a random but legal break, then force-compute from that starting point a perfect Straight Pool run with a maximum number of balls pocketed by each shot, with black going in last in its group of balls. Should require a great deal of computation, but should look amazing.
I saw someone else comment about tetris and I think tetris would be cool
Someone tried Tetris once. Very interesting results when the AI actually learned to rage quit by pausing the game so that it wouldn't get a game over.
watch?v=xOCurBYI_gY
Oh yeah, I think I saw that!
The Laughing Rabbit lmao srsly? XDDD
Here's the video: ruclips.net/video/xOCurBYI_gY/видео.html
That last attempt killed itself out of shame for its failures lol
Your pool table has more friction than the local bar pool table with a pint of beer spilled in it.
LOL
What was the AI even thinking when it did that final shot at the end?
probably "all my descendants have basically the same score, so it doesn't matter which one survives", since the fitness function was basically at 99.9%. Or maybe he programmed it to also try to get the white ball near a hole, and it decided it would be more beneficial to put it into the hole instead of the black one.
Similar to the Snake it realized its whole existence is a fabrication designed to achieve one goal. It chooses to make us aware of it through a final 'Screw you!' on the last shot
It didn't actually have a goal; it was just a random arrangement of vectors, and the vector just happened to aim into a hole.
Ahh. So there is no goal orientation at all across development? Purely environmental management or something
Black ball on black hole: it probably didn't realise the black was still in play, so it made the only shot it could, dropping the only ball left on the table, the white one.
The problem of the algorithm memorising instead of learning could be fixed if another player began the game.
That would also give you more control over the outcome of the game.
Nice vid! I would say try training based purely on the state of the table (ball positions, number of balls remaining, colors/stripes) as input and have your AI determine the best shot vector from that state. Start small with fewer balls and then increase from there. Penalize sunk balls of the wrong color, etc. Throw in some actual pool rules for your fitness function.
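A fitness function along the lines the comment above suggests could be sketched like this. All the weights and rule bonuses here are made up for illustration; only the general idea (reward sinking your own colour, penalise fouls, use actual pool rules) comes from the comment.

```python
# Hypothetical fitness function for a pool AI, a sketch only.
# The specific point values are invented; a real version would be
# tuned against the rules of the game being simulated.

def fitness(sunk_own, sunk_opponent, scratched, balls_remaining):
    """Score one simulated shot sequence under simple pool-like rules."""
    score = 10 * sunk_own          # reward pocketing your own colour
    score -= 15 * sunk_opponent    # penalise sinking the wrong colour
    if scratched:                  # sinking the cue ball is a foul
        score -= 25
    if balls_remaining == 0:       # cleared the table: big bonus
        score += 100
    return score
```

The genetic algorithm would then breed whichever shot sequences this function rates highest.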
We love you code bullet!
Awesome video! Maybe you could do minesweeper? It would be great to see some strategy in my opinion
minesweeper would be incredibly difficult or incredibly boring
The basic strategies of minesweeper are already solid; anything else would just be guesswork
The bot knows only one speed to hit the ball: *MAX*
Would love to see a remake of this in today code bullet language
I like that the final product hit the black ball last.
Thanks for accidentally giving us a green screen
It's a bit obscure but I would love to see an AI learn to play CoreWars. It's a game where you code a "warrior" to destroy another "warrior" by terminating its code. The code for the warrior programs is all in assembly so it wouldn't be that impossible to do. It's a really interesting game and the idea of an AI writing code would be pretty cool.
Generic Narrator an ai writing code is literally how humans get themselves eliminated.
I'm imagining the AI being a gladiator that has to do all kinds of challenges at the whims of Code Bullet at the arena for audience entertainment.
Black ball: 1mm away from hole
White ball: ima head out
10:23
I actually think the AI does this because it can't detect the black ball, as it blends in with the pocket a bit.
So it sees no balls and doesn't know what to do next.
Potting the white ball sounds like a logical thing for an AI to try at that point.
I'm glad someone had some form of explanation
So the solution would be to change the shade of either the black ball or the pockets to be slightly lighter than the other.
Good idea, but the ai doesn’t see the color of anything on the screen. It just sees the position of the balls.
I kinda want to get started on AI, neural networks and all that stuff. However, I don't really know where I should start. Could you help me with this?
Awesome video, even with MovieMaker x) Keep going!
Shinrod Dellore Watch 3Blue1Brown's videos on neural networks first, then if u want to see how the coding is done, head over to The Coding Train
the standard MOOCs like Coursera and edX have Machine Learning and similar courses
ATP I already knew about 3B1B videos and my main problem was how to implement it in code.
Since your comment, I've watched a lot of TheCodingTrain's videos, which gave me a better understanding of how all of this works.
Thanks a lot !
This definitely needs a remake. Especially how you now test your AI against real, unsuspecting players in online games.
You're so cool, dude! Great job! 👍
I was expecting a neural network, but I got an AI that shoots in a random direction and force and takes the best values per step. Place the balls in a different place and it doesn't work anymore. A real AI has one set of values it can use regardless of what situation it's in like the Snake AI you made and many other AIs you made. Although you did of course admit that at the beginning. I like how you bought a microphone and started voice acting because that really improved your videos.
Well, that is true, but this is Pool, a game where randomness doesn't apply, so the fact that it will suck when the balls are in different places isn't an issue. Of course, I can see where you're coming from. But even though this method may not be the best choice for educational purposes, I think it's actually a better choice for Pool itself. A neural network would need a huge number of input neurons just to read the positions of the balls correctly, as well as a huge number of output neurons just to shoot the ball in any direction and at various strengths. But yeah, you do have a point, and I do agree with it.
EDIT: Also I realized my comment may be a bit misleading, since I said that this wasn't the best choice for educational purposes. What I meant was that neural network would be a better choice, but I do think that this is educational, especially on genetic algorithms.
@@KineticManiac If someone else shoots a ball first, the balls will be in a different place every match. Code Bullet explained in a different video, which I watched later, that an AI does not have to be a neural network. Games have AI enemies, but they are hardcoded AI like this pool game or his Pacman game or his AI code example with the dots moving to a target, which doesn't work with randomized levels.
@@NaudVanDalen True, this AI will only work for single-player Pool. One thing though, this is not hard-coded AI, the type that is common in games. Such AIs still do gather information about the environment and make a dynamic decision. However their decision making process is simply a code written by the programmers, they don't learn. This AI on the other hand does not gather any information about the environment, the decision making process is static. But this static decision making is not coded by Code Bullet by hand. Instead he developed a program that develops a static decision making process via genetic algorithms. So it is actually learning, even though the decision making itself is static. Not a huge deal, but I just wanted to explain the difference.
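The kind of genetic algorithm this thread describes (a fixed list of shot vectors, mutated each generation, keeping whatever scores best) can be sketched roughly like this. This is not Code Bullet's actual code; `simulate` stands in for the real physics simulation, and the population size, mutation rate, and perturbation sizes are all guesses for illustration.

```python
import random

# Sketch of a genetic algorithm over a static shot sequence. Each "brain"
# is just a list of (angle, power) pairs; no information about the table
# is ever read, which is exactly the limitation discussed above.

def mutate(shots, rate=0.1):
    """Randomly perturb some shots to explore nearby sequences."""
    return [(a + random.gauss(0, 10), p + random.gauss(0, 5))
            if random.random() < rate else (a, p)
            for a, p in shots]

def evolve(simulate, n_shots=10, pop=200, gens=50):
    """Keep the best-scoring sequence found so far; breed mutants of it."""
    best = [(random.uniform(0, 360), random.uniform(1, 100))
            for _ in range(n_shots)]
    best_score = simulate(best)
    for _ in range(gens):
        for _ in range(pop):
            child = mutate(best)
            score = simulate(child)
            if score > best_score:      # keep whatever works
                best, best_score = child, score
    return best, best_score
```

Because the evolved sequence is just numbers with no table input, it only "plays" the one starting layout it was trained on, which is the point being made above.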
Anyone else but me read the title as “AL learns how to play pool”???
Lmao
It's doing a lot better than I could do for certain!
I love the amount of stuff happening at the same time so suddenly 😂😂😂
8:41 xD lol
I didn't know you used MovieMaker
I actually laughed aloud when i saw this..
MovieMaker wasn't used for the learning.
Dan Kelly no shit
How..
I actually liked movie maker. Simple, but works pretty good compared to most free software. Most other free programs don't even have a simple fade in/out option. XD
Wish I could get it again. Still looking for a decent simple program that doesn't constantly freeze or lag on my laptop.
I'd love to see pool with Q learning instead.
Oh god I went back too far!! I need that BIG CB voice 😆
Quantum pool: Take every possible shot simultaneously.
Static scenario from first to last shot; it's not pool. Just a slight difference in one or a few object balls at any time and the whole thing falls apart. Not really an AI-ish thing, rather brute force based on a given static starting position.
anton ZIZIC It's a genetic algorithm. Maybe he programmed the algorithm a bit poorly, but even then, in general, with genetic algorithms the AI adapts or "learns" in such a way that it only learns what must be learned to accomplish its goal; given its limitations, it minimizes the time and attempts needed to become efficient at the one thing by only learning what it knows to be useful. In other words, he is teaching the AI like a human would teach itself, not like an AI would teach itself. It doesn't require that the AI be good to eventually learn how to play a good game, whereas with any other learning algorithm the AI could be an immediate monster, but it would learn and do unnecessary things in the process. It's just circumstantial, and it depends on the goal.
I am absolutely in love with your work, is there anywhere we can donate to support you??
Thank you so much
I just created a patreon page www.patreon.com/CodeBullet
Wow this is old but still super amazing
Just wanted to tell you that you made great videos back then, even though you didn't have any audio recording equipment!
Try dots and boxes with 2 competing AI and variable grid size. Or you could make an AI which could just destroy any human.
thanks a lot for the idea :D
The game is broken if played perfectly and can be played perfectly by humans. But one of the players (first or second, I can't remember) will always win if no mistake is made so I think this would just be boring to watch as the result isn't as interesting as one might hope.
Isn't that just called a greedy algorithm?
I think it should be called so.
Each gen is greedy. The whole process is dynamic.
It's called hill climbing. Nothing to do with AI or learning.
@@wedding_photography That wasn't a good joke
A greedy algorithm basically is an algorithm that has a bunch of finite steps, and some choice to make at the end of each step. At a high level, yes, this is a greedy algorithm, where it is trying to maximize fitness and each step it gets to choose what species reproduce to make the next gen. But greedy algorithms apply to a lot more, and are pretty vaguely defined in that way. A lot (and i mean a LOT) of very famous, and or very frequently used algorithms can be argued greedy.
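Since hill climbing came up: here is a minimal sketch of that pattern on a toy one-dimensional problem, just to make the "keep the current best, only accept improvements" idea concrete. The objective function and step size are invented for the example.

```python
import random

# Minimal hill climbing, the optimisation pattern being discussed:
# hold one current-best candidate and only accept a random neighbour
# when it scores strictly higher (the "greedy accept" step).

def hill_climb(score, start, steps=1000, step_size=0.1):
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        if score(candidate) > score(best):   # greedy accept
            best = candidate
    return best

# Example objective: f(x) = -(x - 3)^2, whose single peak is at x = 3.
```

The pool video's algorithm is essentially this, but over a whole vector of shot angles and powers instead of a single number, which is why it converges on one fixed "perfect game".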
I literally laughed at the end great video keep it going
The explosion of color is cool.
This is more of a case of exploring a variable space for a single solution with high reward, not really learning how to play pool at all. Of course, with a random pool table this would have no idea what to do - which would be the goal that I would refer to as "learning to play pool". Not an uninteresting video - I just wouldn't call it "AI learns to play pool"
Andrew Christensen It's a genetic algorithm. I think it does count as learning in a broad sense. Just not what people usually mean when they use the term "AI learning", which is a very technical term and refers to a specific type of learning.
Also, he did admit that the algorithm is a bad way to solve the problem of an AI playing pool. The reason he did it that way despite that, I'm fairly certain, is that this is the focus of his videos and of his research anyway.
I kind of agree with this. The AI should take in the game state and decide on the direction to hit the ball. This is a random search for the ideal set of moves. Still cool though.
@@paulburger9904 It's not even a random search for the ideal set of moves. It's a search for moves that are just about good enough.
@Andrew Christensen Credit to the uploader. I do agree with you. I think if there were a second player (human or random shots) alternating with the learned agent, it would make it more interesting and a real test for the learned AI agent.
Gen 1337: AI has captured the Internet and robots now rule the world
Maxim777 should be gen 420
Plot twist: they've captured all the pool tables in the world and are holding a minor populated island hostage for more pool tables, balls, and processing power.
The AI repeatedly trying to find the best shot over and over again is satisfying
That was beautiful!
Can you do 2 AI's playing Pong against each other?
That's been done a lot, and it becomes a stalemate very quickly; the AIs basically keep their paddles in line with the ball at all times, and no points are ever scored again!
Fleecemaster that sucks
Yeah, pretty much, that's why it's often more interesting to do more complex games. To be fair for those who are just learning it could be a good testing bed for your first AIs.
It'd be more interesting to make the paddles very slow so that they have to think ahead.
Allow the AI to tilt their paddles for more competitive gameplay?
It's probably too hard but could you try to make a Stick Fight: The Game AI? Idk... Do what you think would be cool :)
That's just silly, you can't play pool without a beer and computers can't drink beer.
Would love to see pool pro recreate this exact game
the music at 7:00 sounds very Vulfpeck-y, is it some random royalty-free music?
I was thinking the same thing. It's like a bastardized Outro.
Kevin Greener yeah Outro x Hero Town
It's called Wolf Kisses by Otis McDonald! I could only find it on YouTube so far...
ruclips.net/video/zH0NHlrSqgs/видео.html
zqualala haha no way! That's literally what vulfpeck translates to
Thanks guys! The music (especially the first two chords) sounded so familiar and I couldn't put my finger on it.
5:56 cue clips into orange
I really enjoy these videos. Can u put up a tutorial on how you build the framework for these AI algorithms, for beginners?
I enjoy all your videos. I am sad that there was no hilarious banter and animations. But, oh well. I believe the funny stuff I love so much will appear in future videos. I can't code at all in these languages. I only know BASIC.
I don't think this is called AI, but rather brute force.
Which is exactly what machine learning is, and it’s what we call "AI" today. But you’re 100% correct, it’s brute force. It works by using a lot of energy to process a lot of data while first telling the machine what the result should look like.
@@MrBanarium Machine Learning can do more than just brute force. Plus AI is more things than just machine learning; games tend to have AI that are pre-programmed.
@@Liggliluff Of course, AI isn't just ML, never said otherwise. The initial comment was saying that it *wasn't* AI, so I pointed out that it was ML, which is a type of AI.
As for ML, it's not about "doing more" than brute force: ML *is*, in a sense, brute force. Of course, there's a whole lot more to it, but it wouldn't even be possible without a huge amount of computing time and energy to let the algorithm learn until it can achieve the desired result. It is, by its very design, brute-forcing its way through a dataset and toward a mathematical model solving that dataset.
Yes, this is a very brute force way of going at things.
Since we are not really considering the current state of the table, just that doing X followed by Y and then Z gives us a desirable result.
This isn't "artificial intelligence" but rather a randomized brute-force approach, which is likely slower at finding the optimal solution than a non-randomized brute-force approach (unless we actively note down which randomized versions we have tested already, but that will chew up data storage rather quickly, not to mention that generating random numbers is likely more expensive than just incrementing a value).
Interestingly, it seems to prefer "slamming" the ball at high speed; I actually do the same, since it is (to me) more predictable.
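The two brute-force styles being compared above (a systematic grid sweep versus random sampling) can be sketched like this. `score` here is a made-up stand-in for "simulate the shot and rate the outcome"; the parameter ranges are illustrative only.

```python
import itertools
import random

# Toy comparison of systematic vs randomized brute force over a
# single (angle, power) shot. Both just evaluate candidates and keep
# the best; they differ only in how candidates are generated.

def grid_search(score, angles, powers):
    """Systematic sweep: try every (angle, power) combination."""
    return max(itertools.product(angles, powers), key=lambda s: score(*s))

def random_search(score, tries=1000):
    """Randomized sweep: sample shots uniformly, keep the best seen."""
    best, best_val = None, float("-inf")
    for _ in range(tries):
        shot = (random.uniform(0, 360), random.uniform(1, 100))
        val = score(*shot)
        if val > best_val:
            best, best_val = shot, val
    return best
```

As the comment notes, the grid sweep never retests a candidate, while the random version can waste evaluations on near-duplicates; the trade-off is that random search handles continuous spaces without picking a step size.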
AI learns to play Bubble Trouble
Childhood game
*LEGEND*
200 at 1 time is visually satisfying
Looking at your older videos, you have improved the entertainment values 10 fold ever since you started speaking in them.
Only 8k subscribers? With this quality content you deserve 100 times that!
54.6 subscribers????
bLiXy code bullet
That moment when you have subscribers but no videos 8k x 100 = 54.6
bLiXy 8k (now 9k) x 100 = 800k
Uh no, 9k x 💯 = 35¥
this video needs to be remade; I would love to see a remastered version
At least we don’t have to worry about code bullet ever creating humanity destroying robots