Imagine going to sleep while the AI is training and then waking up to find the robots have already taken over the world because of your code
Good content, I subbed! I loved how you decided to go the deterministic route but then still sprinkled a neural network in there!
I think one way to make the AI more efficient (more tetrises) would be to score something like points per 20 moves.
Nice explanation of your project.
And your snake AI project is awesome too.
The music is absolute fire!
This is great, thanks for sharing. I wondered if you could add an increasing speed to the gameplay as that is a key part of Tetris. I think Tetris is a very manageable game at the beginning but the skill and decision-making becomes truly important in the later stages of the game when bricks are falling crazy fast.
I discovered your channel by chance ... Really like your projects!! Thanks.
Two questions: 1. At some point did you notice the AI doing T-spins? 2. I know it would be a lot slower, but what if the AI could hold a piece?
It can't perform complex moves like T-spins, so that's a limitation, and I'm not sure what holding a piece is
@@GreerViau In some versions of Tetris you can save a piece for later; it's usually put in a box on the left with the word "Hold" on top. The player can then swap the held piece with the current one, once per turn.
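For anyone unfamiliar with the mechanic, a minimal sketch of how "hold" usually works; the class and method names are my own invention, not from any particular Tetris implementation:

```python
# Hypothetical sketch of the "hold" mechanic described above.
class HoldBox:
    def __init__(self):
        self.held = None
        self.used_this_turn = False

    def swap(self, current_piece, next_piece):
        """Swap the current piece with the held one (allowed once per turn).
        Returns the piece that should now be in play."""
        if self.used_this_turn:
            return current_piece  # already swapped this turn, ignore
        self.used_this_turn = True
        if self.held is None:
            # Nothing held yet: stash the current piece and pull the next one.
            self.held = current_piece
            return next_piece
        self.held, piece = current_piece, self.held
        return piece

    def new_turn(self):
        self.used_this_turn = False
```

So `HoldBox().swap("I", "T")` stashes the I piece and puts the T piece in play; a second `swap` in the same turn is a no-op until `new_turn()` is called.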
I programmed this kind of AI when I was in high school, with my brain making the score function, and oh my god it wasn't that good!!!
Ok. At least you tried
I'm in my senior year of Bachelor's in Computer Engineering and I have never tried AI programming. Salute to you bro!
The way the AI lost at the end would be nice to see slowed down. It made mistakes that would be obvious for human players to avoid.
If you freeze it right at 9:05 you can see the failure. My guess is the algorithm became dependent on the straight piece alone to eliminate rows, because a tetris (4 rows at a time) was valued 8 times more than a single row. Eventually there was a long enough stretch where no straight pieces appeared, so it failed.
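A toy version of that scoring guess, assuming a reward table where a tetris pays 8x a single line; these exact weights are my guess based on the comment, not the video's actual values:

```python
# Hypothetical line-clear rewards matching the "8x" guess above;
# the actual weights used in the video are unknown to me.
LINE_REWARD = {0: 0, 1: 1, 2: 2, 3: 4, 4: 8}

def clear_reward(rows_cleared):
    return LINE_REWARD[rows_cleared]

# Under these weights, two doubles (4 rows total) pay only 4 points while a
# single tetris pays 8, so a scorer can learn to hoard rows and wait for the
# straight piece, which is exactly the failure mode being guessed at here.
```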
Hey your videos are great.
how did you learn AI?!
And how can I learn, and where do I start?
If you want to get into academics watch some online lectures like MIT and Stanford. Otherwise just come up with a project idea and go for it, learn as you go
@@GreerViau what did you do?
Assuming you don't already know programming, learn C or Python, two great places to start. Then if you want to hop straight into AI, watch YouTube tutorials on AI projects and follow along. At the same time, slowly read through this roadmap and compare your learning progress against it: i.am.ai/roadmap/#note. As you get better you can start doing your own projects with more complex solutions! Hope this helped 1 year later haha
Hey Greer, I just found your channel from the almighty youtube algorithm. What are the odds it suggests a video from an old friend? Really good stuff, keep it up!
Yo what are the odds man. Thanks, hope you're doing well
The odds might be pretty high, I watch a lot of youtube lol, I've been good though.
03:25 I always thought it was called co-linearity; didn't know they invented a new word, collinearity. Looked it up and yep, looks legit. Never thought I'd learn a new thing from YouTube.
"Never thought I'd learn a new thing from youtube" if you really think so, I bet you are either an old man discovering YouTube or your ego is bigger than this galaxy.
Off topic: I had to pause the 10x footage several times to make sure there was just a single piece falling
I wonder how the system would change if it tried to keep the blocks around some adjustable height.
thanks again! love your content.
I wonder how well this would do against hatetris
I suspect it won't do so well, since it can only look ahead 2 pieces, and I think HATETRIS requires quite a lot of forward planning for decent scores.
Does it really master the game? You are giving it all possible next states and automatically making the game switch to the selected state. That is a huge help. I would like to see an AI that is only given the current state and chooses 1 of 4 moves (left/right/down/rotate).
It doesn't automatically switch to the next state. It might seem that way in the sped-up clips, but when it finds the best move to make it returns a move list that will get it to that position, and that's how it makes moves. The system also has a disadvantage compared to human players in that it can't perform complex moves like T-spins or tucks. I'm also not sure what you mean by selecting 1 of 4 moves.
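A minimal sketch of what such a move list could look like; the function and input names here are mine, not the video's:

```python
# Hedged sketch of the "move list" idea described above: once a target
# column and rotation are chosen for a piece, emit the button inputs
# needed to get it there.
def move_list(current_col, target_col, target_rotation):
    moves = ["rotate"] * target_rotation
    # Shift toward the target column one input at a time.
    step = "right" if target_col > current_col else "left"
    moves += [step] * abs(target_col - current_col)
    moves.append("drop")
    return moves
```

The game then consumes these inputs one per tick, which is why the piece visibly travels to its position rather than teleporting.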
@@GreerViau Does the AI create this move list? Or does it only choose the desired state? By 4 moves I meant the following: the input of the AI is the current state, and the output is a move to make in the game: left, right, down, or rotate.
@@alexrodichkin7172 Well, like I mentioned in the video, the AI is responsible for selecting the best move. A move list is just a set of inputs that will get the piece to the chosen position. Also, like I mentioned at the beginning of the video, I first tried what you are suggesting, taking the game state and outputting left/right/up/down commands, but that didn't work. The reason is that for the AI to learn anything it would have to randomly stumble into reward, which in Tetris is really hard, almost impossible.
@@GreerViau That is why it is so interesting. We humans can master it that way. Your AI mastered a simplified version of Tetris, given a lot of information a Tetris player has to figure out by itself. Please tell me, how far did your AI get without that help? I am asking because I failed to teach it that way some time ago with a GA.
@@alexrodichkin7172 Using a genetic algorithm it wasn't able to learn anything with the "human" way of playing. Genetic algorithms just aren't powerful enough to optimize neural networks for problems of that complexity, not to mention very slow: training with this style of gameplay took hours to complete 1 generation. I did first attempt a "human" level of play using a Q-learning algorithm in Python, and that may have worked, but reinforcement learning still takes a very long time; I left it running for 4 days and it had barely learned anything. Again, the problem with Tetris is that if the AI is going to learn how to play with no knowledge of the game, it needs to randomly perform an action that merits reward, and in Tetris that is very difficult to do. Not to mention that even if the AI performs an action that merits reward, the same action may not merit reward in a different game state. When we play, we already understand the objective of the game, so we have an advantage.
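A toy illustration of the sparse-reward problem described here; this is my own simplification, nothing to do with the actual project code. Even in a degenerate "Tetris" where 1-wide blocks fall into a single 10-wide row, a purely random policy needs many drops before it ever fills the row and sees any reward at all:

```python
import random

# Toy model of reward sparsity: drop 1-wide blocks into random columns of
# a single 10-wide row and count drops until the row fills and pays out.
# This is the coupon-collector problem, so on average it takes roughly
# 29 drops for width 10, and real Tetris is vastly sparser than this.
def drops_until_clear(width=10, seed=0):
    rng = random.Random(seed)
    filled = set()
    drops = 0
    while len(filled) < width:
        filled.add(rng.randrange(width))
        drops += 1
    return drops
```

Every drop before the clear yields zero reward, so a learner acting randomly gets almost no signal to improve on, which is the failure mode described in the reply above.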
amazing
Would have been nice to see the AI play at normal speed...
Do you think you could do a kind of condensing, where you train another, faster network to predict what the large, slow network would do?
How is the 'pre-determined move' approach different from neural networks? I mean, as soon as he said 'ditch NN' it was clear the NN would be back almost immediately.
Well, you don't need the neural network to make this work; calculating all of the possible moves is just a brute-force approach. The neural net comes in when you need to score each move to decide the best one to make. Technically you could make a scoring function without a neural net, which I did at first in the video, but using a neural net was a better approach for the reasons I explained in the video.
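As a concrete example of the kind of hand-written scoring function mentioned here: a common heuristic rates each candidate placement from a few board features and picks the placement with the highest score. The features and weights below are illustrative guesses, not the video's actual function:

```python
# Hand-tuned board scorer of the kind a neural net later replaces.
# Weights are made-up but have the usual signs: reward cleared lines,
# penalize tall stacks, holes, and uneven surfaces.
def score_board(column_heights, holes, lines_cleared):
    aggregate_height = sum(column_heights)
    # Bumpiness: total height difference between adjacent columns.
    bumpiness = sum(abs(a - b) for a, b in zip(column_heights, column_heights[1:]))
    return (0.76 * lines_cleared
            - 0.51 * aggregate_height
            - 0.36 * holes
            - 0.18 * bumpiness)
```

The brute-force search then simply evaluates `score_board` on the board that would result from every reachable placement and plays the argmax. The evolved neural net slots into exactly this role, replacing the hand-picked weights.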
Did anybody notice that this video was made on April Fools' Day..?
Please make a coding video so that we can also build the clone along with you, sir. Thank you
How do you visualize the neural network at the bottom left?
The code is all in the repo
What programming language did you use?
Processing
@@GreerViau ok, thank you
This is awesome!
After you added the next piece to the equation, it really slowed down the calculation. How is it possible that after you woke up, the calculation was not slow anymore? From what I understand, the trained network still has to evaluate all move combinations, so I don't get how this is possible.
The calculations get faster when there are fewer players alive in the population. The population starts out at around 200, so it's very slow, but as players lose, there are fewer players doing calculations, so each one can do them faster.
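A back-of-the-envelope sketch of why the pauses shrink. All numbers here are assumptions (a 10-wide board, up to 4 rotations per piece, 2-piece lookahead), not measurements from the video:

```python
# Rough cost model, not the video's code: each piece has about
# width * rotations candidate placements, and looking ahead one extra
# piece squares that number. Total work per frame scales with how many
# players in the population are still alive.
def placements(width=10, rotations=4):
    return width * rotations

def evaluations_per_frame(alive_players, depth=2):
    return alive_players * placements() ** depth
```

Under these assumptions, 200 live players means 320,000 board evaluations per piece, while a single survivor needs only 1,600, which is consistent with the run speeding up as the population dies off.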
Hi bro, I need help regarding this Tetris game. Can you please help me?
CAN it even make a tetris reliably? I mean tetris tetris (4 rows splashy splashy by one long shapy shapy)
ah yes, splashy splash long shapy shapy
Indeed, a tetris-tetris
Ah, never too good to use a greedy algorithm
You should apply to Tesla for their AI team.
12,707 lines, not 13,000 plus.
Nice
3 months... Are you Code Bullet lol
That's great, but the AI cannot play external games
Can you make a bot that is proficient at trading Bitcoin, learning to ride the constant ups and downs of the crypto market?
There are people who got a higher score
I would be interested in seeing the game get "harder" by getting faster over time and reducing the amount of time the network is allowed to calculate, simulating things like the kill screen in original Tetris.
I don't think the network works in real time. It just evaluates every single possible move. It takes longer to make an evaluation two moves deep, but that doesn't make it harder for the network; it just takes longer to train it.
Even with next-piece knowledge, the AI still makes its evaluations as soon as the piece is in play; the only reason it seems to pause is that there are 200 players in the population. So regardless of how fast the game gets, it will still perform the same. I didn't mention it in the video, but in those sped-up clips the system is already playing 10x faster than a person, and I had to speed up those clips in post, so when it says 10x speed it is really playing 100x faster than a person.