Yeah honestly I got bored playing the game so I didn't want to make more training data. It's fun but I was playing it like it was my job 😂. I should do another game with more data and some improvements.
Awesome video! Found it interesting that a ResNet could be somewhat effective at this task. Of course, for optimal results some kind of recurrent network will need to be used to encode contextual data from frame to frame. Also, your approach of simplifying the images makes sense, but it would be nice to see more data retained. The Canny edge detection algorithm should retain more of the edge information.
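For anyone curious what "retaining more edge information" means in practice, here is a minimal gradient-magnitude edge detector in plain NumPy. It is a simplified stand-in for Canny, which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this (in OpenCV that's `cv2.Canny`); the toy image and threshold are illustrative only.

```python
import numpy as np

def sobel_edges(gray, thresh=0.25):
    """Gradient-magnitude edge map: a simplified stand-in for Canny,
    which adds smoothing, non-maximum suppression, and hysteresis."""
    gray = gray.astype(float)
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()  # normalize so the threshold is relative
    return (mag > thresh).astype(np.uint8)

# A vertical step edge should light up along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

Thresholded Sobel keeps only strong edges; Canny's extra steps are what preserve thin, connected contours on noisier game frames.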
Wait, now it can qualify... Can you make it record its own games and learn from its own positioning? Then make it able to run all by itself, like 1, 2... 10 runs at the same time? If you left it to learn overnight, maybe it would show how much is possible with only these 3 inputs.
How does this account for the different variations within each mode? Like on Hit Parade, the 2nd section can be the revolving doors that you need to push through, or the spinners that were shown at 9:39 where you can do a combination of avoiding and jumping over the bars.
That gets tricky with a live game. With a game you can pause during learning you could use reinforcement learning. With something like we have here you could filter out my bad games basically only training when I'm performing at my best. This may improve things but obviously still limited by my skill.
Any plans to add reinforcement learning to make it play Fall Guys, like AlphaZero, learning from the ground up? Would be fascinating to see something like that.
I don't think detecting edges is necessary, since the idea of a DNN is to just feed (somewhat) raw data to it and let it find a function that maps input to output... By using the edge detector you are skipping a learning step the NN would have to overcome, which is finding details in the image that may be useful in deciding whether to move left or right...
Why didn't you use the colour and texture information to do masking? Basically a preprocessing step that transforms your screenshots into several simplified layers that contain the edges, such that the AI can react specifically to pendulums and other objects while following the inside of the race track.
Honestly, because the color and texture differences would need more data, and I was sick of playing. It's a really good idea though and would probably perform better.
This is the type of AI I'm interested in. Obvious bots are always annoying but when the bot can blend in like this, it's a good one. The ones with obvious cheats (fly/big speed/wallbreak) in games ruin it for everyone else in the lobby.
Going to prevent cheating in a crappy online game, what a hero... please, spare us the needless comments. Other than that, the video could have been interesting to watch.
Huh, so no need for camera control? Just 3 keys and hold forward, and edge detect for image processing, that's super smart and efficient, I wouldn't guess you could build an efficient player with just that.
IMO, if you are using a pretrained backbone, stripping the incoming frames may not really be helpful at all; it's likely very OOD compared to the data it was pretrained on.
Best would be to use a streaming/capture device linked to another computer, or at least to another program, and train the AI to visually recognize different elements from the stream and make decisions from it, then send inputs back to the computer/game.
If it got turned around, would it know that it isn't going the right way and turn back forward? If so, would you have to teach it to only turn left or right when it was reversed to get back on track, or would it see that it is more on the correct path as it turns and continue in that direction until it's moving forward?
If you fed it some training data showcasing that scenario it could learn. So you would have to have played games where you got turned around and corrected the path.
Nice work! I love this type of video where someone uses AI algorithms to turn a dumb computer into a super clever computer. I hope you keep uploading this type of video ♥️
I got some others in the works already.
@@ClarityCoders Waiting...
gerge
does this work
It's so smart that it tried to glitch thorough the walls
Cheaters gotta cheat.
Your picture fits nicely :p
thorough
It has no idea how to do that since the training data is based on OP's games and he did not glitch through walls (I am assuming)... AIs that learn weird behaviors like that are usually based on evolutionary algorithms - you must have the simulation environment to do a "selective breeding" of AIs that perform well...
I am doctorrr Tony Tony Chopper
Came from r/python and I just have to say you're very underrated. You have earned a sub.
Awesome so glad you watched! The sub means a lot thank you.
Wow, he got recommended to me!
Agreed
You'd probably gain a lot by giving the model some temporal context. Instead of passing in only the current frame, you could stack the current and previous frames depth-wise for an input shape of (224, 224, 2).
This extra information would give vital information about the direction and velocity of everything in the scene, and wouldn't require any significant changes to the model.
would we pass it through a lstm layer?
@@YashSingh-rf7nk It wouldn't be necessary, by using two frames you're already adding velocity information. An LSTM would help, but adds to the complexity of training.
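A minimal sketch of the depth-wise stacking idea from this thread, in NumPy. The frames here are placeholder arrays standing in for preprocessed screenshots, and the 224x224 size just matches the comment above; the only model change needed would be accepting a 2-channel input (e.g. swapping the first conv layer of a ResNet).

```python
import numpy as np

# Two consecutive grayscale frames from the game (placeholder data;
# in practice these would be preprocessed screen captures).
prev_frame = np.zeros((224, 224), dtype=np.float32)
curr_frame = np.ones((224, 224), dtype=np.float32)

# Stack depth-wise: the channel axis now carries temporal context,
# so the network can infer direction and speed of moving objects.
stacked = np.stack([prev_frame, curr_frame], axis=-1)

assert stacked.shape == (224, 224, 2)
```

Stacking 4-5 frames instead of 2 (shape `(224, 224, 5)`) would give acceleration cues as well, at little extra cost.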
When people make an AI to beat a game, it's no longer cheating; it's art.
Until cheaters start using the AI together with cheats and ruin the game
Tell that to the TF2 community, I'm sure they'll agree
@@Emeraldym Personally, I find it unlikely that someone capable of coding an AI, who took the time to do it, would waste their time using it to get an unfair advantage in a game. Most cheaters use tools they haven't made and barely understand how they work; real programmers don't waste their time trying to cheat in video games. I made my own RuneScape bot once to have it mine, forge, and sell automatically while I was gone, and I don't consider this cheating, for multiple reasons. One of them is that it gets boring quickly, so I stopped using it soon after. Another is that it's almost more work to make the program than to do it yourself. And the program didn't have any advantage over other players; it played just like I did, but automatically. Other than that, the other programs I made were for single-player games, because when you CAN code stuff like that yourself, you don't feel the need to use it to get an advantage in games which are just for fun.
@@JasonGrace69 They use actual AI to cheat in Team Fortress? IMPRESSIVE. What advantage does it have over traditional cheating methods like aimbots?
@@Emeraldym yeah until then.
This is when your mom says to put the game on pause but you can't, because it's an online game and you're turning on the AI.
Haha bathroom break.
Oh, why did you have to remind me that GPU training used to be free on Colab?
A lot of fun to be had there and time wasted 😂
Is it no longer free? I know there's a pro version as well.
What happened? I can still access GPUs and TPUs for free here...
@@israelRaizer Complaining about the free usage limits, I assume.
@@something4922 but the guy said it "was" free, as if you had to pay to use it now
"The plan is simple" Ludwig's in my head
Alright boys so
Wow this is really cool!
I'm surprised it doesn't have more views
I'm happy it got this many haha... Thanks for watching.
Plot twist: He's using Fast AI to reply to all the comments on this video.
Awesome! Love how simple it seems to train (just record the screen), and yet it performs "ok"; I didn't know it was "that easy" to train an ML model. I feel like you could supervise 10 of these AIs and help them when they struggle, but you could never play 10 games at the same time. Very nice for job automation or whatever.
Awesome video! Let's see you get a crown in another video!
Challenge accepted
@@ClarityCoders challenge dropped apparently
Interesting. I think just using the past 4-5 images as input would improve it a lot, as it would also have information about how objects are moving. Keep it up!
I agree 100%. Thank you for watching and commenting.
Woah, I’m surprised you got any results with such unstructured data. Neat!
Stupid project but cool haha...
This is one of the most entertaining as well as educational videos I've seen in a while, and now I can't wait to check out all your videos. Nice job, I'm sure you're going to blow up one day with content as good as this 👍
Thank you that's very nice of you to say. I'm just happy anyone is enjoying the videos!
Nice demo of picking a simplified framing of the problem and getting a proof of concept quickly! Great project :)
Thanks for watching!
i'd love to see an unsupervised learner do this
I am a newbie, but is that even possible? I have only seen CodeBullet's videos, most if not all of which are unsupervised. Though he would have to make the game first in order for it to have that amount of time to learn.
@@qassem121 Reinforcement learning (RL) is a method where the agent learns from its environment; in most cases an environment is easiest when you have your own simulation.
@@froozynoobfan Almost: RL is an ML paradigm alongside unsupervised and supervised learning :)
Don't GANs and VAEs count as unsupervised?
@@revimfadli4666 for a GAN you still need labeled data to train your classifier
Thoroughly impressed by how you framed such a complex task into a classification problem! When I saw the thumbnail, I thought you’d be coding up some reinforcement learning stuff. Subbed!
Because there is such a thing as frameworks.
For sure!
It's called imitation learning, if you want to learn more.
Had this idea too a few days ago. However I never would have thought it would be so "easy" to make in terms of AI models/techniques. Great job dude!
Easy to prove the concept and have some success. You do run into some road block on the harder levels with camera issues. Might take this on in another video.
Good job !! I'm doing a similar thing on my channel, for a racing game (Trackmania), but I'm using feature extraction and a simple NN, instead of a CNN. I didn't think it could work in Fall guys, impressive !
You interested in teaming up for a video?
@@ClarityCoders I'd like to continue my AI project for now, but maybe later !
Yosh I would really like to connect to ask you a few quick questions on your projects. I sent you a discord request to add as a friend would that be the best way to chat? I tried to message but we need to be friends first or share a server.
8:38 "we at least know we're headed in the right direction" unlike that random agent... ^^
Amazing stuff! Sharing this with work mates
Thanks for watching i really appreciate it!
This is a super high quality video! Great editing, and it's super informative! 😁
Thanks it honestly means a lot. I'm just glad people enjoy watching!
It's nice of you to try and help prevent these kinds of hacking things. Good work!
0:37 BOYS THE PLAN IS SIMPLE
Haha should that be my catch phrase?
What an awesome project! I would appreciate it if you did some follow-up videos on this project. You could train it using more and better data (say, only your winning games), maybe using deeper ResNet architectures, or maybe using greyscale images (these edge images are a bit too simple an approach IMO).
This is a very good idea and something I have been playing around with a bit.
It's not just simple, it's more complex than using greyscale, and it's bad.
I had more success memoizing the recent history as an input to a CNN & doing more feature detection for platforms, walls, and obstacles. I used mean-shift clustering to find moving obstacles and other players so that it would hopefully work on new maps as well.
That's awesome. You going to share that sh!t? Thanks for the view buddy.
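For anyone wondering how mean-shift clustering could pick out moving obstacles and players, here is a hedged sketch using scikit-learn. The pixel coordinates are synthetic stand-ins for, say, pixels flagged by frame differencing; the bandwidth value is illustrative.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Synthetic "moving pixel" coordinates: two blobs standing in for
# two moving objects detected between frames.
rng = np.random.default_rng(0)
obstacle = rng.normal(loc=(50, 50), scale=1.5, size=(40, 2))
player = rng.normal(loc=(150, 120), scale=1.5, size=(40, 2))
points = np.vstack([obstacle, player])

# Mean shift finds cluster centers without knowing the object count
# in advance, which is handy when maps (and player counts) vary.
ms = MeanShift(bandwidth=10).fit(points)
centers = ms.cluster_centers_
```

Not needing a fixed cluster count is the appeal over k-means here: new maps can have any number of obstacles on screen.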
A massive thank you for putting this video out there! I hadn't considered using AI before, and after watching this, I decided to dabble in creating my own AI to play a completely different game and experimented with using different code. Really helpful
Excellent!! Please do keep us posted and make a video about the progress after training it for around 60 games. I am very excited to see the final results.
Thanks for watching and commenting! I will try and post an update.
Ngl I want to see the ai footage. Always interesting
thanks for watching!
@@ClarityCoders Stream it 24 hours on twitch lol that kind of thing always seems exciting
opencv could easily be used to make it recognize the levels! Great work, love it.
It's so smart it tried to use speedrun tricks
Crafty AI! Thanks for watching and commenting.
A sub is a must for your channel. Keep up the good work. We need you to make these kind of fun videos of AI playing simple games.
Keep this up.. Just keep uploading and your channel will grow.
I love your simple and elegant solution.
Thanks this comment means a lot. I'll keep on making videos!
Fantastic video! Was thinking of doing something like this before and never figured out how, this vid helped a ton!
Glad I could help!
dude ur a legend for this! thank you.
Thought reinforcement learning from the title. OMG just CNN! good job
Yeah I have some reinforcement videos but this was a fun simple CNN project.
@@ClarityCoders good. I am a fan of fastai too.
It would be really cool to see the continuation of this
I never used fastai before. Wow, that was really fast AI development. Just need to add a huge amount of data from a pro player to make the AI OP at the game.
For sure, add the pro player haha
From reddit here ^^
I didn't waste my time :D It was worth it
Thanks for checking it out! I got some more cool stuff coming soon stay tuned.
Why did you change the image to greyscale and run edge detection on it? At 2:11 you say that the colours don't mean anything to the neural network, but this is not true, since your pretrained ResNet18 was trained on natural colour images. Since the output size generated by the pretrained model is fixed at 512 irrespective of the complexity of the input image, I cannot see any advantage to doing edge detection on the screenshots. Did you try training the network on the original screenshots before this, and have an issue that motivated the move to edge detection?
I had this thought as well. Really don't understand the motivation here
The simpler methodology here would be to just plug the raw screenshots into the model without doing preprocessing on them, and see what happens. It is best to avoid premature optimization. Don't try to fix an issue until you have confirmed it exists.
When working in data science, it is best to get your first model working (going from input to output) and wait until you have this before seeing what needs to be improved. Then try out your ideas of what to change one at a time and see if they make an improvement on your criterion compared to the baseline.
In this case, I would expect that performing edge detection on the input images decreased the performance of the model. Doing so moved the inputs away from the training distribution for the pretrained ResNet18 model.
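One concrete way to keep raw screenshots close to the pretraining distribution is to apply the standard ImageNet preprocessing instead of edge detection. A minimal NumPy sketch; the mean/std values are the well-known ImageNet statistics used for pretrained torchvision ResNets, and the frame itself is a placeholder.

```python
import numpy as np

# Standard ImageNet channel statistics used when fine-tuning a
# pretrained ResNet; keeping inputs in this distribution is the point.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(frame_rgb_uint8):
    """Scale a raw HxWx3 uint8 screenshot to what a pretrained
    ResNet expects: float in [0, 1], then per-channel standardized."""
    x = frame_rgb_uint8.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

# Placeholder mid-grey screenshot instead of a real capture.
frame = np.full((224, 224, 3), 128, dtype=np.uint8)
x = preprocess(frame)
```

A greyscale edge map pushed through this same pipeline would land far from anything the backbone saw during pretraining, which is the distribution-shift concern raised above.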
this looks like a lot of fun and I plan on attempting this for a few games.
Awesome I hope you share it!
@@ClarityCoders If I can manage to start it XD. I haven't coded in months and never tried AI before. Of course, there is your Discord; I may or may not ask a lot of questions.
Damn, sooooo underrated. Keep up the good work, earned a sub :D
Thank you! I appreciate the sub and comment.
This looks like a much more fun way to play Fall Guys than the regular method, for sure.
Haha These comments are cracking me up. Honestly when people ask why I didn't push this project forward it's because I was sick of playing. Thanks for watching buddy.
Can we get a part 2 and see how far this can go? Would like to see it win some games and get smoother.
Nice work! I also had a long-standing idea to make similar agents for games, and videos like this one give me a big impetus and inspiration to finally dive into this :)
This was great, thanks for the inspiration!
Thanks for watching and commenting it means a lot.
A better baseline would be to just take the most frequent action, or to randomly sample actions at the frequency you took them in your training data. I suspect that would do about as well as this NN is doing.
Very good point. Given more time and interest I would have gathered even more data and then balanced it better. Thanks for watching I appreciate it.
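A sketch of that frequency-matched baseline; the action labels and counts are made up for illustration, not taken from the video's actual dataset.

```python
import random
from collections import Counter

# Hypothetical action counts from a recorded training session:
# "nothing" dominates, as it typically does in this kind of dataset.
training_actions = (["nothing"] * 500 + ["jump"] * 120
                    + ["left"] * 90 + ["right"] * 90)

counts = Counter(training_actions)
total = sum(counts.values())
actions = sorted(counts)
weights = [counts[a] / total for a in actions]

def baseline_action(rng=random):
    """Sample an action with the same frequency it had in training."""
    return rng.choices(actions, weights=weights, k=1)[0]

# The empirical probabilities the baseline reproduces.
probs = dict(zip(actions, weights))
```

If the trained model can't beat this sampler on qualification rate, it likely just learned the marginal action distribution rather than anything screen-dependent.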
I agree; I also suspected that the agent just learned the right random distribution. Also, I did not see a confusion matrix for the test data. I think it unlikely that an agent in such a complex environment could be trained that easily.
Did you do a confusion matrix on the testing data?
Did you set the probabilities of your random baseline agent to match your data? For example, just running and doing nothing else should have the highest probability. It did not seem like this was the case, so it is possible that you just learned the right random distribution.
Why don't you use Q-learning or other reinforcement learning methods? Your ground assumption that the action you performed is the right one leaves your agent lacking. Not including bad runs is understandable in your setting, but if you wanted to surpass your own performance, this would be counterproductive. I would recommend an actor-critic reinforcement learning agent, or just a deep Q agent.
Honestly, because I wanted to show how you could build your own supervised learning dataset on a game. The project was also a starting point, not really an ending point; a lot of things could be improved, including gathering more data. I do have some other videos on reinforcement learning!
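For reference, a confusion matrix on held-out data is only a few lines; here is a dependency-free sketch with made-up predictions showing why it matters for imbalanced actions.

```python
from collections import defaultdict

def confusion_matrix(y_true, y_pred, labels):
    """Rows are true labels, columns are predicted labels."""
    counts = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        counts[(t, p)] += 1
    return [[counts[(t, p)] for p in labels] for t in labels]

# Toy held-out results: a lazy model that mostly predicts "nothing"
# looks accurate overall while missing the rare actions entirely.
labels = ["nothing", "jump", "left"]
y_true = ["nothing", "nothing", "jump", "left", "nothing", "jump"]
y_pred = ["nothing", "nothing", "nothing", "nothing", "nothing", "jump"]
matrix = confusion_matrix(y_true, y_pred, labels)
```

The off-diagonal mass in the "jump" and "left" rows is exactly what raw accuracy hides, and what the commenters above are asking to see on test data.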
You made this a lot more complex than necessary; simply have the AI copy the patterns of the fastest Fall Guys players and replicate their inputs.
You would need to have access to that data as inputs. I didn't have any of their code so I couldn't use that...
Such a great explanation! I don't even know Python basics, but I am watching this ;)
Thanks for watching and commenting it means a lot.
Awesome video dude, you broke it down super well
Thanks I really appreciate it!
Could you do more videos about fastai and how to use it for different use cases?
That's really cool how an AI can just teach itself how to play!
Thanks so much I really appreciate you taking the time to watch.
that was amazing. would love an in depth tutorial of the code
Thanks! That's a great idea. Maybe the same concept on a game everyone has access to try?
@@ClarityCoders would be awesome
@@ClarityCoders You wouldn't believe how much of a poggers move that would be. There are virtually no in-depth tutorials on how to make an AI play games like this. It would be awesome.
Finally get to see an implementation applied to a real game, not an improvised game developed purely for AI purposes like most YouTubers do.
It's way tougher to be honest. Thanks for the comment.
This is so good! Imagine if there was some way to use the positions, velocities and hitboxes of relevant objects as input. Would probably take some serious reverse engineering though
Hi, what entertaining work! I am wondering, does fastai automatically balance the weight of each image class, or not? I can see from the confusion matrix that the "nothing" action is ~10 times bigger than the other groups. If not, then balancing the class magnitudes could make the results way more efficient (artificially enlarging the small action datasets and reducing the huge ones).
Yes, fastAI does a lot of magic in the background
Yeah they make it crazy easy to spin something up. You can dig in and customize everything but out of the box it gets you running.
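If the defaults don't rebalance, inverse-frequency class weights are one common fix. A minimal sketch; the counts are illustrative (mirroring the ~10x "nothing" imbalance mentioned above), and in practice the weights would feed a weighted loss or a sampler such as PyTorch's `WeightedRandomSampler`.

```python
from collections import Counter

# Illustrative label counts with "nothing" roughly 10x the rest.
label_counts = Counter({"nothing": 1000, "jump": 110,
                        "left": 95, "right": 95})

total = sum(label_counts.values())
n_classes = len(label_counts)

# Inverse-frequency weights: rare actions get proportionally larger
# weight, so each class contributes equally on average.
class_weights = {label: total / (n_classes * count)
                 for label, count in label_counts.items()}
```

The alternative the comment suggests, oversampling rare actions and downsampling "nothing", achieves the same effect at the data level instead of the loss level.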
Nice job man, I think training it to play CS:GO is worth a shot now. Thanks for sharing the process.
I would love to see that go for it!
"At least we know we're heading in the right direction" when he's clearly running back to the start
Would love to see ai bot playing among us and actually generating speech that can convince others that it's not the suspect.
Honestly a super cool idea. Not that far off either, since Among Us would allow some processing time.
Same actually! I've never seen a bot actually play Among Us before, just spam bots that leave moments after they join.
Awesome project! Good use of fastai, definitely going to try to build something like this now.
Got another FastAI video coming out in a week or so.... If you're interested hit that bell so you don't miss out.
Okay that's creative. Replaced Reinforcement learning with CNNs 👏👏👏
Thanks! can probably create some interesting use cases.
Fun video. Nicely explained, Liked and Subbed.
Thanks for the sub! Glad to have you around.
That was very entertaining! Congratulations!
Thank you.
Now that I see this program, I have a theory that fastai could be useful for more than just playing video games. This AI approach could also be useful for other applications, such as learning Calculus, for example. It could also be used to make the code you need in order to get 100% on your coding project. I am working on a project that involves fastai.
3:28: What is CO or OO (the orange logo in the top-left corner)?
Absolutely love the channel. I used to play a game when I was a kid (Wakfu), and I was wondering if you could make a video about farming in it. It shouldn't be too complicated, but sometimes when farming you'd encounter an AI to fight (fairly easy), and I'm wondering how you would tackle that problem with AI.
I'll put it on my list to check out! Thanks for taking the time to comment.
The code in the Git repo is crazy simple and intuitive, I'm really impressed! How common is the usage of fastai in your experience? As someone who is learning ML, are there any other libraries you can recommend?
FastAI for me is great for quick setup and testing things out. If you want to get more detailed, I'd learn PyTorch. I have a setup tutorial!
@@ClarityCoders I was lucky enough to learn PyTorch in a practical course, but it was a little lacking when it came to searching for hyperparameters, and I'd never heard of fastai before.
@@sdfrtyhfds Awesome, it will be easy for you to pick up then. Google them; they have a tutorial series.
I love fastai!
I'm a huge fan of Jeremy and Rachel... and of yours, now @ClarityCoders
@@bobdylan9173 So far my highlight on YouTube is him tweeting my video haha
"The plan is simple" OMEGALUL
That outline picture does not include information about which way the ball is moving, or how fast.
Yup, could probably stack images to give it some knowledge of movement speeds. One of the many improvements you could make on the project; I just got a bit bored with it.
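The depth-wise stacking idea from the thread is basically a one-liner in NumPy. A sketch with dummy frames, sized to match the (224, 224) input mentioned earlier:

```python
import numpy as np

# Two hypothetical grayscale frames after preprocessing
prev_frame = np.zeros((224, 224), dtype=np.float32)
curr_frame = np.ones((224, 224), dtype=np.float32)

# Stack depth-wise so the model can infer motion between frames
stacked = np.stack([prev_frame, curr_frame], axis=-1)
print(stacked.shape)  # (224, 224, 2)
```

Only the first convolutional layer of the model would need its input channels changed to accept the extra frame.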
Amazing, Guy! Congratulations!
Thank you! Cheers!
Awesome video and great efforts!!!
Thanks a lot!
Maybe you could mod the game to have the textures all be black with white outlines, since skins might mess with the training a bit
Very good point. Figuring out a way to keep the camera centered is my next challenge.
@@ClarityCoders Check out MelonLoader; it makes modding Unity games really easy
Can we get more of this with an old 3D MMO game?
Hi, great work. You earned a sub. Question: how can I put games like these in some kind of simulator to play against itself so that I can run many simulations in a short amount of time?
Tough with a live game like this without the source code. That's why a lot of people doing AI on games rewrite the whole thing in their own code.
@@ClarityCoders sounds complicated... I wanted to try and train an AI bot for clash of clans by letting it play against itself and to have an AI bot that knows how to attack and one that knows how to build bases... Wishful thinking though.
Felt a bit cut short at the end. Would have liked to see the improvements you talked about
Yeah honestly I got bored playing the game so I didn't want to make more training data. It's fun but I was playing it like it was my job 😂. I should do another game with more data and some improvements.
Awesome video! Found it interesting that a ResNet could be somewhat effective at this task. Of course, for optimal results some kind of recurrent network would need to be used to encode contextual data from frame to frame.
Also, your approach of simplifying the images makes sense, but it would be nice to see more data retained. The Canny edge detection algorithm should retain more of the edge information.
Wait, now it can qualify... Can you make it record its own games and learn from its positioning? Then make it able to run all by itself, like 1, 2... 10 runs at the same time? Then if you leave it to learn, maybe overnight, it'll show you how much is possible with only these 3 inputs
If you played more games to better train your AI, did it improve?
Yes. Although I moved on as I got bored with the game 😄
I'm gonna try and adapt this for Minecraft with my unhealthy amount of free time
Get it! Keep me posted.
How does this account for the different variations within each mode? Like on Hit Parade, the 2nd section can be the revolving doors that you need to push through, or the spinners that were shown at 9:39 where you can do a combination of avoiding and jumping over the bars.
I used a different model for each mode. With enough data you might be able to share models.
That's awesome! As far as I understood, your AI learns to play like you. But I wonder how you're going to make it better than you.
That gets tricky with a live game. With a game you can pause during learning, you could use reinforcement learning. With something like we have here, you could filter out my bad games, basically only training when I'm performing at my best. This may improve things, but it's obviously still limited by my skill.
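That filtering step could be as simple as ranking recorded runs by score and keeping only the best ones before training. A toy sketch, with made-up run names and scores:

```python
# Hypothetical episode log: (recording_name, final_score) pairs
episodes = [
    ("run_01", 12.0),
    ("run_02", 45.5),
    ("run_03", 30.0),
    ("run_04", 51.2),
]

# Keep only the top half of runs by score, so the model imitates
# the player's best behavior rather than the average
episodes.sort(key=lambda e: e[1], reverse=True)
keep = episodes[: len(episodes) // 2]
print([name for name, _ in keep])  # ['run_04', 'run_02']
```

Only the frames from the kept runs would then be fed into the training set.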
@@ClarityCoders Thank you so much! I didn't realize it was a one-year-old video, but you still answered
Any plans to add reinforcement learning to make it play Fall Guys, like AlphaZero, learning from the ground up? Would be fascinating to see something like that.
I would love to but.... need a bit more processing power than I have at the moment. 😂 If you guys keep watching maybe Nvidia can hook a guy up.
Just a question: I am truly interested in the normalization function you apply to your pictures. Would you mind explaining how it works?
I don't think detecting edges is necessary, since the idea of a DNN is to just feed (somewhat) raw data to it and let it find a function that maps input to output... That's because by using the edge detector you are skipping a learning step the NN would have to overcome, which is finding details in the image that may be useful in deciding whether to move left or right...
How can I pass a tuple of integers instead of using the direction classifier?
Why didn't you use the colour and texture information to do masking? Basically a preprocessing step that transforms your screenshots into several simplified layers that contain the edges, such that the AI can react specifically to pendulums and other objects while following the inside of the race track.
Honestly, because the colour and texture differences would need more data, and I was sick of playing. It's a really good idea though and would probably perform better.
This is the type of AI I'm interested in. Obvious bots are always annoying but when the bot can blend in like this, it's a good one. The ones with obvious cheats (fly/big speed/wallbreak) in games ruin it for everyone else in the lobby.
Yeah it's pretty good harmless fun.
Me training the bot in a game that I suck at and getting a 1% winrate
Ah yes, yes, it's working out even better than I expected, it's all coming together
Going to prevent cheating in a crappy online game, what a hero...
Please, spare us the needless comments.
Other than that, the video could have been interesting to watch.
thanks for the feedback and the view!
Background music? Thank you.
Huh, so no need for camera control? Just 3 keys and hold forward, and edge detection for image processing; that's super smart and efficient. I wouldn't have guessed you could build an efficient player with just that.
Good note! That's why those 3 levels were chosen: you don't need to mess with the camera. That would need to be addressed in the next version.
Awesome stuff, subscribed!
Awesome, thank you!
I know this is a bit late, but I wonder what a bot that looks for the closest player and mimics its moves would do
Haha. That is very interesting actually. Find a buddy and live or die with the results.
IMO if you are using a pretrained backbone, stripping the incoming frames may not really be helpful at all; it's likely very OOD compared to the data it was pretrained on
Best would be to use a stream device linked to another computer, or to another program at least, and train the AI to visually recognize different elements from the stream and make decisions from it, then send inputs back to the computer/game.
If it got turned around, would it know it isn't going the right way and turn back? If so, would you have to teach it to only turn left or right when reversing to get back on track, or would it see that it's more on the correct path as it turns and continue in that direction until moving forward?
If you fed it some training data showcasing that scenario it could learn. So you would have to have played games where you got turned around and corrected the path.
Commenting so this video does a little bit better, maybe someone gets inspiration from this video and decides to learn programming!!!
Thanks that means a lot! Let me know if you have any issues learning along the way.
I just want to point out how nontrivial it is that you actually managed to get training data that isn't trash. Pre-processing is key!
Yeah that's the majority of the battle. Thanks for watching / commenting.