HUGE thanks to NordVPN for sponsoring this video. Get Exclusive NordVPN deal here ➼ nordvpn.com/codebullet It's risk-free with Nord's 30-day money-back guarantee!
It's a miracle when code bullet uploads, but you can tell he doesn't half ass his videos, he does actually put time and effort into his content. Years of doing coding videos, 3 months at a time, it feels like meeting an old friend you haven't seen in ages again. Great videos.
I like to imagine that in the code bullet universe, he is the last remaining AI, and is constantly programming other AI through brute force because he was programmed to not give a shit.
I'm actually impressed that Evan drew the colision lines on the backgrounds by hand. I half expected him to try and make something for that too. Or use like a magic mark tool. Effectively using an AI as well.
@@dexorne9753 have you never seen ANY of Evan's videos before? You think ge cares about how long it takes? Nah, his whole point in life is to make it difficult for himself and then have an AI solve that problem too
"The main problem with walls is that they're not floors." Ah yes. I never really remember how much I need CB videos in my life until he goes off break once every year (edited the one typo to prevent medical emergencies)
HOLY HOLY!!! I can proudly say that I have the two HOTTEST women on this planet as MY GIRLFRIENDS! I am the unprettiest RUclipsr ever, but they love me for what's inside! Thanks for listening co
I love how *specifically* on the level where he demonstrated putting a collectable for the AI to grab to go down, the AI found a way to not go down and grab it lol
Evan mentioned Elden Ring, that 100% confirms without a doubt that he's currently making the entire game engine himself and is going to train an AI to beat the entire thing. Can't wait for that video
Code Bullet is one of the FEW RUclipsrs I will likely never unsubscribe from. Because even though he uploads very infrequently, each video he makes is an instant banger in my books.
i love how he put the incentive to go down to take the optimal path, only for the AI to largely completely ignore it and instead make a herculean jump to the upper platform
@@justonefra Is that an impossible jump in the actual game? I see him bonking the wall a bit but jumping straight up while the wind is blowing should let you get up that way shouldn't it?
It feels like it's been so long since I've seen a swarm of ai trying to figure out a game codebullet is crap at, it feels like watching the level replay of super meat boy
Seeing him fall all the way down to the start really makes you appreciate how JumpKing is actually really generous with where you can faceplant and not fall like 5 levels
14:42 I love how the AI actually does get better each evolutions, the longer the AI tried this the less AI tried to go to the left side and tried to skip, because it is not possible more AI tends to go to the right side by each evolutions and started doing better after realising that it is a waste of moves by going to the left
This reminds me how its a valid evolutionary tactic to just throw as much genetic material out into the wind in hopes one of them survives long enough to also throws genetic material out into the wind.
3:08 So what we are seeing here is actually what allows Mario to clip through doors with BLJs in SM64, if the square moved fast enough, it would blow right through the line because it's checking for collision frame by frame, if there isn't a frame where collision happens, then collision CAN'T happen. Again at 9:41, character reaches basically escape velocity and they move fast enough to just teleport on the other side of the collision wall.
9:41 it moved so fast that the next frame was drawn out of the supposed colission line right? 3:08 the collision line was inside the character during the refresh...?
@@boingboingresearcherph.d.2871 It did, the whole player was suddenly at the other side of the line so no collision happened. And the collision line was inside because at first he didn't code the behavior to snap back. Many games snap back based on direction so if you change direction at the exact same frame you snap back inside the collision, this is used to clip through walls on the Super Mario from the NES. On SM64 I don't know the whole thing, but i would guess it was similar, though in that game i would say it's near impossible to turn on the same frame and it calculates the collision on four steps from the previous to the new position so you would have to be a lot faster, TAS runs manage it due to weird glitches that allow them to go several thousand times normal speed.
I love watching the little waves of AI for a few generations kinda all ending up in the same place, knowing they’re just looking for the most Up, before one of them makes that next jump, and then they all flood that way. It just feels every time like that one guy went “YO GUYS! THERES UP OVER HERE!”
Something I've thought about for projects like this is, rather than incentivizing the AI with manually placed collectibles to encourage the correct path, having a 2D success heuristic that rewards it both for height gained, as well as what area of the map it can uniquely traverse without falling down a level, to encourage it to explore upwards Then tweaking the ratio of reward for up vs reward for exploration until there's a good balance that encourages constant upward climb
Or perhaps 2 separate Explorer and Exploiter subpopulations, each going for novelty or performance respectively, with members that surprisingly perform well in the other category's metric cloned to that category
In reinforcement learning this is what is called "exploration and exploitation". Maybe that could be done by changing the mutation rate of the evolutionary algorithm using the Robbins-Monro algorithm.
Bit sad that there weren't any listing on songs used. So here's mine of the ones I knew previously and could find with a little help of Google Sound search. 5:29 Megan Wofford - Inspiration 8:45 Arc De Soleil - Train of Liberation 10:06 Lupus Nocte - Hadouken 11:30 Zorro - What the Dogs Can't Have 13:49 Trevor Kowalski - Keep Up This Time 16:04 Christoffer Moe Ditlevsen - A Pure Force 17:41 Lupus Nocte - Hadouken (again) 19:45 Eveningland - Hyperspeed 21:35 Digital Camel -Sad Life 23:48 Zorro - What the Dogs Can't Have (again) 26:19 Diamond Ortiz - Mirror Mirror
@@alexanderklee6357 I might have missed something as some songs used i weren't familiar with or glossed over it think it still was part of the previous song. I'll edit it in the morrow.
it would be cool to see an AI with a more complex reward which takes in other platforms rather than just prioritising 'up'. then after a few levels the AI could get through in less and less generations rather than starting from square one each time
It's starting from square one because he's not building a robot that knows how to beat this game. He's building 44 robots that know how to beat one level. I doubt he is even syncing them into one fluid action... There is no truth about how to beat the game based on platform distance. It just is 44 separate recordings of random inputs. Pretty lame imo. I'd enjoy it but I wouldn't upload it. Have fun CB
@@littlegandhi1199 homeslice the chance of beating jump king with just random inputs is nearly impossible even on a per level basis you can clearly see they prioritise going up rather than just randomly jumping around
Wouldn't "next screen" be the better reward? Like "If x moves fail, increase move count untill success" and just have it add more iterations for vertical height as well to brute force past the height limitation and/or remove iteration numbers at all and/or write some pathfinding code.
18:16 Interesting thing I noticed in this room, some of the jumpers actually made the shortcut to the higher ledge, but because you placed the "collectible" item to encourage the AI to go down and then back up, that path ended up being ignored mostly, until on a fluke one actually made it without collecting it! xD
Ah, the memories of doing collisions and finally accepting that looking at just a "snapshot" by checking intersections is doomed to fail eventually (low fps or fast objects). While the math is a bit more complicated, you should try to find the intersection as a function of the actual movement, find the one that happens first, respond to it, adjust your velocity and recursively repeat the process until the full movement for the frame is "used up". Considering it's about the AI and not pixel perfect collision physics, that's likely overdoing it, though.
Yeah I've done the whole "I want to reinvent the wheel and code it from scratch all by myself". Fast moving objects absolutely gives me "back to the drawing board" flashbacks.
@@ViktorSarge On the other hand, doing things from scratch at least once can give a better idea how they work.. and will greatly increase your appreciation for existing libraries and engines. When I tried doing ellipsoid collision in my Minecraft clone, moving over the edge between two blocks at very high speed would make you bounce around the landscape like the Hulk. Floating point numbers are not our friends.
@@TheTrienco That's true! I did some light C++ at uni in the 90s and the manual labor required to do network programming really made me appreciate Python urllib later in life :D
@@ViktorSarge And at least until the next standard, there is still no networking in the standard library. I remember a Java assignment at uni to connect, download and display a HTML page. Coming from C++ I thought it's absurd to do that in one week (implementing the http protocal, a HTML parser, a renderer, etc.). Turned out to be a couple of lines, since everything is already there.
It's been 6 months without bullet content. I have started scratching my fingers against the walls until they bled and licking the blood from the walls until the walls where clean again. Bullet content is my food. Bullet content is my water. Bullet content is my warmth. I am cold, hungry, and thirsty. Feeeeeed meeeeeeee
I always like seeing runs done by brute force ai like this. The final ai is always amazing during the absolutely intricate parts of the game (like perfectly timed jumps on the diagonal ice) , but just dumb when it comes to the basic easy parts lol
@@mattiabilla it's an evolutionary algorithm that narrows down the final path by brute forcing many options until a generation succeeds better than the previous. Its an ai that doesn't learn how to play the game, only which random moves get it further. The ai isn't learning things like, what jump patters it can know, where the map exits are, efficient routing, it doesn't actually learn how to play the game. It's just brute forcing every possible option until you get the combination that gets to the end of the game. Which results in the hard parts being done perfectly, and the easy parts being done like a 2 year old. The best way to say it is. This type of AI does not learn to play games. It learns to beat them through brute force.
@@coreyfrank506 well, we can say that the model has a high variance, however all the AI algorithms aim to find the optimal solution by moving through the feasible region according to a strategy, in this case optimizing a fitness function with a GA. Brute forcing is just an inefficient and different strategy.
this makes me want an AI speedrun competition, where the AIs of multiple devs try to beat a game the fastest i'm not even sure if this can be considered a competition or if the AIs will even behave differently but it'd be cool
@@SwingRipper a TAS is a tool that allows people to do things frame by frame and then record those inputs and play them back, there's still a player that knows all of the strategies and movements i guess my idea boils down to one of those marble race videos where it's pretty much random but it's fun to look at and root for your favourite colour
Nah I don’t think it will be that much different between AIs. At least in a game like Jump King. The devs would just have to make thousands upon thousands of gens to have the most optimized run possible and the runs will ultimately be the same no matter which ai is playing. However, having devs compete in a limited time-frame to code the most successful ai would be very interesting since some of them would definitely try unique finnicky ai learning methods instead of just the rinse-and-repeat used here ...
It looks kinda like they're trying to learn each level, and then the first few generations of the next level are trying the same thing as the last level, before mutating enough to find a way. I'd be curious to see how much better it would fare if you used the algorithm that learns as it goes rather than generational, if it would get faster as the levels progress
@@amaurysaint-cast7039 makes sense, but it appears as though the AI can't "see" anything except the altitude value. In previous videos he's included entire vision systems of ray casting for vision, but here he just seems to send it blind
@@monad_tcp Not exactly. While people can only see one level at a time too, people can apply jump mechanics learned in previous levels to the later ones. People are capable of seeing the whole map and understanding "I need a big jump on this distance but a small one on that." Meanwhile, the AI in this case is running pretty much entirely blind. The only thing the ai sees is it's height and the "incentive coin" that was placed to show the AI hey go down to complete the level. Essentially, the ai is brute force jumping around until it hits the right answer where a person can actually plan a route and set of jumps.
@@NerdyGirlTay so that's why I wasn't able to beat platform games. Damn, I Am an AI and touch my behavior was human so all humans were playing like that.
I am a software engineer and let me say that making this 27 minutes video took a LOT of hours, effort, trial and errors. It is not just re-making the videogame itself, but the machine learning work is also VERY time consuming and a lot of tries to have all variables configured correctly and ALSO make it really entertaining. I enjoy every video you make, so BIG APPLAUSE to you sir. GREAT VIDEO, KEEP IT UP !
yea this has nothing to do with AI and especially not machine learning.. it doesn't learn anything. It just keeps the random inputs that work and at the end strings them together. Literally anyone could code this "AI". It's just pure random brute force.
@@RitaTheCuteFox dont say that, this powerful AI might just wake up one day and hunt you down, terminator style. Nah seriously, this couldnt pump water if it was meant to
I mean, thats exactly what hes doing. Im not sure this could be considered AI, its more like "brute force" because its trying every possible sequence of jumps until it finds the winning sequence of jumps
@@Kevdama1 Yeah if it went through however many generations for the final one to be able to complete the whole game then you'd have an AI that learned jump king. I just don't think we're there yet so just stitching together the one successful run of each stage of the game is the best you can do. Jump king just seems like an odd game to do it with because the entire point of it is having to do everything perfectly or else face huge setbacks, yet the AI in this case just repeats the same stage over and over until it makes it past.
@@sticks7857 probably could make a perfectly optimal, or near perfect run, the AI would probably need to be much more complex and train for much longer, I think this method cuts down on the raw processing power required
The little time lapse part of your videos always help when I'm feeling depressed or anxious. I don't know why. They're just satisfying I guess. So thank you.
Code Bullet so entertaining and informative. I love the videos on this channel! I LOVE the little details that he adds. The beret while talking about art, the code bullet 2D pixel character, his dance while the song is playing. Come on, so good!!!
The absolute best part of the gameplay time-lapse is how each generation starts as one little dude for like a frame, then immediately erupts into a swarm of them that covers the screen lmao
His debug jokes are acually VERY accurate! I know from briefly working as a coder that 2/3rds of the time spent making a functional program is to make it functional*
@@freetousebyjtc That the program does what is asked of it, but if you look at the code and how it does the task, it's full of band-aid fixes, failed shortcuts that took longer to fix than to do the original coding, and comments that nobody understands the meaning of because the code is so mangled after all the 'fixes'.
@@alkestos then you pass it along to the next group of coders who have to spend the first half of the project figuring out wtf the last group of people were thinking, just to repeat the cycle😂
It's so amazing watching them slowly get through a difficult section You can see that they're all falling and failing... but wait, one hero made it through! And then two, and then ten, and then they're onto the next level! Inspirational.
you know what would be awesome? an AI speedrun competition where everyone makes their own ai and has to train them and then try to get the world record. how often would they beat each other? would there be a limit? would they at the end, try to completely break the game apart to somehow teleport to the end? what crazy things would happen? edit: Holy shit guys i didn't knew so many people would like this idea. this was meant to be a random ass comment swirling around in the comment section never to be seen again.
I feel like you could make a tas and that would technically be considered an ai... maybe with randomly generated levels though? That could actually be interesting to watch
The problem is that you know some are gonna train their AI to the point the AI will always find the most optimal solution, and nobody will ever win. AI races in set levels won't work. AI vs AI is only valid in super complex scenarios where they directly compete against each other and try to outsmart each other. That's why the OpenAI team is specifically developing an AI for DOTA 2.
It isn't synced. It just looks that way because the AI has so many movements that any of them can be interpreted as being in sync with the music, so that's what your brain does.
If there are 44 levels in the game, and if there were a total of 500 AI characters in each level, then that means that over the course of the game, at least 22,000 characters participated. The sad part is that the vast majority of these 22,000 characters failed when trying to beat the game. Edit: This math doesn't take into account the number of generations the AI had to go through to develop the correct path. @Golianxz Original replied down below with some better math on the total number of bots that potentially failed.
Since there were 926 generations, and assuming that there were 500 characters per generation (I don't really remember if that's true), then the total of victims would be a bit under 463.000, considering that some of the last generation succeeded.
Worst of it: Some of the "clever" ones might have died on the way because others were simply more successful in THAT particular level. You never know if the final bots actually were truelly "dumb", but jumped at the right time without aiming for success at all... If you catch my drift.
@@M3dicayne They were all dumb. Machine learning is slightly intelligent brute forcing. There's no intelligence involved beyond "kill the ones that don't perform"
I think the part where the AI is learning is fascinating. It makes me curious about how this would have gone with that type of AI exemplified by the race car that learns to recognize the walls and avoid them. It seems like this AI is always randomly guessing 500 times. Is it possible for the iterations to get smarter jumps each generation as well as rewarding them for simply getting higher?
This AI isn't really learning anything except a single route to get as high as possible. It's actual intelligence is fixed, and dumb as a rock. It has no idea what it's doing. It's basically button mashing and the single best route of each iteration gets immortalized and copy/pasted over the other 499 routes. It's possible to make AI's that actually get smarter, but it's far more difficult. Stuff worthy of science papers. edit: I forgot the part where he made it favor going higher or getting 'tokens' when it has to go down to advance. The AI isn't dumb as rocks. More like...an amoeba? Also, I should point out that this doesn't mean CodeBullet's AI is bad. If you've got a nail, a hammer is perfectly fine tool. Renting a 50ton excavator to nail your birdhouse together is all kinds of absurd. So long as the results are acceptable who cares how jank your tools are?
Yeah, it would be nice to see an AI that generalizes for unseen levels. But that's more difficult to do. It's still entertaining to see the dumb AI, though. It always gets me thinking on how it could be improved. I wish he made it more clear what exactly the AI sees. What are its inputs. Like he did in older videos.
@@LucasPossatti I was also wondering about that, but I actually suspect the only thing this one takes as input is the current level and current "score". IE, the bot has a fixed "strategy" for each stage, and then starts completely over with blind button mashing for the next stage. It's not analyzing the nearby geometry at all.
@@FiltyIncognito not sure what your point is? the goal of the ai was to learn how to navigate this specific character in this specific game through these specific levels, and it obviously learned how to perform this task better over time. its just not generalized to play all games as well. maybe thats what u trying to say?
@DeadLiftGaming-wr7ln the joke is Monika is an AI in a horror game called "Doki Doki Literature Club" please trust that I'm not on crack that is the actual name of it
I love videos like these where you show more of the actual programming and small obstacles. It's fun to pause the video and look at the code you just wrote to see how you actually implemented it all
Almost every coder I know has made at least a few jumping bodies for various project. But it doesn’t matter how often you do it, how good of a programmer you are. Never has a jumping mechanic been made right the first time.
I even screwed up programming one in an RPG Maker game I'm creating and that handles the coding for me... can't imagine actually coding things to jump....
@@DarkFrozenDepths you know that whole “an object in movement will stay in movement unless acted upon by a force”? Same is true for jumping bodies in your program. Just that you need five different forces acting on it and the hand of god pushing it down for it yo actually work
18:38 the first A.I to make it to the next floor was the one that just brute forced his way into that top jumó and forget the collectible. He was like “Gentlemen,I may not have a brain, but I have an idea…”
@@MrLachek no, it basically glitches and double jumped through to the top of the block. not sure if that’s intentional game design with the wind though
Honestly, imagine doing this, but with 5 AI each with a different colour model, no generation resets, just five dudes trying to figure it out, and then streaming it on twitch as a gauntlet, make a poll to place bets, let it run for however long it takes, That'd be fun as hell You could come back periodically and check how your guy is doing in the race, chat with folks about the progress Man I'd love to see that happen
@@aiyumi6747 not sure if there was supposed to be a link attached or if you're talking about the video, but yeah, i mean it'd take way longer but then you could build a community around the event in that time
@@aeaeeaoiauea dude you say that but there was a really long stream (idk how long but months if not years I believe) where a fish just played Pokémon by swimming to different parts of its bowl and it was completely random and it eventually won and a lot of people watched it
As someone who has spent a few years learning and writing AI and Genetic Algorithms from scratch, i'd like to point out that the reason it looks like the "AI" is brute forcing its way through, is because it is. Genetic Algorithms and AI's are not the same, they may give *initially* similar results, however there is a massive difference. An AI is actually learning, a Genetic Algorithm is memorizing. You can think of it this way, an AI is built in order to generalize a data set and label it correctly, and it should in theory be able to do this for any new data you give it. A Genetic Algorithm however, is basically an optimized random function, allowing you to find the optimal labels quick for a specific data set. However it is completely unable to label new data without another round of learning and brute forcing. I'm in no way dissing this video, I actually really like it and love this channel, however, with the way the code is functioning from my perspective, it seems like its not an AI and only a Genetic Algorithm, which is why its "brute forcing" because its literally just memorizing the game and the best moves, whereas an AI would actually generalize the game, and theoretically be able to solve new custom made levels. Note that this could be using a Neural Network that trains using a genetic algorithm, however I don't believe this is the case as the amount of error in each level is very similar, the character makes multiple bad jumps and continues to do this throughout the levels in a very repetitive fashion as if it doesn't actually understand what a level is and what it needs to do.
Came to the comments for this. An AI for this game could be shown a new level and *probably* get through it a majority of its attempts. (If it was good, haha) Not to trivialize this in any way though, I get that most people won't know the difference between "AI" and genetic algorithms in a RUclips title/thumbnail. And, great coding and teaching.
Agreed. You could literally build a physical analogue to the game, then randomly throw hollow plastic counters upwards from the bottom of each section, keep the counters that land on a higher ledge whilst discarding any that fall back down, repeat the process from the position of the highest ledge reached from the previous throw each time, and you'd achieve the exact same result. The coding aspect is completely irrelevant. You could demonstrate the same outcome using pieces of paper, dice, ball bearings, etc.
@@sampfalcon5128 I agree, what is done here is really cool, and a great way to get people into the field, I really appreciate it! Its always nice for a deeper understanding and clarification however, and I'm happy I can help with that in some way!
That's true to a point, as some levels you have to actually go lower in height to get higher at some point, however there's an easy adjustment for that. I still think this video was educational to those wanting to learn how people actually train genetic algorithms and AI's on games. So I can understand the simplification in order to get across another important aspect of this field.
Deep Learning is only a subset of machine learning and machine learning is only a subset of artificial intelligence. There are many more branches regarding AI like natural language processing for example. It's not easy to answer what's a true AI and what not. Some only consider it an AI of it learns, others believe you don't necessarily need to learn to show up intelligent behavior. Some believe an AI should generalize as much as possible, while others think an AI could also simply deal with a specific scenario in an intelligent way. It's hard to draw the line. Obviously you couldn't call a sorting algorithm an AI for example. But can you really say you need to "learn" to do a task in an intelligent way? Essentially machine learning methods are also simply algorithms. Though on that regard I don't have enough experience to form an accurate opinion.
I think it's amazing how you manage to show the bugs you encounter so visually appealing - like the one where you said adding the extra mechanics was basically a peace of cake with no issues and then show the jumping through walls and infinite accelleration, I love it
what a fine youtube channel, I sure do hope he keeps a consistent upload schedule. in all seriousness, I kinda missed your videos, but I get that it takes a lot of time to make them so it is fine. I mean, we wouldnt miss them if they were some quickly pieced together crap instead of masterpieces they are currently.
18:30 it is interesting how some bots where able to get to the top without going down (check 18:34), but then failed to get to the next map, due to collectable forcing them to go down.
The way the evolutionary method is implemented here is really essentially searching through a graph of moves. This works marvelously since the levels are pretty much deterministic. I wonder how different it would be if each player had a sense of decision making.
I think if they could "see" the screen, they would learn much, much quicker, because they wouldn't make random obviously bad jumps wasting time, and it would get more efficient at learning stages over time. The way this is made, if thrown at a DLC it would be starting from scratch, but an AI that can "see" would be able to implement everything it learned, and even improve old paths it took since it would then know better.
@@professorpwerrel i got the impression that the blizzard is just a constant force. I don't play Jump king so not sure. But yeah if it was non deterministic and was changing each time, the evolution of moves might not work. It would still have a chance to work by adopting a strategy that is minimally affected by the wind.
@@IlluviumGaming Seeing wouldn't really fix anything other than remove obvious first jump deaths. The AI would still be blind to any jump beyond their first as they wouldn't see the 2nd jump until their position changed to the location given from first jump. Even "planning" out moves wouldn't really work as that is basically what the simulation is doing from the beginning, just without any first hand checks for death moves. Simming a Sim to sim a sim is just a turtles all the way down kind of issue, sometimes you actually have to go and do. What seeing and knowing the blizzard timing would do is just narrow and expand the range of possible moves for the generation involved. The moves of the prior generation or possible moves of the future gen literally do not matter to the current generation.
@@Arrek8585 Seeing teachs the AI to measure distances and calculate if a jump will work prior to jumping, and the AI would improve its calculation abilities through generations. His AI is literally trying random moves until it brute forces through the level. A skilled player would be skilled in a DLC, just like an AI that can see would, whereas this AI would suck just as much at the DLC as the original generation did. The simulation is not planning anything out, it is brute forcing similar to a TAS, finding a chain of inputs that goes upwards. That's why a skilled player would do better than this AI on a blind stage, the player learns the behaviors, not just randomly trying stuff until something works.
Me when I started watching this channel years ago: How can he call himself a programmer if he uses so much code from other people Me after starting programming in college: How can he call himself a programmer if he uses so little of other people's code.
@@michaelmoore2679 Frameworks that have (hopefully) been tested and had engineering methodologies utilized in their creation? Sure! Random code snippets from stack overflow? Absolutely not.
Two things, first off I seen about a dozen of your videos and this one is the best edited so far, great job! Secondly when you jumped off the top to show us the whole game level by level and the music started all I could think was let's start super fast build mode
Another possible solution: Treat it as a pathfinding problem, and just draw up a map/graph/tree that records for each position and move which position it leads to. There seems to be a limited number of game positions in which a move is allowed (to first order just all the pixels representing floors), so that would hopefully be a manageable graph and it would spit out an optimal solution.
Instead of all the possible positions you could generate a smaller map with random moves. Can’t remember the name but you can do say 5 random moves, then 5 another 5 from each of those and keep going as far as you want. Slap an A* search on the path options and it would probably be pretty quick.
If you wanted to design a TAS, then you would have the costs of the edges be the number of frames it takes to walk/jump between those pixels. Then do a breadth first search (the smallest cost path gets extended next), but mark each pixel once it's reached and prune the exploration candidates that reach an already reached pixel. Also prune all search paths that have not reached the next screen, once every pixel on the next screen has been reached. Eventually a winning pixel will be reached and that path is an optimal Speedrun. (There may be more than 1)
@@lordfelidae4505 The way the AI moves here is that it only moves when the character is standing still on a horizontal surface, so that cuts down on the number of potential positions by a lot. Though even every pixel may still be manageable.
A little more work would be needed than just using all floor pixels. Screens with wind would need a different location per pixel for each possible wind velocity vector/frame in cycle. For floors with ice, each pixel would need a different location per player's velocity vector.
Watching the final run with the winning AI is REALLY cool. Because as the game progresses it gets more and more competent as it goes. Towards the end it starts to look deliberate rather than just random guessing.
The reason the AI looks more competent at the end is because there are less viable pathways. In the easier early levels, there are viable pathways including stupid looking maneuvers, so it has to "look" smarter at the end, even though the attempts made to get there were as random as the earlier ones.
I love how you re-did the game, but I can't help but note that it looks like your AI is brute forcing it's way through the game blind. Maybe you could make another video where you give the AI the ability to "see" all 360 degrees and get distance and collision angles & surface friction, as well as distance / angle to next token. That way you could train an AI that would be able to be given a new map and do it quickly, instead of an AI that can only beat this map. Might be fun! That way you can re-use the game you spent so long re-creating and make a whole other video out of it.
probably since that'll take a bit more time figuring out and coding it when while brute forcing takes a lot of time, he can just sit back and enjoy doing other things like the master procrastinator that he is (which isn't necessarily a bad thing)
I just played your version of JK, and there are some little things off: The short jumps seem to be a little bit to strong; the timing needed to jump short is off. And there is no max-jump-auto-release. Bouncing of walls, especially head first into a ceiling also feels weird.. but yeilds nearly the same result as in original JK. I have beaten JK Map#1 fall-less.. and I will try to do an (at least) sub10 minutes fullrun of your Version of the game soon.
Seeing so many Code Bullets’ jump to one point in existence is nightmare fuel, but watching the evolution process and seeing the AI learn is fascinating. As always, a deadeye on ingenuity.
It really doesn't 'learn' just brute force its way to the top, the generation that clear the 44th floor isn't any better that the generation that clear the floor 1. If you were to put the last generation in the first floor it will fail miserably. But still an amazing video :D
I suddenly thought "woah what happened to code bullet, I hope he didn't quit. When was the last time they posted a video? 6 months ago? Phew, a video should be coming by in another half year"
As a JS developer, I'm am always impressed with your implementations of the game themselves. The neural net stuff is interesting and fun to watch, but watching you actually recreating the games is what keeps me coming back. It also makes me feel bad about myself, even after doing this professionally for over a decade 🤣
You have absolutely no clue how reassuring it is to watch someone who I view as an excellent coder struggle to get basic collision working, lmfao. Reminds me that at the end of the day, all software engineers are just beating their heads against walls, no matter how smart you may be
@@tiqosc1809 ya, I actually usually just use primitives so calculations will be easier and indexes so that if it's index is an index of something collideable than just don't allow it to continue in the direction of the primitive or set its position to just outside the whatever primitive shape it is That you got using some easy maths and even with that many simplifications it's a unoptimized nightmare and makes performance shit most of the time
25:20 interestingly, your AI avoided your 'coin incentive' and instead chose a more difficult path. always so cool to see how evolutionary algorithms manipulate the worlds we make.
The AI doesn't know that it need to collect the coin until it actually collected it. The only thing the AI was told is basically "hey, do anything that make the funny number go up". This means that it doesn't know exactly what it's supposed to do, rather it performs random move and see if the score increases. What happened there is that some bots randomly managed to skip through the intended path by chance, so they get better score thus they stick to that strategy. Of course some other players still found the intended path but they'd be performing worse since it require more moves.
HUGE thanks to NordVPN for sponsoring this video. Get Exclusive NordVPN deal here ➼ nordvpn.com/codebullet
It's risk-free with Nord's 30-day money-back guarantee!
Hi
Oh shoot
Finally a new video
Finally
First
Big respect to all AI's who climbed all the way to the princess just to jump above her and go all the way down to beginning
sigma nodes
@@jacksonsmith2955 indeed, they were "just exercising" and didn't even see her.
They went back to grind again
actual chads
Big Connor energy
It's a miracle when code bullet uploads, but you can tell he doesn't half ass his videos, he does actually put time and effort into his content. Years of doing coding videos, 3 months at a time, it feels like meeting an old friend you haven't seen in ages again. Great videos.
This is incredible accurate.
Quality over Quantity 👌
he has gotten better. we may live to see him graduate college.
He went from easy games with rigid rulesets to Jump King. Character development at its finest.
Dude if he uses unity his life would be 200 percent easier. I don’t know why he doesn’t.
@@randomuser1249 because he’s a masochist
@@christophermoore6110 fair enough
way better than new star wars !
@@christophermoore6110 Game maker Studio ftw
I like to imagine that in the code bullet universe, he is the last remaining AI, and is constantly programming other AI through brute force because he was programmed to not give a shit.
That's probably why each upload takes so long 🤣
No he was raised by some Javascript wolves I think
@@Whathefox. He was raised by wild pythons
@@DefinitelyFroggyDioBrando Nah you see the hoodie? He was raised by Linux Penguins
I'm actually impressed that Evan drew the colision lines on the backgrounds by hand.
I half expected him to try and make something for that too.
Or use like a magic mark tool. Effectively using an AI as well.
I think that would probably take longer and add a lot more need for some lovely bugfixing
That honestly sounds like something Evan would do though
@@dexorne9753 have you never seen ANY of Evan's videos before? You think ge cares about how long it takes? Nah, his whole point in life is to make it difficult for himself and then have an AI solve that problem too
@@anasyn i didnt know his name was evan
@@dexorne9753 in his past videos he says his name "Bad Evan, Dumb Evan"
"The main problem with walls is that they're not floors." Ah yes. I never really remember how much I need CB videos in my life until he goes off break once every year
(edited the one typo to prevent medical emergencies)
I think I had a stroke while reading that
@@michaelhughes553 There's like a single typo in that comment. Stop being overdramatic.
walls are just vertical floor
Floorgang?
@@kuff.7772 so ur saying, a wall is a horizontal floor? me brain hurts
Code went from using AI to beat up nerds to now using AI to get laid.
Truly a king of our time.
Playing Jump King gets you laid?
you still beat up nerds
HOLY HOLY!!! I can proudly say that I have the two HOTTEST women on this planet as MY GIRLFRIENDS! I am the unprettiest RUclipsr ever, but they love me for what's inside! Thanks for listening co
@yes we're here to talk about Code Bullet's AI, not yours
@@ph03n1xm9 It's a bot. Don't worry about responding to it.
This year's "Best of CodeBullet" video will last just 2 seconds and I can´t wait to see it.
With my attention span I probably won't even watch the entire video
he just posted🥳
@@dux5 surely is because he saw my comment
@@dafelix 😂😂
He did have one like that at some point but I guess it got deleted
12:32 this one poor fella stuck in the grass block. Best beta tester though
Hi SGM! I agree, that guy needs help.
Little shoutout to gen33 to gen35 at 14:20 as well
He's a little confused but he got spirit
yo wassup SGM!
The AI equivalent of GBJ
The mark of a truly high quality youtuber is an inconsistent schedule full of large gaps who's every video is loved. Keep up the good work CB
Vsauce
@@NabeelFarooqui sam o nella academy
Like Technoblade
Internet Historian
And martincitopants
what a nice entertaining programming YT channel, I sure hope he doesn't leave for months at a time
@yes who tf are you
@@SossigaKungen bot
it takes a long time to make a video like this, so i think u should expect an upload every 2-3 months
@@MrJanismerhej i think that yt creators are ppl and we shouldn’t get mad at them for “disappearing”? like people have lives outside of their jobs
@@justyourfellowduck who's getting mad at them?
“The main problem with walls is they’re not floors…”-CodeBullet 2022
Me, repeatedly walking into walls
I love how *specifically* on the level where he demonstrated putting a collectable for the AI to grab to go down, the AI found a way to not go down and grab it lol
18:37 Classic speedrunning!
They're practicing their any % run lol 😂
true lol
Creator: NoOoOo, you can't just take a quicker path
Chad AI: Haha, jump goes up, up, up
@Neyra i dont speak taco
Evan mentioned Elden Ring, that 100% confirms without a doubt that he's currently making the entire game engine himself and is going to train an AI to beat the entire thing. Can't wait for that video
True!!
yes
So when we, humanity, will need to continually change where we live, under earth, on mountains, other planets in order to escape the takeover?
IT CAN BE DONE!!
Bro i can't wait long he's goin to do that
Honestly I was expecting around a few months so three isn't too bad. I think four to five videos a year will keep us happy.
Yeah. Especially with a second channel where he can upload longer form vids that had to be cut for time.
*Yeah*
nope daily uploads obviously 🙄 coding isnt thaaaaaat hard right
five?! i think you're getting a bit too ahead now
Yeah it sucks that this type of content is going to take forever to record and edit no matter what.
Code Bullet is one of the FEW RUclipsrs I will likely never unsubscribe from. Because even though he uploads very infrequently, each video he makes is an instant banger in my books.
finally, the king has returned. i hope he doesn't disappear for several months again
Your right, it's not gonna be months.... It's gonna be years
He probably will
the jump king
He's done it once, he's going to do it again.
Casually Disappears again
i love how he put the incentive to go down to take the optimal path, only for the AI to largely completely ignore it and instead make a herculean jump to the upper platform
the funny thing is, the ones who ignored it seemed more successful than the ones who took the intended route
It's pretty interesting how similar they are to people
up was good after all...
I'm curious to see whether the AI's inputs could beat the actual Jump King game. Basically a test of how accurate your recreation of the game is.
It's not. Minor differences between the game and remake would snowball really quickly. I highly doubt he can get past first few screens.
or find what is the fastest time to speed run
Probably not. See things like the glitched jump at 26:00
So why can’t he make an AI to play the original jump king?
@@justonefra Is that an impossible jump in the actual game? I see him bonking the wall a bit but jumping straight up while the wind is blowing should let you get up that way shouldn't it?
Hope everything is okay sir code you are the reason I code I would mourn if you aren’t doing to well
It feels like it's been so long since I've seen a swarm of ai trying to figure out a game codebullet is crap at, it feels like watching the level replay of super meat boy
NO WAY AI PLAYS SUPER MEAT BOY NEXT??
“Anger management has never been my strong suit” *chooses to write code for a living*
@nieooj gotoy I think you meant to put this as a comment and not a reply
anger management is something that is gold in this field haha
@@Coolbi97 lol
He's a masochist or he just enjoys anger? Hmmm... 🤔
@@Ken_neThT yes
Seeing him fall all the way down to the start really makes you appreciate how JumpKing is actually really generous with where you can faceplant and not fall like 5 levels
The original yes. The DLC’s not so much
@@juvesidc5960 Wait. *It can be WORSE??*
@@enderdrane much worse
stockholm syndrome
@@enderdrane the DLC is downright sadistic
14:42
I love how the AI actually does get better each evolutions, the longer the AI tried this the less AI tried to go to the left side and tried to skip, because it is not possible more AI tends to go to the right side by each evolutions and started doing better after realising that it is a waste of moves by going to the left
This reminds me how its a valid evolutionary tactic to just throw as much genetic material out into the wind in hopes one of them survives long enough to also throws genetic material out into the wind.
The ol' spray and pray
@@enigmabloom judgement NUT never ends
Yes indeed. I too value the doctorine of the great Soviet Union.
Insects be like
Pro tip: Do not actually throw your genetic material out into the wind.
Court date: pending.
3:08 So what we are seeing here is actually what allows Mario to clip through doors with BLJs in SM64, if the square moved fast enough, it would blow right through the line because it's checking for collision frame by frame, if there isn't a frame where collision happens, then collision CAN'T happen.
Again at 9:41, character reaches basically escape velocity and they move fast enough to just teleport on the other side of the collision wall.
Thanks for the explanation
Happens in halo infinite as well if I remember correctly, lot of speed runners use it to OOB
9:41 it moved so fast that the next frame was drawn out of the supposed colission line right?
3:08 the collision line was inside the character during the refresh...?
@@boingboingresearcherph.d.2871 It did, the whole player was suddenly at the other side of the line so no collision happened.
And the collision line was inside because at first he didn't code the behavior to snap back. Many games snap back based on direction so if you change direction at the exact same frame you snap back inside the collision, this is used to clip through walls on the Super Mario from the NES. On SM64 I don't know the whole thing, but i would guess it was similar, though in that game i would say it's near impossible to turn on the same frame and it calculates the collision on four steps from the previous to the new position so you would have to be a lot faster, TAS runs manage it due to weird glitches that allow them to go several thousand times normal speed.
Nice explanation, thank you
I love watching the little waves of AI for a few generations kinda all ending up in the same place, knowing they’re just looking for the most Up, before one of them makes that next jump, and then they all flood that way. It just feels every time like that one guy went “YO GUYS! THERES UP OVER HERE!”
"up? up? up? up?" *the ais then start flooding*
@@LeivenFrestea Mine? Mine? Mine?
More like "I guess you guys are ready for this yet, but my kids are gonna love it."
"smells like up over here"
It's kinds how that platform game in crab game works with real people
I found it very amusing that there's almost always the *one* AI that somehow manages to clip into the terrain and get stuck falling every time. 😆
Imagine the princess’ horror, as she sees hundreds of TV headed knights, hopping up the tower to save her like a locust swarm 😂
Being saved by a knight is overrated
Being rescued by a *swarm of locusts* is where it's at
And then see as like half of them proceed to jump over her and yeet themselves off the ledge behind her
Just imagine all of them sounding like the text to speech bird
"Becky lemme smash"
"You want sum fuk?"
"Lemme smash"
Jump King: "Return the Princess, or suffer my Curse!"
Tower: "What is your Offer?"
@@AzureToroto King Rameses!
*THE MAN IN GAUZE! THE MAN IN GAUZE!*
Watching a hundred lil baby codebullet jumpmen is therapeutic 😌
not for me, man the multiple small green heads just gave me anxiety
500* babes jump to their death
@@xiChann you took the first reply tsk
@@TheBluePhoenix008 xD, I grant you the Honourary 1st Reply 🙇
Oh, i know you, you are a dude howtoMHW and more :D
Something I've thought about for projects like this is, rather than incentivizing the AI with manually placed collectibles to encourage the correct path, having a 2D success heuristic that rewards it both for height gained, as well as what area of the map it can uniquely traverse without falling down a level, to encourage it to explore upwards
Then tweaking the ratio of reward for up vs reward for exploration until there's a good balance that encourages constant upward climb
intreese's puffs
yeah he way overfit the data
Hahaha but i think that the overfit also is the joke here
Or perhaps 2 separate Explorer and Exploiter subpopulations, each going for novelty or performance respectively, with members that surprisingly perform well in the other category's metric cloned to that category
In reinforcement learning this is what is called "exploration and exploitation". Maybe that could be done by changing the mutation rate of the evolutionary algorithm using the Robbins-Monro algorithm.
there's something so satisfying about watching a million little jump kings move around
Bit sad that there weren't any listing on songs used. So here's mine of the ones I knew previously and could find with a little help of Google Sound search.
5:29 Megan Wofford - Inspiration
8:45 Arc De Soleil - Train of Liberation
10:06 Lupus Nocte - Hadouken
11:30 Zorro - What the Dogs Can't Have
13:49 Trevor Kowalski - Keep Up This Time
16:04 Christoffer Moe Ditlevsen - A Pure Force
17:41 Lupus Nocte - Hadouken (again)
19:45 Eveningland - Hyperspeed
21:35 Digital Camel -Sad Life
23:48 Zorro - What the Dogs Can't Have (again)
26:19 Diamond Ortiz - Mirror Mirror
Godsend
Thank
Nice work, always great to know this
well 26:19 is not diamond hertz but "Molly Hemsley - i am trouble"
@@alexanderklee6357 I might have missed something as some songs used i weren't familiar with or glossed over it think it still was part of the previous song.
I'll edit it in the morrow.
“The main problem with walls is they’re not floors”
Love this quote
hmm, yes, the wall is made out of floor.
@@Zootaloo2111 well yes but actually no
Is the gingerbread man made of house? Or is his house made of him? He screams because he does not know.
it would be cool to see an AI with a more complex reward which takes in other platforms rather than just prioritising 'up'. then after a few levels the AI could get through in less and less generations rather than starting from square one each time
It doesn't need a more complex reward, it just needs more input data then just score. (it's own x,y location and information about the map)
It's starting from square one because he's not building a robot that knows how to beat this game. He's building 44 robots that know how to beat one level. I doubt he is even syncing them into one fluid action... There is no truth about how to beat the game based on platform distance. It just is 44 separate recordings of random inputs. Pretty lame imo. I'd enjoy it but I wouldn't upload it. Have fun CB
@@littlegandhi1199 homeslice the chance of beating jump king with just random inputs is nearly impossible even on a per level basis you can clearly see they prioritise going up rather than just randomly jumping around
@@Polareon They randomly send inputs and are rewarded when they go up. They have ZERO brainpower.
Wouldn't "next screen" be the better reward? Like "If x moves fail, increase move count untill success" and just have it add more iterations for vertical height as well to brute force past the height limitation and/or remove iteration numbers at all and/or write some pathfinding code.
9:45 bro literally made a particle accelerator
18:16 Interesting thing I noticed in this room, some of the jumpers actually made the shortcut to the higher ledge, but because you placed the "collectible" item to encourage the AI to go down and then back up, that path ended up being ignored mostly, until on a fluke one actually made it without collecting it! xD
It was built differently
It shows that off in the final run too!
@@misterree yes because that path was chosen, since the fluke completion of it got it to the next room
25:21 I like how the AI just casually ignores the checkpoint you set in the bottom room there and proceeds to take a more optimal route.
Ah, the memories of doing collisions and finally accepting that looking at just a "snapshot" by checking intersections is doomed to fail eventually (low fps or fast objects). While the math is a bit more complicated, you should try to find the intersection as a function of the actual movement, find the one that happens first, respond to it, adjust your velocity and recursively repeat the process until the full movement for the frame is "used up".
Considering it's about the AI and not pixel perfect collision physics, that's likely overdoing it, though.
Only understood about half of that, but that sounds cool.
Yeah I've done the whole "I want to reinvent the wheel and code it from scratch all by myself". Fast moving objects absolutely gives me "back to the drawing board" flashbacks.
@@ViktorSarge On the other hand, doing things from scratch at least once can give a better idea how they work.. and will greatly increase your appreciation for existing libraries and engines. When I tried doing ellipsoid collision in my Minecraft clone, moving over the edge between two blocks at very high speed would make you bounce around the landscape like the Hulk. Floating point numbers are not our friends.
@@TheTrienco That's true! I did some light C++ at uni in the 90s and the manual labor required to do network programming really made me appreciate Python urllib later in life :D
@@ViktorSarge And at least until the next standard, there is still no networking in the standard library. I remember a Java assignment at uni to connect, download and display a HTML page. Coming from C++ I thought it's absurd to do that in one week (implementing the http protocal, a HTML parser, a renderer, etc.). Turned out to be a couple of lines, since everything is already there.
It's been 6 months without bullet content. I have started scratching my fingers against the walls until they bled and licking the blood from the walls until the walls where clean again. Bullet content is my food. Bullet content is my water. Bullet content is my warmth. I am cold, hungry, and thirsty. Feeeeeed meeeeeeee
I always like seeing runs done by brute force ai like this.
The final ai is always amazing during the absolutely intricate parts of the game (like perfectly timed jumps on the diagonal ice) , but just dumb when it comes to the basic easy parts lol
It's actually an evolutionary algorithm, even more interesting.
@@mattiabilla it's an evolutionary algorithm that narrows down the final path by brute forcing many options until a generation succeeds better than the previous.
Its an ai that doesn't learn how to play the game, only which random moves get it further.
The ai isn't learning things like, what jump patters it can know, where the map exits are, efficient routing, it doesn't actually learn how to play the game.
It's just brute forcing every possible option until you get the combination that gets to the end of the game. Which results in the hard parts being done perfectly, and the easy parts being done like a 2 year old.
The best way to say it is. This type of AI does not learn to play games. It learns to beat them through brute force.
@@coreyfrank506 well, we can say that the model has a high variance, however all the AI algorithms aim to find the optimal solution by moving through the feasible region according to a strategy, in this case optimizing a fitness function with a GA. Brute forcing is just an inefficient and different strategy.
"I am on top of a mountain, and I have no idea how I got here" -some ai probably
this makes me want an AI speedrun competition, where the AIs of multiple devs try to beat a game the fastest
i'm not even sure if this can be considered a competition or if the AIs will even behave differently but it'd be cool
I think we just call those TAS lol
This AI is just running a set of input instructions for each bot, what stops someone from just hardcoding inputs?
@@SwingRipper the game could have randomness so that hardcoding wont work.
@@SwingRipper a TAS is a tool that allows people to do things frame by frame and then record those inputs and play them back, there's still a player that knows all of the strategies and movements
i guess my idea boils down to one of those marble race videos where it's pretty much random but it's fun to look at and root for your favourite colour
You'd have to make the game almost impossible for humans to be difficult enough for computers to compete.
Nah I don’t think it will be that much different between AIs. At least in a game like Jump King. The devs would just have to make thousands upon thousands of gens to have the most optimized run possible and the runs will ultimately be the same no matter which ai is playing.
However, having devs compete in a limited time-frame to code the most successful ai would be very interesting since some of them would definitely try unique finnicky ai learning methods instead of just the rinse-and-repeat used here ...
It looks kinda like they're trying to learn each level, and then the first few generations of the next level are trying the same thing as the last level, before mutating enough to find a way. I'd be curious to see how much better it would fare if you used the algorithm that learns as it goes rather than generational, if it would get faster as the levels progress
Each level is so different that treating each level case by case is probably more efficient than deep learning
@@amaurysaint-cast7039 makes sense, but it appears as though the AI can't "see" anything except the altitude value. In previous videos he's included entire vision systems of ray casting for vision, but here he just seems to send it blind
@@amaurysaint-cast7039 isn't that the way human also solves it ?
@@monad_tcp Not exactly. While people can only see one level at a time too, people can apply jump mechanics learned in previous levels to the later ones. People are capable of seeing the whole map and understanding "I need a big jump on this distance but a small one on that." Meanwhile, the AI in this case is running pretty much entirely blind. The only thing the ai sees is it's height and the "incentive coin" that was placed to show the AI hey go down to complete the level. Essentially, the ai is brute force jumping around until it hits the right answer where a person can actually plan a route and set of jumps.
@@NerdyGirlTay so that's why I wasn't able to beat platform games. Damn, I Am an AI and touch my behavior was human so all humans were playing like that.
0:25 I was expecting it so I am watching with sunglasses
Holy shit code bullet uploaded a video. He is a few years ahead of schedule this is amazing
I am a software engineer and let me say that making this 27 minutes video took a LOT of hours, effort, trial and errors. It is not just re-making the videogame itself, but the machine learning work is also VERY time consuming and a lot of tries to have all variables configured correctly and ALSO make it really entertaining. I enjoy every video you make, so BIG APPLAUSE to you sir. GREAT VIDEO, KEEP IT UP !
Are they rly learning how to behave in situations or is this just almost random movement and if success then add to the best score kinda like?
@@kippe1221 yeah the method he uses keeps the best result from the random actions
@@gumbo64 dang thats lame then
yea this has nothing to do with AI and especially not machine learning.. it doesn't learn anything. It just keeps the random inputs that work and at the end strings them together. Literally anyone could code this "AI". It's just pure random brute force.
@@RitaTheCuteFox dont say that, this powerful AI might just wake up one day and hunt you down, terminator style.
Nah seriously, this couldnt pump water if it was meant to
I love how the final run looks like he's just doing random jumps and accidentally winning.
I mean, thats exactly what hes doing. Im not sure this could be considered AI, its more like "brute force" because its trying every possible sequence of jumps until it finds the winning sequence of jumps
@@Kevdama1 Yeah if it went through however many generations for the final one to be able to complete the whole game then you'd have an AI that learned jump king. I just don't think we're there yet so just stitching together the one successful run of each stage of the game is the best you can do. Jump king just seems like an odd game to do it with because the entire point of it is having to do everything perfectly or else face huge setbacks, yet the AI in this case just repeats the same stage over and over until it makes it past.
the AI takes "If at first you don't succeed, _try, try again"_ to inhuman levels.
@@sticks7857 probably could make a perfectly optimal, or near perfect run, the AI would probably need to be much more complex and train for much longer, I think this method cuts down on the raw processing power required
Isn't that how you play the game 😂
The little time lapse part of your videos always help when I'm feeling depressed or anxious. I don't know why. They're just satisfying I guess. So thank you.
Code Bullet so entertaining and informative. I love the videos on this channel! I LOVE the little details that he adds. The beret while talking about art, the code bullet 2D pixel character, his dance while the song is playing. Come on, so good!!!
For a guy who always talks about being lazy and taking shortcuts in his programming, he certainly goes all out on the editing
ruclips.net/video/wgYWqYlyhvU/видео.html
I like the unoptimised run. Shows that evolution isn't the art of perfecting something, but the art of doing something _juuuust_ good enough to pass.
Motto of my life
The absolute best part of the gameplay time-lapse is how each generation starts as one little dude for like a frame, then immediately erupts into a swarm of them that covers the screen lmao
It's hypotonic
10:25 the fact that the snow is part of the background so the player draws on the snow instead of behind it is so cursed
The AI learning montage is so intimidating. Just imagine seeing a bunch of coding bullets clump a tower like that
World War Z
@don't be surprised Yup...finally the bot invasion is here...
popcorn
His debug jokes are acually VERY accurate!
I know from briefly working as a coder that 2/3rds of the time spent making a functional program is to make it functional*
That asterisk is very needed. Functional*, not functional.
@@unma5253 I'm not a coder so what does the asterisk mean
@@freetousebyjtc That the program does what is asked of it, but if you look at the code and how it does the task, it's full of band-aid fixes, failed shortcuts that took longer to fix than to do the original coding, and comments that nobody understands the meaning of because the code is so mangled after all the 'fixes'.
@@alkestos then you pass it along to the next group of coders who have to spend the first half of the project figuring out wtf the last group of people were thinking, just to repeat the cycle😂
@@alkestos you could never make pain more laughable if you ever tried than what you just said here xD
It's so amazing watching them slowly get through a difficult section
You can see that they're all falling and failing... but wait, one hero made it through! And then two, and then ten, and then they're onto the next level!
Inspirational.
Dark mode CodeBullet might be the best idea he's ever had.
"This works most of the time... but let's just round that up to all of the time and move on"
Every developer has said this. Fact
Every developer has also lost 3 days 8 months later trying to fix it :D
E
Gotta love tech debt.
@@CrystalStearOfTheCas suspiciously specific
@@daniellima4391 nah, he explained why it's so specific. Cause ever developer does it
you know what would be awesome? an AI speedrun competition where everyone makes their own ai and has to train them and then try to get the world record. how often would they beat each other? would there be a limit? would they at the end, try to completely break the game apart to somehow teleport to the end? what crazy things would happen?
edit: Holy shit guys i didn't knew so many people would like this idea. this was meant to be a random ass comment swirling around in the comment section never to be seen again.
They would hack the human race
The most advanced one will just stand there and show its middle finger when they'll hit play
I feel like you could make a tas and that would technically be considered an ai... maybe with randomly generated levels though? That could actually be interesting to watch
Soooo, Pokemon?
The problem is that you know some are gonna train their AI to the point the AI will always find the most optimal solution, and nobody will ever win.
AI races in set levels won't work. AI vs AI is only valid in super complex scenarios where they directly compete against each other and try to outsmart each other. That's why the OpenAI team is specifically developing an AI for DOTA 2.
The fact that Evan actually took the time to sync up the AI clips to the music is extremely impressive. Well done, sir.
It isn't synced. It just looks that way because the AI has so many movements that any of them can be interpreted as being in sync with the music, so that's what your brain does.
@@LeoStaley Yeah, I definitely think you're right. But you have to admit, if 18:17-19:10 is a mistake, it's beautiful. Also 22:17-22:27.
If there are 44 levels in the game, and if there were a total of 500 AI characters in each level, then that means that over the course of the game, at least 22,000 characters participated.
The sad part is that the vast majority of these 22,000 characters failed when trying to beat the game.
Edit: This math doesn't take into account the number of generations the AI had to go through to develop the correct path. @Golianxz Original replied down below with some better math on the total number of bots that potentially failed.
Sounds about right for this genre of game, tbf.
Since there were 926 generations, and assuming that there were 500 characters per generation (I don't really remember if that's true), then the total of victims would be a bit under 463.000, considering that some of the last generation succeeded.
Worst of it: Some of the "clever" ones might have died on the way because others were simply more successful in THAT particular level. You never know if the final bots actually were truelly "dumb", but jumped at the right time without aiming for success at all... If you catch my drift.
@@M3dicayne They were all dumb. Machine learning is slightly intelligent brute forcing. There's no intelligence involved beyond "kill the ones that don't perform"
@@M3dicayne Is the AI even capable of seeing? I think this is just hit and miss, and remembering exactly which buttons to press when.
CB: "I added collectables so AI will go down as some optimal paths require it"
AI at 25:20 **proceeds to make a swag strat and ignore the path down**
Do you know if that's a possible path in the original game?
@@linnickschlanter4712 idk... unfortunately I'm not this kind of gamer that gets over it...
@@linnickschlanter4712 yes it is! At 1:45 in this video m.ruclips.net/video/LZlo6TzL7N4/видео.html
@@linnickschlanter4712 It is possible and is actually used as a speedrun strat I believe
@@linnickschlanter4712 Just checked and yes, it's possible in the original.
I think the part where the AI is learning is fascinating. It makes me curious about how this would have gone with that type of AI exemplified by the race car that learns to recognize the walls and avoid them. It seems like this AI is always randomly guessing 500 times. Is it possible for the iterations to get smarter jumps each generation as well as rewarding them for simply getting higher?
This AI isn't really learning anything except a single route to get as high as possible. It's actual intelligence is fixed, and dumb as a rock. It has no idea what it's doing. It's basically button mashing and the single best route of each iteration gets immortalized and copy/pasted over the other 499 routes.
It's possible to make AI's that actually get smarter, but it's far more difficult. Stuff worthy of science papers.
edit: I forgot the part where he made it favor going higher or getting 'tokens' when it has to go down to advance. The AI isn't dumb as rocks. More like...an amoeba?
Also, I should point out that this doesn't mean CodeBullet's AI is bad. If you've got a nail, a hammer is perfectly fine tool. Renting a 50ton excavator to nail your birdhouse together is all kinds of absurd. So long as the results are acceptable who cares how jank your tools are?
Yeah, it would be nice to see an AI that generalizes for unseen levels. But that's more difficult to do. It's still entertaining to see the dumb AI, though. It always gets me thinking on how it could be improved.
I wish he made it more clear what exactly the AI sees. What are its inputs. Like he did in older videos.
@@LucasPossatti I was also wondering about that, but I actually suspect the only thing this one takes as input is the current level and current "score". IE, the bot has a fixed "strategy" for each stage, and then starts completely over with blind button mashing for the next stage. It's not analyzing the nearby geometry at all.
@@FiltyIncognito not sure what your point is? the goal of the ai was to learn how to navigate this specific character in this specific game through these specific levels, and it obviously learned how to perform this task better over time. its just not generalized to play all games as well. maybe thats what u trying to say?
I think thats called machine learning
Im done with this abusive relationship code bullet
What?
Huuuuu.....excuse me what
@DeadLiftGaming-wr7ln the joke is Monika is an AI in a horror game called "Doki Doki Literature Club" please trust that I'm not on crack that is the actual name of it
@@Giant356 I know the game. I just didn't get it at first.
I thought the dark mode was actually going to last only 4 seconds but I'm happy it stayed
I love videos like these where you show more of the actual programming and small obstacles. It's fun to pause the video and look at the code you just wrote to see how you actually implemented it all
Almost every coder I know has made at least a few jumping bodies for various project. But it doesn’t matter how often you do it, how good of a programmer you are. Never has a jumping mechanic been made right the first time.
E
I even screwed up programming one in an RPG Maker game I'm creating and that handles the coding for me...
can't imagine actually coding things to jump....
@@DarkFrozenDepths you know that whole “an object in movement will stay in movement unless acted upon by a force”? Same is true for jumping bodies in your program. Just that you need five different forces acting on it and the hand of god pushing it down for it yo actually work
The song in the falling montage is Haduken by Lupas Nocte.
Absolute legend. Been looking for this song for so long
@@jms747 np, the artist does some great stuff
Thank you so much man, the song wasn’t in the description and it was driving me crazy.
@@MeroVPN np man, happy listening!
18:38 the first A.I to make it to the next floor was the one that just brute forced his way into that top jumó and forget the collectible. He was like “Gentlemen,I may not have a brain, but I have an idea…”
hahaha true
“Gentlemen,I may not have a brain, but I have an idea…”
hahahahahah awesome
At 25:59 the A.I used a glitch to get up to the next level xD
@@PaysChannel you mean wind?
@@MrLachek no, it basically glitches and double jumped through to the top of the block. not sure if that’s intentional game design with the wind though
Honestly, imagine doing this, but with 5 AI each with a different colour model, no generation resets, just five dudes trying to figure it out, and then streaming it on twitch as a gauntlet, make a poll to place bets, let it run for however long it takes,
That'd be fun as hell
You could come back periodically and check how your guy is doing in the race, chat with folks about the progress
Man I'd love to see that happen
this
or, well, at least something like this
@@aiyumi6747 not sure if there was supposed to be a link attached or if you're talking about the video, but yeah, i mean it'd take way longer but then you could build a community around the event in that time
5 AIs doing random brute forcing would be boring so no
Unless he codes 5 actual evolving neural network AI, but that's too hard so not happening
@@aeaeeaoiauea dude you say that but there was a really long stream (idk how long but months if not years I believe) where a fish just played Pokémon by swimming to different parts of its bowl and it was completely random and it eventually won and a lot of people watched it
@@JustFiscus Would that still be popular if the fish didn't exist and it's just a random number generator instead?
As someone who has spent a few years learning and writing AI and Genetic Algorithms from scratch, i'd like to point out that the reason it looks like the "AI" is brute forcing its way through, is because it is. Genetic Algorithms and AI's are not the same, they may give *initially* similar results, however there is a massive difference. An AI is actually learning, a Genetic Algorithm is memorizing.
You can think of it this way, an AI is built in order to generalize a data set and label it correctly, and it should in theory be able to do this for any new data you give it. A Genetic Algorithm however, is basically an optimized random function, allowing you to find the optimal labels quick for a specific data set. However it is completely unable to label new data without another round of learning and brute forcing.
I'm in no way dissing this video, I actually really like it and love this channel, however, with the way the code is functioning from my perspective, it seems like its not an AI and only a Genetic Algorithm, which is why its "brute forcing" because its literally just memorizing the game and the best moves, whereas an AI would actually generalize the game, and theoretically be able to solve new custom made levels.
Note that this could be using a Neural Network that trains using a genetic algorithm, however I don't believe this is the case as the amount of error in each level is very similar, the character makes multiple bad jumps and continues to do this throughout the levels in a very repetitive fashion as if it doesn't actually understand what a level is and what it needs to do.
Came to the comments for this. An AI for this game could be shown a new level and *probably* get through it a majority of its attempts. (If it was good, haha)
Not to trivialize this in any way though, I get that most people won't know the difference between "AI" and genetic algorithms in a RUclips title/thumbnail. And, great coding and teaching.
Agreed. You could literally build a physical analogue to the game, then randomly throw hollow plastic counters upwards from the bottom of each section, keep the counters that land on a higher ledge whilst discarding any that fall back down, repeat the process from the position of the highest ledge reached from the previous throw each time, and you'd achieve the exact same result.
The coding aspect is completely irrelevant. You could demonstrate the same outcome using pieces of paper, dice, ball bearings, etc.
@@sampfalcon5128 I agree, what is done here is really cool, and a great way to get people into the field, I really appreciate it! Its always nice for a deeper understanding and clarification however, and I'm happy I can help with that in some way!
That's true to a point, as some levels you have to actually go lower in height to get higher at some point, however there's an easy adjustment for that. I still think this video was educational to those wanting to learn how people actually train genetic algorithms and AI's on games. So I can understand the simplification in order to get across another important aspect of this field.
Deep Learning is only a subset of machine learning and machine learning is only a subset of artificial intelligence. There are many more branches regarding AI like natural language processing for example. It's not easy to answer what's a true AI and what not. Some only consider it an AI of it learns, others believe you don't necessarily need to learn to show up intelligent behavior. Some believe an AI should generalize as much as possible, while others think an AI could also simply deal with a specific scenario in an intelligent way.
It's hard to draw the line. Obviously you couldn't call a sorting algorithm an AI for example. But can you really say you need to "learn" to do a task in an intelligent way? Essentially machine learning methods are also simply algorithms.
Though on that regard I don't have enough experience to form an accurate opinion.
I watch your vids to fall asleep a lot of nights so I very much appreciate the dark gray background
I think it's amazing how you manage to show the bugs you encounter so visually appealing - like the one where you said adding the extra mechanics was basically a peace of cake with no issues and then show the jumping through walls and infinite accelleration, I love it
what a fine youtube channel, I sure do hope he keeps a consistent upload schedule.
in all seriousness, I kinda missed your videos, but I get that it takes a lot of time to make them so it is fine. I mean, we wouldnt miss them if they were some quickly pieced together crap instead of masterpieces they are currently.
"Code bullet makes a ai to make content"-code bullet
Same brof
@@ChangedCauseYT-HateFoxNames brilliant!
So many things by season season'd are to their right praise and true perfection. 3 months for one video isnt too bad
What do you mean? He is consistent. He uploads every 4-5 months :)
I wish I was smart enough to come up with this instead of playing. 😭
Now you gotta create your own jump king and beat it
New DLC when Connor? 🤣
But then so many Deez nuts jokes, postal delivery lore, and zookeeper positions would go to waste.
Cdawg nice 👍
But then you would never have learned the significance of Kenya
18:30 it is interesting how some bots where able to get to the top without going down (check 18:34), but then failed to get to the next map, due to collectable forcing them to go down.
1:23 has the same vibe as Micheal Reeves saying “ and that’s how you turn a 5 hour task into a 2 month task, because I’m a programmer”
No no no, "because I'm a crackhead"
The way the evolutionary method is implemented here is really essentially searching through a graph of moves. This works marvelously since the levels are pretty much deterministic. I wonder how different it would be if each player had a sense of decision making.
I think if they could "see" the screen, they would learn much, much quicker, because they wouldn't make random obviously bad jumps wasting time, and it would get more efficient at learning stages over time.
The way this is made, if thrown at a DLC it would be starting from scratch, but an AI that can "see" would be able to implement everything it learned, and even improve old paths it took since it would then know better.
@@professorpwerrel i got the impression that the blizzard is just a constant force. I don't play Jump king so not sure.
But yeah if it was non deterministic and was changing each time, the evolution of moves might not work.
It would still have a chance to work by adopting a strategy that is minimally affected by the wind.
@@IlluviumGaming Seeing wouldn't really fix anything other than remove obvious first jump deaths. The AI would still be blind to any jump beyond their first as they wouldn't see the 2nd jump until their position changed to the location given from first jump. Even "planning" out moves wouldn't really work as that is basically what the simulation is doing from the beginning, just without any first hand checks for death moves. Simming a Sim to sim a sim is just a turtles all the way down kind of issue, sometimes you actually have to go and do.
What seeing and knowing the blizzard timing would do is just narrow and expand the range of possible moves for the generation involved. The moves of the prior generation or possible moves of the future gen literally do not matter to the current generation.
@@Arrek8585 Seeing teachs the AI to measure distances and calculate if a jump will work prior to jumping, and the AI would improve its calculation abilities through generations. His AI is literally trying random moves until it brute forces through the level. A skilled player would be skilled in a DLC, just like an AI that can see would, whereas this AI would suck just as much at the DLC as the original generation did.
The simulation is not planning anything out, it is brute forcing similar to a TAS, finding a chain of inputs that goes upwards. That's why a skilled player would do better than this AI on a blind stage, the player learns the behaviors, not just randomly trying stuff until something works.
@@IlluviumGaming so you suggesting to use CNN?
Me when I started watching this channel years ago: How can he call himself a programmer if he uses so much code from other people
Me after starting programming in college: How can he call himself a programmer if he uses so little of other people's code.
Me 2 years into my degree:
Why in all hell is this a popular opinion?
Why reinvent the wheel when there are so many open source wheels available, right?
@@michaelmoore2679 Frameworks that have (hopefully) been tested and had engineering methodologies utilized in their creation? Sure! Random code snippets from stack overflow? Absolutely not.
I would love to see him make an A.I to play and beat getting over it…
Multiple videos in ONE YEAR. We are truly blessed
Two things, first off I seen about a dozen of your videos and this one is the best edited so far, great job! Secondly when you jumped off the top to show us the whole game level by level and the music started all I could think was let's start super fast build mode
Another possible solution: Treat it as a pathfinding problem, and just draw up a map/graph/tree that records for each position and move which position it leads to. There seems to be a limited number of game positions in which a move is allowed (to first order just all the pixels representing floors), so that would hopefully be a manageable graph and it would spit out an optimal solution.
…all the pixels?
MANAGABLE?
Instead of all the possible positions you could generate a smaller map with random moves. Can’t remember the name but you can do say 5 random moves, then 5 another 5 from each of those and keep going as far as you want. Slap an A* search on the path options and it would probably be pretty quick.
If you wanted to design a TAS, then you would have the costs of the edges be the number of frames it takes to walk/jump between those pixels. Then do a breadth first search (the smallest cost path gets extended next), but mark each pixel once it's reached and prune the exploration candidates that reach an already reached pixel.
Also prune all search paths that have not reached the next screen, once every pixel on the next screen has been reached. Eventually a winning pixel will be reached and that path is an optimal Speedrun. (There may be more than 1)
@@lordfelidae4505 The way the AI moves here is that it only moves when the character is standing still on a horizontal surface, so that cuts down on the number of potential positions by a lot. Though even every pixel may still be manageable.
A little more work would be needed than just using all floor pixels. Screens with wind would need a different location per pixel for each possible wind velocity vector/frame in cycle. For floors with ice, each pixel would need a different location per player's velocity vector.
Cant wait to wait for another 1-2 years for 1-2 more videos
Watching the final run with the winning AI is REALLY cool. Because as the game progresses it gets more and more competent as it goes. Towards the end it starts to look deliberate rather than just random guessing.
The reason the AI looks more competent at the end is because there are less viable pathways. In the easier early levels, there are viable pathways including stupid looking maneuvers, so it has to "look" smarter at the end, even though the attempts made to get there were as random as the earlier ones.
I love how you re-did the game, but I can't help but note that it looks like your AI is brute forcing it's way through the game blind. Maybe you could make another video where you give the AI the ability to "see" all 360 degrees and get distance and collision angles & surface friction, as well as distance / angle to next token. That way you could train an AI that would be able to be given a new map and do it quickly, instead of an AI that can only beat this map. Might be fun! That way you can re-use the game you spent so long re-creating and make a whole other video out of it.
probably since that'll take a bit more time figuring out and coding it when while brute forcing takes a lot of time, he can just sit back and enjoy doing other things like the master procrastinator that he is (which isn't necessarily a bad thing)
You sir clearly dont know code bullet
Dont get me wrong i would love to see that but he isnt doing it
i personally think it's fine to make the AI overfitting if the game only has one level and the purpose of the AI is only to beat that level.
@@FatherBilly he has done neural networking AIs before. I think it'd be interesting to see one figuring out this game.
sounds like work
I just played your version of JK, and there are some little things off: The short jumps seem to be a little bit to strong; the timing needed to jump short is off.
And there is no max-jump-auto-release. Bouncing of walls, especially head first into a ceiling also feels weird.. but yeilds nearly the same result as in original JK.
I have beaten JK Map#1 fall-less.. and I will try to do an (at least) sub10 minutes fullrun of your Version of the game soon.
So glad there are creators out there like you. Entertaining and fun! Thank you!
Dude the fact that when one AI gets to the next spot, that next generation or so swarm that spot like ants makes it so satisfyingly cool.
i love how even this time the AI somehow abused the physics engine (26:00 wallclipping)
agreed, though I don't think it is wall clipping, but move (ab)using the snow storm to be blown back to the wall after bouncing off of it?
12:40 ish there is one that gets stuck in a box
@@xaviersurman8400 That's why he said bug fixing time.
Seeing so many Code Bullets’ jump to one point in existence is nightmare fuel, but watching the evolution process and seeing the AI learn is fascinating. As always, a deadeye on ingenuity.
It looks like a swarm of frogs just jumping around it’s terrifying
It really doesn't 'learn' just brute force its way to the top, the generation that clear the 44th floor isn't any better that the generation that clear the floor 1. If you were to put the last generation in the first floor it will fail miserably. But still an amazing video :D
I suddenly thought "woah what happened to code bullet, I hope he didn't quit. When was the last time they posted a video? 6 months ago? Phew, a video should be coming by in another half year"
You have to admit.
This guy has a consistent vanishing act
Always
He’s like my daaaad
Whoa, didn’t quite expect jumping into super fast build mode at 10:05, but I’m digging it 😂
Kinda fits the theme 😁
I was looking for this comment😂
Well hello there my fellow jumpers and coders.
I feel like I want to terraform my room now
It's actually the super fast grind mode song, but they're very similar songs from the same artist :)
As a JS developer, I'm am always impressed with your implementations of the game themselves. The neural net stuff is interesting and fun to watch, but watching you actually recreating the games is what keeps me coming back. It also makes me feel bad about myself, even after doing this professionally for over a decade 🤣
Writing a game and designing one from scratch are two different things.
Its like comparing a construction worker with an architect.
To be honest though this is a very very simple 2d game with simple physics. The only complex thing with this game is the art
@@doufmech4323 which he did *flawlessly*
@@kinkajoulegend9989 agreed
E
i enjoyed the dancing code bullet during the "drawing the lines of all levels" immensely.
The imperfections of the final run was better to watch than a perfect run. Nicely done Code bullet see you in the summer hahaha
Ikr? It felt like an inexperienced Jump King player who's also perfectly precise in everything they do
You have absolutely no clue how reassuring it is to watch someone who I view as an excellent coder struggle to get basic collision working, lmfao. Reminds me that at the end of the day, all software engineers are just beating their heads against walls, no matter how smart you may be
Beating our heads against walls by finding where the wall is and moving our head to be outside of the wall.
@@totalmetaljacket789 which is a problem, because they are not floors
collision are not fun to code ya know
@@tiqosc1809 ya, I actually usually just use primitives so calculations will be easier and indexes so that if it's index is an index of something collideable than just don't allow it to continue in the direction of the primitive or set its position to just outside the whatever primitive shape it is That you got using some easy maths and even with that many simplifications it's a unoptimized nightmare and makes performance shit most of the time
@@basicallytutorials2107 what are primitives?
25:20 interestingly, your AI avoided your 'coin incentive' and instead chose a more difficult path. always so cool to see how evolutionary algorithms manipulate the worlds we make.
The AI doesn't know that it need to collect the coin until it actually collected it. The only thing the AI was told is basically "hey, do anything that make the funny number go up". This means that it doesn't know exactly what it's supposed to do, rather it performs random move and see if the score increases.
What happened there is that some bots randomly managed to skip through the intended path by chance, so they get better score thus they stick to that strategy. Of course some other players still found the intended path but they'd be performing worse since it require more moves.
I am glad that in the 8 months that youtube didn't recommend me your videos I have only missed one.