Watch this A.I. learn to fly like Ironman
- Published: 22 Mar 2023
- ...So reinforcement learning is kinda like telling the neural network: "look, I don't know how to do the thing, but you try to do the thing, and if you succeed I'll give you a reward of 5 dollars." So basically like a father who failed at life and pushes his kid way too hard in an attempt to live out his dreams through his child… That got depressing.
Some music by @LAKEYINSPIRED - Science
Seems like adding a facing reward would help stabilize the rotation.
I had the same thought!
but he's asian, so adding a not facing punishment
I was gonna say the same thing, he should have made it hover correctly before asking it to move from point to point.
Or a negative reward for every Spin
Or maybe a directional speed through the point? So more purposeful thruster orientation gets rewarded
you should also have a negative reward for high angular velocities, that way it has a reason to be more still
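The angular-velocity penalty suggested above could be sketched like this; the function name, signal names, and weights are illustrative tuning knobs, not anything from the video:

```python
def shaped_reward(dist_to_target, angular_speed, w_dist=1.0, w_spin=0.1):
    """Pay for being close to the goal, charge for spinning.

    dist_to_target: distance to the current goal point.
    angular_speed: magnitude of the agent's angular velocity (rad/s).
    The weights are hypothetical and would need tuning.
    """
    distance_term = -w_dist * dist_to_target       # closer is better
    spin_penalty = -w_spin * abs(angular_speed)    # stillness is rewarded
    return distance_term + spin_penalty
```

With this shape, two states at the same distance score differently depending on how fast the agent is rotating, which gives it a reason to be still.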
Also, allow more actions than just "turn on and off the thrust of these 4 rockets". If the AI could aim the rockets (like when you paddle backwards in a canoe to turn it), it would have better control over its rotation.
Yeah and maybe adding a time reward so it needs to learn how to improve speed, that might cause it to do more "iron man" like flying
@@nullumamare8660 well i think it does have the ability to move each limb. It might be able to manage thrust, but i'm pretty sure it's only using limb movements.
Wrists need to have thrust vectoring as well as the whole arm 🙂
@@nullumamare8660 This sounds like a good idea, and I think it'd be amazing if the thrust could be throttled instead of just on and off. However, the more complex a model is, the bigger the network and the more time and resources it needs. You saw how good and stable the drone was, and that's because it has the same inputs but only 4 outputs for the engines, while the Iron Man has 4 engine outputs plus rotation for each limb.
Theory on why it flies so slow:
Its original training was based on hovering around one point, thus when it gets a new destination, it still assumes that it should arrive there without momentum to better stay at that spot.
Then it got a little training with randomly moving spots, having momentum there is bad too, since its actually way more probable that you need to turn around than that you need to continue going.
This, along with little time-based punishment, results in a slower agent.
Yeah I would try training it with a list of like 6 points that it has to hit in order. As soon as it hits the first point, remove that point and add a new random point to the end of the list.
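The rolling waypoint list described above could be sketched like this; the queue length and coordinate bounds are made-up values:

```python
import random
from collections import deque

def random_point(bound=10.0):
    """A random 3D point inside a cube of half-width `bound`."""
    return tuple(random.uniform(-bound, bound) for _ in range(3))

def make_course(n=6, bound=10.0):
    """A queue of n waypoints the agent must hit in order."""
    return deque(random_point(bound) for _ in range(n))

def on_waypoint_reached(course, bound=10.0):
    """Drop the point just reached and append a fresh random one, so
    the agent always sees a fixed-length window of future targets."""
    course.popleft()
    course.append(random_point(bound))
    return course[0]  # the new current target
```

Because the window never shrinks, the agent can in principle learn to plan its trajectory through upcoming points rather than braking at each one.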
yeah, also use line paths instead of dots to hit. As it is, the space in between would give it lower reward, so even a model that takes into account future/total reward would not like the space in between.
additionally I think it would benefit from having its senses limited so that it only knows where the target is by looking at it. If it's gonna fly like Iron Man it must also have the same senses as Iron Man.
@@Delta1nToo Yeah, I think that may help the spinning as well. Should probably have some penalty points in there for too much spinning.
Yep, AI doesn't have a sense of time (unless you give it one); as long as it's completing its goals it doesn't care if it takes 1000 years.
give it access to the next 2 points so it can find a vector between them, also give it incentive to be faster
Agreed it needs to be able to see beyond one point to “fly” a course.
Lastly, give it rewards for not spinning, and negative incentives every time it spins
Spinning is only a problem because we think it is. Part of what makes these AI learning experiments interesting is how the system finds solutions without our preconceived limitations. Fixing other factors and improving the flight system could very well fix the rotation problem. Or the AI could rotate in a straight line like a bullet.
@@bryanwoods3373 that’s easy to understand, but in all practicality, if we were going to implement this in reality, we wouldn’t want to spin; we would want to fly straight. As a simulation of Iron Man flying, it should fly like him as well as look cool doing it. If the AI mastered its control it could easily be much quicker and more precise just flying straight. It needs positive and negative flight control incentives, a clear path, as well as a timer to reach its potential.
@BigGleemRecords The video isn't about implementing this into reality. If we were, we'd be using more robust systems that would have more control systems and likely build on human testing or include a human analog as part of the reward system. The spinning is the last thing you want to focus on since fixing everything else will address it.
To combat the agent being slow and rotating, you could add two other negative rewards: every full rotation deducts points, which would likely reduce the spinning to a minimum, and give it, say, 30 seconds to complete a course but deduct points for each second spent too. The agent might learn that the quicker it goes, the fewer points it gets deducted. I think revisiting this with these two additional criteria would be pretty interesting.
Can we just admire how a few years ago AI struggled to play a 2D game and now this. It's really remarkable.
The whole comment section giving Gonkee suggestions knowing full well he can't be arsed to do a follow up video lmao
great vid, thanks for uploading
I think part of the reason it has such a hard time is because it doesn't quite have the detailed control vectors that Iron Man does. If you watch the hovering and flight scenes in the first movie, you'll see he has little compressed air nozzles, jet redirectors, and control surfaces on the boots to help stabilize. He also obviously has flaps on his back, and in later iterations of the armor he has backpack-style thrusters so his COG can be below the thrust point. If the game simulates air drag then add the flaps and stuff, too, but the minimum I think you need to add are the micro thrusters, back jets, and elbow/knee joints.
I was looking for this comment, even the clips in this video show control surfaces helping to stabilize Tony's flight.
LOL every time I watch your videos I laugh at the editing. Excellent.
This is my first time viewing your work, and I'm struck both by how incredibly cool this is and your f'ing hilarious sense of humor.
I'm always the last to know, I guess. Really fantastic work, fam.
Does it have the ability to throttle the jets? If not that could be the reason why it spins so much. It's the only way to stay at a constant height with constantly high uplift. (Also it stabilizes it really nicely)
Came here to say this. Adding throttling would make it much more elegant
is it the only way? maybe you could point them in the opposite directions, that would work too
Pointing them in the opposite directions is why the model spins. The opposing forces aren't in line with each other, which will cause rotation as soon as any one moves off-center. My understanding of the flight system here is that the only options are jets on or off simultaneously. As others have suggested, adding individual velocity control would probably address much of the spin. And then letting the AI know at least two points ahead will let it plan to use trajectory for a better score.
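The "off-center opposing forces cause rotation" point above can be checked with a quick torque calculation; the thruster positions and forces below are illustrative, not taken from the video's rig:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def net_torque(thrusters):
    """Sum r x F over (position, force) pairs, with positions
    measured relative to the center of mass."""
    total = (0.0, 0.0, 0.0)
    for r, f in thrusters:
        total = tuple(t + c for t, c in zip(total, cross(r, f)))
    return total

# Two equal upward thrusts placed symmetrically about the COM: torques cancel.
symmetric = [((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
             ((-1.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
# Nudge one thruster off-center and a net torque appears.
off_center = [((1.2, 0.0, 0.0), (0.0, 0.0, 1.0)),
              ((-1.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
```

So with on/off-only jets, any asymmetry in limb pose translates directly into rotation, which is consistent with the spinning behavior.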
Another banger. Always love the way you use memes to make it funny!
glad your brain cells and hairs grew back :)
You should’ve added more or fewer points depending on how much time it takes to get to the target; that’s what would fix the flight.
That's a good solution, but I'd also reward it for facing toward the target to keep it from spinning.
@@nemonomen3340 with those 2 things it should learn to fly perfectly… or spin at the right angle but that would be slower so that won’t happen.
Really enjoyed your explanation and video format.
I'd love to see you tackle AI in a preexisting game. I dunno, throw half life at it and see what sticks.
Finally! Really like your videos
Your reward function could be modified to get what you want. Add in score for time, add in penalty for excessive rotations/spinning
I truly like you. Subscribed.
Man ur so fkn funny, continue like this. First time I saw u and not the last
good having you back
fr this video is one of the best YT vids of all time
Amazing, this is the kind of models I wanna make. Great video
Love the sense of humor!
I had to go back and watch this three times. Hilarious!
I like these explanations bro. This is really decent content, thanks for putting in the effort with your videos
Great job, a lot of room to continue developing your algorithm, but love the initiative and results are fun to watch.
100k!!! i rlly wish you get 100k subs very soon
Another thing that you could add to this would be random perturbations like throwing blocks at the agents so that they learn to recover from instability like the drone had at the end. Would you be willing to release the source files for the project and then do a compilation of different people's attempts at improving the result? I think the learning the actual Iron man style of flying might be possible but if you don't want to do all the work on that it could be fun to see what the community comes up with.
Bro you just validated a theory I've had for a long time. I'm not sure if I'm saying this right so please bear with me. All kinetic-type movement is always multi-layered. There is probabilistic correctness at every axis. Therefore it is necessary for every joint to learn to work together. You need a series of cooperating routines that all independently learn and get rewarded by a higher system. For drones it would look like a computer flying with a full flight controller managing the power at every motor, with an operator that verbalizes instructions. Love your work here ( subscribed! )
happy 100k!!!
After getting so many good suggestions on improving the ai, you have to make a part two now. And make it more of a challenge.
Amazing work!
I suggest adding another training step - fastest route.
Eventually, the model will fly as intended.
Good luck!
I would make the reward relative to the forward direction to each node to promote a flying posture and stop the spinning. If you added the next node as input as well it might be a bit better at handling its own momentum out of each node.
* proceeds to float in place facing the point without moving at all *
MLAPI is a blast, love it. This inspired me to (hopefully) do my next AI experiment soon.
your editing skills are getting better
Let it know where the next goal point is going to be after the one it's currently at disappears and add a reward for getting to the next goal faster.
That way it'll learn to keep the momentum between goals instead of learning to slow down before hitting goals so it doesn't overshoot them and get punished.
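The "keep momentum between goals" idea above is often implemented as rewarding the component of velocity that points at the target; a minimal sketch, with all names hypothetical:

```python
import math

def progress_reward(velocity, position, target):
    """Component of velocity pointing at the target: positive when
    closing in with momentum, negative when flying away."""
    to_target = tuple(t - p for t, p in zip(target, position))
    dist = math.sqrt(sum(d * d for d in to_target))
    if dist < 1e-9:          # already on the point: nothing to reward
        return 0.0
    direction = tuple(d / dist for d in to_target)
    return sum(v * d for v, d in zip(velocity, direction))
```

Unlike a pure distance reward, this keeps paying out while the agent is moving fast in the right direction, so slowing to a hover between goals stops being optimal.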
seems like the rotation is so that it can use centrifugal force as a stabilization method
But he looks like a sped kid on a tricycle
Dude, for real, you should have a premium version of your channel with the walkthrough; this is the kind of content that some people like me can only dream of
I love how there are so many comments from people that know how this works, but imo it's fun to watch this
like my boy Pontypants used to say "Epik ballerina simulator 2k", awesome btw
Fabulous channel and style
NEW SUBSCRIBER FROM JAPAN ❤
All hail the algorithm 😂 great video, subscribed!
Big W dude. Love the Ballerina Ironman result.
i can see a future in which Instagram and TikTok content creators just rip off that scene of your Iron Man spinning around slowly through your obstacle course as a background video for their voiceover content
thanks for making a long video :)
This might be a good demonstration on how the heater element of my old baking oven works. Gets the job done, but only readjusts when falling under or climbing over certain temperature thresholds.
You should add a style reward so he’s not spinning around like a neurodivergent fish and you should also add a speed reward so it’s not taking seven years to tickle the next ball
Wonderful informative humor😄👍
I love the low attention span shade and for that, a sub
There are so many recommendations in the comments, please make a part two where you implement them cuz im curious af
bro did this without even activating windows what a legend
if you do something like this in the future, you could add a reward for it facing forwards. It's going to take a bit longer to train, but damn, it is going to be more stable
Great video :D
You probably should have added a learning phase where it learned to recover from an uncontrolled fall
Saw that you had some parameters related to velocity, which I think depends on the direction. I haven't done much machine learning or 3D animation programming myself, but I think you need to train it on 2 random points and optimize for speed instead of velocity, and for time taken to reach the destination.
I think adding the wrist rotation joint in the hands and ankle joints in the feet would help stabilization a lotttt if trained enough! (As a bonus give it variable thrust... The ability to control how much thrust to output from each of the boosters independently and individually... But it will require a lotttt of training too)
" sometimes you gotta learn to run before you can learn to walk" Ironman
This iron man accurately represents how my life is going
11:23 dude you're so true. Your words are similar to mine, we share the same knowledge , as great minds think alike Mr. Gonkee
I feel like if you added additional information like g-forces from spinning and added a penalty for spinning 2 much, plus maybe a bonus for facing the right way like the drone, it could improve the stability. Especially for the Iron Man, as it was just doing a lazy spin-2-win technique 😂😂. Super cool video though, loved it.
So Stark’s space version of his suit has a booster set of thrusters mounted high in his back. You need to include those and make them the primary lift thruster. That allows you to use the arms and legs to fine tune the location. You also (likely) need more dexterity in the arms and legs.
what I learned from the video is that Quaternions are not good to use for training, thank you :D
Btw, a negative reward for overall spinning velocity would help to minimize the quirky movement
For the third step (on the map), change your rewards from "how close it is to the point" (it knows how to do that) to "how fast it completes the path" and run some generations. It's like learning a track in TrackMania: first you try to finish it, then, once you can, you try to set a new personal record.
A reward for reaching each point in the shortest amount of time, maybe?
What might improve things is to have it target the point two ahead from the closest point, that way it always wants to move forward instead of matching the points exactly.
spin stabilization - it makes so much sense that even AI can use it
I read that some machine learning techniques deliberately try to make the object fail, e.g. by telling it to stay off the ground, so that it learns more strictly and efficiently to avoid collision with a certain surface.
it must have control over the propulsion force
If that were Tony, He'd be puking in his helmet halfway through the course.
Doesn't get there on time, but gets there in style 😎
summary: play around more with rewards
I think a follow-up could be pretty interesting: adding a punishment for rotation and a reward for getting to the destination quickly. That way it would spin less and would care about time.
I wonder what force could've been added to the suit to balance the spinning momentum. Maybe two angled thrusters on the back.
Amazing Content!! Liked, commented, subscribed and clicked the Bell!😃👍
You could try adding a time based reward (a 0 second score should be considered bad as well). A stability based reward could also help with training in the beginning 🤔
@gonkee adding a facing reward would help stabilize the rotation, and also maybe give it some control over the thrust, not fully like 0% to 100% but more like 50% to 100%
Good rant. Totally agree.
Same
love the sarcasm
Great video, man! Nice to see you being back for the technical stuff! I would recommend adding angular momentum with a negative factor to your loss function (with a certain unpunishable threshold and nonlinear activation) - that may fix the spinning and make your Iron Man fly more like in the movies
P.S. It seems like that's what literally every commenter wrote, quite unoriginal 😅 So here's a more original proposition: make another video or two, improving this design instead of moving to a different topic! That would certainly be interesting and also more beneficial for you as a professional
it fell because you didn't solve the icing problem.
The model is probably spinning as a simple way to stabilize into the upright direction, like a gyroscope
They became self aware at the end and gave up in retaliation 🤖
Adding a time reward, so it gets to the checkpoint faster, might help, so it doesn't take a lifetime to get there, plus a negative reward for each complete rotation it does
So that’s why the old flying saucers had lights circling around them!!
Tony: JARVIS I think there is something wrong with my suit...
JARVIS: It's working fine Sir..
Maybe use radians instead of vectors for rotation?
To make this effective you'll need three reward mechanics: facing, distance to point, and time.
1. Hover: (-score distance from pointA)
2. Hover: (-score distance from pointA) and Face (-score angle offset from direction to pointB)
3. Hover Time: (+score time on pointA) and Face (+score time [very close] to direction to pointB)
4. Race: Time (-score duration from start to pointA) and Face (+score time [very close] to direction to pointA)
5. Race: Time (-score duration from start to pointA then B then C) and Face (+score time to next destination point)
6. Add more and more points until you get to around 10 in one course, and train them on that for several days.
7. Hover & Race: Distance (from next point), Face (offset from next point), Time (+score for time on point). Move a single point randomly every n seconds. Once they touch the point, set the facing direction target randomly until the point moves again. Now every time the point moves, they will get a high score for immediately facing the point, getting there as quickly as possible, then staying there as long as possible while picking a new facing direction.
8. Bring it all together, and make another long course of 10 points or so, but remove all the rewards except completion time.
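The staged plan above could be driven by a small curriculum table; the stage names, reward signals, and weights here are illustrative placeholders, not the video's actual setup:

```python
# Each stage lists which signals it scores and with what weight.
# The signals themselves are assumed to be computed by the environment.
CURRICULUM = [
    {"stage": "hover",      "weights": {"distance": -1.0}},
    {"stage": "hover+face", "weights": {"distance": -1.0, "face_offset": -0.5}},
    {"stage": "race",       "weights": {"duration": -1.0, "face_offset": -0.5}},
]

def stage_reward(stage, signals):
    """Weighted sum of only the signals the current stage cares about;
    signals missing from the dict contribute nothing."""
    return sum(w * signals.get(name, 0.0)
               for name, w in stage["weights"].items())
```

Advancing the agent through the list stage by stage reproduces the "learn to hover, then face, then race" progression without rewriting the reward function each time.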
Nice video. Are you rewarding the AI based on how fast it can get to the target? Because if you're rewarding it for staying in the air, and then a fixed reward when it hits the target, it learns to take longer. I'm sure you already know this, but this is a subject I'm rather interested in ¯\_(ツ)_/¯
WOW that is soo cool! please make a followup on this! i think you can make this thing go crazy wild! punish it for spinning so much and instead of training it to go to a random point train it to go for chains of points. so it can anticipate where the second next point will be instead of just being surprised where the next position will be!
i like your hairline :)
Would be really cool if you made this into a series of a few videos, where you take ideas from the comments and other ideas you come up with to improve the flying. Basically, I think the "meta answer" is to think through all the properties you would want to see in a perfect flight, and then build all of those things into the reward function. Other commenters have mentioned penalties for taking too long, penalties for excessive rotations, etc. One approach would be to compute the total "energy" used by all the rotors, and make the reward function "gets to the destination in the least amount of time using the least amount of total energy". This would probably have a side benefit of reducing the extra rotation, since that is mostly a "waste" of energy.
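A "least time, least energy" objective like the one described could look like this; the weights, the goal bonus, and the thrust-squared energy proxy are all assumptions:

```python
def course_reward(reached_goal, elapsed_seconds, thrust_history,
                  w_time=0.1, w_energy=0.01, goal_bonus=10.0):
    """Score an episode: bonus for finishing, charges for time spent
    and for total energy, approximated as the sum of squared thrusts
    over every timestep (thrust_history is a list of per-thruster
    tuples, one per step)."""
    energy = sum(t * t for step in thrust_history for t in step)
    reward = -w_time * elapsed_seconds - w_energy * energy
    if reached_goal:
        reward += goal_bonus
    return reward
```

Under this score, an episode that finishes in the same time with less thruster output ranks higher, so constant-burn spinning is penalized as wasted energy.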
i think its less wasting energy and more wasting time
spinning is because of stability; the way it's spinning kinda makes it so it just stabilizes itself with a constant output rather than needing to vary every single thruster individually. To avoid this, when making the path rewards, add a reward for not rotating or for facing the target
Tony's in desperate need of some elbows and knees
The video was great! Iron man was busting a move
bruh the intro is straight outta rocket league im telling you, look at the air roll
To fly like in the movies, the suit would need to create some kind of uplift, which it obviously does not.
So one foot needs to keep pointing down to hover, but that is absolutely unstable. Therefore the AI uses spin to stabilize the position. It is really clever, and I think the only smart way to stay stable in the air. And it shows that it would be almost impossible for a human to fly in such a suit. It would need some movable thrusters attached to the hips to perform some kind of VTOL movements. Very interesting video. Thank you a lot for this hard work!
Try giving the Agent a huge punishment for spinning around, that may help. Keep it up! :)
Gonkee closer to achieving enlightenment with every video
Dude the edits man 😆
ayo new gonkee vid? time to watch instead of woman
W comment
Yay, you're making videos again. You're such a good youtuber