Watch this A.I. learn to fly like Ironman
- Published: 22 Mar 2023
- ...So reinforcement learning is kinda like telling the neural network: "look, I don’t know how to do the thing, but you try to do the thing, and if you succeed I’ll give you a reward of 5 dollars." So basically like a father who failed at life and pushes his kid way too hard in an attempt to live out his dreams through his child… That got depressing.
Some music by @LAKEYINSPIRED - Science
Seems like adding a facing reward would help stabilize the rotation.
I had the same thought!
but he's Asian, so add a not-facing punishment instead
I was gonna say the same thing, he should have made it hover correctly before asking it to move from point to point.
Or a negative reward for every spin
Or maybe a directional speed through the point? So more purposeful thruster orientation gets rewarded
you should also have a negative reward for high angular velocities, that way it has a reason to be more still
Also, allow more actions than just "turn on and off the thrust of these 4 rockets". If the AI could aim the rockets (like when you paddle backwards in a canoe to turn it), it would have better control over its rotation.
Yeah and maybe adding a time reward so it needs to learn how to improve speed, that might cause it to do more "iron man" like flying
@@nullumamare8660 Well, I think it does have the ability to move each limb. It might be able to manage thrust, but I'm pretty sure it's only using limb movements.
Wrists need to have thrust vectoring as well as the whole arm 🙂
@@nullumamare8660 This sounds like a good idea, and I think it'd be amazing if the thrust could be throttled instead of just on and off. However, the more complex a model is, the bigger the brain and the more time and resources it needs. You saw how good and stable the drone was, and that's because it has the same inputs but only 4 outputs for the engines, while the Iron Man has 4 engine outputs plus rotation for each limb.
Theory on why it flies so slow:
Its original training was based on hovering around one point, thus when it gets a new destination, it still assumes that it should arrive there without momentum to better stay at that spot.
Then it got a little training with randomly moving spots; having momentum there is bad too, since it's actually way more probable that you'll need to turn around than that you'll need to keep going.
This, along with little time-based punishment, results in a slower agent.
Yeah I would try training it with a list of like 6 points that it has to hit in order. As soon as it hits the first point, remove that point and add a new random point to the end of the list.
Yeah, also use line paths instead of dots to hit. As is, the space in between gives a lower reward, so even a model that takes into account future/total reward wouldn't like the space in between.
Additionally, I think it would benefit from having its senses limited, so that it only knows where the target is by looking at it. If it's gonna fly like Iron Man, it must also have the same senses as Iron Man.
@@Delta1nToo Yeah, I think that may help the spinning as well. Should probably have some penalty points in there for too much spinning.
Yep, AI doesn't have a sense of time (unless you give it one). As long as it's completing its goals, it doesn't care if it takes 1000 years.
give it access to the next 2 points so it can find a vector between them, also give it incentive to be faster
Agreed it needs to be able to see beyond one point to “fly” a course.
Lastly, give it rewards for not spinning, and negative incentives every time it spins
Spinning is only a problem because we think it is. Part of what makes these AI learning experiments interesting is how the system finds solutions without our preconceived limitations. Fixing other factors and improving the flight system could very well fix the rotation problem. Or the AI could rotate in a straight line like a bullet.
@@bryanwoods3373 That's easy to understand, but in all practicality, if we were going to implement this in reality, we wouldn't want it to spin; we'd want it to fly straight. As a simulation of Iron Man flying, it should fly like him as well as look cool doing it. If the AI mastered its control, it could easily fly much quicker and more precisely just by flying straight. It needs positive and negative flight-control incentives, a clear path, and a timer to reach its potential.
@BigGleemRecords The video isn't about implementing this into reality. If we were, we'd be using more robust systems that would have more control systems and likely build on human testing or include a human analog as part of the reward system. The spinning is the last thing you want to focus on since fixing everything else will address it.
To combat the agent being slow and rotating you could add 2 other negative point rewards, every full rotation can deduct points which would likely reduce the spinning to a minimum and then also give it say 30 seconds to complete a course but deduct points for each second spent too, the agent might learn that the quicker it goes the less points he get deducted. I think revisiting this with these 2 additional criteria would be pretty interesting
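The two deductions proposed above could be folded into a single per-step reward term. A minimal sketch in Python (the function and coefficient names are illustrative, not from the video):

```python
def step_reward(dist_prev, dist_now, angular_speed, dt,
                progress_scale=1.0, time_penalty=0.1, spin_penalty=0.05):
    """Per-step reward: pay for progress toward the target,
    charge for elapsed time and for spinning."""
    progress = (dist_prev - dist_now) * progress_scale  # positive when closing in
    return progress - time_penalty * dt - spin_penalty * abs(angular_speed)
```

With something like this, an agent that dawdles or spins bleeds score every step, so fast, stable flight becomes the highest-reward strategy.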
Can we just admire how a few years ago AI struggled to play a 2D game and now this. It's really remarkable.
I think part of the reason it has such a hard time is because it doesn't quite have the detailed control vectors that Iron Man does. If you watch the hovering and flight scenes in the first movie, you'll see he has little compressed air nozzles, jet redirectors, and control surfaces on the boots to help stabilize. He also obviously has flaps on his back, and in later iterations of the armor he has backpack-style thrusters so his COG can be below the thrust point. If the game simulates air drag then add the flaps and stuff, too, but the minimum I think you need to add are the micro thrusters, back jets, and elbow/knee joints.
I was looking for this comment, even the clips in this video show control surfaces helping to stabilize Tony's flight.
Does it have the ability to throttle the jets? If not that could be the reason why it spins so much. It's the only way to stay at a constant height with constantly high uplift. (Also it stabilizes it really nicely)
Came here to say this. Adding throttling would make it much more elegant
is it the only way? maybe you could point them in the opposite directions, that would work too
Pointing them in the opposite directions is why the model spins. The opposing forces aren't in line with each other, which will cause rotation as soon as any one moves off-center. My understanding of the flight system here is that the only options are jets on or off simultaneously. As others have suggested, adding individual velocity control would probably address much of the spin. And then letting the AI know at least two points ahead will let it plan to use trajectory for a better score.
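The "opposing forces aren't in line with each other" point is just torque: τ = r × F, summed over thrusters. A small sketch (hypothetical hand offsets, not the video's actual rig) shows how two equal-and-opposite thrusts applied off-center cancel as net force but add as torque, i.e. a pure spin:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def net_torque(thrusters):
    """thrusters: list of (r, F) pairs, r = offset from the center of mass."""
    t = (0.0, 0.0, 0.0)
    for r, F in thrusters:
        c = cross(r, F)
        t = (t[0] + c[0], t[1] + c[1], t[2] + c[2])
    return t

# Two equal, opposite thrusts acting through the center of mass: no torque.
aligned = [((0, 0, 0), (0, 1, 0)), ((0, 0, 0), (0, -1, 0))]
# The same forces applied at offset "hands": a force couple, i.e. pure spin.
offset = [((1, 0, 0), (0, 1, 0)), ((-1, 0, 0), (0, -1, 0))]
```

In the offset case the forces still sum to zero, but the torque doesn't, which is exactly why the model starts rotating as soon as a thruster drifts off-center.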
The whole comment section giving Gonkee suggestions knowing full well he can't be arsed to do a follow up video lmao
great vid, thanks for uploading
You should’ve added more or fewer points depending on how much time it takes to get to the target; that’s what would fix the flight.
That's a good solution, but I'd also reward it for facing toward the target to keep it from spinning.
@@nemonomen3340 with those 2 things it should learn to fly perfectly… or spin at the right angle but that would be slower so that won’t happen.
Your reward function could be modified to get what you want. Add in score for time, add in penalty for excessive rotations/spinning
I'd love to see you tackle AI in a preexisting game. I dunno, throw half life at it and see what sticks.
This is my first time viewing your work, and I'm struck both by how incredibly cool this is and your f'ing hilarious sense of humor.
I'm always the last to know, I guess. Really fantastic work, fam.
Another banger. Always love the way you use memes to make it funny!
glad your brain cells and hairs grew back :)
Let it know where the next goal point is going to be after the one it's currently at disappears and add a reward for getting to the next goal faster.
That way it'll learn to keep the momentum between goals instead of learning to slow down before hitting goals so it doesn't overshoot them and get punished.
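The momentum-keeping idea above can be expressed as rewarding the velocity component pointed at the next goal, rather than just proximity. A sketch (3-vectors as tuples, names illustrative):

```python
import math

def toward_next_reward(position, velocity, next_goal):
    """Reward the speed component aimed at the next goal, so carrying
    momentum between goals scores well and braking early scores poorly."""
    d = tuple(g - p for g, p in zip(next_goal, position))
    norm = math.sqrt(sum(x * x for x in d))
    if norm == 0:
        return 0.0  # already at the goal; nothing to aim at
    unit = tuple(x / norm for x in d)
    return sum(v * u for v, u in zip(velocity, unit))  # dot(velocity, unit)
```

An agent flying straight at the goal at speed 1 gets +1; flying away gets -1; hovering gets 0, so stopping dead between goals is no longer free.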
seems like the rotation is so that it can use centrifugal force as a stabilization method
But he looks like a sped kid on a tricycle
Really enjoyed your explanation and video format.
Another thing that you could add to this would be random perturbations like throwing blocks at the agents so that they learn to recover from instability like the drone had at the end. Would you be willing to release the source files for the project and then do a compilation of different people's attempts at improving the result? I think the learning the actual Iron man style of flying might be possible but if you don't want to do all the work on that it could be fun to see what the community comes up with.
Finally! Really like your videos
LOL every time I watch your videos I laugh at the editing. Excellent.
good having you back
fr this video is one of the best YT vids of all time
I would make the reward relative to the forward direction to each node to promote a flying posture and stop the spinning. If you added the next node as input as well it might be a bit better at handling its own momentum out of each node.
* proceeds to float in place facing the point without moving at all *
Love the sense of humor!
I love how there are so many comments from people that know how this works, but imo its fun to watch this
After getting so many good suggestions on improving the ai, you have to make a part two now. And make it more of a challenge.
I truly like you. Subscribed.
I like these explanations bro. This is really decent content, thanks for putting in the effort with your videos
You should add a style reward so he’s not spinning around like a neurodivergent fish and you should also add a speed reward so it’s not taking seven years to tickle the next ball
I read that some machine learning techniques try to fail the agent by telling it to stay off the ground; that way it learns more strictly and efficiently to avoid collision with a certain surface.
If you do something like this in the future, you could add a reward for facing forwards. It's going to take a bit longer to train, but damn, it's going to be more stable.
Amazing, this is the kind of models I wanna make. Great video
Bro you just validated a theory I've had for a long time. I'm not sure if I'm saying this right, so please bear with me. All kinetic movement is multi-layered. There is probabilistic correctness at every axis. Therefore it's necessary for every joint to learn to work together. You need a series of cooperating routines that all independently learn and get rewarded by a higher system. For drones it would look like a computer flying with a full flight controller managing the power at every motor, with an operator that verbalizes instructions. Love your work here (subscribed!)
This might be a good demonstration on how the heater element of my old baking oven works. Gets the job done, but only readjusts when falling under or climbing over certain temperature thresholds.
bro did this without even activating windows what a legend
Great job, a lot of room to continue developing your algorithm, but love the initiative and results are fun to watch.
happy 100k!!!
I had to go back and watch this three times. Hilarious!
So Stark’s space version of his suit has a booster set of thrusters mounted high in his back. You need to include those and make them the primary lift thruster. That allows you to use the arms and legs to fine tune the location. You also (likely) need more dexterity in the arms and legs.
Why not add a function or something to make it stabilize after every checkpoint. When I see it constantly spinning around it makes me feel like it's because there's constant motion, and the jets are always powered on, so it HAS to keep spinning to continue (I don't know shit about programming). But like, what would happen if you started it from a standing upright position, then had it pause between every checkpoint? Would it give you the seamless realism you're looking for? Or something (I'm also high)?
lol at 17:12 *he just a lil confused but he got the spirit*
I can see a future in which Instagram and TikTok content creators just rip off that scene of your Iron Man spinning slowly through the obstacle course as a background video for their voiceover content
100k!!! i rlly wish you get 100k subs very soon
I think adding the wrist rotation joint in the hands and ankle joints in the feet would help stabilization a lotttt if trained enough! (As a bonus give it variable thrust... The ability to control how much thrust to output from each of the boosters independently and individually... But it will require a lotttt of training too)
Dude for real, you should have a premium version of you channel with the walkthrough this is the kind of content that some people like me can only dream of
MLAPI is a blast, love it. This inspired me to (hopefully) do my next AI experiment soon.
Nice video. Are you rewarding the AI based on how fast it can get to the target? Because if you're rewarding it for staying in the air, plus a fixed reward when it hits the target, it learns to take longer. I'm sure you already know this, but this is a subject I'm rather interested in ¯\_(ツ)_/¯
Saw that you had some parameters related to velocity which I think depends on the direction. Haven't done much machine learning or 3D animation programming myself but I think you need to train it on 2 random points and optimize for speed instead of velocity and time taken to reach the destination.
Spinning is for stability; the way it's spinning kinda makes it so it stabilizes itself with a constant output rather than needing to vary every single thruster individually. To avoid this, when making the path rewards, maybe add a reward for not rotating, or for facing the target.
What might improve things is to have it target the point two ahead from the closest point, that way it always wants to move forward instead of matching the points exactly.
If that were Tony, He'd be puking in his helmet halfway through the course.
Man ur so fkn funny, continue like this. First time I saw u, and not the last
It spins because spinning balances out any misaligned thrust in all directions.
it must have control over the propulsion force
For the third step (on the map), change your rewards from "how close it is to the point" (it already knows how to do that) to "how fast it completes the path," and run some generations. It's like learning a track in TrackMania: first you try to finish it, then, once you know you can, you try to set a new personal record.
" sometimes you gotta learn to run before you can learn to walk" Ironman
1) add thrusters to the back
2) add at least 1-10 output levels for each thruster
3) add negative reward to limit spinning
Amazing work!
I suggest adding another training step - fastest route.
Eventually, the model will fly as intended.
Good luck!
spin stabilization - it makes so much sense that even AI can use it
Big W dude. Love the Ballerina Ironman result.
Adding a time reward, so it gets to the checkpoint faster, might help, so it doesn't take a lifetime to get there; plus a negative reward for each complete rotation it does.
you should have continued the first training session until it stopped spinning
I love the low attention span shade and for that, a sub
your editing skills are getting better
Great video :D
I guess the rotation was due to the movement allowed in the x and y directions... it was trying to move in a 3D space with just 2D controls and couldn't comprehend that it can turn too.
This iron man accurately represents how my life is going
You probably should have added a learning phase where it learns to recover from an uncontrolled fall
There are so many recommendations in the comments, please make a part two where you implement them cuz im curious af
The rotation is to account for the lack of degrees of rotation in the limbs. It can't even lean forward to fly because it isn't able to position the hand thrusters under the center of gravity. It's also missing critical mechanics, such as being able to turn thrust on and off. Sure, it could fly eventually with full power all the time, but any small mistake it makes in the learning process gets amplified, because the momentum is already moving in that direction and it has no way to stop it, only reposition the hand to start working the momentum back toward the reward. This is another reason for the rotation.
A reward for reaching each point in the shortest amount of time, maybe?
I wonder what force could've been added to the suit to balance the spinning momentum. Maybe two angled thrusters on the back.
The model is probably spinning as a simple way to stabilize into the upright direction, like a gyroscope
Really cool. How are you handling the direction the AI points its limbs? That seems really difficult to do. And is the rigging limited to action-figure-like marching movements? That could be one reason it's struggling to maintain an exact position. It's hard to tell, but are the motors thrust-dynamic? Like, are you passing the rigid body's acceleration into the model so it could thrust more when falling faster? Wish the video was longer so I could learn more.
Wonderful informative humor😄👍
Retrain with time taken between points and add penalty for collision with stage. The network will learn that spinning makes it hard to change velocity and will correct itself. It’ll zip around like you want it to.
So, this guy's suit had no stabilizers (small, weaker max-thrust jets on the chest/legs that allow hovering), which would have made staying in one spot easier and allowed for small adjustments closer to the center of mass, while the arms and legs handle the larger movements.
As others have said, part of it is the physical quantities you're taking into account. You want 6 degrees of freedom, which means you need to take all 6 degrees properly into account. If you don't want to use heading etc. because of the discontinuous numbers (though that could have been corrected for in the programming), you need to look at it in terms of relative velocities and program it to do its best at keeping certain velocities as low as possible. That's how the PIDs in an Arducopter work for navigation: you take into account your standard X, Y, Z velocities, but you also have yaw, pitch, and roll rotation velocities. If you don't want those in radians or degrees, have them in m/s or g's; that gives the AI the ability to fine-tune the model. The AI also really needs to be allowed to change its positioning more. In every one of those runs it basically ended up a rigid body, which kinda nullifies half the point of the test, and that appears to be because each thruster always gives a constant output. It's hard to tell without seeing the full code base in Unity.
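For reference, the rate PIDs mentioned above boil down to a loop like this. A generic textbook PID sketch, not Arducopter's actual code, with illustrative gains:

```python
class PID:
    """Minimal PID loop: drives a measured rate toward a setpoint."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

With a setpoint of zero angular rate, the controller's output actively opposes any spin, which is exactly the damping behavior people are asking the reward function to teach.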
Great project nevertheless, love these vids.
What I learned from the video is that quaternions are not good to use for training, thank you :D
Btw, a negative reward for overall spinning velocity would help minimize the quirky movement
You could try adding a time based reward (a 0 second score should be considered bad as well). A stability based reward could also help with training in the beginning 🤔
Part of the problem is probably that it was trained to stay in one place first. So it learned how to stabilize itself by spinning. It might be better if it was trained how to move first, and add a reward for speed
All hail the algorithm 😂 great video, subscribed!
You could also include propulsion on/off: when it needs to go down it raises a hand, and that dips it too far down because of gravity and propulsion together. Another thing that could work is adding lift force based on speed, per the aerodynamic lift formula; that could reward forward-facing flying in a fast scenario.
What if you gave the AI the ability to control the strength of the boosters? Wouldn't it have more control?
I feel like good adjustments would be: a reward for looking forward like other people said, a bigger radius on the ball goals so there's follow-through momentum without it having to constantly correct itself, and spreading out the balls so there's more time for momentum to build.
Maybe use radians instead of vectors for rotation?
To make this effective you'll need three reward mechanics: facing, distance to point, and time.
1. Hover: (-score distance from pointA)
2. Hover: (-score distance from pointA) and Face (-score angle offset from direction to pointB)
3. Hover Time: (+score time on pointA) and Face (+score time [very close] to direction to pointB)
4. Race: Time (-score duration from start to pointA) and Face (+score time [very close] to direction to pointA)
5. Race: Time (-score duration from start to pointA then B then C) and Face (+score time to next destination point)
6. Add more and more points until you get to around 10 in one course; train them on that for several days.
7. Hover & Race: Distance (from next point), Face (offset from next point), Time (+score for time on point). Move a single point randomly every n seconds. Once they touch the point, set the facing direction target randomly until the point moves again. Now every time the point moves, they will get a high score for immediately facing the point, getting there as quickly as possible, then staying there as long as possible while picking a new facing direction.
8. Bring it all together, and make another long course of 10 points or so, but remove all rewards except completion time.
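The "hover at pointA while facing pointB" stage described above could be scored roughly like this. A sketch with illustrative names, using plain tuple vectors:

```python
import math

def hover_face_score(position, facing, point_a, point_b):
    """Score = negative distance to pointA plus negative angular
    offset (radians) between the facing vector and the direction to pointB."""
    dist = math.dist(position, point_a)
    to_b = tuple(b - p for b, p in zip(point_b, position))
    norm = math.sqrt(sum(x * x for x in to_b)) or 1.0
    fnorm = math.sqrt(sum(x * x for x in facing)) or 1.0
    cos_off = sum(f * t for f, t in zip(facing, to_b)) / (norm * fnorm)
    angle = math.acos(max(-1.0, min(1.0, cos_off)))  # clamp for float safety
    return -dist - angle
```

The maximum score of 0 is reached only when the agent sits exactly on pointA and looks straight at pointB; both drifting and spinning away from the target direction cost points.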
You could make it better if you increase the reward for facing in the correct direction and decrease it for spinning. You could also increase the reward for getting to a point in the most efficient way possible, and encourage it to use its legs for power and its hands for stabilization.
This is way above my level of A.I. knowledge, but wouldn't adding some rotational constraint force it to stop spinning so much and make it fly like Ironman? I'm sure you thought of that, I wonder why that wouldn't work though...
Quaternions are continuous and are used a lot in spacecraft dynamics. Oftentimes you convert quaternions back to Euler angles, though, because it's hard to visually understand what a quaternion is doing from its output alone.
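The quaternion-to-Euler conversion mentioned above uses standard formulas. A sketch for the common ZYX (roll-pitch-yaw) convention, assuming a unit quaternion in (w, x, y, z) order:

```python
import math

def quat_to_euler(w, x, y, z):
    """Unit quaternion -> (roll, pitch, yaw) in radians, ZYX convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    sinp = 2 * (w * y - z * x)
    # Clamp at the poles to avoid math domain errors (gimbal-lock region).
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1 else math.asin(sinp)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

The identity quaternion maps to (0, 0, 0); note the Euler output still suffers the discontinuities and gimbal lock that make quaternions the better training representation in the first place.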
I feel like if you added additional information like g-forces from spinning, added a penalty for spinning too much, plus maybe a bonus for facing the right way like the drone, it could improve the stability. Especially for the Iron Man, as it was just doing a lazy spin-2-win technique 😂😂. Super cool video though, loved it.
thanks for making a long video :)
This is how Iron Man should fly in his next movie
summary: play around more with rewards
it fell because you didn't solve the icing problem.
I think the biggest problem for Iron Man is that he can't see into the future. He only has the next target in mind, and the way he was trained, he wants to be ready to go in a random direction. That means having as little momentum as possible. But in reality the targets are mostly in a line, so he should know the next two targets to properly use his momentum.
Good rant. Totally agree.
Same
Negative reward for time taken to complete the course could result in better flight mechanics aka preservation of momentum
There are multiple reasons why it might be spinning to solve the problem: the model has a mobility constraint that you are unaware of; if Unity physics are true to life, it could be taking advantage of gyroscopic stability; or your reward function needs to be adjusted to punish these kinds of unstable movements. In reality, a human being wouldn't be able to navigate this way, so that should be considered in your reward function. The more consideration you give to real life in your NN model, the more likely you'll achieve life-like results.
like my boy Pontypants used to say "Epik ballerina simulator 2k", awesome btw
probably add collision penalty to make sure it isn't flying around randomly