You had the chance to have a T-Rex do a roundhouse kick and you didn't take it. That's on you.
#missedOpportunity
Sounds like a very sensible approach. Learning is not just trial and error, imitation plays a huge role as well.
That's what i was thinking. :) Very interesting!! We could give robot arms in factory like references on how to do it. And then let it learn itself on how to perfect the task.
01:17 "Yay! I have a consciousness! I love to run and feel the..."
...end of simulation.
Runs just like Steven Seagal.
Dude...
absolutely the future
4:47, the simulated character naturally does a Rasengan vs just throwing it!
6:58 the agent learns to complete the flip, but because of exploitation it probably won't be able to learn good form. Notice that it starts a lot of the rotation during its set. For those unfamiliar with acrobatics terminology, all flips are composed of three phases: set, execution, and landing. The set is the initial jump to get height. The set should send the body's center of gravity as high as it can, and only then should the rotation begin. By leaning backwards like the agent is doing, it cuts a lot of the potential height it can get, which is bad technique. Although it's a hard problem even for humans to recognize how to set properly, the agent should learn the difference between good and bad technique. Perhaps one signal could be the amount of impact the agent takes in its joints. Bad sets cause more compression, whereas when setting with a lot of height, you have enough time to flip and then open early enough to spot the ground. Opening early slows the body's rotation. Anyways, super dope stuff. Would be cool to see how much parkour theory agents can learn. ;)
There aren't enough variables to have to worry about most of those. The only criterion at the moment is to complete the flip.
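The joint-impact signal suggested above could be folded into the reward as a penalty term. A minimal sketch of the idea, assuming hypothetical names (`impact_penalty`, the weight `w_impact`) and per-contact force vectors from the physics engine; the paper does not include such a term:

```python
import numpy as np

def impact_penalty(contact_forces, w_impact=1e-4):
    """Penalize hard landings: weighted sum of squared contact-force
    magnitudes. Subtracting this from the imitation reward would push
    the agent toward softer, higher sets.

    contact_forces: list/array of per-contact force vectors, shape (N, 3).
    Returns a non-negative penalty.
    """
    forces = np.asarray(contact_forces, dtype=float)
    if forces.size == 0:
        return 0.0  # no contacts this step, no penalty
    return w_impact * float(np.sum(forces ** 2))

# A soft landing (small forces) is penalized far less than a hard one.
soft = impact_penalty([[0.0, 50.0, 0.0]])
hard = impact_penalty([[0.0, 500.0, 0.0]])
```

Whether this actually produces better flip form is an open question; as the authors note elsewhere in the thread, penalty terms like this often produce odd behaviours of their own.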
oh my goodness, the future possibilities are expanding endlessly
Hey how did you compare the differences between the two states at 1:02?
Does anyone notice how perfect the martial arts kicks are? Look at the side kick and the jumping spinning axe kick (or jumping spinning outward crescent kick, depending on how you look at it). Amazing!
Great, now robots will know kung fu as well. Thanks guys.
Incredible! When will we see character motion like this in games?? Time to remake some classic martial arts video games!! :-D
Very important unanswered question for this: Is this real time?? If yes, then can someone please make a Unity plugin for this!
As a matter of fact, GTA 4 used a simpler version of this for its NPC reactions. It actually has superior physics to GTA 5 in that respect; they removed it in GTA 5 because of the CPU load. But yes, this is not THAT hard to simulate, and with upcoming 16-thread CPUs you should start seeing much better and more realistic NPC AI. Not auto-aim, but actual human-like realism.
Navhkrin yeah, a lot has happened since Euphoria/Endorphin - recently, a lot of research has been done on optimizing the performance of AI, so that should help a lot too :-)
Wouldn't it be more logical to do the calculations on the GPU instead of the CPU?
@@iruns1246 here you go github.com/Sohojoe/ActiveRagdollStyleTransfer
This is super amazing. I'm excited for what will come out of this kind of technology.
As 01:17 shows -> Wouldn't it be a good extra reward term to monitor the energy usage of the movement? I would expect that energy balance is one of the most important influencing factors for our body movements. I'm pretty sure you already thought about that, didn't you?
Now have the reference fight the simulation to determine who is truly the best!
One question about this: How difficult is it to train a single artificial neural network to not only walk, but also run (if needed), avoid obstacles, move up and down ramps, and not step into holes?
1:17 The dude's happy to be alive.
Excellent! Now if only I could learn to do a backflip too
when a computer can backflip better than you
I think you just love to throw boxes at robots xD. Well done, awesome research!
I could play with this for days and test what it makes out of unrealistic animations :D
4:14 why is it so satisfying to see the lion fall
because you thought they gave up throwing boxes at it
I believe they have a separate network for each motion. How would you go about having one single network doing all this stuff?
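One common way to get a single network to cover several skills is to condition the policy on a skill label, e.g. a one-hot vector appended to the observation. A toy sketch of just the input construction, with an assumed skill list; this is not the paper's architecture:

```python
import numpy as np

# Hypothetical skill set; the real system trains one policy per clip.
SKILLS = ["walk", "run", "backflip", "kick"]

def policy_input(state, skill):
    """Concatenate the observation with a one-hot skill selector so a
    single network can be trained on all motions at once."""
    one_hot = np.zeros(len(SKILLS))
    one_hot[SKILLS.index(skill)] = 1.0
    return np.concatenate([np.asarray(state, dtype=float), one_hot])

x = policy_input([0.1, 0.2, 0.3], "backflip")
# x has len(state) + len(SKILLS) entries; the trailing four select the skill
```

The same idea shows up in multi-task RL more broadly: the skill label just becomes part of the state the network conditions on.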
no T-rex mocap?? amateurs.
All I want is a program to animate the reference and simulate it, just that. Does someone know of something like this?
Something I don't see a lot is movement systems that take physical damage into account, as well as the physical strength of limbs in specific directions and angles relative to the limb's current rotation.
muscle models can better model those characteristics. Check out this work:
mrl.snu.ac.kr/research/ProjectManyMuscle/index.html
These models can be fairly naturally integrated into our system.
What simulation environment is this? Looks better compared to Mujoco. Also, is the code to reproduce this available yet?
What software was used to make the simulation ?
which simulator has been used here ???
Great adaptation and generalization. Nice video too
It would be cool to develop a game with this: train your AI fighter and fight other people's AI fighters. It would be interesting, assuming different training methods would lead to different techniques, basically making a robot MMA game. Sounds awesome.
gameplay suggestion: a way to get new skills: provide your own motion captured movements from video clips
This is what amiibos do for smash already
Worker: Mr. Foreman sir! There is a robot outside doing sick flips!
Foreman: I can't have anyone show me up! Take one or two men and throw some boxes at it to mess it up.
3:00
Worker: Sir, we've thrown some boxes, but... IT JUST KEEPS FLIPPING.
Foreman (on PA speaker system): Attention employees. I need all of you to stop what you are doing, and throw every box in the warehouse you can find at the robot outside.
3:08
This is fascinating. What are the chances that similar technologies could be used in video games for characters with active physical simulation, so as to remove the need for scripted, keyframed in-game animations?
those are exactly the applications we have in mind.
Then I'm glad to hear it! I'd love to see technology like this applied to games in the future.
Amazing!! Is there any way to use this in games?
Holy cow. Just apply it to the real-world Atlas. I can't wait!
Already did, just google atlas backflip
Don't worry. Their weakness is cardboard boxes. We just have to throw 100 at the same time and we will defeat the uprising.
Miloš Ćirić well ya it did a backflip but it can’t cartwheel, sideflip, frontflip, or jump off stuff
Or kick stuff
Jason_ _parkour It can jump. I guess soon it would be able to do all of that.
1:17 Look at those dance moves! (dab dab dab dab)
1:26 Keep it coming, keep dancing! (backfall)
You've just created a dancing A.I.
Absolutely incredible!!
Awfully rude of them to keep throwing boxes at it. :/
AI never forgets.
4:47 The way baseball is meant to be played
Marvelous. Hope to make beta.
Can a parkour videogame use this soon? Thanks
yup, i'm hoping this can be used for something like that.
Depending on the group of tired muscles, the way of walking and movement can be changed.
You taught it parkour and martial arts? That’s awesome!
hurdles?
Awesome! How do you make so many humanoid reference motions with such fancy skills?
It seems like motion capture.
Quote from the paper: ’We present a conceptually simple RL framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data recorded from human subjects.‘
Thanks for your reply! It's very helpful.
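The quote above is the core of the method: the reward compares the simulated character's pose against the mocap clip at each timestep. A simplified sketch of that kind of pose-imitation reward (function name, joint encoding, and `scale` are illustrative, not the paper's exact formulation):

```python
import numpy as np

def pose_reward(q_sim, q_ref, scale=2.0):
    """Exponentiated pose error: 1.0 when the simulated character matches
    the reference pose exactly, decaying toward 0 as the joints drift.
    q_sim, q_ref: flat vectors of joint coordinates at the same timestep."""
    q_sim = np.asarray(q_sim, dtype=float)
    q_ref = np.asarray(q_ref, dtype=float)
    return float(np.exp(-scale * np.sum((q_sim - q_ref) ** 2)))

perfect = pose_reward([0.1, 0.5], [0.1, 0.5])  # exact match
drifted = pose_reward([0.3, 0.5], [0.1, 0.5])  # one joint off
```

The full method adds further terms (velocities, end-effector positions, root motion), but they all follow this same exp-of-negative-error shape.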
4:46 Awkward strategy... or most genius pitcher that ever played? Willing to bet no one's ever even tried a Naruto run towards the batter's face.
I don't know if this is something you do yourself, but I wonder if there is any kind of reference for a biological 'conservation of energy' formula built in? I would think if you could add a way to track energy expenditure as a weight in your modeling values that might have interesting effects. The crazy hand waving might be super efficient if we as biological beings didn't care about muscle fatigue or conservation of calories is my basic thought.
we don't have an effort term at the moment, and adding it might help to fix some things. But other work in deep RL has tried adding an effort term (sum of squared torques), and usually that still ends up with some odd behaviours.
Oh. Right on. Thanks for the response. I had done some searching, but it's such a specific question that I think I probably framed it wrong. 'Effort' is probably exactly what I'm talking about.
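The sum-of-squared-torques effort term the author mentions is simple to write down. A sketch, assuming a hypothetical weight `w_effort` and per-joint torques from the controller:

```python
import numpy as np

def effort_penalty(torques, w_effort=1e-5):
    """Sum-of-squared-torques effort term. Subtracting this from the
    reward discourages wasteful, high-torque flailing, at the risk of
    the odd behaviours the reply above warns about."""
    t = np.asarray(torques, dtype=float)
    return w_effort * float(np.sum(t ** 2))

calm = effort_penalty([10.0, 5.0])     # gentle torques, small penalty
flail = effort_penalty([100.0, 80.0])  # large torques, much bigger penalty
```

The weight matters a lot in practice: too large and the character goes limp, too small and the term has no visible effect.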
How is it different from a video game?
In this video the Artificial intelligence learned how to walk, jump, kick, etc etc all by itself.
While in a game it's all animated by the developers.
4:47 CHEATER
Careful, Atlas might get a hold of this and go on a kung fu rampage through Boston
Very Cool!
it's funny because as humans we also behave the same way before performing backflips
1:18 accurate simulation of me running from life in general
For some reason, this seems like overfitting?
But I don't get it: why not just use these reference animations? The main advantage of ML is that it does your work or makes your work easier. This seems to have no benefit, only more work compared to just using the reference, doesn't it?
The advantage is that the simulation gives you an interactive character that can synthesize behaviours for a variety of different situations. For example, given a reference motion for kicking one particular target, the algorithm can learn how to kick a variety of different targets without needing new motion clips for those new targets. The simulation also generates responsive behaviours automatically.
Now that makes sense to me. Thanks for your real quick answer!
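The "variety of targets" point above is typically implemented by adding a goal term to the reward: the target position is fed to the policy, and the reward peaks when the foot reaches it. A toy sketch with hypothetical names and `scale`:

```python
import numpy as np

def kick_target_reward(foot_pos, target_pos, scale=4.0):
    """Goal reward for a kick: 1.0 when the foot is exactly at the
    target at the moment of the kick, falling off with distance.
    One reference clip plus this term covers many target positions."""
    foot = np.asarray(foot_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    d = np.linalg.norm(foot - target)
    return float(np.exp(-scale * d ** 2))

on_target = kick_target_reward([1.0, 1.2, 0.0], [1.0, 1.2, 0.0])
miss = kick_target_reward([1.0, 1.2, 0.0], [1.5, 1.2, 0.0])
```

During training the target is randomized each episode, which is what forces the policy to generalize beyond the single mocap clip.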
I *_love_* the _peculiar gait!!_
Just awesome
Why train the humanoid like a fighter and not like a care giver?
looks more like a dancer than a fighter
Because it's less fun
Probably because they are complex motions.
Please make a fighting game with physics like this! Current fighting game engines have not advanced since they were first invented some 30+ years ago.
SIGGRAPH 2021 fulfills your dreams lol
look like Toribash character from the game.. I wonder if you could make an AI learn this game.
I think that this is how neo learned so fast how to fight
wow this is amazing
Company of Heroes 3 is gonna be so DOPE!!!
westworld, guys?
1:20 Victory lap baby!
Soon
Did he just lose his shit here? 2:58
4:46 I died laughing
It's 99% reference.
that lion ran into a cloud of cubes
Next challenge: dragon backflip :)
5:20 walks like a drunk!
Yay For Robo BaseBall not yay for terminator
Doomsday is near, I could feel it
you didnt have to do atlas dirty like that
Atlas - president of Earth in future
It would be cool if goat simulator had this in it.
I could do most of this stuff, but I keep sitting around eating Dunkin' Stix 😒😫
how could you do them even if you tried?
wow
4:10 poor lion :'(
i know kung fu
This is amazing
Now that Skynet has learned combat, we are doomed
In the case of keyframes, I think you began to see a problem, especially with the dragon model: the character isn't so much moving by activating the right muscles as it is straining muscles that aren't actually flexed in the keyframe, moving just enough to convince the program it's using the muscles it sees moving. I'm no scientist and definitely did less research than you, but this seems like an easy-to-notice issue.