Meet Your Virtual AI Stuntman! 💪🤖
- Published: 14 Jul 2024
- ❤️ Check out Lambda here and sign up for their GPU Cloud: lambdalabs.com/papers
📝 The paper "DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills" is available here:
xbpeng.github.io/projects/Dee...
❤️ Watch these videos in early access on our Patreon page or join us here on RUclips:
- / twominutepapers
- / @twominutepapers
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
Thumbnail tree image credit: pixabay.com/images/id-576847/
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / twominutepapers
Web: cg.tuwien.ac.at/~zsolnai/
I like the way the AI thinks that humans should run before it sees the reference. I think I’ll start running like that when I jog.
Perhaps if we had muscles with an infinite supply of energy, this would be the most efficient way of running, but it must be extremely tiring to run while raising the knees that high and with the arms flailing like that.
@@esleygonzaga1769 Same with the arms. With infinite shoulder power you can raise your center of mass for finer-grained control.
@@esleygonzaga1769 Exactly what I was thinking. Add "tiredness" to the model, as well as "endurance" to the fitness score.
This is how Naruto runs when he goes super saiyan.
We know more, we evolved to run
AI-man lying on the floor trying to backflip: "Why was I created like this?"
Made me think of some people who tend to keep trying after the initial conditions for success are gone.
@@MartinHindenes That is an interesting way to put it. What do you mean? Like a guy born 5 feet tall trying to date?!
Existence is pain, Jerry
"why do we live, just to suffer?"
@@MartinHindenes That's right, this is life
So what I'm hearing is:
We could teach a robot dragon to walk near perfectly on the moon before the hardware ever leaves earth. Amazing.
What a time to be alive!
Profile picture checks out
even when space debris starts hitting the robot
Exactly what I heard. :)
That's generally a good workflow honestly. Prove you can simulate it before you waste metal on an actual dragon.
I wanted to see the T-Rex doing a backflip :(
Me too
It would be interesting to see which technique an AI would come up with, since the tail and other parts make it a non-trivial task
I think it failed to do a backflip, so they didn't show it in the demo. I mean, I would definitely try it and show it if it worked :D
And if it did a perfect backflip, what would the next Jurassic Park be like? :)
@@jvankooo
TREX FLIP
BOTTOM TEXT
This could really speed up the animation process from movies and games.
I've been preaching about this for a long time. Every game studio needs an animation AI expert to help streamline their whole process.
Imagine a game where after shooting the enemy leg they start to stumble and limp realistically.
With this stuff getting as advanced as it is, if RDR2 had been made maybe 10 years later, all of the intense animation work that was put into it could probably have been handled by an AI that works in real time.
@@godofthecripples1237 It's incredible; every time they say a job can't be automated, boom!
@@The0Stroy Some games already have that, but it could be easier if, instead of being coded manually, it were automated by the AI.
When I told my parkour and stunt friends that one day a robot would take their jobs, they laughed at me. Well, here we are ...
I'm a professional stunt man and this shit is both exciting and very worrying lol
@UC_9WEQOrt2m2Y7J4U7UElSg That was really interesting! Thank you for sharing
This tech would be super useful for a parkour game like the one Storror is working on. If this tech gets optimized to the point where it can run in the background of a game in real time, moving characters dynamically based on the environment would be so much easier!
Ministry of Silly Walks: Looks like we no longer have a place in this world.
They might have laughed at you for being a luddite😆
6:10 - The Normie "Reference Motion" vs. The Chad "Use More Body Volume" vs. The Virgin "Use Less Body Volume"
lol, I had the same thought
One looks like a zombie. One looks like it's holding the biggest shit of his life
Normie VS Vince McMahon VS Post-Chipotle
Early termination
0:55 He looks so happy and carefree.
Me on my first trip to the pub after lockdown
Underrated comment 😂😂
Me vs research papers ~ sleeps in 5 minutes and thinks it’s fucking boring
Me vs Two Minute Papers ~ watches the full video on something random and thinks it’s so cool.
Videos are really well explained and visuals and presentation are incredible
This one got me good. Thank you so much! 🙏
Narrator: "You can also see that the technique is robust against perturbations."
Atlas: *Gets buried.*
Boston dynamics robots vs boxes
rwby
0:55 These funky procedural walks will eventually stop happening as more factors are considered/added; e.g. limb weight, resistances, stamina, muscular energy consumption, energy-efficiency awareness, etc.
Eventually we'll see models figuring out how to run like Usain Bolt on their own
These kinds of things seem like such obvious innovations that could be employed to get better behavior. I wonder why they aren't already being done.
Another consideration is frequency of movement. Energy consumption would guide this some, but energy isn't the only resource: the human brain can only put so much detail into a movement.
@@Taudris I bet somebody is working on it right now. I think there is a maximum speed at which things can progress, because it still takes a human to write and run the code, write the paper, get it reviewed and accepted at a conference, etc. In AI there are a lot of obvious ideas like the ones you name, but getting them really right takes time to fix bugs and especially to tweak all the parameters involved in such a simulation. My understanding is that a lot of trial and error is still involved in getting good results. Some AI researchers might be offended, but a lot of deep learning research is probably still more the result of intuition, exploration, and trial and error than a solid science grounded in theory. I'm probably not qualified to make this a strong statement; it's just a hunch I got as a student recently getting into the topic.
just imagine, the first AI that takes over the world won't figure out that its hard before it tries & fails miserably
So: total energy use (enthalpy + potential energy + kinetic energy), accounting for limb weight, trying to minimise the amount of energy used per movement or stroke...
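The reward-shaping ideas in this thread (energy cost, tiredness, limb weight) can be sketched as an extra term in a motion-imitation reward. Here is a minimal Python illustration with hypothetical weights, scales, and function names; it is not the paper's actual reward, just the general shape such a penalty could take:

```python
import numpy as np

def imitation_reward(pose, ref_pose, joint_torques, joint_velocities,
                     w_pose=0.9, w_energy=0.1):
    """Hypothetical motion-imitation reward with an added energy penalty.

    pose, ref_pose: joint angles of the simulated character and of the
    reference motion at the current frame.
    joint_torques, joint_velocities: per-joint actuation state; their
    product approximates instantaneous mechanical power.
    """
    # Pose-matching term: exponentiated negative tracking error, the
    # common shape for imitation rewards.
    pose_error = np.sum((pose - ref_pose) ** 2)
    r_pose = np.exp(-2.0 * pose_error)

    # Energy term: penalize mechanical power so "infinite stamina"
    # gaits (high knees, flailing arms) score worse.
    power = np.sum(np.abs(joint_torques * joint_velocities))
    r_energy = np.exp(-0.005 * power)

    return w_pose * r_pose + w_energy * r_energy
```

A resting character that tracks the reference perfectly gets the maximum reward of 1.0; the same pose reached with large torques at high joint speeds scores lower, nudging the optimizer toward economical motion.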
It would be fun to see how fast a humanoid can run.
Takeshi's Castle for AI. LETS GO!
Right you are, Ken!
bro that would be so fun tbh
Lmao should i create a project like that? 😂
Thanks for the idea sir noted.
2:15 How I imagine doing backflips (Reference)
2:10 How It looked like when I tried to do it (Simulation)
Jokes aside, this is a really great advancement in our virtual technology. Two Minute Papers's narration is great as well, clear and on point.
When men leave the gym: 6:12
*Full body volume*
Me walking to the toilet: 6:12
*Discourage full body volume*
0:55 what if that is actually the correct way to run?
yes, if we had infinite stamina and energy
Could be. I think it's the AI's way of maintaining perfect balance, possibly more balanced than how humans do it.
@@npc4416 lmao
None of my friends can ever tell me a backflip is easy when even an AI chickens out like I do
AI : Can't learn
You : Self preservation
And as I see it, this agent is blind! Imagine if it could scan the "game level"; its progress would be superhuman! Great paper, thanks Károly!
I discovered your channel yesterday and i’ve been hooked. thanks 👍
4:27 " this one is doing well" .......oh
my life when i try to learn backflip.
Hearing that a developers favorite past time is to throw boxes at AI characters to see what they can take somehow makes me feel better about my life and how far I’ve come.
The creators of this paper should work with The Pokémon Company to animate their massive collection of 3D models.
😂😂😂
This is insane. Imagine if this were applied to Boston Dynamics.
Could you please explain your comment? I am a physics student.
@@user-mw3rx6mg5n Google "Boston Dynamics robots" or look on RUclips. He's suggesting their robots could be improved by applying such learning techniques (or the results).
@@peter9477 you do know their vids are "fake" right? They use human model and edit the robot into it.
@@Crustee0 no... no... you are VERY WRONG!!
I can't get enough of the ragdoll/humanoid AI papers. It's just so interesting watching AI mimic humans, really eerie o.O
That's amazing! I also love that Peter Lorre narrates it.
Oh man, I'd love to see some more follow up papers to this!
The potential this software has in game development and animation is absolutely amazing. Imagine a player character with this AI as a backup, looser animations, and unique interactions between the player and the environment.
Possible future development could include 'reflex response' characteristics, such as if a character is falling, that they try to grab out toward the edge or any protrusions etc. Another example would be if a projectile is coming toward a vital area, that the character flinches. When combined with the energy level simulation variables, it can create some very realistic behaviors. A character which is extremely tired (low energy) may not notice a projectile and therefore not flinch.
I love your enthusiasm. I still have no idea how you create these simulations, having only grasped the absolute basics of Convolutional Neural Networks looking at hand written digits. However, even that helps a little.
ONE OF THE BEST CHANNELS IN THE INTERNET!!! BRAVO!!! Greetings from Akon, temporal capital of Sanmartina!!!
This channel is pure gold.
This was probably the funniest episode I've seen😁 Amazing work!
This is now my favourite channel, but at the same time it scares me shitless. Imagine Jean-Claude Van Damme death robots and a T-rex.
Fascinated by these 2 minute papers.
Looking forward to the Two Minute Papers computer game as the AI will be immense and graphics amazing.
What a time to be alive. What a time to be born. I’d love to see more of the future!
You are in history of the world. Both from virtual and real engineering sides. Congrats!
This would be really cool to see in more video games. I always thought that jumping and movement animations in platformers would be cool if they were more realistic. That way we could learn movements from them, since we’d be functionally learning from a master.
The next step is obviously to give it a set of simulation options for locomotion, place it in a virtual level with obstacles, and task it with making it to the end, focusing on either efficiency of action or speed of action. One option allows for low-energy robotic ambulation; the other provides for emergency-response movements.
0:55 He looks so happy that he can run🤣
The part where it uses more or less kinetic energy could make stamina in games a very cool mechanic.
Awesome video!
There is one tiny error: the motion is not retargeted to the T-Rex and Dragon; those are based on keyframe animation.
What a time to be alive!!! 😃
So... Unreal Engine 5 is going to be used in Video Games, Hollywood, and Robotics. Sweet.
Unity and unreal: "bout time. '
In order to make these fine algorithms truly wonderful (and usable across a wide spectrum of cases), is it yet possible to use "straight" video (of a human, a cat, or any animal) as the reference motion? That could mean, for instance, that a different algorithm parses the reference video motion into CG motion (i.e. mocap producing, for example, BVH or FBX) so that it can then be fed into this algorithm. From an admittedly quick glance at the paper and GitHub, I could not see what format the reference was supposed to be in, but it must be there somewhere. There are commercial mocap offerings for "straight" video (with no depth cameras), but none that I have seen is good enough for this purpose - unless it was cleaned up by some other AI, possibly even a variation or adaptation of the one presented here?
Maybe by using two different AIs together we could achieve that. One AI would find the positions of the bones and joints to make a stick figure with different levels of tint/shade to indicate depth. Then a separate AI would create a 3D model and animation from the stick figure.
ruclips.net/video/F84jaIR5Uxc/видео.html
☝️ Poses estimated with techniques like this should suffice. Even if the reference motions are "glitchy" (discontinuous in position through time) from the pose estimation, the mimicking RL agent will be forced to create a smooth approximation, because the agent cannot produce discontinuous motions in the physics simulator while only having control over force applications at the joints.
Then you can optimize for minimum force applied at joints or for minimizing total kinetic energy to really get a nice looking result.
The author of this paper has published a follow-up paper.
xbpeng.github.io/projects/AMP/
The follow-up paper uses GAIL. There is a separate critic model that tries to distinguish between the reference animations and whatever the physics rig is doing, and the physics rig tries to fool the critic. I think this could be used with your idea, as long as some physical measurements (rotations and velocities) could be estimated for the joints from the video sources (another model could be trained to do this). These measurements could then be used as inputs for the GAIL critic.
@@yeatard interesting idea, thanks.
@@tchlux Thanks for pointing out the "AI-Based 3D Pose Estimation" paper/video. Very promising. I would love to see mocap results for an animal such as a dog or a cat, because with good mocap and even better predictive motions (so that I can direct my animated cat to jump on a stool or stroke its whiskers, etc.) animated films could at last become affordable to make (thus, in theory, allowing for better and more original stories).
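For readers curious how the GAIL-style critic discussed above turns into a reward signal for the physics rig, here is a minimal sketch. A linear logistic model stands in for the small neural network a real implementation would use, and all names and feature choices are hypothetical:

```python
import numpy as np

def discriminator_reward(transition_features, weights, bias):
    """GAIL-style reward sketch: a discriminator scores a (state,
    next-state) transition; the policy is rewarded for transitions
    the discriminator believes came from the reference mocap data.

    Here the discriminator is just a logistic model over hand-built
    features (e.g. joint rotations and velocities); real systems use
    a small neural net trained in alternation with the policy.
    """
    logit = transition_features @ weights + bias
    p_reference = 1.0 / (1.0 + np.exp(-logit))  # P(from mocap data)
    # Standard GAIL reward: -log(1 - D); large when the policy fools D.
    return -np.log(np.maximum(1.0 - p_reference, 1e-8))
```

The discriminator itself would be trained in alternation with the policy: maximize log D on mocap transitions and log(1 - D) on policy transitions, so each side keeps pushing the other.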
0:45 once I saw this, I immediately went to grab a piece of paper for me to hold on to.
Edit: Having seen the entire video, I believe this would be very impressive for procedural animation. Some video games already use procedural animations, but this would allow it to be used for the entire animation process, which would drastically cut down on time/effort.
One day, such a system will be able to develop a martial art better than the one Bruce Lee invented!
Well no, because Bruce Lee's was basically 'use whatever works, man.' It basically defines itself as the best.
This paper is a giant leap
I am so ready for video game characters to move properly in melee situations based on the target point and terrain setting
This is really really cool. Human simulation has never been this automated before.
6:10 The chad stride in the middle
0:56 When you try to catch that bus
That running animation looks like something I've seen on stick figure animation forums, not gonna lie.
Your kind of paper sounds really cool.
Imagine having an AI controlled character in a video game that is able to attempt to maneuver in specified ways given any dynamic environment. It would look less programmatic and more flexible.
Rockstar did something like this when they used Euphoria engine to animate its human characters in GTA4. Characters would hold on to handrails as they stumbled, push themselves off your bumper if you nudged them with your car, all demonstrating incredible self-preservation that would be impossible with canned animations. I believe the series still uses the engine. Not all of the animations use the engine though, they're canned until the AI needs to kick it in to keep a character from falling over or doing something that would look silly. They called it "intelligent ragdoll".
I don't know why, but that little guy trying to keep doing backflips even while sitting down, and you saying "A for effort", had me laughing so hard I was crying... really got me for some reason lol
0:50 AYO! The Jack Sparrow run! 😭😂
That backflip is so realistic at the start. It was exactly how I (don't) do backflips.
I'm absolutely stealing the AI's first run animation for some kind of project. That's just too beautiful.
It's so hilarious to just see the models start getting peppered by boxes
I can't wait for the next paper!
You ROCK my friend!!!
The potential applications of this AI-driven animation are just staggering. I hope I'm alive and still kicking to see it in motion in the future.
The guy on the floor trying to backflip is the funniest thing I've seen in a while. It just looks so silly.
I would love to see this technique simulating downhill skateboarding, finding new and improved ways to race and find the perfect lines and stances.
what a great mix of much wow and lolz :)
It seems that the AI looks for the most practical movement over time
the simulated back flip from 2018 is a perfect simulation of me attempting a back flip
The sad part is in the beginning: the simulation trying to do a backflip is like a picture-perfect representation of me trying to do one.
„A+ for effort, little AI“ 😂
I'm very interested to see if you could pit two of these virtual people against each other in hand-to-hand combat to see what martial artistry they might come up with - you could increase their strength or speed to find out what a fight between superpowered humans might actually look like!
Finally I can explain to my professor how I lost my papers while rushing to his lecture.
Ah, reminds me back to the old Endorphin (software) days.
god i loved that program
I gotta admit, I like the way the AI approaches running 0:55 😁
But jokes aside, this is an amazing paper! We are finally getting closer to robot pet buddies!
I could see this being used in multiplayer shooters to give player characters realistic animations to the environment. Battlefield 5 tried it but fell a little short. I can just imagine characters responding to the environment, position, velocity appropriately. It would do wonders for immersion
This would make such a fun parlour game
As a dance teacher, my heart is jumping for joy at the possibilities!
What would be interesting is tweaking point values based on energy used to resist gravity, possibly resulting in a more realistic run if the amount of energy expended is higher (no more arms in the air resisting gravity)
this would be nice addition to vr gaming
What a time to be alive indeed
This is one step closer to my dream of a customized martial arts style based on an individual's body proportions and strengths. Feed an RL agent motion capture from different martial arts styles and allow it to experiment, while a physics simulation calculates the strength and speed of strikes based on body data. Also a bit of parkour thrown into the mix for flair.
I would love to see how the model handles a body with changing limbs and mass distribution within a single sim. It would have to learn how each limb works individually and then develop a greater ability to consider itself "whole", with many limbs and a torso.
Watching the AI learn to backflip made me realize that I should try the RSI and ET methods on myself too
I was literally thinking of this a couple weeks ago
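RSI and ET above refer to DeepMimic's two training tricks, Reference State Initialization and Early Termination. A rough sketch of how they fit into an episode loop, with entirely hypothetical `env`/`policy` interfaces:

```python
import random

def run_episode(env, policy, ref_motion, fall_height=0.3):
    """Sketch of Reference State Initialization (RSI) and Early
    Termination (ET) in a DeepMimic-style training loop.

    RSI: initialize each episode at a random frame of the reference
    clip, so the agent practices the late phases of a backflip
    without first having to master the takeoff.
    ET: end the episode as soon as the character falls, so no reward
    is collected for flailing around on the ground.
    """
    start_frame = random.randrange(len(ref_motion))   # RSI
    state = env.reset_to(ref_motion[start_frame])
    total_reward = 0.0
    while not env.done():
        state, reward = env.step(policy(state))
        if env.pelvis_height() < fall_height:         # ET: fell over
            break
        total_reward += reward
    return total_reward
```

The little guy in the video who keeps attempting backflips while sitting on the floor is exactly what ET is meant to prevent: without it, the agent wastes most of its experience in unrecoverable states.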
This stuff is mind-blowing. AI has come so far.
The vengeance of the stick man would be cruel
I'm just waiting to see what the military is doing with AI right now
exploring new ways of killing people of course
Not exactly something to get excited about.
@@Androidonator Now the military drones can fly autonomously and identify the target with computer vision.
Just look at Boston Dynamics...
@@martiddy "computer vision" -- more like skin colour classifiers. Brown == fire missiles...
Scar: Long live the king. 5:31
Decrease body mass was a psycho walk. Also, I want to see a version of the human movement under MORE gravity, as well as seeing whether a lion could do backflips (i.e., non-humanoid figures trying to do nontraditional motions given the limitations of their bodies)
I'm sure we'll see that running style emerge in the next olympic 100m sprint
I love this man voice
6:15, middle guy is how I walk home from the gym.
That backflip looks like something I would do.
The future of gaming is promising. I'm all for it.
4:16 I couldn't help but think of that old obstacle course show Most Extreme Elimination Challenge, and wonder if we are in a simulation where our overlords are testing our machine learned animation and physics.
Ha, I saw the character running at 0:55 and wondered if it could be improved by considering energy usage, and then at the end they actually try it! It turns out it just makes the person look old.
I could see this replacing animation trees in AI for computer games. Instead of having to animate 50 different animation states and blends, you can have the AI choose the reference motion from an entire list and have it dynamically blend with the environment and actions, leading to more dynamic NPC (enemy, friendly, or neutral) movement and interactions.
Seems like a natural next step for Rockstar games: you could have the AI train on the fly as you progress through the game. It would be great to see this assign a value score to avoiding impacts to valuable areas like the head, and chain movements to stand back up, enabling successful movement after a failure.
Fascinating!
I don't blame the old one for not being able to do a backflip; it looks exactly how I look when I try
Atlas is being treated like usual. Boxes flying at them.