The walking is super natural! Can’t wait to see this type of stuff implemented in video games
And 3D software, like Houdini.
..and robots!
Get ready for 24 GB of VRAM being the minimum requirement just for this AI system to load
@kuromiLayfe Good thing we'll have 5090s with 32 GB!
@BabySisZ_VR If you are willing to lay down five digits for it :D
I don't always copy on tests, but when I do, the guy I am copying from is already two papers down the line 😅
xD
I've been there before!😂
Before you can walk you have to twitch!
1. Put the AI in a desert environment
2. Give it limited resources (wood, stone..)
3. Give it points for staying healthy in the environment which is always trying to deduct health points
4. Watch a civilization grow before your eyes or
5. Watch the AI character die horribly
Watch the AI creating AI
The simplest way would be to reward it with points for how long it stays alive. The digging-in part could be solved by not letting plants grow without sunlight, but then again it could just build a glass ceiling. I guess the best way would be to simulate boredom by assigning negative points for idling or doing repetitive tasks.
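A minimal sketch of the reward shaping this thread describes, assuming a generic step-based survival environment; the survival bonus, health-drain term, and repetition ("boredom") penalty are all illustrative values, not from any paper:

```python
from collections import deque

def shaped_reward(alive, health_delta, action, recent_actions, max_repeats=10):
    """Toy survival reward: points for staying alive, minus penalties
    for losing health and for repeating the same action too often."""
    reward = 1.0 if alive else -100.0   # survival bonus / death penalty
    reward += 0.1 * health_delta        # negative when the world drains health
    recent_actions.append(action)
    # "Boredom": penalize when one action dominates the recent window
    if recent_actions.count(action) > max_repeats:
        reward -= 0.5
    return reward

recent = deque(maxlen=20)  # sliding window of past actions
r = shaped_reward(alive=True, health_delta=-2.0, action="dig", recent_actions=recent)
```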
@ණChỉYêuMìnhEm Or perhaps it'll be more ambitious and those are the things holding us back.
@ණChỉYêuMìnhEm A feeling of power is an abstract concept.
@Haze_Nexus_real yes, it will be great :)
The best motion physics I've seen in video games is in football games like FIFA or PES. I can only imagine how immersive the movements will be in upcoming games.
Next paper should incorporate weight/gravity in the clothing/armor and see how the animations react to it (I hate it when massive cartoonish characters or characters with tons of armor feel weightless)
Which humanoid robot company will implement this first?
I don't think this is intended for robotics. Looking at the video, especially the training part, it seems like the animation model attempts to match an animation to a predefined trajectory.
@danguafer Train with the diffusion method virtually, using real-world physics and the physical robot's constraints, and once it's perfected, upload it to the robot.
You first need to build a robot that is capable of the fluid movement the AI makes before you implement it. Like, if the bot doesn't have the capability of fluid ankle rotations, then you won't be able to achieve it that easily.
@electroncommerce First you gotta build a bot that has the capability to perform such fluid movement before uploading the animations.
@CHIM3RA. Since the robot manufacturers know the parts and limitations of the components and the overall robot design, these limitations could be programmed into the virtual diffusion training environment.
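As a toy illustration of that idea, hardware limits could be applied to generated joint targets before they ever reach the training simulation. All joint names and limit values below are made up, not from any real robot spec:

```python
import numpy as np

# Hypothetical hardware limits, as they might appear on a manufacturer datasheet
JOINT_LIMITS = {
    "ankle_pitch": (-0.6, 0.9),  # radians
    "knee":        (0.0, 2.2),
    "hip_pitch":   (-1.5, 1.5),
}
MAX_JOINT_VEL = 6.0  # rad/s, applied to every joint in this sketch

def constrain_targets(targets, prev_targets, dt):
    """Clamp generated joint targets to position and velocity limits
    before they are used in the training environment."""
    out = {}
    for name, q in targets.items():
        lo, hi = JOINT_LIMITS[name]
        q = float(np.clip(q, lo, hi))  # respect position limits
        max_step = MAX_JOINT_VEL * dt
        q_prev = prev_targets.get(name, q)
        q = float(np.clip(q, q_prev - max_step, q_prev + max_step))  # respect velocity limits
        out[name] = q
    return out
```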
I wonder when this will become common in video games. This would be huge for indie devs, since it's basically motion capture quality.
It would be interesting if you could have a keyframe where you specify the transform for one specific body part relative to the torso (or to the world): for example, the head turned right to look that way while the body does its thing, or hand keyframes for attacking with a sword.
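A hypothetical way such partial keyframes could be specified, purely illustrative and not any real tool's API:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PartialKeyframe:
    """Constrain one body part at one moment in time; everything else
    is left for the motion generator to fill in."""
    time: float                                             # seconds into the clip
    body_part: str                                          # e.g. "head", "right_hand"
    rotation: Optional[Tuple[float, float, float]] = None   # Euler angles, degrees
    position: Optional[Tuple[float, float, float]] = None
    space: str = "torso"                                    # "torso" or "world"
    weight: float = 1.0                                     # how strongly to honor it

# Head looks right while the body does its own thing
look_right = PartialKeyframe(time=0.0, body_part="head",
                             rotation=(0.0, 90.0, 0.0), space="torso")
# Right hand reaches a world-space point for a sword strike
strike = PartialKeyframe(time=0.8, body_part="right_hand",
                         position=(1.2, 1.5, 0.3), space="world")
```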
I really, really love your videos; you're doing great work! For me, the pauses between the words don't need to be as emphasized. Great stuff! I also love your AI focus!
Pretty soon humanoid robots are going to have this type of agility.
I don’t think so. The hardware is not ready..
@JustLocal It's not far off. Watch the latest Tesla event; those movements were surprisingly smooth imo! And then look at the Raptor bot from KAIST hitting 46 km/h. That is faster than Usain Bolt's top speed of 44.72 km/h during his world record run.
I think with the investment AI and robotics are seeing these last 2 years we are going to see absolutely amazing things in the next few years!
It's intriguing that the hardware is not ready, though. I'd imagine that at some scale it could be. Surely the mechanics of the simulation can translate nicely into hardware, and the power can come from electric motors or pneumatics.
It's a mistake to think that AI software progress equates to progress in physical machines.
TL;DR: don't set your expectations too high for now.
There's a huge hurdle from *animatronic playback* to *physical embodied dynamics*, and then tying that into some sort of useful cooperative skillset that an android can apply to real-world (chaotic) situations is a whole other level we are fumbling in the direction of, maybe.
Consider the problem of embodiment: every robot gripper/chassis/motor/battery is going to vary slightly and wear out over time too. The model needs to adapt to adverse conditions with only a few dozen sensor inputs to digitize, analyze, and compute from, then move and predict the future, hopefully before the state changes and renders prior solutions invalid. Contrast that with our living bodies, composed of a plethora of cells and organs working in concert to maintain fitness. Our bodies form a vast, self-preserving, cooperative problem-solving network we could not possibly replicate with our current material sciences and manufacturing methods.
Dr. Michael Levin's talks on the intelligence of cells gave me a deeper appreciation for the heritability, adaptability, and problem-solving that astonishingly emerge out of the 'simple programs' of electrochemical patterns that ultimately make up biological systems.
It just boggles the mind!
This is super impressive; I just wish some of this tech could come to engines like Unreal and Blender sooner.
Motion matching... for real. Also, they've added muscle deformation with ML.
I think the tool that automatically moves the character to fit an end pose is the most useful. You could create a few end poses for places the character has to move to (cutscene-type stuff) and have the character get there dynamically from anywhere.
@Cl0udWolf Motion matching... does exactly that. You're asking for something that already exists...
Last I heard, motion matching picks the best animation to fit a trajectory. This has the character make its own animations and trajectory to get into a position at a location. That's like saying forward kinematics is the same as inverse kinematics because they're both kinematics.
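For contrast, the core of motion matching really is a nearest-neighbor lookup over a database of captured frames; a bare-bones sketch, with the contents of the feature vector simplified:

```python
import numpy as np

def motion_matching_step(db_features, db_poses, query):
    """db_features: (N, D) array, one feature row per stored animation frame
    (typically foot positions, hip velocity, future trajectory points).
    query: (D,) desired feature vector built from the player's input trajectory.
    Returns the stored pose whose features best match the query."""
    dists = np.linalg.norm(db_features - query, axis=1)
    best = int(np.argmin(dists))
    return db_poses[best]  # playback continues from this database frame
```

The generative approach the comment describes would instead synthesize the pose, with no clip database consulted at runtime.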
I remember when "Overgrowth" was pushing the limits of physics based movement.
This is truly next level🤯
Another excellent video! I learn something new every time I watch your channel. Keep them coming!
I wonder how much performance overhead it'd take to implement this in a game? How heavy is the neural network?
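Nobody in the thread has real numbers, but a back-of-envelope estimate with entirely assumed values shows why a small controller network could be cheap at runtime:

```python
# All numbers assumed for illustration, not taken from the paper
params = 5_000_000                # a small pose-controller network
weight_bytes = params * 2         # ~10 MB at fp16
flops_per_step = 2 * params       # ~10 MFLOPs per forward pass (multiply-add per weight)
steps_per_sec = 60                # one inference per rendered frame
gflops = flops_per_step * steps_per_sec / 1e9
print(f"{weight_bytes / 1e6:.0f} MB of weights, {gflops:.1f} GFLOP/s")  # trivial for a modern GPU
```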
Doing it in real time is neat... but that takes processor time. It would be best if this generation were used to fill out keyframes in animation instead, so that it can be played back like our more common animation techniques.
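A sketch of that offline "baking" idea: sample the generator once per clip at a fixed rate and store ordinary keyframes that any engine can play back cheaply. Here `generate_pose` stands in for whatever model produces poses; it is not a real API:

```python
def bake_clip(generate_pose, duration_s, fps=30):
    """Sample a pose generator offline into ordinary keyframes."""
    keyframes = []
    n_frames = int(duration_s * fps)
    for i in range(n_frames):
        t = i / fps
        pose = generate_pose(t)      # expensive model call, done offline
        keyframes.append((t, pose))  # cheap to play back at runtime
    return keyframes

# e.g. clip = bake_clip(my_model.sample_pose, duration_s=2.0)
```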
I'm thinking the controls might end up hard to use, like a rage game, because of the inconsistency in responsiveness.
😎 QWOP, are you talking about?
Watch David Rosen's animation bootcamp talk from GDC; it absolutely won't be a problem with that approach.
Can't wait to see an enemy skip away from me right after shooting me 😂
Why would you want that?
It does seem to treat the head as just a piece of weight instead of using it to look ahead.
Next job on the chopping block: 3d animation
When will we finally see the animation in games?
When you look at 1-year-olds: they might spend a couple of weeks crawling, then pull themselves up to stand on two legs, and within a couple more weeks they are walking. Given that some of that is building muscle, it doesn't leave many hours to actually learn to walk. It's very quick.
But if you look at learning to speak, it's a different matter. Even at one they can understand instructions at a basic level, but talking takes a whole year and more. That's a far more complex learning task.
If the cost of realistic physics is some delay in controlling the character, gamers will get used to it. It's probably just a bit more delayed than RDR2 controls.
I'm not so sure; the problem is the inconsistency. The responsiveness will vary based on the current position, which might make for very rage-game-like controls.
Yeah, I get frustrated with animation/hitbox sloppiness.
Though this might be cool for the strategy genre or the now-popular "walking simulator" genre, hahaha.
Or perhaps it can be incorporated into gameplay, as many games do have you "steady yourself on a tightrope"... I love games that have emergent physics puzzles, even as a main gameplay element.
QWOP for example 🤣
It depends on the genre whether to use it in real time: for genres that require instant controls, they can just save the animations and use them like they do right now. The difference will be that the animation team will be able to produce 100 animations instead of 10 in the same time. As for genres that don't really require instant controls, like walking simulators or any kind of non-competitive, slow-paced game, we still need it to feel like there is no apparent lag.
On a side note: I have been watching these videos for years now. Yes, many papers. No, zero implementations in actual video games (at least in the sense of real-time generated or physics-based generative animations; I should remind everyone that Euphoria is still the king for some reason). There is either some issue with how this research is licensed (patents and such), or this channel just likes to exaggerate.
There is one project I am aware of, called Motorica AI (which can generate simple walking animations with presets), and even their resulting animations are not allowed to be used in games. Possibly meaning that they are trying to find a way around some licensing issues.
4:00 she went crazy because she couldn't hold onto any
Impressive. Not only will it be great for games, but imagine filmmakers who need to create huge realistic battle scenes, etc. Three years ago 90% of what we can now do with AI would have been an unrealistic dream!
Nvidia has had amazing AI for quite some time now. When will these models finally make it into games?
At this point, I am pretty sure that Nvidia itself is gatekeeping this research from being used in actual game engines. It has been so long since their first AI-driven, real-time, physically-calculated animation thing was published. Considering that they are still building Omniverse, I guess they will eventually evolve it into some sort of everything-app for 3D and use these techniques there.
So instead of controlling a character, you merely suggest that it move, like how we ride horses. 10/10, I'm sold.
Why are the feet sliding on the ground? How good is the physics simulation behind this?
There isn't any; physics isn't simulated here, but imitated indirectly via the constraints physics imposed on the original footage.
this is actually amazing
Now, I understand that the narrator's voice indicates that he's still in the middle stages of working out the smooth movement.
My long-term prediction: small parts of video game graphics will be replaced by generative models until entire games are built traditionally, but displayed generatively.
I cant wait to see this in GTA 97 and Sims 523 in 1000 years
GTA 6 or 7.
5:15 I guess Christmas came early, since budget Santa's holding onto his papers? Am I the only one who sees this?
Why is noise used?
Bannerlord will need this more than ever to animate its characters.
I can see that Deadpool is still making maximum effort 😂
Given the variable responsiveness, I wonder if the first application for this technique's descendants in games will be in a life sim, where the player has limited control anyway and the character needs to be able to avoid protruding obstacles or floor clutter while, say, carrying irregularly-shaped objects.
Life on Earth has been denoising to create the ultimate life form...
Genuinely curious: why is foot sliding even a problem? Shouldn’t it be solved by inverse kinematics? Isn’t it possible to “fix” the foot? 🤔
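It mostly is solved that way in traditional pipelines; a simplified sketch of runtime foot pinning, where the returned position would be handed to a two-bone IK solver (omitted here):

```python
import numpy as np

class FootLock:
    """Pin a foot to its touchdown position for as long as it stays in contact."""
    def __init__(self):
        self.locked_pos = None

    def apply(self, foot_pos, in_contact):
        foot_pos = np.asarray(foot_pos, dtype=float)
        if in_contact:
            if self.locked_pos is None:
                self.locked_pos = foot_pos.copy()  # remember the touchdown point
            return self.locked_pos                 # IK target: no sliding
        self.locked_pos = None                     # foot lifted: release the lock
        return foot_pos
```

The catch with learned motion is that the contact flag itself has to be inferred (e.g. from foot height and velocity thresholds), and hard pinning can fight the generator, which is presumably why sliding still shows up.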
Is there some custom software for modelling these AI games?
NPCs are going to get way better.
yeah!!
At 2:00, the model seemed to know capoeira. 😄
I can imagine a time in the future: Dr. Zsolnai-Fehér, enslaved by AI, with a big smile on his face and tears welling in his eyes, saying, "What a time to not be alive!"
What is this field called?
Did you know that while they were testing this, at one point the AI agents just stared at the screen? Even when they shouldn't have had any notion of where the viewport was, they always somehow knew to look _directly at the camera._
To this day, no one knows why.
Sounds like a good creepypasta ^^ Any sources that this is true?
doesn't sound real given that:
1. The AI shouldn't know where the camera is
2. The AI had no incentive to look at the camera
However, the agent might start out looking at the camera by default, and it's just trying to return to its initial state? (sounds kinda far-fetched)
prolly just some weird bug
@Georgeous42 Nah, I just made it up.
This could be good for crowd sims.
I think it will be a long time before the AI captures the human element to go along with this animation system.
Except for more dynamic movement, games won't be much different.
As you said, "Sign me up for future video game development."...🤨....
When are they training it on Michael Jackson's dance moves? I wanna see a little AI do some moon walking and other iconic MJ dance moves.
But can it hit the griddy?
Boston Dynamics' Atlas has amazing performance, but it still doesn't seem to have the same sort of fluidity as these simulations. This is a puzzle, as Atlas seems to have the power, the rotations, and the lever-type joints that might allow it to. Any insight?
Should be a fun tool for helping create military robots
once they are ready, our demise will be swift
Walking AI has come so far, wow.
Is this done in the Unity game engine?
I would like to know how to do that, because I am a Unity game dev myself.
Stop feeding them whiskey.
When will we ever see this technology implemented in software that we can actually use? It's always just papers and demos, but never a live demonstration.
What happens when you upload an anatomical muscular system and give it to the AI so it actually has to calculate how to move each muscle in tandem... At what point do we just create a full simulation of our own reality where the AI we create has to control the individual muscles we give it? What are we? Are we not just a fractal creation of God, mimicking what we are with these AI creations? After all, we are just attempting to simulate our own reality and make it look "real". Which raises the question: why do we desire so fundamentally to copy all the laws of physics we assign in these programs, in order to make it indistinguishable from our own, yet have full control over it? What end does that serve? Hint: a scary one.
Next level glitchy NPCs here we come!
Now add collisions with itself and others.
Now make them swordfight!
But are they really walking, or are they imitating the movements of walking? Is there physics, gravity, and muscles, or is this only for video games?
This is a good question, and it concerns my research. As far as I understand, this work does state prediction, whereas in robotics you require the action.
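One common bridge, stated here as an assumption rather than as what this paper does: track the predicted states with a low-level PD controller that turns desired joint positions into motor torques:

```python
def pd_torque(q_desired, q, q_vel, kp=200.0, kd=10.0):
    """Convert a predicted (desired) joint position into a motor torque.
    q: current joint position, q_vel: current joint velocity; gains are illustrative."""
    return kp * (q_desired - q) - kd * q_vel

# Each control tick, for every joint:
# tau = pd_torque(q_desired=predicted[joint], q=sensed[joint], q_vel=velocity[joint])
```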
It reminds me of Naruto learning the Rasenshuriken... 🤣
Are we ever going to see any of this tech in actual video games? I feel like we've been seeing videos like this for years now, and still nothing in games. I'm starting to suspect most of this tech only works in isolated cases and well-known environments. Does any of this generalize, or is it just fun and interesting experiments to look at?
Those little added comments always make me giggle.
Looks like drunk trolls 😅 That + ChatGPT would make for quite interesting NPCs.
Put this in Unreal and TAKE MY MONEY
Teach it kung-fu and see what it can figure out
Nothing Impressive compared to how a human learns to walk in 10-18 months 😂
yea, but what do they need me for?
I'd so love it if there were a game generated by AI, using such techniques for character movements, but with even the levels and dialogue created by an AI, making such a game a completely unique experience.
Meh. I do those moves any Saturday night 🥴
In Unreal Engine we are using a more advanced thing than this; you're showing some old thing to us. Look at the Unreal Engine motion matching system.
Lol. Motion matching is just an overly complex system whose only possible result is a rich animation system. There is nothing reactive or physical about it; it's only matching a trajectory to the already-available hundreds of animations. So, actually, you could use these techniques to create those "hundreds of animations", which would eventually make motion matching a viable alternative for all projects, instead of only multi-million-dollar AAA games.
Just look at any motion-matched game. All the ledges have either x or 2x height, the ground is mostly flat, and you can never have really complex geometry that animations react to. The 15-year-old Euphoria engine accomplished more than motion matching can possibly do.
Interesting
I love your videos, you are a very good YouTuber, can I get a comment heart? 🙂
😭 I didn't get first, but great video; what matters is the knowledge.
Third comment
Second comment
fourth comment
First comment 😅
first comment on the first comment.
💩
Your way of explaining things is terrible, as if it's from someone who has no idea what's really happening, or no depth of knowledge lmao
I don't understand the purpose of this, why not just animate a walking ragdoll lol
They are learning from scratch, with a defined goal. That's the point: the goal is not a product, it's proof that the methodology works and can potentially be applied elsewhere or expanded on.
Fourth comment