Did you enjoy our new logo animation? We love it. It was made by Tom Wilusz - check his reel out here if you need some pixel magic in your life (or projects). www.wilusz.tv/
It looks awesome
I want to run these animations on my PC (is a normal desktop powerful enough to run these simulations?)... but I have no idea what I need or what I should buy.
Great job, but what happened to the equations? I would really like to see them again :)
I didn't notice until I read this comment, silly me! Very nice
Is your voice synthesized in the video?
Born too late to explore Earth.
Born too soon to explore space.
Born right in time to rent GPU.
@Solve Everything :D
Actually, we can still explore the oceans and the deep forests ^^' haha But I got your point =D
@@ekzac ah yes, but they don't seem as exciting as the ones above
Miguel Almeida what do you mean, born too soon to explore space? There are so many discoveries being made.
Wow I wish I was born in 2030 it’s boring oh wait video games.
Anubis needs to sit down, he's had a long day
Hard work being a god.
anubis may sit down on my face
@@TomFowkes anubis would crush you in his presence
Yeah.
@@TomFowkes yes
The only thing to add to make those animations almost as good as, or better than, handcrafted ones would be some way to make sure the hands act more naturally and don't clip into the rest of the character's body or clothes.
What's amazing is that neural networks can compute literally anything that is possible to compute, if you can get them to learn it. Someone will probably train one to do that at some point.
Yeah, anti-jitter AI and anti-clipping AI where it adjusts cloth/items to not clip and characters/moving items to not jitter, would be sweet.
weighting 3D clothes is a bitch
@@xXYourShadowDaniXx you should see what star citizen is doing with their cloth physics: /watch?v=HjHn0iUpBVE&t=536s
That clipping of clothing is just bad cloth physics, which isn't what is being tested here. They could easily add proper cloth physics that won't clip.
I can't wait for some big game studios to start implementing such technologies in games! That smooth animation would look so amazing!
It also means potentially higher returns for investors if they can downsize the workforces involved in making games and reduce operating expenses. A win-win for everyone! Animation isn't the only realm in game development where automation/AI can reduce the workforce!
@@2drealms196 it's funny that you call reducing workforce a win for everyone. People lose their jobs if you "downsize" them.
Edit: I think I need to clarify that I am not against it per se, and I know it will come regardless. The immediate effect is still some people either having to adapt to a new version of their job quickly or losing it. This happened thousands of times in history before and usually came with an increased standard of living, but for the people affected immediately it is hardly a win.
@@ninjatogo I thought Assassin's Creed used something similar too?
@@lorenz.f Was about to say this lol. Jesus. We're almost at the age where we need to ban certain things or else everyone will lose their jobs
Rip my pc
He's like that dude from Cyanide and Happiness who can sit anywhere
**INSERT A LADDER JOKE HERE**
cancer
Arin Hanson, you mean Arin Hanson
overshot grunt ass cancer
i see. you are man of culture as well
A: Hey, I trained an exciting new AI!
B: What does it do?
A: It can sit!
To be fair, sitting is an extremely difficult thing to do. Why else would everyone be so stressed out all the time while spending on average 6.5 hours sitting down?
B: Good boy!
You can also say you trained a neural network to perform voice commands if you get your dog to sit.
There are great applications for this in engineering as human/machine and human/environment simulation can help make a lot of things a whole lot better... (not just chairs). Think about simulations of aircraft cabin evacuations that account for differing levels of agility and mobility of the passengers, differing levels of cabin damage, blocked egress routes, etc. This sort of things have higher uses than for amusement and I really wish them well.
Simple tasks are also really hard to do if you want your A.I. to be adaptable...
[2:00] When that character stepped through the wall my jaw dropped. Outstanding accomplishment all around, guys!
That’s not a wall
that's a wall
thats a wall with a hole
Cartoon animators: **exists**
Programmers: Im aboutta end this mans whole career
Programmers don't discriminate, we will end everyone's careers.
@@andrasbiro3007 yeah 🤟🏻
@@andrasbiro3007 You will end everyone's career, including your own, destroying job after job until everyone in the world is living on benefits, creating a deficit in social security systems all around the globe. And we'll have to figure out a way for people to get money and live their lives without a job, because there will be none anyway.
@@Egoistic_girl standard fear mongering
@@andrasbiro3007 Everyone meaning your own careers as well. General AI will be self-programming.
Wow! Can we see this implemented with "personality"? Like, having some parameters that change the type of the animation? For example, a really happy person will perform a different motion to sit down than someone depressed, tired, angry, or in any other emotional or physical state. This is even more true for other motions. Walking, for example. Even if two people are in the same mood, they will walk differently, and that can convey their biomechanics and their personality, as well as so many other things.
Would be awesome! But this is one level above the NSM, so we will have to wait a bit longer than for the NSM itself. If we assume this technique (the Neural State Machine) will be implemented everywhere in roughly 5 years, in CGI animation, gaming, etc., then I expect personalized, emotional AI animation in the 5 years after that. Which means all of this should be common 10 years from now (circa 2030), which is not that far away :D
Let's not get ahead of ourselves.
Yes, they can. Some researchers actually made a robot that used AI learning to teach itself how to walk around (on four legs), and then they cut off one of its legs and it learned to limp. So you could introduce constraints like this to the skeleton that drives the mesh, something that would limit some motions, and it would learn to work around them much like living beings do. As for mood, that would have to be faked, but it should be possible just by training it to learn both "sad walking" and "happy walking" and then applying one or the other to the character based on a preset state. The cool thing is that it should be pretty effective at blending these two styles of walking without much, if any, additional information.
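A minimal sketch of that blending idea, assuming a network that can be conditioned on a style label and outputs a per-frame pose vector; the function name, labels, and sizes here are all hypothetical:

```python
import numpy as np

def blend_walk_styles(pose_sad, pose_happy, mood):
    """Blend two style-conditioned pose predictions for one frame.

    pose_sad / pose_happy: hypothetical joint-parameter vectors produced by
    the same network run with a "sad" or "happy" style label.
    mood: preset character state, 0.0 = fully sad ... 1.0 = fully happy.
    """
    w = float(np.clip(mood, 0.0, 1.0))
    return (1.0 - w) * np.asarray(pose_sad) + w * np.asarray(pose_happy)

# Toy usage with made-up 93-dimensional pose vectors:
pose = blend_walk_styles(np.zeros(93), np.ones(93), mood=0.3)
```

Naive linear blending of joint parameters can look off for large rotations; blending inside the network's learned feature space, or interpolating rotations properly, tends to hold up better.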
UE4 plugin when.
Look at the Euphoria™ engine, dude. It's AI-based animation, used in GTA 4.
There goes my future as an animator 😀Time to plan my future again 😎
Αμίλα Κουμάρα Welcome to the club. 😆
It's true, sadly.
Get with the program. You can be the AI's handler! You'll get to create beautiful performances and you don't have to spend 1 day per second of animation sliding animation curves backwards and forwards.
vote for a UBI president... Or be an AI researcher.
You can still do that. Plenty of movements require some human touch in the end. AI will just help with the more brain-dead stuff.
That Egyptian animation scene could be the start of a cool game
@Axel Eli that looks pretty sick
4:07 The one on the left didn't have any artifacts, too. He was just having a seizure.
taser mo-cap dataset
1:41 - don't grab stuff like that, it's bad for the back; instead, crouch and lift...
I don't think it's going to be a problem for an animated character.
Don't worry. The AI will eventually learn how to get backache.
@@andrasbiro3007 Well it's gonna make the animated character look like it's had to lift something heavy a handful of times in its life.
@@andrasbiro3007 can't... process..... joke......
@@davideophile New feature: when backache, AI will move slower and complain from time to time
1950: By the year 2000 there will be flying cars!
2020 : Our 3D model just learned to sit on a chair of any size, what a time to be alive!!!!!
In fact "flying cars" already exist - we call them "helicopters". They are pretty expensive to own and to maintain though, and most people cannot afford them. They also run on kerosene, so if you make that cheaper, or somehow convince everyone that what they really want is a heli, you could have everyone driving around in "flying cars". But they've been around since the 1950s.
No, a helicopter is not a flying car, because it is hard to fly, dangerous, and unstable.
@@pedroisdkao8001 nobody is looking up at a helicopter thinking, "oh look, a flying car!"
@@thecakeThief "Nobody" except everyone who looks up and thinks "oh look, a flying car!"...
I mean, if you want, you can make it car-shaped; it'd just be less efficient, like the ideas people had in the 1950s.
I find it interesting that they used Awo- Anubis as the agent.
Keeping up the trend of canine character I suppose.
HechTea Compas Mate, I heard you almost say Awoobis. Don't try to slide that past me.
Yes. Very... Interesting, I'll admit.
@@TornaitSuperBird Uwubis
*FURRYBOTS*
O w O
With every video I'm just smiling at the end, thinking how fricking cool this is!
Ouch! Right in the hours-spent-learning-procedural-animation.
Morpheus: "We don't know who struck first, us or them."
Developers Everywhere: "It was them...it was definitely them." (gently sobs)
This is going to propel indie animation to a level where it's going to be as cheap as indie audio dramas.
@Solve Everything im sure you would want to stylize your animations tho
Wouldn't they have to pay for the AI animation software though, making it unaffordable for indie devs? lol
No kidding, everything costs money, but if you were to, say, implement it as a module for SFM, Live2D Euclid or Poser it'd MASSIVELY reduce the amount of manhours dedicated to animating detailed interactions which means you'd eventually look at a budget of thousands rather than tens if not hundreds of thousands of dollars.
Indie doesn't mean free. It means realistically affordable by a small group of private individuals.
@@Egoistic_girl If it's published, it might be free.
@@RGapskiM Well, the technique will be publicly available of course, but there's something you guys seem to forget: the humongous compute power you need to train those AIs. And this isn't cheap at all, even for actual game studios.
However, and fortunately, it will get cheaper and cheaper with time, and you can of course implement much less expensive solutions that will possibly work on standard machines, even if this means weeks of training. I sincerely hope that indie devs will be patient and dedicated enough to use them.
The number one giveaway that you are watching a video game today is the awkward animations, when it transitions between different motion-capture clips or just doesn't know the right animation for the scene. This is going to take games to a whole new level!!!
This could actually be revolutionary in the game industry!
-low to nonexistent input lag.
-better quality animations.
-good for crunch times.
In general this would help devs more than gamers but still has so many possibilities!
4:01
"natural, smooth, creamy, and I don't see artifacts"
AI: I'm pretty sure humans can just suck in their armpits like this.
I actually liked the one on the left more. It looked more like how I move. I’m a bit clumsy though so maybe that’s a bad thing.
*me, a 16 yr old guy watching YouTube at 1 p.m.:*
*Two minute papers:* deer fello schalars
*me:* mhm
AIs like this could generate a bunch of walking/sitting/whatever animations to make games look more natural than just seeing the same animation over and over again. Cool and interesting stuff.
Unreal devs, are you here? Let's just skip motion matching and implement this, okay?! Thank you!
i hope they'll see this video
you must think real time = worth the performance hit
Also, UE devs aren't responsible for how games are animated; the game devs are.
@Solve Everything You are probably correct on the first part, you are definitely incorrect on the second part.
@Jason Poole "Just purchase product man, just upgrade more"
3:00 my boi skipped leg day ay?
This is incredible stuff
I feel so guilty every time he says "Hello fellow scholars", since I'm just a normal joe nerding out.
There's one big glaring omission: the figure needs a proper tail. :D
That said, there is a problem with the armpits, but I suppose that could be resolved with a bit of careful vertex weighting.
As someone used to video game walking animation, Anubis felt so slooooow. I would be hammering the controller to find the dash button right about now.
We need to build a big, beautiful firewall to keep the AI from taking our jobs.
A.I can sit on it
Should we replace existing jobs with A.I. without a proper transition, even if the job is enjoyable?
If we ever get real AI our firewalls will be about as effective as trying to stop a nuclear bomb by ducking and covering
xD
Once a single AI can access the internet, it's over. Nothing can stop it.
Very impressive! Excited to see the impact it will have on the future game development scene.
This is amazing, I've been waiting all my life for someone to make an NPC that doesn't jump between pre-programmed motions. All he needs is the cloth simulator to stop his leg clipping through.
we NEED a game that uses this!! it's amazing!
Comment on AI paper so YT AI will recommend it to others ;]
It doesn't
I’ll try
Wait... thats not how it works
Nice
Autopilot...Drivers Unemployed
Automation...Workers Unemployed
AutoAnimation...Animators Unemployed
Bro, even the advert at the end was good! Well done!
Man, videogames in the future with AI character animations are gonna be cool
that's an amazing improvement over the previous work. Can't wait for games to actually make use of that!
Hopefully GTA 6
@@nikoha1763 just imagine
If it also had ray tracing and fluid simulation it would be better than the Matrix 😉
Imagine being able to make an entire animated film by yourself using this technology. That’s what I’m hoping to do in a decade or two.
Now you just need a deep learning algorithm to correctly align the arms and their height.
Cool tech! It reminds me of animating quadruped characters, which was painful indeed... With this tech, artists can put more focus on acting and facial animation.
I can’t wait until machine learning is widely used in games. I know it was used in the recent Microsoft Flight Simulator to generate 3D buildings from satellite images giving you an entire planet to explore. Projects like that would have been impossible to do by hand.
I wish my job was to sit on all kinds of chairs as well.
I like how he's just chillin' while you explain stuff.
In the next version the animation will learn to cross his legs when sitting whilst wearing a short sarong.
Where do you find this stuff? This is amazing I love your channel
It was featured at SIGGRAPH Asia, which is happening this week.
This is insane, imagine in the future if you can just take a video of someone and their mannerisms and you can make a fully animated deep fake of them...
I cannot wait to see a game that implements a full-fledged physics engine backed by AI.
God, when the animation is of such high quality, my brain ignores the texture and fur imperfections. Wild that the electronic dog looks alive.
You should probably play the two clips one after the other instead of at the same time, since we can't carefully inspect both sides simultaneously anyway.
I guess it will only be a matter of time until a video game studio creates its own AI of this nature, or licenses one, to drive the movements of humanoid characters in its games. It could also make player character animations a lot more realistic! :)
Our technological advancements have surpassed our humanity
Wow, it seems the gaming industry is going to change a lot because of this.
I love just how natural these kinds of simulations are, to the point where you feel like you could control it yourself.
I am trying to help someone who broke their neck in July at C4 (completely paralyzed but breathing on his own). My own brother died many years ago after living 18 months with a C2 break (completely paralyzed, on a respirator).
There are many technologies for picking up intentional and automatic signals from the brain, or skin near muscles. But my question is about AI control training and optimization. I think it is much easier to take intention signals and map them to robotics, than to try to map out the existing muscles and autonomic controls.
Have you looked at this topic at all? For everyday low-cost methods for the roughly 300,000 people in the US living with spinal cord injury, just feeding themselves, picking things up with robotic hands, taking medications, adjusting chairs, and controlling wheelchairs and cars are the usual "wanted list".
But the rising number of elderly, and many other causes of nerve damage, weakness, or paralysis, make the number much larger. Ask "how many people in the US are living with paralysis" and you get about 5 million. The US is 332 million; the world is 7,900 million, but it does not scale exactly: in many places in the world these injuries lead to death, not to permanent disabilities of various sorts requiring full-time or part-time care. I can tell you how many people are affected, what they face, what it costs, who is working on it, and what has been tried. But this piece - "control the existing muscles as well as or better than they were controlled and maintained before" - has not been done well. Much of the reason is that groups and individuals all work on their own, making things that benefit themselves, so most global issues never get solved. If you want, I can tell you about a few thousand such cases. But for now I just want to see if, by summer 2022, some specifics and examples can be demonstrated. The framework is important - who, why, where - and all the pieces need to fit together without the parts fighting each other. So it needs to be open, lossless, auditable.
Ignore where you get the signals; it is possible to 3D-scan anyone and "rig" the 3D model for animation. It is also possible to map the muscles, tendons, and details of the joints and activations involved. Yes, I know, it is tedious and time-consuming. But a few months of scanning would be tiny compared to the lives some live. So can you think how to create a global community to solve this once and for all, with low-cost, sustainable methods? You must know "constrained optimization" in many forms: modeled processes that you solve for one or several objectives. So imagine solving the global economic, social, financial, organizational, and political issues all at once, where one part is helping these kinds of individuals live with dignity and purpose.
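To make that "constrained optimization" framing concrete, here is a toy sketch of mapping a decoded intention signal to actuator commands under limits; the Jacobian, the signal, and the bounds are all hypothetical placeholders, not a real assistive-device API:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical decoded intention: a desired 3-D hand velocity in m/s.
v_desired = np.array([0.10, -0.05, 0.02])

# Hypothetical 3x5 Jacobian mapping five joint velocities to hand velocity.
J = np.random.default_rng(1).standard_normal((3, 5))

# Constrained least squares: realize the intention as closely as possible
# while respecting per-joint speed limits (+/- 0.5 rad/s, purely illustrative).
result = lsq_linear(J, v_desired, bounds=(-0.5, 0.5))
q_dot = result.x  # joint-velocity command for the assistive device
```

The same pattern scales to richer objectives (comfort, energy, safety margins) by swapping the solver, which is why the constrained-optimization view is a useful common language here.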
Right now, most of this kind of retraining - trying to use what little connection is left - is handled by human rehabilitation specialists and volunteers. All those neural pickups (both automatic and intentional) and the stimulation of existing muscles are covered by piecemeal methods globally. In brain-machine interfaces, most of the money is going into surgery and expensive, intrusive methods. But there are about 40 basic methods for 3D imaging of nerve activity that can give varying levels of data.
The groups now are happy if they can bang-bang the nerves and help people stand and take a few clumsy steps. But there is enough data to control the original muscles, or to bypass them and drive equivalent or better machine-controlled devices and processes. A non-invasive neural interface for machine welding would have the sensors and speed of a robot welder but the control and experience of a human. Yes, you could use learning methods to replace the humans, but humans are really inexpensive now. That is why Zooniverse can get millions of volunteers to do human-in-the-loop recognition problems that a decent algorithm could handle.
Richard Collins, Director, The Internet Foundation
Great, now robots can almost perfectly imitate humans to easily blend in and cause chaos!
Profile picture checks out.
Will be interesting to see how the technique can be adapted to incorporate the subject's mood into the animation too; e.g., how a subject approaches something fearfully vs. enthusiastically, or how someone who's happy walks compared to someone who's sad (at least similar to how an actor might portray the subject).
WOW! Absolutely phenomenal progress! I'm very excited to see what the future holds!
I threw my papers the moment I read the title!!
nice sitting simulator
In these vids, is the computer pretty much writing its own code, and when the sim gets advanced enough, can you use that code in things like video game NPCs?
cannot wait to see this stuff in games! fantastic work guys :-)
This is brilliant. Future games will surely have awesome animations.
this, This, THIS is what I've been waiting for since I was a child playing Quake. Imagine how much more realistic today's games (I also want to see this in older games) would feel and look if this kind of animation were applied to them. Thanks for the video.
I'm honestly starting to think that if you are into software development (more specifically, the videogame-oriented type) and you want to keep your fingers on the pulse of how the industry's workflow could change in a few years, this may be the most important YouTube channel you can follow today (well, that I'm aware of).
These are really going to need some work still to avoid some of those unnatural looking motions and model deformations, but it's such amazing progress.
Holy shit, that motion looked realistic. Imagine this in a VR game.
VR definitely needs it. Looking at hand animation and even motion capture in VR, the flaws are so obvious.
Holy crap! I am just thinking how awesome Assassin's Creed and Far Cry would look with this.
Nice video and knowledge! I can't deny it was funny to see Anubis having a chill day.
Everyone talking about how this will be used to make animations smoother or replace the jobs of animators (lol) but overlooking an even better use: to nicely animate content that doesn't come with the game. User-created content, mods, or characters generated on the fly. Imagine a character creator that lets you design really bizarre characters and still have them look natural! Or have the existing characters interact with custom levels/items without it being awkward!
This kind of adaptive animation is amazing, because a designer cannot predict every possible movement relative to every object, yet this works in real time convincingly. I wonder how long it will be before this technique is incorporated into most open-world and VR titles.
Computer Graphics and AI the two best sciences.
How long until we get procedurally generated, AI-interpreted worlds?
How much computing power does it require?
The expensive part is the training of the neural network. To run it, we use GPUs. Running an NN is basically a lot of matrix multiplication, and GPUs are really fast at that. It is cheaper than 3D graphics rendering.
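To make the matrix-multiplication point concrete, here is a minimal NumPy sketch of what running such a network amounts to; the layer sizes are made up for illustration:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One forward pass of a small MLP: two matrix multiplies plus a ReLU."""
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden layer
    return h @ W2 + b2                # output (e.g. the next frame's pose)

# Made-up sizes for illustration: 276-D character state in, 618-D pose out.
rng = np.random.default_rng(0)
x = rng.standard_normal(276)
W1, b1 = 0.01 * rng.standard_normal((276, 512)), np.zeros(512)
W2, b2 = 0.01 * rng.standard_normal((512, 618)), np.zeros(618)
pose = mlp_forward(x, W1, b1, W2, b2)  # the part a GPU makes fast
```

Training repeats this forward pass plus a backward pass over millions of examples, which is why it dominates the cost; inference is just the cheap forward half.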
Ok he can sit wherever but I was most impressed when he crouched through the hole in the wall 😯
Oh, so that's why animals finally look good in games.
I wonder if you could record a bunch of motion capture data of an actor doing a bunch of random things, then input that into the motion AI so that every character has their own body language that really matches their character properly, while smoothly moving around a complex environment!
As far as I know Google's reCAPTCHA uses machine learning to check whether your selection of tiles is correct and your mouse movement is natural. In theory it could be possible to train a network that introduces very specific little "glitches" into my mouse movement such that reCAPTCHA thinks I am a human regardless of what tiles I select.
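Nobody outside Google knows what reCAPTCHA actually checks, so here is only an illustrative sketch of the idea of synthesizing "natural-looking" movement - a hand-rolled heuristic, not the trained network the comment imagines:

```python
import numpy as np

def humanize_path(start, end, n=50, jitter=1.5, seed=None):
    """Hand-rolled heuristic (not a trained network): bend a straight mouse
    path into a gentle curve and add small per-sample 'glitches'."""
    rng = np.random.default_rng(seed)
    p0, p2 = np.asarray(start, float), np.asarray(end, float)
    p1 = (p0 + p2) / 2.0 + rng.normal(0.0, 20.0, size=2)  # random curve midpoint
    t = np.linspace(0.0, 1.0, n)[:, None]
    path = (1 - t) ** 2 * p0 + 2 * t * (1 - t) * p1 + t ** 2 * p2  # quadratic Bezier
    return path + rng.normal(0.0, jitter, size=path.shape)

path = humanize_path((100, 200), (640, 360), seed=42)  # (n, 2) array of x/y samples
```

A learned approach would instead train a generator against whatever classifier is doing the detecting, which is exactly the adversarial setup the comment describes.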
Oh man, that hind-leg sliding, I see that a lot in the game Planet Zoo. Drives me nuts.
(I'm a gamer by choice and just started watching your fluid-simulation videos. Cannot wait for the next 10 years of computing evolution.)
Perfect, now it needs to work with dinosaurs.
Imagine how terrifying horror games will become with this technology
This was a year ago, and games still have AI taking 90-degree turns and sliding their feet whenever possible.
Now I want to see lounging and irregular sitting configurations that can be remixed and that factor character weight into where and how the character can sit. Maybe even factor in muscle tension, so that these AI characters can attempt to plank realistically on different objects.
Every time this channel posts a video, I get a mini existential crisis.
There is just an artifact in the rig itself under the armpits, but I guess the rigging wasn't done by an AI.
Love these kinds of vids.
From rigging and animation to direction and suggestion.
This is such great news for gamers and movie watchers around the world! 😊 Great work!
This sort of AI could also be perfect for video games, where it could animate character movement live depending on the geometry, the physics objects, and the player's interactions! It could also be a solution to the problem of realistic door opening and door interactions in video games! :)
I might not understand all of the technicalities in these papers, or the science behind all of this, but it is honestly very inspiring and cool to see the state of technology in today's society.
Could we use this as an animation generator between a player model and any kind of entity? Or simply as a generator for movement animations?
Would love to see a game that uses this stuff. Idk if any current games use NNs, but it would be really cool.
Those animations are real-time, so they can be used for games - and a quad-GPU CG workstation commercial right after.
Honestly, I've thought about this kind of stuff. SOON AI will be able to create ENTIRE worlds and games with barely any intervention on our part.
These AI papers are crazy impressive.
You know something's happening when even the promo video is struggling with FPS.
SAG is sweating nervously. Virtual actors are coming.
Some day we will have legitimate ai actors.
Awoobis needs a chair
Imagine The Sims, PES, or GTA with this technology.
I really hope I live long enough to see this stuff implemented in games. Star Citizen will be incredible if they can pull it off
This is amazing. But when will something practical show up?