[SIGGRAPH 2018] Mode-Adaptive Neural Networks for Quadruped Motion Control

  • Published: 11 Dec 2024

Comments • 218

  • @bourbonbobo
    @bourbonbobo 6 years ago +285

    So realistic! I want to pet this good dog you trapped inside the computer.

    • @colderplasma
      @colderplasma 6 years ago +6

      It's a wolf, bro, that dog will tear your face off

    • @bourbonbobo
      @bourbonbobo 6 years ago +15

      Oh no :0 this cute canine is trapped in the computer so I can never pet its lovely fur, and it, as I'm sure it's just so eager to do, can never tear my face off. This, my friend, is a true tragedy...

    • @jiinkC
      @jiinkC 6 years ago +5

      Richie Aprile
      It's not programmed to tear your face off ;)

    • @DanielMoraisdmgm
      @DanielMoraisdmgm 6 years ago +1

      for now

    • @Zoza15
      @Zoza15 6 years ago

      Or might be rabid XD

  • @ShrubRustle
    @ShrubRustle 6 years ago +26

    What a good boy! The back legs feel a bit stiff, particularly during running, but that's probably either because of the source sample or the fact that the paw tips don't have a bone to move.

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +5

      Yeah, that's a good point. Indeed, since the motion capture facility had a rather small space, the running was never over a really long distance and we only got very, very short sequences. That also meant that most of the motion happened in the front legs, and the back legs only moved a bit (for reference, see the source motion capture data at 1:40 - 1:52). According to our statistical comparison against the ground truth in the motion capture (see paper), our network reproduces 80-90% of the motion specifically for the legs.

    • @ShrubRustle
      @ShrubRustle 6 years ago +3

      Sebastian Starke Thanks for your reply! Would this process be reasonable with manual animations? That is, if an artist either wants to animate an unrealistic or unnatural animal, or if they don't have access to proper motion capture, could they simply animate a sample?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +6

      Our motion capture data is actually very limited. For example, for canter (running), we only have like 5-6 sequences, which is like nothing. Nevertheless, it works quite well with the MANN at runtime. That means that defining a few sparse animations manually should be able to improve / modify what is learned by the network.

    • @ShrubRustle
      @ShrubRustle 6 years ago +5

      Fascinating! This is why I always love when Siggraph rolls around, I get to see cool demos like this.

  • @rafadembek724
    @rafadembek724 6 years ago +5

    Congrats! You made my dog believe it's a real wolf. He started to bark!

  • @davekite5690
    @davekite5690 6 years ago +16

    Great work - Aside from also demonstrating this on human motion, I think it would be very interesting to expand this from animal movement to character movement (i.e. layered motions for an animal that is happy/sad/tired/excited/old/young/wounded/etc - sooooo much potential...)

  • @Curt-0001
    @Curt-0001 6 years ago

    Very convincing locomotion! I can't wait to see it used in the industry.

  • @figa5567
    @figa5567 6 years ago +64

    Holy moly. That looks so smooth!
    I have no idea what's going on here, but fantastic work man!
    I could totally see this being a major contributor to any game that uses a quadruped as the main character... Probably overkill for secondary characters right now, but maybe in 10 years' time we'll have the GPUs for it :)
    The tail looks funky tho

    • @tehf00n
      @tehf00n 6 years ago +3

      This should be able to be calculated via a cloud server if the latency between input/network/response isn't too bad. Which means one day AI could all be done by a single company offering AI services.

    • @figa5567
      @figa5567 6 years ago

      True, but can you imagine a game that runs some form of quadruped simulation on, let's say, 10 units?
      For every player of said game, you'd need to have the network evaluate 10 times per frame/whatever unit you choose.
      Seems overkill for something like that. Sure, eventually it'll happen, if computers keep evolving the way they do.
      We can only hope :D

    • @nochtans342
      @nochtans342 6 years ago +34

      Evaluating action with an already trained network should not be that costly.

    • @hadriscus
      @hadriscus 6 years ago

      @tehf00n Sounds good, what could go wrong ?

    • @ronnetgrazer362
      @ronnetgrazer362 6 years ago +11

      Exactly Mees, you could pack this NN with a game, the hard work has already been done during development. The same NN could be generalized to do all quadrupeds, great and small. A modern GPU could do that without breaking a sweat.

  • @slowdragon4169
    @slowdragon4169 6 years ago +17

    My god, this is amazing. Quadruped motion is probably one of the hardest problems, but you seem to have nailed it almost perfectly. Really cool! That ending tho ;)

  • @Mikefiser
    @Mikefiser 6 years ago +1

    Hi, amazing demo. I played with it for a while and was kind of surprised how good it looks from such a small sample of data.
    I would love to see it implemented in-game, but the responsiveness is not the best. Great job!

  • @jonathanxdoe
    @jonathanxdoe 6 years ago +17

    Amazing, just the back legs look a little stiff and there's some foot sliding. Wish you the best with this!

    • @Hectoricisboss
      @Hectoricisboss 6 years ago

      That has to do with the sampling rate of the mocap and the overall resolution relative to the movement data gathered from the capture (movements too fine/slight).

  • @vertxxyz
    @vertxxyz 6 years ago

    Can't believe I'm just catching up to this gem now! I assume Unity's looking into the relevance of this for their own Kinematica animation system. It'd be lovely to hear news of y'all working together in future :P

  • @zippysqrl
    @zippysqrl 6 years ago

    Excellent work! I've always wanted stuff like this in games.

  • @NKPyo
    @NKPyo 6 years ago

    This is so beautiful, I teared up a lil bit. Damn, I am amazed.

  • @PeanutCupProductions
    @PeanutCupProductions 6 years ago +3

    Hey, I used to do quadruped primate locomotion research, and I think your approach might have a significant impact in the neuroscience research field. I'd love to hear about what kind of motion capture data you used, it would be extremely interesting to pass this through the gobs of primate motion capture data that exists. Happy to connect later!

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago

      Hi David, sure, just send me a mail and we can discuss things :) We use regular skeleton-based motion capture data of a dog, so nothing special happening on this end :)

  • @TomatePasFraiche
    @TomatePasFraiche 6 years ago

    Really awesome work!
    Can't wait to see this applied to any type of animation.

  • @PeterJansen
    @PeterJansen 6 years ago

    Holy shit. I can see this sort of thing supercharging crowd simulators like Massive and Golaem. Incredible.

  • @yanndetaf5725
    @yanndetaf5725 5 years ago

    Just amazing. Great job. Well done, Sebastian.

  • @tyelork
    @tyelork 6 years ago

    I'd love to see this technique be used in a game to achieve quick and realistic movement of quadrupeds!

  • @xXJulensolo2Xx
    @xXJulensolo2Xx 6 years ago

    Really good job! It's incredible how much machine learning and AI will revolutionize the world; video games are going to be a real mindfuck.

  • @mbunds
    @mbunds 6 years ago +3

    Imagine a robotic version of this, with this kind of agility. Now imagine a pack of those, deployed in coordinated formations.

    • @jaiggm
      @jaiggm 6 years ago

      Mark Bunds and ready for attack

    • @iSyriux
      @iSyriux 4 years ago

      It's already real. See Boston Dynamics.

  • @ArnoldVeeman
    @ArnoldVeeman 4 years ago

    That demo is awesome!

  • @RyMann88
    @RyMann88 2 years ago

    I know this is old. But hopefully this could become a plugin, with animation data (separately), to help indie studios achieve the same effects.

  • @paintingjo6842
    @paintingjo6842 6 years ago +12

    My only question here is how taxing would it be on a computer? Would it be viable to try and implement this in a game, where there's already a good portion of computing power used to render the world and control AIs? And if so, to which extent?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +33

      Absolutely :) All video material was recorded in real-time on my laptop in the Unity 3D engine (C#), and it requires ~2ms per frame for running the neural network single-threaded on an Intel Core i7 CPU using a port of the Eigen library. Note that the 16ms (60FPS) is just the rendering rate, not the time for generating the animations. (A rough sketch of this per-frame evaluation follows this thread.)

    • @DontfallasleeZZZZ
      @DontfallasleeZZZZ 6 years ago

      Have you ever considered that this resource utilization might be unnecessarily restrictive?
      Obviously if this technique is used in a game, it would be the game's unique selling point. It would be justifiable to spend the lion's share, sorry dog's share, of resources on it. Customers wouldn't care if the rest of the world had top-notch graphics.
      Why not use 10 ms per frame, and 2GB for the NN?

    • @pavelbazhin
      @pavelbazhin 6 years ago

      Great stuff! But I have a question - is this some kind of motion matching animation system? Or something completely different?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +6

      It is very different from motion matching. First, motion matching was (like the PFNN) originally designed for biped locomotion, and it is not clear how well, or even whether, it can work on quadrupeds. Second, our method is based on neural networks and scales better for very large amounts of motion data (no increase in computation time, and better interpolation).

    • @pavelbazhin
      @pavelbazhin 6 years ago

      Thanks!
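
A rough illustration of the per-frame budget Sebastian describes above: the network is evaluated once per rendered frame, and its ~2ms cost fits well inside a 16ms (60FPS) frame. A minimal Unity C# sketch, assuming a hypothetical MANN wrapper class; the names MANN, Predict, BuildInputVector and ApplyPose are illustrative, not the released API:

    using UnityEngine;

    public class WolfController : MonoBehaviour
    {
        MANN network;  // hypothetical wrapper around the trained mode-adaptive network

        void Start()
        {
            network = new MANN("Assets/Weights"); // load the trained weights once
        }

        void Update()
        {
            float[] x = BuildInputVector();   // trajectory targets, previous pose, style flags
            float[] y = network.Predict(x);   // the ~2ms single-threaded CPU inference
            ApplyPose(y);                     // drive the skeleton from the network output
        }

        // 480 is an illustrative input size, not the paper's exact dimensionality.
        float[] BuildInputVector() { /* gather user input and character state */ return new float[480]; }
        void ApplyPose(float[] y) { /* write predicted joint transforms to the rig */ }
    }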

  • @Muninnn_
    @Muninnn_ 6 years ago +2

    Can you work on Unreal Engine 4 and put this and all your methods on the Marketplace? I cannot work with Unity, and this is fantastic!

  • @Hamstertron
    @Hamstertron 6 years ago +2

    3:33 Are you saying the dog is no longer an "ambi-turner"?

  • @BakoBako863
    @BakoBako863 6 years ago

    Looks so good! I wonder if the actuators in quadruped robots could pull this off. Would be interesting to see.

  • @serpentineocean
    @serpentineocean 6 years ago

    Looking really good. Great video and narration.

  • @tzisorey
    @tzisorey 6 years ago +1

    I kinda want to get that tech demo and just walk around!

  • @leonitasmaximus4004
    @leonitasmaximus4004 6 years ago +6

    Just a heads up, not sure if it matters for your purposes: I downloaded the demo, and when you hold F and V it eventually sends the wolf through the ground and the UI glitches out. Just wanted to let you know in case.

  • @darthboxOriginal
    @darthboxOriginal 6 years ago

    This looks incredible! Great work!

  • @IIStaffyII
    @IIStaffyII 6 years ago

    This is totally amazing. However, I found a few things strange when trying out the demo.
    For example, the dog will jump after sitting when given the command to move forward, and go forward when given the command to jump (when sitting). The head is never straight, always slightly tilted in the anticipated direction.
    Combine this system with a bite attack animation and one could throw this into any wolf character in a game.

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago

      Thanks! :) Yeah, this can happen because the neural network models the statistics of the mocap. Since the data is very unstructured and sometimes noisy, the learning will also try to model these artifacts to some degree. Thus, capturing some structured and clean mocap data, as a company would do for a real game, would improve this.

  • @planetarta
    @planetarta 4 years ago +1

    This is perfect! Just what I am looking for! Is it possible to download for my own skeletal mesh quadruped in unreal engine?

  • @DontfallasleeZZZZ
    @DontfallasleeZZZZ 6 years ago +1

    Gaming’s next self-made billionaire is the person who does for animated characters what Minecraft did for physical levels: make them come alive for the user.
    Whoever makes this game should not waste time worrying whether the animation always looks “correct” or not or fits into existing animation production workflows.
    That would make as much sense as Markus Persson getting discouraged by comments from level designers about Minecraft levels being ugly.
    Instead, in this game, as many parameters of the simulation as possible would be exposed to the users, they would control the character “as-is”, and other users would create scenarios for the simulated character to exist in.
    Discovering the limits of the simulation, the occasions where it breaks down, as well as what parameters work best, would be part of the fun of playing the game, not considered a downside compared to traditional animation.
    Seriously, this could be the Next Big Thing.

    • @bahshas
      @bahshas 4 months ago +1

      We've had this since GTA IV in 2008; apparently nobody cared.

  • @jeromevuarand3768
    @jeromevuarand3768 6 years ago

    Feet sliding on the ground is one of the worst immersion-breaking artifacts in video games for me. It's amazing how well you fixed that for a quadruped, given how bad it still is in 2018 in most games, even for simpler bipeds...

  • @TheAuxLux
    @TheAuxLux 6 years ago

    That's a nice piece of rocket science.

  • @barbramorgan4467
    @barbramorgan4467 6 years ago

    That's just really cool.

  • @thirdeyenz
    @thirdeyenz 6 years ago +4

    Amazing! This looks far better than any other solution I've seen. How much work would it be to get the paws moving properly? I'm not seeing bones for them in the rig. Does it complicate the solution too much?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +9

      The solution itself is totally capable of this, we are just missing rotation data for the paws in the mocap data ;) However, some IK coding or a rotation calculation considering the previous bones might be used in that case.

    • @kiaranr
      @kiaranr 6 years ago

      Lots of options. One would be to detect foot planting based on the velocity of the foot and procedurally blend in an additive 'toe splaying' pose (see the sketch after this thread).

    • @thirdeyenz
      @thirdeyenz 6 years ago

      Yes, however I'm talking about the feet's forward and back rotation, which, as Sebastian mentioned, they don't have the mocap data for. Procedural foot splay would be the icing on the cake though.
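
A minimal sketch of the foot-planting idea suggested in this thread: treat a foot as planted when its world-space speed drops below a threshold, then fade an additive toe-splay pose in and out. All names are illustrative and the thresholds are guesses; this is not from the released project:

    using UnityEngine;

    public class FootPlantBlender : MonoBehaviour
    {
        public Transform foot;                   // paw bone to monitor
        public float plantSpeedThreshold = 0.1f; // m/s below which the foot counts as planted
        public float blendSpeed = 8f;            // how fast the additive pose fades in/out

        Vector3 lastPosition;
        float blendWeight;                       // 0 = no splay, 1 = full splay pose

        void Start() { lastPosition = foot.position; }

        void LateUpdate()
        {
            // Estimate world-space foot speed from consecutive frames.
            float speed = (foot.position - lastPosition).magnitude / Time.deltaTime;
            lastPosition = foot.position;

            // Move the blend weight toward 1 while planted, toward 0 while moving.
            float target = speed < plantSpeedThreshold ? 1f : 0f;
            blendWeight = Mathf.MoveTowards(blendWeight, target, blendSpeed * Time.deltaTime);

            ApplyToeSplay(blendWeight);
        }

        void ApplyToeSplay(float w)
        {
            // Hypothetical: rotate toe bones toward a pre-authored 'splayed' pose by weight w.
        }
    }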

  • @siqxyre8473
    @siqxyre8473 4 years ago +1

    Once I can get WolfQuest 3 and a better computer, I'm gonna take this wolf's base and such and mod it into WolfQuest. The current WolfQuest looks so stiff and computery, but this looks completely natural!
    They both use Unity sooo 😳

    • @eg-draw
      @eg-draw 2 years ago

      I wonder how that ended up?

  • @fabracht
    @fabracht 6 years ago

    Nice work. Very little residual sliding. Plus, some sliding from a real animal would be expected depending on the terrain it's traversing. In fact, why not try to quantify this sliding and link it to some sort of friction coefficient of the surface? (A sketch of measuring it follows below.)
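
One way to quantify the residual sliding suggested above: accumulate the horizontal distance a foot travels while it counts as planted. A small illustrative Unity C# sketch (names and thresholds are assumptions):

    using UnityEngine;

    public class FootSlideMeter : MonoBehaviour
    {
        public Transform foot;
        public float plantSpeedThreshold = 0.1f; // m/s: below this, the foot is 'planted'
        public float totalSlide;                 // accumulated sliding distance in meters

        Vector3 lastPosition;

        void Start() { lastPosition = foot.position; }

        void LateUpdate()
        {
            Vector3 delta = foot.position - lastPosition;
            lastPosition = foot.position;

            Vector3 horizontal = new Vector3(delta.x, 0f, delta.z);
            float speed = horizontal.magnitude / Time.deltaTime;

            // Any horizontal motion while 'planted' is a sliding artifact; this total
            // could then be related to a friction coefficient of the surface.
            if (speed < plantSpeedThreshold)
                totalSlide += horizontal.magnitude;
        }
    }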

  • @squeakypasta1698
    @squeakypasta1698 6 years ago

    There was no motion point for the tip of the tail, so it ends up looking very unrealistic, unfortunately.
    Besides that and the back legs, it's pretty incredible.

  • @victorhogrefe7154
    @victorhogrefe7154 6 years ago

    This is amazing work.

  • @黄翔-q9n
    @黄翔-q9n 6 years ago

    Comparing this approach with motion matching, which has been widely used in the game industry: motion matching uses the same input data (trajectory and previous-frame pose) as constraints to search for the most suitable motion sequence in a gigantic motion database through a cost function, whereas this approach uses a gating network to select among expert networks that reconstruct the motion they have learned. I suspect these expert networks are approximately equivalent to the motion sequences in motion matching's database; a motion sequence stores data in a more structured way, while these networks store it in their hidden-layer parameters. So I wonder whether we could train a gating network to learn to choose motion sequences from a database instead of choosing among several networks.
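
For intuition, a deliberately simplified C# sketch of the gating idea discussed above: a gating network turns part of the input into softmax blend weights over several expert networks, and the final prediction is their weighted combination. Note the actual MANN blends the experts' network weights rather than their outputs, and the INetwork interface here is purely illustrative:

    // Hypothetical minimal mixture-of-experts evaluation (illustrative only).
    public interface INetwork { float[] Forward(float[] x); }

    public static class GatedBlend
    {
        public static float[] Evaluate(INetwork gating, INetwork[] experts, float[] gateInput, float[] input)
        {
            // The gating network scores each expert; softmax turns scores into blend weights.
            float[] logits = gating.Forward(gateInput);
            float[] w = new float[logits.Length];
            float sum = 0f;
            for (int i = 0; i < logits.Length; i++) { w[i] = (float)System.Math.Exp(logits[i]); sum += w[i]; }
            for (int i = 0; i < w.Length; i++) w[i] /= sum;

            // The final pose is the weight-blended combination of the expert predictions.
            float[] y = null;
            for (int i = 0; i < experts.Length; i++)
            {
                float[] yi = experts[i].Forward(input);
                if (y == null) y = new float[yi.Length];
                for (int j = 0; j < yi.Length; j++) y[j] += w[i] * yi[j];
            }
            return y;
        }
    }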

  • @VoiceOfEuropa
    @VoiceOfEuropa 4 years ago

    This is going to be a game changer if you could make a Unity plugin :-)

  • @dj_tmc
    @dj_tmc 6 years ago +1

    Very exciting work!
    I'm getting ready to buy my SIGGRAPH pass today. What category will your session be in? Art Papers? Technical Papers? Panels? I want to make sure to get the most affordable pass where I can still attend.
    Also, I want to invite you to the Mocap Society's BOF that I'll be hosting that week. Will let you know as soon as we have a confirmed time and date. Let me know if you are interested in speaking there as well!

  • @ba1anse
    @ba1anse 6 years ago

    awesome! congrats guys!

  • @videos4mydad
    @videos4mydad 5 years ago +1

    Is this or will this be available for developers?

  • @NikitaAgafonov
    @NikitaAgafonov 6 years ago +41

    I wonder if the Boston Dynamics guys can use this for their SpotMini?

    • @DavidSaintloth
      @DavidSaintloth 6 years ago +12

      Very good question, and the answer is YES!
      But only if the DOF ranges of the mechanical joints on the SpotMini contain the ranges the trained model has. This is really a matter of creating a model that incorporates those DOFs, training that model in simulation using this method, and then, once trained, loading the model into a SpotMini body and watching it move around on command in real time as the desired gait mechanics are prescribed.
      This demonstrates certain things:
      1) Google was right in selling BD. It was said that their main gripe was the philosophical path BD was taking: they were not interested in transitioning from the statistical controllers they currently use to neural-network-driven models.
      2) Google determined that machine learning models would be the future of all areas of software, and so felt all their projects that could use them should; this led to the sale.
      3) Whereas a statistical method with good reinforcement from sensors and cameras can perform well, as BD's bots demonstrate, they are still computationally costly, still hand-crafted, and can't be readily trained in simulation. Neural models are completely different: they are readily trained in simulation, and once complete, loading them into a body that maps to the model's DOFs is all that is required to get extremely efficient, fluid, dynamic response from the physical body. Expect, when BD puts the SpotMini up for sale supposedly next year, that teams that have created NN-based training models for locomotion will QUICKLY train models to drive the body and outclass BD within weeks if not days of the SpotMini system being available. You read it here first.

    • @hadriscus
      @hadriscus 6 years ago +6

      As I understand it, this method is not trying to actually balance the wolf; being a computer model, it doesn't need to. However, any actual robot dog like the SpotMini would need balancing. Can a neural network help with that? Would it need to be trained differently?

    • @MaxLohMusic
      @MaxLohMusic 6 years ago +5

      I'm not so sure about that, David. I read that state-of-the-art video games still use very coarse approximations of physics, so a moving entity is equivalent to a "puppet inside a trashcan on a Roomba", meaning they don't actually have to balance to go places. The animation just needs to look realistic, but doesn't need to be physically sound.

    • @john_hunter_
      @john_hunter_ 6 years ago +1

      I think it's using AI for blending animations in a complex way. I don't think it's using physics to move the dog.
      There are some physics simulations that do use AI to control bipeds though. I think they also use simulations to speed up learning for their robots. I know that they use simulated environments to train autonomous cars.

    • @desktorp
      @desktorp 6 years ago +1

      Some people just can't resist the urge to senselessly namedrop unrelated shit.

  • @NeoKailthas
    @NeoKailthas 6 years ago

    Way to go. Nice

  • @elijahroyaie
    @elijahroyaie 4 years ago +1

    Hello Sebastian. This is awesome. I'm a second-year VFX student doing my assignment in this area. Do you know where I can get more information, or do you have something you can help me with? Thank you so much.

  • @0hate9
    @0hate9 6 years ago

    How the actual hell do you get the kind of motion at 4:46?

  • @jimmyjones9109
    @jimmyjones9109 6 years ago

    This is the future

  • @Erendislevrai
    @Erendislevrai 6 years ago

    Thank you YouTube, this is so random, but great

  • @dkin860
    @dkin860 6 years ago

    looks awesome!

  • @Neopiko1
    @Neopiko1 4 years ago

    This is amazing.

  • @LiamKarlMitchell
    @LiamKarlMitchell 6 years ago

    Great work.

  • @MaxLohMusic
    @MaxLohMusic 6 years ago

    Can someone explain to me in layman's terms exactly what has been accomplished? IIUC: usually, programmers/animators have to synthesize the animation for each key press themselves, using some mocap data and a lot of elbow grease, and this neural net knows how to move the dog without any of that? So without any additional programming, you press W and the neural net determines the exact animation?
    That's pretty sweet, but how does it know from the training data that "W" means it should move forward? More importantly: how does it know, from just looking at training data, how to transition from stationary to moving?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago

      Exactly, you just tell the network what it should aim for, i.e. target velocity, target direction, and type of motion style / action as single values, such as for jumping or sitting, and that's it :) No need for a detailed setup of all possible transitions between keyframed animations, no speeding up animation playback, no generating motion clips by hand. (A rough sketch of such a control vector follows this thread.)

    • @MaxLohMusic
      @MaxLohMusic 6 years ago

      Thanks. But how does the neural net know how to transition from stationary to moving? For example, is there some sort of time-based label on the training data saying like, at 23 seconds this real mo-capped dog started to accelerate forward and at 25 seconds it reached top speed?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago

      Max Loh The network has learned the motion function of how the animal can actually move, and performs interpolation given a set of control signals and the current state of the character. The underlying "rules" are extracted from the ground-truth motion capture data. Check the paper for details ;)
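
To make the control signals described in this thread concrete, a small illustrative sketch of assembling such an input vector in Unity C#; the layout and names are assumptions, not the paper's exact encoding:

    using UnityEngine;
    using System.Collections.Generic;

    public static class ControlInput
    {
        // Builds a control vector from user input: desired velocity and direction,
        // plus one-hot style values (e.g. idle / walk / canter / jump / sit).
        public static float[] Build(Vector3 targetVelocity, Vector3 targetDirection, int styleIndex, int styleCount)
        {
            var x = new List<float>();

            x.Add(targetVelocity.x); x.Add(targetVelocity.y); x.Add(targetVelocity.z);
            x.Add(targetDirection.x); x.Add(targetDirection.y); x.Add(targetDirection.z);

            // One-hot action/style flags: pressing 'jump' sets that single value to 1.
            for (int i = 0; i < styleCount; i++)
                x.Add(i == styleIndex ? 1f : 0f);

            // The real system also appends the character's current state
            // (past/future trajectory samples, previous pose) - omitted here.
            return x.ToArray();
        }
    }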

  • @aranganr
    @aranganr 6 years ago

    Very nice animation automation. Great. Is it possible to port this to Blender 3D as a plug-in?

  • @wolfgangsam4652
    @wolfgangsam4652 6 years ago

    this is soo awesome!

  • @LPMOVIESTVSofficial
    @LPMOVIESTVSofficial 6 years ago

    love it

  • @veggiet2009
    @veggiet2009 6 years ago

    It could be the overly soft shadows, but something still doesn't look quite... right

  • @lostmic
    @lostmic 3 years ago

    Poor doggy at the end 😂

  • @Salatiels
    @Salatiels 6 years ago

    really awesome

  • @GeoffCoope
    @GeoffCoope 6 years ago

    Impressive, very!

  • @thisisajoke0
    @thisisajoke0 6 years ago

    When does this get added to Mixamo?

  • @hippiefilthy9936
    @hippiefilthy9936 6 years ago

    Looks nice! Though I didn't quite get how you captured the original dog's motions for training?

  • @mmd9737
    @mmd9737 5 years ago

    How do I use it in Maya?

  • @kebman
    @kebman 6 years ago +1

    Awesome! Can't wait for the next dog game! Hey, can you hunt mice in the snow? I want to be a dog hunting mice, but jumping nose first into the snowww!

  • @des24man
    @des24man 5 years ago

    Can a BVH be created from this?

  • @BenjiBear
    @BenjiBear 6 years ago

    Well done.

  • @TheIsleNarrator
    @TheIsleNarrator 5 years ago

    Can we get this for UE4?

  • @AnonTen
    @AnonTen 6 years ago +2

    Can you provide a compiled demo?

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +5

      A compiled demo will be available soon on my GitHub: github.com/sebastianstarke/AI4Animation

    • @leon_meka
      @leon_meka 6 years ago

      look in the description :)

    • @GD15555
      @GD15555 6 years ago

      Any plans for 3ds Max, Maya, etc.? There are literally no plugins for quick character/animal animation.

  • @UTube2K6
    @UTube2K6 2 years ago

    Is this available for Unreal?

  • @tehf00n
    @tehf00n 6 years ago +74

    If this ever gets a single dislike, that person should be fed to the wolves.

    • @VS3d0v
      @VS3d0v 6 years ago +4

      Why do you even care?

    • @iurieceban126
      @iurieceban126 6 years ago +2

      To the virtual ones

    • @MuradBeybalaev
      @MuradBeybalaev 6 years ago +3

      Pathetic.

    • @AlterRektMLG
      @AlterRektMLG 6 years ago +3

      11 dislikes are there
      do it to em

    • @onground330
      @onground330 6 years ago +2

      How can one even dislike such a beautiful piece of technical innovation :(((((((((

  • @RoadHater
    @RoadHater 5 years ago

    Will this be implemented in UE4? Will the weights in the NN work if the skeletal model of the quadruped is modified?

  • @GD15555
    @GD15555 6 years ago

    Where can I buy a plugin for 3ds Max?

  • @alexandrepv
    @alexandrepv 6 years ago

    YES! THANK YOU!

  • @manictiger
    @manictiger 6 years ago

    I remember a few years ago, they said we were having trouble getting robots to be as smart as cockroaches. Well, looks like we'll be skipping that step and just going straight to dogs.

  • @leozendo3500
    @leozendo3500 6 years ago

    OOOOOOhhhhhhh. Wow. Impressive.

  • @tuff_lover
    @tuff_lover 6 years ago +1

    LOL for a sec I thought it's a Life of Black Tiger 2 EXCLUSIVE demo...

  • @captainAKAcapt
    @captainAKAcapt 5 years ago

    Has this system already been used for something, games or movies? Or is it in a beta stage? I am far from a technical artist, so I barely understand how this is done, and I am curious when games will start to implement such systems.

  • @LoneWolfZ
    @LoneWolfZ 6 years ago

    Most everything he said may as well have been in another language, but the results sure look nice.

  • @Geddy135
    @Geddy135 6 years ago +12

    now someone take this tech, add some fur physics, apply an animation for picking up an object, like a ball or stick, and make a little environment for me to be able to play fetch with my new virtual wolf friend in VR

    • @Pinedal
      @Pinedal 6 years ago +1

      Geddy135 I want a game where you play as a wild dog that is part of a huge pack and you hunt bears and jump from tree to tree like a baller ninja. They could call it Silver Paws: The Game.

  • @Grovezy
    @Grovezy 5 years ago

    Is this expensive on the CPU or GPU?

  • @Manquia
    @Manquia 6 years ago

    Very cool!

  • @subspark
    @subspark 6 years ago

    Does it factor in energy loss due to physical activity? An animal would get tired and make different choices as a result.

    • @paintingjo6842
      @paintingjo6842 6 years ago

      I think this would be fairly simple to implement in the NN: just keep track of that and feed it as a bias to the NN to influence the action it takes (see the sketch at the end of this thread).

    • @ronnetgrazer362
      @ronnetgrazer362 6 years ago

      Wouldn't you need mocap data of tired animal gait as well?

    • @getsideways7257
      @getsideways7257 6 years ago

      It would be much more interesting if the wolf itself was physically and kinematically modeled, while the AI was taught to drive such a system instead of trying to mimic canine movements visually.

    • @ronnetgrazer362
      @ronnetgrazer362 6 years ago

      But then it would need a whole lot of understanding about why the wolf moves the way it does. What muscle groups does it have, and how is it using them in certain situations? It's easier to go the behavioristic route and mimic the output of the black box.

    • @ronnetgrazer362
      @ronnetgrazer362 6 years ago

      One might be able to reuse it for synthesis of new ways of moving around, adapting parameters to optimize for a given challenge. If locomotive modes have been fed into the network, it should be able to come up with some realistic solutions. A GAN could help it optimize.
      This last thing makes it possible to reduce the need for mocap: you could train it on some 3D data first. Then, to teach it new behaviors, you need to find video material that shows the movements you're looking for. You make the generator render an attempt at mimicking it, matching the perspective of the ground truth. The discriminator will then help optimize the newly learned behavior. Because it hits the ground running (pun intended), having already learned some moves that share a high probability of a rig/bone structure, you can expect it to stop looking completely clueless fairly soon. Help it ease into challenges by presenting new moves in a logical order. You should learn to walk before you can learn to moonwalk.
      So for instance you do mocap on a few penguin moves. Then you make the system practice on re-enacting "March of the Penguins" in glorious OpenGL. By the time it gets it right, you can generate whole families of convincing penguins. This also works on cat GIFs, but a nice white background would probably help for the coming years.
      All this extra text comes from me trying to tell you that fidelity in understanding of kinematics would actually arise from mimicking, i.e. studying locomotion. The whole brainfart of "train first with 3D, then extend with a GAN by comparing 2D imagery" flowed from that. It's either been done already or just not a logically sound idea, you tell me.
      The weakness can be summed up with my first sentence: it can adapt existing parameters, but can it also invent new ones? You've mocapped a lot of monkey walk cycles, but then it grabs a branch with its tail in a clip the GAN gets fed. Chances of the generator figuring out what the discriminator wants to see are pretty slim.
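
Following the fatigue suggestion earlier in this thread: one simple way to bias the network would be to append an extra scalar to its input vector and decay/recover it with activity. A hypothetical Unity C# sketch; note the network would need to be trained with such a signal for it to have any effect:

    using UnityEngine;

    public class FatigueTracker : MonoBehaviour
    {
        [Range(0f, 1f)] public float fatigue;  // 0 = fresh, 1 = exhausted
        public float exhaustRate = 0.02f;      // fatigue gained per meter traveled
        public float recoveryRate = 0.05f;     // fatigue lost per second while resting

        public void Tick(float currentSpeed)
        {
            if (currentSpeed > 0.5f)
                fatigue += exhaustRate * currentSpeed * Time.deltaTime; // speed * dt = meters
            else
                fatigue -= recoveryRate * Time.deltaTime;
            fatigue = Mathf.Clamp01(fatigue);
        }

        // Appends the fatigue signal to an existing network input vector.
        public float[] Augment(float[] input)
        {
            var x = new float[input.Length + 1];
            input.CopyTo(x, 0);
            x[x.Length - 1] = fatigue;
            return x;
        }
    }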

  • @artbyaline
    @artbyaline 6 years ago

    Amazing!

  • @tyalikanky
    @tyalikanky 5 years ago

    Add ragdolls and some inertia to this and it would be perfect.

  • @haolihfaioefh
    @haolihfaioefh 6 years ago

    So when is Okami 2 happening then?? :)

  • @fevertango4622
    @fevertango4622 6 years ago

    Amazing Work!! Jun Saito

  • @fabiocroldan
    @fabiocroldan 6 years ago

    Great, can you try a horse? And a horse and its rider?

  • @Grumf.
    @Grumf. 6 years ago

    Great! Now we need a graphically gorgeous, realistic, vast open-world wolf life simulator!
    Wolf (the game) is getting too old...

  • @karlisstigis
    @karlisstigis 6 years ago

    Wow, so cool.

  • @kraken2844
    @kraken2844 6 years ago

    Absolutely excellent footwork, but it seems you dropped the ball on the tail. Does it only have 2 joints? It just seems like everything else is so realistic except the tail, which sticks straight out. The tail is the personality, you can't leave it out.

    • @sebastianstarke5023
      @sebastianstarke5023  6 years ago +1

      Thanks! Yeah, the tail does act weird sometimes - we actually only have 2 joints for it, and I believe something went wrong in the mirroring for the tail. The skeleton itself was not symmetric, so we had to find some "reasonable" angular offsets, which we probably estimated wrongly for the tail root. Anyways, this is just a pre-processing issue ;)

    • @kraken2844
      @kraken2844 6 years ago

      I imagine the tail is even more difficult to work with than the feet, since it responds to both motion and emotion, not to mention that each breed has different sizes and mannerisms. Some tails curl in on themselves, and when they tuck the tail under them it would be hard to use optical motion capture systems. When it comes to footwork, it's mostly universal now that you've figured it out. Congratulations are in order.

  • @bitbloop
    @bitbloop 6 years ago

    brad leone voice: “A GOOD BOY!!!”

  • @pequenog2323
    @pequenog2323 6 years ago

    Hi!!! Very good work Sebastian!!
    I am very interested in your system, to include it in my video game under the Unity engine. I see that you also created the Unity asset "BioIK".
    What are your plans to put this new neural-network-based system for sale in the Unity store? I've been looking for something like that for a long time. The Unity store has runevision's Locomotion System, but it cannot be used with the Animator component.
    A greeting.

  • @murrayfang5369
    @murrayfang5369 4 years ago

    I opened the source project and found the wolf warping.

  • @tuloski
    @tuloski 6 years ago

    It looks like you are pointing a laser on the ground :)

  • @katokatokatokatokatokatokato
    @katokatokatokatokatokatokato 6 years ago

    can i get this game thanks

  • @העבד
    @העבד 6 years ago +9

    Incredible stuff, you guys have talent. Why not create your own company, get investors to fund your research, and make this stuff proprietary technology which you can then license? Hell, with this kind of successful demo you could get funded by a VC after one month.

    • @atomicthumbsV2
      @atomicthumbsV2 6 years ago +12

      or open-source it

    • @aleksandersuur9475
      @aleksandersuur9475 6 years ago +8

      Open-sourcing it is better business; then you can sell support and training. Customers for this sort of thing want that know-how in-house, and they are going to get it. If one company tries a closed-source model, they go to another or just hire their own neural network experts to come up with something similar. And let's be fair, people are open-sourcing neural network stuff left, right and center; trying to hog the IP isn't going to work. Once the idea is out about what you can do with neural networks, it won't take long for someone to develop a way to get the same result in a slightly different manner, and there goes your patent/trade secret.

    • @DontfallasleeZZZZ
      @DontfallasleeZZZZ 6 years ago

      That is true as long as you make the assumption that licensing the tech to big existing game studios is the only way to commercialize it. What if they chose to sell the simulated character itself as a standalone game, with an API so that consumers or third parties can build environments for it to exist in.
      The dog already has a mouse "control scheme". Just add some (admittedly non-trivial) things like interaction with objects, different breeds and emotional states, and put it on Steam.

  • @purpleAiPEy
    @purpleAiPEy 6 years ago

    i broke the demo a couple times :(
    but other than that good work :D

  • @kobrapromotions
    @kobrapromotions 6 years ago

    I love this. Then multiply it by thousands: individual hairs, the tip of the tail, response to surfaces, lip movements, eyes, facial muscles, all working together in an almost natural way. You know, the further this goes, the more religion goes down the drain.