SIGGRAPH 2018: DeepMimic paper (supplementary video)

  • Published: 29 Jan 2025

Comments • 29

  • @citiblocsMaster
    @citiblocsMaster 6 years ago +6

    For those wondering about ET (from the paper arxiv.org/abs/1804.02717):
    For cyclic skills, the task can be modeled as an infinite horizon MDP (Markov decision process). But during training, each episode is simulated for a finite horizon. An episode terminates either after a fixed period of time, or when certain termination conditions have been triggered. For locomotion, a common condition for early termination (ET) is the detection of a fall, characterized by the character’s torso making contact with the ground. [...]
    Once early termination has been triggered, the character is left with zero reward for the remainder of the episode. This instantiation of early termination provides another means of shaping the reward function to discourage undesirable behaviors. Another advantage of early termination is that it can function as a curating mechanism that biases the data distribution in favor of samples that may be more relevant for a task. In the case of skills such as walking and flips, once the character has fallen, it can be challenging for it to recover and return to its nominal trajectory.
    Without early termination, data collected during the early stages of training will be dominated by samples of the character struggling on the ground in vain, and much of the capacity of the network will be devoted to modeling such futile states. This phenomenon is analogous to the class imbalance problem encountered by other methodologies such as supervised learning. By terminating the episode whenever such failure states are encountered, this imbalance can be mitigated.
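
    A minimal sketch of what that rollout loop might look like (hypothetical env/policy objects, not the authors' code):

    ```python
    def collect_episode(env, policy, max_steps=600):
        """Roll out one training episode, cutting it short on a fall.

        With early termination (ET), the episode ends as soon as the
        torso touches the ground, so the remainder earns zero reward
        and failure states don't dominate the training data.
        """
        trajectory = []
        state = env.reset()
        for _ in range(max_steps):
            action = policy.act(state)
            next_state, reward, torso_contact = env.step(action)
            trajectory.append((state, action, reward))
            if torso_contact:  # fall detected -> early termination
                break
            state = next_state
        return trajectory
    ```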

  • @KristoferPettersson
    @KristoferPettersson 6 years ago +21

    Biologically implausible. I've practiced this using the video as RSI, and I have no issues with ET, yet I keep banging my head on the floor. :-P

  • @johnlime1469
    @johnlime1469 2 years ago +1

    This paper is so iconic.

  • @bit2shift
    @bit2shift 6 years ago +6

    4:55 don't you hate it when people throw boxes at you while you're running?

  •  6 years ago

    So could this be aggregated into a single database of adaptable animations? Also, is it possible to train it to interact with specific scenarios? For example, in the run with multiple obstacles, could it be trained to react to proximity in order to trigger the desired interaction/movement with a step, barrier, or any other object?

  • @emilterman6924
    @emilterman6924 6 years ago

    Would it be possible to make the robot learn to do the same stuff with a NEAT algorithm?

  • @josaxytube
    @josaxytube 6 years ago +1

    Hello, could you please share what kind of simulation software you are using for the experiments?

    • @_SimpleSam
      @_SimpleSam 6 years ago

      Joshua Owoyemi Looks like the Unity game engine to me, but don't quote me.

    • @josaxytube
      @josaxytube 6 years ago

      I see. The paper says it uses the Bullet physics engine but doesn't mention what software is used for visualization.

    • @jim.....
      @jim..... 6 years ago

      Unity
      mujoco.org
      pybullet.org/wordpress/
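
      If it helps, here is a minimal pybullet sketch of loading a humanoid and stepping the Bullet engine; just an illustration of the Python bindings, not the authors' actual setup (the humanoid.urdf asset ships with pybullet_data, as far as I know):

      ```python
      import pybullet as p
      import pybullet_data

      # Connect with the built-in GUI for visualization.
      p.connect(p.GUI)
      p.setAdditionalSearchPath(pybullet_data.getDataPath())
      p.setGravity(0, 0, -9.8)

      plane = p.loadURDF("plane.urdf")
      humanoid = p.loadURDF("humanoid/humanoid.urdf", basePosition=[0, 0, 1])

      # Step the simulation; pybullet's default timestep is 1/240 s.
      for _ in range(240 * 5):  # ~5 seconds of simulated time
          p.stepSimulation()

      p.disconnect()
      ```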

  • @RemingtonCreative
    @RemingtonCreative 6 years ago

    Really awesome work. I do have one question regarding the application of this in different scenarios.
    I'll use the walking-on-the-narrow-wall example. If the wall were significantly changed from what was shown in the video, would the character automatically adapt to the new path, or would it have to be retrained?

    • @beiller84
      @beiller84 6 years ago +1

      Other scenes in this video suggest to me that it's not overfitting, e.g. the boxes flying at the character. Therefore, it should adapt to any walking example. You still have to feed the "heightmap" to the character as it moves around, however; it's an (optional) input in their model.
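
      A rough sketch of what feeding that heightmap alongside the character state could look like (names and sizes are illustrative; systems like this typically process the heightmap with a convolutional branch rather than flattening it):

      ```python
      import numpy as np

      def build_policy_input(char_state, heightmap):
          """Concatenate the character state with local terrain heights.

          char_state: 1-D array of joint positions, velocities, etc.
          heightmap:  2-D grid of terrain heights sampled around the
                      character (the optional terrain input).
          """
          return np.concatenate([char_state, heightmap.ravel()])

      # Illustrative sizes: a ~200-D character state, a 32x32 local heightmap.
      obs = build_policy_input(np.zeros(197), np.zeros((32, 32)))
      ```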

  • @deusxyz
    @deusxyz 6 years ago +1

    Will this be released as a Unity add-on? I'm sure a lot of people would love to use this system :)

  • @AB-mv1mb
    @AB-mv1mb 6 years ago +2

    1:02 when you press the jump button but Mario doesn't jump

  • @nextlifeonearth
    @nextlifeonearth 6 years ago +1

    You should integrate a rule for keeping the head still while walking. The head bobbing makes the humanoid look like a mental case.

    • @nextlifeonearth
      @nextlifeonearth 6 years ago +1

      In another example I saw the T-Rex also bobbing its head like mad. If you look at real animals, you'll see they don't do that. (Birds are the closest living relatives of dinosaurs, and they effectively use their heads as gyroscopes.)
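
      In the spirit of the paper's exponentiated imitation reward terms, a head-stabilization bonus might be sketched like this (purely hypothetical, not something from the paper):

      ```python
      import numpy as np

      def head_stability_reward(head_velocity, w=2.0):
          """Reward that decays as the head moves faster.

          Follows the exp(-w * error) shape of typical imitation reward
          terms; head_velocity is the world-space linear velocity of the
          head link, and w is a made-up weighting coefficient.
          """
          return float(np.exp(-w * np.dot(head_velocity, head_velocity)))

      # Near 1.0 when the head is still, near 0.0 when it bobs violently.
      r = head_stability_reward(np.array([0.1, 0.0, 0.05]))
      ```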

  • @Noruzenchi86
    @Noruzenchi86 6 years ago +8

    he's trying his best

  • @willtee3895
    @willtee3895 6 years ago +2

    Hello. I was wondering if the DeepMimic project is open source, or will be? Sorry in advance if I missed it somewhere. Thanks!

    • @jasonpeng2176
      @jasonpeng2176  6 years ago +4

      We haven't released the code yet, but it's something we are looking into. No concrete plans just yet though.

    • @willtee3895
      @willtee3895 6 years ago +1

      Jason Peng Thank you for the quick reply. I will be following!

    • @jonwise3419
      @jonwise3419 6 years ago +1

      Jason Peng This is great. I really hope you release the code.

  • @icarusswitkes986
    @icarusswitkes986 6 years ago +1

    Subscribed. This stuff is amazing

  • @WhiteDragon103
    @WhiteDragon103 5 years ago +1

    The first example I've seen where it could be argued that the AI-produced results are superior to the ground truth.

  • @erikm9768
    @erikm9768 6 years ago +2

    1:01 Ouch!

  • @johnsherfey3675
    @johnsherfey3675 6 years ago +1

    Remember kids, always add extraterrestrials to your neural networks.

  • @iHIMMA
    @iHIMMA 5 years ago +1

    I want to see these things fight to the death

  • @siloquant
    @siloquant 6 years ago

    Very cool! It needs an anime opening OST: v=yhAeVfpy_Mo

  • @peskarr
    @peskarr 6 years ago +1

    obviously drunk