DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning

  • Published: 30 Jan 2020
  • We train neural-network policies for terrain-aware locomotion,
    which respectively plan and execute foothold and base motions
    over challenging terrains in simulated 3D environments using
    both proprioceptive and exteroceptive measurements.
    In IEEE Robotics and Automation Letters (RA-L), presented at the IEEE International Conference on Robotics and Automation (ICRA) 2020 in Paris, France.
    Authors: Vassilios Tsounis, Mitja Alge, Joonho Lee, Farbod Farshidian, Marco Hutter
    Paper pre-print available at: arxiv.org/abs/1909.08399
  • Science

Comments • 14

  • @Wingduddoesart 4 years ago +4

    It's interesting to see much more 'animal-esque' movement emerging from the training! Any chance we'll see a video of this model deployed on the IRL robot?

  • @DamianReloaded 4 years ago +3

    Genius!

  • @retrorobodog 4 years ago +3

    cool :)

  • @miguelangelrodriguez9811 4 years ago +7

    Love what you people are doing! Is there a Gazebo-ready version?

    • @leggedrobotics 4 years ago +4

      We currently use the RaiSim physics engine with its respective visualizer. When we release the code, it will also come with an RViz visualization, but we don't plan to support Gazebo explicitly. The software will be modular enough to be integrated into any CMake-based ecosystem such as Catkin and ROS.

  • @ahmedbenyoucef3238 3 years ago

    very good

  • @boyuandeng8298 1 year ago

    It's so cool! How can I get the repo?

  • @nochan99 4 years ago +7

    Cool! Where is the git repo?

    • @leggedrobotics 4 years ago +4

      It's coming! We plan to release the code in early June.

    • @shawntootill24 4 years ago +1

      @@leggedrobotics It's June and I'm planning a Spot Micro build. Will this code be suitable for the Nvidia Nano (with various sensors, of course)? You seem to be leading the race!

    • @alexbeh2061 3 years ago

      @@leggedrobotics Hello, may I ask for an update on this schedule?

  • @johnlime1469 4 years ago +1

    This looks like they used deep reinforcement learning to plan the positions of the end-effectors, and used some sort of inverse kinematics algorithm to generate the actual trajectory of each joint.
    Edit: I'm wrong. They used a form of hierarchical reinforcement learning.

    • @leggedrobotics 4 years ago +1

      Nope. We learn both high-level foothold planning and low-level motion generation and feedback controls as respective neural-network-based RL agents. Please have a look at the description for the link to the pre-print version of our paper.
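      The reply describes a two-level hierarchy: a high-level planner network proposes footholds from base state plus exteroceptive terrain data, and a low-level controller network turns those targets plus proprioception into joint commands. The sketch below is purely illustrative and not the authors' code: the network sizes, state dimensions, and random weights are all invented stand-ins for the trained policies described in the paper.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def mlp(sizes):
          """Random-weight MLP parameters, stand-ins for trained policy networks."""
          return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
                  for m, n in zip(sizes[:-1], sizes[1:])]

      def forward(params, x):
          """Plain feed-forward pass with tanh hidden activations."""
          for i, (W, b) in enumerate(params):
              x = x @ W + b
              if i < len(params) - 1:
                  x = np.tanh(x)
          return x

      # High-level "planner": base state + terrain heightmap samples in,
      # a target foothold offset (x, y) for each of 4 feet out.
      planner = mlp([12 + 64, 128, 8])

      # Low-level "controller": proprioception + the planner's foothold
      # targets in, joint position targets for 12 actuated joints out.
      controller = mlp([36 + 8, 128, 12])

      base_state = rng.standard_normal(12)   # base pose + twist (invented dims)
      heightmap = rng.standard_normal(64)    # exteroceptive terrain samples
      proprio = rng.standard_normal(36)      # joint positions/velocities etc.

      footholds = forward(planner, np.concatenate([base_state, heightmap]))
      joint_targets = forward(controller, np.concatenate([proprio, footholds]))
      print(footholds.shape, joint_targets.shape)  # (8,) (12,)
      ```

      The key design point the reply makes is that both levels are learned: the planner's footholds are not post-processed by a hand-crafted inverse kinematics solver but consumed by a second learned policy.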

    • @johnlime1469 4 years ago

      @@leggedrobotics Thank you. I'll definitely take a look at the paper. I was merely pointing out the similarity between the low-level foot placement seen in this video and the same task implemented using traditional inverse kinematics methods.