Learning Vision-Based Bipedal Locomotion for Challenging Terrain

  • Published: 2 Nov 2024

Comments • 7

  • @rodrigob · 7 months ago · +1

    I am missing a side-by-side with the vision system turned off, to see the qualitative improvement in walking behaviour.

    • @danielsantini1153 · 7 months ago

      This video testifies that the machine, indeed that all machines, have the intelligence level of a chicken.

    • @rodrigob · 7 months ago

      @@danielsantini1153 From a motricity point of view, yes, the chicken still wins by a mile.
      From a cognitive point of view, all robots have access to "ChatGPT/Gemini/LLama"-level capabilities, which makes them more knowledgeable, and better able to communicate with humans, than any chicken I have seen.
      Yann LeCun has mentioned multiple times that "the dream" is to have machines able to "learn to be like a cat". This work is on that line of research.
      Looking at results like this ruclips.net/video/REvNnUzVDAA/видео.htmlsi=EdPmU4VdeWGnZV-u we seem to be on a good trajectory.

    • @OregonStateDRL · 2 months ago

      Thank you for your interest! You can refer to the papers here for examples of flat-ground walking ruclips.net/video/AISy0hxo6-0/видео.htmlfeature=shared and stair walking ruclips.net/video/MPhEmC6b6XU/видео.htmlfeature=shared. Without visual inputs, the gaits either adapt to flat ground or maintain a higher swing height that always anticipates upcoming stairs. These examples should give a sense of the qualitative improvement in walking behavior that the vision system provides.

  • @苗健 · 4 months ago

    Does anyone know how to detect and visualize the points of the elevation map in the MuJoCo simulation in this video?
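
No answer in the thread, but for anyone searching: a minimal sketch of one way to do this with the official MuJoCo Python bindings, using the passive viewer's user_scn debug-geometry buffer to draw one marker per map point. The scene path and the elevation-map points below are placeholders, not the video authors' tooling; in a real setup the points would come from the simulated heightfield or the policy's height-map input.

```python
import numpy as np
import mujoco
import mujoco.viewer

# Hypothetical scene file; the video's actual model is not public here.
model = mujoco.MjModel.from_xml_path("scene.xml")
data = mujoco.MjData(model)

# Hypothetical elevation map: an (N, 3) array of world-frame sample points.
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 20), np.linspace(-1.0, 1.0, 20))
points = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

with mujoco.viewer.launch_passive(model, data) as viewer:
    while viewer.is_running():
        mujoco.mj_step(model, data)

        # Re-populate the viewer's debug-geometry buffer (user_scn)
        # with one small sphere per elevation-map point.
        n = min(len(points), viewer.user_scn.maxgeom)
        for i in range(n):
            mujoco.mjv_initGeom(
                viewer.user_scn.geoms[i],
                type=mujoco.mjtGeom.mjGEOM_SPHERE,
                size=[0.01, 0.0, 0.0],      # 1 cm sphere radius
                pos=points[i],
                mat=np.eye(3).flatten(),    # identity orientation
                rgba=[0.2, 0.8, 0.2, 1.0],  # green markers
            )
        viewer.user_scn.ngeom = n
        viewer.sync()
```

Because user_scn geoms are purely visual, this overlay does not perturb the physics; updating the point heights each step would show the map tracking the terrain.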

  • @JiahuiZhu-wj5kp · 6 months ago

    Is that autonomous, or controlled by a joystick?

    • @OregonStateDRL · 2 months ago

      The RL policy receives XY and yaw velocity commands from the user's joystick to set the robot's heading and speed; the policy handles the rest of the robot's response.
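
To make that interface concrete, here is a minimal sketch of how XY and yaw velocity commands can enter a locomotion policy's observation. It assumes pygame for joystick input; the axis mapping, command limits, observation layout, and the stub policy/state functions are illustrative placeholders, not details from the paper.

```python
import time
import numpy as np
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

# Hypothetical command limits: m/s, m/s, rad/s.
MAX_VX, MAX_VY, MAX_YAW = 1.0, 0.5, 1.0

def read_command() -> np.ndarray:
    """Map joystick axes to a [vx, vy, yaw_rate] command vector."""
    pygame.event.pump()
    vx = -stick.get_axis(1) * MAX_VX    # left stick, forward/back
    vy = -stick.get_axis(0) * MAX_VY    # left stick, strafe
    yaw = -stick.get_axis(3) * MAX_YAW  # right stick, turn rate
    return np.array([vx, vy, yaw], dtype=np.float32)

def get_robot_state() -> np.ndarray:
    # Placeholder: a real system would return proprioception
    # (joint positions/velocities, orientation) plus vision features.
    return np.zeros(32, dtype=np.float32)

def policy(obs: np.ndarray) -> np.ndarray:
    # Placeholder for the trained network; returns joint targets.
    return np.zeros(10, dtype=np.float32)

# Control loop: the user's velocity command is just another slice of
# the observation; the policy decides how to realize it.
while True:
    obs = np.concatenate([get_robot_state(), read_command()])
    action = policy(obs)
    # A real system would forward `action` to the robot's low-level
    # PD controllers here; omitted in this sketch.
    time.sleep(0.02)  # ~50 Hz policy rate (illustrative)
```

The point of this structure is that the joystick never drives joints directly: it only conditions the policy, which remains responsible for balance, foot placement, and terrain handling.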