Learning to Open and Traverse Doors with a Legged Manipulator

  • Published: 10 Feb 2025
  • Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control in manipulating the door panel and navigating through the confined doorway. To address this, we propose a learning-based controller for a legged manipulator to open and traverse through doors. The controller is trained using a teacher-student approach in simulation to learn robust task behaviors as well as estimate crucial door properties during the interaction. Unlike previous works, our approach is a single control policy that can handle both push and pull doors through learned behavior that infers the opening direction during deployment without prior knowledge. The policy was deployed on the ANYmal legged robot with an arm and achieved a success rate of 95.0% in repeated trials conducted in an experimental setting. Additional experiments validate the policy's effectiveness and robustness to various doors and disturbances.
    Paper link: arxiv.org/abs/...
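
A note on the training setup: the teacher-student approach mentioned in the description is not detailed on this page, so the sketch below is only a hypothetical illustration of what one distillation step could look like, assuming a privileged teacher policy that sees simulated door properties (e.g. opening direction, hinge friction) and a student policy that must act from a history of onboard observations. All module names, dimensions, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of teacher-student distillation for a door-opening policy.
# The teacher uses privileged door properties available only in simulation; the
# student must act from a proprioceptive observation history alone. Names and
# sizes are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

OBS_DIM, PRIV_DIM, HIST_LEN, ACT_DIM = 48, 8, 50, 18

class TeacherPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + PRIV_DIM, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, ACT_DIM),
        )

    def forward(self, obs, priv):
        # Teacher conditions directly on privileged door properties.
        return self.net(torch.cat([obs, priv], dim=-1))

class StudentPolicy(nn.Module):
    """Encodes an observation history so door properties can be inferred implicitly."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(OBS_DIM, 128, batch_first=True)
        self.head = nn.Sequential(nn.Linear(128, 128), nn.ELU(), nn.Linear(128, ACT_DIM))

    def forward(self, obs_history):
        _, h = self.encoder(obs_history)   # h: (1, batch, 128)
        return self.head(h.squeeze(0))

teacher, student = TeacherPolicy(), StudentPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# One distillation step on a batch of simulated rollout data (placeholders here).
obs_history = torch.randn(64, HIST_LEN, OBS_DIM)   # observation histories
priv = torch.randn(64, PRIV_DIM)                   # privileged door properties
with torch.no_grad():
    target_actions = teacher(obs_history[:, -1], priv)
loss = nn.functional.mse_loss(student(obs_history), target_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A recurrent encoder is one common way for a student policy to recover hidden quantities such as the opening direction from interaction history; the paper's actual architecture and losses may differ.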

Comments • 12

  • @davidzhang5566 · 4 months ago +1

    Very impressive! It’s great progress towards real automation.

  • @yaroslavishchuk · 5 months ago +1

    It is so satisfying to watch!! What a great job

  • @nancyyan5268 · 4 months ago

    Incredible video and great work 👏

  • @mikhailterekhov5922 · 5 months ago +7

    Can't hide from them behind the door anymore

  • @PerFrivik · 5 months ago +1

    nice step towards real autonomy :D

  • @Hukkinen · 5 months ago

    Good explanation and honest evaluation :)

  • @hamidmirza333 · 5 months ago

    Amazing work. If this robot had a gripper, it would perform the task much more robustly.

    • @dowesschule · 5 months ago

      Why? How would a much more complex way of manipulating the door handle make the whole thing more robust? It would add more possible points of failure than before and improve nothing.
      What could help is a better way of sensing where the door handle is, maybe using an end-effector-mounted camera.

  • @3DisFuntastic · 5 months ago

    Amazing video!

  • @aa-vb3vh · 4 months ago

    Very impressive. Is the model computed locally on the device?

  • @WaveOfDestiny · 5 months ago

    I wonder if it's easy or hard to combine this style of in-house model training with the generalization ability of large multimodal models. This looks very robust and efficient for specific actions, and a large model could act as a decision-making director that chooses which specific actions to take. For example, if the door is locked, it should look for another door to the same room and try that one, or look for an alternative route. Or if the terrain is detected to be too hard to traverse, look for another path, or even try to remove rubble and objects that are blocking the path first, or wait for the green light to cross a road.

  • @Tagraff · 5 months ago

    I'm very curious to see whether modern AI/software solutions could be retrofitted onto an existing robot, the exact one that failed in the competitions long ago, and whether it could then respond successfully.