Learned Inertial Odometry for Autonomous Drone Racing (RAL 2023)

  • Published: 15 Sep 2024
  • Inertial odometry is an attractive solution to the problem of state estimation for agile quadrotor flight. It is inexpensive, lightweight, and unaffected by perceptual degradation. However, relying only on the integration of inertial measurements for state estimation is infeasible: the errors and time-varying biases present in such measurements cause large drift to accumulate in the pose estimates. Recently, inertial odometry has made significant progress in estimating the motion of pedestrians, but state-of-the-art algorithms rely on learning a motion prior that is typical of humans and cannot be transferred to drones. In this work, we propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. The core idea of our system is to couple a model-based filter, driven by the inertial measurements, with a learning-based module that has access to the thrust measurements (see the sketch after this abstract).
    We show that our inertial odometry algorithm is superior to state-of-the-art filter-based and optimization-based visual-inertial odometry, as well as to state-of-the-art learned inertial odometry, in estimating the pose of an autonomous racing drone. Additionally, we show that our system is comparable to a visual-inertial odometry solution that uses a camera and exploits the known gate location and appearance. We believe that the application to autonomous drone racing paves the way for novel research in inertial odometry for agile quadrotor flight.
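    Editor's sketch of the coupling described above: a toy world-frame strapdown filter propagated by IMU measurements and periodically corrected by a relative-displacement pseudo-measurement from a learned module that sees buffered accelerometer and mass-normalized thrust samples. All names, rates, and the fixed-gain update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity [m/s^2]

class ToyLearnedInertialOdometry:
    """Illustrative skeleton only: model-based inertial propagation at a high
    rate, learned thrust-aware displacement corrections at a lower rate."""

    def __init__(self, net, dt=1.0 / 200.0, window=40):
        self.net = net               # stand-in for the trained network
        self.dt = dt                 # assumed 200 Hz IMU rate
        self.window = window         # IMU samples per network inference
        self.p = np.zeros(3)         # world-frame position [m]
        self.v = np.zeros(3)         # world-frame velocity [m/s]
        self.buf = []                # buffered (accelerometer, thrust) rows
        self.p_mark = self.p.copy()  # position at the start of the window

    def step(self, acc_world, thrust):
        # Model-based propagation: integrate gravity-compensated acceleration.
        a = acc_world + GRAVITY
        self.p = self.p + self.v * self.dt + 0.5 * a * self.dt**2
        self.v = self.v + a * self.dt

        # Learning-based correction: once per window, blend in the network's
        # relative-displacement estimate (a fixed gain here; a proper filter
        # update, e.g. an EKF, in a real system).
        self.buf.append(np.concatenate([acc_world, [thrust]]))
        if len(self.buf) == self.window:
            d_net = self.net(np.asarray(self.buf))   # learned displacement
            d_flt = self.p - self.p_mark             # filter's displacement
            self.p = self.p + 0.5 * (d_net - d_flt)  # pull toward the network
            self.buf.clear()
            self.p_mark = self.p.copy()
```

    Here `net` stands in for the trained regressor; the point is only the structure: high-rate inertial propagation kept from drifting by lower-rate, thrust-aware learned corrections.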
    Reference:
    G. Cioffi, L. Bauersfeld, E. Kaufmann, and D. Scaramuzza
    Learned Inertial Odometry for Autonomous Drone Racing
    IEEE Robotics and Automation Letters (RA-L), 2023.
    PDF: rpg.ifi.uzh.ch...
    Code: github.com/uzh...
    More info on our research in Drone Racing:
    rpg.ifi.uzh.ch...
    More info on our research in Agile Drone Flight:
    rpg.ifi.uzh.ch...
    Affiliations:
    G. Cioffi, L. Bauersfeld, E. Kaufmann, and D. Scaramuzza are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
    rpg.ifi.uzh.ch/

Comments • 11

  • @karan_jain · 1 year ago · +2

    Really cool work! Do you currently use the state estimates from the inertial odometry in closed loop? It would be interesting to see that behavior.
    Also, do you have a 60 fps version of this video? 30 fps is too slow for drone racing :)

    • @hengxie7638 · 1 year ago

      I want to ask the exact same question. Did the authors use the estimated states for the "controller network" that generates the thrust commands? If the answer is yes, it is likely to make the whole system unstable. If the answer is no, then what is the purpose of estimating the pose of the robot?

  • @matteodietz9759 · 1 year ago

    Impressive! Would it maybe make sense to combine the IMU solution with an event-based camera, since it also has very low latency and low power consumption, and doesn't suffer as much from HDR scenes and motion blur as conventional camera-based solutions?

  • @naeemajilforoushan5784 · 1 year ago

    Wow! I saw the drone touch 70 km/h; this research is very cool.
    Can I ask which model of IMU you used in this project?
    Also, did you use the IMU data of the Intel RealSense T265 (with noise reduction, estimation, etc.), or just the raw IMU data of the autopilot?
    I know how difficult this work is, and pushing this edge of research is really respectable.
    Thank you for sharing.
    I wish your team success.

  • @daikucoffee5316 · 7 months ago

    Wait, is this using only inertial navigation with accelerometers?! It can’t possibly be so precise.

  • @ChadKovac · 1 year ago

    Excellent. When this is chasing me through my house, I want it to be at its most efficient.

  • @guilhermesoares1053 · 1 year ago

    Impressive!

  • @foxeer · 1 year ago

    How did you get the collective thrust data from a real drone? There's no such sensor available at this point yet, right?

    • @foxeer · 1 year ago · +1

      In the paper I see: "The Blackbird dataset provides rotor speed measurements recorded onboard a quadrotor flying in a motion capture system, which we use to compute mass-normalized collective thrust measurements for our network." If you already have a motion capture system, shouldn't it already give you an accurate position in the monitored space?
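      Editor's note: for reference, a quadratic rotor model gives mass-normalized collective thrust from measured rotor speeds: each rotor produces f_i = k_f * ω_i², so c = (k_f / m) · Σ ω_i². The thrust coefficient and mass below are made-up example values, not numbers from the paper or the Blackbird dataset.

```python
import numpy as np

def mass_normalized_collective_thrust(omega, k_f, mass):
    """Quadratic rotor model: f_i = k_f * omega_i^2 per rotor, so the
    mass-normalized collective thrust is (k_f / m) * sum(omega_i^2) [m/s^2]."""
    return k_f * np.sum(np.square(omega)) / mass

# Made-up example values (not from the paper or the Blackbird dataset):
omega = np.array([2100.0, 2050.0, 2080.0, 2120.0])  # rotor speeds [rad/s]
k_f = 1.0e-6                                        # thrust coefficient [N s^2]
mass = 0.8                                          # quadrotor mass [kg]
print(mass_normalized_collective_thrust(omega, k_f, mass))  # ~21.8 m/s^2
```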

  • @anasbiniftikhar5918 · 1 year ago

    Which IMU have you used?

  • @jaredhenkel1661 · 1 year ago

    They claimed that relying on inertial measurements alone is infeasible, but I disagree. I wonder if they have considered using multiple IMUs and staggering the polling between the units to increase accuracy. If you had two IMUs running at 1 GHz in staggered intervals on a 2 GHz+ processor, you could effectively get a polling rate of ~2 GHz. There are, of course, synchronization and data fusion challenges, but you won't always have the luxury of motion capture in the real world lol.
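    Editor's note: commercial MEMS IMUs sample at hundreds of Hz to a few kHz, not GHz, but the staggering idea itself is easy to illustrate. A toy sketch with two assumed 1 kHz units offset by half a sample period:

```python
import numpy as np

rate = 1000.0                            # per-IMU sample rate [Hz], assumed
t_a = np.arange(0.0, 0.02, 1.0 / rate)   # IMU A sample times [s]
t_b = t_a + 0.5 / rate                   # IMU B, staggered by half a period

# Interleaving the two streams doubles the effective sample rate.
merged = np.sort(np.concatenate([t_a, t_b]))
print(np.diff(merged)[:4])               # uniform 0.5 ms gaps -> 2 kHz effective

# The hard parts the comment alludes to remain: synchronizing the two
# clocks and fusing two sensors with independent noise and biases.
```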