Autonomous Dynamic Object Tracking Without External Localization

  • Published: 11 Sep 2024
  • www.wilselby.com for more information
    Autonomous Dynamic Object Tracking Without External Localization
    MIT Distributed Robotics Lab
    Spring 2011
    In this video we present an autonomous on-board visual navigation and tracking system for an Ascending Technologies Hummingbird quadrotor vehicle, developed to support the whale tracking application independent of external localization. Due to the limited payload of the robot, we are restricted to a computationally impoverished single-board computer (SBC) such as the Fit-PC2. The vision system ran on the vehicle using a 2.0 GHz Intel Atom processor (Fit-PC2) with a Point Grey Firefly MV USB camera. The camera had a resolution of 640x480 pixels, which was downsampled to 320x240 pixels to reduce computational cost. The full system combined for a total payload of 535 g, well above the recommended maximum payload of 200 g for this platform, but our experiments show that the system remains maneuverable.
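    As a rough illustration of that downsampling step (assuming OpenCV; the original vision pipeline is not shown here):

```python
import cv2
import numpy as np

# Placeholder 640x480 frame; in the real system this would come from
# the Point Grey Firefly MV camera rather than a zero array.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Halve each dimension to 320x240 before any further processing,
# trading image resolution for CPU time on the Atom SBC.
small = cv2.resize(frame, (320, 240), interpolation=cv2.INTER_AREA)
```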
    The target for the robot tracking experiments was a 0.21x0.28 m blue clipboard mounted on an iRobot iCreate. The iCreate was programmed to follow a specific trajectory at a constant 0.025 m/s and was also tracked by the motion capture system. The quadrotor flew at a desired altitude of 1.35 m for each trial.
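    The description does not spell out the detection algorithm, but one common way to locate a solid-color target such as the blue clipboard is HSV thresholding; the sketch below is purely illustrative, and the color ranges are guesses rather than values from these experiments.

```python
import cv2

def find_blue_target(frame_bgr):
    """Return the pixel centroid of the largest blue blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative blue range in OpenCV HSV (H in [0, 179])
    mask = cv2.inRange(hsv, (100, 120, 50), (130, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```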
    This second experiment removed external localization and relied entirely on visual feedback. It used an Extended Kalman Filter (EKF) to estimate the pose of the quadrotor. This estimated pose was sent to the control module, which computed commands to maneuver the quadrotor over the center of the target. The EKF was adapted extensively from [abeRANGE2010, Bachrach09IJMAV] and implemented using the KFilter library. The filter combined position estimates from the vision system algorithms with attitude and acceleration information from the IMU. The IMU readings arrived at 30 Hz while the vision system module operated at 10 Hz, so the filter had to handle these asynchronous measurements and their inherent latencies. The filter output position and attitude estimates at 110 Hz.
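    The page does not show the implementation, but a heavily simplified sketch of the fusion idea might look like the following: a 1-D constant-velocity filter that predicts with IMU acceleration at its own rate and corrects whenever a vision position estimate arrives. The model, noise values, and class name are illustrative stand-ins, not the KFilter-based EKF used in the actual system.

```python
import numpy as np

class SimpleEKF:
    """1-D position/velocity filter: predict with IMU acceleration,
    correct with slower vision position measurements."""

    def __init__(self):
        self.x = np.zeros(2)            # state: [position, velocity]
        self.P = np.eye(2)              # state covariance
        self.Q = np.diag([1e-4, 1e-2])  # process noise (illustrative)
        self.R = np.array([[0.02**2]])  # vision measurement noise (illustrative)

    def predict(self, accel, dt):
        """Propagate the state with a measured acceleration over dt seconds."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + self.Q

    def update_vision(self, z_pos):
        """Correct the state with a position estimate from the vision system."""
        H = np.array([[1.0, 0.0]])
        y = z_pos - H @ self.x                   # innovation
        S = H @ self.P @ H.T + self.R            # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

# Prediction would run at the IMU rate (30 Hz here) and the correction
# whenever a vision estimate arrives (10 Hz here):
ekf = SimpleEKF()
ekf.predict(accel=0.1, dt=1.0 / 30.0)
ekf.update_vision(z_pos=0.05)
```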
    For a sample trial, the EKF estimates had a position RMSE of 0.107 m, a velocity RMSE of 0.037 m/s, and an acceleration RMSE of 0.121 m/s^2 compared to the ground truth captured by the motion capture system.
    Data was also collected over ten consecutive successful trials. Across these trials, the average RMSE was approximately 0.068 m in the x axis and 0.095 m in the y axis.
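    For reference, these RMSE figures compare the estimates against motion capture ground truth; a minimal version of that computation (illustrative, not the evaluation script used here) is:

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Root-mean-square error between two equal-length 1-D series."""
    err = np.asarray(estimates) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(err ** 2)))

# e.g. rmse(ekf_x_positions, mocap_x_positions) would give the x-axis figure.
```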
    While the performance is slightly more varied and less accurate than tracking with motion capture state feedback, it is still acceptable. There is also an inherent delay introduced by the filter; for our system this was around 0.06 seconds. Additionally, the Pelican was used to track targets at speeds of up to 0.25 m/s. At this speed, the experiments resulted in an RMSE of 0.11 m in the x axis and 0.09 m in the y axis. This error was slightly larger than in the Hummingbird experiments, but the increased speed demonstrated the stability of the control system.

Comments • 4

  • @fiqz90 · 10 years ago

    Are you using a Kalman filter?

    • @wilselby · 10 years ago

      Yes, an Extended Kalman Filter combines the camera position information with the IMU measurements.

    • @fiqz90 · 10 years ago

      I'm about to develop a system for the UAV siswa challenge, but I'm confused about which type of algorithm is best for the mission in the competition. The mission is to scan, detect, and recognize a fixed target on the ground. Once the target is confirmed, the UAV will loiter above it and release a payload. I'm having difficulties with the target recognition algorithm. The UAV is only equipped with a CCD camera and sonar. What would you recommend for my system?

    • @wilselby · 10 years ago

      Ahmad Affiq
      I would recommend you have a low level navigation algorithm for flying the mission and a separate computer vision algorithm for locating the target. Once found, the vision algorithm can send desired position commands to the navigation algorithm.
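      A very rough sketch of that split, with hypothetical class and method names (nothing here comes from the video), might be:

```python
class VisionModule:
    """Looks for the target in each frame; once confirmed, reports a
    desired position offset (dx, dy) toward the target, else None."""
    def desired_offset(self, frame):
        # ... CCD-camera target detection would go here ...
        return None

class NavigationModule:
    """Low-level navigation flying the mission (search, loiter, release)."""
    def command(self, offset):
        if offset is None:
            return "continue_search_pattern"
        dx, dy = offset
        return f"loiter_and_center(dx={dx:.2f}, dy={dy:.2f})"
```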