Robot Mapping using Laser and Kinect-like Sensor

  • Published: 12 Aug 2014
  • Comparison between real and virtual 3rd person views of a robot mapping an environment using RTAB-Map. Five books are also detected using Find-Object during the experiment.
    The code is publicly available on introlab.github.io/rtabmap.
    For object recognition, see introlab.github.io/find-object/.
  • Science

Comments • 47

  • @elmichellangelo • 2 years ago

    To think this was made a couple of years ago. Big up to you.

  • @user-gs8qm9rw6r • 5 years ago

    nice work! i love this video

  • @antonisvenianakis1047 • 3 years ago +1

    Very interesting, thank you!

  • @magokeanu • 7 years ago

    amazing dude!

  • @dubmona1301 • 7 years ago

    Way of the future. Awesome.

  • @VicConner • 9 years ago

    Amazing!

  • @timurtt1 • 9 years ago

    Excellent demo! Could you please clarify - how do you handle new scene points discovered by the robot? Do you add all of them into your scene or do you perform some sort of smart merging? What is the "add point to the scene" rate you have?

    • @matlabbe • 9 years ago

      Anton Myagotin The map's graph is filtered to keep around 1 node/meter, then a point cloud is created from each filtered node. In the visualization above, some clouds are effectively superposed.
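
      As a rough illustration of that filtering step (not RTAB-Map's actual code; the 2D node positions and the 1 m spacing below are assumptions for the example), keeping roughly one graph node per meter before generating clouds can look like this:

      ```python
      # Illustrative sketch: keep about one graph node per meter of travelled
      # distance, and only build a point cloud for the nodes that are kept.
      import numpy as np

      def filter_nodes(poses, min_spacing=1.0):
          """poses: list of (x, y) positions of graph nodes along the trajectory."""
          kept = [poses[0]]
          for p in poses[1:]:
              if np.linalg.norm(np.asarray(p) - np.asarray(kept[-1])) >= min_spacing:
                  kept.append(p)
          return kept

      # Example: a 5 m straight-line trajectory sampled every 10 cm keeps ~6 nodes.
      trajectory = [(0.1 * i, 0.0) for i in range(51)]
      print(len(filter_nodes(trajectory)))  # -> 6
      ```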

  • @fighterknigh • 8 years ago

    Great job! Btw, how do you find the objects? Do you build a point cloud model of the objects before searching?

    • @matlabbe • 8 years ago +1

      +余煒森 Objects are found using RGB images. Visual features (SURF) are extracted from images of the books, then they are compared to the live RGB stream to find the same visual features.
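
      A hedged sketch of this kind of feature-based matching (Find-Object uses SURF, which requires the opencv-contrib build; ORB is used below only as a freely available stand-in, and the image file names are placeholders):

      ```python
      # Sketch of matching a reference image of a book against one frame of the
      # live RGB stream using local visual features.
      import cv2

      orb = cv2.ORB_create(nfeatures=1000)

      book = cv2.imread("book.png", cv2.IMREAD_GRAYSCALE)    # reference image of the object
      frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # one image from the live stream

      kp1, des1 = orb.detectAndCompute(book, None)
      kp2, des2 = orb.detectAndCompute(frame, None)

      # Match descriptors and keep only distinctive matches (Lowe's ratio test).
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
      print(f"{len(good)} good matches")  # many good matches -> the book is likely in view
      ```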

  • @user-ps1ug9fh3w • 8 years ago

    Nice job! Could you tell me how you do the navigation?

  • @isaiabinadabrochasegura5972 • 7 years ago

    Hi, very good job.
    I have a question about how you power the Kinect. Did you use a battery, or some kind of current inverter?

    • @matlabbe • 7 years ago +1

      In this demo, it is an Xtion Pro Live, which is powered over USB for convenience. For a Kinect v1, we could cut the wire and plug it into a 12V DC output directly on the robot's board.

  • @kapilyadav23 • 9 years ago

    Hi... can you tell me what processor or dev board you are using to process the Kinect and laser data?
    Btw... nice video, impressed with your results... :)

    • @mathieulabbe4889 • 9 years ago

      kapil yadav It is a mini-ITX with an i7 + SSD (no GPU), running Ubuntu 12.04 + ROS Hydro.

  • @barthurs99 • 6 years ago

    Oh man, this is perfect for what I'm doing. I'm making a robot with room mapping that will act like a security guard, but I'm using lidar mapping, camera object recognition, object following, and facial recognition. You use some of that, right?

    • @matlabbe • 6 years ago

      In this demo, SLAM is done with the lidar and the RGB-D camera using the "rtabmap_ros" package, and the books are detected using the "find_object_2d" package. There is no face detection or object following here. Cheers!

    • @ayarzuki • 3 years ago

      @matlabbe What if we combine it with object detection using a camera?

  • @kiefac • 7 years ago +2

    It seemed to throw away a lot of points after they were out of the FOV of the camera. Is that to prevent distance inaccuracies or keep the performance up or smth?

    • @matlabbe • 7 years ago +2

      We keep the map downsampled for rendering performance. Maybe with newer GPUs, we could keep the map denser while keeping the visualization smooth. We can see at the end of the video that the rendering frame rate is already lower than at the beginning.

    • @kiefac • 7 years ago

      matlabbe ah alright.

    • @mattizzle81 • 4 years ago

      Point clouds are very memory intensive!
      I am doing a similar type of point cloud mapping on Android, using ARCore. One of the first things I noticed is how hard it is to keep that many points. If I compute points for an entire camera frame and try to keep them, the device runs out of memory after about 30 seconds. Luckily, all I really need is a bird's-eye view perspective, so I project the points to a 2D image and that works fine.
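
      A small sketch of that bird's-eye-view projection idea (the grid size, resolution, and random points below are arbitrary example values, not the ARCore data):

      ```python
      # Project 3D points onto a fixed-size 2D occupancy image instead of
      # storing every point in memory.
      import numpy as np

      def birds_eye(points, resolution=0.05, size_m=10.0):
          """points: (N, 3) array of x, y, z in meters in the mapping frame."""
          cells = int(size_m / resolution)
          grid = np.zeros((cells, cells), dtype=np.uint8)
          ij = np.floor((points[:, :2] + size_m / 2) / resolution).astype(int)
          valid = (ij >= 0).all(axis=1) & (ij < cells).all(axis=1)
          grid[ij[valid, 1], ij[valid, 0]] = 255  # mark occupied cells
          return grid

      pts = np.random.uniform(-4, 4, size=(100000, 3))
      print(birds_eye(pts).shape)  # (200, 200) image instead of 100k 3D points
      ```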

  • @DerekDickerson • 9 years ago

    matlabbe so you must have a laser scanner as well outside of the kinect?

    • @matlabbe • 9 years ago

      It is not required, but it increases the precision of the mapping. In this video: ruclips.net/video/_qiLAWp7AqQ/видео.html , only the Kinect is used.

  • @sylvesterfowzan5417 • 5 years ago

    I'm currently working on a humanoid robot. We need to perform navigation and perception. Could you help us with what hardware to use and how to do this with ROS?

    • @matlabbe • 5 years ago +1

      The current documentation is on ros.org: wiki.ros.org/rtabmap_ros/Tutorials. When you have specific questions, you can ask them on ROS Answers (answers.ros.org/questions/) or on RTAB-Map's forum (official-rtab-map-forum.67519.x6.nabble.com/).

  • @suzangray6483 • 6 years ago

    Hi,
    What is your robot acting on? Is it moving according to the laser data or the camera data, or are you driving it with a remote control? I also have a laser scanner, and I can get a 3D image of the environment and the distance to the nearest object. But I want to feed this data into a tool like yours. I would be very happy if you could help me figure out how to do it. Thank you

    • @matlabbe • 6 years ago

      The robot is tele-operated using a gamepad. The best way to get your data into rtabmap is to use ROS and publish the right topics. See this example for more info: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_Odometry_.2B-_2D_laser
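
      A minimal ROS 1 (rospy) sketch of what "publishing the right topics" can look like (the topic names, frame ids, and node name here are illustrative defaults; the linked tutorial gives the exact remappings rtabmap expects):

      ```python
      #!/usr/bin/env python
      # Publish laser scans and odometry so a mapping node can subscribe to them.
      import rospy
      from sensor_msgs.msg import LaserScan
      from nav_msgs.msg import Odometry

      rospy.init_node("my_robot_driver")
      scan_pub = rospy.Publisher("scan", LaserScan, queue_size=10)
      odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)

      rate = rospy.Rate(10)
      while not rospy.is_shutdown():
          scan = LaserScan()
          scan.header.stamp = rospy.Time.now()
          scan.header.frame_id = "laser_link"   # fill ranges/angles from your driver
          odom = Odometry()
          odom.header.stamp = rospy.Time.now()
          odom.header.frame_id = "odom"
          odom.child_frame_id = "base_link"     # fill pose/twist from your encoders
          scan_pub.publish(scan)
          odom_pub.publish(odom)
          rate.sleep()
      ```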

    • @suzangray6483 • 6 years ago

      Understood. I have a 3DM-GX1 IMU. I want to get odometry messages from the IMU instead of from encoders. But as far as I know, linear velocity along the x and y axes cannot be obtained from an IMU. How can I measure odometry values with an IMU?

    • @matlabbe • 6 years ago

      It is possible (by integrating the acceleration twice), but it will have a lot of drift. Since you have a laser scanner, you can use it to get the x,y parameters, or estimate odometry with the laser scanner alone (like hector_mapping). Here is another setup: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_2D_laser
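
      A toy calculation of why double-integrating IMU accelerations drifts so badly (the 0.05 m/s² bias and 100 Hz rate are made-up example values):

      ```python
      # A small constant bias in the measured acceleration, integrated twice,
      # grows quadratically into position error.
      import numpy as np

      dt = 0.01                  # 100 Hz IMU
      t = np.arange(0, 60, dt)   # one minute of data
      bias = 0.05                # m/s^2 accelerometer bias (example value)

      acc = np.zeros_like(t) + bias   # true acceleration is zero; only the bias is measured
      vel = np.cumsum(acc) * dt       # first integration  -> velocity error grows linearly
      pos = np.cumsum(vel) * dt       # second integration -> position error grows quadratically

      print(f"position error after 60 s: {pos[-1]:.1f} m")  # roughly 90 m
      ```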

  • @ripleylee5726 • 4 years ago

    Hi, may I know if you have done any projects with the Intel RealSense D435i camera before?

    • @matlabbe • 4 years ago

      The D435i is integrated in the RTAB-Map standalone application (for hand-held mapping). It can also be used like the Kinect above with the rtabmap_ros package. Note that any RGB-D or stereo camera can be used with rtabmap_ros right now, as long as it complies with the standard ROS interface for images.

  • @masahirokobayashi911 • 7 years ago

    What kind of sensors do you use?

    • @matlabbe • 7 years ago

      In this demo: a URG-04LX lidar, an Xtion Pro Live, and wheel odometry from the robot.

  • @amirparvizi3997 • 7 years ago

    How do I download this video?

    • @Uditsinghparihar • 6 years ago

      Go to: en.savefrom.net/1-how-to-download-youtube-video/
      Then paste the URL of any YouTube video (in your case, this video's URL) in the box.

  • @xninjas3138 • 3 years ago

    Does it use an Arduino?

    • @matlabbe • 3 years ago

      An Intel NUC (from 2009, if I remember) is on the robot; the drive motors use custom boards.

  • @AndreaGulberti • 8 years ago

    OMG