Outdoor stereo SLAM with RTAB-Map

  • Published: 17 Aug 2024
  • More info: wiki.ros.org/rt...
    0:00 First Loop
    4:00 Second Loop

Comments • 86

  • @dikmugget • 2 years ago +4

    I bet seeing those loop closures happen was an amazing feeling and vindication of your hard work! :)

  • @aadityaasati868 • 7 years ago +4

    Fantastic work, very impressive...

  • @sabtvg • 3 years ago +1

    Great! Impressive! Show us more, please, please, please!

  • @klaus-udokloppstedt6257 • 3 years ago +2

    Awesome how the map (and position) is adjusted at 9:30 to fix the accumulated error.

    • @matlabbe • 3 years ago +1

      After the loop closure is detected (visually), the underlying pose graph of the map is optimized with the new constraint, re-adjusting all the clouds created so far and moving the robot back to its initial position.
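
      A toy sketch of what such a pose-graph optimization does (a hypothetical 1-D example in plain numpy, not RTAB-Map's actual solver, which works on SE(3) poses): drifted odometry constraints and one loop-closure constraint are solved jointly by least squares, pulling every pose back into place.

        # 1-D pose graph: p0 fixed at 0, free poses p1..p3.
        # Odometry (drifted) says each step is 1.1 m; the loop closure
        # says p3 is actually 3.0 m from p0.
        import numpy as np

        odom = [1.1, 1.1, 1.1]          # measurements p[i+1] - p[i]
        rows, b = [], []
        for i, z in enumerate(odom):    # one linear constraint per step
            r = np.zeros(3)
            if i > 0:
                r[i - 1] = -1.0
            r[i] = 1.0
            rows.append(r)
            b.append(z)
        r = np.zeros(3)
        r[2] = 1.0                      # loop closure: p3 - p0 = 3.0
        rows.append(r)
        b.append(3.0)

        x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
        print(x)  # [1.025 2.05 3.075]: poses shift back toward the loop constraint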

  • @gulhankaya5088 • 2 years ago

    Hello, how do I use the map I created? I am using the application in an autonomous vehicle. How will it navigate the map I created when I place the vehicle at the starting point?

  • @sencis9367 • 4 years ago

    When the camera rotates, does RTAB-Map rely only on the optical flow from the camera, or are IMU sensors and wheel encoders also used? I am also curious how the algorithm copes with points superimposed on one another; does it delete the last stored points if they are nearby?

    • @matlabbe • 4 years ago +1

      Hi, in this example the motion estimation is done only with the stereo camera (feature matching between the current frame and a local map of 3D visual features). It could indeed be improved by adding an IMU. The wheel encoders would be difficult to use in this case: as seen in the video when the robot is climbing the hill, the wheels slip a lot! For map generation, rtabmap offers many options, from a 2D occupancy grid to a 3D OctoMap, as well as a 3D point cloud with overlapping points filtered/merged. In this video, the points of each keyframe overlap. This may not be efficient in terms of memory, but it is better for processing time when the map has to be optimized (all frames must be moved, and thus all points!), and more convenient for fast visualization.
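
      A rough sketch of that kind of feature-based motion estimation with OpenCV (hypothetical variable names; RTAB-Map's own pipeline matches against a local feature map rather than just the previous frame):

        import cv2
        import numpy as np

        # Assumed inputs (not defined here):
        # prev_left, curr_left: consecutive grayscale left images
        # pts3d_prev: Nx3 array of 3-D positions for prev_left's keypoints,
        #             obtained from stereo triangulation (aligned with kp1)
        # K: 3x3 intrinsic matrix of the left camera
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(prev_left, None)
        kp2, des2 = orb.detectAndCompute(curr_left, None)

        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        obj = np.float32([pts3d_prev[m.queryIdx] for m in matches])  # 3-D, prev frame
        img = np.float32([kp2[m.trainIdx].pt for m in matches])      # 2-D, curr frame

        # PnP + RANSAC estimates the camera motion between the two frames
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)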

  • @chrisvonwielligh1962 • 3 years ago

    Would it be possible to fuse GPS localisation to link the odometry frame to lat-long coordinates?

    • @matlabbe • 3 years ago

      Yes, see for example official-rtab-map-forum.67519.x6.nabble.com/Is-this-the-way-to-use-lidar-IMU-efficiently-tp7496p7515.html

  • @pauldatche8410 • 7 years ago +4

    Wow! Awesome stuff! I would love to experiment with this on a 3D imaging project. How can I get your SLAM stereo-modelling software?

    • @matlabbe • 7 years ago +3

      You can start looking here: introlab.github.io/rtabmap/

  • @zyxwvutsrqponmlkh • 4 years ago

    3:05 had some trouble closing the circle. Are you mayhaps only using the tachometers for localization?

    • @matlabbe • 4 years ago +1

      Visual loop closures could not be detected backward in this test because only the front camera was used. A 360° camera or an additional rear-facing camera would be needed to detect those loop closures.

  • @muratkoc4693 • 2 years ago

    Thanks for this work. Can we do image processing in OpenCV with the constructed 3D map?

    • @matlabbe • 2 years ago +1

      Well, you can subscribe to the cloud_map topic, which is a sensor_msgs/PointCloud2 topic, and do whatever you want with it!
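
      For instance, a minimal rospy subscriber could look like the sketch below (assuming ROS1 and the default /rtabmap namespace):

        import rospy
        from sensor_msgs.msg import PointCloud2
        from sensor_msgs import point_cloud2

        def on_cloud(msg):
            # Pull xyz points out of the message; from here you can build a
            # numpy array, an OpenCV structure, or whatever your pipeline needs.
            pts = list(point_cloud2.read_points(
                msg, field_names=("x", "y", "z"), skip_nans=True))
            rospy.loginfo("cloud_map has %d points", len(pts))

        rospy.init_node("cloud_map_listener")
        rospy.Subscriber("/rtabmap/cloud_map", PointCloud2, on_cloud)
        rospy.spin()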

  • @jacksonkr_ • 8 years ago +1

    I am here because I was viewing your blog. I am looking for information on your model of stereo camera, but I'm not finding anything. Do you have that info listed anywhere online? Is it a Bumblebee?

    • @matlabbe • 8 years ago +2

      It is a Bumblebee2. Indeed, it is not named anywhere. I updated the tutorial above, thanks for pointing it out!

  • @FutureAIDev2015 • 7 years ago +4

    How do you deal with THIS much data? I bet if I tried to store all that on my Arduino, it'd probably not like it very well.

    • @matlabbe • 7 years ago +14

      For visual odometry, at least a laptop CPU is required to get a decent odometry update frequency (>10 Hz). If your odometry is computed externally to the Arduino, there are options for the 3D map: save the RGB-D images at a lower resolution to save some space, or don't save them at all and keep only the occupancy grid with the visual words used for loop closure detection. EDIT: Note also that you don't need to visualize the map on the Arduino; the resulting database can be opened afterwards on a desktop computer for visualization.

  • @rahul122112 • 3 years ago +1

    @matlabbe Nice mapping. But is the system able to localize given this map?

    • @matlabbe • 3 years ago +1

      This is what happens at 9:27 (the robot detects that it has come back to the beginning).
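
      To reuse a previously saved map, rtabmap can also be switched from mapping to localization mode; a minimal sketch, assuming ROS1, the default /rtabmap namespace, and the set_mode_localization service exposed by rtabmap_ros:

        import rospy
        from std_srvs.srv import Empty

        # Ask a running rtabmap node to stop adding new data to the map
        # and only localize against the map loaded from its database.
        rospy.init_node("switch_to_localization")
        rospy.wait_for_service("/rtabmap/set_mode_localization")
        rospy.ServiceProxy("/rtabmap/set_mode_localization", Empty)()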

    • @rahul122112 • 3 years ago

      @@matlabbe Awesome! I didn't notice that snippet. Another question that I have: which would be better for outdoors in your experience/opinion, depth-based RTAB-Map SLAM or stereo RTAB-Map SLAM? I am thinking of using a ZED2 camera for unstructured-environment SLAM and navigation.

    • @matlabbe • 3 years ago

      @@rahul122112 For outdoor environments, I would go for stereo cameras (like the ZED2).

    • @rahul122112 • 3 years ago

      @@matlabbe Ah yes. But I was asking whether the ZED2 would be more suitable with RGB-D RTAB-Map SLAM or stereo RTAB-Map SLAM in an outdoor unstructured environment?

    • @mathieulabbe4889 • 3 years ago

      @@rahul122112 The ZED can compute dense disparity on the GPU, which can save CPU time. The pose will be better estimated in stereo mode though, if some features are outside the depth range.

  • @TestSubject2000 • 8 years ago +2

    Very nice, how heavy does the map get?

    • @matlabbe • 8 years ago +2

      The resulting database is 193 MB. The RAM used to generate the 3D map depends on the number of points created (which is adjustable).

  • @rafcins • 6 years ago +1

    What sensor do you use for depth, a ZED stereo or a Bumblebee?

    • @matlabbe • 6 years ago

      Raf, it is a Bumblebee2 stereo camera; with a ZED you may get similar results.

    • @rafcins • 6 years ago +1

      matlabbe I use a ZED with a Jetson TX2, but it slows down heavily.

  • @user-mj2jp2cb1y • 8 years ago

    Great job! Could you please tell me what type of robot you are using in this video? And did you do the SLAM with just one stereo camera?

    • @matlabbe • 8 years ago

      The robot is AZIMUT3; I added the link to the referenced tutorial. The SLAM was done with only one stereo camera. Cheers

  • @tienhoangngoc7867 • 4 years ago

    Hi, can you tell me what algorithm this project used? The RTAB-Map algorithm? Thank you.

  • @-factos6519 • 3 years ago

    Hey! Really awesome work! But I have a question: what processor runs this algorithm? I'm really curious!

    • @matlabbe • 3 years ago

      The bag was processed on a MacBook Pro 2010, if I remember correctly (I think it was a 2010 i7).

  • @JabiloJoseJ • 2 years ago

    What camera are you using?

    • @matlabbe • 2 years ago

      In this video it was an old Bumblebee2 stereo camera (yes, the same name as the Transformer).

  • @theolix9938 • 4 years ago

    Can I use a Raspberry Pi 4 Model B for this project? Is there any way to contact you for help with our research? Thank you and God bless.

    • @mathieulabbe4889 • 4 years ago

      Not tested on RPI4 yet. I know rtabmap works on RPI3 with limited capability. For RPI4, I don't know if you can get ROS on it easily (with binaries). You may look at this page: ubuntu.com/blog/roadmap-for-official-support-for-the-raspberry-pi-4; if you install Ubuntu, you may not have to rebuild ROS from source (which can take very long!). If you have more questions or installation problems, look at the Troubleshooting section on the project's page: introlab.github.io/rtabmap/

  • @keshav2136 • 3 years ago

    Awesome

  • @tokyowarfare6729 • 8 years ago

    Wow. Would it be possible to build a point cloud offline from a stereo video or left/right video files, under Windows? I have a stereo video recorded from inside my car; the bonnet is slightly visible, but it's mostly road and surroundings.

    • @matlabbe • 8 years ago

      You may be interested in the side-by-side video section on this page: github.com/introlab/rtabmap/wiki/Stereo-mapping#process-a-side-by-side-stereo-video-with-calibration-example

    • @tokyowarfare6729 • 8 years ago

      Awesome! I did not expect this to exist!! All the videos on other channels referenced complex code that had to be run in Linux, do this, do the other... ohhh, thanks!!
      In the past I used, for fun, a nice program named Project Video Scanner, similar to this one but aimed at road mapping. It was unstable and inaccurate, but fun.
      I'll test my dataset with yours and tell you how it goes.
      So far I've tested my dataset on SfM programs; some do catch quite large portions of road, and when I manage to densify these the results are interesting. Usually these SfM apps miss a lot of road details.
      These SfM apps should use an initial camera location approach similar to yours. Maybe the camera poses can be exported as well as the point cloud. If it is dense enough, I could try to extract some breaklines (manually), mesh them with a clean topology and, if there is a way to import camera poses with the model, try to project the textures onto the meshes.

    • @tokyowarfare6729 • 8 years ago

      [SOLVED]
      Following the database tutorial, I get this error when I press Play. Ini file loaded, database loaded, check "unuse existing odometry", uncheck stamps, Apply, OK, New database... Play --> error:
      [FATAL] (2016-09-09 01:47:50) CameraModel.cpp:76::rtabmap::CameraModel::CameraModel() Condition (fx > 0.0) not met! [fx=0.000000]
      Tomorrow I'll print the checkerboard to try to use a custom dataset.
      OK, the calibration files were in the .zip file of the stereo dataset. Maybe they should also be available for those testing only the database tutorial.

    • @tokyowarfare6729 • 8 years ago

      Finally managed to run it. In the stereo images section you forgot to mention that creating a new database is needed; after this, the Play button is available. Did a test with your sample images; this is quite impressive! Never saw a reconstruction happen that fast XD

    • @tokyowarfare6729 • 8 years ago

      If you run the processing more times you get extra points. Like :)

  • @Shaban_Interactive • 4 years ago

    This is amazing. Can I capture a whole city with this? Can an ultrabook handle this much data? The area is about 60 km².

    • @matlabbe • 4 years ago +1

      It depends if you want to do it in real time. It is, however, possible to record the data and process it offline. Example with stereo on a car: ruclips.net/video/xIGKaE_rZ_Q/видео.html; example with lidar on a car: official-rtab-map-forum.67519.x6.nabble.com/Ouster-drive-under-the-rain-td6496.html
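
      Offline processing typically means recording a bag and replaying it later; a minimal sketch of reading recorded images back in Python (hypothetical bag and topic names):

        import rosbag
        from cv_bridge import CvBridge

        bridge = CvBridge()
        with rosbag.Bag("outdoor_run.bag") as bag:
            # Iterate over the recorded left images in timestamp order
            for topic, msg, t in bag.read_messages(
                    topics=["/stereo_camera/left/image_raw"]):
                frame = bridge.imgmsg_to_cv2(msg, "bgr8")
                # ...feed each frame into your offline pipeline here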

    • @Shaban_Interactive • 4 years ago

      matlabbe Excellent. Does it support the iPhone X TrueDepth camera? The Record3D app can give real-time RGB-D streaming via USB cable. Is it possible to map with an iPhone X?

  • @tokyowarfare6729 • 8 years ago

    Unable to calibrate from video. When I click "Calibrate", a "Camera initialization failed" error appears. I'm testing with a new camera, a 3D camera instead of a dual-camera rig, and got this issue. I also tried to calibrate with your calibration sample video after loading the tutorial *.ini file, and I get the same error; as soon as I switch to Images mode, no errors appear. I believe that in SBS video mode it looks for a camera instead of a file, as if it were looking for a USB cam. I'll extract frames from the video to try to calibrate the new camera.

    • @matlabbe • 8 years ago

      With the calibration sample video, it works here. What is the full error message on the terminal or in the console of the main window? What is the output of the 3D camera?

    • @tokyowarfare6729 • 8 years ago

      This is what I get:
      [ERROR] (2016-09-21 20:13:04) CameraStereo.cpp:1237::rtabmap::CameraStereoVideo::init() CameraStereoVideo: Failed to create a capture object!
      [ WARN] (2016-09-21 20:13:04) PreferencesDialog.cpp:4275::rtabmap::PreferencesDialog::createCamera() init camera failed...

    • @matlabbe • 8 years ago

      OpenCV cannot create a capture object for the video. This could be a video codec issue. The one used for the sample videos is "H264 - MPEG-4 AVC (part 10) (H264)" (from VLC codec info). The Windows binaries should have h264 support (OpenCV built with x264 library).
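
      A quick standalone check (hypothetical filename) that the OpenCV build can actually decode the video; if this fails, the problem is the codec/build rather than rtabmap:

        import cv2

        cap = cv2.VideoCapture("stereo_side_by_side.mp4")
        ok, frame = cap.read()
        print("opened:", cap.isOpened(), "first frame read:", ok)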

    • @tokyowarfare6729 • 8 years ago

      OK, I'll check, but I'm on W10 and I can open the videos without issues from Explorer.

    • @tokyowarfare6729 • 8 years ago

      Hmm, still no luck. I installed OpenCV (the prebuilt one for Windows), added the environment variables and the Path as explained in this install guide for dummies: homepages.ed.ac.uk/cblair2/opencv/opencv_tutorial.pdf (adapting the path). I guess what is truly necessary is to build it in Visual Studio with the x264 library... a bit out of the reach of mortals. Over the weekend I'll try to make in-car footage with the new stereo camera and, if I have time, with the dual CCD cameras too.

  • @maatsche • 9 years ago +4

    Can you upload the project to GitHub?

    • @matlabbe • 9 years ago +2

      Marcel Cohrs you can find it here: github.com/introlab/rtabmap_ros

  • @user-cm7bb1cc4g • 5 years ago

    Are you using a laptop? Or another MCU product?

    • @matlabbe • 5 years ago +1

      In this setup, we recorded a bag, then played it back on a laptop computer to record this video. However, the mapping node (without visualization) could have run on the robot's Mini-ITX without problems.

  • @surbhipal985 • 6 years ago

    Quite impressive!! Would this be able to create a 3D model of a complicated outdoor object such as a temple carving? Can you suggest a camera that would fulfil my requirement?

    • @matlabbe • 6 years ago

      It depends on the precision you want for the resulting model and on the time/cost you are willing to put into getting it. For very complex objects, and if you don't care about real-time processing, look at photogrammetry approaches (which can be done with a simple camera). Otherwise, for reconstructing an environment live, I would go for ToF cameras indoors and stereo cameras outdoors.

    • @surbhipal985 • 6 years ago

      Sir, can you name some stereo cameras that would be good enough for temple carvings?

    • @matlabbe • 6 years ago +1

      Try the RealSense 3D cameras, the ZED camera...

  • @alexandergrau887 • 8 years ago

    Interesting point: when there is only lawn (so all 'normal' vectors point up, as in your first frames), can RTAB-Map already find enough feature points for correspondences using a stereo camera? My experiments on lawn have not been that successful so far (not enough feature points on the lawn...): grauonline.de/wordpress/?page_id=1282

    • @matlabbe • 8 years ago +1

      +Alexander Grau RTAB-Map uses visual features for odometry, not geometry (ICP-like motion estimation would use the normal vectors). In your test on lawn, the problem is not a lack of visual features; it is a lack of valid depth values: the Xtion Pro Live or Kinect cannot be used outdoors unless there is no sunlight. In the video above, the stereo camera can work outdoors because the depth is computed by feature triangulation between the left and right RGB images (no IR image is used).
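
      The depth from feature triangulation mentioned here reduces to Z = f·B/d; a small numeric sketch (example values, not the actual Bumblebee2 calibration):

        import numpy as np

        f = 800.0      # focal length in pixels (example value)
        B = 0.12       # stereo baseline in meters (example value)
        d = np.array([40.0, 20.0, 8.0])   # disparities of matched features (px)
        Z = f * B / d                     # depth of each feature
        print(Z)       # [ 2.4  4.8 12. ] meters: small disparity = far away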

  • @LowLightVideos • 8 years ago +1

    This video is perfectly good. So many views and so few 👍's; hmmm, I wonder why that is...
    Geniuses saw "Outdoor stereo SLAM with RTAB-Map" and figured "an outdoor slam, and RTAB's playing, featuring Buddy Map" - man, that's gonna rock, gotta see it.
    All they were able to figure out is that they didn't have a clue what they were looking at, so they hit the Back button. Unfortunate, since SLAM is a valid technique to build these images and this video demonstrates it perfectly well.

  • @oldcowbb • 4 years ago

    Is it possible to use this with two regular cameras?

    • @matlabbe • 4 years ago

      In general no, unless they can be hardware-synchronized (linked with a connector that triggers a picture at exactly the same time on both cameras, like the industrial Point Grey or FLIR cameras). Fortunately, you can find stereo cameras that are not very expensive, like the RealSense D435, ZED or MYNT EYE.

    • @oldcowbb • 4 years ago

      @@matlabbe Wow, thank you so much for the quick response!

  • @shrijank522 • 6 years ago

    Can it be used for dynamic SLAM as well?

    • @matlabbe • 5 years ago +1

      The algorithm doesn't segment dynamic objects, so if something is moving while the robot is mapping, the object will appear multiple times in the 3D point cloud. In the occupancy grid output format, the object can be cleared.

    • @shrijank522 • 5 years ago

      @@matlabbe So for a dense dynamic environment, would an occupancy grid with a laser scan matching technique work?

  • @5sweatingpalm • 3 years ago

    holy shit!

  • @blackvalley007 • 6 years ago

    These stereo cameras surely obliterate lidar systems when it comes to 3D mapping processing speed. They probably won't have depth information as accurate as a lidar's, but that won't be much of an issue for my project (a low-speed robot with obstacle avoidance). Interesting video!

  •  4 years ago

    Is this possible with the Intel RealSense T265?

    • @matlabbe • 4 years ago

      Yes, see this post: official-rtab-map-forum.67519.x6.nabble.com/Slam-using-Intel-RealSense-tracking-camera-T265-td6333.html