Robot navigation using only an RGBD camera

  • Published: 7 Feb 2025
  • In this tutorial I explain how to do navigation using only a RealSense camera.
    ROS version: ROS2 Foxy
    The project is here:
    drive.google.c...

Comments • 32

  • @rekindle9  2 years ago +6

    Clear pronunciation, clear knowledge, logical presentation, no blabbering. Love the work!

  • @RSC2194  1 year ago +1

    Nice video, but you can also use just one lidar to perform SLAM without other sensors.

    • @robotmania8896  1 year ago

      Hi RSC2194!
      Thanks for watching my video!
      Yes, the most common way to do SLAM is by using a LIDAR. I introduced the RGBD camera as an alternative solution.

  • @omarsid6900  1 year ago +1

    Great work! Do you have any tips or ideas on how to improve the performance of these camera-based systems without using any sensors other than cameras? Is adding more cameras going to do the trick, or do we need to use different methods?

    • @robotmania8896  1 year ago +1

      Hi Omer Sayed!
      Thanks for watching my video!
      The disadvantage of a camera is that it has a narrower field of view and shorter range compared to a Lidar. If you increase the number of cameras, you will be able to compensate for the narrow angle, but the system will be more complex and expensive. So, I think that for real-world applications a combination of a Lidar and a camera is optimal.

  • @timandersen8030  1 year ago +2

    Great video! Are there any other cameras cheaper than RealSense for mapping?

    • @robotmania8896  1 year ago +3

      Hi Tim Andersen!
      Thanks for watching my video!
      For example, some Luxonis cameras are cheaper than the RealSense D435. But I think RealSense cameras are the best in terms of the amount of information available and reliability.

  • @NiranjanSujay  1 year ago +1

    There is a module-not-found error in the file that you have given! Could you check the error and reupload the file? I tried solving it, but there are more import issues saying there is a directory mismatch! I would appreciate it if you could check it! To be precise, it is from the gazebo_sim_launch_vo.py file.

    • @NiranjanSujay  1 year ago +1

      I see that you have mentioned how the robot navigates, but can you point out which part of the code does each of the things you mentioned? Can you explain to me where and in which part of the code the points below are implemented?
      1) Detect features from the first available RGB image using the FAST algorithm.
      2) Track the detected features in the next available RGB image using the Lucas-Kanade optical flow algorithm.
      3) Create the 3D pointcloud (of the tracked/detected feature points) of the latest two available RGB images with the help of their depth images.
      4) Estimate the motion between two consecutive 3D pointclouds.
      5) Concatenate the rotation and translation information to obtain the predicted path.

    • @NiranjanSujay  1 year ago

      For the case of creating the 3D point cloud, what did you put as the intrinsic parameters?

    • @robotmania8896  1 year ago

      Hi Niranjan Sujay!
      Thanks for watching my video!
      Sorry for the late response. I currently have some issues with my Ubuntu 20 environment, so I cannot check the yaml file right now. But if there is a bug, I will fix it. Also, regarding the rtabmap code, you can find it here. Please check the code for details.
      github.com/introlab/rtabmap_ros/tree/ros2
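
For reference, a minimal sketch of the five steps listed above, using OpenCV and NumPy. The intrinsics, FAST threshold, and pose convention are illustrative assumptions, not values taken from the project files:

import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics

def backproject(pixels, depth):
    # 3) Lift pixel coordinates to 3D camera coordinates via the depth image
    #    (depth assumed to be in meters; raw RealSense depth is uint16 millimeters)
    pts = []
    for u, v in pixels.reshape(-1, 2):
        z = float(depth[int(v), int(u)])
        pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return np.asarray(pts)

def estimate_motion(P, Q):
    # 4) Rigid transform (R, t) aligning point cloud P to point cloud Q (Kabsch / SVD)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # repair an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def vo_step(prev_gray, gray, prev_depth, depth, pose):
    # 1) Detect FAST corners in the previous RGB frame
    fast = cv2.FastFeatureDetector_create(threshold=25)
    p0 = np.float32([k.pt for k in fast.detect(prev_gray, None)]).reshape(-1, 1, 2)
    # 2) Track them into the current RGB frame with Lucas-Kanade optical flow
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    ok = status.ravel() == 1
    P, Q = backproject(p0[ok], prev_depth), backproject(p1[ok], depth)
    valid = (P[:, 2] > 0) & (Q[:, 2] > 0)     # keep pairs with valid depth in both frames
    R, t = estimate_motion(P[valid], Q[valid])
    # 5) Concatenate rotation and translation into the accumulated pose
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return pose @ np.linalg.inv(T)            # assumed camera-to-world convention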

  • @balajiravi4  1 year ago +1

    Thanks for the video. The navigation part is explained well, but the mapping tutorial is not there. Can you explain how you did the SLAM mapping using this camera?

    • @robotmania8896  1 year ago +1

      Hi balaji ravi!
      Thanks for watching my video!
      I have used “slam_toolbox” to create a map. In the case of this simulation, you have to launch “slam_launch.py”, the “depthimage_to_laserscan_node”, and “visual_odometry” (refer to the “gazebo_sim_launch_vo.py” script). You have to move the robot very slowly to create the map successfully, since the robot loses its position very easily when using only an RGBD camera.
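
As a rough illustration, a hypothetical ROS2 Foxy launch file that starts these three pieces together; the topic and frame names marked as assumed, and the visual odometry package/executable names, are placeholders rather than values from the project:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Convert the RGBD depth image into a laser scan for slam_toolbox
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            remappings=[('depth', '/camera/depth/image_raw'),              # assumed topic
                        ('depth_camera_info', '/camera/depth/camera_info')],
            parameters=[{'output_frame': 'camera_link'}],                   # assumed frame
        ),
        # The project's visual odometry node (package/executable assumed)
        Node(package='visual_odometry', executable='visual_odometry'),
        # slam_toolbox in online asynchronous mode
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[{'use_sim_time': True}],
        ),
    ])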

    • @balajiravi4  1 year ago +1

      @robotmania8896 Thanks for the reply. I have tried setting up depthimage_to_laserscan_node, but I am facing an issue: how do I install the package for Foxy? Also, slam_toolbox is not publishing a map in my case, and how do I use the visual odometry?
      It would be nice if you could share the project files for the SLAM mapping, so I can refer to them and see where I made mistakes.

    • @robotmania8896  1 year ago +1

      @balajiravi4 If I remember correctly, if you have installed all packages mentioned in this tutorial, “depthimage_to_laserscan” should be installed automatically. If you are using “slam_toolbox”, you don’t have to make additional packages. Please refer to this tutorial for “slam_toolbox” usage.
      navigation.ros.org/tutorials/docs/navigation2_with_slam.html

  • @nobodygg7314  1 year ago +1

    Hi, I wanted to calculate the height and width of an object detected using a RealSense D455. My YOLOv8 model provides me with a bounding box for a particular object. I want to find its height and width using the depth information; if you know how to do this, please share. Also, is there any method to reduce depth noise?

    • @robotmania8896  1 year ago +1

      Hi Nobody GG!
      Thanks for watching my video!
      Using the perspective projection transformation, you can calculate the coordinates of the bounding box in the camera (real-world) coordinate system. So, if we assume that the object height is about the same as the bounding box height, you can estimate the object height. Here is the link to the video in which I explain the theory. Unfortunately, I don’t know how to reduce depth noise. I assume that the noise inherently comes from the mechanical structure of the depth camera.
      ruclips.net/video/--81OoXMvlw/видео.html
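
A minimal sketch of that calculation, assuming a pinhole camera model with intrinsics fx and fy, and a depth image (in meters) aligned to the color image; the function name and the median-based depth estimate are illustrative choices, not from the video:

import numpy as np

def bbox_size_m(x1, y1, x2, y2, depth, fx, fy):
    # Median depth inside the box is more robust to noisy pixels than the mean
    z = float(np.median(depth[int(y1):int(y2), int(x1):int(x2)]))
    # Inverse perspective projection: a span of d pixels at depth z covers d * z / f meters
    width_m = (x2 - x1) * z / fx
    height_m = (y2 - y1) * z / fy
    return width_m, height_m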

    • @nobodygg7314  1 year ago

      @robotmania8896 Thanks for the reply. I will try to implement it as you showed in that video.

  • @nhattran4833  1 year ago +1

    Hello friend, when using an RGBD camera in reality, do we recalibrate the camera to get more precise extrinsic parameters? I am using an Intel RGBD camera; can you recommend some tools to calibrate this camera to get the extrinsic matrix?

    • @robotmania8896  1 year ago +2

      Hi nhattran4833!
      Thanks for watching my video!
      If you are using a RealSense camera, you can get the intrinsic parameters using the pyrealsense2 library, for example with this command:
      intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
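
In context, a minimal sketch of reading those intrinsics with pyrealsense2, assuming a RealSense camera is connected; the stream resolution and format are just examples:

import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(cfg)

# Focal lengths, principal point, and distortion model of the color stream
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
print(intr.fx, intr.fy, intr.ppx, intr.ppy, intr.model)

pipeline.stop()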

    • @nhattran4833  1 year ago

      @robotmania8896 So we can get all of the camera calibration parameters with this library?

  • @tsegaabnigusse2804  8 months ago +1

    Nice video, sir.
    Can you make a video on VSLAM in a custom Gazebo world (using rtabmap or ORB-SLAM), please?

    • @robotmania8896  8 months ago +1

      Hi Tsegaab Nigusse!
      Thanks for watching my video!
      Thank you for the suggestion, I will consider it!

  • @dipsihadiya2125  1 year ago

    Is this applicable underwater? I want to do underwater navigation; is this approach suitable for that?

    • @robotmania8896  1 year ago

      Hi Dipsi Hadiya!
      Thanks for watching my video!
      I think it is possible to use visual SLAM underwater. But you should be careful with the device choice, since light attenuates rapidly in water.

  • @valuetradings8766  10 months ago

    I'm getting a no map received message in ravin

    • @robotmania8896  10 months ago

      Hi Value Tradings!
      What is ravin?

    • @valuetradings8766  10 months ago

      @robotmania8896 Sorry, I meant rviz... it got autocorrected.

    • @robotmania8896  10 months ago

      @valuetradings8766 Does this message appear during map creation or during navigation?

    • @valuetradings8766  10 months ago

      @robotmania8896 During map creation.

    • @robotmania8896  10 months ago

      For this tutorial I created a map using slam_toolbox. What tool are you using?