Clear pronunciation, clear knowledge and logic on display, no blabbering. Love the work!
Hi ZhenQi Liu!
Thank you!
Nice video, but you can also use just one lidar to perform SLAM without any other sensors.
Hi RSC2194!
Thanks for watching my video!
Yes, the most common way to do SLAM is by using a LiDAR. I introduced the RGBD camera as an alternative solution.
Great work! Do you have any tips or ideas on how to improve the performance of these camera-based systems without using any sensors other than cameras? Is adding more cameras going to do the trick, or do we need to use different methods?
Hi Omer Sayed!
Thanks for watching my video!
The disadvantage of a camera is that it has a narrower and shorter range compared to a LiDAR. If you increase the number of cameras, you will be able to compensate for the narrow field of view, but the system will be more complex and expensive. So, I think that for real-world applications, a combination of a LiDAR and a camera is optimal.
Great video! Are there any other cameras cheaper than RealSense for mapping?
Hi Tim Andersen!
Thanks for watching my video!
For example, some Luxonis cameras are cheaper than the RealSense D435. But I think RealSense cameras are the best in terms of the amount of information available and reliability.
There is a 'module not found' error in the file that you have given! Could you check the error and reupload the file? I tried solving it, but there are more import issues saying there is a directory mismatch. I would appreciate it if you could check it! To be precise, it's from the gazebo_sim_launch_vo.py file.
I see that you have mentioned how the robot navigates, but could you point out which part of the code does all of the things you mentioned? Can you explain to me where in the code each of the points below is implemented?
1) Detect features from the first available RGB image using the FAST algorithm.
2) Track the detected features in the next available RGB image using the Lucas-Kanade optical flow algorithm.
3) Create the 3D point cloud (of the tracked/detected feature points) of the latest two available RGB images with the help of their depth images.
4) Estimate the motion between two consecutive 3D point clouds.
5) Concatenate the rotation and translation information to obtain the predicted path.
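For anyone reading along, here is a rough OpenCV sketch of the five steps above. It is not the project's actual code; the intrinsics, depth scale, and function names are placeholders, and the rigid-motion step uses a plain Kabsch/SVD fit instead of whatever the video's code does.

import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # placeholder intrinsics
DEPTH_SCALE = 0.001                          # assuming depth images are in millimeters

def rigid_transform(src, dst):
    # Kabsch/SVD: find R, t that best map src points onto dst points
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t.reshape(3, 1)

def vo_step(rgb_prev, rgb_next, depth_prev, depth_next, pose):
    gray_prev = cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(rgb_next, cv2.COLOR_BGR2GRAY)
    # 1) Detect features in the first image with FAST
    fast = cv2.FastFeatureDetector_create(threshold=25)
    pts_prev = np.float32([k.pt for k in fast.detect(gray_prev, None)]).reshape(-1, 1, 2)
    # 2) Track them into the next image with Lucas-Kanade optical flow
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_next, pts_prev, None)
    ok = status.flatten() == 1
    p_prev, p_next = pts_prev[ok].reshape(-1, 2), pts_next[ok].reshape(-1, 2)
    h, w = gray_next.shape
    inb = (p_next[:, 0] >= 0) & (p_next[:, 0] < w) & (p_next[:, 1] >= 0) & (p_next[:, 1] < h)
    p_prev, p_next = p_prev[inb], p_next[inb]
    # 3) Back-project the tracked pixels to 3D using the two depth images
    def to_cloud(pts, depth):
        z = depth[pts[:, 1].astype(int), pts[:, 0].astype(int)] * DEPTH_SCALE
        x = (pts[:, 0] - cx) * z / fx
        y = (pts[:, 1] - cy) * z / fy
        return np.column_stack([x, y, z]), z > 0
    cloud_prev, valid_prev = to_cloud(p_prev, depth_prev)
    cloud_next, valid_next = to_cloud(p_next, depth_next)
    valid = valid_prev & valid_next
    # 4) Estimate the rigid motion between the two point clouds
    #    (maps new-frame points into the previous camera frame)
    R, t = rigid_transform(cloud_next[valid], cloud_prev[valid])
    # 5) Concatenate rotation and translation to extend the predicted path
    step = np.vstack([np.hstack([R, t]), [0, 0, 0, 1]])
    return pose @ step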
For creating the 3D point cloud, what did you use as the intrinsic parameters?
Hi Niranjan Sujay!
Thanks for watching my video!
Sorry for the late response. I currently have some issues with my Ubuntu 20 environment, so I cannot check the yaml file right now. But if there is a bug, I will fix it. Also, you can find the rtabmap code here. Please check the code for details.
github.com/introlab/rtabmap_ros/tree/ros2
Thanks for the video. The navigation part is explained well, but the mapping tutorial is not there. Can you explain how you did the SLAM mapping using this camera?
Hi balaji ravi!
Thanks for watching my video!
I have used “slam_toolbox” to create the map. In the case of this simulation, you have to launch “slam_launch.py”, “depthimage_to_laserscan_node”, and “visual_odometry” (refer to the “gazebo_sim_launch_vo.py” script). You have to move the robot very slowly to create the map successfully, since the robot loses its position very easily when using only an RGBD camera.
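In case it helps, here is a minimal launch sketch of those three nodes. This is not the actual gazebo_sim_launch_vo.py; the camera topic names and the visual_odometry package/executable names are assumptions, so adjust them to the project.

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Convert the RGBD depth image into a laser scan that slam_toolbox can consume
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            remappings=[('depth', '/camera/depth/image_raw'),
                        ('depth_camera_info', '/camera/depth/camera_info')],
        ),
        # Visual odometry node publishing the odom -> base_link transform
        # (package/executable names here are placeholders)
        Node(package='visual_odometry', executable='visual_odometry'),
        # slam_toolbox in online asynchronous mode builds the map from /scan
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[{'use_sim_time': True}],
        ),
    ])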
@@robotmania8896 Thanks for the reply. I have tried setting up depthimage_to_laserscan_node, but I'm facing an issue: how do I install the package for Foxy? Also, slam_toolbox is not publishing a map in my case, and how do I use the visual odometry?
It would be nice if you could share the project files for SLAM mapping so I can refer to them and see where I made mistakes.
@@balajiravi4 If I remember correctly, if you have installed all the packages mentioned in this tutorial, “depthimage_to_laserscan” should be installed automatically. If you are using “slam_toolbox”, you don’t have to make additional packages. Please refer to this tutorial for “slam_toolbox” usage.
navigation.ros.org/tutorials/docs/navigation2_with_slam.html
Hi, I want to calculate the height and width of an object detected using the RealSense D455. My YOLOv8 model provides me with a bounding box for a particular object, and I want to find its height and width using the depth information; if you know how to do this, please share. Also, is there any method to reduce depth noise?
Hi Nobody GG!
Thanks for watching my video!
Using the perspective projection transformation, you can calculate the coordinates of the bounding box in the camera (real-world) coordinate system. So, if we assume that the object height is about the same as the bounding box height, you can estimate the object height. Here is the link to the video in which I explain the theory. Unfortunately, I don’t know how to reduce depth noise. I assume that the noise inherently comes from the mechanical structure of the depth camera.
ruclips.net/video/--81OoXMvlw/видео.html
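To make the idea concrete, here is a rough sketch of that calculation (not code from the video), assuming the pinhole model with placeholder intrinsics fx, fy and using the median depth inside the box as the object distance.

import numpy as np

def box_size_in_meters(box, depth_image, fx, fy, depth_scale=0.001):
    # box = (x1, y1, x2, y2) in pixels; depth_image assumed to be in millimeters
    x1, y1, x2, y2 = box
    roi = depth_image[y1:y2, x1:x2].astype(np.float32) * depth_scale
    z = np.median(roi[roi > 0])  # median depth inside the box, in meters
    # Pinhole back-projection: X = (u - cx) * Z / fx, so a pixel span maps to span * Z / f
    width = (x2 - x1) * z / fx
    height = (y2 - y1) * z / fy
    return width, height

# Example with a hypothetical 120 x 300 px box and fx = fy = 610 px:
# w, h = box_size_in_meters((100, 50, 220, 350), depth_image, 610.0, 610.0)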
@@robotmania8896 Thanks for the reply. I will try to implement it as you showed in that video.
Hello friend, when using an RGBD camera in reality, do we need to recalibrate the camera to get more precise extrinsic parameters? I am using an Intel RGBD camera; can you recommend some tools to calibrate this camera to get the extrinsic matrix?
Hi nhattran4833!
Thanks for watching my video!
If you are using a RealSense camera, you can get the intrinsic parameters using the pyrealsense2 library, for example, with this command:
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
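For context, here is a minimal sketch of how that call fits into a pyrealsense2 session (the stream settings are just example values):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # example settings
profile = pipeline.start(config)
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
print(intr.fx, intr.fy, intr.ppx, intr.ppy, intr.coeffs)  # focal lengths, principal point, distortion
pipeline.stop()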
@@robotmania8896 So we can get all of the camera calibration parameters from this library?
Nice video, sir!
Can you make a video on vSLAM in a custom Gazebo world (using rtabmap or ORB-SLAM), please?
Hi Tsegaab Nigusse!
Thanks for watching my video!
Thank you for the suggestion, I will consider it!
Is this applicable underwater? I want to do navigation underwater; is this suitable?
Hi Dipsi Hadiya!
Thanks for watching my video!
I think it is possible to use visual SLAM underwater. But you should be careful with your device choice, since light attenuates rapidly in water.
I'm getting a 'no map received' message in ravin.
Hi Value Tradings!
What is ravin?
@@robotmania8896 Sorry, I meant rviz... it got autocorrected.
@@valuetradings8766 Does this message appear during map creation or during navigation?
@@robotmania8896 during map creation
For this tutorial I created a map using slam_toolbox. What tool are you using?