Robot Mapping using Laser and Kinect-like Sensor
- Published: 12 Aug 2014
- Comparison between real and virtual 3rd person views of a robot mapping an environment using RTAB-Map. Five books are also detected using Find-Object during the experiment.
The code is publicly available on introlab.github.io/rtabmap.
For object recognition, see introlab.github.io/find-object/ . - Science
To think this was made a couple of years ago. Big up to you.
nice work! i love this video
Very interesting, thank you!
amazing dude!
Way of the future. Awesome.
Amazing!
Excellent demo! Could you please clarify: how do you handle new scene points discovered by the robot? Do you add all of them into your scene, or do you perform some sort of smart merging? What "add point to the scene" rate do you have?
Anton Myagotin The map's graph is filtered to keep around 1 node/meter, then a point cloud is created from each filtered node. In the visualization above, some clouds are effectively superimposed.
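To illustrate the idea in the reply above, here is a minimal sketch (not RTAB-Map's actual code; the function name and the pose representation are hypothetical) of filtering a path of graph nodes so that kept nodes are roughly one metre apart:

```python
import math

def filter_path(poses, min_spacing=1.0):
    """Keep only poses at least `min_spacing` metres from the last kept one.

    `poses` is a list of (x, y) positions along the robot's path.
    This is a simplified stand-in for graph-based node filtering.
    """
    kept = [poses[0]]
    for p in poses[1:]:
        if math.dist(p, kept[-1]) >= min_spacing:
            kept.append(p)
    return kept

# A path sampled every 0.3 m keeps roughly one node per metre:
path = [(i * 0.3, 0.0) for i in range(11)]   # 0.0 m .. 3.0 m
print(filter_path(path))
```

A point cloud would then be generated only for the kept nodes, which is why some clouds in the video appear superimposed.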
Great job! Btw, how do you find the objects? Do you build point cloud models of the objects before searching?
+余煒森 Objects are found using RGB images only. Visual features (SURF) are extracted from images of the books, then compared to the live RGB stream to find the same visual features.
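The core of that feature-matching step can be sketched in pure Python (this is not Find-Object's code: the function is hypothetical, and real systems match high-dimensional SURF/SIFT descriptors with an indexed search rather than brute force):

```python
def match_descriptors(query, train, ratio=0.75):
    """Brute-force nearest-neighbour descriptor matching with Lowe's ratio test.

    `query` and `train` are lists of equal-length numeric descriptor vectors.
    A query descriptor matches when its best train neighbour is clearly
    closer than the second best (the ratio test rejects ambiguous matches).
    Returns a list of (query_index, train_index) pairs.
    """
    matches = []
    for qi, qd in enumerate(query):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(qd, td)) ** 0.5, ti)
            for ti, td in enumerate(train)
        )
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Toy 2-D "descriptors": each query vector has one obvious neighbour.
print(match_descriptors([[0, 0], [5, 5]], [[0.1, 0], [3, 3], [5, 5.1]]))
```

With enough such matches between a book image and the live frame, the object's pose in the frame can be estimated (e.g., with a RANSAC homography).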
Nice job! Could you tell me how you do the navigation?
Hi, very good job.
I have a question about how you power the Kinect. Did you use a battery, or some kind of inverter?
In this demo, it is an Xtion Pro Live, which is powered by USB for convenience. For a Kinect v1, we could cut the wire and plug it into a 12 V DC output directly on the board of the robot.
Hi... can you tell me what processor or dev board you are using to process the Kinect and laser data?
Btw... nice video. Impressed with your results. :)
kapil yadav It is a mini-ITX with an i7 + SSD (no GPU), running Ubuntu 12.04 + ROS Hydro.
Oh man, this is perfect for what I'm doing. I'm making a robot with room mapping that will act like a security guard, but I'm using lidar mapping, camera object recognition, object following and facial recognition. You use some of that, right?
In this demo, SLAM is done with the lidar and RGB-D camera using the "rtabmap_ros" package, and books are detected using the "find_object_2d" package. There is no face detection or object following here. Cheers!
@@matlabbe What if we combine it with object detection using a camera?
It seemed to throw away a lot of points after they left the camera's FOV. Is that to prevent distance inaccuracies, to keep the performance up, or something else?
We keep the map downsampled for rendering performance. Maybe with new GPUs, we could keep the map more dense while keeping smooth visualization. We can see at the end of the video, the rendering frame rate is already lower than at the beginning.
matlabbe ah alright.
Point clouds are very memory intensive!
I am doing a similar type of pointcloud mapping on Android, using ARCore. One of the first things I noticed is how hard it is to keep that many points. If I compute points for an entire camera frame, and try to keep them, the device would run out of memory after about 30 seconds. Luckily all I need really is a birds eye view perspective, so I project the points to a 2D image and that works fine.
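The bird's-eye projection described in the comment above can be sketched like this (a hypothetical minimal version; an ARCore app would do the equivalent on the GPU into an image buffer):

```python
def birds_eye(points, cell=0.1):
    """Project 3-D points onto the ground plane as a set of occupied cells.

    Dropping the z coordinate and bucketing (x, y) into `cell`-sized grid
    cells turns an unbounded point cloud into a fixed-size 2-D occupancy
    footprint, which avoids the memory growth of keeping every 3-D point.
    """
    return {(int(x // cell), int(y // cell)) for x, y, z in points}

# Points at different heights over the same spot occupy one cell.
print(birds_eye([(0.05, 0.05, 1.0), (0.07, 0.06, 2.0), (0.55, 0.05, 0.5)]))
```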
matlabbe So you must have a laser scanner as well, in addition to the Kinect?
It is not required, but it increases the precision of the mapping. In this video: ruclips.net/video/_qiLAWp7AqQ/видео.html , only the Kinect is used.
I'm currently working on a humanoid robot; we need to perform navigation and perception. Could you advise us on what hardware to use and how to do this with ROS?
The current documentation is on ros.org: wiki.ros.org/rtabmap_ros/Tutorials. When you have specific questions, you can ask them on ROS Answers (answers.ros.org/questions/) or on RTAB-Map's forum (official-rtab-map-forum.67519.x6.nabble.com/).
Hi,
What drives your robot? Is it moving according to the laser data or the camera data, or are you controlling it remotely? I also have a laser scanner, and I can get a 3D image of the environment and the distance to the nearest object. But I want to connect this to a tool like yours. I'd be very happy if you could help me with how to do it. Thank you.
The robot is tele-operated using a gamepad. The best way to communicate the data you have to rtabmap is to use ROS and publish the right topics. See this example wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_Odometry_.2B-_2D_laser for more info!
I understood. I have a 3DM-GX1 IMU. I want to get odometry messages with the IMU instead of encoders. But as far as I know, linear velocity along the x and y axes cannot be obtained from an IMU. How can I measure odometry values with an IMU?
It is possible (by integrating the acceleration twice), but it will have a lot of drift. You said you have a laser scanner: you can use it to get the x, y parameters, or estimate odometry with the laser scanner alone (like hector_mapping). Here is another setup: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_2D_laser
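The drift problem mentioned in the reply above is easy to demonstrate: even a tiny constant accelerometer bias, integrated twice, makes the position error grow quadratically with time. A minimal sketch (hypothetical 1-D function, not a real IMU pipeline):

```python
def integrate_imu(accels, dt, bias=0.0):
    """Double-integrate 1-D acceleration samples (with a constant sensor
    bias) into positions. Returns the position after each sample.

    The bias models a miscalibrated accelerometer: velocity error grows
    linearly in time, so position error grows quadratically.
    """
    v = x = 0.0
    positions = []
    for a in accels:
        v += (a + bias) * dt          # first integration: velocity
        x += v * dt                   # second integration: position
        positions.append(x)
    return positions

# A stationary robot (true acceleration = 0) with a 0.01 m/s^2 bias
# "moves" about half a metre after only 10 seconds at 100 Hz:
drifted = integrate_imu([0.0] * 1000, dt=0.01, bias=0.01)
print(drifted[-1])
```

This is why fusing the IMU with an exteroceptive sensor (laser scan matching, visual odometry, or wheel encoders) is the usual approach rather than IMU-only odometry.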
Hi, may I know if you have done any project with the Intel RealSense D435i camera before?
The D435i is integrated in the RTAB-Map standalone application (for hand-held mapping). It can also be used like the Kinect above with the rtabmap_ros package. Note that any RGB-D or stereo camera can be used with rtabmap_ros right now, as long as it complies with the standard ROS interface for images.
What kind of sensors do you use?
On this demo: URG-04LX, Xtion Pro Live and wheel odometry from the robot.
How do I download this video?
Go to en.savefrom.net/1-how-to-download-youtube-video/
Then paste the URL of any YouTube video (in your case, this video's URL) in the box.
Is there an Arduino used?
A (2009, if I remember) Intel NUC is on the robot; the drive motors use custom boards.
OMG