Hi Nhat Net! Thanks for watching my video! I am currently not planning to make a tutorial about yolov8 and tensorRT but I have several videos related to it. Where exactly are you experiencing a problem? ruclips.net/video/xqroBkpf3lY/видео.html ruclips.net/video/aWDFtBPN2HM/видео.html
Hi mdmahedihassan2444! Thanks for watching my video! As I have explained in the video from 11:15, please run the “source” command before executing the “ros2 topic list” command.
Hi NOURISH CHERISH! Thanks for watching my video! Yes, this work is written by me. There is no GitHub repo. You may download the zip file at home and send it to your workplace by email.
Hi! I have an error after I source the yolobot setup.bash. When I run the ros2 launch yolobot_gazebo yolobot_launch.py command, it says gazebo_ros not found. I've installed the ROS2 Iron Gazebo package as well. How can I fix this?
Hi Howard Kang! Thanks for watching my video! If the error says “gazebo_ros not found”, please install “ros-iron-gazebo-ros” package. Also note that this project was made with ROS Foxy, so it may not work with ROS Iron.
Hi anshbhatia4805! Thanks for watching my video! What do you mean by “integrate”? In this tutorial I have already explained how to use yolov8 with camera.
Hi, thanks for the video! I trained a YOLOv8 model on Colab to detect traffic cones. I have a ZED 2i stereo camera, and I want to integrate YOLOv8 with ROS2.
Hi Manish Nayak! Thanks for watching my video! Since there is python implementation for yolov8, you don’t have to build it. Just install the required libraries for yolov8 using pip.
Hi, this video is really helping me with my project, but I am not getting the images when I look in RViz2. Could you help me with that? Also, when I try to install ROS Foxy I get a "no such file or directory" error.
@@robotmania8896 Yeah, it says no such file or directory. I have Gazebo 11.10.2 and am using the Humble packages. Also, how do I speed up the Gazebo simulation to detect things? Which camera are we using, and did you train YOLOv8 first? After colcon build it asks me to connect a joystick, what is this? And when I run ros2 topic echo /yolov8_inference I don't get back any parameters. If you don't mind, can I have your email address to ask more questions? I have to submit this project at the end of this month, please.
Have you built the project successfully? To speed up the inference, you should use a computer with a GPU. Since it is a simulation, the camera parameters are defined in the SDF file. No, in this tutorial I haven’t trained YOLO; I just used a provided model. To operate the robot, you should use a joypad. You should run the source command before doing “ros2 topic echo /yolov8_inference”. Here is my e-mail: robotmania8867@yahoo.com
Hello, I am facing issues while installing ultralytics (its build dependencies are not satisfied). I am using Ubuntu 20.04 and I tried with Python 3.8.10 and Python 3.7.5, but it still gives an error. Please suggest what to do; I am not able to find a solution anywhere else🥲
I am having the same issues as well. It is with Ultralytics' dependencies when installing through pip. I tried running it within a Docker container as well and I get the same issues.
In that case, you can publish the “cmd_vel” topic when the robot detects “turn left” sign. Probably you will need depth camera as well to make the robot turn at right point.
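As a sketch of that idea, the detection-to-command mapping can be written as plain logic. The class names and speed values below are assumptions for illustration, not code from the tutorial; in an actual ROS2 node the returned pair would be copied into a geometry_msgs/Twist message and published on the “cmd_vel” topic.

```python
# Hypothetical sketch: map a detected sign class to a velocity command.
# Class names and speed values are illustrative assumptions.

def command_for_sign(class_name):
    """Return (linear_x, angular_z) for a detected traffic sign."""
    if class_name == "turn_left":
        return (0.2, 0.5)    # creep forward while turning left
    if class_name == "turn_right":
        return (0.2, -0.5)   # creep forward while turning right
    if class_name == "stop":
        return (0.0, 0.0)    # halt
    return (0.2, 0.0)        # default: drive straight
```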
Thank you for your REALLY NICE VIDEO! I'm following the video, but I have some issues during 'colcon build'.
[error] ModuleNotFoundError: No module named 'catkin_pkg'
I tried to solve the error:
1. pip install catkin_pkg
2. source /opt/ros/foxy/setup.bash
3. added “source /opt/ros/foxy/setup.bash” to the “.bashrc” file
but it's not working. Any idea about this error? Thank you!
You don’t need catkin with ros2. ROS2 packages are built using ament_cmake. Have you tried adding “source /opt/ros/foxy/setup.bash” in “.bashrc” file and rebooting?
Hi Bad Brother! Thanks for watching my video! What do you mean by “simple track”? If it is something like a road, you can use semantic segmentation to extract the road part from an image.
@@robotmania8896 I mean lane tracking, so it detects the trajectory of the black tape. Oh okay, thank you for your advice. I would like to ask a question: actually, I have never tried YOLO. If I want to start learning, where should I start so that I can do lane tracking using YOLO? Thanks
@@BadBrother If you need to detect black tape, you don’t have to do inference using YOLO. You may just use HSV decomposition to detect black color. Here is an example video. ruclips.net/video/hdnuykRwMmI/видео.html
@@robotmania8896 I really appreciate your response. I apologise for confusing you. The robot I want to build uses a camera to detect the two black tapes on the left and right, and the path forms a trajectory. The camera is connected to the Jetson Nano, and the motors are driven by an Arduino Uno.
I understand your problem. If you know exactly the object and color you have to detect, I think you don’t necessarily have to use yolo. You may do color detection or use infrared sensor to move along the line.
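For the color-detection route, a minimal sketch of black detection (no YOLO, no OpenCV) is just a value threshold, since the V channel of HSV is the maximum of R, G and B. The threshold value here is an assumption to tune for your lighting.

```python
# Minimal sketch of black-line detection by value thresholding
# (the V channel of HSV is just the max of R, G, B).
# The threshold is an assumption; tune it for your lighting.

def is_black(pixel, v_threshold=50):
    """pixel is an (R, G, B) tuple with 0-255 channels."""
    return max(pixel) < v_threshold

def black_mask(image, v_threshold=50):
    """image is a list of rows of (R, G, B) pixels; returns a 0/1 mask."""
    return [[1 if is_black(p, v_threshold) else 0 for p in row] for row in image]
```

With OpenCV you would do the same thing by converting with cv2.cvtColor(img, cv2.COLOR_BGR2HSV) and thresholding the V channel with cv2.inRange.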
Hi, thank you for making the video!! :) But I have a problem. I ran “ros2 topic echo /Yolov8_Inference” and got:
WARNING: topic [/Yolov8_Inference] does not appear to be published yet
Could not determine the type for the passed topic
How can I fix this error? I tried
sudo apt-get install ros-humble-ros-ign-bridge
sudo apt-get install ros-humble-ros-pkgs
sudo apt-get install ros-${ROS_DISTRO}-ros-gz
but I still have the error. Please help me.
Hi Moon! Thanks for watching my video! Did you execute the “source” command before executing the “ros2 topic echo” command? Otherwise, you will get an error.
Hi lordfarquad-by1dq! Thanks for watching my video! The segmentation result format is slightly different from recognition, but you should be able to publish it with little change to the code. I am planning to release a new video within a few days regarding semantic segmentation and yolo, it may also help you.
Hi Wahyu Fahrizal Al Fayyadh! Thanks for watching my video! This is a very simple model, so I didn’t use any xacro files. You can change the extension of the file to xacro and it should work as a xacro file.
Actually, it's running without errors. How? You just have a line that says self.subscription... How is that not causing an error? Do I not know Python?
Hello sir, thank you for the video, I am learning a lot from you. I tried to implement the project step by step, but firstly the gazebo folder didn't appear in my home directory after unhiding all content of the home. I searched manually and found a gazebo-9 folder, but its contents were not similar, though it did have a models folder. Secondly, on colcon build I get this error:
CMake Error at CMakeLists.txt:19 (find_package):
By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one. Could not find a package configuration file provided by "ament_cmake" with any of the following names: ament_cmakeConfig.cmake ament_cmake-config.cmake
Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
Hi @user-xg7dk1wl3s! Thanks for watching my video! Please open the terminal and execute the “gazebo” command. The folder should appear. Also, add “source /opt/ros/foxy/setup.bash” to your “.bashrc” file. This should solve ament_cmake related error. Don’t forget to reboot your computer after altering the “.bashrc” file.
Can't get the yolobot_inference folder to be listed. It keeps showing this error: ModuleNotFoundError: No module named 'yolov8_msgs.yolov8_msgs_s__rosidl_typesupport_c'
@@robotmania8896 I reinstalled everything and now the inference folder is listed, but when I do the echo part it does not show the camera working, and when I open RViz the inference image shows "no image".
@@ChristianDiaryUG I fixed it by running Gazebo 11 with ROS2 Humble on Ubuntu 22.04 and adding the additional step of installing the package for communication between ROS2 and Gazebo: sudo apt-get install ros-humble-ros-ign-bridge
Hi OYM dental! Thanks for watching my video! No, I am not providing any classes, since I have another job which takes almost all my time. But if you have any questions, maybe I can answer them.
Hi, it is a very good tutorial. But I am facing this error during the "colcon build" command:
CMake Error at CMakeLists.txt:19 (find_package):
By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one. Could not find a package configuration file provided by "ament_cmake" with any of the following names: ament_cmakeConfig.cmake ament_cmake-config.cmake
Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
Do you have any idea how to solve this? Thank you!
@@robotmania8896 It works, thanks a lot. After that, I ran the command "ros2 topic echo /Yolov8_Inference", but it doesn't show anything. Do you have any idea about this?
It is one of the best tutorials about ROS with YOLO that I have seen on the Internet. I am a student researcher working on a project about robotic vision detection, and your video really helped me a ton. Thank you for your contribution!
Hi Jiangjing!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Very good tutorial and nicely explained! I am working on a robot that should recognize a ball using ML and depth vision and this has been of great help. Thank you!
Hi Eric Avram!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Thanks a lot for your video, actually for all your videos, they are really helping me with my projects
Hi Vilson Wenis Belle!
Thanks for watching my videos!
It is my pleasure if these videos have helped you!
Thank you. I've found this very useful. I appreciate your effort!
Hi checksumff1248!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Hey, it helped me a lot on a project I am doing. Easy to implement on any project, thanks a lot!
Hi Vivien Chambost!
Thanks for watching my video!
It is my pleasure if this program has helped you!
Love you professor. This video will make my future. 🥰
Hi shailigupta4086!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Thank you very much brother, it really helped me to finish my study project.
Hi Sharing My Little Insight!
It is my pleasure if this video has helped you!
This was very helpful, thank you !
Hi Newton Kariuki!
Thanks for watching my video!
It is my pleasure if this video has helped you.
Great tutorial. Got a question though, is it possible to do "object pose estimation" with just RGB data? If possible, do you think using Point Cloud (just depth data) instead of RGB would make it better for pose estimation?
My case is detecting a pallet and its pose, with a Kinect v1 camera.
Hi Alperen Keser!
Thanks for watching my video!
To do object pose estimation, first of all, you have to recognize the object. So, I think it is not possible to do pose estimation with just depth data, because you will not be able to recognize the object. Also, you need depth data to calculate the coordinates of the object; but if your objects are all on the same plane and you know the distance from the camera to that plane, you will be able to calculate the coordinates.
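The known-plane case mentioned above reduces to the pinhole model: with the depth Z fixed by the plane, a pixel back-projects to a unique 3D point. A minimal sketch, where fx, fy, cx, cy stand for your calibrated intrinsics (the values in any example are placeholders):

```python
# Sketch: recover (X, Y, Z) of a detected object lying on a plane at a known
# distance z from the camera, using the pinhole model. fx, fy, cx, cy are
# placeholder names for your calibrated camera intrinsics.

def pixel_to_plane_point(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) onto the plane at depth z (camera frame)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```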
Hi. Thank you for your video. It's exactly what I've been looking for. However, I have a question. I have my RGB camera that would be working with the YOLOv8 custom model that I'm currently training, but I also have a depth ToF camera which can publish a PointCloud depth map. What reference frame do the bounding boxes have? I need this information to match at least the coordinates of the corners with the depth map. Do you think that's possible?
Yes, I think it is possible to align frames from the RGB camera and the depth camera, but it will involve some relatively complex mathematical operations. If you would like to use RGB and depth cameras simultaneously, I recommend using a RealSense or ZED camera. It will probably save you a lot of time.
@@robotmania8896 thank you so much. I will be indeed using a RealSense. I will look into it to see if there are already some algorithms to do that. Wish you success :)
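For reference, the "relatively complex mathematical operations" behind RGB-depth alignment boil down to three steps: deproject a depth pixel to a 3D point, transform it into the color camera frame with the extrinsics (R, t), and reproject with the color intrinsics. A bare-bones sketch with placeholder parameters; in practice librealsense's align helper or the ZED SDK does this for you.

```python
# Sketch of the math behind RGB-depth alignment: deproject a depth pixel to a
# 3D point, transform it into the color camera frame with extrinsics (R, t),
# then reproject with the color intrinsics. All parameter values are
# placeholders for your calibrated ones.

def deproject(u, v, depth, fx, fy, cx, cy):
    """Depth pixel (u, v) with range `depth` -> 3D point in the depth frame."""
    return [(u - cx) * depth / fx, (v - cy) * depth / fy, depth]

def transform(point, rotation, translation):
    """Apply extrinsics: rotation is a 3x3 row-major list, translation a 3-vector."""
    return [sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
            for i in range(3)]

def project(point, fx, fy, cx, cy):
    """3D point in the color frame -> pixel in the color image."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)
```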
Hello! I tried installing the project on Ubuntu 22.04 and ROS-Humble and everything went okay until I got to simulation, where everything launches correctly but there are no messages being published on the topic and no camera feed appears in RVIZ. Any solutions? Thanks!
Did you run sudo apt update and sudo apt upgrade in the terminal after installing the packages? I'm using the same distribution as you, and I was able to see the messages after running those commands!
Hi Carni Vorah!
Thanks for watching my video!
It is difficult to say only from the information you gave me. Are there any other errors in the terminal?
Note that if you are, for example, using “ros2 topic echo” command to check topic content, you should execute “source” command before.
@@robotmania8896 I did everything several times, tried it on a virtual machine and on different Ubuntu versions, but the problem remained: the camera feed is empty.
I did it almost a month ago. It was something like:
[NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
same issues
Hello, I want to ask: should we calibrate the RGBD camera before using it to get the distance? If so, could you recommend tools to calibrate the Intel RealSense, or any ROS2 packages to calibrate it? Thanks
Hi Nhat Net!
Thanks for watching my video!
Yes, basically you have to calibrate the camera before using it. In the case of RealSense, you can get the intrinsic parameters using the RealSense library. Please see this comment.
github.com/IntelRealSense/librealsense/issues/869#issuecomment-348171983
Hi again. I commented on your code a while back, but now I have another question. If you were to subscribe to multiple cameras at the same time, say one in the front, one on the right, one on the left and one in the back, would you need to use the threading library in your first code too? Thank you :)
Hi Daniel Felipe Cruz Hernández!
In that case you have to define subscribers for each camera and run each of the subscribers in a different thread. In this tutorial, I am implementing this method.
ruclips.net/video/Z5czzGeRJ4o/видео.html
Please refer to the “robot_control_ss.py” script lines 203~205.
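The pattern described above can be sketched with plain Python threads standing in for the per-camera subscribers; in rclpy you would instead create one subscriber per camera and spin each node in its own thread (or use a MultiThreadedExecutor). All names here are made up for illustration.

```python
# Pattern sketch: one worker thread per camera, with plain Python threads
# standing in for ROS2 subscriber callbacks. Names are illustrative only.
import threading
import queue

def camera_worker(name, frames, results):
    """Stand-in for a subscriber callback loop processing one camera."""
    for frame in frames:
        results.put((name, frame))  # e.g. run inference on the frame here

def run_all(cameras):
    """cameras: dict mapping camera name -> iterable of frames."""
    results = queue.Queue()
    threads = [threading.Thread(target=camera_worker, args=(n, f, results))
               for n, f in cameras.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return list(results.queue)
```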
Thanks a lot for the video. I had some questions. I have a ros1 melodic environment and have a custom yolov9 model that detects specific objects. Can I still use your code and just replace my model .pt file instead of yours? If you have any suggestions please let me know.
Hi Najib Murshed!
Thanks for watching my video!
No, since the YOLOv8 and YOLOv9 models are different, you cannot use a YOLOv9 .pt file with YOLOv8.
Also, since this code is made for ROS2, you cannot use it directly with ROS1. But the inference part should be the same, so you would have to change the declaration of the subscribers and publisher.
Hey, great video and very straightforward to compile. However, are you planning on posting some videos using the yolobot_control pkg as well? I don't have a joystick to use the joy node, but I did use the teleop_twist_keyboard pkg to move the vehicle around the world. The commands are swapped, though: if I press "I" to move forward, the vehicle moves backwards, and so on with the other commands. Any thoughts? Thanks!
Hi Dennis Jhon Moreno Ortega!
Thanks for watching my video!
If I understand correctly, you are publishing “/yolobot/cmd_vel” using the keyboard. I think you can fix your issue by altering the joint axis direction: in the “yolobot.urdf” file, at lines 219 and 246, invert the joint axis direction by flipping the sign in the axis element.
Do not forget to execute “colcon build” after you correct the file.
I hope this will help you.
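As a hypothetical illustration of the axis change (the actual lines in “yolobot.urdf” may differ), reversing a wheel's rotation direction means flipping the sign in the joint's axis element:

```xml
<!-- Hypothetical example; the real values in yolobot.urdf may differ. -->
<!-- Before -->
<axis xyz="0 1 0"/>
<!-- After: the flipped sign reverses the wheel's rotation direction -->
<axis xyz="0 -1 0"/>
```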
@@robotmania8896 Sir, I am using my own URDF and the error shows: [differential_drive_controller]: Joint [left_wheel_base_joint] not found, plugin will not work
Could you please explain the URDF used in this video?
@@seethasubramanyan213 This error means that there is no joint named “left_wheel_base_joint” in your URDF. Please rename the joint which is connecting the body and the left wheel of your robot.
very nice, thanks!!
Hi Roger Weerd!
It is my pleasure if this video has helped you!
Hey, excellent tutorial and very well explained. But I have one issue when I try to use my own pretrained model. I paste my 'best.pt' file into the yolobot_recognition/scripts folder, then in the Python script 'yolov8_ros2_pt.py' I write the name of my pretrained model. When executed, it prints an error saying there is no file or directory called 'best.pt'. Any idea where the error is?
Hi Pablo Gomez!
Thanks for watching my video!
Please put your ‘best.pt’ file in the home directory (/home/”user name”) or specify the absolute path in the ‘yolov8_ros2_pt.py’ script.
@@robotmania8896 Thanks, worked like a charm!
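A small sketch of the absolute-path option: build the path from the home directory instead of relying on the process working directory. 'best.pt' in the home directory is just an example location, and the YOLO call is shown only as a comment.

```python
# Quick sketch: build an absolute path to the weight file instead of relying
# on the process working directory. 'best.pt' in the home directory is an
# example location, not a requirement.
import os

def weights_path(filename="best.pt"):
    """Absolute path to a weight file kept in the user's home directory."""
    return os.path.join(os.path.expanduser("~"), filename)

# In yolov8_ros2_pt.py you would then do something like:
# self.model = YOLO(weights_path())
```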
Hey, if you don't mind, can you help me out? I'm facing some issues.
I downloaded all the code from the link in the video description and followed all the steps, but did not get any output; there was no message output, just blank, when I ran "ros2 topic echo /Yolov8_Inference". However, all the models and the robot load into Gazebo just fine. I compared the topics published in my run versus the video, and I am missing:
/rgb_cam/camera_info
/rgb_cam/image_raw/compressed
/rgb_cam/image_raw/compressedDepth
/yolobot/odom
Using rviz2, the topic image_raw can be found under Yolov8_Inference, but when I try to add it, there is no image window and it shows "no image".
My Ubuntu version is 20.04.6, using the latest Foxy distribution.
Hi architlahiri3110!
Thanks for watching my video!
I personally have never faced such an issue. Considering that you are missing the camera-related topics, maybe you don’t have the Gazebo plugins. Please refer to this page. Maybe the “sudo apt-get install ros-${ROS_DISTRO}-ros-gz” command will solve your problem.
gazebosim.org/docs/latest/ros_installation
I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics I am missing.
Me too :( I tried that but nothing changed.
I'm replying here for everyone in the thread:
sudo apt install ros-foxy-gazebo-ros-pkgs
used this to fix it
Good luck :)
I am working on Humble and changed the command accordingly, but it still did not work.
Hello! Thank you for this video about implementing YOLOv8 with gazebo and ros2. I have a question though. I have trained a YOLOv8 model on a custom dataset and have the best.pt file from the training. How do I then load this best.pt file? I tried replacing the path in the yolobot_recognition scripts to the path with the best.pt file but I keep getting the error "No such file or directory". I'm not sure whether the path I wrote is wrong or some other issue. Any suggestions are appreciated and thank you again!
Hi Sharke00!
Thanks for watching my video!
I think this happens because ROS is searching for a weight file in a wrong directory. I will fix it later, but as a quick fix, in “yolov8_ros2_pt.py” modify line 19 as
self.model = YOLO('best.pt')
and place the “best.pt” file in the home directory. It should work.
@@robotmania8896 Yes, apparently the program made a new directory and once I placed the pt file there it started working. Another question I have is if I wanted to use the recognition package with other projects that use different robot models, what else do I need to do besides including the package in the main launch file? The console seems to just stop responding and no output (number, type of object detected) or error is given. Thank you for responding!
@@sharke0062 I don’t think that you have to do something special except for checking whether camera on your robot publishes “rgb_cam/image_raw” topic. Sometimes Gazebo may take a long time to launch especially if gazebo world contains a lot of objects, so maybe you just have to wait.
@@robotmania8896 I see. Thank you so much!
Great tutorial, thanks for sharing. Could you make more videos about using semantic segmentation in ROS2?
Hi nhattran4833!
Thanks for watching my video!
I have actually created a video about semantic segmentation and ROS2. Here is the link. I hope it will help you.
ruclips.net/video/Z5czzGeRJ4o/видео.html
@@robotmania8896 Thanks. I really want to apply semantic segmentation to a mobile robot. Could you recommend some applications that apply it to mobile robots?
I think that semantic segmentation is more often used in conjunction with other methods rather than by itself. For example, it is used for control of mobile robots, like described in this paper.
www.sciencedirect.com/science/article/abs/pii/S0957417421015189
Hi once again! I am using your code in my work to compare the performance of the raw YOLOv8n model with one accelerated with DeepStream. Do I need to cite you or someone else?
Hi Dgh!
Since I am providing only the zip file, I think it is difficult to cite. So, I think citing is not necessary.
@@robotmania8896 ok, clear then. Thanks for your code!
Hi,
great tutorial! I'm wondering where I can specify the YOLO inference parameters, like imgsz, conf, and max_det?
thanks in advance :)
Hi alocatnaf!
Thanks for watching my video!
In this code, you should do the post-processing yourself. For example, if you want to show only objects with confidence above some value, you should extract the confidence parameter from the results (yolov8_ros2_pt.py line 41) and apply an “if” statement when plotting the inference results.
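As an illustrative sketch of that filtering step (the detection tuples and the threshold below are made-up assumptions, not the actual format of the YOLO results object, from which you would extract these values yourself):

```python
# Sketch: post-process detections by confidence before plotting/publishing.
# Each detection here is an illustrative (class_name, confidence, box) tuple;
# in the real node you would extract these fields from the YOLO results object.

CONF_THRESHOLD = 0.5  # keep only detections at or above this confidence

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Return only the detections whose confidence meets the threshold."""
    return [det for det in detections if det[1] >= threshold]

detections = [
    ("person", 0.91, (10, 20, 50, 80)),
    ("dog",    0.32, (5, 5, 30, 40)),
]
kept = filter_detections(detections)
print(kept)  # only the high-confidence "person" detection remains
```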
@@robotmania8896 thank you very much!
I am using Ubuntu 24 and ROS2 Jazzy. Which Gazebo should I install then, and how?
Because I see you install Gazebo 9 Classic, which I guess won't work for me.
If you are using Ubuntu 24, you should install Gazebo Sim (the new Gazebo simulation). As far as I remember, it is installed alongside ROS2 Jazzy. To operate a robot using Gazebo Sim, please see this tutorial.
ruclips.net/video/b8VwSsbZYn0/видео.html
I don't know why, but when I run the code you showed, I get this error and the node is duplicated: "Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic"
Hi Towerboi!
Thanks for watching my video!
Does this error have a negative effect on your simulation? If not, just leave it as it is; it might be a ROS bug.
Can I use this tutorial to integrate YOLOv8-OBB on ROS2 using Humble?
Hi Kenneth Eladistu!
Thanks for watching my video!
Yes, the way of integration should be pretty much the same.
Thanks for this project. Could you make a similar tutorial using YOLOv8 with TensorRT?
Hi Nhat Net!
Thanks for watching my video!
I am currently not planning to make a tutorial about yolov8 and tensorRT but I have several videos related to it. Where exactly are you experiencing a problem?
ruclips.net/video/xqroBkpf3lY/видео.html
ruclips.net/video/aWDFtBPN2HM/видео.html
@@robotmania8896 Does this tutorial run inference directly on the GPU or the CPU of the Jetson Nano?
In the video I am using CPU for inference, but GPU can also be used.
Hello, one problem: when I run "ros2 topic list",
the yolobot inference topic is not there.
How can I solve it?
Hi mdmahedihassan2444!
Thanks for watching my video!
As I have explained in the video from 11:15, please run the “source” command before executing the “ros2 topic list” command.
Is this project written by you? Is there a GitHub repo? I am not allowed to access Google Drive at my work.
Hi NOURISH CHERISH!
Thanks for watching my video!
Yes, this work is written by me. There is no GitHub repo. You may download the zip file from your home and send it to your workplace by email.
Hey, can I use the same code on a Jetson Nano with ROS 2 and a real-time USB camera?
Hi AishRobotics!
Thanks for watching my video!
Yes, you can. Just make sure that your USB camera publishes the “rgb_cam/image_raw” topic.
Hi!
I have an error after I source the yolobot setup.bash. When I run the “ros2 launch yolobot_gazebo yolobot_launch.py” command, it says gazebo_ros not found. I've installed the ROS2 Iron Gazebo package as well. How can I fix this?
Hi Howard Kang!
Thanks for watching my video!
If the error says “gazebo_ros not found”, please install “ros-iron-gazebo-ros” package. Also note that this project was made with ROS Foxy, so it may not work with ROS Iron.
Great video! Can you please guide me how can I integrate this YOLO v8 with ROS2 code to camera for real time object detection?
Hi anshbhatia4805!
Thanks for watching my video!
What do you mean by “integrate”? In this tutorial I have already explained how to use yolov8 with camera.
hi, thanks for the video!
I trained the YOLOv8 model on Colab to detect traffic cones.
I have a ZED 2i stereo camera, and I want to integrate YOLOv8 with ROS2.
Hi Hammad Safeer!
Thanks for watching my video!
Do I understand correctly that you want to do inference using yolov8 with ZED and ROS2?
exactly!@@robotmania8896
How do I build YOLOv8 for ROS Noetic?
Hi Manish Nayak!
Thanks for watching my video!
Since there is a Python implementation of YOLOv8, you don’t have to build it. Just install the required libraries for YOLOv8 using pip.
Hi, this video really helped me with my project, but I am not getting the images when I look in RViz2. Could you help me out with that? Also, when I am trying to install ROS Foxy, I get the error “no such file or directory”.
Hi Izaq!
Thanks for watching my video!
Do you have any error messages in the terminal?
@@robotmania8896 Yeah, it says “no such file or directory”. I have Gazebo 11.10.2 and am using the Humble package. Also, how do I speed up the Gazebo simulation to detect things? Which camera are we using, and did you train YOLOv8 first? Also, after colcon build they ask me to connect a joystick; what is this? And when I run “ros2 topic echo /yolov8_inference” I don't get back any parameters.
If you don't mind, can I have your email address to ask more questions? I have to submit this project at the end of this month, please.
Have you built the project successfully? To speed up the inference, you should use a computer with a GPU. Since it is a simulation, the camera parameters are defined in the SDF file. No, in this tutorial I haven’t trained YOLO; I just used a provided model. To operate the robot, you should use a joypad. You should run the “source” command before doing “ros2 topic echo /yolov8_inference”. Here is my e-mail: robotmania8867@yahoo.com
Hey, will it work on Gazebo 11 too?
Hi AK Entertainment!
Thanks for watching my video!
Yes, it should work on gazebo 11.
Hello, I am facing issues while installing ultralytics (its build dependencies are not satisfied). I am using Ubuntu 20.04 and I tried with Python 3.8.10 and Python 3.7.5, but it still gives an error. Please suggest what to do; I am not able to find a solution anywhere else 🥲
Nice Issues
Hi Aditya Jambhale!
Thanks for watching my video!
What error exactly do you have?
I am having the same issues as well. It is with ultralytics’ dependencies when installing through pip. I tried running it within a Docker container as well, and I get the same issues.
Very good....
Hi cmtro!
Thank you!
Hello, thank you for this amazing video, but I want my robot to recognize traffic signs and react according to them. How can I do it?
Hi Mohammed Bourouba!
Thanks for watching my video!
In that case, you have to train your own model.
Yes, I did that, but I want my robot, when it detects for example a “turn left” sign, to turn left automatically.
In that case, you can publish on the “cmd_vel” topic when the robot detects a “turn left” sign. Probably you will need a depth camera as well to make the robot turn at the right point.
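A minimal sketch of that idea, with made-up sign labels and speed values (in a real ROS2 node each dict below would populate a geometry_msgs/Twist message published on “cmd_vel”):

```python
# Sketch: map a detected traffic-sign label to a velocity command.
# Labels and speed values are illustrative assumptions; in a real ROS2 node
# each dict would fill a geometry_msgs/Twist message published on "cmd_vel".

SIGN_TO_CMD = {
    "turn_left":  {"linear_x": 0.0, "angular_z": 0.5},   # rotate left
    "turn_right": {"linear_x": 0.0, "angular_z": -0.5},  # rotate right
    "stop":       {"linear_x": 0.0, "angular_z": 0.0},   # halt
}

def command_for_sign(label):
    """Return the velocity command for a detected sign, or drive straight."""
    return SIGN_TO_CMD.get(label, {"linear_x": 0.2, "angular_z": 0.0})

print(command_for_sign("turn_left"))
```

A depth camera would tell you how far away the sign is, so the turn command can be issued only once the robot reaches the intersection.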
What are the changes needed in case I want it to be autonomous? Is it possible?
@@mohammedbourouba9274 Do you mean that you would like to do navigation?
does it work with ros2 humble
Hi samesh managond!
Thanks for watching my video!
Yes, it should work.
Thank you for your REALLY NICE VIDEO!
I'm trying to follow the video, but I have some issues during 'colcon build'.
[error] ModuleNotFoundError: No module named 'catkin_pkg'
I tried to solve the error
1. pip install catkin_pkg
2. source /opt/ros/foxy/setup.bash
3. added “source /opt/ros/foxy/setup.bash” in “.bashrc” file
but it's not working.
Any idea about this error? Thank you!
You don’t need catkin with ROS2; ROS2 packages are built using ament_cmake. Have you tried adding “source /opt/ros/foxy/setup.bash” to your “.bashrc” file and rebooting?
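For reference, the .bashrc step can be done from the terminal like this (Foxy is shown because the project targets it; substitute your own distro name):

```shell
# Append the ROS 2 underlay to ~/.bashrc so every new shell can find
# ament_cmake and the ros2 commands. Then open a new terminal (or reboot).
echo 'source /opt/ros/foxy/setup.bash' >> ~/.bashrc
```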
Can YOLOv8 track a simple track using ros?
Hi Bad Brother!
Thanks for watching my video!
What do you mean by “simple track”? If it is something like a road, you can use semantic segmentation to extract the road part from an image.
@@robotmania8896 I mean lane tracking, so it detects the trajectory of the black tape. Oh okay, thank you for your advice. I would like to ask you a question. Actually, I have never tried YOLO. If I want to start learning, where should I start so that I can do lane tracking using YOLO? Thanks
@@BadBrother If you need to detect black tape, you don’t have to do inference using YOLO. You may just use HSV decomposition to detect black color. Here is an example video. ruclips.net/video/hdnuykRwMmI/видео.html
@@robotmania8896 I really appreciate your response. I apologize for the confusion. The robot I want to build uses a camera to detect the two black tapes on the left and right, and the path forms a trajectory. The camera is connected to the Jetson Nano, and the motor is driven by an Arduino Uno.
I understand your problem. If you know exactly the object and color you have to detect, I think you don’t necessarily have to use YOLO. You may do color detection or use an infrared sensor to move along the line.
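A small sketch of the color-detection idea, using plain NumPy on an HSV image (the V-channel threshold of 50 is an assumed value; in practice you would convert each camera frame with OpenCV's cv2.cvtColor first and tune the threshold for your lighting):

```python
import numpy as np

# Sketch: detect dark ("black tape") pixels by thresholding the V (value)
# channel of an HSV image. The threshold of 50 is an assumption to be tuned.

def black_mask(hsv, v_max=50):
    """Boolean mask that is True where a pixel is dark enough to be tape."""
    return hsv[..., 2] <= v_max

# Tiny synthetic 2x2 HSV frame: one explicitly dark pixel, one bright one.
hsv = np.zeros((2, 2, 3), dtype=np.uint8)
hsv[0, 0] = (0, 0, 10)    # dark -> tape
hsv[1, 1] = (0, 0, 200)   # bright -> floor
print(black_mask(hsv))
```

From the mask you could then compute the centroid of the tape pixels per image row to derive a steering error for the Arduino-driven motors.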
Hi, thank you for making the video!! :)
But I have a problem.
I ran "ros2 topic echo /Yolov8_Inference", then I got:
WARNING: topic [/Yolov8_Inference] does not appear to be published yet
Could not determine the type for the passed topic
How can i fix this error?
sudo apt-get install ros-humble-ros-ign-bridge
sudo apt-get install ros-humble-ros-pkgs
sudo apt-get install ros-${ROS_DISTRO}-ros-gz
I tried them but I still have the error. Please help me.
Hi Moon!
Thanks for watching my video!
Did you execute the “source” command before executing the “ros2 topic echo” command? Otherwise, you will get an error.
I did not get any error, but nothing happened after running this command.
What about YOLOv8-seg?
Hi lordfarquad-by1dq!
Thanks for watching my video!
The segmentation result format is slightly different from recognition, but you should be able to publish it with little change to the code. I am planning to release a new video within a few days regarding semantic segmentation and yolo, it may also help you.
Why don't you use a .xacro file? I mean, please give us the xacro file.
Hi Wahyu Fahrizal Al Fayyadh!
Thanks for watching my video!
This is a very simple model, so I didn’t use any xacro files. You can change the extension of the file to xacro and it should work as a xacro file.
@@robotmania8896 thank you sir! Amazing videos!
how can i use this code with yolov5?
Hi ថាន្នី សុគុណ!
If you would like to use YoloV5, please refer to this video.
ruclips.net/video/594Gmkdo-_s/видео.html
@robotmania8896 I have watched it, but when I try to run it as ros2 run ... it doesn't find the modules 'models' and 'utils'.
Yes, to run that code using “ros2 run” you have to modify the CMake file. You can run that code using “python3”.
I see a few syntax mistakes in your code just by looking at it.
Yeah, there could be syntax mistakes. Please let me know if you find any.
Actually, it's running without errors. Wtf. Like, you just have a line that says self.subscriprion... how is that not causing an error? Do I not know Python?
sudo apt install gazebo9 error
Hi Nikhil Kulkarni!
Thanks for watching my video!
On which version of Ubuntu are you trying to install gazebo?
@@robotmania8896 22.04
For 22.04, "sudo apt install gazebo" should work.
@@robotmania8896 gazebo no candidate error
@@robotmania8896 Tried, but it's showing the error: no 'gazebo' candidate.
Hello sir, thank you for the video; I am learning a lot from you. I tried to implement the project step by step, but firstly, the gazebo folder didn't appear in my home directory even after unhiding all the contents of the home folder. I searched for it manually and found a gazebo-9 folder, but its contents were not similar, though it still had a model folder. Secondly, on colcon build I get this error:
CMake Error at CMakeLists.txt:19 (find_package):
By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
Could not find a package configuration file provided by "ament_cmake" with any of the following names:
ament_cmakeConfig.cmake
ament_cmake-config.cmake
Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
I have tried adding “source /opt/ros/foxy/setup.bash” at the bottom of the “.bashrc” file, but it still doesn't build the packages.
Hi @user-xg7dk1wl3s!
Thanks for watching my video!
Please open the terminal and execute the “gazebo” command. The folder should appear.
Also, add “source /opt/ros/foxy/setup.bash” to your “.bashrc” file. This should solve ament_cmake related error. Don’t forget to reboot your computer after altering the “.bashrc” file.
Can't get the yolobot_inference folder to be listed.
It keeps showing this error:
ModuleNotFoundError: No module named 'yolov8_msgs.yolov8_msgs_s__rosidl_typesupport_c'
Hi raphaelcrespopereira3206!
Thanks for watching my video!
Hmm… I have never had such an error. Which version of ROS are you using?
@@robotmania8896 ros2 foxy on ubuntu 20.04
@@robotmania8896 I reinstalled everything and now the inference folder is listed, but when I do the echo part it does not show the camera working, and when I open RViz the inference image shows no image.
I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics i am missing
@@ChristianDiaryUG I fixed it by running Gazebo 11 with ROS2 Humble on Ubuntu 22.04 and adding the additional step of installing the package for communication between ROS2 and Gazebo: sudo apt-get install ros-humble-ros-ign-bridge
Are you teaching any classes? I'm looking to contact you.
Hi OYM dental!
Thanks for watching my video!
No, I am not providing any classes, since I have another job which takes almost all my time. But if you have any questions, maybe I can answer them.
Hi, it is a very good tutorial. But I am facing this error during the command "colcon build":
CMake Error at CMakeLists.txt:19 (find_package):
By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
Could not find a package configuration file provided by "ament_cmake" with any of the following names:
ament_cmakeConfig.cmake
ament_cmake-config.cmake
Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
Do you have any idea to solve this? ThankYou!
Hi Loo Waijun!
Thanks for watching my video!
Have you added “source /opt/ros/foxy/setup.bash” in your “.bashrc” file?
@@robotmania8896 It works. Thanks a lot.
After that, I ran the command "ros2 topic echo /Yolov8_Inference", but it doesn't show anything. Do you have any idea about this?
@@loowaijun2960 I explain how to execute “ros2 topic list” command in the video. Please watch starting from 11:14.
@@robotmania8896 Yeah, I followed it, but the recognized object information does not show.
@@loowaijun2960 Do bounding boxes appear in RViz when the information in the terminal is not shown?
Does someone have this issue in ROS2 Humble when running colcon?
Starting >>> yolobot_control
Starting >>> yolobot_description
Starting >>> yolobot_gazebo
Starting >>> yolobot_recognition
Starting >>> yolov8_msgs
Finished
Solution: downgrade setuptools, install sphinx, and run "sudo apt-get install python3-pip python3-dev python3-setuptools apache2 -y".
Hi Lucas Balvin Huertas!
Thanks for watching my video!
I am glad that you found a solution. And thank you for sharing the information.