How to Use YOLOv8 with ROS2

  • Published: 3 Dec 2024

Comments • 183

  • @Jiangjing-u2c
    @Jiangjing-u2c 10 months ago +2

    It is one of the best tutorials about ROS with YOLO that I have seen on the Internet. I am a student researcher working on a project about robotic vision detection, and your video really helped me a ton. Thank you for your contribution!

    • @robotmania8896
      @robotmania8896  10 months ago +1

      Hi Jiangjing!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @ericavram5646
    @ericavram5646 1 year ago

    Very good tutorial and nicely explained! I am working on a robot that should recognize a ball using ML and depth vision, and this has been of great help. Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Eric Avram!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @vilsonwenisbelle9041
    @vilsonwenisbelle9041 8 months ago

    Thanks a lot for your video, actually for all your videos; they are really helping me with my projects.

    • @robotmania8896
      @robotmania8896  8 months ago +1

      Hi Vilson Wenis Belle!
      Thanks for watching my videos!
      It is my pleasure if these videos have helped you!

  • @checksumff1248
    @checksumff1248 1 year ago

    Thank you. I've found this very useful. I appreciate your effort!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi checksumff1248!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @vivienchambost4415
    @vivienchambost4415 9 months ago

    Hey, it helped me a lot on a project I am doing. Easy to implement on any project, thanks a lot!

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi Vivien Chambost!
      Thanks for watching my video!
      It is my pleasure if this program has helped you!

  • @shailigupta4086
    @shailigupta4086 1 year ago

    Love you professor. This video will make my future. 🥰

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi shailigupta4086!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @sharingmylittleinsight
    @sharingmylittleinsight 6 months ago

    Thank you very much brother, it really helped me to finish my study project.

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Sharing My Little Insight!
      It is my pleasure if this video has helped you!

  • @newtonkariuki3104
    @newtonkariuki3104 1 year ago

    This was very helpful, thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Newton Kariuki!
      Thanks for watching my video!
      It is my pleasure if this video has helped you.

  • @alperenkeser
    @alperenkeser 11 months ago

    Great tutorial. Got a question though: is it possible to do "object pose estimation" with just RGB data? If possible, do you think using a point cloud (just depth data) instead of RGB would make it better for pose estimation?
    My case is detecting a pallet and its pose, with a Kinect v1 camera.

    • @robotmania8896
      @robotmania8896  11 months ago +1

      Hi Alperen Keser!
      Thanks for watching my video!
      To do pose estimation, first of all, you have to recognize the object. So, I think it is not possible to do pose estimation with just depth data, because you will not be able to recognize the object. Also, you need depth data to calculate the coordinates of the object; but if your objects are all on the same plane and you know the distance from the camera to that plane, you will be able to calculate the coordinates.
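
The geometry described in this reply is the pinhole camera model: given a pixel, a depth value (or a known distance to the object plane), and the camera intrinsics, the 3D coordinates follow directly. A minimal sketch with made-up intrinsic values; fx, fy, cx, cy would come from your camera's calibration:

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) to camera-frame XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for a 640x480 camera
fx = fy = 525.0
cx, cy = 319.5, 239.5
# The center pixel at 2 m lies on the optical axis
print(deproject(319.5, 239.5, 2.0, fx, fy, cx, cy))  # → (0.0, 0.0, 2.0)
```

With a fixed-plane setup (the case mentioned in the reply), the known camera-to-plane distance simply takes the place of the per-pixel depth.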

  • @dafech_911
    @dafech_911 6 months ago

    Hi. Thank you for your video. It's exactly what I've been looking for. However, I have a question. I have an RGB camera that will be working with the YOLOv8 custom model that I'm currently training, but I also have a depth ToF camera which can publish a PointCloud depth map. What reference frame do the bounding boxes have? I need this information to match at least the coordinates of the corners with the depth map. Do you think that's possible?

    • @robotmania8896
      @robotmania8896  6 months ago +1

      Yes, I think it is possible to align frames from the RGB camera and the depth camera, but it will involve some relatively complex mathematical operations. If you would like to use RGB and depth cameras simultaneously, I recommend using a RealSense or ZED camera. It will probably save you a lot of time.

    • @dafech_911
      @dafech_911 6 months ago

      @@robotmania8896 Thank you so much. I will indeed be using a RealSense. I will look into it to see if there are already some algorithms to do that. Wish you success :)
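
The alignment mentioned in this thread boils down to deprojecting each depth pixel to 3D, transforming it by the depth-to-color extrinsics, and reprojecting into the RGB image. A hand-rolled sketch with hypothetical intrinsics and a pure-translation extrinsic; with a RealSense, librealsense's align processing block does this for you:

```python
import numpy as np

def align_depth_pixel(u, v, depth, K_d, K_c, R, t):
    """Map a depth-image pixel (u, v, depth in meters) to RGB-image coordinates."""
    # Deproject from the depth camera
    x = (u - K_d[0, 2]) * depth / K_d[0, 0]
    y = (v - K_d[1, 2]) * depth / K_d[1, 1]
    p = np.array([x, y, depth])
    # Transform into the color camera frame (extrinsics)
    p = R @ p + t
    # Reproject with the color intrinsics
    u_c = K_c[0, 0] * p[0] / p[2] + K_c[0, 2]
    v_c = K_c[1, 1] * p[1] / p[2] + K_c[1, 2]
    return u_c, v_c

# Hypothetical identical intrinsics and a 25 mm baseline along x
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])
print(align_depth_pixel(319.5, 239.5, 2.0, K, K, np.eye(3), np.array([0.025, 0, 0])))
# → (326.0625, 239.5)
```

Running this per bounding-box corner gives the matching depth-map coordinates the question asks about.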

  • @carnivorah8837
    @carnivorah8837 1 year ago +2

    Hello! I tried installing the project on Ubuntu 22.04 and ROS Humble, and everything went okay until I got to the simulation, where everything launches correctly but there are no messages being published on the topic and no camera feed appears in RViz. Any solutions? Thanks!

    • @DennisJhonMorenoOrtega
      @DennisJhonMorenoOrtega 1 year ago

      Did you type sudo apt update and sudo apt upgrade in the terminal after installing the pkgs? I'm using the same distribution as you, and I was able to see the messages after running those commands!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Carni Vorah!
      Thanks for watching my video!
      It is difficult to say only from the information you gave me. Are there any other errors in the terminal?
      Note that if you are, for example, using the “ros2 topic echo” command to check topic content, you should execute the “source” command first.

    • @ИльяИолтуховский-ю2я
      @ИльяИолтуховский-ю2я 1 year ago +1

      @@robotmania8896 I did everything several times, tried it on a virtual machine and on different Ubuntu versions, but the problem remained: the camera is empty.

    • @ИльяИолтуховский-ю2я
      @ИльяИолтуховский-ю2я 1 year ago

      I did it almost a month ago. It was something like:
      [W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
      YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

    • @christopherbousquet-jette4301
      @christopherbousquet-jette4301 4 months ago

      Same issues.

  • @nhatnet479
    @nhatnet479 7 days ago

    Hello, I want to ask: should we calibrate an RGB-D camera before using it to get the distance? If we need to calibrate it, could you recommend tools to calibrate the Intel RealSense, or any ROS2 packages to calibrate it? Thanks

    • @robotmania8896
      @robotmania8896  5 days ago

      Hi Nhat Net!
      Thanks for watching my video!
      Yes, basically you have to calibrate the camera before using it. In the case of the RealSense, you can get the intrinsic parameters using the RealSense library. Please see this comment.
      github.com/IntelRealSense/librealsense/issues/869#issuecomment-348171983

  • @dafech_911
    @dafech_911 5 months ago

    Hi again. I commented on your code a while back, but now I have another question. If you were to subscribe to multiple cameras at the same time, say one in the front, one on the right, one on the left and one in the back, would you need to use the threading library in your first code too? Thank you :)

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi Daniel Felipe Cruz Hernández!
      In that case you have to define a subscriber for each camera and run each of the subscribers in a different thread. In this tutorial, I implement this method.
      ruclips.net/video/Z5czzGeRJ4o/видео.html
      Please refer to the “robot_control_ss.py” script, lines 203~205.
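
The pattern described in this reply, one subscriber per camera with each one spinning in its own thread, can be sketched with plain Python threads; in rclpy the same effect is usually achieved with a MultiThreadedExecutor. The camera names and the queue-based frame source here are made up for illustration:

```python
import threading
import queue

def camera_worker(name, frames, results):
    """Consume frames from one camera's queue until a None sentinel arrives."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        # Stand-in for running YOLO inference on the frame
        results.append((name, frame))

cameras = ["front", "right", "left", "back"]
queues = {name: queue.Queue() for name in cameras}
results = []
threads = [
    threading.Thread(target=camera_worker, args=(name, queues[name], results))
    for name in cameras
]
for t in threads:
    t.start()
for name in cameras:
    queues[name].put("frame-0")
    queues[name].put(None)  # sentinel: stop this camera's worker
for t in threads:
    t.join()
print(sorted(results))
```

Each worker blocks on its own queue, so a slow camera never stalls the others; that is the point of the one-thread-per-subscriber design.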

  • @najibmurshed
    @najibmurshed 7 months ago

    Thanks a lot for the video. I had some questions. I have a ROS1 Melodic environment and a custom YOLOv9 model that detects specific objects. Can I still use your code and just put my model's .pt file in place of yours? If you have any suggestions, please let me know.

    • @robotmania8896
      @robotmania8896  7 months ago

      Hi Najib Murshed!
      Thanks for watching my video!
      No, since the yolov8 and yolov9 models are different, you cannot use a yolov9 .pt file with yolov8.
      Also, since this code is made for ROS2, you cannot use it directly with ROS1, but the inference part should be the same. So you have to change the declarations of the subscribers and publisher.

  • @DennisJhonMorenoOrtega
    @DennisJhonMorenoOrtega 1 year ago

    Hey, great video and very straightforward to compile. However, are you planning on posting some videos using the yolobot_control pkg as well? I don't have a joystick to use the joy node, but I did use the teleop_twist_keyboard pkg to move the vehicle around the world; the commands are swapped, though: if I press "I" to move forward, the vehicle moves backwards, and so on with the other commands. Any thoughts? thanks!

    • @robotmania8896
      @robotmania8896  1 year ago +1

      Hi Dennis Jhon Moreno Ortega!
      Thanks for watching my video!
      If I understand correctly, you are publishing “/yolobot/cmd_vel” using the keyboard. I think you can fix your issue by altering the joint axis direction: in the “yolobot.urdf” file, at lines 219 and 246, invert the sign of the wheel joint’s “axis” element.
      Do not forget to execute “colcon build” after you correct the file.
      I hope this will help you.

    • @seethasubramanyan213
      @seethasubramanyan213 11 months ago

      @@robotmania8896 Sir, I am using my own URDF and the error shows: [differential_drive_controller]: Joint [left_wheel_base_joint] not found, plugin will not work

    • @seethasubramanyan213
      @seethasubramanyan213 11 months ago

      Could you please explain the URDF used in this video?

    • @robotmania8896
      @robotmania8896  11 months ago

      @@seethasubramanyan213 This error means that there is no joint named “left_wheel_base_joint” in your URDF. Please rename the joint which is connecting the body and the left wheel of your robot.

  • @RogerWeerd
    @RogerWeerd 11 months ago

    Very nice, thanks!!

    • @robotmania8896
      @robotmania8896  11 months ago

      Hi Roger Weerd!
      It is my pleasure if this video has helped you!

  • @pablogomez9401
    @pablogomez9401 1 year ago +1

    Hey, excellent tutorial and very well explained. But I have one issue when I try to use my own pretrained model. I pasted my 'best.pt' file into the yolobot_recognition/scripts folder, then in the Python script 'yolov8_ros2_pt.py' I wrote the name of my pretrained model. When executed, it prints an error saying that there is no file or directory called 'best.pt'. Any idea where the error is?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Pablo Gomez!
      Thanks for watching my video!
      Please put your ‘best.pt’ file in the home directory (/home/”user name”) or specify the absolute path in the ‘yolov8_ros2_pt.py’ script.

    • @pablogomez9401
      @pablogomez9401 1 year ago

      @@robotmania8896 Thanks, worked like a charm!
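
The fix in this thread works because a bare filename like 'best.pt' is resolved against the process's current working directory, not the script's folder; an absolute path removes the ambiguity. A stdlib-only sketch (the weight filename is illustrative):

```python
from pathlib import Path

# A bare name resolves against whatever the current working directory is
relative = Path("best.pt")
# Anchoring to the home directory (or any absolute base) is unambiguous
absolute = Path.home() / "best.pt"

print(relative.is_absolute())   # → False
print(absolute.is_absolute())   # → True
# e.g. YOLO(str(absolute)) would then load the same file from any cwd
```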

  • @architlahiri3110
    @architlahiri3110 1 year ago +1

    Hey, if you don't mind, can you help me out? I'm facing some issues.
    I downloaded all the code from the link in the video description and followed all the steps, but did not get any output: there was no message output, just blank, when I ran "ros2 topic echo /Yolov8_Inference". However, all the models and the robot load into Gazebo just fine. I compared the topics published in my run versus the video, and I am missing:
    /rgb_cam/camera_info
    /rgb_cam/image_raw/compressed
    /rgb_cam/image_raw/compressedDepth
    /yolobot/odom
    Using rviz2, the topic image_raw can be found under Yolov8_Inference, but when I try to add it, there is no image window and it shows "no image".
    My Ubuntu version is 20.04.6, using the latest Foxy distribution.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi architlahiri3110!
      Thanks for watching my video!
      I personally have never faced such an issue. Considering that you are missing the camera-related topics, maybe you don't have the Gazebo plugins. Please refer to this page. Maybe the “sudo apt-get install ros-${ROS_DISTRO}-ros-gz” command will solve your problem.
      gazebosim.org/docs/latest/ros_installation

    • @ChristianDiaryUG
      @ChristianDiaryUG 1 year ago

      I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics I am missing.

    • @bedirhanselimyesilyurt998
      @bedirhanselimyesilyurt998 1 year ago

      Me too :( I tried that but nothing changed.

    • @architlahiri3110
      @architlahiri3110 1 year ago +2

      I'm replying here for everyone in the thread:
      sudo apt install ros-foxy-gazebo-ros-pkgs
      used this to fix it
      Good luck :)

    • @bedirhanselimyesilyurt998
      @bedirhanselimyesilyurt998 1 year ago +1

      I am working on Humble and I changed the command accordingly, but it also did not work.

  • @sharke0062
    @sharke0062 7 months ago

    Hello! Thank you for this video about implementing YOLOv8 with Gazebo and ROS2. I have a question though. I have trained a YOLOv8 model on a custom dataset and have the best.pt file from the training. How do I then load this best.pt file? I tried replacing the path in the yolobot_recognition scripts with the path to the best.pt file, but I keep getting the error "No such file or directory". I'm not sure whether the path I wrote is wrong or it is some other issue. Any suggestions are appreciated, and thank you again!

    • @robotmania8896
      @robotmania8896  7 months ago +1

      Hi Sharke00!
      Thanks for watching my video!
      I think this happens because ROS is searching for the weight file in the wrong directory. I will fix it later, but as a quick fix, in “yolov8_ros2_pt.py” modify line 19 as
      self.model = YOLO('best.pt')
      and place the “best.pt” file in the home directory. It should work.

    • @sharke0062
      @sharke0062 7 months ago

      @@robotmania8896 Yes, apparently the program made a new directory, and once I placed the pt file there it started working. Another question I have is: if I wanted to use the recognition package with other projects that use different robot models, what else do I need to do besides including the package in the main launch file? The console seems to just stop responding, and no output (number, type of object detected) or error is given. Thank you for responding!

    • @robotmania8896
      @robotmania8896  7 months ago +1

      @@sharke0062 I don't think you have to do anything special except check whether the camera on your robot publishes the “rgb_cam/image_raw” topic. Sometimes Gazebo may take a long time to launch, especially if the Gazebo world contains a lot of objects, so maybe you just have to wait.

    • @sharke0062
      @sharke0062 7 months ago

      @@robotmania8896 I see. Thank you so much!

  • @nhattran4833
    @nhattran4833 1 year ago

    Great tutorial, thanks for sharing. Could you make more videos about using semantic segmentation in ROS2?

    • @robotmania8896
      @robotmania8896  1 year ago +1

      Hi nhattran4833!
      Thanks for watching my video!
      I have actually created a video about semantic segmentation and ROS2. Here is the link. I hope it will help you.
      ruclips.net/video/Z5czzGeRJ4o/видео.html

    • @nhattran4833
      @nhattran4833 1 year ago

      @@robotmania8896 Thanks. I really want to apply semantic segmentation on a mobile robot; could you recommend some applications which apply it to mobile robots?

    • @robotmania8896
      @robotmania8896  1 year ago

      I think that semantic segmentation is more often used in conjunction with other methods rather than by itself. For example, it is used for the control of mobile robots, as described in this paper.
      www.sciencedirect.com/science/article/abs/pii/S0957417421015189

  • @y8fj
    @y8fj 10 months ago

    Hi once again! In my work I am using your code to compare the performance of the raw YOLOv8n model with one accelerated with DeepStream. Do I need to cite you or someone else?

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi Dgh!
      Since I am providing only the zip file, I think it is difficult to cite. So, I think citing is not necessary.

    • @y8fj
      @y8fj 10 months ago

      @@robotmania8896 OK, clear then. Thanks for your code!

  • @alocatnaf
    @alocatnaf 1 year ago

    Hi,
    great tutorial! I'm wondering where I can specify the YOLO inference parameters, like imgsz, conf, max_det?
    Thanks in advance :)

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi alocatnaf!
      Thanks for watching my video!
      In this code, you should do the post-processing yourself. For example, if you want to show only objects with confidence above some value, you should extract the confidence parameter from the results (yolov8_ros2_pt.py, line 41) and apply an “if” statement when plotting the inference results.

    • @alocatnaf
      @alocatnaf 1 year ago

      @@robotmania8896 thank you very much!
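
The post-processing described in this thread can be sketched without the Ultralytics API: treat each detection as a (class, confidence, box) record and keep only those above a threshold. In the real script the confidence would come from the results object (e.g. results[0].boxes.conf); the records below are made up:

```python
def filter_by_confidence(detections, conf_min=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["conf"] >= conf_min]

detections = [
    {"cls": "person", "conf": 0.91, "box": (10, 20, 110, 220)},
    {"cls": "ball",   "conf": 0.32, "box": (300, 40, 340, 80)},
    {"cls": "ball",   "conf": 0.77, "box": (500, 60, 540, 100)},
]
kept = filter_by_confidence(detections, conf_min=0.5)
print([d["cls"] for d in kept])  # → ['person', 'ball']
```

The same shape of filter works for max_det (sort by confidence and slice) since it is all plain list manipulation after inference.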

  • @shark-yt
    @shark-yt 23 days ago

    I am using Ubuntu 24 and ROS2 Jazzy; which Gazebo should I install then, and how?
    Because I see instructions to install Gazebo 9 Classic, which I guess won't work for me.

    • @robotmania8896
      @robotmania8896  23 days ago

      If you are using Ubuntu 24, you should install the new Gazebo simulator. As far as I remember, it is installed alongside ROS2 Jazzy. To operate a robot using the Gazebo simulator, please see this tutorial.
      ruclips.net/video/b8VwSsbZYn0/видео.html

  • @towerboi-zg3it
    @towerboi-zg3it 1 year ago

    I don't know why, but when I run the code you have shown, this error comes up and the node is duplicated: "Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic"

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Towerboi!
      Thanks for watching my video!
      Does this error have a negative effect on your simulation? If not, just leave it as it is, since it might be a ROS bug.

  • @kennetheladistu3356
    @kennetheladistu3356 5 months ago

    Can I use this tutorial to integrate YOLOv8-OBB with ROS2 Humble?

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi Kenneth Eladistu!
      Thanks for watching my video!
      Yes, the way of integration should be pretty much the same.

  • @nhatnet479
    @nhatnet479 4 months ago

    Thanks for this project. Could you make a similar tutorial using YOLOv8 with TensorRT?

    • @robotmania8896
      @robotmania8896  4 months ago

      Hi Nhat Net!
      Thanks for watching my video!
      I am currently not planning to make a tutorial about yolov8 and TensorRT, but I have several videos related to it. Where exactly are you experiencing a problem?
      ruclips.net/video/xqroBkpf3lY/видео.html
      ruclips.net/video/aWDFtBPN2HM/видео.html

    • @nhatnet479
      @nhatnet479 4 months ago

      @@robotmania8896 Does this tutorial run inference on the GPU or the CPU of the Jetson Nano?

    • @robotmania8896
      @robotmania8896  4 months ago

      In the video I am using the CPU for inference, but the GPU can also be used.

  • @mdmahedihassan2444
    @mdmahedihassan2444 1 year ago

    Hello, one problem: when I run "ros2 topic list", the yolobot inference topic is not there.
    How can I solve it?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi mdmahedihassan2444!
      Thanks for watching my video!
      As I have explained in the video from 11:15, please run the “source” command before executing the “ros2 topic list” command.

  • @nourishcherish9593
    @nourishcherish9593 6 months ago

    Is this project written by you? Is there a GitHub repo? I am not allowed to access Google Drive at my work.

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi NOURISH CHERISH!
      Thanks for watching my video!
      Yes, this work is written by me. There is no GitHub repo. You may download the zip file from your home and send it to your workplace by email.

  • @aishRobotics
    @aishRobotics 5 months ago

    Hey, can I use the same code on a Jetson Nano with ROS2 and a real-time USB camera?

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi AishRobotics!
      Thanks for watching my video!
      Yes, you can. Just make sure that your USB camera publishes the “rgb_cam/image_raw” topic.

  • @howardkanginan
    @howardkanginan 1 year ago

    Hi!
    I have an error after I source the yolobot setup.bash. When I run the "ros2 launch yolobot_gazebo yolobot_launch.py" command, it says gazebo_ros not found. I've installed the ROS2 Iron Gazebo package as well. How can I fix this?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Howard Kang!
      Thanks for watching my video!
      If the error says “gazebo_ros not found”, please install the “ros-iron-gazebo-ros” package. Also note that this project was made with ROS Foxy, so it may not work with ROS Iron.

  • @anshbhatia4805
    @anshbhatia4805 1 year ago

    Great video! Can you please guide me on how I can integrate this YOLOv8-with-ROS2 code with a camera for real-time object detection?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi anshbhatia4805!
      Thanks for watching my video!
      What do you mean by “integrate”? In this tutorial I have already explained how to use yolov8 with a camera.

  • @hammadsafeer4283
    @hammadsafeer4283 1 year ago

    Hi, thanks for the video!
    I trained a YOLOv8 model on Colab to detect traffic cones.
    I have a ZED 2i stereo camera, and I want to integrate YOLOv8 with ROS2.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Hammad Safeer!
      Thanks for watching my video!
      Do I understand correctly that you want to do inference using yolov8 with the ZED and ROS2?

    • @hammadsafeer4283
      @hammadsafeer4283 1 year ago

      @@robotmania8896 Exactly!

  • @manishnayak9759
    @manishnayak9759 1 year ago

    How do I build yolov8 for ROS Noetic?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Manish Nayak!
      Thanks for watching my video!
      Since there is a Python implementation of yolov8, you don't have to build it. Just install the required libraries for yolov8 using pip.

  • @Izaq-b2i
    @Izaq-b2i 10 months ago

    Hi, this video really helped me with my project, but I am not getting the images when I view them in rviz2. Could you help me out with that? Also, when I try to install ROS Foxy I get an error: no such file or directory.

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi Izaq!
      Thanks for watching my video!
      Do you have any error messages in the terminal?

    • @Izaq-b2i
      @Izaq-b2i 10 months ago

      @@robotmania8896 Yeah, it says no such file or directory. I am using Gazebo 11.10.2 and the Humble packages. Also, how do I speed up the Gazebo simulation to detect things? Which camera are we using, and did you train YOLOv8 first? Also, after colcon build it asks me to connect a joystick; what is this? And when I run ros2 topic echo /yolov8_inference I do not get back any parameters.
      If you don't mind, can I have your email address to ask more questions? I have to submit this project at the end of this month, please.

    • @robotmania8896
      @robotmania8896  10 months ago

      Have you built the project successfully? To speed up the inference, you should use a computer with a GPU. Since it is a simulation, the camera parameters are defined in the SDF file. No, in this tutorial I haven't trained YOLO; I just used a provided model. To operate the robot, you should use a joypad. You should use the source command before doing “ros2 topic echo /yolov8_inference”. Here is my e-mail: robotmania8867@yahoo.com

  • @akentertainment7911
    @akentertainment7911 4 months ago

    Hey, will it work on Gazebo 11 too?

    • @robotmania8896
      @robotmania8896  4 months ago

      Hi AK Entertainment!
      Thanks for watching my video!
      Yes, it should work on Gazebo 11.

  • @adityajambhale915
    @adityajambhale915 1 year ago +1

    Hello, I am facing issues while installing ultralytics (its build dependencies are not satisfied). I am using Ubuntu 20.04 and I tried with Python 3.8.10 and Python 3.7.5; it is still giving an error. Please suggest what to do, I am not able to find a solution anywhere else 🥲

    • @aadityanair2488
      @aadityanair2488 1 year ago +1

      Nice Issues

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Aditya Jambhale!
      Thanks for watching my video!
      What error exactly do you have?

    • @magnolia123gaming9
      @magnolia123gaming9 16 days ago

      I am having the same issues as well. It is with ultralytics' dependencies when installing through pip. I tried running it within a Docker container as well and I get the same issues.

  • @cmtro
    @cmtro 1 year ago

    Very good....

  • @mohammedbourouba9274
    @mohammedbourouba9274 3 months ago

    Hello, thank you for this amazing video, but I want my robot to recognize traffic signs and react according to them. How can I do it?

    • @robotmania8896
      @robotmania8896  3 months ago

      Hi Mohammed Bourouba!
      Thanks for watching my video!
      In that case, you have to train your own model.

    • @mohammedbourouba9274
      @mohammedbourouba9274 3 months ago

      Yes, I did that, but I want my robot, when it detects for example a "turn left" sign, to turn left automatically.

    • @robotmania8896
      @robotmania8896  3 months ago +1

      In that case, you can publish the “cmd_vel” topic when the robot detects the “turn left” sign. You will probably need a depth camera as well to make the robot turn at the right point.

    • @mohammedbourouba9274
      @mohammedbourouba9274 3 months ago

      What changes are needed if I want it to be autonomous? Is that possible?

    • @robotmania8896
      @robotmania8896  3 months ago

      @@mohammedbourouba9274 Do you mean that you would like to do navigation?
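
The react-to-sign idea in this thread can be sketched as a lookup from detected class name to a velocity command; in ROS2 the tuple would be packed into a geometry_msgs/Twist and published on cmd_vel. The class names and speeds here are made up for illustration:

```python
# (linear_x m/s, angular_z rad/s) per detected sign class
SIGN_COMMANDS = {
    "turn_left":  (0.2,  0.5),
    "turn_right": (0.2, -0.5),
    "stop":       (0.0,  0.0),
}

def command_for(detected_class):
    """Return the velocity command for a sign, defaulting to straight ahead."""
    return SIGN_COMMANDS.get(detected_class, (0.2, 0.0))

print(command_for("turn_left"))  # → (0.2, 0.5)
print(command_for("no_sign"))    # → (0.2, 0.0)
```

As the reply notes, a depth camera (or some distance estimate) would still be needed to decide *when* to issue the turn, not just *which* turn to issue.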

  • @sameshmanagond9128
    @sameshmanagond9128 1 month ago

    Does it work with ROS2 Humble?

    • @robotmania8896
      @robotmania8896  1 month ago

      Hi samesh managond!
      Thanks for watching my video!
      Yes, it should work.

  • @jungahkwak6343
    @jungahkwak6343 1 year ago

    Thank you for your REALLY NICE VIDEO!
    I'm following the video, but I have some issues during 'colcon build':
    [error] ModuleNotFoundError: No module named 'catkin_pkg'
    I tried to solve the error:
    1. pip install catkin_pkg
    2. source /opt/ros/foxy/setup.bash
    3. added “source /opt/ros/foxy/setup.bash” to the “.bashrc” file
    but it's not working.
    Any idea about this error? Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      You don't need catkin with ROS2; ROS2 packages are built using ament_cmake. Have you tried adding “source /opt/ros/foxy/setup.bash” to the “.bashrc” file and rebooting?

  • @BadBrother
    @BadBrother 8 months ago

    Can YOLOv8 track a simple track using ROS?

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Bad Brother!
      Thanks for watching my video!
      What do you mean by “simple track”? If it is something like a road, you can use semantic segmentation to extract the road part from an image.

    • @BadBrother
      @BadBrother 8 months ago

      @@robotmania8896 I mean lane tracking, so it detects the trajectory of the black tape. Oh okay, thank you for your advice. I would like to ask you a question. Actually, I have never tried YOLO. If I want to start learning, where should I start so that I can do lane tracking using YOLO? Thanks

    • @robotmania8896
      @robotmania8896  8 months ago

      @@BadBrother If you need to detect black tape, you don't have to do inference using YOLO. You may just use HSV decomposition to detect the black color. Here is an example video. ruclips.net/video/hdnuykRwMmI/видео.html

    • @BadBrother
      @BadBrother 8 months ago

      @@robotmania8896 I really appreciate your response. I apologise for confusing you. I mean the robot I want to build uses a camera to detect the two black tapes on the left and right, and the path forms a trajectory. The camera is connected to the Jetson Nano, and the motors are driven by an Arduino Uno.

    • @robotmania8896
      @robotmania8896  8 months ago +1

      I understand your problem. If you know exactly the object and color you have to detect, I think you don't necessarily have to use YOLO. You may do color detection or use an infrared sensor to move along the line.
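
The color-detection route suggested above reduces to thresholding: in HSV, black is any pixel with a low V (value) channel, and since V is max(R, G, B), a low per-pixel maximum approximates it directly. A simplified numpy sketch; in practice cv2.cvtColor plus cv2.inRange would do the full HSV version, and the threshold is a tuning parameter:

```python
import numpy as np

def black_mask(rgb, v_max=50):
    """Boolean mask of pixels dark enough to be black tape.

    HSV 'value' equals max(R, G, B), so thresholding that channel
    selects black without a full HSV conversion.
    """
    return rgb.max(axis=2) <= v_max

# Tiny synthetic image: a dark tape stripe amid a bright floor
img = np.full((4, 4, 3), 200, dtype=np.uint8)
img[:, 1] = (20, 20, 20)  # a dark vertical stripe in column 1
mask = black_mask(img)
print(mask[:, 1].all(), mask[:, 0].any())  # → True False
```

The centroid of each masked column (e.g. via np.argwhere) then gives the lane position to steer toward.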

  • @Moon-ue8qb
    @Moon-ue8qb 10 months ago

    Hi, thank you for making the video!! :)
    But I have a problem.
    I did < ros2 topic echo /Yolov8_Inference > and then I got:
    WARNING: topic [/Yolov8_Inference] does not appear to be published yet
    Could not determine the type for the passed topic
    How can I fix this error?
    sudo apt-get install ros-humble-ros-ign-bridge
    sudo apt-get install ros-humble-ros-pkgs
    sudo apt-get install ros-${ROS_DISTRO}-ros-gz
    I tried them but still have the error. Plz help me.

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi Moon!
      Thanks for watching my video!
      Did you execute the “source” command before executing the “ros2 topic echo” command? Otherwise, you will get an error.

    • @Izaq-b2i
      @Izaq-b2i 10 months ago

      I did not get any error, but nothing happened after running this command.

  • @lordfarquad-by1dq
    @lordfarquad-by1dq 1 year ago

    What about yolov8-seg?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi lordfarquad-by1dq!
      Thanks for watching my video!
      The segmentation result format is slightly different from recognition, but you should be able to publish it with little change to the code. I am planning to release a new video within a few days regarding semantic segmentation and YOLO; it may also help you.

  • @1b_wahyufahrizalalfayyadh286
    @1b_wahyufahrizalalfayyadh286 2 months ago

    Why do you not use a .xacro file? I mean, please give us the xacro file.

    • @robotmania8896
      @robotmania8896  2 months ago

      Hi Wahyu Fahrizal Al Fayyadh!
      Thanks for watching my video!
      This is a very simple model, so I didn't use any xacro files. You can change the extension of the file to .xacro and it should work as a xacro file.

    • @1b_wahyufahrizalalfayyadh286
      @1b_wahyufahrizalalfayyadh286 2 months ago

      @@robotmania8896 thank you, sir! Amazing videos!

  • @ថាន្នីសុគុណ
    @ថាន្នីសុគុណ 9 months ago

    How can I use this code with YOLOv5?

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi ថាន្នី សុគុណ!
      If you would like to use YoloV5, please refer to this video.
      ruclips.net/video/594Gmkdo-_s/видео.html

    • @ថាន្នីសុគុណ
      @ថាន្នីសុគុណ 9 months ago

      @robotmania8896 I have watched it, but when I try to run it with "ros2 run ..." it doesn't find the modules 'models' and 'utils'.

    • @robotmania8896
      @robotmania8896  9 months ago

      Yes, to run that code using “ros2 run” you have to modify the CMake file. You can run that code using “python3”.

  • @nourishcherish9593
    @nourishcherish9593 6 месяцев назад

    i see few syntax mistakes in your code just by looking at it.

    • @robotmania8896
      @robotmania8896  6 месяцев назад

      Year, there could be syntax mistakes. Please let me know if you have found any.

    • @nourishcherish9593
      @nourishcherish9593 6 месяцев назад

      Actually it's running without errors. Wtf. Like, you just have a line that says self.subscription... how is that not causing an error? Do I not know Python?
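      That line is valid Python: an attribute assignment is a complete statement on its own, and in rclpy the point of `self.subscription = self.create_subscription(...)` is to hold a reference so the subscription is not garbage-collected. A plain-Python stand-in (the class and topic names below are illustrative, not rclpy):

```python
class FakeNode:
    def create_subscription(self, topic):
        # stand-in for rclpy's create_subscription(); returns a handle object
        return {"topic": topic}

class CameraSubscriber(FakeNode):
    def __init__(self):
        # the assignment alone is a complete, legal statement; keeping the
        # handle in an attribute preserves a live reference even though
        # nothing ever reads it again
        self.subscription = self.create_subscription("rgb_cam/image_raw")

node = CameraSubscriber()
print(node.subscription["topic"])  # → rgb_cam/image_raw
```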

  • @nikhill3102
    @nikhill3102 7 months ago

    I get an error with "sudo apt install gazebo9".

    • @robotmania8896
      @robotmania8896  7 months ago

      Hi Nikhil Kulkarni!
      Thanks for watching my video!
      On which version of Ubuntu are you trying to install gazebo?

    • @nikhill3102
      @nikhill3102 7 months ago

      @@robotmania8896 22.04

    • @robotmania8896
      @robotmania8896  7 months ago

      For 22.04, "sudo apt install gazebo" should work.

    • @nikhill3102
      @nikhill3102 7 months ago

      @@robotmania8896 I get a "no candidate" error for gazebo.

    • @nikhill3102
      @nikhill3102 7 months ago

      @@robotmania8896 I tried, but it shows a "no candidate for 'gazebo'" error.

  • @LUWAGAMICHEAL
    @LUWAGAMICHEAL 1 year ago

    Hello sir, thank you for the video, I am learning a lot from you. I tried to implement the project step by step, but first, the gazebo folder didn't appear in my home directory after unhiding all contents of home. I searched for it manually and found a gazebo-9 folder, but its contents were not similar, though it still had a models folder. Secondly, on colcon build I get this error:
    CMake Error at CMakeLists.txt:19 (find_package):
    By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
    Could not find a package configuration file provided by "ament_cmake" with any of the following names:
    ament_cmakeConfig.cmake
    ament_cmake-config.cmake
    Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.

    • @LUWAGAMICHEAL
      @LUWAGAMICHEAL 1 year ago

      I have tried adding "source /opt/ros/foxy/setup.bash" at the bottom of the file, but it still doesn't build the packages.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi @user-xg7dk1wl3s!
      Thanks for watching my video!
      Please open the terminal and execute the “gazebo” command. The folder should appear.
      Also, add “source /opt/ros/foxy/setup.bash” to your “.bashrc” file. This should solve ament_cmake related error. Don’t forget to reboot your computer after altering the “.bashrc” file.

  • @raphaelcrespopereira3206
    @raphaelcrespopereira3206 1 year ago

    I can't get the yolobot_inference topic to be listed.
    It keeps showing this error:
    ModuleNotFoundError: No module named 'yolov8_msgs.yolov8_msgs_s__rosidl_typesupport_c'

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi raphaelcrespopereira3206!
      Thanks for watching my video!
      Hmm… I have never had such an error. Which version of ROS are you using?

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago

      @@robotmania8896 ros2 foxy on ubuntu 20.04

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago

      @@robotmania8896 I reinstalled everything and now the inference topic is listed, but when I do the echo part it does not show the camera working, and when I open RViz the inference image shows nothing.

    • @ChristianDiaryUG
      @ChristianDiaryUG 1 year ago

      I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics I am missing.

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago +1

      @@ChristianDiaryUG I fixed it by running Gazebo 11 with ROS2 Humble on Ubuntu 22.04, and I added the additional step of installing the ROS2-Gazebo communication package: sudo apt-get install ros-humble-ros-ign-bridge

  • @oymdental2602
    @oymdental2602 1 year ago

    Are you teaching any classes? I'm looking to contact you.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi OYM dental!
      Thanks for watching my video!
      No, I am not providing any classes, since I have another job which takes almost all my time. But if you have any questions, maybe I can answer them.

  • @loowaijun2960
    @loowaijun2960 1 year ago

    Hi, it is a very good tutorial. But I am facing this error during the command "colcon build":
    CMake Error at CMakeLists.txt:19 (find_package):
    By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
    Could not find a package configuration file provided by "ament_cmake" with any of the following names:
    ament_cmakeConfig.cmake
    ament_cmake-config.cmake
    Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
    Do you have any idea how to solve this? Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Loo Waijun!
      Thanks for watching my video!
      Have you added “source /opt/ros/foxy/setup.bash” in your “.bashrc” file?

    • @loowaijun2960
      @loowaijun2960 1 year ago

      @@robotmania8896 It works. Thanks a lot.
      After that, I ran the command "ros2 topic echo /Yolov8_Inference", but it doesn't show anything. Do you have any idea about this?

    • @robotmania8896
      @robotmania8896  1 year ago

      @@loowaijun2960 I explain how to execute the "ros2 topic list" command in the video. Please watch starting from 11:14.

    • @loowaijun2960
      @loowaijun2960 1 year ago

      @@robotmania8896 Yeah, I followed it, but the recognized object information does not show.

    • @robotmania8896
      @robotmania8896  1 year ago

      @@loowaijun2960 Do bounding boxes appear in RViz when the information in the terminal is not shown?

  • @lucasbalviin1999
    @lucasbalviin1999 5 days ago

    Has anyone had this issue in ROS2 Humble when running colcon:
    Starting >>> yolobot_control
    Starting >>> yolobot_description
    Starting >>> yolobot_gazebo
    Starting >>> yolobot_recognition
    Starting >>> yolov8_msgs
    Finished

    • @lucasbalviin1999
      @lucasbalviin1999 5 days ago

      Solution: downgrade setuptools, install sphinx, and run "sudo apt-get install python3-pip python3-dev python3-setuptools apache2 -y".

    • @robotmania8896
      @robotmania8896  5 days ago

      Hi Lucas Balvin Huertas!
      Thanks for watching my video!
      I am glad that you found a solution. And thank you for sharing the information.