Object Recognition with Orin Nano (Jetpack 6.0) Using YOLOv9, TensorRT and RealSense

  • Published: Feb 1, 2025

Comments • 66

  • @sazidrahmansimanto7858
    @sazidrahmansimanto7858 6 months ago

    Thank you very much for the video! Finally, librealsense 2.55.1 and realsense-ros 4.55.1 (installed from source) are working perfectly on JetPack 6 with the RealSense D456 camera. I was facing an issue with ROS, but after I downgraded the latest firmware to the previous version, 5.15.1.0, everything works fine now.

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Sazid Rahman Simanto!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @bsv4
    @bsv4 a month ago

    First of all, thanks for your explanation. Can we do this on a Jetson Nano? What do we need to change, apart from the PyTorch and torchvision versions?
    Thanks in advance for your reply.

    • @robotmania8896
      @robotmania8896  a month ago +1

      Hi bki!
      Thanks for watching my video!
      Here is a tutorial in which I am using Jetson Nano. I hope it will help you!
      ruclips.net/video/o42BFhtVqyo/видео.html

    • @bsv4
      @bsv4 a month ago

      @@robotmania8896 Thank you very much for your quick reply. I looked at your tutorial; I need to do tracking using YOLOv9. Do you think this is possible by following your tutorial?

    • @robotmania8896
      @robotmania8896  a month ago

      @@bsv4 I am not sure whether YOLOv9 will work with Python 3.6. You may be able to do inference, though.

  • @다비-o8z
    @다비-o8z 5 months ago

    Hello, teacher! I have a question. I have a Jetson Orin Nano dev kit (JetPack 6). I want to run an Intel RealSense with YOLO and ROS 2 on the Jetson, but when I try, errors occur. Without the Jetson I can run the RealSense with ROS 2 and YOLOv8, but on the Jetson I can't. Can you check what the problem is, and what should I do? 😢 The error is that when I run the code in the terminal, it says no device is connected, but when I check the Jetson's ports, I can see the RealSense is connected.

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi 다비!
      Thanks for watching my video!
      What error exactly do you have?

    • @다비-o8z
      @다비-o8z 4 months ago

      @@robotmania8896 Can I get your email? I want to show you my project's procedure. I have already done so much troubleshooting, and I want to cry. 😭 Basically: 1. install ROS Humble; 2. install PyTorch and torchvision; 3. build OpenCV with CUDA; 4. install the realsense-ros wrapper (but I realized JetPack 6.0 doesn't support librealsense out of the box, so I have to build it from source, and even that gives errors; I guess JetsonHacksNano's librealsense is too old a version); 5. install the YOLOv8 ROS wrapper. The goal is to run an Intel RealSense D455. If you give me your email, I can explain better because I can upload photos of the errors. Please help me!

    • @robotmania8896
      @robotmania8896  4 months ago

      Here is my e-mail:
      robotmania8867@yahoo.com

    • @mr.9489
      @mr.9489 4 months ago

      @@robotmania8896 Thank you! I sent the email! I'm 제천대성.

  • @xp-4yt
    @xp-4yt 10 months ago

    Hi, as I understand it, the RealSense camera runs at 30 fps, but inference runs at only 10. This is still better than OpenCV, but far from solutions utilizing DeepStream or the Isaac SDK. By the way, maybe you could someday make a tutorial on how to configure detection of objects crossing predefined frame borders? I saw this functionality in a DeepStream application but didn't have much time to dig deeper. I also noticed that DeepStream allows extensive inference configuration, but it is still very demanding in terms of C++ skills and general understanding, so maybe you know of other approaches?

    • @robotmania8896
      @robotmania8896  10 months ago +1

      Hi xp-4yt!
      Thanks for watching my video!
      It is difficult to compare frameworks, because apart from the inference task itself they do many other things, like drawing bounding boxes, which also consume time. So if you really need high fps, I recommend removing all processes that are unnecessary for your application and doing visualization only during debugging. Running your model in C++ indeed makes inference faster, but as you mentioned, converting a model to C++ requires some programming skills, so the easier way is to use existing tools like TensorRT. As for detection of objects crossing predefined frame borders, I will look into it.
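      To see where the frame budget actually goes, each pipeline stage can be timed individually. A minimal sketch in Python; the timed decorator and the draw_boxes stage below are illustrative, not from the tutorial code:

```python
import time
from functools import wraps

def timed(fn):
    """Print how long a pipeline stage takes, in milliseconds."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__} took {(time.perf_counter() - t0) * 1e3:.2f} ms")
        return result
    return wrapper

@timed
def draw_boxes(frame, detections):
    # Visualization stage: a typical candidate to disable outside of debugging.
    return frame
```

      Wrapping each stage this way makes it easy to confirm whether visualization or pre/post-processing, rather than inference itself, is what caps the fps.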

  • @Davedav84
    @Davedav84 3 months ago

    Good tutorial! Is it possible to use a CSI camera for a trial, or does it need a webcam?

    • @robotmania8896
      @robotmania8896  3 months ago

      Hi Davedav84!
      Thanks for watching my videos!
      A CSI camera will also work, but you have to modify the part of the code where you obtain frames from the camera.

    • @Davedav84
      @Davedav84 3 months ago

      @robotmania8896 Okay, what kind of code must I add? I'm totally new to this kind of environment.

    • @robotmania8896
      @robotmania8896  3 months ago

      I think this article and GitHub project will help you.
      jetsonhacks.com/2023/04/05/using-the-jetson-orin-nano-with-csi-cameras/
      github.com/jetsonhacks/CSI-Camera
      If we take “detect_RS.py” for reference, you have to modify lines 82~104.
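      Following the JetsonHacks CSI-Camera approach, the RealSense capture in that range can be swapped for a GStreamer-backed cv2.VideoCapture. A hedged sketch; the gstreamer_pipeline helper and its defaults are illustrative, not part of the tutorial code:

```python
def gstreamer_pipeline(sensor_id=0, width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer string for a Jetson CSI camera (nvarguscamerasrc)."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink drop=1"
    )

# In place of the RealSense pipeline setup (lines 82~104 of detect_RS.py),
# frames would then come from OpenCV:
# import cv2
# cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
# ret, frame = cap.read()
```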

  • @심민경-o6l
    @심민경-o6l 6 months ago

    Great video!
    Is your OpenCV built with CUDA, or just the basic OpenCV that comes with JetPack 6.0?

    • @심민경-o6l
      @심민경-o6l 6 months ago

      And I want to run TensorRT with C++, but I'm going to detect with a CSI cam, and I don't know how to adapt demo.cpp.
      Should the C++ code be modified the same way the index is modified in the Python code?

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi 심민경!
      Thanks for watching my video!
      OpenCV is the version that comes with JetPack; I didn't do any custom installation.
      Regarding the C++ file, you have to modify demo.cpp around line 83. Instead of reading an image from a path, you have to capture a video frame. Steps 6~8 of this post should help you.
      selfmadetechie.com/how-to-create-a-webcam-video-capture-using-opencv-c

  • @VR-fh4im
    @VR-fh4im 4 months ago

    Thanks for the video. Why not use the ultralytics library?

    • @robotmania8896
      @robotmania8896  4 months ago

      Hi V R!
      Thanks for watching my video!
      I just wanted to use YOLOv9. As far as I know, ultralytics implements YOLOv8. But ultralytics is a good library too.

  • @andrewli1126
    @andrewli1126 5 months ago

    Thank you for the nice video!
    However, when I execute my Python script in the last step, the pipeline fails to start, returning "RuntimeError: No device connected". Could you provide some insight on this? (The camera is connected to a USB 3 port with a USB 3 cable.)

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi andrewli1126!
      Thanks for watching my video!
      Usually, this error occurs when there is a problem with the USB cable. Can you obtain an image using the “realsense-viewer” application?

    • @andrewli1126
      @andrewli1126 5 months ago

      @@robotmania8896 Thanks for the reply!
      Yes, the camera works fine with realsense-viewer, and I can also import the pyrealsense2 module. It's just that when running the script, the pipeline fails to start, returning the error: "No device connected."

    • @robotmania8896
      @robotmania8896  5 months ago

      @@andrewli1126 Can you get frames from the RealSense using this code? (The index may not be 0.)
      import cv2
      capture = cv2.VideoCapture('/dev/video0')
      while True:
          try:
              ret, frame = capture.read()
              if not ret:
                  break
              cv2.imshow('frame', frame)
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
          except Exception:
              import traceback
              traceback.print_exc()
              break
      capture.release()
      cv2.destroyAllWindows()

    • @andrewli1126
      @andrewli1126 5 months ago

      @@robotmania8896 Thank you for the reply! I resolved the issue by uninstalling pyrealsense2 and rebuilding the wrapper from source. Cheers!

  • @yasiraltay2849
    @yasiraltay2849 7 months ago

    I just want to install the Intel RealSense SDK on JetPack 6, but I'm running into problems, like apt-key errors. Can you help me?

    • @robotmania8896
      @robotmania8896  7 months ago

      Hi Yasir Altay!
      Thanks for watching my video!
      What problem exactly do you have?

    • @yasiraltay2849
      @yasiraltay2849 7 months ago

      @@robotmania8896 Previously, realsense-viewer did not see my camera; now the realsense-viewer SDK is working, but my Python code gives a "device not found" error. (Jetson Orin Nano, JetPack 6.0, D435i)

    • @yasiraltay2849
      @yasiraltay2849 7 months ago

      @@robotmania8896 I installed it as you did, but I don't have an OFF directory. That's why I didn't change anything in .bashrc.

    • @robotmania8896
      @robotmania8896  6 months ago

      @@yasiraltay2849 There should be a directory in which the pyrealsense2 library files are located. If the bug has been corrected, it may no longer be called “OFF”. Do you have such a directory under “/usr/local”?

  • @paolomagri7207
    @paolomagri7207 9 months ago

    Hi! Thanks for the video!!
    Is it possible to do the same procedure with JetPack 5.1.2 (Ubuntu 20.04) and ROS Humble, or do I have to change something?

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi Paolo Magri!
      Thanks for watching my video!
      ROS Humble requires Ubuntu 22.04, so you have to use Foxy if you are on JetPack 5.

  • @duncanmaclennan9624
    @duncanmaclennan9624 7 months ago

    Thank you so much for your video, RM.
    I'm running into an error at the end, running yolov9_trt_RS.py: I get "AttributeError: module 'pyrealsense2' has no attribute 'config'". I don't suppose you know what might be causing this error?
    Thanks again

    • @robotmania8896
      @robotmania8896  7 months ago +1

      Hi Duncan MacLennan!
      Thanks for watching my video!
      The reason for this error is that Python can find the “pyrealsense2” directory but cannot find the “config” function. Please check whether you have 9 files inside the /usr/local/OFF directory (16:07 in the video). Also, please check whether you wrote “PYTHONPATH” correctly in your ”.bashrc” file (16:40 in the video). Do not forget to source the “.bashrc” file or reboot your Nano after altering it; otherwise the changes will not be reflected.
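      A quick way to verify that the interpreter actually sees the entry added to “.bashrc” (a small illustrative helper, not from the tutorial code; /usr/local/OFF is the path used in the video):

```python
import os

def pythonpath_contains(pythonpath: str, directory: str) -> bool:
    """Return True when `directory` is one of the colon-separated entries."""
    return directory in pythonpath.split(":")

# After `export PYTHONPATH=$PYTHONPATH:/usr/local/OFF` and `source ~/.bashrc`,
# this should print True:
print(pythonpath_contains(os.environ.get("PYTHONPATH", ""), "/usr/local/OFF"))
```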

    • @DanielEsqueda-x6j
      @DanielEsqueda-x6j 7 months ago +1

      pip install pyrealsense2 did the trick for me. However, now profile = pipeline.start(config) fails with "RuntimeError: No device connected".

    • @robotmania8896
      @robotmania8896  7 months ago

      @@DanielEsqueda-x6j In this case it is most often a problem with the cable. I recommend using the cable that comes with the RealSense.
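      One way to narrow down a "No device connected" error is to list what librealsense actually enumerates and on what USB link. A hedged sketch: the commented pyrealsense2 calls assume the bindings are installed, and usb_ok is an illustrative helper, not part of the tutorial code:

```python
def usb_ok(usb_desc: str) -> bool:
    """D4xx cameras generally need a USB 3.x link for the default stream
    profiles; a '2.x' descriptor usually means a bad cable or port."""
    return usb_desc.startswith("3")

# With the pyrealsense2 bindings available, enumeration looks like:
# import pyrealsense2 as rs
# for dev in rs.context().query_devices():
#     name = dev.get_info(rs.camera_info.name)
#     usb = dev.get_info(rs.camera_info.usb_type_descriptor)
#     print(name, usb, "OK" if usb_ok(usb) else "check the cable/port")
```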

    • @duncanmaclennan9624
      @duncanmaclennan9624 7 months ago

      Thanks @robotmania8896. Really appreciate the reply.
      I've gone through the whole process again (after reflashing the Nano too). I'm now running into a problem earlier, at 14:33, running yolov9_trt, which might be the cause of the problem at the end.
      When attempting inference for the first time, it says:
      "Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32"
      Then it attempts to create the inference engine... After 25 minutes it says, "TensorRT encountered issues when converting weights between types and that could affect accuracy" and
      "Error in do_infer: 'Yolov9' object has no attribute 'output_dim'".
      Any idea where I might be going wrong?

    • @robotmania8896
      @robotmania8896  7 months ago

      @duncanmaclennan9624 Regarding the “Your ONNX model has been~” message: it appears because the original model was not trained on the Orin Nano. For experimental purposes, it is not that relevant.
      I am not sure what may cause the “'Yolov9' object has no attribute 'output_dim'” error. Do you have a more detailed output log?

  • @m.iqbalmaulana7328
    @m.iqbalmaulana7328 10 months ago

    I plan to use this hardware combined with a camera to do person detection on the edge; the detection results would then be sent to a higher-spec PC to recognize who the person is. Do you think this hardware is suitable for the task?

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi M. Iqbal Maulana!
      Thanks for watching my video!
      Yes, I think the Orin Nano's capability is more than enough for that use case.

  • @dks9290
    @dks9290 10 months ago

    Great video!!
    Can you please upload more ROS1 videos?

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi Ahs!
      Thanks for watching my video!
      I used to upload videos related to ROS1, but now all ROS1 distributions except Noetic are deprecated, so I upload ROS2-related videos. If you are planning to use ROS for a while, I recommend using ROS2.

  • @locpham5522
    @locpham5522 2 months ago

    Can you make a video with YOLOv11 on the Jetson Orin?

    • @robotmania8896
      @robotmania8896  2 months ago

      Hi loc pham!
      Thanks for watching my video!
      Are you having trouble with the YOLO installation? If so, where?

    • @locpham5522
      @locpham5522 2 months ago

      My boss wants me to deploy YOLOv11 on a Jetson Orin, but YOLOv11 is the newest version of the YOLO family, so it is very hard to set up everything needed to run it.

    • @robotmania8896
      @robotmania8896  2 months ago

      @@locpham5522 As described on this GitHub page, you should be able to install YOLOv11 by executing this command:
      pip3 install ultralytics
      github.com/ultralytics/ultralytics

    • @locpham5522
      @locpham5522 2 months ago

      @@robotmania8896 I have installed JetPack 6.0, and next is installing ultralytics? Do I have to install PyTorch and the other packages like in your video?

    • @robotmania8896
      @robotmania8896  2 months ago

      @@locpham5522 Yes, please install PyTorch and torchvision as I described in the video.

  • @zgmf5331
    @zgmf5331 9 months ago

    Great video! Is it possible to use a Raspberry Pi camera or a USB webcam for this project?

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi ZGMF!
      Thanks for watching my video!
      Yes, it is possible to use other cameras, but you have to modify the code a little bit.

  • @imenmabrouk6769
    @imenmabrouk6769 9 months ago

    Hi, thanks for all your videos. How can I use YOLOv9 with ROS 2?

    • @robotmania8896
      @robotmania8896  9 months ago +1

      Hi Imen Mabrouk!
      Thanks for watching my video!
      I haven’t made any videos about using YOLOv9 with ROS 2, but the code should be very similar to the code I have provided in this tutorial.
      ruclips.net/video/594Gmkdo-_s/видео.html

    • @imenmabrouk6769
      @imenmabrouk6769 9 months ago

      @robotmania8896 Thank you for your response 🙏 And my other question: I have a Jetson AGX Orin and I need to use YOLOv9, not with Gazebo but with an Intel camera. My question is, how can I use the GPU and the full power of this board?

    • @robotmania8896
      @robotmania8896  9 months ago +1

      @@imenmabrouk6769 If you have an AGX Orin, the setup procedure will be exactly the same as what I explained in this video. Modern machine-learning libraries detect the GPU automatically, so you don’t have to do anything special yourself.

    • @imenmabrouk6769
      @imenmabrouk6769 9 months ago

      @@robotmania8896 Thank you so much, it's kind of you to respond to my question.

  • @gokdenizyildirim2680
    @gokdenizyildirim2680 6 months ago

    Hello, thanks for the video. After running yolov9_trt.py I get this error: 2024-07-10 00:07:57,929 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 3, 640, 640)
    2024-07-10 00:07:57,934 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 84, 8400)
    2024-07-10 00:07:57,936 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 84, 8400)
    wrapper took 13.1507 ms to execute.
    Error in do_infer: 'Yolov9' object has no attribute 'output_anchor_num'
    wrapper took 909.7321 ms to execute.
    2024-07-10 00:07:58,854 - ERROR - yolov9_trt,: No detection results for image: 000000000036.jpg
    wrapper took 7.7267 ms to execute.
    Error in do_infer: 'Yolov9' object has no attribute 'output_anchor_num'
    wrapper took 112.7925 ms to execute.
    2024-07-10 00:07:58,972 - ERROR - yolov9_trt,: No detection results for image: 000000000194.jpg
    wrapper took 7.4565 ms to execute.
    Error in do_infer: 'Yolov9' object has no attribute 'output_anchor_num'
    wrapper took 125.2282 ms to execute.
    2024-07-10 00:07:59,103 - ERROR - yolov9_trt,: No detection results for image: 000000000144.jpg
    wrapper took 9.0101 ms to execute.
    Error in do_infer: 'Yolov9' object has no attribute 'output_anchor_num'
    wrapper took 115.9399 ms to execute.
    2024-07-10 00:07:59,226 - ERROR - yolov9_trt,: No detection results for image: 000000000368.jpg
    2024-07-10 00:07:59,226 - INFO - yolov9_trt,destroy: yolov9 destroy
    -------------------------------------------------------------------
    PyCUDA ERROR: The context stack was not empty upon module cleanup.
    -------------------------------------------------------------------
    A context was still active when the context stack was being
    cleaned up. At this point in our execution, CUDA may already
    have been deinitialized, so there is no way we can finish
    cleanly. The program will be aborted now.
    Use Context.pop() to avoid this problem.
    -------------------------------------------------------------------
    Aborted (core dumped)

    • @gokdenizyildirim2680
      @gokdenizyildirim2680 6 months ago

      Can you help me with that?

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Gökdeniz YILDIRIM!
      Thanks for watching my video!
      Unfortunately, I don’t have an Orin Nano at hand right now, so I cannot tell you for sure.
      Can you please check what “binding_index” values you get in the “for” loop in the “get_trt_model_stream” function (lines 174~196)? The 'output_anchor_num' should be defined when binding_index == 2.
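      The per-binding attribute assignment can be pictured like this. This is an illustrative reconstruction, not the actual tutorial code: the engine is expected to expose three bindings, and 'output_anchor_num' is only set for binding index 2, so an engine that exports a different number of bindings leaves the attribute undefined and triggers the error above.

```python
def classify_bindings(shapes):
    """Mimic how get_trt_model_stream assigns attributes per binding index.
    shapes: list of binding shapes as reported by the TensorRT engine."""
    attrs = {}
    for i, shape in enumerate(shapes):
        if i == 0:
            attrs["input_shape"] = shape            # e.g. (1, 3, 640, 640)
        elif i == 1:
            attrs["output_dim"] = shape[1]          # e.g. 84 (4 box + 80 classes)
        elif i == 2:
            attrs["output_anchor_num"] = shape[2]   # e.g. 8400 anchors
    return attrs

# Three bindings, as in the log above: all attributes get defined.
ok = classify_bindings([(1, 3, 640, 640), (1, 84, 8400), (1, 84, 8400)])
# Only two bindings: 'output_anchor_num' is missing, reproducing the error.
bad = classify_bindings([(1, 3, 640, 640), (1, 84, 8400)])
print("output_anchor_num" in ok, "output_anchor_num" in bad)
```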

    • @zylek4163
      @zylek4163 6 months ago

      @@robotmania8896 Thanks, you just saved me a lot of time! I had the same problem and this worked like a charm.