thanks mate, helped a lot. I was stressed before deploying the product, you saved me🤩
Hi Kenzhebek Taniev!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Hello? I'm jetsonmom, a 65-year-old living in Korea and a copycat (I don't know much about Python, but I like to try things out and get help from acquaintances and ChatGPT-4 for things I don't know). I tried following the teacher's video using the Orin Nano provided by NVIDIA. (I am also a Jetson Nano Ambassador.) Thank you so, so much for sharing. My image is JetPack 6.0 DP, so my torch and torchvision versions differ from the shared video: I installed torch 2.2.0 and torchvision 0.17.0. I don't know if it's because of that difference, but when I ran the Python sample program, the results came out well. However, recognition is intermittent. I thought that was strange, so I looked into it and found that CUDA was not being used. When installing cuDNN, it seemed like I needed version 12, but it didn't work. Do I need to change to the same versions as the teacher? ruclips.net/video/HVFFNKN8pB8/видео.html
Hello 장성숙!
When you are using a Jetson Orin, you don't have to install CUDA or cuDNN; they are installed by default. If CUDA is available, PyTorch will use it automatically, so it is strange that CUDA was not used. Have you installed a PyTorch version suitable for Jetson (aarch64 architecture)?
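As a quick check (a minimal sketch; nothing Jetson-specific is assumed beyond PyTorch itself), you can verify whether your PyTorch build actually sees the GPU like this:

import torch

# True only if the installed wheel was built with CUDA support
# and the CUDA runtime is visible to it.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # On a Jetson this should report the integrated Orin GPU.
    print(torch.cuda.get_device_name(0))
else:
    print("CPU-only build: reinstall the Jetson (aarch64) PyTorch wheel.")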
@@robotmania8896 Yes, I installed it. It's just that the execution speed is too slow, and I wanted to eliminate the intermittent interruptions, so I checked with a command whether CUDA is used when running, and it came out as not, so I asked the question. Is there any way to make video processing faster? The result is: "0: 480x640 1 laptop, 1434.2ms
Speed: 5.9ms preprocess, 1434.2ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1155.6ms
Speed: 4.0ms preprocess, 1155.6ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1195.6ms
Speed: 1.8ms preprocess, 1195.6ms inference, 3.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1128.3ms
Speed: 3.8ms preprocess, 1128.3ms inference, 2.2ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1102.3ms
Speed: 2.2ms preprocess, 1102.3ms inference, 2.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1160.8ms
Speed: 3.4ms preprocess, 1160.8ms inference, 3.0ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1125.3ms
Speed: 4.8ms preprocess, 1125.3ms inference, 2.6ms postprocess per image at shape (1, 3, 480, 640)"
Sorry for the late response. What GPU are you using? Yes, it is possible to make inference faster with a smaller image or model size.
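For illustration, a minimal sketch of those two speed knobs using the Ultralytics API ("sample.jpeg" is a hypothetical test image):

from ultralytics import YOLO

# yolov8n ("nano") is the smallest variant; it trades some accuracy for speed.
model = YOLO('yolov8n.pt')

# imgsz shrinks the resolution the network actually processes;
# 320 is noticeably faster than the default 640.
results = model('sample.jpeg', imgsz=320)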
Great man, you are great. For three days we had been trying with the Jetson Orin Nano and our GPU would not work with YOLOv8, but your script is great and the guidance is very nice; now our GPU works on JetPack 6, the latest OS, on the Jetson Orin Nano. Appreciate your work 👌🏻
Hi Music!
Thanks for watching my video!
It is my pleasure if this video has helped you!
@@robotmania8896 Keep it up! Bring out more videos like this, and ROS with YOLOv8 on the Jetson Orin Nano.
How did you get PyTorch to install? Everything has changed since this tutorial, and I cannot get PyTorch to install using the steps shown here.
This tutorial saved my life!! GREAT video
Hi 赵子铭!
Thanks for watching my video!
It is my pleasure if this video has helped you!
This is great, brother! This installation video is the answer to my urgent need.
Hi SeanWhen!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Awesome! Great tutorial!
Hi autumn soybean!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Quick and easy!
Hi Prescription Oatmeal!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Thanks a lot for making this video!!! I am just wondering if this tutorial is also suitable for Jetson Nano?
Hi Wang Jin!
Thanks for watching my video!
If you would like to use Yolov8 with Jetson Nano, this video will help you.
ruclips.net/video/joAZEUbZZy8/видео.html
First of all, I want to thank you for another great and very detailed tutorial with a nice explanation. I am pretty new to robotics, and your videos are crucial for avoiding a lot of the trouble newbies run into in this field. I also have a small question about the video: as I understood it, the way object detection should be used with the Jetson family is a DeepStream engine implementation. The engine seems to be much faster than a .pt model. Am I wrong?
Btw, can you give me a clue how to work with the navigation stack in ROS2 to add conditions and maybe some stopping criteria? I want to use object detection to create an autopilot which takes road signs, traffic lights, etc. into account.
P.S. I am sorry for my English.
P.P.S. Thanks for the new video once more!
Hi xp-4yt!
Thanks for watching my video!
Yes, if you need to push your inference time to the limit, you should use the DeepStream engine. But I think describing several topics in one tutorial could be confusing, so I will make a separate tutorial for DeepStream.
As for navigation stack, some of the features you have mentioned could probably be achieved using Waypoint Task Executors.
navigation.ros.org/plugins/index.html#waypoint-task-executors
Also, in this tutorial I explained how to navigate to detected objects. It may also help you.
ruclips.net/video/Ob8lGOHBrig/видео.html
@@robotmania8896 Thanks a lot! This information is extremely helpful! ☺️
@@robotmania8896 I can't wait to see DeepStream tutorial
@@KobeNein I am planning to make a tutorial about DeepStream in a few weeks.
@@robotmania8896 I am also very impatient about it. I am dealing with DeepStream right now, and I have no idea how to use an engine with topic data. It seems that topic information should be converted into a video stream, but I have synchronization problems... Is it possible to use the DeepStream engine with ROS2 in real time at all? Forums say no.
love this
Hi CS RASEL!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Hey, great video.... Thanks.
I had one question: where do you specify the input of the stream? Like RTSP or a webcam?
Hi Deepak NR!
Thanks for watching my video!
The input of the stream (obtaining the frames) is at line 27 (frames = pipeline.wait_for_frames()) in the “yolov8_rs.py” script.
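For context, the surrounding code follows the usual pyrealsense2 pattern, roughly like the sketch below (not the exact script; the stream resolution, format, and frame rate here are assumptions):

import pyrealsense2 as rs
import numpy as np

# Configure and start the RealSense pipeline.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        # This is the call that obtains each frame set from the camera.
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        # Convert to a numpy array usable by OpenCV / YOLO.
        color_image = np.asanyarray(color_frame.get_data())
finally:
    pipeline.stop()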
Hello mate, do you have one of these but with the new Jetson Nano? The possibilities are incredible.
IP cameras + Jetson Nano for security, just for starters. I would buy that course immediately.
Hi Joaco Solbes!
Thanks for watching my video!
In this tutorial I use the Jetson Orin Nano, which is the newest model as far as I know. Technically, the Jetson Orin Nano Super is the newest model, but since it was announced just a few days ago, I don't have it.
@@robotmania8896 Thanks for the vid. Do you know if this method will work on JetPack 6.1?
Great video
Hi 준호 김!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Thank you for the tutorial, but I have a problem where I cannot find the OFF folder.
Hi nurulnajwa khamis!
Thanks for watching my video!
Is there any other folder generated, or is the folder not generated at all?
@@robotmania8896 Hi, thank you for your reply. There is no such folder, nor any other folder generated. I even got an error saying the pyrealsense2 module was not found. But I tried changing to another version of librealsense, v2.54.2, and it's working!
@@nurulnajwakhamis2680 I am glad that you made it work!
Hi there, I'm on the step after building the librealsense shell script. The download finished, but when I go into Files and into the /usr/local directory to search for the OFF folder containing pybackend and the others, no folder named OFF appears at all. Am I able to keep LIB as the name in .bashrc? I'm not sure what to do.
Also, is pyrealsense supposed to download automatically when the librealsense shell script is built, or do we have to download it separately? I have no pybackend or pyrealsense files at all, whether in OFF folders or elsewhere, so I'm not able to import pyrealsense after adjusting the code in the .bashrc file. I'm getting an error saying no module named pyrealsense when trying to import pyrealsense as rs. If you can give some advice, I'd appreciate it.
Hi Aaron Pena!
Can you please try this command:
pip3 install pyrealsense2
It will probably work.
Hi, wonderful video! I am wondering why I keep encountering the error "network is unreachable" when I run "pip3 install ultralytics"? I really appreciate your help!
Hi Wang Jin!
Thanks for watching my video!
It seems to be a network problem. Do you have an internet connection?
At line 39, shouldn't you call results = model(color_image) and provide the param `device=0` to use the GPU?
Hi bijan esphand!
Thanks for watching my video!
On the ultralytics GitHub page there is no reference to how to use the model with CPU or GPU. I guess that if the GPU is available, YOLO automatically chooses the GPU for inference.
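If you prefer to be explicit rather than rely on auto-selection, the Ultralytics predict call does accept a device argument. A minimal sketch ("sample.jpeg" is a hypothetical image):

from ultralytics import YOLO

model = YOLO('yolov8m.pt')

# device=0 pins inference to the first CUDA GPU;
# device='cpu' would force CPU inference instead.
results = model('sample.jpeg', device=0)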
Hi,
Do you plan to make videos related to the Intel Neural Compute Stick?
Thanks for your tutorials!
Hi Guillermo Velazquez!
Thanks for watching my video!
For now, I am not considering making a tutorial for the Intel Neural Compute Stick.
Thanks for the video, which was greatly useful!!! But I couldn't find a way to download the yolov8_rs file that you said to download from Google Drive. How and where can I get this file?
Hi 충현이!
As mentioned near the end of the video, the Google Drive link is in the description. Please open the description and you will find the link.
@@robotmania8896 Thanks for your comment. Sorry for bothering you, but can you tell me the version of OpenCV you are using?
@@충현이-p1r I don't have the Jetson Orin by my side right now, so I cannot check. But I didn't do anything special while installing OpenCV. If you install the version specified in the “requirements.txt” file, the program should work. Do you have any trouble with OpenCV?
Which version of the Intel RealSense camera are you using? Can I follow the same procedure as shown in the video with a RealSense D455?
Hi Sri Charan Kaipa!
Thanks for watching my video!
In this tutorial I used the RealSense D435 camera. I think you can use the D455 with the same procedure.
Thanks for sharing!! What FPS did you manage to get? I can't get more than 5 FPS :(
Hi soasuitegc!
Thanks for watching my video!
The FPS largely depends on the YOLO model size and the Orin Nano model. Are you using exactly the same YOLO model and Orin Nano model as in my tutorial? It seems to me that the Orin Nano can do much better than 5 FPS.
@@장성숙-l9z Thank you!
Hi, I would like to ask if it would be possible to do the same with 'Pose Detection'? Let me explain: I would like to take the keypoints on the colour view and put them in real time into the depth view. Basically, to do exactly what you did, but using not only the bounding box but also the keypoints. I don't understand how to achieve this; I would be grateful if you could answer me. Thanks!
Hi Eness Chebbaki!
Thanks for watching my video!
Yes, it is possible. In the case of “pose detection”, as described on the page below, you have to extract the “keypoints” from the results, just as I did for bounding boxes in this tutorial. Then you will be able to plot the coordinates of the “keypoints” on the depth image.
docs.ultralytics.com/modes/predict/#masks
@@robotmania8896 Thank you for replying! I have read the documentation and am trying to do the same thing as you did with the boxes, but I am not getting any results. I don't know if it's because of the format in which the tensor containing the coordinates of the keypoints is output. Could you help me out?
@@enesschebbaki1226 Here is the sample code to extract coordinates of the keypoints.
///////////////////////////////////////////////////////////
from ultralytics import YOLO
import os

# Load the pose model from the home directory.
model_directory = os.environ['HOME'] + '/pose/yolov8m-pose.pt'
model = YOLO(model_directory)

# Run inference on a sample image.
source = "sample.jpeg"
results = model(source)

# Extract the keypoint coordinates for each detected person.
for r in results:
    keypoints = r.keypoints
    for keypoint in keypoints:
        # xy[0] holds the (x, y) pixel coordinates of each keypoint.
        b = keypoint.xy[0].to('cpu').detach().numpy().copy()
        print(f"b : {b}")
@@robotmania8896 Fortunately, I was able to extract the keypoints. I used your same approach but thank you infinitely for your willingness and time! Your videos are always inspiring 💪🏼
Please!! What Ubuntu and Python versions are you using? I'm a newbie to YOLO and robotics! Respect for your help.
Hi Trường -!
Thanks for watching my video!
It is Ubuntu 20.04. The Python version is 3.8.
Hello! I'm facing an error saying: RuntimeError: Frame didn't arrive within 5000. Any solutions to this?
Hi Romet Arak!
Thanks for watching my video!
It is difficult to say from only the information you have provided. Are you using the USB cable that came with the RealSense? A low-quality USB cable may cause problems.
Hello, the video is very nice, but I want to do it using a USB camera instead of a RealSense camera. Is it enough to skip the RealSense part?
Hi Ahmet Can Eskikale!
Thanks for watching my video!
Yes, it is enough to skip the RealSense part. To obtain camera frames, you just have to use the following code.
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
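Putting that together, a minimal webcam inference loop might look like the sketch below (assuming the Ultralytics API; the model file and window name are placeholders):

import cv2
from ultralytics import YOLO

model = YOLO('yolov8m.pt')
cap = cv2.VideoCapture(0)  # 0 = first USB camera

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Run detection and draw the annotated boxes on the frame.
    results = model(frame)
    annotated = results[0].plot()
    cv2.imshow('yolov8', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()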
Are there any updates? I cannot get PyTorch to install. The steps have changed since this video. I have had my Orin Nano for two weeks and still cannot get it to do inference on the GPU. I am starting to lose hope in the Orin Nano.
Hi Thomas Dunn!
Thanks for watching my video!
Where exactly are you experiencing a problem?
@@robotmania8896 Thank you so much for the response! The commands on the Installing PyTorch for Jetson page have changed. I had to re-flash my SD card with the version you were using (5.1.2), then pause your video as you were highlighting and enter the commands manually. I was finally able to install PyTorch that way! I can now run YOLOv5 on my computer with GPU inference. I am having a problem getting YOLOv8 to work, however: I cannot get bounding boxes to show when I use v8 on my webcam.
@@robotmania8896 The Installing PyTorch for Jetson Platform page has changed; I believe it is to support JetPack 6. I had JetPack 6 installed and it would not work. I flashed JetPack 5.1.2 and it still did not work. I ended up watching your video, pausing as you highlighted the commands, and entering them manually, and it worked! Thank you so much for your help!!
@thomasdunn1906 I am glad that my video has helped you!
Were you able to run yolov8?
@@robotmania8896 Yes! Thank you so much. I could not have done it without your video.
Hello... How can I recognize only one kind of object, like potholes, using this project?
Thank you
Hi Gianluca De Musis!
Thanks for watching my video!
If you would like to recognize potholes, you have to use your own trained model (pt file). Change ‘yolov8m.pt’ (line 22 in the “yolov8_rs.py”) to your model’s name.
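Concretely, the change would look something like this sketch ('potholes.pt' and its path are hypothetical names for your custom-trained weights):

from ultralytics import YOLO
import os

# Point YOLO at your own trained weights instead of the stock COCO model.
model_directory = os.environ['HOME'] + '/models/potholes.pt'
model = YOLO(model_directory)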
Thank you for saving me ㅠ.ㅠ
My pleasure!
Hello :) Is it possible to use a Jetson Nano 2 GB with 2 USB webcams and YOLOv8?!
Hi Kawther Trabelsi!
Thanks for watching my video!
With a small image size and a small model, you will probably be able to execute YOLOv8 on the Jetson Nano. But you will not be able to execute inference simultaneously on both cameras; you will need to execute inference sequentially.
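As an illustration, a minimal sketch of sequential inference over two webcams (assuming the cameras enumerate as devices 0 and 1, and using a small model and image size as suggested above):

import cv2
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # small model to fit the 2 GB Nano
cams = [cv2.VideoCapture(0), cv2.VideoCapture(1)]

while True:
    # Run inference one camera at a time, never both at once.
    for i, cap in enumerate(cams):
        ret, frame = cap.read()
        if not ret:
            continue
        results = model(frame, imgsz=320)
        cv2.imshow(f'camera {i}', results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cap in cams:
    cap.release()
cv2.destroyAllWindows()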
Hello, is it possible to run YOLOv7 with these settings?
Yes, I think you will be able to execute YOLOv7 with the libraries that were installed in this tutorial.
@@robotmania8896 thank you
Can you please explain on a Windows computer?
Hi MrOrtach!
Thanks for watching my video!
I am currently not planning to make a video regarding RealSense and Windows. But here you can find a detailed explanation of how to build librealsense on Windows.
dev.intelrealsense.com/docs/compiling-librealsense-for-windows-guide
Hello, what Python version are you using?
Hi Nhật Phạm!
Thanks for watching my video.
I use python 3.10 in this tutorial.
@@robotmania8896 Can I use Python 3.11 to install Ultralytics on the Jetson Nano? I get an error when I use "pip3 install ultralytics".
Yes, you should be able to install ultralytics on Python 3.11. Please refer to this page.
docs.ultralytics.com/quickstart/
@@robotmania8896 Hey, how can I check my JetPack version?
I use a Jetson Nano, not a Jetson Orin Nano. Is setting these up any different? I can set up the pyrealsense2 lib.