joev valdivia
  • Videos: 83
  • Views: 179,427
Unitree 4D LiDAR L1 running on Jetson AGX Orin
Here is a video of me running the Unitree 4D LiDAR L1 on my Jetson AGX Orin. It gives impressive results for its price point. The next task is to align the feedback from the lidar with the feedback from a ZED 2 camera.
Here are some follow-up links.
Unitree lidar website:
shop.unitree.com/products/unitree-4d-lidar-l1
Unitree lidar git hub:
github.com/unitreerobotics/unilidar_sdk
Unitree lidar docs:
m.unitree.com/download/LiDAR
Views: 2,727
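Aligning the lidar with the camera comes down to applying a rigid-body extrinsic transform to each lidar point before comparing it with the ZED depth. Here is a minimal sketch; the rotation and translation values below are placeholders, not a real calibration of this rig:

```python
# Sketch: transform a lidar point into the ZED 2 camera frame using a
# hand-measured extrinsic (rotation R, translation t). The numbers here
# are placeholders, not a real calibration.

def transform_point(R, t, p):
    """Apply p_cam = R @ p_lidar + t using plain lists (3x3 R, 3-vector t)."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Identity rotation; lidar mounted 5 cm above and 10 cm behind the camera.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, -0.05, 0.10]

p_lidar = [1.0, 0.0, 2.0]        # a point 1 m to the right, 2 m ahead
p_cam = transform_point(R, t, p_lidar)
print(p_cam)                     # approximately [1.0, -0.05, 2.1]
```

With a real calibration, R and t would come from measuring (or solving for) the mount offset between the L1 and the ZED 2.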

Videos

Jetson AGX Orin + ZED2 + YOLO4 + MQTT + PLC
Views: 1K · a year ago
Here is a video of a YOLO4 application running on the Jetson AGX Orin using the ZED 2 camera. The application extracts the object description, object depth, and object X/Y pixel coordinates, then sends all that data to an MQTT server so it can be used elsewhere. It also shows another application displaying the MQTT data from the YOLO application while simultaneously sending that data to a PLC...
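A detection message of this kind can be sketched as a small JSON payload. The topic name and field names below are illustrative assumptions, not the exact schema used in the video:

```python
import json

def detection_payload(label, depth_m, x_px, y_px):
    """Bundle one detection (label, depth, pixel coords) for MQTT publishing.
    Field names are illustrative; the video's exact schema isn't shown."""
    return json.dumps({
        "object": label,
        "depth_m": round(depth_m, 2),
        "x": x_px,
        "y": y_px,
    })

payload = detection_payload("person", 3.217, 640, 360)
print(payload)
# With paho-mqtt this would be published for the PLC-side app to consume:
#   client.publish("jetson/detections", payload)   # hypothetical topic name
```

The PLC-side application would subscribe to the same topic and unpack the JSON before forwarding values to the PLC.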
Nvidia Jetson Orin Nano Dev Kit + Skeletal Tracking + DeepStream + YOLO Darknet + RIVA Speech SDK
Views: 3.3K · a year ago
Here is a video of the Nvidia Jetson Orin Nano dev kit being put to the test. It demonstrates skeletal tracking using a Stereolabs ZED 2 camera, an Nvidia DeepStream 6.2 YOLO example, an Nvidia RIVA Speech SDK example, and a YOLO Darknet example utilizing the Stereolabs ZED 2 camera. Links to in-depth explanations of the Jetson Orin specs: ruclips.net/video/qCAoPcMiR4k/видео.html ruc...
Zed camera measurement results against a laser measurement
Views: 359 · a year ago
This video compares the ZED camera's measurement results against a laser measurement result.
Nvidia TAO Toolkit 4.0 jupyter notebook setup and walk through Part 2
Views: 1.7K · a year ago
This is Part 2 of the Nvidia TAO Toolkit 4.0 Jupyter notebook setup and walkthrough video tutorial. It walks through the ActionRecognitionNet Jupyter notebook. Part 1 can be found here: ruclips.net/video/tguxnthQ10E/видео.html Link to the Nvidia TAO Toolkit 4.0 landing page: developer.nvidia.com/tao-toolkit Link to Nvidia TAO Toolkit 4.0 get...
Nvidia TAO Toolkit 4.0 jupyter notebook setup and walk through Part 1
Views: 3.3K · a year ago
Nvidia TAO Toolkit 4.0 has just come out. It's a powerful transfer-learning tool that can make your model training fast and accurate. Part 1 of the video tutorial shows some highlights of TAO Toolkit 4.0 and how to get the ActionRecognitionNet Jupyter notebook up and running. Part 2 will be the actual walkthrough of the ActionRecognitionNet Jupyter notebook. Link to part 2: ruclips.net...
Jetson Nvidia AGX Orin Python example of ZED camera stream + OpenCV + RTSP out
Views: 1.7K · a year ago
Here is a video that shows an Nvidia Jetson AGX Orin using Python to open a ZED camera stream, then using OpenCV to extract pixel coordinates and their associated depth data. It then displays the ZED camera stream using cv2.imshow and also sends the stream back out as an RTSP stream that can be viewed in a VLC player or fed into an Nvidia DeepStream pipeline. Here is a link to the GitHub repo ...
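The pixel-coordinate/depth extraction step can be sketched as a bounds-checked lookup into the depth map. In the real application the map comes from the ZED SDK's depth retrieval; here a tiny nested list stands in for it:

```python
def depth_at(depth_map, x, y):
    """Return the depth value (meters) at pixel (x, y), or None when the
    pixel is out of bounds or the depth is invalid (NaN or non-positive,
    e.g. from occlusion). depth_map is rows of floats, indexed [y][x]
    like an OpenCV image."""
    if y < 0 or y >= len(depth_map) or x < 0 or x >= len(depth_map[0]):
        return None
    d = depth_map[y][x]
    if d != d or d <= 0.0:   # d != d is the stdlib-only NaN check
        return None
    return d

# Tiny 2x3 stand-in depth map (the ZED SDK would provide a full-frame one).
dm = [[1.5, 2.0, float("nan")],
      [0.0, 2.5, 3.0]]
print(depth_at(dm, 1, 1))   # 2.5
print(depth_at(dm, 2, 0))   # None (NaN measurement)
print(depth_at(dm, 5, 5))   # None (out of bounds)
```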
Cats personal robot ride
Views: 239 · 2 years ago
Here are a couple of videos of my cat enjoying his iRobot Create 3.
Nvidia Xavier NX & iRobot Create 3 & RPlidar A1 & Stereolabs ZED 2 camera
Views: 1.2K · 2 years ago
Here is a video of an iRobot Create 3 being controlled with an Nvidia Xavier NX. It also shows the Xavier NX running a lidar sensor display output from the Slamtec RPlidar A1 and an object recognition and tracking program using the ZED 2 camera, all at the same time. Here are links to items in the video. Excellent set of tutorials about ROS 2 and the iRobot Create 3: github.com/paccionesawyer/Create...
Latest Nvidia TAO Toolkit with TensorBoard integration Demo
Views: 2.5K · 2 years ago
Here is a video of the latest Nvidia TAO Toolkit with TensorBoard integration. The Nvidia team has done a great job integrating TensorBoard into the TAO Toolkit Jupyter notebook examples. This will really help cut down model training time. Nvidia TAO Toolkit: developer.nvidia.com/tao-toolkit Nvidia TAO Toolkit TensorBoard visualization: docs.nvidia.com/tao/tao-toolkit/text/tensorboard_visua...
Nvidia Jetson AGX Orin running Deepstream 6.0 YOLO model using Stereolabs' ZED 2 video streams
Views: 4.8K · 2 years ago
Here is a video of the Jetson AGX Orin running a DeepStream 6.0 YOLO3 model using the video streams coming from the ZED 2 stereo camera. It uses GStreamer to build the pipeline that runs the DeepStream YOLO model. This will also run on the Xavier NX and Jetson Nano. Link to the GitHub repo with instructions to run it and prebuilt GStreamer pipelines: github.com/valdivj/gstream_Deep
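A DeepStream pipeline of this shape is typically assembled as a gst-launch-style string. The sketch below only builds such a string from standard DeepStream element names (nvstreammux, nvinfer, nvdsosd); the repo's actual prebuilt pipelines may differ:

```python
def deepstream_pipeline(uri, config="config_infer_primary_yoloV3.txt",
                        width=1280, height=720):
    """Assemble a gst-launch-style DeepStream pipeline string: decode a
    source, batch it with nvstreammux, run nvinfer with a YOLO config,
    draw boxes with nvdsosd, render with nveglglessink. Element names
    are standard DeepStream elements; the repo's actual pipeline may
    differ (e.g. in how the ZED streams are fed in)."""
    return " ! ".join([
        f"uridecodebin uri={uri}",
        f"m.sink_0 nvstreammux name=m batch-size=1 width={width} height={height}",
        f"nvinfer config-file-path={config}",
        "nvvideoconvert",
        "nvdsosd",
        "nveglglessink",
    ])

p = deepstream_pipeline("rtsp://127.0.0.1:8554/zed")
print(p)
# The resulting string could be passed to gst-launch-1.0 or Gst.parse_launch.
```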
Nvidia Jetson AGX Orin +ROS 2 with ZED 2 camera +RIVA embedded python +Deepstream model all at once!
Views: 2.8K · 2 years ago
Here is a video of the Nvidia Jetson AGX Orin running ROS 2 with a ZED 2 camera, a Python RIVA embedded voice-recognition example, and a DeepStream model example, all running at the same time. Link to the Stereolabs GitHub repo that contains all the magic: github.com/stereolabs/
Nvidia Jetson AGX Orin Dev Kit & Jetpack 5.0 & ZED 2 stereo camera
Views: 4.3K · 2 years ago
Here is a video of the Jetson AGX Orin dev kit with JetPack 5.0 installed and the ZED 2 stereo camera in action. It shows the instructions to load the SDK for the camera and then runs a couple of Python and GStreamer demos. SDK for Nvidia JetPack 5.0 DP: www.stereolabs.com/developers/release/ Instructions to load the ZED Python API: github.com/stereolabs/zed-python-api Instructions to install Ze...
Nvidia Jetson AGX Orin running Riva ASR + Peoplenet + caffemodel + YOLO3 locally at the same time
Views: 1.5K · 2 years ago
Here is a video that shows how well the Jetson AGX Orin performs when running the Riva ASR model, the DeepStream resnet34 PeopleNet model, the DeepStream resnet10 caffemodel, and the DeepStream YOLO3 model, all at the same time, locally on the Jetson AGX Orin.
Nvidia Jetson AGX Orin running a model trained with the Nvidia TAO toolkit in the cloud
Views: 1.9K · 2 years ago
Here is a video of the Nvidia Jetson AGX Orin running a custom model trained in the cloud using the Nvidia TAO Toolkit. Nvidia has put together a software stack that can go from dataset to training to deployment in a short time frame with excellent results. Link to setting up the TAO Toolkit in the cloud: docs.nvidia.com/tao/tao-toolkit/text/running_in_cloud/overview.html
Nvidia Jetson AGX Orin developer kit Hands On Demo Ver1
Views: 2.8K · 2 years ago
Nvidia Jetson AGX Orin GeekBench results
Views: 1.9K · 2 years ago
Speeding up AI Model Development with TAO Toolkit
Views: 1.7K · 2 years ago
Nvidia Xavier NX running Deepstream pipeline using a model trained on synthetic data
Views: 1K · 2 years ago
Using Nvidia Deepstream 6 to generate an image data set with KITTI formatted annotations.
Views: 1.2K · 2 years ago
Nvidia TAO toolkit using Synthetic Data generated by the Omniverse Isaac Sim to retrain model
Views: 1.4K · 2 years ago
Nvidia Omniverse Isaac Sim Synthetic data generation in the KITTI format for the TAO toolkit
Views: 2.1K · 2 years ago
long winded explanation of deepstream deployment using TAO toolkit with a custom data set
Views: 1.4K · 2 years ago
Nvidia Xavier NX running deepstream and detecting 70 different objects
Views: 507 · 2 years ago
Object Detection unit based on Nvidia products detailed Demonstration video
Views: 261 · 2 years ago
Object detection product based on Nvidia Xavier NX, Yolo3 and Deepstream
Views: 809 · 2 years ago
Nvidia Jetson nano relays designed for the Nano/Xavier NX
Views: 1.4K · 3 years ago
Nvidia Transfer Learning Toolkit 3 trained with synthetic data from AIReverie
Views: 678 · 3 years ago
NVIDIA Jetson Nano + Ethernet relay board + Easy Pezzy
Views: 1.1K · 3 years ago
Nvidia Deepstream + QuickHMI Web-based Scada / HMI system + PLC + Xavier NX
Views: 1.1K · 3 years ago

Comments

  • @jajaboss
    @jajaboss 17 days ago

    I got the data, but do you know how to turn it into a PCL file?

  • @abdobabukr1811
    @abdobabukr1811 25 days ago

    Is there a way to get this with PoE?

  • @eBautistaBau
    @eBautistaBau a month ago

    Hey, thanks for your tutorial. I know it is a little outdated, but do you know if it works with a Jetson Orin Nano? Once I use the command "ninja" it does not finish the process, and I cannot use the camera with the computer. Any suggestions? Thanks in advance.

  • @canofpulp
    @canofpulp a month ago

    Would you please donate the sensor to me for my robot

  • @asdfds6752
    @asdfds6752 a month ago

    Is there any chance to get a nicely documented walkthrough on how to train a model using TAO? It is incredible that there is no such material. Anybody helping a dummy like me? It is super frustrating to get these quick and dirty videos. They are super useless.

  • @Dr.pabloschmirtz
    @Dr.pabloschmirtz a month ago

    Excellent, good work, it's a real contribution! Greetings!!

  • @ivanomilo9438
    @ivanomilo9438 3 months ago

    thank you for making this video!

  • @deplorablesecuritydevices
    @deplorablesecuritydevices 3 months ago

    What kind of frame rates are you getting?

  • @pranavpatil7312
    @pranavpatil7312 3 months ago

    Can I use this expansion board with the Jetson Nano development board to implement CAN communication between the different sensors and the microcontroller, and to control the motor with a CAN-compatible motor driver?

  • @ericyoung3183
    @ericyoung3183 4 months ago

    Wondering how MediaTek's integration with NVIDIA TAO empowers edge AI innovation, enabling faster and more efficient AI processing at the edge?

  • @3s843a
    @3s843a 4 months ago

    I'm gonna grab a unit.

  • @rafaelmunozlozada2802
    @rafaelmunozlozada2802 6 months ago

    Hi, can you see the IMU data in RViz? There is no data published on tf to see the IMU behavior.

  • @MOSSAI362
    @MOSSAI362 7 months ago

    Hello Joev, huge fan of your work. What's the best way to have a deeper conversation with you about future AI devices using Jetsons?

  • @CalTN
    @CalTN 7 months ago

    Joe, I found your work on the Ignition maker project showcase. It inspired me to take on a similar project for my senior capstone project. Your website, AI Triad, seems to be offline. Do you have a contact email at which I can send you some questions I have? Thanks, CT

    • @joevvaldivia
      @joevvaldivia 7 months ago

      You can contact me at joev@rfpco.com.

  • @sirens3237
    @sirens3237 7 months ago

    Hey man, love your work. Are you going to GTC? Or are you in the OV Discord?

    • @joevvaldivia
      @joevvaldivia 7 months ago

      Thanks for the good word. I have prior commitments so I will have to attend GTC online. Not happy about that, not happy at all. Really wanted to be there in person.

  • @ethanholter
    @ethanholter 8 months ago

    I bought this very same lidar but have been having issues with the USB device being unrecognized if the adapter module was being supplied power at the same time. Was wondering if you've had similar issues.

    • @joevvaldivia
      @joevvaldivia 7 months ago

      I had to send back the first one I got because I had the same issue.

    • @ethanholter
      @ethanholter 7 months ago

      @@joevvaldivia Thank you! Glad I’m not the only one

  • @thomasdunn1906
    @thomasdunn1906 8 months ago

    How in the world did you manage to get GPU inference on the Jetson Orin Nano? I am trying to run YOLOv5 (or any YOLO version) and cannot get it to run on the GPU.

    • @joevvaldivia
      @joevvaldivia 8 months ago

      Are you using jetson-stats to monitor the Nano? pypi.org/project/jetson-stats/ Everything I run just seems to default to the GPU. If you have installed TensorFlow, make sure it's the GPU version. If you run YOLO using DeepStream, it defaults to the GPU.

  • @jorgegonzalez7499
    @jorgegonzalez7499 8 months ago

    Hello Joev. Very nice to see your tests. I bought one last week and I could make it work in a Windows environment. I was a bit puzzled because the company who sent it is Youyeetoo, but the manufacturer seems to be Unitree. I wrote them and both replied with software for Windows and Linux machines. They told me the only way to get the point cloud is through the SDK. I made that work in Windows, but it seems it just captures from one point, so the SLAM solution must be under Ubuntu. I'm finding some issues installing the Ubuntu version, and I was wondering if you would be so kind as to share how you set up the Linux software please? I think I will also write them again, but it's great to see that working. Also, I had the same issue with the rotation: it cannot be placed on a table without sticking it to something. I was wondering what you used as a base, and if it is possible to use a battery instead of a charger. Would you mind giving me some advice please? Thanks so much and great to see your videos!

    • @joevvaldivia
      @joevvaldivia 8 months ago

      It's nice to see somebody else giving this lidar a chance. Here is a link to all the docs for the Unitree 4D LiDAR L1: m.unitree.com/download/LiDAR Down at the bottom of the page are downloads for the SLAM and data applications. I used the doc "Unilidar SDK User Manual_v1.0" to set up the SDK and run RViz with ROS 2 on Ubuntu 20. Located in the folder "C:\a42f75fdba044f8a9f73ba1972488027\unitree_lidar_sdk\examples" are 2 programs that are quite useful: a C++ data publisher, "unilidar_publisher_udp.cpp", and a Python data subscriber, "unilidar_subcriber_udp.py". After some experimenting I will post a video with more results.

    • @jorgegonzalez7499
      @jorgegonzalez7499 8 months ago

      @@joevvaldivia Great! Thanks so much Joev! Really appreciate it; these are exactly what I'm looking for. I managed to make it work under Windows, but SLAM is what I really need. The laser is great but, as you said, it would be great if Unitree could develop some video tutorials about it, especially how to increase the number of points and how to export it via the SDK. But it's also great trying to figure it out. Great to see your videos. I will look forward to your next tests and I will try to post mine too. Thanks so much!

  • @PAFFO
    @PAFFO 8 months ago

    How did you learn all this stuff? I just got my hands on a Jetson and I didn't even know where to start :(

  • @rickt1866
    @rickt1866 9 months ago

    Thank you for sharing.

  • @erikflores4389
    @erikflores4389 10 months ago

    Excellent video. Maybe a video with body joint detection with the Jetson Nano + Kinect v2?

  • @erikflores4389
    @erikflores4389 10 months ago

    Very good, it has helped me a lot. Maybe Kinect v2 + processing on a Jetson Nano?

  • @R00kTruth
    @R00kTruth 11 months ago

    After many hours/days, with so much information that had overloaded my brain, looking into OpenNI/ROS/Libfreenect2 and freenect2... I have finally got mine to work. I had thought my machine wasn't going to work, as it's rather old. I will look more into what you've presented here. Watching your video has got me thinking, though: would it not be more suitable to also include the IR for more accuracy in picking objects up? The data could be used to "paint" the object so that the depth camera can focus on it better, if that makes sense lol. Thank you for the video and information.

  • @viorelgheorghe5655
    @viorelgheorghe5655 a year ago

    Ohhh! Subscribed!

  • @Wj3399
    @Wj3399 a year ago

    Hey Joev, thanks for the great video! It's odd that CSI is choppy. I was wondering what the fps is compared to USB; I thought USB would have more CPU processing overhead and thus fewer fps compared to CSI?

  • @CircusSimon
    @CircusSimon a year ago

    Hi Naidol. Love your video. I'm currently working on a project for which the Create 3 robot might work. Can I ask what is the maximum payload you can place on top and still have the robot working effectively? And how loud is it? I'm currently using a Roomba and noticing it is quite loud. Any help is much appreciated.

  • @ENGEL333
    @ENGEL333 a year ago

    Hello, Joev. Maybe a stupid question, but I can't find the answer. Does it do all the calculation in hardware onboard, or does it only provide pictures, with all depth-map calculations done in software on an external processor?

  • @viorelgheorghe5655
    @viorelgheorghe5655 a year ago

    🥰Respect! I've seen MQTT... and I said, nice, but only later did I understand you made something with a PLC!!! Btw, have you tested anything with the Orin NX 16GB?

  • @eddydewaegeneer9514
    @eddydewaegeneer9514 a year ago

    Hey Joe, great video. But how/where (URL) can I download the sources for these benchmarks and TAO etc.?

  • @ktysai
    @ktysai a year ago

    Hello, which Orin exactly do you have coupled with the ZED 2?

  • @bbamboo3
    @bbamboo3 a year ago

    Good to see your process and the power of this thing. I suppose more memory would be useful.

  • @ego4892
    @ego4892 a year ago

    You are using an HTTP address not an RTSP stream...

  • @deplorablesecuritydevices
    @deplorablesecuritydevices a year ago

    Thanks so much for making this video. I have been thinking of getting one.

  • @Iturner72
    @Iturner72 a year ago

    Gigachad, exactly what I needed. Thank you sir!

  • @andreabasile8320
    @andreabasile8320 a year ago

    Hi, thanks for your video!! It really helps me a lot. Could you write the name of the software you use to monitor the Jetson performance, please? I need it for a university project. Thanks ;-)

  • @NajwaBelarbi-s9c
    @NajwaBelarbi-s9c a year ago

    Thank you for the video. Have you tried YOLOv8 with the Jetson Orin Nano?

  • @granatapfel6661
    @granatapfel6661 a year ago

    How did you fuse the two frames from the stereo camera? I always thought you could only get two separate frames from the cam?

  • @ayoubbensakhria
    @ayoubbensakhria a year ago

    Fantastic tutorial, thank you

  • @josecas9099
    @josecas9099 a year ago

    Hello, I recently bought a Jetson Orin Nano and it's amazing. Do you suggest running these applications inside containers?

  • @vazquezelectronics8334
    @vazquezelectronics8334 a year ago

    Hi, thanks for your videos, I like them. Do you know if it's possible to install DeepStream 6.2 on my Nvidia RTX 4090 with Ubuntu 22.04, or do I need to downgrade to Ubuntu 20.04? Thanks

    • @joevvaldivia
      @joevvaldivia a year ago

      DeepStream 6.2 seems to be built for Ubuntu 20.04. The Nvidia forums may have a more definitive answer for you.

    • @vazquezelectronics8334
      @vazquezelectronics8334 a year ago

      @@joevvaldivia 👍 I'm going to install Ubuntu 20.04 because I had a lot of issues with the new version. I'll let you know if everything works fine.

  • @cahlen
    @cahlen a year ago

    I have your same setup. I'm trying to think of a cool thing to do with skeletal tracking outside of the typical UE / Unity LiveLink stuff. It would be cool to live-render something in three.js for cross-platform access. I know how to work with animations without issue, but mapping skeletal rigs is a whole other thing.

  • @utkarshkushwaha6636
    @utkarshkushwaha6636 a year ago

    Hi Joe, thanks for the video, it's really helpful. I faced an issue while working with this app: I git cloned your repo and got all the dependencies, but I keep getting this error: ERROR: nvdsinfer_backend.cpp:38 cudaStreamCreateWithPriority failed, cuda err_no:222, err_str:cudaErrorUnsupportedPtxVersion 0:05:51.168639720 55514 0x55922bab1b30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::allocateResource() <nvdsinfer_context_impl.cpp:293> [UID = 1]: Failed to create preprocessor cudaStream 0:05:51.168660590 55514 0x55922bab1b30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:1029> [UID = 1]: preprocessor allocate resource failed ERROR: nvdsinfer_context_impl.cpp:1275 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_TENSORRT_ERROR 0:05:51.197942721 55514 0x55922bab1b30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance 0:05:51.197986385 55514 0x55922bab1b30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary-inference> error: Config file path: config_infer_primary_yoloV3.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR Do you know what might be causing it? A reply would be really helpful, thanks again.

  • @1stfafafa
    @1stfafafa a year ago

    Hi Joev, have you made any progress on getting the depth in DeepStream through GStreamer? Sadly the RTSP-server solution introduces a lot of delay.

    • @joevvaldivia
      @joevvaldivia a year ago

      I have not. There is depth data in the GStreamer stream, but it's beyond my capabilities to figure out how to get it out. What I have done is use this example: github.com/stereolabs/zed-yolo It's easier to get the depth data out, and the AGX Orin can run it at 28 FPS.

  • @saifzamer5432
    @saifzamer5432 a year ago

    Hi Mr. Valdivia, thank you for your work. I ran your code on my Jetson AGX Orin devkit and I wanted to launch the stream onto my laptop straight from the ZED 2i camera. I'm using VLC media player on my laptop and I tried to open a network stream, but it won't open, telling me that it can't reach localhost:8554. Any advice on how to fix this? Or do I have to put in the IP address of the Jetson AGX Orin?

    • @joevvaldivia
      @joevvaldivia a year ago

      Yes, you have to use the IP address of the AGX Orin.

    • @saifzamer5432
      @saifzamer5432 a year ago

      @@joevvaldivia Hi again, sorry for the disturbance, but I connected them to the same network and used the IP address of said Orin; it worked in my office but not at home. Are there any common problems with the connection?

    • @joevvaldivia
      @joevvaldivia a year ago

      @@saifzamer5432 If the wired connection and WiFi are 2 different IP addresses, there is sometimes an issue with the Orin deciding which path to use. You may have to shut down one of the connections.

    • @saifzamer5432
      @saifzamer5432 a year ago

      @@joevvaldivia I'm only using the WiFi connection. However, on one of the Orins I am facing no problems: it immediately connects to the VLC media player and I'm able to open the stream. On the second one I am using ZED SDK 3.8.2 and your GitHub repository. Any idea why this might be happening? Should I install any additional programs?

    • @saifzamer5432
      @saifzamer5432 a year ago

      @@joevvaldivia I am getting the following errors in my log: live555 error, satip error, access_realrtsp error

  • @davewesj
    @davewesj a year ago

    Love your stuff... but the volume is low.

    • @joevvaldivia
      @joevvaldivia a year ago

      Thanks. I do it for fun and to keep my brain sharp. My hope is when I retire I can devote more time to it.

  • @imenselmi9230
    @imenselmi9230 a year ago

    Thank you for the amazing playlist on the NVIDIA TAO Toolkit! Could you please add another video about Mask R-CNN with a custom dataset using the NVIDIA TAO Toolkit? I'm eager to try it out and see the results. I believe it will be quite fascinating.

  • @salwaghanem-c1m
    @salwaghanem-c1m a year ago

    Thanks for sharing. You're a hidden gem; I just found your channel and I'm binging your videos. Can you please make a detailed video on YOLOv8 custom-object DeepStream with Neural Magic?

    • @joevvaldivia
      @joevvaldivia a year ago

      I will put that on the to-do list. In the meantime this may be interesting: github.com/marcoslucianops/DeepStream-Yolo

  • @YifeiLiu-h4l
    @YifeiLiu-h4l a year ago

    Hi, thanks for sharing this great work and video! I'm also working on something similar to record traffic. I'm wondering what tools you used to mount the camera on the car? And which battery did you use? Thanks :)

  • @siddharthrana-vb8uk
    @siddharthrana-vb8uk a year ago

    Hi Joev, thanks for the video, but I want to do skeleton tracking with IDs. How do I do that with the ZED camera? Please guide me.

  • @computervision-er8xo
    @computervision-er8xo a year ago

    My RTSP link doesn't have the .sdp format; what can I do?