This is awesome, using Jetson Nano for my college senior project and doing AI object detection to detect whether someone is sitting in a chair or not.
Very good breakdown
Thanks for the video. I would definitely pay for an in-depth course on AI/NVIDIA.
Excellent! Thanks for sharing!
For the bird table, could you combine detection, tracking and edge detection to identify the bird from the front/side, then detect when it turns and add those frames to the set of images for learning with the bounds and classification specified automatically? Great video by the way, I knew nothing about ML before this video, now I'm chomping at the bit and ready to get started!
This is a great reference video. I'm trying to identify the location of objects in "cells" from an aerial view about 30 feet up. Luckily, all my objects land in a specific 6x6 foot square area whenever they need to be detected. Would I be able to use fewer training images given the specific conditions of my vision area?
The term "grok" came about from the book Stranger In A Strange Land. 'PITA' means Pain In The Ass. :) ;) Now, you have learned two new terms you can use. ;) I expect to hear you use both of these real soon. ;0
Fascinating video, thank you. Sorry for the probably obvious question (I’m trying to get my head around the basics!) Once you have training images and the network has been trained on them, does the network gain further ‘experience’ as you expose it to images for identification, or is its knowledge now set in stone? Many thanks!
Have you done any work with sensor fusion, using multiple input types to build a model? For example image and audio? Or even visible light and IR cameras?
I’ve not done any sensor fusion, yet!
@@kevinmcaleer28... "yet" 😁😁❤️
The Raspicam V1.3 didn't work for me either. Apparently the Jetson Nano only supports the Raspicam V2 (IMX219 sensor) out of the box. I was able to get a regular USB camera to work.
How can I plot and visualize the train and test charts? What I want to do is show the training results ("loss" and "accuracy") as graphs ("loss" vs "epochs" / "accuracy" vs "epochs") and create a "confusion matrix".
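Here's a rough sketch of one way to do that, assuming you've already collected per-epoch metrics yourself (for example by parsing the training log) and have true vs predicted labels for a test set. All values and class names below are made-up placeholders:

```python
# Plot loss/accuracy curves and a confusion matrix from metrics you have
# recorded yourself. The numbers and class names are placeholders.
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

epochs = [1, 2, 3, 4, 5]
train_loss = [2.1, 1.4, 0.9, 0.7, 0.6]
val_accuracy = [0.55, 0.68, 0.75, 0.80, 0.82]

# loss vs epochs
plt.figure()
plt.plot(epochs, train_loss, marker="o")
plt.xlabel("epochs")
plt.ylabel("loss")

# accuracy vs epochs
plt.figure()
plt.plot(epochs, val_accuracy, marker="o")
plt.xlabel("epochs")
plt.ylabel("accuracy")

# confusion matrix from true vs predicted class names
labels = ["apple", "banana", "orange"]
y_true = ["apple", "banana", "apple", "orange", "banana"]
y_pred = ["apple", "banana", "orange", "orange", "banana"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
ConfusionMatrixDisplay(cm, display_labels=labels).plot()

plt.show()
```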
Can these examples run on the 2GB RAM version?
Hello, after training my custom dataset and exporting it to ONNX, it gives me an error stating OSError: couldn't find valid .pth checkpoint under 'models/TuodMango'
Is it possible to do transfer learning from video instead of camera?
It would need to be still images for transfer learning, so you could use video as long as you can freeze frame it to outline the image to be captured. I don't think there is a way round the hard work required to build the library of images for learning.
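As a minimal sketch of that freeze-frame approach, here's one way to save every Nth frame of a video to a folder with OpenCV, so you can label the stills afterwards. The video filename and output folder are hypothetical placeholders:

```python
# Turn a video into still images for labelling: save every Nth frame.
# 'bird_table.mp4' and the 'frames' folder are placeholders.
import os
import cv2

video_path = "bird_table.mp4"
output_dir = "frames"
every_nth = 15  # roughly 2 stills per second for a 30 fps video

os.makedirs(output_dir, exist_ok=True)
capture = cv2.VideoCapture(video_path)

frame_index = 0
saved = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # end of video
    if frame_index % every_nth == 0:
        cv2.imwrite(os.path.join(output_dir, f"frame_{saved:05d}.jpg"), frame)
        saved += 1
    frame_index += 1

capture.release()
print(f"Saved {saved} frames to {output_dir}")
```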
Hi Kevin, if I have a large dataset of thousands of images, how can I train on that? Using the asset capture tool would take forever to complete. Is there a tool for that?
That's the bit that can't be done automatically, unfortunately. I'm not aware of a tool that can do that!
Hi Kevin, I've subscribed. How do I run the program at 28:17 using a CSI camera?
How can I train a model in Colab or on a GPU computer and then use the model on Jetson Nano?
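One common route (not covered in the video, so treat this as a sketch) is to train with PyTorch on the GPU machine, export the weights to ONNX, then copy the .onnx file plus your labels file to the Nano for TensorRT to load. The model class and file names below are stand-ins for your own:

```python
# Export a PyTorch model trained elsewhere (e.g. Colab) to ONNX so the
# Jetson Nano can run it. resnet18 and the file names are placeholders -
# substitute your own network and checkpoint.
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=3)
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # match your training input size
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input_0"], output_names=["output_0"])
# Then copy model.onnx and your labels file to the Nano and point the
# jetson-inference tools at them with --model and --labels.
```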
Hi, what streaming app did you use, and what did you use for the presentation slides?
you have a beautiful presentation
I use Apple Keynote for slides and Ecamm Live for live-streaming
Hi Kevin, I'm trying to run inference on more than 25 objects in an image. Any idea how that limit can be removed?
I'm not sure where the limitation exists, that might be a tensorflow thing
Did you use power point?
I use Apple Keynote
@@kevinmcaleer28 Thanks!!
Thanks for the video. One question: have you learned how to run the custom model from a Python 3 script (not just from the terminal)? I am attempting to adapt the sample tutorial from "Jetson AI Fundamentals S3E4" provided by NVIDIA (it's on YouTube and a good reference), which runs a pre-trained model from a 10-line Python script. It would be very useful to do the same for custom models.
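For what it's worth, the jetson-inference Python bindings can load a custom re-trained ONNX model by passing the model paths as extra arguments, much like the terminal version. A minimal sketch is below; the 'models/fruit/...' paths and camera URI are placeholders, and the exact import style varies between jetson-inference versions:

```python
# Minimal sketch: run a custom SSD-Mobilenet ONNX model from Python using
# the jetson-inference bindings. Model/label paths and the camera URI are
# placeholders for your own setup.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(
    argv=[
        "--model=models/fruit/ssd-mobilenet.onnx",
        "--labels=models/fruit/labels.txt",
        "--input-blob=input_0",
        "--output-cvg=scores",
        "--output-bbox=boxes",
    ],
    threshold=0.5,
)

camera = jetson.utils.videoSource("csi://0")      # or "/dev/video0" for USB
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus(f"Detected {len(detections)} objects")
```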
Hi Kevin McAleer-san, I can run it on my custom data, but I want to stream the detections over RTSP to VLC. Can you help me get an RTSP stream?
the best way to get help is to join our discord group - lots of smart people in there who can help: action.smarsfan.com/join-discord
I am getting the same error that you ran into at 49:46, but you did not go through any solution for it. Has anyone else had this issue? I cannot find it online.
If you are not making mistakes you have no opportunity to learn. ;) Hurry up and make a misteak so you can get on to the next one! ;)
Can you make a skill where we can listen to music?
Watched 45 minutes just for him to fail. Not worth it.
Fail? Can you explain - the demo works fine
Welcome to the world of software engineering! :) The demo was great, even if some of the commands didn't execute