AI on the Jetson Nano LESSON 50: Introduction to Deep Learning and Deep Neural Networks

  • Published: Dec 21, 2024

Comments • 62

  • @opalprestonshirley1700
    @opalprestonshirley1700 4 years ago +2

    The lessons are never too long; it just takes the time it takes. Thanks Paul.

  • @chillcopyrightfreemusic
    @chillcopyrightfreemusic 2 years ago

    You are fantastic. Thank you for breaking things down in a simple manner. Subscribed!!

  • @quaternion-pi
    @quaternion-pi 4 years ago +2

    We need to get you a larger audience of passionate, motivated learners. I hope others will consider Patreon support. Excellence should be encouraged. Without your series I would have given up on AI, despite trying very hard to break into the subject. Thanks.

    • @paulmcwhorter
      @paulmcwhorter  4 years ago +4

      Thanks for the comment and support. One of the challenges we face is that AI, and CS in general, tends to be a tight-knit group of subject matter experts. They speak their own language, and it is almost impossible to break into their club, because it is like they speak a secret language and you do not have a decoder ring. When they teach, they use their special language, so you cannot even understand what they are trying to teach. What I have tried to do in these lessons is teach in plain English. I assume you are smart and willing to work hard, but not a CS expert. Hope it will help more people enjoy this fascinating field.

  • @ezio8000
    @ezio8000 3 years ago +2

    I added some timers to the last program; the listing is below. The frame rate with the piCam was 15 fps, or about 60 ms per loop. The times for each statement are as follows:
    cam1.read: 2 ms
    cvtColor: 9 to 12 ms
    cudaFromNumpy: 26-29 ms
    classID: 5 ms
    Total for these 4 statements: 42-48 ms. This information can help with the frame rate, but it doesn't explain the latency. I think the latency is the time from the CPU getting the frame to the cv2.imshow. It must be backed up and dropping frames.
    import jetson.inference
    import jetson.utils
    import cv2
    import numpy as np
    import time

    print("click on display, then press 'q' to quit")
    # lowest piCam resolution
    width = 1280
    height = 720
    dispW = width  # needed in camSet
    dispH = height
    flip = 2
    camSet = 'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method=' + str(flip) + ' ! video/x-raw, width=' + str(dispW) + ', height=' + str(dispH) + ', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
    cam1 = cv2.VideoCapture(camSet)  # piCam
    font = cv2.FONT_HERSHEY_SIMPLEX
    net = jetson.inference.imageNet('googlenet')
    timeMark = time.time()
    fpsFilter = 0  # used to smooth out fps calculation
    while True:  # press q to quit
        startTime = time.process_time()
        _, frame = cam1.read()
        endTime = time.process_time()
        print("_,frame = cam1.read() time: " + str(endTime - startTime))
        startTime = time.process_time()
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA).astype(np.float32)
        endTime = time.process_time()
        print("cvtColor(frame, cv2.COLOR_BGR2RGBA) time: " + str(endTime - startTime))
        startTime = time.process_time()
        img = jetson.utils.cudaFromNumpy(img)
        endTime = time.process_time()
        print("jetson.utils.cudaFromNumpy(img) time: " + str(endTime - startTime))
        startTime = time.process_time()
        classID, confidence = net.Classify(img, width, height)
        endTime = time.process_time()
        print("classID, confidence = net.Classify(img, width, height) time: " + str(endTime - startTime))
        item = net.GetClassDesc(classID)
        dt = time.time() - timeMark  # dt is change in time
        fps = 1 / dt
        fpsFilter = 0.95 * fpsFilter + 0.05 * fps  # exponential average, roughly the last 20 frames
        timeMark = time.time()
        cv2.putText(frame, str(round(fpsFilter, 1)) + ' FPS ' + item, (0, 30), font, 1, (0, 0, 255), 2)
        cv2.imshow('recognized', frame)
        cv2.moveWindow('recognized', 0, 0)
        if cv2.waitKey(1) == ord('q'):
            break
    cam1.release()
    cv2.destroyAllWindows()
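    A note on the measurements above: time.process_time() counts CPU time only, so calls that block without burning CPU (a camera read waiting on the driver, CUDA synchronization) can look cheaper than they really are. A sketch of a hypothetical timed() helper that reports wall-clock and CPU time side by side:

```python
import time

def timed(label, fn, *args):
    # perf_counter() includes blocked time (I/O, GPU sync);
    # process_time() counts CPU time only.
    wall0, cpu0 = time.perf_counter(), time.process_time()
    result = fn(*args)
    wall_ms = (time.perf_counter() - wall0) * 1000
    cpu_ms = (time.process_time() - cpu0) * 1000
    print(f"{label}: wall {wall_ms:.1f} ms, cpu {cpu_ms:.1f} ms")
    return result

# A call that sleeps shows the gap: large wall time, near-zero CPU time.
timed("sleep", time.sleep, 0.05)
```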

  • @KrishnanS-mt8zw
    @KrishnanS-mt8zw 4 years ago +1

    Respected sir, I am Krishnan from India. I am working as an assistant professor at an engineering college. I really love all your lectures and tutorials: Arduino, Raspberry Pi, AI, and more. As a lecturer, I share your tutorials with my students' group.
    Thank you for your great job.

    • @wishicouldarduino8880
      @wishicouldarduino8880 4 years ago

      Well, I have to say your tutorials on OpenCV, Visual Studio, and such teach the most. I just need to figure out a way to get it into a ROS node. That's a challenge, but I think it can work; it's all the same packages.

  • @ttaylor9916
    @ttaylor9916 11 months ago

    Thanks!

    • @paulmcwhorter
      @paulmcwhorter  11 months ago

      Really appreciate the support. Means a lot to me, thanks!

  • @thomascoyle3715
    @thomascoyle3715 4 years ago +2

    I have two different webcams, each with a different die size. Both indicated that they can do 1280 x 720; however, one was limited to 800 x 600 while the other could do 1280 x 720. The one that could do only 800 x 600 hung GStreamer when I set the width and height to 1280 x 720, and the only way out was to reboot the Nano. I noticed that the higher the requested width and height, the slower the frame rate, which is to be expected. 640 x 480 gave the fastest frame rate. These frame rates are only using jetson-inference/utils and not the conversion from jetson-inference to OpenCV, which is covered later in the tutorial. Regards, Tom C
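    For anyone hitting the same wall: the set/get dance below is one way to see which mode a webcam actually granted, since many UVC drivers silently fall back to the nearest supported mode instead of erroring. A sketch; behavior is driver-dependent, and the numeric fallback constants are only used when OpenCV is absent:

```python
try:
    import cv2
    CAP_W, CAP_H = cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT
except ImportError:
    CAP_W, CAP_H = 3, 4  # OpenCV's stable videoio property ids

def probe_resolution(cap, width, height):
    """Request width x height and report what the device actually delivers."""
    cap.set(CAP_W, width)
    cap.set(CAP_H, height)
    actual = (int(cap.get(CAP_W)), int(cap.get(CAP_H)))
    return actual == (width, height), actual

# Usage with a real camera:
#   cam = cv2.VideoCapture(0)
#   ok, actual = probe_resolution(cam, 1280, 720)
#   print('supported' if ok else 'fell back to %dx%d' % actual)
```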

  • @ricardobjorkeheim775
    @ricardobjorkeheim775 3 years ago +1

    Now I have 3 different programs and all four of my cameras work, except for the 2-second latency on the Raspberry Pi cameras at the end.
    Great video Paul, thank you!

    • @paulmcwhorter
      @paulmcwhorter  3 years ago +1

      I think in one of my lessons on the Xavier NX I show how to adjust the GStreamer string to get rid of that latency.
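      For reference, an untested sketch adapted from the camSet string used in this lesson: letting appsink keep only the newest frame, instead of queuing a backlog, is the usual way to kill that lag.

```python
flip = 2
dispW, dispH = 1280, 720
# 'max-buffers=1 drop=true' tells appsink to discard stale frames
# instead of queuing them, which is what causes the ~2 s lag.
camSet = ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, '
          'format=NV12, framerate=21/1 ! nvvidconv flip-method=' + str(flip) + ' ! '
          'video/x-raw, width=' + str(dispW) + ', height=' + str(dispH) + ', '
          'format=BGRx ! videoconvert ! video/x-raw, format=BGR ! '
          'appsink max-buffers=1 drop=true')
```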

  • @pralaymajumdar7822
    @pralaymajumdar7822 4 years ago

    It's very deep to learn, and I'm astonished by your knowledge. You are just awesome, sir. Thank you, and carry on your best work in this life.

  • @CodingScientist
    @CodingScientist 4 years ago

    WOW!! Now the real thrill starts with an actual use case of Artificial Intelligence. Keep up the BRILLIANT work Paul.

  • @jaredthomas2957
    @jaredthomas2957 3 years ago

    I always enjoy reading Paul's easter egg folder names on his desktop. Haha, great humor.

    • @paulmcwhorter
      @paulmcwhorter  3 years ago +1

      Shhh . . . no one else notices that.

  • @paulmeistrell1726
    @paulmeistrell1726 3 years ago

    Great lesson Paul, lots of fun with mixing and matching. Running 3 cameras on the Nano 2G, and it is keeping up using the Jetson software. I found the latency gets good again if you hold the Pi cam to a 640 x 480 format, even using all-CUDA code. The cameras I am running: a Pi cam, a Logitech webcam at 640 x 472, and a generic webcam at 640 x 480. Pi cam, all CUDA: 30 fps with good latency at 640 x 480, bad latency at 1280 x 720. Pi cam with cv2 display: 21 fps, very large display latency problems; cv2 display with the camera held to 640 x 480: latency good, 30 fps. The Pi cam does better if I do not have a browser running on the Nano; the other cameras will work with a browser. CUDA only: Logitech cam 9-15 fps, generic webcam 29 fps. cv2 display: Logitech cam 12-14 fps, generic cam 29 fps. cv2 camera and display: Logitech cam 7-8 fps, generic cam 8 fps; latency is good. Thanks for looking at all the ins and outs. Long lessons on your part are not bad; my doing the lessons and experimenting can take hours.

  • @marksholcomb
    @marksholcomb 4 years ago

    A lot of information! A wild ride. BUT I think I got it (mostly). Thank you.

  • @somebody9033
    @somebody9033 4 years ago

    Caught up! YAY! Thanks a lot for all the great tutorials!

  • @wayneswan3092
    @wayneswan3092 3 years ago +1

    I have 4 cameras hooked up: 2 Pi cams and 2 Logitech cams on pan-tilts. I need a command to flip the camera.
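    For frames already on the OpenCV side, cv2.flip(frame, -1) right after cam.read() rotates 180 degrees (0 flips vertically, 1 horizontally). The same 180-degree flip in plain NumPy, for reference:

```python
import numpy as np

def rotate180(frame):
    # Equivalent to cv2.flip(frame, -1): reverse both image axes.
    return frame[::-1, ::-1].copy()

frame = np.arange(12).reshape(3, 4)  # stand-in for a camera frame
flipped = rotate180(frame)
```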

  • @Arcticwhir
    @Arcticwhir 3 years ago

    For those who have errors, be sure to import cv2 first. I don't know why, but that fixed it.

  • @fablapp
    @fablapp 1 year ago

    Not sure if anybody is still watching these excellent tutorials, but for the Pi cameras I had to use csi://0 instead.

  • @ssnoc
    @ssnoc 4 years ago

    Another Excellent lesson - 👍

  • @geeksatlarge
    @geeksatlarge 3 years ago

    Interestingly, my Logitech webcam has the same cropping issue as your piCam, but only inside this lesson's code. All the older code works fine; it resizes correctly. My RealSense cam works fine all the time.

  • @sudhirbrahma
    @sudhirbrahma 3 years ago

    Outstanding!

  • @SteJuMusic
    @SteJuMusic 4 years ago

    Thanks a lot for these lessons. Again I have a question: when I capture the webcam using OpenCV, it is much slower than using CUDA. Is there any possibility to change this? For me it seems much better to grab the frame using CUDA and convert it to OpenCV format; it is much faster on the Xavier NX. Anyway, I have been enjoying your lessons for a couple of weeks now, and I decided to become a patron. Thanks a lot again.

  • @loredanabudileanu4719
    @loredanabudileanu4719 4 years ago

    Hello, I have a question: I want to use a dataset for voice, an Excel file. How can I train the Nano to recognize whether a voice is male or female, for example? Do you have any video using an Excel file to train the Nano?

  • @martymcgill1312
    @martymcgill1312 4 years ago

    Awesome video Paul..

  • @eranfeit
    @eranfeit 4 years ago

    Thank you

  • @sanfinity_
    @sanfinity_ 3 years ago

    Great lesson sir, now we are fully utilizing the power of the Jetson (my Jetson too hangs a few times while executing the code 😅)

  • @drakkartwentyd1389
    @drakkartwentyd1389 3 years ago

    Weird, I'm using the Pi cam and it only runs at less than 0.5 frames per second using the first method. Why? Interestingly, when I do your second method using imshow, it ramps up to nearly 22 fps.

  • @jardelvieira8742
    @jardelvieira8742 1 year ago

    How do I rotate/flip the Pi camera?
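    Two routes, depending on how the Pi cam is opened (a sketch; the jetson.utils option name is from memory and worth verifying): through OpenCV/GStreamer the flip happens in nvvidconv, and through jetson.utils.videoSource it is a capture option.

```python
# Route 1: GStreamer pipeline for cv2.VideoCapture.
# flip-method: 0 = none, 1 = 90 CCW, 2 = 180, 3 = 90 CW,
#              4 = horizontal flip, 6 = vertical flip.
flip = 2
camSet = ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, '
          'format=NV12, framerate=21/1 ! nvvidconv flip-method=' + str(flip) + ' ! '
          'video/x-raw, format=BGRx ! videoconvert ! '
          'video/x-raw, format=BGR ! appsink')

# Route 2 (untested): jetson.utils.videoSource takes the flip as an option:
#   cam = jetson.utils.videoSource('csi://0', argv=['--input-flip=rotate-180'])
```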

  • @ricardobjorkeheim775
    @ricardobjorkeheim775 3 years ago

    Wow, that was a lot to learn. I think I am going to try to split this lesson into three different programs. After a lot of debugging, at least I got the piCam to work; the webcam did not work in the last part.
    Sorry, but it is very hard to debug when there are different versions in the same code.

  • @HoXDipannew
    @HoXDipannew 4 years ago

    *Sir you are my motivation* 💪👊💯💯💯💯👊

  • @IMSezer
    @IMSezer 4 years ago

    Does importing jetson.inference and jetson.utils block autocomplete?

  • @partsdave8943
    @partsdave8943 4 years ago

    Have you considered teaching a Marlin firmware class? Basically like programming an Arduino with a specific purpose. 😀

  • @quaternion-pi
    @quaternion-pi 4 years ago

    Broken IntelliSense: has anyone fixed IntelliSense with the modules jetson.inference and jetson.utils? I cannot find the correct path(s) to add to "python.autoComplete.extraPaths" in settings.json. The modules work fine, but no autocomplete or IntelliSense. I tried adding the path to "jetson_inference_python.so" and "jetson_utils_python.so", which on my machine is "/usr/lib/python3.6/dist-packages", to the settings.json file, with no success despite closing and reloading VS Code.
    No IntelliSense is a serious aggravation for me.

    • @paulmcwhorter
      @paulmcwhorter  4 years ago

      I am growing weary of IntelliSense. After the patch from an earlier lesson, on some days it seems to work for OpenCV and on other days it does not. Sometimes it works but is slow to kick in. It really annoys me, as it is hard to memorize the syntax for all the commands.

  • @DerrickMuncy
    @DerrickMuncy 4 years ago +1

    Hey all,
    This code gives you back errors and line numbers in a try/except case in Python 3!

    # Add to imports
    import sys
    import ctypes
    import linecache

    # This goes in your definitions towards the top.
    def PrintException():
        exc_type, exc_obj, tb = sys.exc_info()
        f = tb.tb_frame
        lineno = tb.tb_lineno
        filename = f.f_code.co_filename
        linecache.checkcache(filename)
        line = linecache.getline(filename, lineno, f.f_globals)
        print('EXCEPTION IN ( ' + str(filename) + ', LINE ' + str(lineno) + ': ' + str(line.strip()) + ' : ' + str(exc_obj))

    # Use this in your try/except cases
    try:
        print(1 / 0)  # divide by zero to try it out
    except:
        PrintException()
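    The standard library's traceback module gets you most of this in one call, for comparison:

```python
import traceback

try:
    1 / 0  # trigger an error to demonstrate
except Exception:
    # Prints the file, line number, offending source line, and the
    # exception text, much like the custom PrintException() above.
    traceback.print_exc()
```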

  • @r1rmndz507
    @r1rmndz507 3 years ago

    Lesson 50!

  • @visionaryrobotics4065
    @visionaryrobotics4065 3 years ago +1

    I have the webcam and the Pi cam, but I dropped my Nano and the Pi cam slot broke, so I am just left with the webcam. LOL

  • @ROY-mh7qy
    @ROY-mh7qy 4 years ago

    Hey Paul, nice haircut!

  • @geeksatlarge
    @geeksatlarge 3 years ago

    Using Logitech webcam and Intel RealSense cams

  • @jasontito7644
    @jasontito7644 3 years ago

    Who gave you a haircut, professor??

  • @TheRealFrankWizza
    @TheRealFrankWizza 4 years ago

    I have 45 days to get up to speed.

    • @TheRealFrankWizza
      @TheRealFrankWizza 4 years ago

      I was just about to watch this and noticed I already gave it the thumbs up.