AI on the Jetson Nano LESSON 47: Facial Recognition on Multiple Cameras in OpenCV

  • Published: 2 Jan 2025

Comments • 49

  • @ZebPepin 10 months ago +1

    Hi Paul! I just came across your account, I love it. I was wondering if you had any guidance on a senior design project I'm working on: a facial recognition, dual-verification, automated wait-time queueing system. Leaning towards theme parks as the application, but deployable anywhere.
    Three cameras:
    Camera 1: Detects an unknown face, assigns a sequential number, and updates a database.
    Camera 2: Recognizes the face and associates it with another verifiable trait (clothing, walking gait, shoes, etc.); if the face is no longer there (left the queue), update the database.
    Camera 3: Recognizes and associates, calculates the live queue wait time, calculates throughput, and deletes the biometric data from the database.
    Would you recommend the Jetson Nano for such a project?

  • @mikethompson5119 4 years ago +1

    Here's code for vStream.getFrame() that returns a black frame if the camera is not ready or frozen. This has a couple of advantages: startup to seeing an image is a bit faster, it's easier to see that the camera is frozen, and it keeps the code out of a tight infinite read-frame/exception loop that seems to starve the computer of resources and freeze my entire Jetson Nano (the mouse goes dead and all applications are frozen: Code-OSS, the nano editor, the desktop, the entire computer). It's faster to detect a problem and just a lot nicer to work with when there is a problem. By making the vStream class always return a frame, the try/except can be removed from the main code, because vStream.getFrame() won't fail in normal operation, even during startup. That makes the main code simpler and easier to debug. (By main code, I mean the while True, get frame, process / face recognize / imshow() loop.)
    def getFrame(self):
        try:
            _ = self.frame2.shape  # just see if frame2 exists
        except:
            self.frame2 = np.zeros((self.height, self.width, 3), np.uint8)
            print('Waiting for frames, shape: ', self.frame2.shape)
        return self.frame2
    Note: when you get the black frame because the camera is locked up (typically for me it's the RPi Camera V2 after the program has an abrupt exit: Ctrl-C, crash, trashing the terminal window...), restarting nvargus-daemon can get it working again, avoiding a reboot.
    sudo service nvargus-daemon restart
    You can get some status info on the nvargus-daemon and see if it is running normally or is in a bad state with this command:
    sudo service nvargus-daemon status
    Just for grins, here's a tweak to vStream.update() to get rid of the resize error...
    def update(self):
        while True:
            _, self.frame = self.capture.read()
            try:
                self.frame2 = cv2.resize(self.frame, (self.width, self.height))
            except:
                pass
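
    For context, below is a minimal sketch of the full threaded vStream class that these two methods slot into. The constructor and attribute names (capture, frame, frame2, width, height) are assumptions that match the snippets above, not the exact class from the lesson.

    import cv2
    import numpy as np
    from threading import Thread

    class vStream:
        def __init__(self, src, width, height):
            self.width = width
            self.height = height
            self.capture = cv2.VideoCapture(src)
            # A daemon thread keeps reading frames so getFrame() never blocks the main loop.
            self.thread = Thread(target=self.update, daemon=True)
            self.thread.start()

        def update(self):
            while True:
                _, self.frame = self.capture.read()
                try:
                    self.frame2 = cv2.resize(self.frame, (self.width, self.height))
                except:
                    pass  # read failed; keep the last good frame2 (or none yet)

        def getFrame(self):
            try:
                _ = self.frame2.shape  # does frame2 exist yet?
            except:
                # Camera not ready or frozen: hand back a black frame instead of raising.
                self.frame2 = np.zeros((self.height, self.width, 3), np.uint8)
            return self.frame2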

  • @jevylux 4 years ago +4

    Hi Paul, I learned so much from your videos. I am about to retire, so I restarted my electronics lab at home and want to focus on AI. My objective is to build a kind of Jarvis, using facial recognition, facial tracking, voice recognition, and text to speech.
    I also bought the Elegato robot car as well as the Jetson AI Bot, and I am waiting on your new tutorials.
    Considering this tutorial, I would really appreciate learning to use the tools from Nvidia in order to take full advantage of the power of the GPU.
    Thanks for providing us with such high-quality content.

  • @quaternion-pi 4 years ago +1

    Anticipating the exciting announcements and another new series. Incorporating the GPIO pins with face recognition (connect a speaker and give audio feedback when a face is recognized) would be a great project. Thanks for the best, most accessible content on the internet.
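
    A rough sketch of that idea, assuming the Jetson.GPIO library and a name string coming out of the face-recognition loop; the pin number and the "trigger on any known face" rule are arbitrary choices, not something from the lesson.

    import Jetson.GPIO as GPIO

    OUTPUT_PIN = 12  # BOARD pin 12; wire a relay, buzzer driver, or LED here
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(OUTPUT_PIN, GPIO.OUT, initial=GPIO.LOW)

    def signal_recognition(name):
        # Drive the pin high while a known face is in view, low otherwise.
        GPIO.output(OUTPUT_PIN, GPIO.HIGH if name != 'Unknown' else GPIO.LOW)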

  • @Mircea007 3 years ago +1

    Hi Paul,
    For better troubleshooting I have put "except AttributeError as errorMessage" and then "print(errorMessage)". You can add more types of errors and print them.
    For resizing the image from the webcam, the cv2.resize method does not keep the aspect ratio of my webcam, so instead I used self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, self.width) inside __init__ (and the same for the height), and now I have the correct aspect ratio. You used this method to resize the webcam image in a previous lesson, so I know it thanks to you.
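
    A minimal sketch of those two tweaks together; the vStream constructor shape and the loop snippet are assumptions in the style of the lesson, not the exact code.

    import cv2

    class vStream:
        def __init__(self, src, width, height):
            self.width = width
            self.height = height
            self.capture = cv2.VideoCapture(src)
            # Ask the camera itself for the resolution, so no cv2.resize
            # (and no aspect-ratio distortion) is needed later.
            self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, self.width)
            self.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, self.height)

    # And in the main loop, name the exception instead of silently passing:
    # except AttributeError as errorMessage:
    #     print(errorMessage)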

  • @thejumbler6000 4 years ago +1

    I think you are the best teacher on YouTube for coding.

  • @G8YTC 4 years ago +1

    The way ahead sounds like a good route. I'm running this series on a Xavier NX along with the NX series. Managed to get 13 FPS at scale 0.3 using the NX. Well worth the upgrade if people are fortunate enough to have some spare $$$.

  • @opalprestonshirley1700 4 years ago +1

    Definitely exciting, Paul, and challenging. I love the two cameras and the pan/tilt servos. Great troubleshooting. Have a great weekend.

  • @OnlyOne1Dee 4 years ago +4

    Always waiting for your tutoring sir 😊

  • @sanfinity_ 3 years ago

    Really enjoyed all 47 videos so far. Awesome job, sir. Very excited to watch the next one.

  • @heraldoborges 4 years ago

    Thank you my friend! A big hug from your Brazilian student.

  • @Qbanolokot 4 years ago +1

    Hello Paul, I set up the try/except this way, so that I can see what the error is about:
    try:
        # --code here--
    except Exception as err:
        print(err)

  • @paulmeistrell1726 4 years ago

    Hi Paul, great lesson. I worked on this from several angles... all are very slow on the 2GB. Finally I had you on a tablet and just the IDE and program on the Jetson Nano. Looking forward to new lessons; you have done a great job on what we have covered, and I'm looking forward to the Nvidia methods. I am impressed that the 2GB is even running this; frame times are comparable to yours!

    • @paulmcwhorter 4 years ago

      Can't remember your past comments . . . did you move to the newer dlib library? Apparently that is the key to good frame rates. The dlib I said to install worked with JetPack 4.2, but more recent viewers are probably on JetPack 4.4 and need a more recent dlib.

    • @paulmeistrell1726 4 years ago

      @paulmcwhorter
      I did upgrade dlib, but so far the upgrades on the 2GB do not go to 4.4. I have not burned a new operating system for a few weeks, though, just the regular updates and upgrades. The little board struggles to get going, then works pretty well. It will run with model='cnn', but it takes a long time to start displaying, then runs pretty fast.

  • @epixexplorations 4 years ago +1

    Amazing stuff, got everything working well today.

  • @raghawjanbandhu1479 4 days ago

    Greatest teacher in the world 🎉

  • @panit-anantlertdhirabhatr2451 2 months ago

    Have you tried an RTSP incoming source for face recognition? Or DeepStream on the Jetson Nano?

  • @fablapp 3 years ago

    Managed to fix several problems by updating to different libraries and got this working, but I'm experiencing way too many crashes, and the cameras stay jammed unless I reboot the Nano. It also takes way too long to start the program...

  • @pralaymajumdar7822 4 years ago

    Great news ahead coming in a few weeks!!! Great, sir... I am very excited... I already follow your robotics and Jetson Xavier series.
    I am going to become an expert thanks to your kind and hardworking efforts...
    I like and love you from India, sir. Long live and God bless. Thank you.

  • @karanindersingh8257 4 years ago +1

    Best teacher ever 👌

  • @CodingScientist 4 years ago

    Hi Paul, in this lesson did you use 2 Pi cam V2s, or 1 Pi cam and 1 USB cam? For some reason this is not working on the Xavier NX. I am using 1 Pi camera and 1 USB webcam on the NX and am unable to open the frame itself to recognize the face. However, without the face recognition library, both cameras pop up as two frames in a single window. Please advise.
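
    For reference, a rough sketch of how a CSI Pi camera and a USB webcam are typically opened as two separate sources on a Jetson. The nvarguscamerasrc pipeline string and the device index are assumptions (typical values, not taken from this video); adjust the resolution, flip-method, and /dev/video index for your setup.

    import cv2

    def gstreamer_pipeline(dispW=640, dispH=480, flip=2):
        # Typical nvarguscamerasrc pipeline for a Pi CSI camera on a Jetson.
        return ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, '
                'framerate=21/1, format=NV12 ! nvvidconv flip-method=' + str(flip) + ' ! '
                'video/x-raw, width=' + str(dispW) + ', height=' + str(dispH) + ', format=BGRx ! '
                'videoconvert ! video/x-raw, format=BGR ! appsink')

    cam1 = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)  # Pi cam V2 on the CSI port
    cam2 = cv2.VideoCapture(1)                                        # USB webcam, usually /dev/video1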

  • @xacompany 4 years ago

    I just ordered the lights from your link, thanks again!

    • @paulmcwhorter 4 years ago +1

      Manuel, thanks! I really like these lights because you can control them with the remote, and you can adjust color temperature. I have gotten really good results and the bases are fairly stable.

  • @paulseidel5819 4 years ago

    GPIO pins, yes. Connect to lighting, door locks, etc.

  • @ikari0133 1 year ago

    I'm trying to do an intranet project where every device that connects to the network would be able to perform face recognition. I want to use each device's own camera, but instead it just uses one camera.

  • @benjaminlim1735 4 years ago

    Hi Paul, I have been following your tutorials closely. I was so happy to hear you say that you would be covering Nvidia framework coding in your upcoming tutorials. Oh... one other thing I would like to point out is that having cv2.imshow in your code can bring down the FPS to some degree. I guess in real-life applications, the code would be applied to some form of automation by tying it to GPIOs.
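
    A minimal sketch (not from the video) of that point: gate the display behind a flag so you can measure FPS with and without cv2.imshow; the camera index and frame count are arbitrary.

    import time
    import cv2

    SHOW_DISPLAY = False  # set True to see the window, False to measure headless FPS
    cam = cv2.VideoCapture(0)
    t_start, frames = time.time(), 0
    while frames < 200:
        ret, frame = cam.read()
        if not ret:
            break
        frames += 1
        if SHOW_DISPLAY:
            cv2.imshow('cam', frame)
            if cv2.waitKey(1) == ord('q'):
                break
    print('FPS:', frames / (time.time() - t_start))
    cam.release()
    cv2.destroyAllWindows()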

  • @fontanelles 9 months ago

    What can I do if I want to work with an IP streaming camera?
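
    OpenCV can usually open an IP camera the same way as a local device by passing its RTSP URL to cv2.VideoCapture. A minimal sketch, with a placeholder URL (substitute your camera's actual address and credentials):

    import cv2

    url = 'rtsp://user:password@192.168.1.50:554/stream1'  # placeholder URL
    cap = cv2.VideoCapture(url)  # on most builds cv2.VideoCapture(url, cv2.CAP_FFMPEG) also works

    while True:
        ret, frame = cap.read()
        if not ret:
            break  # stream dropped or the URL is wrong
        cv2.imshow('ipcam', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()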

  • @GOBish23 4 years ago

    Hi Paul - I have been experimenting with threaded camera streams and an object tracking application (using a Siamese tracker) on the Xavier NX. Even with CUDA support, I have been getting slower than expected performance (~12 FPS). I verified with jtop that the GPU is running at 50 to 90%.
    The FPS of one camera alone using threading is ~350-380 FPS on the Xavier NX. The camera I am using can output at 60 FPS.
    I had a thought that perhaps the tracker is being overloaded with too many frames, which causes it to slow down. Do you think there is any possibility of this?
    I did some experiments to test this:
    1. Using mod (%), only perform tracking every nth frame. Results were poor; the tracker lagged.
    2. Perform the tracking on an input video, not a camera. The FPS was still slow.
    3. Also note that image size did not have a huge impact on FPS, surprisingly. When I scaled the image by 0.3, I only got to about 14 FPS.
    Would love to hear your thoughts!
    FYI, performance for the tracker on desktop machines is reported at 180 FPS. It is based on an RPN model with an AlexNet backbone. I would like to try to convert the model to TensorRT, but I haven't had success with that yet.

  • @alevelico 1 year ago

    As of today, for multiple cameras in OpenCV, is the Jetson Nano still recommended? Is there something better, like a stronger server?

  • @OZtwo 3 years ago

    Still here loving all your videos. For this video I started programming a bit cleaner, so others can understand what it is I'm typing. I also fixed the overall try/except error by moving code around and creating two blank images, in case there is an error, so the program can use the default image if anything goes wrong with one of the two cameras. What I found interesting in my test program is that each thread keeps track of its own FPS: the Logitech 310 has a frame rate of about 15 FPS (it was 30, but started going slower?) and the Pi cam about 21 FPS. Now with this application the overall frame rate is 4.3 FPS, yet the Logitech 310 is showing 300 FPS and the Pi cam 700-900 FPS? Can't wait to see what you did with your version in this video. :)

  • @DerrickMuncy 4 years ago +3

    Try this for easier debugging:
    except:  # catch *all* exceptions
        e = sys.exc_info()[0]
        print("Error: %s" % e)

    • @DerrickMuncy 4 years ago +1

      Oh, remember to import sys to make it work.
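
      Putting the two replies together, a small self-contained version; the division by zero is just a stand-in for whatever call might fail:

      import sys

      try:
          result = 1 / 0  # placeholder for the frame-grab call that might fail
      except:  # catch *all* exceptions
          e = sys.exc_info()[0]
          print("Error: %s" % e)  # prints the exception class, e.g. ZeroDivisionError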

  • @mikethompson5119 4 years ago

    Often you can reset the cameras by restarting the nvargus daemon. It tends to get hung up when the program exits abruptly via Ctrl-C or by trashing the terminal window. This command will restart the nvargus daemon:
    sudo service nvargus-daemon restart
    Another thing to watch for is a system crash dialog hidden behind your other windows. Sometimes you can't reset the cameras until the system dialog is dismissed.

  • @kaxoxinho 4 years ago +1

    Hello, here's how I troubleshoot:
    except Exception as err:
        print(err)

  • @wishicouldarduino8880 4 years ago

    You blasted so far out in front of me I'm swamped; I can't catch up. My build is too big, but the infrastructure is there for it. I don't think I could catch up in two years. 👍🌞😀

  • @Bob-zg2zf 4 years ago

    What might be the reasons that make it difficult for Paul to code with Nvidia's framework (only the demo code can be implemented)? (As mentioned in the live chat a moment ago.)

    • @paulmcwhorter 4 years ago

      Try taking the NVIDIA GStreamer class. I could run the demos, but I would not be able to do all the things we are doing in OpenCV. The learning curve is too difficult.

    • @Bob-zg2zf 4 years ago

      @paulmcwhorter I thought Paul could achieve anything...

  • @petersobotta3601 4 years ago

    GPIO pins!! So important for most of us, I'd think... no point doing all this cool stuff and not being able to do anything with it in the real world. Stepper motors would be great. How about we build a small CNC that draws and/or writes what the camera sees??

  • @vaughntaylor2855 3 years ago

    Boy Paul, if you are game, please let's jump into the true Nvidia world, as there is a lot of horsepower out there that would be amazing to make use of!!!
    A quick question on the speed issue you were addressing at the end of the video: have you checked how much faster this exact script might be on your Xavier?
    Great lesson, Paul!! Thank you!

  • @OZtwo 3 years ago

    Coming into this a bit late here, but I feel that OpenCV is just way outdated and we need to move on to today's standards, since we want to learn about AI on the Jetson and not what AI could have done on the IBM PC. :)

  • @dogedaily5655 4 years ago

    Interesting.