Object Identification & Animal Recognition With Raspberry Pi + OpenCV + Python

  • Published: 22 May 2024
  • Subscribe For More!
    Article with All Steps - core-electronics.com.au/tutor...
    Actively search and classify all kinds of household objects and common animals with a palm sized single board computer. Then use specific object detection to control GPIO pins.
    Make sure to use the Previous Raspberry Pi 'Buster' OS with this Guide.
    Related Information
    Backyard BirdCam Project (Amazing Project that utilises this exact technology) - core-electronics.com.au/proje...
    Flashing 'Buster' OS onto a Raspberry Pi - core-electronics.com.au/tutor...
    Facial Recognition with the Raspberry Pi - core-electronics.com.au/tutor...
    Face and Movement Tracking System For Raspberry Pi - core-electronics.com.au/tutor...
    Controlling a Servo Motor with a Raspberry Pi - core-electronics.com.au/tutor...
    Speed Camera with Raspberry Pi - core-electronics.com.au/tutor...
    Hand Tracking & Gesture Control With Raspberry Pi - core-electronics.com.au/tutor...
    Control Your Raspberry Pi Remotely Using Your Phone (RaspController Guide) - core-electronics.com.au/tutor...
    Coco Dataset Library - cocodataset.org/#home
    Have you ever wanted to get your Raspberry Pi 4 Model B to actively search for and identify common household objects and commonplace animals? Then you have found the right place. I'll show you exactly how to do this so you can set up a similar system in your own Maker-verse. Furthermore, I will demonstrate how you can refine the identification so it searches only for particular desired targets. Then we'll take this to the next step and demonstrate how you can alter the code so the Raspberry Pi controls physical hardware when it identifies that particular target.
    This guide blends machine learning and open-source software with the Raspberry Pi ecosystem. One of the open-source packages used here is OpenCV, a huge resource that helps solve real-time computer vision and image processing problems. This is our second foray into the OpenCV landscape, with Raspberry Pi Facial Recognition being the first. We will also utilise an already-trained library of objects and animals from the COCO library. COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset, and this trained library is how the Raspberry Pi will know what certain objects and animals generally look like. Pre-trained libraries exist for all manner of objects, creatures, sounds, and animals, so if this particular library does not suit your needs you can find many others freely accessible online.
    The library used here enables our Raspberry Pi to identify 91 unique objects/animals and provide a constantly updating confidence rating. Machine learning has never been more accessible, and this video demonstrates that.
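    To give a feel for what the finished script does, here is a minimal sketch of that detection loop in Python. It is only an outline: the file names below are the ones commonly bundled with this style of SSD-MobileNet/COCO tutorial and may differ from the article's download, so treat them as placeholders.

        import cv2

        # Placeholder file names - check the article's zip for the exact ones.
        classFile = "coco.names"
        configPath = "ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt"
        weightsPath = "frozen_inference_graph.pb"

        with open(classFile, "rt") as f:
            classNames = f.read().rstrip("\n").split("\n")

        # Build the detection model from the pre-trained COCO weights.
        net = cv2.dnn_DetectionModel(weightsPath, configPath)
        net.setInputSize(320, 320)               # the model expects square input
        net.setInputScale(1.0 / 127.5)
        net.setInputMean((127.5, 127.5, 127.5))
        net.setInputSwapRB(True)

        cap = cv2.VideoCapture(0)                # 0 = first attached camera
        while True:
            success, img = cap.read()
            if not success:
                break
            classIds, confs, bbox = net.detect(img, confThreshold=0.45)
            if len(classIds) != 0:
                for classId, conf, box in zip(classIds.flatten(), confs.flatten(), bbox):
                    cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)
                    cv2.putText(img, classNames[classId - 1].upper(),
                                (box[0] + 10, box[1] + 30),
                                cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("Output", img)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break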
    If you have any questions about this content or want to share a project you're working on, head over to our maker forum; we are full-time makers and here to help - coreelec.io/forum
    Core Electronics is located in the heart of Newcastle, Australia. We're powered by makers, for makers. Drop by if you are looking for:
    Raspberry Pi 4 Model B (4GB) Ultimate Kit Bundle (AVAILABLE!) - core-electronics.com.au/raspb...
    Raspberry Pi 4 Model B 4GB: core-electronics.com.au/catal...
    Raspberry Pi High Quality Camera (Used Here): core-electronics.com.au/catal...
    Raspberry Pi 6mm Wide Angle Camera Lens (Used Here): core-electronics.com.au/catal...
    Raspberry Pi Official Camera Module V2: core-electronics.com.au/catal...
    Makeblock 9g Micro Servo Pack (used here): core-electronics.com.au/catal...
    Raspberry Pi 4 Power Supply: core-electronics.com.au/catal...
    0:00 Intro
    0:17 Video Overview
    0:56 What You Will Need
    1:30 Set Up
    3:10 Grab Some Objects
    3:35 It's Working!
    4:02 Some Values Worth Tinkering
    4:55 GPIO Control with Identified Objects
    5:36 Acknowledgments
    5:47 Outro

Comments • 299

  • @mike0rr
    @mike0rr 2 years ago +20

    This was the fastest, cleanest comprehensive guide I have found on OpenCV for Pi.
    The only thing that would make this better would be an install script, but even then I think it's good for some manual work to be left anyway. Get people's hands dirty and force them to explore and learn more.
    So cool to have the power of machine learning and Computer Vision in our hands to explore and experiment with. What a time to be alive!

    • @Core-Electronics
      @Core-Electronics  2 years ago +3

      Very glad you have your system all up and running 🙂 and I absolutely agree. Something about a machine learned system that runs on a palm-sized computer that you have put together yourself really feels like magic ✨✨

  • @stevenhillman6376
    @stevenhillman6376 7 months ago +2

    Excellent. I came to this after seeing the facial recognition video as it would help with a project I have in mind. However, after seeing this and how easy it is to set up and use my project will be more ambitious. Thanks again and keep up the good work.

  • @jacksonpark5001
    @jacksonpark5001 2 years ago +2

    this was exactly the thing i was looking for. i will be buying things from their store as compensation!

  • @mark-il8oo
    @mark-il8oo 1 year ago +2

    Your website, products and educational resources are amazing. I was wondering if you had any advice as to how to further train the machine to identify less common objects? I was hoping to use it for a drone video feed and train it to identify people, for basic search and rescue functions. I am a volunteer in my local community, hence my specific question :-)

  • @biancaar8032
    @biancaar8032 8 months ago +2

    And a really big thanks to you for explaining this so well😁😁

  • @stefanosbek
    @stefanosbek 2 years ago +1

    Thanks for sharing, this is really good and easy to follow

  • @joelbay1468
    @joelbay1468 1 month ago

    You're a life saviour. Thank you so much ❤

  • @muhammadumarsotvoldiev8768
    @muhammadumarsotvoldiev8768 8 months ago +1

    Thank you very much for your work!

  • @thezmanner7478
    @thezmanner7478 1 year ago +1

    Amazing, easy-to-follow, comprehensive video for object detection. Gonna use this to turn my RC car into an autonomous vehicle.
    Thanks Tim, Keep up the great work :D

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Oh man that sounds like an amazing project 😊! Definitely keep me posted on how it goes. The Forum is a great place for a worklog - forum.core-electronics.com.au/

    • @user-ow3se6ff2p
      @user-ow3se6ff2p 5 months ago

      Brother, I too am working on this project. Can you leave any leads? I am sending you an email; if you have time, please reply.

  • @p.b.9515
    @p.b.9515 9 months ago +1

    Just perfect, thanks a lot man!

  • @elvarzz
    @elvarzz 6 months ago +1

    Hey man great video. Any chance you can cover how to use this same concept to detect anomalies instead? Rather than looking for specific objects expected to be there in the camera, the program learns the objects expected to be there and detects when an unusual object is found. Thanks.

  • @ashanperera5169
    @ashanperera5169 1 month ago

    Thank you man! This was really helpful.

  • @nishyu9101
    @nishyu9101 10 months ago +1

    This is amazing ! this is soo very cool! Thank you for introducing me to coco!

  • @suheladesilva2933
    @suheladesilva2933 2 months ago

    Great video, thank you for sharing.

  • @user-ng6ps8rm1n
    @user-ng6ps8rm1n 1 year ago +2

    Hi Tim, I would like to ask how can I speed up the fps and speed up the recognition rate? Or do I need to use the lite version to speed up the speed?

  • @marnierogers3931
    @marnierogers3931 2 years ago +3

    Hey this is great, thanks for putting this together. Really easy to follow along as a beginner. Is there a tutorial that builds on this and allows you to connect a speaker to the raspi so that whenever a specific object is detected, it makes a specific noise? Would love to see it!

    • @Core-Electronics
      @Core-Electronics  2 years ago +2

      Such a good idea. I've yet to find a project that covers it directly, but where I added the extra code in for the servo control, if you instead replace that with code to set up a speaker and activate it, you would be off to the races.
      Here is a related guide on speakers - core-electronics.com.au/tutorials/how-to-use-speakers-and-amplifiers-with-your-project.html

    • @marnierogers3931
      @marnierogers3931 2 years ago +1

      @@Core-Electronics Supertar, thanks!

  • @bosss6053
    @bosss6053 1 year ago +1

    Hi Tim, the video was great. BTW, do you know of another dataset that I could use with this code, and can you explain how to train it to detect a new object?

  • @suryanarayansanthakumar3528
    @suryanarayansanthakumar3528 1 year ago +1

    Hi Tim,
    Thank you so much on this video for demonstrating how to use OpenCV with the Raspberry Pi.
    I am willing to follow along your process to install OpenCV and test it out.
    I am just wondering if OpenCV will run on the new Raspberry Pi OS

    • @Core-Electronics
      @Core-Electronics  1 year ago

      At this current stage I would recommend using the older 'Buster' OS with this guide. If you want to use Bullseye with machine scripts come check this guide on the OAK-D Lite - core-electronics.com.au/guides/raspberry-pi/oak-d-lite-raspberry-pi/

  • @soulo6661
    @soulo6661 2 years ago +5

    Trust me, I just found everything I was looking for about my Raspberry Pi 🌹

  • @fradioumayma7919
    @fradioumayma7919 1 year ago +1

    Amazing , thank you !

  • @lordergame6147
    @lordergame6147 1 year ago +2

    This video helped a lot! 👍

  • @maxxgraphix
    @maxxgraphix 1 year ago +3

    To use a USB cam, install fswebcam, then change cv2.VideoCapture(0) to cv2.VideoCapture(0, cv2.CAP_V4L2) in the script.
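    In context, that swap would look something like the lines below (a sketch; keep only one of the two lines in your copy of the script):

        cap = cv2.VideoCapture(0)                  # original line - Pi camera / default backend
        cap = cv2.VideoCapture(0, cv2.CAP_V4L2)    # USB webcam, forcing the V4L2 backend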

  • @_zsebtelep8502
    @_zsebtelep8502 1 year ago

    cool, but what would it take to make this work with 60 fps (doing the image recognition in every frame and not lagging behind when things move fast)

  • @riddusarav5666
    @riddusarav5666 1 year ago

    Hello, great video, but how do I get the coordinates of the tracked objects? I am trying to build a robot that can identify and pick up objects; how would I find the coordinates?

  • @sku1196
    @sku1196 2 years ago +3

    Hey Tim! I successfully managed to run this project in about an hour. I didn't compile OpenCV from source though; I installed it through pip but still got it working, and it's running pretty smoothly. Hope you could change the OpenCV compiling part, as it takes too long (took me 3 days and was still unsuccessful) and is unnecessary. Thank you.
    I used a Raspberry Pi 3B+.
    If you use a Raspberry Pi 4, it could be much faster and smoother.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      If you can provide some more information I'd happily update the guides 😊 (Perhaps jump onto our core electronics forum and do a quick write up on your process)

  • @user-ow3se6ff2p
    @user-ow3se6ff2p 5 months ago

    You are a legend bro
    I have a question: what if, when it detects a particular object (garbage, in my case), it has to generate a GPS location or send the location of that point to another vehicle, like you did with your servo motor?

  • @user-uc3uk8tg2i
    @user-uc3uk8tg2i 10 months ago

    do you have any guides for using an ultra low light camera module such as Arducam B0333 camera module (Sony Starvis IMX462 sensor)

  • @maritesdespares4112
    @maritesdespares4112 2 years ago +1

    great video, big help for my thesis. it can be used also to the pest?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Glad to be of help 🙂 not quite sure what you mean though.

  • @sonofsid1
    @sonofsid1 10 months ago

    I have an IMX219; apparently it will not work with OpenCV. Is there a way to use GStreamer to make it work in OpenCV?

  • @andyturner1502
    @andyturner1502 5 months ago

    Hi, great videos! How do I add to the dataset? Is there a file to add to, or is it an adjustment in the code? Thanks again.

  • @zakashii
    @zakashii 2 years ago +2

    Hi.
    I wanted to ask, do you think the Raspberry Pi Zero cam could be used as a substitute? I'm currently working on a project that involves Raspberry Pis and cameras and have done a lot of research on what hardware to acquire; I haven't seen much benefit in using the V2 camera instead of the Zerocam. I actually think the Raspberry Pi Zero cam has better specs for its price when compared to the V2.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Should work perfectly fine 😊. If the video data is coming into the Raspberry Pi through the ribbon cable I don't think you would even need to change anything in the script.

  • @zichhub3659
    @zichhub3659 1 year ago

    Hi, awesome video and great content. Please, can I also get this same code to identify FIRE? Can you guide me on how I can do that?
    Also, can I get the trained dataset for fire, and how do I get the library into the folder?

  • @Catge
    @Catge 2 years ago +3

    Hi Great Video! I know this may be unrelated but how about recognition of objects on screen without a camera? Is there any projects you know of that use AI detection to control the cursor of the computer when it detects an object on screen? Cheers

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Cheers mate and excellent ideas. You can definitely feed this system data that has been pre-recorded or streamed in from another location, would require some adjustments to the script. Also in regards to AI detection to control a cursor on a Raspberry Pi come have a look at this video - ruclips.net/video/hLMfcGKXhPM/видео.html

  • @michaelauth8936
    @michaelauth8936 2 years ago +2

    Great video, I just came up with an idea for a project using this. I have no experience with Pi's but basically it would be using a camera to detect a squirrel on a bird feeder and then playing some loud noise through a speaker. Would this be a difficult thing to do?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Sounds like an absolutely excellent idea that could definitely be implemented using this kind of object detection. We just had a new project posted on our website worth checking out, all about using a Raspberry Pi to track kangaroos; when it spots one, it sends photos of them to a website server - core-electronics.com.au/projects/rooberry-pi

  • @rizkylevy8154
    @rizkylevy8154 1 year ago +2

    I got error
    Traceback (most recent call last):
    File "", line 35
    cv2.putText(img,classNames[classId-1].upper(),(box[0] 10,box[1] 30),
    SyntaxError: invalid syntax
    What does this error mean? I already installed cv2.
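    For reference, the pasted line appears to have lost its '+' characters (plus signs are often stripped when code is copied through comment boxes); in the downloaded script that line most likely reads something closer to the following (the font and colour arguments shown here are typical and may differ):

        cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                    cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)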

  • @daniiltimin5396
    @daniiltimin5396 8 months ago +1

    Lost two nights trying to run it on the latest OS! Use the previous one, it is mentioned in the article.

    • @specterstrider186
      @specterstrider186 6 months ago +1

      thank you, I was struggling with this and was utterly confused.

  • @aadigupta4252
    @aadigupta4252 1 year ago +2

    Hi this was a really great project and helped me a lot but can you help in how can we change the size of the box made around our object?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      The size of the boxes tends to be based on the size of the detected object, but the colour and width of the box can definitely be altered. Inside the code, look for the section | if (draw): |
      Then, below that, the line | cv2.rectangle(img,box,color=(0,255,0),thickness=2) |
      By altering the (0,255,0) numbers you can change the colour of the box. By changing the thickness number you can have very thin lines or very bold lines. Font and other aesthetic changes can be made in the following lines.
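      As a small illustration of the above (variable names as quoted in the reply), with the two tweakable values commented:

          if (draw):
              # color is in BGR order: (0, 255, 0) is green, (255, 0, 0) would give a blue box
              # thickness=1 draws a very thin outline, thickness=4 a bold one
              cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)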

    • @aadigupta4252
      @aadigupta4252 1 year ago +1

      @@Core-Electronics Thank you very much

  • @GarYYht001
    @GarYYht001 1 year ago +1

    great video! good for beginner.
    I want to get the name of the objects into a string and print it when object detected.
    Can you give me any tips or help to me? Thank you so much.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Cheers mate! In the main script, underneath the line | result, objectInfo = getObjects(img,0.45,0.2) |, is another line stating | #print(objectInfo) |. If you delete that | # |, then save and run it again, you will be printing the name of the identified object to the shell.
      Hope that helps 😊
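      Those two lines, with the comment marker removed, would look like this (a sketch, assuming the getObjects() helper from the article):

          result, objectInfo = getObjects(img, 0.45, 0.2)
          print(objectInfo)    # was "#print(objectInfo)"; with the # removed, each detection is printed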

  • @rjdub4392
    @rjdub4392 2 years ago

    This is really cool. I wonder how hard it would be to connect to a thermal imaging camera and identify things by body heat?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Sounds like an awesome project, let us know if you manage to get it working.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Also as inspiration check out what this man managed to do with a Pico and a thermal camera! (if only he shared his code) - ruclips.net/video/xO4RsO3nBZ8/видео.html

  • @roblaicekameni8273
    @roblaicekameni8273 1 year ago +1

    Very good video, and the explanations are well detailed. Please, I have a project that consists of detecting paper; your technique works with other objects but does not work with paper. I don't know if it's possible to teach the system to recognise paper. Thank you.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Edge Impulse is your friend here - www.edgeimpulse.com/
      This will let you customise already created AI systems like the CoCo Library. Stepping through this system you will be able to modify CoCo library to recognise paper 😊

  • @enzocienfuegos4733
    @enzocienfuegos4733 11 months ago

    Hi, is there a way to create a log of all recognised animals/humans so the data can be consumed?

  • @Seii__
    @Seii__ 1 year ago +1

    thank youu veryy muchh🙇

  • @gameonly6489
    @gameonly6489 1 year ago

    Hi tim, how to add gTTS in the program when the object is detected

  • @charlesblithfield6182
    @charlesblithfield6182 1 year ago +1

    Thanks for this. I want to use my Pi to do custom recognition of trees from their bark in a portable field unit. I already tried TensorFlow Lite and an off-the-shelf database to do common object recognition.
    If I only needed to recognise, say, 50 trees, how many labelled images would I need of each tree for the training data?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Hi Charles, some Australian scientists concluded in a 2020 paper “How many images do I need?” (Saleh Shahinfar, et al) that the minimum number of data points for a class should be in the 150 - 500 range. So if you had 50 species of trees to identify from you'd need roughly between 7,500 - 25,000 images/data points.

    • @charlesblithfield6182
      @charlesblithfield6182 1 year ago +1

      @@Core-Electronics thanks so much for this info. I have to get to work! I’m checking out the paper.

  • @zivanaf
    @zivanaf 9 months ago

    Thanks for a great video.
    Where could I find a library for the specific stuff I need?
    I am looking for cans, bottles, glass bottles, etc.

    • @kos309
      @kos309 3 months ago

      Hello, did you ever find a library of the things you needed? I also need a library for specific items and was wondering if you found a good resource.

    • @zivanaf
      @zivanaf 3 months ago

      i did not
      made myself a model by training it using Roboflow
      @@kos309

  • @mattclagett778
    @mattclagett778 3 months ago +1

    Can I use a normal usb camera with this?

  • @turnersheatingandplumbing
    @turnersheatingandplumbing 5 months ago

    Hi, great video! Can I use a USB webcam instead of the Pi cam? Is it just a case of changing the code?

  • @Dhanu-bc8pn
    @Dhanu-bc8pn 7 months ago

    instead of raspberry pi 4 can we use a raspberry pi zero 2w if the speed doesn't matter to me?

  • @lukasscheunemann4059
    @lukasscheunemann4059 2 years ago +1

    Thanks for the tutorial. Can you maybe show how to implement a new library? I want it to just detect If there is an animal, the kind doesnt matter.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      I've been learning more about this recently. A great way to create custom libraries that a Raspberry Pi can then implement is through Edge Impulse. With this you will be able to train and expand the amount of Animals that default COCO library comes with. Tutorials on this hopefully soon. www.edgeimpulse.com/

    • @xyliusdominicibayan6215
      @xyliusdominicibayan6215 2 years ago

      @@Core-Electronics Hi Do you have tutorials for Custom Object Detection using your own model?

  • @farisk9119
    @farisk9119 5 months ago

    Can I run your project on a MacBook, if possible, and in that case what kind of hardware modifications would be needed? Thanks.

  • @dominicroman5038
    @dominicroman5038 1 month ago

    Excuse me, I need help please: is Tiny YOLO better for the Raspberry Pi, or can normal YOLO be used?

  • @YukthiM-sn1lt
    @YukthiM-sn1lt 26 days ago

    I wanted to apply this for feedback system for blind can it be adjusted on the cap

  • @UsamaRiaz-yf5jk
    @UsamaRiaz-yf5jk 11 months ago

    Amazing, sir. How can I add a speech module, so that after detecting any object it speaks the object's name as text?

  • @bellooluwaseyi4193
    @bellooluwaseyi4193 1 year ago +1

    Nicely explained. Please, how do I apply this to a new dataset different from this one?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      It will require some dedicated effort but you can customise this object detection dataset using edge impulse. www.edgeimpulse.com/
      That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.

    • @andyturner1502
      @andyturner1502 5 months ago

      How do you transfer a data set to the pi do you store it in a file or does it need adding to the code.

  • @igval2982
    @igval2982 10 months ago +1

    How can I fuse this code with the face recognition one?

  • @niseem4462
    @niseem4462 2 years ago

    Great video! I’m wondering if instead of the green rectangle with the name of the object, I can get the names of the objects into a string so I can print. I am trying to use a Text To Speech software so that whatever the object’s name, it is said out loud. Do you have any tips, help, or advice to give me?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Definitely something you can do! There are a lot of great text-to-speech packages that will work with the Raspberry Pi; Pico TTS is a great example of one. With a little bit of code adjustment you'll be off to the races.
      Come make a forum post (link in description) on your idea and then we can give you a much better hand than I can here 😊
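      One possible shape for that adjustment, sketched with the pyttsx3 package rather than the Pico TTS mentioned above (an assumption on my part, not the tutorial's method; pyttsx3 usually drives espeak on a Pi):

          import pyttsx3

          engine = pyttsx3.init()          # uses the system speech engine (espeak on most Pis)

          def say_object(name):
              engine.say("I can see a " + name)
              engine.runAndWait()

          # Call say_object(className) wherever the script reports a detected object.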

    • @adios04
      @adios04 2 years ago

      @Niseem Bhattacharya Did you figure out how to do that? I need help with it.

  • @JohnnyJiuJitsu
    @JohnnyJiuJitsu 1 year ago +1

    Great video! Can you run this portable on a battery not connected to the internet?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      All the processing is done on the edge, thus you only need the hardware (no calculations happen over Wifi or via the Cloud). So if you had a big enough battery you could definitely run this system via a battery without Internet 😊.

    • @JohnnyJiuJitsu
      @JohnnyJiuJitsu 1 year ago +1

      Thanks for the quick reply!

  • @FUKTxProductions
    @FUKTxProductions 8 months ago +13

    just download and extract this zip file. trust me

  • @user-rc4tp2vi5g
    @user-rc4tp2vi5g 11 days ago

    Ohhh man, where were you? I spent a week trying to install libs. Thank you sooooo much.

  • @oumargbadamassi7864
    @oumargbadamassi7864 1 year ago +1

    Hello,
    I'm very happy to see this tutorial.
    Thanks for the help.
    Is it possible to detect drugs or pills?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      For sure but you will need to create a custom Machine Learnt Edge system. Come check out Edge Impulse, personally I think they are the best in the game for this kind of stuff (and totally free for a maker) - www.edgeimpulse.com/

  • @beyond_desi7719
    @beyond_desi7719 1 day ago

    Hi core electronics, I am looking for a lens for my Raspberry Pi HQ camera module... I want good quality image and a closer view for defect detection for my FFF 3D printed parts...can you suggest some lenses. Thanks

    • @Core-Electronics
      @Core-Electronics  1 day ago

      There is a microscope lens that might be suitable for looking at 3D print defects. Give that a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html

  • @mike0rr
    @mike0rr 2 years ago

    For anyone it might help out later: I followed the commands in the guide verbatim and was having issues with the "cmake -D CMAKE_BUILD_TYPE=RELEASE \" command and the 4 following commands that are all grouped together. I was using right-click on the highlighted text - copy from the web page, then Ctrl+Shift+V to paste into the Terminal. That worked great for most of the commands but doesn't appear to work for that last paragraph. I had to manually type it in myself in order for it to work correctly.
    Tim, if you do read this, first of all thanks. But I am a tad lost on exactly when to change CONF_SWAPSIZE back to 100. I assume after the installation is fully complete, but to some of us noobs it's a bit unclear, I guess. Also, I don't know exactly why, but it says that "sudo pip3 install numpy" already had its bits installed on Buster, so it "might" be redundant. Unless it's more of a foolproof guide for other versions of the OS.
    Finally able to finish up my project! Once this finishes installing...
    :P

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Cheers for this write up mate 🙂 I'll legit jump into the guide and make it a little clearer when to swap back the CONF_SWAPSIZE. I'll make it more similar to what I have in my Face Recognition written up guide. My intention is to 'Noob proof' it as best as I can so everyone can have Open-Source Machine learned systems in the palm of their hands that they've created themselves.
      Very glad you now have it all up and running too!

    • @mike0rr
      @mike0rr 2 years ago +1

      @@Core-Electronics I didn't think to check your guide on the other OpenCV videos. I'll go do that now. Finally have the next 2 days off so I can fully jump into it.
      I got past this issue, but now when I run the script it's having issues no one else in the forums had. I assume this was due to some mistakes I may have made when trying to get the multi-line command working. Idk, so lost with all of this lol.
      I'm good with Arduino, but Raspberry Pi, Linux, console commands and scripts vs coding: so much to learn at once. You are such a huge help while lost and overwhelmed in this new little world.

  • @JoseMoreno-hp2le
    @JoseMoreno-hp2le 1 year ago +1

    Hi Tim, can the coral accelerator be integrated in this project?

    • @timgivney
      @timgivney 6 months ago

      Absolutely

  • @archieyoung3192
    @archieyoung3192 2 years ago +1

    Hey tim! Here's a question, Is the model trained by your coco generated by the yolo algorithm? This is related to the writing of my graduation thesis. I will be more grateful if you can provide more suggestions.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!

    • @archieyoung3192
      @archieyoung3192 2 years ago

      @@Core-Electronics Thanks so much! I believe that with your help I can get a high score. Best wishes!

  • @user-sr3jj7kh7x
    @user-sr3jj7kh7x 1 year ago +1

    Hey Tim! I seem to encounter a problem while following your instructions: on the make -j $(nproc) step it stops every time at 40%, and I re-typed and entered the same line several times but it didn't work. Is there any solution? Thanks for answering.

    • @timgivney
      @timgivney 6 months ago

      Check the description for the article page. Scroll down to the questions section and you'll find the answer

  • @xyliusdominicibayan6215
    @xyliusdominicibayan6215 2 years ago +1

    Hey, great video. May I know where to tinker if I will be using an ESP32 camera to stream the video? Thank you in advance!

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Hey mate cheers 🙂 the line to alter in code is | cap = cv2.VideoCapture(0) | changing that 0 to another index number that will represent your esp32 camera stream. Come make a forum post if you need any extra hand.
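      The default line, plus one common alternative for a networked camera (the URL below is a made-up example, not from the video; replace it with your ESP32-CAM's stream address, and keep only one of the two lines):

          cap = cv2.VideoCapture(0)                                 # default: first local camera
          cap = cv2.VideoCapture("http://192.168.1.50:81/stream")   # e.g. an ESP32-CAM MJPEG stream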

    • @xyliusdominicibayan6215
      @xyliusdominicibayan6215 2 years ago

      @@Core-Electronics Hi, I would like some extra hands on this one. How can I implement the ESP32 cam as my video stream for real-time object detection using the code? Thanks!

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Definitely a great question for our Core Electronics Forum 😊

  • @skkrttheturtle8538
    @skkrttheturtle8538 4 days ago

    What model is the COCO dataset trained on? I tried a custom one trained on TFlow Lite and it wouldn't work.

    • @Core-Electronics
      @Core-Electronics  3 days ago

      COCO was trained off 325k images of just day to day environments and objects. They have the research paper here if you are interested! arxiv.org/abs/1405.0312
      (loading the PDF may take a little while)

  • @yahata3920
    @yahata3920 1 year ago

    Hi, good sir, can I use an OV7670 camera for this program?

  • @missmickey
    @missmickey 1 year ago +1

    Hi, this tutorial helped a lot with my project. I successfully set up and ran the code in the Raspberry Pi 4 Model B terminal; I just couldn't figure out how I can see the video output while the code is running from the terminal (not from Geany or Thonny). Maybe you could help me out :>>

    • @Core-Electronics
      @Core-Electronics  1 year ago +1

      Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post here at our forum and post some pictures of the situation - forum.core-electronics.com.au/. Reference me in the post and I'll best be able to help 😊

  • @corleone6272
    @corleone6272 5 months ago

    I want to get an output when the algorithm recognises an animal, and I want to send this output to Firebase. What am I supposed to do?

  • @sanjaysuresh743
    @sanjaysuresh743 2 years ago +1

    Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up I do something very similar there.
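      For example, assuming the getObjects() helper from the full write-up accepts a list of target class names (the exact keyword may differ in your copy of the code):

          result, objectInfo = getObjects(img, 0.45, 0.2, objects=['horse', 'dog', 'elephant'])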

  • @didiash_2256
    @didiash_2256 1 year ago +1

    Hey Tim! I seem to encounter a problem while following your instructions on the "cmake -D CMAKE_BUILD_TYPE=RELEASE \
    "
    i was getting the error :
    " CMake Warning:
    No source or binary directory provided. Both will be assumed to be the
    same as the current working directory, but note that this warning will
    become a fatal error in future CMake releases.
    CMake Error: The source directory "/home/pi/opencv/build" does not appear to contain CMakeLists.txt.
    Specify --help for usage, or press the help button on the CMake GUI. "
    How do I resolve this? I tried typing it manually and apparently it still didn't work.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Heyya mate,
      Sounds like a stray command line that didn't get typed in or using 'Bullseye' instead of 'Buster' Raspberry Pi OS. Jump to the comment section of the Article and that will have successful troubleshooting that will help you. If you still run into issues write up a message there with some screenshots of the problem and I'll make sure you get it running 😊

    • @christopherlazo8485
      @christopherlazo8485 1 year ago +1

      @@Core-Electronics Quick question: does that mean the OS needed to run this project is Raspbian 'Buster', not Bullseye? Would this also work on Ubuntu?
      I love your videos btw; I've been following them ever since I became a Raspberry Pi hobbyist.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Definitely utilise 'Buster' Raspberry Pi OS for this guide. If you work hard at it you'll be able to get Machine Learnt Systems to work on Ubuntu, but the process will be different than what is outlined in the above video.
      And big thanks 😊 Raspberry Pi Single Board Computers are really rad.

  • @KemiYT
    @KemiYT 1 month ago

    Hi, I keep getting that error message at 41% on the make -j $(nproc) step. No matter how many times I re-enter the command it won't progress. Any help?

  • @yumiyacha976
    @yumiyacha976 6 months ago

    does this work with raspberry pi bullseye os?

  • @Max-cu6bw
    @Max-cu6bw 7 months ago

    Hello, I am trying to create a design that will recognise different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?

    • @Core-Electronics
      @Core-Electronics  7 months ago

      Hey Max, Im currently working on a very similar project. My workshop can get a bit messy so I am setting it up to scream at me when it gets untidy. I will report back to you how it goes, or if you've had some luck I'd be more than interested.
      Cheers!

  • @xavierdawkins920
    @xavierdawkins920 1 year ago +1

    Would this program be able to email somebody about what object it is seeing? Like, instead of turning the servo, email somebody?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Absolutely! Here is some straightforward code to send an email through a Python script. If you merge those two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
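      A rough sketch of what that merge could look like, using the yagmail package from the linked guide (the addresses and credentials below are placeholders; Gmail accounts generally need an app password):

          import yagmail

          def email_alert(object_name):
              # Placeholder sender credentials and recipient - replace with your own.
              yag = yagmail.SMTP("your.address@gmail.com", "your-app-password")
              yag.send(to="someone@example.com",
                       subject="Object detected",
                       contents="The Raspberry Pi just spotted a " + object_name + ".")

          # Call email_alert(className) in place of the servo-control code when the target is found.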

  • @specterstrider186
    @specterstrider186 6 months ago

    I am currently trying to follow this on a pi 4b with Bullseye. I am really struggling to get the files to download and build properly, any tips?

    • @timgivney
      @timgivney 6 months ago +1

      Check the article G you'll see it in the description

  • @meghap5221
    @meghap5221 2 years ago +1

    I am getting a cv2.imshow error while running object-ident.py in the Pi terminal; I connected to the Pi via SSH. What should I do?

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way so long as you go through the whole set-up process. If you come write me a message on the Core Electronics forum under this topic I'll best be able to help you. That way you can send through screen grabs of your terminal command errors.

  • @nielmarioncasinto8862
    @nielmarioncasinto8862 1 year ago

    can you make a tutorial on how to install the packages sir?

  • @jonathanboot
    @jonathanboot 1 year ago +1

    Hi, thank you for the explanation and code. I tried the code with the V3 HD camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm sending a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Hey Luc,
      To start you will need to update a new driver for the V3 Camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 -
      And if you want to autostart your system come check out how here (I would use CronTab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi
      Come pop to our forum if you need any more help 😊 forum.core-electronics.com.au/latest
      Kind regards,
      Tim

  • @Jianned-arc
    @Jianned-arc 6 months ago

    Oops, last question, sirs: can you use any type of Type-C power supply and cable for the Raspberry Pi?

    • @Core-Electronics
      @Core-Electronics  6 months ago

      You may be able to run a Pi with other power supplies but it's recommended to use the official Raspberry Pi Power Supply. They actually provide 5.1V to prevent issues from voltage drop that you might run into with a generic power supply.

  • @DanielRisbjerg
    @DanielRisbjerg 2 years ago +1

    Hey Core Electronics! Can I make it detect pistols only?

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Give Edge Impulse a look at. This library doesn't have that as an object but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.

  • @sefaocal8825
    @sefaocal8825 1 year ago +1

    Hi, I am using a Raspberry Pi 3 Model B+ in this project. I uploaded the code and it was successful, but there is a delay of 8-10 seconds and it detects an object many times. You mentioned in the forum that we can reduce the latency by lowering the camera resolution; I can't find where to change this setting, can you help me? (I am using a Raspberry Pi Camera Module V2.)

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Sure mate, lower the values you find in the line here | net.setInputSize(320,320) |. Make sure both numbers are matching. Most AI Vision systems depend on the inputted video data to be square. If you type | net.setInputSize(160,160) | it will yield faster responses.
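      Side by side, the two settings mentioned above (keep only one of these lines in the script; both numbers must match):

          net.setInputSize(320, 320)   # the guide's default: better accuracy, slower on a Pi 3
          net.setInputSize(160, 160)   # smaller square input: faster responses, rougher boxes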

    • @sefaocal8825
      @sefaocal8825 1 year ago +1

      @@Core-Electronics The image became faster, but object recognition worsened. It draws the boundaries in different parts of the object. Thanks for your reply.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      I hadn't realised it would do that, without a doubt there is some code lines in there that when adjusted would fix up the boundary boxes.

    • @diannevila8837
      @diannevila8837 1 year ago

      @@Core-Electronics any update with this issue?

  • @xyliusdominicibayan6215
    @xyliusdominicibayan6215 2 years ago +1

    Hi, how can I run the object detection without connecting to a laptop or manually running the code?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      You got heaps of different options. For example, you could run the script automatically every time the Raspberry Pi boots (using Cron Jobs, check here for a guide - ruclips.net/video/rErAOjACT6w/видео.html) or you could run the code remotely using your phone (check here - core-electronics.com.au/tutorials/raspcontrol-raspberry-pi.html)

  • @raultabirara6512
    @raultabirara6512 5 months ago

    can i use webcam logitech c310?

  • @diannevila8837
    @diannevila8837 1 year ago +2

    Hello, would it be possible to combine the animal, object, and person or facial recognition at the same time? I'm working on that kind of project; could you help me, sir? Please...

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Aww what an excellent idea! You will start wanting more powerful hardware very quickly going down this path. Come check out the Oak-D Lite (which is an excellent way to start stacking multiple AI system whilst still using a Raspberry Pi) - ruclips.net/video/7BkHcJu57Cg/видео.html

    • @diannevila8837
      @diannevila8837 1 year ago

      @@Core-Electronics How about just identifying whether it is an animal, a thing, a person, or some kind of moving object, and at the same time having it capture a preview picture of it? How can you make this? And also, how do you make it so that if the Raspberry Pi detects a person it emails you, but if it is not a person it does not email you? Hoping you can help me with my research.

  • @rollan__vales25
    @rollan__vales25 1 year ago

    I wanted to use this for a custom dataset, but how do I train it?

  • @skkrttheturtle8538
    @skkrttheturtle8538 2 months ago

    Can you do this with your own dataset and if so how? Need it for a school project. Thank you to whoever answers.

    • @arafatsiam4060
      @arafatsiam4060 1 month ago

      Yes you can but you have to train your dataset. Watch some other tutorials on how to train dataset.

    • @skkrttheturtle8538
      @skkrttheturtle8538 1 month ago

      @@arafatsiam4060 Will a dataset made from TensorFlow lite work? Or is there a different one compatible with the program?

  • @TakeElite
    @TakeElite 2 years ago +1

    You're the closest project to my idea; in fact it's practically that.
    But I would like to run it 24/7 during a 10-day period (my holiday).
    I would like it to press a button 10 minutes after each time it identifies a cat (mine) and nothing else:
    Here is a cat:
    wait 10 minutes,
    press the smart button (I'm looking for a way to flush the toilet each time after my cats have done their needs).
    Is this possible/feasible with this?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Definitely possible and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | Cat | as one of the animals it can identify, and Raspberry Pis are excellent at running 24/7, so I reckon you're in for a project winner.
      If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. Solenoids are a way to trigger the button; check this guide for the process of getting one running with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/

  • @wkinne1
    @wkinne1 1 year ago

    Will this work on an Orange Pi5? Raspberry Pi's are out of stock everywhere except from scalpers charging five times normal.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      I'm not sure honestly mate, if you can get it to run 'Buster' Raspberry Pi OS then I'll give it a solid positive maybe.

  • @karanashu639
    @karanashu639 4 months ago

    Hey Tim, I am getting a problem installing OpenCV after unzipping it.

  • @amritraj7640
    @amritraj7640 2 years ago +1

    Can we run a 5 V buzzer on object detection for a specific object?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Absolutely, you will just need to add in some extra code to activate that buzzer whenever the desired object is spotted. Very swell idea.
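      A hedged sketch of that extra code (not from the video), using the gpiozero library and assuming objectInfo holds (box, class-name) pairs as printed by the script; the pin number and object name below are placeholders:

          from gpiozero import Buzzer
          from time import sleep

          buzzer = Buzzer(18)                  # BCM pin 18 - change to match your wiring

          for box, className in objectInfo:    # objectInfo as returned by getObjects()
              if className == "cat":           # the specific object you care about
                  buzzer.on()
                  sleep(0.5)
                  buzzer.off()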

    • @amritraj7640
      @amritraj7640 2 years ago +1

      Can you add a guide to make it simpler for me? Would love to see it.

  • @cristiandavidandrade807
    @cristiandavidandrade807 2 years ago

    I get this error; it won't accept the code:
    Traceback (most recent call last): File "/home/pi/Desktop/Object_Detection_Files/object-ident.py", line 15, in net = cv2.dnn_DetectionModel(weightsPath,configPath) AttributeError: module 'cv2.cv2' has no attribute 'dnn_DetectionModel'

  • @shashankmetkar2820
    @shashankmetkar2820 2 years ago +1

    The code zip file is not available at the bottom of your posted article. Will you please upload it?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      The code should be available at the bottom of the article or in the comment section. If you can't see it, pop me a reply and we'll figure out what's happening.

  • @Osst197
    @Osst197 1 month ago

    Can I also use a normal webcam?

  • @nithins9640
    @nithins9640 1 year ago +2

    Hi,
    Can I execute this project with a Raspberry Pi 3 A+ ?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      You definitely can, it will just run a little bit slower.

  • @karanashu639
    @karanashu639 4 months ago

    Hey Tim, I reached out to you on the website too. I am getting an error running your code at line 7 of the object-ident.py file:
    No such file or directory

    • @thebestletsplay4694
      @thebestletsplay4694 3 months ago

      You have to replace the paths with your own ones. Ive got the same problem

  • @mmshilleh
    @mmshilleh 7 months ago

    Weird question: how did you manage to record the Raspberry Pi screen so smoothly? I am a fellow content creator (not as big as you, haha) and I don't know of any good Raspbian software that does this.

    • @Core-Electronics
      @Core-Electronics  7 months ago +2

      Well spotted, there's no decent way to screen record a Pi internally without a terrible framerate. We run our Raspberry Pi through an Elgato capture card, just like you would capture gameplay footage from a console.

    • @mmshilleh
      @mmshilleh 6 months ago

      Good to know, will be buying one today. Thanks@@Core-Electronics

  • @jolly9833
    @jolly9833 2 months ago

    I get the error message of "[Makefile:166: all] Error 2". Could you please help me on how to fix this?

  • @kyungpark5258
    @kyungpark5258 2 years ago +1

    Hi, Tim.
    I am using your source for my object detection project! Thank you so much!
    But I have a question about delay. There seems to be about a 10-second delay from real time. Is there any way to reduce that delay, such as decreasing the frame rate or reducing the number of images read every second? I am trying to find where I can adjust those in the code, but it seems hard to do so. Thank you so much!

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Awesome! 10 seconds is a very long time though; what hardware are you using? Software-wise, by limiting the objects it is searching for I have found the speed increases, and deep in the code you can find locations to lower the camera resolution, which will help too.

    • @kyungpark5258
      @kyungpark5258 2 years ago

      @@Core-Electronics Thank you for your reply!
      So, I am using a Raspberry Pi and a Logitech C930e 1080P HD Video Webcam for the camera.
      I tried limiting the objects it searches for, but it didn't help, because it detected objects anyway even though the bounding boxes are not displayed in the video.
      And I think I am using 640 x 480 resolution; do I just manually change it to a smaller scale?
      Except for those two methods above (limiting the objects it searches for, lowering the camera resolution), are there any other ways to reduce the delay time, such as changing the frame rate or reducing the number of images read per second?
      Thank you so much for your help.
      -Kyung

    • @xyliusdominicibayan6215
      @xyliusdominicibayan6215 2 years ago

      Hi did you succeed in lowering your delay?

    • @kyungpark5258
      @kyungpark5258 2 years ago

      @@xyliusdominicibayan6215 Hey, I couldn’t reduce delay. My project was done last year, but I just used my laptop instead of Raspberry Pi. If I had more time to fix it, I would have tried to lower the delay using Pi.

    • @diannevila8837
      @diannevila8837 1 year ago

      @@kyungpark5258 did you find a solution to your problem? I have the same problem also