Want to learn how to train your own TFLite model to run on the Raspberry Pi? I released a video giving step-by-step instructions for training TFLite object detection models inside your web browser using Google Colab and deploying them on the Pi. Check it out here!
ruclips.net/video/XZ7FYAMCc4M/видео.html
FIRST!!!
Edje Electronics, is it better to use the 8 GB or 4 GB Raspberry Pi?
Can someone help me? I have a problem with the command step "sudo pip3 install virtualenv": when I execute this command, the error "externally-managed-environment" appears. I performed all the previous steps but was unable to resolve it.
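A common workaround on newer Raspberry Pi OS releases (a sketch, not from the original guide: it skips virtualenv and uses Python's built-in venv module to create the same tflite1-env environment):
cd ~/tflite1
python3 -m venv tflite1-env
source tflite1-env/bin/activate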
BroHam !!!!! this is what I was looking for, something simple to catapult my curiosity to see if I like it !!! Excellent work my friend.
I find it absurd, but also a complete testament to what you have done here, that I was able to get this working in about 15 minutes on the first try. Thank you!!!!
Hey all! If you're using the Raspberry Pi OS Bullseye release (currently the latest version), there are a couple of things you have to do to get it working with the Raspberry Pi Camera (see the condensed command listing after the note below):
1. Make sure the OS is up-to-date by issuing "sudo apt update" and "sudo apt upgrade", and then rebooting the Pi
2. Open a terminal, enter "sudo raspi-config", go to the "Interface Options" menu, then go to the "Legacy Camera" option and enable it. Then, reboot the Pi (again).
3. Run the TFLite_detection_webcam.py script as described in this video.
Note: You only need to do these steps if you're using a Raspberry Pi Camera (HQ, v1, or v2). You don't need to do them if you're using a USB webcam. Also, you don't need to do them if you're using the Stretch or Buster OS releases.
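For reference, a condensed sketch of those steps as terminal commands (assuming a Bullseye install and a ribbon-cable Pi Camera):
sudo apt update && sudo apt upgrade -y
sudo reboot
sudo raspi-config    # Interface Options -> Legacy Camera -> Enable, then reboot again
cd ~/tflite1 && source tflite1-env/bin/activate
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model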
I want to light up an LED when a car is detected. What would the changes be?
Hey, so I wanted to detect only a certain object instead of all kinds... how can I do that?
Thank you so much for creating, uploading, and updating this program. It’s brilliant!
Can you show how to set this up and run it in VS Code or PyCharm?
Great video. For those looking to do this and get a higher FPS rate, try using the Pi Camera connector instead of USB. The connection on the board itself will use less power and have lower latency, plus it goes directly to the GPU, which is what you want for object detection. I haven't tested this with TF Lite, but the results are dramatic when running OpenCV.
Dude! It worked!!! Thanks so much. I tried one of your older videos but had no luck so I'm pumped to have something that finally runs!
9:49 nice acoustic person/backpack you've got there xP
No joke, I actually love you, I've been looking everywhere for a video like this!
Amazing thing done on the Raspberry Pi, Sir. All this while I thought Tensorflow would never work properly on the Pi. But this video helped a lot, Sir. Please keep geeking Sir. :)
I recently updated some of the setup scripts to work with newer versions of Raspberry Pi OS. (With Raspberry Pi and TensorFlow always releasing new versions of software, it's hard to stay on top of it all.) Everything should still work when following the instructions in this video. Please let me know if you run into any errors!
Hi, I have a Raspberry Pi 4 B with the 64-bit OS and I'm getting this error at the very end when trying to run it. I am using a High Quality Pi Camera:
[ WARN:0] VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
File "/home/pi/tflite1/TFLite_detection_webcam.py", line 171, in
frame = frame1.copy()
AttributeError: 'NoneType' object has no attribute 'copy'
I run it in VirtualBox with Raspberry Pi OS Desktop 32-bit. TensorFlow cannot be installed; it says "could not find a version that satisfies the requirement tensorflow (from versions: )".
I have a question... How do I change the rotation of the camera? Mine is rotated too much :/
@@georgoschalkiadakis2402 did you get it resolved?
This is super. Very methodical and complete video. Worked perfectly.
Thank you so much, I used your older guide for Tensorflow with SSDLite before, and now you release this. Thank you!
So excited. I've been looking for a lightweight model to put onto a Pi in an RC car - this guide was straightforward, you've put a lot of hard work into getting everything done, and to see it in action is amazing. Looking forward to that next video about what will speed up the FPS! Thanks man!
Can you please tell me why my camera window is not showing (for a webcam)?
My team and I tried using different software and a Pi 3 for object detection, and it was hell. We only got results every 8 seconds, and this was on a moving drone ship, so by the time it detected what it had to, the target was already miles away lol. The detection speed in this is amazing.
How big was the drone ship?
@@BinkiklouGaminglol Well, we had its 6 motors and sensors (mainly a bunch of MZ80s) running on an Arduino Mega, and we had a Pi 3 with a Pi Camera on top. As for the physical dimensions, if I remember correctly (it was some time ago, so these might be off), it was around 50-ish cm long, 30-40 cm in height, and again 30-40 cm in width. Why do you ask? :D
@@barsgecgil3437 Wait, what's a drone ship?
@@BinkiklouGaminglol An autonomous ship. In this case, we built it for a competition. The goal was that our "bigger" ship would be placed in a pool in which there were other "smaller" ships; the smaller ships were red and green, and you had to somehow capture the green ones and take them to a different part of the pool. I don't know if they have any English resources, but you can search "Fetih1453 TeknoFest"; that's the name of the competition. It would make more sense if you just looked at that :D
@@barsgecgil3437 Oh nice, this is kinda like FRC robots but on water, and the participants are a little bit older.
Once I formatted my NOOBS and started fresh, your tutorial worked perfectly. Honestly, I started here; I'm going to go back and do step 1 now. The documentation is excellent. You've given a lot to learn, and it's walked through for a non-pro like myself. Excellent work.
Thank you! I tried to make the instructions as straightforward as possible. Glad to hear they are working!
Oh Man, that's a really great video!
I definitively have to try this !
Thanks for the great work.
Thank you so much for this guide, I was struggling a lot with the object detection application until I found your guide :)
I did this 2 years ago and it was a nightmare. It was still fairly new and you had to find patches for the patches. You made this ridiculously simple.
Thanks! It's a pain staying on top of all the version changes. I did my best to make this one easy to follow and future-proof to new versions!
I created a Google Colab notebook for making your own TensorFlow Lite model with custom data! You can train, convert, and export a TFLite SSD-MobileNet model (or EfficientDet), and then download it to your Raspberry Pi and use as shown in this video. I'm still working on the video that walks through the Colab notebook, but please try it out if you're interested!
colab.research.google.com/github/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Train_TFLite2_Object_Detction_Model.ipynb
You are a lifesaver, thank you!
You're very welcome! Were you successfully able to train a model with the Colab notebook? It hasn't been tested by many other users yet, so I'm curious to hear if you ran into any errors or issues.
@@EdjeElectronics Well I wanted to train a clothes classifier using FASHION-MNIST, so I'm still in the process of figuring out how to change that dataset to fit the colab notebook.
In short, not succeeded yet, but haven't had the time to properly test it, so fingers crossed!
@@casualjay7428 Oh! Actually, my guide won't work for that 🙁. My guide is for "object detection" models, while the FASHION-MNIST dataset is used to train "image classification" models. Here's a good guide from TensorFlow on training a basic classifier on the FASHION-MNIST dataset. www.tensorflow.org/tutorials/keras/classification
@@EdjeElectronics Oh I see! Thank you! I'm learning a lot so I still see this as a win!
Absolutely great guide. Worked perfectly on Raspberry Pi4 8GB with Stretch installed!
Thank you very much.
This was my first click researching a project and I live on one of the cross streets shown in the beginning of the video. So random! Helpful video too.
Nice! Feel free to say hi if you ever see me in Bozeman :)
Dude! This is cool! I didn't even know that they had this type of technology.
Your tutorials are good for beginners, please keep doing them :)
I followed the recommendation, below in the comments, to install tensorflow 1.14 after running the requirements script. Everything works and my Pi4 4GB is giving about 5fps with the google sample.
This worked brilliantly. My Pi 4 is set up to work with the SunFounder PiCar-X, and I was a little doubtful whether your project would play along with their setup. Luckily, it worked seamlessly on the first attempt using your setup scripts and the default models. My PiCam is doing 20-24 FPS and I'm just amazed.
My end goal is to have this PiCar-X roam around the house without colliding into anything and annoy my cat into doing some exercise (she is on the bulkier side).
Thanks, I'm glad to hear it works well! Do you know what version of Raspberry Pi OS you were using? I'm working on updating some of the scripts to work without errors on the latest Raspberry Pi OS.
For those having the following error:
(tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TfLite_model
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 122, in
with open(PATH_TO_LABELS, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/tflite1/Sample_TfLite_model/labelmap.txt'
Remember that the model files have been unzipped in Sample_TFLite_model and not Sample_TfLite_model or Sample_Tflite_model for that matter. Just make sure that you type *TFLite* correctly, and you're good to go.
You are a great man. I'm a computer science teacher from Thailand.
Thank you!! I hope this video can help your students 😃
Great video! Definitely subscribing for more. I already have the Coral devices, so I can't wait to see what you do with them.
@Edje Electronics I just want to say a big thank you for your work in putting this tutorial out there.
I have designed and constructed an Autonomous Mobile Robot, which is 95% 3D printed, that uses TFLite to identify and exterminate weeds. I couldn't have done it without your help! If I'm ever in your neck of the woods, I would like to thank you in person. Hello from a final-year mechatronics student in Port Elizabeth, South Africa!
That's awesome! Thank you for letting me know, I'm glad this video was helpful. Keep up the good work!
Hello Mr. Radnartjie,
Trust you are well. Hey, I was wondering how you ran the object detection headless. Did you run this program in an IDE like Thonny / Geany? I'm also trying to build an Autonomous Mobile Robot that uses object detection, but I can't seem to find how to run this program other than in the terminal... Mr. Radnartjie, I would be really grateful for some advice.
It would be really useful to know how you can toggle GPIO when a certain object is detected. Thanks.
Incredibly simple and very well explained! This is exactly what I was looking for. Congratulations!
Is this something that would benefit being on a cluster? One Pi for the camera, one Pi for the processing?
I don't know anything about tensor flow or Pi clusters, just curious.
Reading in a frame from a USB camera vs. reading it in from another Pi doesn't really make a difference in performance.
But other processing steps after the detection might be heavy enough to benefit from multiple Raspberry Pis.
Good question! No, I don't think a cluster would help for this. The main chunk of processing occurs when passing the image through the neural network to find the detected objects, and there isn't any (easy) way to split that between multiple Pis. And couka is correct that using a separate Pi to handle the camera wouldn't really help. I already have the camera running in a separate thread to speed things up (see www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/ )
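For anyone curious how the threaded-camera trick works, here is a minimal standalone sketch of the idea (an illustration in the spirit of the PyImageSearch post, not the exact class from the repo):
from threading import Thread
import cv2

class VideoStream:
    # Grab frames from the camera in a background thread so the main loop never waits on camera I/O
    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        Thread(target=self.update, daemon=True).start()
        return self

    def update(self):
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()
        self.stream.release()

    def read(self):
        # Always returns the most recent frame without blocking
        return self.frame

    def stop(self):
        self.stopped = True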
I am forever grateful for these video tutorials. Thank you
Hi @EdjeElectronics! I have followed your tutorials for a project of mine. I have encountered some errors. Can you help me? I have followed you on Twitter.
Let's get that next video! The people need the next videoooooooo
Great video! I followed your written instructions last week. I have modified your code to count the frames when it detects a person and take a picture every 10th frame. I placed the camera in my car dash at work today and it took pictures when people walked in front of my car (it took 48 pictures). Pretty cool! I am now wanting to train my own model.
That's awesome, sounds like a cool project! Training a model takes a bit more work, but the written guide (linked in the video description) walks through every step of the process. There have been a lot of version changes since I made the original guide for training on TensorFlow, so you might hit a few snags along the way. But for the most part, you should be able to resolve them if you Google the errors. Hope you're able to get it working!
Hey Jim! Do you have a tutorial on how to do that? I am trying to make a program that takes a picture every n seconds.
Is it possible to share the GitHub code for that function, where it takes a picture every 10th frame?
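In the absence of the posted code, here is a rough sketch of how that could look inside the detection loop of TFLite_detection_webcam.py (object_name and frame come from the script; person_frames and the file name are made up for illustration):
# inside the per-detection loop, after object_name has been determined:
if object_name == 'person':
    person_frames += 1                      # initialize person_frames = 0 before the main loop
    if person_frames % 10 == 0:
        # save the current (annotated) frame to disk every 10th "person" frame
        cv2.imwrite('person_%06d.jpg' % person_frames, frame)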
Hi! I'm running TFLite on a Raspberry Pi 3 B+. Why do I get 0.6-0.9 FPS? Can you help me get more FPS?
Fantastic guide - clear, well-sized steps, I love that install script, well documented, use cases! Thx!
Btw, I like how the model at the end of the video is sure (more or less) that your guitar is a person or a backpack! :D
Hi Edje, I have a problem at line 122: Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 122, in
with open(PATH_TO_LABELS, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/tflitel/Sample_TFLite_model/labelmap.txt'
Had the same problem, I just created the /home/pi/tflite1/Sample_TFLite_model/ folder and moved the labelmap.txt and detect.tflite from the tflite1 folder into it!
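In other words, something like this (assuming the files were unzipped straight into ~/tflite1):
mkdir -p /home/pi/tflite1/Sample_TFLite_model
mv /home/pi/tflite1/detect.tflite /home/pi/tflite1/labelmap.txt /home/pi/tflite1/Sample_TFLite_model/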
This was perfect and works fabulously! Far better than the official Google coral documentation which I haven't been able to get working yet.
When you have time... a video on how to access GPIO pins and activate them, or activate another program based on a detected class, would be super helpful. I'm having trouble figuring out how to turn the results of a detection into concrete effects (if a bird is detected, take a photo; if a squirrel is detected, turn a GPIO pin high and take a video to record the fun). Thanks for all the hard work you put into these videos!
Thanks, I'm glad the videos are helpful! I'm hoping to put out a video soon that will give an example of toggling GPIO when certain objects are detected. Really hoping to get started on it this weekend! I also want to do a video showing how to trigger video/audio recording using ffmpeg.
@@EdjeElectronics Yayyy, looking forward to the former !! Great content
@@EdjeElectronics In case you haven't seen it, Pyimagesearch has a nifty KeyClipWriter that looks like it might be a good way to record the video, not just of the action frames but storing the frames in a buffer and saving the entire event to video including the frames prior to and immediately after the event is detected. That blog post is "Saving key event video clips with OpenCV."
@@jasondegani Thanks for the heads up, I will check it out! I love PyImageSearch 👍
Can someone help? I'm trying to control a servo motor once TF detects a specific object. Thank you
Sorry, I don't know that
Has anybody figured out how to toggle GPIO in real time when object XYZ is detected?
Are you planning to use MQTT to start/stop the motor? That will work.
Nice job! Had an issue, reviewed the comments, reinstalled Raspbian, followed the video, all working. Thanks for sharing!
This is an outstanding tutorial.
This is the best tutorial I've seen on YouTube, thank you so much!
Can I download your bird/squirrel/raccoon model anywhere?
I really love your channel. I will also credit your Github repo in my project submission.
Keep up the awesome work
Thanks for the video; however, I'm having a lot of trouble running get_pi_requirements.sh. I'm getting "unable to locate" and "[Errno -3] Temporary failure in name resolution" errors.
Is there a way to do text detection/capture? For example, reading street signs?
This looks like just what I need for a project. Thank you for this. Very good video.
Can I ask: can we train our own model on TensorFlow Lite?
I followed your previous tutorial for training my own model on a Pi 3. It was good, but slow.
Here's my GitHub guide showing how to train your own TensorFlow Lite detection model! github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
Wow! The best guide for TensorFlow Object Detection! Thank you sir!
Can we use this to make a smart traffic light that differentiates between a normal vehicle and an emergency vehicle such as an ambulance? Can you make a video to demonstrate this, or help me out with any link? I will be obliged.
Yes you can, that would be a cool project! I don't have time to help, but check out my Pet Detector video, that might give you some ideas for how to control a program based on what is detected. ruclips.net/video/gGqVNuYol6o/видео.html
Nice to watch this video on RUclips! Thank you!
Can you do this on an old PC or laptop as well? And can you accelerate this process with a graphics card? @Edje Electronics
What an amazing tutorial, thanks man👌🏻👍🏻
I am new on this and perhaps this is a silly question:
I am running a headless RPi, connecting via SSH. I've done everything in this tutorial except the last part, where I have to execute the Python code. But when I run it with "python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model"
I got this message:
": cannot connect to X server"
Has anyone faced the same issue? Is it correct to run the Python code over SSH? If not, do I need the Raspberry Pi OS desktop version instead?
Thanks in advance!
Unfortunately, it doesn't work with a headless RPi connected over SSH. The "X server" error message occurs because it's trying to display an image to the screen, but there is no screen. You'll have to either use a desktop version, or modify the code so it just saves image files instead of trying to display them.
Nice cat picture btw 😺
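For anyone who wants the "save image files instead" route, a rough sketch of the kind of change meant here (frame comes from the script; the output folder and frame_count counter are made up for illustration):
# instead of displaying the frame with cv2.imshow(...), write it to disk:
cv2.imwrite('/home/pi/tflite1/detections/frame_%06d.jpg' % frame_count, frame)
frame_count += 1    # initialize frame_count = 0 (and create the detections folder) before the main loop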
@@EdjeElectronics Many many thanks mate, now I get it, I also did some research in blogs and they pointed out to the same.
About my profile pic, long live cat lovers 🐈 haha 👍🏻
Cheers!
Thank you! I really appreciate your efforts in clearing up how to get this working. So far things are working great after your set up instructions. I will be trying to set up some custom objects to detect and passing the locations via I2C to an Arduino. I'm looking forward to trying it with the USB Coral unit soon.
Gregory Mazza hey Gregory, curious to know what kind of objects you are trying to detect. I’m working on my own algorithms and was wondering if you’d like to share information, thanks. My email is jatinderm19@gmail.com.
GitHub keeps asking me to log in when I try to download the packages, and it keeps rejecting it. What should I do?
I am having the same issue.
Check the link you’re using. A git:// url requires a login, an url doesn’t.
You the real MVP keep making content!
Hi, nice video. Is it possible, when a bird is detected, to turn on an LED light or send a pulse?
I have a similar project: the Pi automatically tracks down the object, e.g. a raccoon or human for my project (you can train your own model using OpenCV), "fires" a laser at the target, and sounds the alarm.
My project is based on this: www.pyimagesearch.com/2019/04/01/pan-tilt-face-tracking-with-a-raspberry-pi-and-opencv/
Thank you for this video. This appears to be the material I needed to run a tflite object detection model from a pi cam.
What was your setup right at the beginning of the video in the car? How did you record the screen? What type of connection did you use to connect to the Pi?
thanks for the cool tutorial!
I had my Pi plugged into a monitor and recorded the screen using this HDMI recorder: www.amazon.com/gp/product/B00KMTYPXC . Looks like it's no longer available on Amazon, but you should be able to find something similar!
@@EdjeElectronics Thanks!
Just got this up and running!!! Just fantastic!! Had to uncomment some lines in the config.txt for my VGA monitor.
Could you help me with some bugs I'm having?
@@nectaligironperdomo7219 What step did it bomb out on? Do you have any error messages?
I used a Raspberry Pi 4 with 4 GB of RAM.
Thank you 🙏 very useful tutorial
Thanks so much for this! Far better than the google documentation which I found to be as clear as mud
Hey, I'm running the Bullseye OS on a Raspberry Pi 4 B. I can't seem to get past the problem with running the .sh script.
Same here
I think part of the problem is that the .sh script downloads programs whose versions have changed and haven't been updated in the script, so they aren't downloading/working correctly. But I can't figure out which ones they are so I can get the updated ones.
Thanks man I was looking for something exactly like this
I'm more interested if it can read and log license plates.
Thanks for the tutorial, it works perfectly. I got around 1.5fps with NoIR camera v2 (8MP) and Pi 3+.
Sir, first of all, good luck with your work. For my Artificial Intelligence class I need to do a Food Recognition-style assignment with TensorFlow Lite, but I'm having problems training my own model. If you were able to get this application running, could you help? Many thanks in advance.
May I know how much RAM your Pi 3+ has?
Hi, can I know how to write it so that if the label is "person" it rotates the motor, and if not it continues running?
Hello, please watch my Pet Detector video. It explains how the variables work and gives an example of how to trigger actions if certain objects are detected. Good luck! ruclips.net/video/gGqVNuYol6o/видео.html
Wow bro, among so many tutorials on RUclips this one is unique and fits my next project. If you have something similar but using PyTorch, it would be highly appreciated.
Great instructions! I use the Pi 4 in 64-bit mode, idk if that is related or not, but I did have an issue with the version of OpenCV not being installed. This was resolved by:
pip install --upgrade pip
pip install opencv-python
Just posting this in case anyone else gets the "no matching distribution" error; this should do the trick.
I'm looking to export label names as they come in / are recognized by the Pi. Does anyone happen to know where that string variable is? (For context, as a current student project, I am looking to pass this name on to another microcontroller for a project I have been working on. And now that I can "kind of" train a model, I would like to find this variable before moving forward.) Any and all help would be much appreciated.
The label names are held in the "object_name" variable. If you add a "print(object_name)" line after line 183 in TFLite_detection_webcam.py, it will print the name of every detected object on every frame.
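To illustrate, this is roughly the relevant part of the loop; only the print line is new, the surrounding names come from the script:
for i in range(len(scores)):
    if (scores[i] > min_conf_threshold) and (scores[i] <= 1.0):
        object_name = labels[int(classes[i])]   # look up the label for this detection
        print(object_name)                       # or send this string to your microcontroller over serial/I2C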
@@EdjeElectronics Thank you so much for responding This genuinely helps a ton.
Hi, Just got the pre-compiled model from your Part 2 with the Coral Accelerator up and running at 20 to 24 FPS with the standard Edge TPU runtime, however, I had to make a small change. The first time through running the webcam detection script (in 2C) strangely it wasn't looking for detect_edgetpu.tflite but edgetpu.tflite, so I copied the detect_edgetpu.tflite model to create an edgetpu.tflite and restarted. Worked like a charm! Thanks for the awesome tutorial!!! Next steps learning to compile my own models.
Awesome! I'm glad you were able to get it working, and thank you for the feedback! It looks like I have a mistake in my guide, I meant to rename the file to just "edgetpu.tflite", not "detect_edgetpu.tflite". I will change the guide to fix it!
Training, converting, and compiling a TFLite model is quite the process! There are a lot of steps, but if you stick to it you should be able to get it all working. Please create an issue on the GitHub page (or comment here) if you run into any problems following the guide!
@@EdjeElectronics Okay thanks!
I've successfully implemented transfer learning using classifier models like ResNet50, so hopefully that will help with this process.
Could it be done on Ubuntu MATE? I have a Rock64 and I'm curious if it can be done on a Raspberry-like board.
Yeah, it should work there. Raspbian and Ubuntu are both based on Debian after all. And, I'd be surprised if your PC doesn't hold up to a Raspberry Pi. All the steps should be the same.
this whole video is blowing my mind.
Thanks a lot for this video, but I just ran into a problem with this:
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 19, in
import cv2
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/cv2/__init__.py", line 3, in
from .cv2 import *
ImportError: libjasper.so.1: cannot open shared object file: No such file or directory
I am having the same issue
I solved the problem by downloading this version of the model instead:
wget storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
and unzip:
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d Sample_TFLite_model
Amazing vid! I feel like this is the start of an amazing channel.
Couple of questions: I have an RPi 4 as well, with the RPi cam.
I wanted to set up the RPi as a basic IP cam for streaming only, no recording, but the FPS is extremely low (15 FPS max). The idea was to see how high it could go. So I guess I'm asking how high it could be, and also, in the last seconds of this video did you achieve 20 FPS with the Coral connected?
Finally, could it be trained to identify people?
Thanks. I'm now wondering about setting up tensor flow 24/7 on the house server to monitor the babies 🤣 maybe make a video on that ❤️
Hey, thanks for the video, it really helped me a lot.
But I have a question: how can I detect from any website, like from a YouTube URL?
Please help me, I have to complete my project and I am confused...
And again, thanks for the video.
Use web scraping...I guess that'll help.
Really the best guide I found. Thank you!
Hello Evan! Thank you very much for your tutorial, it was a great pleasure to learn from you. Hope you will do more projects like that!
I successfully repeated your project with my custom model about a month ago (I got my model from Google Cloud). Yesterday I built another model with a different dataset and ran into some trouble with implementation. The error says:
ValueError: Op builtin_code out of range: 130. Are you using old TFLite binary with newer model?
I found out they updated their conversion to the TensorFlow 2.5 runtime. I guess this is the problem; maybe you know how to fix it?
I tried manually updating the tflite-runtime package, but it did not help.
@@GenadiJai Thanks, I'm glad the tutorial has been helpful! Hmm, if you updated tflite-runtime and you're still getting that error, then I'm not sure what the problem is. Can you check the version of tflite-runtime you're using on the Pi and the version of TensorFlow that you used for building your model? You should be able to use this to check the tflite-runtime version:
import tflite_runtime
tflite_runtime.__version__
@@EdjeElectronics thank you very much for your response.
The version of tflite_runtime on raspberry pi is 2.5.0
and Google cloud uses TensorFlow 2.5.x (latest patch)
cloud.google.com/ai-platform/training/docs/runtime-version-list (package list)
I am recreating your tutorial this week!
Hi, the tutorial is really great, but is there an option to access the Raspberry Pi GPIOs?
Can somebody help me please? I am under a little time pressure.
OK, I found a solution.
Activate the virtual environment:
cd tflite1/
source tflite1-env/bin/activate
pip list #shows all installed packages
pip install rpi.gpio
@@stefanm2059 Thanks for sharing your solution! 😃
Great video, got it working on my RPi3 + Pi Camera. Just getting 1 FPS but hey, it works! :)
Does it work on the Pi Zero?
Nice video! Is there a way to let this detect number plates in a video or pictures and pixelate them?
yeah ofc
@@weslyvanbaarsen666 Do you know how? I don't program a lot and I don't know how rn
@@DashcamDriversGermany Well, you would use the TF API to act on the detections by applying a pixelation effect to the detected object region.
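Roughly, once you have a detection's bounding box, the pixelation itself is just a shrink-and-enlarge of that region (a sketch; it assumes ymin/xmin/ymax/xmax are already in pixel coordinates):
# crop the detected region, shrink it, then blow it back up to pixelate it
roi = frame[ymin:ymax, xmin:xmax]
small = cv2.resize(roi, (16, 16), interpolation=cv2.INTER_LINEAR)
frame[ymin:ymax, xmin:xmax] = cv2.resize(small, (xmax - xmin, ymax - ymin), interpolation=cv2.INTER_NEAREST)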
Can I adjust the code to detect only one specific class, like a person?
Yes, use Google.
Great video.
Any tutorial on using TensorFlow with Home Assistant?
I am also interested in this
I want to light up an LED when a car is detected. What would the changes be?
Just configure single-class object detection and, when a car is detected, light the LED for .. sec.
@@keshavharipersad2024 Can you provide me code for where I can put this? I trained my custom object detection model with TF Lite... I want to light an LED when my custom object is detected.
@@Satish_Lakhan29 Yeah... I was working on it for you... it really is hard to do... I could not find a solution so far.
Arduino. Modify the script to send a trigger over serial to the Arduino and trigger a function on the Arduino.
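For those asking above, a minimal sketch of the pure-GPIO route (it assumes RPi.GPIO is installed inside tflite1-env as described in another comment, and reuses the object_name variable from TFLite_detection_webcam.py; pin 17 is just an example):
import RPi.GPIO as GPIO

LED_PIN = 17                       # example BCM pin number
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

# at the start of each frame:
car_seen = False
# inside the per-detection loop, after object_name is set:
if object_name == 'car':
    car_seen = True
# once per frame, after the detection loop:
GPIO.output(LED_PIN, GPIO.HIGH if car_seen else GPIO.LOW)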
Thanks, this is exactly what i needed to get started with TensorFlow
Thank you for the video,
but I had this error when trying to open the Pi Camera:
VIDEOIO ERROR: V4L: can't open camera by index 0
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 171, in
frame = frame1.copy()
AttributeError: 'NoneType' object has no attribute 'copy'
Can you help me with this?
I don't know but I sometimes have this problem after 1 to 2 hours of use ...
For me, it seems to come from my camera...
@@ZEDketa You need to change the index from 0 to -1 on line 32, and modify the code around line 171 in TFLite_detection_webcam.py to:
frame1 = videostream.read()
if frame1 is None:
    break
frame = frame1.copy()
@@sribharathsajja5736 Any chance you have another solution to this problem? I've googled this and feel like I've tried everything. This fix didn't work either.
This is a great video. It'd be really handy if it could link to an RTSP stream from my existing NVR
Have you tried the TFLite_detection_stream.py script? That should work for an RTSP stream - just need to give it the correct IP address. github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/TFLite_detection_stream.py
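If you want to sanity-check the stream outside that script first, OpenCV can open an RTSP URL directly (a sketch; the address and credentials are placeholders for your NVR):
import cv2

cap = cv2.VideoCapture('rtsp://user:password@192.168.1.50:554/stream1')  # placeholder URL
ret, frame = cap.read()
if ret:
    print('Got a frame of size', frame.shape)   # these frames can then be fed to the detector
cap.release()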
Hi Edje, thanks for the tutorial. The object detection works, or certainly looks perfectly fine to me, but after I run it, at first it says:
'HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory'
Could you please help me solve this issue :)
A few people have gotten this error! I haven't had time to look into it yet. Can you tell me which Raspbian OS you are using? Buster or Stretch?
Edje Electronics Buster, 4.19
@@EdjeElectronics I am also getting this same error on Raspbian GNU/Linux 10 (buster)
I'm also getting this error on Buster. Any straightforward solution yet?
Thank you so much! I have all the components for Rpi 4 + Coral, so very much looking forward to your next installment.
0:02 Hotel Baxter?! HOLY SHIT! It's my home town of Bozeman!
Haha yep!! I'm from Great Falls originally but living in Bozeman now. It's a great place to live! Check out my Raspberry Pi 3 vs Raspberry Pi 4 video, it's mostly footage of me driving around Bozeman :) ruclips.net/video/TiOKvOrYNII/видео.html
Great video Ed. I thought you were Dutch. Edje means Little Ed in Dutch. Enough. Time to experiment with Tensorflow Lite now. Cheers!
This is an awesome tip, bro.
Thank you.
I need to dig in a little bit to make it work :)
Is this compatible with the Raspberry Pi 5?
I definitely would like to know as well. Been really struggling to get a coral TPU model to run on raspberry pi 5 with the latest OS...
Hit us with that coral vid my guy, just got mine in the mail and ready to figure this puppy out.
Awesome, it will be out in a couple weeks! I JUST finished writing the GitHub guide showing how to set up the Coral USB Accelerator. If you want to use it, check it out here: github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md#section-2---run-edge-tpu-object-detection-models-on-the-raspberry-pi-using-the-coral-usb-accelerator
@@EdjeElectronics Brilliant. Just ran through it and am up and running! Now I need to start demanding the custom model tutorials! Thanks brother, you're doing good stuff.
P.S. For anyone wondering, I'm getting about 18-20 FPS running the standard-speed TPU file. That's an upgrade from the 3-4 FPS running standard TensorFlow Lite.
@@kyleheppler2860 Sweet, thanks for being a guinea pig to test my guide! Glad it worked for you. Haha if only I had more time! The custom model tutorials will be out in January. Thanks again friend!
58% chance his guitar is a person.. lmao 😅😅😅
Near the end...
@@jmart6438 can not compute
@@jmart6438 pretty sure I was cracking a joke... 🤔
✌️
@@sheepleslayer586 they deleted their comments lol
Keep uploading this kind of video about the RPi!