I can't thank you enough. You managed to resolve many problems I had doing something similar with a Raspberry Pi. Thank you.
So good to hear it! Best of luck with your projects 😃
Amazing quality content!
Thanks mate 😊
very cool
Great video!
Is it possible to recognize a logo on a golf ball, so I can send a signal to a motor and consequently have the logo on the place I want?
Thanks in advance
You could definitely train a system to recognise a golf ball and rotate it until the logo is visible. I'd recommend looking at Edge Impulse - www.edgeimpulse.com/ . It will let you create your own custom machine-learning models or customise existing ones.
You could definitely customise a COCO-trained model to recognise golf balls, which would be a great step in the right direction 😊 Come check this guide for more on Object and Animal Recognition - ruclips.net/video/iOTWZI4RHA8/видео.html
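For anyone curious what "customising COCO" looks like in practice: COCO has no golf-ball label, but it does have a "sports ball" class, so a first step is filtering a COCO-trained detector's output down to that class. A rough sketch (the model and its actual output format are assumed; `golf_ball_candidates` is a hypothetical helper):

```python
# Hypothetical sketch: keep only confident "sports ball" detections
# from a COCO-trained detector. Model loading and inference are
# omitted; detections are assumed to arrive as
# (class_name, confidence, bounding_box) tuples.

def golf_ball_candidates(detections, min_conf=0.5):
    """Return only 'sports ball' detections at or above min_conf."""
    return [d for d in detections
            if d[0] == "sports ball" and d[1] >= min_conf]

# Example: only the confident sports-ball detection survives.
dets = [("person", 0.9, (0, 0, 50, 100)),
        ("sports ball", 0.8, (60, 40, 20, 20)),
        ("sports ball", 0.3, (5, 5, 10, 10))]
print(golf_ball_candidates(dets))  # → [('sports ball', 0.8, (60, 40, 20, 20))]
```

From there you'd retrain or fine-tune (e.g. with Edge Impulse, as suggested above) on labelled golf-ball photos to get a logo-aware model.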
@@Core-Electronics Thanks a lot!
Great post.
Glad you enjoyed it
Does adding nodes for facial detection decrease the speed in which the program runs?
The more nodes, the more computation the Raspberry Pi single-board computer has to do, which does mean a lower FPS on the preview window.
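If you want to see that tradeoff on your own Pi, you can time the per-frame cost directly. A rough sketch, where `process_frame` stands in for your detection-plus-drawing step (the dummy workload below is just for illustration):

```python
import time

# Sketch: measure effective FPS by timing how long N frames of
# processing take. More landmark nodes means more work per frame,
# which shows up as a lower FPS number.

def measure_fps(process_frame, n_frames=100):
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Dummy stand-in workload: cost scales with the number of "nodes".
def dummy_frame_with_nodes(n_nodes):
    return lambda: sum(i * i for i in range(n_nodes * 1000))

few = measure_fps(dummy_frame_with_nodes(1))
many = measure_fps(dummy_frame_with_nodes(50))
print(few > many)  # heavier per-frame work gives a lower FPS
```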
Great!
Can we use this to detect where a person is looking at ( gaze tracking)
How much different is the performance if I only have a Raspberry Pi 3b+
I haven't experimented with it. The clock-speed difference between the two is only 0.1 GHz, but you will be more restricted by the RAM. Knowing that, it should work, just a little slower. core-electronics.com.au/tutorials/raspberry-pi-generations.html
You implemented this with pi 3b+?
Can we do face recognition on rpi4 from 2 cameras?
And will the performance depend upon the ram of the board
Awesome video but the link to the Full Article points to another article: "Hand Recognition and Finger Identification with Raspberry Pi and OpenCV" and not to "Face Landmark Detection & Pose Estimation With Raspberry Pi + OpenCV". In addition to that, I can't find where to search for articles on your site
Thank you hugely for telling me this, I've fixed it now 😊! From this page here you'll be able to filter tutorials by tags - core-electronics.com.au/guides/raspberry-pi/
Would you be able to show us how to do this using opencv on the 'Bullseye' OS?
Absolutely, once the compatibility between the packages is there. Undoubtedly the OpenCV and Raspberry Pi teams are working furiously to make this happen. It is very easy to revert to the earlier OS, however; just take a look at the guide here - core-electronics.com.au/tutorials/flash-buster-os-pi.html
@@Core-Electronics thank you so much for this guide! I’m working on an important project in college right now and this will come in handy!
Does anyone know if it's possible to gather z coordinates as well to more precisely put the viewer in a 3d environment?
I want to make a bodyguard robot that protects me using punch and aggression detection.
When the robot detects a person approaching to hit me,
it tries to shoot them with airsoft balls.
So, is there punch detection? Say I am the victim, and a person approaches and punches me:
can the camera detect that it is a punch, so the robot reacts by shooting the attacker with airsoft balls? The camera should still treat me as the victim, so even if I hit back, the robot never shoots me, and it stays focused on the person attacking me, shooting them until they leave.
And how do I code all of this on the Raspberry Pi?
Please help me if you can.
I love the project! Although I hope nobody ever hits you and that you're safe! Come hit up our forum if you want to flesh out this idea 😍 I've seen AI systems that are aware when people are acting unsafely - arxiv.org/pdf/2002.04355.pdf, ruclips.net/video/r8vcGFQ5qfU/видео.html - so what you desire is definitely within the realms of possibility.
Hoping all the best for you, champ.
do you have any codes or a little tutorial that can help me and thank you very much
Give this a look in regards to aiming the balls - core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/
I'm using a Raspberry Pi 4B and a Raspberry Pi Camera v1 (5 MP). I applied your script, but it said, "WebCAM Read Error."
The message "INFO: Created TensorFlow Lite XNNPACK delegate for CPU" also appears. How can we resolve this?
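For anyone hitting the same thing: the XNNPACK line is just an informational message, not the failure; the real problem is that the camera read is failing. A rough sketch of the usual checks, assuming `cap` behaves like an OpenCV `cv2.VideoCapture` (the fake capture here is only for illustration):

```python
# Sketch: diagnose a "WebCAM Read Error". First confirm the capture
# device actually opened (wrong device index, or camera not enabled
# in raspi-config, are common causes), then retry a failed read a
# few times before giving up.

def read_frame(cap, retries=3):
    """Return a frame, or None if the capture keeps failing."""
    if not cap.isOpened():
        return None  # device never opened: check index / camera enable
    for _ in range(retries):
        ok, frame = cap.read()
        if ok:
            return frame
    return None  # opened but reads keep failing: check cabling/driver

# Example with a fake capture that fails once, then succeeds.
class FakeCap:
    def __init__(self):
        self.calls = 0
    def isOpened(self):
        return True
    def read(self):
        self.calls += 1
        ok = self.calls > 1
        return ok, ("frame" if ok else None)

print(read_frame(FakeCap()))  # → frame
```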
A possible application for mute people: hand-sign to text or speech.
Our guide on hand tracking and gesture control might be a better fit ruclips.net/video/a7B5EZVHHkw/видео.html
Great 👌🏻
More comments will come... being a pioneer is better than being someone who can't be one.
Oh, so he's Indian anyway)