I've watched your other videos too, and as a vision professional myself I'll say this... you, sir, are on a whole different level. This combination of ideas of yours is classic.
Thank you sir ! I really appreciate your kind comment.
How does someone dare to hit the dislike button on such a video? This project is absolutely amazing! 😲
Really amazing!! Love it
Using a simple Tello drone to achieve that is amazing!!
As someone who is learning machine vision, watching your video feels like a painter discovering Bob Ross for the first time. Beautiful!
Thank you for your nice comment !
@@geaxgx1 How do you get the video so fast? I am on Windows, and I have only been able to get the video at very low FPS with lots of delay.
@@ShredderJT The fps you can get depends on your GPU. I was using a gtx 1080ti.
Amazing video! I unfortunately am not good at programming, but it's awesome to see the capabilities of the Tello extended without any extra apps.
You are a genius, incredible utilization of the Tello.
I'm absolutely amazed. This opens totally new horizons. Thanks.
Excellent project! Thank you for sharing.
Really love this video and the code! You demonstrated perfectly what can be done even with a toy drone.
Impressive job!
Thanks for sharing your work and a bit of behind the scenes! 👍🏻
Really impressed by your project!! Keep up the good work, subscribed to your channel as well.
Man, this is so cool! Thank you for sharing!
Just amazing! Congratulations, impressive project!
This video is so well explained that I can finally picture how to create an object tracking algorithm!! Thank you so much!!
One ML engineer to another, dude, I am not worthy, great job!
Congratulations! It's so cool man, I've loved it.
Definitely inspiring.
Congratulations ! great job, very impressive
Fantastic work! Thanks for sharing. This is definitely going on my list of projects to try out
Amazing project... Simply Fantastic!!
Awesome! Very good idea & video. Impressive!
So inspiring! Thank you, Sir!
Thanks a lot for the code, sir! I will teach kids using your examples!!
This is Amazing, well done and thanks for sharing!
genius. really great ideas and thanks for the inspiration and the code!
Amazing work, building on it
this is a great project! great job! open pose looks really impressive. ive been wanting to try this in conjunction with a vr helmet to have a realtime 3rd person ar camera with a mix of computer vision and human input to control it. imagine adding cinematic angles, pans, slides, and zooms that could be triggered and strung together!
Very slick!! Nice work.
That is huge! Thank you!
Wonderful job sir 👍👏
Smart project, sir... I was seriously looking for something like this. Thanks for sharing.
love all of this, love you sir
Good job, regards from Mexico
I love you. :)) You have some of the coolest programming projects out here.
That was seriously impressive. Thanks. :)
Great work
Great work!
Wow!! Excellent!!
Thank you for sharing... nice music ;)
When you search for a project with a drone and OpenCV and find that a guy already did all the hard work :D
This is just brilliant! I have the drone already, not used too much, but this would be perfect. If this were able to run on an RPi Zero W or something like that, that would be amazing...
Thx! Unfortunately, to run Openpose, you need something more powerful than a Pi.
What about a cellphone? Like a Snapdragon 855?
Well, you can run a pose estimator on a phone (if not OpenPose, you can use another, maybe lighter one like PoseNet). But I am not sure it can run fast enough to use it to "comfortably" control the drone. Imagine you can only decode 2 frames/sec: you will need to limit the max speed of the drone to a low value, otherwise the chance of hitting a wall is too big. In any case, you will need to tune the settings of the PID controller according to the decoding rate.
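To make that trade-off concrete, here is a small illustrative sketch of how one might derate the speed cap and PID gains with the decoding rate. All names and values are assumptions for illustration, not taken from the tello-openpose code.

```python
# Hypothetical sketch: lower the drone's speed cap and PID gains when the
# pose estimator runs slowly. Values are illustrative assumptions only.

def tune_for_fps(fps, base_max_speed=50, base_kp=0.30, base_kd=0.05):
    """Scale the allowed speed and PID gains with the decoding rate.

    At 2 fps the drone travels a long way between two corrections,
    so both the speed cap and the derivative gain must shrink.
    """
    scale = min(fps / 25.0, 1.0)          # 25 fps taken as the "comfortable" reference
    max_speed = base_max_speed * scale    # e.g. ~4 instead of 50 at 2 fps
    kp = base_kp * scale
    kd = base_kd * scale * scale          # derivative term is noisier at low rates
    return max_speed, kp, kd

print(tune_for_fps(2))    # low-rate phone estimator
print(tune_for_fps(25))   # desktop GPU
```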
This is so cool 😁😁😁 nice work!!
Nice to see you here, Edje !
This is amazing!
Well done !!!!😄
Really cool, I want to make my own now.
Amazing project!
Wow! Excellent job :)
WOW, Amazing!!
Incredible!
What you did is amazing
Thank you for sharing
this is solid! good job!
Nice! One thing you might try: using the Tello to take images for photogrammetry. I might buy one and give it a go myself.
Thx! I have already tried photogrammetry with the Tello. It was outside and using frames from the video stream, with a poor result, probably because of the low resolution of the frames. I will try again, but this time by taking 5 MP pictures.
@@geaxgx1 The best advice for photogrammetry is to take pictures with as sharp a focus as you can. I live in the UK, so there's a lot of overcast weather, which is almost perfect for photogrammetry (avoid hard shadows). Having a programmable drone that can take images around an object seems like a perfect use case, though.
Woah!!! 😍 Amazing 🤩🤩
Excellent!
cool, gonna try it.
Subscribed, you've inspired me to do something similar, thanks!
Thx, it is rewarding to read that !
This is the most awesome project I have seen!!!
great stuff! subscribed!
Fabulous!
Hi there, saw your video on Reddit and it prompted some questions that I'd like answered if you'd be so kind...
1. Can it be programmed to focus on one target?
2. What happens with multiple targets in the room?
3. Can more expensive drones be programmed this way?
If there are multiple bodies in a frame, the program currently selects the one which appears to be the biggest. The size is evaluated by measuring the length of some segments of the skeleton. Often it will be the body which is the closest to the Tello. If two bodies appear to be similar in size, the Tello may "oscillate" between them. To make the Tello focus on one particular target, we could use face recognition, so that the Tello can distinguish between several faces. Face recognition alone wouldn't be enough because it works well only when the face is facing the camera. By using tracking as a complement, we should be able to keep the focus when the face is no longer facing the camera.
I am not very familiar with drones, but I know that some more expensive drones can already do that kind of thing (body tracking and gesture recognition), and in a more efficient way than mine (because their program is embedded in the drone and they certainly don't have the processing power of a GTX 1080 Ti on board :-)
@@geaxgx1 thanks for taking the time to respond. It was a great video with an interesting concept
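For readers curious what "selecting the biggest body" could look like in code, here is a minimal sketch under the assumption that each detected body is a dict mapping keypoint names to (x, y) pixel coordinates (or None when missing). The segment choice and keypoint names are illustrative, not the exact ones used in the project.

```python
import math

def segment_length(body, a, b):
    # Length of one skeleton segment; 0 if either keypoint is missing.
    if body.get(a) is None or body.get(b) is None:
        return 0.0
    (xa, ya), (xb, yb) = body[a], body[b]
    return math.hypot(xb - xa, yb - ya)

def body_size(body):
    # Sum a few stable segments: shoulder width and neck-to-hip distance.
    return (segment_length(body, "right_shoulder", "left_shoulder")
            + segment_length(body, "neck", "mid_hip"))

def pick_target(bodies):
    """Return the body that appears biggest in the frame,
    i.e. usually the one closest to the drone."""
    return max(bodies, key=body_size) if bodies else None
```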
You have done an excellent job it's amazing ✨😍🙌
impressive!
Absolutely awesome!!
Very cool.
Awesome
Epic!!
I love computers, tech, and quadcopters...
Do you think you could make a video or explain somehow how to do all that? I mean, how to install everything that's needed on the Tello to do these hand tricks? Thank you :)
You don't need to install anything on the Tello itself. But you need a PC with a powerful NVIDIA GPU and to install a few packages/libraries, as explained on GitHub: github.com/geaxgx/tello-openpose
You're excellent!
amazing!
Amazing. Truly. This could replace cameramen. Lol.
Amazing
Respect sir.
What an awesome project. I wonder how it will behave when it sees more than one person?
Thx! If there are multiple bodies in a frame, the program currently selects the one which appears to be the biggest and ignores the others. The size is evaluated by measuring the length of some segments of the skeleton. Often it is the body which is the closest to the Tello.
Great work! With an RTX 2070 Super GPU I am only able to achieve 30 fps before running OpenPose and 10 fps after. The frame rate seems to cap at 30 fps and 10 fps respectively and does not change even if I downscale/upscale the images.
Have you tried this project? I am having an issue at the last stage: after running the code, a window appears that shows the video, FPS, etc., but within a minute it gets stuck. No errors, it just freezes.
Unbelievable!
I want to try it when I have the Tello... 😁
I can only repeat: absolutely amazing - a very creative and impressive solution.
Do you have any experience with Xcode/Swift as a development environment in this area?
(And thank you very much for the additional info in the video.)
Thanks ! Ah no, I have zero experience on Swift.
Wow, great project! I am thinking of buying a Tello drone as well in order to learn programming that way. I already have basic knowledge of coding with Python and would like to start an ambitious project for myself. Therefore I would appreciate it if you could answer the following question: Would it be possible to go outdoors and make the drone follow you while running for, let's say, 50 or 100 meters, recording you from the side? I'm thinking of wearing a colored cap, for example, in order to make the drone follow me by flying sideways and recording me running.
Thx! From my experience, detecting a color to follow an object in an image is not reliable when you don't control the lighting conditions. Depending on the sun, the clouds or the shadows from the trees, the original color can look very different in the image. Using a person detector is probably a more robust option.
Wonderful project! You earned a new subscriber! :-)
Btw - Do you think this project can be done through Google Colaboratory in case the GPU poses a problem?
Thanks ! It is a good question but unfortunately I don't know if it is possible. The Tello drone needs to communicate with a process that is on the local Tello Wi-Fi network, and this process would also have to communicate with Google Colab.
Everything is perfect, but I want to know which hardware inside the drone is being used for the code, or for the motion controller (i.e. which hardware is used for the communication of the drone with OpenCV or OpenPose).
Thanks! I don't know precisely what hardware is inside the drone, but actually I don't really need to, since I rely on the Tellopy Python package that manages the communication between my program and the drone (via Wi-Fi).
@@geaxgx1 Okay, I know Python or other languages handle the communication between the operator and the drone, but the thing is, on which hardware is this code being run? I expect you may not know, but can you suggest where I could get my answer? I really need to know the hardware for my project that contains this code.
@@Kkkkkkkkkkkkkkkkk-g8p If you are asking what hardware is inside the drone, this page says the Tello uses an Intel Movidius Myriad 2 VPU: gobot.io/blog/2018/04/20/hello-tello-hacking-drones-with-go/ Not sure if it answers your question :-)
@@geaxgx1 I will read it. Thanks a lot for your time and for sharing what you know.
Sir, can you tell me how to connect the Tello drone to the project? Your project is wonderful, I loved it.
Thanks ! On startup, the Tello configures itself as a wireless access point. The computer connects to this access point, and the program then talks to the drone through the Tellopy package.
@@geaxgx1 Sir, can you explain it a little more, please?
Sir, it is also showing a problem: file:///C:/Users/rishi/OneDrive/Desktop/Screenshot%202021-01-07%20094741.jpg
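For anyone wondering what "connecting via the Tellopy package" looks like in practice, here is a minimal sketch in the spirit of the tellopy examples: join the Tello's Wi-Fi access point first, then run something like the script below (check the tellopy documentation for the exact API of your version).

```python
import time
import tellopy

def flight_data_handler(event, sender, data, **args):
    # Called by tellopy whenever the drone sends telemetry.
    print("Flight data:", data)

drone = tellopy.Tello()
drone.subscribe(drone.EVENT_FLIGHT_DATA, flight_data_handler)
drone.connect()
drone.wait_for_connection(60.0)   # wait up to 60 s for the handshake

drone.takeoff()
time.sleep(5)                     # hover for a few seconds
drone.land()
drone.quit()
```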
Super
Congratulations and nice work!
I'm wondering about:
- Two people are present in the same frame, what's the behaviour of the drone?
- How can you avoid collisions with walls, e.g. behind the drone while it is looking at you?
Thanks!
Thx !
- Two people are present in the same frame, what's the behaviour of the drone? The drone only follows the one which appears biggest in the image. It means that if the people are moving, the drone can switch from one person to the other, which is probably not the behaviour we would like.
- How can you avoid collisions with walls, e.g. behind the drone while it is looking at you? I can't ! The Tello has no sensor to "see" behind or on the sides. So the person who is "piloting" the drone has to be careful :-)
@geaxg1 Hi. Is there a more powerful drone that we can program with an SDK like you did? For example, a drone that has sensors and is much bigger than this one? Great job!
Are there any drones more powerful than this one (bigger, with sensors, etc.) that have SDK support? How do you handle stabilization? Great job!
Thx ! I am not a drone specialist, but yes, there are bigger drones that have their own SDK support, like the DJI Mavic (developer.dji.com/) or the Parrot drones (developer.parrot.com/). The Tello is more like a toy, but it has its own stabilization mechanism and it works great (I don't have to take care of it).
Hi. This is super amazing and inspiring. I am trying to build OpenPose for Python on a Mac and really struggling. Also, when I compiled OpenPose and ran it in CPU_ONLY mode, the FPS was very poor (0.4) on the latest Mac. The FPS in your demo is 25+. Was there anything different from the configuration specified in the OpenPose GitHub to get it to 25+?
Thx ! I was using a powerful GPU (GTX 1080 Ti) in order to get such FPS. If I had to do this project today, I would not use OpenPose but a recent model like BlazePose or MoveNet that can run much faster on a CPU.
@@geaxgx1 Thanks. Just tried the BlazePose MediaPipe sample and it is way better, getting around 15 FPS. BTW this video is really good. I have shared it with my friends too. Especially the Morse code touch was brilliant.
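For reference, here is roughly what the BlazePose/MediaPipe test mentioned above looks like with the classic MediaPipe "solutions" API on a webcam; newer MediaPipe releases use a different Tasks API, so treat this as a sketch and check the documentation for your version.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_draw.draw_landmarks(frame, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        cv2.imshow("BlazePose", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```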
Great work!! How are you classifying the pose?
Very easily, by using the coordinates of the keypoints given by OpenPose. For instance, to take a picture, I cross my hands under my neck. To recognize this pose, I just need to check that the keypoints for the right wrist, the left wrist and the neck are close to each other. To ask for an immediate landing, the left wrist keypoint needs to be close to the right ear keypoint. It is just a bit more complex to make the drone move (forward, backward, left, right), because I need to look at the angle between pairs of keypoints.
I am lucky here because I only have 6 poses in this project. If I had more poses to recognize, I would probably need to train a classifier.
@@geaxgx1 Correct me if I am wrong here: you are calculating the Euclidean distances and checking if they are below or above a threshold; you are not using a classifier.
Yes, exactly ! Euclidean distances and that kind of stuff. No need for a classifier here :-)
@@geaxgx1 thanks a lot!!
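A short sketch of the distance-based pose checks described in the replies above, assuming keypoints come as a dict of name -> (x, y) or None. The keypoint names, thresholds and scale normalization are illustrative assumptions, not the project's exact values.

```python
import math

def dist(kp, a, b):
    # Euclidean distance between two keypoints; infinite if either is missing.
    if kp.get(a) is None or kp.get(b) is None:
        return float("inf")
    (xa, ya), (xb, yb) = kp[a], kp[b]
    return math.hypot(xb - xa, yb - ya)

def is_picture_pose(kp, scale):
    # Hands crossed under the neck: both wrists close to the neck keypoint.
    return (dist(kp, "right_wrist", "neck") < 0.5 * scale and
            dist(kp, "left_wrist", "neck") < 0.5 * scale)

def is_landing_pose(kp, scale):
    # Immediate landing: left wrist close to the right ear.
    return dist(kp, "left_wrist", "right_ear") < 0.4 * scale

# 'scale' would typically be a body-size reference (e.g. shoulder width in
# pixels) so the thresholds keep working at any distance from the drone.
```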
Can you guide me on how you did the pose recognition? Did you use AI for that, or just check which points are connected to each other and in what location? Please, someone answer if you know.
I am not using AI for pose recognition, just computations on the landmark locations. For instance, to determine if the hands are crossed, I can check whether the distances between hand landmarks are below a threshold.
Hey, man! It's a great job. Thanks. I'm trying to figure out: can I program the drone to fly from its base every 15 minutes, check for small trash on the table, pick it up if there is any and throw it in the garbage, and then return to base for charging?
Great video! Are you using a "normal" Tello or the EDU version?
Thx! The "normal" Tello.
Hello, can I please know how to make OpenPose work? Everything is just fine except that.
Can you share the process of how to do face detection with the Tello drone?
In this project, I don't directly do face detection with the drone. Instead I rely on pose estimation to localize the different parts of the human body, including the head.
Sir, can I get the source code? I am working on a project in which the drone detects the face with pose estimation, follows it, and does some tasks using gestures. It will really help me with my FYP if you share the source code.
My tello drone keeps moving to the left even though I'm not commanding it to do anything. How can I stop this?
Would I be able to measure the distance without using OpenPose? I have a project to follow any object that has been classified by a CNN. The only idea I have is to check whether the detection box is large or small, which would show if it is approaching or moving away from the drone.
You have probably observed that the size of the bounding box can vary a lot from one frame to the next, even if the object is hardly moving. So you should probably use a moving average of the box size to decide whether you are approaching or moving away.
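A minimal sketch of the suggested moving average, with an assumed helper class (not from the project): smooth the box area over the last few frames before comparing it to a reference area.

```python
from collections import deque

class BoxSizeTracker:
    def __init__(self, window=10, tolerance=0.1):
        self.sizes = deque(maxlen=window)   # last N box areas
        self.tolerance = tolerance          # dead band around the reference

    def update(self, w, h):
        # Add the latest box area and return the smoothed area.
        self.sizes.append(w * h)
        return sum(self.sizes) / len(self.sizes)

    def trend(self, reference_area):
        """Return 'approaching', 'receding' or 'stable' vs. a reference area."""
        if not self.sizes:
            return "stable"
        avg = sum(self.sizes) / len(self.sizes)
        if avg > reference_area * (1 + self.tolerance):
            return "approaching"
        if avg < reference_area * (1 - self.tolerance):
            return "receding"
        return "stable"
```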
The Ryze Tello (DJI) is not a toy.
good
Dear sir, I installed all the libraries and OpenPose is running. I also ran the examples, but how do I open your project file tello_openpose_master in OpenPose?
Hey, great demo! Were you able to teach the drone to keep distance with you?
Thx! I can get an indirect and very rough approximation of the distance by computing the ratio of the shoulder width measured in the image to the image width. For instance, for the drone to keep a constant distance to me, it just needs to keep that ratio constant by moving forward or backward. Of course, it no longer works if the drone sees me from the side, because it will think I am further away than I actually am, but that was enough for the demo :-)
@@geaxgx1 Wow, that's clever! So in the situation where you want the drone to follow you while you're turning left/right, would the tracking box still detect you or would it lose you? Have you tried it? Thanks in advance!
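A hedged sketch of the shoulder-width-ratio idea described two replies above; the target ratio, gain and speed clamp are illustrative assumptions, not the project's actual values.

```python
def distance_command(shoulder_width_px, image_width_px,
                     target_ratio=0.25, kp=300, max_speed=30):
    """Return a forward/backward speed: positive = move forward
    (the person looks too small), negative = move backward."""
    ratio = shoulder_width_px / image_width_px
    error = target_ratio - ratio          # >0: too far, <0: too close
    speed = kp * error                    # simple proportional control
    return max(-max_speed, min(max_speed, speed))

# Example: shoulders span 120 px of a 960 px wide frame -> ratio 0.125,
# the drone thinks the person is too far, so the command is positive (move forward).
print(distance_command(120, 960))
```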
Hello sir,
Thank you for sharing. I want to ask which graphics card you are using for training the deep learning models?
Amazing project ! Is it possible to tilt the drone camera from 0° to 90°?
Thx! No, on the Tello, the camera is fixed.
Absolutely brilliant project. I'm going to give this a go.
The music was really bad. I'll give you one of my chilled trip hop tracks if you want.
Bach would be glad to read your comment :-)) Sure, send me a link to your tracks. If they are free to use, I may use them in a future video.
I see the tracking is very good, real-time and fast, but what if two people stand in front of the drone? Will it lock onto both people or onto the first person it finds?
In the current version, if it detects several persons, it will consider only the "biggest" person in the image, the one which has the largest shoulder width for instance. It is not a very satisfying solution because if two persons are close to each other, the focus may oscillate from one person to the other depending on their movements and their positions relative to the drone.