One of the most professionally made video tutorials on the whole of YouTube, keep making brilliant videos like these.
I find this to be the best video in the mediapipe series. Congratulations on the gift you have, and thanks for sharing it with us.
Thanks sooo much @Hargobind, so glad you enjoyed it!!
I wanted to detect my hand pose as an object. So I went to TensorFlow: 2 days of configuring. Made autolabeling with GroundingDINO: 3 days of configuring. And now I found this. One hour and the model's ready to detect in real time... DAMN, what was I doing those 5 days?!
And thanks A LOT for your effort, especially for that particular part where you showed how to add a bunch of new stuff to the existing model.
Hi, could you send the requirements.txt contents, please?
I just discovered your channel and I'm obsessed. Thank you so much for doing such great content🙏🏻🙏🏻🙏🏻
Thank you so much @Frieda, so stoked you're getting value from it!
Me tooo !!!!!
One of the best programming guides out there! I love how you explain in detail what you did and why you did it; it helps someone like me who is learning just from YouTube! I'm gonna try to create my own project based on this program for boxing, which can detect what type of move a person is doing and give pointers if the person is not using proper technique. Wish me luck!
🙌🙌🙌 sending you all the luck, you'll smash it @Francis!
The most professional, easy-to-understand-and-implement tutorials on YouTube. You really are the best.
Hey, I'm working on body and face emotion detection for my paper, and I couldn't find sources that could help me until I saw your video. The way you explain every bit of code is really appreciated; this actually sparked that interest in coding for me again. Thank you for being very caring, you're amazing.
So awesome! Is there any way to control a 3D model in Blender using this body estimation technique? Like motion capture?
Haven't seen it in blender but I've seen it in Unity using the Barracuda framework!
@@NicholasRenotte could you please link me a tutorial?
@@KriGeta don't have anything yet but will shoot it through once it's up!
@@NicholasRenotte that's great 😍
Your tutorials are the best tutorials I have seen in my life, congratulations!
Thanks a tonnn @Eduardo!!
Hi Nicholas, great stuff. I thoroughly enjoy your tutorials as they are mind blowing, I'd love to have a tutorial on pose deviation comparing poses of two people. Awesome work man!!!
Awesome usecase, I'll add it to the list. Never even thought about comparing poses!
Looking forward to watching this later!! Thank you for all the quality videos. Have a great one!
Anytime @Isaac! Thanks a bunch 🙏
The 210th comment is dedicated to the best content on YouTube. Period! Though late, I am lucky to have discovered your channel. Subscribed and notifications turned on!
Anytime my guy, better late than never, WELCOME TO THE FAM!!!
Man! Your tutorials are really really cool!! And DAMN RIGHT we want a bigger data science series!!!!!
Awesome work man!!! Cheers!!!
YESSS! So glad you enjoyed it!
@@NicholasRenotte Mann! I thought you wouldn't respond!
I'm in my junior year of college and am currently doing a machine learning project. I have a few doubts and I believe you could help me with them. It would be great if you could tell me how to reach out to you so I can get them clarified!
@@srirammadduri8078 nah man! I try to get to all comments. Hit me up on Discord, I check every night, and if things get crazy, every second night. bit.ly/3dQiZsV
Your videos are very descriptive and useful. Your content is of high quality. Thank you.
Awesome content!! I know I mentioned this in a previous video but I would still love to see a “virtual coach” type implementation. Something that goes beyond just static poses and actually tracks a movement's key points over time, and could detect form quality by comparing them to a “good form” and “bad form” example.
Exactly.
Yes!!
Yup! Got it planned but it'll be a longer tutorial, just taking a little recovery break from long vids this week. Should do it in the next couple of weeks though @Caleb!
No hurry, you are already putting out content at a crazy pace! Take that break!
@@CalebSchantzChristFollower oh thanks man, I honestly needed to hear that it's okay! Been feeling a little bad I haven't hit my two a week since I released the big TFOD tutorial.
So can I actually use this to detect more sign languages using those joints? Man, you've actually helped a lot in giving people ideas and inspiration. Will always support your YouTube channel!
Thank you so much @Nurul! You could, particularly with the hand models! Could even pass them to an RNN for action detection!
@@NicholasRenotte I tried to look online for documentation on how you would pass this to an RNN for action detection, but didn't find anything significant. Do you have documentation or a video you could share with us? :D
@@yuriemond7340 definitely, take a look at some of these action models: tfhub.dev/s?q=action
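For anyone looking for a starting point, here's a rough, hedged sketch of the idea: stack a sequence of flattened holistic keypoints per example and classify the sequence with an LSTM. The shapes are illustrative (1662 = 33*4 pose + 468*3 face + 21*3 per hand), and the random arrays are placeholders for real collected sequences.

```python
import numpy as np
import tensorflow as tf

num_frames, num_features, num_actions = 30, 1662, 3   # assumed sizes

# Placeholder data: replace with real sequences of landmark vectors.
X = np.random.rand(100, num_frames, num_features)
y = tf.keras.utils.to_categorical(np.random.randint(0, num_actions, 100))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(num_frames, num_features)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(num_actions, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10)
```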
Your channel has helped me so much when working on my dissertation. Thank you 🙏
YESSSS, go getem Marc! Hope you smash it out of the park!
Hey, can we use this to do real-time sign language? If we can, please do a video on it. Thank you!
Bruce! Heya, yup, got something in mind!
Good job brother. I will always appreciate the tasks that you do.
Thanks so much @Priyam!
Thank you so much!!!!!
Please, can I do the same steps for sign language recognition instead of using mediapipe+LSTM?
Thanks for the great video as always!! May I ask a question: if I want to count how long a specific position is held, what should I do? Do you have any suggestions? Thanks in advance.
Could look at counting the number of frames that position had the top score!
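A minimal sketch of that frame-counting idea, assuming roughly 30 FPS; the `body_language_class` variable name is assumed from the tutorial's detection loop.

```python
fps = 30                                 # assumed camera frame rate
held_frames = 0

# Inside the detection loop, after predicting body_language_class:
if body_language_class == "Happy":       # the position you want to time
    held_frames += 1
else:
    held_frames = 0                      # reset when the position changes
seconds_held = held_frames / fps
```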
@@NicholasRenotte Thanks !!
That's awesome!
For facial expression recognition, do you recommend using facial landmarks as the base features for training a DNN, or using a CNN directly on the image dataset?
Thanks
I prefer using the base features with a classifier: if you were to detect multiple faces you could apply the model across each face (FYI this model only detects a single face), and it's also faster to train on tabular data!
This is so cool! Can I set this up to only detect poses I make with my hands?
Yup, there's mediapipe models that only detect hands as well. I've got a vid on the channel!
@@NicholasRenotte Got it! I'll check it out!!
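For reference, a minimal sketch of MediaPipe's hands-only solution: a webcam loop drawing landmarks for up to two hands.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # MediaPipe expects RGB, OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Hands", frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```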
Hi Nicholas, I love the content you make. Much thanks for sharing. Can we deploy this??
Sure can! Been messing around with Kivy for CV (could probs do that).
This is really awesome, loved the way you explained everything, great job. Really thankful for this. 💯
THANK YOU SO MUCH BRO ! KEEP HUSTLING ❤️
Heyy!! Nice video, actually I was looking for something like this. But is this approach generalized? Will this model work on different faces?
Sure will, just gotta fine-tune!
First, I want to thank you for this big effort.
Second, can you make a video about fall detection using MediaPipe, please?
This video will help me a lot.
Hi there. Great video, thanks. Can I use a PLY file instead of a live webcam? I need to evaluate some children and want to use a 3D scan (Kinect) and analyze it afterwards.
I'd love to have a tutorial where you use the face geometry module from mediapipe. And maybe add some 3D models tracked to the face like glasses ! Great video as always :)
Yah, agreed! Working on a bunch of stuff in that space rn @Victor!
@@NicholasRenotte Amazing! I'll be there to see it as soon as it comes out!
@@victormustin2547 yesss! Thanks a bunch for checking out the vids so far as well!
Awesome video btw, I'm currently trying to recreate the model on my local machine. However, it seems like I'm failing to append the different classes of poses/facial expressions; how can I fix this?
That's so cool! I have a question: how can we fix the flickering when it's running?
It would need a more powerful GPU or machine. The lag is due to the FPS drag from the ML model.
@@NicholasRenotte I fixed it!! Now running at 10 to 12 FPS, that's still good. Thank you so muchhhh
You really have the best Python videos on YouTube. Greetings from Russia)
Thanks so much @ONE! What's happening from Sydney!?
Thanks buddy for really helpful videos. Keep going, Wakanda Forever!
Thanks a lot for the great approach and hard work 🙏♥️
Question:
1) Does it work for multiple people in the frame?
Unfortunately no @Vinay, check out OpenPose for that!
@@NicholasRenotte Thanks Nick 🙏
@@FindMultiBagger anytime!
Awesome content. Is it also possible to train on the CSV file with TensorFlow instead of sklearn? Looking forward to watching your other videos!!
Sure can!
What changes would be needed if I wanted to make a .h5 model? Please reply!
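A hedged sketch of what that could look like: training on the tutorial's coords.csv with tf.keras instead of scikit-learn, then saving in h5 format. The file and column names are assumed from the video.

```python
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("coords.csv")                 # landmark rows from the tutorial
X = df.drop("class", axis=1).values
y = tf.keras.utils.to_categorical(LabelEncoder().fit_transform(df["class"]))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(y.shape[1], activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
model.save("body_language.h5")                 # h5 instead of pickle
```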
Hi Nick, loved this video! I was wondering if you could go more into how to do hyperparameter tuning and building more advanced pipelines? Do you have other videos going more in detail on those? Thanks!
Nothing around it yet but will probably do something soon @Jonathon!
Love the content! Just curious, would it be possible to use other videos and images to generate the landmarks for different poses or facial expressions, and then use our webcam to see if the model can detect us doing those poses or facial expressions?
Sure can! I actually did it in the past with a video of the Royals being interviewed.
@@NicholasRenotte Hi Nicholas, wanted to commend you on the awesome content and format of presentation. Makes it easy to understand. Great job, and a huge thank you.
Could you please point me to the video where you use other videos/pictures to train the model?
@@satishchandrasekaran3045 still a work in progress at this stage!
Really amazing content. I'm currently working on my final CS project and this video (and some others) was a tremendous help for me.
Amazing job man!
Has anyone actually implemented this code? Please let me know, very urgent!!
@@harshdasila6680 I did. What do you need?
@@shai8559 I am getting an error
@@harshdasila6680 You can type the error and I'll see if I can help.
Great tutorial, I wish your channel had more visibility.
Thanks for the video.
I have a question:
I was trying to export the coords but they didn't export to Excel.
Hey Nicholas, thank you so much. Very useful, highly appreciated. But can you help me make the decoder in the video work with hands too? Specifically, how do I extract the hand landmarks? Thank you!
Hey Nicholas, thank you so much. Very useful, highly appreciated; you have awesome teaching skills. One question: what would it take to extend this to action detection? Can you please make a tutorial for custom action detection? Thanks!
Check this out! ruclips.net/video/doDUihpj6ro/видео.html
Your videos are amazing, still waiting for one on using Unity to import those keypoints and rig them to a hand model. In the meantime, any clue on how to do that? How to rig, and which hand model to use?
Heya @Imane, was actually talking about it on Discord yesterday. I actually found someone that had an example of it the other day. Let me know if you want me to try to dig up the link!
Heyy @@NicholasRenotte, yes, if you can do that it would be great! Thanks a lot for your help.
Hi Nicholas! Depending on the position of the wrist, can you tell me the coordinates of the joint at that time??
Great tutorial... Can you do a tutorial on saving body and facial motion to BVH or FBX files to use with a 3D character?
Great tutorial, very professionally and amazingly done. Thanks a lot!!! Just one issue: loading the model again after restarting the kernel. I have to train the model every time I open my Jupyter notebook (when loading the model in this situation it doesn't make predictions, even after saving with pickle). Is there any way I can keep the model intact after I close the notebook, or do I have to train it every time? And is there a way to save it not with pickle but in h5 format? Please guide!
Pickle should work fine, are you sure the model is being saved? h5 is normally reserved for keras/tf models.
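A minimal save/load sketch with pickle; the key is reloading the file in the fresh session before predicting (the file name is illustrative).

```python
import pickle

# After training:
with open("body_language.pkl", "wb") as f:
    pickle.dump(model, f)

# In a new notebook session, before making detections:
with open("body_language.pkl", "rb") as f:
    model = pickle.load(f)
```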
@@NicholasRenotte Firstly, a big thanks for the reply!!!! You are AWESOME❤️❤️. I'm saving the model with pickle and it works fine when I make detections the first time (just after training), but as soon as I close the notebook, open it the next time, and load the same model, it doesn't make detections, and I have to retrain it every time. So I'm unable to save/load it (i.e. pickle might not be working, or maybe I'm doing something wrong in the last part). Can I use joblib as well? Please guide. Thank you.
Hi Nicholas, your videos are great and they provide a lot of value. I have tried implementing many projects from your tutorials. I have one query regarding the accuracy of computer vision algorithms. Can we ever achieve accuracy around 99 percent for computer vision applications?
Do you know how to optimize the model for production? Great tutorial brother
Probs a bunch of stuff to do so, want a vid on it?
@@NicholasRenotte Yess !!
Let's try something in 3D. That will look great. The AI gym trainer in 3D would be great!!
Agreed, on the R&D list as we speak!
This video helped me create my final year project, thanks a lot ❤
Hi @Nicholas Renotte, thank you for this nice tutorial. I was wondering how you could give the face detection data a bit more weight, so the classification relies more on whether I smile or not? Or is that mostly a training thing? I still want the body language to count, but give more weight to the face in the model. Hope that makes sense? Cheers
Awesome content! But I have an error when running, how should I deal with it?
UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names
Facing the same error, did you find any solution?
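For anyone hitting this: the warning usually means the pipeline was fitted on a pandas DataFrame (with named columns) but predict() is being fed a bare list or array. A hedged sketch of the fix, with `pose_row`/`face_row` and `feature_columns` assumed from the tutorial's detection loop:

```python
import pandas as pd

# feature_columns = df.drop("class", axis=1).columns  # saved from training time
row = pose_row + face_row                    # flattened landmarks from the loop
X = pd.DataFrame([row], columns=feature_columns)
body_language_class = model.predict(X)[0]
```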
I just discovered your channel and learned so much about machine learning. Could you help me with my project? It's classification of good and bad sitting posture. How would you deploy this on a Raspberry Pi and send the data to a mobile application? Hoping for your response and tutorials on this. Thanks! I'm totally in love with your content.
Heya, you could fine-tune this model for good and bad posture and deploy the same code to an RPi!
Hi! Awesome video bro! What if we want to convert it into a desktop application or web application?
Could wrap it in Streamlit or Kivy!
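A rough sketch of the Streamlit route (names are illustrative): run the webcam loop in a script and update a placeholder image every frame.

```python
import cv2
import streamlit as st

st.title("Body Language Decoder")
frame_slot = st.empty()                  # placeholder updated every frame

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # ... run the detections and draw landmarks on frame here ...
    frame_slot.image(frame, channels="BGR")
cap.release()
```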
@Nicholas Renotte, please reply: instead of classifying body language, we could detect spoken words from lip movement, right?? Using the lip coordinates.
Hey man, I have been following you for a while, you've done greeaaat jobs, your channel is like a full-stack course in data science, thank you so much for that.
But I really want to know why you're doing this. I mean, besides the YouTube content, what do you expect from your projects? Is it something about your IBM career? (Like a part of a portfolio?)
Ayyye, anytime man, glad you like it. I'm just doing it because I enjoy it at the moment 🙂
Hi Nicholas! Thank you for sharing this nice work. May I know how to increase the FPS for mediapipe? I have very low FPS, only 1 to 2. Kindly advise, thank you.
Heya @MsSonoFelice, what type of machine are you running this on? Possibly try a different machine, something with a GPU perhaps?
Hey brother, I like your content and your presentation. And a request: can you build a project on a sentence generator when only some words are given?
Yup! Check this out: ruclips.net/video/cHymMt1SQn8/видео.html
This is a perfect project. Does anyone know if I can use my Keras model for this code?
Yep, sure can, in place of the sk classifier!
I trained my model and it's working fine, but I have one issue. My application has to do the estimation on a whole classroom. How can we apply this to a group of people? It just detects one face per pic. Would really love some suggestions.
There's a new pose estimation model available that supports multi person! Check this out tfhub.dev/google/movenet/multipose/lightning/1
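A hedged sketch of loading that multipose model from TF Hub; the output layout (up to 6 people, 56 values each: 17 keypoints times 3, then a bounding box and instance score) follows the model card, so worth double-checking. The image path is hypothetical.

```python
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/multipose/lightning/1")
movenet = model.signatures["serving_default"]

img = tf.io.read_file("classroom.jpg")             # hypothetical image path
img = tf.image.decode_jpeg(img)
inp = tf.cast(tf.image.resize_with_pad(img[tf.newaxis], 256, 256), tf.int32)

out = movenet(inp)["output_0"].numpy()[0]          # shape (6, 56)
for person in out:
    if person[55] > 0.2:                           # instance confidence score
        keypoints = person[:51].reshape(17, 3)     # (y, x, score) per joint
```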
Excellent Presentation. Just loved it.
How do we put a threshold on the prediction?
Also, thank you for all the effort you put into each module.
Hello, thank you for this tutorial. I was wondering if you can use a group of still images as a dataset instead of manually recording poses?
Just what I needed. Thank you
YESS! So glad to hear Daniel!
Excellent vid man, thanks
Hi Nicholas, thank you for the stuff. In my studies I'm now working on an AI application for fitness training. I would like to get help from you; I need to detect all the body joints.
Hey man...! You are really cool!!!! I love this project, I'm your new subscriber!!!!
YESSS! Welcome to the fam!
My model is very confident on non-trained labels. For example, if I do nothing and just stay in the frame, the model still predicts one of the actions with a high confidence value. How can we solve such problems? Do you have any ideas regarding such issues?
Great video!! I have a question: how can I enable the code to detect more than one person in the frame (scene)?
Heya @Hazem, not supported in MediaPipe unfortunately, check out OpenPose for multiple person tracking!
@@NicholasRenotte What library would you recommend if I want to make a counting system for people in a scene ?
@@hazemhossam2645 take a look at OpenPose, you could then count the number of detections!
@@NicholasRenotte Tysm. Do you recommend any video/link about Openpose to start with?
I want to combine pose recognition and face recognition to enable me to recognize the person that is posing. How do I go about it? Please, can you shed more light on this? Thanks.
Such cool content!
Thanks for sharing this
Hi Nick, can you make a more detailed video explaining the libraries you imported and how the code works in detail?
underrated channel
Awesome videos on object detection and great teaching ability. Great video on sign language detection. I wonder if you can create, or already have, a video on this scenario: a user takes their phone and hovers over an object, like a credit card, where the entire credit card must be captured within a defined outline box. Once the full object is detected within the outline, a success message appears notifying the user that they are centered and can now take a picture. Do you have any videos like that? Kinda like Google Lens, but the entire object would have to be inside the outline. Thanks!
Closest I'd have is the ANPR series ruclips.net/video/0-4p_QgrdbE/видео.html
55:47 that mic glitch made me spill coffee on myself
Oh shit 🤣, didn't even know that was in there.
At [1:03:13], you said that we could use this in different use cases. But you said action detection could be different, so could you please explain what the differences are and what we have to do? Some 'follow this way' advice would help. Thank you so much.
Check this out: ruclips.net/video/doDUihpj6ro/видео.html
@@NicholasRenotte First, thanks for all your attention. I already checked this out, but when I implement and test all the processes in that video I can't obtain efficient and accurate results. Do you think I could implement and train for the push-up action using LSTM units? Because I couldn't get good results even on basic arm and hand actions. What do you think?
Thank you so much, great tutorial. There is only one problem: the hand landmarks don't record coordinates if I use one of my hands to turn off the camera; basically, both hands have to be in view of the camera from start to end to record coordinates. I can't figure out how to stop recording automatically after a certain time. Could you please help with this?
Try using a loop rather than a key, could use something like for x in range(3000) to record 3000 frames!
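A minimal sketch of that loop, so recording stops automatically after a fixed number of frames (3000, per the reply above):

```python
import cv2

cap = cv2.VideoCapture(0)
for x in range(3000):                # stops by itself after 3000 frames
    ret, frame = cap.read()
    if not ret:
        break
    # ... run the holistic model and append the landmark row to the CSV here ...
    cv2.imshow("Recording", frame)
    cv2.waitKey(10)
cap.release()
cv2.destroyAllWindows()
```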
@@NicholasRenotte Thank you!
2 years on, but the video is still lit.
Can you show how to run this code in PyCharm?
Hi, thanks for your video lesson. I have an issue: when I load the model and run it, I get a warning: /Users/mac/miniforge-pypy3/envs/mp/lib/python3.7/site-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names. What mistake am I making? Thanks.
Did you find a fix for this?
Hi Nicholas, first of all, it's really a great piece of work. But here again, I have a scenario: I will train my model with different signs (alphabetical blind signs) and I want all of them to come up in a phrase/sentence, maybe in a new window. Any clue how to do that?
Heya @Rupendra, check this out: ruclips.net/video/UPS54i7Km30/видео.html
Is there a way to know which joint the x1, y1, or x2, y2 are associated with? Am I mistaken to say that x1 would be associated with the first value of the Pose Landmark Model, which is the nose?
Check this out @Yuri, it shows the index mapping: google.github.io/mediapipe/images/mobile/pose_tracking_full_body_landmarks.png
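You can also print the index-to-joint mapping straight from mediapipe; index 0 is indeed the nose.

```python
import mediapipe as mp

# Landmark index -> joint name for the pose model
for lm in mp.solutions.pose.PoseLandmark:
    print(lm.value, lm.name)   # 0 NOSE, 1 LEFT_EYE_INNER, ...
```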
Thank you very much for the great tutorial! I'll be looking forward to your future projects. MP hand detection works with multiple detections when max_num_hands = n. Would it be possible to edit the pose .py to have max_num_poses, to detect multiple human poses like openpose/yolo?
Heya @Charles, yeah, was looking into this; unfortunately the pose model only works for a single pose/body.
Let's hope the model gets updated in the future. Really appreciate your work, can't wait for your next video. 😀
@@caqbme I'm with you on that one! YESS, just released today: ruclips.net/video/EgjwKM3KzGU/видео.html
Hi, this is an amazing project! I followed the method discussed here for exercise pose classification and it worked well. But now I have a question that's very different from pose classification: how can I rate a user's exercise pose accuracy, given that I know the class of the pose?
Hi! What do I need to learn about if I want to detect motion instead of a pose, and compare my shooting motion (basketball) to the shooting motion of a pro? Thanks for the videos, I struck a goldmine with your channel!
Check this out: ruclips.net/video/doDUihpj6ro/видео.html
Hey Nick, I need some help.
Can you please create a tutorial where I pass images to mediapipe holistic and it creates coords.csv based on those images?
Hey Nicholas, thanks for this tutorial... I was wondering if there is a solution for training a model with sentences instead of single words? Let's say training the sentence "Hi, how are you doing?" and then using it to recognize sign language, or to generate sign language with animations. I don't want you to go through the whole process, but my question is about teaching sentences (instead of single words) to the model using the webcam and CSV.
Thanks
Best regards
Amazing stuff! Getting to learn a lot:)
Thanks so much @Vishal 🙏!
Which CPU and GPU did you use?
I'm looking to live-track 10 people in a 720p 25fps stream: holistic tracking with pose_landmarks and face_landmarks. Is it possible to do this on an affordable PC?
Heya @Markaay, sure can. Just be mindful that mediapipe doesn't offer multi-person tracking for poses anymore. You could check out OpenPose for that though.
@@NicholasRenotte Could you please do a comparison video (performance, features) of the following tracking algorithms: MediaPipe, OpenPose, BlazePose, PoseNet, wrnchAI, YogAI?
Hola!! This is so much fun to implement and decode the pose... Thanks a million!
Heya @Anisha, thanks so much!! Unfortunately, it looks like mp only supports single poses. I'm taking a look at OpenPose for multi pose estimation.
@@NicholasRenotte Back with another fan request❤️!
I'd love to see a tutorial on exploring Mediapipe's 3D face mesh to render some kind of face filters like in Insta. Digging deep on their graphs and calculators 😀
@@anishaudayakumar1778 oh that's a hotly requested one, definitely got that one planned!
I want that pipeline video.
Amazing work! Thank you so much!
Good morning sir hope you have a great day thanks for this one
Anytime @Sagar Singh, what did you think of it?!
@Nicholas Renotte that is super awesome...😁
Hello, why do I get "'str' object has no attribute 'decode'" during the machine learning part?
Hey Nicholas. Hope you are doing well. Just want to know how to use Mediapipe to target a particular person among many people.
Hmmm, it would depend on what the defining characteristics are for that person @Priyam.
@@NicholasRenotte Can you please explain it to me briefly?
Like, mediapipe always detects the first person in the frame.
@@priyambordoloi771 hmm, that's just the way it works. It'll pick up the highest likelihood for a person. You could probably use an alternate method by training for a specific person, haven't tried this out before though :S
Hey, is there any way to train the model with images, without capturing them ourselves?
Hi bro, are you still doing it? If you have done it, can you help me with that? I am stuck training a CNN model on the MPII dataset.
Hey, I'm preparing a project the same as this one. But when I add a timer to it, the OpenCV imshow freezes. Would you mind helping me?
Haven't really played with adding a timer unfortunately @Naziha, it might be slowing down the frame rate.
Can we use an LSTM for this?
Yup, could apply a RNN layer using the landmarks @Rony!
A tutorial using LSTM would be amazing, the documentation is really poor.
@@denisdj6180 yah, working on an action detection tutorial atm!
@@NicholasRenotte Thank you!
@@denisdj6180 exactly :(
Fun and useful content, thank you so much 🙏🏼
You deserve praise and encouragement.
Thank you so much @Youssef! So glad you enjoyed it!
Hi Nicholas, I didn't understand how the CSV file part works in this video (the capture and landmarks using OpenCV and CSV part). Can you provide the CSV file?
Mine isn't writing to the CSV file for some reason.