Would you like to see more Python and Unity tutorials? Share your ideas and I will create the most-voted one.
Create a program to measure foot size with a live webcam. Thank you
Big fan from Pakistan
Would you mind also sharing an Unreal Engine workflow?
Yes, I really like your content. Full of learning.
Where do you work? Are you maintaining any WhatsApp/Discord community for Python?
Please, what if I would like to track only specific landmarks in a video, for example just the upper-limb landmarks?
Can we replace the skeleton with a cartoon character?
Exactly what I've been looking for! Mocap suits are so expensive! Thanks a lot Murtaza, for your awesome tutorials
26:30
1. Lock the inspector
2. Select all spheres in the hierarchy with shift+click
3. Drag and drop them into the inspector's Body list field
ok
I was going to comment this lol, I used to not realize that was a thing and I was so mad at the hours I'd previously wasted when I found out.
How do I lock the inspector?
Hi,
At the top of the inspector, you can see the lock icon.
You just need to click it to lock or unlock your inspector window.
Thank you
Man your videos are just out of this world.
An absolutely amazing video that is really clear, well explained and uses a wonderful logic driven process :)
Why does no one know about this channel? Your vids are so good; I can understand them clearly.
I followed your earlier pose tracking video and made something similar in Blender: I got the position data into a list, converted it into a binary file, then in Blender converted it back to a list and animated a rig with it (tbh I really learnt a lot from that video and hope I can learn something here too, to make my Blender version better)
Hi, can you share it? I would like to explore it. Thanks.
Please, can you share this?
Hi bro, can you tell me how you mapped that onto a humanoid 3D rig?
How do you animate the rig using the dimensions and outputs we get here?
You are a Swiss Army knife of technologies, boss.
Very informative video🙏👍.
BTW, a better way to handle the dragging and dropping of objects at 27:16 is to add a few extra lines of code:
public GameObject body;       // the "Body" parent object
public GameObject[] bodyLM;   // filled automatically from its children

private void Awake()
{
    // Grab every child of "Body" so you don't have to drag each sphere by hand
    bodyLM = new GameObject[body.transform.childCount];
    for (int i = 0; i < body.transform.childCount; i++)
    {
        bodyLM[i] = body.transform.GetChild(i).gameObject;
    }
}
After adding this function, in Unity just drag and drop the "Body" parent object and keep the bodyLM list as is; when you run the game, the script will do its work 😋
Can we just appreciate the time he puts into his videos?
Learning so much from your tutorials. Thanks for everything you produce.
Amazing video. Great job. Keep it up. 👍
Congratulations! It is a really interesting project and it could have many applications. It is a great achievement since commercial motion capture systems are too expensive.
This man always surprises me with his content.
Imagine VR full-body with this technology!
Great job, thank you. But how do we apply this movement to a mesh object?
Will create a tutorial on that soon.
Thanks for the reply, looking forward to it
@@murtazasworkshop Good job! How can I use it for a 3D model in Unity3D, or how can I use it in Blender?
@@murtazasworkshop please
@@murtazasworkshop Im so excited!
Hey! Amazing is all I can think of saying; easy to understand. I've been looking for something that actually works for a while, and I think a lot of people are. Maybe adding more cameras will improve the quality, as you said. No idea how it would be done lol, but sounds cool. Great as always o/
Oh wow, just when I started looking up ML models for vision-based mocap, YT recommends me this. And right on time too.
This would be fun for VR.
Can you generate the skeleton in Python and Unity using a webcam in real time?
Good job. Is it possible to capture multiple players?
Amazing video. So glad I found your channel. Do you think it might be possible to remove camera movement? I'm thinking yes, because if you can track it, it should also be possible to create a sort of "mask" to apply to the created data set to modify the coordinates in each frame; I mean, apply the "negative camera movement" for each move it makes.
Once again, thank you for the great content. Top explanation, great video quality. Instant sub.
You don't have to grab the spheres manually and put them in the array: delete all the elements from the array, select all the spheres, and drag them to the array field; it will auto-fill the array.
It's a great job, thank you for sharing this video. And would you make an introduction or a tutorial on MoCap and the SMPL model for further learning on this topic?
Hey, thanks for having so much patience when teaching this kind of content.
This is just awesome, thanks a lot! Keep up the good work!
PS: Can we find a good 3D model and a good rig to translate those points/spheres into rig articulation points, and make a complete human 3D model act the same as in the animation? In theory, it should be possible. If you are willing to try something like this, I would be grateful. Thanks!
rigging
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
@@aangigandhi Hope I'm not too late for you. I'm far from proficient in Unity3D and haven't played with it for some years, but the process should be fairly simple; on the implementation side you will have to make things work yourself, which of course requires some practice, but it's certainly very doable:
1. You have cvzone, which gives you the poses as absolute point coordinates, and you also have the image map which shows you the pseudo-rig points those coordinates translate to. If you ask me, it's easiest to look at an actual rig in Unity3D to see what points it has, so you can do the right mapping directly in Python before writing to the persisted data file. Maybe these clips will help you understand the official Unity3D rig: ruclips.net/video/Htl7ysv10Qs/видео.html and ruclips.net/video/Wx1s3CJ8NHw/видео.html
2. In Unity3D things should be as simple as they were for the author, since the code you have to write is similar. You make a GameObject that defines the bones/rig points as a simple array of Transform or something like that, write the logic that assigns those points from the ones stored in your file, and add a delay like he did in the video, in the Update() method. (Done properly, you would get the timings from cvzone and store them in your file as delays or keyframes, which would give you a perfectly synced 3D translation of the original video's movements, but don't try that from the start, only after the first part works.) Then, in the inspector, you manually associate those points (your GameObject array) with the actual rig points/bones (similar to what he did, but this time from the rig itself, not from the created spheres).
3. Of course, you should start with a project that also has a 3D model associated with the rig, and maybe find a Unity3D mechanism to easily switch 3D models.
PS: Be careful with coordinate systems. You saw in the video that he applied a transform to the points (over the camera matrix, which you don't see because it's hidden in the transform function) to move them from the absolute (world) system they were stored in (in the file) into the local (camera) coordinate system. Most probably you should do the very same, since I assume the rig will also have its bones/points in the camera coordinate system.
Here is some code that you can take as a reference, but it's pretty much the same as his (just some sample code to make you understand the idea):
using System.Collections.Generic;
using UnityEngine;
using System.IO;

// One frame of captured data: a position for each bone
[System.Serializable]
public class Frame { public Vector3[] positions; }

// Wrapper type, since JsonUtility can only parse a top-level object
[System.Serializable]
public class MotionData { public List<Frame> frames; }

public class RigAnimator : MonoBehaviour
{
    public Transform[] bones;    // The rig's bones
    private List<Frame> frames;  // The positions for each frame
    private int currentFrame = 0;

    void Start()
    {
        LoadBoneData("path/to/your/data.json");
    }

    void Update()
    {
        if (frames == null || frames.Count == 0) return;
        // Apply the bone positions
        for (int i = 0; i < bones.Length; i++)
        {
            bones[i].localPosition = frames[currentFrame].positions[i];
        }
        // Advance to the next frame
        currentFrame = (currentFrame + 1) % frames.Count;
    }

    void LoadBoneData(string filePath)
    {
        string jsonContent = File.ReadAllText(filePath);
        frames = JsonUtility.FromJson<MotionData>(jsonContent).frames;
    }
}
(This is the animation code. LoadBoneData will have to read the file according to your file format, but a JSON format is very suitable and you should go with it, since in Python you have json.dumps for this as well.)
And for the association (the inspector is one possibility, but maybe it works in code as well, I don't know), it should be something like this:
// Another script that runs only once (which is why the logic is in Start, not Update)
void Start()
{
    LoadBoneData("path/to/your/data.json");
    bones = new Transform[5];  // Suppose we have 5 important bones
    bones[0] = GameObject.Find("Bip001 Pelvis").transform;  // Example bone name
    bones[1] = GameObject.Find("Bip001 Spine").transform;
    bones[2] = GameObject.Find("Bip001 L Thigh").transform;
    bones[3] = GameObject.Find("Bip001 R Thigh").transform;
    bones[4] = GameObject.Find("Bip001 Head").transform;
    // Add the rest of the bones according to your rig structure
}
You should put these in the right order and modify whatever needs modifying, and in theory it should work.
Good luck with your work! Hope it helps!
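To complement the loader above, a minimal Python-side sketch of writing that JSON with json.dumps; the frames/positions field names are assumptions chosen to match the C# classes above (JsonUtility reads a Vector3 as an object with x, y, z fields), and the coordinates are placeholders:

import json

# Hypothetical captured data: each frame is a list of [x, y, z] landmark positions
captured = [
    [[0.0, 1.0, 0.2], [0.1, 1.5, 0.2]],  # frame 0
    [[0.0, 1.1, 0.2], [0.1, 1.6, 0.2]],  # frame 1
]

# Shape it the way the Unity loader above expects:
# {"frames": [{"positions": [{"x": ..., "y": ..., "z": ...}, ...]}, ...]}
data = {
    "frames": [
        {"positions": [{"x": p[0], "y": p[1], "z": p[2]} for p in frame]}
        for frame in captured
    ]
}

with open("data.json", "w") as f:
    f.write(json.dumps(data))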
@@aangigandhi I don't know if you still need this; today I saw the comment and tried to write two useful comments for you, and neither was posted by YouTube. I don't know how to get in touch; maybe you can find a way, as YouTube kicks out my comments without explanation.
@@Equilibrier Hi Equilibrier, I hope you have a great day. Could you share those two useful comments with me? I believe they would help me a lot in my project.
Fantastic video, well presented!
It would be great if you could take this project one step further.
For example:
recording motion in real time with cameras around a room and sending that data to Unreal Engine / Unity.
I don't know how hard it is, but I was really hoping you could teach us how to do it.
It is really easy to make this real time:
just make the Python script read from the webcam,
then make the Unity script read the file in Update() instead of Start() (see the sketch after this thread).
@@twenmod How can I separate the bone data?
Each part of the movement, like hands, legs, spine, head, neck, etc.?
Is it possible?
@@AHabib1080 I don't get what you mean; what I'm doing to get the movement onto a character is setting up the points as IK targets.
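A minimal sketch of the Python half of that, assuming the cvzone PoseDetector API used in the video (landmark entries taken as [id, x, y, z], which varies by cvzone version) and a hypothetical AnimationFile.txt that the Unity side would re-read in Update():

import cv2
from cvzone.PoseModule import PoseDetector

cap = cv2.VideoCapture(0)  # webcam instead of a video file
detector = PoseDetector()
lines = []

while True:
    success, img = cap.read()
    if not success:
        break
    img = detector.findPose(img)
    lmList, bboxInfo = detector.findPosition(img)
    if bboxInfo:
        # One line per frame: x, flipped y, z for every landmark
        lmString = ''
        for lm in lmList:
            lmString += f'{lm[1]},{img.shape[0] - lm[2]},{lm[3]},'
        lines.append(lmString)
        # Rewrite the file each frame so Unity can poll it in Update()
        with open("AnimationFile.txt", 'w') as f:
            f.write('\n'.join(lines))
    cv2.imshow("Image", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break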
Excellent tutorial!! I have a question, please:
how can we insert an open-source avatar to replace the lines connecting the landmarks? I mean, have the landmarks on an avatar instead of lines.
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
Does this work with any video? Sounds exciting.
good tutorial, keep pushing!!!
Nice. Division is a costly operation; prefer multiplying by 0.1 over dividing by 10. You will notice the difference with bigger data, but it's better to get used to it in all cases.
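If you want to check the claim, here's a quick timeit sketch; in plain Python the interpreter overhead hides most of the gap, so treat it as illustrative, and note that x * 0.1 is not bit-for-bit identical to x / 10 in floating point:

import timeit

setup = "x = 12345.678"
div = timeit.timeit("x / 10", setup=setup, number=10_000_000)
mul = timeit.timeit("x * 0.1", setup=setup, number=10_000_000)
print(f"divide:   {div:.3f} s")
print(f"multiply: {mul:.3f} s")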
Can you apply this animation to any other 3D avatar object?
Can we do this?
If yes, please tell me.
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
Can this project be made with VS Code or a Jupyter notebook?
Awesome stuff! Is there a way to feed the motion data directly into an Unreal Engine model rig so the motion capture can be done in real time?
Great video as always. Thanks for sharing.
Wow, nice work! Can I ask, can this be applied to 3D animal motion capture?
Too amazing to be real! Thanks for sharing this.
Thanks a lot bro! Keep it up!
This video is so amazing. Thank you sir.
Sir, I have a question here. Your hand tracking game ran in real time, and I have a project on real-time stick-model detection, but I can't understand where I should change the code in Python. I use Spyder as my Python interpreter. Can you please help me complete my project?
I want to ask, what software do you use?
Great job! Could you explain how to get the depth data for human joints? Is it estimated by a well-trained neural network?
Can you make a video about estimating the true size of objects in a scene using multiple cameras through triangulation?
Amazing YouTube channel!!
Murtaza Sir,
How can this be achieved with a humanoid 3D model? If you could make a short video on that, it would be helpful, as lots of people are searching for simple ways to do motion capture, and it would be easy if we could do it with just a few lines of code, like in this video.
It was very helpful.
Thank you
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
Will this work for motion analysis (not the animation portion) in real time, by changing the video source to an available camera instead of a stored video?
Hi, great video!
Can you please upload a tutorial on taking body measurements using OpenCV:
finding shoulder size, waist, etc.?
How can I make my own Python 3D camera tracker?
I want to learn how to extract camera motion data from an image sequence and use it in a 3D program.
Very cool. I would like to see it with two cameras: processing from two cameras positioned at 90 degrees to each other.
Please bring more such OpenCV-with-Unity videos; it's really helpful.
thanks! you are a blessing
Thanks very much!! That's what I want to do!!
Great, sir. Can you make a video on distance estimation from the webcam to any reference point or image? Please make a video on this, thanks.
I have done face and hand distance measurements.
You are The BEST
Can we expect something for a Blender workflow?
Great video. Thank you 👌
thank you, great job.
Alright, all done - pretty cool, thanks.
I'd like a hint on how to replicate this in Unreal Engine.
I'm guessing the code from the second part won't be much different, but I'm not sure how to set it up inside the engine.
I wonder if there is a workflow for Unreal too.. btw Unreal has support for Python scripting, so I assume there should be a way
@@TriSutrisnowapu 100% there's a way, but I don't know which is the right class to use for such a thing.
I also bet there's a way to influence the character's skeleton joints with those points.
Very helpful. How do we apply this movement to a 3D player model?
Did you have any luck with this?
I need it very urgently
Ya me too
@@swastikkarwa2507 @Sneha Patil Yes, I asked ChatGPT to implement this feature; it's not perfect but still manageable.
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
@@aangigandhi Are you stuck on creating the avatar, or on applying animation data to the avatar?
Can you give me the link of the motion capture program you used in this video?
Hello, is it possible to improve the quality of the system to track facial and hand movements as well?
Also, I want to know: can I get rotation parameters from cvzone and MediaPipe?
Please share if you manage it.
There is a problem:
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "/Users/mac/PycharmProjects/MotionCapture/Data/MotionCap.py", line 9, in <module>
    lmList, bboxInfo = detector.findPose(img)
ValueError: too many values to unpack (expected 2)
Process finished with exit code 1
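For anyone hitting this: in cvzone, findPose() returns just the drawn image; the two-value return comes from findPosition(). A hedged sketch of the usual call pattern (exact signatures vary by cvzone version):

import cv2
from cvzone.PoseModule import PoseDetector

cap = cv2.VideoCapture("Video.mp4")  # hypothetical path
detector = PoseDetector()

success, img = cap.read()
img = detector.findPose(img)                   # one return value: the image
lmList, bboxInfo = detector.findPosition(img)  # this call returns two values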
Which country do you live in? And where did you get your degree? You are superb.
Can you make a video on foot size measurement with a webcam?
Love from Pakistan
Is this the same as DeepMotion?
Thank you for your video explanation, it helps me a lot. Just one more question bothers me: how can I put these red spheres onto a character to sync the motion in Unity3D?
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar. Please help.
Great Tutorial! I'm wondering, is it possible to combine this with the video you created on hand tracking? So can I get 3D data on Hands to bring to Unity? Or is this just possible with a video of a full body? Thanks!
Sir, did you figure out how to detect the full body in Unity in real time? I am facing the same problem.
@@samsularefinsafi3448 Sorry, I needed only hands, so I haven't done any full-body tracking. I tracked them with the other tutorial and then used a plugin to record the movement of my hand in Unity and export it as FBX for use in Maya.
Hello! I'm currently trying to achieve that as well! Do you have any help with that maybe? @@samsularefinsafi3448
Which processor are you using, bro?
Hi, have you ever tried a project with the Xbox Kinect?
Is it possible for me to record my video in sign language and have it capture the hand gesture motion as well?
Sir, can I run this on Android and iOS devices?
Amazing man
Thanks a lot man
Can this be used to detect multiple people?
You are awesome, bro 😍😍😍🥺🥺
I can't register at the site properly. It's not sending an email for confirmation or password reset.
Does your code link lead to the full program with a GUI?
How would you apply the annotations to a 3D model with an armature?
Sir, I am working on the same project, but in Blender.
Great tutorial. Thank you so much.
I have tried to find the angle of the elbow but I got an error.
angle = detector.findAngle(img, 11, 13, 15, draw=False)
The error message was:
"Finds angle between three points. Inputs index values of landmarks
instead of the actual points."
How do I input the index values?
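That quoted message is the docstring of findAngle, which hints at a version mismatch: newer cvzone releases take the actual landmark points rather than indices. A hedged sketch assuming that newer API, where lmList entries are [x, y, z] (check your installed version's signature first):

# Fetch the landmark list first, then pass the points themselves
lmList, bboxInfo = detector.findPosition(img, draw=False)
if lmList:
    p1 = lmList[11][0:2]  # shoulder
    p2 = lmList[13][0:2]  # elbow
    p3 = lmList[15][0:2]  # wrist
    angle, img = detector.findAngle(p1, p2, p3, img=img)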
Which domain is the project in now?
I like this, is there a tutorial where you can convert this to a .bvh?
How do I do this in Unreal Engine 4?
Vector3 is not being recognized and I am getting the error …. Index was outside the bounds of the array.
Which software is it?
loved it
Can you please let me know what all the applications of this content are?
QUESTION: What would I have to do if I want a numerical value of, let's say, the dots of the waist in the "Y" coordinate to display at all times?
Never mind the question above.. I figured it out.. thank you
Wow. Thanks for sharing.
I'm wondering how to view it in real time with matplotlib?
Can it work with multiple people?
Hello, I want to know: if I apply this with real-time tracking, how can I collect the data into a stored txt file? How do I run that code? Because I think if I just replace the video link in cv2.VideoCapture, I only get the immediate data, so I don't know what to do. Another question: I can't wait to see you put this on a 3D model, e.g. a VTuber, ha ha.
Can anybody help me, please? I extracted the motion capture and tried to apply it to a 3D model in Blender, but I got a lot of deformation in the model. So I looked it up, and it appeared to be a scale-difference problem. I did solve the deformation by dividing the coordinates by a big number, but then the movement was gone as well. So what should I do to avoid the deformation of the model and keep the movement? (One idea is sketched below this thread.)
Hey,
are you able to get the same result in Blender? Like, by importing the motion capture into Blender, is this stuff applicable to the avatar? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
@@aangigandhi I retargeted the webcam feed onto an armature in Blender, but unfortunately I couldn't get the same result due to the scale difference between Blender and the captured data. If you solve this problem, I think it will work fine and you will hopefully get the same result; then you can export the armature with the animation as an FBX file and use it in Unity.
I hope you look into more details; I am not an expert in this at all.
Okay
Thank you for your response
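On the scale problem above, a hedged idea, not from the video: dividing by an arbitrary big number shrinks the skeleton and its motion toward zero together. Instead, derive one uniform scale factor from a bone whose length you know on the target rig, so proportions match and the movement scales with them. A Python sketch (the MediaPipe landmark indices and rig length are assumptions):

import math

def rescale_frames(frames, ref_a=11, ref_b=23, rig_ref_length=0.5):
    """frames: list of frames, each a list of (x, y, z) landmark positions.
    ref_a, ref_b: two landmarks whose distance is known on the target rig
    (e.g. MediaPipe 11 = left shoulder, 23 = left hip);
    rig_ref_length is that same distance measured on the rig."""
    ax, ay, az = frames[0][ref_a]
    bx, by, bz = frames[0][ref_b]
    captured_len = math.dist((ax, ay, az), (bx, by, bz))
    s = rig_ref_length / captured_len  # one uniform scale factor
    # Scaling every coordinate by the same factor keeps the movement visible
    return [[(x * s, y * s, z * s) for x, y, z in frame] for frame in frames]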
How do you make the skeleton with a 3D model?
It does not create the file where it saves the coordinates. Why could that be?
How can we implement the motion capture on a 3D model? Is it possible for all the movements to be done by the 3D model? If yes, then please make a tutorial for it. Really need it.
Did you find the solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but got stuck on creating the avatar.
Is it possible to export an FBX file?
Can we import this into unreal engine too?
Now we can have full body tracking