3D Motion Capture using Normal Webcam | Computer Vision OpenCV

  • Published: 4 Nov 2024

Comments • 212

  • @murtazasworkshop
    @murtazasworkshop  2 years ago +49

    Would you like to see more Python and Unity tutorials? Share your ideas and I will create the most-voted ones.

    • @ameerhamza5542
      @ameerhamza5542 2 years ago +2

      Create a program to measure the size of a foot with a live webcam. Thank you!
      Big fan from Pakistan

    • @TriSutrisnowapu
      @TriSutrisnowapu 2 years ago

      Would you mind also sharing an Unreal Engine workflow?

    • @faisaltasleemft
      @faisaltasleemft 2 years ago

      Yes, I really like your content. Full of learning.
      Where do you work? Is there any WhatsApp/Discord community you maintain for Python?

    • @studyinchina4139
      @studyinchina4139 2 years ago +2

      Please, what if I would like to track only specific landmarks? Like in a video where I only want to track the upper-limb landmarks.

    • @anime_on_data7594
      @anime_on_data7594 2 years ago +3

      Can we replace the skeleton with a cartoon character?

  • @tunaroll3957
    @tunaroll3957 2 years ago +6

    Exactly what I've been looking for! Mocap suits are so expensive! Thanks a lot, Murtaza, for your awesome tutorials.

  • @김상민학생전기·정보
    @김상민학생전기·정보 2 years ago +17

    26:30
    1. Lock the inspector
    2. Select all spheres in the hierarchy with Shift+click
    3. Drag and drop them into the inspector's Body list field

    • @holaholahee
      @holaholahee 2 years ago

      ok

    • @MonkeyShark
      @MonkeyShark 2 years ago

      I was going to comment this lol, I used to not realize that was a thing and I was so mad at the hours I'd previously wasted when I found out.

    • @kanall103
      @kanall103 2 years ago +1

      How do I lock the inspector?

    • @김상민학생전기·정보
      @김상민학생전기·정보 2 years ago

      Hi,
      At the top of the inspector, you can see the lock icon.
      You just need to click it to lock or unlock your inspector window.

    • @kanall103
      @kanall103 2 years ago

      Thank you

  • @FarmBoyTech
    @FarmBoyTech 6 months ago +1

    Man your videos are just out of this world.

  • @jpmccrossan6807
    @jpmccrossan6807 3 months ago

    An absolutely amazing video that is really clear, well explained and uses a wonderful logic driven process :)

  • @yaat9532
    @yaat9532 2 years ago

    Why does no one know about this channel? Your vids are so good; I can understand them clearly.

  • @RockingScience
    @RockingScience 2 years ago +10

    I followed your earlier pose-tracking video and made something similar to this in Blender: I got the position data into a list, converted it into a binary file, and in Blender converted it back to a list and animated a rig with it. (TBH I really learnt a lot from that video and hope I can learn something here too, to make my Blender version better.)

    • @FreshCreamSoup
      @FreshCreamSoup 2 years ago +2

      Hi, can you share it? I would like to explore it. Thanks.

    • @spatil689
      @spatil689 1 year ago

      Please, can you share this?

    • @aaditya4998
      @aaditya4998 1 year ago

      Hi bro, can you tell me how you mapped that onto a humanoid 3D rig?

    • @aangigandhi
      @aangigandhi 7 months ago

      How do you animate the rig using the dimensions and outputs obtained here?

  • @Segmentsoflives
    @Segmentsoflives 2 years ago

    You are a Swiss Army knife of technologies, boss.

  • @quarkblue
    @quarkblue 11 months ago

    Very informative video 🙏👍.
    BTW, a better way to do the dragging and dropping at 27:16 is to add a few extra lines to the code:

    public GameObject body;
    public GameObject[] bodyLM;

    private void Awake()
    {
        bodyLM = new GameObject[body.transform.childCount];
        for (int i = 0; i < body.transform.childCount; i++)
        {
            bodyLM[i] = body.transform.GetChild(i).gameObject;
        }
    }

    After adding this function, in Unity just drag and drop the "Body" parent object and keep the bodyLM list as is; when you run the game, the script will do its work 😋

  • @zzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
    @zzzzzzzzzzzzzzzzzzzzzzzzzzzzzz 2 years ago +2

    Can we just appreciate the time he puts into his videos?

  • @cyrix_1
    @cyrix_1 2 years ago

    Learning so much from your tutorials. Thanks for everything you produce.

  • @sultanrahil3380
    @sultanrahil3380 1 year ago +1

    Amazing video. Great job. Keep it up. 👍

  • @edslab5383
    @edslab5383 2 years ago

    Congratulations! It is a really interesting project and it could have many applications. It is a great achievement since commercial motion capture systems are too expensive.

  • @anantchandak9574
    @anantchandak9574 2 years ago

    This man always surprises me with his content.

  • @baveraby
    @baveraby 2 years ago

    Imagine VR full-body with this technology!

  • @aek--
    @aek-- 2 years ago +27

    Great job, thank you! But how do we apply this movement to a mesh object?

    • @murtazasworkshop
      @murtazasworkshop  2 years ago +25

      Will create a tutorial on that soon.

    • @aek--
      @aek-- 2 years ago +1

      Thanks for the reply; looking forward to it.

    • @d.maya14
      @d.maya14 2 years ago +6

      @@murtazasworkshop Good job! How can I use it for a 3D model in Unity3D, or in Blender?

    • @ricetv942
      @ricetv942 2 years ago

      @@murtazasworkshop please

    • @sydneybeaa
      @sydneybeaa 2 years ago

      @@murtazasworkshop I'm so excited!

  • @hexagonproductions5466
    @hexagonproductions5466 2 years ago +1

    Hey! Amazing is all I can think of saying; easy to understand. I've been looking for something that actually works for a while, and I think a lot of people are. Maybe adding more cameras will improve the quality, as you said. No idea how it would be done lol, but it sounds cool. Great as always o/

  • @Zarrar2802
    @Zarrar2802 2 years ago

    Oh wow, just when I started looking up ML models for vision-based mocap, YT recommends me this. And right on time, too.

  • @truestbluu
    @truestbluu 2 years ago +1

    This would be fun for VR.

  • @wangshj2319
    @wangshj2319 2 years ago +3

    Can you generate the skeleton in Python and Unity using a webcam in real time?

  • @welsonfy5246
    @welsonfy5246 1 year ago

    Good job. Is it possible to capture multiple players?

  • @krzysztofjarzyna4225
    @krzysztofjarzyna4225 2 years ago +2

    Amazing video. So glad I found your channel. Do you think that it might be possible to remove camera movement? I'm thinking yes, because if you can track it, it should also be possible to create sort of a "mask" to apply on the created data set to modify coordinates in each frame. I mean apply "negative camera movement" for each move of it.
    Once again, thank you for the great content. Top explanation, great video quality. Instant sub.

  • @TrySomeCG
    @TrySomeCG 7 months ago

    You don't have to grab the spheres manually and put them in the array: delete all the elements from the array, select all the spheres, and drag them onto the array field; it will auto-fill the array.

  • @youtobebug
    @youtobebug 2 years ago +3

    Great job; thank you for sharing this video. Would you make an introduction or tutorial on mocap and the SMPL model for learning this topic further?

  • @jorgebueno1891
    @jorgebueno1891 2 years ago

    Hey, thanks for having so much patience when teaching this kind of content.

  • @Equilibrier
    @Equilibrier 2 years ago +11

    This is just awesome, thanks a lot! Keep up the good work!
    PS: Can we find a good 3D model and a good rig to translate those points/spheres into rig articulation points, and make a complete human 3D model act the same as in the animation? In theory it should be possible. If you are willing to try something like this I would be grateful. Thanks!

    • @FerisovGerman
      @FerisovGerman 9 months ago

      rigging

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

    • @Equilibrier
      @Equilibrier 6 months ago

      ​@@aangigandhi Hope I'm not too late for you. I'm far from proficient in Unity3D and haven't played with it for some years, but the process should be fairly simple; on the implementation side you'll have to make things work, which takes some practice, but it's certainly very doable:
      1. cvzone reads the poses as absolute point coordinates, and you also have the image map showing which pseudo-rig points those coordinates translate to. If you ask me, it's easier to look at an actual rig in Unity3D to see what points it has, so you can do the right mapping directly in Python before writing the persisted data file. Maybe these clips will help you understand the official Unity3D rig: ruclips.net/video/Htl7ysv10Qs/видео.html and ruclips.net/video/Wx1s3CJ8NHw/видео.html
      2. In Unity3D things should be as simple as they were for the author, since the code you'll write is similar: make a GameObject that defines the bones/rig points (I'm not sure which, I haven't tried it, but you can) as a mere vector of Transform or Point3D or something like that, write the logic that assigns those points from the ones stored in your file, and place a delay in the Update() method like he did in the video. (Ideally you would get the timings from cvzone and store them in your file as delays or keyframes, which would give you a perfectly synced 3D translation of the original video's movements, but don't try that from the start, only after the first part works.) Then, in the inspector, manually associate those points (your GameObject vector) with the actual rig points/bones (similar to what he did, but this time from the rig itself, not from the created spheres).
      3. Of course, you should start with a project that already has a 3D model associated with the rig, and maybe find a Unity3D mechanism to easily switch 3D models.
      PS: Be careful with coordinate systems. You saw in the video that he transformed the points (via the camera matrix, which you don't see because it's hidden inside the transform function) from the absolute (world) coordinate system they were stored in, into the local (camera) coordinate system. Most probably you should do the very same, since I assume the rig will also have its bones/points in the camera coordinate system.
      Here is some code you can take as a reference; it's pretty much the same as his (just sample code to convey the idea):

      using System.Collections;
      using System.Collections.Generic;
      using UnityEngine;
      using System.IO;

      public class RigAnimator : MonoBehaviour
      {
          public Transform[] bones;        // the rig's bones
          private List<Vector3[]> frames;  // per-frame positions
          private int currentFrame = 0;

          void Start()
          {
              LoadBoneData("path/to/your/data.json");
          }

          void Update()
          {
              if (frames.Count == 0) return;
              // apply the bone positions for this frame
              for (int i = 0; i < bones.Length; i++)
              {
                  bones[i].localPosition = frames[currentFrame][i];
              }
              // advance to the next frame
              currentFrame = (currentFrame + 1) % frames.Count;
          }

          void LoadBoneData(string filePath)
          {
              string jsonContent = File.ReadAllText(filePath);
              // note: JsonUtility cannot deserialize a bare list; you will
              // need a [Serializable] wrapper type matching your file format
              frames = JsonUtility.FromJson<FrameData>(jsonContent).frames;
          }
      }

      (This is the animation code; LoadBoneData will have to read the file based on your file format, but JSON is very suitable, so you should go with it, since on the Python side you have json.dumps for this as well.)
      And for the association (the inspector is one possibility, but maybe it works in code as well, I don't know), it should be something like this:

      // another GameObject script which runs only once
      // (which is why the logic is in Start, not in Update)
      void Start()
      {
          LoadBoneData("path/to/your/data.json");
          bones = new Transform[5];  // assume we have 5 important bones
          bones[0] = GameObject.Find("Bip001 Pelvis").transform;  // example bone name
          bones[1] = GameObject.Find("Bip001 Spine").transform;
          bones[2] = GameObject.Find("Bip001 L Thigh").transform;
          bones[3] = GameObject.Find("Bip001 R Thigh").transform;
          bones[4] = GameObject.Find("Bip001 Head").transform;
          // add the rest of the bones according to your rig structure
      }

      Put these in the right order, modify what needs modifying, and in theory it should work.
      Good luck with your work! Hope it helps!
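      On the Python side, the json.dumps persistence this reply mentions could be as simple as the sketch below; the file name and the {"frames": ...} structure are hypothetical, chosen only to mirror the C# sketch's per-frame bone positions:

      ```python
      import json

      # Two frames, each a list of [x, y, z] positions, one per bone/landmark.
      frames = [
          [[0.0, 1.5, 0.2], [0.1, 1.4, 0.2]],
          [[0.0, 1.6, 0.2], [0.1, 1.5, 0.2]],
      ]

      with open("data.json", "w") as f:
          json.dump({"frames": frames}, f)

      # Unity's JsonUtility would need a matching [Serializable] wrapper type
      # to read this back; nested lists may need flattening or a custom parser.
      ```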

    • @Equilibrier
      @Equilibrier 6 months ago +1

      @@aangigandhi I don't know if this is still a need for you; today I saw the comment and tried to write two useful comments for you, and neither was posted by YouTube. I don't know how to get in touch; maybe you can find a way, as YouTube keeps removing my comments without explanation.

    • @minhdungdo4541
      @minhdungdo4541 3 months ago +1

      @@Equilibrier Hi Equilibrier, I hope you have a great day. Could you share those 2 useful comments with me? I believe it would help me a lot in my project.

  • @tallslimpr
    @tallslimpr 2 years ago

    Fantastic video, well presented!

  • @AHabib1080
    @AHabib1080 2 years ago +2

    It would be great if you could take this project one step further.
    For example, recording motion in real time with cameras around a room and sending the data to Unreal Engine / Unity.
    I don't know how hard it is, but I was really hoping you could teach us how to do it.

    • @twenmod
      @twenmod 1 year ago +1

      It is really easy to make this real-time:
      just make the Python script read from the webcam,
      then make the Unity script read the file in Update instead of Start.
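      A minimal sketch of the capture side of this split, assuming the cvzone `PoseDetector` workflow from the video; the `format_frame` helper and the comma-separated "x,y,z" line format are hypothetical conventions, not the video's exact file format:

      ```python
      def format_frame(lm_list):
          """Flatten one frame of landmarks (each an [x, y, z, ...] list)
          into a single comma-separated line a Unity script could split."""
          return ",".join(f"{lm[0]},{lm[1]},{lm[2]}" for lm in lm_list)

      # In the capture loop you would do roughly (sketch, not runnable as-is):
      #   success, img = cap.read()            # cap = cv2.VideoCapture(0), i.e. webcam
      #   img = detector.findPose(img)
      #   lm_list, _ = detector.findPosition(img)
      #   with open("AnimationFile.txt", "w") as f:
      #       f.write(format_frame(lm_list))
      # while the Unity side re-reads the file in Update() instead of Start().
      ```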

    • @AHabib1080
      @AHabib1080 1 year ago +2

      @@twenmod How can I separate the bone data?
      Each part of the movement, like hands, legs, spine, head, neck, etc.?
      Is it possible?

    • @twenmod
      @twenmod 1 year ago +3

      @@AHabib1080 I don't get what you mean; what I'm doing to get the movement onto a character is setting up the points as IK targets.

  • @gurudaki
    @gurudaki 1 year ago

    Excellent tutorial!! I have a question, please...
    How can we insert an open-source avatar to replace the lines connecting the landmarks? I mean, for the landmarks to be on an avatar instead of lines.

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

  • @pixelstate26
    @pixelstate26 2 years ago

    Does this work with any video? Sounds exciting.

  • @parthchotaliya3468
    @parthchotaliya3468 2 years ago

    good tutorial, keep pushing!!!

  • @alvarobyrne
    @alvarobyrne 2 years ago

    Nice. Division is a costly operation; prefer multiplying by 0.1 over dividing by 10. You'll notice the difference with bigger data, but it's better to get used to it in all cases.
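    One caveat to this micro-optimization: the two forms are not always bit-identical in floating point, since 0.1 is not exactly representable as a binary double, so each expression rounds differently. A quick check:

    ```python
    # Each expression rounds once, but 0.1 itself is already an approximation,
    # so multiplying by 0.1 can differ from dividing by 10 in the last bit.
    print(3 / 10)             # 0.3
    print(3 * 0.1)            # 0.30000000000000004
    print(3 / 10 == 3 * 0.1)  # False
    ```

    For a mocap visualization the difference is far below pixel scale, so the substitution is harmless here; just don't rely on exact equality afterwards.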

  • @kayumiy
    @kayumiy 2 years ago +6

    Can you apply this animation to any other 3D object avatar?

    • @spatil689
      @spatil689 1 year ago

      Can we do this??

    • @spatil689
      @spatil689 1 year ago

      If yes, please tell me.

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

  • @DissyWorld
    @DissyWorld 2 years ago +1

    Can this project be made with VS Code or Jupyter Notebook?

  • @MotuDaaduBhai
    @MotuDaaduBhai 2 years ago +1

    Awesome stuff! Is there a way to feed the motion data directly into Unreal Engine model rig so the motion capture can be done in real-time?

  • @musicalbirds2928
    @musicalbirds2928 2 years ago

    Great video as always. Thanks for sharing.

  • @dukezacharia7881
    @dukezacharia7881 2 years ago

    Wow, nice work! Can I ask: can this be applied to 3D animal motion capture?

  •  2 years ago

    Too amazing to be real! Thanks for sharing this.

  • @MirageDev_
    @MirageDev_ 2 years ago

    Thanks a lot bro! Keep it up!

  • @samsularefinsafi3448
    @samsularefinsafi3448 1 year ago

    This video is so amazing. Thank you, sir.
    Sir, I have a question here. Like your hand-tracking game, you did that in real time. I have a project on a real-time stick model, but I can't understand where I should change the code in Python. I use Spyder as my Python interpreter. Can you please help me complete my project?

  • @kurushitanaka5150
    @kurushitanaka5150 2 years ago +1

    I want to ask: what software do you use?

  • @sanniu4270
    @sanniu4270 2 years ago +1

    Great job! Could you explain how to get the depth data for human joints? Is it estimated by a well-trained neural network?

  • @edslab5383
    @edslab5383 2 years ago +2

    Can you make a video about estimating the true size of objects in a scene using multiple cameras through triangulation?

  • @joaocamiloulhoa4878
    @joaocamiloulhoa4878 2 years ago

    Amazing YouTube channel!!

  • @say_hon3y
    @say_hon3y 1 year ago

    Murtaza sir,
    How can this be achieved with a humanoid 3D model? If you could make a short video on that, it would be helpful, as lots of people are searching for simple ways to do motion capture, and it would be easy if we could do it with just a few lines of code like in this video.
    It was very helpful.
    Thank you.

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

  • @moses5407
    @moses5407 1 year ago

    Will this work for motion analysis (not the animation portion) in real time, by changing the video source to an available camera instead of a stored video?

  • @akhibali8405
    @akhibali8405 2 years ago

    Hi, great video!
    Can you please upload a video on taking body measurements using OpenCV,
    finding shoulder size, waist, etc.?

  • @madstudios1688
    @madstudios1688 2 years ago +1

    How can I make my own Python 3D camera tracker?
    I want to learn how to extract camera motion data from an image sequence and use it in a 3D program.

  • @FREEKER18
    @FREEKER18 2 years ago

    Very cool. I'd like to see it with two cameras: processing from two cameras positioned at 90 degrees to each other.

  • @priyanshubh
    @priyanshubh 2 years ago

    Please bring more such OpenCV-with-Unity videos; they're really helpful.

  • @geralt9036
    @geralt9036 2 years ago

    Thanks! You are a blessing.

  • @mia_in_lab
    @mia_in_lab 1 year ago

    Thanks very much!! That's what I want to do!!

  • @tahirullah4923
    @tahirullah4923 2 years ago

    Great, sir. Can you make a video on distance estimation from the webcam to any reference point or image? Please make a video on this, thanks.
    I have done face and hand distance measurements.

  • @antonolivaresjuanjose2992
    @antonolivaresjuanjose2992 2 years ago

    You are The BEST

  • @humayunkabir7866
    @humayunkabir7866 2 years ago

    Can we expect something for a Blender workflow?

  • @pulinduvinnath1812
    @pulinduvinnath1812 2 years ago

    Great video. Thank you 👌

  • @earnmoney2289
    @earnmoney2289 2 years ago

    Thank you, great job.

  • @flioink
    @flioink 2 years ago +1

    Alright, all done - pretty cool, thanks.
    I'd like a hint on how to replicate this in Unreal Engine.
    I'm guessing the code from the second part won't be much different, but I'm not sure how to set it up inside the engine.

    • @TriSutrisnowapu
      @TriSutrisnowapu 2 years ago +1

      I wonder if there is a workflow for Unreal too... BTW, Unreal has support for Python scripting, so I assume there should be a way.

    • @flioink
      @flioink 2 years ago

      @@TriSutrisnowapu 100% there's a way, but I don't know which is the right class to use for such a thing.
      I also bet there's a way to influence the character's skeleton joints with those points.

  • @FreshCreamSoup
    @FreshCreamSoup 2 years ago +2

    Very helpful. How do we apply this movement to a 3D player model?

    • @swastikkarwa2507
      @swastikkarwa2507 2 years ago

      Did you have any luck with this?
      I need it very urgently.

    • @spatil689
      @spatil689 1 year ago

      Yeah, me too.

    • @FreshCreamSoup
      @FreshCreamSoup 1 year ago

      @@swastikkarwa2507 @Sneha Patil Yes, I asked ChatGPT to implement this feature; it's not perfect but still manageable.

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

    • @FreshCreamSoup
      @FreshCreamSoup 7 months ago

      @@aangigandhi Are you stuck on creating the avatar, or on applying the animation data to the avatar?

  • @MarcoSilvaJesus
    @MarcoSilvaJesus 1 year ago

    Can you give me a link to the motion capture program you used in this video?

  • @llIIllIlIIllX_XIillIIllIIllIll
    @llIIllIlIIllX_XIillIIllIIllIll 2 years ago

    Hello, is it possible to improve the quality of the system to track facial and hand movements as well?

  • @MarkTreeNewBee
    @MarkTreeNewBee 2 years ago +1

    Also, I want to know: can I get rotation parameters from cvzone and MediaPipe?

  • @whitelee9630
    @whitelee9630 1 year ago

    There is a problem:

    INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
    Traceback (most recent call last):
      File "/Users/mac/PycharmProjects/MotionCapture/Data/MotionCap.py", line 9, in <module>
        lmList, bboxInfo = detector.findPose(img)
    ValueError: too many values to unpack (expected 2)
    Process finished with exit code 1
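    "Too many values to unpack (expected 2)" is the generic Python error raised when a call returns more values than the assignment unpacks; a minimal self-contained illustration (the stub below is hypothetical, not the cvzone API):

    ```python
    def find_pose_stub(img):
        # Imagine a version of the function that returns three things.
        return img, [], {}

    try:
        lm_list, bbox_info = find_pose_stub("frame")  # unpacks exactly two
    except ValueError as err:
        print(err)  # too many values to unpack (expected 2)
    ```

    In the video's cvzone workflow, the drawn image and the landmark list come from separate calls (`findPose`, then `findPosition`), so a version mismatch in either call's return shape would produce exactly this traceback.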

  • @minisreejith21
    @minisreejith21 2 years ago

    Which country do you live in? And where did you get your degree? You are superb.

  • @ameerhamza5542
    @ameerhamza5542 2 years ago

    Can you make a video on foot size measurement with a webcam?
    Love from Pakistan

  • @nuriman432
    @nuriman432 11 months ago

    Is this the same as DeepMotion?

  • @MarkTreeNewBee
    @MarkTreeNewBee 2 years ago

    Thank you for your video explanation; it helps me a lot. Just one more question bothers me: how can I put these red spheres onto a character to sync the motion in Unity3D?

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar. Please help.

  • @caroscheler
    @caroscheler 2 years ago

    Great Tutorial! I'm wondering, is it possible to combine this with the video you created on hand tracking? So can I get 3D data on Hands to bring to Unity? Or is this just possible with a video of a full body? Thanks!

    • @samsularefinsafi3448
      @samsularefinsafi3448 1 year ago

      Sir, did you figure out how to detect the full body in Unity in real time? I am facing the same problem.

    • @caroscheler
      @caroscheler 1 year ago +1

      @@samsularefinsafi3448 Sorry, I needed only hands, so I haven't done any full-body tracking. I tracked them with the other tutorial, then used a plugin to record the movement of my hand in Unity and export it as FBX for use in Maya.

    • @matearkossy7894
      @matearkossy7894 11 months ago

      Hello! I'm currently trying to achieve that as well! Do you have any tips on that, maybe? @@samsularefinsafi3448

  • @vipinsou3170
    @vipinsou3170 2 years ago

    Which processor are you using, bro?

  • @CyroCh.
    @CyroCh. 2 years ago

    Hi, have you ever tried a project with the Xbox Kinect?

  • @shagull9159
    @shagull9159 1 year ago

    Is it possible for me to record a video in sign language and have it capture the hand gesture motion as well?

  • @mimARDev
    @mimARDev 1 year ago

    Sir, can I run this on Android and iOS devices?

  • @werachaisrisupinanont3353
    @werachaisrisupinanont3353 2 years ago

    Amazing man

  • @gamer-tf2pe
    @gamer-tf2pe 2 years ago

    Thanks a lot man

  • @NaveenSword
    @NaveenSword 2 years ago

    Can this be used to detect multiple people?

  • @infinity2creation551
    @infinity2creation551 2 years ago

    You are awesome, bro 😍😍😍🥺🥺

  • @mslmanni
    @mslmanni 1 year ago

    I can't register at the site properly. It's not sending the email for confirmation or password reset.

  • @user-cw3nb8rc9e
    @user-cw3nb8rc9e 2 years ago

    Does your code link lead to the full program with a GUI?

  • @naiyraelkady8204
    @naiyraelkady8204 2 years ago

    How would you apply the annotations to a 3D model with an armature?

  • @Tesla_News_Space_X_Musk
    @Tesla_News_Space_X_Musk 2 years ago +2

    Sir, I am working on the same project, but in Blender.

  • @ewwkl7279
    @ewwkl7279 2 years ago

    Great tutorial. Thank you so much.
    I tried to find the angle of the elbow but got an error:
    angle = detector.findAngle(img, 11, 13, 15, draw=False)
    The error was:
    "Finds angle between three points. Inputs index values of landmarks
    instead of the actual points."
    How do I input index values?
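    Whatever the installed cvzone signature expects, a joint angle can always be computed directly from the landmark coordinates; a small self-contained helper (name and usage hypothetical, not the cvzone API):

    ```python
    import math

    def angle_at(p1, p2, p3):
        """Angle in degrees at vertex p2, formed by points p1-p2-p3,
        each given as an (x, y) pair."""
        a = (math.atan2(p3[1] - p2[1], p3[0] - p2[0])
             - math.atan2(p1[1] - p2[1], p1[0] - p2[0]))
        deg = abs(math.degrees(a))
        return 360 - deg if deg > 180 else deg

    # e.g. feed it the shoulder (11), elbow (13), wrist (15) coordinates:
    print(angle_at((0, 0), (1, 0), (1, 1)))  # a right angle
    ```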

  • @vaishnavikothoori9250
    @vaishnavikothoori9250 3 months ago

    Which domain does this project fall under?

  • @t-raystudios
    @t-raystudios 2 years ago

    I like this. Is there a tutorial where you convert this to a .bvh?

  • @rithika9442
    @rithika9442 2 years ago +1

    How do I do this in Unreal Engine 4?

  • @zishventure4280
    @zishventure4280 2 years ago

    Vector3 is not being recognized, and I am getting the error: "Index was outside the bounds of the array."

  • @infozy
    @infozy 2 years ago

    Which software is it?

  • @Mohak-Bajaj
    @Mohak-Bajaj 2 years ago

    Loved it.

  • @ezhil182020
    @ezhil182020 2 years ago

    Can you please let me know what the applications of this content are?

  • @wgalloPT
    @wgalloPT 2 years ago

    QUESTION: What would I have to do if I want a numerical value of, say, the waist dots' Y coordinate to display at all times?

    • @wgalloPT
      @wgalloPT 2 years ago

      Never mind the question above; I figured it out. Thank you!

  • @temyraverdana6421
    @temyraverdana6421 2 years ago

    Wow. Thanks for sharing.

  • @cyruslegg
    @cyruslegg 2 years ago

    I'm wondering, how do I view it in real time with matplotlib?

  • @HuyNguyen-vp7eb
    @HuyNguyen-vp7eb 2 years ago

    Can it work with multiple people?

  • @葉沛成
    @葉沛成 2 years ago +1

    Hello, I want to know: if I apply this with real-time tracking, how can I collect the data into a stored txt file, and how do I run that code? Because I think if I just replace the video link in cap.VideoCapture, I only get immediate data, so I don't know what to do. Another question: I can't wait to see you put this on a 3D model, e.g. a VTuber, ha ha.

  • @iojoe229
    @iojoe229 7 months ago +1

    Can anybody help me, please? I extracted the motion capture and tried to apply it to a 3D model in Blender, but I got a lot of deformation in the model. So I looked it up, and it appeared to be a scale-difference problem. I did solve it by dividing the coordinates by a big number, so the deformation was gone, but the movement was gone too. What should I do to avoid deforming the model while keeping the movement?
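    One common cause of the "scaling kills the motion" symptom described here is scaling absolute coordinates, offsets and all; recentring each frame on a root landmark before scaling shrinks the figure uniformly while keeping the relative motion visible. A hedged sketch (the root index and scale factor are hypothetical, to be tuned per rig):

    ```python
    def rescale_frame(points, root_index=0, scale=0.01):
        """points: one frame of (x, y, z) landmark tuples.
        Recenter on the root landmark, then scale the offsets uniformly."""
        rx, ry, rz = points[root_index]
        return [((x - rx) * scale, (y - ry) * scale, (z - rz) * scale)
                for x, y, z in points]

    print(rescale_frame([(100, 100, 100), (200, 100, 100)]))
    # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    ```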

    • @aangigandhi
      @aangigandhi 7 months ago

      Hey,
      are you able to get the same result in Blender? Like, by importing the motion capture into Blender, can this stuff be applied to the avatar? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

    • @iojoe229
      @iojoe229 7 months ago +1

      @@aangigandhi I retargeted the webcam feed onto an armature in Blender, but unfortunately I couldn't get the same result due to the scale difference between Blender and the captured data. If you solve this problem, I think it will work fine and you will hopefully get the same result; then you can export the armature with the animation as an FBX file and use it in Unity.
      You'd have to look into the details more; I am not an expert in this at all.

    • @aangigandhi
      @aangigandhi 7 months ago

      Okay,
      thank you for your response.

  • @haithinhtran5108
    @haithinhtran5108 2 years ago

    How do you make the skeleton with a 3D model?

  • @antonolivaresjuanjose2992
    @antonolivaresjuanjose2992 2 years ago

    It does not create the file where it saves the coordinates. Why could that be?

  • @adilmir3040
    @adilmir3040 1 year ago

    How can we implement the motion capture on a 3D model? Is it possible for all the movements to be performed by the 3D model? If yes, please make a tutorial for it. I really need it.

    • @aangigandhi
      @aangigandhi 7 months ago

      Did you find a solution for that? If yes, please guide me; it's an urgent requirement for my project. I successfully completed the webcam-feed-to-animation (made of spheres and lines) conversion but am stuck on creating the avatar.

  • @3Dpixelframe
    @3Dpixelframe 2 years ago

    Is it possible to export an FBX file?

  • @mrneonlightning132
    @mrneonlightning132 2 years ago

    Can we import this into Unreal Engine too?

  • @msms-rd6us
    @msms-rd6us 2 years ago

    Now we can have full-body tracking.