FreeMoCap ROCKS!!!

  • Published: 29 Sep 2024
  • FreeMoCap URL:
    github.com/fre...
    Please support my Patreon:
    / bracerjack
    Paypal Donation:
    bracercom@hotmail.com
    Bitcoin Address Donation:
    1698RW9rSJUXeSSbyNaQY7aNNbxQkLtjs5
    ===============================================================
    Support me by reading my books !
    Humpty Dumpty God
    Google Play eBook:
    play.google.co...
    Kindle eBook:
    www.amazon.com...
    Paperback:
    www.amazon.com...
    ===============================================================
    ===============================================================
    Book 1 :: God Guardian: The Life Review Simulation
    Google Play eBook:
    play.google.co...
    Kindle eBook:
    www.amazon.com...
    Paperback:
    www.amazon.com...
    ===============================================================
    ===============================================================
    Book 2 :: God Guardian: Incarnation Troll
    Google Play eBook:
    play.google.co...
    Kindle eBook:
    www.amazon.com...
    Paperback:
    www.amazon.com...
    ===============================================================
    ===============================================================
    Book 3 :: God Guardian: Perpetual Slaves
    Google Play eBook:
    play.google.co...
    Kindle eBook:
    www.amazon.com...
    Paperback:
    www.amazon.com...
    ===============================================================

Comments • 56

  • @genieeye5783
    @genieeye5783 11 months ago +12

    Could you please make a walk-through tutorial? I have zero Python experience; the original tutorial is really confusing to me 🤣

    • @BracerJack
      @BracerJack  9 months ago +3

      Sounds like something I should do someday!
      Thanks for the suggestion :D

  • @LFA_GM
    @LFA_GM 8 months ago +2

    Hi, thanks for this video. I am also looking for alternatives with multiple webcams. Have you tested with the Logitech 922? Also, there is a limit of two webcams per USB controller; a third will not work, and the others will have their frame rate decreased. In laptops, that is a serious constraint. You'll need at least 30 fps at 640x480 resolution for acceptable results. For 3-cam setups, at least one webcam should be placed about 6 ft/180 cm above the floor.

    • @BracerJack
      @BracerJack  8 months ago +2

      I saw your videos, and it is nice to see someone who is into mocap to this extent too.
      I bought the Kinect version 2 for mocap tests using the SDK; it was okay.
      I wanted to try ipisoft, but the price is a tad too much for me.
      I was looking at Brekel too, but then FreeMoCap came along and the quality just destroys everything I have seen before.
      You are also right about the webcam limit: I would have to use three laptops if I were to use webcams. But I think I will stick with actual cameras, because the webcams' frame rate is not stable. The output video's frame rate is stable, but when I am syncing the videos in the editor, I realize the webcams skipped frame captures internally, and I just cannot trust those cheap webcams anymore.

    • @LFA_GM
      @LFA_GM 8 months ago

      You're right, @BracerJack. Price tags nowadays are a limiting factor, and some "AI tools" pricing models are just delusional. Most of these techniques are available as open source, but the setup is not beginner-friendly. Also, this mocap business didn't grow as predicted, so we don't have many affordable providers. I'll be testing FreeMoCap with single and multiple cameras very soon and record a video with my results.
      There is another free one, "EasyMocap", which also allows multiple cameras but has a far more complicated setup, not so "easy" as the name suggests: ruclips.net/p/PL1NJ84s5bryvhGJzcCjPMDJiI9KW9uJ-7&feature=shared

    • @BracerJack
      @BracerJack  8 months ago

      @@LFA_GM If you use Discord, you can join mine at discord.gg/AU7Rg6KD so that we can chat about mocap stuff :D

    • @LFA_GM
      @LFA_GM 8 months ago

      Thanks @@BracerJack. Unfortunately I don't use Discord, but I'll update my results here. I tested two Kinect 360s on the same computer, and unfortunately my hardware specs couldn't handle the USB bandwidth requirements.

    • @MrGamelover23
      @MrGamelover23 4 months ago

      @@BracerJack Have you ever tried xranimator? It uses MediaPipe to track the body. Right now it only works with one camera at a time, but it allows you to record motion capture and export it into Blender. I'm curious how well it would work compared to this.
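The USB-controller limit discussed in this thread comes down to raw bandwidth. A back-of-the-envelope sketch, assuming uncompressed YUY2 video (2 bytes per pixel) and a rough ~35 MB/s practical ceiling for one USB 2.0 controller — both figures are illustrative assumptions, not measured values (real webcams often compress with MJPEG, which lowers the load):

```python
# Back-of-the-envelope USB bandwidth check for uncompressed webcams.
# Assumes YUY2 (2 bytes per pixel); the 35 MB/s figure is a typical
# real-world USB 2.0 throughput, not the 60 MB/s spec maximum.

def webcam_bandwidth_mb_s(width, height, fps, bytes_per_pixel=2):
    """Raw video bandwidth in MB/s for one camera."""
    return width * height * bytes_per_pixel * fps / 1_000_000

USB2_EFFECTIVE_MB_S = 35  # rough practical ceiling per USB 2.0 controller

per_cam = webcam_bandwidth_mb_s(640, 480, 30)  # ~18.4 MB/s
for n in (1, 2, 3):
    total = n * per_cam
    verdict = "within" if total <= USB2_EFFECTIVE_MB_S else "over"
    print(f"{n} cam(s): {total:.1f} MB/s -> {verdict} one USB 2.0 controller's budget")
```

Two cameras already sit near the ceiling and a third clearly exceeds it, which matches the "third webcam won't work, the others drop frames" behavior described above.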
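The internally-skipped-frames problem described above can be detected if per-frame capture timestamps are available: a camera that writes a constant-frame-rate file can still stall during capture, and the stall shows up as an oversized gap between capture timestamps. A minimal sketch — the timestamps, frame rate, and tolerance below are hypothetical example values, not FreeMoCap output:

```python
# Sketch: flag frames whose capture gap suggests the camera skipped capture.

def find_skips(timestamps, fps=30.0, tolerance=1.5):
    """Return indices where the gap to the previous frame exceeds
    `tolerance` times the nominal frame interval (1/fps seconds)."""
    interval = 1.0 / fps
    return [
        i
        for i in range(1, len(timestamps))
        if timestamps[i] - timestamps[i - 1] > tolerance * interval
    ]

ts = [i / 30 for i in range(10)]
ts[5:] = [t + 0.1 for t in ts[5:]]  # simulate a 100 ms capture stall
print(find_skips(ts))               # -> [5]
```

A run with no flagged indices is a reasonable sanity check before trusting a cheap webcam's output for multi-camera syncing.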

  • @ok-hc4he
    @ok-hc4he 4 months ago +1

    I've set up everything and have exported my animation to Blender, but all I have is a walking skeleton. How do I transfer the animation to the human mesh I created?

    • @BracerJack
      @BracerJack  4 months ago

      Well, first of all, good job 👍
      There is a Blender add-on created for this purpose by the FreeMoCap community; check their GitHub and/or Discord to get it. It automates the process in a few clicks.

  • @SalElder
    @SalElder 6 months ago

    This is very helpful and informative. Thanks for the video.

    • @BracerJack
      @BracerJack  6 months ago

      You are welcome Sal Elder :D

  • @forthesun6563
    @forthesun6563 5 months ago +1

    How do I import the animation into Blender, please?

    • @BracerJack
      @BracerJack  5 months ago

      There is a plugin for it; it comes with the mocap program. You can also join their Discord to get the latest version of the add-on.

  • @mohamedharex
    @mohamedharex 9 months ago

    Nice demo. Where do I get that add-on?

    • @BracerJack
      @BracerJack  7 months ago

      Go to their Discord; the person who makes the add-on is there, and you can get the latest version from them :D

  • @ScifiSiShredaholic
    @ScifiSiShredaholic 8 months ago +8

    Excellent video, absolutely right about the turning too.
    If I see another promo video for a new mocap app where all the footage is just star jumps and waving at the camera...

    • @BracerJack
      @BracerJack  8 months ago +1

      I know, right? It is all too easy to detect those "sudden" and gesturally "loud" moves.
      Give me a system that detects foot placement and turning movements correctly and you have yourself a winner!

  • @kevinwoodrobotics
    @kevinwoodrobotics 19 days ago +1

    I think you should be able to calibrate your cameras with a smaller chessboard. You would just need to move closer to the camera

    • @BracerJack
      @BracerJack  19 days ago +1

      @@kevinwoodrobotics YES, that too, but remember the dilemma:
      1: If the board is small, when you move closer to one camera, the other camera will not see it, or it will appear too small to be of any use to that other camera.
      2: Attempting to rectify the problem by moving the cameras closer together removes the parallax property, which is the sole principle behind being able to figure out a point in 3D space.
      A bigger board allows the cameras, especially if you only have two, to be far apart from each other for maximum positional triangulation in 3D space while the board can still be detected by both cameras at the same time.

    • @kevinwoodrobotics
      @kevinwoodrobotics 19 days ago +1

      @@BracerJack I see. So they require you to calibrate all the cameras simultaneously instead of independently?

    • @BracerJack
      @BracerJack  19 days ago

      @@kevinwoodrobotics They cross-reference each other to figure out exactly where the cameras and the board are in 3D space.
      The more cameras catch a glimpse of the board at the same instant in time, the more certain the positional solve is.
      I believe your next question might be, "So that means the videos need to be all synced up then?"
      The answer to that is: "Yes".

    • @kevinwoodrobotics
      @kevinwoodrobotics 19 days ago

      @@BracerJack ok that makes sense now! Thanks!

    • @BracerJack
      @BracerJack  19 days ago

      @@kevinwoodrobotics Glad I am able to help you! Be well on your journey.
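The parallax/triangulation principle discussed in this thread can be sketched in a few lines: given two calibrated cameras (known 3×4 projection matrices — the kind of information chessboard calibration produces), the Direct Linear Transform recovers a point's 3D position from its two image projections. The projection matrices and point below are toy values, not real calibration output:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices; uv1, uv2 : 2D image coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null space of A = the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean

# Toy setup: two cameras 1 m apart along x, both looking down +z,
# identity intrinsics (pure pinhole projection).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 4.0])
uv1 = X_true[:2] / X_true[2]                    # projection in camera 1
uv2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # projection in camera 2
print(triangulate(P1, P2, uv1, uv2))            # ~ [0.2, 0.1, 4.0]
```

The wider the baseline between the cameras, the more the two rays differ (more parallax) and the better-conditioned this solve is — which is exactly why a big board that both far-apart cameras can still see helps calibration.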

  • @BrianHuether
    @BrianHuether 6 months ago +1

    I am about to try this to make a 3D animated model of myself, just for shadow and reflection casting. I don't need it to be super accurate.
    Does it work if the subject is sitting down? I need it to capture me seated playing guitar. Not my hands, but the rest of my upper body. Well, I also need to capture the guitar neck. Encouraging to watch your video!

    • @BracerJack
      @BracerJack  6 months ago

      I don't think the AI would work if....
      Hold on, let me think about how to best explain this....
      First and foremost, understand that AI motion capture has difficulty detecting the body's pose if the outline isn't very strong.
      You sitting down and not really moving very much is probably going to kill the detection; to make matters worse, you are about to bring in a foreign object that the A.I. will then have to wonder whether it is part of your body or not.
      This is a brave experiment. I wish you luck 😀

    • @BrianHuether
      @BrianHuether 6 months ago +1

      Hi, I would think it should work; am going to try today. Even if there are periods of no motion, that shouldn't cause it to fail, since camera trackers work even without motion as long as there are good initial frames of trackable points. So if I am sitting with two cameras, one on each side of me, and my upper body is moving at first, that should be enough to generate data and set the tracking in motion.

      Interesting how the guitar will affect things, but I would think that too would not be a problem. AI is just large-dataset statistics, basically regression on high-dimensional datasets, in other words high-dimensional curve fitting. So if there are good initial frames, mocap should understand where the arm joints are, and the nature of the training data should be such that the system would not result in, say, arm joints suddenly jumping and being mapped to the guitar, since such jumps would be absent in the training data. And if this system works like next-token prediction (the way OpenGPT does), then the likelihood calculations would be unlikely to choose a sudden deformation of a joint as a likely next token. I suspect the devs have some sort of constraint system so that from frame to frame you wouldn't have the body geometry wildly deforming.

      I joined their Discord channel and will let them know how this goes. Also, I have been playing around with the great camera-tracking software SynthEyes; it has geometric hierarchical tracking, where it can track an object as a separate hierarchy from some other object (like a person holding the object). These FreeMoCap devs and the SynthEyes dev should collaborate! The SynthEyes manual describes a multicamera motion-capture function, but I have yet to see an example of anyone using it. But if I get this working, then I won't bother trying to use SynthEyes for motion capture. @@BracerJack

    • @BracerJack
      @BracerJack  6 months ago

      @@BrianHuether I wish I were there to conduct the experiment with you; this is interesting. I actually want to know the result.

    • @BrianHuether
      @BrianHuether 6 months ago +1

      @@BracerJack will report back here! Going to print out that calibration image and follow the calibration steps.

    • @BracerJack
      @BracerJack  6 months ago

      @@BrianHuether You....you use SynthEyes for BODY MOTION CAPTURE?!!!

  • @hotsauce7124
    @hotsauce7124 9 months ago +1

    Question, have you also tried walking towards the camera?

    • @BracerJack
      @BracerJack  7 months ago

      It should be fine AS LONG AS....more than one camera is capturing your movement.

  • @adamvelazquez7336
    @adamvelazquez7336 9 months ago +1

    Do you have a walkthrough of the usage of FreeMoCap from capture to Blender?

    • @BracerJack
      @BracerJack  7 months ago

      Not at the moment, maybe I will do that someday :D

  • @AbachiriProduction
    @AbachiriProduction 9 months ago +1

    Hi, where can I get that large background image?

    • @BracerJack
      @BracerJack  9 months ago +2

      That image is included in the github repo :D

  • @GabrieldaSilvaMusic
    @GabrieldaSilvaMusic 9 months ago +1

    Does this work for 2 people detection?

    • @BracerJack
      @BracerJack  7 months ago

      I have no idea, I have never tested it with two people :D

  • @The-Filter
    @The-Filter 7 months ago +1

    No, it does not! It is totally horrible to use! It needs a complete UI with an FBX export option!

    • @BracerJack
      @BracerJack  7 months ago +1

      Try discussing your issue with the creator on Discord; he is very helpful :-)

    • @xyzonox9876
      @xyzonox9876 4 months ago

      The command-line interface is cool, though.

    • @The-Filter
      @The-Filter 4 months ago

      @@BracerJack I did already. He avoids any improvements and just points to the existing solution.

    • @The-Filter
      @The-Filter 4 months ago

      @@xyzonox9876 If you only have one animation to do, yeah, maybe. But it is 2024! And this is not a solution, especially if you have to manage hundreds of animations, select the ranges that need to be saved, organize and rewatch them quickly, maybe redo the capturing, and so on!

  • @terryd8692
    @terryd8692 a year ago +1

    Awesome, but can it dance? 😄

    • @BracerJack
      @BracerJack  11 months ago +2

      It can...when I can....someday ;p

  • @terryd8692
    @terryd8692 a year ago

    What is the output in? Is it the application itself? Is it simple to get it into Blender? It would be good to have a wireframe background and floor to get an idea of how much drift you're getting. Apart from getting up from the crouch, it was pretty damned good.

    • @BracerJack
      @BracerJack  11 months ago +2

      1: The output is a Blender file; there is even a Blender add-on that auto-converts the file into a Rigify Blender file.
      2: The sliding of the feet has a lot of room for improvement; hopefully, having another camera will help.
      3: Yeah, maybe when the body is squeezed like that, the outline-recognition part of the A.I. simply gives up to some extent, but the fact that it works at all...wow.

  • @ryanrose7523
    @ryanrose7523 10 months ago +1

    Great Job!

  • @raymondwood5496
    @raymondwood5496 11 months ago +2

    May I know what camera you used?

    • @BracerJack
      @BracerJack  11 months ago +1

      I used the Canon G7X Mark II and the Sony ZV1.

    • @raymondwood5496
      @raymondwood5496 11 months ago +1

      @@BracerJack OK, thanks.... Great video. I am still having problems moving the data to Blender. I will try Kinect using ipisoft first.

    • @BracerJack
      @BracerJack  11 months ago +1

      @@raymondwood5496 Let me guess: when the program was attempting to export to Blender, it got stuck, right? That can be resolved: back up and then delete your Blender configuration folder, and that will do it.
      This happened to me as well. Once you have cleared the folder, you are good!

    • @raymondwood5496
      @raymondwood5496 11 months ago

      @@BracerJack Thanks for the tips. Will try again tonight.

    • @BracerJack
      @BracerJack  11 months ago +1

      @@raymondwood5496 I never got the chance to try ipisoft; the multi-cam option is really out of my budget.
      Good luck!
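The "back up, then delete, the Blender configuration folder" fix described in this thread can be scripted. A sketch in Python — the per-OS paths below are the usual Blender user-config locations, but the exact folder (and the version subfolder inside it) on your machine may differ, so treat them as assumptions and verify before running:

```python
import pathlib
import shutil
import sys

def backup_blender_config(config_dir: pathlib.Path) -> pathlib.Path:
    """Rename the config folder to <name>.bak so Blender recreates defaults."""
    backup = config_dir.parent / (config_dir.name + ".bak")
    shutil.move(str(config_dir), str(backup))
    return backup

def default_config_root() -> pathlib.Path:
    """Typical Blender user-config roots per OS (verify on your system)."""
    home = pathlib.Path.home()
    if sys.platform == "win32":
        return home / "AppData" / "Roaming" / "Blender Foundation" / "Blender"
    if sys.platform == "darwin":
        return home / "Library" / "Application Support" / "Blender"
    return home / ".config" / "blender"  # Linux and others
```

Blender rebuilds the folder with defaults on its next launch; the `.bak` copy keeps the old preferences in case you want to restore them.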