cat appreciated
You know what? It's 2 o'clock at night in the country where I live, but I didn't give up and I won, with one webcam and one smartphone. One solution seemed to fail, but I got it working. Thanks to everyone who supported me. Special thanks to Jon Matthis, the creator of this interesting method.
I will have a detailed report on this.
Sorry, but how did you get your phone to work with it? I can't get it to work.
Don't know if the gravity problem was solved, but maybe you can use a specific calibration board for the floor. That way the software can recognize and define the workplane.
Awesome work, bro! I'm really looking forward to testing it.
This is a very cool instrument. I can't wait to see the user-friendly interface of the program. I wish I could help with feedback about the program, but it's hard to use in the alpha version right now. Thanks for continuing to work on the product.
Thanks!
Yeah, I've been busy with other stuff recently but hopefully we'll get a viable software workflow together "Soon"
I really enjoyed the features from Poo, please bring him into more videos; he strikes many alluring poses in which I am utterly captivated by his luminous shimmering curves. As for the technology? Eh.💩
Absolutely brilliant work, Jon! Big thanks for your generosity! Keep up the amazing work! Subbed!
Detailed tutorial! Thank you very much! I can never detect my camera when I use the GUI, but it's possible to use traditional methods, haha. 52:48😋
This is the most incredible thing I've seen! This is exactly what I've been looking for! I hope you have more success in anything else you do, because this is the best!
Wow! This is incredible. Thank you so much for developing this project as open source. I will definitely try that out and maybe contribute some code in the future.
Such tools could really become useful for indie game developers for example.
I really appreciate your work ❤
Btw, those sweet cats gave this great video a nice cozy touch :)
Thank you Jon, and please don't stop
Incredible cinematography.
Wow! Absolutely amazing work you've done here! Thank you so much! Really happy I found this :)
Keep up the great work!
Keep up the amazing work, Jon!
Love your work and adore your kitty, it drew me in. 😊
Is it possible to do recognition in real time? I could connect with the dev team and work on this feature. My last project was about ORB-SLAM3 tracking.
Great work. Is real-time mocap possible?
Looking amazing so far!
Hello, I made a mocap using FreeMoCap. Tell me how I can export animations from Blender to Unreal Engine 5.
Great Job !
Seems great!
I didn't find anything on your website about the hardware setup needed for this.
Could you please detail it?
Thanks!
EDIT: okay, it's in this vid ;-) sorry for bugging you :P
You're awesome Jon keep up the work!
I'm looking for mocap software and yours looks the most promising to me.
Subbed btw ~
Where do I get the toml? I have videos (downloaded), I installed the GUI, I run the GUI and nothing happens when I click "Process All Step Below".
Amazing tutorial, thank you very much!
Just here for the cat 😀
Same!!!
Poo is key dev 😼
1000 thanks for making multiple camera markerless mocap accessible to us! If I may ask: Is this tutorial 1:1 applicable to the latest version of freemocap?
No, the latest version has an improved GUI, workflow and Blender output.
And will you make a new version of the video with the new GUI workflow and Blender output? @@fluxrenders
Well, a new tutorial is necessary indeed. The problem is lack of time, and also that at this stage the tutorials get outdated pretty quickly.
Blender :/... why not an FBX exporter? The one and only animation format to get the animations into EVERY software?!!!! However, I am totally unsatisfied with the usage... I got it running, but I don't see any production usage this way. Thanks anyway! @@fluxrenders
I posted a detailed reply but it seems it was filtered out :( . At least I want to say that FBX export is being considered but is not a priority, as you can already export to FBX using a Blender addon. You can even export an FBX suitable for Unreal Engine. There is a video of that process.
Hi, this is pretty wild! If I use a couple of Android phones and want to connect them to my Mac as wifi webcams, it seems I have to use software such as Camo. Will this sort of middleman software impact FreeMoCap? Thanks. BTW, maybe have the developers work on a constraints system so that joints/bones are never allowed to undergo impossible deformations. Also, will it work if you are holding an object? I am trying to find a way to do motion capture while playing my guitar, in which case I am seated, and at times walking with or without the guitar. I don't need accurate capture of my hands playing guitar; I am using mocap to generate 3D models to then cast shadows and reflections in 3D scenes. When I film I don't show my whole body, but if I have Android phones as webcams for the capture, do you foresee any issues doing mocap in my use case? Thanks again!
Hi, there would not be a problem with using Android phones as webcams. But you can also use the prerecorded videos option, where you record on each device locally and then import the videos into the program. Regarding playing the guitar, you could overcome occlusion with good camera placement. I think it should work fine if you don't mind less-than-ultra-accurate finger capture.
@@fluxrenders great, thanks for the info!
This is VERY cool. Does it only work with webcams, or can it handle multiple video files recorded from different angles, say with DSLRs?
It can handle multiple prerecorded videos from different devices like smartphones, DSLRs, etc. They have to be synchronized though. The GUI has an import option that can sync using the audio track, or you can sync them in software like DaVinci Resolve.
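For anyone curious how audio-based syncing works under the hood, here is a minimal sketch (not FreeMoCap's actual code) of the usual approach: cross-correlate the two audio tracks and take the lag with the highest correlation as the time offset. The synthetic noise burst below stands in for a real soundtrack; a loud, distinctive sound like a clap makes the peak sharper in practice.

```python
import numpy as np

def estimate_offset_seconds(ref, other, sample_rate):
    """Estimate how many seconds `other` lags behind `ref` by finding
    the lag that maximizes the cross-correlation of the two tracks."""
    corr = np.correlate(other, ref, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(ref) - 1)
    return lag_samples / sample_rate

# Demo with synthetic audio: the same noise burst, delayed by 0.5 s.
rate = 1000                                   # samples per second (toy rate)
rng = np.random.default_rng(0)
clap = rng.standard_normal(rate)              # 1 s of distinctive "audio"
track_a = np.concatenate([clap, np.zeros(rate)])
track_b = np.concatenate([np.zeros(rate // 2), clap, np.zeros(rate // 2)])

offset = estimate_offset_seconds(track_a, track_b, rate)
print(offset)  # 0.5 -> trim 0.5 s from the start of track_b to align them
```

A real pipeline would first extract and resample the audio from each video (e.g. with ffmpeg), but the alignment step itself is just this correlation.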
Will the result be improved if I wear a special suit for motion capture? For example, a black skinny suit with white dots at the joint positions.
38:30 - cmd can freeze execution when you select text 😢
This is awesome, Jon! Waiting for it.
That is so promising!
I have 14 PS3 Eye cameras, which have a 60 fps option (lower resolution though). Do you think resolution or FPS will matter more?
Will it be possible to add DSLR footage afterwards? I assume it won't be able to go through the calibration process with a DSLR.
Can FreeMoCap work from a single camera/video? Lots of thanks!
Will a single camera work? Given a single 10-second video file containing one person walking, can this software generate FBX files that contain the animation info?
Thank you for your wonderful work; it is truly useful to DIYers like me. I have a question: I used FreeMoCap on my system previously with a single Kinect V1, no issues. I tried moving to multicam, but now it won't detect any camera, not even the previously detectable Kinect V1. I've tried going back to single cam, but it doesn't work either. For info, I have used 1-6 PS3 Eyes, and/or 1-4 Kinect 360 sensors, and/or the aforementioned Kinect V1, but not a single camera is detected. I used Ipisoft's recorder to see if it was a bandwidth issue, but it detects all cams and runs them as well. Any advice? I reinstalled skellycam as well, but no luck.
A workaround can be to use prerecorded videos (captured with OBS, for example) instead of recording directly with FreeMoCap.
@@fluxrenders thank you for your suggestion. I could record multicam using the Ipisoft recorder, but without charuco board data that wouldn't be very useful in FreeMoCap, right?
Yes, you need to record a calibration with the charuco board and then get a calibration file. This file is used to reconstruct the 3D skeleton when processing the capture videos.
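To give a sense of what that calibration file makes possible: once each camera's projection matrix is known, a 2D keypoint seen in two or more views can be triangulated back into a 3D point. Below is a self-contained sketch of the standard linear (DLT) triangulation method with toy camera matrices — an illustration of the general technique, not FreeMoCap's actual internals or calibration file format.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 in two cameras with projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # de-homogenize

# Toy setup: two cameras looking down the z-axis, one shifted 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.3, 5.0])       # ground-truth 3D point
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]          # its projection in camera 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]          # its projection in camera 2

print(triangulate(P1, P2, x1, x2))       # ≈ [0.2, 0.3, 5.0]
```

The charuco calibration is what supplies real-world equivalents of `P1` and `P2` (intrinsics plus each camera's pose), which is why the 3D skeleton can't be reconstructed without it.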
@@fluxrenders I don't think I can do that out of the box with OBS or IPI... hmmm... thanks for the idea though.
@@fluxrenders do you think only one instance of each type of camera is being detected by skellycam because they have the same hardware IDs?
I get dark images using -7 exposure on my C922 Pro cameras.
I just did my yearly search for open source motion capture tools. Is it finally happening?
If so, this is big news, incredible stuff
Is three the minimum number of cameras that will produce moderately usable results? And proprietary options can achieve this, maybe with less accuracy, with just one camera and no calibration; is there hope of achieving that with these tools without drastic refactoring? Genuinely curious, and the current results look fantastic.
I give up. :-( I can capture calibration data and record videos, but none of the other buttons in the GUI do anything. (Tried on my PC: Windows / Intel i9 64GB RAM / 4090 24GB VRAM. And tried on my laptop: Ubuntu / Intel i5 40GB RAM / no GPU.) No install errors, GUI running on both. Calibration toml generated successfully. Videos recorded and saved to disk. Then... nothing. The GUI buttons print statements to the console as if they meant to launch processes etc., but nothing happens.
Hi, you might want to ask in the server. Maybe we can find what the error is by checking the log files.
@@fluxrenders Thanks, I will probably try again when I have time. I got another solution working, but the quality was unusable. (And I found out how bad I am at moving my body for specific animations.)
Hey Jon, I ended up copying the installation notes into ChatGPT and it helped me install it.
Can it capture 2+ persons?
I can't use the GUI; it seems some packages can't be installed 😭😭 I don't know Python, I learned it while trying. When will a UI come out?
Hi! Thank you for trying to get this running! I know how challenging this kind of thing can be when you're new to programming.
Make sure you have done all of the steps and entered all the commands *exactly* as they are shown here- github.com/freemocap/freemocap#how-to-run-the-alpha-gui
If that doesn't work (or if you get confused about one of the steps) - please create a new Issue on the GitHub repository here - github.com/freemocap/freemocap/issues
When you make the Issue, please be sure to copy any `error` messages that come up in your terminal!
Good luck!
Are you using the conda instructions? And missing PyQt6?
I switched to PyCharm and got it working that way.
Hey Jon, great work, I really appreciate it 🙏 Can you please help me? I am trying to create a real-time mocap solution as a final-year project for my university, where I am integrating freemocap with Unity to get the tracked data and map it to a character in Unity 3D.
Is it possible? If so, how?
@@devpatel8276 Auto-rig and skin your character once the animation dataset is calculated; then your character will follow the animation, I guess. Better to use simple models at the start.
Question: can I use one camera?
Yes! The latest versions of the software have a "single camera mode" that produces flat skeletons (compressed onto the Y plane). It's very quick and easy and loads into Blender just like a multi-camera capture.
@@jonmatthis Amazing, thank you! And I'm really sorry, but may I ask: do you have a video on how to install the program? I got lost along the way with the text/website tutorial. Thanks :D
If this gets refined, it might help a lot of solo devs using Unity or Unreal Engine. Maybe an Xbox Kinect would be better.
You can already import the FreeMoCap Blender output into Unreal Engine. It's not perfect, and depends on the recording setup, the retargeting, etc., but it is improving. ruclips.net/video/QcFmZ_eex0U/видео.html
best mocap #mocap
I want to bring it into Unreal Engine, but it's under the AGPL license.
The license does not apply to the output of a program, so there won't be a problem.
@@thegame3417 The Python code must be integrated into the project, so a program built on this project would have to be open source...
13:50 hygiene is important, but the timing is not so great.