Meshroom | Photoscan to Camera Track (Matchmove)
- Published: 26 Jan 2019
- Today we use free photoscan software to obtain a camera track (and mesh!)
Final Shot: vimeo.com/341470345
Software:
Meshroom
alicevision.github.io/#meshroom
Blender
www.blender.org/download/
Maya
www.autodesk.com/education/fr...
Music:
-TeknoAXE - Ambient Intermission
-TeknoAXE - Synthwave C
Join my Discord!
/ discord
With blender 2.8 you no longer need to fix the alembic file
You're right thank you for letting me know!
Ya beat me to it. Great to give Meshroom another try, knowing I can bring in my cameras too. + This node workflow... !!
Blender still crashed when I tried it on 2.8.1.
Hello! When I import the alembic file in Blender it doesn't crash, but I just get these components: mvgRoot > mvgCameras, mvgCamerasUndefined, mvgCloud, and I can't see the camera itself. What should I do?
Really smart, never thought to do a camera track this way and you've explained it nicely!
Great delivery!!!
Sharp and sweet with just enough information. Please produce more Meshroom videos. 🤜🏼🤛🏼🇦🇺🍀🤓
what a great tutorial ... never thought Mesh could do that ....brilliant
Excellent. Been trying to learn Meshroom. Couldn't get a scan to come out. I now know it's all in the camera. Going to try this method. You did a great job and kept it to the point! Looking forward to learning more from you.
Thanks
You just made my life easier. Thank you a lot👍
That’s very informative. Thank you mate 🤟
This tutorial absolutely nails it! I would love to know if there are any special considerations for film gates/resolution for post production?
Great tutorial, thanks
thanks for sharing man!
you're awesome, Mario!!!
Thanks Raquel you are too :D
you saved my life :-D
Thanks Mario I will try it
Did you have to do any sort of retopo to get this to render out/preview faster? I've just created my first stunt double using Meshroom and rigged it within Blender, however the topology is horrendous (obviously). Great tut mate. For moving objects (a person) I guess you could get them to stand still in the video, then separate them in the mesh afterwards and rig them. You'd need to remove them from the original video, but AE content-aware fill does a pretty good job if they're against a plain background.
Hey sorry for just getting to you now, I did not do any retopo but I did Decimate the mesh in Blender to remove a lot of geometry. Glad you could use it for a stunt double I tried to scan people twice and failed haha. If you ever upload anything with the scan feel free to share a link!
Thanks for this tutorial. Is there any way to get the camera with just a few points to determine a ground plane, to speed up the process?
I don't know which settings will make the process faster, but I do know they exist. I don't mess around with this software enough to say, but others might be able to help you get faster scans, or you could even just lower the settings in each node.
Is there a way to get lens distortion data from this program into blender to give the render the same lens distortion values as the footage to improve the matchmove?
At the time of this video I wondered the same thing but didn't put enough time/research to find out. Meshroom has updated since this video so it may have tools for exporting lens distortion. I'll let you know if I find out!
ace
Wow, how did you do the water person? This is super impressive! I only know a little bit about fluid simulation in Blender but I would have no idea how to achieve something like this. Any hints? ^^
In short I used a fluid sim tool in Maya called Bifrost, but the same should work in Blender with Mantaflow (which is now included in the main branch!). What I did was animate a very low poly model. Then I filled the model with fluid (by duplicating and shrinking the main model and making it a fluid object). Last I used the main original model as a collider. This makes the liquid follow the shape of whatever model you use. Sometimes water would spill out of the model because the object doesn't have thickness, but it added a cool effect so I didn't try to fix it. If you have any more questions let me know, I'd be happy to help!
Here's the final shot if you're interested!
vimeo.com/341470345
Awesome tutorial mate, this is a great option for camera tracking!
Is there any other software you can use to convert the abc file without giving away your commercial rights?
Hello, thanks for watching! I looked for a few days at the time and didn't find anything, and I still haven't found an option, but perhaps one will come up. I hope you run across one!
Now working in Blender 2.8, apparently...
Blender can import the alembic file fine now so no need to do any other conversions.
You're right thanks for letting me know!
Hey - nice work - any way of exporting the camera for use in Maya (yes I know) rather than just using Maya to fix the file? Like any way of importing to Maya and fixing the orientation issues to then throw into something like Nuke or Natron?
For Maya you'll just import the Meshroom camera and OBJ straight into Maya and skip all the Blender stuff. To reorient it you'll want to add a locator in the middle of the scene > parent the mesh and the camera to the locator > then translate, rotate, and scale the locator as you please to orient the scene in a more practical way. Then you can just select the camera and mesh and export them as Alembic for use in Nuke or Natron (you don't have to select the locator on export).
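The reason parenting both the camera and the mesh under one locator works: the same rigid transform gets applied to both children, so their relative alignment (and therefore the matchmove) is untouched no matter how you reorient the scene. A toy sketch in plain Python, no Maya API, with made-up positions:

```python
import math

def rotate_z(p, deg):
    """Rotate a 3D point around the Z axis by `deg` degrees."""
    r = math.radians(deg)
    x, y, z = p
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r),
            z)

def reorient(p, deg, offset):
    """Rotate, then translate -- what moving a parent locator does to a child."""
    x, y, z = rotate_z(p, deg)
    ox, oy, oz = offset
    return (x + ox, y + oy, z + oz)

# Camera position and one mesh vertex from a solve (illustrative values).
camera = (2.0, 0.0, 1.0)
vertex = (3.0, 1.0, 1.0)

# Reorient the whole scene: 90 degrees around Z, then shift 5 units in X.
cam2 = reorient(camera, 90.0, (5.0, 0.0, 0.0))
ver2 = reorient(vertex, 90.0, (5.0, 0.0, 0.0))

# Camera-to-vertex distance is identical before and after, so nothing drifts.
print(math.dist(camera, vertex), math.dist(cam2, ver2))
```

The same argument is why you can safely leave the locator itself out of the Alembic export: once the children are baked in world space, the parent has done its job.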
How does this tutorial match up to the more recent versions of Meshroom?
I need help because I am a newbie in Blender (2.8). How did you import the video alongside the camera track in Blender? Where do I import it and how? I need to overlay video (from .png files) onto the mesh that was made in Meshroom. Thanks.
Oh, I was wrong in my other reply (will be deleted to avoid confusion). In Blender 2.80, first delete the cube and the camera. After importing the .obj and .abc files, select the camera from 'mvgRoot', check 'Background Images', click Add Image, click Open, and select the first image that was used for Meshroom. Change Source from Single Image to Image Sequence, change Frames from 1 to the number of frames you can see on the playback track + 1 (if it says End: 62, then it is 63 frames total because it includes frame 0), change Start to 0, and Offset to -1. If those numbers are wrong, you may notice the mesh and background image are mismatched. I also enabled Cyclic, but can't see the difference; it's just for convenience, as the author of this video did.
Additional info: to correct the orientation of the object while keeping the camera motion in sync, first select both the camera and the mesh before rotating and moving. Otherwise they will end up mismatched.
You are correct, you follow the steps you stated with "Background Images". "Background Images" are in a different place in 2.8 than in 2.79 (I use 2.79 in this video), but it seems you found it. Hope you found all the answers you needed, and sorry for only now getting to you (I was away from youtube for some months).
@@marioCazares Yes. Everything is slightly different, but eventually anyone can find the solution. No worry about being late in answering. In that past two months I learned a lot about Blender, and I like that software even more.
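The frame numbers in the steps above come down to simple arithmetic. A quick sketch (the function name is just illustrative; 62 is the End value from the example):

```python
def background_sequence_settings(last_frame_index):
    """Background Image Sequence values for a solve whose playback range
    runs from frame 0 to last_frame_index, inclusive."""
    frames = last_frame_index + 1   # total count includes frame 0
    start = 0                       # the Meshroom/alembic animation starts at frame 0
    offset = -1                     # shift so image 1 of the sequence lands on frame 0
    return frames, start, offset

# Playback track shows End: 62 -> Frames: 63, Start: 0, Offset: -1
print(background_sequence_settings(62))  # (63, 0, -1)
```

If the background drifts ahead of or behind the mesh by exactly one frame, the offset is the first value to re-check.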
Tried this out, but the camera animation seems super choppy... is it possible to convert the alembic camera to an FBX to better control FPS and/or keyframing?
I think Meshroom only supports Alembic for now. You should be able to Bake Action on the alembic camera and then manipulate the keyframes.
Lol, you skipped the one thing I needed to know: how do you add the original footage into the shot in Blender (in my case it would be C4D)?
Place your image or video as a "projection" or "screen" in the background of the scene.
Hello! I hope you're doing well. Since the Meshroom and Blender software have gotten many updates, would it be possible to do an updated tutorial too, please?
Have a good day!
Possibly, however it may be a while because life is pretty chaotic right now. In the meantime, a really good current tutorial for scanning is here: ruclips.net/video/j3lhPKF8qjU/видео.html
@@marioCazares Thank you a lot for your answer. I hope you're going to be well. Tell yourself it's temporary! (I'm not good at comforting and my English isn't that good, sorry :/ )
I’ll follow the course thank you again!
How did you import the video into the background in Blender?
Thank you
Blender 2.79: I just opened the side panel with the "n" key and checked the "Background Images" box. Select "Add Image" > Open, and then select your video or image sequence. In Blender 2.8 it's the same steps, except it will be in the Camera Settings and NOT the "n" panel.
Have you tried aligning with the original footage in Blender? Do you observe it's shifted? Do you know how to fix it?
Usually this happens when your footage in Blender is a frame off. Try using frame offset on the background footage and see if going a frame or two forward or backwards works
@@marioCazares Could have been but I knew that one and it was not that. I used an import photogrammetry addon and it aligns much better now. Still no idea why it didn't when I did it manually. Maybe something to do with lens distortion? Could it be that the model is based on straightened out shots and thus doesn't fit the original footage if it was distorted?
How long did the scan take with default settings?
About 9.2 hours total
Why not use Blender's camera tracking though?
Okay, so if it works without any issues you can just as well use the one from Meshroom to save a bit of time, but if it requires fixing first you might as well just load the image sequence (which you probably exported from Blender in the first place) and do a quick camera track.
It'll probably be faster than fixing the alembic file.
The advantage is that you get a camera and full scanned mesh of your scene that are already aligned. If you camera track in Blender you would need to manually match your photoscan geometry to a new camera which would be very difficult in organic scenes like this one.
@@marioCazares Fair point, though as long as you have something to align with in the scene it wouldn't be too hard either.
Just seemed like most of the video was about how to fix the problems that arose with trying to import Meshrooms track to Blender. Considering how good Blender is at tracking it seemed roundabout, but as you say. You get the alignment with the mesh for free that way.
@@marioCazares Hello! Firstly, thank you for the tutorial, and secondly, is there any other way to align the photoscan + a new camera (without using Meshroom tracking)? If not, what's the best way to manually match a camera track with the 3D environment (from photogrammetry)? By the way, for video footage with a "human moving" inside it, is there a way to "mask" the human so that you can still get a camera track + some mesh?
Sorry for my English :s it's not my primary language
@@SuperDao Hello there. The way that I match photoscans and new cameras is by eye actually. I look through the camera and find features in the photoscan that match my image sequence, and I align them while in camera view.
I have a video that talks exactly about this here: ruclips.net/video/4Ga_CrUPwbU/видео.html
Start at 8:36 and it will explain exactly what I do in a real shot I've already done. I hope it will help!
Also for "human" in tracking you would just disable trackers as they get covered and for scanning I actually don't know. Maybe you can roto "human" out before bringing the images to meshroom? I've never tried that before so you might have to experiment with that one
My camera didn't match the footage, but this is a great tutorial btw.
Sometimes the camera doesn't solve correctly or the footage could be offset by one frame forward or back
@@marioCazares The camera angle is incorrect too ; w ;
@@npc.artist Make sure your camera settings in Blender match the settings of the camera you took the video with.
Is there a Mac version? :-)
Unfortunately there is not : ( This is the only information about having it on a mac: github.com/alicevision/meshroom/wiki/MacOS
How long did it take to compute the whole mesh?
It's been a while so I don't remember exactly but I believe the whole process was a couple hours on my laptop
@@marioCazares did you change any node settings?
I went back to the file to check and no I left everything at default for this shot mostly because I didn't understand most of the settings at the time
@@marioCazares Though from the logs of each node you can calculate the total time it took.
@@HeroSnowman Thanks for the tip. Okay, wow, this took a lot longer than I remember. The total was 9.2 hours! Guess I left it overnight.
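For anyone curious how a total like 9.2 hours falls out of the per-node logs, it's just a tally of each node's elapsed time. A sketch with made-up node durations that happen to sum to 9.2 hours (the real per-node split for this shot wasn't posted, and Meshroom's log format may differ):

```python
# Hypothetical elapsed seconds per Meshroom node, as read from each node's log.
node_times = {
    "CameraInit": 12,
    "FeatureExtraction": 1800,
    "FeatureMatching": 5400,
    "StructureFromMotion": 7200,
    "DepthMap": 14400,
    "Meshing": 3600,
    "Texturing": 708,
}

total_seconds = sum(node_times.values())
total_hours = total_seconds / 3600
print(f"Total: {total_hours:.1f} hours")  # Total: 9.2 hours
```

Depth map computation typically dominates on default settings, which is why lowering per-node quality settings (as mentioned earlier in the thread) is the usual way to speed things up.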
Can you talk faster?
My speed is capped unfortunately ;_: