This video and this whole channel is pure gold. Wish I had access to this when I learned SynthEyes many eons ago. :D
I always recommend your channel to juniors when they are starting out.
I always learn something new when watching your videos, Matt. Thank you.
This is so helpful to me, thanks! I look forward to more SynthEyes tuts
Yay! Great tutorial, as always.
quote: "Screw Kalvin Kingdon, we want a low parallax tutorial"
;)
LOL!
Thank you a lot! Please make more tutorials like this, they are amazing.
What do you think about extracting the focal length using the align tool in SynthEyes, then modelling rough geometry for the scene, then dropping the trackers and solving?
That would work well when you have great right-angle references. This shot doesn't really have any, only approximations. But yes, you are thinking exactly like I would.
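To make the geometry behind that answer concrete: with two good right-angle references, the focal length can be read off a pair of vanishing points. Below is a minimal sketch of that math in plain Python, not SynthEyes's actual align tool, assuming square pixels and a principal point at the image center:

```python
import math

def focal_from_orthogonal_vps(vp1, vp2, principal_point):
    """Focal length (in pixels) from the vanishing points of two scene
    directions known to meet at a right angle."""
    cx, cy = principal_point
    x1, y1 = vp1[0] - cx, vp1[1] - cy
    x2, y2 = vp2[0] - cx, vp2[1] - cy
    # The back-projected rays (x, y, f) must be orthogonal:
    # x1*x2 + y1*y2 + f^2 = 0  =>  f = sqrt(-(x1*x2 + y1*y2))
    d = -(x1 * x2 + y1 * y2)
    if d <= 0:
        raise ValueError("vanishing points inconsistent with a right angle")
    return math.sqrt(d)

# Hypothetical vanishing points picked off alley edges on a 1920x1080 plate
print(focal_from_orthogonal_vps((3400, 560), (-950, 520), (960, 540)))
```

This is also why merely approximate right angles hurt: any error in where the two vanishing points sit feeds straight into that square root.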
I like your long videos!
Do we have a chance to play with that footage ourselves?
Thanks for another great lesson(s). Learning so much, not just about how to use the software but also how to think. On that note, if I have a shot planned that needs to be tracked that is mostly stable and has little parallax, is it useful to begin the shot with some helpful moves purely to solve it later? (Obviously not a whip pan!)
Look at my How I Shoot Set Surveys tutorial, and for sure shoot a set survey!
@@MatthewMerkovich I have watched them...(awesome) :) I think I'm waiting for the part where you use the survey to help solve the shot...or is that up already?
@@MattScottVisuals It isn't, unfortunately. I have to do these in between actual jobs. I'm working on the script now though and it shouldn't be too long.
@@MatthewMerkovich Oh for sure, I'm not asking for it to be ready sooner haha - I appreciate any videos that you make. Just checking that I haven't missed it, is all! And looking forward to future instalments, cheers :)
@@MattScottVisuals All good, my friend. I didn't feel like you were pressuring me at all!
Nice guitar.
LOL! I can finally kind of play it now, thanks to you.
Thanks for another super helpful tutorial. I don't understand why Automatic mode gave such a different solve from Refine mode at 11:30 in the video - but maybe I just don't understand the difference between the two modes.
Refine does just that: it refines. To put a finer point on it, it refines not just the camera's path but also the solved positions of the trackers by using the previous solution to help the refine solve. I hope that helps, or makes sense. After an initial solve, I switch to refine to keep working. I sometimes then hit auto to see if my added trackers were enough to make the shot finally solve correctly out of the starting gate. This time, well, you see what happened.
@@MatthewMerkovich Gotcha. But how did the initial solve get to a better place with fewer trackers? Is it better to start with a few trackers, solve, add more trackers, refine, add, and refine, rather than just starting with a lot of trackers for the first solve?
@@jonschroth I typically keep adding trackers until I get an initial (reasonably good) solve and then switch to refine mode. 99% of the shots I end up working on are all supervised tracking shots, and for those, the refine-solves-until-final approach is usually the best way to go.
@@MatthewMerkovich Interesting. I still don't understand why more trackers would make a fresh solve worse, but so good to know what to do when that happens! Thanks
@@jonschroth Oh, it happens all the time! 😆 (I wish I was joking.)
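For readers following this thread: a 3D solve is nonlinear least squares on reprojection error, and that error surface has multiple local minima. Automatic starts from a fresh initialization of its own; Refine starts from the previous solution, so the two can settle on different answers even with identical trackers, and adding trackers reshapes the surface enough that a fresh start can land in the wrong basin. A toy illustration of the effect with generic SciPy, nothing to do with SynthEyes internals:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p):
    # A deliberately non-convex 1-D stand-in for reprojection error.
    x = p[0]
    return np.array([np.sin(3 * x) + 0.1 * x**2])

fresh  = least_squares(residuals, x0=[4.0])  # "Automatic": its own starting guess
seeded = least_squares(residuals, x0=[1.0])  # "Refine": seeded by the last solution

print(fresh.x, fresh.cost)    # stalls in a poor local minimum (cost > 0)
print(seeded.x, seeded.cost)  # finds the good minimum (cost ~ 0)
```

Same solver, same error function; only the starting point differs.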
You’re taking advantage of geometric constraints as you review the solution to see if it “makes sense” (i.e., does the shape of the alley look like an alley). Would it help to use the geometric constraints explicitly in the solving of the shot, to give SynthEyes more information?
This is a great question. No, constraints do NOT give SynthEyes more information (to help the solve). It is a decades-old misconception that adding constraints helps the solve, but it really doesn't. It only aligns your solve. Russ loves to pre-constrain his solves. I post-align my solves. The ONLY constraints I use for plain old 3D camera tracking are the origin constraint (or a similarly locked location on one tracker) and a distance constraint between two trackers to scale the scene appropriately. I never use other constraints for vanilla 3D camera tracking, which this shot very much is.
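A minimal sketch of that post-align idea in plain NumPy (not SynthEyes's constraint system): a solve is only defined up to rotation, translation, and overall scale, so pinning one tracker to the origin and scaling one known distance re-expresses the same solution without touching its shape or its reprojection error:

```python
import numpy as np

def post_align(points, origin_idx, dist_pair, known_distance):
    """points: (N, 3) solved tracker positions.
    origin_idx: tracker to pin at the world origin.
    dist_pair: (i, j) trackers with a measured real-world distance."""
    p = points - points[origin_idx]                     # origin constraint
    i, j = dist_pair
    p *= known_distance / np.linalg.norm(p[i] - p[j])   # distance (scale) constraint
    return p  # the same transform would also be applied to the camera path

# Pin tracker 0 to the origin; make trackers 2 and 5 sit 3.2 units apart.
pts = np.random.rand(8, 3) * 10.0
aligned = post_align(pts, origin_idx=0, dist_pair=(2, 5), known_distance=3.2)
print(np.linalg.norm(aligned[2] - aligned[5]))          # 3.2
```

Every angle and every relative distance is preserved, which is exactly the sense in which these constraints align the solve rather than inform it.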
@@MatthewMerkovich Thanks for the reply - that's great to know. OK then, here's a follow-up.
I've had shots where we have some survey data that I can match up to some of the tracked points. However, while a LIDAR scan will give the locations of millions of points, it can be difficult, due to camera resolution, to identify the 3D positions of all the supervised trackers (or some of the items, like the garbage barrels, will have been moved before the scan, etc.).
Is there a way to combine the results of the method you're doing here (supervised tracking without knowledge of the point locations) with a constrained solve from seed points? The seed points would be a few identifiable points (graffiti, fire escape corners, etc.).
Also, the tutorial is fantastic; it could be 10 times as long and I'd still watch, picking up a million little pointers.
@@robertoneil. I have a whole tutorial I'd love to do on set surveys that would completely answer your question, but I like the way you are thinking. And as for LIDAR, that's a whole other thing. I've done many LIDAR shots, and if you know how to use the LIDAR data, it makes the solving super accurate and very simple.
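One way to marry the two, sketched as a generic illustration rather than anything SynthEyes-specific: solve unconstrained exactly as in the video, then fit the best similarity transform (scale, rotation, translation) that maps the few solved trackers you can positively identify onto their surveyed positions (Umeyama's least-squares method). The data below is fabricated for the example:

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform taking src (N, 3) onto dst (N, 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    cov = d.T @ s / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Three or four identifiable points (graffiti tag, fire-escape corner, ...)
solved   = np.array([[0.1, 2.0, 5.0], [1.3, 0.2, 4.1],
                     [2.2, 1.1, 6.3], [0.5, 0.9, 5.5]])
surveyed = solved * 2.5 + np.array([10.0, 0.0, -3.0])  # stand-in survey data
s, R, t  = umeyama(solved, surveyed)
print(s * solved @ R.T + t - surveyed)                 # residuals ~ zero
```

Applied to the whole solve (trackers and camera path alike), that transform drops the unconstrained solution onto the survey without ever feeding the seed points to the solver.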
Were you animating/keyframing the rotation of the crosshairs?
Yes, absolutely.