Would this workflow with Jetset Cine and SynthEyes be useful outside of virtual production applications? As I understand it, footage shot with the Jetset Cine rig mounted on your camera captures a pretty solid camera track along with tracked LIDAR-scanned geo that can represent the set. I could definitely see a huge time savings in bringing a shot into SynthEyes with the camera and geo already tracked, as a reference for integrating CG and VFX. Or am I misunderstanding what it does? Also, does it only work with solid backgrounds/green screens?
Yes -- we think this will be very useful for 'normal' non-greenscreen projects! The AI rotomattes are very good, and the set scanning + post tracking technique will work on any shot that needs VFX added. In fact, the iPhone uses a natural feature tracking algorithm that will work better in a 'normal' environment, since there are many more trackable corner features than on a greenscreen.
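To illustrate that last point, here's a minimal sketch, not the iPhone's actual tracker, just a Python/OpenCV comparison on two synthetic frames I made up: a flat greenscreen-style frame with a bit of sensor noise, and a textured "normal set" stand-in. The ORB/FAST detector choice, frame contents, and parameters are all illustrative assumptions.

```python
import cv2
import numpy as np

def count_features(bgr_image):
    """Count ORB keypoints, a stand-in for the corner features a
    natural-feature tracker locks onto from frame to frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)  # FAST corner detector + BRIEF descriptors
    return len(orb.detect(gray, None))

rng = np.random.default_rng(0)

# Greenscreen stand-in: one flat color plus a little sensor noise.
green = np.full((720, 1280, 3), (60, 180, 75), np.uint8).astype(np.int16)
green = np.clip(green + rng.integers(-3, 4, green.shape), 0, 255).astype(np.uint8)

# "Normal set" stand-in: high-frequency texture, i.e. lots of real corners.
scene = rng.integers(0, 256, (720, 1280, 3), dtype=np.uint8)
scene = cv2.GaussianBlur(scene, (5, 5), 0)

print("greenscreen features:", count_features(green))   # expect near zero
print("textured set features:", count_features(scene))  # expect hundreds+
```

A uniform green wall has almost no intensity gradients for a corner detector to respond to, which is exactly why tracking markers get taped onto greenscreens; a normal set is full of trackable detail for free.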