SynthEyes Essentials - 04 Proper QC and Export [Boris FX]

  • Published: 29 Sep 2024

Comments • 49

  • @imtiazali6980
    @imtiazali6980 4 months ago +4

    Sir, kindly continue this series and show us the distort/undistort/redistort CGI workflow and how to apply those operations in compositing, please....

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago +2

      This video pretty much covers it. You'd just run the lens workflow, export to your 3D animation app (Maya, Houdini, C4D, Blender, etc.), and then import your 3D assets and do your layout and lighting. Then you'd render at the overscan, un-distorted resolution (sketched below), and finally redistort that in comp, exactly like the CGI stand-in movie here. It's really that simple.
      Unless you had a more specific question?
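
      A rough Blender sketch of that "render at the overscan, un-distorted resolution" step. This is a hedged example, not the video's exact setup: the resolution numbers and output path are hypothetical; use whatever padded size SynthEyes reports for your plate.

      ```python
      import bpy

      scene = bpy.context.scene

      # Overscan (padded, un-distorted) resolution: hypothetical values.
      scene.render.resolution_x = 2150
      scene.render.resolution_y = 1210
      scene.render.resolution_percentage = 100

      # Hypothetical output path; render the full frame range un-distorted.
      scene.render.filepath = "//renders/undistorted_"
      bpy.ops.render.render(animation=True)
      ```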

  • @glmstudiogh
    @glmstudiogh 4 months ago +4

    I want to thank you, Matthew, for all your lessons, even before Boris acquired this amazing program. I tried learning SynthEyes back then but dropped it because it was a bit confusing. Your tutorials have helped, and now I don't want to give up learning it, because I can see it's very powerful for matchmoving.

  • @SamuraiTin
    @SamuraiTin 2 days ago

    Hi, when I import the comp file in Resolve/Fusion I don't have the lens redistort and crop-down nodes; there is nothing after the CamRender. And after the CamUndistort, I get CamScaleUp and CamSizeDown, which you don't have...
    Is there something I missed, or do I need to recreate the redistort node from the distort node and the crop-down from the crop-up?

  • @juanorea9215
    @juanorea9215 4 months ago +2

    Thanks Matt! Really appreciate you going thoroughly through the settings, explaining the 'whys', not just the 'click here & there'. Loving this series!

  • @estebancrop
    @estebancrop 4 months ago +2

    Amazing! Thanks Matthew Merkovich and Boris!

  • @z4team740
    @z4team740 2 months ago

    Amazing tutorial!! Please, where can we download or buy the tutorial footage so we can follow along with this training? Thanks!!!

  • @muhammadbadran4896
    @muhammadbadran4896 4 months ago +2

    As usual, it's a great tutorial!
    Could you make a tutorial about the coordinates window, especially Seed and Lock?

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago +2

      Absolutely! Next, I'm going to be talking about GeoH Tracking within a solved camera space, and there will be a lot to cover when it comes to constraints.

    • @muhammadbadran4896
      @muhammadbadran4896 4 months ago

      @@MatthewMerkovich Thank you so much!

  • @williamreliford_vfx_artist
    @williamreliford_vfx_artist 4 months ago +2

    Wow! A master teacher. Well done!!

  • @abdessamadhasnaoui6887
    @abdessamadhasnaoui6887 4 months ago +1

    Thank you so much.
    Are you going to give any more in-depth tutorials about this?
    Path filtering... troubleshooting trackers...

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago +1

      I'm thinking about going into a bit of GeoH Tracking next. Many seem to find it perplexing, but it really is very powerful. I've also had an idea for a couple of years: a tutorial about path filtering and smoothing while also keeping your track from sliding.

    • @abdessamadhasnaoui6887
      @abdessamadhasnaoui6887 3 months ago +1

      @@MatthewMerkovich thank you 😀

  • @merdoVFX
    @merdoVFX 4 months ago +1

    Thanks a lot for this tutorial, you covered everything I was looking for.
    Can't wait for the LIDAR and geo track tutorial ❤

  • @glmstudiogh
    @glmstudiogh 4 months ago +1

    Hi everyone. So I figured out the export for Houdini through a forum. Houdini is expecting a .hip file, which SynthEyes apparently doesn't write, so the solution is to open the .bat file in a text editor, copy the code, open the textport in Houdini, and then paste the code in (see the sketch below). That's it.
    And for the sliding issue I had between SynthEyes and C4D, the problem was the frame number, not the fps: which frame the project starts on compared to the actual plate. My plate starts on frame 1001 but the project in C4D starts on frame 0, so I just had to match that and it worked.
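
    A hedged sketch of that paste-into-the-textport step, assuming the exported file really does contain textport (hscript) commands as described above. Run it from Houdini's Python shell; the path is hypothetical.

    ```python
    import hou

    EXPORT_FILE = "C:/exports/shot_solve.bat"  # hypothetical path

    # Feed each exported command to the textport interpreter.
    with open(EXPORT_FILE) as f:
        for line in f:
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):
                continue  # skip blanks and comments
            out, err = hou.hscript(cmd)  # returns (stdout, stderr)
            if err:
                print("hscript error:", err)
    ```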

  • @mr_bugzzz
    @mr_bugzzz 3 months ago +1

    I still don't understand the purpose of all these manipulations with distortion and re-distortion... Why should I do this? Or do I not need it at all?

    • @BorisFXco
      @BorisFXco  3 months ago +1

      Footage shot with real lenses has a certain amount of curvature to it, generally more visible toward the edges of the image. This can be almost unnoticeable, or very pronounced if you think about a fisheye lens, for example. CG images are created with perfect, square pixels across the frame.
      If you're trying to composite CG with real footage and you don't compensate for this distortion, then your CG won't be matchmoved properly. Viewers may see the issue and _feel_ something is off, often without being able to say why.
      In this setup, we undistort the plate and have track points for that. It's now flat. When that is combined with the flat CG, we redistort the result so it looks like it was shot in camera. Usually, instead of redistorting the whole thing, we just redistort the added elements, but the reason is the same: so it looks like it was all shot with the same real camera and lens.
      Hope that makes sense.
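
      For reference, a hedged sketch of the kind of radial model being solved here, in the standard 4th-order form discussed later in this thread (the coefficients $k_1, k_2$ are whatever the solver calculates):

      $$x_d = x_u\,(1 + k_1 r^2 + k_2 r^4), \qquad y_d = y_u\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x_u^2 + y_u^2$$

      Un-distortion inverts this mapping to flatten the plate; re-distortion applies it forward to the rendered elements.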

    • @mr_bugzzz
      @mr_bugzzz 3 months ago +1

      @@BorisFXco Thanks a lot for such a detailed answer) Now I understand 😅

  • @glmstudiogh
    @glmstudiogh 4 months ago

    OK, so after doing so many tests I still can't get an accurate scale when I export to Cinema 4D. Any help on that, please? All the other programs work perfectly, but C4D is my main DCC.

  • @glmstudiogh
    @glmstudiogh 4 months ago

    I also tried exporting USD, but the camera is still and not moving.

  • @minimalfun
    @minimalfun 4 months ago +2

    Another gem from the master himself!

  • @BrianMartees
    @BrianMartees 2 months ago

    Thanks for your tutorials, they really helped me understand SynthEyes. It would be great if you showed how to handle "depth".
    With simpler shots everything is fine, but when tracking a metropolis, SynthEyes goes crazy with depth) And when, for example, the edges of one building should be on the same line, in SynthEyes the difference between those points is simply cosmic, so it's almost impossible to build good geometry around such a track)

    • @BorisFXco
      @BorisFXco  2 months ago

      This may be a scenario where setting up more constraints is a good idea. We have a couple of (admittedly rather old) tutorials available here: borisfx.com/videos/?tags=product:SynthEyes&search=constraint
      If you have some specific questions, please join our Discord community. It's good to get more voices there: www.borisfxdiscord.com/

  • @the_shizon3322
    @the_shizon3322 3 months ago

    Awesome video. Can you export the undistorted plate from SynthEyes to use as a camera plate background in your 3D app?

    • @BorisFXco
      @BorisFXco  3 months ago +1

      Yes, absolutely. Here's a link to the workflow for you: borisfx.com/documentation/syntheyes/SynthEyesUM_files/delivering_undistorted

  • @glmstudiogh
    @glmstudiogh 4 months ago

    Great lesson, Matthew. I'm a bit confused on the cropping. If the image has been upscaled a bit, wouldn't that be different from the original when comping? It'll definitely look different from the original plate in comp, or is that an overscan workflow? And also, how do you tell how much cropping to do in SynthEyes per plate?

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago

      The image doesn't get "up-scaled" during the cropping operation, to be clear. We are increasing the *_canvas size_*, so that when the un-distortion gets applied, which *_does_* increase the image resolution, all the original photography has a space to be undistorted into.
      SynthEyes does all the calculations for how big to make the new canvas using the "cropping" tool (which, again, is not *_cropping!_*), based on the lens distortion calculations.
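
      A toy illustration of that canvas calculation (the margin figure and the rounding are made up for the example; SynthEyes computes the real padding from the solved distortion):

      ```python
      # Original plate size (hypothetical).
      plate_w, plate_h = 1920, 1080

      # Suppose the solved distortion pushes the corners outward by ~6%
      # (a made-up figure). The canvas must grow by that margin on each
      # side so no photography is lost when the plate is undistorted.
      margin = 0.06

      canvas_w = 2 * round(plate_w * (1 + 2 * margin) / 2)  # keep it even
      canvas_h = 2 * round(plate_h * (1 + 2 * margin) / 2)
      print(canvas_w, canvas_h)  # 2150 1210
      ```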

  • @sonnhost
    @sonnhost 4 months ago

    How do I export the un-distortion attribute to use in compositing in After Effects?

    • @BorisFXco
      @BorisFXco  4 months ago

      Here you go. We have a full video about that: borisfx.com/videos/whats-new-in-syntheyes-2024-1-part-2/

  • @manolomaru
    @manolomaru 4 months ago

    ✨👌😎🙂😎👍✨

  • @교석-i2r
    @교석-i2r 2 months ago

    I have a few questions, please help me.
    1. How do I export to the Fusion that's included in DaVinci Resolve?
    2. I perform the lens workflow and then export to Blender. The exported file has the background and distortion, as shown at 9:15. After placing the model in this state, should I render it un-distorted in Blender? And do I then have to re-distort it again?

    • @BorisFXco
      @BorisFXco  2 months ago +1

      1. You can use the Fusion exporter from SynthEyes to create a .comp, then go to File > Import Fusion Composition inside of Resolve. It will create the scene for you with every node you need in the flow.
      2. Re-distortion is usually done at the compositing stage. If you're rendering a full comp from Blender, you can do it then. If it's going somewhere else, you'll probably want to render out un-distorted and use the calculated lens to bring all the CG elements together in comp.

    • @교석-i2r
      @교석-i2r 2 months ago

      @@BorisFXco Thank you for your kind reply.
      Let me ask you one more thing. If you shoot a video that involves turning your head, there will be a lot of motion blur. Is there a way to track this correctly? When tracking the traditional way, there are no trackers at all in areas with severe motion blur.

    • @BorisFXco
      @BorisFXco  2 months ago +1

      Severe motion blur is always going to be tricky. If there's no detail at all then you're going to have to manually add points in. In terms of your pixel accuracy, it's going to be quite forgiving in those areas of high motion.

  • @kostas-fh1oi
    @kostas-fh1oi 3 months ago

    Thank you, just thank you!!!

    • @BorisFXco
      @BorisFXco  3 months ago

      You are so welcome!

  • @joelchamp1949
    @joelchamp1949 12 days ago

    Thanks

    • @BorisFXco
      @BorisFXco  12 days ago

      You're very welcome

  • @zurasaur
    @zurasaur 4 months ago

    Amazing tutorial - I've used lens grids for years - could you explain why we get better tracking results from the lens estimation than from properly undistorting with a grid?
    I've tested it, and indeed you get poor results with a properly pre-undistorted plate compared with letting SynthEyes do the estimation.
    How is this possible?

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago

      SynthEyes isn't doing an "estimation," unless you want to define the word itself as any representation of the lens distortion by ***any*** means. It may help to think that what SynthEyes is doing is calculating the lens distortion based on the behavior of the 2D trackers as they move through the frame, and how they deviate from the basically linear path they'd be on if the shot had been captured with a pinhole camera. So, as your 2D tracker moves to the corners and edges of your frame, the deviation can be graphed over time. In this tutorial series I am using the Std. Radial 4th Order lens model. You can read more about the math here: en.wikipedia.org/wiki/Distortion_(optics)
      Now take into consideration whatever lens grid you have. How was it shot? How big is it? That will determine the photographic focus distance of the lens grid, and that focus distance will introduce some amount of lens breathing. Now consider the VFX shot in which you are going to use the distortion derived from that lens grid. Was it the same focus distance? Probably not, and if there is a rack focus, definitely not. And now your lens grid deviates from the distortion that is actually in your shot by some amount. And then you have manufacturing errors, which might be small, but absolutely exist for the lens, your lens grid, the camera body, and the list goes on.
      As with all my answers, though, every shot is different. Having that lens grid to fall back on can be a huge help. (I'm using one right now.) But usually, nearly always (when there are plenty of 2D trackers covering the frame, with good parallax, and a good solve), the calculated lens distortion is more accurate than brute forcing a lens grid into the solve calculations.
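
      A minimal numpy sketch of that Std. Radial 4th Order model, in the form given on the Wikipedia page above (the k1/k2 values here are made up for illustration):

      ```python
      import numpy as np

      def distort(xy: np.ndarray, k1: float, k2: float) -> np.ndarray:
          """Forward (re)distort Nx2 points in normalized coordinates,
          with (0, 0) at the image center."""
          r2 = np.sum(xy ** 2, axis=1, keepdims=True)  # r^2 per point
          return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)   # 1 + k1*r^2 + k2*r^4

      # The center is unaffected; points near the edges and corners move
      # the most, which is why trackers out there pin the solve down.
      pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.9, 0.6]])
      print(distort(pts, k1=-0.12, k2=0.03))
      ```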

    • @ryanansen
      @ryanansen 4 months ago +1

      I may be completely wrong, but just from speculation, I'd imagine the reasoning is that a lens grid captures the distortion of a lens at a very specific point in time (i.e., the time at which you shot the grid).
      The reason this might be important is that there could be very subtle micro-changes in the properties of the lens between the time the lens grid was shot and the time the principal photography was shot. Maybe the lens was locked onto the camera slightly differently, or settled into the locking bracket in the subtlest of ways, yielding a microscopic difference in how the glass elements are positioned in relation to the sensor.
      Another reason I could imagine it's more accurate is the same reason he mentions an ST map might not be the way to go: properties of the lens might change throughout the duration of a clip, such as lens breathing, etc.
      Or, again, maybe I'm 100% wrong and Matthew will chime in with a correct answer hahaha. I'm just speculating is all!

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago +1

      I can only agree with everything you just said, @ryanansen. One point I’ll expand on is that to capture the changing distortion, you have to solve for all those variables, and the more you’re solving for, the more trackers you’ll need.
      But yes to everything you just wrote.

    • @zurasaur
      @zurasaur 4 months ago

      Thanks guys, I wasn't aware the distortion is usually changing or breathing throughout the shot. I am always working with a locked focal length and focus, so I thought it would be consistent.
      It's just a bit annoying to be using estimations instead of having a concrete method of undistorting any lens using real data.

    • @MatthewMerkovich
      @MatthewMerkovich 4 months ago

      @@zurasaur Lens breathing only happens when you change focus, so the breathing wouldn't necessarily be changing throughout the shot. But you see it all the time when you're watching movies or TV shows, especially ones shot with anamorphic lenses, which: 1) have the most lens distortion, and 2) exhibit a lot of lens breathing when pulling focus. Just look for when they rack focus a shot, and yowza! It looks like it's a zoom lens! 😂