Handheld Whip Pans - Part 2

  • Published: 17 Nov 2024

Comments • 35

  • @MortenKvale
    @MortenKvale 4 years ago +3

    This video and this whole channel are pure gold. Wish I had access to this when I learned SynthEyes many eons ago. :D
    I always recommend your channel to juniors when they are starting out.

  • @ragebrick
    @ragebrick 4 years ago +1

    I always learn something new when watching your videos, Matt. Thank you.

  • @ChristopherRiewaldt
    @ChristopherRiewaldt 3 years ago

    This is so helpful to me, thanks! I look forward to more SynthEyes tuts

  • @DanasaVFX
    @DanasaVFX 4 years ago +1

    yay! great tutorial as always

  • @millolab
    @millolab 4 years ago +1

    quote: "Screw Kalvin Kingdon, we want a low parallax tutorial"
    ;)

  • @jamaljamalwaziat1002
    @jamaljamalwaziat1002 4 years ago +1

    Thank you a lot. Please make more tutorials like these, they are amazing

  • @jamaljamalwaziat1002
    @jamaljamalwaziat1002 4 years ago +2

    What do you think about extracting the focal length by using the align in SynthEyes, then modelling rough geometry for the scene, then dropping the trackers and solving?

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago

      That would work well when you have great right angle references. This shot doesn't really, only approximates. But yes, you are thinking exactly like I would.

  • @YanyanLun
    @YanyanLun 11 months ago

    I like your long videos!!!

  • @PostolPost
    @PostolPost 1 year ago

    Do we have a chance to play with that footage ourselves?

  • @MattScottVisuals
    @MattScottVisuals 3 years ago +1

    Thanks for another great lesson(s). Learning so much, not just about how to use the software but also how to think. Meanwhile, on that note, if I have a shot planned that needs to be tracked that is mostly stable and has little parallax, is it useful to begin the shot with some helpful moves purely to solve it later? (obviously not a whip pan!)

    • @MatthewMerkovich
      @MatthewMerkovich  3 years ago +1

      Look at my How I Shoot Set Surveys tutorial, and for sure shoot a set survey!

    • @MattScottVisuals
      @MattScottVisuals 3 years ago

      @@MatthewMerkovich I have watched them...(awesome) :) I think I'm waiting for the part where you use the survey to help solve the shot...or is that up already?

    • @MatthewMerkovich
      @MatthewMerkovich  3 years ago +1

      @@MattScottVisuals It isn't, unfortunately. I have to do these in between actual jobs. I'm working on the script now though and it shouldn't be too long.

    • @MattScottVisuals
      @MattScottVisuals 3 years ago +2

      @@MatthewMerkovich Oh for sure, I'm not asking for it to be ready sooner haha - I appreciate any videos that you make. Just checking that I haven't missed it, is all! And looking forward to future instalments, cheers :)

    • @MatthewMerkovich
      @MatthewMerkovich  3 years ago +1

      @@MattScottVisuals All good, my friend. I didn't feel like you were pressuring me at all!

  • @mrsticker2
    @mrsticker2 4 years ago +1

    nice guitar.

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago

      LOL! I can finally kind of play it now, thanks to you.

  • @jonschroth
    @jonschroth 4 years ago +1

    Thanks for another super helpful tutorial. I don't understand why Automatic mode gave such a different solve from Refine mode at 11:30 in the video - but maybe I just don't understand the difference between the two modes.

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago +1

      Refine does just that: it refines. To put a finer point on it, it refines not just the camera's path but also the solved positions of the trackers by using the previous solution to help the refine solve. I hope that helps, or makes sense. After an initial solve, I switch to refine to keep working. I sometimes then hit auto to see if my added trackers were enough to make the shot finally solve correctly out of the starting gate. This time, well, you see what happened.

    • @jonschroth
      @jonschroth 4 years ago

      @@MatthewMerkovich Gotcha. But how did the initial solve get to a better place with fewer trackers? Is it better to start with a few trackers, solve, add more trackers, refine, add, and refine, rather than just starting with a lot of trackers for the first solve?

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago +1

      @@jonschroth I typically keep adding trackers until I get an initial (reasonably good) solve and then switch to refine mode. 99% of the shots I end up working on are all supervised tracking shots, and for those, the refine-solves-until-final approach is usually the best way to go.

    • @jonschroth
      @jonschroth 4 years ago

      @@MatthewMerkovich Interesting. I still don't understand why more trackers would make a fresh solve worse, but so good to know what to do when that happens! Thanks

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago

      @@jonschroth Oh, it happens all the time! 😆 (I wish I was joking.)

  • @robertoneil.
    @robertoneil. 4 years ago

    You’re taking advantage of geometric constraints as you review the solution to see if it “makes sense” (i.e., does the shape of the alley look like an alley?). Would it help to use the geometric constraints explicitly in solving the shot, to give SynthEyes more information?

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago

      This is a great question. No, constraints do NOT give SynthEyes more information (to help the solve). It is a decades-old common misconception that adding constraints helps the solve, but it really doesn't. It only aligns your solve. Russ loves to pre-constrain his solves. I post-align my solves. The ONLY constraints I use for plain old camera 3D tracking are the origin constraint (or a similarly locked location on one tracker) and a distance constraint between two trackers to scale the scene appropriately. I never use other constraints for vanilla 3D camera tracking, which this shot very much is.
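
      To illustrate what "post-aligning" a solve means here, as opposed to constraining the solve itself: after solving, one tracker is pinned to the origin and the whole scene is uniformly scaled so a known tracker-to-tracker distance matches a real-world measurement. Below is a minimal sketch in Python/NumPy; the tracker names and coordinates are hypothetical, and this is generic 3D math, not SynthEyes code.

```python
import numpy as np

# Hypothetical solved tracker positions from a 3D solve (arbitrary solver units).
solved = {
    "trackerA": np.array([2.0, 1.0, 5.0]),
    "trackerB": np.array([4.0, 1.0, 5.0]),
    "trackerC": np.array([3.0, 2.0, 6.0]),
}

def align_solve(points, origin_key, distance_pair, known_distance):
    """Post-align a solve: translate so one tracker sits at the origin,
    then uniformly scale the point cloud so the distance between two
    trackers matches a measured real-world distance. The camera path
    would receive the same transform; relative geometry is unchanged."""
    a, b = distance_pair
    solved_dist = np.linalg.norm(points[a] - points[b])
    scale = known_distance / solved_dist          # uniform scene scale
    origin = points[origin_key]                   # new world origin
    return {k: (p - origin) * scale for k, p in points.items()}

# Pin trackerA to the origin; scale so A-B is 1.0 unit (e.g., 1 metre).
aligned = align_solve(solved, "trackerA", ("trackerA", "trackerB"), 1.0)
```

      Because this is just a translation plus uniform scale applied after the fact, the residuals and relative geometry of the solve are untouched, which is the point being made: alignment places the solve in your coordinate system but feeds no information back into the solver.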

    • @robertoneil.
      @robertoneil. 4 years ago

      @@MatthewMerkovich Thanks for the reply - that's great to know. OK then here's a follow-up.
      I've had shots where we have some survey data which I can match up to some of the tracked points - however, while a LIDAR scan will give the locations of millions of points, it can be difficult due to camera resolution to identify the 3D position of all the supervised trackers (or some of the items like the garbage barrels will have been moved before the scan etc).
      Is there a way to combine the results of the method you're doing here (supervised tracking without knowledge of the point locations) with a constrained solve from seed points? The seed points would be a few identifiable points (graffiti, fire escape corners etc.).
      Also, the tutorial is fantastic; it could be 10 times as long and I'd still watch, picking up a million little pointers.

    • @MatthewMerkovich
      @MatthewMerkovich  4 years ago

      @@robertoneil. I have a whole tutorial I'd love to do on set surveys that would completely answer your question, but I like the way you are thinking. And as for LIDAR, that's a whole other thing. I've done many LIDAR shots, and if you know how to use the LIDAR data, it makes the solving super accurate and very simple.

  • @denniskutzen8809
    @denniskutzen8809 4 years ago

    were you animating/keyframing the rotation of the crosshairs?