Videogrammetry Demo Real by FAKE

  • Published: 3 Oct 2024

Comments • 75

  • @luciox2919
    @luciox2919 10 months ago +5

    Thank you, Blender Bob, for sharing the professionalism of Real by FAKE with us

  • @JorisPlacette-e5c
    @JorisPlacette-e5c 28 days ago +1

    Awesome video!
    @blenderBob you may have figured it out already, but the 64k max vertices cap can be disabled, allowing improved mesh resolution, which is critical when capturing 3+ people at the same time. This cap is here by default because of an encoding/decoding optimization in unity and unreal for real-time playback but irrelevant in your use case.
    I hope you are having fun with your capture studio!
    (PS, I may be one of the guys who came to your office to install your volumetric capture studio ;) )

    • @BlenderBob
      @BlenderBob  28 days ago +1

      Really? Cool! Have you ever set up a system in Montreal?

  • @Ruan3D
    @Ruan3D 9 months ago +1

    That's pretty AMAZING Robert!! Thanks for sharing.

  • @scottesplin4426
    @scottesplin4426 10 months ago +2

    Amazing Mr. Bob! Busy pushing the boundaries as always,... while your cat lives the high life. 😹

  • @zachhoy
    @zachhoy 10 months ago +6

    Bob, this is QUALITY! I can't wait to start getting into video production in the near future. I'm sure the 60k poly upper limit will eventually increase to 1M

  • @vinnypassmore5657
    @vinnypassmore5657 10 months ago +2

    Looks fantastic, nice job. Thanks for sharing.

  • @PhotiniByDesign
    @PhotiniByDesign 10 months ago +13

    Just speculation, but I am guessing you combat the motion blur either by using a really high shutter speed, or by having the lights strobe at a really high rate synced to the camera shutter. This is awesome, Robert; it's great to see your videogrammetry pipeline.

    • @BlenderBob
      @BlenderBob  10 months ago +6

      High shutter speed. :-)

    • @jamess.7811
      @jamess.7811 9 months ago +1

      why would a strobe be necessary? why wouldn't you just have the lights on constantly?

    • @PhotiniByDesign
      @PhotiniByDesign 9 months ago

      It all depends on the camera, the lights and the final output. For example, continuous lights aren't always suitable due to limitations in output and flickering, especially if they are not specifically designed for cinematography. I used synchronized strobes to shoot bats flying overhead a few years back, using this method to take several images of the bat in one photo. With a long exposure of 1.3 seconds, the strobes were programmed to flash 5 times within that window, so I captured the same bat in mid-flight 5 times in one shot with no motion blur. Some sonar devices use the same principle to freeze frames. @@jamess.7811

    • @AliasA1
      @AliasA1 9 months ago

      @@jamess.7811 the idea is to have the camera shutter open for longer, and let the strobing light be the thing that limits motion blur. It's not "necessary", it's just another way to do it that you might pick depending on what equipment you have on hand. Studio photography is often done this way, controlling the effective shutter duration with the flash duration instead of the camera setting.

  • @MediaWayUKLtd
    @MediaWayUKLtd 10 months ago +3

    Really impressive Blender Bob! I hope this is really successful for you!

  • @PrinceWesterburg
    @PrinceWesterburg 10 months ago +3

    Wow - Remember seeing CSO (Colour Separation Overlay) done on the BBC in the early 70's as a child, now 50 years later that era is home movie tech and you've moved onto the next generation. With AI this will become easier and easier - look at the one image to 3D model tech that exists now, this is going to grow and grow. Amazing to see!

    • @BlenderBob
      @BlenderBob  10 months ago +2

      Yep. As director of innovation and technology it’s my job to check out all the new stuff

  • @Nicollaos
    @Nicollaos 10 months ago +2

    Amazing technology!

  • @MellowMelodiesHub612
    @MellowMelodiesHub612 10 months ago +2

    Looking forward to hear more from you Bob.

  • @SquirrelTheorist
    @SquirrelTheorist 9 months ago +1

    This is absolutely brilliant! I wonder if this will eventually include reflective surfaces, as with instant-ngp NeRFs using radiance instead of meshes. Still, it is insane that something like this exists, and you guys handle it really well. Thank you for sharing these developments. Although I probably couldn't afford it, I would love to test the limits of this system, like tossing objects and watching them appear and disappear from the 3D output. Could make for some nice 3D magic tricks!

  • @willowproduction
    @willowproduction 10 months ago +1

    Man, what the actual frack. BRAVO

  • @EdLrandom
    @EdLrandom 10 months ago +2

    This is sick. If you need close-ups, you might be able to give these characters actual CG hair particle systems, if only you could find a way to mount a tiny camera close to the actor's face, paint or key it out, and project that sequence back onto the character's face.

    • @BlenderBob
      @BlenderBob  10 months ago +2

      That would actually be possible, but the geometry wouldn't be hi-res enough anyway.

  • @unrealengine1enhanced
    @unrealengine1enhanced 10 months ago +1

    Imagine the ability to doctor other people's videos with this technology, rofl.
    This tech gives a whole new meaning to the term "trick photography".

    • @BlenderBob
      @BlenderBob  10 months ago

      Isn’t that the definition of VFX?

  • @superkaboose1066
    @superkaboose1066 10 months ago +1

    Very cool! Crowd demo looked insane

  • @llbsidezll
    @llbsidezll 10 months ago +4

    I'd be interested in seeing how this could be implemented in VR. Current 3D video breaks immersion as soon as you try to move and look around.

    • @BlenderBob
      @BlenderBob  10 months ago +2

      Most of the videogrammetry systems have been developed for VR so you can find lots of information on the web

  • @kidfl4sh295
    @kidfl4sh295 10 months ago +2

    I see a lot of possibilities for game stuff and for some VFX sequences, simulation applied to the body and whatnot. For background characters, how usable is this on a set? Wouldn't it be less trouble to have extras on set?

  • @GaryParris
    @GaryParris 10 months ago +1

    well done, hope it's a success for you

  • @AyushBakshi
    @AyushBakshi 10 months ago +1

    Interesting!

  • @unrealengine1enhanced
    @unrealengine1enhanced 10 months ago +1

    amazing work guys.

  • @themightyflog
    @themightyflog 10 months ago +1

    I want more information! Wow!

  • @Vassay
    @Vassay 10 months ago +2

    Looks pretty nice! How many cameras are you using, and how big is the resulting bandwidth per 1 second of a character's performance?

    • @BlenderBob
      @BlenderBob  10 months ago +1

      32 cams. The files are huge. 8GB for the guy juggling

    • @Vassay
      @Vassay 10 months ago +2

      @@BlenderBob the big size is to be expected =) Quite good quality for only 32 cams, great job!

  • @starwars9191
    @starwars9191 10 months ago +2

    If you extend the scenes do you have to reshoot the videogrammetry or are they looped in some magical way

    • @BlenderBob
      @BlenderBob  10 months ago +2

      We can morph two animations together, up to a certain limit. You'll need to be more precise about what you mean by "extend".

  • @amazinggraphicsstudios
    @amazinggraphicsstudios 10 months ago +2

    You are always super, thank you. But please, what software do you use for the videogrammetry?

    • @FireAngelOfLondon
      @FireAngelOfLondon 10 months ago +2

      It's their own custom software, that's the whole point of this video, they are promoting their services for 3D capture. It isn't for sale and probably won't be.

    • @amazinggraphicsstudios
      @amazinggraphicsstudios 10 months ago

      @@FireAngelOfLondon ok, thank you

  • @electronicmusicartcollective
    @electronicmusicartcollective 9 months ago +1

    WOW

  • @davebulow2
    @davebulow2 10 months ago +2

    Very impressive, Bob! I have to ask, how on earth did you do the motion blur? Surely the mesh is a different mesh from frame to frame and the vertices don't have a reference point from previous frame?

    • @BlenderBob
      @BlenderBob  10 months ago +2

      Secret recipe ;-)

    • @Vassay
      @Vassay 10 months ago +2

      I would do it AFTER rendering the 3D person: calculate motion vectors from the rendered 2D image and use those to drive the motion blur. Easy, and should be more than enough for mid- to far-distance characters.
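
Vassay's approach above (compute motion vectors from the rendered frames, then blur in 2D) can be sketched as a simple velocity-buffer blur. This is a minimal illustration under assumed conventions, not Real by FAKE's actual pipeline; the `motion_blur` function, its backward-sampling scheme, and the array layout are all assumptions:

```python
import numpy as np

def motion_blur(image, flow, samples=8):
    """Blur a grayscale frame by averaging `samples` taps taken
    backwards along each pixel's (dx, dy) motion vector, the way a
    velocity-buffer post-process blur works in 2D."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    acc = np.zeros_like(image, dtype=np.float32)
    for i in range(samples):
        t = i / max(samples - 1, 1)                  # 0..1 across the vector
        sx = np.clip(xs - flow[..., 0] * t, 0, w - 1).astype(int)
        sy = np.clip(ys - flow[..., 1] * t, 0, h - 1).astype(int)
        acc += image[sy, sx]                          # gather tap i
    return acc / samples
```

In a real setup the `flow` field would come from the renderer's vector pass (or optical flow between frames) rather than be hand-built.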

    • @spitfirekryloff744
      @spitfirekryloff744 10 months ago +1

      First thing that comes to mind would be to turn all the individual captures into a single animated mesh with 100+ shape keys (1 shape key per capture) and thus get the motion blur when rendering inside Blender. But that seems like a very tedious method, unless there was a way to automate the process

    • @Vassay
      @Vassay 10 months ago +1

      @@spitfirekryloff744 that would work, if the topology was consistent between frames - and it's not, it literally cannot be, because each frame is a totally different mesh =)

    • @BlenderBob
      @BlenderBob  10 months ago +3

      I'll give you a hint. Water simulation. The geometry changes at every frame yet it's still possible to get motion blur. The vectors are not computed in Blender. It's done in the proprietary software.
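
The water-simulation hint points at a standard trick: store a per-vertex velocity attribute with each frame's mesh and let the renderer extrapolate positions across the shutter interval, even though the topology differs from frame to frame. A minimal sketch of that extrapolation, with illustrative names and defaults (not the proprietary software's actual code):

```python
import numpy as np

def shutter_positions(points, velocities, shutter=0.5, fps=24):
    """Reconstruct vertex positions at shutter open and close from a
    per-vertex velocity attribute (units/second), the way renderers
    blur caches whose topology changes every frame."""
    dt = shutter / fps                 # shutter interval in seconds
    half = velocities * (dt / 2.0)     # half-interval displacement
    return points - half, points + half
```

A vertex moving 24 units/sec with a 180-degree shutter (0.5) at 24 fps is displaced 0.25 units either side of the frame sample.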

  • @keysignphenomenon
    @keysignphenomenon 10 months ago +1

    Merci Bob👏

  • @tgavel4691
    @tgavel4691 10 months ago +1

    Wow - very cool!

  • @Voicetaco
    @Voicetaco 10 months ago +1

    Why are you using green screen?
    In my experience from photogrammetry you wouldn't necessarily need a green screen to key out a person from a background as that is already being done when capturing the person using multiple cameras.
    What is your reason for using green screen when I've already seen others do videogrammetry effectively without it and getting the same results?

    • @BlenderBob
      @BlenderBob  10 months ago +1

      It’s the most efficient way to extract the character from the BG. Check the BCON 2023 clips on the Blender channel on YT; I cover it in more detail there. But I know that the goal is to eliminate it.

  • @johntnguyen1976
    @johntnguyen1976 10 months ago +1

    So next level!

  • @uttula
    @uttula 10 months ago

    I guess the next step for even higher fidelity and further options would be to implement Gaussian splatting principles, just like the recent evolution from simple photogrammetry => NeRFs => Gaussian splats :)

    • @BlenderBob
      @BlenderBob  9 months ago

      You can’t shade splats.

    • @uttula
      @uttula 9 months ago

      The Blender plugins I’ve seen are admittedly still quite limited, but based on what I’ve already seen done in other engines, I’m feeling positive that we should eventually get to a point where they become highly useful for all sorts of things. We might not be there yet, but Rome wasn’t built in a day; it could well be worth at least keeping an eye open. The road from research papers and proofs of concept to this day has been staggeringly fast, and people are still continuing to make things better all the time. Of course, I could simply be hopelessly optimistic :D

  • @ZeroBudgetDevelopments
    @ZeroBudgetDevelopments 2 months ago

    Hi Blender Bob, how do I get in touch with you? I would like to speak with you please :)

    • @BlenderBob
      @BlenderBob  2 months ago

      Tiki.movie.bb at gmail dot com

  • @keithtam8859
    @keithtam8859 10 months ago +1

    clever

  • @thenout
    @thenout 10 months ago +1

    Bam! Does the Head of Innovation need an intern by any chance?

    • @BlenderBob
      @BlenderBob  10 months ago

      Do you live in Quebec?

    • @thenout
      @thenout 10 months ago

      Narp, Berlin. But hey, ready when you are. I'd even make coffee (in Blender, that is). @@BlenderBob

  • @vassilidario8029
    @vassilidario8029 10 months ago +1

    Hey that's pretty neat

  • @xalener
    @xalener 10 months ago +1

    How the hell did you get motion blur working here?

  • @S9universe
    @S9universe 10 months ago +1

    I'm curious about the tool :)

    • @BlenderBob
      @BlenderBob  10 months ago

      What do you want to know?

    • @S9universe
      @S9universe 10 months ago

      Pricing, conditions, and in which format does the app come, please?

    • @BlenderBob
      @BlenderBob  10 months ago +1

      The price depends on the project: how many characters, how long the sequences are. We generate Alembic files, FBX if you need a skeleton. If you have a project that could use that tech, please contact us at Real by FAKE. :-)

    • @S9universe
      @S9universe 10 months ago

      thank you

  • @bomosley9226
    @bomosley9226 10 months ago +1

    Whoa

  • @rekad8181
    @rekad8181 10 months ago

    The future is definitely Gaussian splats, and even prompt generation. If I were you, I would spend a week doing thousands of shots and feeding this data into AI, to be able to then generate the action you want on any skeleton based on a prompt. ChatGPT could probably guide you through this process 🎉

    • @BlenderBob
      @BlenderBob  10 months ago +1

      Try to rig, key, and shade a Gaussian splat and then we'll talk. ;-)