Fusion4D: Real-time Performance Capture of Challenging Scenes

  • Published: 4 Nov 2024

Comments • 26

  • @KraitoKrombongus 8 years ago +10

    the potential of what you can create from this is sky high.

  • @RuofeiDu 8 years ago +4

    This is awesome work! Congratulations to the MSR I3D / Holoportation team!

  • @ScientificGentlemen 8 years ago +2

    The future of cinema, theatre, TV, video games... you name it.

  • @dr-maybe 8 years ago +2

    You guys are doing astonishing work!

  • @sl4gg 8 years ago +17

    this is gonna revolutionize motion-capture

    • @dbcomix 8 years ago +4

      Especially for independent filmmakers / game designers.

    • @do__ob 8 years ago

      This must require one hell of a setup to work in real time with minimum lag.

    • @oBCHANo 8 years ago +1

      This captures depth and uses it to morph a volume that is then converted into polygons. As far as it's concerned, each frame is just a static mesh; there is no information that can be applied to a rig.
      This is good for capturing big events for VR, so that you can let people move around and maintain true perspective. Imagine hundreds of these set up at a football stadium and the result rendered with light fields for VR; that is what this kind of tech is for. It doesn't really benefit motion capture at all.
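      The pipeline that comment describes (fuse depth into a volume, then extract a static mesh each frame) can be sketched in miniature. This is an illustrative toy, not Fusion4D's actual implementation: the function names, the truncation distance, and the 1-D single-ray setup are all invented for clarity. Real systems do this over a 3-D voxel grid and extract triangles with marching cubes; here the 1-D analogue is finding zero crossings of the truncated signed distance field (TSDF).

      ```python
      # Illustrative sketch (NOT the Fusion4D implementation): fuse a depth
      # observation into a TSDF along one camera ray, then locate the surface
      # as the zero crossing, the 1-D analogue of marching cubes.

      TRUNC = 0.1  # truncation distance in metres (assumed value)

      def tsdf_update(tsdf, weights, voxel_depths, observed_depth):
          """Fuse one depth measurement into per-voxel running averages."""
          for i, vd in enumerate(voxel_depths):
              sdf = observed_depth - vd       # signed distance to the surface
              if sdf < -TRUNC:
                  continue                    # voxel is hidden behind the surface
              d = min(1.0, sdf / TRUNC)       # truncate and normalise to [-1, 1]
              w = 1.0                         # constant per-observation weight
              tsdf[i] = (tsdf[i] * weights[i] + d * w) / (weights[i] + w)
              weights[i] += w
          return tsdf, weights

      def surface_crossings(tsdf, voxel_depths):
          """Zero crossings of the TSDF mark the surface."""
          out = []
          for i in range(len(tsdf) - 1):
              if tsdf[i] > 0.0 >= tsdf[i + 1]:
                  # linearly interpolate the crossing point between voxels
                  t = tsdf[i] / (tsdf[i] - tsdf[i + 1])
                  out.append(voxel_depths[i] + t * (voxel_depths[i + 1] - voxel_depths[i]))
          return out

      # One ray, voxel centres every 5 cm from 0.5 m to 1.5 m, surface at 1.0 m.
      depths = [0.5 + 0.05 * i for i in range(21)]
      tsdf = [0.0] * len(depths)
      weights = [0.0] * len(depths)
      tsdf, weights = tsdf_update(tsdf, weights, depths, 1.0)
      print(round(surface_crossings(tsdf, depths)[0], 3))  # → 1.0
      ```

      Because each fused frame yields only geometry (no joints, no correspondence over time), the output is exactly what the comment says: a fresh static mesh per frame, with nothing to retarget onto a rig.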

  • @3Daver 8 years ago

    Absolutely brilliant tech! Not so much for film or games, because the money is not in the capture but in the retargeting. This, however, could get you a VR seat at a show on Broadway or the Super Bowl. Amazing.

  • @SuperTubeAndi 8 years ago

    Awesome - hope this will be made publicly available to use/license/implement. Do you include approaches to deal with reflective/refractive/transparent surfaces or absorption? How about reconstructing from RGB (without D) cameras?

  • @FuzzyWobble 8 years ago +3

    Very cool! Please just add this code to Kinect so we don't have to buy four more cameras :)

  • @TheGamingMackV 8 years ago

    If this revolutionizes motion capture, does this mean less work to put into making a 3D model? Such as designing the head, facial movements, mounting the 3D modeled head onto a motion-captured body, etc.?

  • @Spieluhr009 4 years ago

    Excellent performance!

  • @creeperlamoureux 7 years ago +5

    4:25 oh uh goodbye then

  • @zhangxaochen 8 years ago

    How was the "Qualitative Comparison" carried out? The source code of "DynamicFusion" still doesn't seem to be publicly available; did you privately acquire the source code or implement it on your own? Thanks for the reply~ :-)

  • @os3ujziC 8 years ago

    Amazing stuff.

  • @aldomotion 8 years ago

    When can we test it? I've got some Kinects and an Oculus waiting to test it!

  • @aporma2297 8 years ago +2

    I want this + live sports + VR headset, please... first quarter 2017?

  • @Krasniysharigg 4 years ago

    And this still isn't a thing? Just a prototype and that's all?

  • @Fludboy 8 years ago

    How about more cams? All around.

  • @Bot_Shredingera 6 years ago

    Why don't they make a game with this?

  • @VijayNirmal 8 years ago

    great

  • @computrik 8 years ago +1

    Wow

  • @ylu796 7 years ago

    How do you get the ground truth?

    • @SuperOranBerry 6 years ago

      I think the ground truth here is basically a green screen, so it will always be more accurate than a 3D estimation.