ClothCap (SIGGRAPH 2017)

  • Published: 4 Nov 2024

Comments • 29

  • @minimatamou8369
    @minimatamou8369 7 years ago +16

    I like this video because everyone can see this is not perfect, but also that the developers are making progress.
    At 1:08, the guy's genitals get sucked up inside the body.
    At 2:32, there's a huge loss of detail in the wrinkles.
    I usually see these kinds of videos in a "WOW THIS IS AMAZING THE FUTURE IS NOW" kind of mood, but not this one.
    This one just shows the work of the developers, who had to face a million difficulties that I can't even imagine to finally get what's in this video right now. And this is just the first step.
    This one just shows me that every amazing-looking technology needs some genius people working their asses off for a long time to actually make it happen.
    Maybe one day this will lead to trying clothes on your body type inside your browser when shopping online, or to linking concept art directly with character modeling in video game development. Whatever amazing change this leads to, I'll know just a tiny little bit of the origin story of this technology, the amount of work it represents, the process of creating the cap, etc., and I absolutely love it.
    So yeah:
    At 1:08, the guy's genitals get sucked up inside the body.
    At 2:32, there's a huge loss of detail in the wrinkles.
    But at 5:40, I don't care at all, because you guys are doing an amazing, complex and difficult job, and this little insight into your technology describes it so well I can only fall in love with it.
    Keep being awesome.

  • @TheKevphil
    @TheKevphil 7 years ago +5

    Obviously it needs more detail, but it's a BIG step above what's possible now. I don't understand how this would be implemented, however. As a plugin (Maya, 3ds Max), will the user buy different motion packs? Or maybe this will be released for higher-end systems. Other demos at SIGGRAPH 2017 show some remarkable cloth simulations as part of physical interactions, so the future looks bright. This is one major area of CGI that still suffers from sub-par results.

    • @mollynicholasba7409
      @mollynicholasba7409 7 years ago

      Do you have a link to the "cloth simulations as part of physical interactions"? I'm really interested in seeing that.

  • @zoombapup
    @zoombapup 7 years ago

    Thanks for the upload, Michael. I can see a problem with this approach when you transfer the displacements between characters with widely different body shapes, though. In that case the wrinkles and other stretching would look strange. Wouldn't it be better to try to determine the cloth parameters from the example and apply those to a physics model of the new body shape instead? Displacements work for relatively similar body shapes, I suppose, so it would cover a lot of general cases; maybe that is a fair trade-off.
    Great video, reading the paper now. Thanks again for sharing!

    • @MichaelBlackMPI
      @MichaelBlackMPI  7 years ago +2

      Indeed, the wrinkles do not adapt to the shape and the approach does not model the physics of cloth. We discuss these limitations in the paper. We are interested in a more physical approach and have some other new work at SIGGRAPH on learning physical parameters of human soft tissue from examples (ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/375/main_final.pdf). This works well but we haven't tried to apply it to cloth yet.
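
      To make the displacement-transfer idea above concrete, here is a minimal sketch in Python, assuming garment and body meshes that share one template topology; the function and array names are hypothetical illustrations, not the ClothCap code:

          import numpy as np

          def transfer_garment(src_body, src_garment, dst_body):
              """Copy per-vertex garment offsets onto a new body.

              All inputs are (V, 3) float arrays of vertex positions in
              correspondence (same template topology, assumed here).
              """
              # The garment is encoded as displacements from the source body,
              # so fit and wrinkles are baked into these offsets.
              displacements = src_garment - src_body
              # Copying the offsets verbatim is why the wrinkles do not adapt
              # to a very different target shape, as noted in the reply above.
              return dst_body + displacements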

    • @zoombapup
      @zoombapup 7 years ago

      Out of interest, Michael: per the Dyna paper, you use a capture setup with 22 stereo projectors? I would have assumed that the resulting scan and texture quality would be far higher than what's shown in the video. Is that some limitation of the equipment? I suppose using something like the newer 4K video capture solutions would help there?
      I ask because I've been looking at various photogrammetry results, and high-end commercial outfits like Ten-24 get very high-quality results in terms of scan and texture quality, but obviously couldn't manage real-time or 60fps capture as you have here. I'm wondering where the sweet spot is in terms of hardware capture of high-detail meshes and textures in the near future.

    • @MichaelBlackMPI
      @MichaelBlackMPI  7 years ago +2

      The high-quality meshes you see for sale are captured with really high-res cameras, more of them, and then involve some manual labor to make them look that good. We process 60-120fps, capturing tens of thousands of meshes. With this data volume, everything has to be automatic. The data is already huge, so higher-res meshes would make life even harder. So we have focused on the under-represented problem of dynamic capture and gave up resolution in return. Note that the raw meshes have about 150K points while our template mesh has only about 7K, so there is much more we could get out of the current data. We also have a separate face/hand scanner to get higher resolution for these body parts.
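
      A back-of-envelope calculation, assuming the xyz positions are stored as 32-bit floats (an assumption; the storage format isn't stated here), shows why the data volume dominates:

          # Rough raw-geometry throughput at the capture rates quoted above.
          fps = 120                  # upper end of the 60-120fps range
          points = 150_000           # ~150K points per raw scan mesh
          bytes_per_point = 3 * 4    # x, y, z as float32 (assumed)
          mb_per_sec = fps * points * bytes_per_point / 1e6
          print(f"~{mb_per_sec:.0f} MB/s of raw vertex data alone")  # ~216 MB/s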

    • @zoombapup
      @zoombapup 7 years ago

      I asked because I was wondering how far this approach could get with a reasonably affordable setup built from something like Intel RealSense cameras. I'm just waiting for the newer R400 camera to come out to try building a fusion-based meshing capture volume from them. Clearly you have a higher-quality setup, but it doesn't look as good as it could due to the relatively poor textures.
      One approach that many of the commercial scanners I've seen in action use is to capture the subject with a separate colour camera at specific points and then apply that texture to the scanned mesh. Maybe you could apply the same idea to your captures to get higher-quality textures? I know it doesn't really matter for your research purposes, but I think having higher-quality textures in your example videos would improve the reception of the research methods (I must admit I have a bias for example videos where the results are visually competitive with commercial-quality assets).
      Anyway, thanks for sharing. It's very nice to see a research group showing their methods in an accessible format. I've had trouble tracking down example videos for a number of really great papers, and I'm hoping that sharing via a site like YouTube becomes the norm.

    • @MichaelBlackMPI
      @MichaelBlackMPI  7 years ago +1

      The color cameras in this system are not high-res but we have 22 of them. They capture slightly out of phase with the stereo cameras but we can solve for this and get out a decent texture map. See our new work on Dynamic FAUST, which appears at CVPR: ruclips.net/video/6T9FSC2bQDA/видео.html

  • @gmsgms1018
    @gmsgms1018 4 years ago

    I need a fully accurate scan of face, body, and clothes from a 2D image to make a 3D relief or a full 3D model (STL, OBJ, DXF). Any suggestions?

  • @H8ts
    @H8ts 7 years ago

    I don't believe this will be the future. Real-time computed cloth physics is getting really good, especially with dedicated hardware acceleration like PhysX, so there is no need for this.

    • @MichaelBlackMPI
      @MichaelBlackMPI  7 years ago +14

      We agree that cloth simulation can look great. The issue is really about designing the clothing, grading it, and draping it on bodies of different shapes. Clothing design still requires an expert and getting good results is hard. Imagine you want to take the clothes you are wearing and put them on a character -- really getting that right with physics simulation will take a lot of work. Eventually we hope to be able to quickly scan a person, parse that clothing, and transfer it to a new character. We think this will be easier for many people. This is just our first step.

    • @VarlamDWMA
      @VarlamDWMA 7 years ago

      That's awesome.

  • @HikikomoriDev
    @HikikomoriDev 6 years ago +1

    This would be awesome in VR environments where you want to simulate people and give their appearance the best dynamics possible.

  • @camelCased
    @camelCased 6 years ago

    So, what happened to it? Was it used in any game engine, and how exactly? I also saw a similar study called DRAPE.

    • @MichaelBlackMPI
      @MichaelBlackMPI  6 years ago +4

      My group also did DRAPE. Neither system is fully practical yet for wide deployment but we continue to work on the problem.

    • @norman191000
      @norman191000 3 years ago

      @MichaelBlackMPI And how did it end up after 6 years of development? I'm asking from a VFX perspective.

  • @mayorc
    @mayorc 7 years ago +1

    It's cool, but without a $20K real-time scanner it's of no use to most 3D artists.

    • @MalrickEQ2
      @MalrickEQ2 7 years ago +1

      This was made using a simple iPhone camera, actually.

  • @selenefrost6267
    @selenefrost6267 6 years ago

    Looking at the comments, I was fully expecting to see mostly immature remarks about being able to find the shape of someone's body through clothes, despite the fact that tightly fitting underwear would prevent any accurate scan. Upon looking at the comments section and seeing mostly constructive, interesting comments that were respectfully stated even when conflicting with the creator, I was pleasantly surprised. However, I feel that this fundamentally defies human nature and must be fixed before this comments section collapses in on itself, so: ha ha, you will be able to look at people through clothes, ha ha.

  • @WhiteDragon103
    @WhiteDragon103 7 years ago +12

    2:32 significant loss of detail in cloth wrinkles.

    • @lucie3d
      @lucie3d 6 years ago +2

      Please point to your work in cloth simulation and stuff.

    • @molIymawk
      @molIymawk 6 years ago +5

      Dom Troisi, you don't need to be an expert in cloth physics to notice there is a significant drop in wrinkle detail.

    • @toafloast1883
      @toafloast1883 6 years ago +4

      Holy shit, is this one of those "you do better" comments? Jesus, @dom troisi.
      As for the detail loss, perhaps textures and a higher resolution would fix that? After all, it is just a flat, boringly lit cloth.

    • @pcmaster888
      @pcmaster888 6 years ago

      I guess that kind of resolution is pretty normal by today's standards for full-body 3D scanning (whatever the "4D scanning" they talk about is).
      More detail could always be added in post; this is more of a tech demo than something finished, obviously.