ShanghaiTech Digital Human
  • 46 videos
  • 101,817 views
[SIGGRAPH Asia 2024] V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians
Project: authoritywang.github.io/v3/
Experiencing high-fidelity volumetric video as seamlessly as 2D videos is a long-held dream. However, current dynamic 3DGS methods, despite their high rendering quality, face challenges in streaming on mobile devices due to computational and bandwidth constraints. In this paper, we introduce V^3 (Viewing Volumetric Videos), a novel approach that enables high-quality mobile rendering by streaming dynamic Gaussians. Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs. Additionally, we propose a two-stage training strategy to reduce storage requirements with rapid training speed. The first stage employs ha...
1,859 views
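
The key idea stated above, treating per-frame Gaussian attributes as 2D images so that ordinary hardware video codecs can compress the stream, can be sketched in a few lines. The Python snippet below is a hedged illustration only: the attribute layout, the 8-bit min-max quantization, and the pack_gaussians_to_frame helper are hypothetical assumptions, not the pipeline from the paper.

    # Hypothetical sketch (not the V^3 pipeline): quantize per-frame Gaussian
    # attributes to 8 bits and tile them into a 2D image; a sequence of such
    # images can then be fed to a standard hardware video encoder.
    import numpy as np

    def pack_gaussians_to_frame(attrs: np.ndarray, width: int = 1024) -> np.ndarray:
        """Map an (N, C) Gaussian attribute array to an (H, width, C) uint8 image."""
        n, c = attrs.shape
        lo, hi = attrs.min(axis=0), attrs.max(axis=0)               # per-channel range
        q = np.round((attrs - lo) / np.maximum(hi - lo, 1e-8) * 255).astype(np.uint8)
        height = -(-n // width)                                     # ceil(N / width)
        frame = np.zeros((height * width, c), dtype=np.uint8)
        frame[:n] = q                                               # zero-pad the last row
        return frame.reshape(height, width, c)

    # Example: one frame of 100,000 Gaussians with 14 packed channels
    # (e.g. 3 position + 4 rotation + 3 scale + 1 opacity + 3 color).
    gaussians = np.random.rand(100_000, 14).astype(np.float32)
    frame = pack_gaussians_to_frame(gaussians)
    print(frame.shape)  # (98, 1024, 14) -> split into image planes and video-encode

In any real streaming setup the per-channel (min, max) ranges would also have to be sent as side information so the decoder can invert the quantization; the paper's actual attribute layout, two-stage training, and codec settings are documented on the project page above.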

Videos

[SIGGRAPH 2024] DressCode: Autoregressively Sewing and Generating Garments from Text Guidance
276 views · 4 months ago
Project Page: ihe-kaii.github.io/DressCode/ Arxiv: arxiv.org/abs/2401.16465 Apparel's significant role in human appearance underscores the importance of garment digitalization for digital human creation. Recent advances in 3D content creation are pivotal for digital human creation. Nonetheless, garment generation from text guidance is still nascent. We introduce a text-driven 3D garment generat...
Instant Facial Gaussians Translator for Relightable and Interactable Facial Rendering
1.3K views · 4 months ago
Project: dafei-qin.github.io/TransGS.github.io/ Arxiv: coming soon The advent of digital twins and mixed reality devices has increased the demand for high-quality and efficient 3D rendering, especially for facial avatars. Traditional and AI-driven modeling techniques enable high-fidelity 3D asset generation from scans, videos, or text prompts. However, editing and rendering these assets often i...
[SIGGRAPH 2024] CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets
7K views · 7 months ago
Project Page: sites.google.com/view/clay-3dlm Arxiv: arxiv.org/abs/2406.13897 Demo: hyperhuman.deemos.com/rodin In the realm of digital creativity, our potential to craft intricate 3D worlds from imagination is often hampered by the limitations of existing digital tools, which demand extensive expertise and effort. To narrow this disparity, we introduce CLAY, a 3D geometry and material generato...
[SIGGRAPH Asia 2024] LetsGo: Large-Scale Garage Rendering via LiDAR-Assisted Gaussian Primitives
2.4K views · 7 months ago
Project: zhaofuq.github.io/LetsGo/ Arxiv: arxiv.org/pdf/2404.09748 Large garages are ubiquitous yet intricate scenes that present unique challenges due to their monotonous colors, repetitive patterns, reflective surfaces, and transparent vehicle glass. Conventional Structure from Motion (SfM) methods for camera pose estimation and 3D reconstruction often fail in these environments due to poor c...
[SIGGRAPH Asia 2024] Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos
3K views · 7 months ago
Project: nowheretrix.github.io/DualGS/ Volumetric video represents a transformative advancement in visual media, enabling users to navigate immersive virtual experiences freely and narrowing the gap between digital and real worlds. However, the need for extensive manual intervention to stabilize mesh sequences and the generation of excessively large assets in existing workflows impedes broader ...
[SIGGRAPH 2024] Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance
2.6K views · 8 months ago
Project: sites.google.com/view/media2face Arxiv: arxiv.org/abs/2401.15687 The synthesis of 3D facial animations from speech has garnered considerable attention. Due to the scarcity of high-quality 4D facial data and well-annotated abundant multi-modality labels, previous methods often suffer from limited realism and a lack of flexible conditioning. We address this challenge through a trilogy. W...
[CVPR2024] HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment
959 views · 9 months ago
Project Page: juzezhang.github.io/HOIM3_ProjectPage/ Arxiv: arxiv.org/pdf/2404.00299 Humans naturally interact with both others and the surrounding multiple objects, engaging in various social activities. However, due to fundamental data scarcity, recent advances in modeling human-object interactions mostly focus on perceiving isolated individuals and objects. In this paper, we introduce HOI-M3...
[CVPR2024] I’M HOI: Inertia-aware Monocular Capture of 3D Human-Object Interactions
574 views · 9 months ago
Project Page: afterjourney00.github.io/IM-HOI.github.io/ Arxiv: arxiv.org/abs/2312.08869 We are living in a world surrounded by diverse and “smart” devices with rich modalities of sensing ability. Conveniently capturing the interactions between us humans and these objects remains out of reach. In this paper, we present I’m-HOI, a monocular scheme to faithfully capture the 3D motions of both the...
[CVPR2024] BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics
1K views · 1 year ago
Project Page: godheritage.github.io/ Arxiv: arxiv.org/abs/2312.07937 The recently emerging text-to-motion advances have inspired numerous attempts for convenient and interactive human motion generation. Yet, existing methods are largely limited to generating body motions only without considering the rich two-hand motions, let alone handling various conditions like body dynamics or texts. To break...
[CVPR2024] HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting
12K views · 1 year ago
Project Page: nowheretrix.github.io/HiFi4G/ Arxiv: arxiv.org/abs/2312.03461 We have recently seen tremendous progress in photo-real human modeling and rendering. Yet, efficiently rendering realistic human performance and integrating it into the rasterization pipeline remains challenging. In this paper, we present HiFi4G, an explicit and compact Gaussian-based approach for high-fidelity human pe...
[CVPR2024] VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
2.2K views · 1 year ago
Project Page: aoliao12138.github.io/VideoRF/ Arxiv: arxiv.org/abs/2312.01407 Neural Radiance Fields (NeRFs) excel in photorealistically rendering static scenes. However, rendering dynamic, long-duration radiance fields on ubiquitous devices remains challenging, due to data storage and computational constraints. In this paper, we introduce VideoRF, the first approach to enable real-time streamin...
[CVPR2024] OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
1.4K views · 1 year ago
Project Page: tr3e.github.io/omg-page/ Arxiv: arxiv.org/abs/2312.08985 We have recently seen tremendous progress in realistic text-to-motion generation. Yet, the existing methods often fail or produce implausible motions with unseen text inputs, which limits the applications. In this paper, we present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabul...
[SIGGRAPH 2023] HACK: Learning a Parametric Head and Neck Model for High-fidelity Animation
7K views · 1 year ago
Project Page: sites.google.com/view/hack-model Arxiv: arxiv.org/abs/2305.04469 Significant advancements have been made in developing parametric models for digital humans, with various approaches concentrating on parts such as the human body, hand, or face. Nevertheless, connectors such as the neck have been overlooked in these models, with rich anatomical priors often unutilized. In this paper,...
[SIGGRAPH 2023] DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance
15K views · 1 year ago
Project: sites.google.com/view/dreamface Arxiv: arxiv.org/pdf/2304.03117.pdf Web demo: hyperhuman.deemos.com HuggingFace: huggingface.co/spaces/DEEMOSTECH/ChatAvatar Emerging Metaverse applications demand accessible, accurate, and easy-to-use tools for 3D digital human creations in order to depict different cultures and societies as if in the physical world. Recent large-scale vision-language a...
[CVPR2023] ReRF: Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
3.3K views · 1 year ago
[TVCG (IEEEVR2023)] LiDAR-aid Inertial Poser: Large-scale Human Motion Capture
403 views · 1 year ago
[AAAI2023] HybridCap: Inertia-aid Monocular Capture of Challenging Human Motions
483 views · 1 year ago
[CVPR2023] HumanGen: Generating Human Radiance Fields with Explicit Priors
673 views · 1 year ago
[CVPR2023] NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions
833 views · 1 year ago
[CVPR2023] Instant-NVR: Instant Neural Volumetric Rendering for Human-object Interactions
1.1K views · 1 year ago
[CVPR2023] Relightable Neural Human Assets from Multi-view Gradient Illuminations
1K views · 1 year ago
[SIGGRAPH ASIA 2022] Human Performance Modeling and Rendering via Neural Animated Mesh
2.6K views · 2 years ago
[SIGGRAPH ASIA 2022] SCULPTOR: Skeleton-Consistent Face Creation with a Learned Parametric Generator
3.6K views · 2 years ago
[SIGGRAPH ASIA 2022] Video-driven Neural Physically-based Facial Asset for Production
13K views · 2 years ago
[SIGGRAPH 2022] NIMBLE: A Non-rigid Hand Model with Bones and Muscles
2.4K views · 2 years ago
[SIGGRAPH 2022] Artemis: Articulated Neural Pets with Appearance and Motion Synthesis
2.8K views · 2 years ago
[CVPR2022] HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs
3.8K views · 2 years ago
[CVPR2022] Fourier PlenOctree for Dynamic Radiance Field Rendering in Real-time
2K views · 2 years ago
[CVPR2022] NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions
996 views · 2 years ago

Comments

  • @jortor2932 · 6 days ago

    The hell is this 😂 but awesome

  • @Geancg · 2 months ago

    Mindblowingggg

  • @meowww-f2d · 2 months ago

    wow

  • @c016smith52 · 4 months ago

    Can you provide information about what your source videos are, how many cameras and different angles that are initially captured, pre-treatment? This is amazing work, just curious what sources are vs. generative (if any) fillers. Great work!!

  • @test_place7971 · 4 months ago

    Hi, what is the Neu Dim asset renderer script?

  • @briancunning423 · 4 months ago

    Yes very cool. Can't wait for the code to be published.

  • @MotulzAnto · 4 months ago

    wow... so cool

  • @hoang_minh_thanh · 4 months ago

    Congrats!

  • @narathipthisso4969 · 4 months ago

    Great!! How to tutorial ? 😮

  • @trishul1979 · 5 months ago

    Release date plz?

  • @xl000 · 5 months ago

    Are you going to release the trained network so that the result can be replicated independently ?

  • @MrEliteXXL · 5 months ago

    Xtreme, the results are incredible and I hope this will help more people make their projects easier

  • @wesleybarreira9449 · 5 months ago

    God is sad with so much artificial intelligence

  • @LOC-Ness · 5 months ago

    Losers in the comments

  • @CASAROME · 5 months ago

    Incredible

  • @konraddobson · 6 months ago

    While technically cool, it's just sad to see these always going after the creative work people love and never the things people DON'T enjoy doing. A completely distorted approach that's destroying creative fields and artists passions and incomes.

  • @ChikadorangFrog · 6 months ago

    will this be an open source and any projected release date?

  • @mosa8766 · 6 months ago

    just wow

  • @jbach · 6 months ago

    Cool! What are the datasets Clay was trained on?

    • @perceval_ · 5 months ago

      It's in the paper: Objaverse and ShapeNet.

  • @NoobNoob339 · 7 months ago

    And as always, what is in the dataset? Were the models you took for yourselves and you're now going to make money off of ethically sourced?

    • @clerothsun3933 · 6 months ago

      It doesn't matter, because you fkers will complain whether the dataset is open source or not lol.

    • @perceval_ · 5 months ago

      It's a Siggraph academic paper, so the model described in the paper has nothing to do with making money; it's educational. As for the dataset, it's in the paper they published, you can read it there. They used Objaverse and ShapeNet. Objaverse is mainly Sketchfab CC BY 4.0 license (before their No AI license change, which is hypocritical since they sell copyrighted stuff for money, like Nintendo characters and whatnot...). ShapeNet, on the other hand, is non-commercial, educational only.

    • @xl000 · 5 months ago

      People on Objaverse agreed that their 3D models could be used for AI training.

  • @ilanlee3025 · 7 months ago

    Crazy stuff

  • @elrossi96 · 7 months ago

    Great project, guys. Really incredible. Please, be careful with this tech, it is dangerous. Make it public please so everybody can test it. Congrats to the team

    • @clerothsun3933 · 6 months ago

      What is dangerous about it?

    • @xl000 · 5 months ago

      @@clerothsun3933 he thinks it's dangerous because he doesn't understand it.

  • @gaussiansplatsss · 7 months ago

    How can I download the app?

  • @fxguide · 7 months ago

    Great work really impressive.

  • @mousatat7392 · 7 months ago

    When will the supplementary material be released?

  • @Inferencer · 7 months ago

    Fantastic work! can we expect a July release?

  • @hoang_minh_thanh · 8 months ago

    Does this render in realtime?

  • @StephaneVFX · 8 months ago

    very impressive ! Bravo

  • @SALAVEY13 · 8 months ago

    wow! great level design)

  • @sahinerdem5496 · 8 months ago

    there is no eye work.

  • @Moshugaani · 8 months ago

    I predict that this will be a huge technology for the movie industry!

  • @benajau · 8 months ago

    Can't wait to see the code and play with it!

  • @rallyworld3417 · 9 months ago

    So what is the driver? Image or video?

  • @Aero3D · 9 months ago

    Can we use it in Unreal Engine?

  • @DaveDFX · 9 months ago

    This is the future!

  • @dodeakim · 9 months ago

    😍

  • @anyuanli4306 · 9 months ago

    Another great step towards hi-quality motion capture!😉

  • @yurongling · 9 months ago

    Very good video, which makes me rotate

  • @altjasonspeed8447 · 9 months ago

    The reconstructions of dynamic sports like skateboarding look marvelous!

  • @orennnnnnnn · 9 months ago

    Very impressive and informative thank you!

  • @bause6182 · 10 months ago

    Is your tool accessible, and how do I use it? I am really interested.

  • @myelinsheathxd · 10 months ago

    Amazing progress

  • @Pixelsplasher · 10 months ago

    All outputs and no inputs. The input video and how it was taken has to be shown first.

  • @char_art · 10 months ago

    After testing I can say that it's terribly bad.... Quality, accuracy of the shapes compared to the reference, everything is bad. Classic character artists are not about to be laid off ^^ All this development for that... While we really need high-performance retopology or automatic UV tools as a priority.

  • @DillonThomasDigital · 11 months ago

    Can these composite into Google's recently announced 'SMERFS"?

  • @DillonThomasDigital · 11 months ago

    This is awesome! This will remove the requirements for greenscreen, but directors/dp's will still need to "match lighting" to composite. I can't wait for this to release!

    • @bause6182 · 10 months ago

      The time savings we can have are incredible: you just have to shoot a video and incorporate your 3D model directly into the scene; no more need for motion tracking to integrate a flat video into the scenery. I can't wait to try.

  • @simsimsim-n2t · 1 year ago

    any demo online?

  • @f.b.1311 · 1 year ago

    Nice! But wtf was that last demo haha

  • @Danuxsy · 1 year ago

    neural driven games will reign supreme

  • @Razedcold · 1 year ago

    This is very interesting, thank you for sharing!