Christian Theobalt
  • Videos: 146
  • Views: 1,081,623
HQ3DAvatar: High Quality Controllable 3D Head Avatar, TOG 2024
Paper Abstract:
Multi-view volumetric rendering techniques have recently shown great potential in modeling and synthesizing high-quality head avatars. A common approach to capture full head dynamic performances is to track the underlying geometry using a mesh-based template or 3D cube-based graphics primitives. While these model-based approaches achieve promising results, they often fail to learn complex geometric details such as the mouth interior, hair, and topological changes over time. This paper presents a novel approach to building highly photorealistic digital head avatars. Our method learns a canonical space via an implicit function parameterized by a neural network. It leverages...
Views: 415
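The abstract above describes learning a canonical space via an implicit function parameterized by a neural network. Purely as an illustration of that general pattern (not the HQ3DAvatar architecture; all class and argument names below are invented), here is a minimal PyTorch sketch in which a deformation MLP, conditioned on a per-frame driving code, warps ray samples into a canonical radiance field:

```python
import torch
import torch.nn as nn

def mlp(dims):
    """Stack Linear+ReLU layers; no activation after the last layer."""
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class CanonicalHeadField(nn.Module):
    """Toy canonical-space avatar: a deformation net maps observed-space
    points (conditioned on a per-frame driving code) into a canonical space,
    where a second net predicts density and colour for volume rendering."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.deform = mlp([3 + code_dim, 128, 128, 3])   # offset into canonical space
        self.canonical = mlp([3, 256, 256, 4])           # -> (density, rgb)

    def forward(self, pts, code):
        # pts: (N, 3) sample points along camera rays, code: (code_dim,)
        code = code.expand(pts.shape[0], -1)
        pts_canonical = pts + self.deform(torch.cat([pts, code], dim=-1))
        out = self.canonical(pts_canonical)
        density = torch.relu(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return density, rgb

field = CanonicalHeadField()
density, rgb = field(torch.rand(1024, 3), torch.randn(32))
print(density.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])
```

In a full system the returned density and colour would be alpha-composited along each camera ray and supervised against multi-view images.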

Videos

Showcase of Holoported Characters (CVPR 2024): Real-time 3D Avatar of Christian Theobalt
Views: 705 · 5 months ago
A short video of a real-time 3D avatar of Christian Theobalt, driven live by himself. The avatar was created with the Holoported Characters method that will be presented at CVPR 2024. vcai.mpi-inf.mpg.de/projects/holochar/ This work was done at the MPI for Informatics and the Saarbruecken Research Center for Visual Computing, Interaction and Artificial Intelligence (MPI for Informatic...
VINECS: Video-based Neural Character Skinning. In CVPR, 2024
Views: 198 · 5 months ago
Paper Abstract: Rigging and skinning clothed human avatars is a challenging task and traditionally requires a lot of manual work and expertise. Recent methods addressing it either generalize across different characters or focus on capturing the dynamics of a single character observed under different pose configurations. However, the former methods typically predict solely static skinning weig...
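For readers unfamiliar with the term, the skinning weights that VINECS predicts are the per-vertex, per-bone weights used in linear blend skinning (LBS). The snippet below shows only that standard LBS step, v' = Σ_j w_vj T_j v, with toy data; it is not the paper's pose-dependent weight network.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """vertices: (V, 3) rest-pose positions
    weights: (V, J) skinning weights, each row sums to 1
    bone_transforms: (J, 4, 4) world transforms of the J bones
    returns: (V, 3) posed vertices, v' = sum_j w_vj * T_j * v (homogeneous)."""
    V = vertices.shape[0]
    verts_h = np.concatenate([vertices, np.ones((V, 1))], axis=1)            # (V, 4)
    posed = np.einsum('vj,jab,vb->va', weights, bone_transforms, verts_h)    # (V, 4)
    return posed[:, :3]

# Tiny example: two bones, identity and a 1-unit translation along x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
print(linear_blend_skinning(verts, w, T))  # second vertex blends to x = 1.5
```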
ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering. In CVPR, 2024
Views: 895 · 5 months ago
Paper Abstract: Real-time rendering of photorealistic and controllable human avatars stands as a cornerstone in Computer Vision and Graphics. While recent advances in neural implicit rendering have unlocked unprecedented photorealism for digital avatars, real-time performance has mostly been demonstrated for static scenes only. To address this, we propose ASH, an Animatable Gaussian Splatting a...
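ASH drives a set of 3D Gaussian splats with a pose-conditioned network so that a splatting rasterizer can render them in real time. The sketch below illustrates only the generic idea of regressing per-Gaussian parameters from a pose code; it is a hypothetical stand-in, not the ASH architecture, and a real Gaussian-splatting rasterizer is not included.

```python
import torch
import torch.nn as nn

class PoseDrivenGaussians(nn.Module):
    """Regress per-Gaussian splat parameters (position offset, scale,
    rotation quaternion, colour, opacity) from a body-pose code.
    A splatting renderer (not included here) would rasterize the result."""
    def __init__(self, num_gaussians=2_000, pose_dim=72):
        super().__init__()
        self.template = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)  # canonical means
        self.decoder = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_gaussians * 14),  # 3 offset + 3 scale + 4 quat + 3 rgb + 1 alpha
        )
        self.num_gaussians = num_gaussians

    def forward(self, pose):
        params = self.decoder(pose).view(self.num_gaussians, 14)
        means = self.template + params[:, 0:3]
        scales = torch.exp(params[:, 3:6])
        quats = torch.nn.functional.normalize(params[:, 6:10], dim=-1)
        rgb = torch.sigmoid(params[:, 10:13])
        opacity = torch.sigmoid(params[:, 13:14])
        return means, scales, quats, rgb, opacity

model = PoseDrivenGaussians()
outputs = model(torch.randn(72))
print([o.shape for o in outputs])
```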
ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis. In CVPR, 2024.
Views: 161 · 5 months ago
Paper Abstract: Gestures play a key role in human communication. Recent methods for co-speech gesture generation, while managing to generate beat-aligned motions, struggle generating gestures that are semantically aligned with the utterance. Compared to beat gestures that align naturally to the audio signal, semantically coherent gestures require modeling the complex interactions between the la...
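ConvoFusion builds on denoising-diffusion models for motion. As background only, here is a generic DDPM ancestral-sampling loop over a gesture clip with a stand-in noise-prediction network conditioned on a speech-feature vector; the network, tensor shapes, and schedule are illustrative and do not reflect ConvoFusion's actual model.

```python
import torch
import torch.nn as nn

T = 100                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class NoisePredictor(nn.Module):
    """Stand-in epsilon-network: predicts the noise added to a gesture clip
    (frames x joint channels), conditioned on a speech-feature vector."""
    def __init__(self, frames=60, channels=63, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frames * channels + cond_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, frames * channels),
        )
        self.shape = (frames, channels)

    def forward(self, x_t, t, cond):
        inp = torch.cat([x_t.flatten(), cond, t.float().view(1) / T])
        return self.net(inp).view(self.shape)

@torch.no_grad()
def sample(model, cond):
    """Ancestral DDPM sampling of one gesture clip given speech features."""
    x = torch.randn(model.shape)
    for t in reversed(range(T)):
        eps = model(x, torch.tensor(t), cond)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

motion = sample(NoisePredictor(), cond=torch.randn(128))
print(motion.shape)  # torch.Size([60, 63])
```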
Egocentric Whole Body Motion Capture with FisheyeViT and Diffusion Based Motion Refinement.
Views: 130 · 5 months ago
Paper Abstract: In this work, we explore egocentric whole-body motion capture using a single fisheye camera, which simultaneously estimates human body and hand motion. This task presents significant challenges due to three factors: the lack of high-quality datasets, fisheye camera distortion, and human body self-occlusion. To address these challenges, we propose a novel approach that leverages ...
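The fisheye distortion mentioned above stems from the projection model itself: an idealized equidistant fisheye maps the angle θ between a ray and the optical axis linearly to image radius, r = f·θ, rather than r = f·tan θ as in a pinhole camera. The helper below implements only this textbook model for intuition; it is not the paper's FisheyeViT pipeline, and the intrinsics are made up.

```python
import numpy as np

def project_equidistant_fisheye(points, f=300.0, cx=512.0, cy=512.0):
    """Project 3D camera-space points (N, 3), z > 0, with the ideal
    equidistant fisheye model: r = f * theta, where theta is the angle
    between the ray and the optical axis. Returns (N, 2) pixel coords."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)    # angle from the optical axis
    phi = np.arctan2(y, x)                   # azimuth in the image plane
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# A point 45 degrees off-axis lands at radius f * pi/4 from the image centre.
print(project_equidistant_fisheye(np.array([[1.0, 0.0, 1.0]])))
```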
3D Human Pose Perception from Egocentric Stereo Videos, In CVPR, 2024
Views: 175 · 5 months ago
Paper Abstract: While head-mounted devices are becoming more compact, they provide egocentric views with significant self-occlusions of the device user. Hence, existing methods often fail to accurately estimate complex 3D poses from egocentric views. In this work, we propose a new transformer-based framework to improve egocentric stereo 3D human pose estimation, which leverages the scene inform...
EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams. In CVPR, 2024
Views: 113 · 5 months ago
Paper Abstract: Monocular egocentric 3D human motion capture is a challenging and actively researched problem. Existing methods use synchronously operating visual sensors (e.g. RGB cameras) and often fail under low lighting and fast motions, which can be restricting in many applications involving head-mounted devices. In response to the existing limitations, this paper 1) introduces a new probl...
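An event camera emits an asynchronous stream of (x, y, timestamp, polarity) tuples instead of frames, which is why synchronous RGB pipelines do not transfer directly. A common first preprocessing step, shown below purely as an illustration (not EventEgo3D's specific representation), is to accumulate a time window of events into a fixed-size polarity-count image that a network can ingest:

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate events (x, y, t, polarity in {-1, +1}) that fall inside
    [t_start, t_end) into a 2-channel count image: one channel per polarity."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x, y, t, p = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2], events[:, 3]
    mask = (t >= t_start) & (t < t_end)
    for xi, yi, pi in zip(x[mask], y[mask], p[mask]):
        frame[0 if pi > 0 else 1, yi, xi] += 1.0
    return frame

# Three synthetic events on a 4x4 sensor; two fall inside the time window.
ev = np.array([[1, 2, 0.10, +1],
               [3, 0, 0.15, -1],
               [0, 0, 0.30, +1]])
print(events_to_frame(ev, 4, 4, t_start=0.0, t_end=0.2).sum(axis=(1, 2)))  # [1. 1.]
```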
Holoported Characters: Real-time Free-viewpoint Rendering of Humans from Sparse RGB Cameras.
Views: 225 · 5 months ago
Paper Abstract: We present the first approach to render highly realistic free-viewpoint videos of a human actor in general apparel, from sparse multi-view recording to display, in real-time at an unprecedented 4K resolution. At inference, our method only requires four camera views of the moving actor and the respective 3D skeletal pose. It handles actors in wide clothing, and reproduces even fi...
ROAM: Robust and Object-aware Motion Generation using Neural Pose Descriptors, 3DV, 2024
Views: 386 · 9 months ago
Paper Abstract: Existing automatic approaches for 3D virtual character motion synthesis supporting scene interactions do not generalise well to new objects outside training distributions, even when trained on extensive motion capture datasets with diverse objects and annotated interactions. This paper addresses this limitation and shows that robustness and generalisation to novel scene objects ...
SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes, 3DV, 2024
Views: 250 · 9 months ago
Paper Abstract: Existing methods for the 4D reconstruction of general, non-rigidly deforming objects focus on novel-view synthesis and neglect correspondences. However, time consistency enables advanced downstream tasks like 3D editing, motion analysis, or virtual-asset creation. We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner. Our dynamic-NeRF method...
Our Neural Rendering technology applied to The Matrix 4 movie data.
Views: 304 · 11 months ago
The following shows results of an exciting experiment we did with our friends from Volucap in Berlin, who did the special effects for the movie Matrix Resurrections. The input to the result above was a multi-view video sequence filmed with a seven-camera rig. We then applied our NR-NeRF algorithm to it, using neural rendering to create a virtual bullet-time camera path through the scene. Also, we...
Decaf: Monocular Deformation Capture for Face and Hand Interactions. In SIGGRAPH ASIA, 2023.
Views: 3.1K · 1 year ago
Paper Abstract: We introduce the first single-view motion capture method that regresses 3D hand and face motions along with deformations arising from their interactions. We model hands as articulated objects inducing non-rigid face deformations during an active interaction. Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired wit...
NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction. In ICCV, 2023.
Views: 1.2K · 1 year ago
Paper Abstract: Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, the training of NeuS takes an extremely long time (8 hours), which makes it almost impossible to apply them to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, calle...
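NeuS-style methods, including NeuS2, represent the surface as the zero level set of a signed distance function (SDF) and turn SDF samples along a ray into opacities for volume rendering. The sketch below reproduces, in simplified form, the discrete NeuS opacity, alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0), where Phi_s is a sigmoid with sharpness s applied to the SDF values f; it is a didactic re-implementation, not NeuS2's optimized CUDA code.

```python
import torch

def neus_alphas(sdf, s=64.0):
    """sdf: (N,) signed-distance samples at consecutive points along one ray.
    Returns (N-1,) per-interval opacities following the NeuS formulation."""
    cdf = torch.sigmoid(s * sdf)                       # Phi_s(f_i)
    alpha = (cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)
    return alpha.clamp(min=0.0)

def composite(alphas, colors):
    """Standard front-to-back alpha compositing of per-interval colours (N-1, 3)."""
    transmittance = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alphas]), dim=0)[:-1]
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(dim=0)

# A ray crossing the surface (SDF changes sign) produces a high alpha there.
sdf = torch.tensor([0.30, 0.15, 0.02, -0.10, -0.25])
alphas = neus_alphas(sdf)
print(alphas)
print(composite(alphas, torch.rand(4, 3)))
```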
LiveHand: Real-time and Photorealistic Neural Hand Rendering. In ICCV, 2023.
Views: 359 · 1 year ago
Paper Abstract: The human hand is the main medium through which we interact with our surroundings. Hence, its digitization is of utmost importance, with direct applications in VR/AR, gaming, and media production amongst other areas. While there are several works modeling the geometry of hands, little attention has been paid to capturing photo-realistic appearance. Moreover, for applications ...
AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars. In SIGGRAPH ASIA 2023.
Views: 546 · 1 year ago
HDHumans: A Hybrid Approach for High-fidelity Digital Humans, In SCA, 2023
Views: 1.4K · 1 year ago
EgoLocate. In SIGGRAPH 2023.
Views: 990 · 1 year ago
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. SIGGRAPH 2023
Views: 7K · 1 year ago
General Neural Gauge Fields. In ICLR, 2023
Views: 280 · 1 year ago
IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions. In EUROGRAPHICS, 2023.
Views: 409 · 1 year ago
Scene Aware 3D Multi Human Motion Capture from a Single Camera. In EUROGRAPHICS, 2023
Views: 2.2K · 1 year ago
EventNeRF: Neural Radiance Fields from a Single Colour Event Camera. In CVPR, 2023.
Views: 1.3K · 1 year ago
MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis. In CVPR, 2023
Views: 1.2K · 1 year ago
Scene-aware Egocentric 3D Human Pose Estimation. In CVPR, 2023.
Views: 854 · 1 year ago
Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision. In CVPR, 2022.
Views: 375 · 2 years ago
φ-SfT: Shape-from-Template with a Physics-Based Deformation Model. In CVPR, 2022.
Views: 387 · 2 years ago
Advances in Neural Rendering - EUROGRAPHICS 2022 STAR Report
Views: 5K · 2 years ago
EventHands: Real-Time Neural 3D Hand Pose Estimation from an Event Stream. In ICCV, 2021
Views: 794 · 2 years ago
Gravity-Aware Monocular 3D Human-Object Reconstruction. In ICCV, 2021
Views: 273 · 2 years ago

Comments

  • @jamesconkle9158 · 28 days ago

    Hopefully it will become available to artists in time. Amazing work 👏

  • @duyinhquang6168 · 1 month ago

    very interested in your work. But could you show me how your model can map 3D human pose to exact location on the point clouds map? Thanks in advance!

  • @intothevoid2046 · 1 month ago

Great, but do you know how "Monoperf" sounds to English-speaking people?

  • @jamespogg · 2 months ago

    hey any code release planned?

  • @steevya · 3 months ago

    wow... simply awesome..

  • @steevya · 3 months ago

    wow... simply awesome ....

  • @robinkneepkens4970 · 4 months ago

    Congratulations on the outstanding results!

  • @CoyTheobalt · 4 months ago

    Hey cousin, I'm guessing.

  • @prathameshdinkar2966 · 5 months ago

    Nice paper! Keep the good work going! 😁

  • @fasiulhaq2366 · 5 months ago

    I'm shocked

  • @GmMelendez · 5 months ago

    Everyone is finding out why this is being used. To steel your will all your assets all your money in the bank. Everything your loved ones worked hard to leave you there legacy.. Are you just going to let your self hell no!!!

  • @BAYqg · 5 months ago

    Such clean finger geometry. What is interesting is why the hand geometry is so degraded when even the fingers are captured well. Anyway, the result is amazing!

  • @XRCADIA · 5 months ago

    Impressive

  • @Elektrashock1 · 5 months ago

    Well done and novel approach. 🤙

  • @SuperSmyer · 5 months ago

    Awesome!

  • @alexijohansen · 5 months ago

    I would love to join your lab!

  • @richard_goforth · 6 months ago

    Unbelievable. Very impressive. Appreciate you sharing!

  • @xu_xl · 7 months ago

    it will be very helpful if author could share the code of this project

  • @endavidg · 9 months ago

    4:30 Shouldn't (a) simply be called RASTERIZATION? I think calling it "Forward Rendering" is confusing because "Deferred Rendering" is also rasterization.

  • @leef918 · 10 months ago

    This is really in-depth research, covering popular depth devices (Leap Motion, RealSense) and comparisons with other work.

  • @mahdihajialilue3825 · 11 months ago

    nice work

  • @crestz1 · 1 year ago

    What's the difference between this and Neuralangelo? Both appear to use first- and second-order derivatives.

  • @chillsoft · 1 year ago

    This is really cool! One question, you say "capture dataset with realistic face deformations acquired with a markerless multi-view camera system", does that mean we will have to have an array of cameras once the code drops to reproduce this? How many and what quality cameras do we need, does an array of iPhones suffice? Great research, thanks for sharing!

  • @Bellberuu · 1 year ago

    Wowww that's so cool!

  • @well5423 · 1 year ago

    Amazing reconstruction quality! Bravo.

  • @birukfikadu-ni8ph · 1 year ago

    Please tell me where i can experience

  • @topgunmaverick9281 · 1 year ago

    🤟 Great

  • @goteer10 · 1 year ago

    It's incredible how it can work with its own skeletal tracking input and still get such amazing output! I'd imagine that with more accurate skeletal tracking data gathered separately (with either IMUs or markers) it would almost completely weed out the few edge cases where the renderer gets fed incorrect skeletal data (like arms teleporting or skewed hands). I'd love to see if it could handle hands in the future.

  • @pinas.passport · 1 year ago

    The end of photography 😅

  • @jimj2683 · 1 year ago

    Has anyone tried to use synthetic data from a game engine to train a neural network? With enough 2d vs 3d data it should be possible to reconstruct most objects/scenes in Unreal Engine or similar.

  • @beautyfitnesschannel6639 · 1 year ago

    Great, where can experience

  • @ChangPhlat · 1 year ago

    wow

  • @TinNguyen-wx4fq · 1 year ago

    Good Job!

  • @dietrichdietrich7763 · 1 year ago

    amazing (powerful stuff)

  • @dietrichdietrich7763 · 1 year ago

    Awesome

  • @yangchen8602 · 1 year ago

    Great Talk! Thanks for sharing!

  • @petixuxu · 1 year ago

    Could this be done with an stl of a figure?

  • @absoriz2691 · 1 year ago

    Great work!

  • @mingwuzheng4146 · 1 year ago

    Excellent idea! I'm constantly eager to explore a neural UV mapping technique like this.

  • @ZergRadio · 1 year ago

    Interesting!

  • @bolzanoitaly8360 · 2 years ago

    What do you want to show us? If you can't share the model, then what is the point of this? Even I could take this video and put it on my vlog. This is just nothing... can you share the model and code, please?

  • @21graphics · 2 years ago

    what is RGB camera?

  • @bobthornton9280 · 2 years ago

    So, I was interested in seeing if there was an accurate one of these, that I could use on episodes of LOST. Then Daniel Dae Kim showed up in this video. Nice.

  • @wmka · 2 years ago

    Just keeps getting better and better.

  • @rodrigoferriz8267 · 2 years ago

    what is the name of the software ? , and its for public use?

  • @virtual_intel · 2 years ago

    How does this benefit us viewers? and when can we gain access to the tool?

  • @Ethan-ny4vg · 3 years ago

    Is the character controller in Unity? Does anybody know? Thanks.

  • @MattSayYay · 3 years ago

    Apparently Chills can't unsee this.

  • @deepfakescoverychannel6710 · 3 years ago

    that is fake paper without the code.

  • @FancyFun3433 · 3 years ago

    Alright that's impressive but hows the multi camera set up? If I wanted to set up 4 cameras to capture my sides, back and front would that be possible or would it give me a shit ton of errors?? Also something that is important to me is ground work. Does this only work for videos where I have to stand up? Or can I do a front flip or back flip or crawling on the ground movements?