[CVPR2023] ReRF: Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

  • Published: 5 Feb 2025
  • Project: aoliao12138.gi...
    Arxiv: arxiv.org/abs/...
    The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts to extend them to dynamic scenes. Current techniques that utilize neural rendering for free-view videos (FVVs) are either restricted to offline rendering or capable of processing only brief sequences with minimal motion. In this paper, we present a novel technique, the Residual Radiance Field or ReRF, as a highly compact neural representation that achieves real-time FVV rendering of long-duration dynamic scenes. ReRF explicitly models the residual information between adjacent timestamps in the spatial-temporal feature space, with a global coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a compact motion grid along with a residual feature grid to exploit inter-frame feature similarities. We show that such a strategy can handle large motions without sacrificing quality. We further present a sequential training scheme to maintain the smoothness and the sparsity of the motion/residual grids. Based on ReRF, we design a special FVV codec that achieves a compression rate of three orders of magnitude, and we provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes. Extensive experiments demonstrate the effectiveness of ReRF for compactly representing dynamic radiance fields, enabling an unprecedented free-viewpoint viewing experience in both speed and quality.
    Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu†, Minye Wu†,
    Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos,
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
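
As a rough illustration of the per-frame update described in the abstract above, the sketch below warps the previous frame's feature grid with a compact motion grid, adds a residual feature grid, and decodes the result with a tiny MLP shared across frames. This is a minimal sketch, not the authors' implementation: the grid resolutions, channel count, trilinear warping, and MLP width are all assumptions.

```python
import torch
import torch.nn.functional as F

C, D = 8, 64  # assumed feature channels and full grid resolution (D^3 voxels)

def reconstruct_frame(prev_feat, motion, residual):
    """prev_feat: (1, C, D, D, D) feature grid of frame t-1.
    motion:      (1, 3, d, d, d) compact grid of per-voxel offsets (normalized coords).
    residual:    (1, C, D, D, D) sparse residual feature grid for frame t."""
    # Upsample the compact motion grid to the full feature resolution.
    flow = F.interpolate(motion, size=(D, D, D), mode="trilinear",
                         align_corners=True)
    # Sampling grid = identity coordinates plus the motion offsets.
    axes = (torch.linspace(-1, 1, D),) * 3
    coords = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)  # (D,D,D,3)
    grid = coords.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)
    # Warp the previous features, then add the residual correction.
    warped = F.grid_sample(prev_feat, grid, mode="bilinear",
                           align_corners=True)
    return warped + residual

# Global coordinate-based tiny MLP: decodes a feature vector (plus view
# direction) into density and color, shared by every frame in the sequence.
tiny_mlp = torch.nn.Sequential(
    torch.nn.Linear(C + 3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 4),  # (sigma, r, g, b)
)
```

Under this reading, only the compact motion grid and the sparse residual grid need to be stored or streamed per frame, while the full feature grid is reconstructed on the fly, which is where the inter-frame compression comes from.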
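The abstract also mentions a sequential training scheme that maintains the smoothness and sparsity of the motion/residual grids. Below is a minimal sketch of regularizers that would serve that goal; the function name, the finite-difference smoothness term, and the weights are assumptions for illustration, not the paper's exact objective.

```python
import torch

def grid_regularizers(motion, residual, w_sparse=1e-3, w_smooth=1e-2):
    # L1 penalty keeps the residual feature grid sparse (hence compressible).
    sparsity = residual.abs().mean()
    # Finite-difference (total-variation-style) penalty keeps the motion
    # grid smooth along its three spatial axes (dims 2, 3, 4 of N,C,D,H,W).
    smoothness = sum((motion.diff(dim=d) ** 2).mean() for d in (2, 3, 4))
    return w_sparse * sparsity + w_smooth * smoothness
```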
