NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

  • Published: 4 Jan 2025

Comments • 52

  • @AdamnGif
    @AdamnGif 4 years ago +112

    I'm holding on to my paper because what a time to be alive!

    • @tHEuKER
      @tHEuKER 4 years ago +26

      What are you talking about, dear fellow scholars?

    • @CharlieYoutubing
      @CharlieYoutubing 4 years ago +2

      Hey stop, this is Doctor ...

    • @tHEuKER
      @tHEuKER 4 years ago +5

      @@CharlieYoutubing Wrong. "This is Two Minute Papers with Doctor..."

  • @kwea123
    @kwea123 3 years ago +4

    My implementation: github.com/kwea123/nerf_pl/tree/nerfw
    I'll try to make another tutorial soon (I still need to think about how to present the work beautifully). For the moment, here's a reproduction of some of the paper's results: nbviewer.jupyter.org/github/kwea123/nerf_pl/blob/nerfw/test_phototourism.ipynb
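
    (A quick aside on what "implementing NeRF-W" entails: the paper's main change to vanilla NeRF is a learned per-image appearance embedding that conditions only the color head, so lighting and exposure differences between tourist photos cannot distort the shared geometry. Below is a minimal PyTorch sketch of that idea; the layer sizes and names are illustrative assumptions, not kwea123's actual code.)

        import torch
        import torch.nn as nn

        class TinyNeRFW(nn.Module):
            """Toy NeRF-W-style model: geometry is shared by all photos,
            color is conditioned on a per-image appearance embedding."""
            def __init__(self, num_images, pos_dim=63, dir_dim=27, emb_dim=48):
                super().__init__()
                self.appearance = nn.Embedding(num_images, emb_dim)  # one code per photo
                self.trunk = nn.Sequential(
                    nn.Linear(pos_dim, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU())
                self.sigma = nn.Linear(256, 1)  # density head: appearance-independent
                self.rgb = nn.Sequential(
                    nn.Linear(256 + dir_dim + emb_dim, 128), nn.ReLU(),
                    nn.Linear(128, 3), nn.Sigmoid())

            def forward(self, x, d, image_ids):
                h = self.trunk(x)                       # shared scene features
                sigma = self.sigma(h)                   # same geometry for every photo
                a = self.appearance(image_ids)          # per-photo lighting/exposure code
                rgb = self.rgb(torch.cat([h, d, a], dim=-1))
                return rgb, sigma

    (The full method also adds a transient head with learned uncertainty to mask out passers-by; that part is omitted here for brevity.)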

  • @edwassermann8368
    @edwassermann8368 4 years ago +9

    Fantastic. Incredible work. Has so much potential!

  • @Balabok
    @Balabok 4 years ago +12

    As a senior hard-surface modeler, I'd need about 2 to 3 days to model the gate with its adjacent structures.

    • @kwea123
      @kwea123 4 years ago +4

      It would probably take 2 to 3 days or more to train this network as well. NeRF is known to be very time-consuming, and this work extends the network quite a lot. Some tricks need to be designed to improve the speed, like this one: ruclips.net/video/RFqPwH7QFEI/видео.html

  • @rafaelmartinsdecastro7641
    @rafaelmartinsdecastro7641 4 years ago +1

    The results are really impressive and I am looking forward to testing it.

  •  4 years ago +10

    This is groundbreaking.
    I would love to play with this software...

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 4 years ago

      You can; there are tutorials you can watch.

    • @Romeo615Videos
      @Romeo615Videos 4 years ago

      @@MrGTAmodsgerman Can you point me to the right group or Discord?

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 4 years ago

      @@Romeo615Videos ruclips.net/video/TQj-KUQophI/видео.html

    • @ONDANOTA
      @ONDANOTA 4 years ago

      @@MrGTAmodsgerman This points to a NeRF tutorial, while this video is about NeRF-W.

    • @kwea123
      @kwea123 3 years ago +2

      @@ONDANOTA I have successfully implemented NeRF-W recently too. I'll try to make another tutorial soon (I still need to think about how to present the work beautifully). For the moment, here's a reproduction of some of the paper's results: nbviewer.jupyter.org/github/kwea123/nerf_pl/blob/nerfw/test_phototourism.ipynb

  • @scottstensland
    @scottstensland 4 years ago +14

    Jolly good work ... now spin up a public server so we can feed in our own set of images and get back the synthesized 3D scene. Thanks ... and yes, micro$oft bought similar tech they called Photosynth; I remember seeing the TED talk on that project.

    • @MichaelRainabbaRichardson
      @MichaelRainabbaRichardson 4 years ago +1

      Get their code and check out Paperspace Jupyter notebooks. It's like spinning up a real WORKSTATION in seconds, and you only pay for the time you use.

    • @kwea123
      @kwea123 4 years ago +3

      @@MichaelRainabbaRichardson Google Colab is free...

    • @jimj2683
      @jimj2683 2 years ago

      Imagine how cool Microsoft Driving Simulator 2040 is going to be!

  • @Mnnvint
    @Mnnvint 4 years ago +3

    I wonder if, a few years from now, there'll be no more vertices, surfaces, or textures in computer graphics... it will all be one big neural radiance field.

  • @andreseo2007
    @andreseo2007 4 years ago +2

    It would be nice to be able to convert this into an OBJ file for use in 3D software like Blender, Cinema 4D, Maya, etc. What are your thoughts on that?
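
    (A NeRF is a learned density-and-color function, not a mesh, so the usual route to an OBJ is to sample the density on a grid and run marching cubes. A minimal sketch follows; query_density is a hypothetical hook into a trained model, and the grid bounds and threshold are guesses you would tune.)

        import numpy as np
        import trimesh
        from skimage import measure

        N = 256                                  # grid resolution per axis
        t = np.linspace(-1.2, 1.2, N)            # assumed scene bounds
        grid = np.stack(np.meshgrid(t, t, t, indexing="ij"), -1).reshape(-1, 3)
        sigma = query_density(grid).reshape(N, N, N)  # hypothetical model hook

        # Extract the isosurface where density crosses a hand-picked threshold
        verts, faces, _, _ = measure.marching_cubes(sigma, level=50.0)
        trimesh.Trimesh(vertices=verts, faces=faces).export("scene.obj")

    (The result is untextured; baking the view-dependent color into vertex colors or a texture map is a separate step.)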

  • @alex_hinojo
    @alex_hinojo 4 years ago +3

    Nice job!

  • @BertrandBordage
    @BertrandBordage 4 years ago

    Such an elegant method! Congratulations, you have made one of the most powerful pieces of software in history 😍
    And I know how hard it is! I worked on a similar program this summer using a homemade TensorFlow raytracer, and could only reproduce a fuzzy "3D" apricot from preprocessed 64×64 images 😂

  • @MichaelRainabbaRichardson
    @MichaelRainabbaRichardson 4 years ago +1

    It is 2020, dammit. About time Minority Report-quality scene reconstruction became a thing! Imagine then applying live video to those models. 8-) Amazing work!

  • @Romeo615Videos
    @Romeo615Videos 4 years ago

    How can I get into the beta program to assist?

  • @Bucorax
    @Bucorax 4 years ago

    So my question is: when I want a 3D model, do I still need to run SfM (or is there another way to get a model)? If yes, would you be able to combine NeRF with SfM to get a better 3D model? For example, the AI could be used for masking and cleaning the input before the SfM calculation, so you get higher consistency in your simulated "input" data for SfM. The AI could additionally calculate the best angles for the virtual photos then used as input for SfM; or, when combined with SfM, an AI could be used to actively request better images where needed (e.g. where the point cloud data is sparse). So I imagine it would be a self-correcting mechanism.
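
    (For reference: NeRF-style pipelines already lean on SfM. The per-photo camera poses NeRF trains on are usually recovered with COLMAP beforehand. A sketch using the pycolmap bindings; the exact API differs across versions, so treat the calls as assumptions.)

        import pycolmap

        db, images, out = "database.db", "photos/", "sparse/"
        pycolmap.extract_features(db, images)   # detect and describe keypoints
        pycolmap.match_exhaustive(db)           # match features across all image pairs
        maps = pycolmap.incremental_mapping(db, images, out)
        # maps[0].images holds the per-photo poses that NeRF takes as input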

  • @mike_lambert
    @mike_lambert 4 years ago

    Fantastic achievement! Well done!

  • @Johanns0r
    @Johanns0r 4 years ago +1

    This is awesome! I love it!

  • @BenEncounters
    @BenEncounters 2 years ago

    Is this something open that I could try using?!

  • @diogoalmeidavisuals
    @diogoalmeidavisuals 4 years ago +5

    So basically you're mostly optimizing for intelligent cleaning of the input data; everything else is mostly photogrammetry.
    Are you doing any predictive reconstruction of missing information, or does all information have to appear in at least one of the input sources?
    Could it fill in one of the columns based on the ones it managed to capture?
    Can it guess the shape of the roof/dome of the cathedral from exclusively low-angle input photos?
    Very interesting project.
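
    (For context: NeRF is not photogrammetry plus cleanup. It regresses a continuous radiance field and renders by volume integration, so occluded regions come out as whatever the field converged to, plausible infill at best, with no real "world knowledge". The compositing step from the paper fits in a few lines of NumPy; the array shapes are assumptions.)

        import numpy as np

        def composite(rgb, sigma, deltas):
            """Standard NeRF quadrature along one ray.
            rgb: (S, 3) sample colors, sigma: (S,) densities, deltas: (S,) step sizes."""
            alpha = 1.0 - np.exp(-sigma * deltas)                    # opacity per segment
            trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
            weights = alpha * trans                                  # contribution per sample
            return (weights[:, None] * rgb).sum(axis=0)              # final pixel color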

  • @JORDASH07
    @JORDASH07 4 years ago

    Fantastic. Any idea if/when the code will be released?

  • @ONDANOTA
    @ONDANOTA 4 years ago

    How many pics do you need?

  • @ONDANOTA
    @ONDANOTA 4 years ago

    Could this output be used in Blender? (Great content, btw)

  • @tiagotiagot
    @tiagotiagot 4 years ago +1

    Can this be used to obtain the parameters and textures for a PBR material that will replicate the appearance of the real thing under angles and lighting conditions not present in the original dataset?

  • @sugadevanarr1
    @sugadevanarr1 4 years ago +10

    Microsoft Photosynth!

    • @dykam
      @dykam 4 years ago +1

      You would need to combine it with something like Photosynth, as it appears this method specifically does not resolve the viewpoints, but requires them as inputs.

  • @atallahahmed9983
    @atallahahmed9983 4 years ago

    When do we get to try this on other images?

  • @patoman13
    @patoman13 4 years ago +1

    awesome!!!!

  • @ZiggityZeke
    @ZiggityZeke 4 years ago

    The future isn't just coming... It's here.

  • @VVLGK
    @VVLGK 4 years ago

    This is 🔥🔥

  • @ONDANOTA
    @ONDANOTA 4 years ago

    Is it safe to assume that NeRF-W has a basic knowledge of the world and guesses what is behind a pillar?

  • @LandMextrem
    @LandMextrem 4 years ago

    Please release the code

  • @Bobby.Kristensen
    @Bobby.Kristensen 4 years ago

    amazing

  • @s1rub
    @s1rub 4 years ago

    wow...

  • @slider0507
    @slider0507 4 years ago

    🤯

  • @HighWarlordJC
    @HighWarlordJC 4 years ago +1

    Saturation: 500%

  • @longlivegarybusey6409
    @longlivegarybusey6409 4 years ago

    Someone in the porn industry will find a use for this in no time.

  • @sagadeanimes
    @sagadeanimes 4 years ago

    1.000 likes

  • @arejays6701
    @arejays6701 4 years ago

    I think Elon Musk's "quantum leap in autonomous driving software" is linked to this.