I'm holding on to my papers, because what a time to be alive!
What are you talking about, dear fellow scholars?
Hey, stop. This is Doctor ...
@@CharlieYoutubing Wrong. "This is Two Minute Papers with Doctor..."
My implementation: github.com/kwea123/nerf_pl/tree/nerfw
I'll try to make another tutorial soon (still need to think about how to present the work beautifully). For the moment, here's a reproduction of some of the paper's results: nbviewer.jupyter.org/github/kwea123/nerf_pl/blob/nerfw/test_phototourism.ipynb
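(Side note for anyone wondering what the "W" adds on top of vanilla NeRF: every training photo gets its own learned appearance embedding that is fed into the color head, so per-photo lighting/exposure can vary without disturbing the shared geometry; a second transient head handles occluders. Below is a minimal PyTorch sketch of the appearance-conditioning idea; the names and sizes are illustrative, positional encoding and the transient head are omitted, and this is not the actual nerf_pl code.)

```python
import torch
import torch.nn as nn

class TinyNeRFW(nn.Module):
    """Toy NeRF-W-style field: density depends on position only,
    color is conditioned on view direction + a per-image appearance code."""
    def __init__(self, n_images: int, a_dim: int = 48, hidden: int = 128):
        super().__init__()
        self.appearance = nn.Embedding(n_images, a_dim)   # one learned code per training photo
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)            # geometry: not appearance-dependent
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + 3 + a_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, xyz, viewdir, image_idx):
        h = self.trunk(xyz)                               # shared scene features
        sigma = torch.relu(self.sigma_head(h))            # density >= 0
        a = self.appearance(image_idx)                    # appearance code of the source photo
        rgb = self.rgb_head(torch.cat([h, viewdir, a], dim=-1))
        return rgb, sigma

# Rays sampled from image 7 are rendered with image 7's appearance code.
model = TinyNeRFW(n_images=100)
xyz, viewdir = torch.randn(4, 3), torch.randn(4, 3)
rgb, sigma = model(xyz, viewdir, torch.full((4,), 7, dtype=torch.long))
```

At test time you can pick or interpolate between embeddings to re-render the same geometry under different appearances, which is what the smooth lighting transitions in the video correspond to.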
Fantastic. Incredible work. Has so much potential!
As a senior hard-surface modeler, I would need about 2 to 3 days to model the gate with its adjacent structures.
It would probably take 2 to 3 days or more to train this network as well. NeRF is known to be very time-consuming, and this work extends the network quite a lot. Some tricks need to be designed to improve the speed, like this one: ruclips.net/video/RFqPwH7QFEI/видео.html
The results are really impressive and I am looking forward to testing it.
This is groundbreaking.
I would love to play with this software...
You can, there are tutorials you can watch.
@@MrGTAmodsgerman can you point me to the right group or discord?
@@Romeo615Videos ruclips.net/video/TQj-KUQophI/видео.html
@@MrGTAmodsgerman This points to a NeRF tutorial, while this video is about NeRF-W.
@@ONDANOTA I have successfully implemented NeRF-W recently too. I'll try to make another tutorial soon (still need to think about how to present the work beautifully). For the moment, here's a reproduction of some of the paper's results: nbviewer.jupyter.org/github/kwea123/nerf_pl/blob/nerfw/test_phototourism.ipynb
Jolly good work ... now spin up a public server so we can feed in our own set of images and get back the synthesized 3D scene. Thanks ... and yes, micro$oft bought similar tech, which they called Photosynth - I remember seeing the TED talk on that project.
Get their code and check out Paperspace Jupyter notebooks. It's like spinning up a real WORKSTATION in seconds and only paying for the time you use it.
@@MichaelRainabbaRichardson Google Colab is free...
Imagine how cool Microsoft Driving Simulator 2040 is going to be!
I wonder if, a few years from now, there'll be no more vertices, surfaces, or textures in computer graphics... it will all just be one big neural radiance field.
This
Doubt that.
It would be nice to be able to use it and convert this into an OBJ file to use in 3D software like Blender, Cinema 4D, Maya, etc. What are your thoughts about that?
I don't think it works that way
Nice job!
Such an elegant method! Congratulations, you have made one of the most powerful pieces of software in history 😍
And I know how hard it is! I worked on a similar program this summer using a homemade TensorFlow raytracer, and could only reproduce a fuzzy "3D" apricot from preprocessed 64×64 images 😂
It is 2020, dammit. About time Minority Report-quality scene reconstruction became a thing! Imagine then applying live video to those models. 8-) Amazing work!
How can I get into the beta program to assist?
So my question is: when I want a 3D model, do I still need to run SfM (or are there other ways to get a model)? If yes, would you be able to combine NeRF with SfM to get a better 3D model? For example, the AI could be used to mask and clean the input before the SfM calculation, so you get higher consistency in your simulated "input" data for SfM. The AI could additionally calculate the best angles for the virtual photos that are then used as input for SfM, or, when combined with SfM, an AI could be used to actively request better images where needed (e.g. when the point cloud data is sparse), so I imagine it would be a self-correcting mechanism.
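(On the "are there any ways to get a model" part: a trained NeRF already gives you a density field, so one common route to a mesh is to evaluate the density on a 3D grid and run marching cubes on it, with no extra SfM meshing step. A minimal sketch below; the `density(xyz)` function is a hypothetical stand-in for a trained model, the grid bounds and iso-level are scene-dependent guesses, and this is not any particular repo's API.)

```python
import torch
from skimage import measure  # marching cubes lives here

# Hypothetical trained NeRF-like model: density(xyz) -> sigma per 3D point.
# Dummy stand-in (a soft sphere) so the sketch runs on its own.
def density(xyz: torch.Tensor) -> torch.Tensor:
    return torch.relu(1.0 - xyz.norm(dim=-1))

N = 128                                            # grid resolution per axis
lo, hi = -1.2, 1.2                                 # assumed scene bounds
t = torch.linspace(lo, hi, N)
grid = torch.stack(torch.meshgrid(t, t, t, indexing="ij"), dim=-1)

with torch.no_grad():
    sigma = density(grid.reshape(-1, 3)).reshape(N, N, N).numpy()

# Extract an isosurface; the threshold needs tuning per scene.
verts, faces, _, _ = measure.marching_cubes(sigma, level=0.5)
verts = verts / (N - 1) * (hi - lo) + lo           # voxel indices -> world coords

# Write a plain OBJ that Blender/Maya/etc. can import.
with open("nerf_mesh.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces:
        f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")  # OBJ is 1-indexed
```

NeRF's density is not a clean surface, so a mesh extracted this way is usually noisy and needs cleanup, but it answers the "can I get an OBJ out of this" question in principle.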
Fantastic achievement! Well done!
This is awesome! I love it!
Is this something open that I could try using?!
So basically you're mostly optimizing for intelligent cleaning of the input data; everything else is mostly photogrammetry.
Are you doing any predictive reconstruction of the missing information, or does everything have to appear in at least one of the input sources?
Could it fill in one of the columns based on the ones that it managed to capture?
Can it guess what the roof/dome of the cathedral looks like from exclusively low-angle input photos?
Very interesting project
Fantastic. Any idea if / when the code will be released?
how many pics do you need?
could this output be used in Blender? (great content btw)
Can this be used to obtain the parameters and textures for a PBR material that will replicate the appearance of the real thing under angles and lighting conditions not present in the original dataset?
Microsoft Photosynth!
You would need to combine it with something like Photosynth, as it appears this method specifically does not resolve the viewpoints, but requires them as inputs.
When do we get to try this on other images?
awesome!!!!
The future isn't just coming... It's here.
This is 🔥🔥
Is it safe to assume that NeRF-W has a basic knowledge of the world and guesses what is behind a pillar?
Please release the code
amazing
wow...
🤯
Saturation: 500%
Someone in the porn industry will find a use for this in no time.
1.000 likes
I think Elon Musk's "quantum leap in autonomous driving software" is linked to this.