That was a brilliant video, thanks! Perfect balance of complexity and explanation. I never comment much, but definitely keep doing more of this!
Best of luck with growing your channel!
Thanks a lot! This made my day! :)
Really nicely explained, made the basics clear. Would love to see more content around 2D photo to 3D scene conversion (could you make a video on nvdiffrec, please?)
Glad you liked it… thanks for the suggestion! That’s definitely a good idea for a video. My next video will probably be on the Segment Anything paper, but I’ll add this one to the to-do bucket!
Interesting! Thanks for the explanation. I've been playing with NeRF on Luma AI, and now I'm beginning to understand how it does what it does.
Thank you very much for distilling complex novel research into understandable content. I greatly appreciate it.
Yeah, man, this is an awesome video! Clear and informative, without the bullsht) Keep going.
Could Google use the "brute force" voxel method if they wanted to make Street View 3D? Or would even they have too little storage for that?
@AVB When will it be open to the public?
Thanks. Can you briefly explain the hash encoding logic? What are the corners, and what do the different numbers represent?
Kinda tricky to explain here; I'd suggest trying the Instant NGP link in the description.
nvlabs.github.io/instant-ngp/
The paper is a great source to learn about it, and it also has a 20-minute presentation link.
tom94.net/data/publications/mueller22instant/mueller22instant-gtc.mp4
Hope that helps.
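For anyone who'd rather see the idea in code: here's a rough, toy sketch of what the multi-resolution hash encoding does. All the sizes (levels, table size, feature width) are small illustrative values I picked for readability, not the paper's defaults, and the tables here are random instead of trained. The "corners" are the 8 integer corners of the grid cell containing a point; each corner is hashed into a feature table, and the features are trilinearly interpolated and concatenated across levels.

```python
import numpy as np

# Toy sketch of Instant NGP's multi-resolution hash encoding.
# Sizes are illustrative; the real thing uses far larger tables and
# L = 16 levels, with the tables trained jointly with a tiny MLP.
L = 4              # number of resolution levels
T = 2 ** 10        # hash table entries per level
F = 2              # feature dimensions per table entry
N_min, N_max = 4, 64
b = (N_max / N_min) ** (1.0 / (L - 1))   # per-level resolution growth

rng = np.random.default_rng(0)
tables = rng.normal(scale=1e-4, size=(L, T, F))  # trainable in real use

# Large primes for the spatial hash (first coordinate is left unscaled).
PRIMES = (1, 2_654_435_761, 805_459_861)

def hash_corner(ijk):
    """XOR the prime-scaled integer corner coords, then mod the table size."""
    h = 0
    for c, p in zip(ijk, PRIMES):
        h ^= int(c) * p
    return h % T

def encode(x):
    """Map a 3D point in [0, 1)^3 to concatenated per-level features."""
    feats = []
    for lvl in range(L):
        N = int(N_min * b ** lvl)         # grid resolution at this level
        pos = np.asarray(x, dtype=float) * N
        base = np.floor(pos).astype(int)  # lowest corner of enclosing cell
        frac = pos - base                 # offset inside the cell
        acc = np.zeros(F)
        # Trilinear interpolation over the 8 corners of the cell.
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = base + np.array([dx, dy, dz])
                    w = ((frac[0] if dx else 1 - frac[0])
                         * (frac[1] if dy else 1 - frac[1])
                         * (frac[2] if dz else 1 - frac[2]))
                    acc += w * tables[lvl, hash_corner(corner)]
        feats.append(acc)
    return np.concatenate(feats)  # shape (L*F,), fed into a small MLP

print(encode([0.3, 0.7, 0.1]).shape)  # (8,)
```

So the "different numbers" at each corner are just indices into a learned feature table, and collisions at fine resolutions are resolved implicitly by training. Check the paper for the exact hash and defaults.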
Great video! Next time, level the audio with Audacity etc. before uploading though, because you're way too quiet! Subbed.
When do you think 2D-to-3D AI will be able to render the entire Earth in 3D?
It seems like there is enough data out there on the internet; the hard part is just getting the AI to use it all. Also, imagine if the AI could learn from video (or from haptic sensors) and incorporate a physics model too, so the 3D world becomes something you can interact with.
Maybe it could become a giant simulator of the planet and be used in games, or for anything really.
Please explain the multi-resolution hash encoding, or give me some reference to learn it from.
Try the Instant NGP link in the description.
nvlabs.github.io/instant-ngp/
The paper is a great source, and it also has a 20-minute presentation link. Hope that helps.
Wow, amazing!
What is the end goal with this tech?
Man, that’s a great question. As someone not directly involved with NeRF research, here’s my two cents:
Above all, NeRF provides a way to store 3D scenes in a compressed format (as NN weights) with fast query times (I-NGP) and multi-scale photorealistic renders. The applications could be in so many areas: virtual scene synthesis, 3D mesh creation from images, 3D modeling/printing, visualization, video editing, etc. The apartment rendering itself could be a nice addition to many property/realtor websites too, but that’s just scratching the surface.
An undercover agent/drone records the video, which is then used by special ops to get a feel for the layout. An agent could snitch on a cartel hideout. It could also be used to preserve a crime scene.
@@zerog4879 I was hoping more for 3D representations of actual real places: using MetaHumans or any character to animate and produce stories/documentaries, or simply a memory. 3D modeling the entire Earth would be almost impossible without AI work.
Like everything else in human history: someone invents it, and people figure out how to improve and use it later.
@@Instant_Nerf Yeah, something like a hyper-realistic Metaverse?
Is zip available for general public use?
I don’t think it’s released publicly yet.
Great explanation. However, as an end user (Virtual Tours photographer), is this going to be available anytime soon? Any software to download and play around with? I tried Luma AI online, but it wasn't great for what I needed. Appreciate any feedback. Thanks.
As far as I know, they haven't open-sourced or released it yet, but there are other implementations out there that you can use (like developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/).
You might wanna look on Reddit for this; someone should be able to help. I'd start with r/photogrammetry.
@@avb_fj Thank you. I will check out your suggestion.