Photogrammetry or NeRF? - Creating 3D Models with a Smartphone
- Published: 28 Sep 2024
- #3dart #3dmodeling #photogrammetry #nerf
In this video, we'll be taking a closer look at two popular methods of 3D scanning objects: photogrammetry and NeRFs (neural radiance fields). We'll be using the Polycam app for photogrammetry and Luma AI for NeRFs, comparing how they perform under different circumstances. This video is not a scientifically rigorous comparison, but a showcase of the new technology and what it's capable of. So, the question we're asking is: is photogrammetry dead in the face of NeRFs?
Polycam: poly.cam/
Luma AI: lumalabs.ai/
Twitter accounts worth checking out:
@SirWrender
@jonstephens85
@cpheinrich
🎥 Gear Used:
Panasonic Lumix S5: a.co/d/fk5y5f2
Panasonic Lumix S 20-60mm Lens: a.co/d/aXia2K2
NEEWER CL124 LED Light: a.co/d/9tKJlp6
Joby Wavo Pod Mic: a.co/d/3wVWwVk
Did you know I have AI-Generated LUTs? They're free! Download them here: ko-fi.com/s/f3a6a98fb3
Thanks for the comparison! Interested to see NeRF applied to drone 3D modeling, with or without the need of accurate GPS data.
Hi,
Is it possible to export in STL format and then upload it to a 3D printer?
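On the STL question above: scanning apps like Polycam typically offer mesh exports, but STL stores bare triangle geometry only, so any color or texture from the scan is lost. As an illustration of why, here is a minimal binary STL writer using only the standard library (the format has no fields for appearance data at all; this is an illustrative sketch, not code from the video):

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles (each a tuple of three (x, y, z) vertices) as binary STL.

    Binary STL layout: an 80-byte header, a uint32 triangle count, then per
    triangle a normal (3 floats), three vertices (9 floats), and a uint16
    attribute field. Note there is nowhere to store textures or colors.
    """
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                           # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))    # triangle count
        for v0, v1, v2 in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))  # normal (slicers recompute)
            for v in (v0, v1, v2):
                f.write(struct.pack("<3f", *v))
            f.write(struct.pack("<H", 0))             # attribute byte count
```

Before printing, it's also worth checking the exported mesh is watertight, since scans often have holes.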
Amazing video! Thank you so much for this. I watched every second of it; it taught me more than ChatGPT ever could.
Hey, nice video, but NeRFs don't create reflections and shiny surfaces the way one might think. What's actually happening is that the reflection is another world built within a world. The water, for example, isn't a flat surface; from the right angles it looks like a shiny reflection, but if you fly towards it you'll see that it's actually just an upside-down world with no water surface.
Yes, you're right. It's just faked.
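The "world within a world" effect comes from how NeRFs model appearance: the density field depends only on position, while the emitted color also depends on the viewing direction, so a "reflection" is just view-dependent radiance that happens to look like mirrored geometry. A toy volume-rendering sketch along one ray (the density and color fields here are made-up placeholders, not any real NeRF implementation):

```python
import numpy as np

def density(x):
    # Toy density field: a solid slab between z = 1 and z = 2 (illustrative only).
    return np.where((x[..., 2] > 1.0) & (x[..., 2] < 2.0), 5.0, 0.0)

def color(x, d):
    # View-dependent color: NeRF conditions radiance on the view direction d,
    # which is how shiny highlights and apparent reflections get modeled.
    return 0.5 + 0.5 * np.stack([d[..., 2]] * 3, axis=-1) * np.ones_like(x)

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction            # sample points along the ray
    sigma = density(pts)                             # density: position only
    rgb = color(pts, np.broadcast_to(direction, pts.shape))
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)             # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0), weights.sum()
```

Because color varies with direction, the same point in space can render as "sky" from one angle and "water" from another, so flying through the surface reveals there is no surface, just as the comment describes.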
If I can ask for a suggestion: I'm working on a personal project in which I want to 3D print an object (a 1:1 scale electrical adapter) from a 3D model obtained using photogrammetry (turntable, smartphone, Meshroom, Blender). Do you think NeRF tech could be a better choice?
Can I ask what material the object is? Is it very metallic or quite matte in texture?
@robnjk I was thinking about spray painting it to give it a matte surface.
Could I use Luma AI to create an environment, then put myself into the environment using a green screen?
You totally could!
@robnjk Do you have a tutorial on this?
I'd say that material transfer from a NeRF to a texture is just a small step away...
I really wish I could get good 3D models out of these.
Link me what you’ve gotten! I’d love to see
Get to the point.
First saw your DaVinci 18.5 video and now this. As a GIS worker in aerial photogrammetry and lidar, with a hobby in content creation, you got a sub from me lol
Oh dang, looks like I found my audience lol
Hi bro, I'm a 3D artist. I'd like to know: have you tried taking a Luma AI capture into Blender or Unity?
Yeah! So I've tried bringing them into Blender, but the issue is that all the dynamic lighting effects and textures get baked down into a standard mesh. They've recently updated to work with Unreal Engine 5, though, so the NeRFs should work accurately in that. I haven't tried it out yet.
Loved the video and have subsequently subscribed. I'm thinking about using one of these methods (or both) to shoot pics/videos with my Sony cameras, DJI Osmo Action 3, and DJI Air 2S drone as a little side hustle for people wanting to show off their real estate listings, or companies wanting to see how their commercial construction projects are coming along. Based on this info, which of the two methods do you think I should try? I'd also love any additional feedback you could provide (such as other tools/apps I should be looking into) to achieve my goals. Thanks.
Why isn't there an AI that uses both to get the most geometric information and the lighting effects too? Like NeRF and photogrammetry combined.
Golden question. I suspect NeRFs will continue to get better and eventually match the quality of photogrammetry.