Tip: Merge the vertices on the model and you can sculpt inside a 3d package without the mesh breaking apart.
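(If you want to script that merge step in Blender, here is a minimal sketch using Blender's bundled bpy module. The file path and merge distance are only illustrative values, not from the video.)

```python
# Run inside Blender's Scripting tab (uses the bundled bpy module).
import bpy

# Illustrative path to a mesh exported from Luma AI as GLB.
bpy.ops.import_scene.gltf(filepath="/path/to/luma_export.glb")

# Merge vertices that sit closer than the threshold so the mesh
# no longer breaks apart when sculpting ("Merge by Distance").
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles(threshold=0.0001)  # merge distance in scene units
    bpy.ops.object.mode_set(mode='OBJECT')
```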
Oh wow! Great stuff coming. And once we can turn that into the building's IFC components, we're all set!
I got to one of your videos while looking for some turntable alternatives, and here I am watching the 3rd one in a row that has nothing to do with what I was looking for at the beginning. Well done mate, you have very engaging and informative videos 👍
Keeping us all up to date and realising that 360 isn't just a gimmick any more!
Thank you for this. Even the rough models will be great references in terms of scaling.
And this is the real content ♥️
Love the car example for typically "impossible" camera moves through windows. I do wonder if putting the windows up and down as the camera moves through may trick it into keeping the windows up for the NeRF scan, allowing you to move through the passenger windows in the final animation.
Good idea. You should try that. Although I think the car would then need to be scanned twice: once with the windows open and once with them closed, and then these parts of the model combined somehow, for example in Unreal, since NeRF creates a lumpy mesh if something moves during the scanning.
Excellent video, appreciate the work you put into it!
This is fantastic work Olli, keep up the good work
This is just brilliant, thank you for the great explanation. I wonder if this method would be useful to scan a bigger environment like a whole street from a moving car to use as a backdrop in a studio recording.
Unironically the lump looks like the orb from Donnie Darko 😂
Great info. Thx. Do you think this setup would be good to create a 3D model of a large place, like a church for example?? Or do you recommend another type of setup? Thank you!
Excellent content, keep going!
Fascinating! Can you view the generated models in a VR headset, such as the Quest 2? In that case, can you walk around inside the model? That would be a perfect application for this!
Sure, it can be done in Unreal. There is a video from Bad Decision Studio where the guys test how NeRF models run in VR in Unreal Engine. Check it out: ruclips.net/video/bKt2oVTw900/видео.html
Thank you Olli!
Please, can this be used to do room interiors? And then these be used as minute data points for comparison?
Which is better for you, Postshot or Luma AI?
I'd say Postshot, because you can train your model more accurately than in Luma AI and you can preview the training process live.
Great work thank you for the info :)
very interesting!
Please let me know how I can get high quality scans like yours.
You mentioned that in the middle of the video, you export the HD video instead of the 360 video and upload it to luma ai. However, in the subsequent scene where the two containers are painted, you used equirectangular video.
Which video format would you recommend based on your experience so far?
Also, did uploading the insv file directly work for you? I'm using ONE X2, but it doesn't work because it doesn't have leveling function.
Yes. I recommend that you always edit your material in Insta360 Studio. Right now I get much more accurate and better NeRF models when I edit the video so that the target I shot stays in the middle of the frame for the whole video. Then I render it out as a normal MP4 in HD resolution and upload that to the Luma AI service as a normal video. The second option is to upload the full equirectangular video (also in MP4 format), but I have noticed that a NeRF trained from equirectangular video does not produce as accurate a model as one where the target is centered. Perhaps I could make another video where I go more deeply into these methods.
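(For anyone who wants to approximate that reframing from the command line instead of Insta360 Studio, a minimal sketch using ffmpeg's v360 filter, called from Python. The fixed yaw/pitch angles and FOV are only example values and this is not the keyframed "keep the target centered" edit described above.)

```python
# Minimal sketch: reproject an equirectangular 360 MP4 to a flat
# (rectilinear) HD view with ffmpeg's v360 filter before uploading.
# Requires ffmpeg on the PATH; file names and angles are examples.
import subprocess

def reframe(src: str, dst: str, yaw: float = 0.0, pitch: float = 0.0) -> None:
    vf = (
        f"v360=equirect:flat:yaw={yaw}:pitch={pitch}:"
        f"h_fov=90:v_fov=60,scale=1920:1080"
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", vf, "-c:v", "libx264", "-crf", "18", dst],
        check=True,
    )

if __name__ == "__main__":
    # Point the virtual camera roughly at the target, then upload the result.
    reframe("full_360.mp4", "reframed_hd.mp4", yaw=45, pitch=-10)
```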
Thank you for your detailed response. Looking forward to another explainer video.
When scanning a place, do you scan the same place over and over again at different heights? Or is it a one time thing?
Yes. When I'm scanning I record everything at once into one video file. Usually with a 360 camera you don't need to make many walk-arounds of your object at different heights, because those wide lenses see most of the surroundings at once. With the selfie stick it is very easy to reach and capture all corners of your object.
Gotcha! Thanks a lot!
very good info
Hi Olli, great content. I'm curious whether this will work with the Insta360 Sphere, and what kind of results you would get?
Sure, it works. I have tried that with the Sphere on my drone, but it is not that convincing when rendered out of Luma AI as an equirectangular image. When they get this new Gaussian Splatting method working for 360 images it will be perfect. We just need to wait a little bit, because it's a very new technique.
@@OlliHuttunen78 Thank you. Its mind-boggling technology 🔥
Thanks for sharing
great informative video, just what I needed thanks!
Hi! I'm wondering, is the video you uploaded the original 360 footage, or re-cut footage from one side of the camera?
Yes. I made tests with both. The original full equirectangular footage does not give as good a result as video cropped from the full 360 video. Luma works better if you can go around your target.
Hello! Can the Luma AI phone scanning software scan a given item at a 1:1 scale, so that it knows the dimensions of the scanned item, e.g. height and width? I want to model a separate part based on the scanned item that would match the first one. Is that possible?
Nicely described video! Your interests match mine, so, just subscribed! Bring us some more goodies. 👍
Hi, is the final result downloadable?
If I take a 30 sec clip with a 360 camera, does it take up a lot of resolution or memory?
I just want to video 4 people next to each other, similar to your car lady.
Thank you
Thanks for the video. I followed your tips, but when I import the model into Blender it only imports a small chunk of the cropped scene. In Luma AI I have adjusted the crop to cover the whole geometry, but when I export to .gltf it exports the cropped geometry. Is this a limitation of the free service? I hope I have explained it properly.
Yes. I noticed that Luma only exports cropped models right now if you export GLB or OBJ. If you export to Unreal you get both versions: the full model with the background and the cropped one. I guess this needs to be asked directly from Luma Labs, whether they could include the full model for mesh exports as well.
Thank you
Great! Thanks
Thank you very much for the tutorial!! I have uploaded a 360 video as equirectangular to the Luma web service, filmed with the camera always held vertically (the video is not a walk around an object, it is a free walk through an outdoor space). Luma processes it and creates the NeRF model, but with significant noise, cuts and cloud-like artifacts. Likewise, when I create a Reshoot in free form and render it, the results are still of poor quality. Do you have any suggestions to improve this? Does the source 360 video have to meet any requirements? Thank you so much!!
Yes. I have also noticed that Luma does not make very good models from 360 equirectangular footage where you just walk in a straight line. It will create something, but Luma is mostly built around circular movement where you move around something. You also should not rely on what you see in the web browser when you rotate the model in 3D mode; it is only an approximate preview. A much better result appears when you render videos out of the Luma service. That is when the actual NeRF model can be seen, and it often looks much better than the model you see in the web browser. Another tip is to download the model into the Unreal game engine and see how the volume model looks there. All the other options, where you download the model in GLTF, USD or OBJ format, convert the NeRF volume to polygons and it loses its quality; as polygons the model is not that good. As for the 360 camera settings, I don't have any special tips. Just don't try to upload clips that are too long, where you walk something like a 100-meter route in the video. Luma works best when the video is shot in a small area.
@@OlliHuttunen78 Thank you very much Olli for the answer. Yes indeed, it seems that Luma responds very well to scanning objects when moving around them, and not on more linear routes. In my case, the video source is very short, only 17 seconds, and taken with a Ricoh Theta V camera. The final video with the route animation in the Reshoot and the 3D model (gltf) generated by Luma are both very bad. I'll keep trying different alternatives to see if I can get better results. Your channel is the only one that deals with this important topic. Thank you very much for your help!!
What is the accessory that you used with the Insta360 camera? I saw a connector attached to a rig.
It is a power selfie stick. There is a battery in the selfie stick that can give extra power to the 360 camera via USB, and you can also press the record button and control the camera from the stick.
@@OlliHuttunen78 oh right !! thank you so much mate
I will have to try this with my X3
Have you tried it? Can you share it with me?
What are your PC specs, sir?
Hello brother, did you shoot a 360 video, or were you shooting consecutive pictures to then upload to Luma AI?
I shot a 360 video.
@@OlliHuttunen78 Thank you brother, I will try to replicate it by following your video. I 3D print, so maybe I can scan some figurines and convert them to 3D printable STLs. Thank you.
@@JAYTHEGREAT355 I also recommend checking out the 3Dpresso web service 3dpresso.ai/. It can also make 3D models from video, and they turn out to be much more solid and suitable for 3D printing than Luma AI models. When a NeRF model is turned into a polygon model it can be very broken, and it takes a lot of work to make it into a solid STL for 3D printing.
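(For a quick sanity check of that "is the mesh solid enough to print" problem, a minimal sketch using the trimesh Python library, which is not mentioned in the thread. The file names are placeholders, and real NeRF meshes usually still need manual cleanup as described above.)

```python
# Minimal sketch: check whether an exported mesh is watertight, drop floaters,
# try a basic hole fill, and write an STL for 3D printing (trimesh library).
import trimesh

mesh = trimesh.load("luma_export.obj", force="mesh")  # placeholder file name
print("watertight before repair:", mesh.is_watertight)

# Keep only the largest connected piece to remove disconnected floaters.
pieces = mesh.split(only_watertight=False)
if len(pieces) > 1:
    mesh = max(pieces, key=lambda m: m.area)

mesh.fill_holes()  # basic hole filling; complex gaps still need manual work
print("watertight after repair:", mesh.is_watertight)

mesh.export("print_ready.stl")  # placeholder output name
```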
🤗🤗🤗
Hi Olli, and thank you for this interesting video. Do I get it right that the objects being recorded should be static, and that the whole thing will not work when you have moving objects? For instance, would it be possible to capture a 360 video of a scene in which people dance? I guess not.
Thanks
Yes. This scanning method works only with static objects and surroundings. If something moves or passes by (like a bike or a car in the background) while you are scanning, the AI tries to ignore it and remove it from the radiance field. It's a similar effect to taking a photo with a very long exposure time. So you cannot make a very good 3D model with this method from a scene where people are dancing.
@@OlliHuttunen78 Thank you for the response.
I was thinking about the possibility of 3D modeling important events such as weddings. If every guest plays along, one could create a memorable 3D model of the event. :)
Another question:
Is there a special media player / tool to view the exported 3D model? Can a normal user easily view the model, or do they need to install specific and complex tools?
Yeah! It could work to model that kind of group picture at a wedding, if everybody can remain in place for a couple of minutes while you scan the moment with the 360 camera. You can easily share a link from Luma AI, and people can watch the rendered NeRF video and rotate the 3D model in a web browser. It works on mobile and on the computer. You don't have to log in or download any kind of special app or plugin for that, and the model can also be embedded on any webpage. Those are the normal features of this kind of cloud service. Luma AI is a great service.
@@OlliHuttunen78 thanks a lot Mate. Need to Test it.
If this app wasn't cloud based I would have loved to try it, but
Thanks for this video. Time to dust off my 3d camera
Great video thanks for sharing and thanks and congratulations to your partner who puts up with your tests 😂
Only iPhone?
It has an Android app now
ruclips.net/video/PclwALPiqiQ/видео.html
Was your video done before the update to remove the floaters? Or were they still present during your tests at the 6:30 mark?
My video was made after that Luma AI floaters announcement. But it should be noted that I presented the model in preview mode on Luma's web pages. It doesn't tell the whole truth. The final result of the NeRF model will only appear when the camera animation is rendered. There are often significantly fewer floaters to be seen. But this is quite secondary now that Gaussian Splatting technology has replaced everything and the older 3D models produced with NeRF technology are not talked about very much anymore. In that sense, many things in this video are already outdated information.
If you use a Pro iPhone that has a LiDAR sensor, the result will be much more detailed than Luma AI...
I hate it when people put their feet in dirty shoes on top of seats where other people are going to sit afterwards and get their pants dirty because of inconsiderate, filthy people who have climbed on the seat with their dirty shoes. If such people don't understand it, then they should be punished by having to clean the seat every day for a week.