Dang, well played! You beat me to it! I've been wanting to make a video about scanning splats using a "3 cameras on a pole" method, although I was surprised to see you using 3 different cameras. I thought that couldn't work: the math for calculating the camera positions typically requires a consistent FOV between every shot, and more. I was surprised Postshot managed to make anything! Fun experiment! Thanks for posting!
Well, thank you very much Wren! Nice that you found my videos. Yeah, Postshot now has a feature where it understands different FOVs by default. That means it should also be possible to combine footage from a drone with footage from this scanner stick, for example. That is what I'm going to test next.
Any lead on SMERF? I saw that paper & video and felt it was extremely usable for high-fidelity hero shots. ruclips.net/video/zhO8iUBpnCc/видео.html
I'm trying to make a high-budget-looking film on a shoestring indie budget. I need Wren and you to figure this out with the people behind this paper, and combine it with Postshot, Relight AI, and Jetset Cine. Relight AI is where an environment video can be used to match the lighting so actors can be placed and filmed into it. Relight AI gave a demo where actors can be filmed from 2 angles to make a three-dimensional video, so it makes both the environment and the actor 3D, giving camera flexibility in post as well. Please, you guys, make this happen so everyone in the world can benefit. Also, is there a low-tech orange-matte chroma solution? One which can be implemented at remote shoot locations as well?
eeeey the man himself!
Avata 2... is God mode for splats
Very well done! Interesting approach to 3D scanning. Please do continue! Your research is very helpful!
Thank you for your research. These are all topics me and my team are experimenting with, and every video you upload has been super helpful!
I have a similar setup with a 3-meter pole and three GoPro Hero 7s capturing high-resolution photos every 0.5 s (timelapse mode), and it is somewhat effective. Last time, I went to the beach to scan some rocks using the helicopter method (instead of holding the pole vertical, I put it slightly horizontal and did circles and circles and circles until I had scanned everything).
This is brilliant! The ultimate camera rig might be to use three RS 1-Inch cameras. The post-processing would be intense, though!
This man is truly playing in the wild west world of Splatting
Awesome experiment! Love this kind of video!
I've been working on a 3DGS scan solution to scan people at trade shows (especially cosplayers) since the beginning of the year. Initially, I used two different iPhones (1x 12 Pro and 1x 15 Pro), each with a smartphone gimbal on an extendable pole, but I couldn't get good results. Then I came up with the idea of using two DJI Osmo Pocket 3 cameras on my rig, and with two passes around the subject, I got great results. However, I always had to stick with just two cameras on the rig due to budget constraints. I was practically exploding with excitement when I saw your rig. There's no information online about such a camera rig, so I had to develop and build one myself. I had considered using an Insta360 One R for the third, upper camera before, but it never worked properly with the old method in post-processing. Now, with the new 3DGS solver in the latest Postshot version, it's possible. But I have a question: I can't download that version or update my existing version to yours. Where did you get this build of Postshot, and when will the general public have access to it? Thanks for your work. You are amazing and your videos are extremely helpful :) And one more thing: the DJI Osmo Pocket 3 is a 3DGS scanning monster. Really. Stabilised footage is key for a great scan, the motorised gimbal can also be remotely controlled via the app, and you can use old smartphones as monitors to see what you are capturing while recording ;)
Hey. It is interesting to hear that you have tested so many cameras as well. I also have the DJI Osmo Pocket version 1. It is a great small camera with a gimbal, and I need to do more tests on how it can be utilized for 3D scanning. Postshot has a very nice Discord community. The developer publishes the latest versions there, but I'll also put the link here where the latest builds can be downloaded directly. Go here: www.jawset.com/builds/postshot/windows/
@@OlliHuttunen78 Thank you very, very much for the hint about the link! And regarding the Osmo Pocket 1: unfortunately, the sensor in that camera is not the best. The Osmo Pocket 3 has a large 1-inch sensor and an impressive dynamic range for such a small camera. Additionally, the Pocket 3 has a 1:1 3K recording mode where most of the camera's sensor is used, capturing in a kind of super fisheye mode but with built-in fisheye correction. Plus, log color profiles like D-Log allow for perfect post-processing and color grading. And since the camera is on a gimbal, you can use object tracking to always keep the object in the center. I love the Osmo Pocket for 3DGS :)
This looks like a great idea. You could easily buy three identical, cheap second-hand GoPro-type action cameras and attach them to a monopod for a quick scanning setup.
Great idea! I'm going to try it with my Insta360 X3 and two DJI Osmo Action cameras.
Nice exercise! I'm sure the problem is the lens correction. I haven't yet tried generating splats with wide-angle lenses, but I got incredible results with just one video (3 rounds, 0:54 long) of an object. Maybe the same method with a standard lens could make a difference... Have you tried that?
Thanks a lot for sharing your experience. It's quite interesting.
So cool. Thanks again for all this R'n'D work regarding #GaussianSplatting
Great video :)
It would be nice to have a "land drone" that you could command to circle something with the pole sticking up :)
Perfect timing on your video; I was just discussing making a "scanning stick" with multiple cameras with a friend. I built one in 2020 during the pandemic, using a 3-meter carbon fiber pole and 4 Sony A6000 cameras with 16mm prime wide-angle lenses. It was bulky, with lots of wires, and everything ran back to a micro PC for control (it took individual shots when I pressed a shutter release). At the time I only had Reality Capture for processing, and it obviously took forever with 4x the content. Seeing how everyone's using video, I've been thinking about a rig similar to yours, since I have quite a few 360 cameras and some action cams.
For object scanning (rotating around an object) I'd avoid 360 cameras, just to keep the workflow faster (less fiddling with the viewport). For room scanning, 360s make it much faster, provided you can mask yourself out (perhaps a generic mask that's big enough to cover any movement while holding the rig at arm's length), or just ignore the one cube face you'd end up in (as you showed in one of your previous videos).
Great one, Olli! I was going to try this method with 2 RS 1-Inch cams, but for environments. I wasn't sure how Postshot would cope if I added a standard SLR into the mix for super-high quality but with slightly different focal lengths and tone. You've yet again answered my question before I started testing! Thanks again, Olli.
Would have been interesting to see a comparison against one camera to scan the plane. Regardless, loved seeing the results from the experiment!
Hello, thanks for the video, very well explained! I have a question, on the Jawset website I can only download version 0.3.302 of PostShot. Where can I get the most recent version that you used that has Splat MCMC?
Postshot has a very nice Discord group where the developer, Jascha, releases links to the latest builds and updates of the program. You can also find the download page for these builds here: www.jawset.com/builds/postshot/windows/
You know... I think when Postshot is released, it will "scan our bank accounts empty"... I hope they don't screw it up with some Autodesk-style subscription or unaffordable-license kind of stuff.
Thank you, great video! Would Luma AI produce a good model if you shot with 3 cameras of the same type?
Perhaps. Luma does not handle images taken with different FOVs well. To make this work in Luma AI, it should have been scanned with the same type of cameras.
I know that in order to obtain quality data, I need to capture it from a certain angle at a certain distance, but when capturing data with a three-camera pole, there seems to be a problem because the angle of each camera is different. Wouldn't this have a big impact?
I wonder if it's possible to program a flight formation of three drones to scan a building using the same concept as the stick.
Wow, this is amazing! Thank you for sharing this brilliant method and result! I'm surprised that you can use 3 different cameras. Is this the result of using MCMC, or just a new Postshot feature? I heard that Reality Capture can handle multiple FOVs, but not for NeRF or 3DGS. This is amazing.
It is made with Postshot using the MCMC profile. It makes the camera tracking and Gaussian training a bit better than the regular ADC method.
I like it, although it seems like one could 3D print a setup for cheap, light cameras and use something from GitHub for the splat modeling. Like three old phones, then crop in DaVinci Resolve and port to some local program? I'd want to do that, sounds like fun. IDK. Thanks for the video.
How about using GoPros instead of the Insta360 cameras? You only use one side/lens of the camera anyway, and it might be cheaper. I'm currently researching how to build a GoPro array to do Gaussian splatting on the cheap, but I don't know where to look to find the best distance between cameras. I wonder if you have a suggestion :)
I believe the errors come from the different distortion of each view. Even with correction, it's impossible to get the same curvature for each view, so the gradient descent cannot converge.
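A tiny sketch of that intuition (not from the video; the distortion coefficients below are made-up illustration values, and this uses a toy one-parameter radial model rather than any camera's real calibration): if two lenses have different curvature but you apply one shared correction, each camera is left with a systematic residual that the splat optimizer then has to absorb as blur or floaters.

```python
# Toy radial distortion model: r_distorted = r * (1 + k * r^2).
# Two cameras with different (assumed) coefficients k, one shared correction.

def distort(r, k):
    """Radial distance after lens distortion with coefficient k."""
    return r * (1 + k * r**2)

def undistort(r_d, k, iters=20):
    """Invert the model by fixed-point iteration: r = r_d / (1 + k * r^2)."""
    r = r_d
    for _ in range(iters):
        r = r_d / (1 + k * r**2)
    return r

k_cam_a, k_cam_b = -0.10, -0.25   # two lenses, two curvatures (assumed values)
k_shared = -0.175                 # one "average" correction applied to both

r = 0.8  # a point near the image edge (normalized radius)
residual_a = abs(undistort(distort(r, k_cam_a), k_shared) - r)
residual_b = abs(undistort(distort(r, k_cam_b), k_shared) - r)

# Both residuals stay clearly nonzero; with the matching per-camera k,
# the round trip recovers r almost exactly.
print(residual_a, residual_b)
```

Per-camera intrinsics (which the newer solvers estimate for each input source) remove exactly this residual, which would fit the observation that mixed-camera rigs now work.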
You should try aligning the images in reality capture first, then moving into PostShot. I'm seeing much better results with this method.
I’m not completely following… what kind of data are you exporting out of Reality Capture to bring into PostShot? Is it the source images, or a point cloud, or something else?
Very good idea to do this test. Do you plan to do tests with the Insta360 X4? I know that a lot of software isn't compatible with it yet. I'd also like to know if you plan to do a test including a GNSS/RTK receiver for georeferencing photos or panoramas. Thanks again for the video and keep up the good work.
I have the Insta360 RS 1 inch. If I’m picking up two additional cameras, do you think the Insta360 or the DJI works better?
I wonder if it could be done with 3 passes of a drone with a gimballed camera (newer DJI drones would be perfect), adjusting to a calculated altitude on each orbit. Drones are inherently stable and could offer more consistency.
It sure can. As long as you have stable footage and decent quality, it doesn't matter what camera the footage comes from lol. There are a lot of drones much cheaper than the DJI lines that can create fantastic splats. I use the Potensic ATOM.
I have been trying to add more than one clip at once. Perhaps this option will be possible in a later release, or I simply missed ticking some box. 🧐
Hey! :) You should try shooting interval photos instead of recording videos; it gives better input image quality (resolution + sharpness). In my experience, the quality of the photogrammetry is way better!
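A rough back-of-the-envelope sketch of the sharpness part of that point (all numbers are assumptions for illustration, not measurements from the video): while you walk an orbit, the camera moves during each exposure, and the smear in pixels is roughly walking speed times shutter time divided by the ground sample distance. Video locks you into slow shutters; a photo/interval mode is free to pick a fast one.

```python
import math

# Assumed capture scenario (illustration only).
walk_speed = 1.0     # m/s, walking pace around the subject
distance = 3.0       # m, distance to the subject
hfov_deg = 90.0      # horizontal field of view
image_width = 3840   # px, 4K-wide frame

# Scene width covered at the subject distance, and meters per pixel (GSD).
scene_width = 2 * distance * math.tan(math.radians(hfov_deg / 2))
gsd = scene_width / image_width

def blur_px(shutter_s):
    """Approximate motion smear in pixels for a given exposure time."""
    return walk_speed * shutter_s / gsd

video_blur = blur_px(1 / 60)    # typical shutter for 30 fps video (180-degree rule)
photo_blur = blur_px(1 / 500)   # a photo mode free to pick a fast shutter

# The video frame smears over several pixels; the interval photo stays
# close to a single pixel of smear.
print(video_blur, photo_blur)
```

Under these assumed numbers the video frame smears by roughly an order of magnitude more than the photo, which matches the experience that interval photos feed photogrammetry noticeably sharper inputs.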
The idea here, I believe, is to make it a one-pass operation, to avoid time-consuming walking around and changing the height of the camera on each pass. That's why there are three cameras here. (Haven't watched yet, but that's what I was thinking about checking myself 😅)
Maybe an FPV drone with an action camera would make that easier for you? No need to walk around the object being captured at all.
Is there a mobile-friendly Unity URP VR 3DGS SDK yet?
Great video. Did you try NeRF XXL, or any NeRF, and is it better quality than a splat? Thank you.