If you don't want to listen to me ramble and just want to look at the comparisons here are the timestamps:
02:22 - Back to back comparison
08:07 - Side to side comparison
13:15 - Quick splat vs lengthy photogrammetry
I can totally relate to the rambling…that was me trying to explain to a friend how next-level this tech is. I too was awestruck by the fidelity compared to photogrammetry. Especially the fine detailed objects, or shiny, reflective or transparent objects that would never work in photogrammetry. I actually really like the soft painterly hazy aesthetic…it feels like a memory.
I’ve been experimenting with PostShot ahead of a trip to visit my folks. Planning to scan my childhood home before my folks move to preserve the space.
15:12 Never thought about it. Gaussian Splatting could "bring back" a loved one who passed away. Only for a short moment, where you're just watching a still frame of the person, but the possibilities are immeasurable.
I'm sure this will be used for 'photo albums' of loved ones. A nice thought.
there are already papers on animated gaussian splatting (including people)
Gaussian Splatting 4D videos are possible. It's not just a 3D image
ruclips.net/video/YsPPmf-E6Lg/видео.html&ab_channel=ZhangChen - video gaussian splatting
Exactly why I am looking into splatting unfortunately.
Exactly what I was looking for, incredible for capturing spatial memories
it's really cool that there is audio. I assume it's captured audio from the location, but even if it's created from stock footage or something, it still adds a lot!
i could see google doing this with street view
And they should
Yeah! Splat the world.
i believe google did make a paper on it.
there is already paper for that
Google already implements NeRFs in Immersive View and in the Aerial View API
This is so fascinating. Curious how file sizes, or storage requirements vary between skinned photogrammetry meshes and gaussian splatting.
I can almost see a mix of both being used: photogrammetry for the general environment at a distance, with gaussian splatting for closer stuff. Thanks for sharing this very well done introduction to both with the comparisons. Has me excited and wanting to dig deeper.
this is so interesting because i was interested in bringing 3d scans into one of my fave games as mods, but other modders told me "it's too many tris, it's impossible" - but here you are running it in VR, which is usually even more demanding. I wish I could bring stuff like this in-game.
Yeah, you definitely can. There are Gaussian Splat add-ons for Unity and Unreal if you are using either of those game engines.
sadly the game i am using isn't either D: I want to use 3d scans, which work, but modders tell me there's too many tris. Wondering if it can be reduced easily while keeping detail! @@Puffycheeks
@@AnnasGamingCorner Usually in photogrammetry software there is a way to simplify or decimate a model before export. If it is a premade 3d scan, Unreal Engine has a way to generate automatic level-of-detail (LOD) models for high-poly stuff to make it optimized. What game engine are you using?
it is called transport fever 2 (on my chanl) and I am using scans from polycam! :)@@Puffycheeks
@@AnnasGamingCorner Feel free to email me at Puffycheeks288@gmail.com - if the scans you are using are free assets online, I could try to simplify them.
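Since mesh decimation keeps coming up in this thread: most photogrammetry tools offer quadric or clustering-based simplification before export. As a rough illustration of the idea only (this is not any specific tool's algorithm, and the function name is made up), here is a minimal vertex-clustering decimator in numpy:

```python
import numpy as np

def decimate_by_clustering(vertices, faces, cell_size):
    """Crude mesh decimation: snap vertices to a voxel grid, merge
    vertices that land in the same cell, and drop collapsed triangles."""
    # Assign each vertex to a grid cell.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # Map each unique cell to one representative vertex (the cell mean).
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    new_vertices = np.zeros((len(uniq), 3))
    for axis in range(3):
        new_vertices[:, axis] = (
            np.bincount(inverse, weights=vertices[:, axis]) / counts
        )
    # Remap faces and discard triangles whose corners merged together.
    new_faces = inverse[faces]
    keep = (
        (new_faces[:, 0] != new_faces[:, 1])
        & (new_faces[:, 1] != new_faces[:, 2])
        & (new_faces[:, 0] != new_faces[:, 2])
    )
    return new_vertices, new_faces[keep]
```

A larger `cell_size` merges more vertices, so the tri count drops faster at the cost of detail - which is the same trade-off you tune with the "decimate" slider in most scanning apps.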
If you ever showcase this again, it would be awesome if you could clip away all the flickery and large splats, only keeping the refined areas. Like perfection in a black void!
Great stuff. I think we will mix technologies to get the best of all worlds for the foreseeable future.
what a time to be alive
I was unaware that the splats change their properties depending on the direction we view them from; that explains a lot of how specular highlights can change, and how reflections can work. Google did this in their Lightfield demo via 3D reconstruction and, from what I could tell, texture swapping. Kind of disappointing, as it removes some of the magic, but it made sense at the time. That technique also suffered badly from being limited to the 1m sphere of their camera array.
If I could take a video of my apartment and, as you said, port it into VR Chat, I would do that in a heartbeat to get my international friends to visit... on location, as it were. I very much look forward to this technology maturing, so we will have neat GUI applications to generate and export these environments. Also, I have completely forgotten to look into this myself, even though I've watched videos and tutorials for a while now. Thanks for sharing!
Thanks for watching my video BOLL! The space shuttle in the Lightfield demo is still the most realistic thing I've ever seen in VR. I'm also excited to see this tech mature, feel free to download my splats and view them in the 'Gracia VR' viewer linked in the description. (I recommend the 'Fireplace' file)
Gaussian splatting is literally gaussians in 3D space as far as I’m aware, so a reflection in a Gaussian splat is just a Gaussian behind the ones on the surface. It’s recreating the light you see, not the environment itself.
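To make the view-dependence concrete: the original 3DGS implementation stores each splat's color as spherical-harmonic coefficients and evaluates them against the viewing direction. Here's a toy degree-1 sketch of that evaluation (the real renderer goes up to degree 3 and adds an offset plus clamping; the function name is made up):

```python
import numpy as np

# Constants for real spherical harmonics, degrees 0 and 1 -
# the same basis 3D Gaussian Splatting uses for view-dependent color.
SH_C0 = 0.28209479177387814  # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199   # sqrt(3) / (2 * sqrt(pi))

def sh_color(coeffs, view_dir):
    """Evaluate degree-1 spherical harmonics for one splat.

    coeffs: (4, 3) array - one RGB coefficient per SH basis function.
    view_dir: unit vector from the camera toward the splat.
    """
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    # The color the viewer sees changes as view_dir changes, which is
    # how splats fake highlights and reflections without any geometry.
    return basis @ coeffs  # -> (3,) RGB
```

With only the degree-0 (DC) coefficient set, the color is the same from every direction; the higher-order terms are what encode the "reflection" the commenter describes.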
This looks absolutely amazing!😮
I find this incredibly interesting and awesome.
Can't seem to figure out how you were able to use your Luma splats in Gracia; it just puts me in a black void every time..
It’s Melbourne! Gaussian Splat let me recognise Eureka tower!
beautiful thought
WELL DONE
Thank you for the explanations.
What kind of LIDAR machine would get the results shown in the bridge scene at the beginning?
Pretty sure your phone. Photogrammetry means it’s done using photography, not exclusively LiDAR. Most newer phones support this kind of thing, and there are numerous apps for it
One more step towards The OASIS.
Maybe it can be used as skyboxes?
It would work amazing for skyboxes!
Very cool indeed
ty
The biggest problem I see with NeRFs and Gaussian splats is their lack of editability. It's like capturing a hologram: it looks great, but for world construction, particularly for games, it seems really hard to use..
For the moment yes that is a good point. But the tech is moving fast, have a look at this example of editing splats: x.com/Andersonmancini/status/1737880610906009634?s=20
What would be nice if this could be overlaid on simple geometry, kind of like using a bump map. Having this high level of detail that 3d objects could interact with.
Gaussian splat data has potential. Geometry-wise, it's a 3D point cloud, so adding/removing points is straightforward, and you can do LOD and streaming easily. The difficult part is adding them to a game's lighting pipeline to support dynamic lighting, but I believe it's possible in the future. There is already research into reversing the lighting baked into Gaussian splats, so I think we may see more progress soon.
Have also been thinking about this. Same as one would sculpt a highly dense 3D model and then bake the details into normal maps etc. on simplified geometry. @@dbellamy6694
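The "it's basically a point cloud" point above is easy to demonstrate: a splat scene's per-splat attributes are just parallel arrays, so edits like cleanup and cropping are array filters. A hedged sketch (the function and attribute names are made up for illustration, not a real library API):

```python
import numpy as np

def prune_splats(positions, opacities, scales, *, min_opacity=0.05, bbox=None):
    """Edit a splat scene the way you'd edit a point cloud: drop
    near-transparent splats and optionally crop to a bounding box."""
    # Near-transparent splats contribute little but still cost memory.
    keep = opacities >= min_opacity
    if bbox is not None:
        lo, hi = bbox
        # Keep only splats whose centers fall inside the box.
        keep &= np.all((positions >= lo) & (positions <= hi), axis=1)
    return positions[keep], opacities[keep], scales[keep]
```

This is exactly the kind of "clip away the flickery splats, keep the refined area" cleanup suggested earlier in the thread - no mesh surgery required, just masking rows.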
That's not what it's made for.
Does Gracia work with the point cloud files that the Luma AI iPhone app generates?
I tried uploading my Luma point cloud files to Gracia, but it doesn't seem to like that format?
How do you export this to the Quest 2? What engine do you use? And what SDK? Can you please explain? I'm interested in trying this. This is awesome..😊
x2
I think he’s on computer.
VR tourism! ...👍
Thank you
I wonder if Apple's Vision Pro spatial video uses something like this. They say the iPhone 15 Pro camera can record spatial video, but the cameras aren't human-eye distance apart, so it seems like they'd have to be doing some 3D math with the cameras. I wonder if they'll end up adding a "capture your whole environment in 3D" feature
No they don't. It's really just light fields that they use. It's spatial video, not volumetric video.
Volumetric videos (point clouds, gaussian splats) are still waiting to become mainstream.
Interesting video. Would be great if when capturing video from your point of view you move and look around a bit slower.
I could have done some post stabilization of the VR video, yes. But I didn't think of that when editing. VR footage is inherently 'fast'. Nowadays most games will have inbuilt smoothing for the 2D view on the screen that helps this.
If you train it for longer it’ll look a lot better!
Great!
what LIDAR did you use
iphone 12 pro
I'm not too knowledgeable about gaussian splatting, but I think it would be nice to somehow convert these splats to roughness, translucency, bump, displacement, normal and opacity maps etc…then map them onto the photogrammetry polygon model
Look up Two Minute Papers' Gaussian splatting videos. He does some stuff about that.
@@judgsmith thanks will do
Gaussian splatting does a much better job than a polygon model for rendering. Gaussian splatting is meant to replace polygon rasterization, not the textures on polygons
@@vickytao2010 but splats are not practical to work with. You can't model them, can't import them into a game engine and apply physics, etc. They are not flexible and at best usable for VFX shots.
@@L3nny666 the entire advantage of Gaussian splatting is that there is no geometry; it's replicating light, not objects, and that's what makes it so good. I think it could potentially, with a lot of work and models for collision detection, be applied in a game in some way. It will, obviously, probably not work for an entire game and may need to be used only as a supplement to be effective, but it has potential. Much more potential than NeRFs, at least.
Any way to run this in browser? Or directly from headset hard drive?
Check out PlayCanvas - it has WebVR capability - but you need to tether to the PC to view in VR. If you access it from the browser in your headset, it freaks out with the 3DGS
But what about collision?
@@brian7android985 Done in whatever game engine you use it in.
it's hard to get good splats without running 400 photos through it
what VR software do you use to place splat in?
Gracia AI probably, based on the interface
dang, this might be a long way from good but this is crazy
I'm going to make a fork that works with VRChat.
It’s like the minority report
There is no point cloud in 3D Gaussian splatting; it is built from splats. It is still OK from a distance, actually quite good, but come close and it all breaks. That is why you see them around. You can still import it into 3D modeling software, but only if you do not plan to come close to the splats in the 3D environment.
3D Gaussian splatting can use a point cloud data structure, and because of that it's easy to add LOD, so it can handle the "come close" case you're talking about. There is research on 3D Gaussian splatting LOD that shows decent results.
Everything in a computer looks bad up close if that’s not what you’re making it for.
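The LOD idea mentioned above can be as simple as culling splats whose projected footprint falls below a pixel threshold. A hedged sketch of that culling step - not the method from any particular paper, and all names here are invented for illustration:

```python
import numpy as np

def lod_mask(positions, scales, camera_pos, min_pixels=1.0, focal=1000.0):
    """Simple distance-based LOD: keep only splats whose projected
    footprint exceeds a pixel threshold for the current camera."""
    # Distance from the camera to each splat center.
    dist = np.linalg.norm(positions - camera_pos, axis=1)
    # Approximate projected size in pixels: world radius * focal / depth.
    pixels = scales * focal / np.maximum(dist, 1e-6)
    return pixels >= min_pixels
```

Recomputing this mask as the camera moves means distant scenes draw only their large splats, while walking up close brings the fine ones back in - the trade-off both commenters above are describing.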
Both methods are a nightmare in VR. The technology is not mature - yet.
You actually made the thumbnail and said 'to hell with it' and switched the title places (Photo, Splat)? Shameless, and I'm not kidding. Not gonna watch your clickbait shit.
Photogrammetry vs. Gaussian Splat... he didn't lie or swap anything. Those are just really long names, and he shortened them, haha
Get better at reading comprehension before insulting someone's character next time
What lidar scanner do you recommend? Cool shit.
I used the Polycam app and the LiDAR sensor on an iPhone 12 Pro.
@Puffycheeks so does any iPhone work with lidar sensor?
@@jameshandley7197 Newer ones have LiDAR sensors built in.
@@jameshandley7197 nope, just the Pro models from the 12 up and the iPad Pros
@jameshandley7197 I think it is in every iPhone pro or max since the iPhone 12. And some of the ipads since 2020
nice ending