She tutorials on my gaussian until I splat
excellent insight 🤔
Omg! Such wisdom! 👏😇
When you stick it in everywhere, they call it gaussian distribution. You never know when you messed up. They call it gaussian blur. It leads to gaussian elimination.
Volumetric video will let you splat from any angle
This madlad will eventually run doom entirely in geo-nodes
It's already possible
@@SamDevAnimator has it been done though?
@@dumaass really, I don't know
@@dumaass not yet, but he is right, geometry nodes are "Turing complete", so they can be used to run software of almost any kind.
Again and again, you are probably the most interesting 3d-related youtuber I know.
You make your deep mathematical knowledge easy to understand for anybody, while keeping the complexity visible "in transparency" for those who know the mathematics.
I was going to start searching for tutorials that would finally make me understand gaussian splats better than I do now, but this tutorial did almost all the job!
Thanks!
SSIM = Structural Similarity. Super interesting paper to read. Basically it measures image similarity in a way that's more natural for humans, and it fixes some weird results you get from MSE and similar metrics 😊 Great video in any case, great content as always ❤
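For anyone curious, here is a minimal sketch of the difference, assuming scikit-image is installed (the metric functions below are its actual API; fittingly, the distortion used here is a gaussian blur):

```python
from skimage import data, filters
from skimage.metrics import mean_squared_error, structural_similarity

original = data.camera()                                             # uint8 test image
blurred = filters.gaussian(original, sigma=2, preserve_range=True)   # floats in 0..255

print("MSE :", mean_squared_error(original.astype(float), blurred))
print("SSIM:", structural_similarity(original.astype(float), blurred, data_range=255))
# MSE compares pixels one by one; SSIM compares local luminance, contrast
# and structure, which tracks perceived image quality much better.
```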
You really are an O.G. I don't think anyone else knows Blender like you do.
17:40, you can uncheck As Points Clouds to render oval spheres instead, which looks fuller and more realistic like in PostShot, but it's obviously far heavier because there's actual polygon geometry.
Splats are definitely a fascinating tech with some great use cases. Glad you didn't do the clickbait Corridor Crew thing, claiming it would be the future of realistic game graphics, when they really should know better
I feel enlightened. Thank you for another great tutorial.
Really enjoyed that, something to tinker with over Xmas now. Thanks.
"I've accidentally splatted a mirror dimension" sounds insane and not entirely wholesome.
this is exactly what I've been searching for for months, thank you so much
I'm genuinely curious, why does the snow scene at the beginning look sharp but the kitchen scene looks a bit blurrier?
Maybe the source was high definition photographs instead of video footage? 🤔
Your vids are great and I appreciate them. Meanwhile, I think it's worth mentioning that you say "etcetera" wrong. I'll never forget the day my family told me that I have been saying a word wrong my whole life.
I've been waiting for someone to show the steps to do this. Will Absolutely be using this in future projects!
I think this would be perfect for modelling realistic interiors seen through windows in games. There the viewing angles are controlled, so the limitation of a set range of perspectives is not a problem.
i remember seeing gaussian splatting on siggraph 2023, hard to believe it's this crazy now
you sir are a legend.
But what can you do with this?
I always see demos of "video to gaussian splatting", but you could just play the original video.
I want to see a demo of using this 3D data. Can you place objects there? Do placed objects show reflections correctly?
Software like Unreal Engine supports gaussian splats very well, so you can create complete virtual environments or objects inside it and use them like regular 3d assets.
Watched this completely, so awesome how Gaussian splats work, and you provided a bit-by-bit workflow, that's amazing ❤ hope we get something better for Blender, maybe your addon 😁 love your videos
Me: "I bet if I tried it, I'd get some orientation conflicts..." 13:40 "The Z axis is the Y axis."
I haven't watched in a while. Good video.
In the end, it doesn't even matter...
That was really cool. I have a bunch of footage from places id love to revisit this way!
Thanks for this video.. I like the style of the tutorial
This is huge! I imagine Google Earth being completely renewed by drone footage. Have you tested this with drones? I imagine scale is not an issue. So cool. Do you know of any good Linux alternatives to that software?
Why do you imagine scale wouldn't be an issue? A point cloud can be pretty dense in information even for a small scene.
@@huraqan3761 I meant the scale of the subject. If it comes from video or pictures it would be the same if it's a sink table or a whole building.
@@quadrivium1600 Not really, unless you discard smaller detail. But since we're used to zooming in pretty far in google earth we'd need to store a lot of point cloud data.
Would be really cool to be able to view a 3DGS just like it's shown in PostShot, but natively in any 3d software; the result inside Blender is a bit off
If the 3d program simply supported writing shaders for its rasterizer like any other program on the planet, then yes.
Houdini, Nuke and After Effects all have implementations of Gaussian Splatting that actually do "splatting" (a different pipeline on your GPU),
and not just put tons of overlapping planes in your face and eat the cost of overdraw.
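For context on what "actual splatting" means: each 3D gaussian's covariance gets projected into a 2D screen-space footprint (the EWA splatting math the 3DGS paper builds on), and those footprints are sorted and alpha-composited. A rough numpy sketch of just the projection step; the variable names are mine, not from any codebase:

```python
import numpy as np

def project_covariance(cov3d, R_wc, t_wc, p_world, fx, fy):
    """Project a 3D gaussian covariance to a 2D screen-space covariance.

    cov3d:      (3,3) world-space covariance of the gaussian
    R_wc, t_wc: world-to-camera rotation (3,3) and translation (3,)
    p_world:    (3,) gaussian center in world space
    fx, fy:     focal lengths in pixels
    """
    x, y, z = R_wc @ p_world + t_wc            # center in camera space
    # Jacobian of the perspective projection, evaluated at the center
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]])
    cov_cam = R_wc @ cov3d @ R_wc.T            # covariance in camera space
    return J @ cov_cam @ J.T                   # (2,2) screen-space footprint
```

The 2x2 result is what gets rasterized as an oriented ellipse and blended front to back, instead of pushing overlapping textured planes through the regular pipeline.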
Awesome overview of where the tooling is at, thank you so much! I keep wondering if there could be a geometry-free version of Gaussian Splats / Radiance Fields that uses SDFs instead of proxy geometry. It could be a lot more efficient.
Implicit surface representation: SDFs inherently describe surfaces as zero-level sets, which could eliminate the need for explicit geometric proxies entirely. This simplifies the representation and could drastically reduce memory overhead. Also, Gaussian splats and radiance fields often require dense proxy point clouds or surfaces to act as anchors for the field. SDFs, on the other hand, only require a compact scalar field and gradients to describe surfaces, which might significantly reduce computational requirements.
Plus, SDFs provide gradients by nature, which could be used directly for surface normal calculations, shading, or integrating light transport equations. That would avoid additional computation steps needed in geometry-based methods. Finally, an SDF-based approach could dynamically refine detail where needed, focusing computational resources on areas of interest while coarsely sampling other regions, much like mip-mapping.
It will be interesting to see how the technology improves and evolves over time. Thanks again for the video!
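To make the "gradients by nature" point concrete, a toy sketch (my own toy example, not from any paper): the surface normal at a point is just the normalized gradient of the SDF.

```python
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, zero on the surface.
    return np.linalg.norm(p - center) - radius

def sdf_normal(sdf, p, eps=1e-4):
    # Normal = normalized gradient of the SDF, here via central differences;
    # many analytic SDFs give this gradient for free without sampling.
    offsets = np.eye(3) * eps
    grad = np.array([sdf(p + o) - sdf(p - o) for o in offsets])
    return grad / np.linalg.norm(grad)

print(sdf_normal(sdf_sphere, np.array([1.0, 0.0, 0.0])))  # approx [1, 0, 0]
```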
I must say you are very entertaining!
Wow! Thanks for sharing! Really interesting and straightforward as hell
Fantastic tutorial, man. Thank you.
This guy has a brain running faster than ours.
We already tried to use this technology for a production a few months ago, but we dropped it because it wasn't robust enough. Kiri Engine is awesome and I think it's the future of Gaussian Splatting. PostShot is problematic if you have multiple graphics cards and don't have an RTX as the primary card.
Looks so good!
thanks babe. nice work. i like how you took a video of a scene, then spent ages turning it 3d and creating a camera to render it out, so that you basically have the same video as the input, just loads worse. hahaha. joking. this is super sick! didn't know that Gaussian splatting has been a thing since the 90s!! that's wild. it's such a cool tech that i find so hard to understand! happy holidays to you! keep up the magic.
Great tutorial as usual. Totally agree with everything you said here. I'm currently on a job trying to "kit bash" together a bunch of large wide exterior GS captures. Kitbashing multiple GSs brings its own issues, but sadly there really isn't a great solution for using GS captures at the moment. The Kiri option is, as you said, cool, but way too slow to be usable. There's a pretty good AE plugin, but really we need a better true-3d GS solution. I pretty much came to the same solution as you, except I'm having to render multiple layers and comp them together.
Great tutorial mate.
I’m reminded of a Blade Runner scene where Deckard is analyzing a photo.
Totally off-topic, but the zero-effort camera tracking looks interesting.
Great thumbnail
Its more like GUESSian Splatting :)
🥸😆
Hearing you say "gauze-ian" so many times hurts; Gauss is the name of a German guy, so it's pronounced more like "gowss"
Ngl his one sounds better. I knew it was that before, but just hearing how he says it, I like it much better 💀 but yeah, that would piss Gauss off a lot
Thank you, I feel like a prick when I correct other people's pronunciation, but hearing "gauze-ian" bothers me way more than it should. Gauss rhymes with mouse
you guys are too picky
Blender imho should have an official internal photogrammetry system, if we want Blender to live for many more years.
Blender should hire you as a developer.
Most Blender users are freeloaders. We should donate first.
too much bloat and things to support.
@@nahoj.2569 Yep, Blender is dead sry guys.
@@MrSofazocker It's not lmao, tf you talking bout?
awesome vid
Make an addon, fast :) Seriously, this would be cool. And add the ability to extract an actual 3D model; that would be magical.
Incredible stuff
Damn it would be so beautiful to convert gaussian splatting to 3d
The mirrorverse is an interesting thing, but does it mean that it's impossible to accurately represent two glossy surfaces at right angles to each other? Or something like a mirror/glossy cube? Because all 6 mirrorverses would share the same 3d space.
Is "Training" just gradient descent? I guess it's catchier.
Either way this stuff is super cool and looks great
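Essentially yes. Per the 3DGS paper it's gradient descent (Adam) on each splat's parameters against a photometric loss, plus heuristics that split, clone and prune splats along the way. A toy pseudo-PyTorch sketch; render(), ssim() and dataset are placeholders, not the real implementation:

```python
import torch

N = 100_000  # number of splats (grows during training in the real method)
params = {
    "positions":  torch.randn(N, 3, requires_grad=True),
    "log_scales": torch.zeros(N, 3, requires_grad=True),
    "rotations":  torch.randn(N, 4, requires_grad=True),      # quaternions
    "opacities":  torch.zeros(N, 1, requires_grad=True),
    "sh_coeffs":  torch.zeros(N, 16, 3, requires_grad=True),  # view-dependent color
}
opt = torch.optim.Adam(params.values(), lr=1e-3)

for camera, photo in dataset:                  # placeholder: ground-truth views
    rendered = render(params, camera)          # placeholder: differentiable rasterizer
    loss = 0.8 * (rendered - photo).abs().mean() + 0.2 * (1 - ssim(rendered, photo))
    opt.zero_grad()
    loss.backward()                            # gradients flow through the rasterizer
    opt.step()
    # The real method also periodically densifies (splits/clones) splats with
    # large position gradients and prunes nearly transparent ones.
```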
Excellent! 🙏🏻👏🏻💪🏻 I use a Blender addon to import the .ply and it's simpler.
ruclips.net/video/bTdLsdLytHk/видео.html
how would this work with like an insta 360 cam or something like that?
Lookin good dude
Gaussian splats are CGI's graphene.
It's an old method that gets rehashed every few years, but never proves to be that useful, because it's essentially a base ingredient. You need to mesh it for it to be useful in 3d, and the resolution is never good enough to be used on its own. It's kind of a lot of work for what's essentially poor quality video… but it always comes back as this amazing thing. It's just not.
I don't think that's necessarily true. Polygons and UV mapping have limitations of their own so who knows. Maybe we haven't given it proper attention yet.
I've tested some very impressive and promising demos but if we don't care to go deeper then we simply won't.
Another possible bottleneck could be hardware. Look at how artificial intelligence is blooming even though it was basically dead for twenty years (look up ai winters), mostly because of limitations in computing power.
Can I add 3D objects that match the scene to the splats?
I was literally just playing around with this stuff last week! This video could have saved me so much time in experimentation.
Is this similar tech to NeRF?
PLEASE make another tutorial of HOW to turn my Blender 3D scene into Gaussian Splatting!
But what will that achieve? 🤔
@@AyushBakshi
1. To send my 3d model to a customer for viewing/inspecting without them being able to steal my model.
2. Imagine being able to move freely through your Cycles-rendered scene in realtime, without any noise.
@@ruudygh sooo "DO THE WORK FOR ME I AM LAZY" ??
@@ruudygh Interesting. In the meantime:
1) you can try SketchFab for Models. That way they can't steal but only view.
2) Try baking the scene lighting.
There was a talk about exactly this at the blender conference this year. I would recommend watching it 👍
so we're turning to rendering with particles and only particles. got it.
It could be possible to create a USD file with splats as instances, using MaterialX.
Then it could be rendered in any DCC software without additional plugins.
Amazing
Gauss tuah! Splat on dat thang
👏👏👏
Photogrammetry was released in the 90's, the gaussian connection was misused on purpose, and somehow stuck
I wanna see what kind of creative stuff can be done by processing the video before splatting.
Is there a way to use this with a 360 degree camera video?
Cool, but please watch a pronunciation video for the name "Gaussian" 😀
Do they use this in vfx?
what about converting gaussians to 3d meshes / baking them down to a remodeled scene?
Hm, that's not quite as easy; you'd be back at classic photogrammetry. The high quality of gaussian splats kind of lies in those view-dependent effects that look like reflection/refraction. When baking, you would need to bake a different texture for each view angle, which is a data nightmare. It is at least very hard to convert back into something that is useful for rendering in any way other than those splats. The splats can go beyond the quality of traditional photogrammetry from the same source material. It is fun tech for looking around a video like a 3d scene, but hard to do anything else with.
Also, I think there should be nearly no difference between projecting the source video images onto 3D geometry directly vs. calculating gaussian splats first. Maybe you get a little extra "organic linear interpolation", but most of that will be low quality anyway. But it shows the problem: when an object has different pixel colors in the same spot seen from different angles, which one do you bake, which one do you prefer? If you take the average, reflectivity is gone.
@@thFaust There are a few papers on how to get PBR maps from spherical harmonics; rapidly changing values across the gradient can indicate that a pixel should have high reflectivity (see the sketch after this comment). Delighting has also come a long way; most approaches require you to fit spherical harmonics first, so there you go.
The benefit over just projecting the footage is the accurate depth information.
Imagine a cracked wall. How do you model that accurately to project your footage onto?
Beyond that, the applications for gaussian splatting over photogrammetry are numerous.
The fast reconstruction times plus performance of the visualisation alone make a few things even feasible now.
1. Crime scene reconstruction. Using view angles from different phone cameras at different times etc. (4D gaussian, aka 3D videos are a thing)
2. Aerial surveys. Not only cartography, but also farming; mining is the most interesting, since you can evaluate the weight of a pile of material if you know its volume.
A pointcloud can also easily be reduced in quality at a distance, so no LODs necessary.
3. Ecommerce. You see how all those shops have immaculately lit product photos? Imagine you can spin the products around, try them on in AR. All without an artist having to re-create a thing that already exists, just from using the existing photos.
Will we see gaussian splatting in media and film? Probably, but short-lived. For now, it is the best way to record a believable performance of a human in 3D (where you are concerned with the appearance of the actor and costume over everything else).
Will we see gaussian splatting in games? Did you forget DREAMS, the PS4/PS5 game? Zelda? Literally all the grass and fur of the 7th gen?
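The spherical-harmonics sketch promised above: in 3DGS each splat stores SH coefficients and its color is evaluated per viewing direction. Minimal degree-1 toy code, my own; the constants are the standard real-SH ones, and the +0.5 offset follows the reference 3DGS code if I remember right:

```python
import numpy as np

SH_C0 = 0.28209479177387814   # 1 / (2*sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3 / (4*pi))

def sh_color(coeffs, view_dir):
    # coeffs: (4, 3) RGB coefficients for one splat; view_dir: unit vector (3,).
    # Degree 0 is the base color; degree 1 makes color change with view angle.
    # That view dependence is where glints and reflection-like effects live,
    # and it is exactly what baking a single texture throws away.
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)
```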
@@MrSofazocker Thanks for the interesting reply. :D Well yeah, I was counting the depth estimation part towards the bigger topic of photogrammetry; gaussian splatting does not do anything new there, afaik. With projecting original footage, I meant using depth voxelization first (the same depth the splats are trained from), then projecting onto the voxels and turning their surface into a mesh (or projecting the footage onto that mesh; the order of the last two steps is just a matter of resolution). Delighting can be done then as well. I wouldn't necessarily call the depth information in any of these approaches accurate, since it is just estimated from colors; the accuracy differs for each resulting voxel (the more angles it is seen from, the more accurate), and it varies wildly throughout the scene. If you do photogrammetry, you "commit" to that estimated depth; splats don't. To your examples of splatting use cases:
1. Sure, splats sound useful for getting the most reliable look at the place. Photogrammetry enables more spatial reasoning though, so if you have enough data, you'd probably generate both anyway. I doubt you can capture anything close to a 3D video without a purpose-built multi-camera rig.
2. Since the depth (and therefore volume) precision depends on the number of angles you have on a point, aerial sounds to me like mostly the same angle, so the depth info wouldn't be precise. The pointcloud needs no LOD, sure, but if you want to measure volume, you'd voxelize anyway, so you don't really need the gaussian splat. The splats are a visually nice representation, but by themselves not enough to reason about the displayed space.
3. Yes, I totally agree, that is a good use case. :-) Especially since gaussian splats so strongly suggest that something is a scan / look so realistic. But I've been working on AR apps, and for that, again, you'd need more than the splats. Splats can't cast a shadow or be rendered in different light conditions, so you'd need at least voxels here again, going further down the photogrammetry pipeline, delighting and all that.
I know about those types of performance-capture photogrammetry, that's true, but they're only possible with purpose-built camera rigs. It is a fascinating topic overall, what can be created from just photos. I don't quite understand the game examples you mention, though. Grass and fur? You think fur looks good as a gaussian splat? Sure, but it couldn't deform, so if you want to render a moving animal, you'd need a mesh or something. Record the entire movement as 4D splats? That is very static again, in terms of lighting as well as motion, and very hard to integrate with other things in a render.
so cheaper raytracing?
What's the difference between this render and a normal video?
happy
Google maps when?
mmmm could it work with a voxel 3d engine?
Nice try at blue steel xD
fire
I tried downloading postshot and I got a virus warning
Brain dance from cyberpunk?
#ILOVEGAUSSIANSPLATING
insane
No my grandma can't she's not alive anymore
what is not 90's tech at this point
too bad you need a 2060 for jawset
12:23, bro just has a free petabyte of storage.....
Nvidia wants to fuse 3DGS and RT together...
- speed of GS
- beauty of GS
- adaptability of RT
Gaussian splatting can't change the lighting like in Ray Tracing. I hope they figure it out.
OMG
Gaussian with a soft S not a Z sound.
Gauss not gauze.
Give me linux compatibility and I will buy this ASAP.
My PC is too old to run postshot lmao
so hot rn
Brother, you are 2 yrs late, and there are 5 apps that can do this in real time on our phones. But I can appreciate the hustle
Nothing late about knowing how to do things yourself, but you do you ;)
your videos are awesome, but why do you have so few likes, views and subscribers...?
Made a short film recently using 3dgs ruclips.net/video/X4oh_6DjF1M/видео.htmlsi=H4YHHujeClw_XoGu
Read that as Caucasian…phew
Okay. But looking at the result, what is the use case for this? Just do the camera motion in real life when you are already filming.
Still far too messy and finicky to be usable at the moment.
A gimmick at the moment. Too many apps required for mediocre results.
As good as it looks, I don't like it because it looks too shimmery and blurry; there's no sharpness on the edges of the objects and allat. Like someone applied TAA 20 times lol
First?
first the worst. second the best. third the one with the hairy chest.
Do Unity URP VR
excellent!