Very proud that you used my Arc de Triomphe model as the example for photogrammetry 😅 cool video, I need to learn gaussian splatting, it seems crazy good!
I think what probably stands out most to me is that no matter what angle you pick, it just doesn't look like a bad render. It looks like a blurry or smeared photo, or maybe a painting with a particular style. I guess the big question is gonna be: can we apply lighting to it in real time in some form, and can we compose multiple of them together? It seems to be really, really good at rendering trees, they just look so fuzzy and detailed from a distance even when they're just a few blobs. I'd be interested to see a racing game with a full scan of the track, using Gaussian splatting for the more distant environment and traditional rendering for the road and car.
I feel like this renders stuff closer to the way we as humans perceive things. Like a point of light, when defocused, is stretched into a circular shape. Basically a bunch of dots that bleed into each other more or less based on focus. Which is kinda what this is as well.
No way you are talking about this paper right now, I was about to read it tomorrow for college-related stuff xD I have a course on computer graphics dealing with the latest research at bachelor level, and this is one of the possible papers to work with
Amazing breakdown of this tech. One thing that I want to see, and that will probably happen real soon, is combining many Gaussian splatting scenes to cover bigger areas, plus single assets ready for composing scenes
I feel like the perfect usage for this tech is Google Street View (especially in VR). You don't need dynamic lighting and objects, target object details are important and having big open world is not a requirement. I wonder if it's possible to have multiple scenes using splats and smoothly transition between them as camera moves to make road moving in street view less weird than it currently is.
I could see this being used in parallel with traditional methods: use this method to render specific static mesh models that are high in detail, while other parts of the game world, especially dynamic parts that are animated, stay as-is with polygons + textures.
@@vitordelima yeah, I don't see why you couldn't at least do simple animations on the models (though considering the high point density that's probably required, maybe it would be best to only use simple N64-style animation, with the different parts of the model being separate, and not full-on skeletal animation for now)
@@uusfiyeyh I'm sure it's a hurdle that we'll eventually overcome, just like in early 3D games all shadows and depth were baked into the diffuse texture, and only later did we get stuff like bump and normal maps. But yeah, for now it is more of an archvis/survey tool than actually useful for gamedev.
and it could be useful to turn a complex, highly detailed raytraced scene into something that could be played on much more normal hardware, maybe with some level of reactivity added in to allow the stuff that can move to interact with it
So far it seems like these scenes are basically one big thing that you import into your project. To make it usable for a proper game, I'm imagining a future where you have individual models composed of splats (for example, a bike or house) which can then be imported into a larger scene. However, the problem with that is that these splats seem to have their lighting baked in. If you moved the bike into a different scene with different lighting, it would look really out of place. I find it hard to imagine that this would ever take over and replace polygonal rendering.
Oh, but it's possible. Just not yet. You're forgetting this has just been released without optimization, which the authors themselves acknowledge, and there's no hardware tuned for it yet. Polygon rendering was invented around the 60s, and it took a few decades until we had hardware tuned for polygons, and that tuning never stopped. The first GPU tuned for this will likely be 3x current performance (the first step always has the biggest gains). There may be some algorithm that makes the blobs shift with lighting, but even if not, remember that software development, and games in particular, have a history of tricks. For example, they may use 'invisible polygons' (simpler, with no texture) for collision, and someone could come up with a way to use those same polygons to inform the lighting. So the first iteration would be heavy (splats AND polygons), but then they'd refine the methods, find shortcuts between splat data and polygons, need simpler polygons, GPUs would catch up... In the past few decades we've had what, four different named anti-aliasing methods, lighting methods, and so on; not only did each approach improve, so did the GPUs tuned for those tricks, and then the supporting software around them (like DirectX, Vulkan, etc.)... and all of that would be possible without adding AI to the mix. Now with AI? The same way AI is being trained for upscaling and frame generation, AI on GPUs, maybe even with a dedicated chipset, could be trained to deduce and recreate light and shadows from splats on the fly. And we're talking today's tech only; who knows what breakthroughs we'll get in hardware or neural networks. All the fast pace we've been seeing has come with zero new milestone improvements hardware-wise.
Just the other day I read that Intel is tinkering with glass for chipmaking that could break a physical barrier in current computing. I 'predicted' current-gen AI like a decade ago when I first read about neural networks at uni, which at the time were far, faaar away from anything usable. I'm no seer, and I'm far from the only one; you just need a bit more imagination to extrapolate the likely path of current-gen tech. The entire industry does it. The only unknown is how long it will take and how exactly, but very close approximations are easy. And I don't think this will take a decade to come up. It may, we can never know, but besides the current pace of AI and GPU tuning via AI, remember that NeRFs and splat tech like this are already about a decade old... Heck, I just realized the kind of trick Nanite did for polygons could come up for splats too: something between an algorithm, LODs, and AI around blob/splat density, so you could have higher resolution (more points) than the examples in the video for things seen up close, but dynamically lower density in the background based on distance to the camera and all... The more I think about it, the more possible approaches come up. Just wait: academia will be all over this with students trying different stuff, and as soon as the first GPU or drivers are tuned for it you bet game devs will give it a spin too, the folks at Unreal and Unity definitely... heck, the way Nvidia is, the moment they saw this some calls were made for a new team to toy with it
@@Teodosin not pessimistic, just realistic. It's easy to be overly optimistic if you're an amateur with zero knowledge of how things work. Also, people who use this kind of hacky technique don't give a damn about art direction; to them it's just something that gets in the way and should be eliminated, which results in a generic game. At this rate, you should just write a prompt to make the full game for you. Why bother creating a model or writing a single line of code? You already generated the model, why stop halfway? Go generate the whole dang game
It is pretty much Unreal's Nanite tech but at a point-cloud level instead of polygonal. The biggest issue is that at larger depths you get a noisy dithering effect, which can cause nausea in VR even when rendered at 90+ fps in real time... amazing for still scenery, but not so much for motion.
I assume you are referring to being able to navigate a real-world environment. But that's not really the use case. The key issue is the way the point clouds are generated: they are generated *around* an object, which is what enables recreating the object in 3D. For recreating environments you need the inverse, which this isn't. You can see the issues in the video when Mike ventures just a few yards from the bike.
The XR2 is more about texturing than geometry. It's the opposite of what it's made for. You could post-process it into a low-poly model, but then you might as well use NeRF or traditional photogrammetry.
Absolutely jaw dropping technology. So many practical applications for simpler photogrammetry type tech; virtual museum walkthroughs, interior building walkthroughs like google maps but indoors, even maybe self scans to send to a telemedicine doctor. Just endlessly cool possibilities! For games however, I'm honestly not excited about this. Graphics with even just a pinch of intentionally designed style are much more immersive to me than just playing in a perfectly representative world. I already spend every day IRL, show me something NEW! 😁
Ok.... what about a Wallace and Gromit or Fraggle Rock style world, but physically modeled and then captured with Gaussian splatting? Or old-style stop-motion, Ray Harryhausen style worlds, but scanned and playable! ;) Although honestly a traditional pure CG workflow would probably still be cheaper and more effective.
@@gamefromscratch Actually, youtuber Olli Huttunen did a really cool test where he took a 3D model made in Blender and converted it to a 3D Gaussian splat. You absolutely can mix and match. The splat is built from a sequence of pictures; technically speaking, you could animate a flythrough of a room on paper, scan it, send it to a computer, and have a 3D splat of that... if you are insane enough, that is. Also... I think there's high potential for someone to just straight up build a sculpting/painting tool eventually, in the vein of Quill. This is an absolutely GIANT thing for games.
The Gaussian splat doesn’t have to use point clouds or photos and could be the actual rendering engine for 3D games. The biggest improvement here over traditional pipelines is the *massive* reduction in computing while maintaining (and arguably improving) visual quality. Imagine this being used for a next-gen version of Dreams that can be played on the Quest 2.
The interesting thing to me is that this tech looks familiar: back when I was looking at companies in the VR/AR space, when "light fields" were causing buzz through companies like Magic Leap, I found an obscure company named "Euclideon". Their idea was using point clouds (the "light fields") for VR, and they liked showing off the detail. This seems to be a much-improved evolution of that.
This really reminds me of dreams. Very detailed in specific areas, but when you try to remember non-focal elements it just isn't there. This would be a great artistic rendition of dreams!
i rarely comment on the quality and content of videos, but i had to come here to congratulate this channel's creator on the good rhythm, pacing, and clarity of pronunciation on a technical topic.
I think we will see this sooner in more static-ish applications like real-estate virtual tours. Maybe some experimental games that happen in a static-ish environment, like a single house. But who knows, maybe it will be game-production ready in a year.
I can imagine someone pulling the PS1-era tricks with models running over these at fixed-camera angles to produce a scene. It would be a highly unorthodox workflow, though, and quite pricey at that.
it works as blobs in 3D space, just like volumetric clouds or fire; no need for any "PS1-era tricks", it can coexist with standard mesh-model rendering no problem
I’ve been following since Luma AI got NeRF going, and it's amazing what we can get just by recording with our phones. Hope they can produce a lighter player soon. 🔥 thanks for the teaching on high-end terms 🚀
Point cloud approaches are definitely very cool. And this one is even more pretty. Though they always come with a couple of downsides when it comes to lighting, animation, etc.
Pretty cool tech, but I do wonder how it would play with different shaders, fog effects, and lights we control in the scene. At least those are my immediate curiosities. Shaders because ima stylized boi, and shaders are how I accomplish that even when models are originally made in a more photorealistic manner. Otherwise, seeing how drastic changes in light and fog density mix with the tech would truly be an awesome little demo to see
I could definitely see this getting momentum for capture; way more practical than lightfields. But seems like an uphill battle for real-time uses. The lighting may be dynamic, but those dynamics are baked. And I'm guessing there is a lack of frustum culling, and all the particles need to be sorted to properly alpha blend?
It occurs to me that pairing splatting with traditional modeling in video games could be the wave of the future, if the splats are restricted to distant background objects and detailed foreground objects are replaced by LOD-correct alternatives as the player's avatar approaches.
I was thinking about working on a Godot implementation, but because instanced rendering (with MultiMesh) can't be culled on a per-mesh basis, I was uncertain whether it was achievable with decent performance. But I'd like to hear from more experienced Godot developers, cause I'm a noob. The reference rendering implementation is also fairly complicated, using CUDA to efficiently order the splats for rendering, which doesn't translate well into the Godot renderer to begin with.
One could use compute shaders to order the splats, but I don't think Godot allows you to perform compute operations on the main RenderingDevice, and there's no way to share buffers between devices without transferring the data to the CPU and then to the other RD, which would be slow.
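For anyone wondering what that ordering step actually does: the reference renderer sorts splats by depth so semi-transparent blobs composite correctly. Here's a minimal CPU sketch in Python (with made-up splat positions, not the reference implementation) of the back-to-front sort the CUDA kernel performs:

```python
import numpy as np

def sort_splats_back_to_front(positions, camera_pos):
    """Order splat centers by distance from the camera, farthest first,
    so alpha blending composites correctly."""
    # Squared distance from each splat center to the camera.
    d2 = np.sum((positions - camera_pos) ** 2, axis=1)
    order = np.argsort(-d2)  # descending: farthest splats come first
    return positions[order], order

# Toy example: three splats along the z-axis, camera at the origin.
splats = np.array([[0.0, 0.0, 1.0],
                   [0.0, 0.0, 3.0],
                   [0.0, 0.0, 2.0]])
ordered, idx = sort_splats_back_to_front(splats, np.array([0.0, 0.0, 0.0]))
print(idx)  # farthest splat (z=3) sorts first: [1 2 0]
```

Doing this every frame for millions of splats is exactly why the reference implementation resorts to a GPU radix sort rather than a CPU pass like this.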
It's like a smart interpolation between different cameras. But this exact feature is also its disadvantage, as it's quite hard to remove the captured lighting from the source to create usable game assets. You have stretched splats which also contain the lighting (and in this case, only the lighting as you captured it). Depending on what angle you look from, they are stretched differently and are thus able to shade your scene correctly, but those "lighting splats" belong to your scene in the very same way as solid objects. It's probably possible to make it work, but you will have a hard time removing the lighting from them (which is needed if you want to combine multiple assets and/or different scans). It's amazing tech, but my guess is that it's rather useful for capturing true 3D photos for personal use cases.
It's a point cloud, unstructured. Just bucket the splats into voxels and traverse the voxels in order from the camera, then pass the ordered data to the renderer.
@@Polygarden we aren't talking about a mesh; it's not representing surfaces but a lightfield. That's why a traversal using voxels makes sense as a lightfield query. We aren't trying to reconstruct volume.
I think the best way to make use of this 3D Gaussian Splatting, is to be integrated with AR Glasses, and combine animated Characters into the scene. That way it will seem like a Virtual Holodeck of Star Trek. Maybe can use a kind of camera tracking feature of Blender or Unreal, and such to make it possible to put moving characters in a story like setting. Just my thoughts! Terry
@@altongames1787hm? The idea is that you could take some other rendering method that *can’t be done in real time*, and use it to create a Gaussian splatting scene which *can* be rendered in real-time. (People have tried this. It seems to work pretty well!)
Gaussian Splatting seems to be just image mapping onto feathered particles. The texture of each blob changes according to the viewing angle, picking the original photo that best matches the angle, or a close angle that does not block the view to each splat. The file size must be quite big compared to traditional, static texturing.
Seems like it’s just a matter of generating enough point clouds and pairing them up with descriptions before we can make generative models that create new scenes based purely on a description.
Looks amazing and practical. If they can use some NN on it to remove unnecessary particles and actually convert and split out objects, that would be the endgame, I believe.
The spiky bits of light make it look exactly like when scenes load in Assassin's Creed... which was basically loading into a sim in the context of the game. Crazy.
this could be a thing in the future but will only prosper in static 3D space viewing, since 3D polygons are much more practical in the dynamic real-time environments that games need. I just hope the tech to stream humongous files faster than what we have comes sooner
thanks for sharing. I saw this weeks ago in another repo, sorry for not mentioning it. I want to implement a Unity volume point editor similar to the Nvidia workflow. It would also be great to implement a delighting tool and neural decompression algorithms (long-term project goal), inshallah
12:25 to be fair, you can do with much less if you don't require as high quality. Also, it's pretty likely a lot of speed can be traded for lower VRAM usage. To train to a reasonable 7000 iterations you could probably get away with way less VRAM, and according to their own calculations it should be possible to train to reference-paper quality with just 8 GB of VRAM, but that hasn't been implemented.
It looks very interesting. It will be very interesting to see whether game developers choose to invest the time needed to properly fake photorealism through Gaussian splatting, achieving high fidelity at a very low processing cost, or whether they will just use path-tracing methods, which will be a lot more convenient but also a lot heavier on the user's hardware.
This reminds me of the system that is used in Dreams by Media Molecule. I know that it's PlayStation specific, but maybe if there is enough push we can get Dreams onto the PC eventually.
Could you say splats are kind of like multi-shaped three-dimensional pixels doing reverse virtual 3D pixel mappings (or 3D pixel projections) based on photographic data?
No, we're not. For movies, yes. For presentations, yes. Not for games. Rendering like this is the opposite of why we have render pipelines; it's unfeasible to ask a user to download hundreds of gigabytes of data per scene. If you want to create game objects from these captures, I'm sure that can be done. But no, you won't see games made like this, ever.
nah, without animation and adjustable lighting (day/night, dynamic lights) nobody is going to change their entire pipeline, not in gamedev anyway. Cool thing for virtual museums, home tours, etc.
I think there is enough data that lighting could be implemented. That said, to mix it in with a traditional rendering pipeline, you'd end up with two lighting paths and that wouldn't be ideal.
This came out like 3 months ago, there’s also an experiment that showed that you CAN change the lighting. I’ve also seen someone implement animations in augmented reality and displayed on an iPhone. That’s just in 3 months of progress. Think of GPT 3 in 2020 vs GPT 3.5 2022 vs GPT 4 today.
Possibly a similar idea for use in gaming, but a lot simpler, is to replace high-polygon models with a basic box outline shape and just update the quads (2 triangles) on each face with angle-adjusted photos depending on the player's view angle. This has been done recently in the new Ultra Engine, but I would use a more perfected method so you don't see any clipping between photos and it's totally smooth. Maybe it's as simple as making a detailed sprite sheet and a shader to smoothly combine and move between images? If a view direction is between 2 image angles, the shader creates the correct image based on the 2 closest ones... I think some smart people could achieve this.
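The selection step that comment describes could be sketched like this (Python, with hypothetical capture angles; the real blending would happen in a shader): find the two captured photo angles bracketing the current view and compute a linear blend weight between them:

```python
import bisect

def pick_blend(view_angle, photo_angles):
    """Given angles (degrees) at which photos were captured, return the two
    nearest photo indices (into the sorted list) and a blend factor t."""
    angles = sorted(photo_angles)
    i = bisect.bisect_left(angles, view_angle)
    if i == 0:
        return 0, 0, 0.0              # before the first photo: no blend
    if i == len(angles):
        return i - 1, i - 1, 0.0      # past the last photo: no blend
    lo, hi = angles[i - 1], angles[i]
    t = (view_angle - lo) / (hi - lo)  # 0 -> use photo lo, 1 -> use photo hi
    return i - 1, i, t

# Photos captured every 45 degrees; viewer currently at 30 degrees.
lo_idx, hi_idx, t = pick_blend(30.0, [0, 45, 90, 135])
# Blends photos 0 and 1, weighted 2/3 toward photo 1.
```

A shader would then sample both photos and `mix()` them with `t`; wrap-around at 360 degrees is left out of this sketch for brevity.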
It's looking great, but it also means things I'd rather see die will survive and look even better in the future. Just imagine Metaverse + Nanite + Gaussian splats + VR... Hell awaits.
I wonder if this tech could be used as a reference template in 3D modeling programs to model real world environments instead of using 2D reference images. It will be extremely useful to model on top of it since it gives real world scale and helps prototyping the basic shapes and scale of the environment. I'm not keeping up with 3D tech for modeling lately so I'm not sure if there is already a similar solution out there.
Sim racing and golf games, for example, have been doing this for at least a decade: using laser-scanned point clouds as 3D reference to make a polygonal version of a real-world location. If this splat method ends up producing results at least as accurate as lidar, but substantially cheaper and faster, then surely studios and modders will be all in, but again, to use it as a reference.
Folks, the real use here is cinematographers using real-life backgrounds in an Unreal volume. As the film “The Creator” proves, real-life backgrounds are what really matter. This allows real-life backgrounds to be used in volume filmmaking! Real-time NeRF backgrounds are the future of volume filmmaking! 🤯😎👍
It's slow because it uses the particle system to render. It's the same with the Unreal Engine one: each splat is rendered using Niagara, and Niagara, even on GPU, gets really slow when rendering point clouds or particle systems above 1 million points.
@@vitordelima Original demo of what? Made by whom? If you watched the video, the guy even confirms it's using the particle system to render the points, and this scene is over 3 million points. Why did you upvote yourself?
I wonder if you can cache them, though. I’ve had quite a few intense Niagara systems running in a level, but I cached them and they worked well, this for video production. Not sure how it would work in game dev
The reason this doesn't work for games (yet) is that you can't clean it up. Unless I am missing something, this isn't geo, and you will have a very small space.
One thing I'm not sure I understand is whether you could combine scenes, or how it handles reflections other than by creating a mirror-universe room. Like if you have two mirrors back to back.
@@ScibbieGames Makes sense, but it does make me wonder: what does it do if you record a scene where there’s a mirror that isn’t against a wall, and (in the training footage) have the camera go around the mirror? How will the quality of reflections in this case compare to when the mirror is up against a wall and it can use the “treat the mirror like a portal” trick?
@@drdca8263 the Gaussians have directional color, so a backfaced mirror will probably duplicate the color of the background where they are not seen. But that's a nice observation. Remember these don't represent surfaces and volumes but light rays.
@@NeoShameMan Sorry, I don’t think I understand quite what you mean by “duplicate the color of the background where they are not seen”. ... also, in order to represent occlusion, don’t the splats kinda sorta also have to represent volumes? That’s what the opacity value handles, isn’t it? Edit: to be clear, I do anticipate it still working somewhat when it can’t make things such that you can “go into the mirror” on account of there being views in the training footage at the locations that “going into the mirror” would take you, and which don’t look like what the mirror world would look like there, I’m just expecting that the quality would probably be somewhat lower, and wondering by how much.
@@drdca8263 they use spherical harmonics, i.e. directional colors. These are used in games with light probes to illuminate a scene. They don't represent volume; they represent light rays from the source images. Basically, it's like each one is a fuzzy, blurry cubemap, and the overlap of all of them reconstructs the view of a source image. It's like you took the 2D pixels of the source image, moved each to where it's most probable, and merged many pixels at the same place into a cubemap.
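For the curious, evaluating those directional colors looks roughly like this. A Python sketch with made-up degree-1 coefficients (the actual 3DGS implementation goes up to degree 3, but the idea is the same: a small basis weighted by the view direction):

```python
import numpy as np

# Real-valued spherical-harmonic basis constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3) / (2 * sqrt(pi))

def eval_sh_color(coeffs, direction):
    """Evaluate a degree-1 spherical-harmonic color for a view direction.

    coeffs: (4, 3) array, one RGB coefficient per SH basis function.
    direction: unit vector from the splat toward the camera.
    """
    x, y, z = direction
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    # The reference implementation adds a 0.5 offset after evaluation.
    return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)

# A hypothetical splat: reddish when seen from +z, greenish from -z.
coeffs = np.zeros((4, 3))
coeffs[0] = [0.5, 0.5, 0.5]    # base, view-independent color term
coeffs[2] = [0.4, -0.4, 0.0]   # varies with the z component of the view dir
front = eval_sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
back = eval_sh_color(coeffs, np.array([0.0, 0.0, -1.0]))
```

So one splat really does carry a tiny "directional color map", which is how view-dependent effects like glossy highlights survive without any geometry for the reflection.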
I think a really interesting real-world use case would be Google Maps. Google Maps is already kinda terrible up close... so they can't make it worse! haha. But seriously, they already have a large number of photos, and it would probably work 10 times better for lots of scenes; it's "just a data processing issue"... ish ;)
If that were available in Godot, I would immediately implement it in my project; I have at least two really cool ideas for how to use this tech. I wonder if the splatting could be switched from Gaussian to other methods of splatting, for example... paint-brush splatting... ifkywim...
I was wondering: since these are just pictures used to create the point cloud data, why couldn't you set up a scene or model with many cameras (or just one that flies around the scene like a drone) and use that data to create the point cloud? Would it be economical? I don't know, I'm just curious if it's possible. Like, would there be a noticeable difference in data size between a Gaussian splat model and traditional polygons? Also, didn't the Sony game Dreams do something similar?
Is it even possible to apply lighting to this? I don't understand much about photogrammetry, but it seems pretty much immutable, e.g. you can't move objects, can't alter lighting and so on, which greatly reduces the applications for this
Yes and no, I believe. Yes, in that you have positional and color data and it's being rendered in real time by the GPU; you could certainly implement virtual lighting (assuming you are much, much better at math than I am). No, not in a traditional pipeline, like in Unity. This isn't rendered alongside the rest of the scene as I understand it, more in parallel, so if you added a light in the Unity scene, nothing would happen to the Gaussian splat you've imported. I do think you could make it work, but you'd essentially have two parallel lighting paths (I think).
The colors and reflections are stored inside "spherical harmonics", which hold the color value when looked at from different angles. You could technically, probably, somehow bake the lighting from your scene into these harmonics, but that would still be immutable. Doing that in real time, for some millions of points, might be a bit much; perhaps you'd need some sort of fragment shader, but for splats. lol
@@ScibbieGames Surfels, which are similar to this, were used for realtime global illumination over triangles in the past. Still it would require the use of lighting probes around clusters of splats or other simplification.
I can't see this being used in games for anything but hyper-casual games on PC (of which there are not many). Maybe in fighting games, due to having very few meshes being rendered? But even there, I don't think the performance is there.
Bro, your computer chugs on everything. Most people can probably get 60 fps 😂 UNF did a tutorial a while back called "make a full game in 2 hrs" (or something similar) and used one of the free monthly cities. He was chugging too, but then he went into the properties of the UE editor, changed a couple of settings, and it was perfect. I don't remember what he changed, but there's definitely a fix. Looks like I found a new use for my drone though. Thanks!
I've been reading about this for a while and it looks like real innovation. Unlike real-time ray tracing, something extremely taxing on the GPU that was pretty much forced down our throats by a company out of ideas on how to charge thousands of dollars for products that aren't worth half their asking price in real performance.
@@poetryflynn3712 I know, I even read a few of the scientific papers about it. It's really interesting stuff. The problem was Nvidia forcing the technology onto the consumer well before it was ready, as this "crazy new thing that totally makes new GPUs worth double last gen", even though we're now 3 generations in and most of their lineup can't even handle it without upscaling (which was essentially invented because games could barely reach 60 fps on flagships when using ray tracing at the time. Remember the RTX 20 series?). I'd say it'll be some 4~5 more generations before real-time ray tracing becomes viable without upscaling. Nvidia (and AMD, which does very little besides copy its competitor) should be focusing on working more closely with developers to better optimize current games, so they don't suck up 32 gigs of RAM all the time and need 200 GB of storage to run. But no, that's not flashy enough to sell thousand-dollar giant pieces of inefficient heatsink.
@@poetryflynn3712 and Gaussian splatting is in a similar boat, except that the firepower has already been there for a while, just that it wasn't used for consumer applications until just now. Usually it was used for mapping out stuff like CT scans, and the original paper from 1993 ( web.cse.ohio-state.edu/~crawfis.3/Publications/Textured_Splats93.pdf ) used it to map out wind and clouds
Real time ray tracing is not something that was forced down your throat by a company out of ideas, it's a technology that has been promised and pursued for decades and will continue to be pursued for a while yet. What you have now is barely scratching the surface.
I want to see this in a program like Blender... because even though the real-time aspect isn't strictly necessary there, having the meshing not take literal hours would be nice.
Apart from backplates/matte paintings/skyboxes, I don't really see much use for it in games. It's static, no collisions, no interaction, etc. I'm not sure, but at least it seems like a one-off, static, single-use type of thing?
Even easier just to rough out the outline with an invisible mesh and use that for collision detection. The main issues with interactions would be lighting and destruction of the model; those interactions would have a much harder time looking "right". @@vitordelima
What a time to be alive!
Get your papers fellow scholars
The industrial revolution was indeed a good time.
Holding my papers tightly
Imagine where we'll be two more papers down the line
@@JohnDavid888 Thank you for the insight! It seems that, for now, it's not suitable for professional use, but I suppose it's going to evolve.
Bro your channel is unlike anything else, I appreciate you moving like a madman on getting these videos put out for game devs and artists. Thanks!
Start with the Aras rundown before jumping into the paper, it's a really elegant TL;DR summary.
@@gamefromscratch thanks I will. Will need to read that paper either way tho
NeRF seems to have something like this already and maybe it can be adapted to Gaussian splatting.
I feel like the perfect usage for this tech is Google Street View (especially in VR). You don't need dynamic lighting and objects, target object details are important, and having a big open world is not a requirement. I wonder if it's possible to have multiple scenes using splats and smoothly transition between them as the camera moves, to make road movement in Street View less weird than it currently is.
I could see this being used in parallel with traditional methods: use this method to render specific static models that are high in detail, while other parts of the game world, especially dynamic parts that are animated, stay as-is with polygons + textures.
It can be animated by transforming the particles the same way it's done to vertices in regular models for example.
@@vitordelima yeah, I don't see why you couldn't at least do simple animations on the models (though considering the high point density that's probably required, maybe it would be best to only use simple N64-style animation, with the different parts of the model being separate, and not full-on skeletal animation for now)
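To illustrate what @vitordelima is suggesting: a rough sketch of transforming splat "particles" the way vertices are transformed in regular models. This is a hypothetical illustration (names and storage layout are assumptions), treating each splat as a mean position plus a covariance matrix, so a rigid transform moves the centers and rotates the shapes:

```python
import numpy as np

def transform_splats(means, covariances, R, t):
    """Apply a rigid transform (rotation R, translation t) to Gaussian splats.

    means: (N, 3) splat centers; covariances: (N, 3, 3) splat shapes.
    Centers move like vertices; each covariance rotates as R @ C @ R.T.
    """
    new_means = means @ R.T + t
    new_covs = np.einsum('ij,njk,lk->nil', R, covariances, R)
    return new_means, new_covs

# Example: rotate one isotropic splat 90 degrees about Z and shift it in X.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([1.0, 0.0, 0.0])
means = np.array([[1.0, 0.0, 0.0]])
covs = np.eye(3)[None, :, :]   # one isotropic (spherical) splat
m2, c2 = transform_splats(means, covs, R, t)
# m2[0] is approximately [1, 1, 0]; an isotropic covariance is unchanged
```

Skinning would then be a per-splat weighted blend of bone transforms, exactly like vertex skinning, which is why simple rigid-part (N64-style) animation is the easy case.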
@@uusfiyeyh I'm sure it's a hurdle we'll eventually overcome. Just like in early 3D games, where all shadows and depth were baked into the diffuse texture, and only later did we get stuff like bump and normal maps. But yeah, for now it is more of an archvis/survey tool than actually useful for gamedev.
And it could be useful for turning a complex, highly detailed raytraced scene into something that could be played on much more normal hardware, maybe with some level of reactivity added in to allow the stuff that can move to interact with it
So far it seems like these scenes are basically one big thing that you import into your project. To make it usable for a proper game, I'm imagining a future where you have individual models composed of splats (for example, a bike or house) which can then be imported into a larger scene. However, the problem with that is that these splats seem to have their lighting baked in. If you moved the bike into a different scene with different lighting, it would look really out of place.
I find it hard to imagine that this would ever take over and replace polygonal rendering.
Oh, but it's possible. Just not yet. You're forgetting this has just been released without optimization, which the authors themselves acknowledge, and there's no hardware tuned to this yet. Polygon rendering was created on computers around the 60s, and it took a few decades until we had hardware tuned for polygons, and the tuning never stopped. The first GPU tuned for this will likely be 3x the current performance (the first step always has the biggest gains). There may be some algorithm possible to make the blobs shift for lighting, but even if not, you forget how software development, and games in particular, have a history of using tricks... For example, they may use 'invisible polygons' (simpler and with no texture) for collision, and someone might also come up with a way to use said polygons to inform the lighting. So the first iteration would be horrible (splats AND polygons, heavy), but then they update the methods, find shortcuts between splat data and polygons, need simpler polygons, GPUs get in... In the past few decades we've had what, 4 different named anti-aliasing methods, lighting methods and so on; not only did each approach improve, so did the GPUs tuned for the tricks used, and then the support software around it too (like DirectX, Vulkan etc)...
...and something like the above would be possible without adding AI to the mix. Now with AI? The same way AI is being trained for upscaling and frame generation, AI on the GPU, maybe even with a dedicated chipset, could be trained to deduce and recreate light and shadows from splats on the fly. And we're talking today's tech only; who knows what new breakthroughs we'll have in hardware or neural networks. All the fast pace we've been seeing has come with zero new milestone improvements (hardware-wise). Just the other day I read that Intel is tinkering with glass for chipmaking that could break a current physical barrier for computing
I 'predicted' current-gen AI like a decade ago when I first read about neural networks at uni, which at the time was far, far away from anything usable. I'm no seer, and I'm far from the only one. You just need a bit more imagination to extrapolate the likely path of current-gen tech; the entire industry does it. The only unknown is how long it will take and how exactly, but very close approximations are very easy.
And I don't think this will take a decade to come up. It may, we can never know, but besides the current pace with AI and GPU tuning via AI, we have to remember that NeRFs and splat tech like this are already about a decade old...
Heck, I just realized the kind of trick Nanite pulled for polygons could come up for splats too: something between algorithms, LODs and AI around blob/splat density, so it could have higher resolution (more points) than the examples in the video for things seen up close, but dynamically lower the density in the background based on distance to camera and all... The more I think about it, the more possible approaches come up. You just wait: academia will be all over this with students trying different stuff, and as soon as the first GPU or drivers are tuned for it, you bet game devs will give it a spin too, the folks at Unreal and Unity definitely... Heck, the way Nvidia is, the moment they saw this, some calls were probably made for a new team to toy with it
Surely that lighting problem can be figured out. Just needs time for people to figure it out.
Hacky solutions always lead to a ton of soul-crushing cleanups afterwards.
Professionals found this out the hard and painful way.
Wow, such pessimism
@@Teodosin not pessimistic, just being a realist.
It's easy to be overly optimistic if you're an amateur with zero knowledge of how things work.
Also, people who use this kind of hacky technique don't give a damn about art direction; that kind of thing is just something that gets in their way and should be eliminated.
Which will result in generic games.
At this rate, you should just write a prompt to make a full game for you.
Why bother creating a model or writing a single line of code?
You already generated the model, so why stop halfway? Go generate the whole dang game.
I really want to see applications using Gaussian splatting in VR in something like the quest3. That needs to happen!
It is pretty much Unreal's Nanite tech but at a point cloud level instead of polygonal. The biggest issue is that at larger depth you get a noisy dithering effect, which, especially in VR rendered at 90+ fps in real time, can cause nausea... amazing for still scenery but not so much for motion.
I think the quest 3 would probably have trouble with the polygon count, the render distance might have to be pretty low.
I assume you are referring to being able to navigate a real-world environment. But that's not really the use case. The key issue is the way the point clouds are generated: they are generated *around* an object, which is what enables recreating the object in 3D. For recreating environments you need the inverse, which this isn't. You can see the issues in the video when Mike ventures just a few yards from the bike.
The XR2 is more about texturing than geometry; it's the opposite of what it's made for. You could post-process into a low-poly model, but then you might as well use NeRF or traditional photogrammetry.
@@pixelfairy Yeah, they're kind of a package deal
Absolutely jaw dropping technology. So many practical applications for simpler photogrammetry type tech; virtual museum walkthroughs, interior building walkthroughs like google maps but indoors, even maybe self scans to send to a telemedicine doctor. Just endlessly cool possibilities!
For games however, I'm honestly not excited about this. Graphics with even just a pinch of intentionally designed style are much more immersive to me than just playing in a perfectly representative world. I already spend every day IRL, show me something NEW! 😁
Ok.... what about a Wallace and Gromit or Fraggle Rock style world, but physically modeled and then captured with Gaussian splatting? Or old-style stop-motion Ray Harryhausen worlds, but scanned and playable! ;)
Although honestly a traditional pure CG workflow would probably still be cheaper and more effective.
@@gamefromscratch Clayfighter 2023?! I love it! 😆
@@gamefromscratch Actually, youtuber Olli Huttunen did a really cool test where he took a 3D model made in Blender and converted it to a 3D Gaussian splat. You absolutely can mix and match. The splat is built from a sequence of pictures; technically speaking, you could animate a flythrough of a room on paper, scan it, send it to a computer and have a 3D splat of that.... if you are insane enough, that is.
Also.... I think there is high potential for someone eventually just straight up building a sculpting/painting tool for this, in the vein of Quill.
This is absolutely GIANT thing for games.
The Gaussian splat doesn’t have to use point clouds or photos and could be the actual rendering engine for 3D games. The biggest improvement here over traditional pipelines is the *massive* reduction in computing while maintaining (and arguably improving) visual quality. Imagine this being used for a next-gen version of Dreams that can be played on the Quest 2.
The interesting thing to me is that this tech looks familiar: when I was looking at companies in the VR/AR space, back when "light fields" were causing buzz through companies like Magic Leap, I found an obscure company named "Euclideon". Their idea was using point clouds (the "light fields") for VR, and they liked showing off the detail. This seems to be a much-improved evolution of that.
This really reminds me of dreams. Very detailed in specific areas, but when you try to remember non-focal elements it just isn't there. This would be a great artistic rendition of dreams!
Interestingly, when I used to have OBE's, I experimented with focusing on distant details and things were grainy, not unlike this.
This technology has a lot in common with the game/engine called Dreams on Playstation.
Agree, looks similar to how Flecks are rendered in Bubblebath engine (Media Molecule's Dreams, Playstation).
I always wondered how the rendering in that game worked!@@bgrz
I was thinking of Dreams on the PS4 too
Mike your channel is awesome and thank you for covering all the different stuff you do!
i rarely comment on the quality and content of videos, but i had to come here to congratulate this channel's creator on the good rhythm, pacing and clarity of pronunciation on a technical topic.
I think we will see this sooner in more static-ish applications like real-estate virtual tours. Maybe some experimental games that happen in a static-ish environment, like a single house. But who knows, maybe it will be game-production-ready in a year.
I can imagine someone pulling the PS1-era tricks with models running over these at fixed-camera angles to produce a scene. It would be a highly unorthodox workflow, though, and quite pricey at that.
it works as blobs in 3D space, just like volumetric clouds or fire; no need for any "PS1-era tricks", it can coexist with standard mesh rendering no problem
What we need is for someone to make a Gaussian Splatting modeling program.
Why model when you can simply create that object in real life and scan it in?
Finally someone explained all this cool new tech in a way i understand.
what a time to be empathic!
Last time I saw graphics like this, they said "these are grains of dirt".
This tech can be a great alternative to traditional rasterization for backgrounds in interactive story games like 'Her Story'..
I’ve been following since Luma AI got NeRF going, and it's amazing what we can get just by recording with our phones. Hope they can produce a lighter player soon. 🔥 Thanks for the teaching on high-end terms 🚀
Point cloud approaches are definitely very cool. And this one is even more pretty. Though they always come with a couple of downsides when it comes to lighting, animation, etc.
Pretty cool tech, but I do wonder how it would play with different shaders, fog effects, and lights we control in the scene. At least these are my immediate curiosities. Shaders 'cause ima stylized boi, and shaders are how I accomplish this even when models are originally made in a more photo-realistic manner. Otherwise, seeing how drastic changes in lights and fog density mix with the tech would truly be an awesome little demo to see
SH stands for Spherical Harmonics: they store the precomputed, view-dependent lighting.
I could definitely see this getting momentum for capture; way more practical than lightfields.
But it seems like an uphill battle for real-time uses. The lighting may look dynamic, but those dynamics are baked in. And I'm guessing there is no frustum culling, and all the particles need to be sorted to alpha-blend properly?
It occurs to me that pairing splatting with traditional modeling in video games could be the wave of the future, if the splats are restricted to distant background objects and detailed foreground objects are replaced by LOD-correct alternatives as the player's avatar approaches.
Basically Maya's Paint Effects applied to a photogrammetry-derived point cloud. Ingenious!
I was thinking about working on a Godot implementation but because instanced rendering (with MultiMesh) can't be culled on a per mesh basis I was uncertain of whether it was achievable with decent performance. But I'd like to hear from more experienced Godot developers, cause I'm a noob.
The reference rendering implementation is also a fairly complicated one that uses CUDA to efficiently order the splats for rendering, which doesn't translate well into the Godot renderer to begin with.
Some renderers use transparent quads aligned with the viewer (similar to Doom's enemies and items) to render this instead.
One could use compute shaders to order the splats, but I don't think Godot allows you to perform compute operations on the main RenderingDevice, and there's no way to share buffers between devices without transferring the data to the CPU and then to the other RD, which would be slow.
It's like a smart interpolation between different cameras. But this exact feature is also its disadvantage, as it's quite hard to remove the captured lighting from the source to create usable game assets. You have stretched splats which also contain the lighting (and in this case, only the lighting as you captured it). Depending on what angle you look from, they are stretched differently, and as such are able to shade your scene correctly, but the "lighting splats" belong to your scene in the very same way as solid objects. It is probably possible to make it work, but you will have a hard time removing the lighting from them (and this is needed if you want to combine multiple assets and/or different scans). It's an amazing tech, but my guess is that it's rather useful for capturing true 3D photos for personal use cases.
It's a point cloud, unstructured. Just bucket the splats into voxels and traverse the voxels from the camera, then pass the ordered data to the renderer.
@@Polygarden we aren't talking about a mesh; it's not representing surfaces but a light field. That's why a traversal using voxels makes sense as a light field query. We aren't trying to reconstruct volume.
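A toy sketch of the bucketing idea described above, with hypothetical names: hash each splat into an integer voxel cell, then visit cells in order of distance from the camera so the splats come out roughly depth-ordered without a full per-splat sort:

```python
import numpy as np
from collections import defaultdict

def bucket_splats(positions, voxel_size):
    """Hash splat centers into integer voxel coordinates."""
    buckets = defaultdict(list)
    for i, p in enumerate(positions):
        key = tuple((p // voxel_size).astype(int))
        buckets[key].append(i)
    return buckets

def traverse_from_camera(buckets, cam_pos, voxel_size):
    """Yield splat indices voxel-by-voxel, nearest voxel first."""
    def voxel_dist(key):
        center = (np.array(key) + 0.5) * voxel_size
        return np.linalg.norm(center - cam_pos)
    for key in sorted(buckets, key=voxel_dist):
        yield from buckets[key]

pos = np.array([[0.2, 0.2, 0.2], [5.1, 0.1, 0.1], [0.3, 0.1, 0.4]])
b = bucket_splats(pos, voxel_size=1.0)
print(list(traverse_from_camera(b, np.zeros(3), 1.0)))  # [0, 2, 1]
```

The tradeoff: ordering is only approximate within a voxel, so alpha blending can pop slightly, which is presumably why the reference implementation does a full GPU sort instead.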
This is great for filmmaking.
God, it looks like a dream when you get to the periphery of the scan
I'm amazed at how it captures specular lighting. It's quite visible on the roof of the church.
This reminds me of how Media Molecule's "Dreams" works.
Kind of reminds me of the landscape rendering technique from the old Ecstatica games.
I think the best way to make use of 3D Gaussian splatting is to integrate it with AR glasses and combine animated characters into the scene. That way it will seem like a virtual Holodeck from Star Trek. Maybe the camera-tracking features of Blender or Unreal could make it possible to put moving characters into a story-like setting. Just my thoughts! Terry
Could this be used to bake extremely high-fidelity 3D scenes into point clouds and then splat them at runtime?
I understand what you mean, but why would you want something less performant?
@@altongames1787 hm? The idea is that you could take some other rendering method that *can't be done in real time*, and use it to create a Gaussian splatting scene which *can* be rendered in real time.
(People have tried this. It seems to work pretty well!)
@@altongames1787 800 fps is quite performant in my opinion
Gaussian splatting seems to be just image mapping onto feathered particles. The texture of each blob changes according to the viewing angle, picking the original photo that best matches the angle, or a close angle that does not block the view to each splat. The file size must be quite big compared to traditional, static texturing.
This will be crazy if you mix this tech with Google maps/earth data
M1 Max has a 24Core and a 32Core GPU. Just ordered one for mix of mobile/hobby game development. Continuing to watch the video..
Seems like it’s just a matter of generating enough point clouds and pairing them up with descriptions before we can make generative models that create new scenes just from a description.
Looks amazing and practical. If they can use some NN on it to remove unnecessary particles and actually convert and split objects, that would be the end I believe.
It works well on mobile too! interesting
Gaussian splatting reminds me of the rendering engine in dreams
The spiky bits of light make it look exactly like when scenes load in Assassin's Creed... which was basically loading into a sim in the context of the game. Crazy.
i feel like this would be good if you could separate each prop, somehow tie the dots to a dummy mesh, and then decorate the level
Wow, that reminds me so much of the Virtues in Cyberpunk
this could be a thing in the future, but it will only prosper for static 3D space viewing, since 3D polygons are much more practical in the dynamic real-time environments games need. I just hope the tech to stream humongous file sizes faster than what we have comes sooner
thanks for sharing, I saw this weeks ago in another repo, sorry for not mentioning it..
I want to implement a Unity volume point editor similar to the Nvidia workflow; it would also be great to implement a delighting tool and neural decompression algorithms. (long-term project goal) inshallah
Thanks for the video. Is it possible to export it at the end to a 3D format (fbx, obj..)?
12:25 to be fair, you can do with much less if you don't require as high a quality. Also it's pretty likely a lot of speed can be traded for lower VRAM memory usage.
To train to a reasonable 7000 iterations you could probably get away with far less VRAM, and according to their own calculations it should be possible to train to reference-paper quality with just 8 GB of VRAM, but that hasn't been implemented.
And the spherical harmonics seem to be overkill for something that is mostly just doing specular reflections.
To answer the question posed in the thumbnail: unlikely
In a way, Gaussian Splats are like very large atoms - they come together to make everything in the scene.
Now I just need a Unity iOS plugin to create these models in real-time :)
Can't really imagine a use in games. But for VR chatrooms, real estate bureaus, or VR sightseeing, sure.
i wonder if one could use nerf to build the point cloud to use for splatting...
It looks very interesting. It will be fascinating to see whether game developers choose to invest the time needed to properly fake photorealism through Gaussian splatting, achieving high fidelity at a very low processing cost, or whether they will just use path-tracing methods, which are a lot more convenient but also a lot heavier on the user's hardware.
This is the future of Google Street View in 3d!
just waiting for this to become standard monitor tech
This reminds me of the system that is used in Dreams by Media Molecule.
I know that its Playstation specific but, maybe if there is enough push we can get Dreams onto the PC eventually.
Can you say splats are kind of like multi-shaped, 3-dimensional pixels doing reverse virtual 3D pixel mappings (or 3D pixel projections) based on photographic data?
We are 100% gonna see something along these lines for future rendering in games
No we're not.
For movies yes
For presentations yes
Not for games
Rendering like this is the opposite of why we have render pipelines
It's unfeasible to ask a user to download hundreds of gigabytes of data per scene
If you want to create game objects from these captures, then I'm sure that can be done.
But no
You won't see games made like this ever
I’d give it 1% tops for games specifically
I just want to use it on non-denoised raytracing output frames, with a depth pass for spatial data. Just let me see if it works...
Has anyone done this with the Zapruder film yet?
I use the Luma AI plugin for unreal engine. Super easy to use and free.😊
Novalogic omg. That's a name I haven't heard in a long time.
nah, without animation and adjustable lighting (day/night, dynamic lights), nobody is going to change their entire pipeline, not in gamedev anyway. Cool thing for virtual museums, home tours etc
I think there is enough data that lighting could be implemented. That said, to mix it in with a traditional rendering pipeline, you'd end up with two lighting paths and that wouldn't be ideal.
This came out like 3 months ago, and there's also an experiment showing that you CAN change the lighting. I've also seen someone implement animations in augmented reality, displayed on an iPhone. That's just 3 months of progress. Think of GPT-3 in 2020 vs GPT-3.5 in 2022 vs GPT-4 today.
@@euden_yt and just like that, 4D Gaussian splatting dropped lol
I wonder if you could map a scene even faster by just taping a high framerate 3d camera to a pole on your back
A 360 camera is enough
That reminds me of how the models work in Dreams.
A possibly similar idea for use in gaming, but a lot simpler, is to replace high-polygon models with a basic box outline shape and just update the quads (2 triangles) on each face with angle-adjusted photos depending on the player's view angle. This has been done recently in the new Ultra Engine, but I would use a more perfected method so you don't see any clipping between photos and it's totally smooth. Maybe it's as simple as making a detailed sprite sheet and a shader to smoothly combine and move between images...?
If a view direction is between 2 image angles, get the shader to create the correct image based on the 2 closest ones... I think some smart people could achieve this.
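The "blend between the 2 closest images" idea above can be sketched roughly like this. This is a hypothetical weighting scheme (all names are made up, not from any engine): score each captured reference direction by cosine similarity to the current view, take the two best, and normalize their scores into blend weights:

```python
import numpy as np

def view_blend_weights(view_dir, ref_dirs):
    """Pick the two reference directions closest to view_dir and return
    their indices plus normalized blend weights (hypothetical scheme).

    ref_dirs: (N, 3) unit vectors, one per captured photo angle.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    sims = ref_dirs @ view_dir            # cosine similarity per reference
    i, j = np.argsort(-sims)[:2]          # indices of the two closest
    wi, wj = sims[i], sims[j]
    total = wi + wj
    return (i, j), (wi / total, wj / total)

refs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
idx, w = view_blend_weights(np.array([1.0, 1.0, 0.0]), refs)
# A view halfway between the first two references gets ~0.5 / ~0.5 weights
```

A real shader version would do the same per-pixel, cross-fading the two photo textures by these weights to avoid visible popping as the camera swings past a reference angle.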
It's looking great but it also means things I rather see die will survive and look even better in the future.
Just imagine Metaverse+Nanites+Gaussian Splats+VR... Hell awaits.
I wonder if this tech could be used as a reference template in 3D modeling programs to model real-world environments, instead of using 2D reference images. It would be extremely useful to model on top of, since it gives real-world scale and helps with prototyping the basic shapes and scale of the environment. I haven't kept up with 3D modeling tech lately, so I'm not sure if there is already a similar solution out there.
I think this is the more realistic use case - as a tool to improve current workflows.
I'm rooting for this kind of usage as well. I don't want the tech to deprive me of the fun of modeling
Sim racing and golf games, for example, have been doing this for at least a decade: using laser scanned point clouds as 3d reference to make a polygonal version of a real world location.
If this splat method ends up producing results at least as accurate as lidar, but substantially cheaper and faster, then surely studios and modders will be all in, but again, to use it as a reference.
Folks, the real use here is cinematographers using real-life backgrounds in an Unreal volume. As the film “The Creator” proves, real-life backgrounds are what really matter. This allows real-life backgrounds to be used in volume filmmaking! Real-time NeRF backgrounds are the future of volume filmmaking! 🤯😎👍
Imagine combining this tech with what was used in the GTA 5 enhancing-photorealism demo that came out a couple of years ago...
It's slow because it uses the particle system to render. It's the same with the Unreal Engine one: each splat is rendered using Niagara, and Niagara, even on the GPU, gets really slow when rendering point clouds or particle systems above 1 million points.
The original demo uses screen space 2D rendering.
@@vitordelima Original demo of what? Made by whom? If you watched the video, the guy even confirms it's using the particle system to render the points, and this scene is over 3 million points. Why did you upvote yourself?
I wonder if you can cache them though. I've had quite a few intense Niagara systems running in a level, but I cached them and they worked well; that was for video production. Not sure how it would go in game dev.
@@MonsterJuiced Of the technology being explained in the video. I didn't, you are just insane.
Finally
The reason why this doesn't work for games (yet) is that you can't clean it up; unless I am missing something, this isn't geo, and you will have a very small space.
One thing I'm not sure I understand is whether you could combine scenes, or how it handles reflections other than creating a mirror-universe room. Like if you have two mirrors back to back.
It would show what was visible on the pictures it was 'trained' on.
@@ScibbieGames Makes sense, but it does make me wonder: what does it do if you record a scene where there's a mirror that isn't against a wall, and (in the training footage) have the camera go around the mirror? How will the quality of reflections in this case compare to when the mirror is up against a wall and it can use the "treat the mirror like a portal" trick?
@@drdca8263 the Gaussians have directional color, so a backface mirror will probably duplicate the color of the background where they are not seen. But that's a nice observation. Remember, these don't represent surface and volume but light rays.
@@NeoShameMan Sorry, I don’t think I understand quite what you mean by “duplicate the color of the background where they are not seen”.
... also, in order to represent occlusion, don’t the splats kinda sorta also have to represent volumes? That’s what the opacity value handles, isn’t it?
Edit: to be clear, I do anticipate it still working somewhat when it can't let you "go into the mirror", on account of there being views in the training footage at the locations that "going into the mirror" would correspond to, which don't look like what the mirror world would look like there.
I’m just expecting that the quality would probably be somewhat lower, and wondering by how much.
@@drdca8263 they use spherical harmonics, i.e. directional colors. These are used in games with light probes to illuminate a scene. They don't represent volume; they represent light rays from the source images. Basically, it's like they are fuzzy, blurry cubemaps, and the overlap of all of them reconstructs the view of a source image. It's like you took the 2D pixels of the source image and moved each to where it's most probable, and many pixels at the same place merged into a cubemap.
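The "directional color" described above can be made concrete with a small sketch: each splat stores spherical-harmonic coefficients, and the color you see is the SH basis evaluated in the view direction. This shows only bands 0 and 1 for brevity (the paper uses up to degree 3, i.e. 16 coefficients per channel); the constants are the standard real SH basis constants:

```python
import numpy as np

# Standard real spherical-harmonic basis constants for bands 0 and 1.
C0 = 0.28209479177387814   # 1 / (2*sqrt(pi))
C1 = 0.4886025119029199    # sqrt(3) / (2*sqrt(pi))

def eval_sh_deg1(coeffs, view_dir):
    """Evaluate a degree-1 SH color for one splat.

    coeffs: (4, 3) array - one RGB coefficient per basis function.
    view_dir: unit vector from the splat toward the camera.
    """
    x, y, z = view_dir
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    return basis @ coeffs   # (3,) RGB for this viewing direction

coeffs = np.zeros((4, 3))
coeffs[0] = [1.0, 0.5, 0.25]   # band-0 term: the view-independent base color
rgb = eval_sh_deg1(coeffs, np.array([0.0, 0.0, 1.0]))
# With only band 0 set, the color is C0 * base regardless of view direction
```

The band-1 (and higher) coefficients are what let a splat look shiny from one angle and dull from another, which is how baked speculars survive camera motion.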
I think a really interesting real-world use case would be Google Maps. Google Maps is already kinda terrible up close... so they can't make it worse! haha. But no, they already have a huge number of photos, and it would probably work 10 times better for lots of scenes; it's "just a data processing issue"... ish ;)
Hu-po has a two and a half hour YT video on all the details.
It might be a useful asset bake target.
Has anyone tried running Unity in VR mode with this plugin? In theory it should work. I'll have to give it a try.
Thanks to Aras for his contribution...
You can make a cool psychedelic game with this
"It has a price tag of $137.43." -- "Yeah, we'll not be using that today ..."
If that were available in Godot, I would immediately implement it in my project. I have at least two really cool ideas for how to use this tech. I wonder if the splatting could be switched from Gaussian to other methods of splatting, for example... paint brush splatting... ifkywim...
All we gotta do is kick down those polys and make it able to make stylized non-photo-realistic 3d models and we'll be good.
How does this handle collisions?
I was wondering: since these are just pictures used to create the point cloud data, why couldn't you set up a scene or model with many cameras (or just one that flies around the scene like a drone) and use that data to create the point cloud? Would it be economical? I don't know, I'm just curious if it's possible. Like, would there be a noticeable difference in data size between a Gaussian splat model vs traditional polygons?
Also, didn't the Sony game Dreams do something similar?
It's being done: ruclips.net/video/KriGDLvGDZI/видео.html
Not sure about Sony's Dreams, but I do have hazy memories of reading that the Blade Runner point-and-click adventure did something similar.
Is it even possible to apply lighting to this? I don't understand much about photogrammetry, but it seems pretty much immutable, e.g. you can't move objects, can't alter the lighting and so on, which greatly reduces the applications for this
Yes and no I believe.
Yes, in that you have positional and color data and it's being rendered in real-time by the GPU. You could certainly implement virtual lighting (assuming you are much much much better at math than I am).
No, in a traditional pipeline like Unity's. This isn't rendered alongside the rest of the scene as I understand it, more in parallel. So if you added a light to the Unity scene, nothing would happen to the Gaussian splat you've imported. I do think you could make it work, but you'd essentially have two parallel lighting paths (I think).
The colors and reflections are stored inside "Spherical Harmonics", they hold the color value when looked at from different angles.
You could technically, probably, somehow bake the lighting from your scene into these harmonics, but that would still be immutable.
To do that in real time, for some millions of points in space, might be a bit much; perhaps you'd need some sort of fragment shader, but for splats. lol
@@ScibbieGames Surfels, which are similar to this, were used in the past for real-time global illumination over triangles. Still, it would require the use of lighting probes around clusters of splats, or some other simplification.
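To make the relighting discussion above concrete: the simplest conceivable scheme would be naive per-splat Lambert shading. This is purely a hypothetical sketch (not how any shipping implementation works), and it assumes the genuinely hard, unsolved step of estimating a normal per splat:

```python
import numpy as np

def relight_splat(base_rgb, normal, light_dir, light_rgb, ambient=0.2):
    """Naive Lambert relighting of a single splat's base color.

    Assumes a per-splat normal has somehow been estimated, which is the
    hard part: splats represent light rays, not oriented surfaces.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(0.0, float(n @ l))   # clamped cosine term
    return np.asarray(base_rgb) * (ambient + diffuse * np.asarray(light_rgb))

# Splat facing straight up, white light from directly above:
rgb = relight_splat([1.0, 0.8, 0.6], [0, 0, 1.0], [0, 0, 1.0], [1.0, 1.0, 1.0])
# rgb is approximately [1.2, 0.96, 0.72] (ambient 0.2 + full diffuse 1.0)
```

Even if this worked, it would double-light the scene unless the captured lighting were first removed from the splats, which is exactly the delighting problem the thread keeps coming back to.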
I can't see this being used in games for anything but hyper-casual games on PC (of which there are not many). Maybe in fighting games, due to having very few meshes being rendered? But even there, I think the performance isn't there.
I can't wait for someone to figure out how to make interactive games with this stuff.
Bro, your computer chugs on everything. Most people can probably get 60fps 😂
UNF did a tutorial a while back called "make a full game in 2 hrs" (or something similar), and he used one of the free monthly cities. He was chugging too, but then he went into the properties of the UE editor, changed a couple of settings, and it was perfect. I don't remember what he changed, but there's definitely a fix.
Looks like I found a new use for my drone though. Thanks!
It's an unoptimized, experimental renderer for a generally unsupported rendering pipeline which is implemented on top of Unity.
He probably changed "virtual shadow maps" to regular shadow maps, that's a one-click triple the speed of Unreal engine button right there.
I've been reading about this for a while and it looks like real innovation. Unlike real time ray tracing, something extremely taxing on the GPU and that was pretty much forced down our throats by a company out of ideas on how to charge thousands of dollars for their products that aren't worth half of their asking price in real performance.
Ray tracing actually came from independent scientists in the 80s; the consumer firepower just needed to catch up.
@@poetryflynn3712 I know, I even read a few of the scientific papers about it. It's really interesting stuff.
The problem was Nvidia forcing the technology onto consumers well before it was ready, as this "crazy new thing that totally makes new GPUs worth double last gen", even though we're now three generations in and most of their lineup can't handle it without upscaling (which was essentially invented because games could barely reach 60 fps on flagships with ray tracing enabled at the time. Remember the RTX 20 series?).
I'd say it'll be another 4–5 generations before real-time ray tracing is viable without upscaling.
Nvidia (and AMD, which does very little besides copy its competitor) should focus on working more closely with developers to better optimize current games, so they don't suck up 32 gigs of RAM all the time and need 200 GB of storage to run.
But no, that's not flashy enough to sell thousand-dollar slabs of inefficient heatsink.
@@poetryflynn3712 And Gaussian splatting is in a similar boat, except that the firepower has been there for a while; it just wasn't used for consumer applications until now. Usually it was used for mapping out stuff like CT scans, and the original paper from 1993 ( web.cse.ohio-state.edu/~crawfis.3/Publications/Textured_Splats93.pdf ) used it to map wind and clouds.
Real-time ray tracing is not something that was forced down your throat by a company out of ideas; it's a technology that has been promised and pursued for decades and will continue to be pursued for a while yet. What you have now is barely scratching the surface.
@@philbob9638 Then hardware-accelerated real-time ray tracing was forced down everyone's throats by a company out of ideas.
"Splat" is the future.
I want to see this in a program like Blender... because even though the real-time aspect isn't strictly necessary there, having the meshing not take literal hours would be nice.
Is this something like an Unlimited Detail engine?
I don't see it going anywhere for games, just like the "unlimited detail" stuff.
I just want a tool to convert these to a polygonal mesh and textures.
Apart from backplates, matte paintings, and skyboxes, I don't really see much use for it in games? It's static: no collisions, no interaction, etc. I'm not sure, but it seems like a one-off, static, single-use kind of thing?
It doesn't seem too hard to implement collisions, because it's essentially a collection of ellipsoidal particles.
Even easier: just rough out the outline with an invisible mesh and use that for collision detection. The main issues with interaction would be lighting and destruction of the model; those interactions would have a much harder time looking "right". @@vitordelima
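For the "collection of ellipsoidal particles" idea, here's a toy sketch of a collision query that treats each splat as an ellipsoid. It's a deliberate simplification: splats are assumed axis-aligned (real splats also carry a rotation, whose inverse you'd apply to the offset first), the splat data is made up, and the query is brute force rather than using any spatial acceleration structure:

```python
import numpy as np

def point_in_ellipsoid(point, center, scales):
    """Axis-aligned ellipsoid test: sum of ((p - c) / s)^2 <= 1.
    With rotated splats, rotate (p - c) into the splat's local frame first."""
    d = (np.asarray(point) - np.asarray(center)) / np.asarray(scales)
    return float(d @ d) <= 1.0

def collides(point, splats):
    """Naive O(n) query over (center, scales) pairs.
    A real engine would cull with a BVH or a grid before testing."""
    return any(point_in_ellipsoid(point, c, s) for c, s in splats)

# Two made-up splats: a stretched one at the origin, a small one at x = 5.
splats = [((0.0, 0.0, 0.0), (2.0, 1.0, 1.0)),
          ((5.0, 0.0, 0.0), (0.5, 0.5, 0.5))]
print(collides((1.5, 0.0, 0.0), splats))  # True: inside the first ellipsoid
print(collides((3.0, 0.0, 0.0), splats))  # False: in the gap between them
```

The invisible proxy-mesh approach from the comment above would likely be cheaper in practice, since a scene can contain millions of splats and a hand-authored hull needs only a few hundred triangles.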
Great. But what about FPS future?