i am simple man, i see 2 minute paper, i click
men of culture we meet again
Indeed
It's good practice!
such a unique comment
No, sir, you are a sophisticated man.
I'm a 3D artist who started on the Amiga with Lightwave, and I currently (continue to) work with Maya and V-Ray. Nodes, lights, shaders, render, change things, render again until it looks good.
But seeing these new technologies, which seem to perform one miracle bigger than the last every few days, makes me feel like a caveman trying to understand our world.
I have no idea how to use these things. I feel like I became outdated overnight.
Nobody knows how to use any of these things.
What you do have is experience knowing when it looks right.
Learning workflows only takes a few days to a few months depending on how deep you want to get.
These things only become useful when implemented in popular software like Blender, Unity, or Unreal Engine. We just see these demos for years without them being available in any software.
I tried to recommend that you try Postshot yourself, but RUclips censored me...
If the RUclips gods allow me to explain: there are plugins out there already for Blender/Unreal/Unity. Don't hesitate to ask me more, if I'm allowed to answer...
As a fellow 3D artist of about 10 years I feel it's a futile battle to learn more 3D stuff because in a few more years CGI will likely be replaced by AI. It can already generate photorealistic renderings and animations and it's just getting started.
@@SirusStarTV If you're a game engine developer they're sometimes useful because they show what's possible and have a "recipe" on how to do it. But for someone who's in modelling, or even game dev (using a preexisting engine), not as much. Even if you had the source (UE4/Blender) and could implement the technique yourself, it's really not worth the effort.
finally the mortgage for that 8090 will be justified
Wait you don’t game on a GB200?
@@theuserofdoom Anything less than an NVIDIA Tesla H100 is for poor people. I game on a DGX SuperPod. Each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink.
@@theuserofdoom You wouldn't. A consumer graphics card would smash it at gaming.
Considering how quickly this has come along, it'll prolly be a 6090, bro.
@@honestgoat nice
bro has a comma every 3 words
🤣🤣🤣
OMG! I will never not notice that
That's way better than no punctuation at all.
The power of commas should not be underestimated, so there.
But it gives this man his charm… I enjoy hearing him talk about all these things.
This paper is actually a complete departure from Gaussian Splatting, but both of these methods create a Radiance Field. Also, the vast majority of research will be transferable between the two methods. I interviewed the first author of this paper if you want to learn more about what this method can do! ruclips.net/video/1vxn4M1fO6c/видео.html
Do they still do the ML fitting to generate the particles from the source data?
Can you tell me what the implications of this are for a not-so-smart person like me? What I want to know is: can I run path-traced games at 60 fps on a mid-range GPU like an RTX 3060/4060?
@@bmqww223 I don't study this stuff, but it doesn't exist in gaming at all. The answer to your question is pretty much no.
radiance fields are kind of magical
@@xXJeReMiAhXx99 Unreal Engine 5 and PlayCanvas both support Gaussian Splatting.
What a time to be two papers down the line!
😂😂😂
But we said that two papers earlier and we are still not there
What a time to lie down on two papers!
I keep telling both friends and family: now is the perfect time to own a tech stock. With everything going on, and seeing how the world is being run by AI and all, tech is here to stay and you don’t want to miss it.
With everything going on in the market, my advice to anyone starting out is to seek guidance; it's the best way to build long-term wealth while managing your risk and emotions with a passive investing strategy.
I took charge of my portfolio but faced losses in 2022. Realizing the need for a change, I sought advice from a fiduciary advisor. Through restructuring and diversification with dividend stocks, ETFs, Mutual funds, and REITs, my $1.2M portfolio surged, yielding an annualized gain of 28%.
Do you mind sharing info on the adviser who assisted you?
Annette Christine Conte. One of the finest portfolio managers in the field, and widely recognized. Just research the name; you'd find the necessary details to work with her and set up an appointment.
Thank you for sharing. It was easy to find her, and I scheduled a phone call with her. She seems proficient, considering her résumé.
"Two Minute Papers released a video 2 minutes ago"
You're two late.
What a time two be alive!
Great techniques soon to be used for videogames and movies with awful plots.
you forgot to add: awful triple-A games
Good thing I care more about atmosphere and vibe; I would hate to not like Dishonored just because the plot wasn't outstanding.
Sweet Baby has blacklisted you
Interactivity of radiance fields is still somewhere up in the air.
Right now, depending on the use case, 3DGS is by far the best way to represent a single object. It's fast and highly detailed if captured right and trained properly.
LMFAO
This is unbelievable. If we could get Gaussian splatting developed a bit more, to the point it can be rigged and animated, that would go so well with this new light simulation support and could make stuff like Unreal's Nanite level of detail actually available on more hardware.
Can this technology resolve object boundaries? Can you move objects around in the scene and know when they collide with each other?
@@mattmexor2882 I don't know, but I'd imagine it should be closely related to animation since it's about grouping and defining relationships between points
@@mattmexor2882 I don't think model-model collision has much to do with lighting; it'd probably just clip through, and whatever's intersecting won't influence lighting.
@@mattmexor2882 Games generally don't use the visual mesh for collisions anyway; they add their own collision boxes and capsules with simpler geometry depending on the need.
@@mattmexor2882 You could just place collider objects into the Gaussian splats that move with them, I think.
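Something like this minimal sketch, maybe (numpy; the splat layout, the group bookkeeping, and the sphere proxies are my own assumptions, nothing from the paper):

```python
import numpy as np

# Toy sketch: a rigid proxy collider attached to one group of splats.
# means: (N, 3) array of Gaussian centers; group: index array for one object's splats.
def move_group(means, group, R, t):
    """Rigidly move one splat group (rotation R, translation t) together with its collider."""
    means[group] = means[group] @ R.T + t
    return means

def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """Physics queries use the simple proxy spheres, never the splats themselves."""
    return np.linalg.norm(center_a - center_b) <= radius_a + radius_b
```

The renderer would only ever see the moved Gaussian centers; collision is resolved entirely on the proxies, which is how games already treat visual meshes.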
Research Papers: RTX ON
So step 1, fly a drone through an environment to get photos from a bunch of angles; step 2, process those images into Gaussian splat data; step 3, render a fully ray-traced clone of the environment in 3D in real time, complete with any additional 3D objects you want to add to the scene?
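Pretty much. In pseudocode, what I'm imagining is something like this (Python; run_sfm, train_gaussians, and ray_trace_frame are hypothetical stand-ins you would plug in, e.g. a COLMAP wrapper, a 3DGS trainer, and this paper's ray tracer — not a real API):

```python
from pathlib import Path

def build_and_render(photo_dir, run_sfm, train_gaussians, ray_trace_frame, camera):
    """Sketch of the capture-to-render pipeline; the three callables are placeholders."""
    photos = sorted(Path(photo_dir).glob("*.jpg"))        # step 1: drone photos from many angles
    cameras, sparse_points = run_sfm(photos)               # recover camera poses (structure from motion)
    gaussians = train_gaussians(photos, cameras,           # step 2: fit Gaussian particles to the photos
                                init_points=sparse_points)
    return ray_trace_frame(gaussians, camera)              # step 3: ray trace the reconstructed scene,
                                                           # optionally with extra 3D objects inserted
```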
I love Gaussian splatting technology. I just started creating some of my own to record memories of interesting places or things, instead of taking photos. That way I'll be able to revisit and share them later with a VR headset.
How is that done? Is there a tutorial somewhere?
@@ImpostorModanica I wanna know too
@@ImpostorModanica I'm using Scaniverse on an iPhone
@@NicoAssaf I'm using Scaniverse on an iPhone
bro living in 2077
The holy grail of graphics is always just two papers down the line!
Some 3D glasses and a bathtub away from the matrix
I work on atmospheric machine learning research; visualization of advanced particle simulations, like atmospheric particle simulation, sounds like a perfect application for this technology!
Amazing tech. Can't wait for it to become available.
Gaussian raysplatting? Splattracing?
Gauslighting
@ make it gauss… then it's perfect
Voxels to triangles: "You could not live with your own failure. Where did that bring you? Back to me." XD
What a time to be alive indeed.
Every day we get another step closer to the simulation. Never mind 2 papers down the line, where is this going in the next several decades? It's mind-blowing.
What a time to be alive !
I hope that you will cover what will be announced at Humanoids 2024 in November!
I think what these papers really need to improve the world we live in is attention. You're doing god's work.
Károly!
The funny thing is, when I tell my highly educated family members about AI and so forth, they don't believe what I tell them and have never heard about AI at all.
This is one of the first times I've ever gotten goosebumps from reading a paper.
Great video. And the icing on the cake is the narration by Ren from Ren & Stimpy.
Very cool; ultra photo-realistic video games and 3D rendered movies/elements are very close.
That is so crazy, almost unbelievable, it is very exciting to see progress like that
This is very cool! Heaps more interesting than the generative AI papers.
Finally, a non-AI-narrated video on this channel!
Great to see some light transport simulation content again!
there must be an infinite number of periods in this guy's script
Awesome if this could be used in Blender for fast/lightweight ArchVis backgrounds.
Convincing simulation, training, and game 3D displays look to be even better now. I didn't think I'd see the day when real-time ray tracing would start to become this fast and convincing.
I like this rethinking of rendering techniques.
1:1. Excellent performance is a must. Love it.
Károly, is there an AI where I can give it a Two Minute Papers video, and the output is the same video but the narration doesn't pause after every word? Thanks
The bike scene really looks like real life! What a time to be alive indeed.
Nvidia sharing their research for free? Now I’m impressed.
As long as you buy their hardware to run their software, they do not mind.
@@iloveblender8999 It's a wise move, since it's clearly something ONLY their AI-based chips can run!
@@iloveblender8999 Sharing technical research is not the same as giving free software that runs only on their hardware, that’s why I was impressed.
I'm not hating at all; it's amazing that we can compute these things nowadays, but I just cannot help but imagine what possibilities we could achieve if we put all that effort and processing power into physics effects, interactive environments, reactive damage effects, particle effects, and artistic aesthetics instead of mostly focusing on realism. We definitely need stealth games to come back and blend in the new advancements we have when it comes to lighting and all that...
The issue lies in the fact that the cost of buying processing power is shifted to the consumer, while the cost of developing the program falls on the developer.
This is why we often see less optimized, visually underwhelming games that still require the best GPUs available on the market; the savings on optimization are passed on to the consumer, who compensates with better hardware.
This also explains why we rarely see truly interactive games with unique systems outside of the indie scene.
I feel like we are seeing the same papers over and over again. I keep seeing the same clips every video and never know whether it's new stuff or not.
I think there just isn't enough footage to fill the video with all-new clips. It also helps to be able to compare against previous papers. If you really aren't paying attention, the publication year is a big hint. :)
I can see a plausible future where this degree of realistic fidelity has become so efficient that it smoothly renders at 90 fps in our full-face VR/AR thin masks, which reproduce smell and taste ^^
As a day-to-day CGI artist, I can't wait for them to bring this over to Blender. I'm working with ray tracing all day, so this will greatly improve my output each day! Can't wait to see what this will bring.
Corridor Crew will not be happy with those shadows and extra dark shadows.
I was just looking at relightable gaussian avatars yesterday. This tech is truly incredible, I can't wait to see this in games especially VR.
I think the AI filters will smash everything in the next few years.
They're too intensive to be done in real time and too unstable to actually work.
This means gaming with splats will be possible one day? What a time to be alive!
I am constantly STUNNED at the investment in, and results from, ray tracing at Nvidia, and much of it is publicly open too o.O
This reminds me of the stuff a company called Euclideon was touting around 10 years ago. They had a tech demo that showed photoreal environments and thousands of objects in a single scene running in real time, due to everything being based on... I forget what, voxels maybe? In any case, they had that demo but then disappeared completely.
This new technique is developed by Nvidia & Nintendo from Next Gen Switch project ❤👍
Lol I appreciate this type of commentary and coverage.
Job well done!
ok these shots actually look like real life now :O
Waait a minute. Gaussian splatting already did a pretty good job of capturing the specular reflections from the scanned environment.
This tech is going to be incredible in VR!!!
This made me think of the video, which explained how water was generated in the movie ANTZ, back in 1998.
Iiiiiii thiiiiiink thiiiiis iiiiiis amazziiiiiiing
Can you get an accent coach please
This feels like alien technology! WOW!
Absolutely incredible
3:33 Where can I download this?
Bro, I love your videos, but can you please not pause every 2-3 words? It doesn't make it more interesting.
It's my one and only negative about this channel; for the rest, I love your enthusiasm and your work in bringing us graphical tech news.
I think it gives the speech some texture to grab onto.
the future is bright!
Looks good, but everything is static. How difficult is it to animate these points compared to polygons?
Look at the part where they introduce a glass object into the image and change its properties. That is, for all intents and purposes, what the animation process would be like. I get why it doesn't register to you as animation, since it's happening in real time, but that's where we're at. Computer graphics now work like claymation.
@@michaelleue7594 Looked to me like changing the refraction value and watching the result in real time. I was more hinting at character animation, foliage affected by wind (...), these sorts of things. Remembering back, I think it was similar with voxels, also difficult to animate.
@@Charles_Bro-son You are right, it's good with static objects, but we already have photogrammetry for this. And to calculate the X/Y/Z of every single particle on a moving object/character, or on foliage composed of billions of them, every frame as fast as possible, I can't imagine the raw power you'll need. I think for a 3D engine aimed at gaming, they will use a mix of this new technique and rasterization for moving objects!
Appreciate ya. Thanks for sharing.
Thanks 6 minutes paper
I'd really appreciate, when we're being given 'frames per second' of an algorithm, that we're also told how parallelizable it is and what hardware it ran on. 10-78 frames per second on a hyper-cluster of H200s isn't that impressive
Definitely true, this tech isn't coming to games soon if you need a $200,000 system to run it on
The paper says they used a single RTX 6000 Ada card.
Pretty much a 4090 with double the memory. The 4090 even beats it in gaming benchmarks.
This is a research project into rendering technology, the exact fps isn't really relevant, just the improvement versus other methods on the same hardware.
@@Vladtmal Yes, on the same hardware, but the hardware does matter, so that we really know it isn't a number pulled out of thin air.
I am feeling more and more redundant every day watching this channel. I hope our government will find an ethical and smart way to incorporate this redundancy into easing people's lives and decreasing the price of projects that used to be deemed impossible due to the cost of labour and highly educated workers. Even if some of those fields might not be eased that much by these advances, these should free up human resources from other competing fields.
It is such a scary yet so interesting time to be alive
I think we might be close to revamping a lot of old PC titles. Like a turbo shader/wrapper on existing games without needing development.
I really hope we can one day see this technology implemented for videogames and I hope it gets done PROPERLY. Whether ultrarealistic graphics benefit a game obviously depends on the style of the game. But let's take a game like Dead Space or Dying Light. Games like this would greatly benefit from hyperrealistic graphics. Characters could become even more fleshed out by having much more organic movements, much more detailed faces and the environment would be a lot more immersive through ultra-realism. Here it would be beneficial. Additionally, if games eventually incorporate generative AI to dynamically generate voicelines and maybe even side quests (maybe with predetermined guidelines set by the devs for the AI) they could potentially achieve a completely new level of immersion and realism by being dynamic to a level that cannot be achieved through pre-made objectives, dialogues etc.
However, this will require a lot of work. 3DGS, and technologies based on it, are still in very early stages, and so is generative AI. If this tech wants to find its way into the game development world, it needs to come in the form of an engine like Unreal Engine with a similar or better feature set. Otherwise, if it's too different while not offering as much, it won't be picked up. It needs to be acceptable to make the switch while also gaining something from the switch. The same goes for potential generative AI that might get used one day in games: it needs to work well. That means it has to be trained on a lot of controlled, high-quality data in order to produce high-quality outputs. A lot will change here in the next, say, 15 years, and who knows, maybe in 15 years games will finally incorporate all these technologies IF. THEY. ARE. DONE. PROPERLY. I'd love to see it.
Finally something interesting and not AI related. I mean I don't want to say that AI isn't interesting, but I just got a bit tired of it
That's going to be great for museum displays, architects etc
I do wonder how it will handle non-static objects though.
I dream of the day Street View will use this kind of technology.
The rendering tech is cool I guess, but useless without the content to render. Needs other pieces of the puzzle to take off.
An affordable 3D camera that captures scenes in that format for consumer use. Ultra HDR 12-bit 8K pro cameras for use in content production. Modelling support in 3DS Max, Maya, and Blender, for use in games and similar.
It'll probably be a while before this type of rendering can become widely viable in real-time applications. But I could see it being used for CG rendering and VFX in the near future.
I'm confused. By particles, do you mean a point cloud system? Similar to Euclideon's Unlimited Detail?
They are getting closer to raymarching, where geometry is represented purely by math. No particles or vertices needed, and it's super fast!
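For anyone curious, sphere tracing a signed distance function is only a few lines — a minimal sketch (one analytic sphere SDF in Python; this is its own technique, unrelated to the paper's particle representation):

```python
import numpy as np

def sphere_sdf(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
    """Signed distance to a sphere: the geometry really is 'purely math'."""
    return np.linalg.norm(p - center) - radius

def raymarch(origin, direction, sdf=sphere_sdf, max_steps=100, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance the SDF guarantees is empty."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t          # hit at distance t
        t += d
        if t > max_dist:
            break
    return None               # miss

# A ray from the origin straight down +z hits the unit sphere centered at z=3 at t = 2.
print(raymarch(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```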
Waiting for this to come to 3D posing tools like Daz.
I mean in principle this is essentially just caching the traced paths, which is clearly no small feat but it does compromise somewhat on the flexibility afforded by truly realtime RT - it will work great as you move around, but performance could hiccup when the lighting conditions change substantially. Nothing insurmountable but might need to be kept in mind by devs.
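To illustrate the caching intuition (a toy sketch of my reading of it, assuming a hash over quantized position/direction with a version counter for lighting changes; definitely not the paper's actual method):

```python
import numpy as np

class RadianceCache:
    """Toy cache: reuse traced results for nearby queries, invalidate when lighting changes."""
    def __init__(self, cell=0.25):
        self.cell = cell
        self.entries = {}            # (quantized position, quantized direction, version) -> radiance
        self.lighting_version = 0

    def _key(self, pos, dirn):
        qp = tuple(np.floor(np.asarray(pos) / self.cell).astype(int))
        qd = tuple(np.round(np.asarray(dirn) * 4).astype(int))   # coarse direction bins
        return (qp, qd, self.lighting_version)

    def lookup_or_trace(self, pos, dirn, trace_fn):
        key = self._key(pos, dirn)
        if key not in self.entries:                # cache miss: pay for a full traced path
            self.entries[key] = trace_fn(pos, dirn)
        return self.entries[key]

    def on_lighting_change(self):
        self.lighting_version += 1                 # stale entries stop matching -> the "hiccup"
```

Moving the camera mostly hits warm entries; changing the lighting bumps the version and everything has to be re-traced, which is exactly where the performance hiccup would show up.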
Really amazing!
Given that 8 GB graphics cards are not suitable for Gaussian splatting, and this technique uses around half the usual RAM, there is still a lot of work needed to reduce memory demands.
Or what's more likely to happen is that this technology won't become mainstream in videogames until the average budget graphics card has 12GB of VRAM and PCs with 32GB of RAM are the norm. If this takes 4 to 5 more years to happen, then so be it. We have to move on from 8GB graphics cards; we can't keep catering to such old and low-end hardware.
@@03chrisv Increasing VRAM takes way too long in recent years.
@03chrisv Given that Nvidia is driving towards a 100% AI rendering pipeline that does away with polygons entirely, there is merit in switching over to a particle-based rendering solution for lighting, in preparation for geometry eventually becoming particle-based too.
As someone who knows about Gaussian splatting: it's an incredible tech. But I can't believe they are trying to merge 3DGS with RTX algorithms... it's the Holy Grail.
2:56 Metal Bowl has Gaussian Splatting artifacts :(
Better looking fur?!? Rendered fast?!? I'm in!!!
Things are getting crazy in real time graphics
It seems to me that in a few years we can create worlds in a computer that are more real than reality. Maybe then it’ll be called realerlity.
If we can do this now, with technology today, we definitely live in a simulation.
I suspect this will be the method for converting realtime generative ai images into 3d rather than trying to create traditional game geometry.
i, love, this, content, thank you.
I remember when Apple users and AMD users mocked Nvidia ray tracing. And now they all love it.
I'm starting to think that within 10-15 years we'll reach a point where changes in graphics quality between GPU generations are pretty much impossible to spot.
Where is your research merch?
"What a time to be alive"
"Hold on to your papers"
Every time I start my Windows computer and see a new landscape photo, I realise today's GPUs are still at a toddler stage.
The unlimited detail guy has entered the chat
It looks like the blurry patches are in the periphery, so if you were playing a fast-paced game in real time it might look like motion blur. In other words, depending on the application, they may not even need to fix it.
Would I put up with a bit of blur in order to have a game that looks like realistic 3D video? Hell yes.
Literally a game changer if this gets picked up by Epic.
I tried a demo of Gaussian splatting and it was very fluid on my RTX 2080S + Intel 9700K. However, since those are point clouds, you must be far enough from the "object", or you'll see all the points, which breaks immersion.
NVIDIA has the best research
Okay, now if someone can find a way to train a diffusion model on this, so we can get real-time generative 3D VR environments > Holodeck
Great video! Out of curiosity, have you ever covered Fourier Neural Operators for solving PDEs on your channel, or plan to?
i have the same LEGO Bonsai shown at 2:45 lol
I feel like I'm smoking the papers at this point!
Polyscope! 👏🏼
But are they actually performing light calculations on the Gaussian splatting particles, or just using them essentially as a sort of volumetric "skybox", with a one-way interaction between the splats and the ray-traced objects, leaving unchanged the baked-in, angle-dependent coloring the splats already had?