I can sleep well at night knowing that my TI-NSpire 84 calculator will always be able to run this man's games
Maybe Vercidium will make sure that your TI-NSpire 84 calculator can finally play Sector's Edge in the future
"The Nspire 84 can't hurt you, it's not real"
The nspire 84 :
Frick, as someone that got into embedded because of calculators, now I wanna port this to the Nspire
Aside from Nspire 84 not being a thing: I learned much of my initial programming knowledge on a TI-84 Plus Silver Edition, and I think that really taught me to value efficiency. Love seeing other people with similar stories!
You like modern AAA laggy games?
Jokes on you, my shitty laptop is still gonna find a way to lag from just rendering glorified pngs
I'm a sucker for coding stuff to run on ancient hardware because I started coding when I was a preteen in the 90s, when you had to know how to get the most out of a computer or your game ran like unplayable garbage. Nowadays devs just do whatever and ignore performance and optimization because they just let their end-user's hardware go to waste. EDIT: Indie devs, for the most part, are who I was referring to, but also companies like Bethesda that build layer cakes of flashy FX on top of their ancient engine.
I have a decent PC that can even run Cyberpunk at 60fps. But for some unknown reason it freezes randomly even with no load. Like I'm experiencing 10 second freezes: on desktop; on lockscreen; opening explorer; randomly at any point. So even if the game can run at 10k fps on a calculator I'm still gonna have ~0.1fps
Jokes on you, my shitty laptop freezes when loading regular pngs
@@cosy_sweater might wanna have a look at your power supply; if it's defective it could have random voltage drops on certain lines that would cause GPU issues
@@cosy_sweater I've had a similar issue; it was my motherboard doing silly things to the RAM voltage, which sometimes crashed my entire PC. What motherboard do you have?
Amazing z-buffer visualization, never thought of the data in that way before!
So it's not lighting, but god rays?
There is also a much better way to render volumetric lighting. It's called "Froxels". It was first introduced in Assassin's Creed Black Flag and was later adopted by EA, Naughty Dog, Rockstar and many more! And also it looks absolutely stunning!
That's a state of the art approach, but what's "better" or "worse" really depends on your goals and target hardware ;)
@@juliandurchholz it's not really state of the art, it's close to 10 years old at this point. It ran great back then, and if you can't run froxels, you might as well give up running anything serious. Also it looks great in comparison with the nonsensical method proposed in this video.
@@rednicstone3299 Yeah, they were first used in Assassin's Creed Black Flag for XBone and PS4 if I'm not mistaken.
@@rednicstone3299 Well, RDR2 and Last of Us 2 use it, for instance. What more modern approaches would you say it's been succeeded by?
@@juliandurchholz"state of the art" technically just means "what the people that know what they're doing use", but the common use is "just invented, best available way to do something".
This optimization content is super interesting man.
This works somewhat OK for your specific scene because you expect a bunch of sparsely scattered beams in that specific scenario. This will just look weird in a scene not covered with trees, because those few hundred beams will not average out into a homogeneous volume, and if you increased their amount to the point where they do, you will be running slower than approach nº 2 and will be severely bottlenecked by fillrate.
You also seem to be spawning the beams from the floor towards the sky and checking the shadowmap at that position. That means a beam that is being occluded by a tree before it hits the floor gets faded out in its entirety: no beam after the tree NOR BEFORE IT. That might not look so obviously wrong in this scene, but it can look decidedly worse in scenarios with higher depth complexity.
It's a clever hack, but with very limited applicability.
I agree, it's certainly no volumetric light system. Definitely cool, but more a highly specific trick than anything else.
Unrelatedly, the CPU readback being involved immediately makes me dislike the technique. It would be slightly less painful if they operated on the opacity data in a compute shader.
Kind of does get me thinking about some kind of cached (perhaps voxel based) approach though.
As long as the trees are also receiving shadows, I imagine you could do the same technique at the edges of those shadows.
Looking at the window lighting that inspired this technique, I'd expect it to be one of those things like billboarding where it's "good enough" until you really pay attention to it.
Can you please elaborate on your second sentence about a few hundred beams and homogenous volumes?
Great point about checking from the floor up. An enhancement could be to check the min and max shadow depth values of each sample, then render the beam between the highest position and the lowest position (using the same opacity formula)
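Sketched out, that enhancement might look something like this (a rough sketch only; shadow_depth and light_space_to_world are hypothetical helpers, not real API):

```c
typedef struct { float x, y, z; } Vec3;

/* Assumed helpers, not a real API: shadow_depth samples the shadow map at
   a UV coordinate; light_space_to_world unprojects a light-space sample. */
extern float shadow_depth(float u, float v);
extern Vec3 light_space_to_world(float u, float v, float depth);

/* Sample the shadow map in a small neighbourhood around the beam's
   light-space position, track the nearest and farthest occluder depths,
   and stretch the beam between those two world positions. */
void compute_beam_extents(float u, float v, Vec3 *top, Vec3 *bottom) {
    float minDepth = 1.0f, maxDepth = 0.0f;
    const float r = 0.002f; /* sample offset in shadow-map UV space */
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            float d = shadow_depth(u + dx * r, v + dy * r);
            if (d < minDepth) minDepth = d;
            if (d > maxDepth) maxDepth = d;
        }
    }
    *top = light_space_to_world(u, v, minDepth);    /* nearest occluder */
    *bottom = light_space_to_world(u, v, maxDepth); /* farthest lit point */
    /* the beam quad then spans top..bottom, using the same opacity formula */
}
```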
@kabinet0 persistent mapped buffers and fences allow the CPU readback to happen asynchronously. With 2048 beams that's 2048 * 6 floats * 4 bytes = 49 KB of data, which isn't enough to cause a bottleneck
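The pattern looks roughly like this, as a minimal sketch (illustrative names rather than my actual engine code; assumes OpenGL 4.4 for glBufferStorage):

```c
#include <GL/glew.h>
#include <string.h>

#define BEAM_COUNT      2048
#define FLOATS_PER_BEAM 6
#define BUFFER_BYTES    (BEAM_COUNT * FLOATS_PER_BEAM * sizeof(float))

static GLuint beamBuffer;
static float *mappedPtr; /* stays valid for the lifetime of the buffer */
static GLsync fence;

void create_readback_buffer(void) {
    GLbitfield flags = GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &beamBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, beamBuffer);
    glBufferStorage(GL_ARRAY_BUFFER, BUFFER_BYTES, NULL, flags);
    mappedPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUFFER_BYTES, flags);
}

/* Issued right after the pass that writes the beam data on the GPU. */
void mark_frame(void) {
    fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

/* Polled a few frames later; returns 0 if the GPU hasn't finished yet,
   so the CPU never stalls waiting on the readback. */
int try_read_beams(float *out) {
    if (glClientWaitSync(fence, 0, 0) == GL_TIMEOUT_EXPIRED)
        return 0;
    memcpy(out, mappedPtr, BUFFER_BYTES);
    glDeleteSync(fence);
    return 1;
}
```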
@@Vercidium Your approximation looks somewhat plausible in a scene with very spotty coverage such as below trees. A swiss cheese world would also apply hahaha. But try doing that in a wide open landscape. You will still have a bunch of randomly scattered beams instead of a homogeneous solid fog volume. The random beams will make no sense (what is dividing light up into small spot-light-like rays in an open sky?). It will look more like rain than fog in that context. I said "few hundred" out of guesswork; I don't know how many billboards you are actually spawning across the scene. But still, in reasonably consistent and robust implementations of god-rays, the same fog that creates these small beams in some scenes is the fog that creates a dense Z-buffer-fog-like effect in other contexts. It's all recreations of real-world light scattering and shadowing interacting.
For a good reference of how broad the effect is, take a look at a RDR2 presentation on their atmospherics system; they show comparisons to real-world and artistic depictions of the same phenomena. AC4:BF also has a description of their implementation in a graphics presentation of theirs, which I believe was the first shipping "Froxel" solution. TLoU2 also has an extensive presentation on the myriad optimisations and quality improvements they made to it.
All that said, what you came up with is still a clever little hack that can work in specific scenes. For a broader game you'll probably want some artist control over where billboards spawn and where they don't, so they are limited to places where they look plausible. If you have a clearing in the forest, for example, you'd want to have no beams in the middle of it, because then again they'd look like weird rain in that context instead of light.
Even for a window, your solution might look improper once you have dynamic objects. Since occlusion is checked from the floor, if a character steps in front of the window, not only will the beams be blocked from reaching the floor, they will mysteriously not even hit the character either. A tall enough window and a short enough dynamic object can create a very visible effect of a moving object magically making the sun-rays disappear from the window as long as it passes over the sun-lit part of the floor.
PS: I just rewatched the video and remembered your trick is to only draw beams "at the edge of light". Yeah, that reduces the occurrence of the "light looking like rain" problem I described. Yet the reality is that those wide open spaces should also have the volumetric light in them (including the ocean); just instead of scattered beams, it should be a larger volume. Your solution emulates the "feeling" of a subset of a light phenomenon, rather than the real thing. That's why I recommended looking up the RDR2 presentation; they really explain all the different cases where atmospheric scattering can influence a scene.
If we like this and get it to the front page of YouTube, we may see better performance in games if we are lucky.
Very clever. I never got into the technical side of things, but when I worked in CryEngine (2007-2015ish) I was the optimization guy in the editor; I did most of the optimizations for Mechwarrior: Living Legends environments... Kagoshima (a MW:LL level I designed) took a lot of tricks to get to run reasonably. Making levels for that game/mod gave me some of my best memories, and it's still alive today, 15 years later!
I love people who focus on optimisation because, besides the truly impressive technical skills, you only get into optimisation for one of two reasons: you can't run your own game (or you get feedback from players who can't), or, and that's the reason I respect a lot, you want to make games more accessible. Games are art, more people should be able to enjoy art. Thank you!
Completely agree. Making a game more accessible by optimizing it for potato hardware is awesome. To this day I have never owned a discrete graphics card.
Such a great video on the topic.
With how vast the internet is, there are still too few resources for learning this particular topic, especially OpenGL.
Thanks for your hard work and for sharing.
>Sees video title that I'm interested in.
>All I hear is static in my brain with the well put together information because I don't understand.
>Nice video.
thank you, this is the type of optimisation we used to see in games back in the day when processing power was more limited
I don't know what's more precious in your videos - the insights into the amazing rendering tricks or the way you visualize them!
The editing on this is stunning 😍Also the fact that it was all about lighting had me instantly in love
Thanks Zeth! I agree, we need more videos about lighting
Give this man a triple-A budget
Very interesting! Always happy to see optimisation techniques explained in beginner-friendly terms
I learned more in 5 minutes by watching this video than I did in a week doing my own research. Wild
I had no idea a Feedback buffer was a thing! That's cool
Maybe it's just me but the second approach still looks 100 times better than the last one. Last one makes it look like some early 2000s game
It does look better, but 100 times better is pretty close to 70 times, which is how much performance is saved, so tbh, worth it maybe?
@@trued7461 there is probably a middle ground where you gain a little bit of performance and it actually looks the same. But from those images it ain't it chief
They could be made wider. Maybe dynamically. They're pencil thin. Maybe if they're full opacity they're also double width and you scale the same as opacity...
It still could be useful for mobile game development.
For people with potato PCs it's much better than disabling the effects altogether
the passion you put into your videos is palpable, it’s inspiring! ❤️
@@YagamoReba thank you so much!
bro out here casually dropping absolute masterpieces every month
I really appreciate content like this that tells us how to make our code both efficient and simple. I know what it's like to have a low-budget PC and still want to play games on it, and optimizations like this save the day for such gamers.
Even if unoptimized code is "enough", it's preventing you as a game dev from implementing lots of interesting features. I wish the software engineering community as a whole would stop pretending resources are unlimited.
babe wake up Vercidium uploaded !
The dynamic beam approach was very interesting. I couldn't help but consider biasing the "dynamic beam" size/density depending on distance to camera, or splaying the beam's mesh from the threshold of object and sky towards the ground, creating a trapezoid, to emulate penumbra.
The end result isn't stunning to me visually, however the approach to get there is highly intriguing and gives me all sorts of ideas. These videos are great for getting the indie dev mind thinking.
This is crazy
This channel deserves so much more!!!
Keep going with your amazing work! So inspiring for people like me who would love to make video games and also like the software engineering behind it!
Thanks so much!
Yeah, this channel is so much fun and inspiring for people like us who want to get involved in engine and optimization stuff.
Probably not for those who think they know everything already, but great for me. I wish there was more content and devlogs like this. It's like a light in the darkness.
That means a lot to hear, thank you
It looks alright, but you can tell all the beams are the same size.
I wonder if increasing their width with the decrease in opacity would help make the overall scene look better and less uniform. Kind of simulating the beams becoming blurrier.
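Something like this, just to sketch the idea (the constants are pure guesses and would need tuning):

```c
/* As a beam fades out, widen it so it reads as a soft blurry shaft
   instead of a faint pencil line. */
float beam_width(float base_width, float opacity) {
    float blur = 1.0f - opacity;              /* 0 = sharp, 1 = fully faded */
    return base_width * (1.0f + 2.0f * blur); /* up to 3x wider when faintest */
}
```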
I think the static 2D planes variant of god rays was used in multiple games in the 00s. I think Hitman 2 has it, and Freedom Fighters; both are from IO Interactive.
Even Crysis 3 uses them, and still looks stunning. But such usage is limited to static scenes only. For dynamic things Crysis 3 uses screen-space Godrays effect. Works pretty well.
If you and acerola collaborated on a project, you would somehow manage to optimize something to render at the speed of light in a 360° POV with 4k resolution on every texture and also the game would do my homework and take me out to dinner
Hahahaha 😂
you forgot to expect it to run on a calculator
We need more game devs like you, I'm sick and tired of games taking up loads more resources than necessary
this is really interesting. I look forward to seeing what you learn and achieve during your game engine development
This is extremely well broken down into understandable information. Great video!
@@Beets_Creations thank you!
I may not understand everything but I surely can appreciate the beauty of it all.
you are built different man. just the animations to show this would take all my knowledge
What a fantastic improvement!
ooh man this one is just mind blowing!
My god... You give one of the best explanation i could ever get, the video's awesome. Thankyou!
Great video as always!
Thank you!
You’re doing divine work here.
I always find it fascinating to see experts go ham in their interests, and I listen in no matter how much I actually understand of the subject at hand 😂
Another wonderful video from the Sector's Edge guys
Thank you :)
These are the best videos on youtube.
Too kind, thank you!
nice work! i love these kinds of videos
Vercidium; God emperor of Game Optimization and priest of frames per second
love what you do
Hahaha thank you
I know this is an older video now, but I'd like to share a method I used 15 years ago in Blitz3D (DirectX 7). Calculate the intersection point between the sun ray and the camera ray, and capture an ortho view from that point towards the sun. Place up to 255 transparent planes along the sun ray at equal distance steps, starting at the point that first intersects the frustum and ending at the point that last intersects the frustum. This is effectively the 2nd method, but it should be significantly faster. The downside is that you need to take care of gaps between planes by tilting them towards the player camera.
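In rough C it's something like this (vector helpers sketched inline; t0/t1 would come from a ray-frustum intersection test):

```c
#define MAX_PLANES 255

typedef struct { float x, y, z; } Vec3;

static Vec3 vadd(Vec3 a, Vec3 b)    { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 vscale(Vec3 v, float s) { return (Vec3){ v.x * s, v.y * s, v.z * s }; }

/* origin: the point where the sun ray meets the camera ray;
   dir: normalised direction towards the sun;
   t0/t1: distances along the ray of the first/last frustum intersection. */
void place_planes(Vec3 origin, Vec3 dir, float t0, float t1, Vec3 out[MAX_PLANES]) {
    float step = (t1 - t0) / (MAX_PLANES - 1);
    for (int i = 0; i < MAX_PLANES; i++) {
        out[i] = vadd(origin, vscale(dir, t0 + i * step));
        /* each plane is then tilted towards the camera to hide the gaps */
    }
}
```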
I think the best move is to color your models and textures as if light actually hit them, and also as if it didn't.
From what I am seeing, everything is completely singularly functioned and not programmed to act like how light should hit the environment.
The light shouldn't be faked, but how you color the textures and models can be faked to the point that it looks real.
If you let the engine itself do all the work, you won't get the performance with its default settings. Getting the models and textures to look like real-time lighting early on in development, before they go into the game engine, can break past the limits of the engine without the work breaking itself.
I've had to rewatch this about 7 times just to catch up with wtf is going on lol
Damn cool
Hahaha thank you
Cool algorithm, graphics programming is so much fun!
i can't wait for someone to implement this into a minecraft mod
Now all my games run so well!
Wow, someone actually found a use for transform feedback.
Another banger video! I always find it very interesting when people find alternative ways of creating the same visual effects but in a much more optimized way.
I also thought about optimizing one effect called SSR (Screen Space Reflections) and came up with a completely different way of doing it, but I haven't tested it yet.
It would also be very interesting to see attempts at recreating ray tracing/path tracing with roughly the same quality but a lot more performance, since ray marching is a very heavy task for a fragment shader. Would be very interesting to see alternative ways of doing it.
These videos are brilliant.
Idk if this is just common knowledge or if this man is just a genius, but my god, the game space is gonna be so much better from here on out if this guy is working on 'em. Great video!
god daaaamn, i'd never have come up with something like this
5:08
"In this video on screen"
Where? I don't see anything.
Just added it, thanks for letting me know!
This guy deserves a Nobel prize for Game Development.
Haha too kind!
dude this is crazy, i'll rewatch when you translate this from smart to English.
Hahaha thanks
@@Vercidium why the fuck are you responding to me, there are genuinely great questions down here you can read rather than our shitpost.
I have no idea what this is or why this video was recommended to me, but it's very cool!
Haha glad you liked it!
This is so incredibly clever!
Hey there, good video! Certainly a decent way to fake light shafts, but I wouldn't say it "looks the same" as raymarched volumetric lighting. True single-scattering will always be the most convincing, second to raytracing. Your technique appears more flat and is restricted to a fixed number of random light shafts, which probably leads to scene-dependent artifacts.
What's your hardware for the given benchmarks? Would be nice to have that included somewhere.
The "first" approach (screen-space post process) can also be extended to somewhat work for off-screen light sources, to a degree. Are you sure using an intermediate texture with directional blur is superior to epipolar sampling with weight function, as per the traditional GPU Gems 3 solution?
I wish your video was a bit longer to cover those details and nuances, go into more depth when comparing other techniques, so as to really give an impression of the tradeoffs.
I haven't heard of epipolar sampling, I'll check that out thank you!
I have a 10700F CPU and an RTX 3070 GPU
@@Vercidium I don't know what both of you are talking about. The other person never said epipolar sampling. Am I missing something?
@@abhishekak9619 epipolar sampling is in the 2nd last paragraph of the original comment here
@@Vercidium oh i see now.
Nice work and nice presentation
1:19 I didn't know I needed this visualisation in my life, but I do.
I don't think it's fair to say this looks the same as fully integrated volumetrics. Like, sure, in this one particular scenario in a forest where you want lots of small shafts this may work great, but what about other outside scenes where they aren't as closely occluded? What about local lights? What about generally open areas where the vast majority would be fog-filled? Generally this is a lovely technique for those very specific small beams of light, but I don't think this is anything even remotely close to volumetric scattering - hence why there aren't many games that use this particular method.
Great video nonetheless, even if I would have hoped for a few more details about placement and transformation of the beams.
the GPU-CPU-GPU roundtrip at 4:28 seems quite suboptimal, can a compute shader replace this step?
With persistent mapped buffers and fences the roundtrip happens asynchronously. There is a 2-3 frame delay but you'd never know while playing.
Some other comments have mentioned that the vertex shader could read/write directly to an SSBO and calculate the average itself, I'll have to check that out
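If that pans out, the averaging shader could be as simple as this (an untested sketch; bindings and names are illustrative, one thread per beam):

```c
#include <GL/glew.h>

const char *avg_src =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) readonly buffer Samples { float s[]; };\n"
    "layout(std430, binding = 1) writeonly buffer Opacity { float o[]; };\n"
    "uniform uint samplesPerBeam;\n"
    "void main() {\n"
    "    uint beam = gl_GlobalInvocationID.x;\n"
    "    float sum = 0.0;\n"
    "    for (uint i = 0u; i < samplesPerBeam; ++i)\n"
    "        sum += s[beam * samplesPerBeam + i];\n"
    "    o[beam] = sum / float(samplesPerBeam);\n"
    "}\n";

void dispatch_average(GLuint program, int beam_count) {
    glUseProgram(program); /* program compiled/linked from avg_src elsewhere */
    glDispatchCompute((beam_count + 63) / 64, 1, 1);
    /* make the averaged opacities visible to the vertex shader drawing the beams */
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```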
cool, it's nice how a lot of problems get solved by just using quads in a smart way
That's really really cool :O
Incredible!!
Maybe it can't replace the second method, but surely it will replace the first one; it not only runs faster but also looks better.
11 thousand frames per second is just diabolical
damn bro! i shall call you shadow man
In an old game prototype I made, I had a lot of grass models placed around the world, but they were physical objects and would lag a lot when loading/unloading an area because of them. There is probably a way to use a shader instead of the grass actually being real, while still giving me a way to control where the grass is and have the player model interact with it (I'm just not good enough at shaders to make it). That could be a cool video idea; I would love to see you render millions of blades of grass a few thousand times a second lol.
Take a look at Acerola's series on modern grass in video games, it's a really good watch and describes pretty much exactly what you're talking about.
^ Acerola's video on grass is excellent. My grass rendering is still pretty naïve, I have a lot to learn about vegetation rendering
@@Vercidium I think Acerola's approach lacks your way of optimising memory for meshes by using only vertex indices. Also, you have your own engine and probably don't need to run a compute shader each frame just to have a vertex buffer filled (that's what DrawMeshInstancedIndirect does in Unity, I think).
Would really love to see how you implement grass, probably in future videos
this would be SO good for VR.
light is everywhere, except where there are shadows, which must attenuate it
I would recommend folks look at the GDC talk on how light beams were handled for the game Abzu. Very cool. In that talk as well is a great summary of how to animate thousands of creatures simply via vertex offsets on the GPU - but that isn't so related to this.
Thank you for the recommendation, I'll check it out
I can't decide what's better, the split transitions, or the light rays
Haha thank you. I had to render it at half speed as my GPU couldn't render all 3 scenes at 60 FPS
This must be like, revolutionary
So weird, yet it's kind of cool
I wonder: what about a voxel map for where light goes?
The map would be a 3D greyscale texture (you can tint it later). Initially fill it with all-white voxels, black out voxels where solid geometry exists, then do a directional blur away from the light sources (which would be a single direction for the sun, given it's far enough away that its rays can be considered parallel by the time they reach Earth), and repeat the black-out and blur steps a few times. When rendering, sample the voxel grid along the camera ray to get side-illumination data using a 3D texture sampler.
You'd have to experiment to find how low a resolution you can get away with, but it could allow pre-calculating some of the lighting data, and as a bonus, you'd know ahead of time how many steps to take along the ray during rendering, just from the grid resolution and distance. If your swaying trees follow a predictable loop, you could calculate a few frames of voxel lighting data and select from the appropriate one, or even interpolate between them.
I have no idea how fast this would be, and it absolutely would be memory-hungry, but by transforming the volumetric lighting to image-processing (just in 3D), it might be faster than doing the real thing. With luck, you could do it at load-time, or chunk-generation, rather than having to keep it on disk. Hell, you could probably do it in a compute shader on the GPU, so the CPU never needs to touch it. Now that I think about it, it sounds like it has a bit in common with Quake's lightmaps, except 3D and only concerned with direct illumination.
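A CPU-side sketch of that bake step, just to make it concrete (grid size and pass count are guesses, and the sun points straight down here so the directional blur only pulls from the voxel above):

```c
#define N 64 /* grid resolution per axis */
#define IDX(x, y, z) ((x) + N * ((y) + N * (z)))

/* solid[i] is 1 where geometry fills the voxel; light[] comes out as the
   greyscale 3D texture described above. */
void bake_light_grid(const unsigned char *solid, float *light) {
    for (int i = 0; i < N * N * N; i++)
        light[i] = 1.0f; /* start all-white */

    for (int pass = 0; pass < 4; pass++) {
        /* black out voxels containing solid geometry */
        for (int i = 0; i < N * N * N; i++)
            if (solid[i]) light[i] = 0.0f;

        /* blur away from the light: every voxel averages with the one above,
           so shadows soften and bleed downwards with each pass */
        for (int z = N - 2; z >= 0; z--)
            for (int y = 0; y < N; y++)
                for (int x = 0; x < N; x++)
                    light[IDX(x, y, z)] =
                        0.5f * (light[IDX(x, y, z)] + light[IDX(x, y, z + 1)]);
    }
}
```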
hearing Vercidium curse is like a fever dream
that plane light thing is something from the early 2000s; for example, GTA San Andreas does that
Finally a light pasta.
Beams of light in games is called god rays
The 3rd method in the first chapter of the video reminds me a lot of LIMBO
Sick! Might implement this in Unity
awesome, thank you for a great job
not gonna lie, I'm developing a 2D automation game and I'll make it so that it can run smoothly on the shittiest laptop you've ever seen. 16-bit vertex data, instancing and more. And the maps could be HUGE, up to 2^70^2 tiles, by separating the world into regions split into chunks. All of that with not that much resource consumption (in theory).
I haven't made the game yet and I hope I can share it with the world. god dammit i love game dev.
Also everything will be done with opengl in zig! I'll do a devlog when I finish the rendering stuff (at least some sort of good rendering)
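For anyone curious, the 16-bit vertex packing could look roughly like this (sketched in C for illustration; the field widths are placeholders, not my final format):

```c
#include <stdint.h>

/* One tile-corner vertex packed into a single 16-bit value: with the world
   split into chunks, local position only needs a few bits per axis, and the
   rest can index a tile type in the texture atlas. */
typedef uint16_t TileVertex;

static inline TileVertex pack_vertex(unsigned x, unsigned y, unsigned tile) {
    /* 5 bits x | 5 bits y | 6 bits tile id  (32x32 chunks, 64 tile types) */
    return (TileVertex)((x & 0x1Fu) | ((y & 0x1Fu) << 5) | ((tile & 0x3Fu) << 10));
}
```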
Legend
I think I found the guy who makes all the new Unreal Engine updates.
How in the world is this man creating these animations
I'm thinking about giving compute shaders a try to do the averaging.
There is a notable visual difference between having the beams originate from the ground vs from the sun/sky. The bottoms of the beams don't move, the tops do, where it's the opposite for the real thing. Very noticeable in timelapses like at the end; it doesn't look natural. Could there be a way of further simulating the real thing?
To better visualize, get a piece of spaghetti or a straw, hold it at the bottom resting on a surface, then move the top of the straw, simulating the sun moving. That is what your optimization does. Now do the same thing but hold it about halfway up (where the branches and leaves would be) and move it around again. Notice the very clear difference in movement of the bottom.
Had a thought (and I am almost a layman here, so forgive my ignorance): could you place the light detectors at about tree height and draw them from there, extending in both directions but always facing the sun?
Babe, wake up, Vercidium just uploaded
What we need in game dev is open source. Everything.
Incredible.
No wonder it goes faster. He was using some flat out floats in that shader code. They're way faster than normal floats.
Haha that's one tiny optimisation, but they all add up
NVIDIA hates this guy
😂😂😂😂
Perfect i love it!!
btw this is a much better title
the previous one felt like a copy paste from another video I've seen before