When DARPA's brain modem allows us to enter the matrix, fly like Superman, and experience anything we want, all this hard work on simulation rendering will really pay off.
I was going to watch this, but within the first 20 seconds you said "games used to have vast scale"... and showed AC Mirage! The tiniest, crappiest version of AC ever released! AC Valhalla would have been completely acceptable, especially because the twitchy-boy response was always that it was "too big". But no, it was no good.. because the main character is white.
8:48 What's interesting here is that for the Portal64 project (a Portal port to the N64) the dev decided to skip having a depth buffer and instead sort things from furthest away to closest using the CPU. The reason for this is that the N64 has extremely limited memory bandwidth and also more CPU cycles than it can use. So he'd rather spend CPU power sorting things than clear/write 32 bits of depth data to every pixel every frame. It also helps that the nature of Portal's maps makes it really easy to cull things: anything that's not in the same room, or in a room that's visible through a portal, open door, or window, can be culled without even thinking about it. I don't think he mentioned this in his video, but given the game is running at 320x240, this presumably also saved him 150 KB of memory. A not insignificant amount when he has 4 MB total to work with.
Yep, it's called the Painter's Algorithm, and it was commonly used in the era before depth buffers. The whole era before modern hardware is really cool, and needs its own set of videos! heh
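As a toy illustration of the painter's algorithm mentioned above (sort back-to-front, then draw in that order so nearer objects overwrite farther ones, no depth buffer needed), here is a minimal Python sketch; the object layout and names are made up for the example:

```python
# Painter's algorithm sketch: sort objects farthest-first by squared distance
# from the camera. 'objects' and 'camera_pos' are hypothetical names.

def painters_sort(objects, camera_pos):
    def dist_sq(obj):
        dx = obj["x"] - camera_pos[0]
        dy = obj["y"] - camera_pos[1]
        dz = obj["z"] - camera_pos[2]
        return dx * dx + dy * dy + dz * dz
    # Farthest first: back-to-front draw order means later (nearer) draws
    # naturally paint over earlier (farther) ones.
    return sorted(objects, key=dist_sq, reverse=True)

objects = [
    {"name": "near", "x": 0, "y": 0, "z": 1},
    {"name": "far", "x": 0, "y": 0, "z": 10},
    {"name": "mid", "x": 0, "y": 0, "z": 5},
]
order = [o["name"] for o in painters_sort(objects, (0, 0, 0))]
print(order)  # ['far', 'mid', 'near']
```

The classic failure case, for what it's worth, is overlapping geometry with no consistent per-object ordering, which is one reason depth buffers won out.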
You should see what the community of 3D creators on Scratch have come up with to overcome the computational limitations of Scratch. It's wild stuff! The painter's algorithm and BSP-enhanced painter's are commonplace!
Working on the N64 is weird in general. The Rambus memory is so ludicrously slow compared to the rest of the system that you end up having to do a lot of stuff downright backwards compared even to contemporary systems. It also makes me really appreciate modern development tools, because you look at the internals of games like Mario 64 and it's pretty clear they were optimizing blind, without being able to actually measure what was taking the most time, or even whether certain changes were helpful or harmful.
Crazy, I've heard stories about the older architectures being super strange. I landed on the tail end of the PS2, which was apparently a total nightmare to work with as well. Never knew any N64 people, but watching some of @KazeN64's videos makes it look super interesting.
This video is insane. All the stuff I was looking for for years on the internet, just made available in a simple, condensed, purpose-made effective educational video with no fluff. Thank you so much. If only every teacher was this good (and every research paper was readable).
@@leeroyjenkins0 Indeed, it builds really well on the foundations you gain from uni. Now the problem is just time. Even genius programmers with doctorates and whole studios spent decades iteratively developing and adding these algorithms to their game engines with each new game. Building a new game engine just seems like such a daunting, massive task. And that's just the rendering side of things! You still gotta create tools on top of that to be able to work with your engine. Modifying existing game engines like UE might be the way to go (the Deep Rock Galactic devs chose this route), but even then, you gotta know the engine pretty well, which is stupidly specialized knowledge, as well as know the algorithms involved.
@@rykehuss3435 it only makes sense to make one if you want to do something that those others don't do as well (destruction, voxels...). If you want to do the same, or anything less, what's the point? UE already exists, made by people smarter than you, might as well use it.
I've always wanted to find and thank a dev for Prototype. That game and the next were amazing and the ability to bring destruction to the town was amazing. Thank you all
@@courtneyricherson2728 I was a bit disappointed with the story direction of the 2nd game. It really felt forced to make Mercer the villain instead of just apathetic. So they should have (in my opinion) gone with something or someone else as the villain.
I literally had the thoughts of "oh, so you can compare foreground and background objects in screen-space" and "you don't have a depth buffer yet, but you have the one from last frame" before those subjects came to be implemented in the demonstrations. The examples were really well explained and very intuitive thanks to the visuals!
I guess the reason why Cities Skylines 2 does not bother with occlusion culling is that in a top-down perspective there are simply not many objects behind each other (in contrast to e.g. a 3rd person game).
I feel like you'd still benefit heavily, depending on the shot. When the camera is overhead, the amount of "stuff" is naturally constrained to an extremely small area, thus your occlusion needs aren't high anyway, vs when the camera is lower. But this is mostly conjecture, so take it for what it's worth.
@@simondev758 It's an interesting thing to think about, as a lot of simulation games have had this problem where once you zoom in to look at closer detail, performance tanked. However, it's especially problematic in Skylines, given it's a game that encourages you to zoom in to street level. Why else make it so detailed? Seems like a big goof not to realise that's going to be problematic.
That's because you're thinking of it as physical real world objects. When you think of it as data in a notepad file, all the computer is doing is reading very quickly.
@@edzymods Nah. It's still hard to put into human terms just how blazing fast computers are at handling raw numbers - and the fact that they continue to get faster doesn't help in terms of wrapping your head around it. It's like trying to understand the scale of the solar system. You can put it into various models or scale it down to a convenient size all you want; it'll never truly convey just how big it is.
I rarely do any game development, but love your content! It's good stuff. You and Acerola have become one of my favourites to watch and learn about how these digital worlds come about.
same. I think graphics programming has a lot to teach about programming in general, especially math and the performance of algorithms, and it intrinsically visualises the thing our program is manipulating, which naturally lends itself to clear educational content and a tight feedback loop for problem-solving and evaluating our methods
Glad I found this again, because I wanted to let everyone know it's better than a bedtime story. I fell asleep watching and listening to this, and woke up to a dead phone with a flat battery. Best sleep in ages.
I hope you'll read this, because this video has really inspired me. Not only do you explain things in a really easy-to-understand way, you also carve out a value system for things: "this is easy", "not that complicated", "not a mystery really". These really help to get a feel for the underlying relationships between all the different approaches (and systems), the kind of feel you would only get in a 1-on-1 conversation. Thank you for showing it's possible! Great video
🎯 Key Takeaways for quick navigation:

00:29 🌍 *Overview of Frustum Culling and Viewing Volumes*
- Frustum culling defines what's visible to the player within a viewing frustum.
- Objects in the scene are enclosed by simple volumes for intersection testing with the frustum.
- Intersection tests between the view frustum and object volumes determine what's discarded or drawn in the scene.

02:16 🌐 *Frustum Culling Limitations and Occlusion Culling Introduction*
- Frustum culling, while useful, isn't always sufficient for optimization.
- Occlusion culling, introduced as the next step, efficiently eliminates what's obscured by large solid objects.
- Early methods involved manually created occlusion volumes within objects to enhance rendering efficiency.

04:38 🖼️ *Simple Approach to Occlusion Culling*
- Artists manually create occlusion volumes inside objects to hide non-visible parts during rendering.
- Techniques involve projecting occluders onto screen space for culling and optimizing the selection process of occluders.
- Well-optimized brute force approaches might outperform complex tree structures in occlusion culling.

06:26 🧩 *Optimization Strategies in Occlusion Culling*
- Optimal occluder selection involves tradeoffs in time and efficiency.
- Some games adopt simpler approaches like brute force box culling for performance benefits.
- Handcrafted SIMD-optimized methods can outperform theoretically faster tree structures on modern hardware.

08:18 🔍 *Depth Buffers and Their Role in Optimization*
- Depth buffers track pixel depth for rendering order and visibility checks on the GPU.
- They enable efficient rendering, avoid object clipping, and facilitate effects like reflections.
- Depth buffer data can be leveraged for occlusion culling, discarding invisible objects before rendering.

09:42 🌐 *Hierarchical Z Buffers (HZB) in Occlusion Culling*
- Hierarchical Z Buffers (HZB) use downsampled occlusion maps to expedite occlusion checks.
- Progressive downsampling allows for faster comparisons and reduces the number of texture reads.
- HZB enables efficient object visibility checks based on screen space bounds and mip levels.

11:26 🎮 *CPU-Side Occlusion Map Generation*
- Implementing a software rasterizer for occlusion map generation bypasses GPU limitations.
- Software-based occlusion mapping offers flexibility in occluder shapes and unions of occluders.
- Optimal for quick polygon rendering and allows for more complex occluder shapes than previous methods.

14:41 🕹️ *GPU Advancements and Occlusion Queries*
- GPU advancements introduced hardware occlusion queries for direct visibility checks.
- Occlusion queries involve asking the GPU about the visibility of specific objects.
- Despite advantages, managing queries incurs CPU overhead, and asynchronous query handling poses challenges.

17:28 🔄 *Hierarchy of Bounding Volumes for Query Efficiency*
- Hierarchical bounding volume structures reduce CPU overhead in occlusion queries.
- Queries are performed on combined bounding volumes, allowing swift visibility determinations.
- Immediate query responses prevent GPU stalling, enhancing overall optimization strategies.

17:54 🔍 *GPU and CPU Occlusion Query Systems*
- Challenges with occlusion query systems: immediate answers aren't available, and stalling for responses is inefficient.
- Splinter Cell's pivot: shifted from a query-based to a depth-buffer-based approach.
- Depth buffer hierarchy: utilizing depth buffers to create a hierarchy, reducing CPU work.

18:22 🖥️ *GPU-Driven Occlusion Calculations*
- Rendering occluders to a depth buffer: creating a hierarchy from depth maps to decide object visibility.
- GPU efficiency: most occlusion work handled by the GPU, minimizing CPU intervention.
- Front-loading occlusion work: early occlusion calculations to streamline subsequent GPU processes.

19:40 ⏰ *Exploiting Temporal Coherence*
- Time-saving measures: recognizing minimal frame-to-frame changes, aiming to exploit temporal coherence.
- Assassin's Creed's temporal coherence: rendering nearby objects into a depth buffer and reprojecting the last frame's depth to the current one.
- Challenges of temporal coherence: difficulty in implementation due to engine complexities.

21:27 🔄 *Evolution to Current Occlusion Techniques*
- Modern occlusion methods: utilizing visible objects from the last frame as occluders for the current frame.
- Conservative approach: probability-based occlusion, with potential for overdraw to avoid missing objects.
- Two-pass occlusion culling: an initial pass followed by a re-test against the built HZB for non-drawn objects.

22:48 💡 *GPU-Powered Pipeline and Future Directions*
- GPU-centric processing: entire occlusion pipeline handled by the GPU, leveraging compute shaders.
- Future developments: breakdown of individual objects for occlusion culling, advancements like Unreal Engine's Nanite.
- Complexities beyond visibility: alluding to other rendering optimizations beyond visibility enhancements.

Made with HARPA AI
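The hierarchical-Z idea summarized above can be sketched in a few lines of Python. This is an illustrative toy, not any engine's actual implementation: the buffer contents, function names, and the larger-is-farther depth convention are all assumptions for the example. Each mip cell stores the MAX (farthest) depth of its 2x2 block, so a single coarse read gives a conservative answer: if an object's nearest point is still behind the farthest depth in its screen region, it is definitely occluded.

```python
# Toy hierarchical-Z occlusion test (larger depth value = farther away).

def downsample_max(depth, size):
    """Build one mip level by taking the max depth of each 2x2 block."""
    half = size // 2
    mip = [
        [
            max(depth[2 * y][2 * x], depth[2 * y][2 * x + 1],
                depth[2 * y + 1][2 * x], depth[2 * y + 1][2 * x + 1])
            for x in range(half)
        ]
        for y in range(half)
    ]
    return mip, half

def is_occluded(mip, scale, x0, y0, x1, y1, nearest_depth):
    """Conservative test of an object's screen bounds against a coarse mip."""
    for y in range(y0 // scale, y1 // scale + 1):
        for x in range(x0 // scale, x1 // scale + 1):
            if nearest_depth < mip[y][x]:
                return False  # object may poke in front of an occluder here
    return True

# 4x4 depth buffer: left half has a near occluder (depth 2), right half is empty (99).
depth = [[2, 2, 99, 99] for _ in range(4)]
mip, _ = downsample_max(depth, 4)  # 2x2 mip of max depths: [[2, 99], [2, 99]]
print(is_occluded(mip, 2, 0, 0, 1, 1, 5))  # object at depth 5 behind the occluder: True
print(is_occluded(mip, 2, 2, 2, 3, 3, 5))  # same depth in the empty region: False
```

Real HZB implementations pick the mip level so the object's bounds cover only a handful of texels, which is where the "fewer texture reads" win comes from.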
I'm currently making a 2D game, and it kind of blows my mind that the tradeoff of not drawing objects is worth the time it takes to check what should be culled every single frame. Surely simply checking which objects should be culled is a massive processing task?
To be fair, if those objects have simple geometry and only a texture, running the culling check per object can cost way more than just drawing them. For example, in Minecraft you wouldn't want to run culling calculations for every single block. But as objects got more and more complex and shaders entered the picture, that tradeoff shifted.
In a 2D game you don't have to check any objects. Just have your world in a grid (2D array); then the culling is just your for-loop, which only iterates through the part of the grid that is visible on the screen.
Well, actually, it would be worth it if drawing an object is a much more massive task than sorting _and_ drawing the remaining stuff, so the workload actually has two parts to it @@RandomGeometryDashStuff
As a mechanical engineer, I like how you kept this simple yet technical in terms of explanation. That is a skill in itself. You got yourself a new subscriber! Keep it up, my man
Ohhhh, a dev who worked on Prototype teaching me game dev, I feel so blessed. I absolutely loved that game, and still love it, thank you for putting so much effort behind it. And thank you for this amazing teaching, keep it up man, much love and respect.
Somehow you manage to "destress" me while teaching what could seem like a complex topic, but you break it down so it seems so simple. I like your JavaScript projects, and I have converted some of them to TypeScript.
Hi Simon, I've been following you on YouTube for a couple of years, and I'm very inspired by computer graphics and game development, currently making a game on Phaser 3. Thank you for explaining interesting techniques and sharing your incredible experience, I learned a lot thanks to you
This is definitely my new favorite of your videos. It's endlessly fascinating to me to hear about the crazy things that are done in rendering pipelines. I love the GDC presentations where they dig into the minutiae of their rendering customizations.
That's a pretty darn great summary for beginners. I did this for 3 years myself, and it's definitely one of the more challenging yet fun programming fields there are! P.S. I trawled through hundreds of pages of documentation and papers with no friendly YouTube video to guide me. You still can't avoid that if you want to become actually good at this, but do watch the video to get an overview.
Prototype was hands down my favorite game when it came out, and years after. So many days coming home from a crappy shift to take out my frustration on the zombies/soldiers/citizens/mutants of New York. Thanks for the memories.
I'm a dev myself and I gotta say, 90% of this was new info and the last 10% of the new info kinda flew over my head a little. This is amazing, thanks a lot. It's not often that we get to see the nitty gritty inside stuff that you don't directly work with.
Thank you for a great explanation! If you are suddenly out of ideas on how to proceed, I am personally very interested in how this process works with shadows, lights, reflections, etc. from off-screen objects. While playing Hogwarts Legacy, which is a UE4-based game, I've noticed that some elements, like reflections in lakes, often suddenly pop up when I slightly move the camera, which causes an unpleasant experience.
Yeah, one of the options in the last poll on Patreon was exactly that, the various reflection techniques used over the years, and the tradeoffs between them. I haven't played Hogwarts Legacy, but it sounds like SSR, or screenspace reflections, which is what I showed partway through this video (the shot of the cheap reflections on water when I talked about the depth buffer). It's a limitation of the approach, but it's super cheap.
I actually loved Prototype! I bought that game right when it released. I really wish they would have done more with the story and made a good sequel. You guys really did do good work on that game. The GTA-style free-roam playability has been unmatched in that particular flavor of genre. Now I'm going to have to dust it off and see if I can get it to boot on something
There needs to be a VR game engine that only renders nearby objects twice (for the left and right eye) and far away objects once, because far away objects don't need stereoscopic vision. This would save resources and improve performance.
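The idea in the comment above could be sketched like this. Purely illustrative: the threshold and pass names are invented, and a real renderer would also have to composite the shared result into both eye buffers.

```python
# Distance-based mono/stereo split: beyond some distance, eye separation
# produces negligible parallax, so an engine could render those objects once.

IPD = 0.064            # typical interpupillary distance in meters
MONO_THRESHOLD = 50.0  # beyond this, parallax is far under a pixel (assumption)

def passes_for(distance):
    """Return which render passes an object at this distance needs."""
    if distance > MONO_THRESHOLD:
        return ["shared"]          # rendered once, shared between both eyes
    return ["left_eye", "right_eye"]

print(passes_for(5.0))    # ['left_eye', 'right_eye']
print(passes_for(200.0))  # ['shared']
```

The right threshold depends on headset resolution and field of view; at 50 m the angular disparity from a 6.4 cm eye offset is already tiny, which is the intuition the comment is leaning on.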
Since distant objects are likely to change fewer pixels per frame, you could render far away objects at a lower frame rate as billboards, and then, if necessary, smooth the movement by shifting the billboards in between their frames. Then, with the saved performance, maybe you could even render important objects at a higher resolution than they would normally be at such a distance and downscale, instead of doing anti-aliasing or something.
@@DeusExtra ah yes, that’s another good idea. Render far away objects at half the frame rate or use alternate eye rendering like the Luke Ross VR mod. But in VR, far away objects need to remain at high resolution because low res is very visible in VR. That’s actually the biggest immersion breaker in VR, when you can’t see far away objects, like in real life.
@@voxelfusion9894 how does nearsightedness translate into VR vision? The actual display panels are close to your eyes. And do corrective VR lenses help with that?
Prototype is one of my favorite games (both of them). I hope the source code gets leaked so we can get better mods, since it doesn't feel like we're getting a new one
Optimizing CPU performance is something I enjoy doing a lot, very interesting to see how optimizing GPU operations is done. Loved this! Also makes me grateful for game engines which mostly do this for you already haha, not sure I'd want to do this from scratch unless I really needed to get extra frames
I'm surprised to hear you worked on Prototype... I'm a car guy, so the non-car games I've played could be counted on one hand, and Prototype was one of them that I loved so much. It was like a modded GTA to the younger me, just a weird, amazing experience really
This was really awesome! I used to develop for the Nintendo DS, so learning to develop with very strict constraints was really part of the job. This format with in-engine examples really set the video apart. Excellent job, man!
Fantastic education, especially for a lone developer trying to learn more, who is unsatisfied with simplistic answers. These twenty minutes were more valuable to me than many hours of the Unity "tutorials" I have watched. Thanks for being so helpful.
What taught me a lot about optimization was actually the Source engine, when it came to mapping in Hammer. Now I'm using your video to learn more about other types of optimizations that might be possible for Source, or at least in Hammer. Thanks for sharing this type of content and information with anyone that wants it! I am also going to code my own game that is heavily inspired by a game that is poorly optimized, so watching these will hopefully ensure I don't make the same mistakes those developers did.
That last "state-of-the-art" demonstration is so cool! I honestly never even realised it was common to do visibility culling outside of precomputed visibility structures. But not only is it done, there's some very interesting algorithms to lean on. I especially love algorithms that don't rely on temporal reprojection, so that last one (use objects visible in the last frame as occluders) is quite fascinating to me.
You worked on Prototype? That's one of my favorite games! The video is very informative, and relatively easy to understand even for someone who knows nothing about game development, though there's a minor issue with captions. At a few points the captions are mismatched, like at 12:10, and it takes a few seconds for them to catch up.
A quick subscribe from me! I look forward to you going into transparency shenanigans. It surprises me that to this day it is not unlikely for a player to run across transparency issues. I remember even in the recent beautiful Armored Core 6 I found a situation where smoke consistently interacted badly with something. And in playing around with my own projects, I've gone overboard with "transparency is beautiful" too many times, and keep having to be mindful of performance impact.
god... just the sheer amount of knowledge and sublime ability to explain usually not-so-straightforward concepts (when read black-on-white from a uni slide or a blog post written in a VERY dry fashion) *THAT FREAKIN WELL* just amazes me to the point that you, Sir, have officially become my role model (no simping intended). And I mean... duh, no wonder you were (or are, dunno) a Google software engineer, cuz that is the level I aspire to reach one day. Thank you A LOT and I hope this world blesses you and your fam for everything! Super thankful that you make such amazing vids! Cheers!
Great video, I wish it was made a few years ago :-) In my occlusion culling journey, I originally took the HiZ approach on the GPU which worked out great at first. It became a problem though when I wanted to do more with the visible entities. I tried in several ways to send data back to the CPU but there's just too much latency and not enough types of indirect commands to keep it on the GPU, so I went the CPU route instead. Intel has a great paper for their method of CPU-side HiZ implementation, "Masked Software Occlusion Culling". They also provide an open source implementation which has performed well for my application.
Great video. I'm a 3D artist, not a programmer of any sort, and there might have been a simple explanation that I missed, but how does culling account for things like long shadows or global illumination that leak from off-screen into the visible scene? ...Maybe worth a part 2? :)
Just a small tip for visualization: I have trouble seeing a difference between red and green, which was worse with the green and yellow examples. For those with colorblindness, contrast is always the eye's first priority over color. Better to use colors that are complementary, or better yet, just white and black for visualization. (And yes, it's hard for us in graphics programming) :p Really love your video! Super simple and a great start to understanding graphics and optimization. Subscribed :3
@@simondev758 thanks for taking it seriously, I appreciate how you handle feedback on your videos You can look for ready-made "accessible color palettes" to drop in, or keep it simple with contrast and patterns. It really does help, I kept pausing this video just to tell the effects apart, and the problem affects everybody when you have different calibration for every monitor.
Loved the video, I did quite a bit of reprojection shenanigans about ten years ago with the DK1 and DK2 to improve perceived frame times for stuff outside of our foveal vision!
Hey Simon, amazing video, I really love how you go in depth into the nitty gritty of optimization and the history of it. One such argument I'd love to hear about is collision engines, with broad, middle and narrow phases, aabb collisions, spacial partitioning, the challenges of long range ray and shape casting and so on, I feel like there are so many different interesting things to talk about in collision engines
Glad to hear you enjoyed it! I'd love to dive into more optimization topics, but I think I'll leave collision engines out of it. I strongly dislike when people pass themselves off as experts in things that they're not, and I hold myself to that same standard. I haven't done anything beyond superficial physics work, so I don't feel especially qualified to talk about the subject. I'd encourage you to find resources or experts that specialize in that area. Would love to send you in a direction, but I can't think of any offhand unfortunately.
Woah! 😮 Crazy you worked on Prototype. I loved that game, and I have always remembered it from time to time. Maybe you could make a video about your work on the game.
Would you be interested in that? Brief summary: I did some of the early r&d for the graphics, did a big presentation at GDC, and during production mostly did backend and optimization work.
I was thinking for a long time about how to deal with not just culling, but ray tracing, collisions, and gravity simulation too, for a space game. And yeah, cache optimization is important, but so are tree structures, especially for stuff where everything interacts with everything. I want to do a hybrid approach, where the tree nodes serve as containers (indexes) for arrays of properties. I'm super excited for it!!!! But for now I gotta work on simpler games so I can make a name for myself and make it as a game developer \o/
I've used the depth buffer many times to create stylistic shader effects and other bells-and-whistles stuff. It's very handy, but today I finally understand why translucent materials get F'ed up: they don't write to the depth buffer.
Occlusion culling was always fascinating. In some games (like FFXIV) the pop-in is really in your face if you move the camera too fast, but even when it's slow to transition it's still just...hard to believe the tech exists. Cool explanation!
Loved this, as an enterprise software engineer with no game development experience, I found this highly interesting and really easy to understand. You did an amazing job, thanks and you have another subscriber. 😁
Meshlet culling using mesh shaders looks like an interesting development, and I'm guessing UE5 uses something like that. I wonder what the new Unity culling system uses.
Nice video. I once read a paper on culling strategies. It's just amazing how smart the people in the gaming industry are. Some of the most amazing algorithms came from the gaming industry.
Man, you're the best. Rendering is an entire discipline unto itself and it's always on the horizon of my attention/knowledge. I finally found the time to study flow fields and I'm moving onto nav meshes soon. One of your first videos was on A*, I'd be interested to hear what you have to say about group pathfinding.
Hey Simon, one thing I'd like to see you put up for your patrons to vote for: rendering in high-motion/scene-change situations, e.g. racing sims. While yes, in flight sims planes move faster, a majority of the time you're higher up in the sky, so objects tend to "draw" at around the same area of the scene. But racing sims are interesting (especially in cockpit, and especially in VR) because, unlike most games where the scene doesn't change TOO much (either the area is reasonably within the same occlusion area, or the objects are seen from fairly "similar" angles), VR + racing sims equals fast forward movement with often a lot of head jiggle/turning/tilting. Add in suspension modeling, hairpin corners, etc., and for all the optimization methods out there, I just can't think of any good ones for racing sims that wouldn't ruin the experience. Particularly when you have something like 4 mirrors in a car (say 3 real and 1 "virtual" in the form of a "rear view camera"). It's honestly kind of crazy to think about when considering the processes most games use, because you want really high up-close detail (buttons, texture art for dashes, 3D pins for dashes, especially in VR where flat physical dashes look horrible), and then transparency rendering like windows, fences, buildings, etc. The reason is I play a lot of iRacing, and we end up with a lot of folks expecting 120fps in really heavy loads on it, which... well, I'd love to be able to explain that to someone. It just sounds like racing sims as a whole are the WORST possible rendering scenario of any game due to their complexity in many different areas. (Not to mention that iRacing does a lot of external-track art for broadcasting cameras that includes off-track objects you'd really only see in scenic games or TV broadcast panning cameras.)
Obviously I don't expect any response or coverage on this one, but I figured use cases around specific types of games, and the rendering pipelines for those games, might be an interesting topic, as how things can even be processed varies between something like an FPS, an RTS, or a racing sim. (Like, the Horizon Zero Dawn/Death Stranding/Decima engine stuff looks GREAT, but I don't see it working with something like a racing sim.) Anyways, sorry for the spam, just wanted to send something while I was thinking of it
Gran Turismo has used this technique for at least 2 decades. WipEout 2048 also used this technique on PS Vita, but if you do some funky things with the photomode camera, you can see loads of massive black boxes through large chunks of level geometry labeled in bold red font "Blocker."
The amount of tutorials on game development that just ignore optimization is crazy, so it's nice to see that there are at least some people that are willing to talk about optimization
I love the video so far but I have one piece of feedback - the yellow and green can be hard to distinguish, especially for someone colorblind. Maybe blue instead of yellow would be a better choice there ^^
As a web dev not familiar with game dev stuff, I had a clue about how the rendering could work, but this goes way beyond what I understood in the past. Good explanation, in a very simple way, at least for people with some dev knowledge like me. Can't tell if someone with no dev experience could understand, but this sort of content isn't for the average guy
Thanks for sharing! Yeah, optimization can be a challenge, but it's the part of development I find most fun, as it's the most challenging. Simply not rendering objects that are not visible to the camera is only a portion of game optimization, but nevertheless a big one :D Btw, check out my game in progress!😄 And it depends on the game itself; either way, this definitely saves a lot of GPU cost! These problems generally get faced pretty quickly in games, though.
While it might seem like a huge optimisation step at first glance, backface culling already does most of this work. It's a rendering step that looks at whether a polygon is facing the camera before it gets rendered. On most concave objects, that would already account for a large part of the occluded polygons
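For reference, the backface test described above boils down to a dot product. Here's a simplified Python sketch that assumes a fixed view direction, as in an orthographic camera; real rasterizers typically use the screen-space winding order of the projected triangle instead, and for perspective cameras the relevant vector is from the camera to the face.

```python
# Simplified backface test: a face whose normal points away from the camera
# (non-negative dot product with the view direction) can be skipped before
# rasterization.

def is_backface(normal, view_dir):
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot >= 0  # facing away from (or edge-on to) the camera

view_dir = (0, 0, 1)                      # camera looking down +Z
print(is_backface((0, 0, -1), view_dir))  # faces the camera: False
print(is_backface((0, 0, 1), view_dir))   # faces away: True
```

On a closed convex mesh this alone discards roughly half the triangles, which is why it's a built-in fixed-function step rather than something occlusion culling has to handle.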
What an elegant way to solve the depth buffer dependency issue. Render the simpler version of the view to extract depth data and then render the high resolution view.
I find Jedi: Survivor and Fallen Order have pretty lenient culling. Move the camera a bit too fast and it's a few frames of pure bright skybox. Those games have plenty of problems with their tech, really disappointing work from Respawn. The problems can be incredibly blatant and completely take me out of the experience.
That is really interesting. Your explanation triggered a question I've had for quite some time. In regard to the viewing frustum, you show a couple of trees top-down. Now, my question has to do with viewing distance. I've noticed a lot of games use some sort of fog system. When you look directly forward, objects and geometry in the distance get obscured. But when, from the same point, you turn your view and look at the same subject in the distance, with the edge of your screen, i.e. your periphery, you can see it more clearly. Can you elaborate on why this is? Since a straight sightline gets obscured by fog earlier than a diagonal sightline.
Ah yeah, it's how you do your distance calculations. If you sit down with some paper and draw a view frustum, or even easier, just draw a triangle. Consider the top of the triangle your viewpoint, and consider the bottom the far depth. Now just take a ruler and look at the actual distance between straight ahead, and to one of the sides, it's different. If you use that distance to do your fog calculation, the fog will change as you swivel.
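To make that difference concrete, here's a tiny Python sketch of the two distance metrics (names are illustrative; real engines compute this per fragment in a shader). Euclidean (radial) distance is rotation-invariant, so fog stays stable as you swivel; planar (view-space Z) distance makes the same point read as "closer" when it sits near the screen edge.

```python
import math

def radial_distance(p, cam):
    """Straight-line distance from the camera: the same no matter where
    the point lands on screen."""
    return math.dist(p, cam)

def planar_distance(p, cam, forward):
    """Distance along the camera's forward axis only (view-space Z)."""
    return sum((pi - ci) * fi for pi, ci, fi in zip(p, cam, forward))

cam, forward = (0, 0, 0), (0, 0, 1)
ahead = (0, 0, 10)  # straight ahead
edge = (6, 0, 8)    # same radial distance, but off to the side

print(radial_distance(ahead, cam), planar_distance(ahead, cam, forward))  # 10.0 10
print(radial_distance(edge, cam), planar_distance(edge, cam, forward))    # 10.0 8
```

With planar fog, the off-axis point at depth 8 gets less fog than the on-axis point at depth 10, even though both are exactly 10 units away, which is the swiveling-fog artifact the question describes. Radial fog fixes it at the cost of a slightly more expensive per-fragment calculation.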
Patrons can now vote for the next video! Thank you for your support.
Patreon: www.patreon.com/simondevyt
Courses: simondev.io
When DARPA's brain modem allows us to enter the Matrix, fly like Superman, and snatch experiences, all this hard work on simulation rendering will really pay off.
Thank you for the video. I want to point out that after 17:03 the captions went nuts.
You're a very good teacher: straight facts, easy to understand, and a calming voice. 🎉
U r sick
I was going to watch this but within the first 20 seconds you said “games used to have vast scale”… and showed AC Mirage! The tiniest, crappiest version of AC ever released! AC Valhalla would have been completely acceptable, especially because the twitchy-boy response was always that it was “too big”. But no, it was no good... because the main character is white.
I can tell you're a developer because you sound like you haven't slept in six years. That was an amazing explanation, so kudos on your tireless efforts.
The not-sleeping thing is more from my kids
Sounds like Jason from “Home Movies” lol
I can mostly tell from the name
yee i was gonna say john benjamin @@CrowdContr0l
Thats some bs stereotype for devs. U make shitty code if u dont sleep well
8:48 What's interesting here is that for the Portal64 project (a Portal port to the N64) the dev decided to skip having a depth buffer and instead sort things from furthest away to closest using the CPU. The reason is that the N64 has extremely limited memory bandwidth but more CPU cycles than it can use, so he'd rather spend CPU power sorting things than clear/write 32 bits of depth data to every pixel every frame. It also helps that the nature of Portal's maps makes it really easy to cull things: anything that's not in the same room, or in a room visible through a portal, open door, or window, can be culled without even thinking about it.
I don't think he mentioned this in his video, but given the game runs at 320x240 this presumably also saved him 150 KB of memory. Not an insignificant amount when he has 4 MB total to work with.
Yep, it's called the Painter's Algorithm, and it was commonly used in the era before depth buffers. The whole era before modern hardware is really cool, and needs its own set of videos! heh
An important clarification here: the Portal64 example sorts per display list, not per triangle.
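To make the sorting step concrete, here's a minimal Painter's Algorithm sketch (object and field names are illustrative): sort far-to-near by distance to the camera, then draw in that order so nearer objects simply paint over farther ones, with no depth buffer involved.

```javascript
// Painter's Algorithm sketch: draw order is far-to-near, so later
// (nearer) draws overwrite earlier (farther) ones in the framebuffer.
function squaredDistance(a, b) {
  const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return dx * dx + dy * dy + dz * dz;
}
function painterSort(objects, cameraPos) {
  // Sort a copy, farthest first; squared distance avoids the sqrt.
  return [...objects].sort(
    (a, b) => squaredDistance(b.position, cameraPos) -
              squaredDistance(a.position, cameraPos));
}
```

The classic failure cases (intersecting or cyclically overlapping geometry) are why depth buffers eventually replaced this, but for room-based maps like Portal's it holds up well.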
You should see what the community of 3D creators on Scratch have come up with to overcome the computational limitations of Scratch; it's wild stuff! Painter's algorithm and BSP-enhanced painter's sorting are commonplace!
Working on the N64 is weird in general. The Rambus memory is so ludicrously slow compared to the rest of the system that you end up having to do a lot of stuff downright backwards, even compared to contemporary systems.
It also makes me really appreciate modern development tools, because when you look at the internals of games like Mario 64, it's pretty clear they were optimizing blind, without being able to actually measure what was taking the most time, or even whether certain changes were helpful or harmful.
Crazy, I've heard stories about the older architectures being super strange. I landed on the tail end of the PS2, which was apparently a total nightmare to work with as well. Never knew any N64 people, but watching some of @KazeN64's videos makes it look super interesting.
This video is insane. All the stuff I was looking for for years on the internet, just made available in a simple, condensed, purpose-made effective educational video with no fluff. Thank you so much. If only every teacher was this good (and every research paper was readable).
@@leeroyjenkins0 indeed, it builds really well onto foundations you gain from uni. Now the problem is just time. Even the doctor's degree genius programmers and the studios spent decades iteratively developing and adding these algorithms to the game engines with each new game. Building a new game engine just seems like such a daunting, massive task. And that's just the rendering side of things! You still gotta create tools on top of that to be able to work with your engine. Modifying existing game engines like UE might be the way to go (Deep Rock Galactic devs chose this route), but even then, you gotta know the engine pretty well, which is stupidly specialized knowledge, as well as know the algorithms involved.
Not to mention he got H. Jon Benjamin to narrate the whole video
@@1InVader1 Building a new game engine isnt a daunting massive task, unless you want to compete with UE or Unity.
@@rykehuss3435 it only makes sense to make one if you want to do something that those others don't do as well (destruction, voxels...). If you want to do the same, or anything less, what's the point? UE already exists, made by people smarter than you, might as well use it.
@@1InVader1 I agree
I LOVED Prototype growing up. Super cool that I just stumbled across a dev on youtube. Your channel is great btw
The power creep was so fun, really made you feel like a trillion-dollar bio weapon
I've always wanted to find and thank a dev for Prototype. That game and the next were amazing and the ability to bring destruction to the town was amazing. Thank you all
that game made so many memories for me. thank you, Simon, and the entire team on Prototype.
@@courtneyricherson2728 I was a bit disappointed with the story direction of the 2nd game.
It really felt forced to make Mercer the villain instead of just apathetic. They should have (in my opinion) gone with something or someone else as the villain.
I literally had the thoughts of "oh, so you can compare foreground and background objects in screen-space" and "you don't have a depth buffer yet, but you have the one from last frame" before those subjects came to be implemented in the demonstrations. The examples were really well explained and very intuitive thanks to the visuals!
I guess the reason why Cities Skylines 2 does not bother with occlusion culling is that in a top-down perspective there are simply not many objects behind each other (in contrast to e.g. a 3rd person game).
I feel like you'd still benefit heavily, depending on the shot. When the camera is overhead, the amount of "stuff" is naturally constrained to an extremely small area, thus your occlusion needs aren't high anyway, vs when the camera is lower.
But this is mostly conjecture, so take it for what it's worth.
@@simondev758 It's an interesting thing to think about, as a lot of simulation games have had this problem where performance tanked once you zoomed in to look at closer detail.
However, it's especially problematic in Skylines given it's a game that encourages you to zoom in to street level; why else make it so detailed? Seems like a big goof not to realise that's going to be problematic.
Yeah, until you tilt the camera down; then you have tons of buildings in front of and behind each other, and occlusion culling is a must lol
The speed at which it does all the calculations of what should be drawn and what shouldn't always blows my mind.
A lot of the HZB ones can be done in less than a couple ms.
That's because you're thinking of it as physical real world objects. When you think of it as data in a notepad file, all the computer is doing is reading very quickly.
@@edzymods Nah. It's still hard to put into human terms just how blazing fast computers are at handling raw numbers - and the fact that they continue to get faster doesn't help in terms of wrapping your head around it.
It's like trying to understand the scale of the solar system. You can put it into various models or scale it down to a convenient size all you want; it'll never truly convey just how big it is.
I rarely do any game development, but love your content! It's good stuff. You and Acerola have become one of my favourites to watch and learn about how these digital worlds come about.
I love Acerola's content too!
same. I think graphics programming has a lot to teach about programming in general, especially math and the performance of algorithms, and it intrinsically visualises the thing our program is manipulating, which naturally lends itself to clear educational content and a tight feedback loop for problem-solving and evaluating our methods
-but when I do… it’s Dos Equis.
@@arcalypse1101let's not forget Sebastian Lague
sebastian lague too, these three make the ultimate trio for me
Glad I found this again, because I wanted to let everyone know it's better than a bedtime story. I fell asleep watching and listening to this and woke up to a dead phone with a flat battery. Best sleep in ages.
I've watched it all this time and found it very interesting.
I hope you'll read this, because this video has really inspired me. Not only do you explain things in a really easy-to-understand way, you also carve out a value system: "this is easy, not that complicated, not a mystery really". That really helps you get a feel for the underlying relationships between all the different approaches (and systems), something you would otherwise only get in a 1-on-1 conversation. Thank you for showing it's possible! Great video
Honestly a lot of gamedev isn't super complex, but presented in weirdly convoluted ways.
🎯 Key Takeaways for quick navigation:
00:29 🌍 *Overview of Frustum Culling and Viewing Volumes*
- Frustum culling defines what's visible to the player within a viewing frustum.
- Objects in the scene are enclosed by simple volumes for intersection testing with the frustum.
- Intersection tests between the view frustum and object volumes determine what's discarded or drawn in the scene.
02:16 🌐 *Frustum Culling Limitations and Occlusion Culling Introduction*
- Frustum culling, while useful, isn't always sufficient for optimization.
- Occlusion culling, introduced as the next step, eliminates what's obscured by large solid objects efficiently.
- Early methods involved manually created occlusion volumes within objects to enhance rendering efficiency.
04:38 🖼️ *Simple Approach to Occlusion Culling*
- Artists manually create occlusion volumes inside objects to hide non-visible parts during rendering.
- Techniques involve projecting occluders onto screen space for culling and optimizing the selection process of occluders.
- Well-optimized brute force approaches might outperform complex tree structures in occlusion culling.
06:26 🧩 *Optimization Strategies in Occlusion Culling*
- Optimal occluder selection involves tradeoffs in time and efficiency.
- Some games adopt simpler approaches like brute force box culling for performance benefits.
- Handcrafted SIMD-optimized methods can outperform theoretically faster tree structures on modern hardware.
08:18 🔍 *Depth Buffers and Their Role in Optimization*
- Depth buffers track pixel depth for rendering order and visibility checks in the GPU.
- They enable efficient rendering, avoid object clipping, and facilitate effects like reflections.
- Depth buffer data can be leveraged for occlusion culling, discarding invisible objects before rendering.
09:42 🌐 *Hierarchical Z Buffers (HZB) in Occlusion Culling*
- Hierarchical Z Buffers (HZB) use downsampled occlusion maps to expedite occlusion checks.
- Progressive downsampling allows for faster comparisons and reduces the number of texture reads.
- HZB enables efficient object visibility checks based on screen space bounds and mip levels.
11:26 🎮 *CPU-Side Occlusion Map Generation*
- Implementing a software rasterizer for occlusion map generation bypasses GPU limitations.
- Software-based occlusion mapping offers flexibility in occluder shapes and union of occluders.
- Optimal for quick polygon rendering and allows for more complex occluder shapes than previous methods.
14:41 🕹️ *GPU Advancements and Occlusion Queries*
- GPU advancements introduced hardware occlusion queries for direct visibility checks.
- Occlusion queries involve asking the GPU about the visibility of specific objects.
- Despite advantages, managing queries incurs CPU overhead and asynchronous query handling poses challenges.
17:28 🔄 *Hierarchy of Bounding Volumes for Query Efficiency*
- Hierarchical bounding volume structures reduce CPU overhead in occlusion queries.
- Queries are performed on combined bounding volumes, allowing swift visibility determinations.
- Immediate query responses prevent GPU stalling, enhancing overall optimization strategies.
17:54 🔍 *GPU and CPU Occlusion Query Systems*
- Challenges with occlusion query systems: Immediate answers aren't available, stalling for responses is inefficient.
- Splinter Cell's pivot: Shifted from query-based to a depth-buffer-based approach.
- Depth buffer hierarchy: Utilizing depth buffers to create a hierarchy, reducing CPU work.
18:22 🖥️ *GPU-Driven Occlusion Calculations*
- Rendering occluders to a depth buffer: Creating a hierarchy from depth maps to decide object visibility.
- GPU efficiency: Most occlusion work handled by the GPU, minimizing CPU intervention.
- Front-loading occlusion work: Early occlusion calculations to streamline subsequent GPU processes.
19:40 ⏰ *Exploiting Temporal Coherence*
- Time-saving measures: Recognizing minimal frame-to-frame changes, aiming to exploit temporal coherence.
- Assassin’s Creed's temporal coherence: Rendering nearby objects into a depth buffer and reprojecting the last frame’s depth to the current one.
- Challenges of temporal coherence: Difficulty in implementation due to engine complexities.
21:27 🔄 *Evolution to Current Occlusion Techniques*
- Modern occlusion methods: Utilizing visible objects from the last frame as occluders for the current frame.
- Conservative approach: Probability-based occlusion, potential for overdraw to avoid missing objects.
- Two-pass occlusion culling: An initial pass followed by a re-test against the built HZB for non-drawn objects.
22:48 💡 *GPU-Powered Pipeline and Future Directions*
- GPU-centric processing: Entire occlusion pipeline handled by the GPU, leveraging compute shaders.
- Future developments: Breakdown of individual objects for occlusion culling, advancements like Unreal Engine's Nanite.
- Complexities beyond visibility: Alluding to other rendering optimizations beyond visibility enhancements.
Made with HARPA AI
pin
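To ground the frustum-culling step from the summary above, here's a minimal bounding-sphere vs. frustum-plane test (the plane representation and all names are my own assumptions, not from the video):

```javascript
// Frustum test sketch: each plane is {normal, d} with the normal
// pointing inward; a sphere is culled if its center is more than
// `radius` behind any of the six planes.
function signedDistance(plane, point) {
  return plane.normal[0] * point[0] +
         plane.normal[1] * point[1] +
         plane.normal[2] * point[2] + plane.d;
}
function sphereInFrustum(planes, center, radius) {
  for (const plane of planes) {
    if (signedDistance(plane, center) < -radius) {
      return false; // fully outside this plane: cull it
    }
  }
  return true;      // inside or intersecting: draw it
}
```

This is the "simple volumes for intersection testing" idea: six plane checks per object, conservative on purpose, so an object straddling a plane is kept rather than risk popping.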
I'm currently making a 2D game and it kind of blows my mind that the tradeoff of not drawing objects is worth the time it takes to check what should be culled every single frame. Surely simply checking which objects should be culled is a massive processing task
> Surely simply checking which objects should be culled is a massive processing task
worth it if drawing the object is a much bigger task
To be fair, if those objects have simple geometry and only a texture, per-object culling can cost more than just drawing them. For example, in Minecraft you wouldn't want to run culling calculations for every single block. But as objects got more and more complex and shaders entered the picture, that tradeoff shifted.
In a 2D game you don't have to check individual objects: just store your world in a grid (2D array), and then the culling is just a for-loop which only iterates through the part of the grid that's visible on screen.
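The grid-based culling described above can be sketched like this (the tile and camera field names are illustrative):

```javascript
// With the world stored as a 2D array of tiles, "culling" is just
// clamping the loop bounds to the tiles the camera rect overlaps.
function visibleTiles(grid, camera, tileSize) {
  const x0 = Math.max(0, Math.floor(camera.x / tileSize));
  const y0 = Math.max(0, Math.floor(camera.y / tileSize));
  const x1 = Math.min(grid[0].length - 1,
                      Math.floor((camera.x + camera.width) / tileSize));
  const y1 = Math.min(grid.length - 1,
                      Math.floor((camera.y + camera.height) / tileSize));
  const tiles = [];
  for (let y = y0; y <= y1; y++) {
    for (let x = x0; x <= x1; x++) {
      tiles.push(grid[y][x]); // only on-screen tiles are ever visited
    }
  }
  return tiles;
}
```

No per-object test is needed because the grid position itself encodes visibility; the cost is constant in the world size and proportional only to the screen area.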
Well, actually it would be worth it if drawing the object is a much bigger task than sorting _and_ drawing the remaining stuff; the workload actually has two parts to it @@RandomGeometryDashStuff
@@rosen8757 Thanks for this, I'll be implementing it in my engine.
As a mechanical engineer, I like how you kept this simple yet technical in terms of explanation. That is a skill in itself. You got yourself a new subscriber! Keep it up my man
Ohhhh, a dev who worked on Prototype teaching me game dev, I feel so blessed. I absolutely loved that game, and still love it; thank you for putting in so much effort behind it. And thank you for these amazing teachings, keep it up man, much love and respect.
Somehow you manage to "destress" me while teaching what could seem like a complex topic but you manage to break it down so it seems so simple. I like your javascript projects and I have converted some of them to typescript.
I read that as _distress_
Your channel is an absolute gem, so many high quality videos about topics that are really hard to find online. Thanks.
Hi Simon, I've been following you on RUclips for a couple of years and I'm very inspired by computer graphics and game development, currently making a game on Phaser 3. Thank you for explaining interesting techniques and sharing your incredible experience, I learned a lot thanks to you
This is definitely my new favorite of your videos. It's endlessly fascinating to me to hear about the crazy things that are done in rendering pipelines. I love the GDC presentations where they dig into the minutiae of their rendering customizations.
That's a pretty darn great summary for beginners, did this for 3 years myself and it's definitely one of the more challenging yet fun programming fields there are!
P.S. I trawled through hundreds of pages of documentation and papers with no friendly YouTube video to guide me. You still can't avoid that if you want to become actually good at this, but do watch the video to get an overview.
Hah, yeah if you could become an expert off a 20 minute youtube video, that'd be great. No shortcuts unfortunately.
Prototype was hands down my favorite game when it came out, and years after. So many days coming home from a crappy shift to take out my frustration on the zombies/soldiers/citizens/mutants of New York. Thanks for the memories.
I'm a dev myself and I gotta say, 90% of this was new info and the last 10% of the new info kinda flew over my head a little. This is amazing, thanks a lot. It's not often that we get to see the nitty gritty inside stuff that you don't directly work with.
Thank you for a great explanation! If you are suddenly out of ideas how to proceed, I am personally very interested in how this process works with shadows, lights, reflections etc from the offscreen objects. By playing Hogwarts Legacy, which is an UE4-based game, I've noticed that some elements, like reflections in lakes, are often suddenly popping up when I slightly move the camera, which causes an unpleasant experience.
Yeah, one of the options in the last poll on Patreon was exactly that: the various reflection techniques used over the years, and the tradeoffs between them. I haven't played Hogwarts Legacy, but it sounds like SSR, or screen-space reflections, which is what I showed partway through this video (the shot of the cheap reflections on water when I talked about the depth buffer). It's a limitation of the approach, but it's super cheap.
This kind of quality of content is amazing as a graphics programmer to have access to. I'm amazed by your channel
Thanks! What do you work on?
So if a tree falls in the woods and no one is around to see it...
actually in this case no tree actually falls
the tree has been culled and (mostly) only consumes CPU time because the simulation still tracks it in case a view frustum comes along
You do a great job of explaining abstract concepts in a clear and concrete way, thank you.
Really excellent video! I don't know much about graphics/rendering so I found this fascinating!
It's the cool but also exhausting thing about graphics, you basically have to retrain constantly hah!
I actually loved prototype! I bought that game right when it released. I really wish they would have done more with the story, and made a good sequel. You guys really did do good work on that game. The free roam playability in the aspect like GTA, has been unmatched in that particular flavor of genre. Now I'm going to have to dust it off and see if I can get it to boot on something
You should do headspace recordings. Your voice is immensely soothing 😊
Explaining major breakthroughs in game industry for a given problem is so interesting! Thanks a lot Simon and keep up the good work!
There needs to be a VR game engine that only renders nearby objects twice (once for each eye) and far-away objects once, because far-away objects don't need stereoscopic vision. This would save resources and improve performance.
That's pretty clever. I'm curious about how it would perform
Since distant objects are more likely to change less pixels per frame, you could render far away objects with a lower frame rate as billboards. And then if necessary smooth the movement by shifting the billboards in between their frames. Then, with the saved performance, maybe you could even render important objects at a higher resolution than they would be normally at such a distance and then downscale it instead of doing anti aliasing or something.
@@DeusExtra ah yes, that’s another good idea. Render far away objects at half the frame rate or use alternate eye rendering like the Luke Ross VR mod. But in VR, far away objects need to remain at high resolution because low res is very visible in VR. That’s actually the biggest immersion breaker in VR, when you can’t see far away objects, like in real life.
@@djp1234 meh, not seeing far away objects is perfectly immersive for anyone nearsighted. Lol
@@voxelfusion9894 how does nearsightedness translate into VR vision? The actual display panels are close to your eyes. And do corrective VR lenses help with that?
Prototype is one of my favorite games, both of them. I hope the source code gets leaked so we can get better mods, since it doesn't feel like we're getting a new one.
You're a great teacher. There are only a handful of good youtube channels where you can actually digest the content. This video is gold.
As someone who never made it that far past the Painter's Algorithm section in any graphics book, this was great 🙂
Subscribed. Some of the best and clearest content about graphics programming that I've ever seen on YouTube.
Optimizing CPU performance is something I enjoy doing a lot, very interesting to see how optimizing GPU operations is done. Loved this!
Also makes me grateful for game engines which mostly do this for you already haha, not sure I'd want to do this from scratch unless I really needed to get extra frames
The fun part of being a graphics engineer is that you end up doing a tonne of both CPU and GPU optimization.
I'm surprised to hear you worked on Prototype... I'm a car guy, so the non-car games I've played could be counted on one hand, and Prototype was one of them that I loved so much. It was like a modded GTA to the younger me, just a weird, amazing experience really.
Prototype was a fun, unique title, miss working on that team.
This was really awesome! I used to develop for the nintendo DS, so learning to develop with very strict constraints was really part of the job.
This format with in-engine examples really set the video apart, excellent job man!
Thanks! I worked with a guy who was fresh off of DS years ago, very smart guy. That platform sounded like a pain to develop for.
Fantastic education, especially for a lone developer trying to learn more, who is unsatisfied with simplistic answers. These twenty minutes were more valuable to me than many hours of the Unity "tutorials" I have watched. Thanks for being so helpful.
You worked on prototype??? Bro that is like my favourite game! Keep it up
I can imagine using the terrain or buildings in a city as occlusion objects has big benefits real quick.
What taught me a lot about optimization was actually the Source engine, when it came to mapping in Hammer. Now I'm using your video to learn more about other types of optimizations that might be possible for Source, or at least in Hammer. Thanks for sharing this type of content and information with anyone that wants it!
I am also going to code my own game that is heavily inspired by a game that is poorly optimized. So watching these will hopefully ensure I will not make the same mistakes those developers did.
That last "state-of-the-art" demonstration is so cool! I honestly never even realised it was common to do visibility culling outside of precomputed visibility structures. But not only is it done, there's some very interesting algorithms to lean on. I especially love algorithms that don't rely on temporal reprojection, so that last one (use objects visible in the last frame as occluders) is quite fascinating to me.
You worked on Prototype? That's one of my favorite games!
The video is very informative, and relatively easy to understand even for someone who knows nothing about game development, though there's a minor issue with captions. At a few points the captions are mismatched, like at 12:10, and it takes a few seconds for them to catch up.
Ah ok I'll double check the captions.
A quick subscribe from me! I look forward to you going into transparency shenanigans. It surprises me that to this day it is not unlikely for a player to run across transparency issues. I remember even in the recent beautiful Armored Core 6 I found a situation where smoke consistently interacted badly with something. And in playing around with my own projects, I've gone overboard with "transparency is beautiful" too many times, and keep having to be mindful of performance impact.
Man, I've done Game dev for over a decade and this still sounds amazing :) Love your channel!
I am not graphic engineer, but the content you make is extremely interesting to watch. Thank you for your work sir
god... just the sheer amount of knowledge and sublime ability to explain usually not-so-straightforward concepts (when read black-on-white from a uni slide or a blog post written in a VERY dry fashion) *THAT FREAKIN WELL* just amazes me to the point that you Sir have officially become my role model (no simping intended). And I mean... duh, no wonder you were (or are, dunno) a Google software engineer, cuz that is the level I aspire to reach one day.
Thank you A LOT and I hope this world blesses you and your fam for everything!
Super thankful that you make such amazing vids!
Cheers!
Great video, I wish it was made a few years ago :-) In my occlusion culling journey, I originally took the HiZ approach on the GPU which worked out great at first. It became a problem though when I wanted to do more with the visible entities. I tried in several ways to send data back to the CPU but there's just too much latency and not enough types of indirect commands to keep it on the GPU, so I went the CPU route instead. Intel has a great paper for their method of CPU-side HiZ implementation, "Masked Software Occlusion Culling". They also provide an open source implementation which has performed well for my application.
Yeah I wanted to call out to Intel's library at some point, but didn't have a good reason to.
Immediately subscribed. That's a really good content and you have a nice voice even for a long videos.
Great video. I'm a 3D artist, not a programmer of any sort, and there might be a simple explanation I've missed, but how does culling account for things like long shadows or global illumination that leak from off-screen into the visible scene? ...Maybe worth a part 2? :)
One of the videos on this site that I enjoyed the most. Please make more!
Just a small tip for visualization: I have trouble seeing the difference between red and green, which was worse with the green and yellow examples. For those with colorblindness, contrast is always the eye's first priority over color. Better to use colors that are complementary, or better yet, just white and black for visualization.
(And yes, it's hard for us in Graphics Programming) :p
Really love your video! super simple and a great start to understanding graphics and optimization. Subscribed :3
I think someone else brought that up, and it never occurred to me. But I will 100% strive to be better in the future.
Here to second this, yellow is a devious colour that is seldom seen
@@simondev758 thanks for taking it seriously, I appreciate how you handle feedback on your videos
You can look for ready-made "accessible color palettes" to drop in, or keep it simple with contrast and patterns. It really does help, I kept pausing this video just to tell the effects apart, and the problem affects everybody when you have different calibration for every monitor.
Next they'll be hiring deaf people as lifeguards. What's the world coming to. 😂
Loved the video, I did quite a bit of reprojection shenanigans about ten years ago with the DK1 and DK2 to improve perceived frame times for stuff outside of our foveal vision!
Mr.doob is one of your patrons! Actually I’m not surprised. GG
Also just noticed. Too cool
Hey Simon, amazing video, I really love how you go in depth into the nitty-gritty of optimization and its history. One such topic I'd love to hear about is collision engines: broad, middle and narrow phases, AABB collisions, spatial partitioning, the challenges of long-range ray and shape casting, and so on. I feel like there are so many different interesting things to talk about in collision engines.
Glad to hear you enjoyed it!
I'd love to dive into more optimization topics, but I think I'll leave collision engines out of it. I strongly dislike when people pass themselves off as experts in things that they're not, and I hold myself to that same standard. I haven't done anything beyond superficial physics work, so I don't feel especially qualified to talk about the subject.
I'd encourage you to find resources or experts that specialize in that area. Would love to send you in a direction, but I can't think of any offhand unfortunately.
Woah! 😮 Crazy you worked on Prototype. I loved that game, and I have always remembered it from time to time. Maybe you could make a video about your work on the game.
Would you be interested in that? Brief summary: I did some of the early r&d for the graphics, did a big presentation at GDC, and during production mostly did backend and optimization work.
@@simondev758 Definitely should do a video about it. It would be a great one to watch.
Easy to follow, in depth explanation of some pretty complex concepts, waht more can you wish for... Thank you
Wow. Who knew that Bob Belcher was an expert in graphics programming?
Truly informative. This is a great perspective; I usually only hear armchair devs talk about game development. I learned a lot here. Subbed
I've been thinking for a long time about how to deal with not just culling, but ray tracing, collisions and gravity simulation too, for a space game.
And yeah, cache optimization is important, but so are tree structures, especially for stuff where everything interacts with everything.
I want to do a hybrid approach, where the tree nodes serve as containers (indexes) for arrays of properties.
I'm super excited for it!!!!
But for now I gotta work on simpler games so I can make a name for myself, and make it as a game developer \o/
Omg I remember the black book! Such a great read even if you didn’t do graphics dev!
I've used the depth buffer many times to create stylistic shader effects and other bells-and-whistles stuff. It's very handy, but today I finally understand why translucent materials get F'ed up: they don't write to the depth buffer.
Occlusion culling was always fascinating. In some games (like FFXIV) the pop-in is really in your face if you move the camera too fast, but even when it's slow to transition it's still just...hard to believe the tech exists.
Cool explanation!
23:19: of course this isn't a complete picture; that's the entire point of the culling process!
Touché
Your videos are incredible
I was out for the weekend, but thank you so much for your support!
Prototype was one of my favorite games back in the day. Very cool to hear about it here.
Loved this, as an enterprise software engineer with no game development experience, I found this highly interesting and really easy to understand. You did an amazing job, thanks and you have another subscriber. 😁
Meshlet culling using mesh shaders looks like an interesting development, and I'm guessing UE5 uses something like that. I wonder what the new Unity culling system uses.
Nice video. I once read a paper on culling strategies. It's just amazing how smart the people in the gaming industry are; some of the most amazing algorithms came from it.
Thank you so much for sharing your knowledge with us, love it when you bring out new videos!
You just threw in the fact that you were one of the devs of my all time favourite game Prototype like it was nothing 😭
It's like if Bob's Burgers explained Computer Science
Man, you're the best. Rendering is an entire discipline unto itself and it's always on the horizon of my attention/knowledge. I finally found the time to study flow fields and I'm moving onto nav meshes soon. One of your first videos was on A*, I'd be interested to hear what you have to say about group pathfinding.
That is really cool you worked on Prototype. I really enjoyed that game.
14:36 "Now if you've never done any PS3 development" I feel very called out right now... I actually haven't developed a game for the PlayStation 3 😥
Hey Simon, one thing I'd like to see, if you could put it up for your patrons to vote on: rendering in high-motion/scene-change situations.
E.g. racing sims.
While yes, planes move faster in flight sims, a majority of the time you're higher up in the sky, so objects tend to "draw" in around the same area of the scene. Racing sims are interesting (especially in cockpit, and especially in VR) because, unlike most games where the scene doesn't change TOO much (either the area stays reasonably within the same occlusion area, or objects are seen from fairly "similar" angles), VR + racing sims means fast forward movement with a lot of head jiggle/turning/tilting. Add in suspension modeling, hairpin corners, etc., and for all the optimization methods I've been thinking about, I just can't come up with any good ones for racing sims that wouldn't ruin the experience.
Particularly when you have something like 4 mirrors in a car (say 3 real and 1 "virtual" in the form of a "rear view camera").
It's honestly kind of crazy to think about when considering the processes most games use, because you want really high up-close detail (buttons, texture art for dashes, 3D pins for dashes, especially in VR where flat physical dashes look horrible), and then transparency rendering like windows, fences, buildings, etc.
The reason I ask is that I play a lot of iRacing, and a lot of folks expect 120fps under really heavy loads, which... well, I'd love to be able to explain that to someone. It sounds like racing sims as a whole are the WORST possible rendering scenario of any game, due to their complexity in so many different areas.
Not to mention that iRacing does a lot of external-track art for broadcasting cameras, including off-track objects you'd really only see in scenic games or on TV broadcast panning cameras.
Obviously I don't expect any response or coverage on this one, but I figured use cases around specific types of games and their rendering pipelines might be an interesting topic, since how things can even be processed varies a lot between an FPS, an RTS, and a racing sim. (Like, the Horizon Zero Dawn/Death Stranding/Decima engine stuff looks GREAT, but I don't see it working for something like a racing sim.)
Anyways, sorry for the spam, just wanted to send something while I was thinking of it
Amazing video!
Gran Turismo has used this technique for at least two decades. WipEout 2048 also used it on PS Vita, and if you do some funky things with the photo-mode camera, you can see loads of massive black boxes through large chunks of level geometry, labeled in bold red font "Blocker."
What games don't show you: what you see is only in front of you; everything behind you is just black darkness.
sounds like in real life
unless you walk in front of a mirror. but oh well, a mirror is just another camera
The amount of tutorials on game development that just ignore optimization is crazy, so it's nice to see that there are at least some people that are willing to talk about optimization
I love the video so far but I have one piece of feedback - the yellow and green can be hard to distinguish, especially for someone colorblind. Maybe blue instead of yellow would be a better choice there ^^
That is a great point, thank you for bringing that to my attention! I'll strive to be better about that in future videos.
@simondev758 I cannot see it either... otherwise great stuff, love you
Thanks for your work on Prototype! Has a special place in my heart
Interesting and educational. Thanks Simon!
I didn't know you worked on Prototype. I love that game!
Thanks for sharing this! Very interesting topic!
You worked on Prototype? I loved that game!
I love your content so much Simon, all of it is amazing
you spent 23 minutes but all you needed to say was they do it with magic
As a web dev not familiar with game dev stuff, I had a clue of how the rendering could work, but this goes way beyond what I understood before. Good explanation, done in a very simple way, at least for people with some dev knowledge like me. I can't tell if someone with no dev experience could understand it, but this sort of content isn't for the average guy anyway.
Thanks for sharing! Yeah, optimization can be a challenge, but it's the part of development I find most fun precisely because it's the most challenging. Simply not rendering objects that aren't visible to the camera is only one part of game optimization, but nevertheless a big one :D
Btw check out my game in progress!😄
And it depends on the game itself, but either way this definitely saves a lot of GPU cost!
These problems generally come up pretty quickly with games, though.
Awesome! Thanks a lot! The explanation coupled with the rendering is amazing!
I now wonder about culling only the occluded polygons of a large, highly detailed object in the scene.
While it might seem like a huge optimisation step at first glance, backface culling already does most of this work. It's a rendering step that checks whether a polygon is facing the camera before it gets rendered. On most convex objects, that already accounts for a large part of the occluded polygons.
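The backface test itself is tiny: a triangle is back-facing when its outward normal points away from the camera. A minimal sketch, assuming counter-clockwise winding defines the outward normal (names are illustrative):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backfacing(v0, v1, v2, camera_pos):
    # counter-clockwise winding (seen from outside) gives the outward normal
    normal = cross(sub(v1, v0), sub(v2, v0))
    to_camera = sub(camera_pos, v0)
    return dot(normal, to_camera) <= 0.0  # facing away (or edge-on): cull it

# a triangle in the z=0 plane whose normal points toward +z
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

In practice GPUs do the equivalent test in screen space from the winding order, so it costs essentially nothing per triangle.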
It's what I'd love to talk about in a future video leeroy! hehe
I just discovered a new excellent channel, then looked at your channel, and I found out I had already watched a few of your videos.
What an elegant way to solve the depth-buffer dependency issue: render a simpler version of the view to extract depth data, and then render the high-resolution view.
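The two-pass idea can be sketched on a toy CPU depth buffer: rasterize simplified occluder proxies first, then test each detailed object's screen bounds and nearest depth against it before submitting it for real rendering. Everything here is illustrative Python, not an engine API, and the proxies are just axis-aligned rectangles at constant depth:

```python
# a tiny software depth buffer, cleared to "infinitely far"
W, H = 8, 4
depth = [[float("inf")] * W for _ in range(H)]

def rasterize_box(x0, y0, x1, y1, z):
    # pass 1: draw a low-poly occluder proxy at constant depth z
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:
                depth[y][x] = z

def is_visible(x0, y0, x1, y1, nearest_z):
    # pass 2: the object is visible if ANY pixel it covers is farther
    # away than the object's own nearest depth
    return any(nearest_z < depth[y][x]
               for y in range(y0, y1) for x in range(x0, x1))

# pass 1: a cheap occluder covering the left half of the screen at depth 5
rasterize_box(0, 0, 4, 4, 5.0)

# pass 2: test two detailed objects at depth 8 against the depth data
behind_wall = is_visible(1, 1, 3, 3, 8.0)  # fully covered and farther: culled
poking_out  = is_visible(3, 1, 6, 3, 8.0)  # spills past the occluder: visible
```

Real implementations use actual simplified meshes and hierarchical depth (Hi-Z) rather than per-pixel loops, but the accept/reject logic is the same shape.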
[Prototype] is one of my favorite games of all time. It was the first game I ever got a platinum trophy on. Thanks for working on it.
PROTOTYPE MENTIONED!!!!!!!!!! 🗣🗣🗣🗣🗣🗣🗣🗣🗣🗣
Yooo, you worked on Prototype? I still play it to this day.
I find Jedi: Survivor and Fallen Order have pretty lenient culling. Move the camera a bit too fast and it's a few frames of pure bright skybox. Those games have plenty of problems with their tech, really disappointing work from Respawn. The problems can be incredibly blatant and completely take me out of the experience.
That is really interesting.
Your explanation triggered a question I have for quite some time.
Regarding the view frustum, you show a couple of trees top-down. Now, my question has to do with viewing distance.
I've noticed a lot of games use some sort of fog system. When you look directly forward, objects and geometry in the distance get obscured.
But when, from the same spot, you turn your view so that the same distant subject sits at the edge of your screen, i.e. in your periphery, you can see it more clearly.
Can you elaborate on why this is?
Since a straight sightline gets obscured by fog earlier than a diagonal one.
Ah yeah, it comes down to how you do your distance calculations. If you sit down with some paper and draw a view frustum, or even easier, just draw a triangle. Consider the top of the triangle your viewpoint, and the bottom the far depth. Now take a ruler and measure the actual distance straight ahead versus to one of the sides: it's different. If you use that distance for your fog calculation, the fog will change as you swivel.
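A small numeric sketch of the two distance choices described above: planar depth (view-space z) changes as the camera swivels, while the true Euclidean distance to the camera does not, so fog based on planar depth thins out for the same world point once it sits at the screen edge. All numbers and names here are illustrative:

```python
import math

def fog_factor(d, fog_start=50.0, fog_end=100.0):
    # simple linear fog: 0 = fully clear, 1 = fully fogged
    return max(0.0, min(1.0, (d - fog_start) / (fog_end - fog_start)))

def view_space(point, yaw):
    # rotate a world point into view space: camera at the origin looking
    # down +z, rotated by `yaw` radians around the y axis
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * z, y, s * x + c * z)

p = (0.0, 0.0, 80.0)  # a world point 80 units from the camera

vz_center = view_space(p, 0.0)[2]             # looking straight at it
vz_edge = view_space(p, math.radians(40))[2]  # swiveled: point near screen edge

radial = math.sqrt(sum(c * c for c in p))     # 80 regardless of camera yaw

fog_planar_center = fog_factor(vz_center)  # fogged
fog_planar_edge = fog_factor(vz_edge)      # same point, noticeably clearer
fog_radial = fog_factor(radial)            # identical no matter where you look
```

Using the radial distance (or clamping planar depth with it) is the usual fix when swivel-dependent fog is unwanted.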