I'm not coding games and stumbled here by chance, but this is a really, really well done description which can be appreciated by anyone with at least a bit of coding experience. Thanks.
yooooo i love that not only did you make a cool game i enjoyed, but that you're sharing all the knowledge you learned from making it with everyone. Good luck dude! I hope you continue this, It's really interesting and I love it.
Usually the code with fewer lines is actually severely less optimized. Take a look at all the "clean" JavaScript code with a million abstractions and chained method calls. You have your fancy iterators when a simple for loop is literally 20x faster
Great video and I am sure it will help me in the future 👍 At first I thought the sorting by swapping with the first element only works with 2 meshes, but after thinking about it I realized it also works well with longer lists, if the transfer time is equal for every mesh. If not, then a circular array/ring buffer might be a better alternative, to avoid rendering fast-to-transfer future meshes before slow-to-transfer past ones. That would cause the future mesh to render for 2 frames while skipping the past one with your algorithm.
Excellent point, you're right, a ring buffer would be more appropriate here. Now you have me thinking: I should be removing the element and inserting it at the start, rather than swapping. You're right, great pickup! I wrote a similar thing for a particle engine, which does skip over the slow-to-transfer ones to ensure the most up-to-date buffer is always being rendered (even if it means an entire frame's worth of data is skipped), i.e. if the CPU runs faster than the GPU and is writing to 2 buffers each frame, every 2nd buffer will never make it to the screen
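For anyone curious, the ring-buffer approach described in this thread could be sketched roughly like this (a toy model, not the actual engine code; all names here are made up):

```python
class MeshRing:
    """Toy ring of pending mesh uploads: the CPU submits into slots in
    order, and the renderer draws the newest mesh whose GPU transfer has
    completed, skipping slower, older transfers entirely."""

    def __init__(self, size):
        self.slots = [None] * size   # each slot: (frame_number, mesh) or None
        self.head = 0                # next slot the CPU writes into

    def submit(self, frame, mesh):
        # CPU side: queue a mesh transfer for a given frame
        self.slots[self.head] = (frame, mesh)
        self.head = (self.head + 1) % len(self.slots)

    def newest_completed(self, is_done):
        # Render side: the newest completed transfer, or None if nothing
        # has finished yet (keep drawing the previous mesh in that case)
        best = None
        for slot in self.slots:
            if slot is not None and is_done(slot[0]):
                if best is None or slot[0] > best[0]:
                    best = slot
        return best
```

If frame 3's transfer finishes while frame 2's is still in flight, `newest_completed` returns frame 3's mesh, so the stale one never reaches the screen.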
I’m trying to understand how the thumbnail relates to the video, but I guess that’s the point, to make it hard to know what will be talked about without watching.
Great video, I learned a lot! Small comment for future endeavors: watching this on mobile was difficult because I assume you optimized the text size for fullscreen desktop viewing. Other than that, I loved it!
That is not really a new optimization, more a technique that (as you said) has been done before. It is really interesting to see your approach compared to the approaches game studios take. A GDC talk that is really interesting on a technical level is "Marvel's Spider-Man: A Technical Postmortem". I also want to ask a question about your previous video, where you mention that triangle strips are indeed way faster than triangle lists. The problem I seem to face is that, as far as I know, there are no tools that convert a triangle list to a triangle strip. This is not a problem when you make all the models yourself and create a script that does it for you. But how did you do it in your previous video?
Absolutely, technically this is a timing issue, but since it affects game performance it falls under the broader 'optimisation' category. For complex models like characters, converting them into triangle strips isn't easy. Modern renderers will use an index buffer to help, where the model is broken down into triangle strips of the same length (e.g. 3 triangles) and then GL_PRIMITIVE_RESTART is set to tell the GPU to start a new triangle strip every 3 triangles (for example). There will still be some vertices that hold the same data, but any reduction in memory is a performance win
@@budgetarms looks like DirectX 10 has this feature, check out the ‘Generating Multiple Strips’ section learn.microsoft.com/en-us/windows/win32/direct3d11/d3d10-graphics-programming-guide-primitive-topologies
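To make the primitive-restart idea above concrete, here is a small sketch that just builds the index list; the `RESTART` sentinel is whatever value you pass to glPrimitiveRestartIndex (commonly the maximum index value), and `build_strip_indices` is an invented helper name:

```python
RESTART = 0xFFFFFFFF  # sentinel index; must match glPrimitiveRestartIndex

def build_strip_indices(strips):
    """Flatten several triangle strips into one index buffer, inserting
    the restart sentinel between strips so a single draw call renders
    them all as separate strips."""
    out = []
    for strip in strips:
        out.extend(strip)
        out.append(RESTART)  # the GPU begins a new strip when it sees this
    if out:
        out.pop()            # no restart needed after the final strip
    return out
```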
Bruv you are the single most valuable resource I have as an indie dev. My games will run buttery smooth because of you and those like you. I've had these stories in mind since I was a child and didn't have the tools or skills to do them right. But with people like you and tools like Blender and Unreal, I will bring some really fun stuff! For the love of gaming!
@@Vercidium When I grow my company and add to it, you're definitely hired if you want it. I'll respond in the future with a list of works & the studio name.
Thanks for this, I'm really enjoying watching your videos. When I come to optimizing I'll be circling back to make sure I've covered the things you have. All the best 🙂
Why this works: a modern GPU has a lot of bandwidth but a high latency when transferring data from main memory to the GPU (a lot of car lanes, but a long way to drive). If we could have 0 latency then we wouldn't need this, because any transfer would finish very quickly (high likelihood of the transfer finishing before being drawn). Instead of doing nothing while waiting for the transfer to complete, we effectively stack multiple extra transfers for future frames within the first transfer's wait time (we can do this because we have a lot of bandwidth, aka we are sending more trucks on different lanes simultaneously instead of waiting for the first one to come back before sending it out again). This is a form of parallelism, and this idea of "doing other things, or doing more of the same thing, while waiting for the first one to finish" is everywhere in both CPU and GPU programming and lets you make more efficient (full) use of your hardware.
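The "more trucks on the road" point can be shown with toy numbers (a sketch only; real transfer timing is more complicated): if each transfer has 10 ms of latency but transfers can overlap, total time grows far slower than when issuing them one at a time.

```python
def serial_total_ms(transfers, latency_ms):
    """Wait for each transfer to complete before starting the next."""
    return transfers * latency_ms

def pipelined_total_ms(transfers, latency_ms, issue_gap_ms):
    """Start a new transfer every issue_gap_ms; their latencies overlap,
    so only the final transfer's latency is paid in full."""
    return (transfers - 1) * issue_gap_ms + latency_ms
```

With 4 transfers, 10 ms latency each and a 1 ms issue gap, serial takes 40 ms while pipelined takes 13 ms.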
Wow, I really like that underlying background simulation at 1:51. I'm trying to build something similar myself using OpenProcessing, but this looks like a great end result! Thanks for the inspiration.
There's actually an optimization mod for Minecraft that more or less does this where the renderer will keep using an older mesh until the new ones are ready. You did a nice job with the analogies!
Hey, love the video man, but the code font is a little small on a PC and almost microscopic on my phone. The animations and everything are also amazing, but if you could enlarge the code it'd make the content more accessible.
This is the first time I've heard the recommendation of using a list instead of an array for a performance boost! I imagine that the list is actually a dynamic array under the hood. The performance boost from keeping contiguous data in cache far outweighs the benefit you get from delegating the freeing of a linked list's memory to the garbage collector.
Lots of people seem to be mad at the fact that these optimizations are not groundbreaking and already exist in some engines. But as a game designer making solo projects on several engines, I'm super thankful for the amazing explanations of processes I would otherwise be unaware of. I love to see how the technologies we used are built from thousands of smart decisions like these. Also, good job and good luck on your engine project :D
This is awesome! Managing stuff like this is still a real challenge and this is a great solution. Thank you for the great, easy-to-understand explanation too!
In my current engine project I've made an upload manager that reuses a set of staging buffers to upload data to the GPU at a capped rate (wouldn't want to choke the bandwidth); it uses a counter fence to keep track of the latest job completed. When work is submitted to it, the submit function returns a job number, and the dependent objects can check with the uploader whether the work is finished (via the fence under the hood). Then, the majority of objects used in rendering hold their data in a variety of buffer types (swapped out as needed), the most common being the swap buffer, which the uploader knows how to handle right away. I've been considering making a swap-chain buffer which allows multiple jobs to be queued at once instead of just 2; I think I'd need to make another system for recycling buffers when they're done. I might also want to expand it for texture streaming solutions, wherein they could produce work for both uploading and consolidating buffers, but that might be a waste of space if too many jobs are incomplete at once. I love this stuff, it's a wonderful puzzle, and entirely why I entered graphics programming and game engine development!
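The counter-fence pattern described above might look roughly like this in miniature (hypothetical names; the real version would signal the fence from the GPU timeline rather than from Python):

```python
class UploadTracker:
    """Monotonic job numbers plus a single completed-counter 'fence'.
    Because staging uploads complete in submission order, one integer is
    enough to answer 'is job N done yet?' for every dependent object."""

    def __init__(self):
        self.next_job = 1
        self.completed = 0   # last value the fence was signalled with

    def submit(self):
        job = self.next_job
        self.next_job += 1
        return job           # dependent objects hold on to this number

    def signal(self, value):
        # GPU side: the fence reached job `value`; jobs finish in order
        self.completed = max(self.completed, value)

    def is_done(self, job):
        return job <= self.completed
```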
Wouldn't there be a chance that each new mesh clogs up the transfer pipeline even more, so you would end up with drawing really old meshes at some point?
Long freezes in modern games can be attributed to a variety of factors, often relating to resource-intensive operations or inefficiencies in handling game assets and rendering. Here are some of the common causes:
- Asset Loading: When games load large assets (like textures, models, or sound files) from disk into memory, it can cause a noticeable freeze, especially if the game is not using asynchronous loading techniques.
- Garbage Collection: In games developed with languages that have automatic memory management (like Java or C#), garbage collection can sometimes cause freezes or stutters. This happens when the garbage collector runs to free up memory, temporarily halting other processes.
- CPU/GPU Synchronization Issues: If the CPU is waiting for the GPU to finish rendering (a scenario known as a GPU bottleneck), or vice versa (a CPU bottleneck), it can result in freezes. Efficient parallel processing and synchronization are crucial to avoid such stalls. (This is the one this video covers part of, and far from the only possible cause.)
- Inefficient Resource Management: Poorly managed resources, such as repeatedly loading and unloading the same assets, can lead to performance issues and freezes.
- Complex Calculations or Scripts: Intensive computations, like complex AI calculations, physics simulations, or extensive world updates, can cause freezes if they are not efficiently managed or offloaded to separate threads.
- Network Latency or Hiccups: For multiplayer games, network issues can cause freezes or lag if the game's state is tightly coupled with the timely receipt of network packets.
- Driver or Hardware Issues: Sometimes the problem may lie outside the game itself, such as outdated or buggy graphics drivers, or hardware that is overheating or malfunctioning.
Man, I was watching through this and I looked at your name and was like "Huh, his name reminds me of the developer of Sector's Edge." Then a moment later: "Wait... that is really similar..." And then I checked and you actually were the dev for Sector's Edge lol. Cool! Man I miss that game, it was my favourite FPS! I can't wait for your next project!
Honestly, that's the future of everything nowadays; almost everything needs to be pipelined. The problem is that the more you pipeline, the more (relative) lag it introduces. Personally I settled on 2 mesh buffers for dynamic meshes, blocking the frame if we catch up and the mesh isn't fully sent yet, though I might change this to non-blocking later. I'm also thinking about which mesh data actually needs to be updated anyway. UVs might not need to be re-uploaded to the GPU; it's mostly just the coordinates and vertex normals. So I'm thinking of splitting the vertex into 2 buffers, one for coordinates and normals, and the rest into another struct. This way the transfer size should be smaller and shouldn't cause many issues. This saves space in VRAM and allows custom attributes. Cache misses are very likely to happen though, so it needs to be measured... Lots of options to explore. :)
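Rough byte math for the split described above (assuming 32-bit floats, 3-float positions and normals, 2-float UVs; the function names are just for illustration): if only positions and normals change per frame, the per-frame upload shrinks by the size of the static attributes.

```python
FLOAT = 4              # bytes in a 32-bit float
POSITION = 3 * FLOAT   # xyz
NORMAL = 3 * FLOAT     # xyz
UV = 2 * FLOAT         # uv

def interleaved_upload_bytes(n_vertices):
    """Everything is re-uploaded when all attributes share one buffer."""
    return n_vertices * (POSITION + NORMAL + UV)

def split_upload_bytes(n_vertices):
    """Only the dynamic stream (positions + normals) is re-uploaded."""
    return n_vertices * (POSITION + NORMAL)
```

For 100k vertices that's 3.2 MB versus 2.4 MB per update, a 25% saving, at the cost of a second vertex stream (and, as the comment notes, possibly worse cache behaviour).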
I really really really like this kind of content; it's like an art form. I wonder what it'd be like if there were a game engine that had all the optimization techniques known today, implemented efficiently. I'd really love to try that game engine or its demo reels.
I’m curious: do you ever look at Minecraft optimization mods to get ideas of how to make your own voxel game engine better? People have tried many methods and there’s a lot of source code out there
@@Vercidium Check out Sodium and related projects. They rewrote rendering in Minecraft to work with OpenGL and made it really efficient. There are related projects and extension type mods that are easy to find as well.
@@toxiccan175 Minecraft has used OpenGL since the beginning, Sodium and the like just rewrote core parts of the renderer to use newer OpenGL features and be generally more efficient with memory and GPU usage.
I'm not really doing anything in game development these days, but this tutorial was so well handled and impressive that you earned a new sub. Thanks for the vid lol
Would sorting still be viable if the number of meshes is very big? Also, I loved this video. I never even realized this. Or rather, I knew of it but never was consciously aware of it while coding
I hope more game devs from indie to AAA take stuttering into account more; people seem to only be fazed by FPS and are completely uneducated regarding frametime and how a consistent frametime is more important than FPS the majority of the time.
It should, I was talking to a Unity dev about how models are loaded behind the scenes onto the GPU, and it seems like there’s a few tricks you can do with offscreen rendering and preloading (not sure these terms are right) so you can know when a model is actually sent to the GPU, before you try to render it If Unity has these features I’d be surprised if Unreal didn’t! It’s a pretty important feature I reckon
Game engine devs 100%. Game engines should take care of this but I’ve heard of some devs talking about stuttering because their models weren’t preloaded/off-screen-rendered (not sure if these terms are right) before they tried rendering them onto the screen
Before clicking on this, I assumed from the thumbnail that "it" and "this" were the same, so you were trying to optimize something that breaks games. Led me to be slightly confused. Good video though!!
@@masheen_ it’s all animated in engine and screen recorded, then edited together in Davinci Resolve I use OpenGL for the 3D animations, and SkiaSharp and RichTextKit for 2D animations
@@masheen_ no worries :) It’s purely because controlling the animations in code is easier for me. I’m sure these animations would look a lot nicer in programs like After Effects
@@arossfelder the lack of actual wisdom in the CS community is a big problem. People really have trouble understanding that others don't have the same knowledge as them, and act like assholes because of it. They refuse to believe that they ever didn't know about these complex subjects and thus believe that others who don't understand it are somehow inferior. This kind of elitism is toxic, and shouldn't be so prevalent in a community so full of intelligent people with typically good intentions. Our community needs both intelligent and wise teachers, not just one or the other.
@@pubjubz I'm sorry if that came out as rude, that was not my intention. I totally agree with you that elitism is toxic, this is not who we are (outside of the Arch Linux forum rtfm :D )
Congratulations! You've basically (re-)invented Double/Triple Buffering! 😂 JK lol, that's an *excellent* explanation of the inner workings of the graphics pipeline in a video game / really *any* interactive piece of software with complex video output!
I'll need to rewatch the video a few times to fully understand that, but weren't you explaining buffer swapping? Or is it something different that I haven't picked up on?
@@samwhaleIV for the player, enemies and effects we use instancing and recreate them each frame. The base mesh is just a quad bc we use billboards for everything. So honestly idk what would be much faster, it needs to be tested out
@@Vercidium I'm personally at the beginning of that path: learning math, learning code, trying to get into school. Your channel has been very insightful so far, so I thank you deeply for sharing! You are an inspiration!
Download the source code for all my videos here: patreon.com/vercidium
If you have any rendering or game dev questions, ask them here!
Christ, could you have made the code any smaller? I don't know what 4k 60 inch monitor you have but this is bad on small screens.
Have you looked at Minetest, per chance? Do you have thoughts about it?
Is this technique used by any popular game engines and why?
I feel like they should stop inventing new hardware, and instead try to optimise games so they work with the "older" hardware, if you know what I mean
Recently I started learning about VRAM for optimization. My game uses Unity and mixes 2D and 3D, so we have a lot of hand-painted textures filling up my VRAM, and I'm starting to learn how VRAM works. Most compression tech I found only compares disk sizes (like JPG), but when the texture is sent to the GPU it's still big (I know about Crunch, but that's for fast decompression, not a reduced size on the GPU). The only things I found I could do were pack channels, tell the artists to reduce the texture count, and separate scenes into small chunks to load.
I want to know more about how VRAM works, and whether there's any way to optimize textures on modern GPUs that is non-destructive or doesn't lose too much quality. Anything, even if it's not for Unity, is appreciated!
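One GPU-side option worth looking into here: block-compressed texture formats (BC1 to BC7 on desktop, ASTC/ETC2 on mobile) stay compressed in VRAM, unlike PNG/JPG/Crunch, which decompress to full RGBA on upload. They're lossy, but often acceptably so for hand-painted art. A rough size sketch (approximate; drivers may pad, and the helper name is made up):

```python
def vram_bytes(width, height, bits_per_pixel, mipmaps=True):
    """Approximate VRAM cost of one texture; a full mip chain adds ~1/3."""
    base = width * height * bits_per_pixel // 8
    return base + base // 3 if mipmaps else base

# Common desktop formats, in bits per pixel:
RGBA8 = 32  # uncompressed: what a decoded PNG/JPG occupies on the GPU
BC1   = 4   # opaque or 1-bit alpha, 8:1 smaller than RGBA8
BC7   = 8   # high-quality RGBA, 4:1 smaller than RGBA8
```

A 2048x2048 texture with mips is roughly 21 MB as RGBA8 but about 5.3 MB as BC7, so switching formats usually saves far more VRAM than reducing texture count.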
The greater skill here isn't code optimization, but rather how you break it down with tasty metaphors to make for easily digestible content. Kudos.
I too, am now hungry for burgers.
Just let this man cook
Haha, he's cooking fr @@DriftJunkie
Analogy an average american can understand, yes.
I prefer strawberry flavored metaphors 🍓
123 optimization videos later: This game runs so smooth you can get 4k graphics with 120 frames a second on a potato clock even without the potatoes.
Quake 666
I think the first ARM CPU was so energy efficient that it ran on ambient electricity or something like that; maybe that could be an actual goal of optimization, to have it run on that.
@@SethbotStar rebuilds Crysis to run on a single double-A battery for a year
@@RowbotMaster and yet it remains unported to the Switch bc Nintendo.
He's gonna get Doom working on an Ancient Roman sun dial
Vercidium trying to explain game engine optimisation to an American:
"So imagine a burger."
They should make a version more at home for Aussies
"Ok so imagine a parma.."
@@emporioalnino4670 I love a good parma
@@emporioalnino4670 What's a parma, a chicken parmesan? That's standard bar food in Australia? I love that
most healthy american breakfast be like
@@violet_broregarde a chicken parmigiana
This is why you can use double buffering, which is kind of like what you are doing, but instead of for the framebuffer, you are doing it for the mesh. Normally GPU operations queue up in the buffer; they aren't meant to be completed by the end of the refresh. If you use a double-buffered system, it allows everything to render without a stall, because when a stall would have occurred, it just shows the back buffer like normal.
Thank you! So that's where I recognised this technique, it's like Double Buffering 👍❤
It's probably still worth it, but the trade-off with this technique is the increased memory usage, as you'd roughly double the memory needed for the area around the player, and it adds more complexity managing these separate sources of meshes.
@@SirNightmareFuel That's true but GPUs have massive amounts of very-fast memory these days in order to support the heavy texture bandwidth of AAA games. Doubling or even 10x'ing the memory used by the meshes is not a huge deal because they all refer to the same textures and the textures are the most memory-hungry part of rendering unless you go crazy with poly-count.
It's even better with triple buffering. One framebuffer is displayed on screen, one is being rendered to and the third one is ready to be displayed. When the part of hardware responsible for displaying is done with a display cycle (or VSync kicks in), it looks at the buffers and either displays the same one again or displays the ready buffer if it is newer. When the part that does the rendering is done, it marks its buffer ready, then renders into whichever of the other buffers is not being displayed. This way the newest ready buffer is always displayed, rendering works continuously and there is no tearing, at least in theory.
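That handoff can be sketched as a toy state machine (illustrative names; real swap chains live in the driver): one slot is displayed, one may be "ready", and the renderer always has a free third slot, so it never stalls and the display never shows a torn frame.

```python
class TripleBuffer:
    """Toy model of triple buffering with three buffer slots 0, 1, 2."""

    def __init__(self):
        self.displayed = 0
        self.ready = None    # a finished frame not yet shown, if any
        self.rendering = 1

    def render_done(self):
        # Renderer finished: its buffer becomes the new 'ready' frame
        # (silently dropping any older unshown one), then rendering moves
        # to whichever slot is neither displayed nor ready.
        self.ready = self.rendering
        self.rendering = ({0, 1, 2} - {self.displayed, self.ready}).pop()

    def vsync(self):
        # Display refresh: show the newest ready frame, else re-show
        if self.ready is not None:
            self.displayed = self.ready
            self.ready = None
```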
The real downside to this type of technique when used for the framebuffer is latency. I can't play games with triple buffering except on very high refresh-rate monitors because the latency causes me nausea (except with an uncapped internal/buffer refresh rate which doesn't seem to be commonly implemented sadly).
Yup, screen buffer was my first thought too.
This is standard behavior in all major game engines used in production for the last 10-15 years.
I was thinking that might be the case because unless I'm missing something, which is likely, he's described async code
I wonder when someone working on Minecraft is going to find out. Split screen with my kids is ridiculous if anyone is crafting; it freezes every time
UE already has that?
I'd be surprised if this was true.
@@greatbriton8425 Yeah it was definitely an exaggeration but he's right that it's not new.
Semaphores could also be ideal as fences can cause the CPU to stall waiting for GPU work to finish. Semaphores would allow for exclusive synchronization of GPU tasks ensuring the CPU is constantly writing commands and the GPU is constantly processing them. Great video nonetheless.
Love semaphores, very useful if a bit scary!
In real world implementations these are probably implemented as semaphores, because the transferring of data is being done in a background thread anyway. The busy-waiting done by a fence is probably done here for simplicity purposes because it makes the 'waiting' part easier to explain.
@@Stoney3K Yes. Transfers don't just run as a background thread, but DMA transfers over PCIe can be done completely async to the rest of the system (and GPUs often have dedicated transfer queues to accommodate this, allowing for ultra-fast async transfer from the CPU). Nowadays you can just generate some data yourself on the GPU though in compute, eliminating the need to transfer entirely and keeping everything local to VRAM.
@@joeroeinski1107 True if you consider transferring things like textures and shaders from the CPU to the GPU. But most of the waiting is often caused by the game loading in or parsing new assets from disk, which makes it noticeably slow because I/O is *much* slower than CPU/GPU/memory access.
It's even more pronounced when the game is an online one and the assets have to be fetched from an offsite server.
@@Stoney3K Absolutely, though with emerging technology such as DirectStorage this soon may not be an issue.
It's worth noting that for some games, the additional input delay might be considered unacceptable. Most games it won't matter but in those rare cases it may be necessary to explore alternatives
I think this technique won't increase input delay. It effectively just leaves old geometry on screen until new geometry is available. At worst you'll get "pop ins" instead of stalls.
@@markdmckenna it does add input delay. Think of it this way: now you're shipping a whole frame with this old mesh instead of the new one, which might take 16 ms or so. But what if, 5 ms into those 16 ms, the transfer completes? Now you still have to wait for the full frame to be drawn before we can include the new mesh in the next frame, whereas if we just stalled for those 5 ms we'd only have to wait 5+16 ms instead of 16+16 ms
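With the toy numbers from this exchange (16 ms frames, a transfer that would finish 5 ms in), the two strategies work out like this (a sketch of the arithmetic, with invented function names):

```python
FRAME_MS = 16

def stall_latency_ms(transfer_remaining_ms):
    """Wait out the transfer, then draw the frame with the new mesh."""
    return transfer_remaining_ms + FRAME_MS

def no_stall_latency_ms(transfer_remaining_ms):
    """Show the old mesh this frame; the new mesh, even though it
    finished mid-frame, only appears at the end of the *next* frame."""
    return FRAME_MS + FRAME_MS
```

So showing the old mesh trades up to one frame of extra mesh latency for never stalling; whether that trade is acceptable depends on the game.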
@@X606 Maybe a different definition of input delay here? IMO as long as we're providing "reasonably" up to date geometry for the user to interact with, there is no added input delay.
@@markdmckenna instead you get more render latency.
@@markdmckenna Yeah. The earlier thread wasn't making sense to me. Input processing is non-graphical, so dunno why delayed rendering would delay input processing unless you tied input processing to your draw() routine.
Although that happens a lot in amateur games using generalized game engines, I presume this topic is scoped to more advanced game programming techniques.
It really was worth buying this 16,000 fps monitor---this is the first video to really use it to its fullest, but boy is it glorious.
Waste of money. Nobody needs 16,000 fps. 12,000 is more than enough.
the eye doesn't see more than 4,000 fps
@@waltonsimons12 Trust me dude, once you've experienced 16,000 fps, 12,000 feels like looking at a slideshow.
@@AntiAntYT the eye sees up to 60fps. Everything above is still useful, because the world runs at ∞ FPS, so the closer your pc can be to that, the better it will ~feel~
@@aqua-bery Eyes can see much more than 60fps. The visual difference from 60fps to 120fps is quite noticeable as 120fps is just a lot smoother visually and your eyes can see that difference.
This kind of sounds like double buffering, but for the geometry instead of the actual framebuffer. Very cool idea!
It's known as Asynchronous Buffer Update/Upload, nothing new.
The underlying concept of decoupling data transfer from rendering to avoid stalling the graphics pipeline is fundamental to graphics programming, and has been a consideration as long as there have been programmable GPUs and sophisticated graphics APIs. It's been a feature of both DirectX and OpenGL for decades at this point.
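The decoupling described above is mostly bookkeeping: render from one buffer while the other receives the upload. A minimal sketch of that logic, with a hypothetical `DoubleBufferedMesh` class (no real graphics API calls, just the swap decision):

```python
class DoubleBufferedMesh:
    """Render from `front` while `back` receives the new upload."""

    def __init__(self, initial_data):
        self.front = initial_data  # what gets drawn this frame
        self.back = None           # where new data is transferred
        self.upload_done = False

    def begin_upload(self, new_data):
        self.back = new_data
        self.upload_done = False

    def on_upload_complete(self):
        self.upload_done = True

    def buffer_to_draw(self):
        # Swap only once the transfer has finished; otherwise keep
        # drawing the old mesh instead of stalling the pipeline.
        if self.upload_done:
            self.front, self.back = self.back, self.front
            self.upload_done = False
        return self.front

mesh = DoubleBufferedMesh("old mesh")
mesh.begin_upload("new mesh")
print(mesh.buffer_to_draw())  # "old mesh", transfer still in flight
mesh.on_upload_complete()
print(mesh.buffer_to_draw())  # "new mesh"
```

In a real engine the `on_upload_complete` signal would come from a GPU fence or sync object rather than a direct call.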
That's exactly what I thought! Same basic principles as double buffering!
I really loved the video and the analogy. I think you should increase the size of the code and animations, because seeing them on mobile is kind of hard.
Great video nonetheless!
Will do, thank you for the feedback and glad you liked the video!
Videos like this are rare to find, but it has to be my favorite type of video by far: making code run faster, better gaming experience, GPU goes vroom. I love it!
Thank you, more videos to come!
The master chef has an infinite number of pans, but doesn't want to overwhelm his apprentices, so he keeps only the amount of pans necessary in the kitchen until a greater or lesser amount is needed.
Haha excellent analogy
Well, maybe not _infinitely_ many pans. More like only about 4 billion pans. And usually when the chef needs more, rather than just grabbing the amount needed, they'll grab as many extra pans as they're already using. It wastes a bit more space in the kitchen, but it saves on trips to fetch more pans in a way that generally balances out well.
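That pan-grabbing strategy is how most dynamic arrays grow: double the capacity whenever you run out. A tiny sketch of the doubling rule (the starting capacity is arbitrary):

```python
def grow(capacity, needed):
    # Double the capacity until the needed amount fits,
    # like most dynamic array implementations do.
    while capacity < needed:
        capacity *= 2
    return capacity

print(grow(4, 9))  # 16: some wasted slots, but far fewer reallocations
```

The waste is bounded (at most about half the capacity), and the number of "trips" for n elements is only about log2(n).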
I just binged all of your optimisation videos. The metaphors are awesome and easy to follow, the code and graphics are clean, the voiceover is easy to understand. This is what I wish school/university was like. Instant subscribe, you have earned a place among my favorite RUclipsrs. Thank you for your work.
Far out, thank you! I'm stoked you like them and hope they help!
I can’t read the tiny font
school/university should teach you how to come up with these ideas by yourself
@@DaStuntChannel Universities teach you 15-year-old techniques in a field that is always 5 years ahead and counting.
@@nicosoftnt That as well
I'm not coding games and stumbled here by chance, but this is a really, really well done description which can be appreciated by anyone with at least a bit of coding experience. Thanks.
yooooo i love that not only did you make a cool game i enjoyed, but that you're sharing all the knowledge you learned from making it with everyone. Good luck dude! I hope you continue this, It's really interesting and I love it.
the biggest thing this video taught me is
optimization =/= fewer lines
*fewer
@@Kyrelel thanks
Usually the code with fewer lines is actually severely less optimized. Take a look at all the ""clean"" javascript code with a million abstractions and chained method calls. You have your fancy iterators when a simple for loop is literally 20x faster
something about this video made me super excited to watch, maybe the thumbnail/title.
Great video and I am sure it will help me in the future 👍At the first moment I thought the sorting by swapping with the first element only works with 2 meshes but after thinking about it I realized it also works well with longer lists too if the transfer time is equal for every mesh. If not, then a circular array/ring buffer might be a better alternative to avoid rendering fast-to-transfer future meshes before slow-to-transfer past ones. That would cause the future mesh to render 2 frames while skipping the past one with your algorithm.
Excellent point you’re right, a ring buffer would be more appropriate here. Now you have me thinking: I should be removing the element and inserting it at the start, rather than swapping. You’re right, great pickup!
I wrote a similar thing for a particle engine, which does skip over the slow-to-transfer ones to ensure the most up to date buffer is always being rendered (even if it means an entire frames worth of data is skipped), i.e. if the cpu runs faster than the GPU and is writing to 2 buffers each frame, every 2nd buffer will never make it to the screen
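The ring-buffer idea from this thread can be sketched roughly like this: the renderer always draws the newest slot whose transfer has completed, so a slow old transfer gets skipped entirely, as described for the particle engine above. The slot count and the version/ready bookkeeping are made up for illustration:

```python
class MeshRing:
    def __init__(self, slots=3):
        self.slots = [None] * slots  # each holds {version, data, ready}
        self.head = 0                # next slot to overwrite

    def submit(self, version, data):
        # Overwrite the oldest slot with the newest upload-in-progress.
        self.slots[self.head] = {"version": version, "data": data, "ready": False}
        self.head = (self.head + 1) % len(self.slots)

    def mark_ready(self, version):
        # Called when the GPU signals that this version's transfer finished.
        for slot in self.slots:
            if slot and slot["version"] == version:
                slot["ready"] = True

    def newest_ready(self):
        # Draw the most recent completed mesh, skipping slower older ones.
        ready = [s for s in self.slots if s and s["ready"]]
        return max(ready, key=lambda s: s["version"])["data"] if ready else None

ring = MeshRing()
ring.submit(1, "mesh v1")
ring.submit(2, "mesh v2")
ring.mark_ready(2)          # v2's transfer finished before v1's
print(ring.newest_ready())  # "mesh v2" (v1 will never be drawn)
```

This matches the behavior described in the comment above: if the CPU produces buffers faster than the GPU consumes them, some versions simply never make it to the screen.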
@@Vercidium I would love to see a video where you take all the improvement ideas from the comments and try to apply them and see how they do.
I’m trying to understand how the thumbnail relates to the video, but I guess that’s the point, to make it hard to know what will be talked about without watching.
that's clickbait for ya
Great video, I learned a lot! Small comment for future endeavors: watching this on mobile was difficult because I assume you optimized the text size for fullscreen desktop viewing. Other than that, I loved it!
I’ll increase the font size in the next video, thank you!
Isn't this just asynchronous compute?
My favorite series on youtube, thank you for doing it!
Too kind! Thank you
Best programming analogy is always about food and cooking.
Fantastic overview. Right to the point and thorough enough to show off everything without getting bogged down in the details.
That is not really seen as a real optimization, more like a technique that (as you said) has been done.
It is really interesting to see your approach compared to the approaches game studios take.
A GDC talk that is on a technical level really interesting is this one "Marvel's Spider-Man: A Technical Postmortem".
I also want to ask a question about your previous video, you mention that indeed triangle strip is way faster than triangle list.
The problem that I seem to face, is that there are no tools, as far as I know of that convert triangle list to triangle strip.
This is not a problem when you make all other models yourself and create a script that does it for you.
But how did you do it in your previous video?
Absolutely, technically this is a timing issue but since it affects game performance, it falls under the broader ‘optimisation’ category
For complex models like characters, converting them into triangle strips isn’t easy. Modern renderers will use an index buffer to help, where the model is broken down into triangle strips of the same length (e.g. 3 triangles) and then a GL_PRIMITIVE_RESTART is set to tell the shader to start a new triangle strip every 3 triangles (for example)
There will still be some vertices that hold the same data but any reduction in memory is a performance win
@@Vercidium I wish DirectX had something like that.
It's really interesting how other APIs do specific things.
@@budgetarms looks like DirectX 10 has this feature, check out the ‘Generating Multiple Strips’ section
learn.microsoft.com/en-us/windows/win32/direct3d11/d3d10-graphics-programming-guide-primitive-topologies
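The primitive-restart technique mentioned above mostly comes down to how the index buffer is assembled on the CPU side. A sketch of that assembly, assuming the common `0xFFFF` restart value for 16-bit indices (in OpenGL this is configured with `glPrimitiveRestartIndex`; the actual GL/DX state setup is omitted here):

```python
RESTART_INDEX = 0xFFFF  # conventional restart value for 16-bit indices

def build_strip_indices(strips):
    """Join several triangle strips into one index buffer, separated
    by the restart index so the GPU begins a new strip at each one."""
    indices = []
    for strip in strips:
        indices.extend(strip)
        indices.append(RESTART_INDEX)
    indices.pop()  # no restart needed after the final strip
    return indices

# Two strips of 4 vertices each (2 triangles per strip):
print(build_strip_indices([[0, 1, 2, 3], [4, 5, 6, 7]]))
# [0, 1, 2, 3, 65535, 4, 5, 6, 7]
```

This lets a whole model draw as strips in a single call, instead of one draw call per strip.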
@@Vercidium Thanks, I am looking into it, just wondering do you have a discord server (or is something like that in the works)?
@@Vercidium Just wondering, how do you do networking on your game, do you use an external OpenGL library for that or what?
I love how every metaphor is a pub. “How does quantum computing work?” “Well it’s like a pub with a quantum pan” 😂
I don't code yet but it is something that I find fascinating. I really enjoyed how you made this problem make sense even to me. Thanks for sharing
Bruv you are the single most valuable resource I have as an indie dev. My games will run buttery smooth because of you and those like you. I've had these stories in mind since i was a child and didn't have the tools or skills to do them right. But with people like you and tools like blender and unreal, I will bring some really fun stuff! For the love of gaming!
Thanks so much! I’m glad to help
@@Vercidium When I grow my company and add to it, you're definitely hired if you want it. I'll respond in the future with a list of works & the studio name.
Great video, loved all the animations. Even as someone who isn't particularly experienced in coding I feel like I understood everything!
Thanks for this, I'm really enjoying watching your videos. When I come to optimizing I'll be circling back to make sure I've covered the things you have. All the best 🙂
Why this works: modern GPUs have a lot of bandwidth but high latency when transferring data from main memory to the GPU (a lot of car lanes, but a long way to drive). If we had zero latency we wouldn't need this, because any transfer would finish very quickly (high likelihood of completing before the mesh is drawn). Instead of doing nothing while waiting for the transfer to complete, we effectively stack multiple extra transfers for future frames within the first transfer's wait time (we can do this because we have a lot of bandwidth, i.e. we send more trucks down different lanes simultaneously instead of waiting for the first one to come back before sending it out again). This is a form of parallelism, and this idea of "doing other things, or more of the same thing, while waiting for the first one to finish" is everywhere in both CPU and GPU programming and lets you make fuller use of your hardware.
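The "trucks on many lanes" picture can be put in numbers. With made-up latency and transfer counts, keeping several transfers in flight hides most of the round-trip latency:

```python
import math

def serial_time(n_transfers, latency_ms):
    # Wait for each transfer to finish before starting the next.
    return n_transfers * latency_ms

def pipelined_time(n_transfers, latency_ms, in_flight):
    # Keep `in_flight` transfers going at once (bandwidth permitting),
    # so whole batches complete per round trip.
    return math.ceil(n_transfers / in_flight) * latency_ms

print(serial_time(8, 4))        # 32 ms
print(pipelined_time(8, 4, 4))  # 8 ms: same work, latency overlapped
```

This ignores bandwidth saturation, which is the real-world limit on how many "trucks" can be in flight at once.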
Wow, I really like that underlying background simulation at 1:51. I'm trying to build something similar myself using OpenProcessing, but this looks like a great end result! Thanks for the inspiration.
There's actually an optimization mod for Minecraft that more or less does this where the renderer will keep using an older mesh until the new ones are ready. You did a nice job with the analogies!
What's it called?
What is it called?
@@nindew21Laughyourassoff it's either Sodium or Nvidium, there is also a mod Distant Horizons on the relevant topic
Very humble explanation style for laymen to understand. Kudos!
The first part also explains why some fast food drive thru allocate an employee taking orders instead of letting customers order at the fixed kiosk.
Hey love the video man but the code font is a little small for a pc and almost microscopic for my phone. The animations and everything are also amazing but if you could enlarge the code it’d make the content more accessible.
Will do for the next video, thank you for the feedback!
The animations are hilarious :D Those little hands of a video card :)
You lost me at burger
He won me at burger
Mmmm. Burger
This is the first time I hear the recommendation of using a list instead of an array for a performance boost!
I imagine that the list is actually a dynamic array under the hood.
The performance boost of caching contiguous data in memory far outweighs any benefit you'd get from delegating the freeing of a linked list's memory to the garbage collector.
This guy just added Promises to a game engine
Best comment hahaha
Lots of people seem to be mad at the fact that these optimizations are not groundbreaking and already exist in some engines. But as a game designer making solo projects on several engines, I'm super thankful for the amazing explanations of processes I would otherwise be unaware of.
I love to see how the technologies we used are built from thousands of smart decisions like these.
Also, good job and good luck on your engine project :D
@@lopodyr thank you for the kind words, I’m glad this video has helped!
Got it, I just need enough memory for 15 thousand meshes.
Hahaha noooo
This is awesome! Managing stuff like this is still a real challenge and this is a great solution. Thank you for the great easy to understand explanation too!
I really appreciate this format of video from you. Please keep up the good work:)
In my current engine project I've made an upload manager that reuses a set of staging buffers to upload data to the GPU at a capped rate (wouldn't want to choke the bandwidth). It uses a counter fence to keep track of the latest job completed. When work is submitted, the submit function returns a job number and the dependent objects can check with the uploader whether the work is finished (via the fence under the hood). Then the majority of objects used in rendering hold their data in a variety of buffer types (swapped out as needed), the most common being the swap buffer, which the uploader knows how to handle right away.
I've been considering making a swap-chain buffer that allows multiple jobs to be queued at once instead of just 2; I think I'd need to make another system for recycling buffers when they're done. I might also want to expand it for texture streaming solutions, where it could produce work for both uploading and consolidating buffers, but that might be a waste of space if too many jobs are incomplete at once.
I love this stuff, it's a wonderful puzzle, and it's entirely why I entered graphics programming and game engine development!
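The counter-fence bookkeeping described in this comment might look something like the sketch below. The `UploadManager` name and API are made up; a real version would sit on top of something like a Vulkan timeline semaphore or an OpenGL fence sync, and would actually copy data into staging buffers:

```python
class UploadManager:
    def __init__(self):
        self.next_job = 1
        self.completed = 0  # highest fence value the GPU has signalled

    def submit(self, data):
        # A real implementation would copy `data` into a staging buffer
        # and record a fence signal after the GPU-side copy.
        job = self.next_job
        self.next_job += 1
        return job  # the caller stores this job number

    def signal(self, fence_value):
        # Called when the driver reports the fence has advanced.
        self.completed = max(self.completed, fence_value)

    def is_done(self, job):
        # A monotonically increasing fence means one comparison suffices.
        return job <= self.completed

mgr = UploadManager()
a = mgr.submit(b"mesh A")
b = mgr.submit(b"mesh B")
mgr.signal(1)
print(mgr.is_done(a), mgr.is_done(b))  # True False
```

The appeal of a single monotonic counter is that completing job N implies every job before N is also done, so no per-job fence objects are needed.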
Wouldn't there be a chance that each new mesh clogs up the transfer pipeline even more, so you would end up with drawing really old meshes at some point?
My god, it's such high quality. I subscribed immediately!
Long freezes in modern games can be attributed to a variety of factors, often relating to resource-intensive operations or inefficiencies in handling game assets and rendering. Here are some of the common causes:
Asset Loading: When games load large assets (like textures, models, or sound files) from the disk into memory, it can cause a noticeable freeze, especially if the game is not using asynchronous loading techniques.
Garbage Collection: In games developed with languages that have automatic memory management (like Java or C#), garbage collection can sometimes cause freezes or stutters. This happens when the garbage collector runs to free up memory, temporarily halting other processes.
CPU/GPU Synchronization Issues: If the CPU is waiting for the GPU to finish rendering (a scenario known as a GPU bottleneck), or vice versa (CPU bottleneck), it can result in freezes. Efficient parallel processing and synchronization are crucial to avoid such stalls. (the one this video covers part of, and far from the only possible cause)
Inefficient Resource Management: Poorly managed resources, such as repeatedly loading and unloading the same assets, can lead to performance issues and freezes.
Complex Calculations or Scripts: Intensive computations, like complex AI calculations, physics simulations, or extensive world updates, can cause freezes if they are not efficiently managed or offloaded to separate threads.
Network Latency or Hiccups: For multiplayer games, network issues can cause freezes or lag if the game's state is tightly coupled with the timely receipt of network packets.
Driver or Hardware Issues: Sometimes, the problem may lie outside the game itself, such as outdated or buggy graphics drivers, or hardware that is overheating or malfunctioning.
chatgpt has entered the chat
Major props for making my brain understand things I wouldn't have fathomed even trying to understand before
Feel like I've seen a couple of these coding youtube channels run by fellow Aussies. Makes me happy knowing we're doing our part for the world.
Man, I was watching through this and I looked at your name and was like "Huh, his name reminds me of the developer of Sector's Edge." Then a moment later: "Wait... that is really similar..." And then I checked and you actually were the dev for Sector's edge lol. Cool! Man I miss that game, it was my favourite fps! I can't wait for your next project!
Thank you! I miss it too, hoping to revisit it again some day
I can't believe the closest analogy to a bottleneck was a burger. A *bottleneck*.
But seriously, great video!!
Amazing video, one tip: The code font size is a tad too small for phone screens. I think 10-20% larger would fix it.
Will do thank you!
Reminds me of a swap chain, only for geometry instead of fully rendered frames. Very cool!
Honestly, that's the future of everything nowadays: almost everything needs to be pipelined. The problem is that the more you pipeline, the more (relative) lag it introduces. Personally I settled on 2 mesh buffers for dynamic meshes, blocking the frame if we catch up and the mesh isn't fully sent yet, though I might change this to non-blocking later.
I'm also thinking which mesh data actually needs to be updated anyways. UVs might not need to be re-uploaded to the GPU, it's mostly just the coordinates and vertex normals. So I'm thinking of splitting the vertex into 2 buffers, one for coordinates and normals, and the rest into another struct. This way the transfer size should be smaller and shouldn't cause many issues. This saves space on the VRAM, allows custom attributes. Cache misses are very likely to happen though so it needs to be measured... Lots of options to explore. :)
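The savings from splitting the vertex into separate streams can be estimated with assumed attribute sizes. The byte counts below are illustrative, not taken from any particular engine:

```python
# Assumed per-vertex attribute sizes in bytes (illustrative):
POSITION = 12  # 3 x float32
NORMAL   = 12  # 3 x float32
UV       = 8   # 2 x float32
COLOR    = 4   # 4 x uint8

def interleaved_upload(n_vertices):
    # One interleaved buffer: re-upload everything, even static attributes.
    return n_vertices * (POSITION + NORMAL + UV + COLOR)

def split_upload(n_vertices):
    # Split streams: only the dynamic positions + normals are re-sent.
    return n_vertices * (POSITION + NORMAL)

n = 10_000
print(interleaved_upload(n))  # 360000 bytes
print(split_upload(n))        # 240000 bytes (a third less bandwidth)
```

As the comment notes, the trade-off is cache behavior: the vertex shader now reads from two streams, so the win has to be measured, not assumed.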
You've managed to explain some really complicated programming really well visually
didn't realize you were sector's edge dev until the end!
Haha hey there! What gave it away?
I really really really like this kind of content, it's like an art. I wonder what it'd be like if there were a game engine that had all the optimization techniques known today, implemented efficiently. I'd really love to try that game engine or its reels.
I’m curious: do you ever look at Minecraft optimization mods to get ideas of how to make your own voxel game engine better? People have tried many methods and there’s a lot of source code out there
I haven’t but that’s a good idea, would be interesting to have a look through their source code to see what OpenGL tricks they’re using
@@Vercidium Check out Sodium and related projects. They rewrote rendering in Minecraft to work with OpenGL and made it really efficient. There are related projects and extension type mods that are easy to find as well.
@@toxiccan175 awesome thank you, will do!
@@toxiccan175 Minecraft has used OpenGL since the beginning, Sodium and the like just rewrote core parts of the renderer to use newer OpenGL features and be generally more efficient with memory and GPU usage.
@@jcm2606 i've heard mumbles about the sodium sub-mod, nvidium. what's up with that?
I'm not really doing anything in game development these days, but this tutorial was so well handled and impressive that you earned a new sub. Thanks for the vid lol
Thank you! It took a while to make
Would sorting still be viable if the number of meshes is very big?
Also, I loved this video. I never even realized this. Or rather, I knew of it but never was consciously aware of it while coding
For heaps of meshes it would get pretty slow, something like a ring buffer would be much better. Thank you!
I hope more game devs from indie to AAA take into account stuttering more, people seem to only be phased by FPS and are completely uneducated regarding frametime and how a consistent frametime is more important than FPS the majority of the time.
Wow this is incredible, does this also work in Unreal Engine 5?
It should, I was talking to a Unity dev about how models are loaded behind the scenes onto the GPU, and it seems like there’s a few tricks you can do with offscreen rendering and preloading (not sure these terms are right) so you can know when a model is actually sent to the GPU, before you try to render it
If Unity has these features I’d be surprised if Unreal didn’t! It’s a pretty important feature I reckon
I’m addicted to your channel! \o/
Does this apply to game engine devs, or also to game devs that use game engines like unreal or unity?
Game engine devs 100%. Game engines should take care of this but I’ve heard of some devs talking about stuttering because their models weren’t preloaded/off-screen-rendered (not sure if these terms are right) before they tried rendering them onto the screen
Everything You Explain There in Analogy Makes So Much Sense
at the end the performance was 15k fps, what was the performance of the mesh rendering without any optimizations at the very start, how many fps?
About 115, it's shown at 0:09
This video is epic. Liked, subscribed. I can't wait to watch your other ones, you have a great mind for being able to teach and explain concepts.
Before clicking on this, I assumed from the thumbnail that "it" and "this" were the same, so you were trying to optimize something that breaks games. Led me to be slightly confused. Good video though!!
This video has made me realize that I still don't understand coding at all
"a flashing red light, changes to green when your burger is ready" ... WHAT?!? i feel like Doc from BTTF "what year is it?"
Very cool! Time to stop asking where my burger is every five minutes to increase my fps.
buddy idk anything about coding but I love games, and this made (I think) sense to me. well done.
That’s great to hear, thank you!
Explaining optimization to an American: "So Imagine a burger..."
I'm kind of stunned that in the impressive history of modern gaming this hasn't been done before. Kudos for finding that out where others didn't.
This presentation is beautiful, would love to know what you used! Also I learned a ton about a game engine!! Thanks/subbed!
@@masheen_ thank you so much, glad to hear!
@@masheen_ it’s all animated in engine and screen recorded, then edited together in Davinci Resolve
I use OpenGL for the 3D animations, and SkiaSharp and RichTextKit for 2D animations
@@Vercidium Hmm never thought of animations in engine for ui charts 🫠. Thanks so much!!!
@@masheen_ no worries :)
It’s purely because controlling the animations in code is easier for me. I’m sure these animations would look a lot nicer in programs like After Effects
From now on, my toaster can run crysis.
I was hungry before watching this, now I know what I want but it's in the middle of the night and no place is open! :o
youtuber with 30k subs vs multimillion game companies
Changing the thumbnail in the name of optimisation on a video about optimisation is pure optimisation #ISawTheBurger
Haha yep I design a few and then test one each day. I think this thumbnail+title is the one though
the anecdote between burgers/restaurant pagers and rendering is hilarious and useful
So bro just used parallelisation
When you need to use "just" in a sentence about a complex subject, you clearly haven't understood it
@@arossfelder the lack of actual wisdom in the CS community is a big problem. People really have trouble understanding that others don't have the same knowledge as them, and act like assholes because of it. They refuse to believe that they ever didn't know about these complex subjects and thus believe that others who don't understand it are somehow inferior. This kind of elitism is toxic, and shouldn't be so prevalent in a community so full of intelligent people with typically good intentions. Our community needs both intelligent and wise teachers, not just one or the other.
@@arossfelder the person who made this video is a good teacher, though.
@@pubjubz I'm sorry if that came out as rude, that was not my intention. I totally agree with you that elitism is toxic, this is not who we are (outside of the Arch Linux forum rtfm :D )
Excellent video i genuinely learned so much from it. Love the metaphors, keep up the great work!
Glad to hear, thank you!
Congratulations! You've basically (re-)invented Double/Triple Buffering! 😂
JK lol, that's an *excellent* explanation of the inner workings of the graphics pipeline in a video game / really *any* interactive piece of software with complex video output!
I'm so glad RUclips randomly showed this to me. Really cool stuff.
When you order a burger and the chef accidently adds extra bacon ;)
Bro spent 6 years creating a game engine just to teach us. What a hero!
This is genuinely interesting. I love it. Keep it up!
0:02 I have 2K+ hours in NMS and never (on my computer which has over minimum spec) had a freeze like that.
I'll need to rewatch the video a few times to fully understand that, but weren't you explaining buffer swapping? Or is it something different that I haven't picked up on?
"...are now stored in disarray."
...
No, I heard that wrong.
3:34
Damn explained in not even 5 minutes. Good video, keep up the great work!
Amazing video! Please make the code part a little bigger next time, it's difficult to read on small devices :(
Will do, thank you for letting me know
0:20 Wendy’s thanks you for the inspiration for lunch lol.
Haha I love it
Nice breakdown of the concept. Now I want a burger. 🍔
love the vids
borgor
oooh this is really clever! I'm thinking about implementing this for our game engine for the dynamic objects that change every frame.
Sweet! What kind of objects? Particles or destructible objects?
@@Vercidium particles and we use instanced billboards for characters and enemies!
How dynamic is your game? Would it be more efficient to allocate your meshes ahead of time and instance them?
@@samwhaleIV for the player, enemies and effects we use instancing and recreate them each frame. The base mesh is just a quad because we use billboards for everything. So honestly idk what would be much faster, it needs to be tested out
You have clarity in this my friend, this was so clearly laid out.
Mm... you must have had six very good years!
Thank you. The first few years were all trial and error, only lately I’m understanding why certain methods run faster
@@Vercidium
I'm personally in the beginning of that path.
Learning math, learning code trying to get into school.
Your channel has been very insightful so far, so i thank you deeply for sharing!
You are an inspiration!
You made a game engine?? Subscribed!