I made a voxel engine once, but it was way back in DOS demoscene days before opengl or GPUs existed. None of these optimizations were even relevant then, since engines were all bespoke and didn't have so much extra processing to cut out. Today's engines are way waaaay better, but I kind of miss how simple things were. Rendering 64k voxels with textures, lighting, and detailed fluid physics running each frame on each voxel, even an old 486 could maintain 60 fps. Good times.
Actually that makes me wonder how well a voxel game would actually run on the PS2, since the graphics hardware is so esoteric compared to something that supports OpenGL. There would probably be some optimizations like the ones in this video, but maybe other ones are also possible
@@lorenzvo5284 I'd imagine you could use VU2 to perform very similar optimisations. The channel 'gamehut' discusses a very similar approach in optimising a PS2 particle system.
I'm pretty sure I understood maybe 10% of what was said in this video. On the bright side I'm now 10% more educated on how to optimize game engines using voxels. Great video. :)
Now this is some good optimization! It's amazing how much you can get away with when you know all your verts are on a regular grid. Splitting meshes based on face direction for quick backface culling is so clever.
Insane production quality! Explaining complex problems so precisely is quite the skill. You are as good at presenting what you know as you know it. Well played brother
@@Vercidium 11:25 I have an idea to optimize it further. What if you somehow combined the faces of the different chunks when needed? When you look at the long flat areas there are a lot of unneeded extra triangles
@@TheTylerCathey That's an interesting point. However, since you are already doing everything in chunks, I believe combining them is going to be tricky, and not that good of an optimization.
@@Dudex11a The very first one, as that is basically the reason any x-ray exploit even works. In-game chunks are also split into "sub-chunks", which are just the chunks mentioned in this video. For everything else I can't say. 😊
Only face culling. Nothing more. Minecraft is a massive pile of tech debt. First issue is that it was written in Java. The state of the code is like a prototype that survived and went into production and tried to be fixed along the way. @@Dudex11a
A few things I want to mention: not every game uses clockwise order for culling. Just as an example, DirectX and OpenGL don't have the same default winding order. And yes, basically every game uses culling to improve performance, disregarding the fact that with some textures (fire, leaves, windows) culling will be set to none.
Yes, excellent point. The winding order can be set to clockwise or anticlockwise in OpenGL; I just used clockwise as the example here for simplicity.
And yep, there's no culling on leaves, but extra logic is needed to determine the normal direction, since it should be darker on the side that's facing away from the sun
@@Vercidium Yeah, well that depends on what logic you use in your game, maybe the ambient lighting is for you already enough, and if you want phong shading added to that, then you will need even more variables.
Such an underrated channel. I've seen a lot of voxel rendering videos and this by far takes the cake. clear visualization, concise explanation, and thorough coverage of the topic. Very engaging for how informative it is
Each optimization like this has limitations of it's own. Imagine trying to add a new mechanic, for example something that has elements of a voxel, but behaves differently.
Just FYI, all modern games do these optimizations already. The first is just occlusion culling. That's not even a modern game thing, that's shit the original Doom did.
@@thecrazything95 Except the most sold game of all time, which is still being updated, and for some reason no brilliant mind has thought about optimizing it (said game being Minecraft). At least the community does what Mojcan't
I already knew about a lot of this, but you demonstrated it so well that I couldn't stop watching. And then you dropped a BOMBSHELL of one mesh per face direction! I've always wondered how you could bulk discard those faces, and now it's so obvious
This is so impressive to see! Thank you for putting this much effort and knowledge into an understandable video. Hope to make these kinds of videos one day
The simplistic way you put these optimizations into perspective is absolutely incredible. Many explanations I've seen miss this level of simplicity and make it sound much more complex than it needs to be.
You didn't have to do this for me, but I appreciate it so much. Literally made a bridge between the projects I've been planning, and did it in OpenGL. Thank you so much! You're awesome!
Edit: I have unintentionally started a war in the replies and I feel guilty
My brain is melting and thinking "How did nobody think of this earlier?" at the same time
The amount of data that can be stored in a single 32-bit integer is amazing
@@Poldovico I don't think Vercidium meant to imply that _all_ games lacked these optimizations and he's the first person to think of them. But plenty of games don't have them and plenty of programmers don't know about them, so.
@@General12th Oh, I think this video is great, and a great way to explain these techniques. No knock on Vercidium here, it's just the realities of the RUclips algorithm mean that the titles and thumbnails that will get you clicks will also mislead the viewers a bit, by framing a perfectly sensible video in a somewhat less sensible context. There's kind of an implied story here, intentional or no, that the games that frustrate us could run smooth as butter, at thousands of FPS even or on much weaker hardware, if only programmers cared enough to optimize, and that's unfortunately just not quite right. Improvement is always possible, to take the Minecraft example we have mods like Sodium, but I want to make clear the numbers in the video have a context to them that does not translate directly to a finished game.
@@General12th Most games don't use voxels, and a lot of this is voxel-only. And many games only use voxels for things like volumetric lighting, which doesn't have actual meshes.
Awesome video! Actually there is a way to "break up" triangle strips. When using index buffers you can set a special "Primitive Restart Index" value that the gpu interprets as the end of a triangle strip!
It's basically called a "degenerate triangle" (one where 2 vertices are at the same spot). Not sure if the GPU actually "recognizes" this (I doubt it), but as it's a triangle with no area, it basically lets you render individual strips out of a big one, so it's basically free as there is no extra VS or PS involved
@@reitinet Yes that's another way to do it. I'd be surprised if GPUs didn't immediately throw away zero area triangles. Especially useful if you don't have an index buffer. Another neat trick is assigning NaN to a vertex position. GPUs will not render triangles that contain NaN coordinates. I actually don't know if this behaviour is mandated by any standards but I have seen it used to cut holes into heightmap geometry for example.
Somehow you've outdone your previous videos... wow. This was incredibly easy to follow for a layman like me! One thing I'd be really interested in is learning about how you optimize for file sizes. Games these days are seemingly unapologetically massive. The recent Star Wars Classic Collection launch saw a (I think) 10x increase in file size without really anything added
That is great to hear, thanks man! I hadn't considered doing a video on that, I'll add it to my list. I did some pretty tricky stuff with textures and maps in Sector's Edge to keep the download size small, that would be fun to talk about
The new release has AI upscaled textures, which is presumably what they spent 40 gigabytes on. I'd expect they could have done less than half that and gotten an equivalent increase in visual quality though.
@@Vercidium Unrelated to the video. Have you thought about integrating VR to your engine? A light version of sectors edge could probably run on Quest2 and 3.
This is extremely well produced. The title is a bit click bait though, as all optimizations (other than backface culling) rely on the use of voxels. And few games use voxels. Still really cool!
I have absolutely no words to describe how in love I am with not only the great code and its explanation but also the incredible visualisation techniques. Just amazing. Truly something special, please keep it up, I'm hungry for more!
5:30 If this is standard GL4.5, there is an "extra" rendering mode: primitive restart with indexed rendering. Shrinks the max index range by one, (usually the max index value), in exchange to allow restarting triangle strips at any point using special index constant. With it, you can stuff as much as geometry into a single indexed draw op as you want. The triangle-strips basically become "null-terminated-strings of geometry". I don't know the performance of it though.
Was watching with dropped jaw the entire time, this is genius! Even crazier to me is the fact that this is 100% C# code, which destroys the assumption that you need an AOT compiled language like C++, or Rust to achieve this kind of performance. Thank you for all your work and I am looking forward to your next big project
@@user-sl6gn1ss8p Sending data to the GPU is a bottleneck; modern GPUs can render millions of triangles at crazy fps. Since it's OpenGL rendering, CPU overhead is pretty big. Reducing the amount of data reduces CPU load because it has to process fewer things, and that's the main reason for the speedups shown in this video
Those meshes are made on the CPU so if the player can modify them then the language will matter. The actual intensive C# code in this is run once and then just calls opengl in a loop.
@@tubaeseries5705 true, a lot of it comes down to reducing the time transferring data to the gpu. But also, he shows in the video the gpu as working for 8 times as long as the cpu. My point was just that this is not cpu-bound, so it is limited in that sense when it comes to comparing the effect of different languages in performance. Data transfer times only add to that point, right?
@@JG-nm9zk Sector's Edge, which uses all of these techniques, is written in C#, including the net code, and the chunks have to rebuild on each interaction, so I'd say that Sector's Edge as a game showcases that C# can be extremely performant
Triangle strips (particle perfection) can be merged through degenerate triangles: the GPU will skip drawing triangles that have 0 area. So by first duplicating the last vertex in a strip, then the first vertex in the second strip, you create two 0-area triangles that let you hop to the next location and maintain winding order properly.

Backface culling (limit breaker) is enabled by default in modern game engines; you have to explicitly turn it off to get double-sided geometry. It is *not* something too many developers ignore.

Mesh batching (massive meshes) requires marking meshes as static, or using a batched renderer. Generic game engines can't normally enable this automatically, as choosing to merge meshes that are moving, or worse, skinned, can negatively affect performance. It's not impossible to figure out, but it's CPU cycles spent on heuristics when the developer can just flip a flag instead.

Merging geometry (tiny triangles) is generally handled through Level of Detail. For a voxel game it's clearly more powerful due to the massive amount of coplanar triangles, but since you're building the geometry procedurally you have to code specifically for that case. Interestingly, tessellation is a way to flip the problem around, allowing you to generate more geometry for high graphics settings and have a base mesh with lower geometry.
All of these optimizations basically boil down to how much information is factorable; in other words, what is the minimum amount of unique information needed for a specific task. Something interesting happens when you realize this. While every single voxel is completely unique, both before and after the optimizations, there is just enough in common between them for patterns to emerge. For instance, the ordered set of numbers between 0 and 10 all differ by one while indexing in a direction, which allows a simple rule to be implemented to both calculate and store it. However, an equal set of the same cardinality may not differ by one while indexing in a given direction, resulting in drastic costs to computation and storage. Excuse any incorrect or imprecise language.
I'm glad there is another voxel optimization video out there! I used to just refer everyone to Hopson's voxel game mesh optimizations video, but this is way more detailed than that 😛
Really great and pedagogical way to go through these techniques. Well visualised, well explained and neat showing how each level of optimization might lead into another that may not be possible before. Though the title is quite clickbaity, making it seem like it's techniques applicable to any and all games at first when it's pretty specific to this type of meshes and level data setup.
Every major dev/publisher needs this. Nowadays they think if it reaches 30fps with 1080p frame scaling and frame generation, it's fine. I want 144Hz, at least 60; if it's 30 that's unplayable once you've tasted 144Hz. Ideally I want 2K too, as that's what my monitor is
You can take advantage of illegal triangles to use triangle strips without instancing. Simply specify the same vertex twice in a row and the GPU will discard it. This is known as a "degenerate strip" (careful googling that one). There's also another feature called "primitive restart", but I've never used it.
Yep that works, but we’re back to 6 vertices per voxel face. Primitive restart only works for index buffers sadly. I’m curious if there’s any advanced functionality in Vulkan that solves this
To those who think 17,000 fps is silly: he's not doing any fancy lighting. The tiny frame time means there's so much room for all kinds of effects and other things. The high frame rate also means we can get away with much smaller voxels for a more detailed world.
My degree is in applied math, only adjacent to data science. But when you compressed everything into an integer, then used a bit mask to unpack it... I got up and cheered. I've spent so many afternoon walks wondering if that exact thing could happen in a game engine.
Buffers in the GPU are sick. I remember when I first learned of vertex buffers and they blew my mind. Your presentation of all these techniques is really well refined and summarized, great job dude. Loved the video.
3:49 It's very optimised, to the point where adding more textures would break everything I think. Giving yourself more headroom now would mean less hassle having to work at such a low level later
@@Vercidium voxels means everything is made out of boxes, but not like minecraft, normal graphics like the smallest hair is made of hundreds of these voxels
@@starplatinum3305 ahh gotcha. I haven't worked with trees or octrees, but I've spent a lot of time optimising arrays for games, raycasting, particles, etc. I could do a video on that
Was amusing seeing the Helldivers 2 clip at the beginning when that's been one of the most stable and well-running new titles I've played recently. Great video!
Incredible work, as always. Not very many technical programming RUclipsrs can keep me on the edge of my seat-you’re one of the few that can make content like this engaging. I look forward to seeing how you optimize networking, anticheat, and other key parts of your game. Keep up the good work.
This is insane man. The optimizations are awesome, the visuals are mindblowing, the editing is smooth, voiceover clear. Thank you sharing all this info!
Nice video, but unfortunately it kind of perpetuates the notion that voxels are cubes made up of vertices and faces. That's like saying every pixel on screen (or in memory) is a flat square (or rectangle, or pair of triangles) with four vertices. Voxels (like pixels) are the elements themselves. The faces and vertices are simply auxiliary / intermediate entities used to *render* those cubes as sets of ("3D") polygons, and thus benefit from 3D API / GPU acceleration. There are other ways of rendering voxels, and there are even voxel displays that basically treat voxels the same way that monitors treat pixels (just with one extra coordinate). That's also how some types of 3D printer (ex., resin) work. Reducing the concept of "voxels" to games that use (traditional, polygonal) 3D _cubes_ as building blocks is kind of simplistic and misleading.
@@maalikserebryakov - Remember: things inside *_your_* head sound the way you make them sound. My only emotion when typing the comment was mild annoyance.
my head hurts. fantastic video, i have leaps and bounds to go in my compsci career, and hopefully someday i'll watch this video with the ability to fully appreciate the detail you've put into this.
I wonder if using Vulkan with VK_EXT_mesh_shader extension would allow even more performance in that case? 👀 There is a Minecraft mod which uses the OpenGL NVIDIA equivalent to give a pretty nice performance boost. Really nice video by the way! Definitely saving it for when I get the time to learn low level graphics lol
I'd assume this would introduce an overhead if you want to edit the world at runtime, i.e. you'd have to recalculate the combined faces in a chunk whenever you add or remove a voxel. Of course the loss is likely negligible compared to the gain, but I'm just curious if it's something you took into consideration when making it? dope video
You simply have to rebuild the chunks that you modify, which means rebuilding 32x32x32 = 32,768 blocks out of the 9.6 million blocks. It's rendering the entire scene at 17,000 fps, so it's clearly a non-issue; you would just have to go through the effort of programming the recomputation function.
Yep a chunk's mesh has to be regenerated when a block is added or removed. This can be done on background threads though and written to a separate region of the large combined buffer. This means the meshes can be updated in real-time without causing a stutter
Wait, how did you achieve a speed up with instancing? It’s been years, but I profiled with instancing and it was a major slowdown for such tiny meshes. It’s been common knowledge not to use instancing with < 256 verts, and I believe NVIDIA docs mention this somewhere as well.
@@shayan-gg That’s not the issue, the issue is work the driver does to render every instance. Instancing is intended for larger meshes, like an entire character model, foliage, etc.
Yes, this video's optimization techniques are mostly outdated and don't seem so practical. Some of them are effective, but most general-purpose game engines have already implemented them.
I haven't experienced a slowdown using tiny meshes with instancing. There was a performance improvement for all players - AMD, Intel on old laptops, NVIDIA - after deploying this rendering change to Sector's Edge. My guess is that 4x4 matrices were used for instancing, which would be quite a lot of data for each voxel face here and would definitely cause a slowdown.
@@Vercidium You mean me? Definitely not, I instanced faces, 4 vertices 6 indices. It’s been a long time (years), so maybe they just improved the drivers.
An index buffer would work but needs 4 vertices to be stored per face, whereas instancing only needs one.
Index buffers are also great for vertices that store a lot of data. Since the voxel data is packed so tightly, index buffers don't provide as much benefit
I feel like this is why old programmers tend to be seen as "better". They had to work with much more strained resources, so they had to work out how to optimise their code. These days you can just brute force it.
Great video, but I prefer to see milliseconds per frame instead of frames per second. Most of the time fps is not what you measure while doing optimizations
And yet, every time I tell anybody that they could make something run 5x faster with one or two more lines of code, I get accused of "premature optimisation". Like if those two lines were to take the developer such a significant time to write the product release would need to be moved a week...
Having just purchased a 24,000Hz monitor, I'm very disappointed in the lack of performance
Must optimise further!
My mouse has 50000 cpi at 50000hz , fps is way too low, need more optimization
my neural implant gives me eye processing speed of over 69420Hz. Game industry optimization technique are lagging behind today's technology
@@Vercidium What about using Vulkan instead of OpenGL for better performance? What about mesh shaders, which are what makes nvidium so miraculous?
@@MrGamelover23 I guess there is hardly a benefit in Vulkan if you can apply AZDO techniques with modern OpenGL.
this is what gamers think will happen when they leave a review that says "the lag is awful, 1 star until it's fixed"
Sometimes I read dumb comments like this one and I really wonder about the reason...
Imagine somebody complaining on the forums about lag, then the next update they can play in 8k 200fps on their 10 year old setup. Insane.
Lol
What if players try 4K first...
many of those optimizations sadly only work on voxel games like minecraft
@@creeperlolthetrouble Actually Minecraft can't use a lot of the things done in this video, like combining multiple voxels into one, as each individual voxel can be interacted with by the player. These optimizations would work for most other voxel games though
@@creeperlolthetrouble Minecraft rly needs that if ur already addressing it 😭
So to summarize
- Core principles:
- CPU cycles expensive, GPU cycles cheap so we want to reduce CPU cycles by:
- Passing as little data as possible to the GPU, which costs CPU cycles
- Doing as few draw calls as possible to avoid redundant CPU cycles
- Voxels have certain principles we can leverage mainly:
- They exist at discrete coordinates in space
- They're cubes, so only three sides are visible at a time
- They have a uniform size
- Techniques
- Avoid drawing geometry the player won't see:
- Internal faces
- Backfaces (the counterclockwise vertex strips)
- The three (or more) faces of the chunk the player can't see
- Batch draw calls and info
- Combine adjacent voxels into one mesh, forming a chunk
- Combine adjacent faces sharing a normal to form "runs" that can be drawn as one face
- Using instances, draw strips instead of triangles
- Use the symmetry of a cube to flip one strip into all six positions
- Send one indirect buffer to the GPU in one draw call instead of by chunk
- Reduce memory usage
- Using the fact that voxels use discrete coordinates, ditch floats.
- Pack location information, normal enum, length/width run info, and texture id into one 32-bit number
- Use the fact that we're chunking in combination with the SSBO to further reduce information individual voxels must hold, so now voxel world info is only meaningful with the chunk info
Wonderful video. The best part of this is how you took something "toyish" like voxels and showed off many optimization techniques that would be much hard to reason about otherwise.
don't know where you got the idea that CPU cycles are expensive tbh
CPU cycles aren't expensive; transferring data to the GPU is expensive. Indirect draws can be more efficient because they allow you to upload all of the parameters needed to draw to the GPU, and the GPU can do different calculations without having to communicate back and forth with the CPU and without having to set up fences to wait for the GPU
Awesome summarization, good job!
And to @anon1963 and @TheJorge100: CPU cycles, by comparison, are certainly more expensive, unless you have a high core count with a weak GPU.
Having a quite average 8 CPU logical cores at 5 GHz, that means you have 5 billion * 8 = 40 billion cycles.
Having a quite average RTX 3070 GPU, which has 5888 shader cores at 1.5 GHz, that means you have 1.5 billion * 5888 = something between 7.5 and 9.0 trillion cycles, closer to 9 (lazy to do the exact math). Now, 0.04 trillion cycles compared to, say, 8 trillion cycles... yeah, that's 200 times less. Aka 0.5% as many cycles in a recent 4 core CPU.
Ok, the average would be 12 to 16 logical cores on the CPU. If it's with hyperthreading, I don't know if we can do the same math as above, but let's say we can.
So for 16 logical cores, that would make it 80 billion cycles, or 1% of what a RTX 3070 can do.
If we go to the top of the line, we have 32 logical cores, so at 5 GHz that's 160 billion cycles. 2% of an RTX 3070.
And if we drop to an RTX 3060 that 3584 cores at 1320 MHz, so that's 1.32 billion * 3584 ... ok this time I'm actually computing ... we have 4.73088 trillion cycles. Wow, can't believe an RTX 3060 is basically half as powerful as a RTX 3070, damn. Anyway, 0.16 vs 4.7 .... That's about 3.4% CPU cycles compared to GPU cycles. Or a bit more than 1/30.
Even in that case, I'd call the CPU cycles expensive :)
@@Winnetou17 You can't compare it like this. GPU cores are designed to do one thing very fast: matrix multiplication. Which is why GPUs have thousands of them. CPU cores, on the other hand, are MUCH more complicated, hence why there are like 8/16 of them in average CPUs. So yeah, sending data to the GPU is expensive; CPU cycles are dirt cheap. If you're still unsure, there are a lot of great books/articles/videos on this topic
@@anon1963 I know that GPU cores are much more limited and simple. But the things they are doing aren't done faster on the CPU, soo ... I don't understand your complaint.
I didn't say to use the GPU to compute something generic. If the task at hand is to do a multiplication, a GPU core will do it as well as a CPU core (well, CPUs have higher frequencies, but I took that into account). It's true that CPU cores aren't so simple; they have multiple units and pipelines allowing basically multiple instructions per cycle. So I did simplify that part, but, broadly speaking, I'd say that what I wrote above, that doing a simple task like a multiplication is roughly the same on a CPU or GPU, still stands.
Let me use a different analogy. Say the CPU is a dozen smart people, while the GPU is 1000 people of average intelligence at best.
A CPU person (core) can do things that a GPU person (core) cannot, like check the feasibility of an investment or write some code. A GPU person cannot do that, but it can carry potatoes. Guess what, both a CPU person and a GPU person can carry a sack of potatoes with the same rough efficiency. So if the tasks at hand are to carry a lot of potatoes, then, by sheer amount of people, a GPU person is "cheaper" than a CPU person.
Hence the saying that, because they're fewer, they're more expensive. So before you use them, better make sure the GPU persons are busy working.
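The back-of-the-envelope numbers in this thread can be sanity-checked in a few lines. This sketch treats every core as one scalar unit running at its clock speed, which deliberately ignores IPC, SIMD width and the cost of moving data - it's an order-of-magnitude comparison only:

```python
# Rough "total cycles per second" comparison using the figures quoted
# above. One scalar unit per core at its clock speed -- a deliberate
# oversimplification that ignores IPC, SIMD width and memory bandwidth.

def cycles_per_second(cores, clock_hz):
    return cores * clock_hz

rtx_3070 = cycles_per_second(5888, 1.5e9)   # ~8.8 trillion
rtx_3060 = cycles_per_second(3584, 1.32e9)  # ~4.73 trillion
cpu_16t  = cycles_per_second(16, 5e9)       # 80 billion (16 logical cores @ 5 GHz)

print(rtx_3070 / 1e12)            # 8.832
print(rtx_3060 / 1e12)            # 4.73088
print(cpu_16t / rtx_3070 * 100)   # ~0.9 -- the "1% of a 3070" figure above
```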
Please show this to Mojang
Yes
They know, practically guaranteed that graphics engineers have thought of most of these.
The question that arises is: will it be worth the number of hours spent refactoring huge chunks of an existing working solution?
Most likely the answer is hell no. This solution also wouldn't fit all the usecases in minecraft, so they'd need bespoke solutions for those edge cases.
Final bill would be tens to hundreds of thousands of dollars minimum.
@RiversJ Yeah no, given they took the time to completely rebuild the game in C++ (bedrock edition), they clearly can do these optimizations.
It's probably far too late to change the source code from Java to C++. It would probably make the modding community especially mad 😂
@@inpoverty4128 wdym, Bedrock is coded in C++ and the Java edition, well, in Java. They've already done it
Damn, I can't imagine the amount of work you put to visualize all these incredible techniques.
Some of these animations have the spaghettiest of spaghetti code, so I'm glad they paid off. Thank you :)
This customer doesn't care what goes on in the kitchen, as long as the spaghetti is tasty. 👍@@Vercidium
@@simonnilsson1572well said
As an OpenGL beginner, this is scary and inspiring AF!!! Thanks!
@@simonnilsson1572 this is my new favourite quote
I've made a voxel engine before and literally half of these optimizations never even occurred to me. Bravo
I hope they help! And save you a lot of time :) this video is many years of trial and error summarised into 10 minutes
@@Vercidium woah!, that is extremely impressive please keep doing what you love
I made a voxel engine once, but it was way back in DOS demoscene days before opengl or GPUs existed. None of these optimizations were even relevant then, since engines were all bespoke and didn't have so much extra processing to cut out. Today's engines are way waaaay better, but I kind of miss how simple things were. Rendering 64k voxels with textures, lighting, and detailed fluid physics running each frame on each voxel, even an old 486 could maintain 60 fps. Good times.
Just unironically assume your game need to run ok on a PS2
Actually that makes me wonder how well a voxel game would actually run on the PS2, since the graphics hardware is so esoteric compared to something that supports OpenGL. There would probably be some optimizations like the ones in this video, but maybe other ones are also possible
@@lorenzvo5284I'd imagine you could use VU2 to perform very similar optimisations. The channel 'gamehut' discusses a very similar approach in optimising a PS2 particle system.
No jokes, a reduced render distance version of this game could actually run on a Nintendo DS
Get yourself a shitty Dell laptop from the early 2010s and optimize it to run at 60 FPS on that.
@@LumStormtrooper Someone on an early 2004 laptop of a brand you've never heard of will still complain about performance lmao
I'm pretty sure I understood maybe 10% of what was said in this video.
On the bright side I'm now 10% more educated on how to optimize game engines using voxels.
Great video. :)
So if you watch the video 10 more times it will be 100%
Now this is some good optimization! It's amazing how much you can get away with when you know all your verts are on a regular grid. Splitting meshes based on face direction for quick backface culling is so clever.
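The face-direction split this comment mentions boils down to a cheap per-chunk bounds test against the camera. A hedged sketch (function and parameter names are illustrative, not from the video's engine):

```python
# Hedged sketch: pick which of a chunk's six direction meshes to draw.
# A face pointing +X can only be seen if the camera is on the +X side
# of it, so when the camera is outside the chunk's bounds, at most
# three of the six meshes ever need drawing.

def visible_face_directions(camera_pos, chunk_min, chunk_size=32):
    """Conservatively select which face-direction meshes might be visible."""
    dirs = []
    for axis in range(3):
        lo = chunk_min[axis]
        hi = lo + chunk_size
        if camera_pos[axis] > lo:   # some +axis faces can face the camera
            dirs.append(('+', axis))
        if camera_pos[axis] < hi:   # some -axis faces can face the camera
            dirs.append(('-', axis))
    return dirs

# Camera well outside a chunk at the origin: only 3 of 6 meshes drawn.
print(visible_face_directions((100, 100, 100), (0, 0, 0)))
```

When the camera sits inside the chunk, both meshes per axis pass the test, which is the correct conservative fallback.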
Insane production quality! Explaining complex problems so precisely is quite the skill. You are as good at presenting what you know as you know it. Well played brother
Thanks heaps! I'm glad they were conveyed well, these things are tricky to visualise
@@Vercidium 11:25 I have an idea to optimize it further: what if you somehow combined the faces of different chunks when needed? When you look at the long flat areas there are a lot of unneeded extra triangles
@@TheTylerCathey That's an interesting point. However, since you are already doing everything in chunks, I believe combining them is going to be tricky, and not that good of an optimization.
I wish Minecraft had these optimizations it's devouring my ram and anything beyond 10 chunks of render distance will dip below 20 fps
I'm rather curious which of these optimizations Minecraft has. Surely it's not so unoptimized that it has none of these
@@Dudex11a
The very first one, as that is basically the reason any xray exploit even works.
In game chunks are also split into "sub-chunks", which are just the chunks mentioned in this video.
For everything else I can't say.😊
There are mods that will bring you above that value
Only face culling. Nothing more. Minecraft is a massive pile of tech debt. First issue is that it was written in Java. The state of the code is like a prototype that survived and went into production and tried to be fixed along the way. @@Dudex11a
you can use mod Sodium that greately optimizes rendering
A few things I want to mention: not every game uses clockwise order for culling.
Just as an example, DirectX and OpenGL don't have the same winding order.
And yes, basically every game uses culling to improve performance.
Disregarding the fact that with some textures - fire, leaves, windows - culling will be set to none.
Yes excellent point. The winding order can be set to clockwise or anticlockwise in OpenGL, I just used clockwise as the example here for simplicity
And yep there's no culling on leaves, but extra logic is needed to determine the normal direction, since it should be darker on the side that's facing away from the sun
@@Vercidium Yeah, well that depends on what logic you use in your game, maybe the ambient lighting is for you already enough, and if you want phong shading added to that, then you will need even more variables.
Apart from in minecraft
@@darthpotwet2668 What??
@@darthpotwet2668 If you are not using any mods, you can see all 6 sides of leaves, which is usually described as "no culling"
Such an underrated channel. I've seen a lot of voxel rendering videos and this by far takes the cake. clear visualization, concise explanation, and thorough coverage of the topic. Very engaging for how informative it is
That means a lot, thank you :)
I love this so much because everytime a bug comes, it gets fixed without me needing to do anything.
Not using the Todd Howard school of thought where you instead just blame gamers for not having modern high powered computers?
Each optimization like this has limitations of its own. Imagine trying to add a new mechanic, for example something that has elements of a voxel but behaves differently.
@@ducking_fonkey Which is where actually knowing what you're making comes in.
Something I'm pretty sure he does know.
Just FYI... all modern games do these optimizations already. The first is just occlusion culling. That's not even a modern game thing, that's shit the original Doom did.
these techniques work much worse / not at all when you consider the fact that most games arent voxel games with simple meshes and materials..
@@thecrazything95 except the fricking most-sold game of all time, which is still being updated, and for some reason no brilliant mind has thought about optimizing it (said game being Minecraft). At least the community does what Mojcan't
I already knew about a lot of this, but you demonstrated it so well that I couldn't stop watching.
And then you dropped a BOMBSHELL of one mesh per face direction! I've always wondered how you could bulk discard those faces, and now it's so obvious
This is so impressive to see!
Thank you for putting this much effort and knowledge into an understandable video.
Hope to make these kinds of videos one day
Thank you! I'm glad it helped make these concepts easier to understand
The simplistic way you put these optimizations into perspective is absolutely incredible. Many explanations I've seen miss this level of simplicity and make it sound much more complex than it needs to be.
You didn't have to do this for me, but I appreciate it so much.
Literally made a bridge between the projects I've been planning, and did it in OpenGL.
Thank you so much! You're awesome!
I'm glad to hear! Happy to help
Edit: I have unintentionally started a war in replies and I feel guilty
My brain is melting and thinking "How did nobody think of this earlier?" at the same time
The amount of data that can be stored in a single 32bit integer is amazing
This guy is just really smart. Normal people get the stock tools and just deal with problems or say stuff is impossible
They did, a lot of this stuff does get used. If games like Minecraft were just rendering unlit, immobile scenes, they'd get thousands of FPS too.
@@Poldovico I don't think Vercidium meant to imply that _all_ games lacked these optimizations and he's the first person to think of them. But plenty of games don't have them and plenty of programmers don't know about them, so.
@@General12th Oh, I think this video is great, and a great way to explain these techniques. No knock on Vercidium here, it's just the realities of the RUclips algorithm mean that the titles and thumbnails that will get you clicks will also mislead the viewers a bit, by framing a perfectly sensible video in a somewhat less sensible context.
There's kind of an implied story here, intentional or no, that the games that frustrate us could run smooth as butter, at thousands of FPS even or on much weaker hardware, if only programmers cared enough to optimize, and that's unfortunately just not quite right.
Improvement is always possible, to take the Minecraft example we have mods like Sodium, but I want to make clear the numbers in the video have a context to them that does not translate directly to a finished game.
@@General12th most games don't use voxels, and a lot of this is voxel-only. And many games only use voxels for things like volumetric lighting, which doesn't have actual meshes.
Keep making videos exactly like this 🤩
Thank you for your generosity! I definitely will, I have so much I’d love to talk about
Awesome video! Actually there is a way to "break up" triangle strips. When using index buffers you can set a special "Primitive Restart Index" value that the gpu interprets as the end of a triangle strip!
This is basically called a "degenerate triangle" (one where 2 vertices are at the same spot). Not sure if the GPU actually "recognizes" this (I doubt it), but as it's a triangle with no area, it basically lets you render individual strips out of a big one - so it's basically free, as there is no extra VS or PS involved
@@reitinet Yes that's another way to do it. I'd be surprised if GPUs didn't immediately throw away zero area triangles. Especially useful if you don't have an index buffer. Another neat trick is assigning NaN to a vertex position. GPUs will not render triangles that contain NaN coordinates. I actually don't know if this behaviour is mandated by any standards but I have seen it used to cut holes into heightmap geometry for example.
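A sketch of both tricks from this thread - the restart index and degenerate triangles - showing just the index-list construction. The GL state setup (enabling primitive restart and setting the restart value) is assumed to happen elsewhere:

```python
# Two ways to batch many triangle strips into one draw, as discussed
# above. Pure index-buffer construction, no GL calls.

RESTART = 0xFFFF  # special index the GPU treats as "end of strip"

def join_with_restart(strips):
    """Concatenate strips, separating them with the restart index."""
    out = []
    for i, strip in enumerate(strips):
        if i > 0:
            out.append(RESTART)
        out.extend(strip)
    return out

def join_with_degenerates(strips):
    """Concatenate strips by duplicating boundary vertices, producing
    zero-area (degenerate) triangles that rasterise as nothing."""
    out = []
    for i, strip in enumerate(strips):
        if i > 0:
            out.append(out[-1])   # repeat last vertex of previous strip
            out.append(strip[0])  # repeat first vertex of next strip
        out.extend(strip)
    return out

quads = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(join_with_restart(quads))      # [0, 1, 2, 3, 65535, 4, 5, 6, 7]
print(join_with_degenerates(quads))  # [0, 1, 2, 3, 3, 4, 4, 5, 6, 7]
```

With the degenerate join, the two duplicated vertices preserve winding parity as long as each sub-strip has an even vertex count (a quad's 4-vertex strip does).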
Optimization my beloved ❤❤❤
(Seriously, I'm not even into game devving yet but I try to live by this thinking)
Maaaan, you accumulated the many info in one single video. That's astonishing! I looked for info about voxel games and yours is so well presented.
5:28 Btw what you were looking for there was Primitive Restart. Allows to restart a strip without using degenerate triangles.
Thank you :) that requires an index buffer though, which would use more memory (since the vertex data is already packed so tightly)
You can use the same index buffer for every chunk.
@@Vercidium doesn't using an index buffer reduce the matrix multiplication count in the vertex shader?
There is also the option to use degenerate, zero-area triangles to link non-contiguous meshes into a single triangle strip
Oh I see in the video that you considered it too.
Somehow you've outdone your previous videos... wow. This was incredibly easy to follow for a layman like me!
One thing I'd be really interested in is learning about how you optimize for file sizes. Games these days are seemingly unapologetically massive. The recent Star Wars Classic Collection launch saw a (I think) 10x increase in file size without really anything added
That is great to hear, thanks man! I hadn't considered doing a video on that, I'll add it to my list. I did some pretty tricky stuff with textures and maps in Sector's Edge to keep the download size small, that would be fun to talk about
The new release has AI upscaled textures, which is presumably what they spent 40 gigabytes on. I'd expect they could have done less than half that and gotten an equivalent increase in visual quality though.
@@Vercidium Unrelated to the video. Have you thought about integrating VR to your engine? A light version of sectors edge could probably run on Quest2 and 3.
This is just insane quality of material/video/description/music/idea and result! Amazing!! Wish you huge success! And joining your patreon!
The optimisation level is insane! Thanks for the great insight and especially the well animated explanations.
This is amazing. Thank you so much for all the work you put into that video !
This is extremely well produced. The title is a bit click bait though, as all optimizations (other than backface culling) rely on the use of voxels. And few games use voxels. Still really cool!
game dev stuff can get pretty crazy its cool to see all the little things that add up to make something "simple"
Quite incredible, thank you for sharing this sort of stuff.
Love the way this is all demonstrated with visual examples, makes it all very clear and easy to understand! Some very clever optimising.
Incredible visuals to make understanding this way easier, big brain optimizations and amazing way to showcase them
Thank you! Glad they helped
I am addicted to these optimization videos, you explain them so well
Incredible explanation! I always struggled with OpenGL but your explanation is so clear that I wanna try to experiment with it again
I highly recommend it! Any kind of visual programming is the most fun to experiment with
This was really cool, but honestly, with your description 4:02, I finally understand bit masking. So simple when you put it like that.
It took me a while to wrap my head around it too, glad it helped!
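To make the bit-masking idea concrete, here's a hypothetical layout - the video's exact field order and widths may differ: 6 bits each for x/y/z, 3 for the normal, 8 for a texture id, using 29 of the 32 bits:

```python
# Hedged sketch of packing a voxel face into one 32-bit integer and
# unpacking it with shifts and masks. Field layout is illustrative.

def pack(x, y, z, normal, tex):
    return x | (y << 6) | (z << 12) | (normal << 18) | (tex << 21)

def unpack(v):
    return (v & 0x3F,           # x: lowest 6 bits
            (v >> 6) & 0x3F,    # y: next 6 bits
            (v >> 12) & 0x3F,   # z: next 6 bits
            (v >> 18) & 0x7,    # normal: 3 bits (6 directions)
            (v >> 21) & 0xFF)   # texture id: 8 bits

v = pack(12, 31, 7, 4, 200)
print(unpack(v))  # (12, 31, 7, 4, 200)
```

On the GPU side the same shifts and masks would run in the vertex shader to recover the fields.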
I have absolutely no words to describe how in love I am with not only the great code and its explanation but also the incredible visualisation techniques. Just amazing. Truly something special, please keep it up, I'm hungry for more!
Thank you so much! I'm glad you love it. More videos to come!
5:30 If this is standard GL 4.5, there is an "extra" rendering mode: primitive restart with indexed rendering. It shrinks the usable index range by one (usually reserving the maximum index value), in exchange for allowing triangle strips to restart at any point using that special index constant. With it, you can stuff as much geometry into a single indexed draw op as you want - the triangle strips basically become "null-terminated strings" of geometry. I don't know the performance of it though.
Damn it vercidium, that map you used as example made me nostalgic for sectors edge :(
Was watching with dropped jaw the entire time, this is genius!
Even crazier to me is the fact that this is 100% C# code, which destroys the assumption that you need an AOT compiled language like C++, or Rust to achieve this kind of performance. Thank you for all your work and I am looking forward to your next big project
well, the GPU seems to be the bottleneck there, so the language comparison is not as strong, I think
@@user-sl6gn1ss8p sending data to the GPU is a bottleneck; modern GPUs can render millions of triangles at crazy fps.
Since it's OpenGL, the CPU overhead of rendering is pretty big. Reducing the amount of data reduces CPU load because it has to process fewer things, and that's the main reason for the speed-ups shown in this video
Those meshes are made on the CPU so if the player can modify them then the language will matter. The actual intensive C# code in this is run once and then just calls opengl in a loop.
@@tubaeseries5705 true, a lot of it comes down to reducing the time transferring data to the gpu. But also, he shows in the video the gpu as working for 8 times as long as the cpu.
My point was just that this is not cpu-bound, so it is limited in that sense when it comes to comparing the effect of different languages in performance. Data transfer times only add to that point, right?
@@JG-nm9zk Sector's Edge, which uses all of these techniques, is written in C#, including the net code, and the chunks have to be rebuilt on each interaction - so I'd say Sector's Edge as a game showcases that C# can be extremely performant
Triangle strips (particle perfection) can be merged through degenerate triangles: the GPU will skip drawing triangles that have 0 area. So by first duplicating the last vertex in a strip, then the first vertex in the second strip you create two 0 area triangles that allow you to hop to the next location and maintain winding order properly.
Backface culling (limit breaker) is enabled by default in modern game engines, you have to explicitly turn it off to get double sided geometry. It is *not* something too many developers ignore.
Mesh batching (massive meshes) requires marking meshes as static, or using a batched renderer. Generic game engines can't normally enable this automatically as if it chooses to merge meshes that are moving, or even better skinned, it can negatively effect performance. It's not impossible to figure out, but it's CPU cycles spent on doing heuristics when the developer can just flip a flag instead.
Merging geometry (tiny triangles) is generally handled through Level of Detail, for a voxel game it's clearly more powerful due to the massive amount of coplanar triangles however but since you're building the geometry procedurally you have to code specifically for that case. Interestingly, tesselation is a way to flip the problem around, allowing you to generate more geometry for high graphics settings and have a base mesh with lower geometry.
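The coplanar-merge case is easiest to see in 1D. A hedged sketch of the "runs" idea - merging adjacent faces that share a texture along one row; a real greedy mesher would extend this across both axes of each slice:

```python
# Sketch: collapse adjacent coplanar faces with the same texture into
# one wide face ("run"), reducing triangle count on flat areas.

def greedy_runs(row):
    """row: texture id per cell, or None where there is no face.
    Returns a list of (start, length, texture) runs."""
    runs = []
    i = 0
    while i < len(row):
        if row[i] is None:
            i += 1
            continue
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1                      # extend the run while textures match
        runs.append((i, j - i, row[i]))
        i = j
    return runs

# Three grass faces then two stone faces -> two runs instead of five faces.
print(greedy_runs([1, 1, 1, None, 2, 2]))  # [(0, 3, 1), (4, 2, 2)]
```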
All of these optimizations basically boil down to how much information is factorable. In other words, what is the minimum required amount of unique information needed for a specific task.
Something interesting happens when you realize this. While every single voxel is completely unique, both before and after the optimizations, there is just enough common between each of them for patterns to emerge.
For instance, the ordered set of numbers between 0 and 10 all differ by one while indexing in a direction which allows a simple rule to be implemented to both calculate and store it. However, a equal set of the same cardinality may not differ by one while indexing in a given direction resulting in drastic costs to computation and storage. Excuse any incorrect or imprecise language.
Just started the first 25 seconds and I’m already hooked. Love the content!
That is great to hear, thank you!
I'm glad there is another voxel optimization video out there! I used to just refer everyone to Hopson's voxel game mesh optimizations video, but this is way more detailed that that 😛
Thank you for your help creating this video! I owe the face-in-SSBO optimisation to you
@@Vercidium It was a pleasure ~
this video is amazing!
i want to learn this stuff and your channel is a literal gold mine! thank you!
Thank you! You are most welcome
@@Vercidium 😁
Really great and pedagogical way to go through these techniques.
Well visualised, well explained and neat showing how each level of optimization might lead into another that may not be possible before.
Though the title is quite clickbaity, making it seem like it's techniques applicable to any and all games at first when it's pretty specific to this type of meshes and level data setup.
I don’t even code games (or any software) and I still watched and enjoyed the whole video because I love seeing how things work. Great vid!
Every Major Dev/Publisher needs this
Nowadays they think that if it reaches 30 fps with 1080p frame scaling and frame generation, it's fine
I want 144 Hz, or at least 60 - 30 is unplayable once you've tasted 144 Hz. Ideally I'd want 2K too, that's what my monitor is
You can take advantage of illegal triangles to use triangle strips without instancing. Simply specify the same vertex twice in a row and the GPU will discard it. This is known as a "degenerate strip" (careful googling that one). There's also another feature called "primitive restart", but I've never used it.
Yep that works, but we’re back to 6 vertices per voxel face. Primitive restart only works for index buffers sadly.
I’m curious if there’s any advanced functionality in Vulkan that solves this
To those who think 17,000 fps is silly: he's not doing any fancy lighting, and the tiny frame time means there's so much room for all kinds of effects and other things.
the high frame rate also means we can get away with much smaller voxels for a more detailed world.
I'd be curious to see him pull off some cleverly baked tricks. Baked SSAO, Baked light map.
Also people using less powerful computers
My degree is in applied math, only adjacent to data science. But when you compressed everything into an integer, then used a bit mask to unpack it... I got up and cheered.
I've spent so many afternoon walks wondering if that exact thing could happen in a game engine.
Hahaha yes games use it quite often
Buffers in the GPU are sick. I remember when I first learned of vertex buffers and they blew my mind. Your presentation of all these techniques is really well refined and summarized, great job dude. Loved the video.
This is so impressive not just the optimization but the video editing. Bravo! Subed and notified!
Thank you! Glad you enjoyed it
3:49 It's optimised to the point where adding more textures would break everything, I think.
Giving yourself more headroom now would mean less hassle when working at such a low level later
He could have at least 3 more bits if he only used 5 bits each for x, y, z
One of the best channels I have discovered recently. Your stuff is awesome and very educational. Well done.
Sectors edge was bangin when it had a player base
The optimization at 10:05 seems both useful and easy to make. Gonna try this out for my game!
Good Video! I am also building my game engine for the same reason (that is optimization). Subscribed!
friendly reminder that
polygonal box grid != voxels
I'm intrigued, what's the difference? Can voxel data not be rendered using triangles?
@@Vercidium voxels mean everything is made out of boxes, but not like Minecraft - in normal graphics even the smallest hair is made of hundreds of these voxels
Great video man 🔥
Lets gooooo, optimizations !!!
Btw, how about the data structure thingy? Can you make a vid about it?
Hey which data structure? For the voxels?
@@Vercidium no no, any kind of optimisation data structure - the "tree" things, I think, the octree, quadtree or something
@@starplatinum3305 ahh gotcha. I haven't worked with trees or octrees, but I've spent a lot of time optimising arrays for games, raycasting, particles, etc. I could do a video on that
proper good job with the code, visualizations and video in general
Was amusing seeing the Helldivers 2 clip at the beginning when that's been one of the most stable and well-running new titles I've played recently. Great video!
Incredible work, as always. Not very many technical programming RUclipsrs can keep me on the edge of my seat-you’re one of the few that can make content like this engaging.
I look forward to seeing how you optimize networking, anticheat, and other key parts of your game. Keep up the good work.
He must be hired
By FromSoftware
Wouldn't that kill creativity? ;-)
Mojang, hire this man!
@@FreeSalesTips nah, valve hire that coding god
@ZR0xDEADDEAD agree XD
This is so awesome, my man just gamefied game deving
This is insane man. The optimizations are awesome, the visuals are mindblowing, the editing is smooth, voiceover clear. Thank you sharing all this info!
You're most welcome, thank you for the kind words!
Bro how do I thank you enough, the editing is unreal
A-are you the Gigachad of Optimization?
Gigachad? That would use too much memory. I'd rather be the Kilochad
@@Vercidium bro 😂
@@Vercidium This is the best comment I have ever seen
@@Vercidium based 💀
love me some programmer jokes@@Vercidium
This is so incredibly well explained that my monkey brain understood all of it
Nice video, but unfortunately it kind of perpetuates the notion that voxels are cubes made up of vertices and faces. That's like saying every pixel on screen (or in memory) is a flat square (or rectangle, or pair of triangles) with four vertices.
Voxels (like pixels) are the elements themselves. The faces and vertices are simply auxiliary / intermediate entities used to *render* those cubes as sets of ("3D") polygons, and thus benefit from 3D API / GPU acceleration.
There are other ways of rendering voxels, and there are even voxel displays that basically treat voxels the same way that monitors treat pixels (just with one extra coordinate). That's also how some types of 3D printer (ex., resin) work.
Reducing the concept of "voxels" to games that use (traditional, polygonal) 3D _cubes_ as building blocks is kind of simplistic and misleading.
It sounds like you spent your whole life on this stuff
From the emotionally charged way you typed this comment
@@maalikserebryakov - Remember: things inside *_your_* head sound the way you make them sound. My only emotion when typing the comment was mild annoyance.
my head hurts. fantastic video, i have leaps and bounds to go in my compsci career, and hopefully someday i'll watch this video with the ability to fully appreciate the detail you've put into this.
Thank you! Happy to answer any questions
The part about splitting the meshes based on face direction blew my mind! Well done
Please make a playlist in which u fix other games or create a game yourself!
You should be hired by mojang to optimize minecraft
kid named jellysquid
Minecraft 2500fps on my fridge tomorrow
do you really think they're not already squeezing out every possible optimization that doesn't compromise gameplay?
@@Splatpope not for the original, just their rebuilt mimic that they use to sell skins to kids
It is optimised lol
Ok, this works fine for static worlds, but not with physics per voxel
This is so damn satisfying. Thank you.
Thank you!
This is fantastic. Wonderful job!
This man could single-handedly revolutionize standalone VR by optimizing visual quality.
This is a cool video explaining optimization strategies, but none of this is a new invention.
1:44 You didn't put the algorithm in the description
@Vercidium This, what algorithm did you use?
It's there now, sorry for the delay!
I wonder if using Vulkan with VK_EXT_mesh_shader extension would allow even more performance in that case? 👀
There is a Minecraft mod which uses the OpenGL NVIDIA equivalent to give a pretty nice performance boost.
Really nice video by the way! Definitely saving it for when I get the time to learn low level graphics lol
I'm super keen to try mesh shaders, I still haven't wrapped my head around it but I reckon it would render this scene much quicker!
Rendering optimization was a black box to me. Now I kinda understand the principles behind it. That video is a gem, thank you so much !
I'm just starting out with game dev and I'm glad I found you
I'd assume this would introduce an overhead if you want to edit the world at runtime, i.e. you'd have to recalculate the combined faces in a chunk whenever you add or remove a voxel. Of course the loss is likely negligible compared to the gain, but I'm just curious if it's something you took into consideration when making it?
dope video
You simply have to rebuild the chunks that you modify, which means rebuilding 32x32x32 = 32,768 blocks out of the 9.6 million. It's rendering the entire scene at 17,000 fps, so it's clearly a non-issue; you would just have to go through the effort of programming the recomputation function.
Yep a chunk's mesh has to be regenerated when a block is added or removed. This can be done on background threads though and written to a separate region of the large combined buffer. This means the meshes can be updated in real-time without causing a stutter
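A sketch of the dirty-chunk bookkeeping implied above: an edit rebuilds its own 32³ chunk, plus any neighbour whose face culling depends on the edited block. The chunk size of 32 matches the thread; the helper name is illustrative:

```python
# Which chunk meshes must be regenerated after the voxel at (x, y, z)
# changes? The edit's own chunk always; a neighbouring chunk too when
# the voxel sits on a shared border, since that neighbour's internal
# face culling looked at this voxel.

CHUNK = 32

def chunks_to_rebuild(x, y, z):
    base = (x // CHUNK, y // CHUNK, z // CHUNK)
    dirty = {base}
    for axis, v in enumerate((x, y, z)):
        if v % CHUNK == 0:              # on the low border of its chunk
            c = list(base)
            c[axis] -= 1
            dirty.add(tuple(c))
        if v % CHUNK == CHUNK - 1:      # on the high border
            c = list(base)
            c[axis] += 1
            dirty.add(tuple(c))
    return dirty

print(sorted(chunks_to_rebuild(40, 40, 40)))  # [(1, 1, 1)] -- interior edit
print(sorted(chunks_to_rebuild(32, 40, 40)))  # [(0, 1, 1), (1, 1, 1)]
```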
Wait, how did you achieve a speed up with instancing? It’s been years, but I profiled with instancing and it was a major slowdown for such tiny meshes. It’s been common knowledge not to use instancing with < 256 verts, and I believe NVIDIA docs mention this somewhere as well.
maybe because each voxel uses a single 32-bit integer
@@shayan-gg That’s not the issue, the issue is work the driver does to render every instance. Instancing is intended for larger meshes, like an entire character model, foliage, etc.
Yes, this video's optimization techniques are mostly outdated and don't seem so practical. Some of them are effective, but... most general-purpose game engines already implement them.
I haven't experienced a slowdown using tiny meshes with instancing. There was a performance improvement for all players - AMD, Intel on old laptops, NVIDIA - after deploying this rendering change to Sector's Edge.
My guess is that 4x4 matrices were used for instancing, which would be a quite a lot of data for each voxel face here and definitely cause a slowdown.
@@Vercidium You mean me? Definitely not, I instanced faces, 4 vertices 6 indices. It’s been a long time (years), so maybe they just improved the drivers.
5:20 why not use an index buffer?
An index buffer would work but needs 4 vertices to be stored per face, whereas instancing only needs one
Index buffers are also great for vertices that store a lot of data. Since the voxel data is packed so tightly, index buffers don't provide as much benefit
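Rough per-face memory arithmetic behind this reply, assuming the packed 4-byte vertex discussed in the video and 16-bit indices:

```python
# Per-face GPU memory: indexed quads vs one instance record per face.
# Assumes a 4-byte packed vertex and 2-byte indices.

def indexed_quad_bytes(faces):
    # 4 unique vertices * 4 bytes + 6 indices * 2 bytes per face
    return faces * (4 * 4 + 6 * 2)

def instanced_face_bytes(faces):
    # one packed 4-byte instance record per face; the tiny 4-vertex
    # template strip is shared by every instance, so it's negligible
    return faces * 4

faces = 1_000_000
print(indexed_quad_bytes(faces))    # 28000000 bytes
print(instanced_face_bytes(faces))  # 4000000 bytes -- 7x smaller
```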
Why the hell is this so well made bro????? Nice f*ing work bro you are a damn genius
I feel like this is why old programmers tend to be seen as "better". They had to work with much more strained resources, so they had to work out how to optimise their code. These days you can just brute force it.
Babe wake up Vercidium uploaded
nice optimizations! now where can I find that 17000 Hz monitor?
!remindme 30 years time
Great video, but I prefer to see milliseconds per frame instead of frames per second. Most of the time fps is not what you measure while doing optimizations
So incredibly good
Insane effort went into this video, editing and code-wise, this is fascinating
5:40 TIL GL_QUADS was deprecated
currently watching this while my 2d game engine runs at 7fps :)))
And yet, every time I tell anybody that they could make something run 5x faster with one or two more lines of code, I get accused of "premature optimisation".
As if those two lines would take the developer so long to write that the product release would need to slip by a week...
One of the best videos I've ever seen about graphic optimizations with amazing and intuitive visuals. Bravo! You have a gift man!
@@RmaxTwice wow, thank you so much! It took a while to make these animations so I’m glad they help :)
Amazing optimizations