The extra triangles trick is GOLDEN
I love it when channels like this just randomly pop up in my recommendations! These types of channels are the very reason why the games I'm making are even playable lol. Thank you for the videos!
Thank you so much!
This channel has randomly appeared in my recommended feed and damn glad I am that it did. You have so much cool stuff here!
Thank you so much! And I have even more videos planned
@@Vercidium I just found your channel too and I'm really excited, really great context and explanations. Thanks for sharing these techniques.
@@matthewzeller5026 no worries, thank you for the kind words!
Yeah me too. I watch a lot of Blender and OpenGL tutorials
I am surprised nobody is talking about the Netflix logo at 3:55
Anyway, great video, don't know if it'll ever help me, but at least it is something new
Mann this is extremely polished and well produced
This is cool, but you can also precompute all vertex positions so that the shader doesn't have to do it every frame.
Where would the precomputed data be stored? It’s performed every frame as it’s animated / blows in the wind
@@Vercidium in the fbx file 🙃
@@Genebriss I tried writing an FBX parser once and ended up destroying half of downtown New York
@@Vercidium normally you have a "wind strength" vertex weight attribute or the like. It has the nice effect that you can reuse the same vertex shader for lots of different plants.
@@SimonBuchanNz I see, I like it. The UV coordinates could be used as the weight attribute as it gives the top of the leaf a value of 1 and the base a value of 0
“Go back and look at that fern. Stanley, this fern will be very important later in the story. Make sure you study it closely and remember it carefully.”
This video series is very informative, I want more! I know it's been over a year since you made one, but if you make more I'll be back. YouTube needs more game dev content like this instead of another 10,000 videos on how to get started with Unity haha.
As I am just beginning to learn about shaders I am really glad these videos popped up in my recommended! Incredible stuff, really looking forward to using some of these techniques in my own projects!
I love this, your visualizations are so clear! Hope you will keep making more like this!
Amazing video, super informative, saving this for later!
4:16 This is known as a degenerate triangle. Degeneracy in this context means that a portion of a shape is arranged in a collinear way. In the case of a triangle, when two or all three of its vertices overlap, the effective area becomes zero, so rasterisation is skipped. Using degenerate triangles in this way used to be called "stitching," but the term has multiple uses and the more general "batching" is more common these days. It was a form of batching triangle strips and (separately) fans back in the early days of OpenGL and hardware accelerators.
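For anyone who hasn't seen strip stitching before, here's a minimal sketch of the classic indexed version the comment above describes (not the video's method, which doesn't use an index buffer): duplicating the last index of one strip and the first index of the next produces only zero-area triangles between them, so two strips can share one draw call.

```cpp
// Sketch: stitching two triangle strips with degenerate triangles.
// Vertex indices 0-3 form strip A and 4-7 form strip B (illustrative values).
#include <cstdint>
#include <vector>

std::vector<uint32_t> BuildStitchedStrip()
{
    return {
        0, 1, 2, 3, // strip A
        3, 4,       // repeat A's last index and B's first index
        4, 5, 6, 7  // strip B
    };
    // Drawn as GL_TRIANGLE_STRIP, the four triangles that contain a repeated
    // index have zero area, so the rasteriser skips them.
}
```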
This channel is a gem, great content. Thank you.
The lighting in your fancy fern montage reminded me of a lighting trick that seems up your alley.
You can bake animated shadows for any object-light pair where neither element's motion can be influenced by the player. Great for day-night cycles.
Ooh how does that work?
It's similar to existing methods of baking shadow maps, but you need some more metadata at compile time, and the process results in animated textures.
Quick primer on shadow maps cuz I don't know who reads these:
If you mark all the lights and objects that don't move in a scene, you can precompute the shadows for those things and save them as image textures. That way you don't have to do that expensive lighting math every frame for those objects. Nice!
If you have moving lights or objects, they're traditionally excluded from this method, but they don't have to be, as long as you know what the movement is going to be. Take for example the position of the sun in a day/night cycle. Give the shadow baker a reference to the movement loop, and it should be able to do its normal process for every frame of that movement. Poof! Animated shadow map!
If you have multiple lights or objects on loops, you have some options:
1: You could bake a texture for each object-light pair, but then you'll undo all your hard work un-bottlenecking the cpu-gpu data pipeline.
2: You could do #1, then extend the loops of each animated texture on a mesh to the least common multiple of their runtime, then squash down to a single animated texture. Unfortunately, that texture's loop could be as long as the longest loop times the number of lights in the scene.
3: You could bake, for example, just the day/night cycle, expecting that should save you the most runtime computation.
4: You can do something clever I haven't thought of to solve this problem.
Update to the above: I'd probably just handle looping lights and stationary objects, since the kind of game that cares about this level of graphics optimization probably doesn't have many objects that move on loops anyway.
Love your presentation. Even though I am not into video game development, I find this quite interesting.
This is such a well explained video that now I feel I can read shader code no problem. Next step would be to learn to write it
Great video, editing is great and your explanations are also very intuitive without getting into too much complexity right away.
I love this!
If i weren't unemployed, I would join your Patreon at a pay tier.
Please do continue to share these with us here.
Just discovered your channel. I recently got into game engine development, and the last few videos of yours have been interesting to watch, I love your approaches to optimization. Thank you, and I hope you continue to do tutorials and tips like this!
Totally invaluable knowledge being put out here for free, absolutely incredible tips and tricks.
You popped up in my recommended and I wish you the best on youtube. :) Too good to miss a single video, subbed and notifications on. Keep up the good work! :D
Hats off to you good sir.
Thank you so much, I’m glad you like the videos!
i love vertex instancing so much!
It has so many applications! I try to use it wherever possible
This is so cool, looks so natural & realistic.
Thanks for detailed explanation & tutorial 🙏
Loving the presentation quality of these videos so far! Excited to see where you take the channel!
Also, I must say the terrain at 1:13 is bizarrely striking. I don't know what it is that seems so stylish and cohesive about it, but it's more mysterious and enticing than a lot of upcoming AAA game terrain lol.
Amazing video man, amazing, good looking and infomative, gg.
Damn okay I didn't know you had editing skills as well as game dev skills.
On the other hand, I'll be taking notes for when I continue messing around with OpenGL.
Love the outro song dude! So nice to meet a fellow Punch Deck fan! Totally gotta sub.
LOVE Punch Deck! We commissioned the whole Sector's Edge soundtrack from him and he did an amazing job. It's on Spotify :)
@@Vercidium Oh so YOU'RE the one who made Sector's Edge!
Halt State, Flow State and Fluid Dynamics are all some of my favourite songs of all time, but The Vibe has achieved legendary status for having the best use of LFOs I've ever heard.
@@Vercidium If you like Punch Deck you might like Exyl, he has a similar style of music with really good dynamics and a sort of crisp ethereal sound. The only difference is that Exyl mostly makes dubstep:
ruclips.net/video/EEnzDZFy5oY/видео.html
you deserve 1 million sub no cap
Nice stuff! I think twisting a leaf around an axis that runs parallel to the leaf would add even more realism.
Was thinking the same thing.
I hadn't considered that! I'll give it a go
@@Vercidium Honestly, I think even something like a vec3 wind uniform that includes x direction, y direction, and magnitude would improve this, too. Then you can bias the oscillations in the direction the wind is blowing (and greater at the tips) and use a trig function composed of multiple sine functions with different periods to make it appear a bit gusty.
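A rough sketch of that suggestion (the uniform layout, constants and function names are invented, not from the video): sum a few sine waves with different periods for gustiness, then scale by the wind magnitude and a tip weight.

```cpp
// Hypothetical GLSL helper illustrating the idea above.
const char* kGustyWindSnippet = R"glsl(
uniform vec3 wind;   // xy = horizontal direction, z = magnitude (assumed layout)
uniform float time;

vec3 ApplyWind(vec3 pos, float tipWeight) // tipWeight: 0 at the base, 1 at the tip
{
    // Several sine waves with different periods so the motion looks gusty.
    float gust = sin(time * 1.3)
               + 0.5  * sin(time * 2.7 + pos.x)
               + 0.25 * sin(time * 5.1 + pos.z);
    pos.xz += normalize(wind.xy) * gust * wind.z * tipWeight;
    return pos;
}
)glsl";
```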
Absolutely amazing video
Looking forward to more videos! I just recently finally grasped OpenGL enough to understand what the hell is going on and got a basic project going in Zig, really looking forward to trying these things out! This channel will become big, especially if you keep making videos like these; it's just a question of time until the algorithm picks you up. Thank you
@@koool56 you are too kind, thank you! Nice work setting up your first project, you now have a canvas where you can create anything.
Let us know if you have any specific questions, or if you’d like a video to cover something in detail.
I also recommend learnopengl.com, I knew nothing about 3D rendering 6 years ago and this site taught me so much
I just realized you're the guy who made Sector's Edge (Love that game)
i’m keeping an eye on your channel i feel like you’re gonna get big
Instant sub. Thank you for this.
You and Acerola are godsends to me at a time like this!
That means a lot, I’m glad I can help!
good fern montage at the end. I enjoyed my stay
Thank you, I’m glad you did :)
I was hoping you could render the leaves without textures, but this is cool!
Might be able to replicate the texture procedurally one day, I’d love to try it!
no word to describe, genius
What's the advantage of doing this? Presumably mesh data only gets sent once at startup anyway, so shouldn't be too big of an impact?
There’s a few advantages:
- Each fern can have a different number of leaves with varying sizes and orientations, making scenes look less repetitive. This can also be changed on the fly (no need to re-import from Blender or upload a new mesh to the GPU)
- There's no vertex bandwidth. Each vertex shader can run without waiting for vertex data (model data) to be read from memory
- One less matrix4 multiplication (64 multiplications and 48 additions) is performed per vertex, since it’s positioned and rotated using sin/cos instead
@@VercidiumThat third explanation now makes your use of multiple trig functions make sense. I thought it was weird that you'd be using something expensive multiple times, but it turns out to be the cheaper alternative at scale.
@@Vercidium Point 2: "Each shader can run without waiting for model data". Is this because each instance is using the same buffer? I presume there is _some_ buffer? or can you issue a draw call with no buffer and a number of verts?
@@jnevercast there is a tiny buffer with 1 byte per vertex that’s unused. Unfortunately a vertex buffer still needs to exist to render N vertices
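To make the thread above concrete, here is a minimal sketch of the general approach being discussed (not the video's actual shader; all constants such as 12 vertices per leaf, the fan angle and the sway amount are invented): the vertex shader ignores the placeholder buffer entirely and derives each leaf vertex from gl_VertexID with sin/cos, so no model data ever needs to be fetched.

```cpp
// Illustrative GLSL vertex shader, embedded as a C++ raw string.
const char* kGeneratedFernSnippet = R"glsl(
#version 330 core
uniform mat4 viewProjection;
uniform float time;

void main()
{
    int leaf   = gl_VertexID / 12;            // which leaf this vertex belongs to
    int corner = gl_VertexID % 12;            // position within the leaf's strip

    float angle = float(leaf) * 0.26;         // fan the leaves around the stem
    float t     = float(corner / 2) / 5.0;    // 0 at the base, 1 at the tip
    float side  = float(corner % 2) * 2.0 - 1.0;
    float sway  = sin(time + float(leaf)) * 0.1 * t;

    // Position the vertex with sin/cos instead of a per-leaf model matrix.
    vec3 pos = vec3(cos(angle), 0.0, sin(angle)) * t
             + vec3(-sin(angle), 0.0, cos(angle)) * side * 0.05 * (1.0 - t)
             + vec3(0.0, 0.3 * t + sway, 0.0);

    gl_Position = viewProjection * vec4(pos, 1.0);
}
)glsl";
```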
Wow dude is all the mograph done by you? Well done
Thank you! Yes all done in C# and OpenGL. It was a ton of work but I've built up an animation system that I can re-use for the next videos
Great and useful content!
Great channel, I'll come back later if I'm making games lol
Really well presented video! Super inspiring, though I just wish the title wasn't as misleading.
What did you feel was misleading about it? I think it did what it said on the tin.
@@daveycoleman hahah well you changed the title you rascal! This title is much better imo ❤️
Hey Mitch, great channel. Subbed after watching like two videos. The fern still looks a little bit as if it is breathing and not really affected by wind. I believe you should be able to walk across the leaves you generated from last and first (or really any index you fancy) to simulate a more direct wind. (disclaimer: I am not a game developer, just a boring software developer)
Thanks for the feedback, good point. I've since set up a basic wind system that moves ferns / trees / grass in the same direction, but it's tricky as the triangles are being stretched/compressed. Keen to solve it though because realistic wind looks awesome
3:55 - hey, who invited the Netflix logo? 😀
yeah netflix go away! get your own youtube channel
I LOVE THIS SO MUCH XD
This is amazing
keen, i've been using opengl in win32 without glm so my models are generated procedurally.
i can guarantee you one thing tho, all the people raised on video games would never recognise actual non motile plants.
When using triangle strips, isn't there a special index (like 0xffffffff) for starting a new strip within the same call, so you wouldn't need to do the hack with the invisible triangle?
Yes there is, that’s called Primitive Restart. But unfortunately I’m not using an index buffer and don’t have access to it :(
@@Vercidium Ah, I see... thanks!
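For reference, a minimal sketch of the Primitive Restart setup mentioned above, assuming an indexed triangle strip and a loader such as glad (both assumptions, not part of the video's renderer):

```cpp
#include <glad/glad.h> // assumed loader; any GL 3.1+ loader works

// Draw many leaf strips from one index buffer; the sentinel index 0xFFFFFFFF
// starts a new strip wherever it appears in the indices.
void DrawLeafStrips(GLuint vao, GLsizei indexCount)
{
    glBindVertexArray(vao);
    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(0xFFFFFFFFu);
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_INT, nullptr);
    glDisable(GL_PRIMITIVE_RESTART);
}
```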
Very cool video! I have a question: how does this work on the CPU side? (buffers/vertex arrays) I'm trying to make something similar for sprite rendering, and just calling glDrawArrays doesn't work. I already have a shader like the one at 5:58, but I don't understand how to prepare the data on the CPU side
It only missed the wind strength so the leaves move more or less.
Maybe wind direction if you really need perfection.
Hey, I find your videos really interesting. You made the wind more realistic with a few tweaks. I think syncing the leaves to an external wind is needed tho, this end result wouldn't fly in a real game. That would also interest me.
Yes that's essential and is something I'm keen to try. I've added it to my ideas list, watch out for future videos!
I'm thinking for a simple wind direction implementation:
Add a constant(around 4 or so) to wind, to move the range to all positives
multiply wind by cos(windDirection - yaw)
adjust the constant for totalPitch
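Read literally, that suggestion might look something like the sketch below (the uniform names, the +4 constant and the function are the commenter's idea or guesses, not the video's code):

```cpp
const char* kDirectionalWindSnippet = R"glsl(
uniform float windDirection; // wind heading in radians (hypothetical uniform)
uniform float time;

float DirectionalWind(float yaw) // yaw: the leaf's heading around the stem
{
    float wind = sin(time) + 4.0;           // constant shifts the range to all positives
    return wind * cos(windDirection - yaw); // strongest for leaves facing the wind
}
)glsl";
```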
this would be applied for stylized games I'm assuming?
it doesn't seem to have the direct artistic control that DCC applications provide, and when optimization is mentioned we can see from AAA games (TLOU, RDR2, CoD MW, Horizon, etc) that those games can run at 60fps on recommended hardware specs
If I understood correctly, you load an N-vertex buffer onto the GPU, and then modify the verts in a vertex shader. Instead of loading the exact number of vertices onto the GPU that you're going to change anyway, you could look up geometry shaders and tessellation shaders to generate meshes on the fly. But for the specific fern/grass example I think it's better (performance-wise) to load a model once, then instance it, and have a shader that adds the wind animation based on some properties of the vertices (like vertex colors) and uniforms (like a wind direction vector (or a texture)).
Imagine an entirely procedurally generated game environment made using only shaders. Geometry, textures, animation.
See .kkrieger
😮mind blown!!!
You are a genius
Haha not quite but thank you!
if people played a mobile game that used the same optimizations as yours and ran at high frame rates, they'd enjoy it so much that they wouldn't even focus on the game loop
> Cheated and Created Plants with No Geometry
> Proceeds to use geometry
How much does the transparent texture affect performance? I remember reading/watching somewhere (might’ve been the StylizedStation channel) that too many transparent textures can slow things down. Something to do with potentially having to calculate the color value for each pixel multiple times for each texture it passes through, I think.
That’s right, transparency is a performance killer. I use discard statements in the fragment shader here instead of blending, to avoid the blending cost
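A minimal sketch of what alpha testing with discard looks like (the texture, varying and threshold are made-up names and values, not the video's shader):

```cpp
const char* kLeafFragmentSnippet = R"glsl(
#version 330 core
uniform sampler2D leafTexture;
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec4 texel = texture(leafTexture, uv);
    if (texel.a < 0.5)
        discard; // drop transparent texels entirely; no blend state required
    fragColor = vec4(texel.rgb, 1.0);
}
)glsl";
```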
Good video.
It's genius... I'm an ML engineer trying my skills in gamedev. Would you make some 2D physics tutorials? I'm trying to make a 2D ballistic sim in Python, and to me it looks like I can't do a lot to optimize it (I decided to simulate not only the ballistic part but also wind friction, and make the rocket 2-stage (reactive + ballistic)). But after your videos, I'm sure - I'm wrong=) Best regards from Ukraine, Odesa!
If there's one thing I can't do, it's physics! I can't help there sorry, but I enrolled in this course last year and it explains everything very well: pikuma.com/courses/game-physics-engine-programming
@@Vercidium wow! Never heard of this website! Will take this course along with game engines! Thank you very much!!
Another pro tip to make things look more natural: don’t make your objects (such as plants) perfectly symmetrical with maths
That bit of code with the triangle vertices you showed made me wonder something. Are shading units capable of speculative execution? That is, if that code is actually run by them. I know extremely little about the topic. Since the branches are always accessed in a fixed order, it seems like it would be very speedy if it was able to go to the right line of code ahead of time.
keep up!!!
Damn the gpu gonna be happy about rendering so many transparent textures on top of each other for sure :)
Haha as long as they’re not filling the screen, it’s not too bad
I looked at a few other games and they all seem to use 2D planes with a transparent leaf/branch texture on them
I’d love to figure out a better system, possibly convert the texture to a mesh with minimal transparency
@@Vercidium modern GPUs can easily handle tons of vertices with optimized draw calls. You can easily model every leaf with a handful of vertices, get the additional benefit of individual animations for each leaf, and get better performance than having a bunch of these transparent bushes...
@@changecraft6354 sounds like an idea for a future video! For trees and plants in the distance, would it fall back to a 2D billboard?
remember, draw calls are not expensive, state switching is. If all your draw calls use the same state it will be fast
Pardon me, but where exactly are you generating new geometry? For all I can see, you still have to create an equal number of vertices/polygons beforehand that you are going to work with. You still are taking existing vertices, and for each existing vertex you set a single render position, as is common with current plant render solutions. Granted, it's a very clever way of rendering a single straight strip of several polygons by cleverly rearranging vertex positions, but it's still not done without requiring pre-existing geometry, let alone, as your video title wrongly claims, "no geometry".
You’re right, there is still an underlying vertex buffer that’s filled with zeroes.
Rather than containing multiple floats for position/normal/UV data, each vertex in the buffer is a single byte.
This means for a fern with 24 leaves and 12 vertices per leaf, it’s allocating 288 bytes of vertex data, but not using it.
In WebGPU you can completely avoid allocating any vertex buffers, and just draw an arbitrary number of polygons.
@@SimonBuchanNz Don't need WebGPU, you can set the glDrawArrays count to the number of vertices you need, no need to bind any buffer
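A sketch of that "no buffer at all" variant (assuming a core-profile context and a loader like glad): a VAO still has to be bound, but it can be empty when the shader derives everything from gl_VertexID.

```cpp
#include <glad/glad.h> // assumed loader

void DrawAttributeless(GLuint emptyVao, GLuint program, GLsizei vertexCount)
{
    glUseProgram(program);
    glBindVertexArray(emptyVao); // created with glGenVertexArrays, no VBO attached
    glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
}
```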
@@Vercidium That doesn't change the fact that you are still creating geometry by rendering a texture along a polygon, making the video title a lie.
Next video: How I cheated and rendered a 3D game with no triangles
Now that sounds like a fun chal- *8 years down the drain*
Akshually it's faster to render directly with voxels and/or primitives (but it's WAY harder to work with and requires many changes to art style)
Always remember: "Switching to Default Cube is always faster than rewriting main Shader-Code" or something like that. @@bits360wastaken
there are, in fact, ways to make 3d graphics without triangles (such as distance fields), and they can actually do some pretty impressive things... but of course, they have their own limitations; not sure if anyone ever used such things for games (outside of some volumetric effects, that is)
@@quintussilenius4324 interesting! I’ll have to look this up
3:58 accidentally creating new Netflix logo. 😁
I'm curious if you would be able to handle the separation of the individual leaves by using a geometry shader and EndPrimitive()?
That would work! Someone also suggested using an index buffer and a Primitive Restart, to start a new strip every N vertices.
@@Vercidium That sounds like a good approach as well. I would imagine though that using the tessellation/geometry hardware would work better and make the vert shader less "branchy" (no pun intended).
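For completeness, the geometry-shader version might look roughly like this (the per-leaf vertex count and offsets are placeholders): each invocation emits one leaf strip and calls EndPrimitive(), so no stitching triangles are needed.

```cpp
const char* kLeafGeometrySnippet = R"glsl(
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 12) out;

void main()
{
    for (int i = 0; i < 12; i++)
    {
        gl_Position = gl_in[0].gl_Position
                    + vec4(float(i / 2) * 0.1, float(i % 2) * 0.05, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive(); // this leaf's strip ends here; the next leaf is a separate strip
}
)glsl";
```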
🙌
all these lines of code just for a player to completely ignore it.
🌿🌿🌿
That plant may be optimized, but artistically it just looks like it's "alive" and about to eat me, not like a plant xD
I know very little about coding and 3d modeling, so all I have is just my assumptions.
As I understand creating an object with code rather than 3d modeling software is harder, but pays off in reducing memory used. How hard would it be to create an object fit for the current AAA games? Let's take Geralt from Witcher for example. Would it even be possible to recreate this level of quality with this method?
It would be possible to generate more complex models in a vertex shader, but it would take a lot of code and at some point it’ll be quicker to just load the model from a buffer.
The models that suit this approach are ones that are repetitive or follow patterns that can easily be recreated in a shader, like plants, terrain and particles
@@Vercidium Interesting. Is it possible to combine the two for an optimal performance?
This is absolutely amazing! I do wonder about those shadows at the end though, how are they done? I've tried implementing different forms of shadows in my own engine but they are always pretty pixelated, so how do you do your shadows? They look really good.
Shadows are the bane of my existence haha, for this video I just increased the shadow resolution. I've started my shadow implementation from scratch and I'm looking into better Cascaded Shadow Mapping techniques.
I don't fully understand the matrix maths behind shadows, I need to take a step back and figure out what's going on. MJP has a few great resources about shadows but they are super technical, I'm yet to wrap my head around most of it
What program did you use to animate the code snippets?
Hey I wrote my own in C# using SkiaSharp, with RichTexKit for syntax highlighting. I was originally using tweetlet.net/code for static images but it was tedious to export them
While reading the comments I noticed there were a lot of mentions of 'loading vertex data into the shader each frame'/'vertex bandwidth'.
Why is there no bandwidth? Isn't a buffer of 1-byte zeroes still being sent, meaning there would be reduced but not zero bandwidth, or am I thinking about this incorrectly?
Hey that’s right, there’s a tiny amount of bandwidth but for a buffer that small it’s likely it will fit in the cache before being sent to each shader.
I’m curious if GPUs optimise this further by detecting if the mesh data is used in the shader, and if not skips loading mesh data entirely
Wow.
Hello friend, hope you are having a great day! A genuine question: when you started programming your engine, did you use something like OpenGL, DirectX or Vulkan, or did you program this type of resource too? I want to start making an engine, but I don't know if I need (or can) make the base of the graphics using only language resources. By the way, I wish to use C++. Thank you for your videos, they are genuinely interesting and very inspiring!
I started with OpenGL and still using it for these demos. It will work great with C++ or C#. Thank you!
@@Vercidium Thank you for the answer!
Very professional video. What software do you use for your videos?
Thank you, it’s all done in code with C#, SkiaSharp and OpenGL
Won't the renderer still try to render the "invisible" triangle, and waste some time on rendering nothing?
Exactly
rdr2 does this on another level
This is really cool, thanks for sharing! I wonder what is the advantage of using a method like this, over the normal modeling. Can anybody explain please?
i remember we had to use the POSITION semantic in unity shaders (written mostly in Cg) in order to get position data from the mesh in the vertex shader. something like this:
float4 position : POSITION;
now i wonder if this will produce an error or not when there's no position data in the mesh.
When he proposes to use vertex id to generate its position, does that mean that the draw call occurs without any array/element buffer?
Obviously it isn't like plants don't move even a little while at rest, however I'm curious, would this implementation be able to work with animating simulated wind or player interaction?
That’s my next goal for this renderer :) first dynamic wind and then player interaction
@@Vercidium MAN YouTube has been bad about serving me my notifications, I only found that you responded by checking the video again randomly haha, thanks much for answering my question! I can't wait to see progress done on this :D
hey.. Hey wait. HEY WAIT! THATS CHEATING!
How does this compare against regular 3d models performance-wise? Can't imagine calling so many trig functions is computationally cheap.
Honestly, the results could be stored in a simple array to be "plugged in", like the other values. They should only need to be "calculated" once, for the whole army of computed models. Each could also be offset, so clones have unique timings and starting offsets. (Or adaptive timings, like a rush of wind moving more, faster, based on whatever external factor would control that.)
Also, the formula could be extended to auto-create LOD too... on demand, per instance, based on multiple factors, unlike hand-made or nasty "auto-decimated" LOD maths. Going in reverse, you ADD LOD, which yields better results than the best auto-tessellation formulas can offer, faster. (Auto-tessellation has to evaluate and "think" about each surface's angles/normals and calculate interpolations, as opposed to a literally formulated point-product of a curved line and edges. There's also no need to "remap" the surfaces, which tessellation has to do once it breaks up the triangles into smaller triangles, and which has to be done per object, per surface, with auto-tessellation. That's why it is such a horribly taxing novelty and not a real-world "in production" component, except in rare cases where you have a top-of-the-line GPU with cycles to spare already. It's a total waste for distant objects, which are trying to downgrade to fit more in view, versus close-up objects trying to show more than is actually there, in a fake way.)
To recreate this as a regular animated model with bones (skeletal animation) it would need two matrix multiplications - one to rotate + position the leaf's bone, then another to convert it to screen space.
The trig functions aren't cheap, but we can use them to skip that first matrix multiplication (64 multiplications and 48 additions), which makes it run a bit quicker
@@Vercidium Would switching out trig functions for texture lookup incur too much memory bandwidth vs removing core load? The sine can be encoded in the same texture as the leaf.
@@michaelbuckers it would need to be benchmarked, my gut feel is a texture would be slower. A uniform array or UBO or SSBO could also be used to store the precalculated sin wave
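As a sketch of the UBO option (the block layout, sample count and function name are invented): upload one precalculated sine period once, then index into it instead of calling sin().

```cpp
const char* kSineTableSnippet = R"glsl(
layout(std140) uniform SineTable
{
    vec4 sineValues[64]; // 256 samples of one sine period, packed 4 per vec4
};

float TableSin(float phase) // phase in [0, 1)
{
    int i = int(fract(phase) * 256.0);
    return sineValues[i / 4][i % 4];
}
)glsl";
```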
That thing is breathing
This is cool and all, but wouldn't it be faster and easier to make a model like this in Blender? What benefit do you get by using code to make 3D objects? Wasn't 3D software made specifically so people didn't have to do that? If this is something made purely for novelty that's cool too, I'm just curious whether there are any actual benefits to this process
3D software doesn't care about real-time rendering performance, and even if it did, it wouldn't know enough about the use context to optimize properly.
It would be easier, but it wouldn't run as quickly because the mesh data must be loaded into the shaders each frame before they can begin executing. In this example the shaders can begin running instantly
*F E R N*
i did not understand the extra triangles trick, why do they disappear
nevermind i rewatched it, i understand now. we are not optimising the draw calls but just making those connecting vertices disappear
Don't these "if" and "else if" blocks in the vertex shader cause a lot of lane masking on the GPU?
It’s good practice not to use ifs, but I’ve never noticed a performance hit, especially when vertex shaders aren’t the bottleneck
The one time I noticed branching issues was on an AMD card, where I had about 6 of them in a nasty setup (some nested, some one after another)
I reworked the logic into a set of ternaries and it fixed the issues, but can’t say for sure as it was on another user’s GPU. I profiled it on my NVIDIA GPU and didn’t notice a performance difference either way
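The rework described above boils down to something like this (an illustrative before/after with made-up names, not the actual shader):

```cpp
const char* kBranchlessSnippet = R"glsl(
// Before (potentially divergent):
//   if (t < 0.5) offset = a; else if (t < 0.8) offset = b; else offset = c;
// After (selects, every lane runs the same instructions):
vec3 PickOffset(float t, vec3 a, vec3 b, vec3 c)
{
    return (t < 0.5) ? a : ((t < 0.8) ? b : c);
}
)glsl";
```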
What application are you using to write the script/code in and view the generated models?
I was using tweetlet.net/code to begin with, but it was tedious to update them and I couldn’t animate them.
I ended up writing my own in C# and rendering them in game, then screen recorded it. I feel like I can create a whole other devlog about how I created this devlog!
@@Vercidium Devlogception!
@@r2d2vader I’ve seen quite a few of them! Seems to be a popular thing
All nice and good until you need artist control. Yes, memory bandwidth is precious, but the moment you slapped that high res texture on top I started laughing because those precious few floats just got drowned in comparison - unless that texture is procedural too, in which case I would be impressed.
how performant is this compared to a more traditional way to render animated plants?
Aren't sin and cos trigonometry functions rather expensive on the GPU?
That’s true, but it means I can perform one less matrix4 multiplication (64 multiplications and 48 additions) per vertex, since it’s positioned and rotated using sin+cos instead
I will likely never use this information (I will be binging every video)
How Can We Use Your Game Engine?
is this done with only one draw call?
It is :) that's made possible using the glMultiDrawArraysIndirect function in OpenGL 4.3+
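For anyone curious, a minimal sketch of what that looks like (the command struct layout is fixed by the GL spec; the loader, primitive type and parameter names are assumptions):

```cpp
#include <glad/glad.h> // assumed loader, GL 4.3+

// One of these lives in the indirect buffer for each fern to be drawn.
struct DrawArraysIndirectCommand
{
    GLuint count;         // vertices in this fern
    GLuint instanceCount; // how many copies of it
    GLuint first;
    GLuint baseInstance;
};

void DrawAllFerns(GLuint vao, GLuint indirectBuffer, GLsizei fernCount)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer); // one command per fern
    glMultiDrawArraysIndirect(GL_TRIANGLE_STRIP, nullptr, fernCount, 0);
}
```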
Rockstar Games should be taking notes lmao