CORRECTION: I say scene graph instead of frame graph. A scene graph is when you subdivide space in your game with something like an octree or a sphere hierarchy. A frame graph is what I'm talking about here, not a scene graph. Sorry about the mix-up; there's no excuse, and I will be extra careful with my terms next time.
Looks very nice!
I looked into Vulkan once, but personally I don't see the benefit of using it over OpenGL.
It has a lot of potential for good performance, but if you don't implement things right, performance can actually be worse than OpenGL.
Besides that, you need something like 10 times the code for the same thing compared to OpenGL, which just leads to more errors. Learning Vulkan is really difficult but also teaches a lot; sadly, with a full-time job, spending that much time isn't viable. At least for me.
But great video and good luck! :D
Great stuff, lookin' impressive. Have you thought about adding support for mesh shaders, or perhaps utilizing ray tracing cores for lighting/reflection calculations?
Yes! I really want to use mesh shaders and everything that comes with them. They're definitely going to end up in the engine; mesh shader occlusion culling seems super powerful. I think I will have ray tracing too, since the RT cores can be used for things other than just lighting, but we will see.
@@EscPointDev I'm currently doing my diploma thesis on mesh shaders and they're pretty cool, although I made some poor design choices early on and kind of missed out on the extra performance boost. I do have to admit, though, that the more compute-like shader structure, while maybe trickier to get into, is much more powerful and natural for stuff like automatic LOD generation. Good luck with your engine!
@@CzaryTheGamer2PL Love to hear it, dude. You'll have to teach me about mesh shaders when you're done with your thesis lol. Yeah, totally, I find the more generic/powerful something is, the harder it is to get right. Thanks man, I will keep on grinding. Good luck on your thesis, my guy.
I'm still kinda new to graphics programming, but I'm pretty confused about what purpose the depth prepass serves in the rendering. From what I understand, in forward rendering it avoids fragment shader invocations for any pixel that would get hidden by another, but doesn't the geometry pass in deferred rendering already do that without having to double the number of draw calls?
Great question. I had to look things up because it is confusing. Depth testing tells the GPU how far away each fragment is, which solves the problem of drawing far-away parts of an object on top of near parts. It doesn't stop overdraw by itself, though: if my GPU is rendering a scene with a wall in the background and something in the foreground, it can draw the background wall completely first and then overdraw it with whatever is in the foreground.
The problem is that with forward shading, the lighting calculations for the background wall still get done, meaning that work gets thrown away for the covered parts of the wall. Deferred rendering still overdraws like before, but in the geometry pass we do the bare minimum we can get away with and store the results in different textures (the G-buffer) depending on the lighting calculations we are going to do. Then, once we have all the textures, we bind them to our lighting shader in the lighting pass. We do a cool trick where, in the lighting pass's vertex shader, we cover the entire screen with a quad, so all the pixels/fragments are ready to go. Finally, using those textures in the fragment shader, we do one lighting calculation per pixel on the screen. This avoids doing lighting calculations for things that would eventually be covered.
Does that make sense?
@alicethetransdalek7333 You are correct that the depth pre-pass prevents overdrawing, both for forward rendering and deferred rendering. You do it for deferred rendering to prevent your geometry pass from unnecessarily writing to the G-buffer, as one of the faults with deferred rendering is that it is quite memory-intensive and memory-access-intensive.
TL;DR: the depth pre-pass does remove unnecessary, expensive pixel shader invocations. It's just that the reason those invocations are expensive differs between forward and deferred rendering: one is expensive due to lighting calculations (forward), the other due to writing lots of memory (deferred). The deferred geometry pass is still much less expensive than the forward one, though, so that is usually not the only reason to include a depth pre-pass with deferred.
Also, @EscPointDev is wrong that the forward rendering model will always do the lighting calculations. It will not: the depth buffer will prevent pixel shader invocations for any fragment that is further away than what was recorded in the depth buffer.
Also, I just want to say this: deferred rendering is not the be-all and end-all of rendering models. Classic deferred and classic forward are rarely, if ever, used today, and it's hotly debated whether the more modern clustered forward or clustered deferred is more efficient (the difference is pretty minimal, tbh).
That's pretty interesting, but if you want realism, shining a flashlight on a single spot really does wash out the colors of that spot. It's kind of weird given that the black behind the columns is definitely not realistic. The only question I have is completely unimportant: why use JSON?
I'm glad you found it interesting. Yeah, the lighting calculations certainly need improving. I will keep learning and trying to solve the lighting issues, so hopefully it will look better over time. You can really use any file format to build up your frame graph; it's just that with JSON there are a lot of C++ parsers that have been around for a long time, and it's generally one of the easiest formats to parse. XML or YAML would've worked just as well.
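For context, a frame-graph description file tends to list the passes and the resources they read and write, so the engine can derive execution order and barriers. A made-up JSON example (field names are hypothetical, not from any particular engine):

```json
{
  "passes": [
    {
      "name": "gbuffer_pass",
      "outputs": ["gbuffer_albedo", "gbuffer_normals", "depth"]
    },
    {
      "name": "lighting_pass",
      "inputs": ["gbuffer_albedo", "gbuffer_normals", "depth"],
      "outputs": ["hdr_color"]
    }
  ]
}
```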
@@EscPointDev I personally like TOML if I have to use a public format, but if I can use whatever I want I've got a basic format that is structured more like C's structs but without unnecessary quotes.
@@anon_y_mousse Just looked up TOML as I've never heard of it before and it looks really slick. It's very tempting to use actually.
Hi, can I sign up somewhere so that in the future I can be a beta tester?
Yes, if I have something like a beta test I will likely announce it on my Discord server. If you can't join that for some reason, I also have a Twitter where I will announce it. Both links are at the bottom of the video description.
@EscPointDev thanks
Is the code up on github?
It is, but it's not public. The book I mentioned in the video, Mastering Graphics Programming with Vulkan, has a public GitHub that uses the same techniques I used to create the scene graph.
@@EscPointDev Got it, thanks!
Do you plan to make it public?
@ No problem :) I'm not sure, honestly. My plan is to make the engine very specialised for the game I want to make, so I'm not sure how useful it would be to allow others to work on it. No plans yet, but I'm not going to rule it out.
@@EscPointDev Ah good point.
I was more looking at it from the angle of "what can I learn" but yea, makes sense.
@@dexterman6361 Yeah no problem man, I would be doing the same thing. Just so you know, I got scene graph and frame graph mixed up. I made a frame graph not a scene graph, I thought they were the same thing.