Cool! I wish you showed more of that world curvature formula! Edit: Nevermind, I figured it out super fast, everyone: just have a uniform for your camera position in your vertex shader, and move gl_Position.y down based on distance to the camera, but on an exponential basis for a round-y shape. Hell yeah!
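For anyone who wants to poke at the math before touching a shader, here is a minimal CPU-side sketch of that idea in Python. The function name and the falloff constants are made up; in a real engine this would run in the vertex shader on gl_Position after the view transform.

```python
def curve_world_y(pos, cam_pos, strength=0.0005, power=2.0):
    # Drop each vertex's y by a power of its horizontal distance to the
    # camera: near geometry stays put, far geometry bends downward.
    dx = pos[0] - cam_pos[0]
    dz = pos[2] - cam_pos[2]
    dist = (dx * dx + dz * dz) ** 0.5
    return (pos[0], pos[1] - strength * dist ** power, pos[2])
```

With power 2 this gives the classic parabolic "tiny planet" curvature; the strength constant is purely to taste.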
Very nice! I definitely have to try out that curvature shader later. I'm really interested in the refraction in the water, along with god rays and lens flares. The refraction will go well with your water caustics video hah
The one thing I wished this covered: binary space partitioning. I've had a lot of people tell me it's outdated, but I'm working on an engine that allows for open world spaces but can also transition into using BSP trees to optimise indoor spaces. My next project is making the BSP trees physics objects and editable in real time, because how cool would it be to make a giant base in game, then have it remain as optimised as it was outdoors, and on top of that, it changing in real time would make it fully destructible (ok, with A LOT of work). But I think BSP still has massive potential in the modern day, and it's just ignored
@@lowlevelgamedev9330 quadtrees are better outside buildings, bsp better inside them. Bsp is cheap but a pain to code as you not only need to sort through the tree, but generate it somehow
Thanks for this, didn't know about order independent transparency 👍. Btw Path of Exile Con Dev talk youtube videos has a lot of new techniques for rendering, including using motion vectors + flow maps to have better sprite based particle animations (like explosions/smoke), global illumination cascade techniques and many others 👍
*All OpenGL Effects: A Comprehensive Overview of Graphical Techniques* * *1:10** Wave Simulations:* Simulating waves by shifting the positions of triangles in a water plane. An easier method using DUDV textures is discussed later. * *1:42** World Curvature:* Applying a simple formula after the view matrix and before the projection matrix to create a world curvature effect. * *1:53** Skeletal Animations:* Briefly explains skeletal animation using bones, vertices, keyframes, and interpolation. Suggests further learning resources. * *2:23** Decals:* Discusses decals as details applied on top of geometry and mentions optimizations used in game engines like Doom Eternal. * *2:43** Volumetric Rendering (Clouds):* Explains volumetric rendering for clouds using a 3D matrix and ray casting. * *3:05** Geometry Culling (Frustum Culling):* Introduces frustum culling as an optimization technique to discard non-visible triangles, using bounding boxes and algorithms like octrees. * *3:53** Level of Detail (LOD):* Using multiple versions of a model with different triangle counts depending on distance for optimization, mentioning Unreal Engine's Nanite system. * *4:16** Tessellation Shaders:* Increasing a model's triangle count using tessellation shaders, beneficial for procedurally generated geometry and displacement mapping. * *4:39** Geometry Shaders:* Briefly explains geometry shaders and their capabilities, but advises against using them due to performance concerns. * *5:18** Geometry Buffer:* Using a massive buffer to store all geometry and compute shaders to calculate what needs to be drawn, as seen in Doom Eternal. * *6:46** Normal Mapping:* Adding normal information to a texture to create detailed surfaces even with flat geometry. * *7:13** Light Maps:* Using textures to specify specular components of materials. * *7:25** Lens Flare:* Implementing lens flare using 2D textures and mentions advanced techniques for more realistic simulations. 
* *7:51** Sky Box (Atmospheric Scattering):* Generating dynamic skyboxes using atmospheric refraction formulas. * *8:02** Fog:* Blending the rendered scene with the skybox based on fragment distance to create fog. * *8:11** Chromatic Aberration:* Shifting color channels slightly to simulate chromatic aberration. * *8:30** Physically Based Rendering (PBR):* Briefly introduces PBR and the rendering equation. * *8:58** Image-Based Lighting (IBL):* Using image-based lighting with the skybox for ambient light, noting its limitations in indoor scenes. * *9:22** Multiple Scattering Microfacet Model for IBL:* An easy addition to a PBR pipeline for better ambient light calculation, accounting for secondary light bounces. * *9:47** Global Illumination:* Discusses the importance of global illumination and its challenges in rasterization engines. * *10:12** Spherical Harmonics:* Explains spherical harmonics as a method to approximate global light information. * *10:36** Light Probes:* Using light probes with spherical harmonics or IBL to capture light information in different parts of the scene. * *10:52** Screen Space Global Illumination (SSGI):* Introduces screen space global illumination as an approximation technique. * *11:07** Ray Tracing:* Briefly mentions ray tracing for achieving high-quality global illumination. * *11:28** Subsurface Scattering:* Explains subsurface scattering, using examples of wax and skin rendering. * *11:51** Volumetric Rendering (God Rays):* Rendering god rays by simulating light hitting dust particles, mentioning easier screen space techniques. * *12:06** Parallax Mapping:* Creating a 3D effect by sampling textures from slightly different coordinates, used in God of War's dynamic snow. * *12:32** Reflections:* Using IBL, omni-directional maps, and screen space reflections for different levels of reflection quality and complexity. [From Comments] Mentions bending normals for better specular occlusion. 
* *13:15** Refraction:* Simulating light bending as it passes through different mediums, with examples of single objects, water, and DUDV textures. * *13:50** Diffraction:* Briefly mentions diffraction and its application in rendering shiny surfaces like CDs. * *14:06** Screen Space Ambient Occlusion (SSAO):* Explains SSAO as an approximation of global illumination, along with variations like HBAO and SSDO. [From Comments] Mentions GTAO as an alternative. * *15:12** Bloom:* Simulating bloom by isolating bright light spots, blurring, and adding them on top of the original image. * *15:50** High Dynamic Range (HDR):* Encoding a wider range of light intensities using a function that compresses the values. * *16:50** HDR with Auto Exposure:* Adjusting exposure based on average luminosity for a more realistic and cinematic look. * *17:07** ACES Tonemapping HDR:* Recommends using the ACES tonemapping function for better HDR results. [From Comments] Suggests AgX Tonemapping as a better alternative. * *17:29** Depth of Field (Bokeh):* Simulating the out-of-focus effect, mentioning Bokeh for creating specific light effects. * *17:49** Color Grading:* Using lookup tables to adjust the final colors of a scene. * *18:33** Shadows (Basic, PCF, Optimizations):* Covers basic shadow mapping, percentage-closer filtering (PCF) for soft shadows, and various optimizations. [From Comments] Mentions exponent shadow mapping (ESM) and EVSM for better shadow quality. * *20:11** Variance Shadow Mapping (VSM):* Using statistical methods for soft shadows, mentioning light bleeding as a potential artifact. * *20:29** Rectilinear Texture Wrapping for Adaptive Shadow Mapping:* Morphing the shadow texture to improve precision in relevant areas. * *20:53** Cascaded Shadow Mapping / Parallel Split Shadow Maps:* Splitting the view into multiple regions with separate shadow maps for better shadow quality in large scenes. 
* *21:34** Transparency:* Discusses different techniques for transparency, including alpha discarding, sorting, and order-independent transparency methods like depth peeling and weighted blending. * *22:26** Order Independent Transparency:* Mentions various complex techniques for order-independent transparency. * *23:33** Rendering Many Textures (Mega Texture & Bindless Textures):* Discusses challenges in rendering many textures and solutions like mega textures and bindless textures. * *24:31** Anti-Aliasing (SSAA, MSAA, FXAA, TAA):* Covers various anti-aliasing techniques, including super-sampling, multi-sampling, fast approximate anti-aliasing, and temporal anti-aliasing. [From Comments] Mentions that MSAA can be used with deferred rendering, contrary to popular belief. * *26:00** DLSS:* Introduces Deep Learning Super Sampling (DLSS) as an AI-powered upscaling and optimization technique. * *26:35** Adaptive Resolution:* Dynamically adjusting resolution for performance optimization. * *27:05** Lens Dirt:* Applying a texture on top of the screen to simulate lens dirt. * *27:27** Motion Blur:* Using a velocity buffer to create motion blur. * *27:41** Post-Process Warp:* Using shaders to create post-processing effects like screen warping. * *28:08** Deferred Rendering:* Explains the concept of deferred rendering and its advantages and disadvantages. [From Comments] Mentions that motion vectors are relatively easy to implement. * *29:29** Tiled/Clustered Deferred Shading:* Introduces variations of deferred rendering for handling many light sources. * *29:42** Z Pre-Pass:* Optimizing forward rendering by using a depth pre-pass to avoid unnecessary fragment calculations. * *30:01** Forward+ (Clustered Forward Shading):* Combining forward rendering with clustered light grouping for optimization. I used gemini-1.5-pro-exp-0801 on rocketrecap.com to summarize the transcript. 
Cost (if I didn't use the free tier): $0.12. Input tokens: 27480. Output tokens: 1815. The summary above incorporates clarifying and supplementary information from the comments. It provides a valuable resource for anyone looking to learn about various graphical techniques and effects used in OpenGL and other graphics APIs. Remember to check out the linked resources mentioned in the video and comments for more in-depth explanations and tutorials.
I hope one day your game engine will evolve into something. i hope i'm gonna use your game engine to make my mobile games, even though i don't know how to code : )
Yep, when looking through a thick atmospheric/volumetric effect, like smoke, it should appear larger due to scattering. But bloom is also often confused with flares, which create a similar effect/halos/glowing due to how light bends inside the camera lens (or lenses); our eyes have that too. With astigmatism (I have one), it's even more present.
5:56 lol, right after you mention rotations being tricky, it's a game that has a rotation bug. If you spin the camera 1000s of times in one direction in Witcher 3, it starts to break.
It's pretty easy to do. You just have a baseline (target) exposure, create a mip-pyramid of your render, which allows you to quickly calculate the average color of the scene, grab its luminance value (convert RGB to HSB or YCbCr or whatever and use the "luminance" or "brightness" parameter) and check if it's below or above your target (which differs with each tonemapping algorithm, so just choose one by trial and error), and converge, basically minimize the difference, taking small steps towards your target. To do this, you'd update your exposure each frame roughly like this: float exposure = mix(exposure, targetLuminance / averageLuminance, EXPOSURE_SPEED);
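The convergence step described above can be sketched on the CPU like this. The Rec. 709 luma weights are standard; the target and speed constants are made-up placeholders you would tune per tonemapper.

```python
def luminance(rgb):
    # Rec. 709 luma weights.
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def adapt_exposure(current, avg_scene_rgb, target=0.5, speed=0.05):
    # Step the exposure a small fraction of the way toward the value
    # that would bring the scene's average luminance to the target.
    avg = luminance(avg_scene_rgb)
    desired = target / max(avg, 1e-4)
    return current + (desired - current) * speed  # i.e. GLSL mix()
```

Run once per frame with the 1x1 mip of the HDR buffer as `avg_scene_rgb`; the small `speed` is what gives the gradual eye-adaptation feel.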
Generating motion vectors doesn't seem fun? It's one of the easiest advanced things to do 😊. Instructions: calculate gl_Position, just with the data of the previous frame; pass that, and gl_Position into the fragment shader. Divide both vec4s there by their w component. Subtract them. Done 🤩
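Those instructions boil down to very little math; here is the same divide-and-subtract done in Python on clip-space tuples (a sketch of the fragment-shader arithmetic, not shader code):

```python
def motion_vector(clip_now, clip_prev):
    # Perspective divide both clip-space positions by w, then subtract:
    # the difference of the NDC positions is the per-pixel motion vector.
    ndc_now = [c / clip_now[3] for c in clip_now[:3]]
    ndc_prev = [c / clip_prev[3] for c in clip_prev[:3]]
    return tuple(n - p for n, p in zip(ndc_now, ndc_prev))
```

In practice you'd keep last frame's model-view-projection matrix around, transform the vertex with both, and pass both vec4s to the fragment stage unmodified (interpolating before the divide is what keeps it correct).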
Also, just downsampling SSAO to get rid of noise is incorrect and causes edge bleeding. Use bilateral filtering instead (only blend pixels that have similar depth/normal values).
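A toy 1D version of that depth-aware blend, to show why it stops the bleeding (the radius and tolerance values are arbitrary placeholders; a real pass would also weight by normal similarity and spatial distance):

```python
def bilateral_blur_1d(ao, depth, radius=2, depth_tolerance=0.1):
    # Average only those neighbours whose depth is close to the centre
    # pixel's, so AO never leaks across a geometric discontinuity.
    out = []
    for i in range(len(ao)):
        total, weight = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(ao), i + radius + 1)):
            if abs(depth[j] - depth[i]) <= depth_tolerance:
                total += ao[j]
                weight += 1.0
        out.append(total / weight)
    return out
```

A plain box blur over the same data would smear AO across the depth step; here the two surfaces stay cleanly separated.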
not exactly, more modern graphics APIs can get you more optimization opportunities. Also OpenGL is not that outdated; on the web, for example, WebGL is the only option
@65 ooh, that offends me. That you cannot use MSAA with deferred rendering isn't a fact, it's a mere myth. Of course you can; it's just that light calculations may get a little expensive, as they have to be done on multisampled targets too. Source: my game engine does it 😄, and yes, it was a little work to get everything working with multisampled buffers, but besides that it's easy.
OpenGL. Vulkan is just too difficult to learn as a beginner and takes a ton of time, so I think it is better to learn OpenGL to get started and do some advanced stuff with it 💪
It is said that OpenGL is not good to start graphics programming with; DirectX is suggested instead, but that's only Windows/Xbox compatible. Vulkan is the alternative, but it's supposedly very hard. I wanted to ask your opinion on that? Awesome video by the way.
hm, OpenGL is the easiest API to learn. If you learn DirectX you won't need to learn OpenGL anymore, so I would say to start with OpenGL. They probably suggest DirectX because it is more modern, but at that point you have to learn the new DirectX API, and that is clearly harder than OpenGL
Hey, I want to make a game engine, but I don't know where to start or how to do it. I haven't made any desktop app using a high-level programming language. I'm a Unity game developer and I know the C++ and C# programming languages and some concepts like OOP. I want to make a basic engine just for showcase and for getting a high-paid job. I know how to make games, but when it comes to getting a high-paid job, a game engine can be a good portfolio. Please suggest something: where I should start, which things I should learn, and anything else that can be useful
I have some videos that will help you. They will actually point to other videos and resources 😂 because there are a lot of things to be learned, but don't worry, you can do it. The most difficult thing is learning opengl for the graphics part, but after that it is not that difficult. Also, before making an engine, try to make a very simple game using my framework to understand how things work. Good luck 💪 ruclips.net/video/tK7yugR3qDU/видео.htmlsi=UCQ2hRY3_ervV3Wc ruclips.net/video/A735Y4kMIPM/видео.htmlsi=XbsAiE_taIujKZxm ruclips.net/video/21BNxCLTGWY/видео.htmlsi=C-lka3RCRq7BNqHp ruclips.net/video/zJoXMfCI9LM/видео.htmlsi=tz34Fm_aAdmPUVit ruclips.net/video/HPBXr6Zdm4w/видео.htmlsi=mscuUWxwbXSf8jPc
hey that's awesome. i might be wrong, but aren't these effects made by you ON OpenGL? Like, they are not strictly correlated with the library. A framebuffer object, for example, IS an OpenGL feature, but what you do with it is your business. It's like saying that a particular shader is an OpenGL effect, when instead the way you write the shader is completely arbitrary. Does what i'm saying make sense?
so if you render 2 objects and one is in front of the other, if you draw the farthest one first, the gpu can't possibly know that it will be occluded in the future, so it has to color that object. With a z pre-pass you guarantee that you run the fragment shader only once per pixel
@@lowlevelgamedev9330 Well yes, but the OpenGL Wiki says "Early Fragment Test is a feature supported by many GPUs that allow the certain Per-Sample Processing tests that discard fragments to proceed before fragment processing", it also says "The depth test can take place before the Fragment Shader executes." so my question is what's the point of implementing it yourself if a GPU is already doing it
Early Fragment Test is not doing exactly what you think it's doing. When you render some triangles to the screen, the GPU is going to do an early depth test against the current values in the depth buffer, and initially it's going to be empty. So we draw the triangles (full shading calculation and all) and write to the depth buffer. Now we draw a new mesh that just happens to be in front of the old mesh; we do the early depth test and see that these triangles should be drawn, so we do the full lighting calculation and update the depth buffer. In this case the early depth test didn't do anything for us.

But! If we rendered the meshes in the opposite order, the second mesh would be behind the closer mesh, and the early depth test could save us from calculating lighting on the non-visible triangles. This leads to a simple optimization that most game engines do: try to render the closest meshes first in the hope that they will occlude later meshes, so we can take advantage of the early depth test. The logical extreme of this is to just render out all of the depth information beforehand; then, when rendering, we get perfect early depth testing. (This is the z prepass.)

So why is it faster to render the scene twice? Because most often it turns out that running two times the number of vertex shaders is much faster than running the lighting calculation more than once per pixel (assuming you have a more sophisticated lighting setup). To make the depth prepass even faster, it is also common to separate all of the position data into its own buffer, so that during the prepass you get higher cache utilization: more vertex data fits in cache when the vertex data is just positions.

Having the z prepass data can also be useful for doing more parallel work on the GPU. For example, you could start building a hierarchical Z buffer for use in occlusion culling at the same time as you are doing the shading work.
Or you could start doing SSAO, or any number of other things the depth buffer could be useful for. Hopefully I was able to clear up some of the confusion and not make it even more confusing :)
Cool video; sadly I know all of this already, although it's a nice reminder that some of these things exist.
So, I've decided to give you some ideas:
1 - VSM light leaking and overall quality can be improved. There is an alternative to VSM, called Exponential Shadow Mapping, which uses different moments and calculations. The combination of the two, EVSM, has been a standard for many years. If we generate a mip-pyramid, we can also use it for soft shadows.
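For reference, the two estimators mentioned above can be sketched in a few lines of Python (depths assumed in [0,1]; the ESM sharpness constant `c` is an arbitrary tunable I picked). EVSM is essentially the Chebyshev bound below applied to exponentially warped depths:

```python
import math

def vsm_visibility(m1, m2, receiver_depth, min_variance=1e-4):
    # VSM stores two moments (mean depth and mean squared depth) and
    # uses Chebyshev's inequality as an upper bound on occlusion:
    # P(d >= receiver) <= variance / (variance + (receiver - mean)^2)
    if receiver_depth <= m1:
        return 1.0
    variance = max(m2 - m1 * m1, min_variance)
    d = receiver_depth - m1
    return variance / (variance + d * d)

def esm_visibility(occluder_depth, receiver_depth, c=40.0):
    # ESM stores exp(c * depth); filtering that and multiplying by
    # exp(-c * receiver) gives a soft visibility falloff.
    return min(1.0, math.exp(c * (occluder_depth - receiver_depth)))
```

The Chebyshev bound is what causes VSM's light leaking when variance is high; the exponential warp is what EVSM borrows to suppress it.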
2 - In CSM, some games have, say, 5 cascades, but don't render them all in real time. The last cascade is precalculated or updated very slowly. Others are more dynamic, but still update at only 5-10 fps to save frame time, with the work spread across the frames in between.
3 - Water: there are subsurface scattering approximations that make it look green in certain areas.
4 - Alexandr Sannikov basically solved GI, both diffuse and specular, with his research, which is implemented in PoE 2 (check their latest livestream). Best use case is screenspace, topdown games.
5 - Bent normals. Alexandr Sannikov also mentioned it, but the technique is 2 decades old. Basically you just bend your normal vector in the cone direction derived from AO, improving local shadows (if done in texture space). Can also be precalculated in model space and baked into vertex buffers. Good not only for diffuse occlusion, but for specular occlusion too (which is not seen very often in games).
6 - Trees (in Horizon Zero Dawn I think - might be wrong) use a similar approach, but use proxy geometry to bend normals in the direction of the tree volume, giving more realistic lighting.
7 - SSDO is cool, but there is also GTAO (HBAO with multibounce and color).
8 - Virtual textures are cool for close-up details.
9 - Animations with textures instead of skeletal. Basically bake animations in the texture and sample in a vertex shader. Cool for small things like book page flipping and birds flying.
10 - Impostors with precomputed lighting. Cool for clouds, distant trees. Basic idea is that you bake your object from many directions into an atlas texture, then sample 2-3 times from the closest corresponding views and grab the one with the closest depth. Usually done with octahedral mapping (which is also used for normal texture compression, 2D-to-3D spherical projection if making planets, etc.)
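The octahedral mapping mentioned there is just a unit-vector-to-2D fold and back; here is a minimal sketch in Python (for an impostor atlas you would then snap the 2D coordinate to the nearest baked cells and blend them):

```python
def _sign(v):
    return 1.0 if v >= 0.0 else -1.0

def oct_encode(x, y, z):
    # Project the unit vector onto the octahedron (L1 normalize),
    # then fold the lower hemisphere over; result is in [-1, 1]^2.
    s = abs(x) + abs(y) + abs(z)
    u, v = x / s, y / s
    if z < 0.0:
        u, v = (1.0 - abs(v)) * _sign(u), (1.0 - abs(u)) * _sign(v)
    return u, v

def oct_decode(u, v):
    # Inverse: rebuild z from the L1 norm, unfold, then renormalize.
    z = 1.0 - abs(u) - abs(v)
    if z < 0.0:
        u, v = (1.0 - abs(v)) * _sign(u), (1.0 - abs(u)) * _sign(v)
    n = (u * u + v * v + z * z) ** 0.5
    return u / n, v / n, z / n
```

The same pair of functions covers normal compression and spherical projection for planets; only the texel snapping differs per use case.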
11 - Outlines are a cool technique too. The best approaches are screenspace ones, which combine normal and depth, each processed separately with a Sobel filter (edge filter), usually 3x3 but scaled up (it will skip pixels, but that's okay) to the desired radius. The radius is scaled by distance to the camera. There are other techniques: stencil based, mesh based, wireframe based, SDF based (voxel SDF).
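A tiny sketch of that Sobel edge detector run over a depth buffer (plain Python on a nested list; a real implementation samples the depth/normal textures in a fragment shader and thresholds the result):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edge(depth, x, y):
    # 3x3 Sobel gradient magnitude at (x, y); an outline pixel is one
    # where this crosses some (distance-scaled) threshold.
    gx = gy = 0.0
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            d = depth[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * d
            gy += SOBEL_Y[dy + 1][dx + 1] * d
    return (gx * gx + gy * gy) ** 0.5
```

Running the same filter over normals catches creases that depth alone misses, which is why the two are combined.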
12 - SDF, of course. The one and only. Cool for UI rendering, text rendering, decal rendering. Vector graphics, basically, without vectors. MSDF can fix the lack of high frequencies.
13 - Cheap global AO can be achieved by rendering the whole scene, like with shadow mapping, top down, looking down (a sky visibility term). EVSM should be good here. Use it with SSAO/texture-space AO, choosing the min() of the two. In combination with probes (multiscattering approximation), it can be quite good (Ghost of Tsushima).
14 - Triplanar shading. In general a good way to apply textures without UVs. In practice, can be used to project snow/dust/dirt/sand/moss layer to all geometry, masked by some topdown designer/procedurally authored mask.
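The core of triplanar shading is just deriving blend weights from the surface normal; a minimal sketch (the sharpness exponent is an arbitrary tunable):

```python
def triplanar_weights(nx, ny, nz, sharpness=4.0):
    # Weight each projection axis by the absolute normal component;
    # raising to a power tightens the blend zone at 45-degree angles.
    wx = abs(nx) ** sharpness
    wy = abs(ny) ** sharpness
    wz = abs(nz) ** sharpness
    total = wx + wy + wz
    return wx / total, wy / total, wz / total
```

The final color is then wx * tex(p.zy) + wy * tex(p.xz) + wz * tex(p.xy), sampling the same texture along each world axis; for the snow/moss use case you additionally multiply by the top-down mask.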
15 - Heightblending. Blending textures using height, basically. Good for splatmaps, decals.
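A common form of that height-based blend, sketched in scalar Python (the `depth` transition-band constant is a tunable; a real shader does this per channel with heights read from the heightmaps):

```python
def height_blend(c1, h1, c2, h2, a, depth=0.2):
    # The winner is whichever texture's height (offset by the splat
    # mask 'a') sticks out; 'depth' sets the transition band's size,
    # so blending follows cracks and bumps instead of a soft gradient.
    ma = max(h1 + (1.0 - a), h2 + a) - depth
    w1 = max(h1 + (1.0 - a) - ma, 0.0)
    w2 = max(h2 + a - ma, 0.0)
    return (c1 * w1 + c2 * w2) / (w1 + w2)
```

Compared to plain lerping by the splat mask, this makes sand settle into the grooves of cobblestones rather than fading uniformly over them.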
16 - Roughness can be approximated as variance (like in VSM) from a high-resolution normal map. Can be precalculated before runtime. Can be good for realistic water rendering since this approach gives you anisotropic roughness (anisotropy can be its own separate detail, but I'm feeling lazy).
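One Toksvig-style way to do that (a simplified isotropic sketch; the mapping from normal shortening to variance is the standard one, but folding it into roughness this way is my assumption about the intent):

```python
def roughness_from_normal_variance(avg_normal_len, base_roughness):
    # Mip-averaging unit normals shortens the result; the shortening
    # measures angular variance, which we add to the base roughness
    # so minified bumpy surfaces stay appropriately rough.
    variance = (1.0 - avg_normal_len) / max(avg_normal_len, 1e-4)
    return min(1.0, (base_roughness ** 2 + variance) ** 0.5)
```

Storing the averaged normal length per mip level at bake time is what makes this free at runtime.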
17 - Cloth rendering. There are several physically based models for that too. Shadertoy has examples.
18 - Histogram-preserving blending. Better than alpha blending, good for terrain, when trying to mix textures with different levels of resolution. Basic idea is that you grab the low frequencies (sampled from the max LOD texture) from the highest resolution and subtract them from the lower-resolution texture before adding the two together. Can be repeated any number of times, but usually 3. World of Tanks uses it iirc.
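Side note: the variance-preserving operator at the heart of histogram-preserving blending (Heitz and Neyret) can be sketched in simplified scalar form like this (the full method also transforms texture values into a Gaussian space first; `texture_mean` is assumed known from the texture):

```python
def variance_preserving_blend(colors, weights, texture_mean=0.5):
    # Linear blending with weights summing to 1 shrinks contrast;
    # dividing the deviation from the mean by sqrt(sum of squared
    # weights) restores the original variance.
    mixed = sum(w * c for w, c in zip(weights, colors))
    norm = sum(w * w for w in weights) ** 0.5
    return (mixed - texture_mean) / norm + texture_mean
```

This is what stops tiled-texture blends from looking washed out where several samples overlap.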
19 - Local tonemapping is a thing. Ghost of Tsushima.
20 - Instancing for large amounts of grass. Ghost of Tsushima.
21 - Motion vectors can be used for better flip-book animations (PoE 2, many other games).
22 - Dither can be useful for transparency (Witcher 3) and quantization (Rendering of INSIDE). Noise, created by dithering, can be fixed with bilateral blur.
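The transparency trick above usually means alpha testing against an ordered-dither threshold; a minimal sketch with a 4x4 Bayer matrix:

```python
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dithered_discard(alpha, x, y):
    # Keep the fragment only if alpha beats this pixel's threshold;
    # at alpha 0.5 exactly half the pixels in each 4x4 tile survive,
    # so opaque rendering fakes transparency (fixable with blur/TAA).
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold
```

Because every surviving fragment is fully opaque, depth writes and deferred shading keep working, which is the whole appeal over real alpha blending.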
23 - A directional light is effectively a point, not a disc, so it's incorrect to use it for sunlight. It also fights with IBL; some games remove the sun from their IBL maps manually. There are games that fix the point issue and give you the ability to control the radius. Don't remember the sources, but it can be done with analytical area lights.
24 - Specular highlights usually alias due to how they are calculated; there are some approximations to fix them without AA. Final Fantasy 14.
That's it. Could have written more, but this, I think, is plenty enough. Enjoy :)
Damn you already gave me the content for part 2 🤣😂 thanks a lot I can't wait to take a look into all of them
@@lowlevelgamedev9330 There is going to be part 2!? ))))
@Nerthexx Can you show us your work then? :)
The "color grading" part made me laugh out loud. It's so genius yet so simple. After fine-tuning your colors in a separate app, you just say to your shader: "Here's a lookup table, just follow *that* and you're done!" No complicated color correction equations at runtime.
Great video! I went through and got all the timestamps; you can copy/paste them into the description :)
*GEOMETRY*:
1:10 Waves Simulations
1:42 World Curvature
1:53 Skeletal Animations
2:23 Decals
2:43 Volumetric Rendering I (Clouds)
3:05 Geometry Culling (Frustum Culling)
3:53 Level of Detail (LOD)
4:16 Tessellation Shaders
4:34 Displacement Mapping
4:39 Geometry Shaders
5:18 Geometry Buffer
5:45 Quaternions
5:56 Realistic Clothes/Hair
6:18 Wind Simulations
*LIGHTING*:
6:46 Normal Mapping
7:13 Light Maps
7:25 Lens Flare
7:51 Sky Box (Atmospheric Scattering)
8:02 Fog
8:11 Chromatic Aberration
8:30 Physically Based Rendering (PBR)
8:58 Image Based Lighting (IBL)
9:22 Multiple Scattering Microfacet Model for IBL
9:47 Global Illumination
10:12 Spherical Harmonics
10:36 Light Probes
10:52 Screen Space Global Illumination (SSGI)
11:07 Ray Tracing
11:28 Subsurface Scattering
11:44 Skin Rendering
11:51 Volumetric Rendering II (God Rays)
12:06 Parallax Mapping
12:32 Reflections
12:55 Screen Space Reflections
13:15 Refraction
13:50 Diffraction
14:06 Screen Space Ambient Occlusion (SSAO)
14:28 Horizon Based Ambient Occlusion (HBAO)
14:36 Screen Space Directional Occlusion (SSDO)
15:12 Bloom
15:50 High Dynamic Range (HDR)
16:50 HDR With Auto Exposure (the one used for bloom)
17:07 ACES Tonemapping HDR
17:29 Depth of Field (Bokeh)
17:49 Color Grading
*SHADOWS*:
18:33 Shadows
18:46 Percentage Close Filtering (PCF)
19:10 Static Geometry Caching
19:28 PCF Optimizations
20:11 Variance Shadow Mapping (VSM)
20:29 Rectilinear Texture Wrapping for Adaptive Shadow Mapping
20:53 Cascaded Shadow Mapping / Parallel Split Shadow Maps
*SPECIAL EFFECTS*:
21:34 Transparency
22:26 Order Independent Transparency
22:42 Depth Peel
23:09 Weighted Blending
23:21 Fragment Level Sorting
23:33 Rendering Many Textures (Mega Texture & Bindless Textures)
24:31 Anti-Aliasing (SSAA, MSAA & TAA)
26:00 DLSS
26:35 Adaptive Resolution
27:05 Lens Dirt
27:27 Motion Blur
27:41 Post Process Warp
28:08 Deferred Rendering
29:29 Tiled Deferred Shading
29:29 Clustered Deferred Shading
29:42 Z Pre-Pass
30:01 Forward+ (Clustered Forward Shading)
Didn't expect for people to help me with this 😆thanks I will use it
@@lowlevelgamedev9330 you should also use a 0:00 timestamp, for chapters to work in video player
there is so much to learn…. ive only done the basics with opengl , and have dabbled in webgl. none of this is trivial . i will be back to this video frequently
1:06 Geometry
1:09 waves simulations
1:42 world curvature
1:52 skeletal animations
2:24 decals
2:43 volumetric rendering 1(clouds)
3:05 geometry culling
3:23 geometry culling (frustum culling)
3:45 Level of Detail (LOD)
4:15 tessellation shaders
4:31 displacement mapping
4:39 geometry shaders
5:14 geometry buffer
5:37 quaternions
5:56 realistic clothes/hair
6:18 wind simulations
6:40 Lighting
6:46 normal mapping
7:14 light maps
7:24 lens flare
7:51 sky box
7:55 sky box (atmospheric scattering)
8:02 fog
8:12 chromatic aberration
8:30 physically based rendering (PBR)
8:58 image based lighting (IBL)
9:22 multiple-scattering microfacet model for IBL
9:47 global illumination
10:12 spherical harmonics
10:35 light probes
10:51 screen space global illumination (SSGI)
11:06 ray tracing
11:25 subsurface scattering
11:44 skin rendering
11:50 volumetric rendering 2(god rays)
12:06 parallax mapping
12:33 reflections
12:54 screen space reflections
13:16 refraction
13:49 diffraction
14:07 ambient occlusion
14:17 screen space ambient occlusion (SSAO)
14:28 horizon based ambient occlusion (HBAO)
14:37 screen space directional occlusion (SSDO)
15:13 bloom
15:50 high dynamic range (HDR)
16:50 HDR with auto exposure
17:07 ACES tonemapping HDR
17:28 depth of field
17:42 depth of field (bokeh)
17:49 color grading
18:20 shadows
18:33 basic shadows
18:45 percentage closer filtering (PCF)
19:10 static geometry caching
19:28 PCF Optimizations
20:11 Variance Shadow Mapping (VSM)
20:29 rectilinear texture wrapping for adaptive shadow mapping
20:47 cascaded shadow mapping/parallel-split shadow maps
21:09 many more shadow effects
21:16 special effects
21:34 transparency
22:26 order independent transparency
22:41 depth peel
23:09 weighted blending
23:20 fragment level sorting
23:32 rendering many textures
23:43 rendering many textures (mega texture)
24:04 rendering many textures (bindless textures)
24:30 anti aliasing
24:38 super sample anti aliasing (SSAA)
24:55 multi sample anti aliasing (MSAA)
25:17 fast approximate anti aliasing (FXAA)
25:29 temporal anti aliasing (TAA)
26:00 deep learning super sampling (DLSS)
26:34 adaptive resolution
27:05 lens dirt
27:27 motion blur
27:42 post process warp
28:08 deferred rendering
29:29 tiled deferred shading
29:29 clustered deferred shading
29:46 z pre-pass
30:02 forward+ (clustered forward shading)
nice
hero
🦸
you have just blazed through everything that I was working on for the last 2 years. Seeing my "adventures" in front of me as the video carried on was amazing.
😂😂 glad you found it nice, also if you have tried all of that stuff you must be advanced 💪
A guy named Vlad, with the accent of a guy from India, explaining the cool tricks in OpenGL
Awesome, liked
*A guy from Romania
@@mhdta_dev I didn't say which country he's from; the name Vlad exists in plenty of Slavic countries
This is an amazing video, and makes me excited!
I'll for sure go through this once I actually have OpenGL set up. Still setting up git for my engine.
Cool! I wish you showed more of that world curvature formula!
Edit: Never mind, I figured it out super fast. Everyone, just have a uniform for your camera position in your vertex shader, and move gl_Position.y down based on distance to the camera, but on an exponential basis for a round-y shape. Hell yeah!
yes nice job 💪 Also I didn't find it online, so I just asked ChatGPT for help 😂🤫
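The curvature trick described in this thread can be sketched numerically. Below is a hypothetical Python mock-up of the vertex-shader math; the `CURVATURE` constant and the squared-distance falloff are assumptions, not the video's exact formula:

```python
# Hypothetical mock-up of the world-curvature vertex trick:
# after the view transform (camera at the origin), lower each
# vertex by an amount that grows with its squared distance.
# CURVATURE is a made-up tuning constant.
CURVATURE = 0.001

def apply_curvature(pos_view):
    """pos_view: (x, y, z) in view space."""
    x, y, z = pos_view
    dist_sq = x * x + y * y + z * z          # squared distance to camera
    return (x, y - CURVATURE * dist_sq, z)   # farther vertices drop more

near = apply_curvature((0.0, 0.0, -10.0))    # barely moves
far = apply_curvature((0.0, 0.0, -100.0))    # drops by 10 units
```

In a real shader this would run per vertex between the view and projection transforms, exactly as the comment above describes.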
Very nice! I definitely have to try out that curvature shader later. Im really interested in the refraction in the water, along with god rays and lens flares. The refraction will go well with your water caustics video hah
Ah can't forget SSR either
Amazing video as always, thank you! Was really engaging and interesting.
The one thing I wished this covered: binary space partitioning. I've had a lot of people tell me it's outdated, but I'm working on an engine that allows for open world spaces and can also transition into using BSP trees to optimise indoor spaces. My next project is making the BSP trees physics objects, editable in real time, because how cool would it be to make a giant base in game and have it remain as optimised as it was outdoors? On top of that, changing it in real time would make it fully destructible (ok, with A LOT of work). I think BSP still has massive potential in the modern day and it's just ignored
hm I guess nowadays people use quad trees and things like that. I haven't touched bsp yet
@@lowlevelgamedev9330 quadtrees are better outside buildings, bsp better inside them. Bsp is cheap but a pain to code as you not only need to sort through the tree, but generate it somehow
@@hughjanes4883 does Valve still use BSP? It generates a lot of triangles if the root is picked wrong
Thanks for this, didn't know about order independent transparency 👍. Btw Path of Exile Con Dev talk youtube videos has a lot of new techniques for rendering, including using motion vectors + flow maps to have better sprite based particle animations (like explosions/smoke), global illumination cascade techniques and many others 👍
Knew all of them, still watched the whole video, fantastic compilation.
chad 💪
A more in-depth tutorial about the Forward+ rendering would be amazing.
this is a god tier video. cheers mate
thank you
Agreed
*All OpenGL Effects: A Comprehensive Overview of Graphical Techniques*
* *1:10** Wave Simulations:* Simulating waves by shifting the positions of triangles in a water plane. An easier method using DUDV textures is discussed later.
* *1:42** World Curvature:* Applying a simple formula after the view matrix and before the projection matrix to create a world curvature effect.
* *1:53** Skeletal Animations:* Briefly explains skeletal animation using bones, vertices, keyframes, and interpolation. Suggests further learning resources.
* *2:23** Decals:* Discusses decals as details applied on top of geometry and mentions optimizations used in game engines like Doom Eternal.
* *2:43** Volumetric Rendering (Clouds):* Explains volumetric rendering for clouds using a 3D matrix and ray casting.
* *3:05** Geometry Culling (Frustum Culling):* Introduces frustum culling as an optimization technique to discard non-visible triangles, using bounding boxes and algorithms like octrees.
* *3:53** Level of Detail (LOD):* Using multiple versions of a model with different triangle counts depending on distance for optimization, mentioning Unreal Engine's Nanite system.
* *4:16** Tessellation Shaders:* Increasing a model's triangle count using tessellation shaders, beneficial for procedurally generated geometry and displacement mapping.
* *4:39** Geometry Shaders:* Briefly explains geometry shaders and their capabilities, but advises against using them due to performance concerns.
* *5:18** Geometry Buffer:* Using a massive buffer to store all geometry and compute shaders to calculate what needs to be drawn, as seen in Doom Eternal.
* *6:46** Normal Mapping:* Adding normal information to a texture to create detailed surfaces even with flat geometry.
* *7:13** Light Maps:* Using textures to specify specular components of materials.
* *7:25** Lens Flare:* Implementing lens flare using 2D textures and mentions advanced techniques for more realistic simulations.
* *7:51** Sky Box (Atmospheric Scattering):* Generating dynamic skyboxes using atmospheric refraction formulas.
* *8:02** Fog:* Blending the rendered scene with the skybox based on fragment distance to create fog.
* *8:11** Chromatic Aberration:* Shifting color channels slightly to simulate chromatic aberration.
* *8:30** Physically Based Rendering (PBR):* Briefly introduces PBR and the rendering equation.
* *8:58** Image-Based Lighting (IBL):* Using image-based lighting with the skybox for ambient light, noting its limitations in indoor scenes.
* *9:22** Multiple Scattering Microfacet Model for IBL:* An easy addition to a PBR pipeline for better ambient light calculation, accounting for secondary light bounces.
* *9:47** Global Illumination:* Discusses the importance of global illumination and its challenges in rasterization engines.
* *10:12** Spherical Harmonics:* Explains spherical harmonics as a method to approximate global light information.
* *10:36** Light Probes:* Using light probes with spherical harmonics or IBL to capture light information in different parts of the scene.
* *10:52** Screen Space Global Illumination (SSGI):* Introduces screen space global illumination as an approximation technique.
* *11:07** Ray Tracing:* Briefly mentions ray tracing for achieving high-quality global illumination.
* *11:28** Subsurface Scattering:* Explains subsurface scattering, using examples of wax and skin rendering.
* *11:51** Volumetric Rendering (God Rays):* Rendering god rays by simulating light hitting dust particles, mentioning easier screen space techniques.
* *12:06** Parallax Mapping:* Creating a 3D effect by sampling textures from slightly different coordinates, used in God of War's dynamic snow.
* *12:32** Reflections:* Using IBL, omni-directional maps, and screen space reflections for different levels of reflection quality and complexity. [From Comments] Mentions bending normals for better specular occlusion.
* *13:15** Refraction:* Simulating light bending as it passes through different mediums, with examples of single objects, water, and DUDV textures.
* *13:50** Diffraction:* Briefly mentions diffraction and its application in rendering shiny surfaces like CDs.
* *14:06** Screen Space Ambient Occlusion (SSAO):* Explains SSAO as an approximation of global illumination, along with variations like HBAO and SSDO. [From Comments] Mentions GTAO as an alternative.
* *15:12** Bloom:* Simulating bloom by isolating bright light spots, blurring, and adding them on top of the original image.
* *15:50** High Dynamic Range (HDR):* Encoding a wider range of light intensities using a function that compresses the values.
* *16:50** HDR with Auto Exposure:* Adjusting exposure based on average luminosity for a more realistic and cinematic look.
* *17:07** ACES Tonemapping HDR:* Recommends using the ACES tonemapping function for better HDR results. [From Comments] Suggests AgX Tonemapping as a better alternative.
* *17:29** Depth of Field (Bokeh):* Simulating the out-of-focus effect, mentioning Bokeh for creating specific light effects.
* *17:49** Color Grading:* Using lookup tables to adjust the final colors of a scene.
* *18:33** Shadows (Basic, PCF, Optimizations):* Covers basic shadow mapping, percentage-closer filtering (PCF) for soft shadows, and various optimizations. [From Comments] Mentions exponent shadow mapping (ESM) and EVSM for better shadow quality.
* *20:11** Variance Shadow Mapping (VSM):* Using statistical methods for soft shadows, mentioning light bleeding as a potential artifact.
* *20:29** Rectilinear Texture Wrapping for Adaptive Shadow Mapping:* Morphing the shadow texture to improve precision in relevant areas.
* *20:53** Cascaded Shadow Mapping / Parallel Split Shadow Maps:* Splitting the view into multiple regions with separate shadow maps for better shadow quality in large scenes.
* *21:34** Transparency:* Discusses different techniques for transparency, including alpha discarding, sorting, and order-independent transparency methods like depth peeling and weighted blending.
* *22:26** Order Independent Transparency:* Mentions various complex techniques for order-independent transparency.
* *23:33** Rendering Many Textures (Mega Texture & Bindless Textures):* Discusses challenges in rendering many textures and solutions like mega textures and bindless textures.
* *24:31** Anti-Aliasing (SSAA, MSAA, FXAA, TAA):* Covers various anti-aliasing techniques, including super-sampling, multi-sampling, fast approximate anti-aliasing, and temporal anti-aliasing. [From Comments] Mentions that MSAA can be used with deferred rendering, contrary to popular belief.
* *26:00** DLSS:* Introduces Deep Learning Super Sampling (DLSS) as an AI-powered upscaling and optimization technique.
* *26:35** Adaptive Resolution:* Dynamically adjusting resolution for performance optimization.
* *27:05** Lens Dirt:* Applying a texture on top of the screen to simulate lens dirt.
* *27:27** Motion Blur:* Using a velocity buffer to create motion blur.
* *27:41** Post-Process Warp:* Using shaders to create post-processing effects like screen warping.
* *28:08** Deferred Rendering:* Explains the concept of deferred rendering and its advantages and disadvantages. [From Comments] Mentions that motion vectors are relatively easy to implement.
* *29:29** Tiled/Clustered Deferred Shading:* Introduces variations of deferred rendering for handling many light sources.
* *29:42** Z Pre-Pass:* Optimizing forward rendering by using a depth pre-pass to avoid unnecessary fragment calculations.
* *30:01** Forward+ (Clustered Forward Shading):* Combining forward rendering with clustered light grouping for optimization.
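Several of the items above boil down to a small formula. For instance the fog entry (8:02) is just a distance-based blend toward the sky color; here is a minimal Python sketch, where the exponential falloff and the `density` constant are assumptions (one common choice, not necessarily what the video uses):

```python
import math

def add_fog(scene_rgb, sky_rgb, distance, density=0.02):
    """Blend a shaded fragment toward the sky color by distance,
    using classic exponential fog: factor = 1 - exp(-density * d)."""
    f = 1.0 - math.exp(-density * distance)
    return tuple(s * (1.0 - f) + k * f for s, k in zip(scene_rgb, sky_rgb))

scene, sky = (1.0, 0.0, 0.0), (0.5, 0.7, 1.0)
near = add_fog(scene, sky, distance=0.0)     # keeps the scene color
far = add_fog(scene, sky, distance=500.0)    # fades to the sky color
```

In a fragment shader the same math is typically one `mix()` call keyed on the fragment's view-space distance.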
I used gemini-1.5-pro-exp-0801 on rocketrecap.com to summarize the transcript.
Cost (if I didn't use the free tier): $0.12
Input tokens: 27480
Output tokens: 1815
The summary above incorporates clarifying and supplementary information from the comments. It provides a valuable resource for anyone looking to learn about various graphical techniques and effects used in OpenGL and other graphics APIs. Remember to check out the linked resources mentioned in the video and comments for more in-depth explanations and tutorials.
damn i love how he has that "top finco/honorable mention accent" and he does funny jokes at once
This is gold, thank you!
I can understand you clearly idk why people struggle
This is cool as someone who makes a game engine by myself.
Love your channel, do you work in the industry or just indie?
thank you, just indie rn 💪💪
You mentioned like 60% of the GameDev YouTubers that I watch
nice vid pahjeet!
Very cool. I've also recently started deepening my knowledge of shaders.
I like that people don't even ask me if I'm Romanian anymore because everyone instantly figures it out 🤣🇷🇴 also very nice, keep it up 💪
@@lowlevelgamedev9330 yeah, I get you on the being-Romanian thing :)
Bro is way too underrated
Very interesting!
Thanks a lot for this Video!
in the first second I realized you're Romanian :))))) cheers
😂😂😂😂 why is it so obvious
I hope one day your game engine will evolve into something, and I hope I'm gonna use your game engine to make my mobile games, even though I don't know how to code : )
15:21, no, bloom is just light that gets scattered by air molecules. Every object has it, but with brighter ones you just notice it more.
Yep, when looking through a thick atmospheric/volumetric effect, like smoke, it should appear larger due to scattering. But bloom is also often confused with flares, which create a similar effect (halos/glowing) due to how light bends inside the camera lens (or lenses); our eyes have that too. With astigmatism (I have it), it's even more present.
5:56 lol, right after you mention rotations being tricky, there's a game with a rotation bug. If you spin the camera thousands of times in one direction in Witcher 3, it starts to break.
I've never seen a good tutorial on "eye adaptation effect". It is also called "auto exposure". Perhaps this can be discussed in the next part🍤
It's pretty easy to do. You have a baseline (target) exposure; create a mip pyramid of your render, which lets you quickly compute the average color of the scene; grab its luminance value (convert RGB to HSB or YCbCr or whatever and use the "luminance" or "brightness" component); check whether it's below or above your target (which differs with each tonemapping algorithm, so just choose one by trial and error); and converge, basically minimizing the difference by taking small steps towards your target. To do this, you'd calculate your exposure like this:
float exposure = mix(average, target, EXPOSURE_SPEED) * EXPOSURE_STRENGTH;
Valve made a good writeup on how they implemented it in Source, you can probably find it somewhere if you're interested
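As a rough illustration of the convergence described above, here is a minimal Python sketch. The 0.18 middle-gray target, the Rec. 709 luma weights, and the step speed are all assumptions chosen for the example:

```python
def luminance(rgb):
    # Rec. 709 luma weights (an assumption; any luma formula works)
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def step_exposure(current, avg_scene_rgb, target=0.18, speed=0.05):
    """One frame of auto-exposure: nudge the current exposure toward
    the value that maps the scene's average luminance onto the target."""
    needed = target / max(luminance(avg_scene_rgb), 1e-4)
    return current + (needed - current) * speed

exposure = 1.0
for _ in range(200):  # a few seconds' worth of frames
    exposure = step_exposure(exposure, (2.0, 2.0, 2.0))
# bright scene -> exposure settles well below 1
```

The small per-frame step is what gives the gradual "eye adaptation" feel instead of an instant brightness snap.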
Generating motion vectors doesn't seem fun? It's one of the easiest advanced things to do 😊.
Instructions:
calculate gl_Position, just with the data of the previous frame;
pass that, and gl_Position into the fragment shader.
Divide both vec4s there by their w component. Subtract them. Done 🤩
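The steps above can be sketched in Python instead of GLSL; plain tuples stand in for vec4s, and the perspective divide replaces the division by w in the fragment shader:

```python
def motion_vector(clip_now, clip_prev):
    """clip_*: (x, y, z, w) clip-space positions of the same vertex
    for the current and previous frame. Returns the NDC-space delta,
    i.e. how far the point moved on screen since last frame."""
    ndc_now = (clip_now[0] / clip_now[3], clip_now[1] / clip_now[3])
    ndc_prev = (clip_prev[0] / clip_prev[3], clip_prev[1] / clip_prev[3])
    return (ndc_now[0] - ndc_prev[0], ndc_now[1] - ndc_prev[1])

# a vertex that moved right by 0.25 in NDC x between frames
mv = motion_vector((0.5, 0.0, 0.2, 1.0), (0.25, 0.0, 0.2, 1.0))
```

The resulting 2D delta is what gets written into the velocity buffer for motion blur or TAA.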
Also, just downsampling SSAO to get rid of noise is incorrect and causes edge bleeding. Use bilateral filtering instead (only blend pixels that have similar depth/normal values).
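A minimal Python sketch of such a depth-aware (bilateral) blend; the hard depth threshold and the uniform weights are simplifying assumptions — real filters usually use smooth Gaussian-style weights on both distance and depth:

```python
def bilateral_blend(center_ao, center_depth, neighbors, depth_sigma=0.05):
    """Average an AO value with its neighbors, but only with samples
    whose depth is close to the center's, so occlusion from one
    surface doesn't bleed across a depth edge onto another.
    neighbors: list of (ao, depth) pairs."""
    total, weight = center_ao, 1.0
    for ao, depth in neighbors:
        if abs(depth - center_depth) < depth_sigma:
            total += ao
            weight += 1.0
    return total / weight

smooth = bilateral_blend(1.0, 0.50, [(0.0, 0.51), (0.5, 0.49)])  # all blend
edge = bilateral_blend(1.0, 0.50, [(0.0, 0.90), (0.5, 0.49)])    # far sample skipped
```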
Oh ok, I didn't know that. So you're basically saying that I just subtract the old position from the new one?
@@lowlevelgamedev9330 Exactly, the position of the pixel in screen space, new one minus old one :), and that's all there is to it 😊
Amazing video my friend!
even though OpenGL is now outdated, it's still the best for a potato GPU when trying to do shading
not exactly, more modern graphics APIs can get you more optimization. Also OpenGL is not that outdated, since for the web, for example, WebGL is the only option
This channel is really good!
Romaaaaniaaaa mentioned, let's goooooo! Hello from Ukraine 🥃
Hello back bro 💪💪💪💪
@65 ooh, that offends me. That you cannot use MSAA with deferred rendering isn't a fact, it's a mere myth.
Of course you can, it's just that the light calculations may get a little excessively expensive, as they have to be done on multisampled targets too.
Source: my game engine does it 😄, and yes, of course it was a little work to get everything working with multisampled buffers, but besides that it's easy.
oh ok didn't know that either :)) thanks
Hi, what do you recommend learning: OpenGL or Vulkan?
OpenGL. Vulkan is just too difficult to learn as a beginner and takes a ton of time, so I think it is better to learn OpenGL to get started and do some advanced stuff with it 💪
big thanks@@lowlevelgamedev9330
It is said that OpenGL is not good to start graphics programming with, and DirectX is suggested instead, but that's only Windows/Xbox compatible.
Vulkan is the alternative, but it's supposedly very hard. I wanted to ask your opinion on that? Awesome video by the way.
hm, OpenGL is the easiest API to learn. If you learn DirectX you won't need to learn OpenGL anymore, but I would still say to start with OpenGL. They probably suggest DirectX because it is more modern, but at that point you'd have to learn the new DirectX API, and that is clearly harder than OpenGL
Every time I try low-level programming I screw everything up. I envy anyone who can do it properly
Hey, I want to make a game engine, but I don't know where to start or how to do it. I haven't made any desktop app using a high-level programming language.
I'm a Unity game developer, and I know the C++ and C# programming languages, and some concepts like OOP.
I want to make a basic engine just for showcase and for getting a high-paid job. I know how to make games, but when it comes to getting a high-paid job, a game engine can be a good portfolio.
Please suggest something: where I should start, which things I should learn, and other things which can be useful.
I have some videos that will help you. They will actually point to other videos and resources 😂 because there are a lot of things to be learned, but don't worry, you can do it. The most difficult thing is learning OpenGL for the graphics part, but after that it is not that difficult. Also, before making an engine, try to make a very simple game using my framework to understand how things work.
Good luck 💪
ruclips.net/video/tK7yugR3qDU/видео.htmlsi=UCQ2hRY3_ervV3Wc
ruclips.net/video/A735Y4kMIPM/видео.htmlsi=XbsAiE_taIujKZxm
ruclips.net/video/21BNxCLTGWY/видео.htmlsi=C-lka3RCRq7BNqHp
ruclips.net/video/zJoXMfCI9LM/видео.htmlsi=tz34Fm_aAdmPUVit
ruclips.net/video/HPBXr6Zdm4w/видео.htmlsi=mscuUWxwbXSf8jPc
Thanks @@lowlevelgamedev9330 😀I'll Work on that
Level of detail
I love you ❤
hey that's awesome. I might be wrong, but aren't these effects made by you ON OpenGL? Like, they are not strictly correlated with the library. A Framebuffer Object, for example, IS an OpenGL feature, but what you do with it is your business. It's like saying that a particular shader is an OpenGL effect, when instead the way you do the shader is completely arbitrary.
Does what I'm saying make sense?
yes, that's totally correct, but I just used this wording because it was easier and more catchy 😂😂
@@lowlevelgamedev9330 fair, approved
Instead of ACES Tonemapping better use AgX Tonemapping, it is the successor of ACES basically and I think from the same guy.
I didn't know that, thanks a lot, can't wait to try it out 💪💪
the last one is the most powerful and brilliant
yes that's why I left it last 😂
what's the point of the depth pre-pass? Doesn't the GPU do an early depth test automatically?
so if you render 2 objects and one is in front of the other, if you draw the farthest one first, the GPU can't possibly know that it will be occluded in the future, so it has to color that object. With a z pre-pass you guarantee that you only run the fragment shader for each pixel once
@@lowlevelgamedev9330 Well yes, but the OpenGL Wiki says "Early Fragment Test is a feature supported by many GPUs that allow the certain Per-Sample Processing tests that discard fragments to proceed before fragment processing", it also says "The depth test can take place before the Fragment Shader executes." so my question is what's the point of implementing it yourself if a GPU is already doing it
Early Fragment Test is not doing exactly what you think it's doing.
When you render some triangles to the screen the GPU is going to do an early depth test against the current values in the depth buffer, and initially it's going to be empty. So we draw the triangles (full shading calculation and all) and write to the depth buffer.
Now we draw a new mesh that just happens to be infront of the old mesh, we do the early depth test and see that these triangles should be drawn, so we do the full lighting calculation and update the depth buffer.
In this case the early depth test didn't do anything for us. But! If we rendered the meshes in the opposite order the second mesh would be behind the closer mesh and the early depth test could save us from calculating lighting on the non-visible triangles.
This leads to a simple optimization that most game engines do. Try to render the closest meshes first in hopes that they will occlude later meshes, so we can take advantage of the early depth test.
The logical extreme of this is to just render out all of the depth information beforehand and then when rendering we can get perfect early depth testing. (This is z prepass)
So why is it faster to render the scene twice? It's because most often it turns out that running two times the number of vertex shaders is much faster than running the lighting calculation more than once per pixel (assuming you have a more sophisticated lighting setup). To make the depth prepass even faster it is also common to separate all of the position data into its own buffer so that while doing the depth prepass you get higher cache utilization as you can fit more vertex data into cache when the vertex data is just positions.
Having the z prepass data can also be useful for doing more parallel work on the GPU. For example you could start building a hierarchical Z buffer for use in occlusion culling at the same time as you are doing the shading work. Or you could start doing SSAO, or any number of other things the depth buffer could be useful for.
Hopefully I was able to clear up some of the confusion and not make it even more confusing :)
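The draw-order argument above can be simulated. This Python sketch counts fragment-shader invocations for one pixel under early-Z, with and without a prepass; it's a simplified model (no alpha test, no duplicate depths, no hierarchical-Z) just to show why order matters:

```python
def shade_cost(draw_order_depths, prepass=False):
    """Count fragment-shader runs for one pixel covered by several
    overlapping surfaces drawn in the given order (smaller depth =
    closer). Early-Z rejects a fragment only if something closer is
    already in the depth buffer."""
    shades = 0
    if prepass:
        # the prepass already wrote the closest depth for this pixel
        depth_buffer = min(draw_order_depths)
        for d in draw_order_depths:
            if d <= depth_buffer:        # only the visible surface passes
                shades += 1
    else:
        depth_buffer = float("inf")
        for d in draw_order_depths:
            if d < depth_buffer:         # early-Z against what's drawn so far
                shades += 1
                depth_buffer = d
    return shades

back_to_front = shade_cost([0.9, 0.5, 0.1])                # shades 3 times
with_prepass = shade_cost([0.9, 0.5, 0.1], prepass=True)   # shades once
```

Front-to-back order (`[0.1, 0.5, 0.9]`) also shades once even without a prepass, which is exactly the sorting heuristic mentioned above; the prepass just makes that guarantee independent of draw order.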
This Mexican accent is wonderful)))
Transformed vectors
hallo your computer has virus
what about an ECS system
well that's not really related to graphics stuff
@@lowlevelgamedev9330 I know, I was just asking if your engine has it
6:44 Oh Why.
Can you make a new YouTube channel called "high level game dev", using Python?
I thought that I was rather knowledgeable in graphics programming. Now I realize that I didn't know jack shit.
Geometry selection
17th to comment
lmao the accent is ridiculous
Bro please use ai to read the script instead, it will sound more human and legible
Shut up Jake his voice is hot
Man please at least try with the English accent
sorry about that I'm trying my best 🥺😅
@lowlevelgamedev9330 don't worry about him, I love your voice and it's their issue if they don't wanna listen to you
dont bother making voiceover, just use robot voice. impossible to understand without subs.