The part about sharpening the scene was really amazing!
The lost wisdom of the Shaders. Love it
I might be wrong, but I think you can simplify slightly using the blur sample offset node? That gives an output of vec2s that you can use for the SceneTexture UVs and takes an input for the offset amount
Damn it's so cool to see how this stuff works, awesome vid!!
I was looking for how to apply a Gaussian filter to the render output, just like offline renderers can, but it looks like I'll have to do it this way - although the proper way to do it is more than just a post-process :/
Thank you for showing this! I never knew about that relationship! very useful on my project :)
Very informative and helpful, thx!
thank you so much Ben!
WOW, you are a life saver
thank you very much! great video
I don't use UNREAL or UNITY.
I was able to transfer this shader to GODOT.
Thank you very much, that was exactly what I wanted.
Were you able to get sharpening working too? There's no lerp (only a mix node) and it doesn't work well with negative values.
@@GameUnionTV The result isn't 100% identical, but you can use negative numbers with the "mix" function.
It will create a distortion at the borders of the screen, but you can tone it down with a screen mask.
10/10/2024 EDIT:
I updated the shader, now there is no border distortion.
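In HLSL terms, the sharpening trick described here amounts to extrapolating past the original image with lerp. A minimal sketch of the idea (the function and variable names are just placeholders):

```hlsl
// Sharpen by extrapolation: lerp(a, b, t) = a + t * (b - a),
// so t = 1 + k lands past the original, away from the blurred image.
// Equivalent to: original + k * (original - blurred).
float3 Sharpen(float3 blurred, float3 original, float k)
{
    return lerp(blurred, original, 1.0 + k);
}
```

Passing 1 + k as the alpha is the same as calling mix with values outside the 0-1 range, which is why negative or greater-than-one numbers produce sharpening.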
Oh, I just realized - what HDRP version was used when you created this video? AFAIK, in the latest Unity release there's also a MipMap LOD/Bias input on the HD Scene Color node, which is quite useful for creating a blur effect.
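For context, that node boils down to sampling the color buffer at a higher mip level, where the image has already been downsampled and averaged. A hedged sketch in plain HLSL - the texture and sampler names are assumptions, not the actual HDRP bindings:

```hlsl
// Blur by reading a lower-resolution mip of the scene color.
// SceneColorTex / SceneColorSampler are placeholder names.
Texture2D    SceneColorTex;
SamplerState SceneColorSampler;

float3 MipBlur(float2 uv, float mipLevel)
{
    // Higher mipLevel = smaller, pre-averaged image = stronger blur.
    return SceneColorTex.SampleLevel(SceneColorSampler, uv, mipLevel).rgb;
}
```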
thanks for sharing😁
Thanks, pretty nice, great tutorial!
Great video as always, sir.
Can you explain the camera effect where we see a car moving down a road and the road looks like it has water spilled on it, but it doesn't?
Thank you.
You mean a mirage? Something like this: epod.usra.edu/blog/2010/03/highway-mirage.html That's an interesting idea. I have no idea how to do that but I'll give it some thought.
@@BenCloward Oh yes, that's the effect! Thank you, and I wish you luck, sir. Awesome content as always.
Now that overlay materials are an Unreal feature, maybe you could try using the reflection material from episode 24 as an animated overlay material for the ground, but with Voronoi noise as an alpha mask and for the normals input.
The really cheap thought I had was to just flip the screen texture upside down in a post-process material, blur it out, and mask it using fresnel and noise (squished vertically). But can fresnel even be used in post-processing? I don't know how you could limit the effect to only certain places in the environment based on how upward-facing the geometry is.
Something like the underwater post-process effect on top could help sell it.
Also thanks a lot, your tutorials have been a big help. I'm going between Blender/Unreal and the nodes are really different. There's nodes that need to be constructed manually in Unreal that are just _nodes_ in Blender and vice versa. For example, Blender has a color ramp node and procedural brick texture, but an "if" node would need to be a cluster of things.
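A minimal sketch of the screen-flip part of that idea, assuming a standard 0-1 screen UV and a mask built from fresnel and noise (every name here is hypothetical):

```hlsl
// Fake reflection: sample the screen texture with V mirrored,
// then blend it in wherever the mask says so.
float3 FakeReflection(Texture2D sceneTex, SamplerState sceneSampler,
                      float2 screenUV, float mask)
{
    float2 flippedUV = float2(screenUV.x, 1.0 - screenUV.y);
    float3 reflected = sceneTex.Sample(sceneSampler, flippedUV).rgb;
    float3 original  = sceneTex.Sample(sceneSampler, screenUV).rgb;
    return lerp(original, reflected, saturate(mask));
}
```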
How can I attach a scene depth texture to it for a distance blur effect?
Hi Ben, can you recommend a series to learn special effects / Niagara effects? And one other question about HLSL: should I follow your DVD playlist even though you said there are a lot of subjects that are not useful for Unreal? AND THANK YOU SO MUCH FOR YOUR EFFORTS!!
Thank you for taking the time for making these awesome tutorials! I have a question though.... How would you go about making an Infrared Light or Laser in UE? Is that even possible?
I've watched all your videos Ben since you started your youtube channel helping us get our shaders down. I have a question, are constants faster than dynamic scalar inputs? If say I had 100 scalar parameters awaiting input vs them all being constants would there be any difference
Yes - constants are faster than exposed parameters. Because of this, I tend to only expose values that absolutely need to be variable and tune everything else using constants in the shader itself.
I should add that the reason constants are faster is that often, the compiler can pre-compute parts of the math that they're doing offline and just do it once - because it knows that the values will not be changing. With an exposed parameter, the compiler doesn't know if the value will be changing or not, so any math that it's doing has to be done every pixel every frame. The more parameters you expose, the more real-time work the shader has to do.
@@BenCloward Thank you so much for your reply. This was the working theory in my head and makes sense. I'll have to clean up my shaders more ;)
@@BenCloward Epic even has a special term for this: "constant folding", which, as they claim, is available only when using the material editor (as opposed to writing code in a custom node).
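To illustrate the principle (this is not Epic's actual generated code, just a hypothetical before/after):

```hlsl
// Exposed parameter: its value is unknown until runtime,
// so no math involving it can be folded.
float BrightnessParam;

float3 ApplyBrightness(float3 color)
{
    // Constant expression: the compiler folds 0.5 * 2.0 + 0.25
    // into the literal 1.25 once, at compile time.
    const float folded = 0.5 * 2.0 + 0.25;

    // This multiply must actually run for every pixel, every frame.
    return color * folded * BrightnessParam;
}
```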
Awesome video, I think this is my first time seeing someone implement a sharpening or blurring "kernel" in a node-based setup. How would you go about scaling the kernel radius, so that instead of sampling just the nearest pixels we also sample the next neighbors? Is there an easy way to scale this setup for that, or at that point would it be easier to just make a custom HLSL node with a loop?
Scaling in node-based editors is rough due to a lack of loops, as you said. I suppose you could create a couple of different sizes and use an enum or int together with branch nodes to either use them or not, but it's certainly not as flexible as just incrementing a for loop - and the graph would get pretty big.
Something else that I'm interested in trying is a Poisson disc sample arrangement, where you could choose to randomly rotate the samples based on a hash of the screen location. I've done some experiments with those in code, but it's been a while. Might be fun to do that in a graph.
@@BenCloward can't wait to see that! Thank you so much for making these videos!
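For anyone curious, a rough sketch of what the loop version might look like inside an Unreal custom node. The inputs UV, Radius, and TexelSize are assumed node inputs, and 14 is the usual scene-texture index of PostProcessInput0 - verify it for your engine version:

```hlsl
// Variable-radius box blur for a post-process custom node.
int r = (int)Radius;
float3 color = 0;
int count = 0;
for (int x = -r; x <= r; x++)
{
    for (int y = -r; y <= r; y++)
    {
        // TexelSize = 1/resolution, so each step moves exactly one pixel.
        color += SceneTextureLookup(UV + float2(x, y) * TexelSize, 14, false).rgb;
        count++;
    }
}
return color / count;
```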
In case anyone (like me) made the same mistake and doesn't know why their blur isn't working: check and make sure your ScreenResolution "visible resolution" node is connected to the B slot of your Divide node, not the A slot. This took me half an hour to figure out, lol.
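The reason the slot order matters: the divide is computing the size of one pixel in 0-1 UV space, so the resolution must be the denominator. A quick sketch (viewSize stands in for whatever the resolution node outputs):

```hlsl
// One pixel expressed in UV space - resolution goes in the B slot.
float2 TexelSize(float2 viewSize)
{
    return float2(1.0, 1.0) / viewSize;  // e.g. (1/1920, 1/1080)
}
// Swapped slots would compute viewSize / 1.0 = the raw resolution,
// so the "one pixel" offset would jump across the entire screen.
```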
Thank you
CooooOOOll!
Is there a way to know how heavy it gets when using multiple 'SceneTexture' nodes?
Hi, I know it's a little out of context... but what trees are you using?
Here's the video where I started using those tree assets: ruclips.net/video/MtZd1g0aKiY/видео.html Link is in the description.
@@BenCloward thanks a lot 😊
Is there a way to do it for only the object I need? For example, I love blurry grass, but I can't find a way to blur just one shader/material. Thanks!
I haven't tried it, but it might be possible to mask the amount of blur that's happening by one of the other channels available in the scene texture sample node. Then you could write white into that with the grass but black for everything else. Then use that mask to determine where the blur is applied.
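A minimal sketch of how that mask could be applied at the end, assuming you've already computed the blurred color and read the mask from a spare scene-texture channel such as custom stencil (all names are placeholders):

```hlsl
// Blend the sharp scene with the blurred version using a mask
// the grass writes into (1 = blur, 0 = leave sharp).
float3 ApplyMaskedBlur(float3 sceneColor, float3 blurredColor, float mask)
{
    return lerp(sceneColor, blurredColor, saturate(mask));
}
```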
Cool stuff! But you can make this better with the Unigine engine, because the Unigine Material Editor has loops.
Wow, that’s cool. I’ll have to take a look.
@@BenCloward It's a free engine, like Unreal and Unity. Look at swizzles (expressions), loops, portals, code functions, subgraphs, etc. Unigine 2.16 will have post effects in the material editor, but you can look at meshes and decals already.
Is this expensive? For example, if it's part of the graphics style?
I mean, it's just an operation on a 2D image each frame, while we have things happening in the 3D view each frame, so it can't be expensive, right?
That question has to always be answered with "it depends" unfortunately. In this case, it mainly depends on what hardware you're using - so the best way to know if it's expensive or not is to run it on the hardware you're planning to use for your game. It might be too expensive on a low-end mobile device, but take almost 0 time on a high-end PC graphics card. That being said, my overall answer is no, it's not expensive. The texture samples we're doing here are almost always found in the GPU's cache, which means they can be done really fast.
While we're on the topic of post-processing shaders, can you please do one on anti-aliasing, like FXAA or whichever (the best) you can think of? I know it's built into both engines, but I still want to learn it visually. Thank you in advance. BTW, great video.
I have the HLSL code for FXAA, but it's quite complex. It would be pretty hard to explain it in a 20 minute video. I'll take a look at it and see if I can make it more simple. If I succeed, I'll add it to my list of videos to make. Thanks for the suggestion.