It’s incredible to me that videos like this are available for free on the internet.
You're a genius for illustrating these abstract, confusing concepts so vividly. Thanks for sharing this cool walkthrough!
Brilliant teaching skill, man, brilliant. Thank you.
I'm finishing a computer graphics class at USC and your teaching is like 100000x superior. Thanks for putting this out there!
Been learning OpenGL and I just got to these different maps. When I was reading, I was super confused, but after watching this, it all makes so much more sense now!! Thank you so much for these fantastic videos! I love learning about this stuff!
Most videos on this subject were very short. Was looking for something a bit more descriptive and came across this, thanks!
You cannot imagine how useful this is for me. Thank you so much!
Totally amazing, thanks so much!
I just stumbled onto this. This is amazing content, and you are so good at relaying it.
I got interested in how parallax mapping works and found this lecture. Depth emulation is a really interesting topic, and the explanation is really good!
Same here!
Btw, there are also "derivative maps", which are kind of in the vein of bump maps. They provide quite a few benefits over normal maps.
You may also want to point out that normal maps use three times (or four, due to alignment) the space that a bump map uses. Performance impact of memory accesses is something many engineers are not educated on. On top of that, normal maps force you to build a tangent basis. Bump maps can be evaluated in any "direction" on the fly.
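To make the "evaluated on the fly" part concrete, here's a minimal numpy sketch (my own illustration, not from the video) that derives a perturbed normal from a grayscale height field with central differences -- exactly the result a normal map would precompute and store in three channels:

```python
import numpy as np

def bump_normal(height, x, y, strength=1.0):
    """Perturb a flat tangent-space normal (0,0,1) using the local
    gradient of a grayscale height field, via central differences."""
    rows, cols = height.shape
    # Finite differences, clamped at the texture borders.
    dx = (height[y, min(x + 1, cols - 1)] - height[y, max(x - 1, 0)]) * 0.5
    dy = (height[min(y + 1, rows - 1), x] - height[max(y - 1, 0), x]) * 0.5
    n = np.array([-dx * strength, -dy * strength, 1.0])
    return n / np.linalg.norm(n)

# A single bright texel reads as a small hill: neighbors tilt away from it.
height = np.zeros((8, 8))
height[4, 4] = 1.0
print(bump_normal(height, 3, 4))  # tilted toward -x, away from the bump
```

The two extra height fetches per pixel are the runtime cost you pay for the 3-4x memory savings, and no tangent-basis construction is needed beyond the texture's own parameterization.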
At 55:00, with the parallax occlusion mapping, how does it render the edge of the square? How does it know to render the background color in the gaps between the bricks? The algorithm had to walk outside the boundary of the texture map there. If that happens, do you set the color's alpha to 0 to make it transparent?
Great video! Thank you.
Thanks for the clear explanation. I'm new to image design.
Wow, when it comes to details, you're top notch, man!!!!!!❤❤❤❤❤
Just what I needed. Thanks!
Amazing video!
Great! Could you talk more about shader implementations in the lectures?
These lectures in the second half of this course are about the core ideas, not implementation details. You can easily find tutorials with a Google search. There are also some good tutorials among the links in the description.
This is very informative, thank you!
I just had an idea for improving parallax mapping. What if you do it as described at 45:00, but iteratively? You'd start at point A, move along the view vector by H(A) to point P, then move again by H(P) to a new point Q, then move by H(Q), and so on. This is a fixed-point iteration and should give a good approximation of point B with just a few iterations. Plus, you don't need to divide the depth into levels like with steep parallax mapping or parallax occlusion mapping, which requires many levels to get good results.
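For what it's worth, here's a tiny 1-D Python sketch of that fixed-point iteration; the height function H is a hypothetical stand-in for a texture fetch. Each step re-anchors at A and offsets by the depth sampled at the previous estimate:

```python
import math

def parallax_fixed_point(H, uv, view_offset, iterations=4):
    """Fixed-point refinement: p <- A + offset * H(p), starting at p = A.
    H(p) returns depth in [0, 1]; view_offset is view.xy / view.z, scaled."""
    p = uv
    for _ in range(iterations):
        p = uv + view_offset * H(p)
    return p

H = lambda u: 0.5 + 0.4 * math.sin(6.28 * u)  # hypothetical depth field
print(parallax_fixed_point(H, 0.3, 0.2))
```

One caveat: a fixed-point iteration like this only converges when the height field is smooth relative to the view angle (roughly, when |offset * H'| < 1), so sharp features at grazing angles can oscillate or skip an intersection entirely, which is what the layered search in steep parallax mapping avoids.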
Take a look at "Practical Parallax Occlusion Mapping For Highly Detailed Surface Rendering". It is a presentation by the ATI researchers who developed POM for the ATI ToyShop demo back in 2005. They do a linear search, which eliminates the dependent texture fetches required by an iterative binary search, and they adjust the sample rate based on the viewing angle to reduce aliasing.
Cool video, as always.
This takes me back to my last year of uni and my honours project, which was essentially a program that converted bump maps to normal maps and allowed some tweaking... looking back on it, that was way too simple, and it's a miracle that I actually got my honours. So if you're in that position, don't be like me; do something cool like volumetric fog or clouds, a realistic glass or liquid shader (like the one from HL Alyx), or a ray tracer. Or, if you do want to make a clone of Bitmap2Material, you should analyse the image to work out what angle the light is hitting the things in the scene at.
In a normal map, what does the blue channel actually control? Loading one up and using it on a flat plane, I can use the slightly varying values within it or replace it with 1, and it doesn't appear to "break" anything, even if I notice differences. Adding to or subtracting from the red and green channels obviously tilts the normal around the Y or X axis, and the result is kind of predictable. But the blue one? Do anything except replace it with 1 and it completely falls apart. So yeah, what does it actually contain?
Pros and cons of bump maps vs normal maps, imo:
1) Normal maps are scale independent; they reflect an offset angle no matter the scale at which they're used, and the apparent height will vary instead. I would use this where preserving the angles is critical. If you bake out details, then sure, normal maps all the way.
2) Bump maps are scale dependent; increase the scale and the angle gets steeper while the apparent height remains. I would use this where I'm just adding *small* (size matters) random details where maintaining angles isn't critical. For details that change appearance per identical asset (using randomization), I'm going bump maps all the way. Game assets typically don't care about this, and as mentioned, bump maps are slower to compute in real time compared to just looking up the angles.
3) Normal maps require UV tangent space to function, and don't always map well in a "random lookup environment".
4) Bump maps can be used with triplanar/box mapping (see the sketch after this list), and randomization (making assets look different) isn't a problem. They will also survive random rotation, with no UV requirement.
5) A normal map only requires a solid color to reflect a different angle. This can make it "compression friendly", but it requires more memory to handle.
6) A bump map requires a greyscale gradient to reflect a different angle. Less "compression friendly", but it requires less memory to handle.
7) Normally you can get away with 8-bit (per channel) normal maps at smaller scales, whereas bump maps may require 16-bit grayscale at higher resolution to avoid stepping.
8) For procedurally created bump maps, using microdisplacement is a great way to verify that what you are doing is correct: right direction, and spotting discontinuities. In Blender (no parallax tricks built in), which is the renderer I work with, displacement height can be used as bump distance. Testing with microdisplacement takes away the guesswork when choosing bump maps instead.
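On point 4, here's a minimal sketch of why scalar height maps play so nicely with triplanar mapping: three axis-aligned lookups of a scalar blend trivially by the normal's weights, whereas tangent-space normal maps would need each projection's vectors reoriented before blending. (sample_h is a hypothetical stand-in for a texture fetch.)

```python
import math
import numpy as np

def triplanar_height(sample_h, pos, normal, sharpness=4.0):
    """Blend three axis-aligned height lookups by the surface normal."""
    w = np.abs(normal) ** sharpness
    w = w / w.sum()                    # blend weights favor the dominant axis
    hx = sample_h(pos[1], pos[2])      # projection along X
    hy = sample_h(pos[0], pos[2])      # projection along Y
    hz = sample_h(pos[0], pos[1])      # projection along Z
    return w[0] * hx + w[1] * hy + w[2] * hz

sample_h = lambda u, v: 0.5 + 0.5 * math.sin(3.0 * u) * math.sin(3.0 * v)
print(triplanar_height(sample_h, np.array([1.0, 2.0, 0.5]),
                       np.array([0.0, 0.7071, 0.7071])))
```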
In tangent space, a value of (0,0,1), with a Z value of 1, indicates a completely unperturbed surface normal. So it's effectively just the normal of the triangle with no normal mapping applied. Normal mapping only looks right for small perturbations of the surface normal (say within 45-60 degrees max); that's why normal maps so often tend to be bluish. For example, if you have a tangent-space normal map texel with a unit-vector value of (1,0,0) or (0,1,0), those normals would be completely perpendicular to the triangle's surface normal and would look completely wrong with the lighting.
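To answer the blue-channel question concretely: blue is the Z component (along the unperturbed normal), remapped from [-1,1] to [0,1] like the other two channels. For a unit-length normal it's almost redundant, which is why small edits are tolerated but arbitrary values fall apart. A small numpy sketch of the usual decoding (my own illustration, not any specific engine's):

```python
import math
import numpy as np

def decode_normal(rgb):
    """Map [0,1] texel values to a [-1,1] tangent-space vector.
    (0.5, 0.5, 1.0) decodes to (0, 0, 1): a flat, unperturbed normal."""
    return np.asarray(rgb, dtype=float) * 2.0 - 1.0

def reconstruct_z(x, y):
    """For a unit normal, z is implied: z = sqrt(1 - x^2 - y^2).
    Two-channel normal map formats (e.g. BC5) rely on exactly this."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

n = decode_normal([0.6, 0.4, 0.98])
print(n, reconstruct_z(n[0], n[1]))  # stored z vs. z implied by red/green
```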
On #2 though, there might be a misunderstanding in your intuitions. A bump map has no notion of scale, if I understand what you meant by scale. It simply perturbs normals the way that normal maps do, but with half to a third of the data (only one channel required instead of 2-3), in exchange for more computation to derive the perturbation of the normal. Bump maps also tend to be more portable across engines, since not all engines compute tangents and bitangents the same way. They're also generally easier to create: it's easier to paint a grayscale image in Photoshop than to reason about normal perturbations by drawing vectors into texels, if you were to hand paint this digitally.
The main benefit of normal maps is that they can create a larger range of effects (a bump map can only represent normal perturbations approximating mountains and valleys on a surface, while normal maps can go a bit further -- although not that much further in practice without glaring artifacts), although they require specialized content creation tools for artists to use them effectively. They're also generally faster in practice, despite requiring 2-3x the memory, since they don't require additional texel lookups.
Cem, I am very grateful that you are enlightening us with the fundamentals. You are a star.
Legendary!!!
Thank you
My respects!
For both bump mapping and true per-pixel displacement, what happens if there are no neighboring texels that even belong to the same triangle? Say the triangle only occupies one texel in texture space and, furthermore, its neighboring texels aren't even adjacent to the triangle in object space (say the texture is a completely discontinuous atlas containing just the bump map for one triangle in one texel). How can we derive the normals in that case? I might be overlooking something simple, but it seems impossible: there's not enough data in the texture map to compute a perturbed normal in any reasonable amount of time. Maybe it's just an impractical case? But I could see it happening with very high-res meshes and only medium-res texture maps.
Say the artist paints a grayscale bump or displacement map in Photoshop for a 16 million triangle mesh but the texture is 4000x4000 texels. In that case, there are exactly as many triangles as there are texels, and so I could easily see a case where a triangle might only occupy 1 texel in the texture space as well as multiple triangles mapping to the same texel. How might one compute the normals in such a case? Or is it just silly to even try?
The only way I can think to do it even remotely efficiently is to, say, build a spatial/connectivity data structure of some sort that lets us find and quickly look up the texels of neighboring polygons (which may not necessarily be neighboring texels; they might be thousands of texels away in terms of Manhattan or Euclidean texel distance).
You do realize you are worrying about correct normal perturbation for a triangle the size of a pixel, right?
Maybe that answers your question.
And having a tessellation level higher than the texture resolution doesn't make sense anyway, because then you don't need bump mapping!
@DasAntiNaziBroetchen Oh, I should clarify that I'm thinking of interpolated texture sampling (e.g., bilinear sampling, possibly even with mip-mapping)! Also, I could see a case where a tessellation level higher than the texture resolution might make some sense as a result of the interpolation.
As a simplistic example, consider a very low-res 16x16-texel height map designed to produce a smooth valley when displacing a quad patch viewed up close. In that case, we may want to tessellate to more than 16x16 subfacets if we can interpolate the texture, to create a very smooth look in the resulting geometry even when zoomed in. Or, in the case of a raytracer, do per-ray/per-sample displacement, where the raytraced geometry may occupy far more than 16x16 pixels on the screen even though the texture map is only 16x16 texels; we interpolate it to get a higher geometry resolution than the texture provides.
Per-ray/per-pixel displacement fascinates me the most, as raytracing is becoming more and more popular for real-time engines, but I was thinking we'd have to interpolate the texture in many cases to avoid the resulting displaced geometry taking on aliased artifacts from nearest-neighbor sampling. Where I'm confused is how we can interpolate in a way that avoids interpolating from texels that are adjacent in the texture map but aren't adjacent in the actual geometry (discontinuous UV boundaries, so to speak).
Apparently I've watched this video before, probably while sleeping lol
I'm still confused about the difference between a bump map and a displacement map.
It sounds like it's the same image, just rendered differently.
A displacement map actually generates the surface detail. A bump map just gives the illusion of surface detail by only modifying the surface normal.
@cem_yuksel So I can use the same texture for both? And the only difference will be in the shader?
As I understand it, a bump map gives a feeling of up and down in the texture on the surface, based on how it interacts with the light. For example, if you have a white dot on a surface, when the light hits that white dot it will interact as if that spot on the surface were higher than the other spots.
A displacement map, on the other hand, changes the actual structure of the surface. So if you have a white dot on a surface, the computer sees it as a polygon that grows above the surface and interacts with the light correspondingly.
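And to the question above: generally yes, the same grayscale height texture can drive both; the difference is in how it's applied. A tiny numpy sketch of the two roles (my own illustration):

```python
import numpy as np

h = np.zeros((8, 8))
h[4, 4] = 1.0                      # the "white dot" height map

# Displacement mapping: the height moves actual vertices along the normal.
ys, xs = np.mgrid[0:8, 0:8]
verts = np.dstack([xs, ys, np.zeros_like(h)])
normal = np.array([0.0, 0.0, 1.0])           # flat plane
displaced = verts + normal * h[..., None] * 0.5
print(displaced[4, 4])   # [4. 4. 0.5]: the dot became real geometry

# Bump mapping would leave `verts` untouched and only tilt the shading
# normals using the gradient of `h`, so silhouettes and shadows stay flat.
```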
Is there any performance benchmark comparing these methods?
Is parallax occlusion cheaper than doing displacement tessellation + normal mapping?
This is a very difficult question to answer. So far I have not seen anyone be able to answer this. Part of the reason is that each technique has its own strengths and weaknesses.
Congratulations, a very successful series! I just discovered it and will go through every video one by one :) It's worth digging into the depths of the work we do.
The Bob Ross of computer graphics.
Would there be a significant performance benefit to dynamically changing the number of steps used for parallax occlusion mapping depending on the relation between the maximum depth of the displacement map and the distance of the camera from the surface? And would it look as good as if the number of steps was always very high?
Yes, but shader code often runs faster with a fixed number of iterations.
Very good lesson! Thank you so much!
YES!!! Subscribed!
thanks sir😇
I'm sorry for such a stupid newb question, but just to be sure... without directional lighting, bump maps do literally nothing... correct?
Correct. Bump mapping just changes the normal, so the shader must use the normal to utilize bump maps.
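A one-liner way to see it: in a Lambertian shader the bump map only enters through dot(N, L), so with no directional light there's nothing for the perturbed normal to change. A minimal numpy sketch (my own, with hypothetical values):

```python
import numpy as np

def lambert(n, light_dir):
    """Diffuse term: the bump map's effect lives entirely in dot(N, L)."""
    return max(0.0, float(np.dot(n, light_dir)))

flat   = np.array([0.0, 0.0, 1.0])
bumped = np.array([0.3, 0.0, 0.954])   # a slightly perturbed, ~unit normal
L      = np.array([0.577, 0.577, 0.577])

print(lambert(flat, L), lambert(bumped, L))  # directional light: they differ
# Under a constant ambient term the normal never enters the shading,
# so the bump map contributes nothing, exactly as the reply says.
```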
Thank you!
cool
Thank you so much
hello :3 ty for info
🎩🎩
Thank you very much