So. I open blender up, delete the default cube......then what?
Then delete default light and then default camera. Finally, delete Blender. Start again.
Then you add a new cube.
Simple, he can’t make anymore videos because well….you deleted him :(
then you rewatch the whole tutorial again step by step
@@sicfxmusic 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
he's just using blender to teach you linear algebra
dont tell them
@@DefaultCube You better knock that shit off..!
It's working!
DAMNIT ALGEBRA you fooled me once again
Couldn’t understand it less than my linear algebra class, taught by the nuttiest Ukrainian to ever enter the US. I still don’t know what an eigenvalue is. Oh, and 20 years of engineering later… I never needed to.
GOAT of blender still on the block
"get it...? BLOCK?"
9:05 and _this_ is why I want Repeat Zones in the Shader Editor
I'm not smart enough to figure out how to fix this repetition problem myself; how would you actually do it without adding all those step nodes manually? This parallax effect is so mind-blowing, I really want to incorporate it in my work
@@samfellner honestly you're better off just using actual displacement instead of doing all this lol
this is only useful if you're really serious about cutting down render times
To replicate this with displacement mapping I guess I'd need one triangle per pixel. That scales really fast: a 1K×1K texture would need a million triangles, multiplied further if the texture repeats.
Yeah, sure, I probably wouldn't need one triangle per pixel in most cases. But even outside of games (real-time backgrounds on volume stages and the like), I would love a more user-friendly approach to occlusion mapping, collapsed into a single node. It would save so much effort. :)
@@Denomote displacement can really eat up your VRAM/RAM usage and destroy render times, so this is really useful. I'm thinking there's probably a way to implement it using OSL; that way you can just use a for loop.
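For what it's worth, the loop an OSL version could use (replacing the stack of manual step nodes) can be sketched in runnable Python; `height`, `view_xy`, and the parameter values here are illustrative assumptions, not the video's actual node names:

```python
# A minimal sketch of the layered depth march that the repeated step
# nodes implement, written as the kind of loop OSL allows.
# `height` returns the height map value in [0, 1]; `view_xy` is the
# view direction projected onto the surface. All names are assumptions.

def parallax_march(height, uv, view_xy, depth_scale=0.1, num_steps=20):
    """Walk down through fixed depth layers; stop at the first layer
    that falls below the height field, and return the shifted UV."""
    layer_depth = 1.0 / num_steps
    # UV shift per layer, along the projected view direction
    du = view_xy[0] * depth_scale / num_steps
    dv = view_xy[1] * depth_scale / num_steps
    depth = 0.0
    u, v = uv
    # The height map is read as a depth map: surface depth = 1 - height
    while depth < 1.0 and depth < 1.0 - height(u, v):
        u -= du
        v -= dv
        depth += layer_depth
    return (u, v), depth
```

Each node pair in the video corresponds to one iteration of this loop, which is why the node tree grows linearly with the step count.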
Think how useful that could be: you could set it so it used the view distance to determine the number of "layers" for the parallax. I bet that sort of dynamism could be great for optimization. I already use view distance to scale the amount of detail in procedural materials, which I at least *think* improves performance.
Extra tip: use a white noise texture instead of the discrete depth checks - simplifies the number of nodes and gets rid of the stepping, at the cost of looking a bit noisier.
Classic “dithering”. Just bear in mind, for anyone who does this: it's NOT a blur method.
Dithering can mess up displacement and height map outputs.
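A minimal sketch of the white-noise trick, assuming `rng01` stands in for a White Noise texture value in [0, 1); the function name and signature are illustrative:

```python
# Jitter each pixel's depth by a random fraction of one layer, so the
# hard layer bands dissolve into noise instead of visible steps.

def jittered_layer_depth(pixel_depth, num_steps, rng01):
    """Quantize a depth into layers, with a per-pixel random offset."""
    layer = 1.0 / num_steps
    # Without jitter, floor(depth / layer) * layer gives visible bands;
    # adding rng01 moves each band edge randomly by up to one layer.
    return min(1.0, (int(pixel_depth / layer) + rng01) * layer)
```

This is why it is not a blur: each pixel still snaps to a single quantized value, just with a randomized threshold, which is also why it can corrupt data-like outputs such as height maps.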
😮 I remember this trick in making good looking refraction.
Couldn't you improve that further with a gradient texture?
I finally understand how parallax occlusion mapping works.
Thanks for motivation. I was planning to delete blender. Now I did it.
kkkkkkkk🤣🤣🤣🤣
🤣
I wanna cry. I understood nothing!! But, I really wanna use parallax occlusion, because my PC isn't strong enough for displacement maps!
This was still a very detailed tutorial, so I will probably get it after watching it a few times. It's easier to understand things when I have no choice but to understand them to complete a project.
This is something I ALWAYS wanted. In real-time rendering engines you can have "per-pixel displacement mapping", but in Blender, for you to actually see the displacement, you need to subdivide the heck out of your mesh, since displacement works per vertex. I always found that silly and inefficient, so this technique is AMAZING for when subdividing a plane ad infinitum just isn't ideal
Well, you can just enable adaptive subdivision, which is per pixel
@Concodroid That still creates geometry, though; this genuinely does not. You get real depth with no added geometry, which is great for large scenes. For example, this is how video games fake the interiors of windows in large cities
@@sebastiangudino9377 No, I know; it's just that this approach has limitations too. Glancing angles suffer with this technique. It does best when you're looking top-down at a flat plane; adaptive subdivision works best with something like landscapes
@@Concodroid Well, you can actually adjust the steps at runtime when viewing from such angles. Not sure about Blender, but at least Unity and Unreal allow you to do that: basically you take the angle at which a surface is being looked at and do some math to scale the number of steps accordingly.
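The "some math" above usually amounts to blending the step count by the view angle. A minimal sketch, assuming unit vectors and made-up step bounds (not engine defaults):

```python
# Angle-adaptive step count: few steps when looking head-on, many at
# glancing angles, where parallax artifacts are worst.

def steps_for_angle(view_dir, normal, min_steps=8, max_steps=32):
    # cos(angle) = |dot(view, normal)|: 1.0 head-on, near 0.0 glancing
    cos_a = abs(sum(v * n for v, n in zip(view_dir, normal)))
    # Linear blend: head-on -> min_steps, glancing -> max_steps
    return round(max_steps + (min_steps - max_steps) * cos_a)
```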
Thank you so much I’ve been trying to find out how to do this forever!!!
WHOOAAA !!!
But seriously, great tutorial! A relatively deep subject, but well explained; even though we might have to take a few steps back a few times to figure things out correctly, you gave us a precise and concise explanation. I just watched a video on the colour perception of jumping spiders, and all to say, it is quite a wonder the things we can manage to do with the information we can manage to perceive :)
After I plugged the Geometry node's Incoming output into the vector, I went no further. That was pretty cool looking just by itself. Thanks as always for teaching; I will try to finish once I watch it again and again.
A couple of years ago I followed a similar tutorial showing how to make fake windows with rooms behind them. I ended up with a project to build the rooms (with wall decorations, lighting, etc.), which put out a node set, and a shader template to modify with the node set.
The first time I saw parallax mapping was in F.E.A.R., in the decals on damaged walls. It was one of the coolest things to see, because it was so much detail for something that used to be a black dot.
BEST Parallax Occlusion Tutorial ever!
Man, that's just so much you give us. Thanks a lot; I'll have to rewatch it several times to fully get it, I think
Great explanation for something I considered sorcery when I saw it used in 3D.
Wow, amazing stuff! Thanks a lot for sharing it! A big ciao from Italy and Long life to Blender! :)
Would love to see your take on shell texturing, been messing around with ways to do it in geometry nodes.
Oh, finally a great quality tutorial! I'm joking, you're the best.
This guy is the best blender youtuber
I totally got everything covered here!
It made my head hurt but it was well worth it, the longest thing for me here was the 7 hours I invested in Zbrush to create that texture lol
Man, I was looking for this tutorial, and I'm also looking for a lot of other tutorials by you on geometry nodes, like how to randomize hair thickness
damnn thats like octane ggs dude!
This method has some limitations, but for some cases it's enough. Thanks for sharing
I was waiting for this video
Once again, blow my mind
Thanks for posting this! That forum is a wealth of knowledge
Doesn't that break when you rotate the plane? You need to transform the incoming vector into texture space, i.e. take the dot product of the incoming vector with the normal, tangent, and binormal. Although I'm sure you know that; I'm guessing there is a part 2 🙂 Be warned that the technique doesn't work on curved surfaces anymore: Blender clamps the normal so it can't point into the plane's surface, because that messes up EEVEE Next. That caused me such a headache trying to figure out what was wrong!
yep, just multiply by that matrix - the one i linked in description has that
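The transform described above amounts to three dot products. A minimal sketch, assuming the tangent/binormal/normal frame is orthonormal and all vectors are unit length:

```python
# Project a world-space incoming vector into tangent (texture) space
# by dotting it with the tangent, binormal, and normal in turn.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(incoming, tangent, binormal, normal):
    """Equivalent to multiplying by the (row-major) TBN matrix."""
    return (dot(incoming, tangent),
            dot(incoming, binormal),
            dot(incoming, normal))
```

With the frame aligned to the world axes this is the identity, which is why the effect happens to look right on an unrotated plane even without the transform.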
First thing popping to mind: convert the height map into something akin to a 3D SDF, which would optimise both the number of steps needed for each fragment and the accuracy.
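A toy illustration of why an SDF helps: if each sample is a true distance to the nearest surface, the march can step by exactly that distance (sphere tracing) instead of fixed layers. The 1D `sdf` used in the test is an analytic stand-in, not a converted height map:

```python
# Sphere tracing: advance along the ray by the sampled distance value,
# which is the largest step guaranteed not to skip past the surface.

def sphere_trace(sdf, start, direction, max_steps=64, eps=1e-4):
    """March along `direction` from `start`, stepping by the SDF value."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(start + direction * t)
        if d < eps:
            return t  # hit
        t += d
    return None  # no hit within the step budget
```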
'the man, the myth, the legend, the mathematical wizard'
Amazing man, I crave node shenanigans
Your ability to teach is phenomenal. Thank you for being an inspiration.
nice explanation, thanks 🙏
I really want a fragment-shader kind of setup in Blender. Then I could for-loop through all the iterations easily.
Must say this is quite genius! amazing thnx
I only understood the skillshare ad😢😢
Bro you're a genius
This is nice, thank you
This madlad got 'i can remap your life' kind of energy
I have tried to do this well so many times in blender
❤nice tutorial!
Hard to tell whether the guy in the video is Jon Snow from GOT or Pedro Pascal from The Mandalorian
I like how my brain turned off for every single thing except the skillshare ad.
This is the equivalent of "Yeah, I'm a visual learner" in math class when learning about vectors
So Enjoyable
You forgot to set the normal and roughness maps to Non-Color instead of sRGB (I assume you know to do that but just forgot in the moment). Normal and roughness maps are not interpreted correctly when set to sRGB, so the rocks at the end look a bit weird. Cool video though; I wish Blender just had POM support by default, like most game engines, where you just plug in a height map.
Very cool technique!
Awesome as always, man!
I wonder if you could do a binary search instead of buckets
I don't know Blender well so I can't say, but it would help mitigate the blockiness
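The usual hybrid is a short linear march to bracket the surface, then a few bisection steps. A minimal sketch of the refinement, where `inside(d)` is a stand-in for "the ray at depth d is below the height field":

```python
# Binary-search refinement: once the linear march brackets the surface
# between two depths, bisect the interval instead of accepting
# whole-layer error.

def refine(lo, hi, inside, iters=8):
    """Bisect [lo, hi], keeping lo outside and hi inside the surface."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inside(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each iteration halves the error, so 8 bisections give roughly the precision of 256 extra linear steps.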
I literally discovered this one week ago (not quite what I discovered now that I realise, but still similar)
Have you seen anisotropic cone step mapping? I would be surprised if it's possible to make in nodes, but it is supposed to often be both faster and higher quality. It uses scaled cones centered on each pixel instead of vertical layers, and requires a preprocessing step.
There's also NVIDIA's relaxed cone step mapping, which looks similar except it makes the cones intersect the geometry and adds a binary search at the end.
You are a god who walks among us.
Doing stuff in steps like this seems weird.
I think we need the node equivalent of Calculus.
Finally you upgraded the tutorial..
It looks kinda wonderful, but I have serious concerns about how much longer the render will become with a setup like that. If only it were some sort of low-level processor-instruction node...
We really need the ability to loop in shader nodes like in geometry nodes; that, and passing a texture parameter into a node group
Bravo
Neat
Man a loop would be nice in nodes
😅 One breath at a time. Thanks a lot, saviour!! 🎉
great! Thank you!
Okay so, what's the actual use? Does it save performance?
big heart for jordy
Instead of doing the "drill check", wouldn't a binary search be better? So start at 0.5, then go half the way in the direction that it hints?
thank you!
If only there were a way to get pixel depth offset too, it would be perfect
Is there a way of blurring the edges of an image texture into each other to make a sort of fake seamless texture with nodes? Might be a cool experiment
I'm late to this party. Is there a way to remove those fringes, or whatever they're called, that make it look like it's sliced?
You could say this is like a ray marching algorithm but with Blender geometry nodes
Thx for sharing!
Funny, for me it's heavier to use a displacement texture than to use this method. Is it just because displacement uses experimental subdivision?
I was following the steps until 6:52, when suddenly a group called Depth appeared, and I'm not sure how it was created. Help!
Doing loops in Blender's node editor is a nightmare. We have the OSL script node available, but then we lose the ability to run on the GPU 😅
Thank you very much :D
Bravo sir!
Hey, I saw that your workflow in Blender while making animations is very quick and good quality. What laptop or PC do you use?
Really cool. It got me wondering, though: could we get rid of the "layered" effect somehow? Like calculating what the normal should be in between, based on the previous and next layers?
That's what parallax occlusion mapping does; it's also the difference between POM and steep parallax mapping. I'm not sure if he implemented this.
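A minimal sketch of that interpolation step: blend the two samples bracketing the intersection by their signed distances to the height field. The four inputs are assumed to come from the layered march; names are illustrative:

```python
# POM's refinement over steep parallax mapping: instead of snapping to
# the last layer, linearly interpolate between the depths just before
# and just after the intersection.

def pom_interpolate(depth_after, surf_after, depth_before, surf_before):
    """Return a sub-layer intersection depth between the last two layers."""
    after = surf_after - depth_after      # negative: ray below surface
    before = surf_before - depth_before   # positive: ray above surface
    w = after / (after - before)          # blend factor in [0, 1]
    return depth_after * (1.0 - w) + depth_before * w
```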
omg omg omg omg tysm tysm tysm tysm
I’ve spent some time trying to use this technique to represent windows of a building similar to the ones in the Spider-Man games. Any ideas pointers you’d be able to share?
Looks great... Now... Is there a plugin that gives me that with a single node? 😅
I mean, if that node tree works for any plane, then it should be collapsible, for ease of use and to minimize any risk of user error?
Also, doing that operation 20 times makes me wonder: is there no way to do for loops in Blender?
There is a lot of talk about Z-axis coordinates here; does that mean this only works on surfaces that are flat and horizontal?
That was amazing
we really need loop nodes in blender shader editor
Also, a way to supply a texture to a node group without having to duplicate it and faff around inside whenever it has a texture dependency. I think a sampling node would be super handy: basically it would act like the current texture node but allow a user to input any texture to a socket, splitting the job of the texture node in two. It would enable a more user-friendly interface and allow for advanced coordinate manipulation. I doubt it'll happen, but I can dream.
I also want a proper way to get per-light information into a material, for better NPR shaders. Shader to RGB works, but it's limited and pretty clunky. That's a whole nother can of worms, though.
Ray marching shader next?
Okay, that's nice, but how can we write this data to the depth buffer?
can you try to release a blend file of the node?
I've used Blender for ages and never checked if this was possible. Granted, the setup is too complex for something I'd use normally. It would be great if this were built in!
😮 Why use a bump or normal map instead of this parallax?
For those that want to listen to this normal speed, change to .75 playback.
So let me see if I'm understanding this correctly, cuz the difference between parallax mapping and displacement mapping is confusing.
Traditional displacement mapping actually displaces the geometry. I'm not sure about Cycles/EEVEE, but in Octane Render it displaces the surface at the render level rather than actually displacing the polygons themselves. That's how you can have high-quality displacement with a low-poly model. I've always seen this as a great way to get height detail without overloading a scene.
It seems parallax mapping imitates displacement mapping without actually displacing anything, kind of like how bump/normal maps create the illusion of surface detail, but when you look at the edges of the model, it's still smooth and flat. It seems this parallax method would similarly break down when you're looking along the tangent of a surface. I'd imagine it's less computationally expensive, though, so it'd render way faster. Interesting. Nice tutorial!
At 8:36 he did not connect the 0.8; he jumped from 0.6 to 1 in the comparisons
I understand why it works, but there's no way I could have figured out on my own why you do it the way you do, and with which nodes, haha
Uhh... I'm sorry, I can't understand: what is the 'Depth' group node? How do you make that?
So you've achieved POM; now how about PDO with shadows, so it interacts with other objects and doesn't look flat upon intersection?
Self-shadows should be possible, but I don't think you can have correct shadows from other objects without depth offset
How do you add more than one POM texture in a .blend file, since multiple materials share the same height-map group?
i got whiplash at 9:04
Does anyone know if the Ray Portal node can be used to offset a texture's pixels, like UE5's pixel depth offset?
excellent stuff as usual
Something seems wrong with this implementation, I think? It changes too drastically when the camera view angle changes.