Really cool! Would love a video about spaces (Tangent, Clip, World, Local, ...)
If there is enough support for that then that could be a good video for sure.
that would be really nice
Wow, I just got this recommended to me, and this seems like a gold mine of a channel. I always liked the tutorials in unity that make something that sort of pushes the boundaries in unity; stuff like mesh deformation, liquid physics, and stuff like that. And this channel seems like exactly that but with shaders and rendering and stuff like that. Keep doing what you're doing!
I'm glad you are getting something out of it. Thanks for watching!
Thank you again. Really clear and well-paced video. I like how you don't assume anything and take time to explain 'the obvious'.
This is your daily dose of Recommendation
Ray marching with Unity
Great tutorial. Just the right amount of detail to explain all the important concepts as fast and efficient as possible. Thank you!
23:00 World-Object space mismatch was interesting (everything here was interesting, but this one was just wow)
The way you explain while you write things out is very easy to understand and very easy to follow. Even if I didn't already know what a for loop was, I could have inferred it from how you explained it. Good stuff.
Thanks!
I would rather watch lectures like these than type out all the shader code from the course material I took at university. The shader code seemed complicated and made no sense until you put things together. Glad I found your channel. I might be doing such effects pretty soon, so this comes in handy. Thanks!
Very interesting way to build intricate 3D renderers, I loved it! You're a great teacher, thanks!
That's very cool. I love it when you bring this stuff into Unity.
Wow, this video needs more recognition. It helped me a lot with making a torus and having a working raymarcher, but I'm stuck on how to have the same shader render different models (cylinders, planes, etc.).
Check my video on different primitives. I go over a cylinder, torus, sphere, capsule and box as well as a bunch of different ways to shape them.
@The Art of Code Since I saw your jellyfish raymarcher in VRChat, I wonder: could you also go inside the cube and see the torus?
Or how else have they done it?
And on top of that, how do you render a shader stereoscopically?
Have been waiting for this to be covered by you. Really grateful, Thanks!
Great. This is what I always was looking for, to make both coordinate systems matching.
You are an absolute lifesaver!! Thank you so much for sharing!!
Daily dose of shader coding, great stuff man!
Been wanting to get into writing shaders for some time now. I subscribed to your channel some time back and will soon have time for a binge-watching session of your videos. :)
Thank you for helping me learn about this really cool stuff. I'm loving finding out about SDFs!
Hi there! Where would you recommend I start learning vertex shaders: directly in Unity, or in an application like Shadertoy? The truth is that there is little information about fragment shaders in general, and for vertex shaders it's practically nil! Thank you very much for your videos.
Wooow, very high quality content
Oh my gosh! Thank you so much! Was trying to get this going a few weeks ago. My math was so wrong lol
This is so wild! Thank you for making this tutorial.
26:30 For the same reason I put the returns up above whenever possible so the else scope isn't needed
wow, thank you for this!
Thank you so much for sharing your knowledge with us. I'm so glad that I found your channel. There are so few who teach about shaders, and especially about ray marching. As a beginner I learned a lot from you; thank you so much again.
By the way, I'm trying to recreate this in HDRP. I started with the default unlit shader like Mr. Martijn, but it seemed to be transparent, so I added a "Queue" = "Transparent" tag to the SubShader tags, which seems to fix that for now. I'm writing this so that anyone having the same problem can continue this awesome tutorial without trouble.
Hi, I'm also trying to do it with HDRP but I'm a bit struggling, what did you write instead of:
o.hitPos = mul(unity_ObjectToWorld, v.vertex);
@@hsantonin65 Idk if you ever figured it out, but that is a matrix multiplication that just converts the vertex positions from object space to world space. It's the equivalent of taking the Position node in Shader Graph and switching it to "World" instead of "Object" space.
Really cool! A video about texturing the raymarched model surfaces in different ways would be nice!
Yeah I'll do a video about texturing ray marched objects at some point.
In the mean time, check out my Twisted Toroid tutorial for texturing tips for a torus.
Yes, it would be cool to show how to apply a texture to the torus, or any other shape you draw with raymarching...
Priceless as always !
Would this method allow you to fly into the scene, or is there a different technique for that?
A light themed editor for a light themed man.
Awesome! Also I love your presentation!
Nice tutorial! How can I create transparency of the mask (_MainTex) also on the back of the cube so that we can see the blue background through it?
Try the following: if you're doing this in Unity, put the
Blend SrcAlpha OneMinusSrcAlpha
statement in the SubShader, change the render tags to transparent, and discard any pixels that are too dark.
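A minimal ShaderLab sketch of those three steps, assuming the unlit shader from the video (the darkness threshold is an illustrative value):

Tags { "RenderType"="Transparent" "Queue"="Transparent" }
Blend SrcAlpha OneMinusSrcAlpha
// in frag(), after computing the color col:
if (max(col.r, max(col.g, col.b)) < 0.01) discard; // drop pixels that are too dark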
Thank you very much for this cool video.
Does this work with URP?
I watch your cool videos on shadertoy. It would be good to show basics of unity shading language compared to shadertoy. You already gave some hints so thank you!!!
It does work with URP, but you have to write this code in HLSL. I tried it and it worked.
Thanks for this and many other shader tutorials. I was able to add some basic Physically Based Rendering (PBR) lighting, particle-system data streams, and light bouncing to your ray marching shader. Unfortunately, though, it starts to hurt the framerate once I stream more than 10 particles (just for rendering a bunch of metaballs).
Thanks for watching! Yeah you can go as deep as you want with this, until your game turns into a slide-show because of performance ;)
Can this also be lit with unity lights?
Does triangle count affect performance? Should making it a billboard quad improve performance by roughly 3x? (box: 6 faces, 3 culled, vs. 1 face) => fewer raycasts in the fragment shader.
Thanks for the video!
You could put camera facing billboards but triangle count is not the bottleneck here. The number of ray casts is solely dependent on the screen area covered by the geometry.
Can SDF geometry be made without any mesh, or directly as a mesh, maybe like a polygon mesh in space, without a shader?
This is a truly great video thanks!
Really wonderful video many thanks : )
def gonna try do some shapes in it
Great tutorial video! Thanks for posting!!!
Using "discard", does that provide for aliasing, or really anti-aliasing?
You either discard a pixel or you don't; there is no in-between. So in that sense, it's not anti-aliased.
Having said that, the edge should anti-alias properly with Unity's AA from the quality settings.
@@TheArtofCodeIsCool Ya, good point. Built in AA should help. Thanks again for the helpful demo!
Epic, thanks teacher!
this is so dope. thank you.
Oh, you are so cool in your explanations.
Thanks so much for this extremely useful guide! I have one question regarding this: Once these raymarchers are made in unity, how do you get their draw order to work properly with other shaders in the scene?
Right now, these raymarchers simply project the image of a 3D object outward onto the surface of the parent object (in this case, a cube), so anything that tries to intersect the shapes disappears into the cube.
I've seen raymarchers that take this a step further and have their draw order properly intersect with surrounding geometry. What would need to be done code-wise to get this one to do the same thing?
That's a great question. The first step would be to render the backfaces of the object instead of the front faces. Then you would have to write to the z-buffer to get proper per-pixel depth values. I guess I should make a video about this, huh? ;)
@@TheArtofCodeIsCool Wow, thanks for the prompt response here! I never thought to render to the backfaces like that. That would allow all external objects to exist within the cube volume. That also makes sense about writing to the zbuffer. I'm guessing that you'd have some sort of "if" statement that compares the depth buffer value of the background + foreground objects and simply discards the raymarch surface if its value is higher than the intersecting object. I'm guessing that something else would be invoked to force the raymarch object to be in the foreground unless negated by the depth buffer comparison? I'll have to think about this more. That would certainly be a pretty useful video!
@@TheArtofCodeIsCool Hey! I got it to work! Thanks for this suggestion. The commands that helped with doing this were Cull Front, LinearEyeDepth, and _CameraDepthTexture, plus converting the ray distance into a "forward distance" by scaling it with a dot product against the normalized camera forward view vector. Without the dot-product scaling, turning the camera can result in unusual shifting of the draw order. What I still don't know: how to deal with intersecting transparent geometry. The depth buffer doesn't seem to take transparent geometry into account (IDK if the standard shader has ZWrite set to off or not).
@@swishchee awesome. Great to hear you figured it out!
@@TheArtofCodeIsCool Hello!
I have some warnings about the code I used: careful using the _CameraDepthTexture. It can completely mess up in VR. I ended up tossing the above code and, instead of outputting frag() as a float4, I outputted it as a struct:
struct f2s { float4 color : SV_Target; float depth : SV_Depth; };
I instantiated f2s by the name "ret" within the fragment program.
Then I calculated the following in order to have the object treated with proper draw order within Unity environments:
float3 currentPos = mul(unity_ObjectToWorld, float4(ro + d * rd, 1)).xyz;
float4 clipPos = mul(UNITY_MATRIX_VP, float4(currentPos, 1.0));
ret.depth = clipPos.z / clipPos.w;
However, I got a strange draw-order issue: when the camera is within the SDF, the entire color (which should fill the camera view) appears behind everything else.
So I just artificially did: if (ret.depth < 0) ret.depth = 1;
ret.color is computed the same way as in this video, however. The if/else statement (for discarding) in the fragment program is also the same. This code is what ended up working for me, including in VR.
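Pieced together, a hedged sketch of that fragment function; ro, rd, d, and col are the names from the video, and the depth clamp is the workaround described above:

struct f2s {
    float4 color : SV_Target;
    float depth : SV_Depth;
};

f2s frag(v2f i)
{
    // ... ray march from ro along rd to get the hit distance d and surface color col ...
    f2s ret;
    float3 worldPos = mul(unity_ObjectToWorld, float4(ro + rd * d, 1)).xyz;
    float4 clipPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
    ret.depth = clipPos.z / clipPos.w; // per-pixel depth so Unity sorts this against other geometry
    if (ret.depth < 0) ret.depth = 1;  // workaround for when the camera is inside the SDF
    ret.color = float4(col, 1);
    return ret;
}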
Hey I'm trying to create interstellar gas/nebulae with this. I'm trying to do it with perlin noise (FBM), but I don't know how to do a distance function for it. If I just move the distance little by little and sample the space for the noise density that way, it's terribly slow. If I make the distance between each sample bigger, then it looks weird as I move through space.
Is there any common way to render perlin noise in 3D space using ray marching? Any advice?
Man, you are amazing. Keep making more Unity tutorials, please!
How do you get the depth for comparison, so this can interact with mesh objects?
oh cuz it's cool to do that I forgot! thanks so much once again!
Is it possible to build this with material nodes inside Blender?
I'm not sure if you can use loops with the node approach. If you can, it would be a huge node tree, though.
I'm not sure about blender, but a lot of shader node editors have custom code nodes where you can define a function yourself for the tricky parts
definitely
a few people including me have done it
and one has shared his on gumroad: gumroad.com/l/BlenderRaymarch
Hi, how do we calculate shadows?
Is this possible in three.js? I'm stuck while marching in a direction.
Thank you so much!
Which software are you using to demonstrate it?
The main program? Unity?
@@TheArtofCodeIsCool Are you telling or asking? You put a question mark.
@@user-og2lt8ou8i yeah because I'm not sure what you are asking. The main program is unity, it's in the title of the video
@@TheArtofCodeIsCool I was asking how you're able to demonstrate it, meaning which software you're actually using to help us understand.
@@user-og2lt8ou8i do you mean what presentation software he uses or what he uses for the visuals in his explanations? (with the crumbly paper background)
Awesome! Mate, can you re-format this so that it works with the SRP engines, not just the old built-in one? It would be cool if it worked with URP or HDRP. So instead of CG, it should be in HLSL. Anyway, great work!
I suppose from now on I'll do stuff in HDRP.
Great tutorial, thank you.
It would be nice to see how raymarched objects can interact with the built in depth buffer.
The depth buffer in this case would just be the surface of the cube, unless you also wrote a raymarched version of the shader for the depth-buffer render pass, which I guess would be possible?
Properly merging ray marched objects with the rest of the Unity scene is an interesting problem. Off the top of my head, you'd have to render the object in the transparent queue so it's rendered after solid geo; then you'd have to compare the computed ray marching depth with the z-buffer and determine from there if the pixel should be discarded.
Sounds like a good topic for a video ;)
I'm almost sure I've seen infinite field of raymarched cubes interacting with depth and lighting and all that somewhere
@@TheArtofCodeIsCool Unity uses both a Z-buffer and a depth buffer. They virtually do the same thing; however, the Z-buffer cannot be read or written to by a custom shader. The only way of interacting with it is by using the ZWrite and ZTest tags on a pass, which isn't very useful for raymarched geometry: Unity will always compare and write the actual mesh geometry to the Z-buffer, and you can't write custom values to it.
Now the depth buffer, on the other hand, is something you can read from and write to. It's disabled by default but can be enabled with a single line of script, or by simply having a light source which casts shadows in the scene.
When enabled, you can sample the depth value of all opaque objects from a built-in depth texture in your shader and just discard pixels accordingly, to have your raymarched objects interact properly with opaque geometry. Of course you have to render them after the geometry, but I would not advise using the Transparent queue or you'll obviously interfere with transparent objects. Luckily you can define your own queues in Unity and just use one between geometry and transparency; add something like "Queue"="Transparent-1" to the tags at the beginning of a pass and you should be good.
That should work perfectly fine when you only have opaque objects in the scene. However, two independent raymarched objects wouldn't interact with each other yet, as they only read from the depth buffer but don't write to it. To do that you would need a shadow caster pass; those are responsible for writing to the depth buffer. Usually Unity uses the one in the fallback shader when you don't have one defined yourself, which simply writes the mesh geometry to the buffer, but you can write your own to account for the actual raymarched depth without issues.
The last issue is transparent objects, and that solution isn't pretty. The default shaders don't write to the z-buffer, but they can certainly read and check against it with the ZTest statements. And remember, your raymarched object is only properly written to the depth buffer; the z-buffer will still have the cube geometry (or nothing, if you disable it with ZWrite). So for those transparent objects, unfortunately the only way to make them work is to write your own transparent shader which ignores the z-buffer and reads from the depth texture instead.
So yeah, transparent objects will never work with the default shader and need to be adjusted.
As for the reason why Unity uses two separate buffers to do virtually the same thing, I have no idea; something something mobile game compatibility is what I read once 🤷
@@Naveication thanks for sharing all that knowledge! You saved me a bunch of frustration and confusion for when I do make that video :)
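A minimal sketch of the depth-texture compare described above, assuming the built-in pipeline, a screenPos field added to v2f via ComputeScreenPos, and the depth texture enabled as mentioned; ro, rd, and d are the names from the video:

// in frag(), after ray marching the hit distance d from ro along rd:
float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
float3 worldHit = mul(unity_ObjectToWorld, float4(ro + rd * d, 1)).xyz;
float rayDepth = -mul(UNITY_MATRIX_V, float4(worldHit, 1)).z; // eye-space depth of the raymarched hit
if (rayDepth > sceneDepth) discard; // an opaque object is in front of this pixel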
Is there a way to not bind the rendering space to the cube, and also to enable the camera to fly really close?
The camera can't fly close at the moment because as soon as the camera is inside the object, it doesn't render anymore. You can solve that by rendering the backfaces of the mesh instead of the front faces. Google 'unity shader culling'. Off the top of my head, it should be something like 'Cull Front'.
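For reference, a sketch of where that statement goes in ShaderLab:

// at the top of the Pass (or the SubShader):
Cull Front // render the backfaces so the effect keeps drawing when the camera enters the mesh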
I would love it if you could make a tutorial for all the other shapes you showed at the beginning, but please also in HLSL/CG.
The Mandelbox (on the right) is coming :)
Hey Martijn. I have been following your lessons for a while. You make so many things clear. That is a true fortune! THANKS! One question I have: is there any 'proper' way to debug shader code (except printing out via color, as you do with the uv)? I followed your code; Unity doesn't throw any errors, but the cube stays black. I have no clue where the error might be. Many greetings, Trixi
Debugging is unfortunately kinda difficult with shaders and as far as I know, printing out colors is the only thing you can do really.
If the cube stays black then the first thing to check is if your camera rays seem valid. Just print out the rd vector as a color, you should see a slight color gradient, and colors should change when you move the camera around.
Next up, you could try using a sphere SDF ( length(p - spherePos) - sphereRadius ) and make sure you have it at the right position. Also verify that your ray origin (ro) is at the right position.
If you still can't find it then post the code somewhere and I'll have a quick look.
Hey Tristan I ran into this exact same issue and it's because I had the position vector as just float instead of float3 in the ray march loop. The dynamic type coercion between the vector types is very annoying.
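Following the advice above, a hedged sketch of the two debug steps (rd, ro, and GetDist are the names from the video):

// 1) Sanity-check the ray direction: remap rd to visible colors; expect a gradient that shifts with the camera.
return float4(rd * 0.5 + 0.5, 1);

// 2) Swap in the simplest possible SDF and verify the positions of the sphere and of ro.
float GetDist(float3 p) {
    return length(p) - 0.5; // sphere of radius 0.5 at the object origin
}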
How do you put a bigger torus in a smaller box?
I haven't even seen this video, and I already like it!
Thanks for sharing your knowledge!
Is it okay to render a whole game using this, or at least a good chunk of it? Will it make the game's performance slower or faster?
I would only use this for special effects inside of a traditionally rendered game. But as GPUs get more powerful, we can expect to render more things this way.
@@TheArtofCodeIsCool Thanks for the Reply
How would you transform the ray origin to Object Space, while ignoring the scale of the cube Gameobject?
What if I want to get close to the torus, closer than the sides of the cube? It disappears because it's rendered on the surface of the cube, but I want the torus to still be there even when I get close. What to do? EDIT: I see somebody asked a similar question; the answer is shader culling.
You should make a portal shader, like from the game portal.
Yeah that could be a fun exercise :)
Iirc Portal doesn't use a shader to make the portals. It literally duplicates the current room in separate coordinate space. I think that's what I remember from the developer commentary, but I could be wrong.
@@philbateman1989 yeah for sure it renders the scene to a texture and uses that. You'd still need to make a shader that renders the swirly glowing portal outline though.
Very cool
Can I do the same in Godot?
Wow!
Please note for anyone following along, if your camera is orthographic in editor then the homogeneous coordinates step around 24:34 will not work properly in editor.
Thanks for the heads up!
This will be interesting.
So cool!!!
How can I color raymarched objects?
My next video will be about that!
@@TheArtofCodeIsCool Thanks
Ray-marching Voxels next, plz?
Why make the 'virtual camera'? Why not use the actual camera info?
Thanks for that.
Amazing video! Any chance of getting a version of this tutorial but for Godot?
I gotta look into Godot. You are not the first person talking about it.
I'm your biggest fan!
Thank you!!
AWESOME. :)
If anyone is still watching this: I have this weird issue at 8:56. Where did he pull that length function from? It's not a built-in function. I keep trying to make it work but it doesn't. Can someone explain?
length IS built in. Are you sure you spelled it correctly? (e.g. Length doesn't work)
Very interesting, but the one thing I learned from my graphics course is to never use if statements in your shader, as it drastically slows down the framerate since it's no longer running in parallel.
The old 'never use if statements in shaders' argument is only valid in specific cases: on very old hardware or in the case where many neighboring pixels take different branches.
Also, modern shader compilers are goddamn wizards, you wouldn't believe how much if-statement shenanigans they translate to non-branching machine code
Thanks for the info, learned something new!
You are the best
Can you do a tutorial about interior shading? I think it's also made with the raymarching technique.
Can you get UV information for those shapes (donut/torus), so you can apply a texture to them?
It's hard to get UVs from raymarched objects, because they don't have polygons; they are just math. But you can use world-space coordinates to sample a texture on a raymarched object.
@@thewowo Thanks for the answer. I just found that texturing can be done with the tri-planar mapping technique.
Texturing can be done in many different ways: object/world coords and tri-planar mapping are two ways. A way to texture a torus using polar coordinates is described in my twisted toroid video.
@@TheArtofCodeIsCool Thanks! Will watch it as soon as I translate your code to the Godot shading language :)
@@TheArtofCodeIsCool That's what I got in the end.
Shader code (Godot Shading Language): www.codepile.net/pile/e7JyVAPm
And a sphere with texture: games.mail.ru/pre_960x0_resize/hotbox/ugcimages/4/c/a/FFhwgFWPv5?quality=85
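For anyone curious, a minimal sketch of the tri-planar idea mentioned above, assuming a hit position p and normal n from the ray marcher, and any sampler2D texture:

float3 TriPlanar(sampler2D tex, float3 p, float3 n)
{
    float3 w = abs(n);
    w /= (w.x + w.y + w.z); // blend weights derived from the normal
    float3 colX = tex2D(tex, p.yz).rgb; // projection along X
    float3 colY = tex2D(tex, p.xz).rgb; // projection along Y
    float3 colZ = tex2D(tex, p.xy).rgb; // projection along Z
    return colX * w.x + colY * w.y + colZ * w.z;
}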
Hello, can someone please tell me if there is any way to render a mesh behind a raymarched object? I'm doing gravitational lensing for a black hole, and it works fine with other raymarched objects and with a cubemap (which I pass to the shader and draw there too). But if I put a normal object (mesh) behind the black hole, it just disappears, and by design it should distort the same way as the cubemap.
You would need to render the background to a texture first without the black hole
@@TheArtofCodeIsCool Thank you! Can you tell me one more thing, please. I have a texture of what is behind the black hole (there is no black hole itself), I passed this texture to the shader. But I can't figure out how I can use it in the shader. Can you please tell me about it 🙏
@@Jone_501 You would have to adjust your UVs to be pulled towards a point. Check my tutorial (part 2) of rain on a window in Unity where I do exactly that to distort the background.
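A hedged sketch of pulling UVs toward a point, in the spirit of that suggestion; the center, strength, and falloff are illustrative choices, and _BackgroundTex is a hypothetical texture name:

float2 LensUV(float2 uv, float2 center, float strength)
{
    float2 offset = uv - center;
    float dist = max(length(offset), 0.001);
    return center + offset * (1.0 - strength / dist); // samples get pulled toward the center
}
// usage: float3 bg = tex2D(_BackgroundTex, LensUV(i.uv, holeCenter, 0.05)).rgb;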
To do it in GLSL, would I have to ray march in world space, converting NDC -> clip space -> world space, and then use this world-space variable (in 4 dimensions) in the raymarch instead of uv? Can anyone explain that to me, please?
I'm not quite sure what you mean. The procedure in GLSL should be similar to what I'm doing in the video.
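For reference, the Unity setup builds the ray roughly like this (the world-space variant, using the o.hitPos line quoted earlier in these comments); a GLSL port would compute the same ro and rd from its own camera position and model matrix rather than converting back from NDC:

// vertex shader:
o.ro = _WorldSpaceCameraPos;
o.hitPos = mul(unity_ObjectToWorld, v.vertex).xyz;
// fragment shader:
float3 ro = i.ro;
float3 rd = normalize(i.hitPos - ro);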
Shader error in 'Unlit/Raymarch': undeclared identifier 'n' at line 75 (on d3d11)
Shader error in 'Unlit/Raymarch': 'normalize': no matching 1 parameter intrinsic function; Possible intrinsic functions are: normalize(floatM|halfM|min10floatM|min16floatM) at line 75 (on d3d11)
Shader error in 'Unlit/Raymarch': syntax error: unexpected token 'float3' at line 70 (on d3d11)
You probably are just missing or misspelling something somewhere. Can you post me some lines from where the error is? +- 5 lines should probably work.
better than bumpmapping
what strikes me is just how many branches there are in this code, and that they tend to be VERY slow to run on GPUs. but then, there's not much other way to do it I suppose
Myeah they are kind of unavoidable if you want to ray march. Also, it is my understanding that on modern hardware, branching isn't as bad, unless lots of neighboring pixels take different branches, which isn't the case here.
Can you recreate this with Godot game-engine :) The Godot environment is simpler but I can't figure out how to do it :(
I don't know Godot. Not the first time I hear about it though so perhaps I should look into it.
@@TheArtofCodeIsCool I hope you check it out :D, i tried making your example and got something close, but i failed at getting ro,rd in Godot's shaders, because i myself am a beginner in Godot.. I can share my project files if you want to see it :)
Hi. I can help you with that. I just recreated this effect with the Godot game engine. Search for my comment on this video; links to the code are inside.
🙌
2:08
b o x
Do you need an intergalactic-super-computer to run this?
It's an expensive effect, but you'd only use it for special effects. I think that on modern gpus something like this, used sparingly, is quite feasible in a production
Awesome stuff. It's a pity the tech is perf-intense and not artist-friendly.
Yes it is perf-intense. This is the kind of stuff you'd use sparingly for special effects.
9:51 LOOOOOOOOOOOOOOOOOOOOOOOL
Hmm so you worked as lead designer for EA, that's why you can practically write shaders so easily like eating a buckling. lol
I only really dove into shaders after my time at EA.
Please type longer, more self-documenting variable names.
Looks like a good tutorial, that aside.
If code isn't self-documenting, it's not really worth typing.
I like it for a YT video. The operations are better framed.
It's plain wrong when doing OOP or Components, but grokking shader and GPU stuff is easier with the whiteboard math-format.
Parameter names like "o" could be better, granted.
Came here to say exactly that. Having seen the code for the first time and without explanation/comments, it's impossible for me to get what these stand for.
This just gave me a game idea: you can see certain objects only if you are looking at them through a certain material; knowing the object's shape and position could be used as a password, an indication, or something.
Are u dutch?
Jazeker!
@@TheArtofCodeIsCool Probably the best about a 70ies 80ies youth in west germany was the close dutch border... and as misleading as this may sound I am not talking about green but about music and state of mind before the mighty calm down fog. Twenty minutes by car from one world into another. So great. Until today this short trip feels for me like a holiday... though times seem to get tougher as some dutch people esp. these days seem to have rather weird ideas.
Wouldn't this be very useful for creating "impostors"?
RayMarch inside skybox;
Yeah this could be used for that, I'm sure I'll make a video about it.