Thank you! I never learned shaders before, but when I changed the shader from Standard to Mobile, the performance in VR improved a lot. I still wish I could write my own though. I want a cartoon style for my game, but I don't know how well toon shading will work with models that have a very low poly count. I feel like toon shading on a fairly detailed texture over a flat surface would probably look weird, as opposed to models with just solid colors on each part.
Thanks a lot for these videos! But it's not clear at the end whether the characters that can be rotated to any angle on the main screen (ruclips.net/video/W3Yg2i17TDo/видео.html) are sprites or real 3D...? If those are sprites, wouldn't 360 images 'only' be needed for each character's presentation, which wouldn't be OK for live gameplay but could be OK just for that screen? (Sorry, total newbie here.)
How are you producing LUTs like the spiral one in the video? Are you writing little scripts to generate them, or are they created by hand in Photoshop/GIMP? If so, how are you making them?
That was created in Photoshop: black-to-red and black-to-green gradient fills in the two directions, additively blended, then the layers flattened and a twirl filter (or something like it) applied. I might do a video in the future, because questions about weird technical textures are getting to be quite common.
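For anyone who'd rather generate that kind of base gradient programmatically, here's a rough Python sketch of the additive black-to-red / black-to-green step (function name and size are made up for illustration; the twirl distortion from Photoshop is left out):

```python
# Build a small UV-ramp lookup texture: red increases left to right,
# green increases bottom to top, and the two fills are blended
# additively. A sketch of the base layer, not the exact texture
# from the video.
def make_uv_ramp(size):
    texture = []
    for y in range(size):
        row = []
        for x in range(size):
            r = x / (size - 1)       # black-to-red gradient, horizontal
            g = y / (size - 1)       # black-to-green gradient, vertical
            row.append((r, g, 0.0))  # additive blend of the two fills
        texture.append(row)
    return texture

lut = make_uv_ramp(4)  # tiny example; a real LUT would be 256x256 or so
```

A twirl/distortion pass over the result would then give the spiral look.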
Sorry, I'm new to this, but why do you only compare with the dot product (N·L) instead of taking the arc-cosine of it (assuming the normal and light are unit vectors)? Wouldn't that be more accurate if you scaled the radians properly to a -1 to 1 range?
@Caresilabs Yeah, but I'm thinking: if you take the arc-cosine of the dot product to find the angle, and use that instead to scale the color, I'm wondering if that would be more accurate, since cosine isn't linear.
@Caresilabs Mmm, it's subtle: i.imgur.com/VUFUikZ.png The one on the right is the usual diffuse, and the one on the left uses the arc-cosine thing I was thinking about. I guess it's too slow to be worth using in a game :)
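For the curious, the difference between the two falloffs is easy to see numerically; a quick Python sketch (not from the video) comparing plain N·L with the angle-linear version:

```python
import math

# Compare standard N·L diffuse with the angle-linear variant discussed
# above: acos(N·L) rescaled so 0 rad maps to 1 and pi/2 rad maps to 0.
def diffuse_dot(ndotl):
    return max(ndotl, 0.0)

def diffuse_angle(ndotl):
    angle = math.acos(max(min(ndotl, 1.0), -1.0))
    return max(1.0 - angle / (math.pi / 2), 0.0)

# At 60 degrees between the normal and the light:
ndotl = math.cos(math.radians(60))
dot_value = diffuse_dot(ndotl)      # 0.5
angle_value = diffuse_angle(ndotl)  # 1 - 60/90 = 1/3
```

At 60 degrees the dot version gives 0.5 while the angle-linear version gives 1/3, which is the kind of subtly darker terminator the screenshot shows.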
Can this be used in a commercial project?! Can it be changed, modified, and used? :) I like it a lot; I'm still not sure how my game will look with it :)
I don't understand the sampling part. What are those disc values? I mean, in my head you need to detect how far you (the current pixel) are from the border pixel. And if you detect that you're already part of the outline, then where are you going to draw the outline? The outline isn't on top of already-drawn pixels, is it? This stuff is so unintuitive to me.
I have a question, sir. If there are many characters in the scene, does that mean there should be an equal number of cameras and RTs? If there are too many RTs, won't memory run out?
Hi XingWu, yes, if you have a RenderTexture for each character, memory increases with every character in play at once. A 512x512 RT will cost you about 2MB, so with 6 characters like Brawl Stars uses, you'd be at just 12MB of RenderTexture usage. This is well below the ~512MB limit of the iPhone 5s, and it's unlikely to be the bottleneck compared to the rest of your scene's texture usage.
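The arithmetic behind that estimate can be sketched as follows (assumptions: 32-bit RGBA color and an optional full mip chain; the ~2MB-per-RT figure above presumably also counts a depth buffer or a wider format):

```python
# Estimate RenderTexture color memory for n characters (a sketch;
# actual cost varies with format, depth buffer, and mip chain).
def rt_memory_mb(width, height, count, bytes_per_pixel=4, mips=False):
    per_rt = width * height * bytes_per_pixel
    if mips:
        per_rt = per_rt * 4 // 3  # a full mip chain adds roughly 1/3
    return per_rt * count / (1024 * 1024)

# 6 characters at 512x512, 32-bit color, no mips: 6 MB of color data.
total = rt_memory_mb(512, 512, 6)
```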
Amplify only produces Unity surface shaders. This makes it an abstraction on top of an abstraction. For the most part this might be fine, but it can't be guaranteed to deliver optimized shaders. I think node-based editors are interesting, and they make sense when you have significantly more artists than graphics engineers and tech artists. This was the case for Bungie's Destiny, and they rolled their own node-based editor. I would love to find a node-based editor that is extensible from an engineering point of view the way Bungie's was described (look up their GDC talk from this past year for more info on their approach). As for Amplify, I have a hard time trusting its output because I've seen it fail to produce shaders that work cross-platform. The performance concern is also prudent. It's a cool learning tool and will probably work for small-scale projects, but I don't think it's quite ready for a larger project.
So basically you can only apply the outline to a character and not to the entire scene? With this method you would literally have to do so many passes to cover every object in the scene, and you would need so many cameras lol
Correct, you wouldn't use this technique for an entire scene. This is mostly used to blend 3D and 2D elements, as the game discussed does. You can achieve regular 2D sprite sorting with a 3D character, and the nice silhouette outline is just an added benefit that happens to be nearly free in this technique, because we're rendering the characters to an RT anyway. For an entire scene, you would use the first technique mentioned: a second shader pass that extrudes the mesh along the normal. This is more like what you see in Okami or Viewtiful Joe. There are many artifacts to that technique and the meshes have to be watertight, so it also has drawbacks. If the number of cameras and RenderTextures concerns you for a technique you're considering, you could combine your RenderTextures into an atlas with UV offsets to decrease the number of render-target switches, use a single camera which you move with each render, or try Unity's CommandBuffer system to render arbitrary renderers with a custom MVP matrix (this can be a bit of a pain in my experience).
Texture lookup for clamping?? SHTAHP, just thinking about it dropped my framerates. For the shader at ~3:00, just use clamp with a min of 0 and a max of band, then divide everything by band. That gives you an adjustable width for the gradient part based on your aesthetic preferences: a bigger band_inverse means a smaller gradient. If you want to move the band a bit, just add an offset before you clamp. It costs significantly less than a texture lookup! const band = 0.05; const band_inverse = 20; const adjust = -0.02; mycos = mycos + adjust; clamp mycos between 0 and band; mycos = mycos * band_inverse; done. You can even make band a dynamic value if you need to, using a fast inverse; it still costs next to nothing and is effectively perfect. If you need more than 2 cels, I'd copy the value into two clamps and add them together; you might have to tweak the weight of each. After enough bands it would eventually be faster to do a texture lookup, but that's only at several bands; this method is probably better even up to 5 bands if you're careful about it, maybe even a lot more. Texture lookups are crazy expensive. Oh, and if you want hard divides between bands, I'd just use an if. An if is only really expensive when there's a lot of code inside, so just have it set a single value: if mycos > 0 then mycos = 1 else mycos = 0.
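The band remap described above is easy to check numerically; a Python sketch using the comment's constants (everything else here is illustrative):

```python
# Sketch of the clamp-based toon band: instead of a texture lookup,
# remap N·L through a narrow clamped window.
BAND = 0.05           # width of the gradient between shadowed and lit
BAND_INVERSE = 20.0   # 1 / BAND
ADJUST = -0.02        # shifts where the band sits along N·L

def toon_band(ndotl):
    x = ndotl + ADJUST
    x = max(0.0, min(x, BAND))  # clamp between 0 and BAND
    return x * BAND_INVERSE     # rescale the window to 0..1

# Fully shadowed below the band, fully lit above it,
# and a thin gradient in between.
low = toon_band(-0.5)   # deep shadow
high = toon_band(0.5)   # well lit
mid = toon_band(0.045)  # inside the gradient window
```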
This video makes me feel stupid :D
sooooo dang helpful. The information in each lesson is great by itself, but I feel like your way of thinking and the concepts can be applied to so many different areas of game dev. So great.
The best cartoon/animated look was made for Guilty Gear and the upcoming DBZ game... there's a GDC panel about it that's worth watching.
Thank you! That's exactly the art style I want.
miHoYo also has an interesting shading technique.
These are freaking awesome! It might be nifty to do a case study on game elements such as camera movement, or input handling, or even just how games might be structured in code. I love these videos and cannot wait for more!
Yup!
This is the only channel I turn on notifications for. Great job!
Maybe do a Mario64 paintings/sunshine levels shader case study? I know it is pretty basic, I just want to see more awesome videos! :D
Awesome job as always! I love that you build on previous videos instead of re-treading familiar ground at the start.
Dude. You are my hero. I absolutely freakin' love everything you do. Keep doing it!
You seriously need to upload more often. Minus that, WOO! Another video!
I'm amazed how well you explain what you did and how you reproduced the look so well (I'd even say your toon shading looks better than Supercell's)!
Wow, this is taking Donkey Kong Country and Mario Kart 64 to the next level!
"You, internalizing everything seen in the past minute" EXACTLY !! Haha, you rock man, learning so much from your videos, keep it up!
Hallelujah, a new MSLG video! Just when I was wishing there would be more! Thanks!
@4:38 OH WHAT A FANTASTIC 'CHEAT'! Rendering a 3d character to a texture and treating it as 2d... I might have to steal that.
This is like using FrameBuffers for post processing, just draw the scene to a texture attachment and draw it on a quad with a post processing shader
An alternative I could think of, based on that weird sorting, would be that they use custom depth information. They could write and compare depth per character based only on its vertical coordinate (relative to the screen, not the 3D world). That also means they can do the inverted-shell trick by applying it as a first pass; using the same depth information as the main pass, they can still draw the outlines while avoiding outlines fighting each other. Then the main pass writes its depth something like 0.1 closer than the outline (or whatever arbitrary value is needed to enforce it, depending on the engine, the scales, and so on), hence no outline showing up on the main model. To be fully fair, it would introduce a lot of depth issues: solvable, but a hassle if it doesn't bring better performance.
I actually wonder which solution they used, because I legitimately don't know which would be more performant. I know render-to-texture used to be pretty performance-heavy, then it was performance-heavy only on mobile, so I don't know how it fares today on mobile.
Well, really, it could be a mix of the two? Render-to-texture with the outlines as a first pass with no depth writing, then the main model as a second pass that erases the outline in the "internal" parts, as the depth buffer would be empty there. I think this would likely be the most efficient method. The main advantage of no RTT is having no texture, meaning no memory consumption as far as non-interface stuff goes.
As for the enhancement, I'm actually working on a toon shader (hence YT's algorithm doing its job, I guess xD) that currently has specular (so technically an NdotV change, I guess? Still fairly new to shaders, so bear with me :P), and I'm planning on adding rim lighting, both of which will display the light's raw color on the model. My final goal is a very-close-to-pixel-art look but in crisp 3D (kinda like Guilty Gear Xrd, but more closely resembling indie/SNES pixel titles than fighter titles).
Congratz on the channel! Absolutely damn good content.
Droughts just make you appreciate water all the more. Great video, thanks 👍🏻
I would have needed this 1 month ago. I just submitted my research paper for uni about outline shading. qwq still, this video is awesome!
Ahh! Hearing LUT referred to as "look up texture" instead of "look up table" is just killing my brain. But, I'm old school. :)
NEW VID YAAAAAY
Great video as usual, I personally love the new cel shading of the new upcoming DragonBall FighterZ game.
Hi Dan! Thanks for this super helpful video. But I think there's a small confusion about the lighting model at 1:32. The lighting direction should be inverted to make the upper-left face of the ball lit and the bottom-right face dark. I also think _WorldSpaceLightPos is actually the inverse of the lighting direction (it points toward the light). I guess Unity makes it this way so you don't need to deal with a negative sign when doing NdotL.
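A quick numeric sanity check of that point, sketched in Python with hand-picked vectors (for directional lights, Unity's _WorldSpaceLightPos0.xyz holds the direction toward the light, so N·L needs no sign flip):

```python
# Verify that with L pointing *toward* the light, a surface facing
# the light gets N·L near 1 and the opposite face gets N·L near -1.
def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

# Light shining from the upper left: direction toward the light.
to_light = normalize((-1.0, 1.0, 0.0))
upper_left_normal = normalize((-1.0, 1.0, 0.0))
lower_right_normal = normalize((1.0, -1.0, 0.0))

lit = dot3(upper_left_normal, to_light)    # close to 1: fully lit
dark = dot3(lower_right_normal, to_light)  # close to -1: in shadow
```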
Very nice and interesting case study. Keep going with your great work :P
Yet another awesome vid, man. Thanks for making these :D
Another possibility is to use a deferred renderer and write some special value to the (previously unused) normal buffer's alpha. After that, an image effect can run a Sobel filter on that map, much like what you do.
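As a rough illustration of that image-effect pass, here's a Sobel magnitude computed on a tiny binary mask in Python (the mask and function are made up for the example; in practice this runs per pixel in a fragment shader):

```python
# Sobel edge detect on a small mask where 1 marks pixels the special
# value was written to. Nonzero gradient magnitude = silhouette edge.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * v
            gy += SOBEL_Y[j][i] * v
    return (gx * gx + gy * gy) ** 0.5

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
interior = sobel_magnitude(mask, 2, 2)  # flat region: 0, no outline
edge = sobel_magnitude(mask, 1, 1)      # silhouette corner: nonzero
```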
I hope you start doing these kind of videos again.
Great video, thank you very much! But I'm curious: would it perform better to use a two-pass shader instead of two cameras and a render texture, where the first pass renders the geometry in a single color with vertices offset for the outline, and the second pass renders the geometry with toon shading and no offset?
That's another option! You'll still get edge discontinuities based on your smoothing groups, though. And the first pass can't write Z values in that technique, so it may be awkwardly rendered on top of by farther-away objects or transparent stuff.
So yes, other limitations, but totally viable!
Makin' Stuff Look Good Thanks for the answer! Keep up the good work!
What books would you recommend for someone who wants to get into shaders, or graphics engineering in general? (: Love ur vids
EXCELLENT TUTORIAL, love it! My game can finally begin looking the way I intended it to be.
Noob question though: how do I make shadows from other objects project onto the toon characters? I can only get the direct shadows made by the ambient light so far. I really, really don't want to have to depend on Unity's Shader Graph.
My guess is that Brawl Stars would probably still use a 2-pass approach to avoid the need for a 'many samples' outlining pass. (When rendering a character to a texture, the outline pass can be drawn first with no z-write, and the problem with hard edges can be avoided if you have a second set of normals without the hard edges to extrude along, either as a second mesh or as extra data added to the original mesh.)
I'm not so sure; if you look really close at the corners of the outlines, they appear to have filtering that's only possible with texture sampling, rather than crisp aliased mesh edges. It's totally possible though! Recently I've been using the technique of drawing the mesh a second time with front-face culling and normal extrusion, then fudging the Z value of the post-transform vertex position back a bit to avoid weird edges in poly-dense areas.
The second set of normals (probably in a UV set if vec3 is supported there, otherwise in the color channel) is a great idea. I just wish engines or 3D suites had that sort of thing built in, because the alternative is editing the mesh on import, which is not usually nice for artists' workflow.
Thanks for your input!
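The extrusion-and-depth-fudge step described a couple of comments up is only a little vector math per vertex; a Python sketch with arbitrary illustrative constants (in a real shader this happens in the vertex stage, and the z nudge is applied to the projected position):

```python
# Inverted-hull outline pass, per vertex: push the position out along
# its (smoothed) normal, then nudge the depth slightly farther away so
# the shell doesn't poke through dense geometry.
OUTLINE_WIDTH = 0.02  # illustrative outline thickness
Z_FUDGE = 0.001       # illustrative depth offset

def outline_vertex(position, smooth_normal):
    extruded = tuple(p + n * OUTLINE_WIDTH
                     for p, n in zip(position, smooth_normal))
    x, y, z = extruded
    return (x, y, z + Z_FUDGE)  # push depth back a bit

p = outline_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```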
This is freaking amazing. Thank you!
This is awesome!! gonna start messing with it right away!
Nice analysis sir, I wish I found this channel earlier.
Great job! But how can we use this in the Lightweight Render Pipeline or the High Definition Render Pipeline? How can we do an outline shader in Shader Graph? Please help. Thanks.
Awesome video!
So if you want to use this setup you will need 1 camera + render texture for each character in the scene?
I wonder if a post-processing image effect for the outline would be faster than having a render texture per object. I know that on mobile, post effects are pretty inefficient though...
Great Video!
Are you sure Brawl Stars is using cubemaps?
For a mobile game I'd probably use a MatCap instead, as it's much cheaper. MatCaps can also easily be used for toon lighting, if you don't mind the lighting being attached to the camera.
Yes, they could definitely be using a MatCap. The fixed-view lighting is fine for Brawl Stars anyway, because the camera doesn't rotate. Good call, I should have thought of that!
Great video! The second outline technique has the limitation of needing 1 camera per color of outline you want right?
Dude, I've been waiting for a new video from you since forever.
When using this in 2018.2.2, the script is disabled on play. Have you had a chance to mess with the 2018+ render pipeline? Any ideas why it might not work?
Could you do a case study on the Guilty Gear Xrd / Naruto Storm toon shaders as well? I found a GDC talk on Guilty Gear's shader, but I don't understand it very well.
Please make more of these!
Can you make a tutorial about screen transitions between scenes and/or cameras, like the ones in NieR: Automata between the game and the menu (especially that one)?
I tried this, but it just doesn't look good; there are problems with filtering, pixel precision, and more.
First of all: awesome! And could you make a video on the different methods used to make voxel terrain with multiple materials and multiple textures, like Minecraft, Spore, and No Man's Sky?
Quality stuff as always
Please upload more often if you can. We love your videos!
I plan to! I've had a busy couple months at work and not a lot of energy left over to work on new stuff, but I'm off for the holidays and hoping to come back reenergized to make some more videos!
Hey man, watched a couple of your vids. I'm still fairly new when it comes to shaders and most of the graphics aspects of Unity, and your videos have been really interesting to watch! Am I correct in understanding that the language you write these shaders in is HLSL?
It's great to see your new video... pls upload your awesome videos more often!
What are the 16 samples in Disc[16] for? That's the only part I don't understand: why those very specific numbers? And thank you for your amazing videos, they're so damn helpful, you're the best, man. I'd like you to upload more often though :(
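(For anyone else puzzled by the same array: it's presumably a precomputed set of sample offsets arranged on a disc around the current pixel, scaled by the outline radius in the shader, and 16 is just a quality/cost tradeoff. A hypothetical way to build such a kernel in Python:)

```python
import math

# Hypothetical construction of a 16-sample disc kernel like Disc[16]:
# evenly spaced unit-circle directions. Each offset gets multiplied by
# the outline radius, and the shader checks whether any neighbor at
# those offsets belongs to the character.
def disc_kernel(samples=16):
    return [(math.cos(2 * math.pi * i / samples),
             math.sin(2 * math.pi * i / samples))
            for i in range(samples)]

kernel = disc_kernel()
```

More samples give a rounder outline at higher cost; fewer samples start to show gaps at larger outline widths.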
Can you cover more outline techniques?
Instead of drawing each character to a sprite, do you not think they may have just manually depth-sorted the characters on screen by Y position instead? It could save all the extra rendering from separate cameras. What do you think?
BenDymegalols That would explain the lack of per-fragment depth, but not how they get such perfect outlines. I think the render to texture method is most likely.
I just found this channel. It's pretty awesome. I haven't seen a channel go this deep on this subject and also provide code. This one might be hard, but have you tried doing a GGXrd/DBFighterZ kind of shader? They have a GDC talk that helps a lot, but I'm a little bit lost in some parts, like when they use a low-res model to get smoother shadows. Thanks!
I couldn't find the GDC talk that you mentioned. Can you share it?
Thank you for this amazing tutorial.
Could I suggest a hatch-shader video? Watercolor, crayon effects?
Thanks.
If you were to have a large number of characters/enemies/items using this cel-shading technique, how would you approach implementing it? One camera/render texture per enemy? Some sort of recycling or shuffling around? Many thanks!
A single camera created through script, and a single rendertarget would be pretty efficient. The challenge would be managing where to physically place the offscreen characters in the scene, and making sure they get the right UVs to point into your RenderTexture atlas. Just a bit of row and column math really. Would get trickier if you wanted characters to take up a different amount of space in the RT. Maybe in that case just do an RT for each character to keep things simple!
Interesting idea! so essentially using a render texture as a texture atlas. Thanks :)
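The row-and-column bookkeeping mentioned above can be sketched in a few lines (the grid layout, names, and the 3x2 arrangement are assumptions for illustration, not the video's code):

```python
# Map character i into a cell of a single RenderTexture atlas and
# return its UV rect (u, v, width, height), all in 0..1. Each
# character's quad samples only its own cell.
def atlas_uv_rect(index, columns, rows):
    col = index % columns
    row = index // columns
    w = 1.0 / columns
    h = 1.0 / rows
    return (col * w, row * h, w, h)

# 6 characters in a 3x2 atlas: character 4 lands at column 1, row 1.
rect = atlas_uv_rect(4, columns=3, rows=2)
```

The offscreen camera would render each character into its cell by setting the matching viewport rect before the render.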
Awesome, I think I finally get it now :D
I love you
great as always
Great video! But I ran into a problem writing a vert/frag shader. In my forward-add pass, the point/spot light range doesn't seem to be taken into account. My point and spot lights keep getting cut off rather than fading out with proper attenuation; the lights basically look like a square on my mesh. Do you know a solution other than using a surface shader or a C# script?
You'll have to really dig into the built in shaders. In the AutoLight.cginc file, there are several macros and preprocessor defines you have to make use of to support the various shapes of light.
As always, you are awesome.
Hey, great videos. Can I ask you if you use any particular editor in order to write in shaderlab? I find doing it in Visual Studio quite a pain due to, for example, weird indentation.
I mostly use Notepad++. There are some custom syntax-highlighting formats floating around, but I think at the moment I just have ".shader" files recognized as C to get the bare minimum of highlighting.
Would love a proper solution for editing shaders though if you come across one!
Looks like there are no plugin that really help writing shaders, I could only find syntax highlighting and people talking about node editor like the UE one.
Anyway, keep up this great channel as long as you can; it's really inspiring and helpful.
LOL freezeframe at 2:23
I just hope you make a few more videos; even one per month will do, please. :) Your videos are the most definitive source for my shader knowledge, so it's pretty slow to grow.
What would you need to do (or what nodes are needed) if you wanted to put this into Shader Forge or something?
Nice trick!
Although I just realized something at 1:36: the light is coming from below, so isn't the shadow on the wrong side?
Do you have a picture of what the bonus shader is meant to look like? I feel like I did it, but I can't tell exactly what it's trying to do. Also, with the map you provided for that kind of shader, wouldn't NdotV be completely ignored, since the gradient is identical all the way up? (I've experimented with different maps, but nothing I made looks appealing.)
Hah, sorry I think I gave a confusing prompt at the end there! That map I showed isn't meant to be the input, was more to be a sort of fill in the blank example. The idea as you've probably figured out is to author something interesting in the Y direction and sample the map in two directions with ndotl and ndotv. I'll find some time to do an example map to feed in.
You can get some pretty cool hand-painted styles doing this. Try using brushes to paint a gradient instead of just using a flat gradient tool, and maybe try splashing some color in as well. Hope that helps, and sorry for the confusion!
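To make the two-direction sampling concrete, here's a hedged sketch in C of the UV lookup being described (my own illustration, not the video's shader): x of the ramp is driven by N·L and y by N·V, both remapped from [-1, 1] to [0, 1].

```c
/* Illustrative sketch: build the UV used to sample a 2D ramp texture,
 * with N.L driving the X axis (light bands) and N.V driving the Y axis
 * (view-dependent variation, e.g. rim). Vectors are assumed normalized. */
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

void ramp_uv(Vec3 n, Vec3 l, Vec3 v, float *u_out, float *v_out)
{
    *u_out = dot3(n, l) * 0.5f + 0.5f;  /* N.L -> ramp X */
    *v_out = dot3(n, v) * 0.5f + 0.5f;  /* N.V -> ramp Y */
}
```

In the shader this would just be `tex2D(_Ramp, float2(ndotl * 0.5 + 0.5, ndotv * 0.5 + 0.5))`.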
Thanks, great explanation.
Finally he's back !!!!!
If Brawl Stars is pre-rendering 3D models to 2D textures, why would they not first apply the toon shader to the 3D model and then render the 2D textures? Then they would not have to draw the character twice. I'd like to hear your thoughts on this.
Hello, thanks for the great tutorial! Will making the game with toon shader help with performance in VR? (without the outlines)
Better performance compared to the Unity Standard shader? Almost definitely. A very simple lighting setup will always outperform actual PBR. In general, writing a custom shader that does exactly what you need, no more and no less, is ideal for VR, where you can't afford to waste cycles on features you're not even using.
Thank you! I'd never learned shaders before, but when I changed the shader from Standard to Mobile, performance in VR got a big boost. I still wish I could write my own, though. I want a cartoon style for my game, but I don't know how well toon shading will work with models that have a very low poly count. I feel like toon shading on a flat surface with a somewhat detailed texture would probably look weird, as opposed to models with just solid colors on each part.
Thanks a lot for these videos! But it's not clear at the end whether the characters that can be rotated to any angle on the main screen (ruclips.net/video/W3Yg2i17TDo/видео.html) are sprites or real 3D. If they are sprites, wouldn't you "only" need 360 images per character? That's not feasible for the live game, but could it be OK just for that screen? (Sorry, total newbie here.)
How are you producing LUTs like the spiral one in the video? Are you writing little scripts to generate them, or are they created by hand in Photoshop/GIMP, and if so, how are you making them?
That was created in Photoshop: black-to-red and black-to-green gradient fills in the two directions, additively blended, and then the layers flattened. Then a twist filter or something was applied. I might do a video in the future, because questions about weird technical textures are getting quite common.
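For anyone who'd rather script it, here's a hedged sketch in C of generating a similar base LUT (the twist-filter step is omitted, and `fill_lut` is an illustrative name, not anything from the video):

```c
#include <stdint.h>

/* Illustrative sketch: fill a w x h RGB buffer with a black-to-red
 * gradient across X additively combined with a black-to-green gradient
 * across Y, matching the described Photoshop setup before the twist.
 * Row 0 is the bottom of the image. */
void fill_lut(uint8_t *rgb, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            uint8_t *p = rgb + (y * w + x) * 3;
            p[0] = (uint8_t)(255 * x / (w - 1)); /* black-to-red across X   */
            p[1] = (uint8_t)(255 * y / (h - 1)); /* black-to-green across Y */
            p[2] = 0;                            /* blue channel unused     */
        }
    }
}
```

Writing the buffer out as a PPM/PNG and then applying a twirl distortion in any image editor would reproduce the spiral variant.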
Suggestion for shaders case study: Fortnite item gathering bounce and gathered resources icon sliding into backpack
Sorry, I'm new to this, but why do you only compare with the dot product (N·L) instead of taking the arc-cosine of it (assuming the normal and light are unit vectors)? Wouldn't that be more accurate if you scale the radians properly to a -1 to 1 scale?
JoeDuckTV Dot product has a relationship with the cosine :) I don't know how it would be more accurate?
Caresilabs Yeah, but I'm thinking if you take the arc-cosine of the dot product to find the angle, and use that instead to scale the color. I'm wondering if that would be more accurate, since cosine isn't linear.
JoeDuckTV Well, Lambert is just an approximation of real lighting. If you have time, try it and tell us how it worked out :)
Caresilabs mmm it's subtle: i.imgur.com/VUFUikZ.png The one on the right is the usual diffuse, and the one on the left uses the arc-cosine thing I was thinking about. I guess it's too slow to be worth using in a game :)
JoeDuckTV nice work dude! Maybe try with a more complex shape to see how it does in a more realistic use case.
FINALLY A NEW VIDEO 😍
love these videos
Can this be used in a commercial project?! Can it be changed, modified, and used? :) I like it a lot; I'm still not sure how my game will look with it :)
Hello, could you help me replicate this shader in Amplify Shader Editor?
Nice video!!
Man, please make a video about the Guilty Gear Xrd shader, especially the vertex normal trick?
I don't understand the sampling part. What are those disc values? In my head, you need to detect how far you (the current pixel) are from the border pixel. I mean, if you detect that you're already on the outline, then where are you going to draw the outline? The outline isn't drawn on top of existing pixels, is it?
This stuff is so unintuitive to me.
I have a question, sir.
If I have many characters in the scene, does that mean the number of characters must equal the number of cameras and RTs?
If there are too many RTs, memory will surely run out.
Hi XingWu,
Yes, if you have a RenderTexture for each character, you will have an increase in memory for every character in play at once. A 512x512 RT will cost you about 2MB, so with 6 characters like Brawl Stars uses, you'd be at just 12MB of RenderTexture usage. This is well below the ~512MB limit of the iPhone 5s and is unlikely to be the bottleneck compared to the rest of your scene's texture usage.
Thank you so much for answering me!!!
It was great!!!
"If only there was a wikihow for shaders" hahaha that's right
this video is Awesome !!!!
cool! I did learn a lot!
Top notch videos
What is your opinion on a tool like amplify shader?
Amplify only produces Unity Surface shaders. This makes it an abstraction on top of an abstraction. For the most part this might be fine, but it can't be guaranteed to deliver optimized shaders.
I think node-based editors are interesting, and they make sense when you have significantly more artists than you have graphics engineers and tech artists. This was the case for Bungie's Destiny, and they rolled their own node-based editor. I would love to find a node-based editor that is extensible from an engineering point of view the way Bungie's was described (look up their GDC talk from this past year for more info on their approach).
As for Amplify, I have a hard time trusting its output because I've seen it fail to produce shaders that work cross-platform. The performance concern is also prudent. It's a cool learning tool and will probably work for small-scale projects, but I don't think it's quite ready for a larger project.
do you allow people to use the shaders in commercial projects?
Yes, all the shaders shown in my videos are available on GitHub under a very permissive license.
So basically you can only apply the outline to a character and not to the entire scene; with this method you would literally need that many passes to cover every object in the scene, and you would need so many cameras, lol.
Correct, you wouldn't use this technique for an entire scene. This is mostly used to blend 3D and 2D elements as the game discussed does. You can achieve regular 2D sprite sorting with a 3D character, and the nice silhouette outline is just an added benefit that happens to be nearly free in this technique because we're rendering the characters to an RT anyway.
For an entire scene, you would use the first technique mentioned: a second shader pass that extrudes the mesh along the normal. This is more like what you see in Okami or Viewtiful Joe. There are many artifacts to that technique and the meshes have to be watertight, so it has drawbacks too.
If the number of cameras and RenderTextures concerns you for a technique you're considering, you could combine your RenderTextures into an atlas with UV offsets to decrease the number of render-target switches, use a single camera that you move between renders, or try Unity's CommandBuffer system to render arbitrary renderers with a custom MVP matrix (this can be a bit of a pain in my experience).
YESSSS HE'S ALIVE!!!!! OMG I LOVE U SIGN MY BABY!!!!!!!!!!!
Blessed be the fruit
omg this was so useful. thank you so much ;_; ❤
texture look up for clamping??
SHTAHP
just /thinking/ about it dropped my framerate
For the shader at ~3:00, just use a clamp with a min of 0 and a max of `band`, then divide everything by `band`. That gives you an adjustable width for the gradient part based on your aesthetic preferences: a bigger `band_inverse` (i.e., a smaller band) means a narrower gradient. If you want to shift where the band sits, just add an offset before you clamp. It costs /significantly/ less than a texture lookup!
const band = 0.05; const band_inverse = 20; const adjust = -0.02;
mycos = mycos + adjust; clamp mycos between 0 and band; mycos = mycos * band_inverse; done.
You can even make `band` a dynamic value if you need to, by using a fast inverse; it still costs next to nothing and is effectively perfect.
If you need more than two cells, I'd copy the value into two clamps and add them together; you might have to tweak the weight of each.
After enough bands it would eventually be faster to do a texture lookup, but only at several bands; this method is probably better even up to five bands if you're careful about it, maybe even a lot more. Texture lookups are crazy expensive.
Oh, and if you want hard divides between bands, I'd just use `if`. `if` is only /really/ expensive when there's a lot of code inside, so just have it set a single value:
if mycos>0 then mycos=1 else mycos=0;
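Putting the commenter's numbers together, the band trick might look like this in C (a hedged sketch of the described math, not shader code from the video):

```c
/* Clamp-based toon band: clamp N.L (shifted by `adjust`) into
 * [0, band], then rescale to [0, 1]. Result is 0 below the band,
 * 1 above it, with a linear ramp of width `band` in between. */
float toon_band(float ndotl, float band, float adjust)
{
    float t = ndotl + adjust;       /* shift where the band sits  */
    if (t < 0.0f)        t = 0.0f;  /* clamp lower edge           */
    else if (t > band)   t = band;  /* clamp upper edge           */
    return t / band;                /* rescale ramp to [0, 1]     */
}
```

With `band = 0.05` and `adjust = -0.02` as suggested, the lit/shadow transition is a narrow ramp near N·L ≈ 0.02, which is the hard-edged toon look without a texture lookup.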
Scury teef guy is very unsettling to me ;___;
woohoo new video!
Where exactly is the "clamp Wrapping" Option?
It's a texture import setting in Unity rather than something stored in the .tga file itself: select the texture and set its Wrap Mode to Clamp in the inspector. The textures themselves can be created/edited in pretty much any image editor, such as GIMP, Photoshop, paint.NET, etc.
can we have full speech of dunk gfx?
Genius!
Here's a somewhat related blog post you may find interesting, and a massive technical feat. dolphin-emu.org/blog/2017/07/30/ubershaders/
Oh, you're alive then... so much for the increased video frequency you mentioned months ago... :(
Sorry! My full time job takes up many hours, and it can be very fatiguing to work in Unity all day only to continue doing so at home.
wtf that's fucking smart
Is this possible to achieve with Unity, though?
Yes. The sample you saw in the video was done with Unity.
@@totallynotabot151 Oh right lol.