Change Your Understanding of Normals In Eight Minutes

  • Published: 24 Nov 2024

Comments • 547

  • @DECODEDVFX
    @DECODEDVFX  3 years ago +165

    Yo! Thanks for watching. Leave your video requests here.

    • @UnderfundedScientist
      @UnderfundedScientist 3 years ago +1

      Maybe a video reviewing subs' games or channels. But I'm totally biased as I have a channel and a game, and I'm a committed sub oO

    • @kyleboynton2748
      @kyleboynton2748 3 years ago +3

      Possibly a more in-depth video on how the Weighted Normal modifier works? Assuming you know it, of course. I'm finding it hard to figure out, and you have a way of explaining things. Thanks!

    • @genesis2303
      @genesis2303 3 years ago +1

      Why is the Bend modifier always in the wrong direction? Seriously, it never starts out of the box the way you want.
      Compositing videos are always appreciated, there are so few of those. Taking into account that Blender recently moved the alpha compositing part to the after-render period, it would be a nice opportunity to throw some more light on this topic.

    • @MrWoundedalien
      @MrWoundedalien 3 years ago +1

      Displacement. Thanks for sharing your knowledge!

    • @bitsurface5654
      @bitsurface5654 3 years ago +1

      What about object vs. tangent space in normal maps? Maybe you could make a video about this?

  • @rhyslogan127
    @rhyslogan127 3 years ago +823

    "All the modern 3d softwares, even Maya" lmao the shade

    • @roswarmth
      @roswarmth 3 years ago +4

      What's so funny about it? I don't understand.

    • @MrFastsone
      @MrFastsone 3 years ago +117

      @@roswarmth lowkey maya is old

    • @binodsarkarIN
      @binodsarkarIN 3 years ago +1

      😂😂

    • @roswarmth
      @roswarmth 3 years ago +1

      @@MrFastsone yeah, I know that, but what's the funny part in it?

    • @AlexTsekot
      @AlexTsekot 3 years ago +90

      @@roswarmth Probably the Blender cult trying to be comedians. Don't get me wrong, Blender is great, but it's not the second coming of Jesus, not yet anyway.

  • @daniellee6912
    @daniellee6912 3 years ago +123

    Thank you, finally I can understand what normals are. Most tutorials tell you to "flip the normals" but don't explain why.

    • @hart1254
      @hart1254 3 years ago +5

      just flip the damn normals lool, or even better, recalculate normals

    • @PinkeySuavo
      @PinkeySuavo 3 months ago +1

      to make everything look nice

  • @badoli1074
    @badoli1074 3 years ago +129

    Nice! Some more technical details:
    The RGB colors of the normal map represent vector positions in X, Y and Z. A normal map without any details has all its normals pointing straight upwards. The vector would look like (0 0 1), which encoded in 24-bit looks like (128 128 255), which is exactly the typical normal-map blue!
    Something to be aware of: these normal-map vectors should be normalized! A normalized vector always has a length of 1, and that means not all colors represent a correct vector. This is usually not an issue when you render normals from Substance or Blender, but back in the dark days we had to paint out rendering issues in Photoshop, and as such it was important to normalize the map again...
    Also, I used Photoshop layers to generate normal map details... Once you understand how that tech works, you can do freaky stuff with it... Fun!
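
    A minimal sketch of that encode/decode round trip in plain Python (no Blender needed); the (0, 0, 1) case is the flat "normal-map blue" mentioned above, and the re-normalize step is the fix for maps whose values have drifted off unit length. Illustrative only:

    ```python
    import math

    def encode_normal(n):
        """Map a unit vector with components in [-1, 1] to 8-bit RGB in [0, 255]."""
        return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

    def decode_normal(rgb):
        """Map 8-bit RGB back to a vector, then re-normalize it to length 1."""
        v = [c / 255 * 2 - 1 for c in rgb]
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    print(encode_normal((0.0, 0.0, 1.0)))   # (128, 128, 255) -- the typical normal-map blue
    print(decode_normal((128, 128, 255)))   # ~(0.0, 0.0, 1.0) after re-normalizing
    ```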

    • @bakedbeings
      @bakedbeings 3 years ago +12

      Footnote for readers: "Up" in this context means perpendicular to the uv plane. Up becomes a bit foggier with tangent space and world space.

    • @pbonfanti
      @pbonfanti 2 years ago

      So, painting the RGB channels separately allows full control of the normals?

    • @ThBlueSalamander
      @ThBlueSalamander 2 years ago

      @@pbonfanti I guess so

    • @bakedbeings
      @bakedbeings 2 years ago

      @@pbonfanti Yep, one channel per axis. No height though, you need another map or channel for that.

  • @FlippedNormals
    @FlippedNormals 3 years ago +294

    Haha, we love it! 01:08

  • @davidmurphy563
    @davidmurphy563 3 years ago +309

    In case anyone is interested in how the maths work with 3d graphics and normals I thought I'd ramble on a bit (Ok, a lot). This is totally unnecessary to know to use blender but I've got a glass of wine, it's lockdown and I feel like it. If you value your time, ignore this comment. Ok, you've been warned...
    I'll use the example of ray marching (Eevee) as it's simpler than ray tracing (Cycles) but the basic concepts apply.
    First you need to make a camera. All you have to do is give it a location in 3d space and for that you use a vector (a coordinate). Let's set ours at vec3(0, -3, 1). So that's one unit above the plane and three back looking forward. You then need to cast a ray from the camera, through the viewport to the object. Well, your GPU comes with a fragment shader which will run a calculation for each pixel in your viewport (screen). This is run for each pixel every frame - GPUs programme on a "wave front" (like a wave crashing on the beach) running your instruction for every pixel on the screen every frame. The name of the game here is to find out what colour that pixel should be. Coding a shader is running a programme a million times simultaneously, what GPUs do is amazing... It's like each pixel on your screen has its own CPU.
    You know which pixel is running your shader by a UV value the shader gives you which is really just the x and y coordinate of the pixel. Generally you normalise this from -0.5 to 0.5 which you can do easily by dividing it by the resolution and subtracting 0.5 to put it in the middle (for convenience).
    Next step is getting the direction from the camera to the pixel. Dead easy, just subtract your camera position from your target position. Let's put the viewport on zero on the Y so the vector would be vec3(U, 0, V) - say you wanted the pixel at the top right of the screen. That would be vec3(0.5, 0.0, 0.5) for example, from which you subtract your camera position vec3(0, -3, 1). You then normalise this result: that means doing a bit of Pythagoras and making it a unit vector (with a length of 1) so you just have the direction. Right, now you have an arrow pointing from the camera towards your pixel with a length of 1. Remember, this is run for every pixel on the screen at the same time. Want to make it 60 long? Just multiply it by 60. The "60" is called a "scalar" for obvious reasons.
    Ok, let's put an object in our scene because right now we've only got a viewport and a camera. The simplest is a sphere. Let's put two of them just for fun. One is at vec3(2, 3, 2) and the other is to the left and back a bit at vec3(-2, 5, 2), and they both have a radius of 1 unit. Ok, so the question is: does this one particular ray hit a sphere? Well, if we send our ray too far we'll miss the target, which would be a fail and the computer gods will chide you. How about, to keep things simple, we ignore direction and just look at distances?
    But it's easy enough to work out the distance to the centre of the sphere. I'll skip the maths on that but it's very basic and, anyway, shaders have a length() function that does it for you. Then we just need to deduct the radius and we've got the distance to the surface of the object. But we've got two objects and we're running this same programme for every pixel on the screen so how far do we extend the ray? By the length of the closest one, that way we know we won't overshoot. Cool, now we check the distance to the surface again (within the same frame) and if it's very close then we call it a hit and we set the colour of that pixel to white. If not, we move on, how far? Well, we're in a loop, checking the distance to all the surfaces and then moving the ray forward by the minimum distance. Then we tell the loop to stop after a certain distance in case it didn't hit anything and to return black as the colour.
    That's it, we have an image. And it's a black background and we have two white circles. Which looks completely crap. :) Ok, what about adding a light source? Ok, let's say our light source is at vec3(0, 2, 7) above the spheres. Now [finally] we get to normals. We've got the location that the ray struck the object, and we've got the position of the light. Well, like we did before we can subtract one from the other and normalise and get the direction unit vector to the light source. Now we need another vector, the normal. Think about it, if it's facing away from the light then it'll be in shade, if it's on the top of the sphere then we can return a light value. So everything depends on whether the face is facing the light.
    Working out the normal is a bit of a pain tbh, it's all a bit manual. You have to go cast a ray a tiny bit to the right/left of where you hit and another a tiny bit up/down and then you minus these vectors to get two tangent vectors. Imagine the sphere is a football. You grab a marker and put a dot where you hit. You put some dots left and right (by repeating what you did to get the first dot) and then you minus these new offset vectors and you've got two that cross flat to the surface of the ball.
    You can then do a bit of maths called the cross product to get the vector perpendicular to these vectors: the normal. I won't explain the maths of the cross product but it's quite cool and, again, shaders have a function that does it for you. Ok, so now we have the ray direction, the normal direction and the light direction all in unit vectors. The next bit of maths magic is called the dot product. If two vectors are pointing in the same direction the dot product will give you a result of 1. If they're in opposite directions -1 and if they're at right angles then 0. So, we can use this value to determine how bright the pixel is. Less than 0, make it black. Greater than zero (so pointing towards the light source), the pixel gets that much brightness.
    Tah-dah! Run this and there are two shaded spheres in your scene. Better yet, move the camera back and forth and it zooms. Move the camera with the viewport and you change the perspective. Move the light and the lighting changes on the spheres. We can easily add a plane or cubes by using the same principle. But what about shadows? Dead easy, just take the point of collision, run the vector towards the light source using the min distance loop and if it hits something then make it black.
    Now, you'll notice that these are geometric shapes and not meshes. Well, a mesh is an array: a series of vectors each indicating a vertex. There's also an index. So, it's a long list of triangles looping round and round. So now, to work out whether you hit a face, you need to do a bit of maths involving the dot product again. It's a bit more complicated but it's pretty much the same thing as our sphere examples. Now, the GREAT thing is as well as vertex location you can send a normal vector in this array. So no more messing around sending more rays working it out, it's in the data! Whoop! Now you run the maths and bang, Suzanne the Monkey is in your scene. You can give it a colour too, and just use the dot product calculation to decide how bright that is. You can give the mesh a UV of its own and send a texture in so now you've got a photo.
    If anyone got to the end of this extended edition of War and Peace (sorry), I hope you see how key normals are to 3d rendering, how simple the basics of it are (obviously, this is the simplest version I could make, blender is much more involved) and why giving them to your GPU in an array is a great idea.
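
    If it helps to see the whole thing in one place, here is a compact, unoptimized sketch of the loop described above: march a ray forward by the closest surface distance, estimate the normal from nearby distance samples, and shade with the dot product against the light direction. Plain Python with a tiny ASCII output; the camera, sphere and light positions are made-up values, not anything from the video.

    ```python
    import math

    def length(v): return math.sqrt(sum(c * c for c in v))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        l = length(v)
        return tuple(c / l for c in v)

    SPHERE_POS, SPHERE_R = (0.0, 3.0, 1.0), 1.0
    CAMERA, LIGHT = (0.0, -3.0, 1.0), (0.0, 2.0, 7.0)

    def scene_dist(p):
        """Distance from point p to the closest surface (here: a single sphere)."""
        return length(sub(p, SPHERE_POS)) - SPHERE_R

    def estimate_normal(p, eps=1e-3):
        """Sample the distance field slightly to each side of p to get the surface normal."""
        return normalize(tuple(
            scene_dist(tuple(p[i] + (eps if i == a else 0.0) for i in range(3))) -
            scene_dist(tuple(p[i] - (eps if i == a else 0.0) for i in range(3)))
            for a in range(3)))

    def shade(u, v):
        ray = normalize(sub((u, 0.0, v + 1.0), CAMERA))  # viewport point minus camera = ray direction
        p = CAMERA
        travelled = 0.0
        for _ in range(64):                              # march forward by the closest distance
            d = scene_dist(p)
            if d < 1e-3:                                 # close enough: call it a hit and light it
                n = estimate_normal(p)
                return max(0.0, dot(n, normalize(sub(LIGHT, p))))
            travelled += d
            if travelled > 20.0:                         # gave up: background
                return 0.0
            p = tuple(p[i] + ray[i] * d for i in range(3))
        return 0.0

    for row in range(20):                                # tiny ASCII render of the viewport
        v = 0.5 - row / 20
        print("".join(" .:-=+*#%@"[int(shade(col / 40 - 0.5, v) * 9)] for col in range(40)))
    ```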

    • @davidmurphy563
      @davidmurphy563 3 years ago +15

      If anyone has a question, if anything wasn't clear, I'd be more than happy to respond btw. If your question is on first principles, great, that's actually the more interesting end of things.

    • @pigydog123
      @pigydog123 3 years ago +18

      YOU IS BIG BIG SMART!!!

    • @unversedunavailable793
      @unversedunavailable793 3 years ago +33

      Clicked read more and my jaw dropped.

    • @cg1582
      @cg1582 3 years ago +3

      U are magic. Thanks.

    • @NOTORIOUS404
      @NOTORIOUS404 3 years ago +3

      But are normal map color values translated to angular values? I don't understand why nobody mentioned this :(

  • @WaterShowsProd
    @WaterShowsProd 3 years ago +55

    Before the video: I understand how normals work.
    Watching the video: Oh, that's interesting... Ha... Ohh....
    It was nice to hear someone mention Phong Shading after all these years, I thought I might be the only person who still calls it that.

    • @simonlicman5166
      @simonlicman5166 3 years ago +1

      I often deleted phong tags and added subdivision lol, and then I was like: 'computer stupid'

  • @Chevifier
    @Chevifier 3 years ago +16

    First video that actually explained the colors of the normal map. I was trying to find out the difference between Normal and Bump maps. Putting 2 and 2 together, this explains it. Thank you.

  • @EANIIX
    @EANIIX 3 years ago +26

    As I struggled with understanding normals for quite a long time as well, I'd like to add some information that can be pretty valuable for anyone trying to understand this topic:
    1. There are actually not only face normals but also vertex normals. Face normals are what you've shown in the video, and as they determine the direction of a face, they are also important for operations like extrusions or modifiers like Solidify, since those use face normals to calculate the direction of the operation.
    2. Then there are vertex normals, which are (who would have thought) the normals of vertices. Vertex normals are actually responsible for the shading, rather than the face normals shown in the video. I highly recommend testing this in your 3D program of choice to really understand it. In Blender, for example, you find it in Edit Mode at the bottom of the viewport overlay settings (check vertex-split normals, not vertex normals).
    3. With flat shading, each vertex has a normal for each connected face, so if it's connected to 4 faces, it has 4 vertex normals which each follow the face normal direction of their related face. If two faces are at an angle to each other, you can see the vertex normals separating from each other and pointing in different directions. That's why the edges between faces appear sharp.
    With smooth shading, however, these vertex normals are averaged out to represent a mix of all related face normals. You can now see that they all follow the same direction, and it looks like only one normal. With Auto Smooth in Blender or Soften/Harden in Maya you can determine at which angle the vertex normals are averaged and appear smooth, and when they are left at flat shading.
    4. Make use of the normal orientation for transformations in edit mode, it can help a lot!
    5. Backface culling is a technique, primarily used in game engines, for not rendering the back side of a face. So a plane would be visible from one side and invisible from the other. In Blender you can enable it in the viewport shading settings; you might want to check that if you experience visibility issues.
    6. The Backfacing output of the Geometry node in the shader editor can be immensely helpful if you're trying to texture an object without thickness, like a plane or a leaf. It gives you a white value for backfaces and a dark value for frontfaces. Use this to drive a Mix node and you can add materials for each side of a face.
    Although normals are pretty fundamental to 3D, they often get overlooked at the beginning, and it helped me a lot to understand what's actually going on with them! It's best to test everything out yourself, as you can learn a lot and it's also pretty fun (at least if you're a little nerdy like me haha).
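
    To make points 1-3 concrete, here is a small plain-Python sketch (the folded two-triangle mesh is a made-up example): a face normal comes from the cross product of two edge vectors, and a smooth-shaded vertex normal is just the re-normalized average of the normals of the faces that share the vertex.

    ```python
    import math

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)

    # Two triangles sharing the edge v1-v2, folded at an angle (a made-up mesh).
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 0, 1)]
    faces = [(0, 1, 2), (1, 3, 2)]

    # Face normal: cross product of two edge vectors, normalized.
    face_normals = [
        normalize(cross(sub(verts[b], verts[a]), sub(verts[c], verts[a])))
        for a, b, c in faces
    ]

    # Smooth-shaded vertex normal: average the normals of every face touching the vertex.
    def vertex_normal(vi):
        total = (0.0, 0.0, 0.0)
        for fi, f in enumerate(faces):
            if vi in f:
                total = add(total, face_normals[fi])
        return normalize(total)

    print(face_normals)       # flat shading: each face keeps its own normal
    print(vertex_normal(1))   # smooth shading: the shared vertex gets the averaged normal
    ```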

    • @DECODEDVFX
      @DECODEDVFX  3 years ago +3

      Yeah, I actually have an old video about how to texture objects like leaves using the negative normals of the face.

    • @AdamKiraly_3d
      @AdamKiraly_3d 3 years ago +4

      Just wanted to add to this, seeing as you already mentioned vertex normals in a vertex-shaded system: the normal maps in the video were tangent-space normal maps; they don't override the surface normals but alter them relative to the original vertex normal direction. Because the map only adjusts the shading, it can be used on deforming shapes. Object- and world-space normals (the one shown in the render debug) can be used to actually override the surface normals entirely, which is most likely the reason you don't see them around that often, but they are still used as an intermediary for textures in Substance Painter, for example. As you said, nothing can ever be easy in 3D. Keep 'em coming though, good video.

    • @DanielSamulewiczXXI
      @DanielSamulewiczXXI 2 years ago +1

      Thanks for additional info! That’s a lot )) So, I made a screenshot to get back to some points later. Another nerd, ha ha

  • @noel975
    @noel975 3 years ago +55

    Absolutely amazing explanation! I always viewed normals as something that just comes with your downloaded texture and didn’t give it any more thought. And I didn’t even know that flipped normals is a thing to worry about. Honestly great job on this one

    • @DECODEDVFX
      @DECODEDVFX  3 years ago +11

      Yeah, that's why I made this video. Normals rarely get mentioned in tutorials, so it's not something a lot of artists really understand very well.

    • @winstonlloyd1090
      @winstonlloyd1090 3 years ago +3

      @@DECODEDVFX Once I learned about flipped normals I started inspecting my face orientation in my projects and was shocked how many normals were just wrong.

    • @cactustactics
      @cactustactics 3 years ago +1

      @@winstonlloyd1090 there's a Recalculate Normals option that ~generally~ gets them all pointed outwards, but depending on what you've been up to (usually naughty things) you might have to fix some yourself!

    • @DECODEDVFX
      @DECODEDVFX  3 years ago

      Yeah, I dread to look at my old projects sometimes. Flipped normals and amateur mistakes everywhere.

  • @PorpaTM
    @PorpaTM 3 years ago +6

    I knew how normals work without knowing exactly what they were. Thanks to your video I learned everything I was missing. Thanks!

  • @CLARKCLOUT
    @CLARKCLOUT 3 years ago +11

    Just started learning Blender last week, and this 8 min video covered most of the other videos I've seen so far. Definitely do more short, in-depth explanations!

  • @CalvinBacon
    @CalvinBacon 3 years ago +1

    Regarding normal maps, there are 2 main types of maps: "Tangent Space Normal Maps", which are the blue/purple-ish textures, and "Object Space Normal Maps", which are a more accurate solution but cannot be tiled or used on animated meshes.

    • @jessekendrick6553
      @jessekendrick6553 3 years ago

      Interesting… do you know where I can learn more about this?

    • @CalvinBacon
      @CalvinBacon 3 years ago

      @@jessekendrick6553 Try Google; not sure what papers or sites explain this properly, but I'm sure you can find something depending on the topic you're looking for. Tangent-space maps are very common, so there's tons of info about them. Object-space maps are not widely used even though they are superior in every way. Might need to do some digging.

  • @FranciscoTChavez
    @FranciscoTChavez 3 years ago +2

    A bit of information about image formats for storing Normal-Maps:
    When sampling (reading) normal values from textures (images), the value contained within the texture needs to be within 1% of the value that was recorded into the texture to properly reproduce the normal that we want. If you are using 8 bits per color channel, then there are a lot of angles where normals will have an error greater than 1%. This is the reason why some programs just default to 16 bits per color channel. Using R16G16B16 will give you a nice bake, while a lot of people will get a crappy bake if they use R8G8B8. Now, it turns out that the 1% max error gets resolved at 10 bits per channel, but 30 (or 40) bits per pixel lowers the performance of most computers, so R11G11B10 would be preferable.
    So, when baking normals onto a texture, R11G11B10 should be the minimum color format that you select. Selecting more memory-intensive formats like R16G16B16 is perfectly fine. But going lower, like R8G8B8, will give you a lot of visible artifacts if the normal describes a curved surface.
    Think of it this way. You are trying to arrange a chair inside a scene, but you can only turn the chair by multiples of 30 degrees. Let's say that you need to set the chair to 45 degrees. Well, that's not possible, because you can only turn it by multiples of 30 degrees. You can try 30, 60, 90, etc. If you only turn the chair by 30 degrees, it might not be noticeable that it's off by 15 degrees. Yet, if you place that chair next to a table that has a 45-degree rotation, then it becomes noticeable that the chair's rotation is off. So you play with the editor settings so that you can rotate things by 20 degrees. Well, that would let you rotate the chair to 40 degrees, making it 5 degrees off the target value. Being off by 5 degrees is a lot less noticeable than being off by 15 degrees; in fact, for this use case it won't be noticeable at all, so that's close enough.
    Switching from 30- to 20-degree rotations is a lot like switching from R8G8B8 to R11G11B10; it won't recreate all normal values perfectly, but it will be close enough. We just need the value in the image to be within 1% of the target value.
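
    To put rough numbers on that, here is a plain-Python sketch that round-trips random unit normals through a given per-channel bit depth and reports the worst angular error it finds. It treats every channel as having the same bit count (so "10 bits" stands in for the R11G11B10 case) and the sample count is arbitrary; illustrative only, not a rigorous survey.

    ```python
    import math
    import random

    def normalize(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)

    def quantize(n, bits):
        """Round-trip a normal through an unsigned fixed-point channel of `bits` bits."""
        levels = (1 << bits) - 1
        stored = [round((c * 0.5 + 0.5) * levels) for c in n]
        return normalize(tuple(s / levels * 2 - 1 for s in stored))

    def worst_error_deg(bits, samples=20000):
        """Worst angle (degrees) between random unit normals and their quantized versions."""
        rng = random.Random(0)
        worst = 0.0
        for _ in range(samples):
            n = normalize(tuple(rng.gauss(0, 1) for _ in range(3)))
            q = quantize(n, bits)
            cos = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, q))))
            worst = max(worst, math.degrees(math.acos(cos)))
        return worst

    for bits in (8, 10, 16):
        print(f"{bits} bits/channel: worst sampled error ~ {worst_error_deg(bits):.3f} degrees")
    ```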

  • @fabianeer6675
    @fabianeer6675 3 years ago +2

    one of the best invested 8 mins of my life

  • @elliejohnson2786
    @elliejohnson2786 3 years ago +4

    I think the simple definition of what a normal actually is was very helpful for me to understand it. I already knew mostly how they worked, but was never officially taught them.

  • @ixxirecords26
    @ixxirecords26 3 years ago +9

    These are really, really awesome. Love tutorials, but understanding the principles behind the different tools, can be infinitely more valuable.
    "Give a man a fish, he eats for a day; Teach a man to fish, he eats for a lifetime."

    • @ZackMathissa
      @ZackMathissa 3 years ago +6

      But you need to give him a fish before teaching him or he will be hungry and cannot learn properly.

    • @ixxirecords26
      @ixxirecords26 3 years ago +2

      @@ZackMathissa I've never heard that extension of the phrase before, absolutely love that.

  • @duramirez
    @duramirez 3 years ago +16

    DECODED: Tell me which other areas of 3D you want me to explain.
    ME: Yes.

  • @briost123
    @briost123 3 years ago +7

    Even though I already knew all this, it added value when it's explained in such an effective way, same as with your other video in this style, and it helps me think of ways to solve more problems with these tools. Thank you!

  • @paxdriver
    @paxdriver 2 years ago +1

    Flipping the green channel for DX normal maps, and the explanation of what OpenGL / DX do differently in processing, is a massive gold nugget, thank you! I haven't heard anyone on RUclips even mention that, super good to know.
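
    For anyone who wants to do that flip outside Blender, here is a tiny sketch using Pillow (assuming it is installed; the file names are placeholders): inverting just the green channel converts a DirectX-style map to OpenGL-style, or back again.

    ```python
    from PIL import Image, ImageOps

    img = Image.open("normal_directx.png").convert("RGB")
    r, g, b = img.split()                                      # one greyscale image per channel
    flipped = Image.merge("RGB", (r, ImageOps.invert(g), b))   # invert only green (the Y direction)
    flipped.save("normal_opengl.png")
    ```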

  • @travissmith7471
    @travissmith7471 1 year ago +1

    It has been a wonderful journey learning Blender... I try every day to expand my knowledge... I do not like following instructions blindly... I want to know why I am doing what I am doing... You have given me a better understanding than what I thought I knew before this video... Thanks for taking the time to share...

  • @lichdust
    @lichdust 3 years ago +1

    Man this is just pure uncut freebase information, I really wish other instruction was as good as this, but am very grateful for it here. subbed.

    • @DECODEDVFX
      @DECODEDVFX  3 years ago

      You're in luck. The next video in this series will be released in the next day or so.

  • @JordanMossy
    @JordanMossy 3 years ago +9

    Coming across this as a game artist is strange, as the knowledge is like breathing and so you don't ever really consciously think about it. It's nice to see a breakdown I can send to someone if I ever feel the need to explain how we make bad looking things look nice in engine :)

    • @5ld734
      @5ld734 3 years ago +1

      what games did you work on bro

    • @JordanMossy
      @JordanMossy 3 years ago +1

      @@5ld734 Dirty Bomb, Gears of War and Outcasters

  • @mrofnoctonod
    @mrofnoctonod 3 years ago +8

    7:02 nothing can ever be easy in 3D. lol! Thanks for the great teaching.

  • @TheMimzez
    @TheMimzez 3 years ago +1

    I've taken 3d classes and seen other videos explaining what normals are, but I never really understood normal maps until this video! thanks!

  • @johntnguyen1976
    @johntnguyen1976 3 years ago +1

    The ACTUAL cameo from Flipped Normals had me rofl! Luv it!

  • @darkflamesquirrel
    @darkflamesquirrel 3 years ago +3

    Even if I felt like I understood it, I still watched the video. And I actually did end up learning something new! (The green channel thing). Will come in handy later on once I start using normal maps!

  • @Nejvyn
    @Nejvyn 3 years ago +2

    Wow, thank you so so much, now I finally know why some normal maps produce such strange shading issues! My workaround usually produced okayish results but this is a much cleaner method!

  • @globglob3d
    @globglob3d 3 years ago +1

    What a sweet in-depth video, I love how you kept it simple while going in depth into normals.

  • @formdusktilldeath
    @formdusktilldeath 3 years ago +3

    1:24 That's actually an unfortunate example, because that square is made up of TWO triangular faces, and the normal it shows is the average of those two faces' normals. It's maybe not a big deal, but it can be confusing if you're new to this concept.

  • @TwashMan
    @TwashMan 3 years ago

    I've been watching shader tutorials trying to explain what normals are for a while now, but this is the first one I truly understood everything in.
    Thank you so much. You should be really proud.

  • @Willsing7
    @Willsing7 3 years ago +1

    I never understood Normals until I watched this video. Thanks for posting!

  • @eleventhoperator
    @eleventhoperator 3 years ago +1

    This was incredibly helpful! Understanding something makes it a lot easier to work with creatively.

  • @TheBritMonk
    @TheBritMonk 3 years ago +4

    This is great. I would love a breakdown of flow / tangent maps used in realtime engines and such.

  • @soyleo_san
    @soyleo_san 3 years ago +1

    Wow, I didn't know there were two types of normal maps, OpenGL and DirectX. Great video.

  • @iwein
    @iwein 3 years ago +1

    Thank you for this! That certainly clears up a topic I never thought I'd fully understand about 3D, but it makes total sense now.

  • @richyrich118
    @richyrich118 3 years ago +1

    First video where I've actually understood normals, thank you sir :)

  • @TheThousandYardStare
    @TheThousandYardStare 3 years ago

    I thought this video was going to be about enlightenment, but it turned out to be a Blender tutorial... Still stuck around, this stuff is really cool!

  • @moodberry
    @moodberry 3 years ago +1

    I don't use Blender, but your explanation was easy to follow and I learned something today. Thanks.

  • @nektoxyz1013
    @nektoxyz1013 3 years ago

    "They have flipped normals."
    And dat photo filled with awesome humor and sarcastic background.
    Definitely like && subscription.
    Your sense of humour just awesome.
    Standing ovation!!!

  • @disruptive_innovator
    @disruptive_innovator 2 years ago

    I used to know where that Auto smooth setting was and then 2.8 happened and I lost track of it. Thank you so much! I thought it was just removed because low poly was fading out of popularity!

  • @ebslater29
    @ebslater29 2 years ago

    Amazing video! I've been struggling with understanding normals for months! And finally I get it! Thank you!

  • @lore_emu
    @lore_emu 3 years ago +30

    1:08 Why did I laugh so hard at this?

  • @Porus3D
    @Porus3D 3 years ago +1

    That "Flipped normals" reference is genius 1:08 lol

  • @blenderzone5446
    @blenderzone5446 3 years ago +6

    amazing how normals can affect the model render appearance!

  • @LadyAster
    @LadyAster 3 years ago +3

    "If you extrude down the normals are flipped" this explains some things haha

  • @jamesdelb6885
    @jamesdelb6885 3 years ago +1

    Seems like magic still. Great explanation. Thank you.

  • @MBaadsgaard
    @MBaadsgaard 3 years ago +4

    You could consider making a video about tangent normals vs. world or local space normals and why the tangent basis matters. It would be a natural, though somewhat twisty, continuation.

    • @jessekendrick6553
      @jessekendrick6553 3 years ago

      Seconded. I would be very interested in this video.

  • @treedoesstuff
    @treedoesstuff 3 years ago +2

    Thaaank yooooou. Love your videos so far, you're so clear and concise with the information, audio quality is good and your demonstrations to go with it really help. Helping me no end with my 3D journey and understanding what the heck it all is XD

  • @arqamdeen5162
    @arqamdeen5162 2 years ago

    this was superb, really enjoyed learning and understanding the logic behind what we do

  • @namity1305
    @namity1305 3 years ago

    Thanks for the NVidia x OpenGL part, otherwise I would've never noticed I've been using my normal textures incorrectly this whole time.

  • @nfainer
    @nfainer 3 years ago

    Wow, I learned so much from this video. I'm a Cinema 4D user and I always thought normal maps were just magic. Now it all makes so much more sense, thank you.

  • @henkkok9437
    @henkkok9437 3 years ago +1

    I loved the video and don’t want to miss any of your future ones. This fundamental level was super useful for me, thank you!

  • @BlenderBeanie
    @BlenderBeanie 3 years ago +1

    Seeing you using my addon makes me happy :3

    • @DECODEDVFX
      @DECODEDVFX  3 years ago +2

      The temporal denoising is a nice addition. I'll be sure to give it a mention next time I make a video focused on different addons.

  • @SugoiDesu1
    @SugoiDesu1 3 years ago +1

    As always, your video was concise and very easy to understand. Thank you for the insight!

  • @Damnev
    @Damnev 3 years ago +1

    VERY informative. Many thanks for taking the time to make this video good sir.

  • @maltede8855
    @maltede8855 3 years ago +1

    Nice video, and great summarization :)!
    At around 0:50 you speak about positive and negative normals, but as far as I know there is no such thing! It has to do with the triangle winding order, which decides whether the side you are looking at is a front face or a back face.
    And that is used when it comes to culling for performance, or when you want different materials or appearances depending on the "inside"/"outside" of a mesh.
    Just wanted to clear that up :) but great videos!!
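
    A tiny plain-Python sketch of that winding-order point, using a made-up triangle: the same three vertices listed counter-clockwise vs. clockwise produce opposite face normals, which is what a renderer (or backface culling) keys off.

    ```python
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def face_normal(v0, v1, v2):
        """Un-normalized face normal from the two edges leaving v0."""
        e1 = tuple(b - a for a, b in zip(v0, v1))
        e2 = tuple(b - a for a, b in zip(v0, v2))
        return cross(e1, e2)

    tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    print(face_normal(*tri))            # (0, 0, 1)  counter-clockwise winding faces +Z
    print(face_normal(*reversed(tri)))  # (0, 0, -1) reversing the winding flips the normal
    ```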

  • @Caesar_Online
    @Caesar_Online 3 years ago +3

    Thought that this was gonna be a philosophical video for a sec.
    Luckily I still need to learn how to use Blender so I'm glad I found this 😎

  • @RC-1290
    @RC-1290 3 years ago

    Worth noting that normal maps are usually relative to the direction of the surface they are on. In other words: they are in 'tangent space'. This is what makes them blueish; the blue is a normal pointing along the original direction of the surface. This is as opposed to object or world space normal maps, which point in a specific direction.
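
    A minimal sketch of what "relative to the surface" means in practice: the sampled texel is expressed in the surface's tangent/bitangent/normal basis and gets rotated into world space before lighting. Plain Python; the basis vectors are a made-up example for a surface facing +Y.

    ```python
    import math

    def normalize(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)

    def tangent_to_world(texel_rgb, tangent, bitangent, normal):
        """Decode an 8-bit texel to [-1, 1], then express it in the surface's own basis."""
        n = [c / 255 * 2 - 1 for c in texel_rgb]
        return normalize(tuple(
            n[0] * tangent[i] + n[1] * bitangent[i] + n[2] * normal[i]
            for i in range(3)))

    # Flat normal-map blue on a surface facing +Y comes out as (roughly) +Y, as expected.
    print(tangent_to_world((128, 128, 255), (1, 0, 0), (0, 0, 1), (0, 1, 0)))
    ```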

  • @UnderfundedScientist
    @UnderfundedScientist 3 years ago +9

    Notification squad, keep up the great work . Really informative stuff

  • @friedbread2843
    @friedbread2843 3 years ago

    I remember watching this when I first started using blender, it felt like those trigonometry videos

  • @marcfuchs6938
    @marcfuchs6938 3 years ago

    I am using Blender 2.79. I don't know if it's the same in newer versions, but every now and then I use the command "Make Normals Consistent", because sometimes you extrude faces with normals facing the wrong way while editing and don't notice it. So in Edit Mode, select all faces with A and search for that command. Blender will automatically put all the normals into the correct directions. If they are all correct, nothing happens.
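
    For reference, the same cleanup can also be run from Blender's Python console; this is a sketch using standard bpy operators (defaults and behaviour can vary a bit between Blender versions), equivalent to selecting everything and recalculating the normals to point outside.

    ```python
    import bpy

    bpy.ops.object.mode_set(mode='EDIT')                 # the operator works in Edit Mode
    bpy.ops.mesh.select_all(action='SELECT')             # same as pressing A over the mesh
    bpy.ops.mesh.normals_make_consistent(inside=False)   # "Recalculate Outside"
    bpy.ops.object.mode_set(mode='OBJECT')
    ```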

  • @FreeFall23
    @FreeFall23 3 years ago +1

    Saw this recommended. Thought you will tell me, why an every day, regular, normal guy is actually pretty cool.

    • @Maouww
      @Maouww 3 years ago

      "Change your understanding of normies in 8 minutes"?
      I think there are some good lessons here:
      Flip a normie and he becomes weird.
      Normalness is a spectrum.
      It's easy to smooth differences between 2 normies of similar angle, but often it's really useful to highlight differences between normies.
      If a normie is being too extra, remove the green.

  • @deedoubs
    @deedoubs 3 years ago +1

    Damn, I've known the basic idea of normal maps since like 2003, but this is the first time I actually connected the idea with normals.

  • @xikes
    @xikes 2 years ago

    When you're moving the point light around at 6:23, the shadows don't look right at all.
    It's fine if the light comes from the bottom-left, but really funky if the light is moved to the top-right.

  • @jasonadams4321
    @jasonadams4321 3 years ago +1

    This was extremely helpful, thank you

  • @FlashySenap
    @FlashySenap 3 years ago +1

    This was helpful, as I didn't know Blender used OpenGL and I bake normal maps that I use for a game engine that uses DirectX.

  • @lmz000
    @lmz000 3 years ago +1

    Very informative! Thanks for the video. Also, "because nothing can ever be easy in 3d" made me laugh SOOO MUCH! hahaha

  • @Leeki85
    @Leeki85 3 years ago +2

    You should mention soft and hard edges. This works great in Wings3D. You have the auto-smooth feature, but all it does is set soft/hard edges based on angle. Most of the time such automation is good, but in almost every model there are at least a few edges that you need to set soft/hard manually. For example, metal parts should smooth at a much lower angle than organic ones.
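
    As a rough sketch of driving that from Blender's Python side (attribute names as in Blender 2.8x-3.x; newer releases moved this setting, so treat it as illustrative only): shade everything smooth first, then let the angle threshold decide which edges stay hard.

    ```python
    import bpy
    import math

    obj = bpy.context.active_object
    bpy.ops.object.shade_smooth()                   # mark all faces smooth first
    obj.data.use_auto_smooth = True                 # then let the angle decide the hard edges
    obj.data.auto_smooth_angle = math.radians(30)   # e.g. a tighter threshold for hard-surface parts
    ```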

  • @GinoZump
    @GinoZump 3 years ago

    Thank you for the video. What I was missing in the end was a simple example of using a normal map in blender. But I guess the video was supposed to be universal. Cheers.

  • @carn1voor
    @carn1voor 3 years ago +1

    First of all, awesome explanation! I would love to see an explanation of the benefits of using a normal map compared to a bump map, or vice versa, and what their limitations are, for example.

  • @mortenjaeger4997
    @mortenjaeger4997 2 years ago

    Did not expect to see my face in the middle of a video haha!
    Great video, very well explained

  • @arnabmusouwir9018
    @arnabmusouwir9018 3 years ago +1

    It is amazing what you can learn in 8 minutes!

  • @glennzone
    @glennzone 3 years ago +1

    This was extremely insightful on this topic. Thank you !

  • @felipebulac
    @felipebulac 3 years ago +28

    Since the thumbnail looked kinda deep fried I was half expecting a meme guide to understanding normies... Funny how my brain works these days 🤔

  • @kotofelius
    @kotofelius 3 years ago

    It didn't change my understanding of normals :) Good for beginners, good job. Missing: how object normals and texture normals work together; they are not the same thing.

  • @Acuzzio
    @Acuzzio 3 years ago +1

    This video was truly amazing. Thanks a lot.

  • @LizzyKoopa
    @LizzyKoopa 2 years ago +1

    keep it up brother, good stuff!

  • @gamedevboy1181
    @gamedevboy1181 3 years ago +2

    Volume is too low D:

  • @sanketdhone7823
    @sanketdhone7823 3 years ago +1

    Great video about the normal map 👍👍
    Also, the last part was very important, and you properly mentioned the two different systems of normal maps, viz. DirectX and OpenGL.
    Thanks bro, keep it up 💯💯

  • @MaxMRasmussen
    @MaxMRasmussen 3 years ago +1

    Really good explanation. Thanks.

  • @kaustik185
    @kaustik185 3 years ago

    the green channel flip blew my mind.
    the kinda shit school is for

    • @DECODEDVFX
      @DECODEDVFX  3 years ago

      Yeah, I sometimes forget to change the normal mode when I export from Substance Painter. This way is faster than re-exporting the normal map.

  • @scribblecloud
    @scribblecloud 3 years ago

    Ahhh, I need these for like- EVERYTHING- but especially geometry. I'm still very confused about messy geometry and stuff like poles or non-flat faces, and how they happen, how they work, why they're bad, how to fix them, etc. I'd be super interested in that!!

  • @thePrzemko17
    @thePrzemko17 3 years ago

    Very good video, tight information in less than 8 minutes. But I would title the video "Let's go back to basics: Normals".

  • @datguy6745
    @datguy6745 3 years ago +1

    That Flipped Normals easteregg had me giggling

  • @derpersona
    @derpersona 3 years ago

    @1:05 "but the one on the right looks kind of weird" - literally clicked on this video because the one on the right looked more realistic, so I thought you'd show how to make more realistic glass hahahah

    • @DECODEDVFX
      @DECODEDVFX  3 years ago

      Yeah, the thing is glass (and all transparent materials) flips and inverts reflections, meaning the reflection of a light source on the object should appear as an inverted reflection on the opposite side. On the second model you can see the dark roof of the cave HDRI reflected on the top of the monkey head, which isn't accurate. The reflection should be on the bottom of the mesh due to refraction.

  • @FabioSalvidotcom
    @FabioSalvidotcom 3 years ago +1

    Amazing video, thank you for sharing your brilliant knowledge.

  • @piderman871
    @piderman871 3 years ago

    4:14 Actually, it smooths faces of which the *normals* are at an angle smaller than the number shown. Generally the faces would have an angle greater than 180 minus the number shown.

  • @chillazchillazius7634
    @chillazchillazius7634 3 years ago +1

    Wow, that was very well explained!

  • @雪鷹魚英語培訓的領航
    @雪鷹魚英語培訓的領航 3 years ago

    Fong... that's the name of the wise man character in Reboot. Didn't know he was also a 3D fundamentals reference.

  • @brennenbeck7311
    @brennenbeck7311 3 years ago

    Pretty good video and explanation. A couple others on here have expanded on the knowledge a little. I thought I'd add my 2 cents.
    Maybe this is too much information - as I proofread it, it's certainly a wall of text, but here goes. And maybe the deep dive into what's going on here might help make it all more clear for some. Although you mostly only need to know this if you are a programmer, it certainly couldn't hurt the artist to know what is actually happening under the covers.
    1) "Normal" is a nick-name for normalized vector. A vector is just an arrow in 3D space (or 2D and such, but 3D in our case) used in the mathematics of the game engine, etc. It's represented by x,y,z coordinates that define the position of the head of the arrow. Because of the way the math works, you can always assume that the tail end of the arrow is at 0,0,0 meaning you no longer need to write it out or keep track of it (in vector math, the position of the vector is never useful - only the vector length and the vector direction - again, imagine an arrow). So, just the head coordinate is sufficient to describe the arrow in terms of length and direction (finding the length from the head position - since the tail is at 0,0,0 - is fairly easy math, especially for a computer). Vectors are used for lots of stuff in game programming but especially for lighting calculations.
    A normalized vector is a vector with its length information removed but maintaining the direction information. If you were to completely zero a vector out, it would cease to exist because you would not have an arrow if its length is zero and it can't point in any direction. So, instead they decided that a length of 1 will be a vector with no length information (unless of course you for some reason need a specific length of 1, which might get confusing, but that's how it works anyway). There is some convenient math that can be done on normalized vectors making normals especially useful. So, this is what a "normal" actually is: x,y,z coordinates for the head of an arrow that has a length of 1 indicating the length doesn't matter but the direction does. Note that those x,y,z coordinates are relative to the 0,0,0 origin and you have to separately maintain the position of the arrow in the scene, which is usually on the surface of a 3D triangle (or quad in Blender).
    To normalize a vector, you do some specific math that keeps the vector pointing in the exact same direction, but sets the over-all length to exactly 1.0 which may not be obvious when looking at the x,y,z coordinates unless the normal just happens to point straight down one axis, which is usually the direction they face when you first create them before they've been rotated to face a specific direction. So, as the math at the end of this post explains, your normalized vector might be x,y,z=0.0,0.0,1.0 which converts to RGB = 127.5, 127.5, 255.0 or a medium brightness blue (the main color of your normal maps). The positive Z axis in this case would be "straight" forward. Map this to a pixel position on the model and it's flat. Any other normal will map that pixel on the model as pointing in a different direction than "straight forward".
    2) Maps are photographs that contain data. A normal map is a photograph that contains vector normal data. Maps are per pixel or actually per texel (pixels in a texture file).
    This all started with texture maps. Someone figured out that you could interpolate the position of each pixel on the face of a triangle in computer graphics (remember that all 3D models are nothing more than a series of triangles drawn on your monitor). This allows you to map information directly to a pixel on the face of a 3D triangle (using U,V coordinates in the UV map for the texture map). The visual effect is like wrapping a photograph around a statue and it immediately made 3D models look far more realistic by being able to map photos to the surface of the model.
    Eventually, they figured out that you were just mapping the color data of the texel in the photo to the correct position on the face of the triangle and then projecting it to the 2D screen to be viewed as the correct color of that pixel. And, when you look at the color as just pure data, you suddenly realize that you can map ANY kind of data to a given pixel on the face of the triangle or model. And that's what a map is. Normal maps are just mapping the direction a pixel on the surface of the model faces. The normal map is logically wrapped around the model the same way as a texture map and then instead of using it to know the color of that pixel on the surface of the model/triangle, you use it instead to determine what direction that pixel is facing for use in lighting calculations. This allows you to use fewer triangles (faces) in your model and through visual trickery, make it LOOK as if the model has a lot more geometry (surface complexity) than it actually does by using tricks in the lighting calculations for example.

    • @brennenbeck7311
      @brennenbeck7311 3 years ago

      3) There are different types of normals used for different situations in this video. You have face normals, which is what you see in Blender and mostly what is discussed in this video and is typically what you would need for faceted shading where smoothing is not done.
      Then you have vertex normals which are calculated for each vertex and this is generally what is actually being used most of the time to interpolate across the face of the triangle in smooth shading. You're tripling the number of normals used and it makes it possible for the triangle to appear to be bent and not just face one direction.
      Then you have what is in the normal map, which is a pixel normal. In fact the map stores the pixel normals which are then interpolated across the face of each triangle it is mapped to through U,V coordinates. So, we really have normals for each pixel which are completely separate from the normal of the face it is being interpolated across. It will generally line up exactly with the texture map because the same UV coordinates are used. So, the texture map shows what each pixel color is for each pixel across the face of the model and the normal map shows the direction each pixel is facing for use in lighting calculations.
      Of course the real magic is when you make a very high poly model, bake its normals down to a normal map, which you then interpolate across the faces of a low poly model, making it look as if the low poly model has all the geometry of the high poly model at a fraction of the cost in graphics card performance.
      4) You may have noticed that the photo stores colors, not normals, because that's what photo files are for - storing pixel colors.
      A vector normal is represented by a position in 3D space (the position of the tip of the arrow with its tail at the origin 0,0,0) with the total length of the arrow being exactly 1. If it's pointing straight down an axis it will either be 1 or -1 for that axis coordinate (depending on if it faces down the axis one way or the opposite) and 0 for the other two coordinates. But usually it's not pointing straight down the axis meaning all three coordinates will be 0 point something and either positive or negative. It's usually difficult to just look at the numbers and know that the over-all length is 1 without doing the math when the vector points any direction other than straight down an axis, but it is important for the math involved that the length remain 1 when rotating the normal.
      The problem here is that the standards for these photograph files don't allow for negative values because a color is always supposed to be a positive value. The Red, Green, and Blue values for the texel/pixel in the photograph is for example 0 to 255 (different standards do things differently, such as use a value between 0 and 1 but let's just go with RGB 0 to 255). RGB maps nicely to x,y,z but the numeric range is off.
      So, you first make sure all your normals are normalized specifically because the math here isn't going to work if the vectors have not been normalized. So, now all your normals have coordinate values between -1 and 1. So, you add 1 to every coordinate to make them between 0 and 2. Then all you have to do is make that a percentage of the highest RGB value, for example divide by 2 to get a percentage and then multiply by 255. Now you have turned that normal's x,y,z coordinates into an RGB color that can be stored in the photo. Just do the math in reverse to convert it from a color back into the x,y,z coordinates of your normal. Again, remember that the normal is just the direction of that pixel/texel which are the x,y,z coordinates of the head of the arrow on the assumption that the tail is always at the coordinates 0,0,0. So, it's just a direction the pixel is facing with no information about where the pixel is actually located in 3D space (a completely separate process).
      5) The most simple lighting calculation, used in many early graphics programs including video games, is to take the direction/normal and illuminate the object, percentage wise, by comparing it to the direction/normal of the light source. If the object (pixel in this case) is facing the same direction as the light source is facing, that means that the object (pixel) is facing away from the light and should receive 0% of the light. So, you give it no color (black). If the object (pixel) is facing directly into the light source, you give it 100% of the light, which means multiplying your color by 1 (100%), giving you the original color. For any other direction your object (pixel) is facing from 0 to 90 degrees on either side (forming a 180 degree hemisphere), you give it a percentage from 0 to 1 and multiply the color by that percentage so that it gets darker the further it faces away from the light source. So, when the normal faces directly into the light the angle difference between the two directions is 0% and as it turns away from the light source the difference gets closer and closer to 100% as the angle gets closer to 90 degrees. Any angle of 90 degrees to 180 degrees is 0% (black) so that the back side of the object has no illumination from the light source (the half of the object facing away from the light source is in complete darkness and is colored black). (Different methods are used to calculate the ambient light on the back side of the object when needed which is probably just a fixed percentage for everything not directly illuminated thus giving everything not illuminated by the light source the same percentage of light regardless of facing. Usually, the two are added together so that objects/pixels in direct illumination are colored by both sources.)
      The bottom line is that the rendering engine takes the part of that model that the lighting is being calculated for and draws that pixel on the screen as a percentage of the difference of the angle the pixel faces and the light source. So, that it gets a smaller percentage (darker) of the original color of the pixel the more it faces away until it faces completely away from the light source and is colored pitch black to represent no light received. (Modern lighting techniques get way more complicated than this, but this is where you start when learning even more complicated lighting.)
      Lighting calculations just get more complex from there, but pretty much all of it starts with this calculation. Maybe it's at least a little useful to understand what actually occurs in this most basic lighting calculation.
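
      A bare-bones sketch of that "most simple lighting calculation" in plain Python, with made-up colour and direction values: scale the pixel colour by the clamped dot product of its normal and the direction towards the light, plus a small fixed ambient term for the unlit side.

      ```python
      import math

      def normalize(v):
          l = math.sqrt(sum(c * c for c in v))
          return tuple(c / l for c in v)

      def lambert(pixel_color, pixel_normal, to_light, ambient=0.1):
          """Scale the colour by how directly the pixel faces the light, plus a fixed ambient term."""
          facing = sum(a * b for a, b in zip(normalize(pixel_normal), normalize(to_light)))
          intensity = min(1.0, max(0.0, facing) + ambient)
          return tuple(round(c * intensity) for c in pixel_color)

      print(lambert((200, 60, 60), (0, 0, 1), (0, 0, 1)))    # facing the light: full colour
      print(lambert((200, 60, 60), (0, 0, 1), (1, 0, 1)))    # 45 degrees off: ~80% of full colour
      print(lambert((200, 60, 60), (0, 0, 1), (0, 0, -1)))   # facing away: ambient only
      ```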

  • @Skygerobrian
    @Skygerobrian 3 years ago +1

    I just learned something thx.

  • @RoundCornerConnection_Founder
    @RoundCornerConnection_Founder 2 years ago +1

    You explain everything very well! Thanks! Do we have a new guru here? :)

  • @arturgomes
    @arturgomes 3 years ago +1

    Thanks for the awesome explanation

  • @g0bo_4typ1c3
    @g0bo_4typ1c3 3 years ago

    6:32 congratulations ! You designed a Pink Floyd album !

  • @maximutatro3176
    @maximutatro3176 3 years ago

    I'm not anything close to an animator, but this video helped me fall asleep.

  • @MTN1601
    @MTN1601 3 years ago +2

    *congrats on 100K SUBSCRIBERS*

  • @Eichro
    @Eichro 3 years ago

    "Change your understanding"
    bold of you to assume i have any understanding

  • @Soulsphere001
    @Soulsphere001 3 years ago

    I had no clue that a weighted normal is really just a normal map. Though I didn't know what a weighted normal was until yesterday either.

  • @stefanguiton
    @stefanguiton 3 years ago +1

    Excellent explanation!