I wish I knew this before using Image Textures (Blender)

  • Published: Jan 30, 2025

Comments • 75

  • @itestthings5337
    @itestthings5337 2 years ago +22

    This Blender series is hands down the best learning resource I found on YT. Please keep them coming, I can't get enough.

  • @richochet
    @richochet 2 years ago +28

    I 'cannot tell you' how much I needed a channel that actually explains the why, not just the how!

  • @cristiianos4336
    @cristiianos4336 2 years ago +8

    Thank you! Please do more of these "But WHY?" videos for us real beginners! This is amazingly helpful, and I wish I'd had this kind of tutorial from the moment I downloaded Blender.

  • @archivexstudios
    @archivexstudios 1 year ago +1

    Man, I just need to say I love your videos. I was stuck for a minute; I'd watched so many tuts with no answers, and yours cleared this up for me straight away. Please keep doing more videos.

  • @gottagowork
    @gottagowork 6 months ago

    Note that under Object Data Properties there is a Texture Space tab. Using Generated (and/or Object) texture coordinates, you can adjust the position and size of the bounding box that is used. If you're using XYZ tangents for anisotropic shading and it looks off from what you'd expect, chances are the texture space has been automatically updated to something less than ideal. Generated will use 0-1 in XYZ for the bounding box, whereas Object will use the origin and allow negative coordinates in unclamped space. The old modulo math function was prone to breaking with negative numbers, so you had to account for that, whereas the new (truncated?) one fixes those issues. Generated also accounts for some modifiers, like bending. Object coordinates are great for box-mapped objects, but we have to adjust the coordinate orientation to change how the mapping aligns, as there is no way to rotate the coordinate system with nodes before it's used by the Image Texture node.
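    [Editor's sketch: the modulo issue above is easy to reproduce outside Blender. This is plain Python, not node code, and assumes a 0-1 texture tile; it shows why a truncated (sign-of-dividend) modulo breaks tiling for negative coordinates while a floored modulo wraps correctly.]

```python
import math

def truncated_mod(x, n):
    # C-style fmod: the result takes the sign of x, so negative
    # coordinates fall outside the 0..n tile.
    return math.fmod(x, n)

def floored_mod(x, n):
    # Python's % is floored: the result takes the sign of n,
    # so any coordinate wraps back into the 0..n tile.
    return x % n

# A texture coordinate slightly left of the tile origin:
u = -0.25
print(truncated_mod(u, 1.0))  # -0.25 (outside the 0..1 tile)
print(floored_mod(u, 1.0))    #  0.75 (wraps correctly)
```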

  • @osseous42
    @osseous42 2 years ago +5

    Wow!! Your Blender videos are by far the most informative I've seen! Please keep these coming! (Hoping that you do rigging and animation next.)

  • @shadix365
    @shadix365 1 year ago +1

    You are a freaking life saver; I cannot thank you enough for this and the geometry nodes video. It would be nice if there was another one of these for the last set of nodes, the compositor, as I imagine it would probably demystify renderer settings, and also for how the video editor interacts with other areas of Blender, as I imagine it would help gain a deeper comprehension of how Blender files are actually organized. These fundamentals, though, cleared up so many ??? things for me that I think I might be able to figure the rest out now.

  • @UnshackledBanana
    @UnshackledBanana 1 year ago +2

    Cool video, fewer rabbit holes compared to the first video. Still need to go back and make the timer :). I found when using the Alpha slider on the Principled BSDF that both the logo and the leather were becoming transparent. Following your method, I inserted a Map Range, linked the logo's Alpha through it and then into the Principled BSDF's Alpha with the same logic you applied above, and it worked. Been tinkering around with Blender for about a month; your videos are awesome, learnt a lot today.

  • @goatpepperherbaltea7895
    @goatpepperherbaltea7895 2 years ago +5

    Thanks for another one of these; keep doing them, they're great. I already knew most of this stuff intuitively just from doing it, but hearing more technical explanations of why these things are the way they are is very useful.

  • @tisexblack
    @tisexblack 1 year ago +4

    I was in denial about watching this 1-hour understanding tutorial, but I knew I had to, and I wish I had watched it ever since it came out. WATCH THIS, PEOPLE. THIS IS THE ACTUAL SOLUTION YOU'RE LOOKING FOR.

  • @GaryParris
    @GaryParris 2 years ago +4

    For those who want an open source alternative, I would suggest Inkscape (in place of Illustrator), GIMP/Krita (in place of Photoshop), and Darktable/RawTherapee (instead of Lightroom, especially DT). They're all just as good as the paid options; well done with the tutorial, Rabbit!

  • @gottagowork
    @gottagowork 6 months ago

    Some pointers with displacement, bump, and normal maps.
    Displacement: Can be a height field (like a bump map) or a vector displacement map, which connects to the Material Output, so all shaders will be affected by it. You don't *have* to use true microdisplacement to use it (there is a setting for bump only), but I would recommend it; if the height field texture map is to be used as a bump map, you can use microdisplacement to find the ideal displacement height and use that as the bump distance (bump effect always 1).
    Bump maps: Can only be a height field map, but the result is plugged into individual shaders instead of at the material level, so you can have a bumpy substrate with a smooth topcoat, which is impossible with displacement without an additional model for the smooth part. Ideally you should only use Bump Strength 1 and adjust Bump Distance (or the height input) instead, to obtain ground-truth normals that reflect what a displacement would do; adjusting Bump Strength below 1 tends to introduce some weird clipping of the input. Bump maps require on-the-fly *CALCULATIONS* of the new normals, something a normal map provides by simple lookup. Because of that, they are scale dependent; more on that below. Bump maps require no UV layout and are less prone to break (i.e. box mapping), are way more flexible to mix and match, and using the microdisplacement trick (above) you can find the distance to use or spot whether any math functions break down easily. Bump maps are best used for small details, because stepping issues will be less visible, but they do add computing time.
    Normal maps: These provide the normal offsets by simply looking up the color. However, to work properly they require a UV layout, are tricky as hell to mix and match (correctly), and can be prone to errors (box mapping). The angles in a normal map are *NOT* affected by texture density, as the color lookup remains the same; it's as if the height of the mountain was automatically reduced to maintain the angles, compared to the bump map where the height is maintained (producing steeper angles). Make a triangle wave and array it: scale on all axes gives you what normal maps do; scale on the x axis only gives you what bump maps do (without accounting for it, anyway). Normal maps are preferred by game engines, since angles don't have to be computed (from the bump map slopes) and thus impact performance less. They are also better for large details like large chamfers, since those are done with a single color indicating the angle instead of a gradient, which means you typically get away with smaller maps at 8 bits per channel, instead of the large 16/32-bit floating-point maps needed to handle gradients (if stepping is a problem) for bump maps.
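    [Editor's sketch: to make the "simple lookup" concrete, a tangent-space normal map stores the surface normal as a color, remapped from [-1, 1] into the [0, 1] color range. A minimal illustration of the decode, in plain Python rather than shader code:]

```python
def decode_normal(r, g, b):
    # Undo the [0,1] -> [-1,1] remap used by tangent-space normal maps.
    # Each channel is one component of the tangent-space normal vector.
    return (2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0)

# The typical "flat" normal-map blue (0.5, 0.5, 1.0) decodes to the
# unperturbed surface normal (0, 0, 1):
print(decode_normal(0.5, 0.5, 1.0))  # (0.0, 0.0, 1.0)
```

    No slopes are computed from neighbors here, which is why the lookup is cheap and scale-independent, in contrast to a bump map.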

  • @xDaShaanx
    @xDaShaanx 2 years ago +3

    Notifications are worth it for channels like yours. :D

  • @BlendingEdge
    @BlendingEdge 1 year ago

    Man.. the intro to your video is EXACTLY what I have been feeling after many years of watching Blender videos related to texture. I still don't know how textures work under the hood, how all the texture-related concepts in Blender relate to each other, how different tools use the same texture resources, and whether some tools like the texture node editor are really needed (or are soon to be obsolete), as you could probably do the same compositions within the shader editor. No tutorial so far has dived deeper into the nitty-gritty of the subject from a technical perspective; showing step by step how to do certain things WITH textures does not teach you much about how they work. I'm excited to see the rest of the video :) Don't let me down bro. ;p

  • @abv250
    @abv250 1 year ago +1

    Please do more dives on geo nodes in Blender. Your interpretations are enormously helpful.

  • @asr59
    @asr59 2 years ago +4

    Amazing videos! I'm a 3D artist who is getting into programming now; it's nice to see how related both worlds are.

    • @alvin3171997
      @alvin3171997 2 years ago +1

      From programming to 3D also feeling the same ;)

  • @Abhlaash
    @Abhlaash 11 months ago +1

    love your thought process, this is how we should be learning.

  • @Arjjacks
    @Arjjacks 2 years ago +2

    These are super fascinating rabbit holes, it's so similar to my own thought process and workflow that watching is like looking into a mirror. Couple of things of note, though: your unwrap of the patch looked weird after marking the seams because the patch had a non-uniform scale. Apply the scale with ctrl A and it'll unwrap correctly. They also didn't set the specular at 0.5 for the leather for the aesthetics, but because 0.5 is the physically accurate value for materials that do not use subsurface scattering or transmission (in which case it needs to be calculated from the IOR). But for everything else, Blender calculates the specular according to the roughness.

  • @danielboivin6758
    @danielboivin6758 2 years ago +1

    This is awesome. I'm training a newbie, and I'm guilty of skipping over some things I know intuitively and forgot to explain. I'd love a rabbit hole video on the ins and outs of bump vs. normal vs. displacement maps, and why the "Experimental" feature set is still the only way to apply true displacement... hasn't it been many, many versions now? I keep plopping normal maps into my projects and see no effect, so I just use bump. I also once downloaded a map whose RGB channels were mapped differently than the usual XYZ... took me a while to figure that one out!

  • @Furiac.
    @Furiac. 1 year ago +1

    Exactly what I was looking for. Using the alpha as the Fac for the MixRGB is not intuitive, so thanks for the explanation.

  • @sikeIgaita
    @sikeIgaita 1 year ago +1

    this video deserves more likes

  • @ravipatel792
    @ravipatel792 2 years ago +1

    F**# doughnuts! Dude you’re the real Gem! I’m glad I found your channel! It deserves millions! Sending love brother for creating these kind of videos! We need more! Thanks again!

  • @jeffamcavoy
    @jeffamcavoy 2 years ago +2

    I just learned SO MUCH! Thank you!

  • @bigsmoke_og
    @bigsmoke_og 2 years ago +1

    Keep movin', your content is amazing!

  • @clarissa8804
    @clarissa8804 2 years ago +1

    This was really insightful. Thank you ❤️

  • @MichaelJones-xr5sk
    @MichaelJones-xr5sk 1 year ago +1

    The best tutorial ever!!!

  • @pZq_
    @pZq_ 2 years ago +1

    Great tutorial. Thank you!

  • @meutiti
    @meutiti 2 years ago +1

    Amazing tutorials 🙌

  • @unsoundmethodology
    @unsoundmethodology 2 years ago +1

    Thanks for doing this! I do want to add to the UV mapping discussion that I just realized that some textures in Blender do use a W coordinate! When you set the noise (and noise-like? Not sure which ones are properly noise) textures to 4d, they expose a W value, and a lot of animations I've seen work by keyframing this. I only just now associated it with the UV space.

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago

      Thanks for watching! I'm fascinated by what you're describing about the W coordinate... Do you have any references or links by chance that go into detail on it within Blender?

    • @unsoundmethodology
      @unsoundmethodology 2 years ago

      @@RabbitHoleSyndrome Looking briefly, I think I got the direction of influence wrong. The docs for things like the Noise Texture node say that in 2d and 3d versions, you're selecting a point in the texture with XY and XYZ, and then the fourth dimension _of that noise texture_ is W, not that it's an expression of the UV space. My mistake.

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago

      @@unsoundmethodology Gotcha. No worries! Thanks for checking.

  • @soundbeee
    @soundbeee 10 months ago +1

    Something worth mentioning about multiple UV maps is that .obj export does not support multiple UV maps per object. This drove me crazy for hours.

  • @max_neto
    @max_neto 6 months ago

    Does the UV Map node work correctly when baking?

  • @craigbaker6382
    @craigbaker6382 2 years ago +1

    YES!! WHY. Thank you for WHY.
    I am still at that level that needs the explanations for why I proceed with any action. My goal is to be proficient with Blender the way a pianist becomes good at playing what they are sight-reading, only it would be images and objects in 3D and their animation. It is obvious now that this is gonna require some GOOD foundational exercises.
    Your videos are exactly that. Thank you.

  • @Ullone
    @Ullone 2 years ago

    Sorry, I don't know if it's a dumb question... but can I also use the geometry nodes as an input for a bump map on the patch? (So that the stitching seems to deform it a little?)

    • @asd-mw3cw
      @asd-mw3cw 2 years ago

      The stitching has a separate material; you'd add a bump map to that material.

  • @BlendingEdge
    @BlendingEdge 1 year ago

    Loved the video.. it explained quite a lot. Btw, I love the rabbit hole approach; I didn't know I have a rabbit-hole kind of learning affinity :) After watching the video, though, I still don't have my texture-related questions answered... I was hoping to see coverage of the texture slots, the texture node editor, and how these resources appear in all sorts of places in Blender (e.g. the Properties editor, the Texture Paint property panel, the UV editor and Image editor property panels, and perhaps many other places). Why do we have texture slots to begin with? (I know some OpenGL concepts related to textures that parallel this texture slots idea, but I'm not sure if there is a hard connection between the two, or whether it's even necessary for Blender to have a texture slot concept, or texture objects/resources.) I wonder if any texture-related operation could be done solely within the shader editor. Anyway.. Texture Paint is a cool concept, but as soon as I started to learn about it I immediately realized how little I know about the texture world in Blender. I hope you'll have a deeper-dive, rabbit-hole-style video on the texture concept as a whole throughout all of Blender (not just the shader node / UV editors). Regardless, great job on the video. Keep it coming!

  • @LukeHomay
    @LukeHomay 1 year ago

    Hey buddy! Great video. I can't seem to find what I'm supposed to be naming these image textures in order to have them automatically link up to the proper place in the Principled BSDF. I feel like THAT is some important info, and I cannot find it for the life of me. I know it's like "_roughness" and stuff like that? How many different ones are there? Metallic? Roughness? Yeah, if you could just confirm the way those are supposed to be named, I would appreciate it! Thanks so much.

  • @zillaquazar
    @zillaquazar 2 years ago

    I have one question about the UV editor: I was told you are not supposed to scale outside of your UV square. Is it bad, or is it just better practice?

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago +1

      I would say it really depends on what you're trying to do. I think there are definitely valid use cases for scaling outside UV (e.g. with the Mapping node). But in general, to keep things simple, sticking to UV is a good way to go.

    • @zillaquazar
      @zillaquazar 2 years ago +1

      @@RabbitHoleSyndrome Thank you for taking the time to reply. I'm glad I found your channel; you seem really nice.

    • @gottagowork
      @gottagowork 6 months ago

      @@zillaquazar It's more about what you're gonna use the UVs for. If you're creating for a game or for baking, packing all UVs into the UV space is pretty much mandatory. Baking should be obvious, and some game engines may not allow more than one set of UVs, making their internal bakes problematic. If creating strictly for rendering purposes and own/company use, then nobody cares. Hell, if I can avoid it - and I normally can, since I'm not animating/bending anything - I'll avoid UVs completely and stick with Object coordinates (usually) or Generated (in some cases, for convenience). I'll also make UV maps where the UV islands are collapsed into points, for randomization purposes, in some cases, even if Object coordinates are my main coordinate lookup.

  • @NorseGraphic
    @NorseGraphic 2 years ago

    8:51 In version 3.4 of Blender you can import SVG (Scalable Vector Graphics). The program will import it as curves with Bézier points. You can modify these points, fill the gaps, extrude, etc. So you're wrong on this part.

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago

      Great points, thanks for mentioning. I was referring to using SVG directly as ImageTextures, which I don't believe is possible right now. Please correct me if this is wrong!

  • @عبدالسلامالعتيبي-ه3ج
    @عبدالسلامالعتيبي-ه3ج 2 years ago +1

    Shift the UV by -0.5 on all the location axes in the Mapping node to center it, and add another one for scaling; this is a better way of doing it.
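    [Editor's sketch: the math behind this centering trick, in plain Python rather than node setups. To scale about the texture center you shift the UV so the center (0.5) sits at the origin, scale, then shift back; the two Mapping nodes suggested above perform the shift and the scale steps.]

```python
def scale_about_center(u, scale, center=0.5):
    # Shift the center to the origin, scale, then shift back.
    return (u - center) * scale + center

print(scale_about_center(0.5, 2.0))  # 0.5  (the center stays put)
print(scale_about_center(1.0, 2.0))  # 1.5  (edges move outward)
```

    Without the shift, scaling happens about the UV origin (0, 0) and the texture appears to slide toward one corner as you scale.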

  • @gnarthdarkanen7464
    @gnarthdarkanen7464 1 year ago

    Casual reminder for any fellow beginners "behind me"... haha...
    Rather than Delete the Camera and Light for your "Startup" in Blender... GO to the right hand side, and HIDE them, just click the places with the "eye" for visibility (or find a shortcut... do your homework) and they "go invisible" in the viewport... THEN you can go back to the FILE tab, and open the menu, and down at (or near) the bottom you can SAVE AS STARTUP...
    You CAN go ahead and delete the cube if you like... I mean, it DOES have "return factory preset" as an option right next to "Save as Startup"... so you CAN go back, no harm... BUT for a LOT of these tutorials and projects, they use the stock cube, anyways... SO that choice is up to you...
    BUT Blender has a cycle of "cleaning up" also known as "garbage collection"... AND when you delete things, you leave orphan data. This gets "forgotten" whether you notice or not, and can slowly erode the performance of your file over time. It might not be terrible, and I'm sure it's possible to work around it... BUT if you're a beginner like me, just save yourself the headache of figuring out what orphans were left to be "cleaned up" behind the camera or light... and what to do about it... AND just hide them. They'll still be there, and you can still select and move and find them and all the manipulations, later. They just don't clutter up your view while you try to work..
    Good luck... HOPE this helps someone... haha... ;o)

  • @ashtyler8
    @ashtyler8 1 year ago

    Awesome, thanks for your videos, something tells me you're very OCD like many of us here :) Are you also an excellent modeler? A topology rabbit-hole series would be very nice! Would that interest you? A lot of the other topology videos are too scattered or not in depth enough, or too short. Cheers!

  • @bluewater1721
    @bluewater1721 1 year ago

    what country are you in?

  • @MaheerKibria
    @MaheerKibria 2 years ago +3

    So, as an experienced Blender user, I am all for tutorials about understanding how the render engine works, etc. But watching this hurts. Not because you're wrong about everything, but because you clearly don't know best practices yet.

    Multiple UV maps aren't technically better or more correct; they actually make the object harder to render. That's fine for simple examples like this one, but each time you add a UV map you add another set of calculations to the rendering, so you want the least number of UV maps you can get away with while retaining detail. You use multiple UV maps to make things easier on the artist, and then you usually bake before the end to a UV that fits in the 1001 UV tile. That is, all UV islands should fit within a 1x1 UV grid unless you're using multiple UDIMs, in which case you can have islands in the various tiles, but no single island should span multiple tiles. Any production asset you download will follow this standard. If you want to work on a production, this is what people are going to expect, especially since it lightens the load on the renderer.

    Also, that is not how you make an object transparent with the Principled BSDF. I mean, it is and it isn't. It's how you might do it if you just want to make it visible or not visible, but not to make a transparent, clear object. The best practice is to use transmission instead of alpha, unless you are specifically working on game engine assets, but that is an entire other can of worms. You'll also notice that it kind of works even without changing any settings in Eevee; it won't make the object completely "invisible", because transparent objects still reflect light, but you get the idea. You just aren't going to get the refraction until you turn it on.

    Next, bump maps, height maps, and displacement maps are the same thing. This is important if you get assets or texture sets from outside, because people use those terms interchangeably. The only tricky one is displacement maps: they can be grayscale, and therefore the same as a height map or bump map, or color, in which case they are a vector displacement map and they mean something completely different.

    Also, as a best practice, you don't really want to use the specular to make the logo less shiny; you want to do it in the roughness, using an overlay or lighten, etc. Messing with the specular changes the Fresnel properties and eliminates the glancing reflections that just about every material has. Very few materials require you to mess with specular in a PBR workflow, which is what the Principled BSDF shader is.
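    [Editor's sketch: the "1001 UV tile" above refers to the UDIM numbering convention. Under the standard 10-tiles-per-row layout, the tile number is 1001 + u + 10*v, where u and v are the integer tile coordinates; plain Python, not Blender API code.]

```python
def udim_tile(u, v):
    # Standard UDIM convention: 10 tiles per row, numbering starts at 1001.
    # u and v are integer tile coordinates (the floor of the UV values),
    # with u expected in 0..9.
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0, 0))  # 1001  (the default 0-1 UV square)
print(udim_tile(1, 0))  # 1002  (one tile to the right)
print(udim_tile(0, 1))  # 1011  (one row up)
```

    This is why "fit everything in the 1001 tile" and "fit within the 1x1 UV grid" mean the same thing.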

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago +1

      This is some really good insight that obviously comes from lots of experience. Thank you for sharing this.

  • @robwahl
    @robwahl 1 year ago +1

    'MixRGB' is now just 'Mix', with a dropdown from 'Float' to 'Color'.

  • @pauliusmscichauskas558
    @pauliusmscichauskas558 2 years ago +2

    One big error: you are using "Specular" wrong.
    When doing PBR materials, do not ever touch "Specular", as doing so is not physically correct. The control is there just for artistic control. The only physically valid value is 0.5.
    Objects that appear less reflective *actually aren't*. They are just rougher. So add roughness; do not reduce specular.

    • @RabbitHoleSyndrome
      @RabbitHoleSyndrome  2 years ago

      Great points, thanks for the correction.

    • @gottagowork
      @gottagowork 6 months ago

      That is so wrong. Simply by enabling a topcoat you're "changing the IOR" of the substrate layer (because the interface is now between substrate and topcoat instead of substrate and air, which is what the IOR reflects). E.g. a shiny black rubber thing with bumps: reflections are nice and shiny, right? IOR is 1.5-ish. Sink it into water (a "topcoat") with an IOR of 1.33. The resulting substrate IOR should now be 1.5/1.33, or about 1.13, which is way outside of what a normal dielectric would do. Do the same to a metal spoon, and it remains just as reflective, because it is a conductor.
      Secondly, there are objects that appear less reflective because they *are* actually less reflective. If you read the papers on various distribution models, you'll see they don't (and can't) account for cavities and pores, which *will* trap specular energy and prevent it from ever reaching your eyes. Multiscatter GGX is an improvement, but it was done in order to preserve energy, and we still don't have a single-shader way of doing hazy gloss well, nor polished anisotropics for that matter. I believe it was Unreal Engine that proposed making Specular work the opposite way and renaming it cavity, just to combat the "number worship" you're showing here; re-create by observation, not by numbers. That said, specular 0.5 (IOR 1.5) yields an F0 reflectivity of 4%. Anything 3-5% would be within reasonable values for a regular material, and not what makes or breaks the render. Outside of that would be special use cases: coats, gratings, heavily cavitied/porous materials, water/glass/diamond, semiconductors, or complex Fresnel (metals only; we can only fake it now via edge tint). So yeah, I agree, prefer controlling roughness over specular, but specular isn't something you should nonchalantly ignore; it's there for very good reasons.
      Another obvious use case is shadow gaps, since a shadow gap is in essence a massive cavity at the macro scale. Think of floorboards modeled as a single plane for simplicity, with visible gaps between the boards: no amount of roughness is gonna get rid of specular reflections as completely as the real thing does. And it's the same reasoning for cavities/pores at the micro scale; we just don't model these "light traps" in, for obvious reasons.
      IOR is a theoretical number measured for a polished sample under ideal conditions and should never be taken at face value, except when dealing with accurate IORs for lens construction. IOR 1.5 (or specular 0.5) is a good average starting point for most dielectrics and considered "good enough" for (most) games, but you adjust it if you observe something different. I wish there was a cheat-sheet lookup for how to correctly use and think of the specular slider.
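      [Editor's sketch: the numbers in this thread are easy to check. F0, the reflectance at normal incidence, follows from the IOR via the Fresnel equations, and the (pre-4.0) Principled "Specular" slider is commonly described as spanning 0-8% F0, so 0.5 lands on 4%; treat that 8% constant as an assumption based on the usual PBR convention. Plain Python:]

```python
def f0_from_ior(ior):
    # Fresnel reflectance at normal incidence for a dielectric in air.
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def f0_from_specular(specular):
    # Assumed Principled BSDF convention: the Specular slider spans
    # 0-8% F0, so the default 0.5 corresponds to 4% (i.e. IOR 1.5).
    return 0.08 * specular

print(round(f0_from_ior(1.5), 3))       # 0.04
print(round(f0_from_specular(0.5), 3))  # 0.04
```

      Both paths land on the 4% figure quoted above, which is why IOR 1.5 and specular 0.5 are described as the same default.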

  • @grandemations
    @grandemations 9 months ago

    Thank you, Mr. Professor!╰(*°▽°*)╯ I've watched many videos, and each of them varies in style. I couldn't replicate what I did because I kind of mixed what I learned from them, which confused me. This opened a window that I was passing by every time. I didn't learn basketball by imitating Michael Jordan; I realized at a later age that the key was to understand and master the fundamentals. Thank you for sharing the fundamentals. I'm subscribed. 😍

  • @avatr7109
    @avatr7109 2 years ago

    I'm trying to add a rock texture to mountains,
    and my mountains look shiet.

  • @daeunshin5171
    @daeunshin5171 1 year ago

    46:40