Unity Shader Graph Basics (Part 8 - Scene Intersections 1)

  • Published: 21 Oct 2024

Comments • 13

  • @danielilett
    @danielilett  5 months ago +9

    Under the hood, the graphics pipeline uses 4D vectors to represent 3D points in space. This representation is called “homogeneous coordinates” or “perspective coordinates”, and we use them because it is impossible to represent a 3D translation (i.e., moving a point in space) using a 3x3 matrix. Since we want to efficiently package as many transformations as possible into a single matrix (which you can do by multiplying individual rotation matrices, scaling matrices, and any other transformation matrices together), we take our 3D point vector in Cartesian space (what you probably normally think of when you are using a coordinate system) and bolt an additional “w” component equal to 1 onto the end of the vector. This is a homogeneous coordinate. Thankfully, it is possible to represent translations using a 4x4 matrix, so we use those instead. Adding a component to the vector was necessary because you can’t apply a 4x4 matrix transformation to a 3D vector.
    In homogeneous coordinates, any two vectors that are scalar multiples of each other represent the same point - the homogeneous points (1,2,3,1) and (2,4,6,2) both represent the Cartesian 3D point (1,2,3). So, by the time we get to just before the view-to-clip space transformation, the w component of each point is still 1, since none of the preceding transformations alter the w. After the view-to-clip space transformation, the w component of each point is set equal to the view-space z component (up to a sign flip, depending on convention). I’d post the full matrices involved here, but YouTube comments aren’t really a matrix-friendly zone - see the sketch at the end of this comment for the gist. In essence, this means the clip space w is equal to the distance between the camera and the vertex of the object being rendered, measured along the camera’s forward axis. That’s what I needed in this tutorial.
    And, for funsies, after this the graphics pipeline executes the “perspective divide”, whereby your 4D vector is divided by its own w component in order to collapse every point onto a virtual ‘plane’ in front of the camera - this is what projects things onto the screen. Basically, two points with identical (x,y) clip space values do not necessarily end up at the same (x,y) screen positions, because they may have different w (depth) values - with a perspective camera, further away objects appear smaller. After the perspective divide your points are in the form (x,y,z,1), where the (x,y) become your 2D screen position and the z is the normalized depth that gets kept around for depth testing. It’s fascinating to me that we need to deal with 3D, 4D, and 2D just to get stuff on your screen.
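
    Here is a minimal numpy sketch of the steps described above: a 4x4 translation matrix acting on a homogeneous point, the projection matrix row that copies view-space depth into clip-space w, and the perspective divide. The projection matrix shown is the classic OpenGL-style one with made-up fov/near/far values; Unity’s actual matrices differ per graphics API in sign conventions and z range, so treat this as an illustration rather than what Unity literally does.

        import numpy as np

        fov_y, aspect, near, far = np.radians(60.0), 16 / 9, 0.1, 100.0
        f = 1.0 / np.tan(fov_y / 2.0)

        # A translation needs the 4th column, which a 3x3 matrix doesn't have.
        def translation(tx, ty, tz):
            m = np.eye(4)
            m[:3, 3] = [tx, ty, tz]
            return m

        # OpenGL-style perspective projection. The last row (0, 0, -1, 0)
        # copies the negated view-space z straight into clip-space w.
        projection = np.array([
            [f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0],
        ])

        # Cartesian point (1, 2, 3) with a w = 1 bolted on: a homogeneous coordinate.
        p_world = np.array([1.0, 2.0, 3.0, 1.0])

        # Pretend world-to-view is just a translation here (a real view matrix
        # also rotates); the camera looks down -z in this convention.
        p_view = translation(0.0, 0.0, -10.0) @ p_world   # view-space z = -7

        p_clip = projection @ p_view
        print(p_clip[3])            # 7.0 -> depth of the vertex along the view axis

        p_ndc = p_clip / p_clip[3]  # the perspective divide: w becomes 1
        print(p_ndc)                # (x, y) give the screen position, z is kept for depth testing

    Scaling p_world by any non-zero factor (e.g. using (2, 4, 6, 2) instead) produces exactly the same p_ndc after the divide, which is the “scalar multiples represent the same point” property mentioned above.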

  • @sadusko7103
    @sadusko7103 4 months ago +1

    The goose made everything more than clear!
    All jokes aside, this is highly professionally done and incredibly clear.
    I have yet to find anyone else who actually explains it, instead of just telling us what to put where and what to connect to what.
    Thank you so much.

  • @aleksp8768
    @aleksp8768 4 months ago +3

    This is exactly what I need for soft particles on shader graph!!!

    • @danielilett
      @danielilett  4 months ago +1

      It's a total coincidence, but Ben Cloward put out a video about soft particles a couple of days ago! I just clicked on a totally random part of the video and saw literally the same depth intersection nodes I use - it's definitely a technique I've seen many times before: ruclips.net/video/3WPsrdCjhuQ/видео.html

  • @Kinosei30
    @Kinosei30 14 days ago

    I was trying this one with HDRP and it doesn't seem to detect whether it's close to or far from another surface. If I leave Occlusion Strength at 1, it becomes totally black, and the texture only shows if I use a value smaller than 1. I thought that was because it was opaque, but after changing it to Transparent it still doesn't work as intended. I'm not sure whether that's something to do with HDRP or something I did wrong. I re-checked my graph and it really doesn't seem to have any mistakes; it's exactly like yours in the video. Any idea what that could be? Thanks

  • @orpheuscreativeco9236
    @orpheuscreativeco9236 4 months ago +1

    This is SO GOOD 💯🙏

  • @fleity
    @fleity 1 month ago

    ScreenPosition.w vs Position(View).z... both work for the depth difference, but view space and clip space are not the same in general, right? Is there a meaningful difference when using the view-space z for this?

  • @tnt345i7
    @tnt345i7 5 months ago +1

    The scene difference could be due to the post processing only

  • @zing3647
    @zing3647 4 months ago +1

    I need help projecting a URP decal (or any decal) onto the surface of a transparent sphere. Would you know how to do that, by any chance?

  • @AlexBradley123
    @AlexBradley123 5 months ago +1

    Very nice series, btw

  • @AlexBradley123
    @AlexBradley123 5 months ago

    Hello, I can’t find a feature to hide objects and their parts inside a cutout object. I want to create some kind of 3D cutout mask to hide walls and objects. Is that actually possible in URP?

  • @JenkinsPendragon16
    @JenkinsPendragon16 5 months ago +1

    u are the best

  • @okanaydin06
    @okanaydin06 4 months ago

    Hi,
    I want to make a blend transition. I have my intersection shader working, but only with opaque objects that are visible to the camera. I want to apply this effect with an object that isn't visible to the camera. How can I do that?