Unity Shader Graph Basics (Part 8 - Scene Intersections 1)

  • Published May 20, 2024
  • Many shader effects rely on detecting the intersection between the mesh being rendered and other objects in the scene. In this tutorial, I break down exactly how to detect those intersections and use the technique to create a basic ambient occlusion effect in screen-space.
    I'm using Unity 2022.3.0f1, although these steps should look similar in previous and subsequent Unity versions.
    ------------------------------------------------------------------------
    👇 Download the project on GitHub: github.com/daniel-ilett/shade...
    📰 Read this tutorial in article format instead: danielilett.com/2024-05-21-tu...
    🍃 Get the grass texture: ambientcg.com/view?id=Grass001
    🧱 Get the rock mesh: sketchfab.com/3d-models/rocks...
    ------------------------------------------------------------------------
    ✨ Grab Snapshot Shaders Pro or Hologram Shaders Pro here (affiliate): assetstore.unity.com/publishe...
    📚 Get a copy of my shader book here (affiliate): www.dpbolvw.net/click-10074214...
    ------------------------------------------------------------------------
    💬 Join the Discord: / discord
    💖 Support me on Patreon: www.patreon.com/danielilett?f...
    ☕ Or throw me a one-off coffee on Ko-fi: ko-fi.com/danielilett
    ------------------------------------------------------------------------
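The screen-space intersection test the video describes boils down to comparing the scene depth already stored in the depth buffer against the depth of the fragment currently being shaded: where the two are close, the surfaces intersect. A minimal NumPy sketch of that comparison (the function name and the fade_distance parameter are my own illustrative choices, not taken from the video):

```python
# Hypothetical sketch of a soft depth-intersection test. "scene_depth" is
# the eye-space depth already in the depth buffer and "frag_depth" is the
# eye-space depth of the fragment being shaded; both names are illustrative.
import numpy as np

def intersection_factor(scene_depth, frag_depth, fade_distance=0.5):
    """Return 1.0 right at an intersection, fading to 0.0 once the two
    surfaces are more than fade_distance apart."""
    diff = scene_depth - frag_depth  # 0 where the mesh touches the scene
    return np.clip(1.0 - diff / fade_distance, 0.0, 1.0)

print(intersection_factor(10.0, 10.0))   # fragment exactly at scene depth -> 1.0
print(intersection_factor(10.25, 10.0))  # half a fade-distance away -> 0.5
print(intersection_factor(20.0, 10.0))   # far from any geometry -> 0.0
```

The same saturate-style falloff is what soft particles use, which is why the depth-intersection nodes overlap so heavily with that technique.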

Comments • 12

  • @danielilett
    @danielilett  2 months ago +6

    Under the hood, the graphics pipeline uses 4D vectors to represent 3D points in space. This representation is called “homogeneous coordinates” or “perspective coordinates”, and we use them because it is impossible to represent a 3D translation (i.e., moving a point in space) using a 3x3 matrix. Since we want to efficiently package as many transformations as possible into a single matrix (which you can do by multiplying individual rotation matrices, scaling matrices, and any other transformation matrices together), we take our 3D point vector in Cartesian space (what you probably normally think of when you are using a coordinate system) and bolt an additional “w” component equal to 1 onto the end of the vector. This is a homogeneous coordinate. Thankfully, it is possible to represent translations using a 4x4 matrix, so we use those instead. Adding a component to the vector was necessary because you can’t apply a 4x4 matrix transformation to a 3D vector.
    In homogeneous coordinates, any two vectors that are scalar multiples of each other represent the same point - the homogeneous points (1,2,3,1) and (2,4,6,2) both represent the Cartesian 3D point (1,2,3). So, by the time we get to just before the view-to-clip space transformation, the w component of each point is still 1, since none of the preceding transformations alter it. After the view-to-clip space transformation, the w component of each point is set equal to the view-space z component. I’d post the full matrices involved here, but YouTube comments aren’t really a matrix-friendly zone! In essence, this means the clip-space w is equal to the distance between the camera and the vertex of the object being rendered. That’s what I needed in this tutorial.
    And, for funsies, after this, the graphics pipeline executes the “perspective divide”, whereby your 4D vector is divided by its own w component in order to collapse every point on screen onto a virtual ‘plane’ located at z=1. This is where things get shown on screen. Basically, two points with identical (x,y) clip space values do not necessarily get placed at the same (x,y) screen positions, as they may have different clip space z values - with a perspective camera, further away objects appear smaller. After the perspective divide, all your points are in the form (x,y,1,1) so you can drop the z and w components and bam, there’s your 2D screen positions. It’s fascinating to me that we need to deal with 3D, 4D, and 2D just to get stuff on your screen.
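The pipeline this comment walks through can be sketched numerically; a minimal NumPy sketch, where the "projection" matrix is a deliberately toy one that only copies view-space z into w (not Unity's real projection matrix):

```python
# Illustrative sketch of the ideas in the comment above: translation needs
# a 4x4 matrix acting on homogeneous (x, y, z, 1) points, and the
# perspective divide collapses a 4D clip-space point back down.
import numpy as np

def translation(tx, ty, tz):
    """A 3D translation as a 4x4 matrix -- impossible with a 3x3 matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

point = np.array([1.0, 2.0, 3.0, 1.0])      # Cartesian (1,2,3) with w = 1
moved = translation(5.0, 0.0, 0.0) @ point  # -> (6, 2, 3, 1), w untouched

# Toy "projection" whose last row copies view-space z into w, as described.
proj = np.eye(4)
proj[3] = [0.0, 0.0, 1.0, 0.0]
clip = proj @ moved                          # -> (6, 2, 3, 3): w now equals z

# Perspective divide: scalar multiples represent the same homogeneous
# point, so dividing by w lands every point on the z = 1 plane.
ndc = clip / clip[3]                         # -> (2, 2/3, 1, 1)
```

Note how the divide makes the x and y of a distant point shrink: that is exactly why farther objects appear smaller under a perspective camera.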

  • @sadusko7103
    @sadusko7103 a month ago +1

    The goose made everything more than clear!
    All jokes aside, this is highly professionally done and incredibly clear.
    I had yet to find someone who explained it, instead of just telling us what to put where and what to connect to what.
    Thank you so much.

  • @aleksp8768
    @aleksp8768 a month ago +3

    This is exactly what I need for soft particles on shader graph!!!

    • @danielilett
      @danielilett  a month ago +1

      It's a total coincidence, but Ben Cloward put out a video about soft particles a couple of days ago! I just clicked on a totally random part of the video and saw literally the same depth intersection nodes I use - it's definitely a technique I've seen many times before: th-cam.com/video/3WPsrdCjhuQ/w-d-xo.html

  • @orpheuscreativeco9236
    @orpheuscreativeco9236 a month ago +1

    This is SO GOOD 💯🙏

  • @AlexBradley123
    @AlexBradley123 2 months ago +1

    Very nice series, btw

  • @JenkinsPendragon16
    @JenkinsPendragon16 2 months ago +1

    u are the best

  • @zing3647
    @zing3647 a month ago +1

    I need help projecting a URP decal (or any decal) onto the surface of a transparent sphere. Would you know how to do that, by any chance?

  • @tnt345i7
    @tnt345i7 2 months ago +1

    The scene difference could be from the post-processing only.

  • @AlexBradley123
    @AlexBradley123 2 months ago

    Hello, I can’t find a feature to hide objects and their parts inside a cutout object. I want to create some kind of 3D cutout mask to hide walls and objects. Is it actually possible in URP?

  • @okanaydin06
    @okanaydin06 a month ago

    Hi,
    I want to make a blend transition, so I made my intersection shader, which works, but only with opaque objects that are visible to the camera. I want to apply this effect with an object that isn't visible to the camera. How can I do that?

  • @sussy-coder
    @sussy-coder 2 months ago +1

    stopping kids from saying first