A longer explanation of how the depth buffer actually stores values, since I cut a very long explanation of what the "non-linear relationship" is from the video, paraphrased from my shader book:
Unity calculates the distance of a pixel from the camera, which we can call its z-value, hence "z-buffer". This value is between the near and far clip distances of your camera, because those are the minimum and maximum distances that actually get rendered.
This z-value is changed into a depth value that we can store in the depth buffer by transforming the [near, far] range to a [0, 1] range. The depth buffer stores floating-point numbers between 0 and 1.
If this mapping were linear, we could run into precision issues with close objects. Especially for small objects close together, we could feasibly end up with errors where objects end up rendered in the wrong order.
To avoid that, we want to use as much precision as possible to represent close objects. The exact formula that is used for converting the z-value to a depth value is:
depth = (1/z - 1/near) / (1/far - 1/near)
What you end up with is a curve. For the default Unity camera values where near = 0.3 and far = 1000, 70% of all information stored in the depth buffer represents objects up to a distance of one meter from the camera. Which is amazing when you consider the remaining 30% represents the other 999 meters!
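To sanity-check that figure, here's a quick sketch (plain Python rather than actual shader code) that plugs the default camera values into the formula above:

```python
# A quick sketch of the depth curve described above (plain Python, not Unity's code).
near, far = 0.3, 1000.0  # Unity's default camera clip planes

def z_to_depth(z):
    # depth = (1/z - 1/near) / (1/far - 1/near), mapping [near, far] to [0, 1]
    return (1 / z - 1 / near) / (1 / far - 1 / near)

print(z_to_depth(0.3))     # 0.0   (near plane)
print(z_to_depth(1.0))     # ~0.70 -> the first meter uses ~70% of the [0, 1] range
print(z_to_depth(10.0))    # ~0.97
print(z_to_depth(1000.0))  # 1.0   (far plane)
```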
As mentioned in the video, those depth buffer values get copied to the depth texture, and then Shader Graph gives you the tools to decode this curve into two linear formats: Linear01, where values are linearized in the same 0-1 range, and Eye, where values are just the original z-values - distances from the camera - that we started with.
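If you're curious what those two decodes boil down to, here's a rough sketch of the idea (again plain Python with hypothetical helper names, just inverting the formula above; in practice Shader Graph's Scene Depth node does this for you):

```python
# Sketch of what the two decode modes conceptually do: invert the depth
# curve to recover z ("Eye"), then rescale it by far ("Linear01").
near, far = 0.3, 1000.0

def depth_to_eye(d):
    # Solve depth = (1/z - 1/near) / (1/far - 1/near) for z
    return 1.0 / (d * (1 / far - 1 / near) + 1 / near)

def depth_to_linear01(d):
    # Eye depth rescaled so the far plane lands at 1
    return depth_to_eye(d) / far

print(depth_to_eye(0.70021))   # ~1.0 -> undoes the curve, back to meters
print(depth_to_linear01(1.0))  # 1.0 at the far plane
```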
Hope that gives a bit more context!
Wow, this is the best introductory video I've ever seen, and it's for the latest version of Unity. It was a huge help. I sincerely hope you can keep updating.🥰
Great explanation, hope you keep making this series!
Such a brilliant and helpful series. Great stuff Daniel. (Not least because that 3 sec lerp explanation was the best and most concise I've seen)
Glad you liked the lerp explanation, I almost didn't even include it in the video! Sometimes it's pretty tricky to strike a balance between being concise and including all of the context.
Thank you, this series is helping me a lot to understand shader graph
Thank you. I've learned exactly what I was looking for over the last few days.
Once again nicely explained, thank you so much!
Great explanations of basics in your videos, thank you!
Great work! Looking forward to the vertex shader
These videos are helping me to better understand the ShaderGraphs, thx❤️
Like #100! Your tutorials are absolutely amazing. Please please keep going :)
Thx, pure gold explanation.
loving it thanks
Thanks for the tutorial, can I do this in the Built-in Pipeline?
Unreal has a pixel depth and scene depth node while Unity seems to only have scene depth. Would you know how to get the pixel depth?
What about hair in URP? Will the same approach work, or could you cover a "Hair in URP" topic?
Great explanation, I really enjoyed listening~~
Hi Daniel! I have a question. I don't know why, but the depth test at 2:44 only works for me if "Surface Type" is Transparent. I don't understand why it doesn't work when it's Opaque, like in your video.
Thanks for this tutorial, it's awesome! I just have one tiny question: if I have a second camera in my scene that renders the scene to a RenderTexture with a Color Format of R8G8B8A8_UNORM and a Depth Stencil Format of D32_SFLOAT, and I pass that RenderTexture to a URP Shader Graph as a Texture2D, is there a way to read the depth values from the RenderTexture in the graph? It seems like it's impossible, which is very odd, but I just wanted to confirm :(
I have this problem: with a Canvas Group and some GameObjects with Images, the images on top become transparent and the colors of the images behind mix with the colors in front, giving a bad result. Is there a good way to fix it? I tried with a stencil, but then the aliasing screams in your face haha.
Is there a way to sample a pixel of the depth texture here?
Has this been changed in Unity 6 or am I just blind? Cannot find Depth Texture setting.
Do you mean the part about 5 minutes in where you have to find the tickbox? I just checked out a new Unity 6 project and it looks like it's in basically the same place for me.
If you started with a brand new Unity 6 URP project then your Assets folder should contain another folder called Settings. In there, there are a bunch of weird-looking assets. You're looking for the ones named either "Mobile_RPAsset" or "PC_RPAsset". These are both Render Pipeline Assets which basically hold together a lot of URP's settings. The Depth Texture tickbox should be right at the top of the Inspector when you click on them.
If you're still using 2022.3 or versions before that, then it's basically the same process except the assets you're looking for are called "UniversalRP-HighQuality", "UniversalRP-LowQuality", and "UniversalRP-MediumQuality".
If you don't see what I'm seeing in Unity 6 then I'm not sure what's up; honestly, maybe Unity just changed something randomly in one of the Unity 6 releases so far. They like doing that. For the avoidance of doubt, I'm using 6.0.0b11.
When is the next part going to be released?
Not sure yet. I am working on the next part, but I also have a couple of other videos in production right now that are closer to completion. The next part will likely be sometime in February, but I hope it's towards the start.
And how many parts will it have? @danielilett
Can you show this with Shader Graph in the Built-in Render Pipeline?