Great video, thanks a ton! I have a project at my company that needs some volume rendering of CT scan data in Unity. Working on it right now; it might take me a month, but your video is foundational for me.
Almost the same^^ though I will try to do it with C++!
Writing at 19:14, so it may be discussed later - another way would be to extend the last approach by subdividing the plane into multiple triangles and discarding them at the compute stage (or in a mesh shader) if a triangle is outside the data volume. This would give an approximation of the clipped plane with minimal overhead in fragment invocations (especially if tessellation can be used).
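A minimal sketch of the per-triangle rejection such a compute/mesh-shader pass could do, assuming the triangles live in the volume's local space and the data volume is the unit cube (the names here are made up for illustration):

#include <algorithm>
#include <array>

struct Vec3 { float v[3]; };                 // x, y, z stored as v[0..2]
using Triangle = std::array<Vec3, 3>;

// Conservative rejection test: can this triangle possibly touch the unit data
// volume [0,1]^3? It never discards a triangle that overlaps the volume, but may
// keep a few borderline ones whose bounding box merely grazes the cube.
bool mayTouchUnitVolume(const Triangle& tri)
{
    for (int axis = 0; axis < 3; ++axis) {
        float lo = std::min({ tri[0].v[axis], tri[1].v[axis], tri[2].v[axis] });
        float hi = std::max({ tri[0].v[axis], tri[1].v[axis], tri[2].v[axis] });
        if (hi < 0.0f || lo > 1.0f)          // fully outside the volume along this axis
            return false;                    // safe to cull this triangle
    }
    return true;
}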
18:00 I am confused. What about ray marching a 3D texture? No overdraw and much simpler.
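For reference, a minimal CPU-side sketch of such a ray march, assuming the ray has already been clipped to the volume's bounding box; sampleDensity, transferFunction, and rayMarch are hypothetical stand-ins, not anything from the video:

#include <algorithm>
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b, a; };

// Hypothetical stand-ins: a soft spherical density and a grayscale transfer function
// whose alpha channel is used as an extinction coefficient.
float sampleDensity(const Vec3& p)
{
    float dx = p.x - 0.5f, dy = p.y - 0.5f, dz = p.z - 0.5f;
    return std::max(0.0f, 1.0f - 2.0f * std::sqrt(dx*dx + dy*dy + dz*dz));
}
Color transferFunction(float d)
{
    return { d, d, d, 4.0f * d };
}

// Front-to-back emission/absorption ray march; 'enter' and 'exit' are the ray's
// entry and exit points, already clipped to the volume's bounding box.
Color rayMarch(Vec3 enter, Vec3 exit, int steps)
{
    Vec3 dir = { exit.x - enter.x, exit.y - enter.y, exit.z - enter.z };
    float len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
    float dt  = len / steps;                            // step length along the ray
    Color out = { 0, 0, 0, 0 };
    for (int i = 0; i < steps && out.a < 0.99f; ++i) {  // early out when nearly opaque
        float t = (i + 0.5f) / steps;
        Vec3 p  = { enter.x + dir.x * t, enter.y + dir.y * t, enter.z + dir.z * t };
        Color c = transferFunction(sampleDensity(p));
        float a = 1.0f - std::exp(-c.a * dt);           // opacity of this segment
        out.r += (1.0f - out.a) * c.r * a;              // front-to-back compositing
        out.g += (1.0f - out.a) * c.g * a;
        out.b += (1.0f - out.a) * c.b * a;
        out.a += (1.0f - out.a) * a;
    }
    return out;
}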
Awesome!
What I don't really get is how to apply local illumination to a volume that is rendered from a CT volume and a transfer function. E.g. the tooth shown at 24:17 has a shiny reflection at the root canal, so there must be a light source that interacts with the surface. But how can this be done without normals? We just have voxels, don't we?
I may not be able to answer your question completely, but you can calculate normals from density samples. It is very easy. You take 6 samples in a cross pattern and subtract each pair. Search for "gradient normals".
@@DasAntiNaziBroetchen Thanks, yes, you would probably use central differences. I'm not sure how useful Sobel would be, since it smooths beforehand.
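To make the central-difference idea concrete, here is a small sketch; sampleDensity is an assumed lookup into the CT grid and h is the sampling offset (roughly one voxel):

#include <cmath>

struct Vec3 { float x, y, z; };

float sampleDensity(float x, float y, float z);   // assumed: density lookup into the CT grid

// Gradient normal by central differences: six samples in a cross pattern, one opposing
// pair per axis. The negated, normalized gradient points out of dense material, so it
// can be used directly in Phong-style shading.
Vec3 gradientNormal(float x, float y, float z, float h)
{
    Vec3 g = {
        sampleDensity(x + h, y, z) - sampleDensity(x - h, y, z),
        sampleDensity(x, y + h, z) - sampleDensity(x, y - h, z),
        sampleDensity(x, y, z + h) - sampleDensity(x, y, z - h)
    };
    float len = std::sqrt(g.x*g.x + g.y*g.y + g.z*g.z);
    if (len < 1e-6f) return { 0.0f, 0.0f, 0.0f };     // homogeneous region, no usable normal
    return { -g.x / len, -g.y / len, -g.z / len };
}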
Ngl, I'd either just draw the box, use the normals of the visible faces to orient the data, and march through the data along rays, exiting on the first "solid" voxel.
Or, as a second approach, I'd do the same with a screen-space shader if there are multiple volumes.
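A tiny sketch of that "exit on first solid voxel" loop, assuming the ray is already clipped to the box and a density threshold defines "solid" (all names hypothetical):

struct Vec3 { float x, y, z; };

float sampleDensity(const Vec3& p);   // assumed: lookup into the voxel grid

// Walk the ray in fixed steps and stop at the first sample that counts as "solid".
// Returns true and writes the hit position if anything was hit.
bool firstHit(Vec3 enter, Vec3 exit, int steps, float threshold, Vec3& hit)
{
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) / steps;
        Vec3 p = { enter.x + (exit.x - enter.x) * t,
                   enter.y + (exit.y - enter.y) * t,
                   enter.z + (exit.z - enter.z) * t };
        if (sampleDensity(p) >= threshold) { hit = p; return true; }   // first solid voxel
    }
    return false;   // the ray passed through empty space
}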
19:00 I think we can render the cube first, then render these slices with a depth test (greater than), which will discard those fragments in front of the cube plane. Is that doable?
I thought you were Tom Scott ngl :p
Thank you Prof. Yuksel. How were the explosion and cloud volumetric data generated?
The explosion is generated using Houdini. I'm not sure about the cloud data.
@@cem_yuksel So if it's Houdini, I bet the data structure is OpenVDB.
Is it uncommon to have volumetric data where the samples have more than one dimension? What does the transfer function usually look like in those cases?
I see rasterization as more related to triangles because of their flat nature, whereas I see ray tracing as more related to volumetric geometries using implicit equations, or even explicit voxel grids, octree partitioning, etc... What do you think?
Ray tracing is more general than rasterization.
Amazing content. Any way we can acknowledge / re-use your slides somehow?
IIRC, being able to discard a fragment before reading any texture can save the GPU some processing power, which isn't saved by just returning alpha 0.
Depends; if any of the pixels in the quad's lanes still go through, then it will still have to wait for that quad to finish. So you'd only gain perf if the entire quad is masked out. Though you're saving some perf by not having as much data divergence.
@@nielsbishere Correct, but when drawing one big polygon, most of the time when a pixel is completely transparent, the other 3 pixels of the quad are also transparent. So it's better to draw smaller polygons where data is expected. Or, when one detects a pixel is out of bounds, to discard it without reading a texture.
@@DiThi you mean like cutting out the alpha-masked stuff in broad strokes and then cutting off the rest in the pixel shader, right? Because making polygons too small will waste helper lanes too.
@@nielsbishere Yeah, there's a balance between wasted transparent polygon drawing and too-small polygons. The specifics depend on the hardware, but in most cases just clipping the polygon to the bounding box of what's going to be drawn is generally enough. For some architectures it's worth adding many more polygons. For example, on the classic iPhones and Samsung Galaxies of a decade ago, the PowerVR architecture was way faster drawing alpha-blended polygons if we made two separate meshes: the inside part as opaque and the border around it as alpha blended. The tiled architecture was way faster drawing many polygons (with only a small fraction of the pixels on screen alpha blended) than drawing one all-alpha-blended plane per image. Like, orders of magnitude faster!
@@DiThi I'd like to point out that mobile hardware behaves very differently from PC hardware.
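To illustrate the clipping-to-bounding-box idea discussed above, here is a small sketch that clips a convex slice polygon against an axis-aligned rectangle (e.g. the volume's projected bounds) using Sutherland-Hodgman; the 2D setup and all names are assumptions for illustration:

#include <vector>

struct Vec2 { float x, y; };

// Clip a convex polygon against one axis-aligned half-plane:
// keep the side where (coord >= bound) if keepGreater, else (coord <= bound).
static std::vector<Vec2> clipAgainstEdge(const std::vector<Vec2>& poly,
                                         int axis, float bound, bool keepGreater)
{
    std::vector<Vec2> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Vec2 a = poly[i], b = poly[(i + 1) % poly.size()];
        float ca = (axis == 0) ? a.x : a.y;
        float cb = (axis == 0) ? b.x : b.y;
        bool inA = keepGreater ? ca >= bound : ca <= bound;
        bool inB = keepGreater ? cb >= bound : cb <= bound;
        if (inA) out.push_back(a);
        if (inA != inB) {                                    // the edge crosses the clip line
            float t = (bound - ca) / (cb - ca);
            out.push_back({ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t });
        }
    }
    return out;
}

// Sutherland-Hodgman clip of a convex slice polygon against the bounding rectangle
// [mn, mx], so fully transparent area outside the data never gets rasterized.
std::vector<Vec2> clipToBounds(std::vector<Vec2> poly, Vec2 mn, Vec2 mx)
{
    poly = clipAgainstEdge(poly, 0, mn.x, true);
    poly = clipAgainstEdge(poly, 0, mx.x, false);
    poly = clipAgainstEdge(poly, 1, mn.y, true);
    poly = clipAgainstEdge(poly, 1, mx.y, false);
    return poly;
}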
Are you planning to write any books on this?
Cem abi, I wish you'd also add subtitles to your videos... To be honest, my English wasn't up to it.
And yessss, finally a nice use of 3D textures that is not just a hosting texture for instances... problem is, how can I have a coherent blur in a 3D texture... Find a solution, find a problem!!! ❤
surprised that your video is really picking up
Why surprised?
This is probably because of NeRF (Neural Radiance Fields) paper.
It's probably for kids: a very basic introduction, and totally useless for the practical volume rendering skills needed to render volumetric data like CT, MRI, etc... this dude apparently is clueless about this...
Not really..
@@robinsus it is... this fellow did not even mention sampling->classification vs. classification->sampling and its relation to the sampling theorem, no mention of gradient-modulated shadowing, and he did not even mention gradient calculations for high-quality shading (the most common reason for ugly-looking volume renderings)... it's basic introductory material for beginners...
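For readers unfamiliar with those terms: pre- vs. post-classification is just about whether the transfer function is applied before or after interpolating the density. A tiny sketch of the difference, with transferFunction as an assumed 1D lookup and d0, d1 as two neighboring voxel densities:

struct Color { float r, g, b, a; };

Color transferFunction(float density);   // assumed: 1D transfer function lookup

// Pre-classification: classify the voxels first, then interpolate the resulting colors.
// Sharp features of the transfer function get blurred away between voxels.
Color preClassified(float d0, float d1, float t)
{
    Color c0 = transferFunction(d0), c1 = transferFunction(d1);
    return { c0.r + (c1.r - c0.r) * t, c0.g + (c1.g - c0.g) * t,
             c0.b + (c1.b - c0.b) * t, c0.a + (c1.a - c0.a) * t };
}

// Post-classification: interpolate the density first, then classify the sample.
// This preserves sharp transfer-function features and usually looks much better.
Color postClassified(float d0, float d1, float t)
{
    return transferFunction(d0 + (d1 - d0) * t);
}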