GSN Composer
Deferred Shading [Shaders Monthly #14]
In Episode #14 of Shaders Monthly, we talk about deferred shading and implement a first simple deferred shading pipeline in GLSL.
Here is the code of the created shaders:
GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly14&graphName=DeferredShading
C++: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm14/sm14_cpp_deferred.zip
Java: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm14/sm14_java_deferred.zip
Documentation for the shader plugin node of the GSN Composer:
gsn-lib.org/docs/nodes/ShaderPluginNode.php
00:00 Introduction
00:49 Forward Shading
07:15 Transparent Surface
08:21 Deferred Shading
17:14 Implementation of a deferred shading pipeline in GLSL
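The core idea of the episode, rasterizing geometry attributes into a G-buffer first and shading per pixel afterwards, can be sketched in GLSL roughly as follows. This is an illustrative sketch, not the exact shader from the video; variable and attachment names are assumptions:

```glsl
#version 300 es
precision highp float;

// G-buffer pass: instead of computing a final color, write the per-pixel
// attributes needed for shading into multiple render targets.
in vec3 worldPosition;  // interpolated from the vertex shader
in vec3 worldNormal;
in vec2 texCoord;

uniform sampler2D albedoTexture;

layout(location = 0) out vec4 gPosition; // floating-point render targets
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gAlbedo;

void main() {
  gPosition = vec4(worldPosition, 1.0);
  gNormal   = vec4(normalize(worldNormal), 0.0);
  gAlbedo   = texture(albedoTexture, texCoord);
}
```

A second full-screen pass then reads these textures and evaluates the lighting once per visible pixel, which is what makes deferred shading attractive for many lights, at the cost of extra bandwidth and the difficulties with transparent surfaces discussed at 07:15.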
Views: 1,638

Videos

Sampling of Environment Maps for Image-based Lighting [Shaders Monthly #13]
2.4K views · 1 year ago
In Episode #13 of Shaders Monthly, we talk about importance sampling of environment maps for image-based lighting. We cover the theory and implement a chain of GLSL shaders. Here is the code of the created shaders: GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly13&graphName=SampleEnvmap C++: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm13/sm13_cpp_sampleenvmap.zip Jav...
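A common lat-long mapping between directions and spherical envmap coordinates, which this shader chain relies on, can be sketched as below. The z-up convention and function bodies are assumptions and may differ in detail from the code in the episode:

```glsl
const float PI = 3.14159265359;

// map a unit direction (z-up) to [0,1]^2 lat-long envmap coordinates
vec2 directionToSphericalEnvmap(vec3 dir) {
  float u = 0.5 + atan(dir.y, dir.x) / (2.0 * PI);
  float v = 0.5 + asin(dir.z) / PI;
  return vec2(u, v);
}

// inverse mapping: envmap coordinate back to a unit direction
vec3 sphericalEnvmapToDirection(vec2 tex) {
  float phi   = 2.0 * PI * (tex.x - 0.5);
  float gamma = PI * (tex.y - 0.5);
  return vec3(cos(gamma) * cos(phi), cos(gamma) * sin(phi), sin(gamma));
}
```

The two functions are inverses of each other, so a direction converted to texture coordinates and back yields the original direction.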
Halton Low-Discrepancy Sequence [Shaders Monthly #12]
2.7K views · 1 year ago
In Episode #12 of Shaders Monthly, we talk about low-discrepancy sequences and their applications in computer graphics. In particular, we implement the Halton sequence in GLSL. As an application example, we will use the Halton sequence to improve the image-based lighting approach that we implemented in the last two episodes. Here is the code of the created shader: GSN Composer: www.gsn-lib.org/...
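The Halton radical inverse is short enough to quote; a sketch of the classic formulation in GLSL (the base should be a prime, e.g. 2 and 3 for a 2D sequence):

```glsl
// Halton sequence: radical inverse of the given index in a (prime) base.
// Successive indices fill [0,1) with low discrepancy.
float halton(int base, int index) {
  float result = 0.0;
  float digitWeight = 1.0;
  while (index > 0) {
    digitWeight /= float(base);
    result += digitWeight * float(index % base); // next digit, mirrored
    index /= base;
  }
  return result;
}
// e.g. base 2: halton(2, 1) = 0.5, halton(2, 2) = 0.25, halton(2, 3) = 0.75
```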
Image-based Lighting (IBL) of PBR Materials [Shaders Monthly #11]
6K views · 2 years ago
In Episode #11 of Shaders Monthly, we talk about image-based lighting of the Cook-Torrance microfacet BRDF. This will allow us to implement image-based lighting for basic PBR materials in GLSL. For real-time performance, we follow closely the Siggraph 2013 tutorial "Real Shading in Unreal Engine 4" by Brian Karis: cdn2.unrealengine.com/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf...
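The split-sum approximation from the Karis notes can be sketched as follows. A cube map is used here for brevity, and all uniform and function names are illustrative, not taken from the episode's code:

```glsl
uniform samplerCube prefilteredEnvmap;  // blurred per roughness into mip levels
uniform sampler2D brdfIntegrationMap;   // 2D lookup over (N·V, roughness)
uniform float mipCount;

// split-sum approximation: a prefiltered environment lookup times a
// precomputed scale/bias of the Fresnel reflectance F0
vec3 specularIBL(vec3 F0, float roughness, vec3 n, vec3 v) {
  float NoV = clamp(dot(n, v), 0.0, 1.0);
  vec3 r = reflect(-v, n);
  vec3 prefiltered = textureLod(prefilteredEnvmap, r,
                                roughness * (mipCount - 1.0)).rgb;
  vec2 brdf = texture(brdfIntegrationMap, vec2(NoV, roughness)).rg;
  return prefiltered * (F0 * brdf.x + brdf.y);
}
```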
Importance Sampling: Image-based Lighting of a Lambertian Diffuse BRDF [Shaders Monthly #10]
5K views · 2 years ago
In Episode #10 of Shaders Monthly, we talk about importance sampling. Importance sampling is an essential tool in computer graphics. As a first simple example, we discuss in this episode how importance sampling can be used for real-time image-based lighting if the material is Lambertian diffuse. In the practical part, we implement two shaders in GLSL: 1) PrefilterDiffuse GSN Composer: www.gsn-l...
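For a Lambertian BRDF, importance sampling means drawing directions proportional to cos(theta). A standard sketch of this step, in tangent space with a z-up convention (an assumption, not necessarily the episode's exact code):

```glsl
const float PI = 3.14159265359;

// cosine-weighted direction on the upper hemisphere (z-up) from two
// uniform random numbers in [0,1); the pdf is cos(theta) / PI
vec3 sampleCosineHemisphere(vec2 rnd) {
  float phi = 2.0 * PI * rnd.x;
  float cosTheta = sqrt(1.0 - rnd.y);
  float sinTheta = sqrt(rnd.y);
  return vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
}
```

Because the cosine term of the rendering equation cancels against this pdf, the Monte Carlo estimate reduces to averaging the sampled radiance values.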
Microfacet BRDF: Theory and Implementation of Basic PBR Materials [Shaders Monthly #9]
12K views · 2 years ago
In Episode #9 of Shaders Monthly, the Cook-Torrance microfacet BRDF is explained, which is used nowadays to implement physically-based rendering (or "PBR" for short). In the theoretical part, we discuss a user-friendly parameterization known as Disney's "principled" BRDF or the "Metallic-Roughness-Workflow". Furthermore, we present the theory of the Cook-Torrance microfacet model and introduce ...
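Each of the three factors of the Cook-Torrance specular BRDF (D, G, F) fits in a few lines. As one hedged example, the widely used GGX distribution term with the alpha = roughness^2 remapping of the metallic-roughness workflow:

```glsl
const float PI = 3.14159265359;

// GGX/Trowbridge-Reitz normal distribution term of the Cook-Torrance
// microfacet BRDF; NoH is the clamped dot product of normal and half vector
float distributionGGX(float NoH, float roughness) {
  float alpha  = roughness * roughness;
  float alpha2 = alpha * alpha;
  float d = NoH * NoH * (alpha2 - 1.0) + 1.0;
  return alpha2 / (PI * d * d);
}
```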
Procedural Noise: Value and Gradient Noise in GLSL [Shaders Monthly #8]
2.6K views · 2 years ago
In Episode #8 of Shaders Monthly, we talk about procedural noise. We implement value noise and colored gradient noise in GLSL. Here is the code of the created shader: GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly08&graphName=ValueGradientNoise C++: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm08/sm08_cpp_valuegradientnoise.zip Java: www.mathematik.uni-marburg.de/~th...
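Value noise boils down to smoothly interpolating pseudo-random values placed on an integer grid. A compact sketch; the hash below is an illustrative stand-in for whatever random function the shader actually uses:

```glsl
// pseudo-random value in [0,1) per integer grid point (illustrative hash)
float hash(vec2 p) {
  return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

// value noise: bilinear interpolation of four grid values with a fade curve
float valueNoise(vec2 p) {
  vec2 cell = floor(p);
  vec2 f = fract(p);
  vec2 u = f * f * (3.0 - 2.0 * f); // smooth fade removes grid artifacts
  float v11 = hash(cell + vec2(0.0, 0.0));
  float v21 = hash(cell + vec2(1.0, 0.0));
  float v12 = hash(cell + vec2(0.0, 1.0));
  float v22 = hash(cell + vec2(1.0, 1.0));
  return mix(mix(v11, v21, u.x), mix(v12, v22, u.x), u.y);
}
```

Gradient noise replaces the stored values with random gradients per grid point and interpolates the resulting dot products in the same way.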
Procedural Textures: A Practical Introduction [Shaders Monthly #7]
2.5K views · 2 years ago
In Episode #7 of Shaders Monthly, we talk about procedural textures. As a practical example, it is demonstrated how 2D shapes can be rendered with a GLSL shader using a signed distance function (SDF). Furthermore, we discuss how to perform distance-based anti-aliasing. Here is the code of the created shader: GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly07&graphName=Procedu...
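The two techniques named in the description, an SDF for the shape and distance-based anti-aliasing at its contour, can be sketched together; function and parameter names are illustrative:

```glsl
// signed distance to a circle: negative inside, positive outside
float sdfCircle(vec2 p, vec2 center, float radius) {
  return length(p - center) - radius;
}

// distance-based anti-aliasing: fade over roughly one pixel at the contour
vec4 shadeCircle(vec2 p, vec3 fillColor) {
  float d = sdfCircle(p, vec2(0.5), 0.25);
  float aa = fwidth(d); // screen-space size of one pixel in distance units
  float coverage = 1.0 - smoothstep(-aa, aa, d); // 1 inside, 0 outside
  return vec4(fillColor * coverage, 1.0);
}
```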
What are Mipmaps? Texture Filtering in GLSL [Shaders Monthly #6]
7K views · 2 years ago
In Episode #6 of Shaders Monthly, we talk about mipmaps and how they are used for texture filtering in computer graphics. Furthermore, we will demonstrate the implementation in GLSL. 1) Magnification Filter GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly06&graphName=MagnificationFilter C++: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm06/sm06_cpp_magnificationfilter.z...
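Conceptually, the hardware selects the mipmap level from how fast the texel coordinates change between neighboring screen pixels. A hedged sketch of that selection in GLSL:

```glsl
// mipmap level selection: measure the texel footprint of one pixel step
// using screen-space derivatives of the texture coordinates
float mipmapLevel(vec2 texCoord, vec2 textureSize) {
  vec2 dx = dFdx(texCoord * textureSize);
  vec2 dy = dFdy(texCoord * textureSize);
  float rho = max(length(dx), length(dy)); // texels covered per pixel step
  return log2(max(rho, 1.0));              // level 0 when magnifying
}
```

The computed level can be passed to `textureLod` to compare against the built-in trilinear filtering.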
Texture Mapping in GLSL [Shaders Monthly #5]
4.4K views · 2 years ago
In Episode #5 of Shaders Monthly, we talk about texture mapping and how it can be implemented in GLSL. 1) Textured Triangle GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly05&graphName=TexturedTriangle C++: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm05/sm05_cpp_textured_triangle.zip Java: www.mathematik.uni-marburg.de/~thormae/lectures/sm/sm05/sm05_java_textured_tria...
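The essence of the textured-triangle example is a fragment shader that samples a texture at the interpolated texture coordinate. A minimal sketch; the identifier names are assumptions:

```glsl
#version 300 es
precision highp float;

in vec2 texCoord;            // interpolated from the vertex shader
uniform sampler2D myTexture; // bound texture image
out vec4 fragColor;

void main() {
  fragColor = texture(myTexture, texCoord);
}
```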
What are Shaders? A Hands-on Introduction [Shaders Monthly #1]
7K views · 2 years ago
In this first episode of Shaders Monthly, we explain the term "shader" and provide an overview of the OpenGL pipeline. Afterwards, two shaders are implemented in the OpenGL Shading Language (GLSL): 1) Red Triangle: A minimalistic GLSL shader that renders a red triangle. GSN Composer: www.gsn-lib.org/index.html#projectName=ShadersMonthly01&graphName=RedTriangle C++: www.mathematik.uni-marburg.de/...
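The "red triangle" fragment shader is essentially one line; a GLSL ES 3.00 sketch (the matching vertex shader only passes the triangle's positions through):

```glsl
#version 300 es
precision highp float;

out vec4 fragColor;

void main() {
  fragColor = vec4(1.0, 0.0, 0.0, 1.0); // constant red for every fragment
}
```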
Blinn Phong Shading: Theory and Implementation [Shaders Monthly #4]
8K views · 2 years ago
In Episode #4 of Shaders Monthly, we implement the Phong and Blinn-Phong shading models in GLSL. Both models are reflection models. This means they describe mathematically how light is reflected at an opaque surface. From today's point of view, the models are quite simple. Consequently, we will not only talk about the Phong and Blinn-Phong models. Instead, these models are presented in a general...
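The Blinn-Phong evaluation can be sketched in a few lines of GLSL; the function signature is illustrative, not the episode's exact code:

```glsl
// Blinn-Phong: the half vector between light and view direction replaces
// the reflected vector of the classic Phong model
vec3 blinnPhong(vec3 n, vec3 l, vec3 v,
                vec3 diffuseColor, vec3 specularColor, float shininess) {
  vec3 h = normalize(l + v);
  float diff = max(dot(n, l), 0.0);
  float spec = (diff > 0.0) ? pow(max(dot(n, h), 0.0), shininess) : 0.0;
  return diffuseColor * diff + specularColor * spec;
}
```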
OpenGL Modelview Matrix and 3D Transformations [Shaders Monthly #3]
4.9K views · 3 years ago
In Episode #3 of Shaders Monthly, we introduce the OpenGL modelview matrix, which allows you to perform 3D transformations, such as translation, scaling, or rotation. We look into the mathematical theory of these transformations and implement the corresponding shaders in GLSL. The implementation is based on the following example from episode #2: www.gsn-lib.org/index.html#projectName=ShadersMon...
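Two of the transformations covered, translation and rotation, can be built directly as GLSL matrices. Note that `mat4` constructors take columns, which is a frequent source of confusion (a hedged sketch, not the episode's exact code):

```glsl
// GLSL mat4 constructors take COLUMNS, so the translation vector ends up
// in the fourth column of the matrix
mat4 translate(vec3 t) {
  return mat4(1.0, 0.0, 0.0, 0.0,
              0.0, 1.0, 0.0, 0.0,
              0.0, 0.0, 1.0, 0.0,
              t.x, t.y, t.z, 1.0);
}

// rotation about the z-axis by an angle given in radians
mat4 rotateZ(float a) {
  float c = cos(a), s = sin(a);
  return mat4(  c,   s, 0.0, 0.0,
               -s,   c, 0.0, 0.0,
              0.0, 0.0, 1.0, 0.0,
              0.0, 0.0, 0.0, 1.0);
}
```

In the vertex shader the combined transform is applied right-to-left, e.g. `gl_Position = projection * translate(t) * rotateZ(a) * vec4(position, 1.0);`.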
Perspective Projection in GLSL [Shaders Monthly #2]
5K views · 3 years ago
In Episode #2 of Shaders Monthly, we talk about perspective projection and how it is applied in a shader. The GLSL shader implementation is based on the following example from episode #1: www.gsn-lib.org/index.html#projectName=ShadersMonthly01&graphName=TorusKnot Here are links to the modified shader code that performs perspective projection: GSN Composer: www.gsn-lib.org/index.html#projectName...
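A standard OpenGL-style perspective projection matrix can be constructed in GLSL as sketched below (column-major, mapping the view frustum to clip space; parameter names are illustrative):

```glsl
// perspective projection from vertical field of view, aspect ratio,
// and near/far clipping planes (near, far > 0)
mat4 perspective(float fovY, float aspect, float near, float far) {
  float f = 1.0 / tan(0.5 * fovY);
  return mat4(f / aspect, 0.0, 0.0,                              0.0,
              0.0,        f,   0.0,                              0.0,
              0.0,        0.0, (far + near) / (near - far),     -1.0,
              0.0,        0.0, 2.0 * far * near / (near - far),  0.0);
}
```

The -1.0 entry in the third column copies the negated view-space z into the w component, which produces the perspective divide after clipping.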
Vulkan GLSL Ray Tracing Emulator
5K views · 3 years ago
The Vulkan GLSL Ray Tracing Emulator is an online application that aims to simulate the ray tracing shader pipeline from the Vulkan GL_EXT_ray_tracing specification. To this end, a pre-compiler is provided that translates the ray tracing shader code into a classic WebGL GLSL fragment shader, which generates a direct preview of the shader's output image in the browser. Most importantly, the emul...
GSN Composer: New Features of the Shader Editor in Version 0.7
2K views · 3 years ago

Comments

  • @senkrouf · 7 hours ago

    These videos are amazing. Thanks

  • @zoumzoumzou · 1 day ago

Good quality materials! Thanks for sharing.

  • @OneOneTwo3 · 9 days ago

As a graphics programming student, these are some of the finest and best explained videos you can find on TH-cam.

  • @carnalexxx · 13 days ago

    Excellent tutorials!

  • @athirstyguy · 29 days ago

    You are SO UNDERRATED my friend. Stay motivated!

    • @gsn-composer · 28 days ago

      Thank you for the motivation. 👍

  • @13Septem13 · 1 month ago

    Is it possible to use GSN composer offline?

    • @gsn-composer · 1 month ago

      No, this is not possible. However, you can export a project as a standalone web application that runs independently of the GSN Composer front-end, either offline or as part of your own website.

  • @13Septem13 · 1 month ago

GSN Composer looks like an amazing tool for studying CG concepts.

  • @100drips · 2 months ago

    This must've been the best shader introduction I've seen so far.

  • @StarContract · 3 months ago

    These videos are by far the most in-depth and practical resource out there for game engine developers

  • @sichenghe9160 · 3 months ago

    great video!

  • @santitabnavascues8673 · 3 months ago

Great video! You could do one about clustered rendering, which covers both forward and deferred approaches to handle a large number of lights without the caveats of deferred shading, and with savings in the memory bandwidth that deferred shading often incurs.

  • @ZazeLove · 3 months ago

Ever think of getting into shaders for VRC?

  • @mr.cheeseburger3746 · 4 months ago

    wow really amazing thank you!

  • @streetware-games · 5 months ago

    these videos are worth gold, can't wait to see more

  • @Katlex_threejs · 5 months ago

    Thank you! It's really informative

  • @DigitalDuo2211 · 7 months ago

    Can this technique also be used for the specular part? Thanks again for these videos. Looking forward to watching the next ones.

  • @MDLeide · 7 months ago

    That was excellent.

  • @cleyang · 8 months ago

I can reproduce the diffuse part with the environment map sampling method from this video, but how should I handle the specular part? Can you provide some relevant articles? Thank you very much!

    • @gsn-composer · 8 months ago

      For the specular part, it is usually better to sample the BRDF rather than the envmap. You can watch Shaders Monthly #11. The most general solution would be "multiple importance sampling", which is mentioned at the end of the video: graphics.stanford.edu/courses/cs348b-03/papers/veach-chapter9.pdf
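Sampling the BRDF here means drawing half vectors from the GGX distribution, as in the UE4 notes referenced in episode #11. A standard sketch in tangent space with a z-up convention (an assumption, not the series' exact code):

```glsl
const float PI = 3.14159265359;

// importance-sample a half vector from the GGX distribution (tangent
// space, z-up); reflecting the view vector at it yields the light direction
vec3 importanceSampleGGX(vec2 rnd, float roughness) {
  float alpha = roughness * roughness;
  float phi = 2.0 * PI * rnd.x;
  float cosTheta = sqrt((1.0 - rnd.y) / (1.0 + (alpha * alpha - 1.0) * rnd.y));
  float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
  return vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
}
```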

  • @cleyang · 8 months ago

In the `directionToSphericalEnvmap` and `sphericalEnvmapToDirection` functions, the z-axis is used as the up direction. But in OpenGL, the y-axis is the up direction. Do we need to swap the y and z axes?

    • @gsn-composer · 8 months ago

      Yes, the y-axis points up in the OpenGL camera coordinate system. However, you can set the orientation of the environment map to any direction in the global coordinate system of the scene, which is what we did here.

  • @cleyang · 9 months ago

Hi, very nice video! I just have a simple question: is the output value of the fragment shader an sRGB color or linear radiance? I saw that `rgb2lin` was called for the `outColor` of the light ball, but when shading the object, `lin2rgb` was also called, so I was confused.

    • @gsn-composer · 9 months ago

      Inputs and outputs of the presented shaders are in the sRGB color space. The inputs are converted from sRGB to linear with the function "rgb2lin()" (gamma expansion). Then the shading is calculated in linear space. Finally, the result is converted from linear to sRGB with the "lin2rgb()" function (gamma compression).
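The two conversion functions are commonly implemented with the simple gamma-2.2 approximation of the sRGB transfer curve; a sketch (the exact sRGB curve additionally has a short linear segment near black):

```glsl
// gamma expansion: sRGB -> linear (gamma-2.2 approximation)
vec3 rgb2lin(vec3 rgb) {
  return pow(rgb, vec3(2.2));
}

// gamma compression: linear -> sRGB
vec3 lin2rgb(vec3 lin) {
  return pow(lin, vec3(1.0 / 2.2));
}
```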

    • @cleyang · 8 months ago

      Thanks!

  • @mahalis · 9 months ago

Thank you for publishing this; really appreciate the clear explanation and walkthrough.

  • @IronAttorney1 · 9 months ago

One thing I should say: if anyone is referencing the linked GSN project in their own work, beware of the 0 alpha value when storing the final frag colour. Took me a while to debug that :D I guess GSN Composer, or that particular project file, has alpha blending turned off or something.

  • @IronAttorney1 · 9 months ago

Great video, thanks! I've been messing around with hard-to-understand Shadertoy noise functions, and this has helped a lot. Am I right in thinking that to get the gradient for the actual pixel, I just need to interpolate between the 4 gradients, just as you did with the values you calculated from the gradients?

    • @gsn-composer · 9 months ago

Yes, correct. At each grid cell, we generate a gradient of random size and orientation. Then we get the four gradient values for the current pixel, one gradient value for each of the four neighboring grid cells. These gradient values are called d11, d12, d21, and d22 in the code. And then we perform interpolation based on the location of the pixel within the grid cell, using the same code as in the value noise example.

    • @IronAttorney1 · 9 months ago

      @@gsn-composer Thanks very much!

  • @cialk · 9 months ago

    This is so good thank you

  • @DasAntiNaziBroetchen · 9 months ago

    Some German snuck into your slide #21.

  • @抹茶乐奈 · 9 months ago

Very nice tutorial!

  • @Mcs1v · 10 months ago

Nice and detailed video! Just a hint: you don't need the position buffer; it's much faster to reconstruct the 3D position from the depth buffer with the inverse camera matrix (it saves a lot of GPU bandwidth).
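The commenter's suggestion can be sketched as follows: instead of a position render target, read the depth buffer and unproject through the inverse projection matrix. All names here are illustrative:

```glsl
uniform sampler2D depthTexture;        // depth buffer of the G-buffer pass
uniform mat4 inverseProjectionMatrix;  // inverse of the camera projection

// reconstruct the view-space position of a pixel from its depth value
vec3 positionFromDepth(vec2 texCoord) {
  float depth = texture(depthTexture, texCoord).r;
  vec4 ndc = vec4(vec3(texCoord, depth) * 2.0 - 1.0, 1.0); // [0,1] -> [-1,1]
  vec4 viewPos = inverseProjectionMatrix * ndc;
  return viewPos.xyz / viewPos.w; // undo the perspective divide
}
```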

  • @Khytau · 10 months ago

    thanks!

  • @HaonanHe-ep5vs · 10 months ago

Excellent video! But how do you render shadows when using image-based lighting in physically based rendering?

    • @gsn-composer · 10 months ago

      When we do rasterization and perform LOCAL shading, shadows are always difficult because they are a GLOBAL effect that we cannot decide locally. We need to know if the path to the light source is occluded or not. This is why ray tracing approaches are so popular, because they can solve this problem by shooting a ray towards the light source (or environment map) and can check if the light is occluded by some other object. Here is an image-based lighting example for ray tracing. It has shadows and other "global" effects: gsn-lib.org/apps/raytracing/index.php?name=example_transmission I have some slides about raytracing here: www.uni-marburg.de/en/fb12/research-groups/grafikmultimedia/lectures/graphics2 In a rasterization approach, we need to cheat somehow. For example, we could look for the brightest location in the environment map and assume that this direction is the only one that matters for shadows (a very rough approximation, of course). Then we could add a cascaded shadow map for that direction: developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf

  • @csanadtemesvari9251 · 10 months ago

    I just got this recommended: th-cam.com/video/QVbOp1h-Jb4/w-d-xo.html

  • @roozbubu · 10 months ago

    the goat has returned

  • @iNkPuNkeR · 10 months ago

    Thank you for the work you put out!

  • @DigitalDuo2211 · 10 months ago

    Excellent video as always! ♥

  • @neil_m899 · 10 months ago

    Great to see you back man! Loved your IBL videos!

  • @thegeeko1 · 10 months ago

    thanks for the very helpful videos

  • @oSteMo · 11 months ago

    Excellent explanation, thanks for all the effort you put in those videos :)

  • @xgamer-od7mb · 11 months ago

    thank you for sharing

  •  11 months ago

I have a second question, thanks for calculating these radiometric quantities for point lights! Are you going to do the same for directional lights later in the course? Because I'm having difficulty wrapping my head around visualizing the radiant intensity of a directional light (i.e. a light model that does not have a position, and whose direction is constant through space, a good approximation of sunlight on the earth's surface), because I can't surround its source with a spherical shell.

    • @gsn-composer · 11 months ago

Good point. Point lights could be controlled in a user interface with the parameter "radiant flux" (in watts). For an idealized directional light, such a parameter makes no sense because the directional light covers an infinitely large area, which would require an infinite radiant flux. Instead, for a directional light, the user should be allowed to directly specify the perpendicular irradiance (in watts per square meter).

  •  11 months ago

Hi professor! Thank you so much for making these very high quality resources publicly available. I'm studying PBR rendering at the moment and found your videos through a TH-cam search. Yours is the best resource I was able to find that involves both math/theory and code/implementation. ^_^

I got confused by the meaning of theta, and by the definitions of and relations between irradiance and radiance. AFAIU, on slide 14 when you explain irradiance, around t=18:55, theta is the angle between the incoming light direction and the surface normal. Again on slide 14, when you explain radiance, around t=20:30, theta is the angle between the outgoing/reflected light direction (view direction) and the surface normal. You say that's why we multiply dA with cos(theta), to get the projected area. Again on slide 14, when you explain the relationship from incoming radiance to irradiance, around t=22:18, theta again becomes the angle of incidence. Oh! I think I got it now! In the third case we are talking about the incoming radiance, but in the second case we were talking about outgoing radiance. We always use theta, but the meaning of theta changes depending on whether the radiance we are interested in is incoming or outgoing wrt the surface. Lol, it took me a few days to finally grasp this, and it happened while I was writing this question. :-) I'm glad I posed the question. Still gonna leave my comment here in case anyone else got confused about the same concepts.

One tiny bit of confusion that was left in my mind is that the outgoing radiance, the essential quantity that we are calculating in our shaders, by definition has a cos(theta_prime) in it (calling it prime to distinguish it from the angle of incidence). However, AFAIU, by definition, the BRDF handles the "projected area" for that case... I mean, the integral to calculate L_o(v) is kind of a transformation of irradiance from l-space to v-space via the BRDF kernel, and by definition, the direction v is into our eye/camera. So, theta_prime is always 90. At least, this is how I'm going to soothe my mind. :-)

    • @gsn-composer · 11 months ago

It is normal to be a little confused. This is a difficult topic and the video would need to be even longer to fully explain everything. You can watch this video: th-cam.com/video/UzQXXfI0Blo/w-d-xo.htmlsi=ctmtY_JDbwqOaiJx&t=1299 where Matthias Teschner (University of Freiburg) goes into more detail and may answer your remaining questions about radiance, starting at 21:39 min:sec.

  • @WatchInVR1 · 1 year ago

Hi. I was typing "understanding registers in shaders" in the search bar on TH-cam and got to your video. r1, r2, etc. are temporary registers; o0, o1, etc. are output registers; v1, v2, etc. are input registers (for vertex shaders 3.0). I would really like to understand this because I want to stereorise shaders in games. This means I want to edit shaders of games through DLL wrappers. Do you have any videos or sources that explain what these registers do and what they mean in ASM or the HLSL language..?

    • @gsn-composer · 1 year ago

      Sorry, writing shaders at assembly/register level is very uncommon and I have never done it myself, so I cannot help you. As far as I know, writing shaders directly in assembly is only possible in DirectX/HLSL, not in OpenGL/GLSL, which I cover in these videos. learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx9-graphics-reference-asm It is strange that you found this video, which covers the very basics, while searching for such an advanced topic. Normally, shaders are compiled at runtime by the graphics card driver to generate machine code for the specific graphics card. These compilers are pretty good at optimizing the code, so I think you really need to know what you are doing to beat them.

    • @WatchInVR1 · 1 year ago

@@gsn-composer Thank you very much for your response! Yeah, it is a very complicated topic, and while I have thousands of examples of what is done, very few are able to explain exactly how it's done. Even TH-cam and Google are dry. I've been to the Microsoft reference, but it's also scratching the surface. The issue is primarily to alter the shader through alterations in the matrices, but the back and forth between the vertex shader and pixel shader, with the complexity of using the registers on top, is very hard to understand.

  • @camerbot · 1 year ago

This is an awesome video series, thank you so much. At 7:05 you said that the use of three primary colors has something to do with the fact that we have 3 cones. Where can we get more info on this topic? I thought that we use the RGB model mostly for technical reasons (I'm guessing it's easier/better to produce red, green, and blue diode screens than, let's say, CMYK). Is there really a link of some kind between how our cones work and how RGB models are defined?

    • @gsn-composer · 1 year ago

Yes, the RGB color model is certainly motivated by the fact that humans are trichromats. Have you seen the section about the CIE RGB color space from 1931 at 7:20 min:sec? You cannot cover the entire space of perceivable colors with three primary colors, but you can cover a large area: en.wikipedia.org/wiki/CIE_1931_color_space . Unlike the additive RGB model, CMYK is a subtractive color model typically used by printers: en.wikipedia.org/wiki/CMYK_color_model

  • @doriancorr · 1 year ago

    A truly great work, thank you for sharing your work with us. I have learned so much from your series.

  • @roozbubu · 1 year ago

    This series is a truly phenomenal resource. Thank you!

  • @wkxvii · 1 year ago

    You deserve more views and likes friend! Awesome content!!! Thank you very much!

  • @azbukaChisel · 1 year ago

    Best of the best, thank you teacher!

  • @xinyanqiu3601 · 1 year ago

    This is the best class I've ever heard, thank you!

  • @unveil7762 · 1 year ago

I was thinking to precompute the BRDF integration map as an attribute. This can maybe save GPU resources, since the texture then becomes an attribute at, I guess, point level! 🎉 Thanks for all those lessons. Any time I have time, I come here to become a better artist! Little pearls each time. Thanks!!

    • @gsn-composer · 1 year ago

      Thank you very much. The result of performing shading (or parts of shading) in the vertex shader depends on the number of vertices in your mesh. Of course, if you have control over the tessellation of your input mesh, you can use it for low-frequency shading components, but I cannot recommend it as a general solution, especially in combination with high-frequency (detailed) textures. The texture coordinates are interpolated by the rasterizer when they are passed from the vertex to the fragment shader. This means that the texture coordinates and material properties read from textures, such as roughness (see the last example at 39:33), may change per pixel, not per vertex.

  • @unveil7762 · 1 year ago

This is so cool!! Is there a way to output the result on an external screen? Having a quad mapper as a web app can be pretty handy!!

    • @gsn-composer · 1 year ago

      It is not possible to have two synchronized instances of the GSN Composer in two different browser tabs, one running on an external screen and the other on the primary screen. However, it is possible to view the output in full screen mode. In the GSN Composer interface, you can use the "View Control" panel to set the output of any node as the background. But you will still see some of the interface. Currently there are two ways to get a "clean" full screen mode: 1) If you make the project "public", you will find a "GalleryLink" in the project dialog (next to the graph name). If you append "&fs=1" to this link, you will get a full screen output, see www.gsn-lib.org/docs/gallery.php To switch the browser to full screen mode, press "F11" on Windows. 2) In the project dialog, use the "Export as Website" button. If you are familiar with basic CSS, you will be able to place/scale the output as you like on your website.

  • @lambcrystal583 · 1 year ago

That's the clearest and most convincing learning resource for CG I have ever seen.

  • @4AneR · 1 year ago

    Many times before I got stuck with radiance and BRDF math, but this video finally allowed me to understand it. This is the best video on lighting computations on TH-cam. Big thank you!