Some additional links from the video. Also, working discord link:
Discord: graphics-programming.org/
RC Experimental Testbed: www.shadertoy.com/view/4ctXD8
why do i only see this comment on mobile but not pc
nvm i see it now on pc
oh also, does ROBLOX use radiance cascades? (probably not)
Some critical voices say that radiance cascades work in 2D but are a non-starter in 3D. Is this true?
This feels like it's related to wavelet transforms. Like, DCT:Wavelet Transform::Spherical Harmonic Lighting:Radiance Cascade.
I don't see why it wouldn't work in 3D using cubemaps.
I never thought I'd see Radiance Cascades, let alone create one!
Now now, Simon doesn't need to hear all this. He's a highly trained professional. We've assured the PoE2 team NOTHING will go wrong.
Alright. Let's let him in.
We've just been informed that the lightbulb is ready, Simon. It should be coming up to you at any moment
_panics in scientist_
If you would be so good as to climb up and start the compilers. We can bring the Global Illumination Spectrometer to eighty fps and hold it there until the release date arrives.
So this approach, but for audio, would be called a "resonance cascade"?
Isn't that what happened in Half-Life?
Gordon doesn't need to hear all this, he's a highly trained professional
Prepare for unforeseen consequences.
As someone doing audio stuff, I can't imagine why you'd ever want a resonance cascade anywhere.
@@fonesrphunny7242 If implemented properly, it might be usable as a spatial acceleration structure for spatial audio (example: th-cam.com/video/M3W7m0QSX-8/w-d-xo.html), though I'm not sure whether it would be better quality or more performant than existing techniques.
Gordon doesn't need to hear all this, he's a highly trained professional!
Make Half-Life great again.
half life 3‼️‼️‼️
what
i dont understand
@@Monkeymario. resonance cascade, half life reference
The original presentation by Alexander, for those who are interested: th-cam.com/video/TrHHTQqmAaM/w-d-xo.html
the casual "GI in O(1)" made me do a double take like that sketch
"this programming language knows if the program halts - nice. - wait, it knows if the program halts ?!?"
That one statement @ 2:08 is precisely why I love this channel. Although I can't deny how much I need the maths in my life.
frrrr tho, math so unreadable
Programming is a form of math
I knew those PoE2 devs were up to something!
Yeah, they are a talented bunch!
Great work.
It's him! He's the PoE2 dev!
The man, the MYTH, THE LEGEND
They're smart cookies, definitely :)
What’s next, Radiance Cascading Style Sheets?!
Quick, contact the Chrome devs!
LMAO XD
A wild CSS framework has appeared!
10x web developers: hey folks, here's my implementation of Radiance Cascades, written entirely in HTML+CSS!
NO! No God please no. No!
Nooooooooo!
The Penumbra Condition sounds like a nice title for a game
if deltarune was made by sony
There's the Penumbra Collection
The Penumbra Collection includes Penumbra Overture, Black Plague, and the expansion Requiem.
A thrilling blend of puzzles with multiple solutions and horror that will have you screaming for more!
Full freedom of movement along with the ability to manipulate everything using natural gestures creates an immersive world.
penumbra mentioned 👹👹👹👹👹👹👹👹👹👹👹
It all makes so much sense when you explain and show it to us.
Without your video, I would get lost in "paper" articles with just a few images, scrolling through equations and getting familiar with new terms.
Thanks for another great video SimonDev. 👍
Papers are always hard to read (for me).
@@simondev758 reminds me of the meme "I hate how research papers are written, so much yapping, just get to the point bro."
Ever since ExileCon I've been waiting for someone to do a nice video breakdown of Radiance Cascades. I can see it becoming a mainstream technique in the coming years; so much potential.
Yeah, the main limitation is that it's screen space, so it doesn't know about lights outside the screen (lights behind the camera are the biggest issue; you can fairly trivially compute the cascades at ~1.5x1.5 resolution, i.e. 25% extra space all around, and crop down).
So as-is it doesn't work well for first-person or over-the-shoulder third-person views. (You can use world-space probes, but that's a bit more complex and not a neat constant time like screen-space RC.)
But there are also a lot of games that are 2D or pseudo-2D where this would work really well (e.g. League of Legends/Dota, or side-scrollers like Hollow Knight; city builders would also benefit greatly, since you could have individual home lights for free).
@@satibel The effect isn't tied to screen space; you could do it in screen space, but it's usable with any grid of data. If you have a 3D grid of light probes in your world, you can use this: have probes check 8 directions over a small area and place them every meter, for example; then every 2x2 meters in world space, make probes that scan 64 directions further out, and so forth. Update these probes periodically; importantly, you only need to update probes close to the player at any regular rate, and you don't need probes at infinite distance. You could center a 32x32x32 grid of probes around the player, for example, and update the probe positions as the player moves (see the sketch below).
@@DreadKyller How would the performance compare to screen space?
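A minimal sketch of that world-space layout in TypeScript. The spacing, direction counts, and grid size follow the example numbers in the comment above and are otherwise arbitrary assumptions, not part of any published spec:

```typescript
// Hypothetical player-centered, multi-level probe grid: level 0 probes every
// 1 m checking 8 directions, level 1 every 2 m checking 64 directions, etc.
interface CascadeLevel {
  spacing: number;    // distance between probes, in meters
  directions: number; // ray directions stored per probe
  gridSize: number;   // probes per axis in the player-centered grid
}

function buildCascades(levels: number, gridSize = 32): CascadeLevel[] {
  const cascades: CascadeLevel[] = [];
  for (let i = 0; i < levels; i++) {
    cascades.push({
      spacing: 2 ** i,        // 1 m, 2 m, 4 m, ...
      directions: 8 * 8 ** i, // 8, 64, 512, ... (angular detail grows with distance)
      gridSize,               // fixed size, so total probe count stays bounded
    });
  }
  return cascades;
}

// Snap the grid origin to the probe spacing so probes sit on stable world
// positions as the grid follows the player (avoids popping while moving).
function gridOriginAxis(player: number, spacing: number, gridSize: number): number {
  return Math.floor(player / spacing) * spacing - (gridSize / 2) * spacing;
}
```

Distant, coarse levels can also be refreshed less often than near ones, which is where the "only update probes close to the player at any regular rate" saving would come from.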
Looking through the comments, and I'm glad that I'm not the only one who thought the title said "Resonance Cascade"
I always know you're going to make me understand something new in the way that I need it to understand it. I think we speak the same exact language; like a mixture of nothing-is-new-just-another-rehashed-version-of-the-same-stuff-we-already-did, and developer-that-wants-his-code-to-run-as-fast-as-possible. Thank you. Every time. Thank you for speaking my language.
You're welcome!
I'm at that weird point of not wanting to work where I'll happily sit through a college-level dissertation on lighting simulation. LoL
Great Video!!!
Such an intuitive explanation of a super cool rendering method. Awesome work! The only thing I would have loved to see in more detail is the actual implementation, especially: how does a point on the screen actually get its value? A raycast, I assume? How does the raycast avoid having to loop over every light source in the image to find a collision? Also, is your explanation only valid in 2D? Would it map to 3D by projecting all the points onto the nearest surface, or would it need a 3D grid of points everywhere? Some of this could perhaps have been clarified by a brief section on where this method can and cannot be used as presented. Other than these nitpicks / curious questions, though, an excellent intuitive explanation!
RC is compatible with any technique for casting rays: SDF raymarching, voxel tracing, etc. Even RTX, I guess. PoE2 just uses constant-step per-pixel screen-space raymarching. As for 3D, I suggest you read the paper, because there are a lot of nuances: you can make a full-on 3D grid of radiance probes, 2.5D screen-space probes with screen-space intervals, 2.5D screen-space probes with world intervals, etc.
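To make the "any ray caster works" point concrete, here's a minimal constant-step interval march in TypeScript, in the spirit of what the reply describes for PoE2. `sampleScene` is a hypothetical stand-in for whatever scene representation is available (SDF, voxels, a lit G-buffer), not an API from the paper:

```typescript
type Vec2 = { x: number; y: number };
type Radiance = { r: number; g: number; b: number; hit: boolean };

// March one radiance interval [t0, t1) with a fixed step. If we hit an
// opaque/emissive surface the interval is resolved; otherwise the result
// stays transparent and merging defers to the next (outer) cascade.
function marchInterval(
  origin: Vec2, dir: Vec2, t0: number, t1: number, step: number,
  sampleScene: (p: Vec2) => Radiance,
): Radiance {
  for (let t = t0; t < t1; t += step) {
    const p = { x: origin.x + dir.x * t, y: origin.y + dir.y * t };
    const s = sampleScene(p);
    if (s.hit) return s;
  }
  return { r: 0, g: 0, b: 0, hit: false };
}
```

Swapping the inner loop for SDF sphere tracing or hardware ray queries changes only this function; the cascade bookkeeping around it stays the same.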
Keep in mind that (as @Alexander_Sannikov mentioned in his presentations) the screenspace techniques work well for PoE(2) due to the PoV limitations of the game... something that is undoubtedly familiar to players of the genre and PoE specifically but which may be lost on other folks. IMO the expansion of this technique beyond PoE's rendering purview is the next major area of research for Radiance Cascades.
So what it sounds like is multiple resolutions of real-time light probes? You create a fixed grid of probes, occasionally precompute the incoming light from different directions for each point, and then, when determining the light at any point, you interpolate the light between the probes for each "cascade" and combine them together? At least that's what I'm gathering. This way, for each point you're only computing the light from the nearest few cascade probes, not the whole scene.
The most important point is that probes don't store radiance (rays that start at the probe); they store radiance intervals (rays that start a certain distance away from the probe and connect into a continuous ray).
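A small sketch of how those intervals tile a ray. The branching factor of 4 (each cascade's interval is 4x longer than the previous) is a common choice in radiance cascade implementations, assumed here purely for illustration:

```typescript
// Returns the [t0, t1) segment of the ray that cascade `level` traces.
// Interval lengths form a geometric series (base, 4*base, 16*base, ...),
// so consecutive cascades connect into one continuous ray with no gaps.
function intervalRange(level: number, baseLength: number): [number, number] {
  const t0 = baseLength * (4 ** level - 1) / 3;
  const t1 = baseLength * (4 ** (level + 1) - 1) / 3;
  return [t0, t1];
}

// intervalRange(0, 1) -> [0, 1), intervalRange(1, 1) -> [1, 5),
// intervalRange(2, 1) -> [5, 21): continuous and non-overlapping.
```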
"Most of us are programmers, not math people." -> that's a great quote.
Exactly. I program so the computer can do the maths I don't understand 😅
I became a mathematician at the age of 6. Then I became a programmer at the age of 8.
And at the age of 10, I learned that I had already been a programmer & mathematician at the age of 4, when I fully grasped the mathematical concept of propositional logic.
Every mathematician is a programmer; many just don't know any computer programming languages. And every programmer is an expert mathematician in the field of logic.
As a dev who took 3 tries to pass Calculus I, I agree with this statement.
Okay…I’m on my fourth watch of this and I can feel myself *slowwwwwly* getting to grips with it, but even with a background in physics and maths (my degrees are in physics and electronic engineering) and a long career as a systems architect, I’ll be honest: I’m struggling.
It’s a testament both to the PoE developers for the original idea and to Simon (who I follow) that this is penetrating my thick skull at all. Definitely not for the faint of heart but it’s worth watching over and over until it clicks because the end result is fucking gorgeous. Thanks Simon (aka Bob from Bobs Burgers) ❤️
But Simrola,
what about the ring and ray artifacts?
Nice animations, and intuitive explanations, great video!
And thanks for consulting & mentioning the community at the end :D
Let's hope to see this implemented in some open-source engines, especially Blender. This could be really good tech for at least previewing renders.
Do you think it could be used in full production games? (Not a programmer, just curious about the technology.)
@@TristanCleveland It already has been. It was specifically invented for the game you see in the intro.
This channel is a gold mine. Thank you.
1:50 Saving a timestamp for the next time I have to explain the difference between math and programming.
13:18 Is it really live on your website? I don't see it, only Grass, Cloud, FPS Game and Minecraft projects.
Yeah some people seem to be getting older versions, let me know if it's still not showing up.
@@simondev758 Didn't show up a minute ago, but now it works. Feels laggy, but impressive either way.
@@oscarelenius4801 Yeah, it's a stock implementation, with no optimizations whatsoever heh.
@@simondev758 Might be a caching issue? Reloading with Ctrl + F5 might work.
This video went from super simple to utterly incomprehensible in a span of seconds! I'm having whiplash! 😂
Hah
In all likelihood the issue here is that the verbal and symbolic explanations are "high frequency" while the animated visual explanations are "low frequency". There were many times in the video where I was waiting for more detailed animations, which never came.
The predominant example was the rendering equation, which could have been more fully elucidated by continued animations of each term (and possibly subterm) in the equation, but my critique extends to the rest of the video, where the animations were solid but stopped short of fully explaining what was being said and shown symbolically.
@@NotAnInterestingPerson this comment is big brain lol
Lowkey wanna suggest that the term that comes after "umbra, penumbra" should be called "bruh."
I started creating my own game engine to learn how it works behind the scenes, all because of your videos. But since I only know JavaScript, I felt intimidated by WebGL and did everything in context2D. Your video on spatial hash grids helped me a lot to create my own version with dynamic ranges instead of fixed arrays. Watching this video, I realized my improvised lighting system in 2D is pretty humble lol.
7:03 I was distracted the whole time by the artifact on the left side.
Is this a computational error?
This is such a well-put-together explanation. You convey a difficult concept from ground zero to implementation really smoothly, and I understood more than I'd expect. Hats off.
Great to see the crazy graphics devs at GGG getting some love!
Seeing you reference Alexander Sannikov's paper is not something I was expecting :O
I read the paper months ago and got the basic gist but made a mental note to revisit it for better understanding. This DEFINITELY jogged my memory. Bravo to @SimonDev for exposing this wonderful research to a broader audience.
Excellent video. The paper was a bit too complex for me to understand, but this video explained it very well. I’ll probably go make my own now…
Thanks for helping give this awesome paper wider visibility! It's a fantastic insight.
Crazy good idea and so simple in a way.
I really doubted whether I should write the paper because of how obvious it seemed.
Thank you for linking the paper. For such complex topics I like to carefully read an article rather than just watch the video.
This is such a great source of information, it explains Radiance Cascades so much better than other videos and papers, I finally managed to understand it! Thank you so much!
No idea what I just watched but still fascinated how clever people are.
I really liked the demo! If you add the ability to upload an image from which to generate the lights/shadows, and the ability to change the background, you could sell it or launch it as a tool for graphic designers!
For those curious, Cem Yuksel has a series of graphics videos that are very easy to understand, including a really intuitive explanation of the rendering equation. He does things very visually.
You are such a great teacher: start by building the intuition, and then it all makes sense. Thanks for posting this.
Great stuff. Although the project isn't in the projects list?
yeah i can't find it either
Should be there, if not, just go to my github.
@@simondev758 It's not there. The project is indeed on your Github but I can't get it working.
This is the answer I was looking for. Thank you for this fracking awesome video. You sir are appreciated.
This is one of those videos that I'm going to have to watch like 3 times over before this gets hammered into my thick skull
This is really cool. Thanks for explaining it in an easy to understand manner!
The quality of the presentation and the in-depth knowledge you are able to explain in simple terms are awesome. Please keep it up; I love your content. I would also love something focused on physics, like GJK/EPA for collision detection and response.
Awesome video! I thought it would be about realistic lightning bolts, which would also be interesting, since I've looked into that a bit but can't find much usable information on it.
Hmm, I think you could also use lower-resolution cascades the further away you are from the camera, to save on computation! :D
I'm definitely going to try working with this!!!
Oh cool, didn’t know the PoE devs published this method! Thanks for the breakdown!
The live demo seems not to be available on your homepage yet.
I think there's some caching issues, I'll try invalidating and hopefully you can access it.
A radiance cascade? At this time of year, at this time of day, on this side of the border world, localized entirely within our facility?
May I see it?
No.
I'm not a game dev and don't know anything about any of this, but I watched the whole thing without skipping through. You're a good presenter, even if I still don't fully get it 😅
I love the fact that the explanations in this video are really easy to understand. Great video!
oh man i'm so hype to have bob belcher explain new and exciting graphics techniques to me
Awesome video, however I have one suggestion. In this video, even at 1080p, TH-cam's video compression and low bitrate are extremely noticeable and there are a lot of artifacts all over the place the entire time. As a suggestion, could you upload videos like this at 1440p in the future? Even for people with a 1080p display, this can make a massive change in how clean the video looks because of the better bitrate.
It could also be the background having a sorta high amount of detail?
I am curious to see what the bias is like for large scenes, though. It reminds me a bit of "surfels", which were developed by EA if I remember correctly. It was an innovative technique, but it contributed a lot of bias to get real-time, noise-free images. The way this method is laid out, it seems like that will also be the case here, limiting its effective use in real-time games with certain FPS targets.
I love this lighting - definitely an inspiration towards trying new things - you never know what might work!
Love the video! It'd be really great if you could make a video covering the 3D version and some of the fixes for the artifacts this technique has.
Could you do a video about different shadow techniques? From basic shadow mapping using hard-coded projection params [like directional shadows with ortho(left: -10, right: 10, bottom: -10, top: 10, near: -10, far: 10)], through tight projection math, normal bias, world-space texel size, etc., to CSM and VSM?
Maybe I'm missing something, but I think this only works in screen space, right? Therefore, it'll exhibit the usual disocclusion artifacts that such techniques have, like SSAO and SSR.
NO, it can work in world space as well
Cool, reminds me of voxel cone tracing with 3D clipmaps. It also has the same issues (light leakage, not good at perfect reflections), but hopefully the new technique scales better and uses less VRAM. I'll have to look at the paper once it's released in its final form.
Edit: Btw, for 2D you can make cone tracing work quite well and fast for GI. I only implemented the 3D version 8 years ago. A little surprised that it was hardly adopted, since it can work quite well in certain types of games.
I think voxel cone tracing was used in CryEngine but nowhere else.
GorDon doesn' need to hear all this, he'sa highly trained propfessional. We've assurdly administrated that nothing-will-go-wrong.
Thank you so much for sharing this knowledge! Super interesting video, as always
thanks for the laughter and learning in every video!
Cascade: the JPEG of light rendering.
I like it.
It is, isn't it?
I never thought I needed a young H. Jon Benjamin explaining lighting algorithms, yet here we are.
I'm wondering how this could extend to 3D. Maybe do something similar, but for points on a UV-mapped surface? If you could do that, you could actually speed up ray tracing by a large margin and allow a high degree of freedom in hardware requirements based on how many iterations you run. Something I may experiment with, but my main expertise is Blender shaders; GLSL and its equivalents are new to me.
GLSL syntax is really easy
Nice to see Alexander Sannikov's radiance cascades being used.
I actually theorized a way to use a similar thing for real-time physics calculations with fluid or fluid-like objects (e.g. Plague Tale's rats, or huge armies).
The idea is that only the boundaries get true physics, and the others are moved by a vector field based on the population (i.e. they move from high population to low population).
And the physics needs good angular resolution in the middle of the pack, but only good position on the outside.
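A toy sketch of that vector-field idea, assuming a simple 2D density grid; the central-difference gradient and plain Euler step are illustrative choices, not a crowd-simulation recipe:

```typescript
// Gradient of a population-density grid via central differences, with
// edge clamping. Agents get pushed from high density toward low.
function densityGradient(
  density: Float32Array, w: number, h: number, x: number, y: number,
): [number, number] {
  const at = (i: number, j: number) =>
    density[Math.min(h - 1, Math.max(0, j)) * w + Math.min(w - 1, Math.max(0, i))];
  return [(at(x + 1, y) - at(x - 1, y)) / 2, (at(x, y + 1) - at(x, y - 1)) / 2];
}

// Interior agents just descend the density gradient; only agents on the
// pack boundary would get real collision physics, per the comment above.
function stepAgent(
  pos: [number, number], grad: [number, number], dt: number,
): [number, number] {
  return [pos[0] - grad[0] * dt, pos[1] - grad[1] * dt];
}
```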
Thanks so much for the website, it's so cool!
Very interesting approach; it seems to sit somewhere between light probe grids and surfels.
I understood everything up until the radiance cascades nodes. The rays are casting out something to something, because directions... uh, you do it again because idk, then you have another pair of nodes doing something farther away... then you combine it for some reason, somehow... and you get this magical thing I can't explain. 🥴
This is just before the GPU part and using pixels to solve for ray directions.
Amazing video. Thanks Simon.
Awesome! Thanks!
I would love to see a full comparison of this technique and full path tracing rendering the same scene, while also showing how long it takes both to compute, PT would be done on software ofc to make it a fair fight
Excellent presentation. Thank you!
Amazing explanation and it looks awesome on the website!
I love learning about programming stuff from Archer.
Could just be me, but it felt like the video ended a bit soon. I'm not sure how you get from "layers of probes at different resolutions sampling lighting" to what you show at the end. Also, is this global illumination? Because it looks more like direct illumination with soft shadows and emissive surfaces (which is admittedly impressive in its own right if it runs fast).
Not sure if you've already done a video on this or not, but could you do a video about transparency? As an artist, I'd like to understand what makes it expensive, the draw-order issues when you have overlapping planes, etc.
Great idea, I'll keep it as a potential topic, but ultimately I let my patreon supporters do the final vote.
These animations look top notch. Any chance of sharing what software you use to create them?
I animate them via code in shaders. I cover a lot of it in my shader course.
I'm not qualified in that field at all, but it's always interesting to learn about new things.
I've also seen Gaussian Splatting (GSplat) techniques, which could provide quite interesting things for the game industry, like preprocessing the whole environment + lighting inside a GSplat, which consumes way less compute power, can have lifelike graphics, and also takes way less space on the hard drive.
I don't know how Radiance Cascades compete next to GSplats, though; that would actually be an interesting subject to discuss (from a professional).
You know, when I saw the interpolation and probes, it reminded me of a version of Pong I made that would coordinate-check the ball and then calculate the angles of incidence and reflection. Lol, I was inadvertently doing a similar kind of math to the checks being made for radiance.
Honestly, I once made an argument for using this kind of behavior in a game doing simple radar simulation, the idea simply being whether an object appears in a field of view. The non-programmers all said "that's too computationally expensive!!" And of course, anyone who's done a simple coordinate check knows how easy it is to test that something can "see" a distant object. Add in some fourth-power roots and presto, you have a photon energy calculation.
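For what it's worth, a tiny sketch of that radar-style check; the 1/r^4 falloff is the standard two-way radar-equation attenuation (presumably the "fourth power roots" remark), and every name here is illustrative:

```typescript
// Cheap "can the radar see it, and how strongly" check: a range gate,
// then inverse-fourth-power attenuation for the two-way signal path.
function radarReturn(power: number, range: number, maxRange: number): number {
  if (range > maxRange) return 0;            // simple visibility/range check
  return power / Math.max(range ** 4, 1e-6); // out-and-back: (1/r^2)^2 = 1/r^4
}
```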
Alexander, The Great!
Great video Simon - the projects page isn't showing that demo though, may need the caches clearing?
Yeah, let me know if it still isn't showing and I'll try to force an invalidation or something.
@@simondev758 I don't see it either.
I see:
"How do Major Video Games Render Grass?"
"How Big Budget AAA Games Render Clouds"
"I Tried Making an FPS Game in JavaScript"
"I made an EVEN BETTER Minecraft"
Isn't there a way to use engine properties, like:
check for emissive materials, get the size, position, and luminance of the object, and directly fire probe arrays at it?
Maybe use the inverse square law and luminance to decide whether it's even worth taking into account for further calculations.
Just a quick thought about it 😅
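A minimal sketch of that culling heuristic, assuming a 2D scene; `luminance` and the `epsilon` cutoff are illustrative parameters, not actual engine properties:

```typescript
// Estimate a light's contribution at a probe with inverse-square falloff
// and skip lights that fall below a visibility threshold.
function worthSampling(
  luminance: number,
  lightX: number, lightY: number,
  probeX: number, probeY: number,
  epsilon = 0.001,
): boolean {
  const dx = lightX - probeX, dy = lightY - probeY;
  const distSq = dx * dx + dy * dy;
  return luminance / Math.max(distSq, 1) >= epsilon; // clamp avoids div-by-~0
}
```

One caveat: explicitly enumerating lights reintroduces the per-light cost that radiance cascades avoid by gathering from the scene itself, so this fits better as a culling pass for a conventional light loop.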
Fantastic video, and Alexander's concept is incredibly intriguing. I'm trying to understand how it's applied in a screen-space context, as hinted at in Alexander's paper where he mentions they use a 'hierarchy of screen-space radiance probe cascades populated with screen-space ray-marching' (section 4.2, page 23). I noticed that in your demo you ray-march an SDF representing the scene's 'geometry' to populate the radiance cascades. I'm curious what's being ray-marched in the screen-space implementation of, say, a 3D scene. Would it be similar to screen-space shadows, using the depth buffer to detect occlusion? I'm new to graphics programming, so any insights would be greatly appreciated. Thanks again for the content.
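A sketch of the commenter's own hypothesis (not confirmed against the paper): march the interval in screen space and flag occlusion when the ray passes behind the depth buffer, as screen-space shadows do. `depthAt`, the step count, and the bias are all illustrative:

```typescript
// Walk an interval between two screen positions, comparing the ray's
// interpolated depth against the depth buffer at each step.
function intervalOccluded(
  uvStart: [number, number], uvEnd: [number, number],
  rayDepthStart: number, rayDepthEnd: number,
  depthAt: (u: number, v: number) => number,
  steps = 16, bias = 0.001,
): boolean {
  for (let i = 1; i <= steps; i++) {
    const t = i / steps;
    const u = uvStart[0] + (uvEnd[0] - uvStart[0]) * t;
    const v = uvStart[1] + (uvEnd[1] - uvStart[1]) * t;
    const rayDepth = rayDepthStart + (rayDepthEnd - rayDepthStart) * t;
    if (rayDepth > depthAt(u, v) + bias) return true; // ray went behind geometry
  }
  return false;
}
```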
Reminds me of a video from several years back on Unity's Light Probes. I have VERY little understanding of how all this works, though, so I'm not sure how similar they are, other than that they attempt to solve the same lighting problem.
This is really beautiful! Well done.
Been waiting for someone else to validate this technique. It's really cool to hear promises, but always even better when others get to compare the results.
Would have loved a comparison with some other technique, though since you've only implemented it in 2D, I guess you can't really compare with your 3D ray-traced model.
14:56 It's still possible to get away with only the 4 samples; the method would be: spin the samples and accumulate the data over time. It's basically a temporal way of doing the same thing, at roughly the same cost as 4 samples, so you could get away with doing something like 4 at an assumed compute cost of 6-7, depending on the method used. (This is a good method for 2D, but 3D would require more than 4 samples, so around 16 should be good enough.)
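A sketch of that temporal trick, assuming 2D probes: rotate the 4-ray fan a little each frame and blend into a history buffer. The golden-angle offset and the blend factor are illustrative choices:

```typescript
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

// Directions for this frame's 4 rays: an evenly spaced fan, rotated by a
// per-frame golden-angle offset so successive frames sample new angles.
function rayAngles(frame: number, raysPerFrame = 4): number[] {
  const offset = (frame * GOLDEN_ANGLE) % (2 * Math.PI);
  return Array.from(
    { length: raysPerFrame },
    (_, i) => offset + (i * 2 * Math.PI) / raysPerFrame,
  );
}

// Exponential moving average stands in for the temporal accumulation pass;
// smaller alpha = more history = smoother, but more ghosting under motion.
function accumulate(history: number, current: number, alpha = 0.1): number {
  return history * (1 - alpha) + current * alpha;
}
```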
Can't wait for Unreal Engine to pick up on this.
Very nice!
Thank you so much for sharing your valuable knowledge! :)
Hi, just tried the live demo on your projects page, and it's melting my GTX 1080 Ti at the highest quality settings.
Also, on my 21:9 monitor the brush is squished along the Y axis.
Very cool demo, though.
Still can't appreciate enough what Nvidia did in 2018. We still aren't fully there, but it put us so much closer.
Love your video! What are you using to edit/animate the presentations?
Everything is rendered/animated via custom shaders.
This effect is beautiful! Thanks for sharing!
Also can you help me find your demo? I'm on the projects page and I don't see it!
Thanks!
Thank you!
Commenting mainly for the algorithm, but thank you for the video, please keep it up!
For some reason, even though you explained everything it's doing, the end result looks better than what I would have imagined if you hadn't shown it. Like, I'd expect worse artifacts from this.
I feel like this is what CS2 uses for its lighting probes. If you look at the debug view, you can see the transition from sparser probes to denser probes depending on some conditions. I like computer graphics but hate programming, so I wouldn't know, though.
Counter Strike 2? Cities Skylines 2?
9:27 It's so weird to see the multiplication symbol written as ×; over the years I've gotten so used to seeing it as * or a dot.