How Ray Tracing Works - Computerphile

  • Published 17 Dec 2024
  • Ray tracing is massive and gives realistic graphics in games & movies but how does it work? Lewis Stuart explains.
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottsco...
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan...
    Thank you to Jane Street for their support of this channel. Learn more: www.janestreet...

Comments •

  • @RockLou 7 months ago +647

    "We should go outside" rarely uttered words by a computer engineer

    • @loc4725 7 months ago +60

      Yes, the bravery on show was impressive.

    • @tango_doggy 7 months ago +8

you're thinking of the computer science side.. the electrical engineering side entrances people to fly across the world testing different countries' fault protection systems

    • @vinnyfromvenus8188 7 months ago +14

      literally a "touch grass" moment

    • @MePeterNicholls 7 months ago +7

      He struggled on the line “that’s how real life works” tho

    • @clearleaf2646 7 months ago +1

      He should get a nobel prize just for the idea

  • @emanggitulah4319 7 months ago +213

    Awesome CGI... Looked so real having sunshine in Britain. 😂

  • @phiefer3 7 months ago +68

One thing that's sort of brushed over here is that while all the things he mentioned about rasterization sound more complicated than ray tracing, in the early days of computer graphics it was by far the simpler method (heck, in the early early days, things like having a light source weren't even a thing). Rasterization evolved from the fact that doing something like ray tracing for every single pixel was not even close to being practical, especially not in anything real time. Essentially, rasterization was a shortcut that allowed us to render graphics in a very simplified manner because it was all that the technology of the time was capable of.
    As technology improved, we then added more bells and whistles to rasterization to improve it, like lighting, shadowmaps, depth maps, etc. These things all made it a bit more complex to take advantage of improved hardware and software, but it was still far easier than ray tracing, which was still beyond what could be done in real time. And this was mostly how graphics technology improved over time: adding more and more bells and whistles to shortcuts built on top of shortcuts on top of shortcuts that was rasterization.
    But in recent years we've reached a turning point where 2 key things have happened: First is that modern technology is now capable of things like ray tracing in real time; and the second is that all the extra stuff that's been added to rasterization over the years to improve its quality is starting to approach the complexity level of ray tracing. That's why it now seems like ray tracing is such a big leap in quality for such a small difference in performance. The tradeoff is still there, but eventually we'll probably see a point where ray tracing is both faster and higher quality than rasterization based graphics.

    • @yoyonel1808 7 months ago

Very nice explanation, thank you 😊

    • @jcm2606 7 months ago +5

      Another thing is that rasterization is starting to reach an accuracy ceiling where it's becoming disproportionately harder to push the accuracy higher. The best example of this I can think of would be complex interactions between different lighting phenomena (like a surface diffusely reflecting another surface that is specularly reflecting a light source, leaving a pattern of light behind on the diffuse surface; think the really pretty patterns reflecting off of water on the underside of a boat hull, that's what I mean).
      To accurately reproduce those interactions you really need the ability to have light be simulated/approximated in any order (ie you need the ability to diffusely reflect a specular reflection AND the ability to specularly reflect a diffuse reflection AT THE SAME TIME), which is extremely difficult to do with rasterization in a performance-friendly way, because of how rasterization uses a well defined order in its approximations (you could use some tricks like reuse the output of intermediate passes from past frames, then reproject, validate and maybe reweight past frame outputs to make them match more closely with the current frame, but that introduces a bunch of errors which hurts accuracy).
      Raytracing, on the other hand, gets this basically for free as it's an inherently "recursive" algorithm in that any type of lighting interaction is naturally nested within any other type of lighting interaction (at least for path tracing or recursive raytracing, most games nowadays are using "raytracing" to refer to replacing specific lighting interactions with standalone raytraced variants, so "raytracing" in the context of most current games still has a well defined order like rasterization).

    • @AnttiBrax 7 months ago +6

      Are you sure about that historic part? Computerized ray tracing dates back to the late 60's. I think you might be only considering things from the real time graphics point of view.

    • @emporioalnino4670 7 months ago +4

      I disagree about the small impact on performance, it's pretty substantial. Most gamers choose to turn RT off to save frames!

    • @fasinaayo4483 6 months ago

      Thank you

  • @shanehebert396 7 months ago +90

    Way back in the day, one of our computer graphics assignments in college was to write a parallel ray tracer in C on a SGI Power IRIS server. It was a lot of fun.

    • @maximmk6446 7 months ago +4

      Is that sometime around the late 80s?

    • @sagejpc1175 7 months ago +3

      You poor soul

    • @HaggardPillockHD 6 months ago +2

      Back in my day we drew rays with a protractor and slide rule

  • @broyojo 7 months ago +179

    first time on computerphile that we touched grass

    • @vitaly2432 7 months ago +16

      I'm new to this channel and I hope it is the first time (and the last) that we touched a monitor, too

    • @benwisey 7 months ago +3

      I think it may not be the first time touching grass or a monitor.

    • @DerdOn0ner 5 months ago

@@vitaly2432 you may want to look for another channel then

  • @lMINERl 7 months ago +87

    How ray tracing works : make a line then trace it where it will go

    • @Hydrabogen 7 months ago +22

      One may even go so far as to call the line a ray
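
  A minimal sketch, in Python with invented names (nothing from the video), of the "make a line" step: one ray per pixel from an assumed pinhole camera at the origin looking down -Z.

      import math

      def camera_ray(px, py, width, height, fov_deg=60.0):
          """The 'line' for one pixel: an origin plus a unit direction through
          the pixel's centre, for a pinhole camera looking down -Z."""
          aspect = width / height
          scale = math.tan(math.radians(fov_deg) / 2)
          x = (2 * (px + 0.5) / width - 1) * aspect * scale   # pixel -> camera space
          y = (1 - 2 * (py + 0.5) / height) * scale
          length = math.sqrt(x * x + y * y + 1)
          return (0.0, 0.0, 0.0), (x / length, y / length, -1 / length)

      origin, direction = camera_ray(400, 300, 800, 600)  # a centre-ish pixel
      print(direction)                                     # roughly (0, 0, -1)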

  • @mahdijafari7281 7 months ago +64

    "It's really easy!"
    Until you need to optimise it...
    Great video btw.

    • @oldcowbb 6 months ago +1

      true for many things in comp sci

  • @AySz88 7 months ago +9

Oof, the question from Brady at 14:10 about the "pixel" units, that's then glossed (ha) over, really does touch upon one of the hard subtle things about raytracing: avoiding infinite loops. If you're not careful, you can make lighting and reflections that don't "conserve energy" and end up unpredictably creating an infinite number of rays, implying an infinite amount of light, or (usually) both!
So a ray can't simply be a "pixel" - you have to be pretty careful with precise units ("radiance" vs "radiosity", etc.), making it potentially less forgiving to create new effects with than rasterization shaders. Meanwhile rasterization can start out a lot more intuitive for artists that would like to think of the image like it's a canvas. Raytracing isn't all easier all the time.
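
  One standard way to keep the ray tree finite, as the comment above warns, is Russian-roulette termination plus a hard depth cap. A toy sketch in Python; the one-argument scene callable and its (emission, reflectivity) return value are invented for illustration.

      import random

      MAX_DEPTH = 8  # hard cap, so a hall of mirrors can never recurse forever

      def radiance(scene, ray, depth=0):
          """Russian roulette: each bounce survives with probability survive_p,
          and surviving paths are reweighted by reflectivity / survive_p, which
          keeps the average correct (the "conserve energy" bookkeeping)."""
          if depth >= MAX_DEPTH:
              return 0.0
          hit = scene(ray)                      # -> (emission, reflectivity) or None
          if hit is None:
              return 0.0
          emission, reflectivity = hit
          survive_p = min(reflectivity, 0.95)   # never certain survival
          if survive_p <= 0 or random.random() >= survive_p:
              return emission                   # path killed: no new rays at all
          return emission + (reflectivity / survive_p) * radiance(scene, ray, depth + 1)

      # Toy "scene": every surface emits 0.1 and reflects half the light,
      # so the true answer is 0.1 / (1 - 0.5) = 0.2 on average.
      print(sum(radiance(lambda r: (0.1, 0.5), None) for _ in range(10000)) / 10000)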

  • @jalsiddharth 7 months ago +17

    OMG ITS MY TA FROM THE COMPUTER GRAPHICS MODULE, LEWIS!!!! LETS GOOO!

  • @chrischeetham2659 7 months ago +129

    Great video, but my brain couldn't cope with the monitor prodding 😂

    • @MelHaynesJr 7 months ago +13

      I am glad I wasn't the only one. I was screaming in my head

    • @AySz88 7 months ago +11

      I was curious and googled the model number. It's apparently a ~2012 monitor discontinued prior to ~2018. I still wouldn't approve, but I could imagine its continued existence being considered more bane than boon.

    • @mihainita5325 7 months ago +1

      Same, I wanted to yell "stop doing that!" :-)
      The only explanation (in my head) was that it is in fact a touch screen (the way it deformed it might be?). So touching it is then a very natural thing to do.

    • @TheGreatAtario 7 months ago +6

      @@mihainita5325 Touch screens don't do that light-squish-ripple effect. Also if it were a touch screen I would have expected the touches to be doing clicks and drags and such

  • @arrowtlg2646 7 months ago +9

    6:20 man seeing that campus is a throwback! Miss Nottingham!

  • @yooyo3d 7 months ago +3

I wrote my first ray tracer ~30 years ago on a 486 PC in MS-DOS. I used the Watcom C compiler to build 32-bit code and to use the FPU on the CPU.
My friend and I developed a wide variety of math functions to calculate the intersection between a line and a triangle, sphere, quadratic surface, boxes, even Bezier surfaces. Then we developed math functions to describe materials and surfaces, to calculate refractions and reflections. Our material system allowed us to define objects with multiple reflection and refraction indices. Then we developed procedural texturing, soft shadows, area lighting, CSG, "blobby" objects... We knew the slowest thing was a scene with triangles, so we tried hard to describe the scene with more or less complex math functions.
Part of the project was to speed up ray hits, so we tried various algorithms like octrees, bounding spheres and bounding boxes, and finally stuck with a somewhat unique approach: project the bounding boxes onto the world axes and then step through entry/exit points, like open and closed braces.
The scene was described in code itself. We didn't have any 3D editor. I was trying to develop one but eventually gave up.
This was long before we had internet access.
Then we got internet access and we found POV-Ray.
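
  For the curious, a sketch in Python of what one of those line/sphere intersection functions can look like (the names and the epsilon are illustrative, not the comment author's actual code).

      import math

      def intersect_sphere(origin, direction, center, radius):
          """Smallest positive t where origin + t*direction meets the sphere,
          or None on a miss; solves |o + t*d - c|^2 = r^2 with unit d (a = 1)."""
          lx, ly, lz = (origin[i] - center[i] for i in range(3))
          b = 2 * (lx * direction[0] + ly * direction[1] + lz * direction[2])
          c = lx * lx + ly * ly + lz * lz - radius * radius
          disc = b * b - 4 * c
          if disc < 0:
              return None                   # ray misses the sphere entirely
          sqrt_disc = math.sqrt(disc)
          for t in ((-b - sqrt_disc) / 2, (-b + sqrt_disc) / 2):
              if t > 1e-6:                  # epsilon avoids "shadow acne"
                  return t
          return None                       # sphere is behind the ray

      # Unit-direction ray aimed at a radius-1 sphere 5 units away:
      print(intersect_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1))  # 4.0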

  • @BrianMcElwain 7 months ago +27

    A golden opportunity was missed here to explain subsurface scattering par excellence via that 99.9% translucent skin of dear Lewis here.

    • @TheSliderW 7 months ago

And bounce light contributing to object color and lighting, i.e. the green grass lighting him up on the left side :)

  • @Sora_Halomon 7 months ago +4

I know why most Computerphile videos are filmed indoors, but I really like seeing outdoor footage. It kinda reminds me of the early Numberphile videos filmed in the stadium and roads.

  • @realdarthplagueis 7 months ago +20

    I remember the Persistence of Vision ray-tracer from the 90s. I was running it on a 486 Intel PC. Every frame took hours to render.

    • @user-zz6fk8bc8u 7 months ago +2

      Me too. It was awesome.

    • @philp4684 7 months ago +3

      I played with it on my Atari ST. A 320x200 image would take all night, and you'd have to use an image viewer that did clever things with palette switching in timer interrupts to make it display more than 16 colours on screen to even see the result.

    • @feandil666 7 months ago +1

yep, and now a high-end game on a high-end GPU can raytrace a 4K image in 15 ms (cheating of course; there are things like DLSS that allow the game to raytrace at a much smaller resolution that is upscaled after)

    • @ryan0io 7 months ago +1

Who remembers DKBTrace, before there was POV-Ray? I ran it on a 386. Talk about needing patience.

    • @Zadster 7 months ago

      POV-Ray was incredible. I first used it on my 386SX20. Even rendering 80x60 pixel images needed a LOT of patience. When I got my first 24-bit video card it was mindblowing! That and FractInt really soaked up CPU cycles.

  • @DS-rd8ud 7 months ago +7

    16:01 Arnold warning sign telling people to not touch the robots or the table. The ultimate security measure.

  • @luispereira628 7 months ago +1

    Whenever someone who is very passionate about his work speaks you know the video will be great! Great explanation and loved the passion 😊

  • @DigitalJedi 7 months ago +3

    Great breakdown of how ray tracing works. I'd love to see another video comparing the usual ray-traced approach to things like cone tracing and the other shape-tracing ideas. Would be interesting to see what optimizations and tradeoffs each makes.

  • @250bythepark 7 months ago +1

    I think you're a great addition to Computerphile, hope you make more videos, really interesting stuff!

  • @IanFarquharson2 7 months ago +22

    1987, BSc computer science graphics course, same maths, but all night to render a teapot on a minicomputer. PS5 today doing cars in realtime.

  • @therealEmpyre 7 months ago +1

    In 1986, I wrote a program that used ray casting, a more primitive form of ray tracing, for a university project. It took several minutes for that 286 to render a simple scene at 320 by 200 by 256 colors in VGA.

  • @BruceZempeda 7 months ago +1

    Best computerphile video in a while

  • @Joseph_Roffey 7 months ago +16

I was sad that he didn’t address “do mirrors themselves get treated as potential light sources?”
    Because from the way he described it, it didn’t seem like they would automatically do so, and also because in the final example I was surprised the shadow didn’t seem to change shape when the mirrors were added as surely the light would’ve been able to hit more of the shadow with the mirrors on either side.

    • @AnttiBrax 7 months ago +27

Mirrors aren't light sources per se. Whenever the ray hits any surface you basically start the same calculation as you did when you shot the first ray and add that result to the original ray. So a ray that hits a mirror may eventually hit a light source. What was skipped here was that the ray can keep bouncing hundreds of times before it hits a light source, and each hit affects the colour of the pixel. And that's why ray tracing is so slow. (A sketch of this recursion follows this thread.)

    • @unvergebeneid 7 months ago +3

      Mirrors are just another surface in ray tracing but they are funnily enough the easiest surface to compute. Diffuse materials are much harder because each point needs to generate in theory infinitely many rays itself, whereas a perfect mirror needs only a single ray.

    • @SMorales851 7 months ago +1

      No, mirrors are not light sources. For the mirrors to have the effect you described, a less basic raytracer is necessary. Typically, the rays are not bounced directly towards the light (that's more of classic rasterizer thing). Instead, the behavior depends on the type of surface. Smooth, mirror-like surfaces bounce the ray in one specific angle, like mirrors do in real life. Rough surfaces, instead, "split" the ray into smaller rays that shoot out in random directions; the color of the surface is then the sum of the colors returned by those rays. The more of those subrays you have, and the more times they are allowed to split and bounce recursively, the better the image quality (but performance suffers greatly). That random ray bouncing generates what's known as "indirect lighting", which is all light that doesn't come directly from a light source, but instead reflected off of something else first.

    • @jcm2606 7 months ago

      @@SMorales851 To be pedantic, some raytracers do actually trace rays directly towards light sources. There's an entire optimisation technique called next event estimation where a subset of rays are specifically dedicated to being traced towards known light sources, then the returned energy value is weighted to conserve energy since you technically did introduce some bias to the algorithm by doing this. There's also another optimisation technique called reservoir importance sampling which generalises NEE (specifically as part of multiple importance sampling which combines NEE with BRDF importance sampling) to sample _pixels_ that are known to contribute meaningfully to the image, rather than specifically known light sources (this technique is commonly known as ReSTIR, though reservoir sampling is useful in other areas so ReSTIR isn't the only use of it).
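
  A toy sketch in Python of the recursion described in this thread; the scene object, its intersect/occluded methods and the hit fields are an invented stand-in, not a real API.

      def reflect(d, n):
          """Perfect mirror bounce of unit direction d about unit normal n."""
          k = 2 * sum(di * ni for di, ni in zip(d, n))
          return tuple(di - k * ni for di, ni in zip(d, n))

      def shade(scene, origin, direction, depth=0):
          """Mirrors are not light sources: hitting one just restarts the same
          calculation along the reflected ray, and whatever that ray eventually
          sees (maybe a light) is what the mirror shows."""
          hit = scene.intersect(origin, direction)   # assumed toy API
          if hit is None or depth > 4:               # cap the bounce count
              return scene.background
          if hit.mirror:
              return shade(scene, hit.point, reflect(direction, hit.normal), depth + 1)
          # Diffuse surface: one shadow ray decides lit vs. shadowed.
          lit = not scene.occluded(hit.point, scene.light)
          return hit.color if lit else scene.ambient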

  • @jordantylerflores 7 months ago +3

    Thank you! This was very informative. Could you do a segment on the differences between Ray Tracing and Path Tracing?

  • @paull923 7 months ago +1

great video, thank you very much!
Regarding 10:01 "How do we figure out what objects we've hit", can you make a video about that?

  • @ChadGatling 7 months ago +4

My first thought is: how would ray tracing handle a scene where there is something next to a big red wall, where you really should be able to see some red reflections? If the ray just hits the car then goes to the sun, the ray will never see the wall and not know to tint the car a bit red.

    • @kevingillespie5242 7 months ago +1

      (i have no graphics experience but) My guess is you can track rays that bounce off the wall and hit the car. Perhaps construct some sort of graph that tracks all the values at each point a ray hits so you can track how much light from the car is supposed to reflect off the wall? But each feature like that will make it more expensive / memory intensive.

    • @mrlithium69 7 months ago +2

wouldn't apply to this convo. need additional calculations - specular and diffuse reflection

    • @sephirothbahamut245 7 months ago +6

You do multiple ray bounces. More rays, more bounces = more realistic image. For realtime rendering you mostly stick to 1 or 2 bounces. Stuff like rendering for cinema can easily go past 200 bounces, and hundreds of rays per pixel. That's why they can take days to render a scene. (A sketch of one such random bounce follows this thread.)

    • @BeheadedKamikaze 7 months ago

      @@sephirothbahamut245 You're correct, but that process is typically called path tracing to make it distinct from ray tracing, which does not account for this effect.

    • @AySz88 7 months ago

      Ironically this is what the Cornell Box is supposed to test too (note the big red and green side walls). There's also an actual photo of a real life Cornell Box to compare to, where you see the effect you mention.
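
  A sketch in Python of one such random bounce: tracing a second ray in a random hemisphere direction is what lets the red wall tint the car (the rejection sampling here is simple but slow, purely for illustration).

      import math, random

      def normalize(v):
          length = math.sqrt(sum(c * c for c in v))
          return tuple(c / length for c in v)

      def random_bounce(normal):
          """Random unit direction in the hemisphere above the surface; shading
          whatever this ray hits adds indirect light ("colour bleeding")."""
          while True:
              v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
              if 0 < sum(c * c for c in v) <= 1.0:
                  break
          v = normalize(v)
          if sum(a * b for a, b in zip(v, normal)) < 0:
              v = tuple(-c for c in v)   # flip into the visible hemisphere
          return v

      print(random_bounce((0.0, 1.0, 0.0)))  # some direction with y >= 0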

  • @shreepads 7 months ago +27

The explanation feels unsatisfactory, probably in an attempt to keep things simple. E.g. a point where the light source is occluded would be rendered black, but clearly it's picking up diffused light from other objects.

    • @jean-naymar602 7 months ago +4

      I don't think that's diffuse lighting in this specific implementation. It looks like the shadow color is set to some greyish black.
      You could set the shadow color to any color, it does not need to be black.

    • @mytech6779 7 months ago +2

They call that ambient lighting, and it's just a preset level of light applied to every pixel.
Doing true diffuse lighting via raytracing is not possible with current computing power. There are far too many re-reflection calculations.
Even the hard shadows will not be accurate in his mirror-room example, because the rays would need to originate at the light source, not at the observer, to have any practical chance of finding all of the primary reflections that would be lighting the "shadow" area (let alone secondary or tertiary reflections). The reflected lighting thing can be done to a limited extent, but the rendering time increases several orders of magnitude, so it is only used in pre-rendered scenes, not realtime gaming. (A sketch of the ambient floor follows this thread.)

    • @shreepads 6 months ago

      Thanks!
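
  A sketch in Python of the constant ambient floor described above (the function and its default value are invented for illustration).

      def pixel_color(surface_rgb, light_rgb, n_dot_l, in_shadow, ambient=0.15):
          """Local shading with an ambient floor: occluded points get `ambient`
          instead of zero, which is why the shadows look grey, not black."""
          direct = 0.0 if in_shadow else max(n_dot_l, 0.0)
          return tuple(s * (ambient + l * direct)
                       for s, l in zip(surface_rgb, light_rgb))

      # A shadowed blue surface still returns a dim (ambient-only) colour:
      print(pixel_color((0.2, 0.4, 1.0), (1.0, 1.0, 1.0), 0.7, in_shadow=True))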

  • @zebraforceone 7 months ago +2

A question at @14:20: I see the bounce towards a fixed number of point lights.
How does this work in ray tracing with emissive surfaces?

    • @feandil666 7 months ago +1

Same: the surface is just considered a light source directly, so its emissivity is added to whatever light would normally be there.

  • @karolispavilionis8901 7 months ago +2

    In the last comparison with mirrors, shouldn't the shadow on the box be smaller because of light bouncing off the mirror, therefore illuminating below the box?

    • @cannaroe1213 7 months ago

Mirrors reflect light, but that doesn't make them special. A white surface will probably reflect MORE light, but a mirror keeps the light linear without scattering. So a mirror could reflect less light, because it makes a reflection, but it's darker. Does that make sense?
So anyway, since everything reflects light, not just mirrors, with some average amount of "reflectiveness" based on the material properties, they have something that's the opposite of shadow maps, called light maps, which layers additional light over everything based on the math behind what's emitting light

    • @jcm2606 7 months ago

      In short, yes, it should. Light from the light source and the rest of the scene should specularly reflect off of the mirror and onto the floor below the box, illuminating the box's shadow. It's not doing that in the raytracer likely because the raytracer is simplistic and isn't handling multiple bounces correctly (if at all), but if it did then you'd naturally see what you're describing (especially with a path tracer, which is the big boy raytracer).

    • @jcm2606 7 months ago

      @@cannaroe1213 None of this makes any sense. Firstly, mirrors (or, rather, metals) are actually a little special since they're one of the few surfaces that can reflect almost the entirety of incoming light in any outgoing direction, whereas most other surfaces will generally lose some amount of light to diffuse transmission at perpendicular angles (ie angles where you're looking straight down at the surface).
      Secondly, because mirrors keep the light linear without much scattering (being real pedantic here but there will always be _some_ scattering due to debris on the surface and inner layers of the mirror, and imperfections in the mirror's surface), the light they reflect is typically actually _brighter_ for the outgoing direction since more of the incoming light is exiting in that outgoing direction (light source emits 100% incoming light; incoming light reflects off of a very rough surface, 95+% of the incoming light is scattered in different directions,

  • @totlyepic 7 months ago +13

    16:22 Dude must want a new monitor the way he's jamming his finger into this one.

  • @ukbloke28 7 months ago +1

    Wait. Where did you get that oldschool printer paper? I used to draw on that as a kid in the 70s, you just gave me mad nostalgia. I want to get hold of some!

  • @HarhaMedia 7 months ago

    Writing a bunch of raytracers as a hobbyist programmer really helped me understand vectors and matrices.

  • @whatthesigmaW 3 months ago

    when you explained that all you needed to do was think in the opposite way in how real life *rendering* works, I immediately understood.

  • @jonnydve 7 months ago

I love that you are using (presumably old stock of) infinite paper. Growing up I used to draw lots on there because my grandfather had tons of it still lying around (I was born in '99)

  • @Bugside 7 months ago +2

Should have shown bounce light, ambient occlusion, colors affecting other objects. I find those the coolest effects.

  • @lolroflmaoization 7 months ago +5

Honestly you would get a much better explanation of rasterization and its limitations as compared to raytracing by watching Alex's videos on Digital Foundry

  • @richardwigley 7 months ago +18

Tell me you didn’t pay for your monitor without telling me you didn’t pay for your monitor….

  • @musthavechannel5262 7 months ago +11

Obviously it is an oversimplified version of ray tracing, since it doesn't explain why shadows aren't pitch black.

    • @trevinbeattie4888 7 months ago +1

      Ambient lighting :)

    • @evolutionarytheory 7 months ago

It's implied. If the light source has a non-zero width it's implied. If you bounce the ray more than once it's also implied. But he didn't cover stochastic raytracing, which would have made it more obvious.

    • @vast634 6 months ago

      If you look at a raw raytraced image with a very low sample count and no denoising, it looks very noisy with many over bright and over dark pixels. Using more rays or denoising will smooth the result to make the ambient lighting appear.

    • @evolutionarytheory 6 months ago

@vast634 you are confusing terms. Denoising and using more samples have nothing to do with either soft shadows or ambient lighting. Soft shadows are only a function of the size of the light source. Ambient lighting either appears when you disable shadows or use indirect lighting (ray continuation) or "bounces". Denoising will only reduce the existing noise. It won't "add" ambient lighting.

    • @vast634 6 months ago

@@evolutionarytheory Simple ray tracers will point a ray from the reflection point somewhere to get a sample of how bright the next hit surface / sky is, and this might or might not hit a light source. The chosen angle is random when the surface is fully rough. Denoising creates an average over many of those nearby samples.

  • @zxuiji 7 months ago

5:50, Occurs to me at this point that the fragments could be a way of checking for objects in the way. If, for example, you've already calculated a fragment to be nearer to the camera than the one you're processing, but the one you're processing should have affected the chosen fragment, you could just take on that fragment's details and apply the reduction in lighting before finally overriding the nearer fragment with the current fragment, which is now masquerading as the original. Naturally, if the fragment you're working on is nearer, you can just apply the lighting reduction normally and carry on.

    • @jcm2606 7 months ago +2

This is an actual technique and is called screen-space raytracing or screen-space raymarching. Basically, a ray is marched across the screen's depth buffer until it either reaches its destination or the ray goes behind a pixel (which is determined by comparing the ray's depth to the pixel's), with different methods of handling each case depending on what it's being used for. The problem with this technique is that you don't know how "thick" the pixel is, so you don't know if you've _actually_ hit the object that the pixel belongs to or if you're sufficiently far behind it to not have hit it. You can sort of approximate the thickness in a couple different ways, but you'll always end up with false positives where the ray thinks it missed the object when it actually hit it (or vice versa), plus you don't know what other objects are behind the pixel so you don't know if you've hit some other object. For that reason it's not the best technique to use for shadows (it can work sometimes so some games do use it as a fallback to a main shadowing technique), but it _is_ commonly used for reflections since, for most surfaces, reflections are primarily visible at grazing angles where the limitations of the technique aren't as painful to deal with (doing reflections this way is called screen-space reflections). (A sketch of the depth-threshold march follows this thread.)

    • @zxuiji 7 months ago

@@jcm2606 damn YT and its shadow deleting. I replied to this once and my post still isn't there. I basically said both problems are solvable: the pixel thickness by distance from camera, and the multiple-object thing by a shadow pixel that accumulates pixel values to apply.

    • @jcm2606 7 months ago

      *"The pixel thickness by distance from camera"*
      This is a very crude approximation of thickness, more crude than the industry standard of just having a fixed depth threshold. The thickness is meant to be measuring how long the object is along the screen's Z axis, so basing it off of the distance to the camera isn't correct as distance to the camera doesn't affect how long the object is along the screen's Z axis (ignoring perspective, which is already accounted for by coordinate space transformations).
      *"the multiple object thing by a shadow pixel that accumulates pixel values to apply"*
      This won't do anything at all as what we're looking for is what range of depths actually contain objects. Say we had a depth buffer that was in the range [0, 1]. We had three objects positioned at the current pixel, with the first object occupying depth range [0.1, 0.2], the second object occupying depth range [0.4, 0.6], and the third object occupying depth range [0.7, 1.0].
      What we'd like to know is if a ray has hit an object (ie is in one of those three ranges) or has missed all objects (ie is outside of those three ranges), but the problem is that the depth buffer can only store the _minimum_ value of all three of these ranges, which in this case is 0.1. Even though we know based on intuition that there's gaps between each of these ranges, the depth buffer can't store anything more than the minimum of all ranges, so the ray only ever sees that there's nothing in the depth range [0, 0.1] and something in the depth range [0.1, 1.0]. It has no idea what's behind depth 0.1.
      There are a few different techniques that try to address this, but none are perfect. The simplest technique would be deep depth buffers which allows you to store multiple depth values in a depth buffer as separate depth samples. This would let you at least store multiple minimums to get a better idea of the scene composition (especially if you were to combine deep depth buffers with a dedicated back face pass to store the depths of back faces in addition to front faces, letting you get both parts of each object's depth range), but it limits you to a specific number of depth values (2, 4, 8, 16, 32, etc) and each new depth value you add increases the memory footprint of the depth buffer by an additional 1x (ie 10 depth values = 10x memory footprint), so it's impractical for this alone (since deep depth buffers were intended to be used with transparent objects, so using them for screen-space raytracing would add even more memory usage).

    • @zxuiji 7 months ago

@@jcm2606 Still reading your msg, but pixels do not need to know the object length; they're always a fixed size, and since normalisation simplifies logic (you can apply the camera dimensions after) it's always better to treat the pixels at the camera as 1x1x1 and scale down (+Z) or skip (-Z) based on distance from camera

    • @zxuiji 7 months ago

@@jcm2606 For the depth thing the shadow pixel DOES work; remember each pixel starts off assuming it has no objects in front of it. If it's further from the camera it's hidden by the pixel in front, so its values just get added to the accumulator. If it's closer then its values get added to the accumulator and the colour values set with 0 light applied yet. Once all camera pixels and shadow pixels have been set, the camera's pixels are looped over and the accumulated values are applied to the colour of the pixel. So if fragment 1 takes 0.1 light, 2 takes 0.3 and 3 takes 0.4, then there's 0.2 left to multiply against the colours. I'll move onto reading the last chunk of your msg now
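
  A sketch in Python of the depth-threshold march described in this thread; the buffer layout and the thickness value are assumptions for illustration.

      def screen_space_trace(start, step, depth_buffer, steps=64, thickness=1.5):
          """March a ray in screen space: report a hit once the ray's depth goes
          behind the stored depth by no more than `thickness` (the unknown
          "pixel thickness" discussed above); otherwise it's a miss."""
          x, y, z = start
          for _ in range(steps):
              x, y, z = x + step[0], y + step[1], z + step[2]
              xi, yi = int(x), int(y)
              if not (0 <= yi < len(depth_buffer) and 0 <= xi < len(depth_buffer[0])):
                  return None               # ray left the screen: a miss
              scene_z = depth_buffer[yi][xi]
              if scene_z < z <= scene_z + thickness:
                  return (xi, yi)           # ray passed just behind this pixel
          return None

      # 8x8 depth buffer, far plane at 10, one nearby "object" pixel at (6, 4):
      depth = [[10.0] * 8 for _ in range(8)]
      depth[4][6] = 2.0
      print(screen_space_trace((2.0, 4.0, 1.0), (1.0, 0.0, 0.5), depth))  # (6, 4)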

  • @vast634 6 months ago

An intermediate step: rasterizers using ray tracing offline (before rendering) to determine shadows and bounce lighting. The classic light baking. It allows (static) shadows and bounce lighting in a fast rasterizer engine, without the runtime costs.

  • @MatthewHoworko 7 months ago

    Can we just appreciate how you don't hear the wind in the audio for the outside demo section

  • @SeamusHarper1234 7 months ago +1

    I love how you got the green color all over your hands xD

  • @AlbertoApuCRC 7 months ago

    I followed a raytracing course a few years ago... loved it

    • @WeeeAffandi 7 months ago

How different is Path Tracing?

  • @emwave100 7 months ago

    I vote for more computer graphics videos. I am surprised ray tracing hasn't been covered on this channel before

  • @Roxor128 7 months ago +1

    And let's not forget the mad geniuses of the Demoscene who pulled off real-time ray tracing as far back as 1995!
Recommended viewing: Transgression 2 by MFX (1996), Heaven 7 by Exceed (2000), and Still Sucking Nature by Federation Against Nature (2003). All of them have video recordings uploaded to YouTube, so just plug the titles into the search box.

    • @bishboria 7 months ago

      Heaven 7 was/is amazing

    • @yooyo3d 7 months ago

The TGR2 by MFX intro didn't use any ray tracing. It was a very effective fake.

    • @Roxor128 7 months ago

      @@yooyo3d Citation? Better still, how about an annotated disassembly walking us through what it actually does?

  • @glitchy_weasel 6 months ago

    I think it would be fun for more outdoor Computerphile episodes :)

  • @4santa391 7 months ago +12

    am I the only one getting triggered by the screen touching? 😆 16:15

    • @realdarthplagueis 7 months ago

      Agreed.

    • @rich1051414 7 months ago +1

      I am not sure how he has so much marker ink on his hands as well... There is something about this guy that triggers me 100 different ways, but I am trying to keep it bottled down.

  • @frickxnas 7 months ago

    Ray tracing for rendering and ray picking for mouse are incredible. Been using them since 2014

  • @MKBlackbird 7 months ago +1

Ray tracing in modern games is optimized by shooting fewer rays and then "guessing" how it would have looked with all the rays, using AI. It is super cool how that makes ray tracing actually feasible for games. Now another interesting approach is bringing in a diffusion model, by either dreaming up the final image from a color-coded (cheaply rendered rasterization) segmentation of the scene, or just adding a final touch on top of a normally rendered frame. I imagine the diffusion models and other similar approaches will become increasingly fast to actually make this possible. It would be like SweetFX with style transfer.

    • @jonsen2k 7 months ago +2

      We're still using rasterization at the bottom as well, aren't we?
I thought ray tracing was only used to get shadows and reflections and stuff more lifelike on top of the otherwise rasterized frame.

    • @MKBlackbird 7 months ago +1

      @@jonsen2k Yes, that's true.

    • @jcm2606 7 months ago +2

AI was really only introduced recently with NVIDIA's ray reconstruction, and even that more so seems to be just a neural network performing the same work that a traditional denoiser does. Outside of RR, games tend to use traditional denoisers like SVGF or A-SVGF, which don't use any AI at all. Typically they'll use an accumulation pre-pass, where raw raytracer outputs are fed into a running average spanning multiple frames (anywhere from half a dozen to possibly 100+ frames) to try to gather as many samples as possible across time, then they'll feed the output of the accumulation pre-pass into a series of noise/variance-aware spatial filters which selectively blur parts of the image that are considered to be too noisy. (A sketch of the accumulation idea follows this thread.)

    • @DripDripDrip69 7 months ago

@@jonsen2k There are different levels of implementation: some games only have a few ray traced effects like RT shadows and RT reflections, some have RT global illumination to replace light-probe based rasterization GI (the most transformative RT effect imo), so they have ray traced effects slapped on top of the rasterized image. Some go all out with path tracing like Cyberpunk and Alan Wake 2; those have very few rasterized components apart from the primary view, and if I'm remembering it right, in Portal RTX and Quake RTX even the primary view is ray traced, so there's no rasterization whatsoever.
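
  A sketch in Python of the accumulation pre-pass idea mentioned above, reduced to a single pixel's running average (the alpha value is illustrative).

      def accumulate(history, sample, alpha=0.1):
          """Exponential running average across frames: low alpha = long history
          and smooth output, at the cost of ghosting when the scene moves."""
          return tuple(h * (1 - alpha) + s * alpha for h, s in zip(history, sample))

      # Blend noisy one-sample-per-pixel frames into a stable colour over time:
      pixel = (0.0, 0.0, 0.0)
      for frame in range(100):
          noisy = (1.0, 0.2, 0.2) if frame % 2 else (0.0, 0.0, 0.0)  # fake noise
          pixel = accumulate(pixel, noisy)
      print(pixel)  # approaches the true average (0.5, 0.1, 0.1)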

  • @skyscraperfan 7 months ago +3

    Doesn't it get complicated, if a ray hits a rough surface and is diffused in many directions?

    • @user-zz6fk8bc8u 7 months ago +3

Yes in practice (because real ray tracing is still more complicated than the shown examples), but in theory it's simple. Just shoot more rays per pixel, and if you hit a surface you randomize the continuation path based on the roughness. This way you can even render stuff like foggy half-transparent glass. (A sketch of this follows this thread.)

    • @skyscraperfan 7 months ago +1

      @@user-zz6fk8bc8u That sounds like a lot of computation per pixel, because if some of those rays hit another rough surface, you would need even more rays.

    • @AnotherPointOfView944 7 months ago +1

      @@skyscraperfan There is a cut off point obviously, otherwise it would never finish.

    • @SteelSkin667 7 months ago +3

      @@skyscraperfan That is why in practice rough materials are more expensive to trace against. In games where RT is only used for reflections there is often a roughness cutoff, and sometimes the roughness is even faked by just blurring the reflection.
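
  A sketch in Python of the roughness-randomized continuation described in this thread; the linear blend stands in for a proper BRDF and is only illustrative.

      import math, random

      def _unit(v):
          length = math.sqrt(sum(c * c for c in v))
          return tuple(c / length for c in v)

      def scattered_direction(direction, normal, roughness):
          """Roughness-controlled bounce: 0 = perfect mirror (one sharp ray),
          1 = fully diffuse (random hemisphere direction). Rougher surfaces
          spread rays wider, so more rays per pixel are needed to kill noise."""
          k = 2 * sum(d * n for d, n in zip(direction, normal))
          mirror = tuple(d - k * n for d, n in zip(direction, normal))
          rand = _unit(tuple(random.gauss(0, 1) for _ in range(3)))
          if sum(r * n for r, n in zip(rand, normal)) < 0:
              rand = tuple(-c for c in rand)        # keep it above the surface
          return _unit(tuple((1 - roughness) * m + roughness * r
                             for m, r in zip(mirror, rand)))

      print(scattered_direction((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 0.3))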

  • @bengoodwin2141 7 months ago

They mentioned Minecraft using ray tracing: there is an experimental version of the game that uses it, but the main version of the game still uses rasterization. There are also fan-made modifications that add ray tracing and/or extra shaders to make the game look nicer as well.

  • @OnionKnight541 7 months ago

    can you please do a part II of this video using (say) apple's SceneKit / Vision frameworks, so that we as devs can see how that is implemented properly?

  • @toxicbullets10 7 months ago +2

    intuitive explanation

  • @unvergebeneid 7 months ago +1

    Would've been nice to compare the toy ray tracer he built as an undergrad to an actual ray tracer with bounce lighting. Just to illustrate how much the explanation in this video only barely scratches the surface.

  • @scaredyfish 7 months ago

    I’d like to know more about how modern game engines use ray tracing.
    As I understand it, it’s still rasterised and the ray tracing is an additional step. They do a low resolution ray trace of the scene that’s then denoised and used as a lighting pass - that’s how it can run at game frame rates. Is that understanding correct?

  • @jonnypanteloni 7 months ago

    I can finally gather my steps and talk to hilbert about how I just can't stop dithering on this topic. Maybe I should get my buckets in order.

  • @badcrab7494 7 months ago

    In future with more powerful computers, would you do ray tracing the correct way round with rays starting at the light source rather than the camera?

    • @Juansonos 7 months ago

I believe that we start at the camera to control how many calculations need to be done. Starting rays at the light source sends more rays to more places, and a lot of those would not be useful to us rendering a scene from the camera's vantage point, leading to wasted computations that did nothing to make the result better.

  • @jojox1904 7 months ago

Can you do another video discussing the opportunities and dangers of AI? The last videos on this I saw from your channel were from 8 years ago and I'd be curious about an updated view on this

  • @sbmoonbeam 7 months ago

I think a modern game's rendering pipeline, for something like Cyberpunk 2077, will be a blend of these techniques: a physically based rendering (PBR) pipeline using a rasterizing pathway, enhanced by ray tracing (using your RTX/compute pipeline pathway) to calculate reflectance and refraction effects, rather than calculating every pixel in the render from first principles.

  • @j7ndominica051 7 months ago

    His head is filling the Zed-buffer. Where does diffuse light come from, which has been reflected from other nearby objects?

  • @lundiz92 2 months ago

    What happens if you put a mirror opposite a mirror?
    Would the gpu calculate the ray to infinity and explode?

  • @adityavardhanjain 6 months ago +1

Ray Tracing is one of those things people keep referring to without actually knowing what it is.

  • @zxuiji 7 months ago

    4:23, uh, couldn't you just check the fragment's position BEFORE deciding to colour it? You calculate the position, check against the buffer position, if it's not nearer you just move onto the next fragment, otherwise begin identifying what colour to give it.

  • @Parax77 7 months ago

    in that last scene.. whilst the view was reflected in the mirror the light source was not? how come?

    • @jcm2606 7 months ago

      Would need a better image to really know for sure since the angle of the camera wouldn't have allowed us to see the light source to begin with (it looked like the light source was at the center of the ceiling, whereas we could at best see the far left and right sides), but it could have also been that the raytracer just wasn't set up for it to begin with. The style of raytracing that he was describing will generally treat all light sources as infinitely small points, so at best you'll only get one or two pixels that represent the light source, which may have been why we couldn't see it. Generally in that case you need to either trace rays in random directions within a cone pointed at the light source (which simulates a spherical light source with a specific radius), or you need to have the shaders set up to handle glossy surfaces to "blur" the reflection of the light source out across the surface.

  • @ai_outline 7 months ago +1

    Computer Science is awesome!

  • @morlankey 7 months ago

    Why is the ray-traced blue box casting a shadow below it but isn't lighter on top?

  • @ocamlmail 6 months ago

    Tremendously cool, thank you!

  • @erikziak1249 7 months ago

    Why is the shadow grey and not black then if you do not render anything there?

  • @nickthane 7 months ago

    Highly recommend checking Sebastian Lague’s video where he builds a raytracing renderer step by step.

  • @Kane0123 7 months ago

    “Now you can see why that’s more efficient” - another line added to my CV.

  • @thenoblerot 7 months ago

I started ray tracing with POV-Ray on my 386/387 with 2 MB RAM. Hours or even days for a 320x240 image!

  • @TESRG35 7 months ago

    Wouldn't you need to also trace a line from the shadow back to the light source *through the mirror*?

    • @jcm2606 7 months ago

      Yes, though typically it's not framed that way. Typically with this style of raytracing (recursive raytracing) you "shift the frame of reference", so to speak, each time you start "processing" a new ray, so it's more like you tracing a line from the mirror to the floor just in front of the blue cube, then tracing a new line back to the light source and "passing the light" back to the mirror and eventually back to the camera.

  • @DavidDLee 7 months ago

    How much of this printer paper do you still have?
    Do you still use a printer which takes them?

  • @GfoxSim 6 months ago

    Ray Tracing is so interesting now that we're seeing how it actually works!

  • @omegahaxors9-11 7 months ago +1

    If you've ever played minecraft or a shooter you've used a ray trace, because that's how the game knows what you're targeting.

    • @vast634 6 months ago

      Similar, but internally they would use different data structures for each task - physics raycasts and render raycasts.

  • @kenjinks5465 7 months ago

Instead of rays, could we just trace the frustum? Look for plane/frustum intersections, fragment the frustum by the plane intersections, generate new frusta from planes... return with vectorized output, not rasterized; much smaller?

  • @oskartegby 6 months ago

    When they went outside, I was expecting them to say that they had discovered that grass exists

  • @agoatmannameddesire8856 7 months ago +1

    Do ambient occlusion next :)

    • @cannaroe1213 7 months ago

      No yur an ambient occlusion.

  • @Serjgap 7 months ago +3

    I am now understanding even less than before

  • @muhammadsiddiqui2244 7 months ago

It's the first time I have seen a computer scientist "outside" 🤣

  • @bw6378 7 months ago +2

    Anyone else remember POVray from way back when? lol

    • @Roxor128 7 months ago

      Done a lot of mucking around with it. Even produced a few scenes that look nice.

  • @elirane85 7 months ago +3

I remember more than 20 years ago, I was learning to program video games, so I read a book about "real-time graphics".
The first chapter was about ray tracing. It was only about 5 pages, and it basically covered most of the theory behind it and even had a full implementation which was less than 1 page long.
But at the end of the chapter it said something like:
"But this approach can take hours to render a single frame, so this technique is only good for pre-rendering on massive server farms, and the next 300 pages will teach you how to fake it" 😋

    • @AlmerosMusicCode 7 months ago

      That's a fantastic approach for explaining the subject! Must have been a great read.

  • @jeromethiel4323 7 months ago

What's being described here is what I have always heard referred to as "ray casting," because it eliminates a lot of unnecessary calculations that used to be done with ray tracing. Ray tracing, classically, traced the rays from the light source, which is inefficient, since a lot of those rays will never hit the "camera."
I remember ray tracing software on the Amiga, and it was glacially slow, while a similar program using ray casting was much, much faster.

  • @michaelsmith4904 7 months ago

    how does this relate to ray casting?

  • @robertkelleher1850 7 months ago +6

    I'm surprised we can see anything with all the fingerprints that must be on that monitor.

    • @vast634 6 months ago

      He is like one of my colleagues who also likes to poke the screen. I hate that.

  • @i-like-turnips 4 months ago

    You keep saying "see how much easier that is". Yes it is, but it is also exponentially more computationally expensive, this is less a case of better methods for lighting and more a case of having the spare processing power to do it.

  • @ares106 7 months ago

    6:30 but can you imagine actually going outside?

  • @yard2380 7 months ago +1

    I'm both an IT guy and a 3D guy. 3D software engineers are gods, but they really SHOULD STOP POKING THE LCD screen! :P Thanks for the info, good stuff!

  • @paulmitchell2916 7 months ago

    Has anyone heard of ray tracing used for enhanced audio reverb?

  • @HeilTec 7 months ago

    Reflected light is the hardest.

  • @Gosu9765 7 months ago

I don't think this video conveyed the subject well. What I got from it is that RT looks much better (tho as explained it seemed like you can only get Minecraft graphics with rasterisers, which is so far from the truth) and that RT is slower, but simpler. Is it? All the ways to optimize BVH structures, accumulate traces across frames for real-time graphics, and then clean up artefacts from all the shortcuts you took. Studios clearly struggle with this - it's probably one of the hardest things to pull off well in the gaming industry now when it comes to graphics. What really should have been shown are the limitations of rasterisation techniques, as that's the reason the industry is moving towards RT now - SSR occlusion artefacts, shadow map resolutions, light bleeding through objects, etc. (no need to explain those - they could simply be shown). The only thing that was shown was that RT can reflect things that are not in screen space, but even that wasn't explained as a limitation of rasterisation. As a simple gamer I'm kinda disappointed, as I've seen other simple gamers explain this much better.

  • @TheGreatAtario 7 months ago +4

    All the monitor-touching is triggering me

    • @andywest5773 6 months ago +2

      Bad for the screen and leaves fingerprint marks. Why do people do that?

  • @andrewdunbar828 7 months ago +1

    I must be an old fart for thinking it's raytracing that's traditional since it goes back to at least the '70s whereas OpenGL started about 1990. For realtime game stuff raytracing is newer of course since it's only recently that some hardware has been able to do it fast enough.

  • @flippert0 7 months ago

    Plot twist: it's UK, so the brightly lit day outside was of course all CGI

  • @ishaan863 7 months ago

    11:11 something in the way

  • @aquacruisedb 7 months ago

Wonder if there's any computerphile old enough to remember what that continuous-feed dot matrix paper is actually for?! (other than marker pen crazy idea sketches)

  • @Ceelvain 7 months ago

    Looks like this video raises a lot of emotions. I hope you see there's a content mine here. ^^

  • @dinsm8re 7 months ago

    couldn’t have asked for better timing

  • @Scenesencive 7 months ago

    Field trip daaaayyyy!

    • @Scenesencive 7 months ago

Interesting video. I am fairly familiar with the subject; felt like there was maybe just one interesting key point of ray tracing missing, or of light in general, which is of course GI / indirect lighting: in theory we would like to cast, at every hit point, an infinite amount of new rays, and again at every one of their hit points, and so on. Ultimately every surface is basically a very rough form of a mirror, which results in the underside of the car not being completely black even though there is no direct ray to any light source, and the wall next to a bright red cube having a red-tinted ambient light. In games, other than reflections, this seems the most frequent use case for RT, especially when baking lightmaps is challenging, like in open-world games. Probably the subjects you mention in the 1-hour director's cut! gg

  • @4.0.4 7 months ago

    I had to do a double take to make sure this video wasn't from 2018.