simple and sensational
you are the best. I watched many videos, but this is the only one that makes sense.
Amazing, thanks from Brazil
Thank you
great explanation! short & sweet.
Thank you! Consider the case at 0:35 -- am I right that, because of refraction/reflection of the rays, we get a distorted image of the mountain? I'm comparing this with the GPS case, where signal reflection causes timing issues and therefore a distorted measured distance. Here there is no timing problem, no tight time binding; instead, multiple rays are going into our eye, but those rays have traveled different distances. So how does the eye form the image and capture proper distance and scaling? Simple averaging? I mean, there should be different images based only on reflected rays versus direct rays, but in reality we see them all mixed together, so how is the final image produced?
Simply put, for each pixel to be drawn on the screen, many rays are sent out with slight offsets. The pixel covers an area of the screen, not a single point, so using just one ray could give an essentially random result that isn't representative of most of the pixel's area. Each ray then has a probability of being reflected or refracted depending on the material it hits: a shiny surface reflects every ray in the same mirror direction, while a matte surface scatters them randomly.
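To make that averaging concrete, here is a tiny toy sketch in Python (not from the video; the "scene" and the function names are invented for illustration). It shows only the sampling part: many rays jittered across one pixel's area, with the pixel's final color being the plain average of what they return.

import random

def sample_scene(u, v):
    # Invented toy "scene": below the line v = u is a red mountain, everything else is blue sky.
    return (0.8, 0.1, 0.1) if v < u else (0.3, 0.5, 1.0)

def shade_pixel(px, py, samples):
    """Average many rays jittered across the pixel's 1x1 area on the image plane."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        u = px + random.random()      # random offset inside the pixel, not its center
        v = py + random.random()
        color = sample_scene(u, v)    # a real tracer would follow a full ray here
        total = [t + c for t, c in zip(total, color)]
    return tuple(t / samples for t in total)

# A pixel that straddles the mountain/sky boundary:
print(shade_pixel(10, 10, 1))     # one ray: all mountain or all sky, essentially a coin flip
print(shade_pixel(10, 10, 5000))  # many rays: converges to roughly a 50/50 blend

In a full tracer, sample_scene would follow each ray through its reflections and refractions, with each bounce choosing mirror reflection or random scattering according to the material it hits, but the per-pixel averaging is exactly this simple.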
If you have ever seen a video of a raytraced image forming over time, each frame adds one more ray per pixel. After the first ray you see what looks like random noise in which you can just make out the outlines of objects, but a few hundred or thousand rays later you get a pretty clean picture.
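That "image forming over time" effect is the same average kept as a running total, with one extra sample per pixel each frame. A rough sketch, reusing the toy sample_scene from above:

import random

def progressive_pixel(px, py, frames):
    """Each frame adds one more jittered sample; the displayed color is the running average."""
    accum = [0.0, 0.0, 0.0]
    displayed = (0.0, 0.0, 0.0)
    for frame in range(1, frames + 1):
        u = px + random.random()
        v = py + random.random()
        color = sample_scene(u, v)                   # toy scene from the sketch above
        accum = [a + c for a, c in zip(accum, color)]
        displayed = tuple(a / frame for a in accum)  # noisy at first, smooth later
    return displayed

print(progressive_pixel(10, 10, 1))    # "frame 1": pure noise, one of the two colors
print(progressive_pixel(10, 10, 500))  # "frame 500": already close to the true blend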
However, be advised that this describes full offline ray tracing (path tracing). Real-time rendering on modern GPUs takes a hybrid approach instead: a couple of rays per pixel are used to work out how objects are lit, and traditional rasterization does the heavy lifting.
amazing