I made a better Ray-Tracing engine

  • Published on May 9, 2024
  • Two years ago, I showed you how I created a simple ray-tracer from scratch. This is my attempt at improving my first version and showing you how I did it!
    ► My new Discord server: / discord
    ► My first Ray-tracing attempt: • I created my own Ray T...
    ► Source code: github.com/carl-vbn/opengl-ra...
    ► Valuable resources:
    - OpenGL tutorials: learnopengl.com/
    - Ray-tracing blog post: three-eyed-games.com/2018/05/0...
    - ImGUI Library: github.com/ocornut/imgui
    - TheCherno's Bloom video: • Bloom.
    🎵 Music from Epidemic Sound, register with my link to support the channel and get a discount:
    www.epidemicsound.com/referra...
    Detailed description:
    In this video, I explain how I created a ray tracing engine in C++ using OpenGL. Ray tracing is a rendering algorithm that can be used to generate photorealistic images with nice shadows and reflections. The technology was recently made accessible to many when NVIDIA announced their RTX graphics cards. I successfully implemented lighting, soft shadows, ray-traced global illumination, progressive rendering, reflections and more.
    Chapters:
    0:00 - Intro
    1:23 - GPU acceleration
    2:06 - Ray-tracing recap
    3:10 - Direct illumination
    4:37 - First result
    4:42 - Soft shadows
    5:20 - New result
    5:39 - User interface
    6:27 - Indirect illumination
    8:18 - Progressive rendering
    9:42 - Reflections
    10:24 - Skybox
    10:51 - Recursion problem
    13:50 - Anti-aliasing
    14:33 - Bloom
    15:24 - Final results & conclusion
    #Raytracing
    #OpenGL
  • Science & Technology

Comments • 528

  • @NamePointer
    @NamePointer  1 year ago +201

    Sorry for the short break :)
    I hope this video was worth the wait!
    Also, thank you so much for 10k subscribers!!
    EDIT: At around 1:17, I called OpenGL a graphics "library" which isn't the right word. OpenGL is an API, not a library!

    • @dynastylobster8957
      @dynastylobster8957 1 year ago +1

      i know this isn't entirely realistic, but what if you blurred the noise somehow instead of using accurate information?

    • @ahmeddawood8847
      @ahmeddawood8847 1 year ago +2

      use Vulkan?

    • @ssj3mohan
      @ssj3mohan 1 year ago

      You could do it for Unity, the [ Doomed Engine ]. Why not, right?

    • @localareakobold9108
      @localareakobold9108 1 year ago

      if your raytracing takes fewer resources I'll buy it

    • @teenspider
      @teenspider 1 year ago

      *s h o r t e s t* break of all time
      (joke)

  • @mgkeeley
    @mgkeeley 1 year ago +464

    A quick hack for fast antialiasing is to cast the rays through the corners of the pixels instead of the centers. It's basically the same number of rays, but you can average the 4 values for each pixel and get one AA step for free. Adding your offsets will improve the antialiasing over time as you currently have it.
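
    A minimal C++ sketch of this corner-sharing trick (traceRay, the corner-grid layout and all names here are illustrative assumptions, not code from the video):

    ```cpp
    struct Vec3 { float r, g, b; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    Vec3 operator*(Vec3 a, float s) { return {a.r * s, a.g * s, a.b * s}; }

    // Placeholder for the engine's primary-ray trace; integer coordinates
    // land on pixel corners of the virtual screen.
    Vec3 traceRay(float x, float y) { return {0.0f, 0.0f, 0.0f}; }

    // (w+1)*(h+1) corner rays instead of w*h center rays: each interior
    // corner is shared by 4 pixels, so every pixel can average 4 samples
    // for roughly the cost of 1.
    void renderCornerAA(Vec3* image, int w, int h) {
        Vec3* corners = new Vec3[(w + 1) * (h + 1)];
        for (int y = 0; y <= h; ++y)
            for (int x = 0; x <= w; ++x)
                corners[y * (w + 1) + x] = traceRay((float)x, (float)y);

        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                image[y * w + x] = (corners[y * (w + 1) + x] +
                                    corners[y * (w + 1) + x + 1] +
                                    corners[(y + 1) * (w + 1) + x] +
                                    corners[(y + 1) * (w + 1) + x + 1]) * 0.25f;

        delete[] corners;
    }
    ```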

    • @waldolemmer
      @waldolemmer 1 year ago +14

      MSAA

    • @felineboy
      @felineboy 1 year ago +54

      Add a fifth ray in the center (with 1/2 the weight, and 1/8 for each corner) and you've got the quincunx algorithm.
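
      The quincunx weighting, sketched with the same hypothetical names as the corner-grid snippet above (it builds on that sketch, so traceRay is only declared here):

      ```cpp
      struct Vec3 { float r, g, b; };
      Vec3 operator+(Vec3 a, Vec3 b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
      Vec3 operator*(Vec3 a, float s) { return {a.r * s, a.g * s, a.b * s}; }
      Vec3 traceRay(float x, float y);  // hypothetical, as in the sketch above

      // Quincunx: the pixel's 4 corner samples at 1/8 weight each, plus one
      // extra ray through the pixel center at 1/2. The weights sum to 1.
      Vec3 quincunx(Vec3 c00, Vec3 c10, Vec3 c01, Vec3 c11, float px, float py) {
          Vec3 center = traceRay(px + 0.5f, py + 0.5f);
          return center * 0.5f + (c00 + c10 + c01 + c11) * 0.125f;
      }
      ```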

    • @mrburns366
      @mrburns366 1 year ago +3

      I'm not a programmer and I can't wrap my brain around a pixel having a corner.. 🤦‍♂️ A pixel is a finite point with an X,Y coordinate, right? Say pixel 0,0 was in the upper left.. what would be the coordinates for the corners? 🤷‍♂️ Lol

    • @mgkeeley
      @mgkeeley 1 year ago +39

      @@mrburns366 good question! When casting a ray through the center of pixel "0,0", you actually cast it through coordinates "0.5, 0.5". The "screen" is virtual inside the raytracer, and has floating point coordinates for where the pixels are. Each pixel is a square with sides of length 1.0. Hope that helps!

    • @grande1900
      @grande1900 1 year ago +2

      Basically 4xMSAA

  • @ProjectPhysX
    @ProjectPhysX 1 year ago +302

    Your first raytracing video motivated me to implement fast ray-grid traversal in my CFD software for ultra-realistic fluid rendering. The simple stuff already brought me quite far. I'm amazed by the more complex techniques you show in this video. Thank you for sharing your knowledge!

    • @uniquelyrics2331
      @uniquelyrics2331 4 months ago

      that is some quite complex vocabulary

  • @pablovega7697
    @pablovega7697 1 year ago +251

    Don’t believe it. You just saw a video online. Or used google street view. There’s no way you went outside

    • @mattiskardell
      @mattiskardell 1 year ago +6

      lol

    • @thehollowknerd3858
      @thehollowknerd3858 1 year ago

      LMAO 🤣

    • @distraughtcat
      @distraughtcat 1 year ago +5

      I don’t know how he survived. The light coming out of the window would have burnt his eyes out for sure.

    • @mattiskardell
      @mattiskardell 1 year ago

      @@distraughtcat lol

  • @OllAxe
    @OllAxe 1 year ago +86

    9:17 One potential solution is to implement motion vectors and move the pixels in the buffer accordingly. That way you can move the camera while keeping old samples for additional data. Note however that newer samples need to be weighted more heavily so that new data is generated for previously invisible parts of the screen, and that specular reflections with low roughness would look inaccurate as you move around since they are dependent on the camera direction. The latter may help the former a bit, but a proper solution might need to put specular reflections in a separate buffer and handle them differently.
    This is an important part of ray-tracing in Teardown, the SEUS PTGI Minecraft shader, Quake II RTX and many other RTX-powered games, so it's a well-known technique. There might even be papers or tutorials out there that describe how to do it in more detail. I also know that Dennis Gustavsson, the programmer of Teardown and its custom engine, has written a blog post on using blue noise to decrease perceived noise in ray-tracing, and other things about real-time ray-tracing that could be of help.
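
    A rough C++ sketch of the reprojection step described here (the matrix layout, names and the constant blend factor are assumptions; real implementations also reject history on depth mismatch and raise the blend factor where history is unreliable):

    ```cpp
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[16]; };  // column-major, as in OpenGL

    Vec4 mul(const Mat4& M, Vec4 v) {
        return { M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w,
                 M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w,
                 M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w,
                 M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w };
    }

    // Reproject a pixel's world-space hit point into last frame's image
    // and blend with the history buffer.
    Vec3 temporalBlend(Vec3 worldPos, Vec3 current, const Mat4& prevViewProj,
                       const Vec3* history, int w, int h) {
        Vec4 clip = mul(prevViewProj, {worldPos.x, worldPos.y, worldPos.z, 1.0f});
        if (clip.w <= 0.0f) return current;         // behind last frame's camera
        float u = 0.5f * (clip.x / clip.w) + 0.5f;  // NDC -> [0,1] UV
        float v = 0.5f * (clip.y / clip.w) + 0.5f;
        if (u < 0 || u > 1 || v < 0 || v > 1) return current;  // newly revealed

        int px = std::min((int)(u * w), w - 1);
        int py = std::min((int)(v * h), h - 1);
        Vec3 old = history[py * w + px];

        const float alpha = 0.2f;  // weight of the new sample
        return { old.x + alpha * (current.x - old.x),
                 old.y + alpha * (current.y - old.y),
                 old.z + alpha * (current.z - old.z) };
    }
    ```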

    • @NamePointer
      @NamePointer  1 year ago +14

      Thanks for the interesting insight!

    • @WilliumBobCole
      @WilliumBobCole 1 year ago +4

      I came to the comments to say this, though as expected, others have beaten me to it. You're already doing temporal smoothing of the image, may as well not throw out the entire buffer. Obviously the more the camera moves, the fewer previous frames will be useful, but it's still way better than starting from scratch any time the camera moves

    • @oskartornevall8265
      @oskartornevall8265 1 year ago +1

      If you don't care about object movement, then simply reprojecting the samples based on the difference in camera movement / rotation and filtering based on projected vs real depth of the pixel works (a spatiotemporal filter, if you care about terminology). This is used in GTAO (Ground Truth Ambient Occlusion) if you want to look at an example of such a filter.

    • @convenientEstelle
      @convenientEstelle 1 year ago

      @@oskartornevall8265 Ground Truth Ambient Occlusion

    • @oskartornevall8265
      @oskartornevall8265 1 year ago

      @@convenientEstelle Yes, thanks. Was tired when I wrote that and misremembered the name :)

  • @JJIsShort
    @JJIsShort 1 year ago +57

    When I was implementing a raymarching algorithm, a lot of my stuff looked fake. Thanks for the new features for me to implement. Something I did use was an AABB optimisation. I went from being able to render about 15 objects in almost real time to way more. If you want more frames, it's quite easy. You have also inspired me to implement ray tracing and try to make my own engine. Thanks.
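
    For reference, the usual form of that AABB early-out is the ray/box "slab" test; a small sketch (function and parameter names are illustrative, not from the video's source):

    ```cpp
    #include <algorithm>
    #include <utility>

    // Branchless-friendly ray/AABB "slab" test: intersect the ray against
    // the three pairs of axis-aligned planes and check that the entry/exit
    // intervals overlap. invDir is 1/direction per component, precomputed
    // once per ray; infinities from zero components behave correctly under
    // IEEE comparison rules.
    bool hitAABB(const float origin[3], const float invDir[3],
                 const float boxMin[3], const float boxMax[3]) {
        float tmin = 0.0f, tmax = 1e30f;
        for (int axis = 0; axis < 3; ++axis) {
            float t0 = (boxMin[axis] - origin[axis]) * invDir[axis];
            float t1 = (boxMax[axis] - origin[axis]) * invDir[axis];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;  // false -> skip the object's expensive test
    }
    ```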

  • @KingBobXVI
    @KingBobXVI 1 year ago +45

    One simple change to consider: look into different color spaces for image processing, RGB is very intuitive because it's what displays use, but it's not really the best option for things like blending values together - actual color info can get lost and coalesce into muddy grays pretty easily. If you do all the math in HSV color space though, you can do blending the same, and maintain better hue and saturation while you blend before converting back to RGB for display.

    • @omnificatorg4426
      @omnificatorg4426 1 year ago +4

      The main advantage of RGB over HSV is linearity, so you can easily add and multiply the values. Of course, don't forget about gamma correction, or you will get dark gloomy colours. The mean of #FF0000 and #00FF00 is #BBBB00.
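
      That #BBBB00 figure is easy to verify: decode to linear light, average, re-encode. A small self-contained check (using a pure 2.2 gamma as an approximation of the sRGB curve):

      ```cpp
      #include <cmath>
      #include <cstdio>

      // Average two sRGB-encoded channel values correctly: decode to
      // linear light, take the mean, re-encode for display.
      float srgbMean(float a, float b) {
          float lin = 0.5f * (std::pow(a, 2.2f) + std::pow(b, 2.2f));
          return std::pow(lin, 1.0f / 2.2f);
      }

      int main() {
          // Mean of #FF0000 and #00FF00: R and G each average 1.0 with 0.0
          // -> 0.5 in linear light -> ~0.73 encoded. Prints BA here; the
          // exact sRGB transfer function gives ~BB, matching the ballpark
          // quoted above.
          std::printf("%02X\n", (int)(srgbMean(1.0f, 0.0f) * 255.0f + 0.5f));
          return 0;
      }
      ```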

  • @evannibbe9375
    @evannibbe9375 1 year ago +63

    The better solution to avoid rendering from scratch when the camera moves is to save the colors found, not as a buffer based on what appears on the screen, but instead as a buffer of what color should be associated with each piece of the 3D objects those rays hit (color in this case being the total light that part of the shape could be considered to emit, which is averaged with the new calculation for that point).
    The one downside of this method is that it will require a lot more memory associated with each object in the scene (sort of like a baked light map texture), and that more metallic objects will take a bit longer to converge (since their lighting changes considerably with camera movements).

  • @novygaming5713
    @novygaming5713 10 months ago +2

    One mistake I noticed is that reflective spheres have dark edges. This happens because dot-product shading is still being applied to non-diffuse materials. The solution is to interpolate between the shaded brightness and full brightness as the roughness goes down.
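
    A one-line version of the suggested fix, sketched in C++ with GLSL's mix() spelled out (names assumed, not from the video's source):

    ```cpp
    // Fade the Lambert (dot-product) term toward full brightness as the
    // surface gets smoother: roughness 1 keeps N.L shading, roughness 0
    // removes it entirely, so mirror-like spheres lose the dark edges.
    float shadeBrightness(float nDotL, float roughness) {
        float lambert = nDotL > 0.0f ? nDotL : 0.0f;  // clamp backfacing light
        return 1.0f + (lambert - 1.0f) * roughness;   // mix(1.0, lambert, roughness)
    }
    ```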

  • @dazcar2203
    @dazcar2203 1 year ago +29

    Other channels may do this sort of thing, but none go quite as in-depth on the technical side as you do. The 10k subs are well deserved!

  • @monuminmonumin6783
    @monuminmonumin6783 1 year ago +1

    I love that you're learning all this, sharing it, and especially that you're putting in the effort.
    Great work! I'm hoping for more advanced versions, just because I'm curious how far you can go!

  • @monstrositylabs
    @monstrositylabs 1 year ago +4

    I only subscribed two hours ago. Looked at the date of your last video and assumed this channel was dead. Then, coincidentally, you posted the first video in a year 10 minutes after I subscribed!

  • @kamranki
    @kamranki 1 year ago +1

    Lovely video! I love how you go out of your way to explain everything visually while keeping it simple. I am glad to have found your channel.

  • @marexexe7308
    @marexexe7308 1 year ago +2

    The visuals in this video are stunning! Great job! I enjoyed every frame of the video.

  • @christophercoronaios4732
    @christophercoronaios4732 1 year ago

    Great job man!! I find your ray tracing videos very helpful and informative. Please make more!

  • @spacechannelfiver
    @spacechannelfiver 1 year ago +3

    You can do an optimisation by rendering into sparse voxel space instead of screen space. All of those dot products you calculated from the lights are still the same within voxel space; you can just cull the non-visible voxels and recalculate whatever lights are in screen space if they move / change intensity. It just becomes a data management task, which is much faster. Lumen works like this AFAIK.

  • @ThrillDaWill
    @ThrillDaWill 1 year ago +7

    Great video!! I’m excited to see your new projects! Don’t stress too much over them and try to have fun!

  • @WhiteDragon103
    @WhiteDragon103 1 year ago +11

    If you separate view-dependent lighting (reflections) from view-independent lighting (lambertian) you can keep the view independent lighting buffer while moving the camera. If you move an object though, you'll have to reset both buffers.

    • @forasago
      @forasago 1 year ago +1

      Or you just accept that indirect lighting will lag behind / ghost a little. Only direct light / shadows need to keep up with the full framerate to look okay. Indirect lighting lags behind in basically every game engine, even Unreal 5.

  • @oskartornevall8265
    @oskartornevall8265 1 year ago +5

    If you want even more realistic material behaviour, try looking into GGX scattering, it's a microfacet distribution, meaning it models the materials as a ton of microscopic mirrors oriented depending on smoothness etc. Great video btw!
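
    The core of GGX is its normal distribution function; a small sketch in one common convention (alpha is perceptual roughness squared; this is an illustration, not the video's code):

    ```cpp
    #include <cmath>

    // GGX / Trowbridge-Reitz NDF: the statistical density of microfacet
    // normals aligned with the half-vector h. Small alpha concentrates the
    // lobe (mirror-like); large alpha spreads it out (rough).
    float ggxD(float nDotH, float alpha) {
        const float kPi = 3.14159265358979f;
        float a2 = alpha * alpha;
        float d = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
        return a2 / (kPi * d * d);
    }
    ```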

  • @londongaz2
    @londongaz2 1 year ago

    Great video! You've inspired me to work on improving my own rt engine which suffers from many of these similar problems.

  • @bovineox1111
    @bovineox1111 1 year ago +2

    Super stuff - always wanted to create a raytracer myself. Did a bit of work, but I think the hardest bit to do quickly is sorting the objects and determining the nearest collision.

  • @oscill8ocelot
    @oscill8ocelot 1 year ago

    So glad I subscribed last year. :3 Excellent stuff!

  • @jorgeromeu
    @jorgeromeu 1 year ago +2

    Hi, a month ago or so I finished my bachelor's thesis, which revolved around path tracing. This video explains it better than I've seen anywhere else!

  • @hamzazafar5182
    @hamzazafar5182 1 year ago

    Omg thank you! This was an extremely useful and simple tutorial. I'm not sure if I would have been able to install it without this.

  • @caiostange2770
    @caiostange2770 1 year ago +6

    Hello!
    A fix for not having accumulation when moving the camera: instead of merging frames directly, take into account a velocity buffer. This should tell you how much each pixel moved each frame. With that, you can combine pixels with previous ones even if they moved. TAA does this as well; you should look into it.

  • @zelcion
    @zelcion 1 year ago

    Okay, I got this recommended on my YouTube front page, and I have never seen any of your videos. This is it, you're making it big.
    By the way, had I not looked at the view count and subscriber count, I would have thought this was a big production from a 500K-sub channel. Great work! Got my sub!

  • @shitshow_1
    @shitshow_1 1 year ago +1

    Absolutely amazing. I'm an undergrad. I've been enthusiastically learning 3D computer graphics since 9th grade. You put all my learning in a nutshell, which gave me a good recap. Thank you so much ❤

  • @2002budokan
    @2002budokan 6 months ago

    Being able to summarize the entire ray-tracing process, its finest details and professional touches in such a short video is a special ability. Thanks.

  • @alex-yk8bh
    @alex-yk8bh 1 year ago +4

    Proud to say you're the reason why I disable adblock sometimes! Such a great piece of content. Congrats.

  • @adorable_yangire
    @adorable_yangire 1 year ago +2

    Best way to implement bloom IMO is to give it a threshold of 0, but make the bloom increase exponentially. And have 2 bloom levels: one for large screen-area bloom, and one for small screen-area bloom.

  • @nahomicastillolecca3719
    @nahomicastillolecca3719 1 year ago

    THANK YOU!!! This is such an amazing tutorial. I just got the software today and was playing around on it but had no clue how to really use it.

  • @darltrash
    @darltrash 1 year ago

    Cool project, man!

  • @lonewolfsstuck
    @lonewolfsstuck 1 year ago +3

    Should add a de-noiser post process effect, would help significantly

  • @yooyo3d
    @yooyo3d 1 year ago +2

    You can use the Multi Render Target extension to render stuff into multiple buffers at the same time. Use those additional buffers to store the current state of the "recursion". Be wise and encode only the necessary things in those buffers. Then just iterate multiple times over those buffers and the image will get better and better.

  • @GaryMcKinnonUFO
    @GaryMcKinnonUFO 1 year ago +3

    Very cool indeed. I wrote my first tracer in BASIC, only Phong shading and of course it took hours to render a single polygon but it was a good exercise, makes matrix multiplication actually interesting :)

  • @nickadams2361
    @nickadams2361 1 year ago

    Man this looks like a very fun project to think through

  • @timothyoh9715
    @timothyoh9715 1 year ago

    Your content is great man. Keep up the good work

  • @minhlucnguyen7614
    @minhlucnguyen7614 1 year ago

    I'm learning 2D art, and watching your video made me realize that the way an artist decides the hue, saturation and value of a spot on a painting is exactly like how ray tracing works. The video is very fun and comprehensive to watch!

  • @youtubehandlesux
    @youtubehandlesux 1 year ago +10

    You could improve the realism of the scene easily with some tonemapping algos, they basically imitate how eyes or cameras perceive different strengths of light (e.g. color desaturates at high light strength while not straight up becoming #FFFFFF), as opposed to just a simple gamma function
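
    One of the simplest curves in that family is the Reinhard operator; a sketch (applied per channel here, which is what produces the gradual desaturation toward white that the comment mentions; this is an illustration, not the video's code):

    ```cpp
    #include <cmath>

    // Reinhard tonemap + gamma encode: maps HDR radiance [0, inf) into
    // [0, 1) smoothly instead of clipping bright areas to #FFFFFF.
    void tonemap(float rgb[3], float exposure) {
        for (int i = 0; i < 3; ++i) {
            float c = rgb[i] * exposure;
            c = c / (1.0f + c);                 // compress highlights
            rgb[i] = std::pow(c, 1.0f / 2.2f);  // gamma for display
        }
    }
    ```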

  • @ravenmillieweikel3847
    @ravenmillieweikel3847 1 year ago +1

    A way the noise-while-moving problem could be fixed is offsetting the memory buffer's pixels by the depth buffer in the direction of movement, rather than completely starting over.
    Another way to get rid of aliasing is to downsample: render the entire screen at a higher resolution, then scale it down.

  • @martinevans8965
    @martinevans8965 1 year ago

    Best video on this topic on YouTube. So well explained and a great result.

  • @pablovega7697
    @pablovega7697 1 year ago

    Great video! Hope more come soon

  • @thanzawoo3389
    @thanzawoo3389 1 year ago

    After browsing through so many channels, yours is by far the best. The way you explain is so great and detailed that even complex stuff is easy to follow.

    • @DaveeeOnTop
      @DaveeeOnTop 11 months ago

      I found the Sebastian Lague video also very informative. I think it wasn't out by the time you wrote your comment, but if you're still interested, I'd recommend you watch it.

  • @Supakills101
    @Supakills101 4 months ago

    This is a massive improvement, well done. Leveraging hardware acceleration would take this to another level.

  • @sjoervanderploeg4340
    @sjoervanderploeg4340 1 year ago +2

    You could also draw a circle where the ray intersects a surface; this could help you with indirect lighting around objects, as you can use it to average a larger area!
    I think I saw Unreal implement this, it is called splotch mapping.

  • @eboatwright_
    @eboatwright_ 1 year ago

    Awesome! Definitely looks a lot better :)

  • @Layzy3D
    @Layzy3D 1 year ago +3

    If you continue this raytracer, you could add PBR materials and Fresnel (at the moment it looks like you blended between metallic and diffuse materials).

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Great work!!

  • @fghjkcvb2614
    @fghjkcvb2614 1 year ago

    Great to see you again!

  • @djpiercy1235
    @djpiercy1235 1 year ago

    I think a smart way to reduce noise while minimising performance impact would be to reduce the amount of indirect rays depending on the roughness of the surface. A surface with a roughness of 0 should only need to emit one reflection ray, since the light can only bounce in one direction. A surface with higher roughness would need a lot more samples, since the cone of directions that the light can bounce off in is so much larger, you need more rays to fill it up.
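
    That adaptive budget can be a one-liner; a sketch (maxSamples standing in for whatever the engine's per-pixel indirect-ray budget is):

    ```cpp
    // Roughness 0 (mirror): a single reflection ray suffices, since the
    // bounce direction is deterministic. Rougher surfaces get more rays
    // to fill their wider scattering cone.
    int indirectSamples(float roughness, int maxSamples) {
        int n = (int)(roughness * maxSamples + 0.5f);
        return n < 1 ? 1 : n;
    }
    ```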

  • @3DProgramming
    @3DProgramming 1 year ago

    wonderful video! thank you!

  • @alfred4194
    @alfred4194 1 year ago +1

    Amazing content once again

  • @miguelguerrero3394
    @miguelguerrero3394 1 year ago +1

    Very good video, next implementation could be importance sampling, so that indirect rays are biased towards the light sources, significantly reducing the noise
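
    Biasing rays toward the lights (next event estimation) follows a weight-by-PDF pattern; the simplest member of the same importance-sampling family is cosine-weighted hemisphere sampling, sketched here as an illustration (not the video's code):

    ```cpp
    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    // Cosine-weighted hemisphere sample around a normalized normal n:
    // directions near the normal, where the cosine term of the rendering
    // equation is largest, are drawn more often. Dividing by the cos/pi
    // PDF then cancels the cosine, reducing variance per sample.
    Vec3 cosineSample(Vec3 n) {
        float u1 = rand() / (float)RAND_MAX;
        float u2 = rand() / (float)RAND_MAX;
        float r = std::sqrt(u1), phi = 6.2831853f * u2;
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float z = std::sqrt(1.0f - u1);  // cos(theta)

        // Build an orthonormal basis (b, t, n) around the normal.
        Vec3 a = std::fabs(n.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        Vec3 b = { n.y * a.z - n.z * a.y,
                   n.z * a.x - n.x * a.z,
                   n.x * a.y - n.y * a.x };
        float bl = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
        b = { b.x / bl, b.y / bl, b.z / bl };
        Vec3 t = { b.y * n.z - b.z * n.y,
                   b.z * n.x - b.x * n.z,
                   b.x * n.y - b.y * n.x };
        return { x * b.x + y * t.x + z * n.x,
                 x * b.y + y * t.y + z * n.y,
                 x * b.z + y * t.z + z * n.z };
    }
    ```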

  • @ruix
    @ruix 1 year ago +1

    Real cool project. But next time you should raise your volume a bit

  • @user-kv2jk8vc1l
    @user-kv2jk8vc1l 1 year ago

    need the other half of the video!

  • @hannescampidell
    @hannescampidell 1 year ago

    Cool project, I couldn't make such a perfect result.

  • @ImMagicDesigns
    @ImMagicDesigns 1 year ago

    Hey! Thanks so much for this video!

    • @NamePointer
      @NamePointer  1 year ago

      Glad you liked it!

    • @ImMagicDesigns
      @ImMagicDesigns 1 year ago

      @@NamePointer that was not my comment xD

  • @ThePixelisaThor
    @ThePixelisaThor 1 year ago

    Clean comeback!

  • @silvertakana3932
    @silvertakana3932 1 year ago +1

    I love your videos! It's very informative! What video editor did you use? And how do you make these animations?

    • @NamePointer
      @NamePointer  1 year ago

      I use Adobe Premiere Pro for editing, After Effects for 2D animations and Blender for 3D animations

  • @Raftube02
    @Raftube02 1 year ago +1

    I think one way to stop noise resulting from using random numbers independently of each other would be to use Perlin noise, because then the colors of the pixels would be more related to each other.

  • @ibemper1850
    @ibemper1850 1 year ago

    great job dude

  • @jeremiahlyleseditor437
    @jeremiahlyleseditor437 1 year ago

    Great work.
    Have you thought of using Phong shading?
    It gives great results, but not HD-crisp images.

  • @AMan-xz7tx
    @AMan-xz7tx 1 year ago

    This is great! But I have to wonder, have you ever thought of trying another lighting method like ray marching as a follow-up?

  • @dexterman6361
    @dexterman6361 1 year ago +1

    To be accurate, the deep-learning algorithm NVIDIA uses is called DLSS (Deep Learning Super Sampling). It can theoretically be used with RTX off. It is technically unrelated to RTX (hardware-accelerated ray tracing, or more accurately, hardware-accelerated bounding-box checks for use in RTX).

    • @jcm2606
      @jcm2606 1 year ago +1

      DLSS actually has nothing to do at all with raytracing, and as of 2.0 is essentially just a variation of TAAU with a machine learning model taking care of when to reject previous frames and how it should blend frames together.

  • @GafftheHorse
    @GafftheHorse 1 year ago

    Fascinating.

  • @98SE
    @98SE 1 year ago +1

    Great video! Please make more videos!

  • @rigbyb
    @rigbyb 3 months ago

    Holy shit, this is amazing. Thanks for making this video

  • @user-tt9so4pj9i
    @user-tt9so4pj9i 1 year ago

    Good job!

  • @BossBeneBaby
    @BossBeneBaby 5 months ago

    Hey, great video. In 2021 Khronos released the ray tracing pipeline for Vulkan. It supports all modern graphics cards (even AMD) and it's incredibly fast. I managed to write a real-time path tracer, and even at 4K resolution it is possible to render in real time.

  • @voided9377
    @voided9377 1 year ago

    That's so cool!

  • @crestofhonor2349
    @crestofhonor2349 1 year ago

    I do love seeing anything ray tracing related

  • @yash1152
    @yash1152 1 year ago

    I am super happy to see so many comments about how the part with camera movements and reusing the data can be done. So many pointers are given in the comments, it seems like it would be enough for a separate video on its own (:

  • @thecoweggs
    @thecoweggs 1 year ago

    This is actually really impressive

  • @dragoncosmico
    @dragoncosmico 1 year ago +1

    You can study the bloom shaders of ReShade, they're also written in C.

  • @MrSeezero
    @MrSeezero 1 year ago

    I have a question. When I rotate an image on a 2-dimension plane, and I get the point to show at its new location as being 2.43, 3.76; can the color to copy be calculated as ((0.57 + 0.24) * color at 2,3 + (0.57 + 0.76) * color at 2,4 + (0.43 + 0.24) * color at 3,3 + (0.43 + 0.76) * color at 3,4)/4? With a coordinate being 2.43, I figured that since I was 43 percent of the way from there to 2 and 57 percent of the way from there to 3 that I need to put a 43 percent weight towards 3 and a 57 percent weight toward 2. Is that a good idea?
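
    For comparison, standard bilinear interpolation multiplies the per-axis closeness factors instead of adding them, and the four weights then sum to 1 automatically; a sketch with a hypothetical sample() accessor:

    ```cpp
    #include <cmath>

    float sample(int x, int y);  // hypothetical: reads the source image

    // Bilinear lookup at (2.43, 3.76): the texel at (2,3) is weighted
    // 0.57 * 0.24, at (3,3) 0.43 * 0.24, at (2,4) 0.57 * 0.76, and at
    // (3,4) 0.43 * 0.76 -- no division by 4 needed.
    float bilinear(float fx, float fy) {
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        float ax = fx - x0, ay = fy - y0;  // 0.43 and 0.76 in the example
        return sample(x0,     y0    ) * (1 - ax) * (1 - ay)
             + sample(x0 + 1, y0    ) * ax       * (1 - ay)
             + sample(x0,     y0 + 1) * (1 - ax) * ay
             + sample(x0 + 1, y0 + 1) * ax       * ay;
    }
    ```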

  • @onesome6091
    @onesome6091 1 year ago

    man great stuff

  • @malachyfernandez6285
    @malachyfernandez6285 1 year ago +1

    this has inspired me to make a raytracer from scratch, in Scratch.

  • @omarlopezrincon
    @omarlopezrincon 1 year ago

    AMAZING !!!

  • @nafizjubaer1717
    @nafizjubaer1717 1 year ago +3

    Try using motion vectors to offset previous frames when combining them with the current one during denoising, i.e. temporal denoising.

  • @LouSan
    @LouSan 1 year ago

    great video!

  • @liudas5377
    @liudas5377 1 year ago

    You are a genius...Thanks

  • @cgpoly3419
    @cgpoly3419 1 year ago +3

    I just finished a path tracer for a university project. I am currently rendering the 30-second animation and hope it finishes rendering by the deadline in two days. While my project is quite different (it doesn't even try to be real-time because that wouldn't work with our scene, and we don't use our sky map for lighting since it just contains stars and wouldn't contribute a significant amount of light (it's a space scene)), some of the problems were the same, especially rewriting some functions to make them non-recursive. It's reassuring to see that I am not the only one who is annoyed by some aspects of OpenGL.

    • @jcm2606
      @jcm2606 1 year ago +3

      This isn't an issue with OpenGL, rather it's an issue with GPUs in general. GPUs don't have a stack, every function call is inlined and all automatic variables exist in a shared register file, so it's not possible for a GPU to support recursion, at a fundamental level. You *can* emulate recursion via iteration, by creating your own stack structure and dynamically appending to and iterating over it, but this will cause coherence issues and will significantly worsen register pressure, which can result in performance plummeting.
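
      The iterative shape that replaces the recursion usually looks like this (a sketch; Hit, traceScene and the throughput bookkeeping are illustrative, not the video's code):

      ```cpp
      struct Vec3 { float x, y, z; };
      struct Hit { bool valid; Vec3 emitted, albedo, nextOrigin, nextDir; };

      // Placeholder for the scene intersection + material sampling step.
      Hit traceScene(Vec3 origin, Vec3 dir) { return {false, {}, {}, {}, {}}; }

      // Instead of trace() calling itself per bounce, carry the running
      // product of surface albedos ("throughput") through a fixed loop.
      Vec3 radiance(Vec3 origin, Vec3 dir, int maxBounces) {
          Vec3 color = {0, 0, 0};
          Vec3 throughput = {1, 1, 1};
          for (int bounce = 0; bounce < maxBounces; ++bounce) {
              Hit h = traceScene(origin, dir);
              if (!h.valid) break;  // escaped the scene; sample the skybox here
              color.x += throughput.x * h.emitted.x;
              color.y += throughput.y * h.emitted.y;
              color.z += throughput.z * h.emitted.z;
              throughput.x *= h.albedo.x;
              throughput.y *= h.albedo.y;
              throughput.z *= h.albedo.z;
              origin = h.nextOrigin;
              dir = h.nextDir;
          }
          return color;
      }
      ```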

  • @gui205
    @gui205 1 year ago

    Honestly, bravo! What's the next step? :)

  • @notgartificial8591
    @notgartificial8591 1 year ago

    I recommend adding something called "Fresnel" to the engine, since the ground plane is looking a bit flat near the horizon. The shallower the angle a ray comes in at, the more reflective the object gets. This effect gets weaker the rougher the object is. It is also a mandatory feature if you want photorealism, since our brains know something is off.
    I also recommend adding caustics, because they also affect realism. When computing indirect lighting, you should make rays bounce off reflective surfaces, and if they reach a light or a bright surface, you light the original surface accordingly.
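
    The standard cheap version of that Fresnel term is Schlick's approximation; a sketch (f0 ≈ 0.04 is a typical base reflectivity for dielectrics; an illustration, not the video's code):

    ```cpp
    // Schlick's approximation: reflectivity rises from f0 at normal
    // incidence toward 1 at grazing angles. cosTheta = dot(N, V), clamped
    // to [0, 1] by the caller.
    float fresnelSchlick(float cosTheta, float f0) {
        float m = 1.0f - cosTheta;
        float m2 = m * m;
        return f0 + (1.0f - f0) * m2 * m2 * m;  // (1 - cosTheta)^5
    }
    ```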

  • @peacefulexistence_
    @peacefulexistence_ 1 year ago +5

    Afaik RTX "raytracing cores" just hardware-accelerate ray-triangle intersections. By machine learning, did you mean DLSS, which uses machine learning to upscale frames so that they don't have to be computed at full resolution?

    • @NamePointer
      @NamePointer  1 year ago +1

      I watched this video from NVIDIA a while back: th-cam.com/video/6O2B9BZiZjQ/w-d-xo.html
      From my understanding, it implies that RTX cards have to deal with quite a bit of noise, and that they use neural networks for denoising.

    • @peacefulexistence_
      @peacefulexistence_ 1 year ago +1

      @@NamePointer afaik there's no special hardware in the RTX card for denoising.
      It's just an NN which you can run on a non-RTX card.
      Same for DLSS.
      Thus both are post-processing steps done in software. The GPU just accelerates some of the computation in the NN with the tensor cores (I haven't looked into the tensor cores much)

    • @ABaumstumpf
      @ABaumstumpf 1 year ago +1

      ​@@NamePointer You might want to watch that video again and then follow the links they provide in the description. The short version:
      No, RTX is just the brand name, and behind it are just Vulkan, DX12 and OptiX APIs that let you ... trace rays.
      The denoising is an entirely separate library. Or more accurately: not just one but multiple libraries with multiple approaches and algorithms, each designed for specific applications with their own pros and cons.
      But "RTX" by itself does not mean anything, as it is a marketing term that encompasses Vulkan ray tracing (vendor-agnostic), which does just ray acceleration.

    • @NamePointer
      @NamePointer  1 year ago

      When I said "RTX technology" I didn't mean hardware specifically, but the entire ray tracing package NVIDIA gives game developers to add ray tracing to their games. If I understood correctly, that one uses neural networks for denoising.

    • @ABaumstumpf
      @ABaumstumpf 1 year ago +1

      @@NamePointer"If I understood correctly, that one uses Neural networks for denoising."
      There are multiple libraries included, one of which uses the result of a trained network; others are just "simple" postprocessing or temporal solutions.

  • @Takehere
    @Takehere 1 year ago

    Awesome !

  • @eliasjensen3115
    @eliasjensen3115 1 year ago

    Nice tutorial.

  • @ProjectCity1
    @ProjectCity1 1 year ago

    Please make a part 2!

  • @niloytesla
    @niloytesla 1 year ago +2

    You are my inspiration!
    I also tried to make my own 'ray tracing' engine, but I couldn't. It's hard, so I just stopped.
    But now I think I should start over.

  • @paulroberto2286
    @paulroberto2286 1 year ago

    Great video! Have you tried non-Euclidean raytracing? It's pretty interesting, and could make for a cool extension of your current engine :D

  • @FelixNielsen
    @FelixNielsen 1 year ago

    Regarding the buffer of previously rendered frames (used to reduce noise) not working once the camera has been moved, or is moving: it seems to me a reasonably good approximation to simply transform the image in the buffer to fit the new perspective. Of course it isn't a perfect solution, but it shouldn't be all that difficult, or resource-intensive, compared to the other work being done.
    Furthermore, it might be worth considering weighing the buffered frames so that earlier frames have a lesser impact than later rendered frames, but I don't really know if that is worth the effort or, indeed, has the desired effect.

  • @NathanNahrung
    @NathanNahrung 1 year ago

    I wonder if you could implement motion vectors to transform your buffer, like those used in video compression for inter frames? For a static scene, a z-buffer plus the camera motion might be sufficient to calculate the motion vectors, though it would be difficult to account for the z of reflective surfaces. I suspect you would need to throw away data after several frames. Perhaps you could also implement a heat map for newly revealed places or edges, so you can identify them and then use extra rays per pixel in the hottest areas for rapid anti-aliasing where needed most.

  • @phillipotey9736
    @phillipotey9736 1 year ago

    This has given me an idea for a quantum renderer. You have each point on the object "be a camera" and save the color value to that point on the surface. The colors are then constantly streamed to the global camera. Each object would then have a texture that emits light in every direction and updates only when something changes in the object's direct line of sight. It might take a lot of RAM, so the further things are from the global camera, the more dynamically they would be saved in memory. This works off the current quantum interpretation that light is a wave until it's collapsed by hitting an object or interacting, and the true color/intensity is chosen.

  • @NavySturmGewehr
    @NavySturmGewehr 1 year ago +2

    I don't think I've laughed so hard in a long time... We used to joke about that unnatural light in the outside world that burned our skin. Let's stay under the warm comfort of fluorescent lighting!

    • @VRchitecture
      @VRchitecture 1 year ago

      Everything outside of virtual space is unnatural ☝🏻

  • @LemurG
    @LemurG 1 year ago

    I know very little about coding, but I still have a question about calculating shadows (sorry if it’s redundant or doesn’t make sense).
    Is it possible to calculate the intensity of a shadow by measuring the distance between it and the object? Or the indirect lighting from the light source? Like in real life, when shadows travel further from the object, and indirect lighting further from the light source, they become fuzzier.

    • @NamePointer
      @NamePointer  1 year ago

      While that seems intuitive, it wouldn't really work as some shadows will stay sharp even when the object is far away. It really depends on how much of the light source is blocked by the geometry

  • @adytm
    @adytm 1 year ago

    Really cool

  • @JorgetePanete
    @JorgetePanete 1 year ago +1

    I made one in Java without the GPU too, but projecting photons; much... slower. Used as a showcase of Java.

  • @hutchw2947
    @hutchw2947 1 year ago

    I love you, I'm surprised you don't have more subscribers.

  • @helenvalencia7073
    @helenvalencia7073 1 year ago

    For bloom: take your frame buffer, make every pixel under a threshold black, then downsample it to a lower resolution and average it with the normal-sized one; repeat a few times, and then add that to the original image.
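
    A CPU-side sketch of that chain (the video would do this in shaders; nearest-neighbor upsampling and a plain box filter stand in for the smoother filtering you'd want in practice, and all names are illustrative):

    ```cpp
    #include <cstddef>
    #include <vector>

    struct Vec3 { float r, g, b; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    static Vec3 scale(Vec3 a, float s) { return {a.r * s, a.g * s, a.b * s}; }

    // 2x box-filter downsample (w and h assumed even for brevity).
    static std::vector<Vec3> down2(const std::vector<Vec3>& s, int w, int h) {
        std::vector<Vec3> d((w / 2) * (h / 2));
        for (int y = 0; y < h / 2; ++y)
            for (int x = 0; x < w / 2; ++x)
                d[y * (w / 2) + x] = scale(
                    add(add(s[(2 * y) * w + 2 * x], s[(2 * y) * w + 2 * x + 1]),
                        add(s[(2 * y + 1) * w + 2 * x], s[(2 * y + 1) * w + 2 * x + 1])),
                    0.25f);
        return d;
    }

    // Threshold -> repeated downsampling (each halving spreads the glow
    // wider) -> add every level back onto the original image.
    std::vector<Vec3> bloom(const std::vector<Vec3>& img, int w, int h,
                            float threshold, int levels) {
        std::vector<Vec3> bright(img.size());
        for (std::size_t i = 0; i < img.size(); ++i) {
            float luma = 0.2126f * img[i].r + 0.7152f * img[i].g + 0.0722f * img[i].b;
            bright[i] = luma > threshold ? img[i] : Vec3{0, 0, 0};
        }
        std::vector<Vec3> out = img;
        std::vector<Vec3> level = bright;
        int lw = w, lh = h;
        for (int l = 0; l < levels && lw >= 2 && lh >= 2; ++l) {
            level = down2(level, lw, lh);
            lw /= 2; lh /= 2;
            for (int y = 0; y < h; ++y)          // nearest-neighbor upsample + add
                for (int x = 0; x < w; ++x)
                    out[y * w + x] = add(out[y * w + x],
                                         scale(level[(y * lh / h) * lw + (x * lw / w)],
                                               1.0f / levels));
        }
        return out;
    }
    ```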

  • @abenezertena6441
    @abenezertena6441 1 year ago

    I thought graphics programming was rocket science; you inspired me a lot. Thanks!