Don't know if anyone will read this, but I don't make games, I don't code, I just game! However, I love knowing the intricacies of what makes my hobby possible. I know techs/devs don't get the credit when it's due and get the hate when it's not, but please know that you guys make my life worth living! I am mostly bedridden due to a spinal cord injury that left me paralyzed 12 years ago. Going outside is a chore due to my lack of mobility and constant pain, so exploring digital worlds and experiencing the stories built through your hard work means the world to me! I save up so I can upgrade my PC to get the best experience possible within my budget. It's worth every penny! It keeps me wanting to live just long enough to see what technology will come next!
Great paper and presentation. Can't wait for these new filtering methods and neural textures to arrive in future games.
Thanks for sharing, great to see advancements in that field!
This is what the graphics community needs right now
Great video and paper Bart :)
You always amaze ordinary game devs like me. I'm very curious about how you study and what math courses you took in college.
I have a question: how do you do 64x filtering when using Stochastic Texture Filtering?
Great paper! We have just made a material system for the Unreal Marketplace that relies heavily on temporally dithered triplanar mapping, like you mentioned. I would love to see Unreal implement your novel solution to texture filtering. What are your thoughts on using this technique, or something similar, to sample rough reflections? I believe that currently uses mipmapping as well. I have also been thinking that future renderers will probably render at multiple times the screen resolution and downsample to screen resolution, with all the benefits that could come with that...
Link?
@Cloroqx th-cam.com/video/Pz8Zlo4bcY4/w-d-xo.htmlsi=TlEsK0p2V3tWZR69
@Cloroqx Ah, it seems I'm not allowed to add links in comments, but the asset is called NOVA - modular sci-fi kit, and there is also a separate kit with materials only. But I do not want to hijack this comment section.
@Ricky_Lauw Thanks!
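For readers unfamiliar with the temporally dithered triplanar mapping mentioned a few comments above, here is a minimal sketch of the core idea: instead of fetching and blending all three planar projections per pixel, pick one projection stochastically with probability proportional to its blend weight, and let temporal accumulation (TAA/DLSS-style) average out the noise. The NumPy setup and all names here are illustrative assumptions, not the commenter's actual Unreal material code.

```python
import numpy as np

def triplanar_weights(normal: np.ndarray, sharpness: float = 4.0) -> np.ndarray:
    """Standard triplanar blend weights derived from a surface normal (3,)."""
    w = np.abs(normal) ** sharpness
    return w / w.sum()

def dithered_triplanar_sample(textures, uvs, normal, rng):
    """Fetch ONE of the three planar projections, chosen stochastically.

    textures: list of three (H, W, C) arrays (X-, Y-, Z-projected)
    uvs:      list of three (u, v) integer texel coordinates, one per axis
    """
    w = triplanar_weights(normal)
    axis = rng.choice(3, p=w)      # stochastic single-projection selection
    u, v = uvs[axis]
    return textures[axis][v, u]    # one texture fetch instead of three
```

Averaged over many frames (or many stochastic samples), this converges to the same result as the full weighted three-tap blend while paying for only one texture fetch per pixel per frame.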
I hope this gets implemented in Microsoft Flight Simulator for the speed/quality of cloud rendering.
This seems like a problem with the neural texture technique itself. It's using downscaled features, but if you can't filter it (like with anisotropic filtering), then it must be relying on aliasing to pass high-frequency information from the lower resolutions. See Alias-Free GAN / StyleGAN 3 and its comparisons with StyleGAN 2 showing the feature maps.
An idea for a simple solution: during optimization of the neural textures, shift the target image over by a pixel or so at random, and shift the low-resolution feature maps in the neural texture proportionally, with interpolation. For example, if the neural texture's features are at 0.25x the resolution, then move the image over 1 pixel and move the features over 0.25 pixels, interpolating between neighboring values the same way hardware filtering would. Perhaps it would then learn to look good with anisotropic filtering without needing to implement any sort of additional filtering in the neural texture decoder.
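A minimal sketch of that shift-consistency augmentation, assuming a NumPy setup where `features` is the low-resolution neural texture grid and `target` is the full-resolution ground-truth image. All names here are hypothetical illustrations, not code from the paper.

```python
import numpy as np

def bilinear_shift(grid: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Shift an (H, W, C) grid by a fractional offset, interpolating
    neighbors the same way hardware bilinear filtering would.
    Uses wraparound (np.roll), which matches tiling textures."""
    ix, fx = int(np.floor(dx)), dx - np.floor(dx)
    iy, fy = int(np.floor(dy)), dy - np.floor(dy)
    def roll(g, y, x):
        return np.roll(np.roll(g, y, axis=0), x, axis=1)
    a = roll(grid, iy,     ix)      # top-left neighbor
    b = roll(grid, iy,     ix + 1)  # top-right
    c = roll(grid, iy + 1, ix)      # bottom-left
    d = roll(grid, iy + 1, ix + 1)  # bottom-right
    return (a * (1 - fx) * (1 - fy) + b * fx * (1 - fy)
            + c * (1 - fx) * fy     + d * fx * fy)

def shifted_training_pair(target: np.ndarray, features: np.ndarray,
                          scale: float = 0.25, rng=None):
    """Shift the target by a random whole-pixel offset and the feature
    grid by the proportional sub-texel offset (0.25 px per image pixel
    when features are at quarter resolution, as in the example above)."""
    rng = rng or np.random.default_rng()
    px = int(rng.integers(-2, 3))   # small random whole-pixel shift
    py = int(rng.integers(-2, 3))
    shifted_target = np.roll(np.roll(target, py, axis=0), px, axis=1)
    shifted_features = bilinear_shift(features, px * scale, py * scale)
    return shifted_target, shifted_features
```

During training, the decoder would be asked to reproduce `shifted_target` from `shifted_features`, so it is penalized for relying on aliasing that breaks under exactly the kind of sub-texel interpolation the hardware filter performs.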
You lost me at DLSS. If the method needs to lean on DLSS to get decent results, that excludes *a lot* of platforms. I'm not yet convinced by temporal filtering; hell, I'm not convinced by TAA, given how brutal its overblurring and artifacts tend to be. I don't trust a compressed video to convey how different it is from traditional filtering.
I still don't understand why people try to erase specular dots, because that's how specular objects work in real life. The little glittery sparkles crawling around look natural.