Very interesting. If I'd had a teacher like you back in my highschool days, I'd surely would have loved maths much more. Nice video!
Hello from Brazil! I love your channel and I'm glad there are still people who make smart content on TH-cam.
Brilliant, I learn so much from this thank you
You're a godsend. I'm writing my computer graphics exam in two days and texture sampling is the only topic I couldn't fully grasp
And then there is another complication: RGB values are not linear themselves (they are gamma-corrected), with 128 giving almost 1/4 the brightness of 255. This was necessary for CRT monitors, but it also gives a good range with just 8 bits. A linear representation would require 10 to 12 bits and wouldn't improve the image, since the human eye works logarithmically: it easily sees the difference between 1 and 2 but can't spot the difference between 999 and 1000.
So for correct sampling it is actually better to convert all pixel values into a linear representation, sample, and then convert back to gamma-corrected values. See MinutePhysics: "Computer Color is Broken".
The most visible effect of the simplified methods is that the brightness of a texture changes with scale. A human averages a small checkerboard correctly, but when a computer does it by converting neighbouring 0 and 255 into a uniform 127, it gets 1/4 brightness instead of 1/2!
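A rough sketch of that idea in C++ (just an illustration, assuming the standard sRGB transfer functions; not code from the video):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// sRGB transfer functions (piecewise form from the sRGB specification).
static double srgb_to_linear(uint8_t v)
{
    double c = v / 255.0;
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

static uint8_t linear_to_srgb(double c)
{
    double s = (c <= 0.0031308) ? c * 12.92 : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
    return static_cast<uint8_t>(std::lround(s * 255.0));
}

int main()
{
    // Averaging black (0) and white (255) the "naive" way gives 127,
    // which is only about 1/4 as bright as white in linear light.
    uint8_t naive = (0 + 255) / 2;

    // Averaging in linear light and converting back gives roughly 188,
    // which is the perceptually correct 50% grey.
    double lin = (srgb_to_linear(0) + srgb_to_linear(255)) / 2.0;
    uint8_t correct = linear_to_srgb(lin);

    std::printf("naive: %u, gamma-correct: %u\n", naive, correct);
}
```

So the naive average of a fine 0/255 checkerboard comes out at 127, while the linear-light average comes back as roughly 188, which is much closer to what the eye expects.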
I am currently writing real-time software for car engine management and just had to implement this too!
Never thought about bicubic sampling being a valid use of matrices until you laid out the math for it.
Very interesting and educational topic. Thanks for making these!
Your videos are of great value to me. Thank you
I guess you could reuse 3 of the bicubic interpolation calculations for the following rows each time to increase performance? You'd have to do an entire column's worth of interpolated values at a time, though. It starts to hurt your brain trying to describe this stuff; well done for relaying it so well, David! Definitely one of your strengths!
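One way to read that idea is bicubic done as two separable passes. A minimal sketch (assuming a Catmull-Rom style cubic, which may differ slightly from the one in the video): the four per-row results depend only on the fractional x, so they can be cached and reused for every output pixel that shares it.

```cpp
#include <array>

// 1D Catmull-Rom cubic through four neighbouring samples; t in [0, 1]
// is the fractional position between p1 and p2.
static float cubic(float p0, float p1, float p2, float p3, float t)
{
    float a = -0.5f * p0 + 1.5f * p1 - 1.5f * p2 + 0.5f * p3;
    float b =  p0 - 2.5f * p1 + 2.0f * p2 - 0.5f * p3;
    float c = -0.5f * p0 + 0.5f * p2;
    float d = p1;
    return ((a * t + b) * t + c) * t + d;
}

// Bicubic as two separable passes: four horizontal cubics (one per source
// row), then one vertical cubic across those results. The four row values
// can be computed once and reused for a whole column of output pixels that
// share the same fractional x.
static float bicubic(const std::array<std::array<float, 4>, 4>& p, float tx, float ty)
{
    std::array<float, 4> rows;
    for (int i = 0; i < 4; ++i)
        rows[i] = cubic(p[i][0], p[i][1], p[i][2], p[i][3], tx);
    return cubic(rows[0], rows[1], rows[2], rows[3], ty);
}
```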
Very nice video as always! I used bilinear sampling in the lightmaps for my 3D engine; that way I don't need too much resolution and the lack of resolution is less noticeable.
This is too interesting.
If you're able/willing to, please also cover bicublin (bicubic for luma, bilinear for chroma), and sinc/lanczos sampling.
This is very interesting and also very frightening as it reveals just how much I don't know. 😂
You're so great!
You're my hero ✨
I like the shorter format! Especially if it makes it easier for you to put the videos out ;)
I forgot to vote last video! I'm a fan of these shorter videos more frequently.
Awesome video!
Great video!
Shorter videos are indeed OK; it's easier to comprehend the data received )))
Great weekend!
Great video Javi, sadly I couldn't watch past 19:34
You missed a square, and you also use 0-3 for the base formula but 1-4 for the basis functions.
P2t sounds like a refreshing beverage
These interpolation methods ensure that the interpolated curve goes through the sample points exactly. But usually the low-resolution texture to be interpolated was acquired by down-sampling a (theoretical) higher-resolution image, i.e. by mixing pixels together.
So under this assumption, in order to more effectively "undo" that down-sampling, wouldn't it make more sense for the interpolation to preserve the area under the curve for each sample region instead?
These days I'm busy with Bjarne's books and metaprogramming. I'm working on something abstract from a mathematical field called algebraic geometry. Since C++ has so many graphics libraries, why don't we write classes and functions for graphics objects that would be valid for every library?
I'm still using the olc Console; your technique is very valuable.
_thanks J_
How about creating a new interpolation that "interpolates" between point sampling where the slope is big and bicubic where the slope is small, to preserve hard edges on the checkerboard or around the tree branches while keeping soft transitions on surfaces with smooth slopes? Maybe it would be possible to dynamically change the "sharpness" of the cubic interpolation curve?
If I were to build a rasterizer from scratch for a set of vectors, like the filled bezier paths in a font glyph or a simple vector drawing, is it a good method to draw it at a higher resolution (i.e. 2x or 4x) and then scale down with bilinear or bicubic sampling, or is there a better method? Also, is that how "2x" and "4x" anti-aliasing is achieved?
19:00 indices are wrong.
It's probably just me but I think point sampling looks better than interpolation 😅
You're not the only one. I think it depends on the type of image, but computer-generated images in particular, such as the checkerboard pattern, look better with point sampling, as it preserves the hard edges, whereas interpolation smooths over everything and makes it soft and blurry.
For pixel art and retro games, definitely. On the N64 you would always see tiny textures blown up to huge proportions because the texture unit was so limited it could only hold 32x32 pixels or something ridiculous like that, and everything was blurry. But I think most 3d games these days have an abundance of texture resolution, so you're never really zooming in to this degree unless you face plant into a wall. The main issue with point sampling is what happens when you animate it and move slightly. You get a lot of flickering and rapidly changing colors. But that's more of an issue zooming out than in, so I guess we're not there yet.
Next video about mipmaps!?
I suppose this would be useful if you were making an image editor like GIMP. But this is way too expensive for realtime rendering in 3D, which I assume is where this will eventually be going.
Dude, you are insanely smart
G’day
I recently watched the absolute masterpiece that is Freya Holmer's "The Continuity of Splines." As soon as you mentioned basis functions, I knew where this was going. :P If anyone wants to know more about splines and their properties, I highly recommend that video. It will all make sense. I wonder what would happen if you used a cardinal spline and started playing with the tension value; would it make edges look sharper? I might have to try that.
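If anyone wants to experiment with that, here is a minimal sketch of a single cardinal spline segment with an adjustable tangent scale (purely illustrative, not from the video; s = 0.5 gives Catmull-Rom):

```cpp
// Cardinal spline segment between p1 and p2, with neighbours p0 and p3.
// s is the tangent scale, s = (1 - tension) / 2: s = 0.5 is Catmull-Rom,
// and smaller s (higher tension) flattens the tangents at the sample points.
float cardinal(float p0, float p1, float p2, float p3, float t, float s)
{
    float m1 = s * (p2 - p0);           // tangent at p1
    float m2 = s * (p3 - p1);           // tangent at p2
    float t2 = t * t, t3 = t2 * t;
    // Cubic Hermite blend of the two endpoints and their tangents.
    return (2*t3 - 3*t2 + 1) * p1 + (t3 - 2*t2 + t) * m1
         + (-2*t3 + 3*t2)    * p2 + (t3 - t2)        * m2;
}
```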
Absolutely the same thing I thought!
New PGEX 3d update?
If you do ray-traced rasterization correctly, you don't even hit all the pixels, i.e. if pixels were objects, a ray-culled raster.
If you use instancing on low-resolution 4x4 or 64x64 per-pixel reflection map buffers, you get the full benefit of instanced fast rendering: not just multiple identical objects on the main draw buffer, but also a fast real-time scatter lighting system. Mirror reflections should be done by single-reflection per-pixel ray tracing as usual.
Hope to see you in Rockstar studios.
It's interesting that the performance increases with the filtering! But then what is the price that is paid?
I wouldn't be surprised if one day we find an olc::OS.
How can I make a program in C++ that describes the commands used in it? I asked on a lot of channels on TH-cam but no one gave me an answer.
If you could make a little lesson about that, you would be doing me a great favour.
Your question is unclear. What is the "used" command perhaps? Can you give an example of what you are expecting as a result?
My teacher at university gave us homework: make a program that describes itself.
It is something like: when I use a keyword like int, it tells me this is an integer.
Where I use +, it tells me this is the sum operator.
Something like this.
But he told us about a property called a token.
When I searched for it, they showed me the basics of C++.
I am sorry because I am weak in English; I hope you understand me.
What you are describing is a "text parser", which "tokenises input" strings to form an "abstract syntax tree". These can be written in any language that can process text input. I hope those search phrases help. I haven't done a video on this (yet) but I plan to before the end of this year.
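To make that concrete for the original poster, here is a tiny, purely hypothetical tokeniser sketch in C++ (not from any video): it scans one hard-coded line and prints a description of each token, which is roughly the first step before building a full parser.

```cpp
#include <cctype>
#include <iostream>
#include <string>

// Minimal tokeniser: classifies keywords like "int", operators like "+",
// numbers and identifiers, and prints a description of each token.
int main()
{
    std::string src = "int a = b + 3;";
    for (size_t i = 0; i < src.size();)
    {
        if (std::isspace(static_cast<unsigned char>(src[i]))) { ++i; continue; }

        if (std::isalpha(static_cast<unsigned char>(src[i])))
        {
            size_t j = i;
            while (j < src.size() && std::isalnum(static_cast<unsigned char>(src[j]))) ++j;
            std::string word = src.substr(i, j - i);
            if (word == "int") std::cout << word << " : keyword, declares an integer\n";
            else               std::cout << word << " : identifier\n";
            i = j;
        }
        else if (std::isdigit(static_cast<unsigned char>(src[i])))
        {
            size_t j = i;
            while (j < src.size() && std::isdigit(static_cast<unsigned char>(src[j]))) ++j;
            std::cout << src.substr(i, j - i) << " : number literal\n";
            i = j;
        }
        else
        {
            if (src[i] == '+')      std::cout << "+ : sum operator\n";
            else if (src[i] == '=') std::cout << "= : assignment operator\n";
            else                    std::cout << src[i] << " : punctuation\n";
            ++i;
        }
    }
}
```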
Thank you so much 🌹
firstt!!!! love your stuff javid!!
center of the pixel?
Try the Wolf3D engine but replace the rays with plane-rays (X+Y resolution plane-ray intersections with a sphere map / entity BVH per cell/entity).
Yes, you get both sphere 360-degree and screen-projected renders, easily.
Try super-high-resolution grayscale font images, compressed as JPGs and down-sampled to any resolution; just like textures, but for fonts, with one image per character.
Approximating the SDR font glyphs: compressed, they don't take up much space, and the high resolution gets each pixel approximately right when filtered.
How about trying a pre-ROM CPU emulator?
first
no i'm first