Speaker here! Thanks for watching! A few addendums:
- Missed a citation: the painting at 29:53 is by the wonderful Kazuo Oga
- The question at 36:41 is: "I have two monitors, why does an image look different when I drag it between them?"
- The slide at 28:34 is mislabeled: The bar labeled "Linear sRGB" is actually "Non-Linear sRGB". When using Linear sRGB, you actually have the opposite effect -- the middle colors look way too bright.
Happy to answer questions if you have them, too.
Do different monitors have different wavelengths (peak and spectral distribution) for their primary colours? I.e., red on one monitor might be 630 nm with a Gaussian falloff while another monitor's red peaks at 635 nm with a sharper falloff, which leads to the same image looking different?
What they really should do is recreate a much larger colour space... because none of those three renders is any good at all. But it was a nice talk.
When showing the transform animation, did you use the same software that 3blue1brown uses? It looked similar.
@@debajyotimajumder472 Yep! Most monitors have different primaries. There's a good (if technical) summary here: pdfs.semanticscholar.org/96a6/d64a646efd33d6d690c8a677625be2af6ffa.pdf
You can purchase a Spyder which can re-calibrate your monitor a bit (many manufacturers don't care that much). An accurate spectral distribution is what you're paying for when you get a nice monitor.
Hey, great talk! People should really know these basics. It's horrible that so much graphics software does math in non-linear sRGB (including parts of GIMP, Photoshop, almost any SVG renderer, etc.).
I found the desaturating in Lab part confusing though. How do you do that? Transform to Lab, set ab=0, then transform back? But that would be the same as transforming to any other colorspace that separates luma from chroma, neutralizing the chroma, and converting back! Even a linear color space would do (XYZ, YCbCr, etc...). Given that you start with linear sRGB, going through Lab would give the same shade of gray as by Y = 0.2126R + 0.7152G + 0.0722B, then setting RGB = YYY, which is much simpler. The color uniformity of Lab isn't used in this transformation. I also want to mention that for artistic purposes a color-to-grayscale conversion is frequently done the 'wrong' way to highlight some colors versus the others.
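For anyone who wants to try the simpler route described above, here's a minimal sketch (assuming the input really is linear sRGB and that numpy is available):

```python
import numpy as np

def desaturate_linear_srgb(rgb):
    """Desaturate a linear-sRGB image by replacing each pixel with its
    relative luminance Y (Rec. 709 / sRGB weights), i.e. RGB -> YYY."""
    rgb = np.asarray(rgb, dtype=np.float64)
    y = rgb @ np.array([0.2126, 0.7152, 0.0722])   # relative luminance
    return np.repeat(y[..., np.newaxis], 3, axis=-1)

# A pure green pixel ends up at about 72% grey, a pure blue one at about 7%,
# matching how strongly the eye responds to each primary.
print(desaturate_linear_srgb([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]))
```

If the data starts out as ordinary (non-linear) sRGB, it has to be decoded to linear light first, otherwise the weights are applied to gamma-encoded values and the greys come out wrong.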
Really nice talk. I spent like a week reading the wiki and three books about color but still felt confused about CIE XYZ, CIE RGB, and CIE LAB until I reached this video. Really helpful.
Still the best talk on color spaces!
Thanks a lot to the speaker, John Austin! Now I actually understand what this XYZ color space is and why it really can represent all colors in the visible spectrum... especially how to interpret these 2D coordinate mappings while actually having 3 color components. And that doing math in sRGB is a "bad" thing - and which color spaces to use for those calculations instead... This video gave me a lot of answers in such a short time!
Everybody interested (fascinated) by "color" as a concept should read "Catching the Light: The Entwined History of Light and Mind" by Arthur Zajonc
This talk was so much more interesting than I expected. Thanks a lot for this.
It's basically a 30-minute presentation of the history of color, and somehow it's better explained than in my multimedia classes
That was awesome! Thanks so much; I've tried to understand computer color before by Wikipedia-surfing -- this was so nice to get a big picture explanation!
This was a nice deeper dive into color and I'm happy to have watched it as an addition to a video of what went into picking new default colormaps for Python's Matplotlib: th-cam.com/video/xAoljeRJ3lU/w-d-xo.html
Spoiler: they use (a version of) CIE-LAB to make them perceptually uniform, exactly as intended
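If you want to check that claim for yourself, here's a rough sketch that samples viridis and looks at its CIELAB lightness (assuming matplotlib and scikit-image are installed):

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2lab

# Sample viridis at 11 points and convert the sRGB samples to CIELAB.
samples = plt.get_cmap('viridis')(np.linspace(0.0, 1.0, 11))[:, :3]
lightness = rgb2lab(samples.reshape(1, -1, 3))[0, :, 0]

# L* climbs smoothly and close to linearly from the dark purple end to the
# bright yellow end -- the uniformity the colormap was designed around.
print(np.round(lightness, 1))
```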
Wow, such a great talk. Very interesting. Thank you very much. 👍🙏
You did not mention that human vision also involves rods, which have a spectral sensitivity between the blue and green cones. In general, your considerations are a good approximation of color vision in bright light conditions, i.e. photopic vision, but there is also mesopic and scotopic vision, where color information is very limited.
I was wondering why we should convert from RGB to the XYZ color space or vice versa (I got the answer now, thank you)
Fascinating talk, thank you!
alpha/trans size: 2 bit, red: 2 bit, green: 2 bit, blue: 2 bit, saturation: 8 bit, luminosity: 8 bit. The alpha/trans size indicates the width of the transparency field; a larger transparency field means a smaller saturation field, so the higher the transparency value, the lower the color saturation resolution. The 4 mono cases where r,g,b equal 0,0,0 or 1,1,1 or 2,2,2 or 3,3,3 can also be used to cover the highest-transparency / lowest-color-depth cases for transparency between 192 and 256. This way you get 22-bit solid color down to 16-bit color for very transparent pixels. The first byte, holding the RGB hue and alpha/trans data, is used in a case statement to jump to the correct conversion procedure when converting to/from RGBA32 or its RGBASL48 expanded form. This RGBTSL24 format is compact and aids realtime transparency, luminosity and saturation effects; it preserves the alpha/trans data at the expense of color depth.
Could you rephrase this in a less schizophrenic manner? I have no idea what you are on about. Some kind of color compression?
amazing talk!
I'm making app for measuring colors, this video really helps, thanks!
This is so so interesting
That was insanely interesting.
Dear Sir, could you please tell me the precise value of the angle between the line of purples and the horizontal? I tried to measure it on the screen of my laptop, but it is not very accurate.
BTW, is there also a precise equation for the curve depicting the spectral locus?
It's wrong that CIELAB is perceptually uniform. If that were the case, Delta E would be perceptually uniform as well -- which it is not. That's why more uniform color spaces have been created (DIN99), or the color-difference calculation has to deal with the perceptual non-uniformity of CIELAB. This is what led to Delta E 2000.
Looks like you're right. I had never actually heard of Delta E before, and I'm struggling to find any clear visual examples, but from the descriptions it seems to improve on the perceptual uniformity of CIELAB. Thanks for the pointer.
I probably shouldn't have specified CIELAB as perceptually uniform. In the talk my intended meaning was "as perceptually close as you can get". Obviously that's not true, either, now knowing about Delta E.
@@JohnAustinPontoco To my understanding, oklab improves on that quality.
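For anyone wondering what Delta E concretely is: the original CIE76 formula is just Euclidean distance in CIELAB, which is exactly what would be a true perceptual difference if CIELAB were perfectly uniform. A tiny sketch of that version (CIEDE2000 adds lightness/chroma/hue weighting on top, and libraries such as scikit-image ship it as deltaE_ciede2000):

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: plain Euclidean distance between CIELAB points.
    Equal Delta E values would look equally different if CIELAB were truly
    uniform; in practice they don't, which is what motivated CIEDE2000,
    DIN99 and, more recently, Oklab."""
    lab1 = np.asarray(lab1, dtype=np.float64)
    lab2 = np.asarray(lab2, dtype=np.float64)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Two greys differing by 2 L* units: Delta E of 2, roughly the edge of what
# most observers can distinguish side by side.
print(delta_e_76([50.0, 0.0, 0.0], [52.0, 0.0, 0.0]))  # -> 2.0
```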
That version of the Union Jack hadn't been in use for 130 years in 1931
Yeah, I'm not sure how I ended up using that flag. :|
@@JohnAustinPontoco And the Japanese flag is incorrect for 1931.
Why would any of this matter? He didn't even need to have the flags.
Hi John,
After watching your video I learned a lot.
Could you share the slides with me?
whoah dude, thank u
At 29::2 you label the top gradient "Linear RGB". It's not a linear gradient, it's 'naive' sRGB blending. (Blending red->green in a linear color space looks much more like CIE LAB.)
So I am often visualizing data with Matplotlib/Python and ImageJ. They both have colormaps such as Viridis and Inferno. As far as I understand, these colormaps are not interpolating between RGB values in the sRGB color space; rather, they are LUTs that have been crafted to mimic the output one would get when interpolating in CIELAB. Is this correct? So when using Viridis there should be no advantage to changing to a CIELAB workflow?
Another question: I am also working with X-ray fluorescence measurements. In these measurements you spatially resolve the distribution of chemical elements. Often we visualize this type of data by assigning RGB to the three most prominent elements in the measurement, thus creating a color picture representing the result of the measurement. Use Google image search for "XRF RGB" to see some examples of what I mean. If I understood the talk correctly, CIELAB should yield a huge advantage in this case?
Correct, those color schemes are essentially doing the interpolation for you as a LUT, so you're getting all the benefits already, assuming you're just graphing a scalar value for visualization. CIELAB would only be a benefit if you wanted to change these color scales.
Regarding the XRF measurements. I'd say potentially! It sounds like the main desire is to make it easier for humans to pick out different elements against others, and for instance, your Green-assigned element in RGB is going to pop much more due to the fact that our eye responds more strongly to the green parts of the spectrum. The only trickiness, though, of using CIELAB in this case would be how to blend between the elements. You're essentially picking 3 different primaries, and then when multiple are present you linearly interpolate between them, but it's not absolutely clear to me that this interpolation would necessarily be easier to see. That said, CIELAB would definitely give you more control.
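If anyone wants to experiment with that, here's a rough sketch of blending two sRGB colors through CIELAB (scikit-image's converters are an assumption here; colour-science or similar would work the same way):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def mix_in_lab(rgb_a, rgb_b, t):
    """Blend two sRGB colors (components in 0..1) by interpolating in CIELAB.
    rgb2lab/lab2rgb take care of the sRGB nonlinearity, so the lerp itself
    happens in the roughly-perceptual space; results that land outside the
    sRGB gamut get clipped on the way back."""
    a = rgb2lab(np.asarray(rgb_a, dtype=np.float64).reshape(1, 1, 3))
    b = rgb2lab(np.asarray(rgb_b, dtype=np.float64).reshape(1, 1, 3))
    return lab2rgb((1.0 - t) * a + t * b).reshape(3)

# Midpoint of red and green: noticeably brighter than the muddy brown that
# naive sRGB averaging produces.
print(mix_in_lab([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5))
```

For the XRF case the same idea applies per element map: picking the element colors in CIELAB (for example equal L* and chroma, with hues spread apart) keeps any one element from dominating just because of how strongly the eye weights green.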
As someone who has spent considerable time fiddling around with colours, I can’t describe how fast RGB and CMYK colorspaces lose all their supposed intuitiveness. It’s hard to describe - the direction always feels “intuitive”, but the outcome of multiple transitions just does not.
Colorspaces like HSV solve this issue: you can “reason” intuitively about the change.
Unless you are working with three LEDs and each of them has fewer than ten possible levels of brightness, RGB is reasonable only as long as you don’t touch anything.
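To make that "reasoning about the change" point concrete, here's a small sketch with Python's standard-library colorsys: a hue rotation is one obvious knob in HSV, but it corresponds to a non-obvious shuffle of the R, G and B channels.

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Shift a color's hue in HSV while leaving saturation and value alone.
    rgb is a tuple of floats in 0..1."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# Rotating pure red by 120 degrees lands (up to float rounding) on pure green,
# an edit that's easy to reason about in HSV but awkward to express in RGB.
print(rotate_hue((1.0, 0.0, 0.0), 120.0))
```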
Theory of Everything right there. It's just missing the PE flat plane. Imagine presenting this to the world and some dude says, "no, that aint right. Let's compromise."
Now I understand why plots made in R are so beautiful... they use CIE LAB! Python's matplotlib library probably uses sRGB...
The default colormaps in matplotlib 2.0+ were designed to be visually uniform, with the help of CIE LAB (or a more modern version)
th-cam.com/video/xAoljeRJ3lU/w-d-xo.html
21:18 Even the camera of the iPhone X uses Display P3.
25:22 No, it is not. The ICC and ITU standards mandate operating on linear sRGB light.
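For reference, the sRGB transfer function that the linear-light requirement is about (IEC 61966-2-1) is the familiar piecewise curve. A rough numpy sketch of the decode/encode pair, with any blending, scaling or lighting math meant to happen between the two calls:

```python
import numpy as np

def srgb_to_linear(c):
    """Decode non-linear (gamma-encoded) sRGB values in 0..1 to linear light."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    """Encode linear-light values in 0..1 back to non-linear sRGB."""
    l = np.asarray(l, dtype=np.float64)
    return np.where(l <= 0.0031308, 12.92 * l, 1.055 * l ** (1 / 2.4) - 0.055)

# Encoded sRGB 0.5 is only about 21% linear light, which is why doing math
# directly on encoded values systematically darkens blends.
print(srgb_to_linear(0.5))                    # ~0.214
print(linear_to_srgb(srgb_to_linear(0.5)))    # round-trips to 0.5
```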
How come if there are three types of cone cells, we can't mix all perceptual colours from three primaries? And is it obvious that we can't make spectra that would correspond to the "imaginary primaries" (though perhaps they would appear the same as other colours on their own, they would appear to mix differently)?
The main reason you can't make them is the phenomenon at this point in the video: th-cam.com/video/AS1OHMW873s/w-d-xo.html
That's the intuition. The scientific reasoning has more to do with the fact that the cones overlap in how they respond to the spectrum. If the cones were perfectly independent, you could stimulate each one individually, but because they overlap, there's no way to "just stimulate the blue cone" without stimulating the green cone just a bit. This is why you'd need "negative" colors -- to subtract out the extra bits you didn't intend to stimulate.
The imaginary colors have no spectral meaning. So they don't produce a weird spectrum, they're just undefined. The mappings we use to go from RGB back to spectrums can't handle those colors.
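One way to see those "negative" colors concretely: take the chromaticity of a monochromatic green (roughly x = 0.074, y = 0.834 for 520 nm in the CIE 1931 diagram) and push it through the standard XYZ-to-linear-sRGB matrix; the red channel comes out well below zero, meaning sRGB's real primaries would have to emit "negative red" to match it. A small sketch:

```python
import numpy as np

# Standard CIE XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Chromaticity of monochromatic 520 nm light (CIE 1931), scaled to Y = 1.
x, y = 0.0743, 0.8338
xyz = np.array([x / y, 1.0, (1.0 - x - y) / y])

rgb = XYZ_TO_SRGB @ xyz
print(rgb)  # roughly [-1.30, 1.79, -0.08]: the negative red is the amount
            # sRGB's primaries would have to subtract to reach this green.
```

Clipping those negative components to zero is why spectral colors shown on an sRGB display always look a bit washed out compared to the real thing.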