"RGB to XYZ: The Science and History of Color" by John Austin

  • Published on Jul 20, 2024
  • Color is a fascinating subject. It is both incredibly subjective (i.e., what makes a painting beautiful) and perfectly scientific (i.e., wavelengths of light) at the same time. This talk is a deep dive into the history and science of color, the fascinating world of human perception, and the scientists who attempt to define it.
    Many of us are familiar with the RGB or perhaps even the sRGB color space. Generally, though, we stare at our color pickers, fiddle with the values, and pick something that looks right. However, the modern sRGB specification follows a long lineage. It extends back nearly a century, to the CIE 1931 XYZ color space -- one of the earliest specified color spaces and one that is still in use today.
    As display technology advanced, so did our color spaces. New ones emerged: CIELUV, CIELAB, YUV, HSV, HSL, RGB, sRGB. Each was molded for a specific use case, whether print, perceptual uniformity, compression, or LED displays. By the time we work our way back to the present day, we're left with a lot of color options and a lot of questions.
    In reality, though, the math and science aren't all that complex. We'll start with color spaces like RGB that we know and love, and see how they evolve naturally and mathematically from earlier color spaces such as CIE XYZ. More importantly, we'll see how to apply this knowledge to the decisions we make about color in our day-to-day work.
    John Austin
    A Stranger Gravity
    @kleptine
    John Austin is a developer and designer currently living in San Francisco, California. He has been making games for nearly 13 years and has worked at Google, Microsoft, Funomena, and others. He founded and currently leads the studio A Stranger Gravity, which builds thoughtful, accessible experiences that seek to enrich the lives of people across the world.
  • Science & Technology

Comments • 56

  • @JohnAustinPontoco
    @JohnAustinPontoco 4 years ago +116

    Speaker here! Thanks for watching! A few addendums:
    - Missed a citation: the painting at 29:53 is by the wonderful Kazuo Oga
    - The question at 36:41 is: "I have two monitors, why does an image look different when I drag it between them?"
    - The slide at 28:34 is mislabeled: The bar labeled "Linear sRGB" is actually "Non-Linear sRGB". When using Linear sRGB, you actually have the opposite effect -- the middle colors look way too bright. (A short sketch of this difference follows the comment.)
    Happy to answer questions if you have them, too.
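
A minimal sketch, not from the talk, of the difference the last addendum describes, using the standard sRGB transfer functions (IEC 61966-2-1): a 50/50 blend of black and white comes out noticeably brighter when averaged in linear light than when the gamma-encoded values are averaged directly.

```python
# A minimal sketch: blending in gamma-encoded sRGB vs. blending in linear light,
# using the standard sRGB transfer functions.

def srgb_to_linear(c: float) -> float:
    """Decode one gamma-encoded sRGB channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Encode one linear-light channel (0..1) back to gamma-encoded sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50/50 blend of black and white, done two ways:
naive = 0.5 * 0.0 + 0.5 * 1.0                  # average the encoded values: 0.5
linear_light = linear_to_srgb(
    0.5 * srgb_to_linear(0.0) + 0.5 * srgb_to_linear(1.0)
)                                              # average in linear light: ~0.735
print(naive, linear_light)
```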

    • @debajyotimajumder472
      @debajyotimajumder472 4 years ago

      do different monitors have different wavelengths (peak and spectral distribution) for their primary colours? i.e., red for one monitor might be 630nm with a gaussian falloff while another monitor's red might peak at 635nm with a sharper falloff, which leads to the same image looking different?

    • @livedandletdie
      @livedandletdie 4 years ago +1

      What they really should do is create a much larger colour space... because none of those three renders is any good at all. But it was a nice talk.

    • @heylookitsnana
      @heylookitsnana 4 years ago

      When showing the transform animation, did you use the same software that 3blue1brown uses? It looked similar.

    • @JohnAustinPontoco
      @JohnAustinPontoco 4 years ago

      ​@@debajyotimajumder472 Yep! Most monitors have different primaries. There's a good (if technical) summary here: pdfs.semanticscholar.org/96a6/d64a646efd33d6d690c8a677625be2af6ffa.pdf
      You can purchase a Spyder which can re-calibrate your monitor a bit (many manufacturers don't care that much). An accurate spectral distribution is what you're paying for when you get a nice monitor.

    • @ybungalobill
      @ybungalobill 4 years ago

      Hey, great talk! People should really know these basics. It's horrible that so much graphics software does math in non-linear sRGB (including parts of GIMP, Photoshop, almost any SVG renderer, etc..).
      I found the desaturating in Lab part confusing though. How do you do that? Transform to Lab, set ab=0, then transform back? But that would be the same as transforming to any other colorspace that separates luma from chroma, neutralizing the chroma, and converting back! Even a linear color space would do (XYZ, YCbCr, etc...). Given that you start with linear sRGB, going through Lab would give the same shade of gray as by Y = 0.2126R + 0.7152G + 0.0722B, then setting RGB = YYY, which is much simpler. The color uniformity of Lab isn't used in this transformation. I also want to mention that for artistic purposes a color-to-grayscale conversion is frequently done the 'wrong' way to highlight some colors versus the others.
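
To make the luma shortcut in the comment above concrete, here is a small sketch; it assumes the input is linear sRGB in the 0..1 range, and the weights are the Rec. 709 / sRGB luminance coefficients quoted in the comment.

```python
# A small sketch of the grayscale shortcut described above. Assumes linear sRGB
# input in 0..1 (apply the sRGB decode first if starting from encoded values).

def desaturate_linear_srgb(r: float, g: float, b: float) -> tuple:
    """Replace a linear-sRGB color with its relative luminance Y = 0.2126R + 0.7152G + 0.0722B."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (y, y, y)

print(desaturate_linear_srgb(1.0, 0.0, 0.0))  # pure red -> (0.2126, 0.2126, 0.2126)
```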

  • @QuaChanh1202
    @QuaChanh1202 1 year ago +2

    Really nice talk. I spent about a week reading the wiki and three books about color but still felt confused about CIE XYZ, CIE RGB, and CIE LAB until I reached this video. Really helpful.

  • @michaelmcnally1242
    @michaelmcnally1242 4 years ago +14

    Everybody interested (fascinated) by "color" as a concept should read "Catching the Light: The Entwined History of Light and Mind" by Arthur Zajonc

  • @Voltra_
    @Voltra_ 2 years ago +4

    It's basically a 30-minute presentation on the history of color, and somehow it's better explained than in my multimedia classes

  • @sdwarfs
    @sdwarfs 3 years ago +4

    Thanks a lot to the speaker, John Austin! Now I actually understand what this XYZ color space is and why it really can represent all colors in the visible spectrum... especially how to interpret these 2D coordinate mappings while actually having 3 color components. And that doing math in sRGB is a "bad" thing -- and which color spaces to use for those calculations instead... This video gave me a lot of answers in so little time!

  • @apropostt
    @apropostt 4 years ago +4

    This talk was so much more interesting than I expected. Thanks a lot for this.

  • @wmhilton-old
    @wmhilton-old 4 years ago +5

    That was awesome! Thanks so much; I've tried to understand computer color before by Wikipedia-surfing -- it was so nice to get a big-picture explanation!

  • @user-ni4uq9ph7z
    @user-ni4uq9ph7z 1 year ago +2

    You did not mention that human vision also involves rods, which have a spectral sensitivity between the blue and green cones. In general, your considerations are a good approximation of color vision in bright-light conditions, i.e. photopic vision, but there is also mesopic and scotopic vision, where color information is very limited.

  • @valentinussofa4135
    @valentinussofa4135 2 years ago

    Wow, such a great talk. Very interesting. Thank you very much. 👍🙏

  • @BasDirks
    @BasDirks 1 year ago

    Fascinating talk, thank you!

  • @mega4488
    @mega4488 2 years ago +1

    amazing talk!

  • @dougmiyoussef6963
    @dougmiyoussef6963 4 years ago

    I was wondering why we should convert from RGB to the XYZ color space or vice versa (I got the answer now, thank you)

  • @KangJangkrik
    @KangJangkrik 3 years ago

    I'm making an app for measuring colors; this video really helps, thanks!

  • @lukasmoser2211
    @lukasmoser2211 2 years ago

    That was insanely interesting.

  • @yehudadm
    @yehudadm 2 years ago +1

    This is so so interesting

  • @PrivateSi
    @PrivateSi 4 years ago

    A proposed RGBTSL24 format: alpha/trans size: 2 bits, red: 2 bits, green: 2 bits, blue: 2 bits, saturation: 8 bits, luminosity: 8 bits. The alpha/trans size field indicates the size of the transparency field; a larger transparency field means a smaller saturation field, so the higher the transparency value, the lower the color-saturation resolution. The 4 mono cases where r,g,b equal 0,0,0 or 1,1,1 or 2,2,2 or 3,3,3 can also be used to cover the highest-transparency / lowest-color-depth cases, for transparency between 192 and 256. This way you get 22-bit solid color down to 16-bit color for very transparent pixels. The first byte, holding the RGB hue and alpha/trans data, is used in a case statement to jump to the correct conversion procedure when converting to/from RGBA32 or its expanded RGBASL48 form. This RGBTSL24 format is compact and aids realtime transparency, luminosity, and saturation effects, preserving the alpha/trans data at the expense of color depth.

    • @DasAntiNaziBroetchen
      @DasAntiNaziBroetchen 11 months ago +1

      Could you rephrase this in a less schizophrenic manner? I have no idea what you are on about. Some kind of color compression?

  • @Energya01
    @Energya01 4 years ago +3

    This was a nice deeper dive into color, and I'm happy to have watched it as a companion to a video about what went into picking the new default colormaps for Python's Matplotlib: th-cam.com/video/xAoljeRJ3lU/w-d-xo.html
    Spoiler: they use (a version of) CIE-LAB to make them perceptually uniform, exactly as intended

  • @terriplays1726
    @terriplays1726 4 years ago +1

    So I often visualize data with Matplotlib/Python and ImageJ. They both have colormaps such as Viridis and Inferno. As far as I understand, these colormaps are not interpolating between RGB values in the sRGB color space; rather, they are LUTs that have been crafted to mimic the output one would get when interpolating in CIELAB? Is this correct? So when using Viridis there should be no advantage to changing to a CIELAB workflow?
    Another question: I am also working with X-ray fluorescence measurements. In these measurements you spatially resolve the distribution of chemical elements. Often we visualize this type of data by assigning R, G, and B to the three most prominent elements in the measurement, thus creating a color picture representing the result of the measurement. Use Google image search for "XRF RGB" to see some examples of what I mean. If I understood the talk correctly, CIELAB should yield a huge advantage in this case?

    • @JohnAustinPontoco
      @JohnAustinPontoco 4 years ago +2

      Correct, those color schemes are essentially doing the interpolation for you as a LUT, so you're getting all the benefits already, assuming you're just graphing a scalar value for visualization. CIELAB would only be a benefit if you wanted to change these color scales.
      Regarding the XRF measurements. I'd say potentially! It sounds like the main desire is to make it easier for humans to pick out different elements against others, and for instance, your Green-assigned element in RGB is going to pop much more due to the fact that our eye responds more strongly to the green parts of the spectrum. The only trickiness, though, of using CIELAB in this case would be how to blend between the elements. You're essentially picking 3 different primaries, and then when multiple are present you linearly interpolate between them, but it's not absolutely clear to me that this interpolation would necessarily be easier to see. That said, CIELAB would definitely give you more control.
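
For readers who want to try the kind of CIELAB interpolation discussed in this thread, here is a rough sketch; it assumes scikit-image is available for the color conversions, and the helper name lerp_lab is just for illustration.

```python
# A rough sketch of blending two sRGB colors by interpolating in CIELAB,
# assuming scikit-image is available for the color conversions.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lerp_lab(rgb_a, rgb_b, t):
    """Blend two sRGB colors (floats in 0..1) by linear interpolation in CIELAB."""
    lab_a = rgb2lab(np.asarray(rgb_a, dtype=float)[None, None, :])[0, 0]
    lab_b = rgb2lab(np.asarray(rgb_b, dtype=float)[None, None, :])[0, 0]
    lab_mix = (1 - t) * lab_a + t * lab_b
    return lab2rgb(lab_mix[None, None, :])[0, 0]   # clipped back into sRGB

# A short red-to-green ramp with more perceptually even steps than a naive sRGB lerp:
ramp = [lerp_lab((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), t) for t in np.linspace(0, 1, 5)]
print(np.round(ramp, 3))
```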

  • @philnewman7613
    @philnewman7613 3 years ago +3

    It's wrong to say that CIELAB is perceptually uniform. If that were the case, Delta E would be perceptually uniform as well -- which it is not. That is why more uniform color spaces have been created (DIN99), or the color-difference calculation has to deal with the perceptual non-uniformity of CIELAB. This is what led to Delta E 2000.
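
As a reference point for the Delta E mentioned above, here is a tiny sketch of the original CIE76 formula, which is just Euclidean distance in CIELAB; the fact that equal CIE76 distances do not look equally different everywhere is part of what motivated CIEDE2000 (Delta E 2000).

```python
# A tiny sketch of CIE76 Delta E: plain Euclidean distance in CIELAB.
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

print(delta_e_76((50, 20, -30), (55, 10, -20)))  # 15.0
```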

    • @JohnAustinPontoco
      @JohnAustinPontoco 3 years ago

      Looks like you're right. I had never actually heard of Delta E before, and I'm struggling to find any clear visual examples, but from the descriptions it seems to improve on the perceptual uniformity of CIELAB. Thanks for the pointer.

    • @JohnAustinPontoco
      @JohnAustinPontoco 3 years ago

      I probably shouldn't have specified CIELAB as perceptually uniform. In the talk my intended meaning was "as perceptually close as you can get". Obviously that's not true, either, now knowing about Delta E.

    • @DasAntiNaziBroetchen
      @DasAntiNaziBroetchen 11 months ago +1

      @@JohnAustinPontoco To my understanding, oklab improves on that quality.

  • @chadyonfire7878
    @chadyonfire7878 1 year ago

    whoah dude, thank u

  • @gillesregnery9692
    @gillesregnery9692 4 months ago

    Dear Sir, could you please tell me the precise value of the angle between the line of purples and the horizontal? I tried to measure it on the screen of my laptop, but it is not very accurate.
    BTW, is there also a precise equation for the curve depicting the spectral locus?

  • @rj8528
    @rj8528 2 years ago

    Hi John,
    I learned a lot after watching your video.
    Could you share the slides with me?

  • @Synthetica9
    @Synthetica9 4 years ago +3

    That version of the Union Jack hadn't been in use for 130 years in 1931

    • @JohnAustinPontoco
      @JohnAustinPontoco 4 years ago +1

      Yeah, I'm not sure how I ended up using that flag. :|

    • @simonmoore8776
      @simonmoore8776 2 years ago +1

      @@JohnAustinPontoco And the Japanese flag is incorrect for 1931.

    • @DasAntiNaziBroetchen
      @DasAntiNaziBroetchen 11 months ago

      Why would any of this matter? He didn't even need to have the flags.

  • @j_razavi
    @j_razavi 4 years ago +1

    How come if there are three types of cone cells, we can't mix all perceptual colours from three primaries? And is it obvious that we can't make spectra that would correspond to the "imaginary primaries" (though perhaps they would appear the same as other colours on their own, they would appear to mix differently)?

    • @JohnAustinPontoco
      @JohnAustinPontoco 4 years ago +4

      The main reason you can't make them is the phenomenon at this point in the video: th-cam.com/video/AS1OHMW873s/w-d-xo.html
      That's the intuition. The scientific reasoning has more to do with the fact that the cones overlap in the way they respond to the spectrum. If the cones were perfectly independent, you could stimulate each one individually, but because they overlap, there's no way to "just stimulate the blue cone" without stimulating the green cone just a bit. This is why you'd need "negative" colors -- to subtract out the extra bits you didn't intend to stimulate.
      The imaginary colors have no spectral meaning. So they don't produce a weird spectrum; they're just undefined. The mappings we use to go from RGB back to spectra can't handle those colors.
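
To put a number on the "negative colors" point, here is a sketch using the standard D65 XYZ-to-linear-sRGB matrix; the chromaticity used for 500 nm light is approximate, taken from the CIE 1931 tables.

```python
# A sketch of why a pure spectral color needs a "negative" amount of an sRGB
# primary: convert it from XYZ with the standard D65 XYZ -> linear sRGB matrix.
import numpy as np

XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Approximate CIE 1931 chromaticity of monochromatic ~500 nm light, scaled to Y = 1.
x, y = 0.0082, 0.5384
xyz = np.array([x / y, 1.0, (1.0 - x - y) / y])

rgb = XYZ_TO_LINEAR_SRGB @ xyz
print(rgb)  # the red component is negative: the color lies outside the sRGB gamut
```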

  • @jeffmcclintock
    @jeffmcclintock 2 years ago +3

    At 29::2 you label the top gradient "Linear RGB". It's not a linear gradient; it's 'naive' sRGB blending. (Blending red->green in a linear color space looks much more like CIE LAB.)

  • @debajyotimajumder472
    @debajyotimajumder472 4 years ago +1

    Now I understand why plots made in R are so beautiful... they use CIE LAB!!!!! Python's matplotlib library probably uses sRGB......

    • @Energya01
      @Energya01 4 years ago +1

      The default colormaps in matplotlib 2.0+ were designed to be perceptually uniform, with the help of CIE LAB (or a more modern variant):
      th-cam.com/video/xAoljeRJ3lU/w-d-xo.html

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 year ago

    As someone who has spent considerable time fiddling around with colours, I can't describe how fast RGB and CMYK colorspaces lose all their supposed intuitiveness. It's hard to describe: the direction of a change always feels "intuitive", but the outcome of multiple transitions just does not.
    Colorspaces like HSV solve this issue: you can "reason" intuitively about the change (a tiny sketch of that kind of edit follows below).
    Unless you are working with three LEDs, each with fewer than ten possible levels of brightness, RGB is reasonable only as long as you don't touch anything.
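
A tiny sketch of the kind of "reason about the change" edit described above, using Python's standard-library colorsys; the specific color values are just for illustration.

```python
# Darken a color by scaling V in HSV while leaving hue and saturation alone.
import colorsys

r, g, b = 0.9, 0.4, 0.2                        # an orange, sRGB floats in 0..1
h, s, v = colorsys.rgb_to_hsv(r, g, b)
darker = colorsys.hsv_to_rgb(h, s, v * 0.6)    # same hue/saturation, 40% darker
print(darker)
```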

  • @thomasolson7447
    @thomasolson7447 10 months ago

    Theory of Everything right there. It's just missing the PE flat plane. Imagine presenting this to the world and some dude says, "no, that ain't right. Let's compromise."

  • @lolerie
    @lolerie 1 year ago

    21:18 Even the camera of the iPhone X uses Display P3.
    25:22 No, it is not. The ICC and ITU standards mandate operating on linear sRGB light.