Hats off for this extensive research. Throughout my long career I have met endless professors, doctors and engineers. Most have one thing in common: create the very best instrument a human can build. This is the reason they invented the word "Philosophy". For a while I was in the Fuji camp and I had my followers who just loved Fuji colors. Then I did some travels. Cultural interpretations of colours can be wider than the ones in music. Color temperature, and how humans interpret their own sun as the ideal light. Let alone the fact that sunlight creates vitamin D, a fact even the very best researcher cannot escape. Create your own waveform and then find that the best scientists will never agree with even the weakest of artists. Who would ever admit that they are partially color blind? Well, our brains can not only heal, they can also interpolate. I just love to find the new discoveries from the newly discovered subparticles and how photons are changing research all the time.
You are so underrated, you deserve millions of subs
I have the privilege of working with actual multi-thousand-dollar calibration lamps and professional spectrophotometers, and I can say that your approach is very commendable. Every technique you've used (except machine learning and modifying an actual SLR) is something I've also done at some point. The only thing I would have done differently is building a calculation pipeline from the entered calibration lamp color temperature to the estimated solar spectrum, so you can drag a slider and see at which color temperature the solar spectrum is most ideal. Furthermore, it should be very easy to make a gradient descent function for the dips in the solar spectrum, so you can just click on a dip (say, 486 nm) and it will find the exact pixel. For extra kudos, implement a cubic interpolation curve to get sub-pixel accuracy, which I've done too.
That's great feedback, thank you very much! I'll keep your suggestions in mind for a potential follow-up (I was thinking of illustrating why using JPEGs is not super accurate; I could combine that with some other improvements ...)
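For anyone wanting to try the dip-finding idea from the comment above: here is a minimal sketch, assuming a 1-D intensity array indexed by pixel. It uses a simple windowed minimum search in place of true gradient descent, and a parabola fit through the three samples around the minimum as a lightweight stand-in for full cubic interpolation. The function name and window size are invented for illustration.

```python
import numpy as np

def find_dip_subpixel(intensity, start_px, window=25):
    """Locate the dip nearest a clicked pixel, then refine to sub-pixel
    accuracy with a parabola fit through the three neighbouring samples."""
    i = int(start_px)
    lo = max(0, i - window)
    hi = min(len(intensity), i + window + 1)
    # coarse step: integer pixel of the minimum inside the search window
    i = lo + int(np.argmin(intensity[lo:hi]))
    if i == 0 or i == len(intensity) - 1:
        return float(i)  # can't fit a parabola at the array edge
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)
    # vertex of the parabola through the three points around the minimum
    return i + 0.5 * (y0 - y2) / denom
```

On a synthetic Gaussian dip this recovers the dip centre to a few hundredths of a pixel even when you click a couple of pixels away.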
You could also add the lamp temperature as a parameter to the machine learning, with a penalty for how far from 2700 K it is. You are consistently getting red low, green high; it feels like with a slight tweak of the "known" temperature you could get a better result.
If you do that, you could also add in the other lamps: 'please find 3 temperatures and 3 spectra, so that these all match'.
Jesus....this is...BEYOND insanely amazingly rigorous. And now, I think I'm going to just go buy a spectrometer off the shelf. lol
A monochrome camera with no Bayer filter layer is actually easy to get. There are two options I can think of: one is a Raspberry Pi module by Arducam, and another is an industrial naked USB module from a Chinese supplier, UVC webcam compatible. Neither is actually expensive.
@8:00 I think you are right that THAT is what happens when you stop down the lens. But the graph you use to explain it doesn't seem to make the point very clearly.
The spectroscope creates an *angular* separation between the spectral wavelengths.
The other optical paths through the lens and aperture are simply not available at these angles.
@9:00 The (main) issue, I think, is not that the green light has a different origin in the source, but that it has a specific angle.
If you put the spectrum at 45 degrees, the mosaics are both nicer to work with, and you get way more pixels to waste on smoothing.
Fantastic project! Thank you for sharing it. I've played with using a webcam (IR filter removed) with Theremino Spectrometer V3.1. The setup was good enough for testing UV pass-through on various lenses down to about 365 nm. My only disappointment was that the setup's resolution is too low for much beyond pass/fail testing. I can't wait to make and try out your project for myself!
Excellent explanation 💥💯💜
Most white surfaces (paper, paint, ...) contain fluorescent additives. Huge effect if there's any UV...
I've read from multiple sources that white Teflon is the ideal cheap and readily available reflective material from UV through IR, for reasonably accurate measurements at both ends.
@@jimzielinski946 Yes, white Teflon is the right choice; if possible, a frosted one. I used such a reflector to calibrate the spectrometer for measuring a sun simulator for solar panels.
Very good work and excellent video!
Pretty nice video and really nice project. I might give it a try as well.
Thanks to Fujirumors btw for making me aware of this video.
Take the picture of the spectrum at an angle on the Bayer filter?
Fluorescent lights are good as well for calibration.
@25:16 glass is terrible for UV. And IR cutoff is a pain as well.
Great suggestions, I like the idea of taking a picture at an angle!
@@_mpr_ you can theoretically get better resolution because... Pythagoras and all that stuff. The cameras I like playing with are the camera modules you can get from China. They'll either be C/CS or M12 mount (that you can actually take off). The IR filters are pretty easy to remove or replace. You can find some very narrow FoV lenses. You can put a piece of diffraction grating right in front of the lens and angle it away from a slit of razor blades in a box.
But if you want to stick with raw image data, a Pi with a camera module could probably replace the USB cameras. With a different lens setup you can even turn it into a microscope with a 3D-printed tube and a cheap objective.
So I do have a few cheap M12-mount cameras lying around and I used them in my initial tests. Being able to easily remove the IR filter is certainly very useful, but I found the quality/resolution of the images to be worse than that of my XT2/4. Also, with cheap lenses like those we do have to worry about distortions and aberrations impacting our results. Another reason for choosing this setup was that I wanted to keep the hardware tinkering to a minimum for this project. There's also one more point against using JPEGs that I didn't mention in the video (because I only thought about it afterwards): the white-balancing process used when creating JPEGs will impact the accuracy of your results (I'll post something about that once I've tested it thoroughly).
White balancing and the de-linearization of intensity values are so inaccurate in cameras, it's not even a joke.
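For concreteness, this is roughly the de-linearization being referred to: JPEG pipelines store values through a transfer curve close to the sRGB one (plus white balance and proprietary tone curves), so pixel values are not proportional to light intensity and must be linearized before any spectral measurement. A sketch of the standard sRGB inverse transfer function, for values scaled to 0..1 (real cameras deviate from this curve, which is part of the inaccuracy):

```python
def srgb_to_linear(v):
    """Invert the sRGB transfer curve for a value v in [0, 1].
    Below the linear toe the curve is a straight line; above it,
    a power law with exponent 2.4."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```

For example, a mid-grey JPEG value of 0.5 corresponds to only about 21% linear intensity, which is why summing JPEG pixel values directly gives a badly distorted spectrum.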
This is awesome! Do you know if the color filtering would affect a version that wants to look at the IR side?
Hey, yes, this specific setup is not really suitable for measuring IR radiation because the IR filter in your camera would block most of the IR wavelengths. Some of the smaller, cheaper camera/lens combinations (e.g. as used for CCTV) contain a removable IR filter, but when I tested those the results were not as good as with a high-quality camera (although I didn't focus on the IR side ...)
Wow, this is absolutely *_brilliant!_*
I’m just blown away by all the work you did to establish the calibration 🤯
I’d imagine that the spectral width of the Fraunhofer lines is well known; can you determine the resolution of your spectrograms from them? (I don’t know how resolution is specified, but assume it’s something like FWHM.)
This is really outstanding work, and your explanation of all of it was crystal-clear!
Thanks for sharing, good luck on your future projects!
(A couple of minutes later… :-)
I was thinking a bit more about resolution and how to increase it, and had a maybe-harebrained idea: From camera testing, I’m familiar with the slanted-edge test on the ISO-12233 resolution test target. The idea is that a slightly slanted black-white edge will result in a whole range of pixels with more or less of the white side of the edge falling within them. From this, you can extract a curve that shows the spatial response of the camera’s sensor with sub-pixel resolution.
Although I haven’t seen it done in camera imaging, it seems to me that you could extract the point-spread function and then deconvolve (I think that’s what the operation would be, would have to think about it more) that with the camera’s images to get an optimally-sharpened image.
I wonder if you could extract similar information and perform similar deconvolution relative to spectral resolution by rotating the grating axis slightly and then looking along the columns of pixels to see what transitions looked like? You wouldn’t have something as clean as a sharp black/white edge to work with, but intuitively it seems like you ought to be able to do something with the Fraunhofer lines to accomplish something like this.
(Oh - could you maybe use ML to adjust a deconvolution function until the profiles of the Fraunhofer lines more closely matched their actual structure?)
I dunno if anything along these lines would work, and it’s probably waaay more effort than you’d want to put in on this particular project, but the whole project is very intriguing to me 😁
(I may be all wet here; I should be going to bed so my brain isn’t very sharp and I don’t have time to devote to thinking about it.)
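On the resolution question in the comment above: FWHM is indeed the usual way to state spectral resolution, and one rough way to get it from a recorded Fraunhofer line is to measure the full width of the dip at half its depth. A minimal sketch, assuming a 1-D spectrum containing a single dip on a roughly flat continuum; the helper name is made up, and the continuum estimate (just the maximum) is deliberately crude.

```python
import numpy as np

def dip_fwhm(wl, intensity):
    """Full width at half minimum of a single absorption dip, via linear
    interpolation of the half-depth crossings on either side."""
    base = np.max(intensity)          # crude local continuum estimate
    i_min = int(np.argmin(intensity))
    half = (base + intensity[i_min]) / 2.0
    # walk left and right from the minimum to the half-depth crossings
    left = i_min
    while left > 0 and intensity[left] < half:
        left -= 1
    right = i_min
    while right < len(intensity) - 1 and intensity[right] < half:
        right += 1
    def cross(a, b):
        # wavelength where the line between samples a and b crosses `half`
        return wl[a] + (half - intensity[a]) * (wl[b] - wl[a]) / (intensity[b] - intensity[a])
    return cross(right - 1, right) - cross(left + 1, left)
```

On a synthetic Gaussian dip this recovers the analytical FWHM (2.355 sigma) to within a small fraction of the sample spacing, so comparing the measured width of a known-narrow line against this gives an upper bound on the instrument resolution.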
Hey Dave, thanks for your kind words about the video, it was a lot of work and it's great to get feedback like that! I really like your idea about the slanted-edge/slanted-Fraunhofer lines. At first glance I think that could work for some of the very steep and deep Fraunhofer lines. I also think that it will require a lot of careful thinking and testing to see if and how well it works (so many of the great ideas that I've had fell apart when testing them carefully :D ). In any case, I'll put it on my list of future things to test!
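The slanted-edge oversampling trick being discussed can be sketched numerically: because each row sees the edge (or slanted spectral line) at a slightly different sub-pixel offset, projecting all rows onto a common, shift-corrected axis yields a profile sampled finer than the pixel pitch. A toy version with a known slope (in practice the slope would be estimated from the image itself; names and bin counts are invented):

```python
import numpy as np

def oversampled_esf(img, slope, bins_per_px=4):
    """Slanted-edge style oversampling. `slope` is the horizontal edge
    shift per row, in pixels. Re-expresses every pixel's column in
    edge-centred coordinates, then bins on a grid finer than one pixel."""
    h, w = img.shape
    positions, values = [], []
    for r in range(h):
        shift = r * slope                  # this row's edge offset
        for c in range(w):
            positions.append(c - shift)    # column in edge-centred coords
            values.append(img[r, c])
    positions = np.asarray(positions)
    values = np.asarray(values)
    # snap positions to a grid bins_per_px times finer than one pixel
    grid = np.round(positions * bins_per_px) / bins_per_px
    xs = np.unique(grid)
    esf = np.array([values[grid == x].mean() for x in xs])
    return xs, esf
```

The returned profile is sampled at quarter-pixel spacing here, which is the raw material you would then differentiate (for a line spread function) or feed into a deconvolution, as suggested above.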
You can get a good estimate of the temperature of a light bulb's filament by measuring the voltage across it and the current through it when taking the photo, and then breaking it open and weighing the filament. Don't ask me for the formula though. But it's a cheap way to derive a standard if you have good meters and a microgram scale.
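A related, simpler shortcut (swapping the filament-weighing route above for the hot/cold resistance ratio): tungsten's resistivity rises roughly as T^1.2, so if you measure the filament's cold resistance beforehand, the hot temperature follows from Ohm's law on the in-operation voltage and current. The exponent is an empirical approximation, good to a few percent over typical lamp temperatures, and the function is illustrative only.

```python
def tungsten_temperature(v_hot, i_hot, r_cold, t_cold=293.0):
    """Rough filament temperature [K] from the hot/cold resistance ratio,
    using the common tungsten approximation R proportional to T^1.2."""
    r_hot = v_hot / i_hot          # Ohm's law on the operating lamp
    return t_cold * (r_hot / r_cold) ** (1.0 / 1.2)
```

For example, a bulb drawing 0.25 A at 230 V (920 ohms hot) with an 80-ohm cold filament comes out around 2240 K, a plausible figure for a household incandescent.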
Great work! 🌈🤗
You should publish a tutorial paper about this method.