For some reason this guy knows so much more technical stuff than most of the youtubers.
bro your channel is so underrated the quality of your videos is for a channel of millions of subscribers, and don't think we don't notice the effort you are getting so much better in blender
I feel like I finally found a place I can learn things that matter, as an engineer and photographer this guy is awesome
Yeah! He's crazy knowledgeable! A real expert! ❤
Agree
Please don't ever stop uploading
it's great to have royalty coming by this video, your highness
This is by far the best video on this subject on all of youtube. More deep dives like this please!
You’re not annoying at all! I love this kind of nerd talk. Keep it up my man!!
Exactly the topic I've been wondering about
Dude just wow - this single video was more informative than reading hundreds of pages of a photography forum. You are a small channel but I have notifications on for everything you post. What's your acad background? Are you an Engineering major?
His bachelor's was in computer engineering :)
Love that subtle star trek enterprise reference ❤
On point. Well researched. Spot on.
YouTube needs more of these correct videos, not the endless quagmire of washed-up photographers who decided to start a film course with ZERO knowledge or understanding.
Kudos kid. Keep up the good work. You are among the very few doing it right now.
This channel is gonna be big. This is so well done I'm shocked it's not from a million-sub creator.
The content you created is truly unique in the world. I can’t believe how much time you spent organizing and turning this into a video. I really admire you.
Finally a video that puts all the information about ISO all together in one place. 😊 thx
Wow, what a video! Great job!
This video will be a reference point from now on! 👏👏👏
Great video
“you’re gonna be a star, kid!”
-Italian film producer
Honestly, one of the best videos i ever seen on cameras, please continue making more videos.
Insane production value, damn. Interesting stuff.
Very in-depth explanation. That's exactly my style! Thanks for taking the time to make this video.
Cheers 🍻
love the intro, and as a camera nerd even I was surprised that the A7sIII has a 48mp sensor
Thank you so much!! I'm obsessive about this stuff too. I can not thank you enough for all the details you put into this!! No, you are not annoying! You are extremely clear!
This content deserves 1000 times more views.
love your animations dude
Thanks for spending the time on it, well done.
Amazing video!
Great video as always
Photography channels can only wish they were able to explain this topic in this much detail.
Excellent video - detailed, yet easily digestible, I'm glad the algorithm pointed me in your direction. The only tip I have is don't project your voice so much, as this can be fatiguing to the listener and will make absorbing the information less efficient. Imagine that you are talking to the Lavalier Microphone on your chest, not the Camera filming you, and you should have an ideal level. Compare recordings using both methods to find the "sweet spot"*. Once again, thank you for a fascinating and well illustrated video!
[* Gleaned from 35 years of qualified professional vocal coaching, performing, and media content creation]
You're doing good work. Don't stop.
Very enjoyable, entertaining, and informative
Thanks!
Excellent video! I would add only two things: first is that some cine cameras have a single gain/amplification prior to the ADC, and that the "ISO" setting on those cameras controls a digital gain after the ADC, which can provide a different tradeoff depending on the read noise inherent to the amplifier.
The other thing I'd highlight is maybe less relevant with modern sensor designs, but if each photosite has some amount of surface area that collects light and a portion that contains the remaining electronics, then obviously you get better light gathering by maximizing the light sensitive proportion of the surface area. If we lived in a world where the absolute size of the other electronics was fixed for each photosite, then having smaller photosites would give worse light gathering. Not gonna be a noticeable factor nowadays, but it's something to consider.
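The geometry this commenter describes can be sketched with rough numbers. A minimal sketch, assuming a fixed per-pixel electronics footprint; the pitches and the electronics area below are purely hypothetical:

```python
# Hypothetical sketch: the light-sensitive fraction (fill factor) of a
# photosite when a fixed-size block of per-pixel electronics eats into
# its area. Larger photosites lose proportionally less of their area.
def fill_factor(pitch_um: float, electronics_um2: float) -> float:
    """Fraction of the photosite area left over for light gathering."""
    area = pitch_um ** 2
    return max(0.0, (area - electronics_um2) / area)

# With a made-up 1.5 um^2 of electronics per pixel:
for pitch in (2.0, 4.0, 8.0):
    print(f"{pitch} um pitch -> fill factor {fill_factor(pitch, 1.5):.3f}")
```

As the comment notes, microlenses and BSI designs make this effect much smaller on modern sensors than the raw geometry suggests.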
For the past 10 years I've been wrong. Thanks for making this video
I love nerding out but most people out there don’t even bother reading manuals
Very enlightening and in-depth. Thanks for uploading!
Fantastic. I can't believe that I am watching this on youtube! I think that you got pretty much everything right and delivered all of it extremely well. Thanks a lot for taking the time (and risk) to make this very instructive video.
Considering that read noise depends mostly on engineering tradeoffs, perhaps it would have been even better to focus only on pixel size at same sensor size (with smaller sensors, there are additional considerations, for instance they could be easier to keep cool, or they could be cheaper to optimize for better quantum efficiency).
Great information. You rule! Thanks so much.
So glad I just found your channel man! I have to rewatch some aspects of what you say just because I’m very visual but when you use models I can follow along just fine.
I also appreciate the theatrical inner dialogue. Watch your eye line; this video is awesome. As a mechanical engineering student who loves being a cinematographer, we are very similar thinkers.
Sooo excited to see you making more videos!
Excellent content. Great work!
Fantastic job!!!
This info helps! Especially on used digital cameras where the sensors are "tired".
One tiny error in the video: at 4:56 the light hits the diaphragm, not the aperture. The aperture is the hole in the middle of the diaphragm.
This is the best, most thorough video on the topic I’ve ever seen!
never stop making these videos man 👍
Subbed, loved the video, I wish that you would do an extended version explaining the difference for video specific shooters (most can't shoot raw), lots of confusion in that realm!
Welcome back!
Really good video. I'd like to add that the native sensitivity of an inter-scene dynamic range sensor is when the smaller capacitor is engaged, or, for other designs, when the capacitor is off. That is the native setting for the sensor. The second native circuit is the one where the larger capacitor (or the capacitor itself) is engaged, increasing the full-well capacity.
One more thing: the A7S III and its derivatives using the same sensor apply an ungodly amount of noise reduction to make the low-light footage possible. It's not just the pixel binning doing all the work.
Love this video. It covers some ideas that I had really wished were explained better, and I even considered making a video myself to explain them, but you've done a much better job already. I'm gonna share this with a few friends as an example of a great resource. A couple of comments/confusions though:
1. The wave nature of light makes the analogy to a chaotic system and Monte Carlo rendering a little less accurate. To some extent the function of what light might reach a given point on the sensor (the probability) is continuous, but obviously light is still quantized, so only a finite number of photons will actually be absorbed (and then some fraction of those successfully converted into an electrical signal). The result is still largely the same, but at least from my (limited!) understanding of optics, this section stood out as matching my understanding of the underlying physics less well.
2. On native ISO: I thought there was some amount of standardization to what ISO means, in that I can take a 400-speed film, and several cameras with different sensors set to ISO 400, and a lens at some f-number, and in the same lighting conditions with the same shutter speed take a more or less equivalently exposed image across all of them (in that middle gray will be at about the middle of the exposure range). Is the arbitrary part here that each *individual pixel*'s response is arbitrary in how it results in an electrical voltage, but each camera maker is targeting some standard for the response to the flux density across the sensor, to get the overall image right after all of the processing?
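For what it's worth, my understanding matches the guess in point 2: ISO 12232's "standard output sensitivity" pins down the sensor-plane exposure that should render as middle gray (roughly H = 10/S lux-seconds), while leaving the internal pixel voltages unconstrained. A sketch of that standardized part, using the textbook exposure relation H = q·L·t/N²:

```python
# Sketch of the standardized side of ISO: at speed S, middle gray should
# correspond to a sensor-plane exposure of about H = 10 / S lux-seconds,
# regardless of how the sensor internally turns photons into voltages.
def middle_gray_exposure(iso: float) -> float:
    """Sensor-plane exposure (lux-seconds) that should map to middle gray."""
    return 10.0 / iso

def sensor_plane_exposure(scene_luminance: float, f_number: float,
                          shutter_s: float, q: float = 0.65) -> float:
    """Approximate exposure H = q * L * t / N^2 (q lumps lens losses)."""
    return q * scene_luminance * shutter_s / f_number ** 2
```

Two cameras set to the same ISO, f-number, and shutter speed target the same H for the same scene, which is why the images come out equivalently exposed even though each maker's internal gains differ.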
Really enjoyed the video! Would love it if you included sources!
I know camera sensors have a microlens for each pixel to focus light onto the photodiode, but what if the light lands exactly on the border between pixels?
Does that mean a higher-megapixel sensor with smaller pixels hypothetically records less light than a lower-megapixel sensor with larger pixels?
Yes, the video neglects to address this factor, along with the related factor of per-pixel circuitry blocking the light-gathering portion of the sensor. This means that otherwise equal sensors with larger pixels will capture more light per unit area.
If I'm not wrong, the fewer pixels a sensor has, the less empty space there is between them, and thus it can collect more of the incoming light, so a lower-resolution sensor can in theory use more of the light that the lens transmits.
(Edit: thinking back to where I read this, which I don't remember, sorry, they also explained that modern sensors usually have microlenses above each pixel, so that the light collected isn't affected by the gaps between pixels.)
Yes. You've found the major flaw in this video's thesis.
The related factor missed here is that all sensors have some amount of circuitry blocking their light-gathering pixels. Modern BSI and dual-layer sensors move some of this circuitry behind the light-gathering portion, but significant light is still blocked. This circuitry is required per pixel, so fewer pixels = less circuitry blocking light.
Due to these factors, an otherwise equal sensor with larger pixels *will* capture more light per a given area.
What I find fascinating is how misunderstood ISO is, as I constantly see people expose for different ISOs as if it actually changes the amount of light the camera sees. Within a base ISO, the different ISO settings are merely digital gain, so ISO 100 and ISO 1000 are technically the same image, just with different amounts of digital gain. I personally just swap between the base ISOs and shoot on those, which in my case are 640 and 4000; the only time I go over or under these numbers is to monitor more easily on the camera. I remember my teacher in school being completely wrong about ISO, and he always blamed me for using the wrong one. When I showed him the images I captured, he was surprised it wasn't how he thought, but when I tried to explain how ISO in modern cameras actually works, he just told me I was wrong and that what I said was impossible. Sony (probably others too) actually clamps the signal at lower ISOs (usually the extended ISO range) and pushes things more into the blacks so the noise seems lower, compromising contrast and dynamic range for "better noise performance".
Cinema cameras usually have a base ISO of around 400, 640, or 800 (some higher and lower, but usually this range), though the Sony F55 actually had a base ISO of 2000, which was quite high for the time at least. The highest-end cinema cameras from Arri have only a single base ISO, but I believe that's a bit misleading, as they do use a dual-gain type of technology that combines a high-gain and a low-gain circuit into one image. I usually refer to base ISOs as analog gain, and ISO changes within a base as digital gain. When recording in RAW, you can always change the digital gain in post, but not the analog gain; you can't change the base ISO in post because the information is already stored. This is what my teacher didn't understand. I told him that my camera could shoot RAW video, but that I can't change between base ISOs in post, only the digital gain, and he told me, "then it's not actually raw, it's fake, it's a lie." I explained how base ISO and ISO settings within a base are different. He still just called me and the camera brand (Blackmagic Design) liars.
A bit of a ramble, but I hope this all makes sense. I just have to say I love your videos so much. You clearly explain everything, don't skip details, and present everything so well, and you don't just tell us things, you show us.
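The base-ISO vs. digital-gain distinction this commenter describes can be illustrated with a toy pipeline. All numbers are hypothetical and this is not any real camera's processing, just the shape of the argument:

```python
import numpy as np

# Toy model: analog gain (base ISO) is applied before the signal is
# stored, so its clipping is permanent. Digital gain ("ISO" within a
# base) is a multiply on the stored raw values, so it can be redone or
# undone losslessly in post.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 900, size=8)               # photoelectrons per pixel

full_well = 4000.0                                # photosite clip point
analog_gain = 4.0                                 # fixed at capture time
raw = np.minimum(scene * analog_gain, full_well)  # any clipping is baked in

pushed = raw * 2.0                                # digital gain, in post
pulled_back = pushed / 2.0                        # and just as easily undone
assert np.allclose(pulled_back, raw)              # reversible on raw data
```

The analog multiply cannot be recovered the same way: once a highlight has been clipped against the full well before readout, no amount of post-processing brings it back.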
Excellent video
Great, Thanks!
Perfect video, as always. I thought I knew something about resolutions and pixel sizes, etc. Nope 😂
I think what you explain is the reason why cinema cameras don't really care about S35 vs. LF and such, and why they use T-stops instead of f-stops. An S35 lens at 75mm and a T-stop of 2 would expose the same as a full-frame lens at the same FOV and the same T-stop; it would look essentially identical. Whereas if the same thing were done with photography lenses and f-stops, the exposure of the lenses could be completely different. (Correct me if I'm wrong, but this is at least how I understand it.)
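The T-stop relationship behind this comment can be written down using the standard definition (T-stop = f-stop divided by the square root of the lens transmittance); the transmittance values below are made up for illustration:

```python
import math

# T-stop accounts for glass losses: T = N / sqrt(transmittance).
# Two lenses at the same T-stop deliver the same light to the sensor
# even if their marked f-stops and internal losses differ.
def t_stop(f_number: float, transmittance: float) -> float:
    return f_number / math.sqrt(transmittance)

# Hypothetical lenses that both land at roughly T2:
cine  = t_stop(1.9, 0.90)   # f/1.9 lens losing 10% of its light
photo = t_stop(1.8, 0.81)   # f/1.8 lens losing 19% of its light
```

This is why cinematographers rate lenses in T-stops: it measures delivered light rather than geometric aperture.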
You did not mention that the spacing between photosites and their readout pipelines on bigger sensors (especially lower-resolution ones) is often larger, leaving more room for the signal to be transported and leading to less crosstalk from neighbouring pixels' readout lines.
Or is that not a big enough factor for consideration?
Thank you very much. 🌚🌝
Why is the noise level not proportional to the signal itself, but to its square root?
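Since the question above was left hanging: photon arrivals are a Poisson process, and a Poisson distribution with mean N has standard deviation sqrt(N), which is exactly why shot noise grows with the root of the signal (and why SNR = N/sqrt(N) = sqrt(N) improves as light increases). A quick simulation with arbitrary photon counts bears this out:

```python
import numpy as np

# Shot noise is Poisson: for a mean of N photons per pixel, the standard
# deviation of the count is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
rng = np.random.default_rng(42)
for mean_photons in (100, 10_000):
    samples = rng.poisson(mean_photons, size=200_000)
    print(f"mean={mean_photons}: measured std={samples.std():.1f}, "
          f"sqrt(mean)={mean_photons ** 0.5:.1f}")
```

Quadrupling the light doubles the noise but quadruples the signal, so the image still looks cleaner.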
I'm new to photography and I'm confused. If I buy a camera that is 'good at low light', say an a7 IV, is it going to be better at low-light photography with THE SAME LENS, say an f/4 lens, than another camera that is 'worse at low light'? Or will the low-light performance be identical, with both cameras having the same f/4 lens? Thanks.
Also, if you set an equivalent-size aperture (the same physical size), you will get the same DOF.
Hi Camon, thank you for this very instructive video!
I had one question: as I learned in class, a film marked as ISO 3200 is more sensitive to light than an ISO 100 one because the grain used is much bigger, catching more photons and therefore getting turned into silver more easily.
Wouldn't that also apply to a photoresistor? As far as I know, it is pretty similar, except it's electronic...
So the capacitor determines the sensor's dynamic range?
Smaller capacitor = lower dynamic range
Bigger capacitor = higher dynamic range
A larger photosite can contribute more light-gathering ability in situations where the ETTR exposure technique can push toward overexposure without overly clipping highlights, but otherwise the size of a modern sensor (with no significant spaces between neighbouring photosites) says a lot about the light-gathering ability provided at any given f-stop (or, more accurately, T-stop). Roughly, LGAR or CRAP (Light Gathering Ability Rating or Crop-Restricted Aperture Performance, terms I admit I made up for an e-book) equals crop factor multiplied by f-stop. My old video on the subject, and the unfortunate misinformation around it: th-cam.com/video/WlKBrnGTkfc/w-d-xo.html
Background music is too loud!
The A7S III set to S-Log3 at ISO 12800 with an f/1.4 lens was like demon sorcery when I first saw it.
nerd power.. let's goooo 🎉
Can you please, please, please add a resolution that doesn't have HDR?
I really want to watch these videos, but the HDR makes them unbearable: it makes the screen unbelievably dark on my consoles, to the point where I can barely see anything.
Very good video! Just stop waving your hands when talking, bro :)
I find it VERY suspicious that you did not test FF vs. APS-C to show us the difference,
and that you then completely ignored the SNR advantage that bigger pixels have over smaller ones.
I mean, if you take a 64-megapixel full-frame sensor, crop it two times, then use a 16MP m4/3 sensor for comparison, your noise levels are the same, because the pixel pitch is the same.
For the purposes of the video I felt it was sufficient to demonstrate how exposure (and therefore shot noise) could be equalized between cameras of different sensor sizes (in my example, an a7 iv and an FX30) and then how sensor design impacts the relative read noise. My demonstrations of the latter did all involve full frame cameras, but the concepts can be extended to cameras with different sizes as long as the exposure is properly equalized.
In your example comparing a 64MP full frame sensor to a 16MP m4/3 sensor with the same pixel design, the analysis could work as follows. First, the exposure would need to be equalized by adjusting the lens until the field of view and entrance pupil diameter match. Then, the difference between the two comes down to scaling. Assuming the images are scaled to the same size, the full frame version will have roughly double the read noise as each output pixel is an average of four input pixels.
This analysis seems to imply that smaller sensors are better, but ignores the fact that concentrating the same amount of light into a smaller area may result in clipping. If the same light is spread among fewer pixels, each individual pixel will collect more light and will therefore clip sooner (assuming the pixel design is the same, which in your example it would be.) So while in principle a 2x crop sensor could be made to match or exceed the performance of a full frame sensor, the practical constraints of lens design mean that this doesn't often happen. The smaller sensor would need to have lower sensitivity to avoid clipping, and use a lens with a shorter focal length yet larger aperture in order to achieve equal exposure.
My goal is to demonstrate how sensors can be compared and which factors do and don't affect noise levels. I'm not trying to push a particular narrative, only increase understanding.
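The "roughly double the read noise" step in this reply can be spelled out: summing four independent pixels into one output pixel adds their read noises in quadrature. A minimal sketch, with the per-pixel read noise value made up:

```python
import math

# Independent noise sources add in quadrature: summing four pixels, each
# with read noise sigma, gives sqrt(4 * sigma^2) = 2 * sigma of combined
# read noise, while the signal sums linearly (4x).
per_pixel_read_noise = 2.5   # electrons, hypothetical value
combined = math.sqrt(4 * per_pixel_read_noise ** 2)
assert combined == 2 * per_pixel_read_noise
```

So downscaling the high-resolution image 2x doubles the read noise per output pixel relative to a native large-pixel sensor, exactly as the reply argues.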
@VideoTechExplained
That's fine and all.
Except, if your settings are the same, with a full frame using a 50mm f/1.8 lens and an m4/3 using a 25mm f/1.8, your exposure is the same.
If your FF is 64MP and your m4/3 is 16MP, and you crop 2x into the FF, your noise is the same.
These are real-world tests that people would be more familiar with; the mathematics was already done 22 years ago by a NASA engineer who wrote a blog on it back in the early days.
A smaller pixel pitch, regardless of the size of the sensor, has worse SNR because, well, it's small.
A bigger pixel pitch has less noise because it gets saturated more, which does not increase brightness but increases accuracy, which in turn reduces noise; the SNR, the signal-to-noise ratio, after all dictates the noise levels to be expected.
Your argument simply allows the same amount of light out the back of the lens without taking into account that the sensor size dictates how big the lens's image circle must be, if the lens is made for that platform.
FF needs a bigger image circle to cover a bigger area.
An m4/3 sensor is effectively four times smaller by area, hence why the lenses can be smaller.
If you take a lumens reading, which is the scientific way of doing this, you will see that a 50mm f/1.8 and a 25mm f/1.8 give an equal amount of lumens at f/1.8.
Your background blur will be different because the field of view is different; a smaller sensor has a smaller field of view, after all.
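Both sides of this exchange can be reconciled with a little arithmetic: equal f-numbers do give equal illuminance (light per unit area) at the sensor, as this commenter says, while the larger sensor still collects more total light, as the video argues. A sketch using standard sensor dimensions:

```python
# Equal f-numbers give equal illuminance (light per unit area) at the
# sensor, but total captured light is illuminance times sensor area,
# so the larger sensor gathers more light overall.
def relative_illuminance(f_number: float) -> float:
    return 1.0 / f_number ** 2      # same for any sensor size

ff_area_mm2  = 36 * 24              # full frame
m43_area_mm2 = 17.3 * 13            # Micro Four Thirds

illum = relative_illuminance(1.8)   # identical for both lenses
total_ff  = illum * ff_area_mm2
total_m43 = illum * m43_area_mm2
# total_ff / total_m43 comes out to roughly 3.8x more total light on FF
```

Per-unit-area noise behaviour (the commenter's point) and whole-image noise behaviour (the video's point) are therefore both correct; they just measure different things.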
Answer is yes and no
Hahaha, 33MP "super high resolution"? I think not. Now 60 to 100MP, yes.
You should not listen to loud music for prolonged periods. It decreases your hearing, and as a result you end up speaking too loud.
Please stop shouting, and waving your hands about. It detracts from the content.