The 100% conundrum. To those who are suggesting that what I am doing is 'wrong': my intention here was to show the pixels, not resampled pixels. Resampled pixels are changed by whatever method the software application (LR, Capture One, etc.) you choose uses to resample images. This resampling process will attempt to deliver an improved image, as we can see for example in Photoshop, where you can choose different types of resampling that apply different rules, for example 'Preserve Details' or 'Preserve Details 2.0' (an improved algorithm, or just a different one), which attempts to keep detail that might be lost in the sampling process, thereby introducing sharpening or contrast, or both, and most importantly creating pixels that did not exist in the original. If you want to turn 2 pixels into 3, or 5 pixels into 6, the software has to interpolate (generate non-existent pixels). This is not the image your camera created anymore; it is an image the software generated and resampled. Once any resampling is done, it is not the image out of your camera, so it is not showing us how the cameras performed. It is showing us how the cameras performed with post-processing, and that is not the intent of this video. This approach is not misleading at all in showing how the pixels, as supplied by the cameras and presented by the same software, behave. To repeat: resampling is not the intent of this video. And if you watched the whole video you would have seen that a further intent is to print large at a later date. In the real world, the vast majority of images are never printed, and an even smaller subset are printed large. Thus the majority of images created today, even with high-resolution cameras, are viewed digitally or printed at less than 1 m / 3 feet, so understanding how a pixel-for-pixel view performs is completely valid. And both of these cameras provide more than enough detail and resolution for digital viewing, be it YouTube, Facebook, websites, Instagram, Flickr, etc., as well as for large printing. The critical takeaway here: whether we ask the image to be resampled on the screen or resampled during output (whatever the file format), it is no longer the image the camera created. It is an image resampled (created) via software, and different software will have proprietary resampling methods that differ (yet may be imperceptible to us); some software will give you multiple resampling methods. But how do we know how the lens and camera perform together? We just look at that, without third-party software intervention. And sure, we can upscale these images with AI (as I mention in this video) and double their size with no perceptible loss, and both images will be so large at that point that it does not matter which you choose. And this applies to crops also.
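(For anyone curious what "the software has to interpolate" means in practice, here is a minimal, purely illustrative sketch. It is not any particular editor's algorithm, just simple linear interpolation turning 2 pixel values into 3.)

```python
# Illustrative only: resampling 2 pixels into 3 with linear interpolation.
# The middle value never existed in the capture; it is invented from its neighbours.
import numpy as np

original = np.array([100.0, 200.0])                   # two captured pixel values
positions = np.linspace(0.0, 1.0, 3)                  # where the 3 output pixels sit
resampled = np.interp(positions, [0.0, 1.0], original)
print(resampled)                                      # [100. 150. 200.]
```

Real editors use more elaborate kernels (bicubic, Lanczos, 'Preserve Details' and so on), but every one of them manufactures the in-between values in some way.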
I think it's a bit suspicious that you avoided macro shots of subjects with lots of micro details and kept showing off simple surfaces and glass panels from far away. This is a macro lens after all.
@@orangesnake2256 LOL, you are looking for the boogeyman in the 'shadows'. No, there was no plan to avoid anything. I shot what I know best, with a lens that is equally good at macro and anything else. This video is not about this lens being a macro lens. This video is about having access to THE SAME LENS, which can then be tested on different bodies. I also suspect you did not watch the whole video.
Sorry Matt for having created a mini ruckus for you with my comment on FOV. That was not my intention. I fully agree with you that both the lens and the two camera bodies are probably more than capable enough for 90% of most photographers' needs. Jeez, even my MFT gear covers, I'd say, at least 75%. I also agree with you sticking with integer pixel enlargements. I'm also glad that you clearly understand the issues of interpolated pixel peeping. So many other photography channels blithely enlarge photo comparisons to produce equal-size images and draw conclusions, possibly wrong, from them.
I disagree with the reasoning in favoring 1:1 vs. normalizing to common dimensions for comparing which produces a better image but I understand why you've chosen to do things this way. It's your channel and your choice to compare different magnifications of a scene if you want. You're not alone in favoring such an approach. I'm still going to shake my head every time I see it done. Anyway, cheers for the attempt even if I disagree with it.
Interesting comparison. You say the 15.1% difference in image width (or 32.5% difference in image size) does not make a difference, but then you say the 15.7% difference in pixel pitch (35.4% difference in photoreceptor area) does make a difference. I say they both make a difference and, if you are going to compare sharpness on a monitor at 200%, you need to scale one or the other image so that they are the same resolution. My guess is that they will then show the same sharpness.
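(For reference, those percentages fall out of the published pixel counts and sensor widths; a quick sketch, where the exact spec values used are my assumption:)

```python
# Rough check of the percentages above, using approximate published specs:
# A7R V ~9504 x 6336 px on a ~35.7 mm wide sensor, Z8 ~8256 x 5504 px on ~35.9 mm.
sony_px = (9504, 6336); sony_width_mm = 35.7
nikon_px = (8256, 5504); nikon_width_mm = 35.9

width_diff = sony_px[0] / nikon_px[0] - 1                                 # ~0.151 -> ~15.1% more linear resolution
count_diff = (sony_px[0] * sony_px[1]) / (nikon_px[0] * nikon_px[1]) - 1  # ~0.325 -> ~32.5% more pixels
sony_pitch = sony_width_mm / sony_px[0] * 1000                            # ~3.76 um
nikon_pitch = nikon_width_mm / nikon_px[0] * 1000                         # ~4.35 um
pitch_diff = nikon_pitch / sony_pitch - 1                                 # ~0.16 -> ~15.7% larger pitch
area_diff = (nikon_pitch / sony_pitch) ** 2 - 1                           # ~0.34 -> roughly the quoted area difference
print(width_diff, count_diff, pitch_diff, area_diff)
```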
I own both the A7RV and the Z8. Generally I think your observations are spot on. However, in my opinion the stabilization of the Z8 is slightly worse. I also have to say that in my opinion the autofocus of the Z8 (especially for portraits) is slightly worse than the eye AF of the Sony. In my experience it is also necessary to focus differently with the Z8. I can take the A7RV with a 50mm f/1.2 GM lens at f/1.2, point it roughly in the direction of the person and press the shutter button without looking through the viewfinder, and I know the image will be in focus (using wide area, AF-C and eye AF). With the Z8 there's a chance of maybe 5% that the camera focuses on the chin, on the ears or somewhere else when I point the camera in the direction of a person and press the shutter. If you want a 100% hit rate with the Z8 I would recommend using tracking. I usually put the AF point in the center of the frame, point the camera at the person I want to focus on, press and hold the AF-ON button and reframe the picture. Then I can press the shutter as often and as quickly as I want and all the pictures will be in focus. The advantages of the Z8 are, in my opinion, the better ergonomics, the stacked sensor (no rolling shutter, no bokeh balls cut in half), video quality, the possibility to connect a hardware GPS receiver, the automatic subject detection (no matter if human or animal) that the Sony doesn't support, 20 fps and more. The downsides of the Z8 are, in my opinion, battery life, size, weight, a slightly less sophisticated autofocus and a less efficient stabilizer.
12:59 If someone is pushing shadows up like that in editing on a regular basis, they have bigger issues than a 1/16th of a stop less DR or a minuscule amount more noise in the shadows. That being said, personally I shoot High Efficiency (HE*) on my Z8 and never notice any difference in the files while editing. The file size on disk is usually several MB less than photos from my Lumix G9 II with a 25MP sensor.
@ even my Panasonic G9 II has larger files than my Nikon Z8. I’ve been surprised how small the Z8 raw files are with no loss in quality. It’s great for saving disk space.
In all comparisons of image sharpness, especially the zoomed-in comparisons, I see a distinct difference in the field of view of each camera, the Sony producing noticeably larger objects than the Nikon. Simply enlarging each original image by the same integer amount, keeping the relative object sizes as they are, will usually create an impression of greater sharpness in the image with the smaller objects. It presents more high spatial frequencies to the visual system, hence better perceived sharpness. But at this level of pixel peeping, adjusting either image by non-integer amounts will run into possible problems with the quality of the enlargement calculations, especially the pixel-interpolation and rounding algorithms used to produce an integer-based digital-display output. What intrigues me the most here is the difference in FOV, slight though it is. Perhaps it is the difference in flange distance between E- and Z-mounts? But I thought specializing a lens for a particular mount was supposed to account for this.
Not every sensor in a given format is the same size, even within the same brand. For instance, the A7R V is 35.7 x 23.8 mm, an A1 is 35.9 x 24 mm, and the Z8 is 35.9 x 23.9 mm. At these pixel densities even fractions of a mm make for significant differences.
@@MattIrwinPhotography Matt, does a photographed square look exactly square on each camera? It almost seems like one or the other camera is squishing either the horizontal or the vertical when converting the object to an image on the sensor and then to a file to be displayed.
@@bmwohl I can't say what the manufacturers do. I don't personally see the difference you are talking about. I suspect the photosites (or at least the microlenses atop them, which always appear round in the diagrams I see) record a value regardless of whether they are round or square. Then when digitally represented it is shown as a square, or at least what looks like a square to me.
7:25 I agree with you, it is not necessary to use f/8, f/11 or smaller apertures for landscape photography; it is more important where you focus, and the out-of-focus areas can help you get what you want from the image.
When you compare the two files, would it be possible to match the on-screen size? Also do not forget that Capture One has two different profiles. One thing is certain, there is no practical difference. By the way, I am a Nikon shooter.
If they are made the same size, pixels will become interpolated in one or the other image. Unfortunately this does not help us see the outcome of the pixels as provided by the file. Software interpolation would be required.
The FOV has to be exactly the same. FOV will always change perceived sharpness; the comparisons are not correct if the FOV is not the same. The lens may simply be a worse performer overall on the Sony system rather than exactly the same. Or, if the FOV really were the same, would the results be the same?
Great video Matt. Very enjoyable listening to you logically break down these bodies with this lens as a benchmark. Would love more videos like this. 👍🏻
The Nikon sensor has lower noise and more dynamic range because the pixels are larger. The Nikon has a 2mm advantage in flange to sensor distance, which is why the Nikon S series lenses are so good. I think Nikon just played a longer game.
The stills out of these 2 cameras are almost indistinguishable. The only side-by-side that seems to show a slight difference is around 14:01. The police sign and the tower directly behind it seem to show more detail in the left photograph. This is what I find on my laptop screen.
Well Matt, I just subscribed. That was the best comparison video I have seen in 14 years of photography. Having two examples of the exact same lens (although in different mounts) made the comparison more realistic than having two different lenses set to the same focal length. I have been with Nikon for 14 years. I live in East Gippsland, so most of my photos are seascapes, landscapes, wildlife and nature. I also dabble a bit in video, with drones and various types of action cameras. I've never been a Nikon-versus-any-other-brand person, because it costs a lot of money to change. The difference in pixel count made it easy to pick which image was which, but on my monitor it was difficult to tell the difference in sharpness and clarity. Love your work.
@@MattIrwinPhotography Eagle Point. Great spot for sunrises, there and along the old jetties toward Paynesville. I'm in Bruthen, and work in Lakes and Bairnsdale. So I'm in the middle of Lakes, ocean, and the beautiful high country. Heading up to Nunniong this weekend to try for Brumby photos.
I’m fond of the 90mm so the Tamron is interesting. Being able to compare its performance simultaneously on 2 platforms is also valuable. Being able to achieve virtually identical fidelity with both sensors and processors tells us just how good these cameras really are. Hope we see more tests like this, thanks Matt.
If the A7RV is reaching diffraction at f/7.1 and the Z8 is reaching diffraction at f/9, then the Z8 will have a slight advantage when shooting at f/11. That's just how the physics works. When shooting wide open, the Z8 appeared sharper than the A7RV. This means there is something else you may have overlooked, and that is resolving power. Once you pass that megapixel threshold, the images will inevitably look softer.
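(For context on where figures like f/7.1 and f/9 tend to come from: a common rule of thumb compares the Airy disk diameter to the pixel pitch. A rough sketch follows; the 2.5x factor and the 0.53 µm wavelength are my assumptions, and other conventions give somewhat different f-numbers, which is partly why online calculators disagree.)

```python
# Rough sketch of one convention for a "diffraction-limited aperture": the
# f-number N at which the Airy disk diameter (~2.44 * wavelength * N) reaches
# about 2.5x the pixel pitch. Constants here are assumptions, not gospel.
WAVELENGTH_UM = 0.53

def diffraction_limited_aperture(pixel_pitch_um, k=2.5):
    return k * pixel_pitch_um / (2.44 * WAVELENGTH_UM)

print(diffraction_limited_aperture(3.76))  # A7R V pitch -> ~f/7.3
print(diffraction_limited_aperture(4.35))  # Z8 pitch    -> ~f/8.4
```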
I will spell it out for you. Had he used an adapter, the test would've taken longer. Furthermore, the light was limited, and it would not have been a completely true comparison.
What are you asking exactly? The Z lineup currently has two macro lenses, the 50mm MC and the 105mm MC. So which lens do you want him to compare?
@@bubbachua I don't own the Tamron, but I own the Z 105mm among other Z lenses. The Z 105 MC is easily one of Nikon's best Z lenses and one of the sharpest; it is the best macro out there among the Canon, Sony and Nikon offerings. This makes me believe it most likely beats the Tamron as well.
@@bubbachua Oh, but it was. First, one can always check the MTF charts for this 105 vs the Canon 100mm vs the Sony 90mm macros. Plenty of outlets that do reviews also claim it to be the best out there across these three brands. The only real downside of this lens is that you can't go beyond 1:1 magnification, and that's where lenses like the Laowa go beyond this one for macro.
I'm lucky I don't see the differences. I'd say though, as a proud owner of a Z8, that 60 MP is a notable difference, not just marginal... 1,000 pixels of width is quite a lot if we're cropping, though obviously for printing 10 MP is enough. I always say the more the merrier; the compromises are bigger file sizes and, most importantly, signal-to-noise ratio, which in modern cameras is a negligible difference. ACAB: all cameras are beautiful ;)
I'm going to agree with the commenter who suggested the FOV of the images on screen must be the same for a true comparison. That means you can't just set both to 100%. Why? Because the more you blow up an image, the grainier it looks when viewed from the same distance. And the Sony gets blown up more when setting the image to 100% (all pixels the same size, so the smaller pixels get enlarged more). Another reason the Nikon might look sharper is the larger pixels are more accurate individually. Larger pixels will be more accurate as they can each sample more light. Since you are comparing the same number of pixels from each camera, the sensor with larger pixels/photosites should win. On the other hand, if you compare the same FOV portion from each sensor, the sensor with more pixels covering the same subject allows less magnification to achieve the same image size...and will likely look sharper. Of course, all of this is dependent on the lens actually producing the same level of image quality...which has not been demonstrated whatsoever. Sample variation exists. It would exist if you were comparing two Tamron lenses for Nikon. But you're comparing Nikon vs Sony mount and have admitted the lenses need adjustment in the manufacturing process to fit those mounts. Who knows if Tamron was equally successful in meeting a standard for both types of mount?
The problem with your tests is that you failed to normalise the image sizes to account for the resolution difference - the Sony will naturally exhibit more noise when viewing both at 200% - as the effective magnification factor is greater. For a fair comparison of noise and sharpness view the Sony at 200% and the Nikon at about 244%.
Hey Matt, 1000 thanks for this comparison. Might it be possible to make some of the raw files available for download and pixel peeping on the screen at home?
Matt, interesting seeing your images. Thanks for taking the time to do this. A couple of observations. Regarding the discussion about pixels/demosaicing/rescaling, remember that the demosaicing is done by the raw converter, and this process is inherently a rescaling. Another point to remember is that C1's defaults are highly tuned for Nikon files; they are not so optimal for other file types. From my own testing I would suggest that at least as much of the difference you showed comes from optimisation in the raw converter as from the different sensor/capture systems. Just so that it is said: I'm completely agnostic in the camera-manufacturer religious wars, so I have no investment in Nikon vs Sony vs ...
Thanks for your comment, and I am sure you have read some of the lengthy threads below. You said that regarding the discussion about pixels/demosaicing/rescaling, the demosaicing is done by the raw converter, and this process is inherently a rescaling. Indeed, this argument has already been made, and you can see my responses below. Short version of my thoughts: sure, but the RAW we download from our cameras is the baseline we all work from. That RAW is then interpreted into pixels represented in our editor of choice. These pixels will be the same RGB values from editor to editor, and can be overlaid with endless non-destructive edits, layers, presets, or however the editor is programmed to present the RAW, which differs from editor to editor. This is the baseline where photographers can begin to interact; prior to that moment we have no say. Interestingly, in all the comments below, with people telling me how wrong I am, I have asked how it is OK that the resampler is inventing pixels. None of the various threads has answered this. The RAW sitting in our editor is the baseline we all work from. Resampling for size removes us from that universal baseline. Further, resampling software is designed to enhance an image when scaling, to actually make it better. We are removed from the RAW file. As for Capture One being optimised for Nikon: I am not aware of Sony being worse in Capture One. I also state very clearly that this outcome is a surprise and is specific to all of the parameters of the video, which includes Capture One. Which is my way of stating that the outcome might be different with any single parameter changing.
@@MattIrwinPhotography Totally agree with your last paragraph. However, I don't agree with "These pixels will be the same RGB value from editor to editor". All my testing has shown that the RGB values and even the 'pixels' (results of the demosaicing/bin gridding) are different from one RAW editor to another. You will even see that the total number of pixels or the exact image coverage varies. In converters like RawTherapee you can even change the demosaicing algorithm and get different results in the same converter. Each of the converters/methods has its intrinsic default sharpening methodology, and this is prior to any "sharpening" the user has access to. The RGB values are the result of the conversion profile used in the converter, and in C1, like in many others, you can calibrate for your exact camera body/lens/lighting combination and load your own ICC profile, obviously changing the resulting RGB values. I think of a raw file as being analogous to an exposed negative; the converter is more like the developer: change the developer or the development time/temperature and you get a different result. Remember the RAW file doesn't actually have any "colour" in it, it just has "counts" for each of the RGGB photosites. The RAW converter then takes these counts through non-linear conversions to generate the RGB values. In more or less all but academic/forensic cases the resultant RGB values are tuned to be aesthetically pleasing rather than pedantically "correct". Anyway, I thoroughly enjoyed your video, and appreciate the time you spent on it, even if I draw a different conclusion.
Hey Matt, interestingly, I’ve had a chance to shoot handheld macro and close-up images with both the A7RV with the FE 90/2.8 macro and the Z8 with the native 105/2.8 macro and found the stabilization to be much better with the A7RV. At times, I felt the stabilization of the Z8 was not sync’ed up with the 105mm lens but it was a totally different story with the 105mm lens on the Zf and its improved IBIS. That said, I would be interested in your comparison between the A7RV and the Z8 at more “macro” distances when both cameras are outfitted with the new Tamron 90mm lens.
Smaller pixels have the ability to show more "fuzziness" in the diffraction zone. In both cases the sensors are able to show the diffraction that exists at f/11; it's just that the Sony can show more fuzziness than the Nikon. The point is that "consumer cameras", even these giant ones, have their Nyquist frequency below the diffraction limit of a good lens today. I am almost sure that this Tamron (or any other high-end macro) can deliver resolution at f/4 that no camera can show (at least not APS-C or full frame; small sensors with pixels smaller than 2 µm exist, but they are for scientific or industrial use, not consumer level). For the nerds out there, the link below is a full explanation of MTF and how it relates to pixel pitch and the diffraction limit of lenses. Phone cameras have the peculiarity that their Nyquist frequency is way above the theoretical diffraction limit of the lens, so those cameras are doomed to be "diffraction limited" (even at f/2.35 🤯). On the other hand, all current full frame, APS-C and Micro 4/3 cameras are "pixel limited" (of course they become diffraction limited near f/8 with a 3-something µm pixel pitch; it's just that there are excellent lenses that can deliver more fine detail at f/8, f/5.6 and f/4 than the cameras can record). If the old Micro-Nikkor 55mm f/3.5 can operate diffraction limited at f/4 and deliver over 300 lp/mm (there's a paper on that, from the International Society for Optical Engineering, 2009), I'm pretty sure modern macros can do it too. But cameras can't handle that yet. th-cam.com/video/1aCXMBP8_3I/w-d-xo.html
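(To make the sensor-side limit concrete: the Nyquist frequency is simply 1/(2 x pixel pitch). A quick sketch, using approximate pitches for these two bodies, which are my assumed values:)

```python
# Nyquist frequency in line pairs per mm is 1 / (2 * pixel pitch).
def nyquist_lp_per_mm(pixel_pitch_um):
    return 1000.0 / (2.0 * pixel_pitch_um)

print(nyquist_lp_per_mm(3.76))  # A7R V: ~133 lp/mm
print(nyquist_lp_per_mm(4.35))  # Z8:    ~115 lp/mm
# Both sit well below the ~300 lp/mm figure quoted for a diffraction-limited
# macro at f/4, which is the commenter's point: on current full-frame bodies
# the sensor, not a good lens, is usually the limiting factor.
```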
I have been debating between these two. I was using an old full frame camera and got people to buy my nature photos. But I want to get a flagship camera. Subscribed; I like that you used the same lens on both.
Great and informative Matt! I shoot portraits with a Z8 using a Nikkor 70-200mm S 2.8 which can start to feel a bit heavy after a while when shooting handheld. I was considering getting Nikkor’s 85mm 2.8 as a lighter option. Can this Tamron’s 90mm compete with the Nikkor 85mm? Thanks Matt.
Hi Matt. Interesting results. I suppose to address the possible lens to lens variability within the same manufacturer, if you happen to do these tests again, you could get an ETZ adapter and use the Sony mount lens on both cameras. Obviously that makes it more difficult in changing lighting conditions since it will take you longer to swap lenses around, and generally be a PITA. But it would be interesting to see.
I have done that in the past. I am not a fan of attaching and removing that ETZ adapter 50 times an hour. And to be honest, I think it is more useful for a Sony user to see: Sony camera, Sony version of the lens, this is the result; and for a Nikon user: Nikon camera, Nikon version of the lens, this is the result. That is the more likely scenario, as users will buy the lens in their native mount.
The Tamron 50-400mm comes in Sony E mount and Nikon Z mount ... an extremely versatile lens and very sharp. I have it on my Z9 at the moment but am thinking of getting the E mount version for my Sony system. I love my Sony, but the system lacks any kind of multiple exposure mode. I am mainly a landscape/woodland photographer and I love shooting multiple exposures in camera, of different scenes, using the different in-camera blending modes available in the Z9 (all Nikon and Canon mirrorless and later DSLRs have this feature ... why not Sony???). You can get a similar result with images from a Sony, but it all has to be done in post-processing, so you can't see the results in the field, which takes the spontaneity and experimentation, not to mention the fun, out of the experience. I asked about this omission on a Sony forum a few years ago and the Sony rep said this was not a professional feature!!! Nice to know that from the Sony perspective, Nikon and Canon cameras are not up to the PRO standards of the Sony system.
I’m super excited about Tamron releasing this new lens on both Z and E mount at the same time. Maybe this means all new Tamron lenses will also be released on Z mount moving forward?
I also hope Viltrox can release their LAB series lenses for the Z mount. Their new LAB 135mm f/1.8 is a very good lens. There are a lot of f/1.2 full frame lenses in their lens roadmap.
Very good comparison Matt. Thanks for putting this together. Can you say why you didn’t also shoot the Nikon in 14-bit lossless compressed for a more even comparison at least for DR?
If I can get all the pieces of the puzzle together again, I will do that. What I wanted to show in this video was what HE* is capable of in the real world. Since it arrived in 2021, I have used nothing else on the cameras which support it, and have observed no disadvantage from using HE*.
I may be wrong, but I thought Sigma (as well as other manufacturers) made lenses for various mounts decades ago, i.e. offered the same lenses in several mounts from different camera makers.
Yep for sure, this is the new generation - mirrorless, and we are going through that all over again :) Is it a first for mirrorless from a major company?
Sunday morning Matt vids, my favorites! 👏🏻 I love what Tamron is offering lately, I always go for native glass but that 50-400 is in my future. Also just added the Z6III & grip to my kit, shot a stage event with it Friday night. It’s phenomenal - feels like the Z8 & Zf had a baby 😆
It could be sample variation between the lenses. If you had a third lens (in either Nikon or Sony mount) you would probably get another result (a third result). I have tested so many lenses and seen that even though they are the same type of lens, they differ. I compared three Canon 135/2.0 L lenses on the same body and all had different sharpness. I have tested two Sony Zeiss 55/1.8s: one was sharper in the middle, the other sharper at the edges.
Same here. I have two Olympus 17mm f/1.2 lenses. One is noticeably sharper than the other. But here he is not even comparing the same mount. Tamron may have done a better job with the Nikon mount than the Sony mount.
I don't understand why the comparison wasn't done on a level playing field, i.e. magnification, ISO, etc. Why didn't you shoot at a mid-range aperture instead of starting out at f/11 for an infinity shot? I don't get it.
I don't get what you are saying. The base ISO of each camera is different: 64 and 100. I shot at three different apertures; perhaps you did not watch the whole video. As for magnification, for this style of shoot it is impossible to match, because moving closer or further changes perspective.
That is an interesting comparison, and maybe you are right with what you have been saying about diffraction. Maybe 45 MP is the sweet spot, at least for now. Cheers Matt!
Interesting, though I'm not sure why this would be important to anyone shooting one system or the other. It's not like anyone would want to dump one system for the other because of it. And of course, as usual, it's not the camera that makes great photographs, it's the person behind the lens! Almost all of the systems available are highly capable. Personally, a comparison of lenses within one system would be more valuable for someone interested in purchasing the Tamron 90mm, for example a comparison with Sony's own 90mm macro. And even then one would probably be splitting hairs!
Extremely comprehensive and detailed comparison. I have ultimate faith in my Nikon brand; the dynamic range and color science are more pleasing to me, especially in VIDEO. The difference for me is the Z mount, being the best mount in the industry. Very interesting video, and yes, there are no bad choices, all cameras are great these days. I love my Nikons and hopefully they will continue to improve and release better deals for all consumers. I have the Z6III in my sights at the moment, any opinions on that, anyone?
It's interesting to watch. However, I do not believe that how and why we actually choose cameras is based on objective, measurable information. I think we mostly use the reports and comparisons to justify our existing decisions and biases. When you look at a great image, do you actually know which brand the photographer used? As far as I can tell, all the brands are very good. Debating who has the best equipment brand seems to be a lot like model car or stamp collecting. I own Nikon gear but I can't say I would be unhappy with a Sony or a Canon. I like the Nikon lenses and the Z9. I love it when I manage to produce great images. It's not the gear I need to think about, it's my photographic skills. I used to think that better gear would help me with that. Now I am just interested in improving my creative vision and responses.
I wonder if your underexposing the Nikon makes it appear a tad sharper, with darker lines having that effect. Why not use ISO 100 on both cameras, since this is a lens test?
It's his methodology that's skewing the results. Essentially, the way he's comparing the lenses at different magnifications is unintentionally favoring the Z8 due to the perceptual differences introduced by viewing size. A different methodology could have been to use the macro lens for macro photography, comparing the smallest resolvable details. This would also have the benefit of demonstrating whether the lens is capable of resolving 60 MP, or whether the A7RV was oversampling.
@@JojoJoget Yes, maybe, but using ISO 64 on the Nikon and ISO 100 on the Sony, and basing the Nikon's exposure on the Sony's reading, is of course going to underexpose the Nikon. You can see it in the images. I shoot Nikon, but still.
Interesting… but I do not believe the lenses are identical. They clearly have some different elements to handle the different mounts. So it’s just possible (and probably likely) that one lens is slightly sharper than the other.
What an interesting video, thanks! Personally I’d guess the expeed 7 is making the difference. They are obviously both great cameras, but I noticed the shadow 100 lift is more obvious on the Sony in the main drill shot, but that may be skewed by the slightly darker exposure. Ps. Your use of the Tamron 90mm macro is also a great choice because this lens has really been perfected over so many iterations. I have an old f2.5 manual focus version of this lens and even that is very sharp.
Matt: get a Megadap Emount-to-Zmount adapter and you can try the same SONY lens on SONY and Nikon. Of course you're probably still not comparing apples to apples but you'll have fun.
At the end of the day, the native-resolution stills are virtually identical from these two examples of bodies and lenses. The question I have is: is there any difference when pixel-shifted, multi-shot images are compared? Your points on video differences due to the technology platforms might also make a good comparison in a future video. I always enjoy your pragmatic approach to producing your content. Cheers!
In every photo I noticed the Z8 was sharper, but only by a small margin. This could also come down to the lenses being minutely different, as they aren't 100% apples to apples. On ISO/noise the Z8 absolutely wins, due to its lower megapixel count and therefore larger pixels (more light gathering), resulting in generally less noise. In nearly every image I could pick out which one was sharper or had better contrast and true-to-life colors and brightness, and it was nearly always the Z8.
Thanks Matt, a really interesting review. Looking at the images via YouTube it is hard to see any difference; however, on my 5K 27-inch display the Z8 might be a whisker sharper. I am looking at 11:37 on the timeline with the left/right split, and the one on the right just has it. However, I think that image shows a dust spot just above the left side of the short building on the right. Looks too round to be a cloud!
The hardcore Sony pushers/owners are sort of what the Canon users used to be in the past... a bit arrogant at times. I don't care about the brands. I've owned several Sony cameras, Fuji, Nikon, Olympus, Panasonic, and maybe I'm forgetting one or two brands. I don't think I've ever owned Canon, and that probably comes down to the fact that I picked the brand back in 2004 that gave me the most bang for the buck, which was Nikon and the D70. Going forward, I didn't buy Canon because of the dynamic range disadvantage, and that lasted for several years. Anyway, I'm not anti-Sony or anti any other brand, because they all have some great cameras... but yeah, their marketing, their influencers and some of the arrogant enthusiast brand loyalists can be annoying. I wish they'd focus more on talking about the art of photography instead of endlessly bragging about specs (and these are some of the same people who use high-end/high-spec cameras to do nothing more than shoot birds at the feeder through their bedroom window).
Nikon has a holding in Tamron of 0.04% (about $1.7m), and there are also multiple Mitsubishi Group companies (Nikon being part of that group) holding various shares in Tamron.
@MattIrwinPhotography The Mitsubishi Group is a group of companies that coordinate with each other and own shares of each other (as opposed to a parent-subsidiary structure). It is difficult to narrow down how much they collectively own of Tamron, but I think it is safe to assume it is over 1% (which may not seem like much, but given that they operate as a collective, there would be an incentive for Tamron to treat them as clients as well as shareholders).
The monitor I am watching this video on is just a 2K monitor and I cannot see any difference between the two. Which brings up a point to consider: the purpose of photography is to produce images for display, and the human eye can only resolve about 300 dpi with 20/20 vision. My point here is that what you see when pixel peeping a 46 MP or higher resolution camera is completely meaningless, because there isn't a person on the planet who can see that level of detail in a print viewed at a "normal" distance for the size of the image. It's why images made with 12 or even 6 MP cameras are still capable of producing perfectly acceptable prints.
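(As a rough sanity check of the print-size arithmetic behind that point; the pixel dimensions below are approximate and assumed:)

```python
# Print dimensions at 300 dpi for each camera's approximate pixel dimensions.
def print_size_inches(px_w, px_h, dpi=300):
    return px_w / dpi, px_h / dpi

print(print_size_inches(8256, 5504))  # Z8:    ~27.5 x 18.3 inches
print(print_size_inches(9504, 6336))  # A7R V: ~31.7 x 21.1 inches
# At 300 dpi either camera already fills a print larger than most people
# ever make, which is why 1:1 screen differences rarely survive into print.
```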
Very interesting, and once again confirms that pixel counts need to be judged carefully - only part of the formula. Appears the world needs to take more notice of Nikon !!
Matt: I'd urge you to read through your comments here, as several knowledgeable commenters have highlighted fundamental errors in your methodology, your assumptions, and frankly just factual errors in your understanding of how sensors work. They did a good job, so I won't repeat them. This isn't meant as a personal slight, and I'm a Nikon owner (best to head off bad-faith rebuttals from other commenters). I know that no one assumes they're wrong, most people aren't open to hearing it, and you obviously put a ton of work into this, but unfortunately your conclusions and the path that led you there are just inaccurate, and I'd encourage you to approach the corrections you've received with curiosity and openness.
Amazing content. I was a pro Nikon shooter my whole life until I changed to the Hasselblad X series for photos and the FX3 for video. Then I got the A7RV as a fast camera (since I have many Sony lenses) but was never satisfied with its less sharp, busy, noisy images. I hate the small, ugly shape of Sony cameras and their menus (though it's just a full frame camera and shouldn't be compared to a medium format Hassy). Lately I'm thinking of changing back to my favorite full frame brand, Nikon, and with your detailed videos and explanations I'm convinced that will be the right choice. Thanks Matt
Sony A1 is much closer to the Z8 in specs. Agree about Sony body shape but that's changing with the A9III and the A1II which is to be released on the 19th. I think the AF is better on Sony but Wow the Z8 is amazing value.
I always think it's funny when people say they can't tell the difference. I can obviously tell the difference and surprisingly the Nikon is sharper I wouldn't have expected that.
Hi Matt, I would not have made a test with an aperture smaller than f/5.6; I would have focused the test more towards macro and portrait photography. I got into the Sony full frame system in March 2019, when Sony was leading in mirrorless full frame cameras. It is a joy with all the different cameras to choose from and, for me especially, all the lenses you can get for the Sony E mount system. Nikon is moving more in that direction now. The Sony A7CR is a great design; I love that the EVF sits in the corner, but I do not like its small magnification and low resolution. The A7CR also misses a joystick, the backside screen should only have been tilt up/down to make the camera more compact and slim, the IBIS could be better and the shutter could be better. If Sony makes an A7CR II upgrade without these issues, then I will be one of the first to order one or two of those!
Of course, pixel for pixel, 45 MP will look sharper and show less noise than 60 MP; no surprise at all here for me. The diffraction limit means that we cannot take full advantage of the pixel resolution beyond a certain number of pixels for a sensor of a certain size. To take it to the extreme, a 1000 MP full frame sensor would not resolve any additional detail in comparison to a 100 MP FF sensor. At the pixel level, the 1000 MP image would look terrible, but when displayed at the same size as the 100 MP image, the two would look identical. The same with noise: small pixels gather less light than large pixels and therefore display more noise at the pixel level, but for the perception of noise in the whole image, pixel size is pretty much irrelevant; what is important is the size and light-gathering capacity of the whole sensor. A fair comparison would be to downscale 60 MP to 45 MP or to upscale 45 MP to 60 MP.
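(For anyone who wants to try the normalization this commenter describes, here is a minimal sketch; it is not the video's workflow, the file names are placeholders, and the target dimensions are approximate Z8 values.)

```python
# Downscale the higher-resolution file to the other camera's pixel dimensions
# so a 100% crop of each covers the same angle of view on screen.
from PIL import Image

sony = Image.open("a7rv_frame.tif")            # ~9504 x 6336 (placeholder file name)
z8_dimensions = (8256, 5504)                   # approximate Z8 pixel dimensions
sony_matched = sony.resize(z8_dimensions, Image.Resampling.LANCZOS)
sony_matched.save("a7rv_matched_to_z8.tif")
# Comparing crops of sony_matched against the Z8 file now compares both
# sensors at the same output magnification rather than at the same pixel zoom.
```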
Despite looking less sharp at the pixel level, the 60 Mp image may be expected potentially to resolve a bit more detail than 45 Mp image. This bears true at least in one instance here, as noted by another commenter, at 14:01. I could probably spot a few more instances when the finest details of the landscape are seen clearer on 60 Mp image, but nothing too significant.
@@MattIrwinPhotography Diffraction does not just happen at narrow apertures. In any image created by a lens, the transition across contrasting edges is not a step but a curve. If you sample the curve with a higher spatial frequency (more pixels), you stretch the X-axis and the curve appears less steep (the image looks less sharp). The steepness of the curve also depends on the lens: some lenses are perfectly sharp for low-MP sensors (low sampling frequency) but not as sharp for high-MP sensors (high sampling frequency). Regarding this lens, I would wait for the review by Christopher Frost, who always tests lenses on 42 MP FF and 24 MP APS-C (roughly equivalent to 60 MP FF). Finally, in your case, with distant subjects, there is likely atmospheric disturbance: the more you magnify the image (higher MP), the more you notice it. Regarding the resampling, if you are against upscaling because it creates new pixels and does who knows what, why not try downscaling? I bet that if you downscale the Sony image to 45 MP, it would look just as sharp and show no more noise than the Nikon image, irrespective of the sampling method. By the way, if you intend to print at the same size, that will also involve resampling, only it's done by the printer and then by our eyes and brain. Having said all that, I agree absolutely that the Z8 works very well for landscape and overall represents a more compelling package than the A7RV. But if the point of this review was to show that the higher-MP Sony sensor, when viewed at the pixel level, produces softer images with more noise than the lower-MP Nikon sensor (which is in fact also a Sony-made sensor), then there is no surprise and you don't need to do any tests for that; these are just the laws of physics. It could only be different if the low-MP sensor were not at a similar technical level to the high-MP sensor. But in this case both sensors are modern and made by the same manufacturer, so their technological levels should be comparable. And I agree, it is remarkable that the high speed of the Z8 sensor does not result in a significant penalty in resolution or DR. Still, the higher-MP sensor in good conditions could potentially resolve more detail. But to appreciate this, it is necessary to compare images of subjects with finer textures and at shorter distances, so the atmosphere is not in play. Since the difference between 45 and 60 is not great, the range of spatial frequencies that are resolvable at 60 MP but not at 45 MP will be rather narrow. Also, Photons to Photos reports that the Z8 has slightly worse dynamic range than the A7RV, especially at intermediate ISO values (which was not tested here), but this too is expected to be only barely noticeable in real life.
@@AVerkhovsky Regarding diffraction, I talk about it being a range; I never said it was binary, on or off. My understanding is that it is always 'there' but becomes more evident as apertures change, and becomes evident sooner as pixel density increases. Go back and listen to those sections again. As for scaling, upscaling and downscaling invent pixels, which was not the intention of this video. And people can tell me I am wrong; sadly they missed what the video was about, and they are wrong. Intention: camera A, camera B, same lens, what are the pixels telling us? That is it. A pixel baseline. We can all take these files from that baseline and create from there, with all the computer tools at our fingertips. Scaling of all sorts, brands and algorithms can give us all sorts of outcomes. And in the AI world, cameras like the Z8 and A7RV with a 2x AI upscale give us huge files, roughly 19,000 pixels wide for the Sony and 16,500 for the Nikon, more than 99.999% of us will ever need.
@@MattIrwinPhotography Ok, this was your intention. But what is the conclusion? What are the pixels telling us? One conclusion is that the Z8 produces very nice landscape images. With this I cannot agree more. But you also make a significant point that the 60 MP image at the pixel level looks softer and noisier than the 45 MP image. The inference is that the 60 MP sensor is somehow not as good. Surprise, surprise: the lower-MP sensor is better than the high-MP sensor even for landscape. This is what I argue against; what you see as perceived softness and noise in your comparison is not the fault of the sensor, but the expected and unavoidable consequence of physical laws. Does it mean that we don't ever need higher-MP sensors? No, it doesn't. You simply did not test the circumstances where the advantages of more MP may show up. Speaking about AI upscaling, it cannot always replace actual resolution. Try to upscale a cropped image of a small bird with AI: you will be amazed at what artefactual detail in the plumage the AI invents that never existed in reality. People who photograph birds appreciate the 60 MP of the A7RV even in comparison to the 50 MP of the A1, although the fast sensors of the A1 and the Z8 are of course preferable if there is any action.
Would have liked to have seen EVF recording for macro use and the AF accuracy. Of course, the more MP, the bigger the subject looks in the same photo. On paper the Z8 is much more bang for the buck, and both are at about the same price point, on sale or not. But I do heavy cropping for wildlife, and it really matters that the photo is pin-point sharp or it will fall apart. Low-light performance at distance also matters. I can get sharp photos with my D810 and 200-500mm, which I still use, if I am close enough with good light, which isn't always the case. I'll probably pick up a D850 or Z8 to use with my F-mount glass and, if all goes well (Z8-wise: 80% or better in focus and not jumping off the subject), add an expensive Nikon long telephoto. If it's not what I expect, on to trials of another brand: Sony or Canon. Got to give it to Nikon with their pricing and wildlife lens line-up. Killer!!!
I'm looking forward to the new version of the Z9, Matt. Will it have reduced MP, down from 45 to 24? As a press photographer for Getty Images, I don't need 45 MP, although I do use the Z9. A smaller MP count, down to say 24 or even 30, would be sufficient for the news and events that I shoot. What are your thoughts on the new Z9? What will it be called? Rumors say Z9H. I'm also an NPS member, so we get first crack when the new models come out. Dave
Nice comparison, lots of interesting stuff. I'd beware of the 'diffraction calculators' on the web, because they are all bogus. They are plagiarised from Cambridge in Colour's, which has a fundamental error. (One good way of spotting plagiarism is tracking the propagation of mistakes). The error is to do with how pixel size and diffraction interact, and an assumption that bigger pixels somehow mask diffraction - they don't. The misleading nature of this model goes through the video, I'm afraid. It doesn't make a whole load of sense to compare cameras with different pixel count at 100%, because you're comparing at different magnifications. The Z8 looks sharper and less noisy simply because you're looking at a patch of a smaller photograph - and therefore you're less deep into the lens' MTF curve and looking at lower noise frequencies. The 'amplification' argument is also wrong, I'm afraid. A smaller pixel doesn't require any more amplification (it has a higher intrinsic conversion gain) and neither does amplification cause the noise that you see. The real problem with this is that cameras nowadays are so good that the differences easily get lost in methodological errors unless your experimental design is very very good.
Hi Bob, my intention here was to show the pixels, not resampled pixels. Resampled pixels are changed by whatever method the software application (LR, Capture One, etc.) uses to resample images. This resampling process will attempt to deliver an improved image, as we can see for example in Photoshop, where you can choose different types of resampling that apply different rules, for example 'Preserve Details' or 'Preserve Details 2.0' (an improved algorithm, or just a different one), which attempts to keep detail that might be lost in the sampling process, thus introducing sharpening or contrast, or both. If you want to turn 2 pixels into 3, or 5 pixels into 6, the software has to interpolate. This is not the image your camera created anymore; it is an image the software generated and resampled. Once any resampling is done, it is not the image out of your camera, so it is not showing us how the cameras performed. It is showing us how the cameras performed with post-processing, and that is not the intent of this video. This approach is not misleading at all in showing how the pixels, as supplied by the cameras and presented by the same software, behave. Your suggestion is to resample the images. That is no longer what the cameras created; it is altered. To repeat: resampling is not the intent of this video. And if you watched the whole video you would have seen that a further intent is to print large at a later date. In the real world, the vast majority of images are never printed, and an even smaller subset are printed large. Thus the majority of images created today, even with high-resolution cameras, are viewed digitally, so understanding how a pixel-for-pixel view performs is completely valid. And both of these cameras provide more than enough detail and resolution for digital viewing, be it YouTube, Facebook, websites, Instagram, Flickr, etc. Regards, Matt
@@MattIrwinPhotography Hi Matt. I still think that you got it wrong. To start with what you said at the end of the post, 'in the real world...': in the real world photos are practically never viewed at native resolution, which is why pixel peeping is unrealistic. If you're printing, the output resolution is dictated by the printer and the paper size; on a screen, by the pixel grid of that screen, which rather discounts your argument about viewing digitally. In real life you never look at an image that hasn't been resampled somewhere. The resampling happened in the printer driver or the screen driver software, unless you're a critical user and decided to take control yourself, which is why you get the options in the software. Then, even if you look at an image at 100%, you're still looking at resampled pixels, because the sensor is a Bayer array. What you see as a 'pixel' is reconstructed from surrounding pixels, and there are lots of demosaicing algorithms with different effects on noise and sharpness. I could make a comparison like this and convince myself that my Z6 is 'sharper' and 'less noisy' than my Z9. In real life the Z9 is visibly sharper with the same lens viewed at the same size, even on a computer screen. So, sorry, in the end this mistake makes it really hard to draw any valid conclusions when the differences are so small. I said experiment design was hard :-).
@@BobN54 Cheers Bob, you think I am wrong, cool. Whereas I am happy to say both approaches have their place. As for the Bayer array, it is about colour, and pixels are not created/invented at this stage; colour is determined by the Bayer interpolation. Regardless, everyday users, to the best of my understanding, have no capacity to change the information recorded in the RAW file prior to it arriving in their software, so I find this point moot. Be it Photoshop, LR or C1, from there the RAW is interpreted and we are presented with pixels and their RGB values. As for resampling, of course we resample; I never said we don't. But this test clearly started as camera A vs camera B, and to my mind the cleanest, clearest way to understand what that means is to look at the pixels supplied by the RAW file. Every user will start with the RAW and manipulate it from that point in their own way. We are all different, and this might mean thousands if not millions of potential permutations of one image, including scaling methodology. I was interested in finding out what these cameras do, what their pixels do, when confronted with the same lens. That is the question I was looking at. Sure, resample them to make them the 'same' or similar, but you have just moved one major step away from the pixels supplied by the camera, and the power of software is now impacting your results. This was not my intention; my intention was to have as little intervention from third-party software as possible. I did not ask the question of whether I can enlarge or reduce by 5-15% to change the outcome. What I do know is that, with both images 2x scaled with AI, you will have a roughly 19,000 pixel wide Sony image and a roughly 16,500 pixel wide Nikon image, and both images at that point are more than 99.999% of people will ever need. Now if you want to do this a different way and get a different result, and it is because you scaled, great; it does not change what I found without scaling. Further, you may be at my channel for the first time, I don't know, so a little history on my printing. I have sold my artworks for over 30 years. First, hand printing in the darkroom: over 100,000 prints over a 10-11 year period. I moved to printing with machines like a Lambda for a few years, then in 2005 I purchased an Epson 7800 24-inch printer; not long after I upgraded to the 9800 (a 44-inch printer) as demand for 4-5 foot prints grew. Within another year or two I had purchased the 11880, a 64-inch printer, which allowed me to print 64 inches on the short side; I was creating artworks that went beyond 7, 8, even 9 feet on the long edge. I ran my own galleries, selling my work, for close to 25 years. Creating the highest possible quality outcome was critical to me, and thus a great deal of time was put into establishing the most effective ways of enlarging files from 8, 12, 16 and 24 MP cameras. And we did not have the software then that we have today. Running alongside this was offset printing for limited edition prints, greeting cards, calendars and coffee table books, much less forgiving than a large giclee print, with files needing to be 300 DPI, otherwise quality loss was evident in the print. Again this was in an era of 8, 12, 16, 24 and 36 MP, and every method under the sun was looked at and experimented with to ensure the best quality was maintained. I only tell you this because I have a deep interest in large and high-quality printing, which I have undertaken my entire career, and I have produced millions of printed works across books, cards, calendars, posters and originals.
I currently own the Epson 20000 64 inch printer.
@@MattIrwinPhotography Thanks for giving my post the dignity of such a long, detailed answer. I clearly wouldn't argue with your experience, but I think there are one or two things that are just not technically well founded. You say that Bayer is only about colour. That's simply not right. Think about it: at each pixel location there is information on just one colour channel. When the file is processed, the other two at that location have to be interpolated. Even that's over-simplistic, since the colour space you're converting to is not the camera's native space but a standardised one, and all the channels in that space will be interpolated to one degree or another. So, if we concentrate on the lightness (which is, according to your take, native), then in only 50% of the pixels will that be predominantly from the local ('G') pixel. In the other 50% it will be predominantly interpolated from neighbouring pixels. And we can see this if we compare results from different interpolation algorithms, which can produce quite different results with respect to sharpness and noise. So when you say 'to my mind the cleanest, clearest way to understand what that means is to look at the pixels supplied by the RAW file', that's not what you're doing. You're looking at a heavily processed image which is derived from the data in the pixels and happens to have the same pixel count, but is not the same. You can't 'look' at a raw file. It's a latent image, just as in a processed film. You can look at crude visualisations, where the data from each pixel is coloured according to the channel it represents, but that's not the raw file, it's a visualisation. Given that you're familiar with film, you would know that any comparison of films which involved comparing different-sized prints would not be considered very sensible. Imagine that the test method involved adjusting the scale of enlargement between films so the grain rendered in the print was a constant size. You'd compare an 8x12 on 800 ISO film with a 20x16 on 125 ISO, look at them through a loupe and decide that the 800 ISO was sharper. Clearly not sensible. I don't own as big a printer as you do, but I'm not without experience in the subject. I've been taking photographs for over 60 years. I'm a retired professor with a specialisation in sensing systems, so I do know about sampling theory, and that you can't learn anything from a single sample. I also write a regular column for a major photographic magazine on the technical side of photography. From my point of view, I would say that the important thing is to clearly decide on what your objectives are. Your test is fine if what you want to learn is what looks best viewed at 100%, but in terms of what's likely to yield the best-looking images in real-world usage, not so useful, I would say. Still, thank you for giving the time for a detailed discussion; it's appreciated.
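(For anyone following along, here is a toy sketch of the kind of interpolation Bob is describing: simple bilinear demosaicing of an RGGB Bayer mosaic. Real converters such as those in C1 or RawTherapee are far more sophisticated, but the point stands that two of the three values at every pixel are reconstructed from neighbours.)

```python
# Toy bilinear demosaic of an RGGB Bayer mosaic (illustration only).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """raw: 2D float array with an RGGB Bayer pattern (R at [0, 0])."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g  = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])

    def interp(mask, kernel):
        # Average the known samples of this channel around each location.
        return convolve(raw * mask, kernel) / np.maximum(convolve(mask, kernel), 1e-6)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])

rgb = bilinear_demosaic(np.random.randint(0, 4096, (8, 8)).astype(float))
print(rgb.shape)  # (8, 8, 3): three channels per pixel, two of them interpolated
```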
Not all shots over water. Not all shots over distance. Large bay, not ocean, about a mile or more away. Not humid. Mostly not windy. Lovely clear spring day.
Ok but the more valid test is to compare the images at the same size, not both at 100%. The higher pixel density will look worse pixel for pixel but SHOULD look better once resized down to the same size (or the lower pixel density sensor image resized up).
Any time you interpolate above 100% you are losing quality and changing the collected data. I was interested in showing how the image is rendered across all metrics, looking at non-upsampled images. As I stated, the next test is printing at 3-6 feet, to see if a difference is noticeable. Both sensors will be printed at the same size.
@@MattIrwinPhotography Downscaling the A7RV image or upscaling the Z8 image would still have been a fairer comparison (and preferably both from no-compromise uncompressed RAW). If specifically comparing sharpness, it is also worth using a different aperture since, as you mentioned, resolution will be diffraction limited at f/11. A sharp lens at its optimal aperture (for sharpness) would be noticeably better at 60 MP than at 45 MP, but in real-world use with the majority of lenses there would generally be small differences in results. Both are superb bodies. I'm mostly on Sony at the moment but I really like where Nikon is now positioned, especially for wildlife shooters. Being able to adapt Sony lenses to Nikon Z is rather nice too...
@@MattIrwinPhotography It doesn't really matter whether you upscale the Z8 or downscale the A7RV. But for a valid comparison you have to sample the same sensor area from each camera and output it at the same final size. Otherwise you are sampling a smaller portion of the higher-res sensor (less photon-collecting real estate) and putting it at a signal-to-noise disadvantage. Would you compare the Z8 at 100% to a Zf at 100% without resizing the Z8 down to the same FOV (or the Zf up) and make any meaningful conclusions?
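For readers wondering what "compare at the same output size" looks like in practice, here is a minimal sketch using Pillow. The file names are placeholders, the pixel dimensions are the nominal ones for these two bodies, and Lanczos is just one common resampling filter; this is not a recommendation of any particular tool or method, only an illustration of the normalisation the commenters describe.

from PIL import Image   # assumes a recent Pillow (Image.Resampling added in 9.1)

# Bring both files to a common output size before judging sharpness/noise,
# instead of viewing each at 100%. File names are placeholders.
sony = Image.open("a7rv_frame.tif")     # e.g. 9504 x 6336
nikon = Image.open("z8_frame.tif")      # e.g. 8256 x 5504

# Normalise downwards to the smaller file...
sony_at_nikon_size = sony.resize(nikon.size, resample=Image.Resampling.LANCZOS)
# ...or upwards to the larger one; either way, one image gets resampled.
nikon_at_sony_size = nikon.resize(sony.size, resample=Image.Resampling.LANCZOS)

sony_at_nikon_size.save("a7rv_at_45mp.tif")
nikon_at_sony_size.save("z8_at_60mp.tif")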
@@AlistairOwensgooglePlus and @timwhite1111 Please read my pinned comment, 'The 100% conundrum', which covers this in full. When you say 'fair', I completely disagree: using third-party software to re-sample the image changes the test. It becomes a new test, a test I was not conducting. Consider that what you want me to have done and what I have done are both valid, but different. As for "Otherwise you are sampling a smaller portion of the higher-res sensor" - exactly, and understanding that impact is part of the reason for making this video: what does a smaller-photosite camera do when faced with the same lens, when we look at un-resampled images? Once we start sampling (re-sizing, scaling), the permutations of outcomes are multitudinous, and they do not show us how each camera's photosites performed.
Sony's do-it-all camera, as you well know, is the A1, not the A7RV. Having owned both (photos only) I think the A1 is a better camera, but the Z8 is stunning value for money.
@@MattIrwinPhotography I was referring to you saying Nikon had a do-it-all camera but Sony cameras were more niche. I hope you make more non-Nikon videos as you are one of the best on YouTube :)
Sure, but the A1's price limits its market more than, say, the Z8's; sheer price makes for a smaller potential market, and a smaller potential market makes it, on a 'niche' scale, more niche. B+H now: Z8 $3,496, A1 $6,498. A substantial difference for cameras that are very close.
@@MattIrwinPhotography I totally agree on the price; I think both the Z8 and the Canon R5 Mark II are far better value than what Sony offer. I'm a full-time professional and heavily invested in Sony so will not be changing. However, for anyone asking for advice when starting out, I now recommend Nikon. The Z8 is such incredible value for money, as is the Z6III. It will be interesting to see what the Sony A7V looks like, as I believe that range is in serious need of an update - especially the read-out speed, which is incredibly slow. It would be nice to think it will be a stacked or partially stacked sensor, as I think it's ridiculous that Sony's only stacked-sensor cameras are 6K and above.
In 2024, both bodies have great sensors and IQ is, not surprisingly, world class on both; it is the least important discriminator. And I, like some others in the comments, don't see the differences you imagine you're seeing. This methodology is problematic -- far too few samples to be statistically significant, no "blind" comparison (which allows biases to enter the scoring), differences in JPEG processing (for many use cases, a raw comparison is more apt for comparing IQ than JPEG) and other issues which make this IQ comparison worthless. The differences between these bodies will not be IQ in general, but issues specific to use cases. For example, the Nikon weighs 200 grams more than the Sony; the lens ecosystems are distinct; and specific features and use cases make one or the other more suitable. And, of course, if you already own lenses for one or the other and are accustomed to the ergonomics and menus, that will often be a decisive factor. Personally, for my usage, the A7R5 is a dream camera. That said, the Z8 and the Nikon ecosystem are world class and can be expected to excel compared to the A7R5 in many use cases.
? I did not use JPEG. This is not all the images; a one-hour video would have shown the same outcome for this very specific use case. As I stated clearly, if you watched the whole video, the outcome and assertions are specific to this use case only, as I tested nothing else.
The Z8 definitely looked sharper on my 27” 4k LG monitor while the a7RV image was noticeably larger (closer). I’d have expected the higher pixel sensor to be sharper but, in this case the lower pixel Nikon appears sharper. I’m a Sony shooter. Thanks for an interesting comparison.
Interesting. I had a similar experience in a similar comparison, but unlike this video I wasn't using the "same" lens. I have a Z8, and fitted with the 20mm S prime at f/8 alongside my Fujifilm 102MP medium format GFX with the 20-35mm set at around 30mm (about the same field of view as 20mm on full frame) and stopped down to the rough equivalent aperture (about 1 f-number different), the Nikon image, although half the resolution, was easily as sharp as the big Fuji. I was really into pixel peeping and I couldn't tell much difference. The advantage of the medium format sensor is dynamic range, which is crazy on the Fuji - I never need to exposure bracket on it. But I concluded that for sharpness, megapixel resolution was just a wild goose chase. This confirms the glass makes the image sharp. The Z8 is a phenomenally capable camera with a good lens on it.
Smaller pixels will always show you more "fuzziness" due to diffraction; larger pixels do not resolve such fine detail, but it's there (assuming equal conditions of course, and sensors of the same generation)
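A rough back-of-the-envelope for the comment above, in Python. Assumptions: green light around 550 nm, the usual Airy-disk diameter of 2.44 x wavelength x f-number, and nominal pitches of about 3.76 µm (A7RV) and 4.35 µm (Z8). Different "onset" rules give different f-numbers, so treat this purely as an illustration of why the same blur is more visible per-pixel on the denser sensor.

# Airy disk diameter expressed in pixels for each sensor. Illustrative only;
# real visibility also depends on the lens, the AA filter (or lack of one), etc.
WAVELENGTH_UM = 0.55
PITCH_UM = {"A7RV (~60 MP)": 3.76, "Z8 (45.7 MP)": 4.35}

for f_number in (2.8, 5.6, 8, 11):
    airy_um = 2.44 * WAVELENGTH_UM * f_number
    spans = ", ".join(f"{name}: {airy_um / pitch:.1f} px"
                      for name, pitch in PITCH_UM.items())
    print(f"f/{f_number}: Airy disk ~{airy_um:.1f} um  ->  {spans}")
# At f/11 the disk covers roughly 3.9 of the Sony's pixels vs 3.4 of the Nikon's.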
With the Z6III having such poor dynamic range (nothing better than the Z50, according to PetaPixel), I am thinking about picking up the small Z50II for light wildlife video and some street. But I have seen nothing on its weather resistance. Any test or coverage in that area?
The truth is that 60 MP vs 45.7 MP is completely irrelevant. It takes at least twice the number of pixels to see a notable difference in print at a given size. This is why the upcoming high-res Nikon will most probably be at least 80 MP. The Nikon is a much better deal than the Sony, and this is even more visible if you compare it to its true competitor, the A1.
Hi Matt, I'm disturbed by what I hear and see here. A lens projects an image on the sensor. This image has some degree of definition. When you stop down, this degree of definition may be reduced by diffraction. It is not related to the sensor; it is a property of the projected image. When you compare two pictures from different sensors, you have to compare them at the same final size, not at 100%. In this case, the Z8 picture should be at about 115%. If the lens is absolutely the same, you are likely to notice the same degradation at the same f-stop.
@@lsaide OK, at 115% each pixel of the sensor is not being represented by a pixel of the screen. It is not a pixel-for-pixel representation. It is not 1:1. This means interpolation will need to occur. We don't want that.
@@MattIrwinPhotography What we want as photographers is a final image. Unless you print the picture at 1:1 pixels (in which case the one coming from the Sony will be about 33% larger in area), there will be some interpolation.
Er. What? The test is looking at lens sharpness and sensors. To interpolate pixels for a test using a screen as the viewing medium makes no sense. We will interpolate when we print.
I found your comparison fair and balanced. However, THE SLANTED LENS did a somewhat similar test and found the a7RV to be sharper. Go figure!
I disagree with the reasoning for favoring 1:1 over normalizing to common dimensions when comparing which produces a better image, but I understand why you've chosen to do things this way. It's your channel and your choice to compare different magnifications of a scene if you want. You're not alone in favoring such an approach. I'm still going to shake my head every time I see it done.
Anyway, cheers for the attempt even if I disagree with it.
Interesting comparison. You say the 15.1% difference in image width (or 32.5% difference in image size) does not make a difference, but then you say the 15.7% difference in pixel pitch (35.4% difference in photoreceptor area) does make a difference. I say they both make a difference and, if you are going to compare sharpness on a monitor at 200%, you need to scale one or the other image so that they are the same resolution. My guess is that then, they will be the same sharpness.
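For anyone checking the percentages quoted above, here is the arithmetic, assuming nominal pixel dimensions of 9504 x 6336 on a 35.7 x 23.8 mm sensor (A7RV) and 8256 x 5504 on a 35.9 x 23.9 mm sensor (Z8). The exact figures shift by a point or so depending on which sensor dimensions you assume, so small disagreements with the numbers in the comment are just rounding.

# Quick check of the size/pitch ratios from nominal specs (assumed values).
sony_px, sony_mm = (9504, 6336), (35.7, 23.8)
nikon_px, nikon_mm = (8256, 5504), (35.9, 23.9)

width_ratio = sony_px[0] / nikon_px[0]
count_ratio = (sony_px[0] * sony_px[1]) / (nikon_px[0] * nikon_px[1])
sony_pitch = sony_mm[0] / sony_px[0] * 1000      # um
nikon_pitch = nikon_mm[0] / nikon_px[0] * 1000   # um
site_area_ratio = (nikon_pitch / sony_pitch) ** 2

print(f"image width difference:    {width_ratio - 1:.1%}")      # ~15.1%
print(f"pixel count difference:    {count_ratio - 1:.1%}")      # ~32.5%
print(f"pixel pitch (um):          {sony_pitch:.2f} vs {nikon_pitch:.2f}")
print(f"photosite area difference: {site_area_ratio - 1:.1%}")  # ~34%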
I own both the A7RV and the Z8. Generally I think your observations are spot on. However, in my opinion the stabilization of the Z8 is slightly worse. I also have to say that in my opinion the autofocus of the Z8 (especially for portraits) is slightly worse than the eye AF of the Sony. Also, in my experience it is necessary to focus differently with the Z8. I can take the A7RV with a 50mm f1.2 GM lens at f1.2, point it roughly in the direction of the person and press the shutter button without looking through the viewfinder, and I know the image will be in focus (using wide area, AF-C and eye AF). With the Z8 there's a chance of maybe 5% that the camera focuses on the chin, on the ears or somewhere else when I point the camera in the direction of a person and press the shutter. If you want a 100% hit rate with the Z8 I would recommend using tracking. I usually put the AF point in the center of the frame, then point the camera at the person I want to focus on, press and hold the AF-ON button and reframe the picture. Now I can press the shutter as often and as quickly as I want and will have all pictures in focus. The advantages of the Z8 are, in my opinion, the better ergonomics, stacked sensor (no rolling shutter, no bokeh balls cut in the middle), video quality, the possibility to connect a hardware GPS receiver, the automatic subject detection (no matter if human or animal) that the Sony doesn't support, 20fps and more. The downsides of the Z8 are, in my opinion, battery life, size, weight, a little less sophisticated autofocus and a less efficient stabilizer.
Thank you for your experienced comparison. It really helps.
As a Nikon Z8 user and YouTube watcher, it seems to me that the stabilisation in video is better than the stabilisation for stills. Would you agree?
@@torngrasp Yes, I agree.
Matt’s always unbiased 🤣…
@@torngrasp Nikon has lower quality, not worth investing in in 2024 IMO.
12:59 If someone is pushing shadows up like that in editing on a regular basis, they have bigger issues than 1/16th of a stop less DR or a minuscule amount more noise in the shadows. That being said, personally, I shoot High Efficiency* on my Z8 and never notice any difference in the files while editing. The file size (on disk) is usually several MB smaller than photos from my Lumix G9 II with a 25MP sensor.
The one thing that annoys me about the A1 is the file size; compressed raw is still 50 MB.
@ even my Panasonic G9 II has larger files than my Nikon Z8. I’ve been surprised how small the Z8 raw files are with no loss in quality. It’s great for saving disk space.
In all comparisons of image sharpness, especially the zoomed-in comparisons, I see a distinct difference in the field of view of each camera, the Sony producing noticeably larger objects than the Nikon. Simply enlarging each original image by the same integer amount, keeping the relative object sizes as they are, will usually create an impression of greater sharpness in the image with the smaller objects. It will produce higher amounts of higher spatial frequencies to the visual system -- hence better perceived sharpness. But at this level of pixel peeping adjusting either image to enlarge by non-integer amounts will run into possible problems with the quality of the enlargement calculations, especially the pixel-interpolation and rounding algorithms used to produce an integer-based digital-display output. What intrigues me the most here is the difference in FOV, slight though it is. Perhaps it is the difference in flange distance between E- and Z-mounts? But I thought specializing a lens for a particular mount was supposed to account for this.
FOV was something I wanted to test further. And I agree with the above conundrum. I elected to remain at pixel for pixel.
No, not every sensor in a given format is the same size, even within the same brand. For instance, the A7R V is 35.7 x 23.8 mm, an A1 is 35.9 x 24 mm, and the Z8 is 35.9 x 23.9 mm. At these pixel densities even fractions of a mm make for significant differences.
FYI - I did not crop, and I did not move further or closer, in any of these images.
@@MattIrwinPhotography Matt, does a photographed square look exactly square on each camera? It almost seems like one or the other camera is squishing either the horizontal or the vertical when converting the object to an image on the sensor and then to a file to be displayed.
@@bmwohl I can't say what the manufacturers do. I don't personally see the difference you are talking about. I suspect the photosites - which always appear to be round when I see diagrams, or at least the micro lenses atop them are round - record a value, whether round or square. Then, when digitally represented, it is shown as a square, or at least what looks like a square to me.
7:25 I agree with you, it is not necessary to use aperture f/8, f/11 or smaller for landscape photography; it is more important where you focus, and the out-of-focus areas can be a help to get what you want with the image.
When you compare the two files, would it be possible to match the screen size?
Also do not forget that Capture One has two different profile options.
One thing is certain, there is no practical difference.
By the way, I am a Nikon shooter.
If they are the same size, pixels will become interpolated in one or the other image. Unfortunately this does not help us see the outcome of the pixels as provided by the file. Software interpolation is required.
The FOV has to be exactly the same. FOV will always change apparent sharpness, so the comparisons are not correct if the FOV is not the same. The lens may also be a worse performer overall on the Sony system and not exactly the same. Or, if the FOV really was the same, would the result be the same?
Great video Matt. Very enjoyable listening to you logically break down these bodies with this lens as a benchmark. Would love more videos like this. 👍🏻
The Nikon sensor has lower noise and more dynamic range because the pixels are larger. The Nikon has a 2mm advantage in flange to sensor distance, which is why the Nikon S series lenses are so good. I think Nikon just played a longer game.
Yet despite the supposedly worse mount we keep seeing Sony lenses that optically outperform both Nikon and Canon options - often while weighing less.
The stills out of these two cameras are almost indistinguishable. The only side-by-side that seems to show a slight difference is around @14:01. The police sign and the tower directly behind seem to show more detail in the left photograph. This is what I find on my laptop screen.
Well Matt, I just subscribed. That was the best comparison video I have ever seen in 14 years of photography. Having two of the exact same lens (although different mounts) made the comparisons more realistic than having two different lenses set to the same focal length.
I have been with Nikon for 14 years. I live in East Gippsland, so most of my photos are seascapes, landscapes, wildlife and nature. I also dabble a bit in video, with drones and various types of action cameras. I've never been a Nikon-versus-any-other-brand person, because it costs a lot of money to change. The difference in pixel count made it easy to pick which image was which, but on my monitor it was difficult to tell the difference in sharpness and clarity. Love your work.
Sounds like you are near Raymond Island. I spent a lot of time at my grandparents' place, which was at Eagle Point, not far from there.
@@MattIrwinPhotography Eagle Point. Great spot for sunrises, there and along the old jetties toward Paynesville. I'm in Bruthen, and work in Lakes and Bairnsdale. So I'm in the middle of Lakes, ocean, and the beautiful high country. Heading up to Nunniong this weekend to try for Brumby photos.
I’m fond of the 90mm so the Tamron is interesting. Being able to compare its performance simultaneously on 2 platforms is also valuable. Being able to achieve virtually identical fidelity with both sensors and processors tells us just how good these cameras really are. Hope we see more tests like this, thanks Matt.
I agree. It’s very interesting and informative.
Just curious, does the A7RV sensor have an anti-aliasing filter? Could explain some of the differences.
If the A7RV is reaching diffraction at f/7.1 and the Z8 is reaching diffraction at f/9, then the Z8 will have a slight advantage when shooting at f/11. That's just how physics works.
When shooting wide open, the Z8 appeared sharper than the A7RV. This means that there is something else that you may have overlooked, and that is resolving power. Once you pass that megapixel threshold, the images will inevitably look softer.
Why not use the Sony mount only and use an adapter like the Megadap ETZ21 to eliminate any difference between the individual lenses?
I will spell it out for you. Had he used an adapter, the test would've taken longer. Furthermore, there was limited time of light, and it would not be a completely true comparison.
Love your channel Matt😊
I think you did a great job. I’m sure this was a ton of work, so thank you for taking the time to make this.
Wonderful video Matt! Have a great weekend!
Hi Matt, thank you very much for this great comparison!
Cheers, Martin
Matt, could you do a macro comparison of the Tamron Z lens vs the Nikon Z MC 105?
What are you asking exactly? The Z lineup currently has two macro lenses, namely the 50mm Macro and the 105mm MC. So which lens do you want him to compare?
@ the Tamron Z lens… vs the Z 105
@@bubbachua I don't own the Tamron but I own the Z 105mm, among other Z lenses. The Z 105 MC is easily one of Nikon's best Z lenses, also one of the sharpest; it is the best macro out there among the Canon, Sony and Nikon offerings. This makes me believe that it most likely also beats the Tamron.
@ No doubt it is probably one of the best macro lenses out there, but no one has yet done a comparison and I thought it'd be good content 😁
@@bubbachua Oh, but it was. First, one can always check the MTF charts for this 105 vs the Canon 100mm vs the Sony 90mm macros. Then plenty of outlets that do reviews also claim it to be the best out there (across these three brands). The only real downside of this lens is that you can't go beyond 1:1 magnification, and that's where lenses like the Laowa go beyond this one for macro.
I'm lucky - I don't see the differences. I'd say though, as a proud owner of a Z8, that 60 MP is a notable difference, not just residual... 1,000 pixels is quite a lot if we're cropping; obviously when printing, 10 MP is enough. I always say the more the merrier; the compromises are bigger file size and, most importantly, signal-to-noise ratio, which in modern cameras is a negligible difference. ACAB: all cameras are beautiful ;)
I'm going to agree with the commenter who suggested the FOV of the images on screen must be the same for a true comparison. That means you can't just set both to 100%. Why? Because the more you blow up an image, the grainier it looks when viewed from the same distance. And the Sony gets blown up more when setting the image to 100% (all pixels the same size, so the smaller pixels get enlarged more). Another reason the Nikon might look sharper is the larger pixels are more accurate individually. Larger pixels will be more accurate as they can each sample more light. Since you are comparing the same number of pixels from each camera, the sensor with larger pixels/photosites should win. On the other hand, if you compare the same FOV portion from each sensor, the sensor with more pixels covering the same subject allows less magnification to achieve the same image size...and will likely look sharper. Of course, all of this is dependent on the lens actually producing the same level of image quality...which has not been demonstrated whatsoever. Sample variation exists. It would exist if you were comparing two Tamron lenses for Nikon. But you're comparing Nikon vs Sony mount and have admitted the lenses need adjustment in the manufacturing process to fit those mounts. Who knows if Tamron was equally successful in meeting a standard for both types of mount?
The problem with your tests is that you failed to normalise the image sizes to account for the resolution difference - the Sony will naturally exhibit more noise when viewing both at 200%, as the effective magnification factor is greater. For a fair comparison of noise and sharpness, view the Sony at 200% and the Nikon at about 230%.
Hey Matt, 1000 thanks for this comparison. Might it be possible to make some of the raw files available for download and pixel peeping on the screen at home?
Matt, interesting seeing your images. Thanks for taking the time to do this. A couple of observations. Viz the discussion about pixels/demosaicing/rescaling: remember that the demosaicing is done by the raw converter, and this process is inherently a rescaling. Another point to remember is that C1's defaults are highly tuned for Nikon files; they are not so optimal for other file types. From my own testing I would suggest that at least as much of the difference that you showed comes from optimisation in the raw converter as from the different sensor/capture systems. Just so that it is said - I'm completely agnostic to the camera manufacturer religious wars, so have no investment in Nikon vs Sony vs ....
Thanks for your comment, and I am sure you have read some of the lengthy threads below.
You said
Viz the discussion about pixels/demosaicing/rescaling: remember that the demosaicing is done by the raw converter, and this process is inherently a rescaling.
Indeed this argument has already been made, and you can see my responses below. Short version of my thoughts: sure - but the RAW we download from our cameras is the baseline we all work from. That RAW is then interpreted into pixels represented in our editor of choice. These pixels will be the same RGB value from editor to editor, and can be overlaid with endless non-destructive edits or layers, presets, or however the editor is programmed to present the RAW, which differs from editor to editor. This is the baseline where photographers can begin to interact; prior to that moment we have no say.
Interestingly, in all the comments below, with people telling me how wrong I am, I have asked: how is it OK that the sampler is inventing pixels? None of the various threads has answered this.
RAW sitting in our editor is the baseline we all work from.
Sampling for size removes us from that universal baseline.
Further, sampling software is designed to enhance an image when scaling, to actually make it better. We are removed from the RAW file.
As for Capture One being optimised for Nikon: I am not aware of Sony being worse in Capture One. I also state very clearly that this outcome is a surprise and is specific to all of the parameters of the video, which includes Capture One. Which is, by way of that, me stating the outcome might be different with any single parameter changing.
@@MattIrwinPhotography Totally agree with your last paragraph. However, I don't agree with "These pixels will be the same RGB value from editor to editor". All my testing has shown that the RGB values and even the 'pixels' (results of the demosaicing/bin gridding) are different from one RAW editor to another. You will even see that the total number of pixels or the exact image coverage will vary. In converters like RawTherapee you can even change the demosaicing algorithm and get different results in the same converter. Each of the converters/methods has its intrinsic default sharpening methodology - this is prior to any "sharpening" that the user has access to.
The RGB values are the result of the conversion profile used in the converter, and in C1, like in many others, you can calibrate for your exact camera body/lens/lighting combination and load your own ICC profile - obviously changing the resulting RGB values. I think of a raw file as being analogous to an exposed negative; the converter is more like the developer: change the developer or the development time/temperature and you get a different result. Remember the RAW file doesn't actually have any "colour" in it, it just has "counts" for each of the RGGB photosites. The RAW converter then takes these counts through non-linear conversions to generate the RGB values. In more or less all but academic/forensic cases the resultant RGB values are tuned to be aesthetically pleasing rather than pedantically "correct" (there is a small numeric sketch of this after this comment).
Anyway, I thoroughly enjoyed your video, and appreciate the time you spent on it - even if I draw a different conclusion.
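A toy numeric sketch of the point above about conversion profiles: the same raw "counts" produce different RGB values under different profiles. The two colour matrices and the simple gamma here are invented purely for illustration; real converter/ICC profiles are far more involved.

import numpy as np

# Same demosaiced, white-balanced "counts" pushed through two made-up profiles.
raw_rgb = np.array([0.42, 0.55, 0.31])

profile_a = np.array([[ 1.60, -0.45, -0.15],      # invented camera-to-display matrix
                      [-0.20,  1.35, -0.15],
                      [ 0.05, -0.40,  1.35]])
profile_b = np.array([[ 1.45, -0.30, -0.15],      # a second, equally invented profile
                      [-0.10,  1.20, -0.10],
                      [ 0.00, -0.30,  1.30]])

def to_display(raw, matrix, gamma=2.2):
    linear = np.clip(matrix @ raw, 0, 1)          # colour-matrix step
    return np.round(255 * linear ** (1 / gamma))  # tone-curve / encoding step

print("profile A:", to_display(raw_rgb, profile_a))
print("profile B:", to_display(raw_rgb, profile_b))  # same counts, different RGB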
Nice comparison, Matt. Have you had a chance to compare the stabilization effectiveness of the Tamron 90mm vs the Nikon 105mm macro yet?
Great review Matt, really interesting! Reckon this could be a lens well worth buying!
Fab comparison Matt! Some nice glass turning up; I've always been happy with the Tamron lenses I've owned.
Would love to see the difference with the Viltrox 16mm 1.8, Sony vs. Nikon.
Hey Matt, interestingly, I’ve had a chance to shoot handheld macro and close-up images with both the A7RV with the FE 90/2.8 macro and the Z8 with the native 105/2.8 macro and found the stabilization to be much better with the A7RV. At times, I felt the stabilization of the Z8 was not sync’ed up with the 105mm lens but it was a totally different story with the 105mm lens on the Zf and its improved IBIS. That said, I would be interested in your comparison between the A7RV and the Z8 at more “macro” distances when both cameras are outfitted with the new Tamron 90mm lens.
Smaller pixels have the ability to show more "fuzziness" in the diffraction zone. In both cases the sensors are able to show the diffraction that exists at f/11; it's just that the Sony can show more fuzziness than the Nikon.
The whole thing is that "consumer cameras", even these giant ones, have their Nyquist frequency below the diffraction limit of a good lens today. I am almost sure that this Tamron - or any other high-end macro - can deliver resolution at f/4 that no camera can show (at least, not APS-C or FF; small cameras with pixels smaller than 2 µm exist, but they are for scientific or industrial use, not consumer level).
For the nerds out there, here's a full explanation of MTF and how it relates to pixel pitch and the diffraction limit of lenses. Phone cameras have the peculiarity that the Nyquist frequency is way above the theoretical diffraction limit of the lens, so those cameras are doomed to be "diffraction limited" (even at f/2.35 🤯). On the other hand, all current FF, APS-C and Micro 4/3 cameras are "pixel limited" (of course, they are diffraction limited near f/8 with a 3-something µm pixel pitch); it's just that there are excellent lenses that can deliver more fine detail at f/8, f/5.6 and f/4 than the cameras can record.
If the old Micro-Nikkor 55 f/3.5 can operate "diffraction limited" at f/4 and deliver over 300 lp/mm (there's a paper on that, from the International Society for Optical Engineering, 2009), I'm pretty sure modern macros can do it too. But cameras can't handle that yet. (Some rough numbers follow after the link below.)
th-cam.com/video/1aCXMBP8_3I/w-d-xo.html
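Rough numbers behind the comment above, in Python. Assumptions: green light around 550 nm, sensor Nyquist limit of 1 / (2 x pixel pitch), and the incoherent diffraction cutoff of 1 / (wavelength x f-number). Contrast at these frequencies is a separate question (that is what MTF curves describe); this only compares the limits, and the pitches are the nominal ones for these two bodies.

# Sensor Nyquist limit vs lens diffraction cutoff, in line pairs per mm.
WAVELENGTH_MM = 550e-6
PITCH_MM = {"A7RV": 3.76e-3, "Z8": 4.35e-3}

for name, pitch in PITCH_MM.items():
    nyquist = 1 / (2 * pitch)                    # highest lp/mm the sensor can record
    print(f"{name}: Nyquist ~{nyquist:.0f} lp/mm")

for f_number in (4, 8, 11):
    cutoff = 1 / (WAVELENGTH_MM * f_number)      # lp/mm where diffraction MTF reaches zero
    print(f"f/{f_number}: diffraction cutoff ~{cutoff:.0f} lp/mm")
# Roughly: Nyquist ~133 lp/mm (A7RV) and ~115 lp/mm (Z8), vs cutoffs of
# ~455 lp/mm at f/4, ~227 at f/8 and ~165 at f/11.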
I have been debating between these two. I was using an old full frame camera and got people to buy my nature photography photos. But I want to get a flagship camera. Subscribed - I like that you used the same lenses.
Great and informative Matt! I shoot portraits with a Z8 using a Nikkor 70-200mm S 2.8 which can start to feel a bit heavy after a while when shooting handheld. I was considering getting Nikkor’s 85mm 2.8 as a lighter option. Can this Tamron’s 90mm compete with the Nikkor 85mm? Thanks Matt.
Hi Matt. Interesting results. I suppose to address the possible lens to lens variability within the same manufacturer, if you happen to do these tests again, you could get an ETZ adapter and use the Sony mount lens on both cameras. Obviously that makes it more difficult in changing lighting conditions since it will take you longer to swap lenses around, and generally be a PITA. But it would be interesting to see.
I have done that in the past. I am not a fan of the idea of attaching and removing that ETZ 50 times an hour. And to be honest, I think it is more useful for a Sony user to see: Sony camera, Sony version of the lens, this is the result; and for a Nikon user: Nikon camera, Nikon version of the lens, this is the result. This is the more likely scenario - users will buy the lens in their native mount.
The Tamron 50-400mm comes in Sony E mount and Nikon Z mount ... an extremely versatile lens and very sharp. I have it on my Z9 at the moment but am thinking of getting the E mount for my Sony system. I love my Sony, but the system lacks any kind of multiple exposure mode. I am mainly a landscape/woodland photographer and I love shooting multiple exposures, in camera, of different scenes using the different in-camera blending modes available in the Z9 (all Nikon and Canon mirrorless and later DSLRs have this feature ... why not Sony???). You can get a similar result with images from a Sony, but it all has to be done in post processing, so you can't see the results in the field, which takes the spontaneity and experimentation, not to mention the fun, out of the experience. I asked about this omission on a Sony forum a few years ago and the Sony rep said this was not a professional feature!!! Nice to know that, from the Sony perspective, Nikon and Canon cameras are not up to the PRO standards of the Sony system.
I’m super excited about Tamron releasing this new lens on both Z and E mount at the same time. Maybe this means all new Tamron lenses will also be released on Z mount moving forward?
Hope so. 😀🤞
I also hope Viltrox can release their Lab series lenses for the Z mount. Their new Lab 135mm f/1.8 is a very good lens. There are a lot of f/1.2 full frame lenses in their lens roadmap.
Very good comparison Matt. Thanks for putting this together. Can you say why you didn’t also shoot the Nikon in 14-bit lossless compressed for a more even comparison at least for DR?
If I can get all the pieces of the puzzle together again, I will do that. What I wanted to show in this video was what HE* is capable of real world. Since it arrived in 2021, I have used nothing else on the cameras which support it, and have observed no disadvantage for that HE* usage.
A very good video, as always very detailed one too!
Thanks for sharing your insights!
Very similar colors at 17:35 ?????
The light in the skyline was amazing. I hope you also made some images in addition to rolling the takes for the review 😊
I may be wrong, but I thought Sigma (as well as other manufacturers) made lenses for various mounts decades ago, i.e. offered the same lenses in several mounts from different manufacturers.
Yep for sure, this is the new generation - mirrorless, and we are going through that all over again :) Is it a first for mirrorless from a major company?
Sunday morning Matt vids, my favorites! 👏🏻 I love what Tamron is offering lately, I always go for native glass but that 50-400 is in my future.
Also just added the Z6III & grip to my kit, shot a stage event with it Friday night. It’s phenomenal - feels like the Z8 & Zf had a baby 😆
What is in the left corner, in the cloud of the right pic at 22:00 - some kind of orb?
Maybe off topic, but I have a question: which metering mode on Nikon cameras do you think is best for portrait photography?
It can be sample variation between the lenses. If you had a third lens (for Nikon or Sony mount) you would probably have another result (a third result). I have tested so many lenses and seen that though they are the same type of lens, they are different. I compared three Canon 135/2.0 L lenses on the same body - all had different sharpness. I have tested two Sony Zeiss 55/1.8 - one was sharper in the middle, the other sharper at the edges.
For sure. I mention it at some point in this video. 😀
Same here. I have two Olympus 17mm f/1.2 lenses. One is noticeably sharper than the other. But here he is not even comparing the same mount. Tamron may have done a better job with the Nikon mount than the Sony mount.
When making comparisons of two pictures side by side, It would be helpful to label them in large print which brand is which.
I prefer to not have it in my face. I'll look and see what the camera says. If there's a difference, the photo should be easy to recognize.
Man, I would love to see this test with the Canon R5 added in. I have bought Canon for a lifetime but am strongly considering Nikon.
The sharpness difference will be diffraction. Did you take any at 6.3?
Not at 2.8 and 3.5. 😀
I don't understand why the comparison wasn't done on a level playing field, i.e. magnification, ISO etc. Why didn't you shoot at a mid-range aperture? Instead you start out at f/11 for an infinity shot. I don't get it.
I don't get what you are saying. The base ISO of each camera is different: 64 and 100. I shot at three different apertures; perhaps you did not watch the whole video. As for magnification, for this style of shoot it is impossible to match - moving closer or further changes perspective.
That is an interesting comparison, and maybe you are right with what you have been saying about diffraction; maybe 45 MP is the sweet spot, at least for now. Cheers Matt!
3:40 mount size difference!!!
Cracking comparison! Both amazing cameras… though I’m happy that I’m a Z8 owner 😊
Interesting, though I'm not sure why this would be important to anyone shooting one system or the other. It's not like anyone would want to dump one system for the other because of it. And of course, as usual, it's not the camera that makes great photographs, it's the person behind the lens! Most all of the systems available are highly capable. Personally, a comparison of lenses within one system would be more valuable for someone interested in purchasing the Tamron 90mm - say, for example, a comparison with Sony's own 90mm macro. And even then one would probably be splitting hairs!
Extremely comprehensive and detailed comparison. I have ultimate faith in my Nikon brand; the dynamic range and color science are more pleasing to me, especially in VIDEO. The difference for me is the Z mount, being the best mount in the industry. Very interesting video, and yes, there are no bad choices - all great cameras these days. I love my Nikons; hopefully they will continue to improve and release better deals for all consumers. I have the Z6III in my sights at the moment - any opinions on that, anyone?
It’s interesting to watch. However, I do not believe that how why we actually choose cameras is based on objective measurable information. I think we mostly use the reports and comparisons to justify our existing decisions and biases.
When you look at a great image, do you actually know which brand the photographer used? As far as I can tell all the brands are very good.
Who has the best equipment brand seems to be a lot like model car or stamp collecting. I own Nikon stuff but I can’t say I would be unhappy with a Sony or a Canon. I like the Nikon lenses and the Z9. I love it when I manage to produce great images. It’s not the gear which I need to think about, it’s my photographic skills. I used to think that better gear would help me with that. Now I am just interested in improving my creative vision and responses.
Really well said Martin
I wonder if underexposing the Nikon makes it appear a tad sharper, with darker lines having that effect. Why not use ISO 100 on both cameras, since this is a lens test?
I do not think he can do so, thus his adjusting with shutter speeds.
@ He could have chosen ISO 100 on both cameras and taken the same exposures, correct? I am pretty sure the Z8 has an ISO 100 setting.
It’s his methodology that’s skewing the results. Essentially, the way he’s comparing the lenses at different magnifications is unintentionally favoring the Z8 due to the perceptual differences introduced with viewing size. A different methodology could have been to use the macro lens for macro photography, comparing smallest resolvable details. This would also have the benefit of demonstrating whether the lens is capable of resolving for 60MP, or if it the A7RV was oversampling
ISOs will never be the same even if both are at 100; different manufacturers have different gain increments.
@@JojoJoget Yes, maybe, but using ISO 64 on the Nikon and ISO 100 on the Sony, and basing the Nikon's exposure on the Sony's reading, is of course going to underexpose the Nikon. You can see it in the images. I shoot Nikon, but still.
Interesting… but I do not believe the lenses are identical. They clearly have some different elements to handle the different mounts. So it’s just possible (and probably likely) that one lens is slightly sharper than the other.
What an interesting video, thanks! Personally I'd guess the Expeed 7 is making the difference. They are obviously both great cameras, but I noticed the 100 shadow lift is more obvious on the Sony in the main drill shot, though that may be skewed by the slightly darker exposure. PS. Your use of the Tamron 90mm macro is also a great choice, because this lens has really been perfected over so many iterations. I have an old f2.5 manual focus version of this lens and even that is very sharp.
Matt: get a Megadap E-mount-to-Z-mount adapter and you can try the same Sony lens on Sony and Nikon. Of course you're probably still not comparing apples to apples, but you'll have fun.
At the end of the day, the native megapixel stills are virtually identical on these two examples of bodies and lenses. The question I have is there any difference when a pixel-shifted & multiplied image is compared? Your points on video differences due to technology platforms might also be a good comparison in a future video. I always enjoy your pragmatic approach to producing your content. Cheers!
16:50 dynamic range is a bit higher for the 7RV
What??? The bottom one is the Nikon and it looks way better in dynamic range.
In every photo, I noticed the Z8 was absolutely sharper, but only by a small percentage. This also could boil down to the lenses being minutely different, as they aren't 100% apples to apples. Now, for ISO/noise, absolutely the Z8, due to less noise and fewer megapixels resulting in larger pixel size (light gathering), resulting in generally less noise. In nearly every image I could pick out which image was sharper or had better contrast/true-to-life colors and brightness, and it was nearly always the Z8.
Thanks Matt, a really interesting review. Looking at the images via TH-cam it is hard to see any difference; however, on my 5K 27" display the Z8 might be a whisker sharper - I am looking at 11:37 on the timeline with the split left/right, and the one on the right just has it. However, I think that image shows a dust spot just above the left side of the short building on the right. Looks too round to be a cloud!!
Sigma do release lenses for L mount and E mount at the same time.
I believe they still need permission for Z mount... let's not even mention RF mount.
Super video, and explained perfectly as always; glad to have a Z8 and a bunch of Z glass anyway!
The hardcore Sony pushers/owners are sort of what the Canon users used to be in the past...a bit arrogant at times. I don't care about the brands. I've owned several Sony cameras, Fuji, Nikon, Olympus, Panasonic and maybe I'm forgetting one or two brands. Don't think I've ever owned Canon and that probably comes down to the fact that I picked the brand back in 2004 that gave me the most bang for the buck, which was Nikon and the D70. Going forward, I didn't buy Canon because of the dynamic range disadvantage and that lasted for several years. Anyway, I'm not anti-Sony or anti any other brand because they all have some great cameras...but yeah, their marketing, their influencers and some of the arrogant enthusiast brand loyalist can be annoying. I wish they'd focus more on talking about the art of photography instead of endlessly bragging about specs (and these are some of the same people that use high end/high spec cameras to do nothing more than shoot birds at the feeder through their bedroom window).
Well said.
What you said is true. Tamron produces excellent lenses.
Interesting comparison indeed
Nikon has a 0.04% holding in Tamron ($1.7m); also, there are multiple Mitsubishi Group companies (of which Nikon is a part) holding various shares in Tamron.
Very interesting - do you know what share of Tamron Mitsubishi holds, as they are Nikon's parent?
@MattIrwinPhotography The Mitsubishi Group is a group of companies that coordinate with each other and own shares of each other (as opposed to a parent-subsidiary structure). There's difficulty narrowing down how much they collectively own of Tamron, but I think it is safe to assume it is over 1% (which may not seem much, but given that they operate as a collective, there would be an incentive for Tamron to treat them as clients as well as shareholders).
I am firmly in the Nikon camp, ... however, I agree, the Sony A7RV is very good.
The monitor I am watching this video on is just a 2K monitor and I just cannot see any difference between the two. Which brings up a point to consider, the Purpose for Photography is to produce images for Display and the Human Eye can only resolve 300 dpi if the vision is 20/20. My point here is that what you see in a pixel peep with a 46mp or higher resolution camera is completely meaningless because there isn't a person on the planet who can see that level of detail in a print when viewed at a "normal" distance for the size of the image. It's why images made with 12 or even 6 mp cameras are still capable of producing perfectly acceptable prints.
Watch the whole video. I state, if they were not side by side we would not know.
You should have included the Z6 III in this comparison too...!
Very interesting, and once again confirms that pixel counts need to be judged carefully - only part of the formula. Appears the world needs to take more notice of Nikon !!
Matt: I’d urge you to read through your comments here, as several knowledgeable commenters have highlighted fundamental errors in your methodology, assumptions, and frankly just factual errors in your understanding of how sensors work. They did a good job so I won’t repeat them. This isn’t meant as a personal slight, and I’m a Nikon owner (best to head off bad faith rebuttals from other commenters). I know that no one assumes they’re wrong, and most people aren’t open to hearing it, and you obviously put a ton of work into this, but unfortunately your conclusions and the path that lead you there are just inaccurate and I’d encourage you to approach the corrections you've received with curiosity and openness.
100% topic?
Read the pinned comment
Amazing content. I was a pro Nikon shooter my whole life, till I changed to the Hasselblad X series for photos and the FX3 for videos. Then I got the A7R5 as a fast camera (since I have many Sony lenses) but was never satisfied with the less sharp, busy, noisy images of the A7R5, and I hate the ugly small shape of Sony cameras and their menus (though it's just a full frame camera and shouldn't be compared to the medium format Hassy). Lately I'm thinking of changing back to my favorite full frame brand, Nikon, and with your detailed videos and explanations I'm convinced that will be the right choice. Thanks Matt
Sony A1 is much closer to the Z8 in specs. Agree about Sony body shape but that's changing with the A9III and the A1II which is to be released on the 19th. I think the AF is better on Sony but Wow the Z8 is amazing value.
You can use the exact same lens with a Sony E to Nikon Z adapter.
4:00 It's a shame the bird was out of focus. Very pretty example of its species, despite being very common.
I always think it's funny when people say they can't tell the difference. I can obviously tell the difference, and surprisingly the Nikon is sharper; I wouldn't have expected that.
Sony A7R5 is around $500 AUD cheaper in Australia. (at Digidirect at least)
For sure today, and it could be more or less in a week, just like the Z8 :)
Compare lens systems too.
Hi Matt, I would not have made a test with apertures smaller than 5.6; I would have focused the test more towards macro and portrait photography. I got the Sony full frame system in March 2019, when Sony was leading in mirrorless full frame cameras. It is a joy, with all the different cameras to choose from and, for me especially, all the lenses you can get for the Sony E mount system. Nikon is moving more in that direction now. The Sony A7CR is a great design - I love that the EVF sits in the corner - but I do not like the small magnification and the low resolution of the EVF. The A7CR also misses a joystick, the backside screen should only have been tilt up/down to be more compact so the camera could be slimmer, the IBIS could be better and the shutter could be better. If Sony makes an A7CR II upgrade without these issues, then I will be one of the first to order one or two of those!
Of course, pixel for pixel, 45 MP will look sharper and show less noise than 60 MP - no surprise at all here for me. The diffraction limit means that we cannot take full advantage of the pixel resolution beyond a certain number of pixels for a sensor of a certain size. To take it to the extreme, a 1000 MP full frame sensor would not resolve any additional detail in comparison to a 100 MP FF sensor. At the pixel level, the 1000 MP image would look terrible, but when displayed at the same size as the 100 MP image, the two would look identical. The same with noise: small pixels gather less light than large pixels and therefore display more noise at the pixel level, but for the perception of noise in the whole image, pixel size is pretty much irrelevant; what is important is the size and light-gathering capacity of the whole sensor. A fair comparison would be to downscale 60 MP to 45 MP or to upscale 45 MP to 60 MP.
Despite looking less sharp at the pixel level, the 60 Mp image may be expected potentially to resolve a bit more detail than 45 Mp image. This bears true at least in one instance here, as noted by another commenter, at 14:01. I could probably spot a few more instances when the finest details of the landscape are seen clearer on 60 Mp image, but nothing too significant.
Read my pinned comment. Some of the images in this video are shot at 2.8 and 3.5; unfortunately it does not all come down to diffraction.
@@MattIrwinPhotography Diffraction does not just happen at narrow apertures. In any image created by a lens, the transition at contrasting edges is not a step, but a curve. If you sample the curve with higher spatial frequency (more pixels), you stretch the X-axis and the curve appears less steep (image less sharp) - there is a small simulation of this after this reply. The steepness of the curve depends also on the lens: some lenses are perfectly sharp for low MP sensors (low sampling frequency), but not as sharp for high MP sensors (high sampling frequency). Regarding this lens, I would wait for the review of Christopher Frost, who always tests lenses on 42 MP FF and 24 MP APS-C (roughly equivalent to 60 MP FF). Finally, in your case, with the distant subjects, there is likely atmospheric disturbance: the more you magnify the image (higher MP), the more you notice it.
Regarding the resampling: if you are against upscaling because it creates new pixels and does who knows what, why not try downscaling? I bet that if you downscale the Sony image to 45 MP, it would look just as sharp and with no more noise than the Nikon image, irrespective of the sampling method. By the way, if you intend to print at the same size, that will also involve resampling, only it's done by the printer and then by our eyes and brain.
Having said all that, I agree absolutely that the Z8 works very well for landscape and overall represents a more compelling package than the a7RV. But if the point of this review was to show that the higher MP Sony sensor, when viewed at the pixel level, produces softer images with more noise than the lower MP Nikon sensor (which, in fact, is also a Sony sensor), then there is no surprise and you don't need to do any tests for that - these are just the laws of physics. It could only be different if the low MP sensor was not at a similar technical level to the high MP sensor. But in this case both sensors are modern and made by the same manufacturer, so their technological levels should be comparable. And I agree, it is remarkable that the high speed of the Z8 sensor does not result in a significant penalty in resolution or DR.
Still, a higher MP sensor in good conditions could potentially resolve more detail. But to appreciate this, it is necessary to compare images of subjects with finer textures and at shorter distances, so the atmosphere is not in play. Since the difference between 45 and 60 is not great, the range of spatial frequencies that are resolvable at 60 MP but not at 45 MP will be rather narrow. Also, Photons to Photos reports that the Z8 has slightly worse dynamic range than the a7RV, especially at intermediate ISO values (which is not tested here), but this too is expected to be only barely noticeable in real life.
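Here is the small simulation referred to in the reply above: the same physical edge blur spans more pixels on a finer-pitch sensor, so per-pixel ("100%") views look softer even though the detail actually recorded is the same or better. Assumptions: an arbitrary Gaussian edge blur of sigma = 4 µm at the sensor plane and the two nominal pixel pitches; the numbers are illustrative only.

import numpy as np
from math import erf, sqrt

SIGMA_UM = 4.0                                   # assumed blur: lens + diffraction + focus

def edge_rise_in_pixels(pitch_um, half_width_um=100.0):
    # Sample a blurred edge (its ESF) at the given photosite pitch, then measure
    # how many pixels the 10%-90% transition spans in the sampled data.
    x = np.arange(-half_width_um, half_width_um, pitch_um)
    esf = np.array([0.5 * (1 + erf(xi / (SIGMA_UM * sqrt(2)))) for xi in x])
    x10 = np.interp(0.10, esf, x)
    x90 = np.interp(0.90, esf, x)
    return (x90 - x10) / pitch_um

for name, pitch in {"A7RV (3.76 um pitch)": 3.76, "Z8 (4.35 um pitch)": 4.35}.items():
    print(f"{name}: 10-90% edge rise ~{edge_rise_in_pixels(pitch):.1f} px")
# The identical edge covers more pixels on the finer-pitch sensor, which is why
# it looks "softer" in a pixel-for-pixel view.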
@@AVerkhovsky Regarding diffraction, I talk about it being a range; I never said it was binary - on/off. My understanding is that it is always 'there', but becomes more evident as apertures change, and evident sooner as pixel density increases. Go back and listen to those sections again.
As for scaling, upscaling and downscaling invent pixels. Not the intention of this video. And people can tell me I am wrong. Sadly they missed what the video was about. And they are wrong.
Intention - camera A, camera B, same lens, what are the pixels telling us. That is it.
Pixel baseline.
And we can all take these files from the baseline and create from there, with all the computer tools at our fingertips.
Scaling - all sorts, brands and algorithms of scaling can give us all sorts of outcomes. And in the AI world, cameras like the Z8 and A7RV with a 2x AI scale give us huge files: roughly 19,000 pixels wide for the Sony and 16,500 for the Nikon, more than 99.999% of us will ever need.
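For anyone checking those 2x-upscale figures, the arithmetic from nominal pixel dimensions (assumed: A7RV 9504 x 6336, Z8 8256 x 5504) works out as follows; whether any given AI upscaler preserves real detail at those sizes is a separate question.

# Output dimensions after a 2x (linear) upscale of each file.
for name, (w, h) in {"A7RV": (9504, 6336), "Z8": (8256, 5504)}.items():
    w2, h2 = 2 * w, 2 * h
    print(f"{name}: {w2} x {h2} px  (~{w2 * h2 / 1e6:.0f} MP)")
# A7RV -> 19008 x 12672 (~241 MP); Z8 -> 16512 x 11008 (~182 MP)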
@@MattIrwinPhotography Ok, this was your intention. But what is the conclusion? What are the pixels telling us? One conclusion is that the Z8 produces very nice landscape images. With this I cannot agree more. But you also make a significant point that the 60 MP image at the pixel level looks softer and noisier than the 45 MP image. The inference is that the 60 MP sensor is somehow not as good. Surprise, surprise! - the lower-MP sensor is better than the high-MP sensor even for landscape. This is what I argue against - what you see as perceived softness and noise in your comparison is not the fault of the sensor, but the expected and unavoidable consequence of physical laws. Does it mean that we don't ever need higher-MP sensors? No, it doesn't. You simply did not test the circumstances where the advantages of more MP may show up. Speaking of AI upscaling, it cannot always replace actual resolution - try to upscale a cropped image of a small bird with AI and you will be amazed at what artefactual plumage detail the AI invents that never existed in reality. People who photograph birds appreciate the 60 MP of the A7RV even in comparison to the 50 MP of the a1, although the fast sensor of the a1, as well as that of the Z8, is of course preferable if there is any action.
Both use Sony sensor tech, so the only difference is resolution. ❤
Would have liked to have seen EVF recording for macro use and the AF accuracy. Of course the more MP, the bigger the subject looks in the same exact photo. On paper the Z8 is much more bang for the buck, and both are at the same price point whether on sale or not. But I do heavy cropping for wildlife and it really matters that the photo is pin-point sharp or it will fall apart. Also low-light performance at distance matters. I can get sharp photos with the D810 + 200-500mm I'm still using if close enough with good light, which isn't always the case. I'll probably pick up a D850 or Z8 to use with my F-mount glass and, if all good (Z8-wise: 80% or better in focus and not jumping off subject), add an expensive Nikon long telephoto. If it's not what I expect, on to trials of another brand = Sony or Canon. Got to give it to Nikon with their pricing and wildlife lens line-up = KILLER !!!
I'm looking forward to the new version of the Z9, Matt. Will it be reduced in MP, down from 45 to 24? As a press photographer for Getty Images, I don't need 45 MP, although I do use the Z9. A smaller MP count, down to say 24 or even 30, would be sufficient for the news and events that I shoot. What are your thoughts on the new Z9? What will it be called? Rumors say Z9H. I'm also an NPS member, so we get first crack when the new models come out. Dave
Nice comparison, lots of interesting stuff. I'd beware of the 'diffraction calculators' on the web, because they are all bogus. They are plagiarised from Cambridge in Colour's, which has a fundamental error. (One good way of spotting plagiarism is tracking the propagation of mistakes). The error is to do with how pixel size and diffraction interact, and an assumption that bigger pixels somehow mask diffraction - they don't. The misleading nature of this model goes through the video, I'm afraid. It doesn't make a whole load of sense to compare cameras with different pixel count at 100%, because you're comparing at different magnifications. The Z8 looks sharper and less noisy simply because you're looking at a patch of a smaller photograph - and therefore you're less deep into the lens' MTF curve and looking at lower noise frequencies. The 'amplification' argument is also wrong, I'm afraid. A smaller pixel doesn't require any more amplification (it has a higher intrinsic conversion gain) and neither does amplification cause the noise that you see. The real problem with this is that cameras nowadays are so good that the differences easily get lost in methodological errors unless your experimental design is very very good.
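As a rough back-of-envelope on the pixel-size/diffraction point (my own numbers, not Bob's calculation): the Airy disk has a fixed physical size for a given aperture, independent of the sensor behind it; a denser sensor simply samples the same blur with more pixels.

```python
# Back-of-envelope only: Airy disk first-minimum diameter = 2.44 * wavelength * f-number.
WAVELENGTH_UM = 0.55          # green light, ~550 nm
F_NUMBER = 11                 # aperture discussed elsewhere in this thread

airy_um = 2.44 * WAVELENGTH_UM * F_NUMBER   # ~14.8 microns at f/11

for name, pitch_um in [("45 MP full frame", 4.35), ("61 MP full frame", 3.76)]:
    print(f"{name}: Airy disk of {airy_um:.1f} um covers ~{airy_um / pitch_um:.1f} pixels")
```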
Thank you, my thought too. I see these assumptions and mistakes in lots of sensor comparisons.
Hi Bob, my intention here was to show the pixels. Not to show sampled pixels. Sampled pixels are changed by whatever method the software application - LR, Capture One etc - chooses to resample images. This sampling process will attempt to deliver an improved image, as we can see for example in Photoshop, where you can choose different types of resampling, which apply different rules of sampling, for example 'Preserve Detail' or 'Preserve Detail 2.0' (an improved algorithm, or just different), which attempts to keep detail that might be lost by the sampling process, thus introducing sharpening or contrast, or both. If you want to turn 2 pixels into 3, or 5 pixels into 6, the software has to interpolate. This is not the image your camera created anymore; this is an image the software generated and resampled. Once any sampling is done, this is not the image out of your camera. Thus it is not showing us how the cameras performed. It is showing us how the cameras performed with post processing, and that is not the intent of this video. This approach is not misleading at all in showing how the pixels, as supplied by the cameras and presented by the same software, are behaving. Your suggestion is to resample the images. That is no longer what the camera/s created. It is altered. To repeat: to resample is not the intent of this video. And if you watched the whole video you would have seen a further intent is to print large at a later date.
In the real world, the vast majority of images are never printed, and an even smaller subset are printed large. Thus the majority of images created today, even with high-resolution cameras, are viewed digitally. Thus understanding how a pixel-for-pixel view performs is completely valid. And both of these cameras provide more than enough detail and resolution for digital viewing, be it YouTube, Facebook, websites, Instagram, Flickr etc. Regards Matt
@@MattIrwinPhotography Hi Matt. I still think that you got it wrong. To start with what you said at the end of the post, 'in the real world...': in the real world photos are practically never ever viewed at native resolution, which is why pixel peeping is unrealistic. If you're printing, the output resolution is dictated by the printer and the paper size. On a screen, by the pixel grid of that screen, which rather discounts your argument about viewing digitally. In real life you never ever look at an image that hasn't been resampled somewhere. The resampling happened in the printer driver or the screen driver software - unless you're a critical user and decided to take control yourself, which is why you get the options in the software. Then, even if you look at an image at 100%, you're still looking at resampled pixels, because the sensor is a Bayer array. What you see as a 'pixel' is reconstructed from surrounding pixels, and there are lots of demosaicking algorithms, with different effects on noise and sharpness. I could make a comparison like this and convince myself that my Z6 is 'sharper' and 'less noisy' than my Z9. IRL the Z9 is visibly sharper with the same lens viewed at the same size, even on a computer screen. So, sorry, in the end this mistake makes it really hard to draw any valid conclusions when the differences are so small. I said experiment design was hard :-).
@@BobN54 Cheers Bob, you think I am wrong, cool. Whereas I am happy to say both approaches have their place.
As for the Bayer array, it is about colour. And pixels are not created/invented at this stage; colour is determined by the Bayer array.
Regardless, everyday users, to the best of my understanding, have no capacity to change the information recorded in the RAW file prior to it arriving in their software. Thus I find this point moot. Be it Photoshop, LR or C1, from there the RAW is interpreted, and we are presented with pixels and their RGB values.
As for sampling, of course we sample. I never said we don't. But this test clearly started as camera A vs camera B. And to my mind the cleanest, clearest way to understand what that means is to look at the pixels supplied by the RAW file. Every user will start with the RAW and manipulate it from that point in their own way; we are all different, and this might mean thousands if not millions of potential permutations of one image, including scaling methodology.
I was interested in finding out: what do these cameras do, what do their pixels do, when confronted with the same lens?
That is the question I was looking at. Sure, resample them to make them the 'same'/similar. But you have just moved one major step away from the pixels supplied by the camera, and the power of software is now impacting your results. This was not my intention; my intention was to have as little intervention from third-party software as possible.
I did not ask the question of whether I could enlarge or reduce by 5-15% to change the outcome.
What I do know is, with both images 2x scaled with AI, you will have a roughly 19,000-pixel-wide Sony image and a roughly 16,500-pixel-wide Nikon image, and both images at that point are more than 99.999% of people will ever need.
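For reference, the arithmetic behind those figures, using the published native widths and the 2x AI scale mentioned in the video:

```python
# Published native widths, doubled by the 2x AI upscale mentioned in the video.
a7rv_width = 9504    # Sony a7RV native width in pixels
z8_width = 8256      # Nikon Z8 native width in pixels

print(f"Sony at 2x:  {a7rv_width * 2} px wide (~19,000)")
print(f"Nikon at 2x: {z8_width * 2} px wide (~16,500)")
```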
Now if you want to do this a different way and get a different result because you scaled, great. It does not change what I found without scaling.
Further, you may be at my channel for the first time, I don't know. A little history on my printing. I have sold my artworks for over 30 years. First, hand printing in the darkroom: over 100,000 prints over a 10-11 year period. I moved to printing with machines like a Lambda for a few years, then in 2005 I purchased an Epson 7800 24-inch printer; not long after I upgraded to the 9800 (44-inch printer) as demand for 4/5-foot prints grew. Within another year or two I had purchased the 11880, a 64-inch printer, which allowed me to print 64 inches on the short side; I was creating artworks that went beyond 7, 8, even 9 feet on the long edge. I ran my own galleries, selling my work for close to 25 years. Creating the highest possible quality outcome was critical to me, and thus a great deal of time was put into establishing the most effective ways of enlarging files from 8, 12, 16 and 24 MP cameras. And we did not have the software then that we have today. Running alongside this was offset printing for limited edition prints, greeting cards, calendars and coffee table books - much less forgiving than a large giclée print, with files needing to be 300 DPI, otherwise quality loss was evident in the print. Again this was in an era of 8, 12, 16, 24 and 36 MP, and every method under the sun was looked at and experimented with to ensure the best quality was maintained.
I only tell you this as I have a deep interest in large, high-quality printing, which I have pursued my entire career, and I have produced millions of printed works across books, cards, calendars, posters and originals. I currently own the Epson 20000 64-inch printer.
@@MattIrwinPhotography Thanks for giving my post the dignity of such a long, detailed answer. I clearly wouldn't argue with your experience, but I think that there are one or two things that are just not technically well founded. You say that Bayer is only about colour. That's simply not right. Think about it. At each pixel location there is information on just one colour channel. When the file is processed, the other two at that location have to be interpolated. Even that's over-simplistic, since the colour space you're converting to is not the camera's native space but a standardised one - and all the channels in that space will be interpolated to one degree or another. So, if we concentrate on the lightness (which is, according to your take, native), then in only 50% of the pixels will that be predominantly from the local ('G') pixel. In the other 50% it will be predominantly interpolated from neighbouring pixels. And we can see this if we compare results from different interpolation algorithms - which can produce quite different results with respect to sharpness and noise. So when you say 'And to my mind the cleanest, clearest way to understand what that means is to look at the pixels supplied by the RAW file', that's not what you're doing. You're looking at a heavily processed image which is derived from the data in the pixels and happens to have the same pixel count, but is not the same. You can't 'look' at a raw file. It's a latent image, just as in an undeveloped film. You can look at crude visualisations, where the data from each pixel is coloured according to the channel it represents, but that's not the raw file, it's a visualisation.
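To illustrate the point, here is a toy bilinear demosaic, sketched only as an example (it is not what Lightroom or Capture One actually do): each Bayer site records one channel, and the missing two channels at every site are interpolated from neighbours, so even a '100%' view is reconstructed data.

```python
# A toy RGGB bilinear demosaic - illustrative only, not any raw converter's real algorithm.
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for an RGGB pattern: R at (even, even), B at (odd, odd), G elsewhere."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = (yy % 2 == 0) & (xx % 2 == 0)
    b = (yy % 2 == 1) & (xx % 2 == 1)
    g = ~(r | b)
    return r, g, b

def bilinear_demosaic(raw):
    """Fill each colour plane by averaging the nearest recorded samples of that colour."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    for c, mask in enumerate(bayer_masks(h, w)):
        plane = np.where(mask, raw, 0.0)
        weight = mask.astype(float)
        acc = np.zeros_like(plane)
        cnt = np.zeros_like(weight)
        # average over the 3x3 neighbourhood of recorded sites (edges wrap - crude but short)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                cnt += np.roll(np.roll(weight, dy, axis=0), dx, axis=1)
        out[..., c] = acc / np.maximum(cnt, 1e-9)
    return out

raw = np.random.rand(8, 8)        # stand-in for an 8x8 patch of raw sensor data
rgb = bilinear_demosaic(raw)
print(rgb.shape)                  # (8, 8, 3): three channels per site, two of them interpolated
```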
Given that you're familiar with film, you would know that any comparison of films which involved comparing different sized prints would not be considered very sensible. Imagine that the test method involved adjusting the scale of enlargement between films so the grain rendered in the print was constant size. You'd compare an 8x12 on 800 ISO film with a 20x16 on 125 ISO, look at them through a loupe and decide that the 800 ISO was sharper. Clearly not sensible.
I don't own as big a printer as you do but I'm not without experience in the subject. I've been taking photographs for over 60 years. I'm a retired professor with a specialisation in sensing systems, so I do know about sampling theory, and that you can't learn anything from a single sample. I also write a regular column for a major photographic magazine on the technical side of photography.
From my point of view, I would say that the important thing is to clearly decide on what your objectives are. Your test is fine if what you want to learn is what looks best viewed 100%, but in terms of what's likely to yield the best looking images in real-world usage, not so useful, I would say.
Still, thank you for giving the time for a detailed discussion - it's appreciated.
Shooting through that much atmosphere in a coastal city, I'm surprised you got any kind of sharpness.
Not all shots over water. Not all shots over distance. Large bay, not ocean, about a mile or more away. Not humid. Mostly not windy. Lovely clear spring day.
Agree
@@ourtvchannel What country are you in? Traveling from the Southern to the Northern hemisphere, the light is very different.
Ok but the more valid test is to compare the images at the same size, not both at 100%. The higher pixel density will look worse pixel for pixel but SHOULD look better once resized down to the same size (or the lower pixel density sensor image resized up).
Yup, this comparison isn't done how it should be, and most people in the comments aren't technically knowledgeable enough to understand that.
Any time you interpolate above 100% you are losing quality and changing the collected data. I was interested in showing how the image is rendered across all metrics, and in looking at non-upsampled images. As I stated, the next test is printing at 3-6 feet, to see if a difference is noticeable. Both sensors will be printed at the same size.
@@MattIrwinPhotography Downscaling the A7RV image or upscaling the Z8 image would still have been a fairer comparison (and preferably both on no-compromises uncompressed RAW). If specifically comparing sharpness, it is also worth using a different aperture as, like you mentioned, it'll be diffraction-limited for resolution at f11. A sharp lens at its optimal (for sharpness) aperture would be pretty noticeably better at 60 MP than 45 MP, but in real-world use with the majority of lenses there'd generally be small differences in results. Both are superb bodies; I'm mostly Sony at the moment, but I really like where Nikon are now positioned, especially for wildlife shooters. Being able to adapt Sony lenses to Nikon Z is rather nice too...
@@MattIrwinPhotography It doesn't really matter whether you upscale the Z8 or downscale the RV. But for a valid comparison you have to sample the same sensor area from each camera and output it at the same field of view. Otherwise you are sampling a smaller portion of the higher-res sensor (less photon-collecting real estate) and putting it at a signal-to-noise disadvantage. Would you compare the Z8 at 100% to a Zf at 100% without resizing the Z8 down to the same FOV (or the Zf up) and make any meaningful conclusions?
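A toy illustration of that 'photon-collecting real estate' point, assuming shot noise only and a simple 4:1 binning for clarity (the real pixel-count ratio here is only ~1.33:1): per-pixel noise is worse for smaller pixels, but binning them back to a common output grid recovers essentially the same signal-to-noise, because the same total light was collected.

```python
# Shot-noise-only toy model; 4:1 binning chosen for simplicity (the real ratio is ~1.33:1).
import numpy as np

rng = np.random.default_rng(42)

photons_per_area = 4000      # assumed mean photons falling on one big-pixel-sized patch
n_patches = 200_000

big = rng.poisson(photons_per_area, size=n_patches)                 # one big pixel per patch
small = rng.poisson(photons_per_area / 4, size=(n_patches, 4))      # same patch split over 4 pixels
small_binned = small.sum(axis=1)                                    # resampled back to the big grid

print(f"per-pixel SNR, big pixels:   {big.mean() / big.std():.1f}")
print(f"per-pixel SNR, small pixels: {small.mean() / small.std():.1f}")                # clearly lower
print(f"SNR after binning 4 small:   {small_binned.mean() / small_binned.std():.1f}")  # ~matches big
```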
@@AlistairOwensgooglePlus and @timwhite1111
When you say 'fair', I completely disagree: using third-party software to resample the image is changing the test. It becomes a new test, a test I was not conducting. Please consider my pinned comment at the top; consider that what you want me to have done, and what I have done, are both valid, but different. As for 'Otherwise you are sampling a smaller portion of higher rez sensor' - exactly, and understanding that impact is part of the reason for making this video. What does a smaller-photo-site camera do when faced with the same lens, when we look at un-resampled images? Once we start sampling (resizing), scaling, the permutations of outcomes are multitudinous, and they do not show us how each camera's photo sites performed.
Great video!
Sony's do-it-all camera, as you well know, is the A1 and not the A7RV. Having owned both (photos only), I think the A1 is a better camera, but the Z8 is stunning value for money.
LOL Nick, what ... I mean, LOL, but this is not about the A1.
@@MattIrwinPhotography I was referring to you saying Nikon had a do-it-all camera but Sony cameras were more niche. I hope you make more non-Nikon videos as you are one of the best on YouTube :)
Sure, but the A1, via its price, limits its market more than, say, the Z8. Sheer price makes for a smaller potential market, and a smaller potential market equates, on a 'niche' scale, to it being more niche. B&H now: Z8 $3,496, A1 $6,498 - a substantial difference for cameras that are very close.
@@MattIrwinPhotography I totally agree on the price; I think both the Z8 and the Canon R5 Mark II are far better value than what Sony offer. I'm a full-time professional and heavily invested in Sony, so I will not be changing. However, for anyone asking for advice when starting out, I now recommend Nikon. The Z8 is such incredible value for money, as is the Z6III. It will be interesting to see what the Sony A7V looks like, as I believe that range is in serious need of an update - especially the readout speed, which is incredibly slow. It would be nice to think it will have a stacked or partially stacked sensor, as I think it's ridiculous that Sony's only stacked-sensor cameras are 6K and above.
The A1 II is going to be exciting. Seriously, there are no bad cameras today. We are so spoilt for cameras and lenses.
I think the biker at the 4:00 mark was flipping you off and therefore me by proxy.
I thought that too, and actually I think he is taking a selfie LOL
In 2024, both bodies have great sensors, and IQ is, not surprisingly, world class on both bodies and is the least important discriminator. And I, like some others in the comments, don't see the differences you imagine you're seeing. This methodology is problematic -- far too few samples to be statistically significant, not using "blind" comparison (which allows biases to enter into the scoring), differences in jpeg processing (for many use cases, raw comparison is more apt than jpeg for comparing IQ) and other issues which make this IQ comparison worthless.
The differences between these bodies will not be IQ in general, but issues specific to use cases. For example, the Nikon weighs 200 grams more than the Sony; the lens ecosystems are distinct; plus there are specific features and use cases that make one or the other more suitable. And, of course, if you already own lenses for one or the other and are accustomed to the ergonomics and menus, that will often be a decisive factor. Personally, for my usage, the A7R5 is a dream camera. That said, the Z8 and the Nikon ecosystem are world class and can be expected to excel compared to the A7R5 in many use cases.
? I did not use jpeg. This is not all the images; a 1-hour video would have shown the same outcome with this very specific use case. As I stated clearly, if you watched the whole video, the outcome and assertions are only specific to this use case, as I tested nothing else.
"This methodology is problematic -- far too few samples to be statistically significant,"
How many did he have to try? 200? 😂
It’s a clear win for Nikon 🥇😅
The Z8 definitely looked sharper on my 27” 4K LG monitor, while the a7RV image was noticeably larger (closer). I'd have expected the higher-pixel sensor to be sharper, but in this case the lower-pixel Nikon appears sharper. I'm a Sony shooter.
Thanks for an interesting comparison.
Interesting. I had a similar experience in a similar comparison but unlike this video I wasn't using the "same" lens.
I have a Z8, and fitted with the 20mm S prime at f/8 alongside my 102 MP medium format Fujifilm GFX with the 20-35mm set at around 30mm (about the same field of view as 20mm on full frame) and stopped down to the rough equivalent aperture (about one f-number different), the Nikon image, although half the resolution, was easily as sharp as the big Fuji. I was really into pixel peeping and I couldn't tell much difference.
The advantage of the medium format sensor is dynamic range, which is crazy on the Fuji - I never need to exposure bracket on it. But I concluded that, for sharpness, chasing megapixel resolution was just a wild goose chase. This confirms that the glass makes the image sharp.
The Z8 is a phenomenally capable camera with a good lens on it.
Smaller pixels will always show you more "fuzziness" due to diffraction; larger pixels do not resolve such fine detail, but it's there (assuming equal conditions of course, and sensors of the same generation)
So cropping isn't important?
Who said that? Did you get to the APS-C section?
With the Z6III having such poor dynamic range - nothing better than the Z50, according to PetaPixel - I am thinking about picking up the small Z50III for light wildlife video and some street. But I have seen nothing on its weather resistance. Any tests or coverage in that area?
The truth is that 60 MP vs 45.7 MP is completely irrelevant. It takes at least twice the number of pixels to see a notable difference in print at a given size. This is why the upcoming high-res Nikon will most probably be at least 80 MP.
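The arithmetic behind that claim, using the nominal pixel counts: linear resolution scales with the square root of the pixel count.

```python
from math import sqrt

print(f"60 vs 45.7 MP: {sqrt(60 / 45.7) - 1:.0%} more linear resolution")   # ~15%
print(f"2x the pixels: {sqrt(2) - 1:.0%} more linear resolution")           # ~41%
```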
The Nikon is a much better deal than the Sony. And this is even more visible if you compare it to its true competitor, the a1.
The Nikon Z8 is at least 2k more expensive than the A7RV where I'm from, so it differs from country to country.
Hi Matt, I'm disturbed by what I hear and see here.
A lens projects an image on the sensor. This image has some degree of definition. When you stop down, this degree of definition may be reduced by diffraction. It is not related to the sensor, it is a property of the projected image.
When you compare 2 pictures from different sensors, you have to compare them at the same final size, not at 100%. In this case, the Z8 picture should be at 133%. If the lens is absolutely the same, you are likely to notice the same degradation at the same f stop.
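For reference, the two ratios in play here (my own arithmetic from the published sensor specs, not from the comment above): ~1.33 is the megapixel ratio, while matching on-screen display size is a question of linear pixel dimensions, which needs a smaller zoom factor.

```python
a7rv_mp, z8_mp = 61.0, 45.7
a7rv_width, z8_width = 9504, 8256

print(f"megapixel ratio:    {a7rv_mp / z8_mp:.2f}x")       # ~1.33
print(f"linear width ratio: {a7rv_width / z8_width:.2f}x") # ~1.15 zoom matches on-screen sizes
```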
It was 100% for each sensor relative to that sensor.
@@MattIrwinPhotography Yes, and he's saying that to compare the images from the two cameras you should zoom the Nikon to 133%.
@@lsaideOK at 133% each pixel of the sensor is not being represented by a pixel of the screen. It is not a pixel-for-pixel representation. It is not 1 to 1. This means interpolation will need to occur. We don't want that.
@@MattIrwinPhotography What we want as photographers is a final image. Unless you print the picture at 1 to 1 pixel, in which case the one coming from the Sony will be 33% larger, there will be some interpolation.
Er. What? The test is looking at lens sharpness and sensors. To interpolate pixels for a test using a screen as the viewing medium makes no sense. We will interpolate when we print.