Your careful testing is much appreciated. Thanks
Thank you for continuing to talk about the R5… it seems so many YouTubers are already on to the Next Hot Thing, when the R5 was a serious investment for me.
My intention is to continue with the R5 until I've covered as many of its features and functions as possible. That said, I'll be interspersing R5 tips and tricks with more content for the R5C, as well as some other things, so the pace will be slowing down some going forward. But I'm not walking away from the R5 until I'm done with it.
THE BEST R5 AF explanation video on YouTube!! Subscribed!
Thank you for the informative and useful information.
Thanks a lot! Just bought my R5.
It's clear to me now!
Nice video!
That was super helpful in understanding the AF modes. Thanks!
Thanks for this. I have an R5. I am new to auto focusing. I admit I find it very confusing. I don't fully understand how the camera will select the focal plane using all the different auto focus methods. Maybe I just need to experiment more with each mode to get a better feel for it.
For what I call the simple methods (spot, 1 point, & expanded), the camera just focuses on what's under the point.
The zone and tracking methods are where things get complicated enough that it's worth spending some time experimenting to get a feel for what the camera will do. So yes, I think experimenting is time well spent, and I'd encourage doing that.
Thanks for the video! On crop mode vs. FF mode for subject detection, I wish you had said more. I think the camera sees the same image in live view in both situations but only uses the cropped part for AF in crop mode. Yes, you get a magnified view, but since the lens is the same, you don't get better info for subject detection. Hence, I don't see how it can reliably focus better. Please tell me what I am missing. Thanks.
This is something I'd love to dig into at a more technical level, but I haven't quite figured out how to do the testing.
So this is my speculation, but here's what I think the camera is doing. When you set the camera to 1.6x crop mode, it windows in on the sensor and only reads out the 1.6x area. Canon's commercial sensors do this, and they talk about using multiple simultaneous windows in the literature. In fact, it's fairly common in commercially available image sensors from most manufacturers. The less data you have to process, the less power you use, and the higher the frame rate can be for the same amount of compute time.
As an aside, this is likely also how the camera does the magnified views: window in on the area defined by the zoom box and return only the sensor data from that area.
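To make the windowing idea concrete, here's a rough back-of-the-envelope sketch. The 8192x5464 grid matches the R5's published photosite count, but the centered-window behavior is my speculation, not anything Canon documents:

```python
# Speculative sketch: a 1.6x crop mode modeled as a centered readout window.
# The windowing behavior is an assumption, not documented by Canon.

def crop_window(full_w, full_h, crop_factor):
    """Return a centered readout window (x, y, w, h) for a given crop factor."""
    w = round(full_w / crop_factor)
    h = round(full_h / crop_factor)
    return (full_w - w) // 2, (full_h - h) // 2, w, h

x, y, w, h = crop_window(8192, 5464, 1.6)
fraction = (w * h) / (8192 * 5464)
print(w, h)                # 5120 3415
print(round(fraction, 4))  # 0.3906 -> only ~39% of the photosites get read out
```

That ~39% is just 1/1.6 squared, which is where the potential power and frame-rate savings would come from.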
What I don't know is what the live view resolution is in either mode. I know the camera is line skipping, but I don't know if it's skipping down to something like 4K resolution, or something smaller. Bear in mind, the EVF is only 1800x1200 pixels, and the rear screen is even less, so the sensor doesn't need to read out any higher than that to support the internal displays. Unfortunately, I don't have a 4K HDMI capture device, let alone one that does uncompressed video, so I'm not sure what the output on the HDMI port is (though it doesn't necessarily have to match the internal displays either). But the live view feed over USB is only 500-and-change lines (the same for the EOS Webcam Utility, which just uses the live view data stream).
We know the camera isn't using the full sensor readout for the live image, because the power consumption is too low. (The R5C, which reads the full sensor area and downsamples, draws noticeably more power.)
@@PointsInFocus Thanks for the detailed reply! Enjoyed reading it. Since I have two R5s here, I guess I can do some testing myself to see what I can learn. I have heard this claim before, but haven't found a solid reason to accept it as true. I'd love it if you can find a way to dig deeper into this topic in a video. I think there are many people who would enjoy that, as this is definitely not something that seems to have been addressed by anyone else using ML cameras, at least that I'm aware of.
I have some ideas, but I'm not sure how much light they'd actually shed on anything. Plus I don't have the tools in place to do any testing right now; actually, there are several projects sitting on hold because I have to build test rigs and tools to do them correctly. Well, that and a long list of stuff that I need to dig into ahead of that for my own shooting needs.
So yeah. I wish I had more time and mental capacity in a day to get more done.
@@RogerZoul From a software perspective, it would make sense for efficiency reasons to only scan through the data you're intending to capture: if there's less of that data, you can process it faster, so you can run your detection algorithms faster.
No evidence for this whatsoever, just my two cents as a programmer. You almost always want to filter your data before processing it.
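As a toy illustration of that filter-first point (the "detector" below is a dummy stand-in whose cost is just pixels scanned, not anything Canon actually runs): cropping before detection cuts the per-frame work to the window's share of the frame, which is compute you could spend running detection more often.

```python
# Toy illustration of filtering data before processing it. The detector
# is a stand-in; none of this reflects Canon's actual implementation.

def detect_subjects(pixels):
    """Dummy detector: returns how many pixels it had to visit."""
    return sum(len(row) for row in pixels)

full_frame = [[0] * 800 for _ in range(600)]          # stand-in live view frame
crop = [row[150:650] for row in full_frame[112:487]]  # centered 500x375 (1.6x) window

full_cost = detect_subjects(full_frame)  # 480000 pixel visits
crop_cost = detect_subjects(crop)        # 187500 pixel visits
print(crop_cost / full_cost)             # 0.390625, the crop's share of the frame area
```

Same 1/1.6² ratio as the sensor area, just applied to processing cost instead of readout.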
Just a question on what you described for the face + tracking method. It's dependent on what's set on AF menu page 5 -> Initial Servo AF pt. Am I right to say that it must be set to Auto?
Yes, that's correct.
Edit: to be clear, that's for when you're in servo. In one shot that setting doesn't apply.
Thank you Jason. Can you tell me why the face + tracking method doesn't have a starting AF point? That way you could tell it what to track, and then touch and drag would never be needed. You could simply focus and recompose and leave the tracking computer to reposition the AF point for you.
Short answer, because Canon.
Actually it can, if you're in servo AF mode and you set the initial AF point to something other than auto.
In this specific case, I think the problem is that the implementation stems from DSLRs, and in a DSLR, you always have to use the rear screen so just touching to select something makes sense. With the move to mirrorless and an EVF, nobody seems to have gone back to properly evaluate how things should work in both cases. To me at least, touch and drag AF really just feels like a workaround to make the EVF kind of work like the touch screen where you can just touch something to select it.
That said, one of the most frustrating things I've run into since starting this series is the number of places where Canon just kind of stops in the middle of implementing something. Unfortunately, I have no idea why they do that. Ultimately, it may simply come down to a lack of time (in the budget) or, I think more likely, a lack of vision/understanding of how a feature is, should, or could be used.
E.g., the focus bracketing system could easily calculate the number of exposures needed based on a starting and ending focus position (Canon even had a DoF priority mode in early EOS-1D DSLRs that did a similar calculation to select the aperture and focus position), but it doesn't. Likewise, the multiple exposure mode is already 90% of the way to implementing a digital ND (stacking and averaging), but it can only do that for 9 frames, not an arbitrary number, so its use is limited.
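For what it's worth, the exposure-count calculation isn't hard. Here's a hedged sketch using textbook thin-lens depth-of-field math (the 0.03 mm circle of confusion is a standard full-frame value); this is my illustration of the idea, not Canon's algorithm:

```python
# Sketch: count focus-bracket exposures from start/end focus distances by
# stepping so each frame's DoF far limit becomes the next focus distance.
# Thin-lens approximation; illustrative only, not Canon's implementation.

def exposures_needed(near_m, far_m, focal_mm, f_number, coc_mm=0.03):
    """Frames needed to cover near_m..far_m (meters) with overlapping DoF."""
    f = focal_mm / 1000.0                 # focal length, meters
    c = coc_mm / 1000.0                   # circle of confusion, meters
    hyper = f * f / (f_number * c) + f    # hyperfocal distance
    count, d = 0, near_m
    while d < far_m:
        count += 1
        if d >= hyper:                    # past hyperfocal: DoF reaches infinity
            break
        far_limit = d * (hyper - f) / (hyper - d)
        if far_limit <= d:                # degenerate case; can't advance
            break
        d = far_limit                     # next frame focuses at the old far limit
    return count

wide_open = exposures_needed(0.5, 1.0, 100, 2.8)
stopped_down = exposures_needed(0.5, 1.0, 100, 8.0)
print(wide_open, stopped_down)  # stopping down needs far fewer frames
```

The camera already knows the lens, aperture, and focus position, so all the inputs are available; it just doesn't do the math for you.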
@@PointsInFocus Thank you Jason. I did have initial AF point on Auto which I assumed picked between the other two options as it saw fit. Oops!
The two confusingly named options either tie the box's location to the other AF methods (e.g., you switch out of face+tracking to spot and the point stays in the same place) or save it separately from the other methods. Auto just allows the camera to automatically select the subject, based on either subject detection or the older, never-quite-completely-described-by-Canon algorithm.
I did two other videos on this: th-cam.com/video/o5z-qxeOXY0/w-d-xo.html and th-cam.com/video/4eWapvY8DRU/w-d-xo.html. But in retrospect, the first one, on the actual settings, is about as clear as the manual is, so basically muddy water in a cave. Sorry, Canon could have done a way better job here.
I find that the area under consideration is actually bigger than the AF box. My test with Single Point AF seems to show about a 10% border is being detected; perhaps you can do another video on this.
Interesting, I hadn't noticed that in my use of the camera, but I've never gone looking for it specifically either. I'll have to add that to my list of potential things to investigate.
Will you go over the cases and the -2 to +2 adjustments?
The servo cases? They're on my list to do, as soon as I can figure out what exactly is going on under the hood (whether there are differences between the cases that aren't exposed, or the difference is just the slider positions), and how to test and demonstrate that effectively. It's one of those things that I'm interested in, but not quite sure how to test yet.
@@PointsInFocus Right! Yeah, I mess with it and at times can't see one case do better than another. Some of the trouble I have is that I don't want it to jump from one person to the other while I'm holding focus. I have tested focusing on one person while another person walks across the frame.
For that you probably also want to play with the "Switching tracked subjects" setting on the AF4 menu.
How can I change the AF method on my Canon R6? It doesn't let me choose anything other than face tracking or 1 point.
The only thing I can think of off the top of my head is to check the "Limit AF Methods" setting on the AF4 menu page, and make sure that all of the methods you want to use are enabled there.
The only other thing that limits AF methods is when the camera is set to A+ (full automatic shooting). In that case, the camera will always be in face+tracking. But that wouldn't let you switch to 1 point either.
Thank you so much!
Face + Tracking seems like it would be dope for surfers, but it is terrible. Expanded focus seems to work best.
Thanks for sharing that.
I've never had an opportunity to photograph surfers, though it's always looked like a fun day at the beach (pun intended). But it's good to know that face + tracking doesn't really cut it in that situation.