Great video. I've been capping framerates with RTSS for a long time as I don't like big fluctuations; they make for inconsistent gameplay, more heat, etc. when your card runs flat out. And I always thought the higher the frame rate, the better for latency. Like many things in PC gaming (and computers in general), enigma is a pretty good description here. Sucks that we can't have a straight answer and it varies on a per-game basis. Hopefully all Frostbite games work on the same concept, and I will now be creating a .cfg file to cap my framerates instead of RTSS, at least for the BF series.
Don't be so quick to dismiss RTSS... in my experience, in game limiters often really suck. They often cause large variation in resulting frame times & rates, don't allow granular control (often only let you use certain preset values)... hell, sometimes they simply don't even work at all. I dunno how the one in Frostbite is, if indeed it is an engine feature and not one put in by the game devs. The one in Unreal is pretty good... though there are still exceptions. Notably, the implementation in Borderlands 3, where it is pretty bad; it doesn't apply itself from your saved settings on launch - you need to toggle it off and then on again to get it to do anything. Ugh. Due to these types of issues, I tend to just use RTSS, unless I know the game implementation is particularly good. I'm happy to take the latency it causes for the sake of frametime consistency - effectively, the 0.1% lows of input lag, as well as traditional 0.1% lows of render frametimes, because while I can adapt to consistent input lag, I, or anyone else for that matter, cannot adapt to inconsistent input lag - AKA jitter.
Well, the higher the fps, the more recent and up-to-date the frame you see, so I always like getting the most fps I can at 4K ultra settings, near 120 to 144 for my monitor. Of course, I have to adjust my settings for each game.
Uh... sorry but I think you're off the mark here. Capping FPS helps reduce input lag ONLY when the GPU utilization is at 99%; when the GPU utilization is already below that, like 95 or 82 or 66 or whatever, capping FPS WON'T help. So the real issue is the GPU utilization.
Would be nice to see how stable the frametimes were when running uncapped. Because I certainly prefer more stable frametimes over slightly lower input latency
I think the real benefit to capping the FPS is frame timing. You can get a way smoother experience if you cap your FPS and get a consistent frame time, obviously at the cost of input delay.
Idk about that. If you have a mid-range to high-end setup you are going to have consistent frame timing regardless. When I play older games such as Call of Duty: Black Ops on my RTX 3060 setup, the "smoothness" of the game is significantly reduced. Capping the game a bit above my refresh rate makes it smoother, while staying at my refresh rate or below makes it worse. Also, when capping frames your PC isn't utilizing all the available horsepower and background applications still take up some of those resources, so it's better to push it so all the resources go into the game. In addition, for most games there is nothing wrong with just ignoring frame rate caps, unless the game is very demanding or there are engine issues or unpleasant artifacts.
The thing about frame caps for esports players is that by setting a frame cap, they are getting reliable input lag. If you have it on unlimited, then your input lag will be lower in high-fps scenarios but suffer in random dips. For the sake of consistency, they put a cap on in order to remove the variable of fluctuating input latency.
@@xpodx I don't think anyone is saying that if you can render a game at an average of 500fps but your monitor only supports 240Hz you should cap at 240fps. Implementing a cap somewhere between 450-480fps would reduce latency while also keeping a low, consistent frame time and reducing 1% lows due to more headroom.
@@xpodx Technically yes, but you're accepting worse 1% lows, more likely hitching, and worse input latency due to maxing out your system all the time, for 20fps. 480fps vs 500fps means literally a 0.083ms difference in frame times (2.083ms vs 2.000ms), which is imperceptible. If you can see/feel a 0.000083 second difference you're not a human lol.
The thing is:
- The game runs at a certain physics rate
- The monitor has a fixed refresh rate
- GPU fps is often variable
- The game needs to pipeline all that shizzle
If your capped FPS is divisible by the physics rate, you will get the lowest input lag while also keeping tearing as low as possible. This theory is based on the fact that monitors used to run at 60Hz for such a long time, and it works for many games. Let me give you an example: Rocket League physics run at 120Hz. If you own a 144Hz monitor, set it to 120Hz. Now go to the game settings and cap FPS at 240. If your controller or anything else causes framedrops, try 241. There you go: low input lag, less tearing. No G-Sync or anything like that needed.
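If you want to play with that idea, here is a tiny sketch (plain Python; the function name and numbers are just for illustration, not from any game) that picks the largest cap that is both a multiple of the physics tick rate and within what the GPU can actually sustain:

```python
def pick_fps_cap(physics_hz, sustainable_fps):
    """Largest multiple of the physics tick rate the GPU can sustain.

    Purely illustrative, based on the 'cap divisible by the physics rate'
    idea in the comment above.
    """
    if sustainable_fps < physics_hz:
        return sustainable_fps  # below the tick rate there is no clean multiple; use what you can hold
    return (sustainable_fps // physics_hz) * physics_hz

# Rocket League example from the comment: 120 Hz physics, GPU good for ~250 fps
print(pick_fps_cap(120, 250))  # -> 240
```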
ayy lmao You can try and ask a game developer in the forum of the game you wanna play. Otherwise just try out different caps. In my case, i also have an old iMac with 60Hz, and there i’ve set it to 90Hz (1.5 x 60 ) which works better than 120hz because of the weak gpu.
Storm, make sure you've got these settings:
- Vsync OFF
- Render Quality -> High Quality
- Any kind of detail setting -> Performance
- On the right side everything unchecked, besides High Quality Shaders, Dynamic Shadows and Transparent Goalposts
It might be possible that 250 works with the same or even better results, due to pipelining techniques implemented in the game. In my case a 240 cap works better, but many pros like ScrubKilla and Squishy play on 250.
You'll likely want to drop it a few FPS below the max refresh rate and not cap just to drop the gpu usage. While G-sync will cap it at your refresh rate, it will cause the normal extra latency that V-sync has. However, if it is a few FPS below, it will have much less latency, giving a better overall experience. Blurbusters tested this, and found that when it gets within a few FPS of the refresh rate, it will behave exactly like V-sync does.
Nice finding that the RTSS limiter doesn't seem to give the same benefit as some in-game limiters when it comes to latency. The other results at 96%). Also keep in mind that a good gaming experience is not only dependent on input latency but also on frame time consistency; that's why I would always recommend using a combination of graphics settings and a frame limit so that the system will always be able to put out the desired framerate without big fps spikes (up or down).
Rtss is the best limiter for now. Most accurate to 0.001 fps. Read here (www)blurbusters(com)/gsync/gsync101-input-lag-tests-and-settings/5/ Should be similar to g-sync
Yeah RTSS gives the most consistent frame times. Some in game caps are a bit dodgy. If you're already at 140+ FPS you should have pretty low latency in general
@@Hardwareunboxed RTSS also causes game crashes in DX9 and some DX11 titles, so make sure you watch for them. Examples of this are Guild Wars 2 and Insurgency.
I'd honestly recommend Radeon Chill for capping for Freesync as you get the added benefit of lower temps as your clocks lower to match the intended FPS.
Gears 5 uses UE4. UE4 implements its frame rate limiter (the default way) by adding a delay before processing input in the Game Thread. So the frame looks like: [delay, input, game thread work, render, present]. What's important is the distance from input to present. Without the limiter that distance is also short, but the frame rate is too fast, so frames get queued up in the GPU back-buffers. Some of the above is my guess; the only way to be sure is to have the source code of the game ;) RTSS can add its delay only inside the [render, present] part of the frame, so the input-to-present distance will be longer. Additionally, I think that with a 144 cap there might be some GPU back-buffering. Try RTSS with a 143 FPS limit next time (or refresh rate - 1); it would be interesting to see the results. I have tested Bulletstorm myself on a 60Hz monitor and measured: 94ms with VSync ON; 63ms with VSync ON + RTSS 59 FPS cap; 67ms with VSync ON + RTSS 59.9 FPS cap; 50ms with VSync ON (Fast Sync).
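If it helps to visualise where the two limiters put their wait, here is a toy simulation (plain Python with invented timings; it is not UE4 or RTSS code, just the idea described above):

```python
import time

def frame(limiter, target_s, sim_s=0.002, render_s=0.003):
    """Return the input-to-present distance of one simulated frame, in ms.

    'engine'  : the wait is inserted BEFORE input is sampled (as described for UE4 above)
    'external': the wait is inserted AT present, AFTER input was sampled (RTSS-style)
    """
    spare = max(target_s - sim_s - render_s, 0.0)
    if limiter == "engine":
        time.sleep(spare)                # delay first...
    input_sampled = time.perf_counter()  # ...then sample input
    time.sleep(sim_s + render_s)         # game-thread work + rendering
    if limiter == "external":
        time.sleep(spare)                # input already sampled; this wait becomes latency
    return (time.perf_counter() - input_sampled) * 1000

for mode in ("engine", "external"):
    print(mode, round(frame(mode, target_s=1 / 60), 1), "ms")  # ~5 ms vs ~16.7 ms
```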
i always cap my frame rate, have been for many years. having wild swings in fps doesn't feel as smooth as capping the fps to about what your normal average would be, where the swings in fps are much less.
I've done the same in online shooters since RTCW:ET, as config pros pointed this solution out. Also, Battlenonsense had a different setup, both GPU and CPU. Yes, it could have different effects in different games. But in BFV it really helps: with an i7 3770K and RX 5700 XT at 1440p/high, going from an uncapped 70-144FPS stuttering mess to (relatively) smooth gameplay with a 71FPS cap (in FS). But I guess in my case it has a lot to do with the CPU also being a bottleneck. FRTC doesn't always work; in BFV it has no effect for me. Blurbusters had even more interesting results with VSYNC ON/OFF, etc...
I do the same no matter what game I play; vsync is always on. All the jittering messes me up, and since I've been playing like that forever, I don't think latency was ever a thing for me; probably just used to it. Plus it makes me feel good that my GPU isn't under full stress at all times and keeps cooler while doing it.
We (me and my friends, plus some other old-school gamers from 2004 onwards) have been doing it since Team Fortress Classic, Day of Defeat and pre-1.3 Counter-Strike. With CRT monitors it seemed to do a good job, capping fps. For us it was intuitive: with the fps cap you stop having so much of a dip when things went sideways on our ancient hardware. We just wanted a consistent gaming experience, not something that lasted a few seconds like having 99 fps in the base only to have it drop to almost 60. So we capped at around 70/75, for example.
The RTSS fps limiter adds 1 additional(!) frame of input lag compared to ingame limiters, it always did. That's why you always try to use the latter if available.
I find that in-game limiters are unreliable, also a single frame at a 100% consistent 60fps is 0.27ms, with the number only dropping lower as fps go up, are you sure you know what you're talking about?
@@iliasvelaoras3038 I've been following the input lag discussion for over 3 years now, mainly because of OW and its floaty mouse movement stuff in the past. This discussion is really something else, since it primarily focuses on GPU utilization as a core variable. There are even cases of 2.5+ frames of added input lag (R6 Siege) when using RTSS. Again, I'm not hating on anything here; I have used RTSS myself a lot. While I can't post links without getting spam-filtered on YT, there are people like 'klasbo' on Reddit who did a significant amount of testing in that regard.
So, depending on the game engine used, you may get a different result. Different engines handle the timings/buffering going to the CPU/GPU differently. Recently, with the Windows 10 1903 update, many games on the UE3 engine had their FPS nearly chopped in half, when before the update all was good. Toxikk, for example, was one of the games to take the FPS hit. But a quick update to the part of the engine that controls the timings/buffering fixed the FPS. Overall, across all Windows versions, there was a MASSIVE FPS increase and now the game runs smoother than ever.
Another test you need to perform is capping framerate to something just below the refresh rate of your monitor. Such as capping framerate to 141hz if using a 144hz monitor. This has been something people using g-sync/g-sync compatible monitors have been doing for a while. This will also lessen the input lag difference that happens as a direct result of massively lower FPS compared to an uncapped framerate. Like how you mentioned 60fps has a 33.3ms input lag compared to 200fps at 10ms. What if you compared to 197fps to 200fps?
I don't like a jumping fps number; sometimes you can see the drop even in 80+ fps scenarios. That's why I cap my fps for SP games, while for MP games I keep it uncapped.
RTSS capping adds one frame of delay. That's published by RTSS, I thought. So yeah, as long as you can cap and stay under that one frame with RTSS, you will get a benefit.
You will also benefit in a way that the averages shown here don't actually reveal - there is less jitter. If they also tested for 0.1% lows of input lag, they would find that it is greatly reduced when using RTSS (or similar). This is why I frame cap... not to reduce _average_ input lag, but to reduce _jitter_ (the 1% and 0.1% lows; unexpected hitches, and the like). This is what one should care about, because it's the type of lag that you CANNOT adapt to.
Very interesting Tim! I wonder if a more mainstream GPU with less overhead might benefit more from having free GPU resources (e.g. an RX 580 or GTX 1060). Keep up the great work!
Given that the CPU could be influencing the ubisoft results, I'd say so. A lower end GPU would take a chunk of the load off the CPU and make the testing more GPU bound.
It doesn't matter how powerful the GPU is; there are times it will go too fast and sometimes too slow... that's going to drive you mad when playing a game... I normally cap it at around 45...
I'm talking more in terms of input lag. If that is important to you, then do what I stated, but for me, I prefer to see how high an fps I can get in a game at max settings, especially single-player games.
The more FPS you get, the better the latency will be, AS LONG as you stay BELOW 97% GPU usage. This is why in Far Cry (92% GPU usage) and The Division 2 (97% GPU usage) the latency increased when capping the frames. It ONLY helps in GPU-bound scenarios (98% or higher). If you tested at 1440p or higher (where you are 100% GPU bound), capping the frame rate would massively decrease the input latency, because without the frame cap your latency would be much higher.
Something to maybe also consider, blurbusters (I think) commented that going over the maximum gsync value (i.e. going over 120fps on a 120hz monitor) increases input lag, and you should cap underneath it.
Depends on if you use v-sync or not. Without v-sync it'll be lower but there'll be tearing. With v-sync you get normal v-sync behaviour (double/triple buffering).
I wasn't happy about the Battle(non)sense video because he made it sound like there was some kind of bug. But he didn't know enough about how engines and frame delivery work, so I can totally understand that. I'm not too happy about the title and not even the conclusion of this video. What NVIDIA told you is about the best short explanation they could have given without explaining the entire buffer/queueing system.

As for the explanation, I found this great comment by Yong Yung underneath the Battle(non)sense video and I couldn't write it better myself:

"So the reason you get more input lag when you're GPU bound is this. The game sends a bunch of commands to the GPU, which get executed asynchronously. Then the game calls present(). At this point the driver can force the game to either wait until the frame was completed, or it can just let the game render the next frame, even if the current frame isn't completed yet (leading to better GPU utilization, as there are always more rendering commands in the pipeline). How often the game can call present() while the first frame still hasn't finished rendering without being stopped by the driver is essentially the "pre-rendered frames" setting.

If the driver would always stop the game at present() even if no other frame was in-flight, performance would be terrible because you would potentially give up a huge amount of GPU/CPU asynchronicity. (I hope that's a word.) But stopping the game when a single frame is in-flight usually only incurs a small performance penalty. I guess what low-latency mode is trying to do is guess how long they have to block in present() so that the next present() comes in just as the GPU finishes the previous frame.

Of course if you're CPU bound (or simply not GPU bound through a frame rate limit), none of this really does much, because every time you call present() the previous frame is already done anyway. It's essentially perfect low-latency mode.

What can be done about this? Well, nothing really. Except for low-latency mode or frame rate caps. You could be even more aggressive than the current modes, but that would incur an even bigger performance penalty. And then you have to ask yourself, in (probably competitive) games where people care so much about latency, aren't users playing with low settings anyway to get the most FPS, and are therefore usually CPU bound anyway? It's probably not worth it to provide an even more aggressive mode."

This principle should always be the case and it is, in fact, a large part of why regular VSync has high input lag. Even though the GPU will not be at 100% load, the GPU waits for the VSync signal until it processes the next frame if the back buffer is already full. As NVIDIA said, it can't instantly process the new frame. There are of course other reasons for VSync input lag: the frame that is already done gets delayed by up to one refresh period, and you're essentially capping your framerate lower than what you could achieve.

Some engines have features similar to anti-lag. In fact, I know the UE3-based Rocket League has a setting in the config file called OneFrameThreadLag which works even more aggressively. But that is not the default and I would be surprised if it is default in any engine. There is a good chance that in the titles that you measured the GPU never was in a spot where it is unable to process the next frame. That's why I didn't quite like the conclusion.
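A toy model of the "pre-rendered frames" part of that quote (not driver code; the frame times and the queue rule are invented for illustration) shows how a deeper present() queue turns into extra latency when the GPU is the slow side:

```python
from collections import deque

def input_lag_ms(cpu_ms, gpu_ms, max_queued, frames=200):
    """Average time from a frame's input sample to the GPU finishing that frame.

    max_queued models the 'pre-rendered frames' idea from the quote above:
    how many present() calls the driver lets the CPU run ahead of the GPU.
    """
    gpu_free_at = 0.0
    in_flight = deque()   # GPU completion times of frames already submitted
    cpu_time = 0.0
    lags = []
    for _ in range(frames):
        # driver blocks the CPU in present() while too many frames are in flight
        while len(in_flight) >= max_queued and in_flight[0] > cpu_time:
            cpu_time = in_flight[0]
        while in_flight and in_flight[0] <= cpu_time:
            in_flight.popleft()
        sampled = cpu_time                  # input is read at the start of CPU work
        cpu_time += cpu_ms                  # CPU builds the frame, then calls present()
        start = max(cpu_time, gpu_free_at)  # GPU picks the frame up when it is free
        gpu_free_at = start + gpu_ms
        in_flight.append(gpu_free_at)
        lags.append(gpu_free_at - sampled)
    return sum(lags[50:]) / len(lags[50:])  # ignore warm-up frames

# GPU-bound case (GPU slower than CPU): a deeper queue directly costs latency
print(input_lag_ms(cpu_ms=3, gpu_ms=8, max_queued=3))  # ~24 ms (three GPU frames)
print(input_lag_ms(cpu_ms=3, gpu_ms=8, max_queued=1))  # ~11 ms (one GPU frame + CPU time)
```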
If you tried the test with a very underpowered graphics card at max settings (maybe 4K), I would expect you to be able to force a difference in more games. For example, Far Cry was only at 92% GPU load in the first place. Even in The Division, just from the data I can look at, it doesn't seem a clear-cut case where the GPU would continually be the part that slows everything down. (NVIDIA: "[...]this may not be present when the rendering load fluctuates[...]") Any situation where anti-lag works should be a situation where capping framerates works at least as well.

I know you guys like to test real-world and I respect that, but it may just be that in situations like these, where the theory would predict every game to be affected, this is not the right way. Of course, someone with the exact same system and settings would get your results, but for others the result may not be the same, and giving per-game recommendations based on your data isn't a good idea.

Also, regarding the Fortnite testing: I'm not sure I fully understood what you're testing. Are you testing the building menu or something that gets rendered in 3D? It wouldn't be impossible for the game to render UI completely independently, which would explain the difference in default input lag and could also mean that the effects of capping the framerate may be different.

Edit: Hardware Unboxed on Twitter and here in the comments posted extra data. The conclusion is that the tested games are indeed unaffected. Would be interesting to hear from the developers what steps they have taken to prevent input lag. Far Cry has such bad input lag that it may just force queueing multiple frames regardless of the framerate cap. In that case, they've really just made it worse.
I know that uncapping your fps increases visual updates because you're getting the highest possible fps, but it also increases stuttering and screen tearing, because your monitor's refresh rate can't keep up with the fps and stuttering happens due to the high GPU usage. So going into a graphics-intensive moment will suddenly cause the fps to dip as well. I use RivaTuner, and I think now, after watching this video, I just need to cap at a higher rate. I run my monitor at 100Hz and mostly cap at 108; I might start capping at 144 or 160.
What framerate capping definitely helps with is frame time consistency, especially when in certain areas of the game turning the camera makes the fps jump as certain elements appear or disappear from view. But in this case the goal is to put a cap that is lower than your min fps for like 90% of the game which is quite hard to achieve and may need revising for areas later in the game. For me however it is worth it as I hate fps fluctuations, especially around the 60 fps mark where they are quite noticeable. The improved frame time consistency may even help with input lag as a placebo, at least for me it does probably because the brain can adjust easier to a consistent than variable input lag.
G'day Tim, another great video. Although I don't play fast-paced multiplayer shooters (or pretty much any shooter), I still found the topic really interesting. As for the game testing sample size: like Steve has shown with FPS testing, the more games you test for a result, the more different results you get, with even games that use the same engine getting different results.
*I'm more interested in what amounts to the 1% and 0.1% lows of the input latency* ... not merely the mean (which is what I assume you are using. If you said otherwise, I must have missed it). Any chance you could test that? I use RTSS for capping my framerate in games that have _jitter_ . I am happy to take a reduction in framerate, and some additional input latency - I always expected that my use of RTSS would result in at least some amount of both - if the resultant latency is _consistent_ - because while one can adapt to latency when it is consistent, one _cannot_ adapt to jitter! (jitter is, effectively, inconsistent latency) N.B. I use RTSS even when in-game framerate limiters are available, even though, as you showed, such in-game limiters can introduce less latency than other limiters such as RTSS - presumably due to their ability to introduce pauses internal to the rendering process, rather than only afterwards/externally. So, why do I do this? 1) often the in-game versions kind of suck, with large variance/swings, large input lag, or even sometimes don't work at all - and, critically: 2) RTSS is consistent, always working the same in every game; I can adapt to the (small!) amount of latency it introduces, since it is so consistent. Thanks for reading!
This interaction between hardware and software (the workload on the GPU is software) reminds me of the kerfuffle regarding why The BFS scheduler was created on the Linux platform, which lead to the adjustments to the stock scheduler, CFS. The GPU is like a slave device running all out. Load it down and keep it resource limited (your >95% utilization) and the firmware scheduler prioritizes throughput, not latency. As you found, the results are variable, but strikingly similar. It’s cool to see how these similar situations in computing pop up over the years.
Ya, need to test for the 1% and 0.1% lows for input lag, for the same reason that only showing an average frame rate doesn't tell the full story in regular performance analysis. We're interested in *input jitter* . I will happily take a slightly higher average input lag in exchange for lower variance/jitter.
'Capping' your FPS using V-Sync also stabilizes frame times and can make games feel smoother, but it wasn't tested here either and for good reason. The latency/input lag introduced makes either method objectively worse in multiplayer/competitive gaming and not worth pursuing. The videos you refer to are very interesting, but without showing the penalties to input lag and display latency alongside the frame time changes, it gives the impression that the results of RTSS are all benefit and no penalty, which is simply just not the case.
@@jonathanmitchell9779 I know capping your frame rate using V-sync or RTSS isn't all benefit with no penalty. I'm kind of curious to see an input lag comparison along with a frame time comparison for V-sync vs RTSS. I know RTSS does increase input latency, but I tried it out in a few MP games the other day and couldn't feel a difference.
If you plan to keep using FreeSync, capping will always give you lower input lag compared to uncapped. But if you don't plan to use FreeSync, then uncapping fps all the way IS the minimal input lag. One piece of advice: you need to cap to a specific fps value relative to your LCD's refresh rate to achieve the best result. Please read this: (www)blurbusters(com)/gsync/gsync101-input-lag-tests-and-settings/5/ In principle, FreeSync works similarly to G-Sync.
Been saying this for so long. This combined with Battlenonsense's other videos also proves that when running a GPU heavy game competitively (let's say with a maxfps of 180) you're better off using Gsync/Freesync with your fps capped slightly below your highest refresh (to avoid accidental vsync/turning gsync or freesync off) since it'll result in lower input lag *and* smoother visuals even at higher framerates. Consistency in visuals is worth something, even competitively. Now just wait a year or two more before the pros catch up on this fact and another 3 for the general public to realize it's true. Man this is going to save me some online discussions for sure.
I'm pretty upset Radeon Chill isn't being tested as a frame limiter when it should be the best frame limiter out there according to the AMD Engineer who wrote it
10:20 he literally said Radeon Chill didn't improve latency, so obviously they didn't disregard this. It's always a good idea to watch the entire video mate.
@@humanbeing9079 I rewatched the video you're talking about but there's no data for chill only RTSS & FRTC. I'm not sure if he's assuming FRTC and Chill are the same thing because I don't think they are when you set Chill's min & max to the same number
The location of the bottleneck determines how many frames of input lag you get, your framerate determines how much latency each frame is worth. You want to set the framerate cap just low enough to be sure you're staying CPU limited. This isn't just a game to game issue, but also a system to system issue, as cpu vs gpu combinations matter. RTSS will also reduce input lag in some situations, for example if you're using v-sync, or the game is GPU limited with several pre-rendered frames. My recommendation is to use an in game cap if you have it, either just below the maximum refresh rate of your monitor if you have freesync or gsync, or at your minimum framerate with vsync off, if you don't.
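As a back-of-the-envelope version of that first sentence (numbers picked arbitrarily): the bottleneck sets how many frames sit between input and display, and the framerate sets how many milliseconds each of those frames costs.

```python
def latency_ms(fps, frames_of_lag):
    # frames queued between input and display, times the duration of each frame
    return frames_of_lag * 1000.0 / fps

print(latency_ms(fps=200, frames_of_lag=3))  # GPU limited with a deep queue: 15 ms
print(latency_ms(fps=160, frames_of_lag=1))  # capped back into CPU-limited territory: 6.25 ms
```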
In competitive titles you will find an fps-locking feature. This effect is very clear in CS:GO with the var number in the net graph: if you cap your fps near your max, frametimes will be smoother and var will be lower.
Exactly this. I always use Vsync as I don't play competitive online games. So the testing above was disappointing really. Would it still hold true? I don't know...
There are a few things I think you got wrong. The comparison was GPU at 100% vs GPU at 95%. In most of these tests you already had the GPU at 95%, and then you capped it down to 65%. His finding was about the GPU being maxed out vs capping it just below that, not this.
Would love to see this same testing with adaptive sync. I always cap my frame rate to 2 less than my max refresh rate, using the game engine or RTSS if the option is unavailable. When I leave my fps uncapped, the added latency is immediately noticeable to me (at least in the games I play.) On my old AMD gpus, I would force triple buffering vsync and then cap my fps. That allowed me to use vsync with reduced latency, and really helped my games feel more consistent.
framerate cap is a must have when you are playing with g-sync and v-sync enabled at the same time (at framerates near top on your monitor refresh rate). otherwise you'll have a lot of lag from v-sync engaging. without v-sync you'll have slight tearing near the bottom of the screen, so you kinda need both vsync and cap when playing with g-sync.
The Division 2, made by Ubisoft, has the best input lag in this test, and Fortnite, a competitive online shooter, has worse input lag than the single-player/co-op FC5. You guys are fucking morons.
@@skoopsro7656 Their multiplayer games like The Division and Rainbow 6 are optimized extremely well. It's the single-player ones where they just don't seem to care.
This is similar to what I have been doing for years. I don't have FreeSync or G-Sync and I love the smoothness of a vsynced fps. What I have noticed is that when you use vsync and a frame cap, it reduces the vsync lag significantly. I've been doing this since before in-game frame caps were a thing, so RTSS is what I've always used; it can be hit or miss, but it's so nice when it does work.
Uhhh, a lot of your tests didn't even have the GPU utilized at 98-99%... you can't even make some of the conclusions you stated because your tests were flawed in those cases. Battlenonsense's tests were much better.
I realize adaptive sync wasn't brought up, but from what I recall reading at Blur Busters some years ago, you should either cap your FPS within the adaptive sync range or run a framerate significantly above the supported range because there is a sort of dead zone in between where latency is worse than either option. In practical terms this means you configure something older like CS:GO to run uncapped while most newer games should be capped in an adaptive sync scenario. Even then, I'm unsure of the benefits of running something like CS:GO uncapped on a 240 Hz monitor. I have a feeling capping a 240 (or 237) Hz might be the better option.
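For anyone who just wants the rule-of-thumb number from the Blur Busters guidance referenced around this thread, a trivial helper (the 3 fps margin is their commonly cited suggestion, not a hard rule):

```python
def adaptive_sync_cap(refresh_hz, margin_fps=3):
    """Cap a few fps below the panel's maximum so the framerate stays inside
    the VRR range and never trips V-Sync-style queuing."""
    return refresh_hz - margin_fps

print(adaptive_sync_cap(240))  # -> 237, matching the figure in the comment above
print(adaptive_sync_cap(144))  # -> 141
```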
I'm pretty sure Apex Legends benefits from a 144fps cap. I first noticed the difference when I tried playing on a 1440p monitor; then when I went back to my 240Hz monitor I lowered the cap to 144 and the game felt so much better.
@@adammeek2280 Yes, the engine is limited to around ~180fps by most accounts before you start getting strange issues. Some get away with a bit higher, some a bit lower. If I had a GPU that was powerful enough 1440p @ 144-165hz would be my choice for a monitor.
@@adammeek2280 After seeing Linus's FPS video, I think that 1440p @ 144-165Hz would be my choice basically across the board. The extra resolution I think will give a larger benefit than the extra FPS at that point. Sooner or later we will get 1440p 200fps+ monitors, and that will be about perfect IMHO.
I didn't expect these results, I came in here skeptical about having any meaningful results thinking this might just be a "fluff piece" - but I was wrong!
I’d like to see an Overwatch only test with varying frame rate caps to find which value provides the lowest input lag, or whether it’s dependent on GPU utilization
This was all expected, and I also argued that it was just his "DX11/12" titles, which work the same way design-wise, where an engine frame cap can improve the latency. What I'm more worried about is the general latency increase - seeing something like 60ms in Far Cry at 200fps seems atrocious. I'm part of the generation that started playing games when the average input lag was somewhere in the +10ms zone on a Win98 system, using an AGP port and CRT monitors - while games weren't running much above 100fps... These days aiming in games feels very sluggish, even with the higher Hz and framerates, due to said internal engine lag increases. On Linux with OpenGL/Vulkan the lag seems to be way lower, so I just wonder if it's a performance>gameplay choice in DirectX implementations in general... How much lag does the latest DOOM have? It being a Vulkan game with a 200fps cap.
Rewatch the video, it's pretty flawed. The premise of battlenonsense's video was to test the impact of capping FPS in GPU bound scenarios. This video pretends to verify those results yet a few of the games tested aren't GPU bound to begin with. In addition, Battlenonsense capped games at 95% while tim capped them at a fixed FPS (not a gpu utilization percentage). It should be obvious that capping a game a 40% utilization will not yield input lag benefits. The original hypothesis was that a completely tapped GPU would yield lower input lag if given a little breathing room, not cap the FPS at a ridiculously low number. Why tim did not cap at 95% vs 100% or make sure that games were at 100% utilization is beyond me but these two factors disqualify the results. It needs to be completely redone.
@@giglioflex Additionally, he only tested the average (mean) latency. I'm much more interested in the *consistency* of frame times. I'll happily take a frame or even 2 of lag if it's _consistently_ that amount. The enemy is JITTER - a particular variety of lag - not only lag itself. We should always want lower lag, but if we are given the choice between lower lag but with more variance, or equal or higher lag but with less variance - we should always choose the latter! You can adapt to lag, but only if it is consistent. You cannot adapt to inconsistent lag, AKA jitter.
Hail to the scientific method. Great video! I actually saw the Battle Nonsense video and was really surprised that such a simple solution would have such a big impact on input lag.
Not all that strange when you think about it as UE can have an infinite number of different other plugins dealing with physics, controller input, etc. So its how they integrate with the base engine that would determine how this all works together latency wise.
It only helps if you're GPU limited, because many titles will let the CPU queue up frames. If you test those games, you see no change. If you test DX12 games with Reflex, you get no change. If you test DX11 games with low latency mode on and vsync off, you get no change. It's a complex issue, but it's almost always a good idea to implement an fps limit your system can sustain.
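One way to pick "an fps limit your system can sustain" is to derive it from logged frame times rather than the average fps; a rough sketch (it assumes you have exported frame times in milliseconds from a tool such as RTSS or CapFrameX):

```python
def sustainable_cap(frame_times_ms, percentile=0.99):
    """Suggest an fps cap from logged frame times: take a slow-ish frame
    (the 99th-percentile frame time) and round the matching fps down."""
    ordered = sorted(frame_times_ms)
    slow = ordered[int(percentile * (len(ordered) - 1))]
    return int(1000.0 / slow)

# e.g. mostly ~6 ms frames with occasional 9 ms spikes -> suggests a cap around 111 fps
print(sustainable_cap([6.0] * 95 + [9.0] * 5))
```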
It's kinda sad that you *haven't* "discovered" that in the rest of the games you were CPU-bottlenecked, hence no 100% GPU load, hence lowering the GPU load won't make a difference. And what testing method did you use to get those values - were you comparing apples and oranges?.. *Battle(non)sense* at least showed us *both* the test setup *and* high-speed camera capture as proof. You showed us a Corsair ad right at the beginning of the video, nothing else. And the title. Do You Think Capping It Like That Will Make It Look Prettier Or Something, Or Make It Stand Out As More Professional?.. On the contrary.
Great work looking into this! I had watched the Battlenonsense video and I was very confused; it just didn't make sense. If you limit the FPS by stopping the GPU for a bit of time after each frame (like RTSS or V-sync), or by downclocking it (Radeon Chill), then most games queue up drawcalls, maybe 2 frames, maybe 3, or up to 5. Depending on the FPS that is a big delay, but frametimes should be great. Anti-lag can force that queue down to 1, but the limiter is still delaying the GPU. So the theory here would be that in-game limiters can limit the frequency of drawcalls (like every 10ms for a 100FPS cap), and the GPU gets them the moment they are ready and can start working as fast as possible, getting the information to the display as fast as possible. Nice, but why is the improvement that big in BF? 52 down to 33 at 120FPS is two full frames; maybe anti-lag is not working as intended here. And what about frametimes? Having a queue of drawcalls usually makes the frametimes better, because the GPU is only waiting for them in extreme situations. As long as the GPU has everything it needs in VRAM, it should be able to keep the frametimes flat. These in-game limiters could add more variation to the frametimes: maybe no problem at 120FPS, but it could be visible at 60FPS.
@@laggy0336 Already using it. My LG 24-inch 1080p monitor supports up to 74Hz refresh and has no G-Sync/FreeSync support, so I'm using Nvidia Fast Sync. Smooth, no lag, and whatever fps the game runs at, I have no tearing.
The main reason you want to cap fps is to not hit the vsync cap. Especially with gsync you always want to be 2 fps under your max gsync range so you don't activate vsync
@4:56 Yeah well, a 26% reduction in response time, in milliseconds. That means on a total basis of 1000 milliseconds you have about a 0.007% response increase... Just imagine the competitive advantage that this brings to the table :0
I think your results paint a clearer picture. I believe you see a benefit from a frame cap when your GPU has any struggle with a title, even if it's a small struggle. It could be related to particular visual settings. I think this topic should be investigated further. I might put together a video testing what I believe is going on. Is there a way to test input lag without having the equipment you own?
This means the developers are not setting up the framerate limit the same way. Some of them stop every process, causing bigger latency. So it's time for them to set up a low-latency mode in their games, where they reduce the framerate but still handle the inputs. We should create a "low latency" stamp which tells the user whether the game supports it or not (like RTX ON, etc...).
Before I upgraded to a vega 56, my GTX 960 could push over 200fps in battlefield 4 on low settings when paired with an i7 8700k. The gpu usage was around 98% and capping the fps to 60 would've definitely reduced the gpu usage, but you cannot possibly tell me that those 60fps would've offered lower input lag than 200fps. We saw a reduction in input lag when going from a gpu bound 190fps to 144fps in Gears 5, but I would've liked to also see 60fps, and even 30fps to see if that trend continues.
In Far Cry 5 the added input latency with capped FPS makes sense, because you were not getting 98-99% GPU usage to begin with, so you're getting less FPS with the cap and therefore you increase the delay. Also, while RTSS makes the frametime stable, it does feel like it adds some lag, and that's probably why you see latency similar to 99% usage; it's not that you have to use the in-game limiter, but that RTSS adds lag. So everything checks out with Battle(non)sense: the best scenario possible is to have as high a frame rate as you can while not maxing out your GPU. A little CPU bottleneck will help here, or of course limiting your fps a little bit.
I'm sorry to say it, but this was not a good addendum to BattleNonsense's video. BattleNonsense demonstrated the input lag advantage of capping FPS in a GPU-bound scenario. Unfortunately that premise seems to have been lost, as this video either does not use games that are GPU bound or caps the FPS way too low. Battlenonsense capped the GPU at 95% vs 100%. In this video it's often 94% vs 34%. The whole video needs to be redone.
10:06 No, you didn't discover that those games don't benefit from it... You discovered that you don't benefit from a frame cap when the GPU is not the bottleneck. That is what I see.
Capping frame rates has its benefits. It is more about consistent frame times than reduced latency versus uncapped scenarios. The gamer will always feel consistent frame times more positively than a 5 or 10 ms latency increase, which is basically about a 1 frame delay.
For those wondering about us not testing in GPU limited scenarios (we did, but still), we've put up some additional results at higher resolutions on our Twitter feed: twitter.com/HardwareUnboxed/status/1178662444048084992
Even when definitively GPU limited, games like Far Cry 5 and The Division 2 do not see reduced latency from frame rate caps
I think you need to do some retesting. Battle(non)sense's finding was about 100% vs 95%, not 95% vs 65%. Such a large decrease in fps will surely add more latency than capping your frames (if the claim is true) will reduce.
Might want to check the link. We tested 99% vs 93% and still didn't see a latency improvement
Maybe try different monitors? Two TN panels, two VA panels and two IPS panels? If there is any difference between ULL, FFR and RTSS using different monitors, then maybe it's based on the panel too? Just thinking more widely xD
Please add rainbow six siege, crucial to this test..
Would be nice to have a look at the frametimes at the same time to see if there are any differences
You could start including a latency test in your optimization guides.
Great idea!
Yeah
I second this
Sounds hard
+1 for latency measurements
I love Battle(non)sense, and it's awesome to see Chris' content being acknowledged by one of the top YouTube PC tech channels. He's doing great work and deserves more attention.
I like Battle(non)sense and hardware unboxed. However, Radeon AntiLag uncapped 100% makes me have less input lag on PUBG, so I don't know what's real or who to believe anymore.
RinsedSkateboarding It certainly does, assuming you're GPU bottlenecked at 99% usage.
Now you can include how to achieve the best input latency when doing an optimization guide for a new game.
Great idea!
Stay under 99% gpu load, there you go... gpu load cap tool probably incoming now
That would be really cool!
@@anomous2307 Can't you do that now with any overclocking software, like MSI's Afterburner?
@@anomous2307 Cool, now I can hit an artificial cap instead of the natural cap. Pass.
Ok... I'm a little late to the party here, but the methodology of this study is quite abysmal and does not "bust" any myth. Framerate capping is ideal when the GPU framerate is higher than the monitor refresh rate. This was not the case in too many of the examples given in the video.

When the GPU framerate is higher than the monitor refresh rate, the GPU render queue is overloaded with an excess of frames the monitor can't even display. The GPU is struggling to render the excessively large buffer to keep up with the last draw call passed by the CPU, leading to increased input lag. That's why Battle(non)sense and others capped the framerate slightly below the refresh rate, effectively pruning just enough data from the render queue so that the rendering is more at par with the monitor refresh rate, and the GPU doesn't get caught up in intermediate frames that won't even show on the display on its way to the most recent frame in the buffer, nor is it slowed down excessively below the monitor refresh rate.

The frame rate cap in the examples of this video is not a few FPS below the monitor refresh rate, it's way below it. Battle(non)sense did not make the argument that framerate capping diminishes input lag anywhere below the monitor refresh rate, only slightly below it. This video in my view does not prove otherwise, because it shows what happens when the cap is excessively below the refresh rate, and as expected input lag is increased in that case. A framerate cap too far below the monitor refresh rate will add input lag from frames becoming visible more slowly, just as an uncapped framerate too far above the monitor refresh rate will add input lag from the GPU racing to display the last buffered CPU draw call.
It makes sense, let me explain:
As we all know the Game loop works as follows:
1. The CPU calculates a frame and sends the information to the GPU.
2. The GPU then draws the frame to the screen.
Those two things are not synchronized though!
So what happens is the following:
Unlimited Framerate:
-CPU sends drawcalls too fast
-GPU is still busy finishing the previous frame -> frame cannot be drawn immediately
-GPU draws the frame.
Capping the framerate with RivaTuner is a similar story, since RivaTuner is not synchronized with the game either:
-CPU sends frame
-GPU waits for RivaTuner to allow it to draw
-> frame cannot be drawn immediately
-GPU draws the frame
If on the other hand the game limits the Framerate, the CPU waits in sync with the Game:
-CPU calculates frame
-CPU issues drawcall
-GPU draws the frame && CPU waits for the next frame (e.g. ~7ms for 144hz - time it took to calculate the frame) at the same time.
-> no latency between drawcall and GPU drawing the frame
-> With an in-game frame cap, when the next drawcall is issued the GPU is neither busy with the previous frame nor waiting for RivaTuner to allow it to draw, and can thus render it immediately after the drawcall is issued.
Edit: I have since learned more about graphics programming, and what I described is almost correct, except for the fact that multiple frames can actually be queued up by the CPU, which further increases the latency.
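For anyone who wants to see the in-game-cap case as code, a toy pacing loop in that spirit (pure illustration with made-up work times, not real engine code):

```python
import time

TARGET_FPS = 144
FRAME_BUDGET = 1.0 / TARGET_FPS

def cpu_frame():
    time.sleep(0.002)   # stand-in for game logic + building the drawcall

def gpu_frame():
    time.sleep(0.003)   # stand-in for the GPU rendering the frame

# In-game-style cap: the CPU itself waits out the remainder of the frame budget,
# so when the next drawcall is issued the GPU has long since gone idle.
for _ in range(5):
    frame_start = time.perf_counter()
    cpu_frame()                       # drawcall is issued here
    gpu_frame()                       # GPU picks it up immediately (it was idle)
    elapsed = time.perf_counter() - frame_start
    time.sleep(max(FRAME_BUDGET - elapsed, 0.0))   # wait in sync with the game loop
    print(f"frame took {(time.perf_counter() - frame_start) * 1000:.1f} ms")
```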
Didn't want to waste 13 minutes. Thanks! :)
Living legend right here, thanks so much for clarifying!
so we should frame cap ?
@@KINGGAMING-nd8le not through external programs like Riva tuner. In game cap: yes
@@feschber right. i have a 60hz monitor and i should limit my monitors fps to 60 as my gpu heats up. so thank you i'll try this
why do people always skip over Rainbow Six for competitive testing but then somehow Fortnite makes the cut?
I know, right? Who even cares about input lag in purely single player games, for that matter?
I Know Right?
coz we're supposed to hate ubisoft as per youtube and reddit
@@kebugcheck And Epic Games is any better?
Fortnite apex and pubg are the only games that exist
I bought some input lag on ebay, so I'm all good.
lol
a "plug 'n play" one? lucky bastard
You got it all wrong. Should have bought _anti_input lag, smh.
@@YourCRTube oh
Kids
Back in my day we had to make our own anti input lag and plug it in to the back of the sound card!
How happy I am that the topic of optimization, low latency and stable rendering is becoming more and more popular!
Even if my main interest is simracing, where high frame rates maybe aren't as important as in shooters, a fast and stable renderer is what's most expected. In racing games this gives a perfect feeling of the car and immersion on the track among many opponents.
What I missed here is a reference showing that not only latency and fps are important, but also the lack of tearing.
Equalized frame pacing, low latency and a tearing-free experience - that's the KING!
It used to be this way many moons ago. Tuning games for the highest stable frame rate instead of omgwtfbbq highest frame rates.
Hey, a couple of ideas:
1. Maybe add this testing to your game optimization videos
2. Test different scenarios, GPU- and CPU-limited cases in the same game (changing the resolution?) or using different APIs
3. Test different fps caps, multiple combinations above and below your PC's capabilities. Imagine you can run a game at 120 fps. By putting a 200fps cap maybe you can get the benefits of lower latency while maintaining the same fps and GPU load. Maybe you need to go below 120, but 110 is better than 90 or something
Anyways, great video as always!
Framecaps do reduce latency in general, but only if the GPU load is maxed, take a look at your own results.
The reason Far Cry, CSGO and Fortnite didn't see an improvement is that they were CPU bound.
You should've tested them at 1440p
Nah, that would mean having to understand how things work and would take away from YouTube bashing.
Yeah, the title is alarming. It's obviously something that any intelligent person wouldn't even question [as long as the GPU is struggling]. It's like asking "if I drive slower will I use less petrol?" Umm, yes... unless you're trying to make clickbait.
Thought the same while watching the video.
If I could post the meme of Morgan Freeman pointing up with the caption "he's right you know".. I would.
Battlefield 5 wasn't GPU bound but still benefited from a frame cap. The RTSS frame cap reduced GPU usage but had no effect on frame times. Why is that?
Input latency is more an issue of the game engine's threading and less an issue with the GPU. Inputs do not go through the GPU, they go through the CPU, and the data must be fed through several different CPU threads before it actually makes it out to the display. In addition, the game engine is almost certainly not processing input asynchronously... whatever thread it uses to process input must interact with the rest of the engine which likely means several locks are involved that could easily stall if competing against multiple render-threads going full-out.
One could assume that when the frame rate is capped by the game itself, the game engine has some (fully synchronized) down-time between frames that gives it time to process whatever input is pending before starting the next frame. It can really be that simple. But when capped by external means the stall does not necessarily occur in the game engine itself, and thus might not be synchronized (meaning that the game engine could be holding locks required by the input-processing thread when it gets stalled by whatever mechanism is capping the frame rate that is outside of its control).
-Matt
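To make the point above concrete, here's a toy Python sketch (my own simplification, not any real engine's code; the 60 fps cap and the 4 ms of frame "work" are made-up numbers) of where the wait lands relative to input sampling:

```python
import time

def input_age_at_present(cap_fps, in_engine_cap, work_s=0.004):
    """Toy single-frame timeline (not any real engine's code): returns how old
    the input sample is, in ms, at the moment the frame is presented."""
    budget = 1.0 / cap_fps
    wait = max(budget - work_s, 0.0)

    if in_engine_cap:
        # In-engine cap: the engine idles *before* sampling input for the next
        # frame, so the sample it renders from is as fresh as possible.
        time.sleep(wait)                    # synchronized down-time between frames
        input_time = time.perf_counter()    # sample input late
        time.sleep(work_s)                  # simulate sim + render work
    else:
        # External cap (RTSS-style): input is sampled early, then the frame is
        # stalled from outside at present, ageing that sample.
        input_time = time.perf_counter()    # sample input early
        time.sleep(work_s)                  # simulate sim + render work
        time.sleep(wait)                    # stall imposed by the external limiter

    return (time.perf_counter() - input_time) * 1000

print(f"in-engine 60 fps cap : input is ~{input_age_at_present(60, True):.1f} ms old at present")
print(f"external  60 fps cap : input is ~{input_age_at_present(60, False):.1f} ms old at present")
```

The only difference between the two branches is whether the idle time happens before or after the input sample is taken, which is roughly the distinction Matt is drawing between in-engine and external caps.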
Hi, what happens to the guy I shoot at? Will this input lag affect how that works? Does the CPU tell the network card that I shot, or does it wait for the whole GPU process?
@@dimwillow7113 Input lag has nothing to do with your network. Input lag is literally the time it takes between YOUR input (moving the mouse, pressing a key or clicking) and it registering on your PC itself.
Some input on the subject (I'm no game engine expert, so take this with a grain of salt):
In the past most games would work in the following way:
in each frame the game would calculate the logic step that evolved from the previous frame,
then it would draw the result of this logical evolution. Let's call these steps the logical frame and the graphical frame.
As game engines evolved, some elements which were part of the logical frame moved to the graphical frame, for example simple animations that had little to no effect on the game state besides graphics. Developers realized that you could improve the graphical experience by making the graphical frame independent from the logical one.
The logical frame in many situations needs to run at a fixed pace (e.g. 60 fps), otherwise some of the collision steps would become too complex.
Engines like Unity separate them. You can have your game running at 240 fps while the logical frames are set to 30.
This allows very smooth animations and better GPU utilization than if you locked both frame stages together.
The reason you may be getting lower input lag when reducing the frame rate could be that the logical frames are set to a much lower value than the graphical frames, so by increasing the graphical frame rate you are also increasing the CPU work needed for the graphical frames (in the best case it should only be the bureaucracy of sending information to the GPU). So by increasing the graphical frame rate, we may actually be hurting the logical frame rate...
Games that have both of them locked together should always improve latency when you increase frame rate.
Otherwise it becomes complicated.
Well... that's my 2 cents.
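For anyone curious, the decoupling described above is usually implemented as a fixed-timestep loop. Here's a minimal Python sketch of the idea (the 30 Hz tick rate is just the example value from the comment; Unity's actual FixedUpdate machinery is more involved):

```python
import time

PHYSICS_HZ = 30              # fixed "logical frame" rate (example value from the comment)
PHYSICS_DT = 1.0 / PHYSICS_HZ

def run(duration_s=1.0):
    """Toy decoupled loop: logic ticks at a fixed rate while rendering runs as fast as it can."""
    accumulator = 0.0
    logic_ticks = 0
    rendered_frames = 0
    previous = time.perf_counter()
    deadline = previous + duration_s

    while time.perf_counter() < deadline:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now

        # Logical frames: consume elapsed time in fixed steps so physics/collisions stay stable.
        while accumulator >= PHYSICS_DT:
            logic_ticks += 1            # update_game_state(PHYSICS_DT) would go here
            accumulator -= PHYSICS_DT

        # Graphical frame: draw the latest state plus purely cosmetic animation.
        rendered_frames += 1            # render() would go here

    return logic_ticks, rendered_frames

ticks, frames = run()
print(f"~{ticks} logic ticks vs ~{frames} rendered frames in one second")
```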
Take-home here: cap with in-game settings, with the lag reduction option built into the driver software enabled, and cap FPS a little bit (2-3 FPS) below what you get at maximum load for the best benefit. Don't cap too much or else you'll actually add more latency. This mostly matters if your GPU struggles, i.e. GPU load stays around 100%.
Great video. I've been capping framerates with RTSS for a long time as I don't like big fluctuations; they make for inconsistent gameplay and more heat, etc., when your card runs flat out. And I always thought the higher the frame rate, the better for latency. Like many things in PC gaming (and computers in general), enigma is a pretty good description here. Sucks that we can't have a straight answer and it varies on a per game basis. Hopefully all Frostbite games work on the same concept and I will now be creating a .cfg file to cap my framerates instead of RTSS, at least for the BF series.
Don't be so quick to dismiss RTSS... in my experience, in game limiters often really suck. They often cause large variation in resulting frame times & rates, don't allow granular control (often only let you use certain preset values)... hell, sometimes they simply don't even work at all.
I dunno how the one in Frostbite is, if indeed it is an engine feature and not one put in by the game devs.
The one in Unreal is pretty good... though there are still exceptions. Notably, the implementation in Borderlands 3, where it is pretty bad; it doesn't apply itself from your saved settings on launch - you need to toggle it off and then on again to get it to do anything. Ugh.
Due to these types of issues, I tend to just use RTSS, unless I know the game implementation is particularly good. I'm happy to take the latency it causes for the sake of frametime consistency - effectively, the 0.1% lows of input lag, as well as traditional 0.1% lows of render frametimes, because while I can adapt to consistent input lag, I, or anyone else for that matter, cannot adapt to inconsistent input lag - AKA jitter.
You still cap your fps?
im pretty sure you can set it in your graphics control panel
Well, the higher the fps, the more recent and up-to-date the frame you see, so I always like getting the most fps I can at 4K ultra high settings, near 120 to 144 for my monitor. Of course I have to adjust my settings for each game.
Uh... sorry, but I think you're off the mark here. Capping FPS helps reduce input lag ONLY when GPU utilization is 99%; when GPU utilization is already below that, like 95 or 82 or 66 or whatever, capping FPS WON'T help. So the real issue is the GPU utilization.
Would be nice to see how stable the frametimes were when running uncapped. Because I certainly prefer more stable frametimes over slightly lower input latency
Tim please make a video on the Nvidia control panel 3D settings you use!
Glad I'm not the only one who watches Battle(Non)Sense. Glad you guys are putting his name out there, he doesn't get the views his work deserves.
I think the real benefit of capping the FPS is frame timing: you can get a way smoother experience if you cap your FPS and get consistent frame times, obviously at the cost of input delay
Idk about that. If you have a mid-range to high-end setup you are going to have consistent frame timing regardless. When I play on my RTX 3060 setup, playing older games such as Call of Duty: Black Ops, the "smoothness" of the game is significantly reduced. Capping the game a bit above my refresh rate makes it smoother, while staying at my refresh rate or below makes it worse. Also, when capping frames your PC isn't utilizing all the available horsepower and background applications still take up some of those resources, so it's better to force it so all the resources are spent on the game. In addition, in most games there is nothing wrong with just ignoring frame rate caps, unless the game is very demanding or there are engine issues or unpleasant artifacts.
The thing about frame caps for esports players is that by setting a frame cap, they are getting a reliable input lag. If you leave it unlimited, your input lag will be lower in high fps scenarios but suffer in random dips. For the sake of consistency, they put on a cap to remove variable input latency from the equation.
But if it's super high even in dips then it'll still be higher than a locked fps, so you can still see the enemy sooner with more recent frames
@@xpodx You haven't actually analyzed your frametimes before, have you? Every time I frame cap, my 1% lows improve.
@@xpodx I don't think anyone is saying that if you can render a game at an average of 500fps but your monitor only supports 240Hz that you should cap at 240fps. Implementing a cap somewhere between 450-480fps would reduce latency while also keeping frame times low and consistent and improving the 1% lows thanks to the extra headroom.
@johnmoore1495 if it can hit 500 on avg it'll be the most up to date image that you can see
@@xpodx Technically yes, but you're sacrificing worse 1% lows, more likely hitching, and worse input latency from maxing out your system all the time, for 20 fps. 480fps vs 500fps is literally a 0.083ms difference in frame times (2.083ms vs 2.000ms), which is imperceptible. If you can see/feel a 0.000083 second difference you're not human lol.
The thing is:
Game runs at a certain physics rate
Monitor has a fixed refresh rate
gpu fps is often variable
Game needs to pipeline all that shizzle
If your capped FPS is divisible by the physics rate, you will get the lowest input lag while also keeping the tearing as low as possible. This theory is based on the fact that monitors used to run at 60Hz for such a long time, and it works for many games. Let me give you an example:
Rocket League physics run at 120hz.
If you own a 144Hz Monitor, set it to 120Hz.
Now go to the game settings and cap FPS at 240. If your controller or anything else causes frame drops, try 241.
There you go: Low input lag, less tearing. No Gsync or stuff like that needed.
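For what it's worth, a tiny sketch of that rule of thumb - note that whether it actually lowers latency is the commenter's claim, not something the video tested, and the 120 Hz physics / 120 Hz monitor numbers are taken from the Rocket League example above:

```python
def candidate_caps(physics_hz, monitor_hz, max_fps):
    """Caps that are whole multiples of the physics rate (the rule of thumb above),
    flagging which of them also line up with the monitor refresh rate."""
    caps = []
    multiple = 1
    while physics_hz * multiple <= max_fps:
        cap = physics_hz * multiple
        caps.append((cap, cap % monitor_hz == 0))
        multiple += 1
    return caps

# The Rocket League example from the comment: 120 Hz physics, monitor set to 120 Hz.
for cap, aligned in candidate_caps(physics_hz=120, monitor_hz=120, max_fps=360):
    note = "also a multiple of the refresh rate" if aligned else "does not line up with the refresh rate"
    print(f"{cap:3d} fps cap ({note})")
```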
Imma try this on RL tomorrow
How do you find the 'physics rate' for each game?
ayy lmao
You can try and ask a game developer in the forum of the game you wanna play. Otherwise just try out different caps. In my case, I also have an old iMac with a 60Hz screen, and there I've set the cap to 90 (1.5 x 60), which works better than 120 because of the weak GPU.
Do you happen to know PUBG's physics rate? I can't seem to find anything related.
Storm , Make sure you got these settings:
Vsync OFF
Render Quality -> High Quality
Any kind of Details -> Performance
On the right side untick everything, besides High Quality Shaders, Dynamic Shadows and Transparent Goalposts
It might be possible that 250 works with the same or even better results, due to pipelining techniques implemented in the game. In my case 240 works better, but many pros like ScrubKilla and Squishy play on 250.
You'll likely want to cap it a few FPS below the max refresh rate, and not cap it just to drop the GPU usage. While G-Sync will cap it at your refresh rate, that causes the usual extra latency that V-Sync has. However, if it is a few FPS below, it will have much less latency, giving a better overall experience. Blur Busters tested this and found that when it gets within a few FPS of the refresh rate, it behaves exactly like V-Sync does.
Nice finding that the RTSS limiter does not seem to give the same benefit as some ingame limiters when it comes to latency. The other results at 96%).
Also keep in mind that a good gaming experience does not only depend on input latency but also on frame time consistency. That's why I would always recommend using a combination of graphics settings and a frame limit so that the system will always be able to put out the desired framerate without big fps spikes (up or down).
What about if I NEED to cap my framerate to keep it within freesync range? (usually doing it through radeon settings)
Rtss is the best limiter for now. Most accurate to 0.001 fps.
Read here (www)blurbusters(com)/gsync/gsync101-input-lag-tests-and-settings/5/
Should be similar to g-sync
Yeah RTSS gives the most consistent frame times. Some in game caps are a bit dodgy. If you're already at 140+ FPS you should have pretty low latency in general
@@Hardwareunboxed RTSS also causes game crashes in DX9 and some DX11 titles, so make sure you watch for them. Examples of this are Guild Wars 2 and Insurgency.
I'd honestly recommend Radeon Chill for capping for Freesync as you get the added benefit of lower temps as your clocks lower to match the intended FPS.
Radeon Chill is the best limiter though since it doesn't increase input lag. Set the Min and Max to the same value if you want to use it as a limiter
Gears 5 uses UE4. UE4 implements frame rate limiter (default way) by adding delay before processing input in the Game Thread.
So the frame looks roughly like: delay, then input processing, then game thread work, then render and present.
What matters is the distance from input to present. Without the limiter that distance is also short, but the frame rate is too high, so frames get queued up in the GPU back-buffers.
Some of the above is my guess, the only way to be sure is to have source code of the game ;)
RTSS can only add its delay later in the frame, so that distance will be longer. Additionally I think that with a 144 cap there might be some GPU back-buffering. Next time try RTSS with a 143 FPS limit (or refresh rate - 1). It would be interesting to see the results.
I have tested Bulletstorm myself on a 60Hz monitor and measured:
94ms with VSync ON;
63ms with VSync ON + RTSS 59 FPS Cap;
67ms with VSync ON + RTSS 59.9 FPS Cap.
50ms with VSync ON (Fast Sync).
I always cap my frame rate, and have been for many years. Having wild swings in fps doesn't feel as smooth as capping the fps to about what your normal average would be, where the swings in fps are much smaller.
In online shooters I've done the same since RTCW:ET, as config pros pointed this solution out. Also, Battle(non)sense had a different setup - GPU AND CPU. Yes, it could have different effects in different games. But in BFV it really helps: with an i7 3770K and RX 5700 XT @1440p high, going from an uncapped 70-144 FPS stuttering mess to (relatively) smooth gameplay with a 71 FPS cap (in FS). But I guess in my case it also has a lot to do with the CPU being a bottleneck. FRTC doesn't always work; in BFV it has no effect for me.
Blur Busters had even more interesting results with VSYNC ON/OFF etc...
I do the same no matter what game I play - vsync is always on, all the jittering messes me up, and since I've always played like that I don't think latency was ever a thing for me, probably just used to it. Plus it makes me feel good that my GPU is not always under full stress and keeps cooler while doing it.
@@fdjahksglkfaj not exactly, look up Blur Busters
if its a really old game and you are getting 1000+FPS then capping it is a really bad idea
We (me and my friends plus some other old-school gamers from 2004 onwards) have been doing it since Team Fortress Classic, Day of Defeat and pre-1.3 Counter-Strike. With CRT monitors it seemed capping fps did a good job.
For us it was intuitive: with the fps cap you stop having so much of a dip when things go sideways on ancient hardware. We just wanted a consistent gaming experience, not something like having 99 fps in the base only for it to drop to almost 60 for a few seconds. So we capped at around 70/75, for example.
The RTSS fps limiter adds 1 additional(!) frame of input lag compared to ingame limiters, it always did. That's why you always try to use the latter if available.
I find that in-game limiters are unreliable; also, the frame time difference between a consistent 60fps and 61fps is only about 0.27ms, with the number only dropping lower as fps go up. Are you sure you know what you're talking about?
@@iliasvelaoras3038 I've been following the input lag discussion for over 3 years now, mainly because of OW and its floaty mouse movement stuff in the past. This discussion is really something else, since it primarily focusses on gpu utilization as a core variable.
There are even cases of 2.5+ frames of added input lag (R6 Siege) when using RTSS.
Again, I'm not hating on anything here, I have used RTSS myself a lot.
While I can't post links without getting spamfiltered on YT - there are people like 'klasbo' on reddit who did a significant amount of testing in that regard.
So, depending on the game engine used... You may have a different result.
Different engines have different timings/buffering going to the CPU/GPU.
Recently, with the Win 1903 update, many games on the UE3 engine had their FPS nearly chopped in half, when before the update all was good. Toxikk, for example, was one of the games to take the FPS hit. But a quick update to the part of the game engine that controls the timings/buffering fixed the FPS. Overall, across all Windows versions, there was a MASSIVE FPS increase and now the game runs smoother than ever.
Could you please test some more games like Destiny 2, Call of Duty MW (when released), Rainbow Six and Apex Legends?
As well as Valorant
@@reqru1tno
@@wraith7852 lol why not
Needed apex included as well...
Another test you need to perform is capping the framerate to something just below the refresh rate of your monitor, such as capping to 141 fps on a 144Hz monitor. This is something people with G-Sync/G-Sync Compatible monitors have been doing for a while. It would also lessen the input lag difference that comes directly from a massively lower FPS compared to an uncapped framerate - like how you mentioned 60fps at 33.3ms input lag compared to 200fps at 10ms. What if you compared 197fps to 200fps?
To anyone reading this, you do not need to limit your FPS anymore in games that support Nvidia reflex and/or AMD Anti-lag.
capping FPS can also reduce stutters by making 1% lows more stable. Though this will vary from game to game.
I don't like jumping fps number, sometimes you can see the drop even in 80+fps scenarios. That's why I cap my fps for sp games, while mp games I keep it uncapped.
It doesn't matter how powerful the GPU is, there are times it will go too fast and sometimes too slow... that's going to drive you mad when playing a game... I normally cap it around 45...
@@campkira I have a 2060 paired with a 1080p monitor; my frames are high, but the constant changing from 150+ to 80+ bothers me, so I cap it at 60.
RTSS capping adds one frame of delay - I thought that's published by RTSS. So yeah, as long as the latency you save by capping outweighs that one frame, you will still get a benefit with RTSS.
You will also benefit in a way that the averages shown here don't actually reveal - there is less jitter.
If they also tested for 0.1% lows of input lag, they would find that it is greatly reduced when using RTSS (or similar). This is why I frame cap... not to reduce _average_ input lag, but to reduce _jitter_ (the 1% and 0.1% lows; unexpected hitches, and the like). This is what one should care about, because it's the type of lag that you CANNOT adapt to.
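As an illustration of why the tail matters, here's a small Python sketch with made-up latency samples - similar averages, very different worst 1%:

```python
import statistics

def lag_summary(samples_ms):
    """Mean input lag plus the average of the worst 1% of samples
    (for latency, higher is worse, so the interesting tail is the top 1%)."""
    ordered = sorted(samples_ms)
    worst_1pct = ordered[int(len(ordered) * 0.99):]
    return statistics.mean(samples_ms), statistics.mean(worst_1pct)

# Made-up numbers purely for illustration: similar averages, very different tails.
steady  = [45 + (i % 5) for i in range(1000)]            # always ~45-49 ms
jittery = [40 if i % 50 else 170 for i in range(1000)]   # mostly 40 ms, occasional 170 ms hitch

for name, data in (("steady", steady), ("jittery", jittery)):
    mean, tail = lag_summary(data)
    print(f"{name:8s} mean = {mean:5.1f} ms   worst 1% avg = {tail:5.1f} ms")
```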
@@mduckernz That's interesting.
Great video, would this be something worth considering when you do your per game optimisation videos?
A section for input lag?
Any update to this in 2024 ?
Would've liked some Apex testing, seeing as the new season is rolling around just tomorrow
Probably cause apex sucks..
Very interesting Tim! I wonder if a more mainstream GPU with less overhead might benefit more to having free gpu resources (ex RX 580 or GTX 1060.) Keep up the great work!
Given that the CPU could be influencing the ubisoft results, I'd say so. A lower end GPU would take a chunk of the load off the CPU and make the testing more GPU bound.
High gpu demand. Cap fps. Low gpu demand, dont cap. Seems like a good general rule.
Far Cry has high GPU demand and still didn't benefit from a cap.
It doesn't matter how powerful the GPU is, there are times it will go too fast and sometimes too slow... that's going to drive you mad when playing a game... I normally cap it around 45...
@@campkira I don't have a "normal" PC, but with my 2080 Ti I just always let it fly. No going mad over here.
I'm talking more in terms of input lag - if that's important then do what I said, but I prefer to see how high an fps I can get in a game at max settings, especially single player games.
Wrong.
The more FPS you get, the better the latency will be, AS LONG as you stay BELOW 97% GPU usage. That's why in Far Cry (92% GPU usage) and The Division 2 (97% GPU usage) the latency increased when capping the frames. Capping ONLY helps in GPU bound scenarios (98% or higher). If you tested at 1440p or higher (where you are 100% GPU bound), capping the frame rate would massively decrease input latency, because without the cap your latency would be much higher.
My opinion: this relates to the game engine and how the developer built their game... no black and white answer, since there are too many variables.
Something to maybe also consider, blurbusters (I think) commented that going over the maximum gsync value (i.e. going over 120fps on a 120hz monitor) increases input lag, and you should cap underneath it.
Depends on if you use v-sync or not.
Without v-sync it'll be lower but there'll be tearing. With v-sync you get normal v-sync behaviour (double/triple buffering).
I wasn't happy about the Battle(non)sense video because he made it sound like there was some kind of bug. But he didn't know enough about how engines and frame delivery work, so I can totally understand that. I'm not too happy about the title, nor even the conclusion of this video. What NVIDIA told you is about the best short explanation they could have given without explaining the entire buffer/queueing system.
As for the explanation. I found this great comment by Yong Yung underneath the Battle(non)sense video and I couldn't write it better myself:
"So the reason you get more input lag when you're GPU bound is this. The game sends a bunch of commands to the GPU, which get executed asynchronously. Then the game calls present(). At this point the driver can force the game to either wait until the frame was completed, or it can just let the game render the next frame, even if the current frame isn't completed yet (leading to better GPU utilization, as there are always more rendering commands in the pipeline).
How often the game can call present() while the first frame still hasn't finished rendering without being stopped by the driver is essentially the "pre-rendered frames" setting. If the driver would always stop the game at present() even if no other frame was in-flight, performance would be terrible because you would potentially give up a huge amount of GPU/CPU asynchronicity. (I hope that's a word.) But stopping the game when a single frame is in-flight usually only incurs a small performance penalty. I guess what low-latency mode is trying to do is guess how long they have to block in present() so that the next present() comes in just as the GPU finishes the previous frame.
Of course if you're CPU bound (or simply not GPU bound through a frame rate limit), none of this really does much, because every time you call present() the previous frame is already done anyway. It's essentially perfect low-latency mode.
What can be done about this? Well, nothing really. Except for low-latency mode or frame rate caps. You could be even more aggressive than the current modes, but that would incur an even bigger performance penalty. And then you have to ask yourself, in (probably competitive) games where people care so much about latency, aren't users playing with low settings anyway to get the most FPS, and are therefore usually CPU bound anyway? It's probably not worth it to provide an even more aggressive mode."
This principle should always be the case and it is, in fact, a large part of why regular VSync has high input lag. Even though the GPU will not be at 100% load, the GPU waits for the VSync signal until it processes the next frame if the back buffer is already full. As NVIDIA said, it can't instantly process the new frame. There are of course other reasons for VSync input lag. The frame that is already done gets delayed by up to one refresh period and you're essentially capping your framerate lower than what you could achieve.
Some engines have features similar to anti-lag. In fact, I know UE3 Rocket League has a setting in the config file called OneFrameThreadLag which works even more aggressively. But that is not the default and I would be surprised if it is default in any engine. There is a good chance that in the titles that you measured GPU never was in a spot where it is unable to process the next frame. That's why I didn't quite like the conclusion. If you tried the test with a very underpowered graphics card at max settings (maybe 4k), I would expect you to be able to force a difference in more games. For example, Far Cry was only at 92% GPU load in the first place. Even in The Division, just from the data I can look at, it doesn't seem a clear cut case where the GPU would continually be the part that slows everything down. (NVIDIA: "[...]this may not be present when the rendering load fluctuates[...]) Any situation where anti-lag works, should be a situation where capping framerates works at least as good. I know you guys like to test real-world and I respect that, but it may just be that in situations like these, where the theory would predict every game to be affected, that this is not the right way. Of course, someone with the exact same system and settings would get your results, but for others, the result may not be the same and giving per-game recommendations based on your data isn't a good idea.
Also, regarding the Fortnite testing: I'm not sure if I fully understood what you're testing. Are you testing the building menu or something that gets rendered in 3D? It wouldn't be impossible for the game to render UI completely independently, which would explain the difference in default input lag and could also mean that the effects of capping the framerate may be different.
Edit: Hardware Unboxed on twitter and here in the comments posted extra data. The conclusion is that the tested games are indeed unaffected. Would be interesting to hear from the developers what steps they have taken to prevent input lag. Far Cry has such bad input lag that it may just force queueing multiple frames regardless of framerate cap. In that case, they've really just made it worse.
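To illustrate the queueing behaviour the quote describes, here's a toy model (my own, with made-up 4 ms CPU / 10 ms GPU frame times - not the driver's actual scheduling) of how allowing more frames in flight raises input-to-display latency when GPU bound:

```python
import collections

def avg_input_to_display_ms(cpu_ms, gpu_ms, max_frames_in_flight, n_frames=200):
    """Toy model of the queue described above: the CPU may run ahead by up to
    `max_frames_in_flight` submitted-but-unfinished frames before the driver
    blocks it in present(). Returns the average input-to-on-screen latency."""
    gpu_free_at = 0.0                    # when the GPU finishes its current frame
    in_flight = collections.deque()      # completion times of frames already submitted
    t = 0.0                              # CPU-side clock, in ms
    latencies = []

    for _ in range(n_frames):
        # Frames the GPU has already finished leave the queue.
        while in_flight and in_flight[0] <= t:
            in_flight.popleft()
        # present() blocks the CPU while the queue is full (the "pre-rendered frames" cap).
        if len(in_flight) >= max_frames_in_flight:
            t = in_flight.popleft()

        input_sampled_at = t             # input is read at the start of the CPU frame
        t += cpu_ms                      # simulation + draw-call recording
        gpu_done = max(t, gpu_free_at) + gpu_ms
        gpu_free_at = gpu_done
        in_flight.append(gpu_done)
        latencies.append(gpu_done - input_sampled_at)

    return sum(latencies) / len(latencies)

# GPU-bound example with made-up numbers: CPU needs 4 ms per frame, GPU needs 10 ms.
for depth in (1, 2, 3):
    print(f"{depth} frame(s) in flight -> ~{avg_input_to_display_ms(4, 10, depth):.1f} ms input-to-display")
```

With the GPU as the bottleneck, each extra frame the CPU is allowed to run ahead adds up to one GPU frame time of latency; a frame rate cap or a low-latency mode keeps that queue from ever filling up.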
Thank you
Rocket Science
I know that uncapping your fps increases visual updates because you're getting the highest possible fps, but it also increases stuttering and screen tearing because your monitor refresh rate can't keep up with the fps, and stuttering happens due to the high GPU usage. So going into graphically intensive moments will also cause the fps to dip suddenly. I use RivaTuner, and after watching this video I think I just need to cap at a higher rate. I run my monitor at 100Hz and mostly cap at 108; I might start capping at 144 or 160.
What framerate capping definitely helps with is frame time consistency, especially when in certain areas of the game turning the camera makes the fps jump as certain elements appear or disappear from view.
But in this case the goal is to set a cap that is lower than your minimum fps for, like, 90% of the game, which is quite hard to achieve and may need revising for areas later in the game.
For me, however, it is worth it, as I hate fps fluctuations, especially around the 60 fps mark where they are quite noticeable. The improved frame time consistency may even help with input lag as a placebo; at least for me it does, probably because the brain can adjust more easily to consistent than to variable input lag.
G'day Tim,
Another great video. Although I don't play fast paced multiplayer shooters (or pretty much any shooter), I still found the topic really interesting. As for game testing sample size: like Steve has shown with FPS, the more games you test, the more varied the results you get, with even games that use the same engine behaving differently.
*I'm more interested in what amounts to the 1% and 0.1% lows of the input latency* ... not merely the mean (which is what I assume you are using. If you said otherwise, I must have missed it).
Any chance you could test that?
I use RTSS for capping my framerate in games that have _jitter_ .
I am happy to take a reduction in framerate, and some additional input latency - I always expected that my use of RTSS would result in at least some amount of both - if the resultant latency is _consistent_ - because while one can adapt to latency when it is consistent, one _cannot_ adapt to jitter!
(jitter is, effectively, inconsistent latency)
N.B. I use RTSS even when in-game framerate limiters are available, even though, as you showed, such in-game limiters can introduce less latency than other limiters such as RTSS - presumably due to their ability to introduce pauses internal to the rendering process, rather than only afterwards/externally.
So, why do I do this?
1) often the in-game versions kind of suck, with large variance/swings, large input lag, or even sometimes don't work at all - and, critically:
2) RTSS is consistent, always working the same in every game; I can adapt to the (small!) amount of latency it introduces, since it is so consistent.
Thanks for reading!
This interaction between hardware and software (the workload on the GPU is software) reminds me of the kerfuffle regarding why the BFS scheduler was created on the Linux platform, which led to adjustments to the stock scheduler, CFS.
The GPU is like a slave device running all out. Load it down and keep it resource limited (your >95% utilization) and the firmware scheduler prioritizes throughput, not latency.
As you found, the results are variable, but strikingly similar. It’s cool to see how these similar situations in computing pop up over the years.
According to Battle(non)sense, capping your FPS using RTSS stabilizes frame times to make your game smoother. Could you test that as well?
Ya, need to test for the 1% and 0.1% lows for input lag, for the same reason that only showing an average frame rate doesn't tell the full story in regular performance analysis.
We're interested in *input jitter* . I will happily take a slightly higher average input lag in exchange for lower variance/jitter.
'Capping' your FPS using V-Sync also stabilizes frame times and can make games feel smoother, but it wasn't tested here either and for good reason. The latency/input lag introduced makes either method objectively worse in multiplayer/competitive gaming and not worth pursuing. The videos you refer to are very interesting, but without showing the penalties to input lag and display latency alongside the frame time changes, it gives the impression that the results of RTSS are all benefit and no penalty, which is simply just not the case.
@@jonathanmitchell9779 I know capping your frame rate using V-Sync or RTSS isn't all benefit with no penalty. I'm kind of curious to see an input lag comparison along with a frame time comparison for V-Sync vs RTSS. I know RTSS does increase input latency, but I tried it out in a few MP games the other day and couldn't feel a difference.
That's a nice merch design! You should bring it back!
My framerate is always capped to 60-72 because my monitor's FreeSync maxes out at 75Hz.
If you plan to keep using FreeSync, capping will always give you lower input lag compared to uncapped. But if you don't plan to use FreeSync, then uncapping fps all the way IS the minimal input lag.
One piece of advice: you need to cap at a specific fps value relative to your LCD's refresh rate to achieve the best result. Please read this:
(www)blurbusters(com)/gsync/gsync101-input-lag-tests-and-settings/5/
in principle, freesync works similar to gsync.
@@MichaelHarto are you sure man?
@@z3phiro193 dead sure
Been saying this for so long. This combined with Battlenonsense's other videos also proves that when running a GPU heavy game competitively (let's say with a maxfps of 180) you're better off using Gsync/Freesync with your fps capped slightly below your highest refresh (to avoid accidental vsync/turning gsync or freesync off) since it'll result in lower input lag *and* smoother visuals even at higher framerates. Consistency in visuals is worth something, even competitively. Now just wait a year or two more before the pros catch up on this fact and another 3 for the general public to realize it's true. Man this is going to save me some online discussions for sure.
I'm pretty upset Radeon Chill isn't being tested as a frame limiter when it should be the best frame limiter out there according to the AMD Engineer who wrote it
And to test it with "power efficiency" setting on and off. Mine set to "off".
10:20 he literally said Radeon Chill didn't improve latency, so obviously they didn't disregard this. It's always a good idea to watch the entire video mate.
Battle(non)sense already tested it, and RTSS is just better.
You can check it on his FreeSync vs. G-Sync Compatible video.
@@humanbeing9079 he didn't show results though. Would have liked to see how much of a difference it really is
@@humanbeing9079 I rewatched the video you're talking about but there's no data for chill only RTSS & FRTC. I'm not sure if he's assuming FRTC and Chill are the same thing because I don't think they are when you set Chill's min & max to the same number
The location of the bottleneck determines how many frames of input lag you get, your framerate determines how much latency each frame is worth. You want to set the framerate cap just low enough to be sure you're staying CPU limited. This isn't just a game to game issue, but also a system to system issue, as cpu vs gpu combinations matter.
RTSS will also reduce input lag in some situations, for example if you're using v-sync, or the game is GPU limited with several pre-rendered frames.
My recommendation is to use an in game cap if you have it, either just below the maximum refresh rate of your monitor if you have freesync or gsync, or at your minimum framerate with vsync off, if you don't.
8:40 Where is the testing without the ultra low latency setting, AKA the baseline, for CS:GO?
yeah we need to see that to know whether to use ultra low latency.
In competitive titles you will find an fps locking feature. The effect is very clear in CS:GO with the var number in net_graph: if you cap your fps near your max, frametimes will be smoother and var will be lower.
What about reducing input lag in CPU bound games?
They did test CS:GO
A lot of games people call "cpu limited" aren't actually cpu limited. Often when you see a game and the gpu us
@@tomstech4390 So in conclusion: get as many fps as possible without being capped by either the GPU or CPU (keep it at 90% max)?
Steve from GN, Tim and Chris are the only ones I can trust when it comes to gaming science, so pleased to have them in the community!
If you are gonna test like Battle(non)sense, maybe use all the same variables - I was missing vsync on/off. Rather interesting findings on this topic.
Exactly this. I always use Vsync as I don't play competitive online games, so the testing above was disappointing really. Would it still hold true? I don't know...
Looks like you guys found a new category to add to your game optimization videos! Thanks in advance ;)
There are a few things I think you got wrong. The comparison was the GPU at 100% vs the GPU at 95%. In most of these tests you already had the GPU at 95%, then you capped it down to 65%.
His finding was about the GPU being maxed out vs capping it to below 95%, not this.
Would love to see this same testing with adaptive sync. I always cap my frame rate to 2 less than my max refresh rate, using the game engine or RTSS if the option is unavailable. When I leave my fps uncapped, the added latency is immediately noticeable to me (at least in the games I play.) On my old AMD gpus, I would force triple buffering vsync and then cap my fps. That allowed me to use vsync with reduced latency, and really helped my games feel more consistent.
No Apex Legend tests? Cmon man, i need that.
blur busters would be the perfect team to test on a per game basis, its under their purview for sure
Plz check latency in Call Of Duty Warzone or COD MW 2019.
framerate cap is a must have when you are playing with g-sync and v-sync enabled at the same time (at framerates near top on your monitor refresh rate). otherwise you'll have a lot of lag from v-sync engaging. without v-sync you'll have slight tearing near the bottom of the screen, so you kinda need both vsync and cap when playing with g-sync.
No wonder FC5 feels so laggy even with high frame rates, that input lag equals about 16fps
Ubisoft "optimization" strikes back
The Division 2, made by Ubisoft, has the best input lag in this test, and Fortnite is a competitive online shooter with worse input lag than the single-player/co-op FC5. You guys are fucking morons.
@@skoopsro7656 It's not the publisher, it's the developer.
@@skoopsro7656 Their multiplayer games like The Division and Rainbow Six are optimized extremely well. It's the single player ones where they just don't seem to care.
This is similar to what I have been doing for years. I don't have FreeSync or G-Sync and I love the smoothness of a vsynced fps. What I have noticed is that when you use vsync and a frame cap, it reduces the vsync lag significantly. I've been doing this since before in-game frame caps were a thing, so RTSS is what I always used; it can be hit or miss, but it's so nice when it does work.
Uhhh, a lot of your tests didn't even have the GPU utilized at 98-99%... you can't make some of the conclusions you stated because those tests were flawed. Battlenonsense's tests were much better.
I realize adaptive sync wasn't brought up, but from what I recall reading at Blur Busters some years ago, you should either cap your FPS within the adaptive sync range or run a framerate significantly above the supported range, because there is a sort of dead zone in between where latency is worse than either option. In practical terms this means you configure something older like CS:GO to run uncapped, while most newer games should be capped in an adaptive sync scenario. Even then, I'm unsure of the benefits of running something like CS:GO uncapped on a 240Hz monitor. I have a feeling capping at 240 (or 237) fps might be the better option.
I want to see Apex Legends results for this type of test.
I'm pretty sure Apex Legends benefits from a 144fps cap. I first noticed the difference when I tried playing on a 1440p monitor; then when I went back to my 240Hz monitor I lowered the cap to 144 and the game felt so much better.
@@adammeek2280 Yes, the engine is limited to around ~180fps by most accounts before you start getting strange issues. Some get away with a bit higher, some a bit lower.
If I had a GPU that was powerful enough 1440p @ 144-165hz would be my choice for a monitor.
@@TexasAce now my next question is what about modern warfare
@@adammeek2280 After seeing Linus's FPS video, I think that 1440p @ 144-165Hz would be my choice basically across the board. The extra resolution I think will give a larger benefit than the extra FPS at that point. Sooner or later we will get 1440p 200fps+ monitors, and that will be about perfect IMHO.
@@TexasAce I just tried it capped at 144 I have a 2080ti and usually leave it uncapped. May be placebo but I feel like it's much better capped.
Oh no... Chris.. what have you done...
This will only start another input lag and settings war... Oh no...
You guys cappin' your FPS to reduce input lag while I'm here struggling to reach 50fps playing Dota2
Same bruh same
Great video, investigation into things people talk about and figuring out if rumours or data is true is why I like this channel.
I didn't expect these results, I came in here skeptical about having any meaningful results thinking this might just be a "fluff piece" - but I was wrong!
I’d like to see an Overwatch only test with varying frame rate caps to find which value provides the lowest input lag, or whether it’s dependent on GPU utilization
Feels like all of this has been on blurbusters since forever, along with a lot of other optimization info.
Great video. Insights like this are valuable to me. I tend to use in game limiters to ensure G-sync is working.
This was all expected, and I also argued that it was just his "DX11/12" titles, which work the same way design-wise, where an engine frame cap can improve the latency.
What I'm more worried about is the general latency increase - seeing something like 60ms in Far Cry at 200fps seems atrocious. I'm part of the generation that started playing games when the average input lag was somewhere in the 10ms zone on a Win98 system, using the AGP port and CRT monitors - while games weren't running much above 100fps...
These days aiming in games feels very sluggish, even with the higher Hz and framerates, due to said internal engine lag increases. On Linux with OpenGL/Vulkan the lag seems to be way lower, so I just wonder if it's a performance-over-gameplay choice in DirectX implementations in general...
How much lag does the latest DOOM have? It being a vulkan game with 200fps cap.
Rewatch the video, it's pretty flawed. The premise of Battle(non)sense's video was to test the impact of capping FPS in GPU bound scenarios. This video pretends to verify those results, yet a few of the games tested aren't GPU bound to begin with. In addition, Battle(non)sense capped games at 95% GPU utilization while Tim capped them at a fixed FPS (not a GPU utilization percentage). It should be obvious that capping a game at 40% utilization will not yield input lag benefits. The original hypothesis was that a completely tapped-out GPU would yield lower input lag if given a little breathing room, not that capping the FPS at a ridiculously low number would. Why Tim did not cap at 95% vs 100%, or make sure that games were at 100% utilization, is beyond me, but these two factors disqualify the results. It needs to be completely redone.
@@giglioflex Additionally, he only tested the average (mean) latency. I'm much more interested in the *consistency* of frame times. I'll happily take a frame or even 2 of lag if it's _consistently_ that amount.
The enemy is JITTER - a particular variety of lag - not only lag itself. We should always want lower lag, but if we are given the choice between lower lag but with more variance, or equal or higher lag but with less variance - we should always choose the latter! You can adapt to lag, but only if it is consistent. You cannot adapt to inconsistent lag, AKA jitter.
Hail to the scientific method. Great video! I actually saw the Battle Nonsense video and was really surprised that such a simple solution would have such a big impact on input lag.
Strange really, Fortnite and Gears 5 both use the Unreal Engine 4 but display different results.
Not all that strange when you think about it, as UE can have any number of different plugins dealing with physics, controller input, etc. So it's how they integrate with the base engine that determines how this all works together latency-wise.
It only helps if you're GPU limited, because many titles will let the CPU queue up frames
If you test those games, you see no change. if you test dx12 games with reflex, you get no change.
If you test dx11 games with low latency mode on and vsync off, you get no change.
It's a complex issue but almost always a good idea to implement an fps limit your system can sustain
It's kinda sad that you *haven't* "discovered" that in the rest of the games you were CPU-bottlenecked, hence, no 100% GPU load, hence, lowering the GPU load won't make a difference.
And what testing method did you use to get those values - were you using apples and oranges?..
*Battle(non)sense* at least showed us *both* the test setup *and* high-speed camera capture as a proof. You showed us a Corsair ad, right in the beginning of a video, nothing else.
And the title. Do You Think Capping It Like That Will Make It Look Prettier Or Something, Or Make It Stand Out As More Professional?.. On the contrary.
Great work looking into this!
I had watched the Battle(non)sense video and I was very confused. It just didn't make sense.
If you limit the FPS by stopping the GPU for a bit of time after each frame (like RTSS or V-Sync), or by downclocking it (Radeon Chill), then most games queue up draw calls - maybe 2 frames, maybe 3, or up to 5. Depending on the FPS that is a big delay, but frametimes should be great. Anti-Lag can force that queue down to 1, but the limiter is still delaying the GPU.
So the theory here would be that ingame limiters can limit the frequency of draw calls - like one every 10ms for a 100 FPS cap.
And the GPU gets those the moment they are ready and can start working as fast as possible, getting the information to the display as quickly as possible.
...nice, but why is the improvement that big in BF?
52 down to 33 at 120 FPS - that is over two full frames... maybe Anti-Lag is not working as intended here.
...and what about frametimes? Having a queue of draw calls usually makes the frametimes better, because the GPU only ends up waiting for them in extreme situations. As long as the GPU has everything it needs in VRAM, it should be able to keep the frametimes flat.
These ingame limiters could add more variation to the frametimes - maybe not a problem at 120 FPS, but it could be visible at 60 FPS.
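A quick sanity check of the arithmetic above, using the numbers quoted in the comment (52 ms vs 33 ms at a 120 fps cap) plus a generic look at what a 3-deep draw-call queue costs at different framerates:

```python
def frames_of_latency(delta_ms, fps):
    """Express a latency difference as a number of frame times at a given fps."""
    return delta_ms / (1000.0 / fps)

# The BF numbers quoted above: 52 ms down to 33 ms with a 120 fps cap.
print(f"52 - 33 ms at 120 fps is ~{frames_of_latency(52 - 33, 120):.1f} frame times")

# And the general point about queued draw calls: the same queue depth hurts far
# more at low framerates than at high ones.
for fps in (60, 120, 240):
    print(f"3 queued frames at {fps:3d} fps is ~{3 * 1000.0 / fps:5.1f} ms of extra delay")
```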
Whenever I cap my framerate, frame hiccups occur. So I let it run free like an animal 😂.
Use RivaTuner Statistics Server
@@laggy0336 Already using it. My LG 24-inch 1080p monitor supports up to a 74Hz refresh rate with no G-Sync/FreeSync support, so I'm using Nvidia Fast Sync. Smooth, no lag, and whatever fps the game runs at I get no tearing.
The main reason you want to cap fps is to not hit the vsync cap. Especially with gsync you always want to be 2 fps under your max gsync range so you don't activate vsync
@4:56 Yeah well, 26% reduction in response time in Milliseconds
That means on a total basis of 1000 milliseconds you have about 0.007% response increase...
Just imagine the competitive advantage that this brings to the table :0
I think your results paint a clearer picture. I believe you see a benefit from a frame cap when your GPU has any struggle with a title, even if it's a small struggle. It could be related to particular visual settings. I think this topic should be investigated further.
I might put together a video testing my take on the situation. Is there a way to test input lag without the equipment you own?
All the games that showed no improvement weren't at 99% gpu usage.... Come on man.
This means that developers are not setting up the framerate limit the same way.
Some of them stall other processing too, causing bigger latency.
So it's time for them to add a low latency mode to their games, where they reduce the framerate but still handle the inputs.
We should create a "low latency" stamp which tells the user whether the game supports it or not (like RTX ON etc...)
Capping your frame rate wasn't the thing xD it was having your GPU at 99% load and its impact on latency.
Before I upgraded to a vega 56, my GTX 960 could push over 200fps in battlefield 4 on low settings when paired with an i7 8700k. The gpu usage was around 98% and capping the fps to 60 would've definitely reduced the gpu usage, but you cannot possibly tell me that those 60fps would've offered lower input lag than 200fps.
We saw a reduction in input lag when going from a gpu bound 190fps to 144fps in Gears 5, but I would've liked to also see 60fps, and even 30fps to see if that trend continues.
Hey hey hey hey Tim... Want to reduce input lag???
... Buy a CRT 😂
On Far Cry 5 the added input latency with capped FPS makes sense because you were not getting 98-99% GPU usage to begin with, so you're getting fewer FPS with the cap and thus more delay. Also, while RTSS makes the frametimes stable, it does feel like it adds some lag, and that's probably why you see latency similar to 99% usage - it's not that you have to use the in-game limiter, it's that RTSS adds lag. So everything checks out with Battle(non)sense: the best possible scenario is to have as high a frame rate as you can while not maxing out your GPU. A little CPU bottleneck will help here, or of course limiting your fps a little bit.
You actually failed to understand Battle(non)sense's concept and methodology.
I'm glad you looked into it. The Battle(non)sense video kinda blew my mind.
I'm sorry to say it, but this was not a good addendum to Battle(non)sense's video. Battle(non)sense demonstrated the input lag advantage of capping FPS in a GPU bound scenario. Unfortunately that premise seems to have been lost, as this video either does not use games that are GPU bound or caps the FPS way too low. Battle(non)sense capped the GPU at 95% vs 100%. In this video it's often 94% vs 34%.
The whole video needs to be redone.
@@giglioflex Yeah, I noticed that as well. 97% and 95% is not 99%.
10:06 No, you didn't discover that those games don't benefit from it... You discovered that you don't benefit from a frame cap when the GPU is not the bottleneck. That's what I see.
Capping frame rates has its benefits, but it is more about consistent frame times than about reduced latency versus uncapped scenarios.
A gamer will always feel consistent frame times more positively than they'll notice a 5 or 10 ms latency increase, which is basically about a one frame delay.