Since I had many people ask me about this, here is a short video on how to set up the on-screen overlay as seen in this video: th-cam.com/video/EgzPXy8YYJw/w-d-xo.html
My analogy for what the CPU and the GPU do is that every frame is a pizza and the GPU is baking them. When the GPU gets a pizza ready it hands it to the CPU, which folds a box for the pizza to deliver it. If you're GPU bound the pizzas are being made as fast as possible and the CPU has no trouble putting them in boxes. If you're CPU bound the GPU starts piling up pizzas and has to slow down because the CPU can't keep up folding boxes and putting the pizzas in them to be delivered.
And when you lower your resolution, you need to make smaller pizzas, which are quicker to make, which means boxes have to be folded for them at a faster pace.
It makes sense, but I think the CPU is not the last stage but the first, with a set of instructions for what the GPU must create. So I'd rather think of it as the CPU preparing the dough and the GPU mounting the ingredients and baking it. The GPU can only mount and bake a pizza from dough the CPU has previously prepared. If the dough gets too complicated (with many different ingredients and layers and whatever), then the CPU will take more time. But even if it's simple, the CPU can only prepare a limited amount of pizza-ready dough, and some GPUs have incredibly large ovens. In this case, even though we could bake 100 pizzas together, our CPU can only prepare 50 of them each time, so we'll use only 50% of our capacity. The thing with many CPU cores handling different amounts of workload can be like different people doing different parts of the preparation of the dough for a pizza, like when putting the dough to rest becomes a much longer and more time-consuming task compared to simply opening it into the correct shape. A GPU bottleneck would be when there is enough dough prepared by the CPU but the oven is too small, or when the oven is big enough but there are more complex steps in the mounting phase, such as when the client asks for filled borders, extra ingredients, or simply more calabresa, that take more time to cut.
Very good and interesting video, glad to see more people explaining things! We're going to make a video about this, but I will now add some of your case scenarios here and show your channel as well
Been watching your videos for years, and as you know, had a few pleasant interactions with you on Twitter as well, but never did I think you'd actually watch one of my videos! This really means a ton to me, really appreciate it!
One easier way to see when you have a CPU bottleneck is to use Process Explorer to examine the individual threads of the game. Cyberpunk will run dozens of different threads, but not every aspect of the game engine is multithreaded, and the scheduler will move some of those threads between various cores rapidly depending on the load (perfectly normal when a game launches 100+ threads). If you look at the individual threads, you will often see 1 thread using 1 core worth of CPU time, and at that point frame rates stop improving. A simple test is to use settings that get the GPU usage in the 85-95% range, then while gaming, downclock the GPU (e.g., lower it by 500MHz in Afterburner) so that it gets pegged at 100% while also lowering frame rates, ensuring a GPU bottleneck. Then gradually increase the clock speed of the GPU while looking at the threads in Process Explorer. You will notice a gradual increase in frame rates that stops as soon as one of the threads in the game reaches 1 core worth of CPU time. (To figure out what 1 core worth is, take 100 and divide it by the number of threads available on the CPU, including hyperthreading. For example, a 16-thread CPU would allow up to 6.25% of CPU time, representing 1 core worth.) Since a single thread cannot use more than 1 core worth of CPU time, that thread will have encountered a CPU bottleneck. PS, a system memory bandwidth bottleneck is very rare; the only time I have been able to replicate one on a modern system was with a lower-VRAM card, e.g., 8GB or less, in a game where the engine did not disable the use of shared memory. Once shared memory is actively in use (you can tell by the PCIe bus usage increasing significantly), and once it hits 50%, indicating saturation in one direction, you will notice a scenario where both the GPU and CPU are underutilized.
PS, in those cases you can get small boosts in performance by tightening the timings of the system RAM to improve the latency, since modern cards typically have around 25GB/s DMA access from a PCIe 4.0 x16 bus, but in some cases where full saturation is not reached, lower latency minimizes how much performance is lost while shared memory is in use. Once full saturation happens, frame times will become very inconsistent and the game will hitch/hang as well. A storage bottleneck will show dips in both CPU and GPU usage (most easily seen as simultaneous drops in power consumption).
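The "1 core worth of CPU time" arithmetic from the comment above can be sketched in a few lines. This is a minimal illustration of the math only; the function names and the 0.5% tolerance are my own assumptions, not tied to Process Explorer or any real tool.

```python
# Process-wide CPU % in tools like Process Explorer is averaged over all
# logical processors, so a single thread tops out at 100 / thread_count.

def one_core_worth(logical_threads: int) -> float:
    """Max CPU % a single thread can show on a CPU with this many logical threads."""
    return 100.0 / logical_threads

def is_thread_core_bound(thread_cpu_percent: float, logical_threads: int,
                         tolerance: float = 0.5) -> bool:
    """True if a single game thread is pinned at one full core (a CPU bottleneck)."""
    return thread_cpu_percent >= one_core_worth(logical_threads) - tolerance

# The 16-thread example from the comment: one core worth is 6.25%.
print(one_core_worth(16))             # 6.25
print(is_thread_core_bound(6.2, 16))  # True: that thread has hit its ceiling
```

With this, a thread sitting at ~6.2% on a 16-thread CPU is effectively saturating one core even though the headline CPU usage looks low.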
Memory bandwidth bottleneck? Like being limited by a 128-bit bus? I mean, AMD and NVIDIA have now released 16 gigs on entry level (RX 7600 XT + 4060 Ti 16GB) where the memory bus width is 128-bit, which is potentially limiting the performance
@@i_zoru For the video card, VRAM throughput largely impacts how well the card handles throughput-intensive tasks, such as many large textures and higher resolutions, as well as game engines that dynamically resample assets. GPUs can resample textures hundreds of times per second with very little overhead, but it is more intensive on the VRAM throughput. For games that are not using functions like that, a card like the RTX 4060 16GB works decently, but for games that rely heavily on throughput-intensive tasks the 4060 underperforms. It is also why the RTX 4060 seems to scale so well from VRAM overclocking, to the point where some games can get a 5%+ performance boost from just VRAM overclocking, where the RTX 4070 may get 1-2% at best in gaming. Outside of gaming, though, tasks like Stable Diffusion and other memory-hard workloads will scale extremely well from VRAM overclocking, especially when using AI models and render resolutions and settings that get the card to use around 90-95% of the VRAM. PS, Stable Diffusion will get a massive slowdown if there is any spillover to system RAM. For example, an RTX 3060 12GB and an RTX 4070 12GB will end up performing the same once system RAM is being used; you will also notice the power consumption of the cards drop significantly. The RTX 4060 drastically underperforms when there is spillover to system RAM; the crippled PCIe x8 interface instead of x16 makes the bottleneck far worse.
You could enable GPU time and CPU time in msec. For example if the CPU sits at 5ms per frame and the GPU time at 3ms per frame you'd conclude a CPU bottleneck and vice versa a GPU bottleneck.
Yeah, that's a good option too, but you would need to chart it as the on-screen overlay doesn't update quickly enough. But it can give you an idea at least 👍
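The ms-per-frame comparison in the exchange above boils down to one rule: whichever stage takes longer per frame is the limit. Here is a minimal sketch of that logic; the numbers and the idea of averaging a logged series (instead of reading the slow overlay) are illustrative assumptions.

```python
# Whichever stage spends more time per frame is the limiting one.

def limiting_stage(cpu_ms: float, gpu_ms: float) -> str:
    return "CPU bound" if cpu_ms > gpu_ms else "GPU bound"

# The example from the comment: CPU at 5 ms per frame, GPU at 3 ms.
print(limiting_stage(5.0, 3.0))  # CPU bound

# The on-screen overlay updates slowly, so averaging a logged series of
# (cpu_ms, gpu_ms) samples gives a steadier verdict:
samples = [(5.1, 3.0), (4.9, 3.2), (5.0, 2.9)]
avg_cpu = sum(c for c, _ in samples) / len(samples)
avg_gpu = sum(g for _, g in samples) / len(samples)
print(limiting_stage(avg_cpu, avg_gpu))  # CPU bound
```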
How to identify a hard CPU bottleneck in 10 seconds: turn down your (native) resolution and if your framerate stays about the same (or exactly the same) you are CPU bottlenecked, say ~30FPS at 4K and ~32FPS at 1440p. Why? Because if you lower your resolution your GPU should render WAY more frames, since there are A LOT fewer pixels to render. In this case your CPU would not be fast enough to (for example) process the NPC AI, BVH for ray tracing etc. to get more than about 30ish FPS, despite the fact that the GPU now "only" has to do maths for 3.7 million pixels (1440p) instead of 8.3 million pixels (4K)
Yeah indeed. When people ask me whether they are bottlenecked but they don't want to check it with something like this, then I always tell them to lower their resolution. If the framerate doesn't increase much you are held back by the CPU.
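The quick resolution test above can be put in numbers. A sketch, assuming a ~10% gain threshold of my own choosing as the cutoff for "framerate stayed about the same":

```python
# Pixel counts explain why the test works: 1440p has ~2.25x fewer pixels
# than 4K, so a GPU-limited game should speed up a lot when you drop down.

def pixels(width: int, height: int) -> int:
    return width * height

p4k = pixels(3840, 2160)    # 8,294,400
p1440 = pixels(2560, 1440)  # 3,686,400
print(p4k / p1440)          # 2.25

def likely_cpu_bound(fps_hi_res: float, fps_lo_res: float,
                     threshold: float = 0.10) -> bool:
    """True if dropping resolution gained less than ~10% FPS (e.g. 30 -> 32)."""
    return (fps_lo_res - fps_hi_res) / fps_hi_res < threshold

print(likely_cpu_bound(30, 32))  # True: a hard CPU bottleneck
print(likely_cpu_bound(30, 60))  # False: the GPU was the limit
```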
A faster way, or one used in conjunction with this, is to look at the GPU power consumption. The 4070 Super is a 220W card. When CPU bottlenecked, it was running under 150W.
Yeah indeed. I spoke about this, saying it gives you a good idea and indication, and then GPUBusy can confirm it. Well, it's still not 100% accurate, but 99% is good enough for most people 🤣
That's what I've been looking at as well. It's pretty accurate too. I have a 4070 super as well. When it drops to e.g. 210w in cyberpunk, it's always a CPU bottleneck situation, at least in my experience. Adding GPU Busy to my overlay now just confirms it.
@@bigninja6472 Notifications? Meaning as soon as a new video goes up you get notified? I do set it on each video to send notifications to people who have notifications enabled, but it almost never works, even on big accounts :(
This man's a genius.. Whilst experimenting with disabling cores ( the Intel issue of the day ).. turned it into one of the best explanations I've ever heard on a completely different issue.. can't wait for his Intel cores video ( wonder what I'll learn from that :-)
I only ask to be able to choose whether my content is paid or free, because it seems like it (and I personally) gets treated like pro paid work but generates income like free, jobless work, which is very confusing, and the discussion has been about everything but the question asked
This was an excellent video. From someone who started to understand this after getting into PC gaming 2 years ago, it's always quite jarring seeing the comments in other benchmark videos or even in steam forums such as "my system is fine because my CPU is only 30 percent utilized so it must be the game" or "why test the 4090 at 1080P" in CPU benchmark comparisons.
Appreciate the kind words! And I agree, often the comments on some of these videos are mindblowing. And they are also often made by people that are so sure that what they say is correct that it's just spreading misinformation, unfortunately. But it's great that you educated yourself on stuff like this within only 2 years of PC gaming. People think stuff like this is not important, but I think it is, as it can really help you plan your build and not overspend on something that'll go to waste. I see so many people that still have 4th gen CPUs rocking 4070 Tis or 4080s because "CPU doesn't matter".
@@Mostly_Positive_Reviews I am curious though, 10:57, does frame gen always get rid of the CPU bottleneck? I notice in some games like Hogwarts Legacy and Immortals of Aveum, where I am CPU bound, GPU usage is unusually low at times. Digital Foundry did a video about frame gen on console in Immortals of Aveum, and the increase in performance was relatively modest despite a probable CPU bottleneck, which I suspect is frame gen not entirely getting rid of that bottleneck. Could there be other limits or game engine limitations frame gen doesn't take into account?
@@Stardomplay I only used frame gen in this scenario as, when this close to being fully GPU bound, it almost always ensures a GPU bound scenario. But it doesn't always. Hogwarts Legacy is actually a very good example where even with frame generation you aren't always GPU bound, especially when using ray tracing. Another good example is Dragon's Dogma 2 in the cities. Sometimes other instructions just take up so much of the CPU in terms of cycles that it is almost always CPU bound, even at 4K. In Dragon's Dogma 2, the devs blame the poor CPU performance on AI in the cities for example, and there, even at High settings, 4K with Frame Gen on this 4070 Super, I am CPU bound. Step outside of the city and you become fully GPU bound. When things like that happen people usually call a game unoptimized, and in the case of DD2 it is definitely true. Render APIs can also impact CPU bottlenecks significantly. For example, DX11 doesn't handle multithreading well at all in most games, and the chances of you running into a CPU bottleneck with a DX11 game are greater in my experience. With regards to Immortals, I have only ever played the demo in the main city, and I personally find the game to be very stuttery there, regardless of the CPU / GPU used. Something in that game just feels slightly off in terms of motion fluidity. But to answer your question: no, frame gen doesn't always eliminate a CPU bottleneck, but it can help a lot in alleviating it under the correct conditions.
I cap the FPS on my GPU so it runs cooler & smoother, since FPS performance over a certain quality threshold amount is just benchmarking giggle numbers, unnecessary strain on components, & extra heat generation for my bedroom. Running a GPU at max FPS is more likely to create occasional stuttering & less-smooth play, since it has no additional overhead to handle the sudden loads that create drop-off lows. So, my R9-7900 CPU likewise is probably just jogging along with my capped GPU instead of sprinting. Works well for me.
That's the great thing about PCs, there's a setting / configuration that caters to everybody! I also prefer to try and max out my monitor's refresh rate with a little headroom to spare to try and keep a more consistent framerate.
One is a CPU bottleneck (the 4-core example); the other is a game engine bottleneck that makes it seem like a CPU bottleneck, which, as you said, sometimes is the CPU itself and sometimes isn't. There is a difference and it is pretty damn important to note.
Sure, it's the game engine not properly using every core, but that's the case for 99% of games nowadays. Still, if you'd use, let's say, a 7GHz 16900K, you would still have a low CPU % usage but that GPU would be closer to 100%. But by that time you'll probably have a better GPU and the issue will remain, though your framerate might have doubled lol
@@blakey_wakey If the game only uses one or two cores (like older games such as StarCraft II), or if the game uses multiple cores but uses them at a very low % in your Task Manager or Afterburner readouts. Software usually leaves a lot of compute performance on the table when it comes to CPUs. This is why you can have a lower-powered console use a CPU from 4 or 5 gens behind PCs yet still be GPU limited; in the console's case it is much easier to program for that specific set of hardware, and the limiting factor is almost always GPU compute. However, those are just two very simplified examples of a game engine being the bottleneck. There are others, but they are more in the weeds. Something like... a cache bottleneck (aka memory latency within the core), or a core-to-core latency bottleneck (like with the Ryzen chips with two dies on them). Every computer has either a software or hardware bottleneck though, as if it didn't, your games would run at infinite FPS.
Also, even if your CPU load says 100%, that is most likely a lie. That is actually the average of the CPU’s P cores load. Meaning that if you have both the GPU and the CPU usage at 100%, you can still get a better GPU and not bottleneck it. As long as you don’t play something like Star Citizen. In that game, every single GPU gets bottlenecked no matter what CPU you have 😂
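The "averaged load" point above is easy to show with numbers. A minimal sketch with made-up per-core values; the 99%/60% thresholds are my own illustrative assumptions, not from any monitoring tool:

```python
# The headline CPU % averages all cores, so one saturated core hides in it.

def average_load(per_core: list[float]) -> float:
    return sum(per_core) / len(per_core)

def hidden_single_core_bottleneck(per_core: list[float]) -> bool:
    """One core pegged while the overall average still looks comfortable."""
    return max(per_core) >= 99.0 and average_load(per_core) < 60.0

# 8 cores: one game thread pegs core 0, the rest idle along.
loads = [100.0, 30.0, 25.0, 20.0, 15.0, 10.0, 10.0, 5.0]
print(round(average_load(loads), 1))         # 26.9 -> looks "fine"
print(hidden_single_core_bottleneck(loads))  # True -> it is not fine
```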
I also noticed Nvidia users in Cyberpunk say they are CPU bottlenecked at about 50% CPU usage, but on AMD it often goes right to 100% in this game; AMD GPUs use the CPU differently
Almost every high-end game I played had stuttering issues, and when I checked my CPU usage it was at 100%. At that time I didn't really know what it meant, but then I showed it to a friend and he told me that my CPU was bottlenecking my GPU, and told me to cap the FPS to 60 and use frame gen. After that my CPU usage went down and I no longer had the stuttering issues.
Yeah, CPU at 100% will definitely do that. I made a video a while ago showing how capping your framerate can resolve a lot of stuttering issues when that happens.
Shout out to my fellow South African🤙, great video buddy, very informative. I’ve been out of the pc gaming scene for a while now, so I started searching for videos before I upgrade my hardware, stumbled onto your video. Subscribed, I will definitely support your videos in the future.👍
Brilliant video, very clear and concise! It's crazy how many people in comments on benchmarks spread false information. I think everyone watching benchmarks should be made to watch this video first. Thanks for the upload!
For the layman out there: if you see your GPU usage below 90-100%, then you most likely have a CPU bottleneck. That CPU bottleneck however could be due to your CPU being underpowered compared to your GPU, or it could be due to the game/engine itself (like Dragon's Dogma 2, Jedi: Survivor, The Witcher 3 remaster)
All CPUs are currently bottlenecked at 4K, for example; it wouldn't matter much what CPU you have at 4K. If you have an older CPU, the biggest difference you would notice would be in the 1% lows. This man talks a whole lot, using so many words to describe something very simple.
Lol! Firstly, not every CPU will max out every GPU at 4K. There are plenty of benchmarks out there showing that even something as recent as a 5600X can bottleneck a 4090 at 4K. Even a 4080 can be bottlenecked at 4K by a slower CPU. Secondly, I explained it in "a whole lot of words" because it's not always that simple. Yes, you can get a good idea by checking GPU usage, but using GPUBusy you can get a lot more info. And that's what the video was about...
Thankfully there’s a solution, the 7800X3D. Fastest gaming CPU on the market. Even digital foundry ditched Intel and went AMD. Do the right thing people
This might sound like a nitpick, but as an RTSS overlay user myself (or at least yours looks pretty much the same), it bugs me to see reversed usage and temp placements for CPU and GPU: GPU is temp - usage and CPU is usage - temp @.@
Hahaha, you aren't the first to point this out. It really was an honest mistake. The default order in CapFrameX is slightly different and I didn't pick it up. It has been fixed in subsequent videos.
Since I discovered this video over a month ago, thanks to you, my games now have better FPS. Yeah, it's bad if the GPU usage is only 50% or lower even though the CPU usage is also at 50% or lower. Getting the GPU usage to 100% does in fact give me way better FPS and way less stuttering. Taking the load off of the CPU really helps significantly. What is really strange is that going from 1080p to 1440p in some games makes almost zero difference; it's because the lower your resolution & graphics settings are, the more likely you are to come across a CPU bottleneck.
I am really glad I could help at least one person. And you are 100% correct, if you go from 1080p to 1440p and there is no difference you have a very big CPU bottleneck. Not many people understand that fact. And another important one is that when you are GPU bound your CPU has some breathing room, resulting in better lows. So even though you might not always gain 50% FPS by getting rid of a CPU bottleneck, you will almost always have better lows / less stuttering. Thanks for leaving this comment, it is really appreciated!
@@Mostly_Positive_Reviews Yeah it's better to be GPU bound, CPU bound no thanks, I also subbed. You do know what you're talking about when it comes to PC, you are the man! Edit: I do like the fact that if you go from low to max graphic settings, in some cases there's little to no penalty, sometimes you might even get an fps increase because you're taking the load off of the CPU.
@@niko2002 Thank you for the sub. Just hit 3000 yesterday so I am very grateful for that! I've been around the block when it comes to PCs, but I am still learning every day, and that's what makes it so enjoyable for me!
@@Mostly_Positive_Reviews Yeah since me and my brother Built a PC with a Ryzen 7600x with an Rx6700 (non-xt), I've been very interested in computers. But I also love cars & motorcycles, it's just more fascinating to learn outside of highschool. I'm glad TH-cam exists, that's how I got interested in computers, cars & motorcycles. Because of that, I know how to build computers, which I have learned from reading a manual. I mean you can make custom power switches for PC and it can go anywhere you want, make a PC case out of cardboard, lmao the creativity just never stops, and that's what I love about computers.
@@niko2002 In the olden days of TH-cam you could actually find videos on how to make an atomic bomb, so yeah, there really is something for everyone, and it is a great place to learn for free!
You can also have a CPU bottleneck even if only 1 core hits 100%. On the other side, if you play online games where you need lower input lag, it's better to have a CPU bottleneck. The alternative is to cap your FPS (or Reflex & Anti-Lag), but a CPU bottleneck is preferable if you can get much higher FPS than the monitor's Hz. Actually, both CPU% & GPU% should be as low as possible while maintaining high FPS (lower than 60-70% for the GPU & the individual CPU cores). Even if the monitor can't show more frames than its refresh rate, the input lag improves (higher 1% & 0.1% FPS).
Yeah, agree! In multiplayer shooters most people prefer much higher fps and don't care about screen tearing, as it is all about the lowest input lag and high framerates.
It is not preferable to have a CPU bottleneck as this results in frame pacing issues. Always always always use a variable refresh cap just under your CPU limit for the best combination of input latency and motion fluidity. The exception to the VRR cap rule is if your game runs well in excess of your VRR window (600 FPS on 240hz display for example). In this case cap your game to a multiple of the refresh rate for even lower input lag. Do not. Ever. And I mean ever. Leave your framerate free to do whatever it wants.
@@edragyz8596 I'm talking exactly about much higher FPS than your monitor's Hz. You're talking about the average gamer. What you're saying makes sense, but I'm talking about competitive gaming (3v3, 5v5, 8v8 etc.), especially if you play with a worse connection than your enemies. Fluid or consistent frame pacing doesn't mean much if it comes with worse input lag in a game where you need every possible ms. VRR has an inherent latency penalty depending on your system; if you have an expensive PC & a good monitor, around 1ms is the best possible penalty. If the FPS is around the monitor refresh rate I will prefer the driver-based Ultra Low Latency Mode if the game doesn't have a built-in Nvidia Reflex option. I play on 360Hz with a 780 FPS in-game cap, which allows my CPU cores to stay around 55-75% max & GPU around 60-70%, at 720p. This is the best setup I've got, and while it's not always consistent it gives me the best possible input lag. When I cap at 354 FPS I get the same 1% lows but worse results, even though the game feels better. You need at least 4-500 FPS in these games. If you don't feel the difference you probably play with low ping, which compensates, and that slight delay isn't going to have a big impact on your game, but every player whose location is far from the server will feel a difference. Also, it doesn't matter whether you can feel a difference or not; if you have an opponent you have trouble killing, you will find that this way you have better chances to kill him in 1v1 situations.
The main problem here might not be what you described. Frame generation has a lot of different GPU units doing different types of stuff; shaders or RT are usually the slowest ones, so the rest of the GPU rests. And then after that there's also post-processing for a ready frame... that's how you get an underloaded GPU in most cases. And BTW, the logic of how and in what order a frame is generated, and what the game engine does, is different in every game, so you can get the same or a totally different CPU load on a different GPU!
I don't understand why they just don't put a powerful cpu into a GPU, add 128 gigabyte of ddr 7 shared ram and 4 ssd slots directly on the graphics card! It would get rid of the ridiculous bottlenecking! It could future proof for many years also. We are almost in 2025, we should have 8k gaming graphics! Nothing should be 1080p or 1440p! Its like these people still are using Betamax and VCR's in their home entertainment systems!
I was bottlenecked with my Intel 9900X and my 3080 Ti. I simply doubled my FPS in all games. And now I use the Lossless Scaling software to get frame generation and run all games at 150-250 FPS. That works great.
RivaTuner has built-in options to show "GPU busy" time in milliseconds and "frame time" in milliseconds. Whichever takes longer is the limit. It uses a PresentMon preset to do the math and spits out a "Limited by: GPU/CPU" message during the benchmark
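The "whichever takes longer is the limit" rule above can be sketched as a tiny classifier. This is an illustration of the idea only, not RivaTuner's or PresentMon's actual math; the 5% slack allowance is my own assumption to absorb measurement noise.

```python
# Per frame: if GPU-busy time fills (almost) the whole frame time, the GPU
# is the limit; a large gap means the GPU sat idle waiting on the CPU.

def limited_by(frame_ms: float, gpu_busy_ms: float, slack: float = 0.05) -> str:
    if gpu_busy_ms >= frame_ms * (1.0 - slack):
        return "Limited by: GPU"
    return "Limited by: CPU"

print(limited_by(16.7, 16.2))  # Limited by: GPU (busy nearly the whole frame)
print(limited_by(16.7, 9.0))   # Limited by: CPU (GPU idle ~46% of the frame)
```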
You can also have GPU-Z running on your second screen. Go to sensors tab. It will give you PerfCap Reason. It might be Thermal, which is obvious but it can also give voltage related reasons, such as Reliability Voltage which means cannot boost higher at current voltage, or something else it will indicate.
In a huge number of cases where there's low CPU usage and not 100% GPU usage with an unlocked framerate, it's either a software problem (the game is not properly optimized) or a memory bottleneck, because if the same CPU had much more and faster L1/L2/L3 cache I'm sure it would get much higher framerates. Much faster RAM with lower timings also counts for increasing CPU performance.
Your 2nd scenario isn't necessarily CPU bound. All it shows is that you aren't GPU bound. Your limiting factor could be the CPU, but not usage related, so like the cache or something. Or it could be engine limitations, or IO. Or it could be GPU memory bound; that won't show up in utilization. Typically if you're CPU bound at less than 100% utilization you'll still see one of the threads pegging 100%. I'd imagine frame gen also moved the limiting factor to your GPU memory speed as well.
Yeah, for sure a lot of variables in the second scenario, and I touched on it briefly to say it's not necessarily just the CPU in this case, but many people still just call it a CPU bottleneck for simplicity.
@@Mostly_Positive_Reviews it’s definitely a rabbit hole I got sucked down for a while. Really the only conclusion is how much we lack in monitoring capabilities tbh.
@@BulkybearOG 100%. If there was an app to tell me definitively "RAM bottleneck" or "PCIE Bandwidth bottleneck" or "SSD bottleneck" I would buy it in an instant!
One of my computers has the 12400F, an RTX 3050, and 3200MHz RAM. No matter what settings, I was never able to saturate the GPU to anything close to 100%, whether it was running 1920x1080 or in the 720p range. Great video, great information. Thanks.
I have a 12400F in my recording system. It's a very underrated CPU and it doesn't break the bank. It should be able to fully saturate a 3050; I use mine with a 3060 and it can keep it at 99% usage in most games I play.
@@Mostly_Positive_Reviews Must be the 100 dollar motherboard. It was a build I threw together to replace the system I had plugged the 3050 into, which was built in 2007. If I had the time I would do some investigating as to why that 2nd system cannot peg the GPU.
Thanks. I learnt a lot from your video. But as I checked this with Tomb Raider, even when the GPU Busy Deviation was lower, the CPU usage became higher. Is this inconsistency exactly what you said that this method is not 100% accurate?
Using DLSS, or any upscaler, renders the game internally at a lower resolution. Here I was using DLSS Ultra Performance, so 540p, to enforce a CPU bind. When I started rendering at native 1440p by disabling the upscaler, the load on the GPU became enough for the CPU not to be the bottleneck anymore as the resolution then went from 540p to 1440p.
@@Summorial2 DLSS Frame Generation works best when CPU limited, and thus, the more CPU limited you are the better it can work. I personally always use DLSS Quality with Frame Generation just to bump up the frame rate, make it a bit more CPU bound, and then FG can take care of the rest. In some games there won't be a difference, like Spider-Man Remastered for example. There at 1440p High I get the same frame rate whether I use DLSS upscaling or not, and in others it makes a bigger difference.
Thank you, appreciate it! Yeah, the 5600G is definitely holding that 4070S back. Good thing is that you can just slot in a 5800X3D without changing anything else and you should be good.
@@Mostly_Positive_Reviews Or just play at 4K DLAA and problem solved lol, the CPU won't be the bottleneck. I have a 4070 paired with a 12700K, and at 1080p I am CPU bound, but not at 4K. If you aim for 4K 60 then a bottleneck is a much rarer scenario. If you aim for 1080p 240 FPS then you really need a strong CPU for sure, maybe even a 7800X3D.
There's also something else that isn't often mentioned - GPU usage impacting input delay even in CPU bound scenarios. Explanation: if you use an FPS limiter in competitive games, for example I use V-SYNC (nvcpl global On) + G-SYNC with a 200Hz monitor in Overwatch 2, that automatically locks my FPS to 189 to prevent V-SYNC activation at all times. This results in a certain amount of input delay. Then if I reduce my render resolution below 100% and use FSR for example, this will result in the same 189FPS cap, but lower GPU usage and therefore lower input delay, because GPU Busy times are reduced. This is why people who play competitive titles like Overwatch, Valorant or CS would still use low graphics settings even on an RTX 4090. Additionally, features like NVIDIA Reflex + Boost will result in even lower input delay in these scenarios, because the +Boost setting causes your GPU clocks to stay at the minimum of the Default Max Clock. It's the same value as if you used the Prefer Maximum Performance setting in nvcpl. This results in your GPU Busy times being reduced even further, on top of the reduced render queue which is the main purpose of NVIDIA Reflex itself without +Boost enabled.
Yeah, you're right, this topic does not get a lot of airtime, and I think it's because it's not that well known. I might think of a way to string a video together to explain the basics. Many people aren't aware that Reflex + V-sync actually caps your framerate just below your monitor's refresh rate for this exact reason. It only needs very little additional GPU headroom to reduce latency. For example, my 165Hz panel gets capped at 158 fps with V-sync + Reflex + G-sync. It also prevents the framerate from hitting the V-sync limit, which would also increase input latency, as you said.
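The automatic cap described in this exchange (Reflex + V-sync keeping FPS just under refresh) is often approximated in the community with the formula below. Treat this as an assumption, not NVIDIA's documented behaviour; it just happens to land close to the observed values:

```python
# Commonly cited approximation: cap = refresh - refresh^2 / 3600.

def reflex_style_cap(refresh_hz: float) -> int:
    return int(refresh_hz - (refresh_hz * refresh_hz) / 3600.0)

print(reflex_style_cap(165))  # 157 -- close to the ~158 fps observed above
print(reflex_style_cap(240))  # 224
```

The margin grows with refresh rate, which matches the idea that the cap only needs a little GPU headroom below the V-sync limit.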
During the first scenario, I only enabled 4 cores, no hyper threading, and no e-cores, that's why at 100% CPU usage power usage is lower than it would be if all cores were enabled.
Crazy indeed! The 2700X was such a good CPU when it launched, and still isn't terrible, but it's amazing to see how far things have improved since then. You do have a saving grace in the 5800X3D though, which you can just plop in.
What I'm hearing is: unless the CPU is maxed out at 100% and the GPU is chilling, which indicates an obvious CPU bottleneck, anything else will be damn near impossible to tell due to the game/software being used.
Yeah, pretty much. You can check other things, but not just with normal software. So for instance, if you reduce your CAS latency and the framerate increases, you know it's more RAM related. Similar thing when you increase your RAM speed. I mean, on a very basic level. It obviously goes a bit deeper than that, but there are things you can do to somewhat determine why the GPU is not reaching its max potential.
So in the case of scenario 2, what would be the bottleneck? You mentioned that it could be anything system related bottlenecking the CPU and thus bottlenecking the GPU. How would you go about diagnosing the cause of the bottleneck on the CPU?
Identifying exactly what the issue is becomes a bit more tricky, but it can be done. You can first overclock the CPU a bit and see if it improves your framerate. You can also tune memory a bit to see if that improves it. But don't do 2 things at the same time, otherwise you won't know what improved it. It can be that the CPU is just not fast enough, memory speed, or memory latency, or all 3. But in the case of the 2nd scenario I wouldn't really worry about it too much unless your performance is severely impacted, and then in most cases all that will fix it to a greater degree is a new CPU and/or new RAM, which is rarely worth the cost unless, as I said, your performance is severely impacted.
If someone has got an i3 CPU then obviously that person is playing at 1080p. Playing at 1440p with an i3 is obviously bottlenecking. I have an i3 and play Cyberpunk at 1080p with no bottlenecks whatsoever.
One of the ways to alleviate a CPU bottleneck is to increase the resolution. The lower your resolution, the more frames your GPU can render, and the more work is required from the CPU to keep up.
Hello brother, if the Intel i5-13600K has better 1% lows than the AMD R5 7600X in gaming with an RTX 4070 or RX 7800 XT, then sir please tell me the right answer: should I buy Intel or AMD?
Tough one. If you go 13600K you won't be able to upgrade much, except to a 14700K for example, whereas the AM5 platform should get a few more upgrades. It also depends on price. If you don't care about the upgrade path then go for the 13600K, as it generally has better memory overclocking as well.
@@Mostly_Positive_Reviews But what if I never overclock either the AMD R5 7600X or the Intel i5-13600K, because I am afraid I'll burn my CPU by overclocking? Then which processor should I buy? Brother, if you give me your Instagram ID or any contact for picking the right components, I'll pay you some money.
@@goluop0982 If you don't want to overclock then I'd say go for the 7600. That way you can still upgrade to a 9000 series CPU later without having to change motherboards. And the lows on 13th gen are only slightly better anyway. They become much better with overclocking, but if you aren't going to overclock I'd say go for the system with the better upgrade path. I don't have IG or anything else, except Twitter, but that account is on hold for now. Really don't need to pay me ;) You'll be perfectly fine with a 7600X, B650 motherboard, and 32GB of DDR5 6000MHz CL30 memory. If you want to save a bit of money then go 5600MHz CL28/CL30, but the price difference shouldn't be that big between these kits anyway. You can then get a 750W power supply, which would be perfect for the 7800 XT / 4070, whichever you decide to buy. I will say that if you decide to buy the AMD CPU, go for the AMD GPU as well, as Smart Access Memory on AMD is supported in more games than Resizable BAR from Nvidia. But really, either system will be perfect for 1440p gaming. The 7800 XT should last a bit longer due to more VRAM, but the 4070 will be slightly faster in RT, and it also has DLSS, which currently has better image quality than FSR.
You can check if the Ryzen 5 7500F is available in your area. It will save you some money, and it's basically the same processor as the 7600 but without the iGPU, which you don't need since you have a GPU.
@@Mostly_Positive_Reviews Brother, tell me, what is the reason to buy the Intel i5-14600KF? Is there any reason? Is it faster in gaming than the AMD R5 7600X at stock speeds?
There are more and more South Africans finding these videos, and I love it! In this video I used CapFrameX instead of MSI Afterburner. It still uses RTSS but it has additional overlay options 👍
@@Sebastianino Hmm, make sure you have the latest version installed perhaps? Other than that, check for the correct labels in the Overlay tab. It is called GPU Active Time Deviation, GPU Active Time Average and Frame Time Average.
@@Mostly_Positive_Reviews I did not have the latest downloaded... Funny, because the search result says 1.7.1 is the latest, but inside the website 1.7.2 is the newest.
10:10 - I don't want that. The graphics card has much higher power consumption. Does a bottleneck negatively affect the lifespan of CPU and GPU components? Sorry if I didn't understand something. English is my second language and I won't always grasp everything right away. If in all scenarios the frame rate is sufficient, because let's say it is 100 frames and I only need 60 frames, then with or without a bottleneck it will come out the same? Is it a matter of individual preference, where a person, for example, prefers less power consumption over dealing with a bottleneck? Thanks for the video! That was important. Edit: I plan to use the 12400F in the future along with the 4070S. I have seen that this processor, despite its low price and power consumption, performs nicely in company with this powerful graphics card. I wonder if there will be any problems. However, I haven't seen tests related to the bottleneck of these two parts. Yet it's a much weaker processor than the one in your video. In order to put the GPU on par with the weaker CPU, an undervolt of the GPU would be a good idea, right? Because I only play in FHD at 60fps. I'm old school ;D
If your main aim is 1080p 60 fps at all times then yeah, this doesn't mean much to you really. This is only for people who want to maximize their framerate by not having the CPU limit the GPU, as the GPU is generally the most expensive component in your system. The 12400F is a great little CPU. I have one in my recording PC, and yes, you will be bottlenecked by it with a 4070 Super at 1080p. But in your case that doesn't matter, as the CPU is more than capable of doing 60 frames per second, and if you cap your framerate to 60 fps both the CPU and GPU will have lots of idle time. In your case I would say don't worry about this video at all. Most modern CPUs are capable of doing 60 fps, and if you cap it to 60 fps the full potential of the components isn't important at all.
Yeah, that's the right approach for sure. Set yourself a framerate target, and if you can reach it that's all that matters. Doesn't matter if you barely reach it or if you have 50% headroom; as long as you are happy with your performance nothing else really matters.
Man, could you help me? I need help and nobody has been able to help me yet. I've got an RX 6650 XT and Ryzen 5 3600. Other people with the exact same setup, or an even better GPU, don't have the huge stutter I have in Battlefield 5, Battlefield 2042, Palworld etc. I tried EVERYTHING and it looks like a CPU bottleneck to me. But why do other people have a slight bottleneck but don't get stutter so bad the game is unplayable like mine is? Like it's the same exact setup? I've got fast 3600MHz CL18 RAM, an NVMe M.2 980 Pro, temps are well under the limit and my mobo is great too. XMP on, SAM / ReBAR on etc...
Hey man, sure, let's see if I can help. Can you perhaps upload a video while using an overlay to show CPU / GPU usages etc? You can upload to your TH-cam, list it as "Unlisted" and mail me a link so I can have a look? My email address can be found under my info on my channel 👍
@@Mostly_Positive_Reviews I will do that in a couple of days. I found the program called PresentMon and I'm trying to figure out if I just have an insane bottleneck. But thank you so so much dude. I will do that :)
Anytime! If you need help, I have a video on here about setting up RTSS with MSI Afterburner. It shows you step-by-step how to set it up to show CPU and GPU usages etc. Once you are ready in a few days, send that email and we'll take it from there 👍
You can still be CPU bound in that case, but it's more likely that the game is just not utilizing the hardware well, which means it's an engine / software issue.
I've got an i5-10400F and 3060 and my CPU usage is 100% while my GPU doesn't go past like 46-47% usage. I honestly don't know if this is ok and what to do about it. Maybe my CPU is just too weak for the game?
@@Mostly_Positive_Reviews Cyberpunk. It’s done this in other games but I was able to change settings to get it to stop but for cyberpunk no matter what I change it stays at 100% usage
Okay, makes sense in Cyberpunk. I had a 9600K and it too would run at 100% usage all the time, with my 3070 running at around 80%. Unfortunately Cyberpunk is just too heavy on your CPU, and it's holding your GPU back. 100% CPU usage will also result in more stuttering. The only solution is to upgrade your CPU if you want better performance. If you are fine with the performance as it is then don't worry about it. The CPU running at 100% in games is not going to cause issues, except for not making full use of your GPU, and the increased stuttering.
I wish someone would create an algorithm/chart into an app where you selected cpu & gpu then selected “desired fps” & it would automatically print the graphics settings you need to apply for your game.
@@Mostly_Positive_Reviews I have an i7-6700K with an RTX 3060 Ti and I don't know if I have a bottleneck or not hahah. My GPU is at 95-99%, my CPU at 70-80%, with ultra settings with RT. Thanks, DLSS frame generation. Or should I thank AMD for open-source FSR 3.0 🤣🤣
How do I tell if I am GPU or CPU bound for newer games at 1080p? As a noob, I have no idea what is what. I went with a build on suggestion from a friend 🤷♂️
My build:
Motherboard: Gigabyte X570 UD
CPU: 5700X3D
GPU: RX 6600 PowerColor (or something)
RAM: 24GB @ 2133MHz (soon to be 2x 16GB @ 3600MHz)
PSU: CoolerMaster 850W Gold
Cooler: Peerless Assassin 120mm (dual fan)
Case: Corsair 4000D + 2x Corsair LL 120mm intake fans
That CPU is definitely good enough to keep that GPU fed, so you don't have to worry about it. You'll see a decent uplift in performance in CPU-heavy games with the faster RAM for sure. But at 1080p you'd even be fine with a 6700 XT in that system in GPU-heavy games.
Yeah, that's the correct approach. Depending on the performance I'm getting, it's not worth upgrading a whole platform for a 10% improvement kind of thing.
Great video!! How do you get the RAM and VRAM to appear in GB instead of MB, and how do you put the labels explaining what is what on screen? Can you provide your Afterburner profile? 😅
Simple: your CPU is at 100% but your GPU is not even close to 90%. With dynamic boost on both AMD and Nvidia based GPUs, that 80% could simply be P2, not P1.
Sure, but not all CPU bottlenecks present themselves as 100% CPU usage. Your CPU usage can be 40% and you can still be CPU bound. It's possible that a single thread is running at 100% and the overall usage is reported lower because it is averaged out across all cores, or your most-used thread can be sitting at 60% and you're still CPU bound because it's just not fast enough to keep the GPU fully utilized.
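The averaging effect described here is easy to sketch in a few lines. This is a toy illustration, and the 95% / 90% thresholds are assumptions for the example, not fixed rules:

```python
def average_usage(per_core):
    """Overall CPU usage as most overlays report it: a plain average across cores."""
    return sum(per_core) / len(per_core)

def hints_cpu_bound(per_core, gpu_usage, core_threshold=95, gpu_threshold=90):
    """One saturated core plus an underfed GPU hints at a CPU limit,
    even when the averaged usage figure looks low."""
    return max(per_core) >= core_threshold and gpu_usage < gpu_threshold

# Eight cores: one thread pegged at 100%, the rest mostly idle.
cores = [100, 35, 30, 25, 20, 20, 15, 15]
print(average_usage(cores))        # the overlay would show just 32.5%
print(hints_cpu_bound(cores, 70))  # yet the GPU sits at 70% -> True
```

In practice you'd feed this from a per-core monitor (CapFrameX, or something like psutil's `cpu_percent(percpu=True)`), but the point is the same: the average hides the one pegged core.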
Hi, I have a weird situation where I can't find much help (due to lack of players maybe). In Avatar: Frontiers of Pandora, when benchmarking, the CPU usage reported is just 11%. It's crazy low, but I think the CPU is already doing what it can (stuttering heavily, and all graphics at the absolute lowest with frame gen). So why might it read 11% usage or less? There are 4 physical cores in the old Ryzen 5 3550H, so at least a 25% reading would make sense if only a single core were being used... But 11%?
@@Mostly_Positive_Reviews A 1650 4GB; it's even less powerful than an iGPU today :), but the usage on that is much less than 50% at low, and 30% at lowest. So... my GPU isn't doing anything really (maybe) 😅
Hmm, thinking about it, maybe that 4GB is the issue. Not anywhere near the minimum required VRAM, right? The reported VRAM usage is 3.3GB out of 3.8GB, but maybe in reality it's not enough? Not sure though... A lot of smoother games ran with usage edging 3.6-3.7GB (FC6, Cyberpunk 2077 PL, Satisfactory).
@@dzxtricks This game is quite heavy on VRAM, so I wouldn't be surprised if it is that. That CPU should be sufficient to fully utilize your GPU, but if it runs out of VRAM there's not much it can do, so I think you might be right.
I always looked at it as: GPU at or near 100% utilisation, then you're GPU bottlenecked. GPU consistently not at or near 100%, CPU bottleneck. This way does fall apart if a game is fps capped, for example.
I am 99% sure it limits CPU performance, as the GPU frametimes still show say 6ms, but CPU frametimes show 16.6ms for example. But that would reduce all usage if you are well below the max your system is capable of rendering. It can help with smoothness indeed. G-sync does not limit the framerate, so if you go above your monitor's refresh rate you will see tearing. So limiting it a few frames below your monitor's refresh rate will prevent tearing, and with G-sync enabled you will get a much smoother presentation.
@Mostly_Positive_Reviews Thank you for responding. I'm using a 165Hz Dell monitor; when G-sync is enabled I'm getting 158 fps without any frame capping. Should I still limit frames in games to, let's say, 155, or limit to 158? I'm getting strange tearing while gaming; I'm getting 120 fps and more but I can still see the tearing. 😞
I have this problem with Cyberpunk 2077: CPU at 100% all the time, GPU at 50-80%. The AMD overlay says I have 70-80 fps, but when I use a mod that shows fps it is 30-40, and it feels like 30-40.
What are your specs? And do you use AFMF? When you enable AFMF your final output will be double what is shown in normal FPS overlays, with latency similar to 30-40 fps if the AMD overlay says 80 fps.
@@zerenxx8841 AFMF is AMD Fluid Motion Frames. There is a toggle for it in the AMD Radeon Software, Adrenalin. See if that is enabled? Also, the 3770K is for sure holding back the 6600, unfortunately.
Thanks. This showed me that my GPU is sitting idle waiting for frames to be generated, and it justified my CPU upgrade if anyone asks :P The 3070 Ti was at 6ms where my 11700K was at 16ms, and the worst part is I am getting a 3090 later this week, so it will just get worse.
I have a 7800X3D paired with a 4080 OC. I am currently playing Skyrim at 1440p without a frame rate cap (via a mod). And with enough mods, textures, grass mods and DynDOLOD you can absolutely see the CPU bottleneck everything. The FPS will be below the 165Hz V-sync cap, and the GPU will be at 90% with the CPU at 20%. I think it's because the game only really uses one thread. So while it seems the CPU is chilling, it's actually not able to let the 4080 go to 99% on Skyrim's game engine.
On your setup do you have random stutters? I have literally the same specs (CPU, RAM, motherboard, GPU) and all games randomly stutter for me. The one difference between your setup and mine is the PSU (I have a Corsair CX650W).
Depends on the game. Many games have inherent traversal and shader compilation stutters. Others run perfectly fine. Which games do you have issues with?
@Mostly_Positive_Reviews Like 80% of games: KovaaK's, Quake Live, Tomb Raider, etc. And if I look with CapFrameX, I have exactly the same stutter in these games; the moving average jumps up and down on a stutter. I already tried formatting my system, BIOS stock vs optimized, and nothing works.
Maybe a stupid question, but will increasing the resolution be enough to reduce a CPU bottleneck in some cases? For example, I have a 2700X with an RX 6600 and I was thinking of buying a 1440p 144Hz monitor. That may help me reach a stable 60 more easily by reducing the CPU bottleneck. Is it a good idea?
Not a stupid question at all. If you are getting 50 fps and are CPU limited at, say, 1080p, then even after increasing the load on the GPU by increasing the resolution you will still only get 50 fps max, as the CPU is only able to prepare 50 fps regardless of resolution. But shifting the load to the GPU will most likely get you better frametimes and less stutter.
In the above example, where your CPU can only put out 50 fps, you will be CPU bottlenecked until you increase the graphics settings enough to drop below 50 fps. You can only ever get the highest fps of your weakest part.
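That "weakest part" rule is literally a min() over the two ceilings. A toy sketch, with made-up numbers for illustration:

```python
def effective_fps(cpu_fps_limit, gpu_fps_limit):
    """You only ever get the lower of what the CPU can prepare
    and what the GPU can render."""
    return min(cpu_fps_limit, gpu_fps_limit)

# CPU can prepare 50 frames per second; at 1080p the GPU could render 120.
print(effective_fps(50, 120))  # 50 -> CPU bound
# Raising resolution or settings lowers the GPU's ceiling; the final
# framerate only moves once the GPU drops below the CPU's 50.
print(effective_fps(50, 45))   # 45 -> now GPU bound
```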
For the situation on the right at the beginning of the video, it's not quite a CPU bottleneck but a RAM speed bottleneck: the RAM is not fast enough to deliver all the data the CPU needs. That is why AMD has 3D V-Cache CPUs, which stack a very large L3 cache for a higher memory hit ratio. For most RAM speed bottlenecks, it's the RAM latency that's the problem; RAM read and write speed isn't a serious problem for the majority of games. Imagine copying files: if you copy and paste a single big file like a zip or a movie, it is very fast. But if you want to copy a lot of small files like pictures, then it's slow. Same thing with RAM. Some games are optimized; they will try to combine many small pieces of data into a large one, or simply cut down unnecessary processing. But others are not, especially indie games.
To determine a CPU bottleneck, besides obviously looking at GPU load, also enable per-core CPU monitoring. Whole-CPU load means nothing, because one core may be doing 95% of the job while the others sit without any load, or are even sleeping (parked).
My Ryzen 7 5700G only pulls 39% in Elden Ring; my 3060 needs just a tad more for full settings. Very GPU heavy. I was going to test the onboard graphics to see what it's like at 1440p. Might have more usage, or maybe shared cache.
You will have to use CapFrameX for those. Once installed you can go to the overlay tab and select everything you want to see. I'll see if I can do a tutorial on how to set it up.
Would you happen to know the reason why my GPU usage is 90-99% in games like Destiny and Warframe, but in games like Cyberpunk and Black Myth: Wukong it won't go past 85%? I have a 4090 and i9-13900KF and I just can't seem to find the reason.
What resolution do you game at? Cyberpunk is quite CPU heavy, so there it makes sense, but Wukong is very GPU heavy so I am very surprised it doesn't hit 99% usage there.
@@Mostly_Positive_Reviews 4K is the resolution I play at. Cyberpunk was getting good GPU usage like 6 months ago, then I started it yesterday and it won't go past 80%. I'm hoping it's not the Intel degradation thing.
Very strange. They introduced a new feature called P-Core Preference or something like that. In the latest version it's under the Utilities tab in the settings menu. Make sure it's not set to on, as that caused many issues on my end. I can't remember the exact name now, but it'll stand out when you see it.
Good video. Cyberpunk is a good game for this because it's been out in some form for 4 years, so it's probably pretty well optimized. Driver issues and engine issues could probably cause some bottlenecking.
Thanks again! Yeah, there are many other reasons for a "CPU bottleneck", and software/engine is just one of them. I touched on it briefly, but I was more focused on showing how to identify a bottleneck via CapFrameX, so I didn't go into too much detail, and some people wanted me to, based on the comments.
The best way of identifying whether you're CPU or GPU bound is by looking at the GPU usage. In general you want your GPU usage to be pinned at 95-100%. If it's significantly lower on average then you're most likely CPU bound.
A GPU may be underutilized (not due to the CPU bottlenecking) in the following cases: 1. Old games. They usually need a fraction of a modern GPU's power to run smoothly. 2. Games that are more CPU bound, like Stellaris. Graphically speaking there isn't much to render in Stellaris, but the CPU calculations and loads are tremendous. In graphically challenging games you should see 95-100% GPU utilization. If the number is lower, then: 1. Poor optimization. 2. CPU bottleneck.
@@Mostly_Positive_Reviews Thanks for the quick response. I sold my computer (i9-14900K, 4080 S) yesterday. Now I want to buy a Predator Orion X POX-950 Gaming with an i7-13700KF and 4090. Just worried about a bottleneck. If you say the i7 with a 4090 gets more fps, I'm happy then :) Thx
@@Reflex3745 I know quite a few people with 4090 / 13700K systems, and though it is slightly CPU bound at 1440p, it will still give a higher framerate than the 14900K and 4080 S system. Except, as I said, in games that heavily utilize the CPU, like city builders / sims, but even then the difference between the 13700K and 14900K won't be that big, unless heavily tuned.
What resolution do you play at? The 8th gen 8700 won't be too bad, but the 6th gen i3 is holding back the 3060 quite a lot. I made a video on how to set up RTSS with MSI Afterburner if you want to set it up and check your GPU usage. That'll at least give you an idea 👍
Not really, no, but it is indeed much better than the 6th gen and 3060 pairing. It also depends what resolution you play at on both. What resolution do you use?
So I have a question. I have a Ryzen 7600X (bought it like 3 months ago to wait for the 9800X3D at a normal price and availability, so I'll sell this CPU later; I also didn't want to spend much money) with a 4070 Ti Super. Is it possible that it causes stutters even if I lock my fps, with CPU usage less than 50%, sometimes even less than 30%? I have stutters in most games. For example, I get like 400 fps in CS2, so I lock it at 280 fps, but I still have stuttering, even against bots, so it's not an internet problem (CS2 sometimes has some network problems, i.e. loss).

Also, I tested my old GTX 1070 with the new PC and it seems like there are no stutters in CS2, and much fewer in other games like PUBG. I tested some games with the same settings and locked fps at ~85-99% usage of the GTX 1070 (just the minimum fps I got), then tested the same games with the same settings and the same fps lock with the 4070 Ti S (so if I had a minimum of 70 fps with the GTX 1070, I locked it at 70 fps, tested, and then locked fps at 70 with the 4070 Ti S too), and I had much more stutter with the 4070 Ti S even with GPU usage at 60% and CPU usage ~40%. So I had similar CPU usage (never more than 60%) and of course much lower GPU usage with the 4070, but much more stuttering.

I'm not sure if it's a GPU (or driver) problem or just a bottleneck. I don't know if it is possible to have CPU bottleneck stuttering with CPU usage at ~30-40% under a frame cap. Without locking fps it's like ~50% CPU usage (GPU 75-85%), so still not 100% like in the first part of the video. I saw some stutters in the first part, so I decided to ask if it is possible to have them with 80% GPU and 50% CPU usage, because I see there are no stutters in the 2nd case in the video.

Also, I noticed that if I lock fps in CS2 at 220, it's never 220; it's like 195-209 in a match (I can get 220 without players or bots on the map). The problem is that with the GTX 1070, if I lock it at 220 it's 220 with some drops, and even the 1% and 0.1% lows are lower with the 4070 Ti S.
Without an fps lock it's terribly low: avg 400+ fps with 1% lows of 95 fps and 0.1% lows of 60. Also, in other games the 1% and 0.1% lows are awful too.
And of course I tried reinstalling Windows, reinstalling drivers with DDU (I tested my old 1070, so I had to), different chipset drivers, and some optimization guides: renaming/disabling Game Bar, deleting DXCache files, disabling Nvidia HD Audio (this one helped with stuttering that made some games unplayable, but there are still frustrating frametime drops, fewer but still), and simple BIOS options like disabling the EXPO profile and Resizable BAR.
In Space Marine 2, my gpu-busy is around 9ms and frametime around 22ms. RTX 4070 Super and Ryzen 5 1600X. Got any recommendations for a good AM4-socket processor to pair with my GPU?
If you have the cash to spare I'd slot in a 5700X3D. It will be a great pairing with that GPU. Otherwise even a 5600X will get you a great boost, especially in Space Marine 2 as it is very CPU dependent.
@@Mostly_Positive_Reviews I have now installed the Ryzen 7 5800X. My frames pretty much doubled in Space Marine 2 to about 78fps. Frame time is now down to about 12.5ms and gpu busy is still around 9ms. The Ryzen 5 1600X was a massive bottleneck.
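The GPU-busy comparison used in this thread boils down to a simple rule of thumb: if the whole frame takes much longer than the GPU was actually busy, the GPU spent the difference waiting on the CPU. A sketch of that logic (the 10% margin is my own illustrative assumption; tools like CapFrameX/PresentMon may weigh things differently):

```python
def likely_bound(frametime_ms, gpu_busy_ms, margin=1.10):
    """Compare total frametime against GPU-busy time.
    A big gap means the GPU sat idle waiting for the CPU."""
    if frametime_ms > gpu_busy_ms * margin:
        return "cpu"
    return "gpu"

# The Ryzen 5 1600X case above: 22ms frames, GPU only busy for 9ms.
print(likely_bound(22.0, 9.0))   # "cpu"
# After the 5800X upgrade: 12.5ms frames, GPU busy 9ms. Still a gap,
# so still somewhat CPU limited, just far less severely.
print(likely_bound(12.5, 9.0))   # "cpu"
```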
While I do love the video and the way you explained it, you're always going to be bottlenecked by either the CPU or the GPU, because not all games are designed the same. Some games rely on heavy CPU calculations while others rely on heavy graphical fidelity, and some on both. So no matter which way you go, you're always going to have a bottleneck.
Thank you, and yeah, 100% correct. The only reason to really care about a CPU bottleneck in particular is that your GPU is almost always the most expensive component, and it would be better overall if that is your bottleneck, as it means you are getting the most out of your system. That said, you shouldn't rush out and buy a 7800X3D if you are 10% CPU bottlenecked. I'd say even up to 20% is fine and not necessarily worth upgrading a whole platform over.
For me I don't mind as long as it isn't stuttering. If I'm stuttering due to a bottleneck, then it's time to get a new CPU. As we know, GPU bottlenecks aren't as bad as CPU bottlenecks. Also, a CPU bottleneck isn't usually because the game is demanding; it's usually due to the CPU not being able to keep up with communication with the GPU. That's usually a bottleneck. Games bottlenecking a CPU, or being CPU limited, usually doesn't affect much. Like here, the game was more than likely still running well, so for me that's not as bad. Good explanation on shifting the load. I love the comments on Facebook and Reddit from non-believers. They legitimately think bottlenecks aren't real haha. They don't believe any CPU can bottleneck a 4090. They really think their 7800X3D can keep up; yes, it performs well, but they still have a bigger bottleneck than they may think. The 4090 is just that powerful. Or, like me, a 4080 Super with a 13400F. I have a bottleneck, and my bottleneck even causes stuttering at times, which is why I will be upgrading soon. But yeah, a lot of people go "nah, that's not bottlenecked". Haha
Appreciate you taking the time to write this comment. Yeah, it's amazing how many people deny stuff like this. But I agree 100% with you: unless it's really impacting your experience, don't worry too much about it. If you still get framerates that are acceptable to you with no stuttering, then it's most probably not worth upgrading your whole platform. But also, a 13400F is definitely holding back that 4080, so in your case you'll get a very decent bump by going for a 13700K or similar. I have a 12400F system as well that I test more entry-level GPUs with, and I once tested the 4070 Ti on there, and it was quite bad. GPU temps were pretty low though, as the GPU hardly got used 🤣
@Mostly_Positive_Reviews Yep, exactly haha. I will say I'm getting over 90% utilization at 4K, which is OK. Still bottlenecked, but yeah, it's been running OK at 4K. Definitely has some stutters though, so definitely going to be upgrading soon.
Most times the issue is not the CPU and the games; it's the background apps. Like, you run Discord and a browser, but the game already needs 100% of your CPU, so it causes stutters. A good example: my CPU is an i5-8600K @ 4.8GHz, but PUBG stutters when Discord and my browser are open. Close them and the stutters are gone and my fps is stable at 120+ fps at 2K.
Yeah, background tasks can absolutely cause undesirable performance for sure. I remember when I had my 9600K I had to close everything before playing Cyberpunk just to reduce the stuttering a little bit!
One way you can try to resolve this issue or at least minimize the effects is to use ProcessLasso or other similar tools to set the process affinity. You can see which core is the least effective when playing the game and then set the discord and browser affinity to only that core, and the game to the rest of the cores. This will significantly decrease the stuttering, as your game process won’t be slowed down to wait for the discord/browser processes, as they’d be on different cores.
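The core split described above can be sketched as plain logic. Which and how many cores to reserve is a judgment call, and actually applying the split would go through ProcessLasso, psutil's `Process.cpu_affinity()`, or `os.sched_setaffinity()` on Linux (assumptions about available tooling):

```python
def split_cores(n_cores, n_reserved=1):
    """Reserve the last core(s) for background apps (Discord, browser)
    and leave the rest to the game, mirroring the affinity trick above."""
    game = list(range(n_cores - n_reserved))
    background = list(range(n_cores - n_reserved, n_cores))
    return game, background

# A 6-core i5-8600K: game on cores 0-4, background apps pinned to core 5.
game_cores, bg_cores = split_cores(6)
print(game_cores, bg_cores)  # [0, 1, 2, 3, 4] [5]
```

The idea is just that the game's threads never have to time-slice with Discord or the browser, which is where the stutter comes from.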
Yes for sure. I actually had that exact system that I built for my wife. Was a very decently paired system. Some games would still be slightly CPU bound at 1080p, but very minimal.
Hey, I have 16GB of RAM, an i5-11400F, and an RX 6600 XT. My fps in Fortnite is generally low (DX12, competitive settings, 1080p). The CPU gets to 100% usage when I'm in the sky and I get awful stutters. I've tried optimizing, reinstalling Windows, everything, and couldn't fix it. Do I need to upgrade my CPU or my RAM? Suggestions?
Does it improve the more you play? Fortnite suffers from terrible shader compilation stutter, but it gets better over time. Have you tried the DX11 mode perhaps? It does sound like your CPU is the issue here. The problem is that you'll have to do a whole platform upgrade at this point, unless you go for something like an 11700K.
@@Mostly_Positive_Reviews Yeah, it does improve over time, but I still get occasional 100% CPU usage and lag, and can't really play with my browser open because I lag a lot as well. Would going for an 11700K fix my problems, since that would be the most budget-friendly upgrade? And by the way, I have a 2.5-inch SSD with slow speeds. Do I need to buy a budget NVMe, and maybe upgrade to 32GB of RAM so I can have my browser open while playing games?
@@spookytv4044 All those things will definitely make a bit of a difference. An NVMe not so much during gameplay, as an SSD with 450MB/s read/write speeds is definitely still good enough for most games. RAM speed will also make a difference, so if you have normal 2666MHz RAM now, getting something like 3600MHz CL18 combined with an 11700K will definitely help a lot in CPU-bound scenarios. Stuttering while a browser tab is open is a bit on the strange side, especially if it is only one or two. Is there maybe something else eating up CPU cycles? A Windows reload might also help, depending on the issue.
@@Mostly_Positive_Reviews Thanks for the answer. I currently have 3200MHz CL16 2x8GB RAM. I will probably get 2 more sticks of the same RAM since that's cheaper, and will upgrade to the 11700K when I have the means. And I really haven't noticed anything eating up the CPU; I've only noticed that Chrome and Fortnite max out my RAM usually.
This applies more to older games. Most games will have an engine limit where, no matter your CPU, you can't push past a certain frame rate. Normally these are absurdly high framerates, and you wouldn't want to go past 200fps for most games unless they are super competitive esports titles. Something like GTA 5 will let you push into the 180 to 200fps range, for example, but it will be extremely unstable, so that's why most people suggest capping it at around 120 to 144fps. It's worth doing research on certain titles, especially if you have a high-end machine that is more likely to run into that sort of problem.
Yeah, GTA V was notorious for breaking over 180 fps, and then some Bethesda games also have their physics tied to the framerate, so if it goes over 60 fps things start to freak out. There are patches that help with that though. But agreed, it's best to find out if a game has an engine limit and try to stay below it. Many new games do have uncapped frame rates though. If you do all you can with frame gen and DLSS and overclocking, you can push Cyberpunk to 300 fps or more. But there definitely are games that have engine limits, or issues when going above certain framerates.
I like to cap my CPU so it doesn't go over a certain temperature (laptop), so the CPU and GPU aren't always at their best. I think I have a good pairing: Ryzen 7 7735HS, RTX 4060, 2x8GB (4800) at 1080p. Nice explanation, thanks!
There are definitely instances where limiting components is a good thing indeed. But yeah, your components are pretty well paired. That CPU would even be able to keep a more powerful GPU fed, so you are fine.
Thank you! The easiest way is to use CapFrameX. It has it built-in already. You can do it with Afterburner but you have to go add plugins in RTSS and configure it from there. It's possible but a bit of a mission.
Counter-Strike does have some inherent stutters that you can't do much about, especially at the start of a match. But they normalize very quickly. You should be able to get 240 fps, yeah.
Since I had many people ask me about this, here is a short video on how to setup the on-screen overlay as seen in this video:
th-cam.com/video/EgzPXy8YYJw/w-d-xo.html
My analogy for what the CPU and the GPU do is that every frame is a pizza and the GPU is baking them. When the GPU gets a pizza ready it hands it to the CPU, which folds a box for the pizza to deliver it. If you're GPU bound, the pizzas are being made as fast as possible and the CPU has no trouble putting them in boxes. If you're CPU bound, the GPU starts piling up pizzas and has to slow down because the CPU can't keep up folding boxes and putting the pizzas in them to be delivered.
And when you lower your resolution, you need to make smaller pizzas which are quicker to make which means boxes have to be folded for them on a faster pace.
As silly as it sounds, it actually makes sense!
pizza pepperoni reconstruction featured now in the latest update! make up the shape of the pepperonis on the go!
It makes sense, but I think the CPU is not the last stage but the first, with a set of instructions for what the GPU must create. So I'd rather think of it as the CPU preparing the dough and the GPU mounting the ingredients and baking it. The GPU can only mount and bake a pizza from dough the CPU has previously prepared. If the dough gets too complicated (with many different ingredients and layers and whatever), then the CPU will take more time. But even if it's simple, the CPU can only prepare a limited amount of pizza-ready doughs, and some GPUs have incredibly large ovens. In that case, even though we could bake 100 pizzas together, our CPU can only prepare 50 of them each time, so we'll use only 50% of our capacity. Many CPU cores handling different amounts of work can be like different people doing different parts of the dough preparation, like when putting the dough to rest becomes a much longer and more time-consuming task than simply shaping it. A GPU bottleneck would be when there is enough dough prepared by the CPU but the oven is too small, or when the oven is big enough but there are more complex steps in the mounting phase, such as when the client asks for filled borders, extra ingredients or simply more calabresa that takes more time to cut.
Mmmm pizza
Very good and interesting video, glad to see more people explaining things! We're going to make a video about this, but I will now add some of your case scenarios here and show your channel as well.
Been watching your videos for years, and as you know, had a few pleasant interactions with you on Twitter as well, but never did I think you'd actually watch one of my videos! This really means a ton to me, really appreciate it!
It's the man himself 😱😱
Like, how did this even happen 🤣 I am awestruck here 🥳
Appreciate it my man 🙏
Good stuff. The more educated consumers become, the harder it is for CPU and GPU vendors to BS us with their garbage marketing and slides.
One easier way to see when you have a CPU bottleneck is to use Process Explorer to examine the individual threads of the game. Cyberpunk will run dozens of different threads, but not every aspect of the game engine is multithreaded, and the scheduler will move some of those threads between various cores rapidly depending on the load (perfectly normal when a game launches 100+ threads).
If you look at the individual threads, you will often see one thread using one core's worth of CPU time, and at that point frame rates stop improving. A simple test is to use settings that get GPU usage into the 85-95% range, then while gaming, downclock the GPU (e.g., lower it by 500MHz in Afterburner); that gets the GPU pegged at 100% while also lowering frame rates, ensuring a GPU bottleneck. Then gradually increase the clock speed of the GPU while looking at the threads in Process Explorer. You will notice a gradual increase in frame rates that stops as soon as one of the threads in the game reaches one core's worth of CPU time. (To figure out what one core's worth is, divide 100 by the number of threads available to the CPU, including hyperthreading. For example, a 16-thread CPU would allow for up to 6.25% of CPU time per thread.) Since a single thread cannot use more than one core's worth of CPU time, that thread has hit a CPU bottleneck.
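The "1 core worth of CPU time" arithmetic above can be sketched like this (a minimal illustration; the function names and the 95% tolerance are my own assumptions, not from any tool):

```python
# Sketch of the "one core's worth of CPU time" math described above.
# Assumes total CPU usage is reported as a percentage of ALL logical
# threads combined (as Process Explorer does).

def one_core_worth(logical_threads: int) -> float:
    """Max % of total CPU time a single thread can ever use."""
    return 100.0 / logical_threads

def is_thread_bottlenecked(thread_pct: float, logical_threads: int,
                           tolerance: float = 0.95) -> bool:
    """True if one game thread is pinned near one core's worth of CPU time."""
    return thread_pct >= one_core_worth(logical_threads) * tolerance

print(one_core_worth(16))               # 6.25 on a 16-thread CPU
print(is_thread_bottlenecked(6.2, 16))  # True: that thread is pegged
print(is_thread_bottlenecked(3.0, 16))  # False: headroom left
```

So on a 16-thread CPU, any single game thread sitting at ~6.25% total CPU usage is effectively a maxed-out core, even though overall CPU usage looks low.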
PS, a system memory bandwidth bottleneck is very rare. The only time I have been able to replicate one in a modern system was with a lower-VRAM card (e.g., 8GB or less) in a game where the engine did not disable the use of shared memory. Once shared memory is actively in use (you can tell by the PCIe bus usage increasing significantly), and once it hits 50%, indicating saturation in one direction, you will notice a scenario where both the GPU and CPU are underutilized.
PS, in those cases you can get small boosts in performance by tightening the timings of the system RAM to improve latency. Modern cards typically have around 25GB/s of DMA access from a PCIe 4.0 x16 bus, but in cases where full saturation is not reached, lower latency minimizes how much performance is lost while shared memory is in use. Once full saturation happens, frame times become very inconsistent and the game will hitch/hang as well.
A storage bottleneck will show dips in both CPU and GPU usage (most easily seen as simultaneous drops in power consumption).
Actually very useful info, appreciate it 🙏
Memory bandwidth bottleneck? Like being limited by a 128-bit bus? I mean, AMD and NVIDIA have now released 16 gigs on entry-level cards (RX 6600 XT / 4060 Ti) where the memory bus is 128-bit, which is potentially limiting the performance.
@@i_zoru For the video card, VRAM throughput largely impacts how well it handles throughput-intensive tasks, such as many large textures and higher resolutions, as well as game engines that dynamically resample assets. GPUs can resample textures hundreds of times per second with very little overhead, but it is more intensive on VRAM throughput. For games that are not using functions like that, a card like the RTX 4060 16GB works decently, but for games that rely heavily on throughput-intensive tasks, the 4060 underperforms. It is also why the RTX 4060 scales unusually well from VRAM overclocking, to the point where some games can get a 5%+ performance boost from VRAM overclocking alone, whereas the RTX 4070 may get 1-2% at best in gaming. Outside of gaming, tasks like Stable Diffusion and other memory-hard workloads scale extremely well from VRAM overclocking, especially when using AI models, render resolutions and settings that get the card to use around 90-95% of the VRAM. PS, Stable Diffusion gets a massive slowdown if there is any spillover to system RAM. For example, an RTX 3060 12GB and an RTX 4070 12GB will end up performing the same once system RAM is being used, and you will also notice the power consumption of the cards drop significantly.
The RTX 4060 drastically underperforms when there is spillover to system RAM; the crippled PCIe x8 interface instead of x16 makes the bottleneck far worse.
You could enable GPU time and CPU time in msec. For example, if the CPU sits at 5ms per frame and the GPU at 3ms per frame, you'd conclude a CPU bottleneck, and vice versa for a GPU bottleneck.
Yeah, that's a good option too, but you would need to chart it as the on-screen overlay doesn't update quickly enough. But it can give you an idea at least 👍
In MSI Afterburner? How?
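The per-frame time comparison described a few comments up can be sketched like this (a minimal illustration with made-up function names; the idea is simply that the slower stage gates the pipeline):

```python
# Whichever stage takes longer per frame sets the framerate:
# if CPU time per frame > GPU time per frame, you are CPU bound.

def bottleneck(cpu_ms: float, gpu_ms: float) -> str:
    return "CPU bound" if cpu_ms > gpu_ms else "GPU bound"

def max_fps(cpu_ms: float, gpu_ms: float) -> float:
    # The pipeline can only run as fast as its slowest stage.
    return 1000.0 / max(cpu_ms, gpu_ms)

print(bottleneck(5.0, 3.0))  # CPU bound: CPU needs 5 ms, GPU only 3 ms
print(max_fps(5.0, 3.0))     # 200.0 fps, gated by the 5 ms CPU time
```

This is why a 5ms CPU / 3ms GPU split caps you at ~200 fps even though the GPU alone could deliver ~333.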
How to identify a hard CPU bottleneck in 10 seconds:
Turn down your (native) resolution, and if your framerate stays about the same (or exactly the same) you are CPU bottlenecked; say ~30 FPS at 4K and ~32 FPS at 1440p.
Why? Because if you lower your resolution your GPU should render WAY more frames, since there are A LOT fewer pixels to render.
In this case your CPU would not be fast enough to (for example) process the NPC AI, BVH for ray tracing etc. to get more than about 30-ish FPS, despite the fact that the GPU now "only" has to do maths for 3.7 million pixels (1440p) instead of 8.3 million pixels (4K).
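The pixel math behind that quick test works out like this (trivial sketch; the helper name is mine):

```python
# 4K has 2.25x the pixels of 1440p, so a GPU-bound framerate should
# rise substantially when you drop resolution; if it doesn't, the CPU
# is the limit.

def pixels(width: int, height: int) -> int:
    return width * height

p4k = pixels(3840, 2160)    # 8,294,400 pixels
p1440 = pixels(2560, 1440)  # 3,686,400 pixels

print(p4k / p1440)          # 2.25: the GPU does 2.25x less work at 1440p
```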
Yeah, indeed. When people ask me whether they are bottlenecked but don't want to check it with something like this, I always tell them to lower their resolution. If the framerate doesn't increase much, you are held back by the CPU.
Even simpler: if your GPU isn't at max utilization and you don't have an fps limit, you're bottlenecked by something, probably the CPU.
A faster way, or one used in conjunction with this, is to look at the GPU power consumption. The 4070 Super is a 220W card; when CPU bottlenecked, it was running under 150W.
Yeah indeed. I spoke about this, saying it gives you a good idea and indication, and then GPUBusy can confirm it. Well, it's still not 100% accurate, but 99% is good enough for most people 🤣
That's what I've been looking at as well. It's pretty accurate too. I have a 4070 super as well. When it drops to e.g. 210w in cyberpunk, it's always a CPU bottleneck situation, at least in my experience. Adding GPU Busy to my overlay now just confirms it.
Gosh, I really need to pay attention when typing as I constantly make typos. Apologies for typing "Scenario" as "Scenarion" 🤦♂️
Please enable notifications 🙏
@@bigninja6472 Notifications? Meaning as soon as a new video goes up you get notified? I do set it on each video to send notifications to people who have notifications enabled, but it almost never works, even on big accounts :(
You are now attention bottlenecked
@@vash42165 One of my more popular videos and I make rookie mistakes like that 😥
This man's a genius. Whilst experimenting with disabling cores (the Intel issue of the day), he turned it into one of the best explanations I've ever heard on a completely different issue. Can't wait for his Intel cores video (wonder what I'll learn from that :-)
You get snarky sarcasm, and then you get well-written sarcasm. I gave this a heart because I do appreciate some well-written sarcasm :)
I only ask to be able to choose whether my content is paid or free, because it seems like it (and I personally) get treated like pro, paid content, but it generates income like free, jobless work.
Which is very confusing, and everything gets talked about except the question that was asked.
It's only natural and makes sense that I'd feel it's very unfair if someone else just lives easily off this.
First time I've seen this example explained so well. Thank you!
Appreciate the comment, thank you!
This was an excellent video. As someone who only started to understand this after getting into PC gaming 2 years ago, it's always quite jarring seeing comments in other benchmark videos or even in Steam forums such as "my system is fine because my CPU is only 30 percent utilized so it must be the game" or "why test the 4090 at 1080p" in CPU benchmark comparisons.
Appreciate the kind words! And I agree, the comments on some of these videos are often mindblowing. And they are often made by people who are so sure that what they say is correct that it just spreads misinformation, unfortunately.
But it's great that you educated yourself on stuff like this within only 2 years of PC gaming. People think stuff like this is not important, but I think it is, as it can really help you plan your build and not overspend on something that'll go to waste. I see so many people that still have 4th gen CPUs rocking 4070 Tis or 4080s because "CPU doesn't matter".
@@Mostly_Positive_Reviews I am curious though, 10:57, does frame gen always get rid of the CPU bottleneck? I notice in some games like Hogwarts Legacy and Immortals of Aveum, where I am CPU bound, the GPU is unusually underutilized at times. Digital Foundry did a video about frame gen on console in Immortals of Aveum, and the increase in performance was relatively modest despite a probable CPU bottleneck, which I suspect is frame gen not entirely getting rid of that bottleneck. Could there be other limits or game engine limitations frame gen doesn't take into account?
@@Stardomplay I only used frame gen in this scenario because, when this close to being fully GPU bound, it almost always ensures a GPU bound scenario. But it doesn't always. Hogwarts Legacy is actually a very good example where even with frame generation you aren't always GPU bound, especially when using ray tracing. Another good example is Dragon's Dogma 2 in the cities. Sometimes other instructions just take up so many CPU cycles that the game is almost always CPU bound, even at 4K. In Dragon's Dogma 2 the devs blame the poor CPU performance on the AI in the cities, for example, and there, even at High settings, 4K with Frame Gen on this 4070 Super, I am CPU bound. Step outside of the city and you become fully GPU bound. When things like that happen people usually call a game unoptimized, and in the case of DD2 it is definitely true.
Render APIs can also impact CPU bottlenecks significantly. For example, DX11 doesn't handle multithreading well at all in most games, and the chances of running into a CPU bottleneck with a DX11 game are greater in my experience.
With regards to Immortals, I have only ever played the demo in the main city, and I personally find the game to be very stuttery there, regardless of the CPU / GPU used. Something in that game just feels slightly off in terms of motion fluidity.
But to answer your question: no, frame gen doesn't always eliminate a CPU bottleneck, but it can help a lot in alleviating it under the right conditions.
I cap the FPS on my GPU so it runs cooler & smoother, since FPS performance over a certain quality threshold amount is just benchmarking giggle numbers, unnecessary strain on components, & extra heat generation for my bedroom. Running a GPU at max FPS is more likely to create occasional stuttering & less-smooth play, since it has no additional overhead to handle the sudden loads that create drop-off lows. So, my R9-7900 CPU likewise is probably just jogging along with my capped GPU instead of sprinting. Works well for me.
That's the great thing about PCs, there's a setting / configuration that caters to everybody! I also prefer to try and max out my monitor's refresh rate with a little headroom to spare to try and keep a more consistent framerate.
do you actually get better 0.2% lows if you cap your framerate?
Exactly what I do too. Especially for casual gaming.
Also, with careful adjustment in MSI Afterburner you can set all the values to align. In yours, CPU % is in line with GPU temp; it should be CPU % in line with CPU temp, and GPU % in line with GPU temp.
One is a CPU bottleneck (the 4-core example); the other is a game engine bottleneck that makes it seem like a CPU bottleneck, which, as you said, sometimes is the CPU itself and sometimes isn't.
There is a difference, and it is pretty damn important to note.
Like the stuttering from Amnesia: A Machine for Pigs? Even with nice hardware you get mini stuttering.
Sure, it's the game engine not properly using every core, but that's the case for 99% of games nowadays.
Still, if you used, let's say, a 7GHz 16900K, you would still have low CPU % usage but the GPU would be closer to 100%.
But by that time you'll probably have a better GPU and the issue will remain, though your framerate might have doubled lol
@@cuysaurusReady or Not seems to suffer from that too unfortunately
But how do you know it's a game engine bottleneck?
@@blakey_wakey If the game only uses one or two cores (like older games such as StarCraft II), or if the game uses multiple cores but at a very low % in your Task Manager or Afterburner readouts.
Software usually leaves a lot of compute performance on the table when it comes to CPUs. This is why a lower-powered console can use a CPU from 4 or 5 gens behind PCs yet still be GPU limited. In the console's case it is much easier to program for that specific set of hardware. The limiting factor is almost always GPU compute.
However, those are just two very simplified examples of a game engine being the bottleneck. There are others, but they are more in the weeds: something like a cache bottleneck (i.e., memory latency within the core), or a core-to-core latency bottleneck (like with the Ryzen chips with two dies on them).
Every computer has either a software or a hardware bottleneck though; if it didn't, your games would run at infinite fps.
Also, even if your CPU load says 100%, that is most likely a lie; it's actually the average of the CPU's P-cores' load. Meaning that even if you have both the GPU and the CPU usage at 100%, you can still get a better GPU and not bottleneck it. As long as you don't play something like Star Citizen; in that game, every single GPU gets bottlenecked no matter what CPU you have 😂
Star Citizen is indeed the destroyer of CPUs 🤣
cyberpunk is so CPU heavy
@@raiden_131 Cyberpunk is fine! Go play any ps5 exclusive that released on pc and see for yourself how much worse that is in comparison to Cyberpunk.
Love these vids!
Not as much as I love you! Wait, what...
I also noticed Nvidia users in Cyberpunk say they are CPU bottlenecked at about 50% CPU usage, but on AMD it often goes right to 100% in this game; AMD GPUs use the CPU differently.
In almost every high-end game I played I had stuttering issues, and when I checked my CPU usage it was at 100%. At the time I didn't really know what that meant, but then I showed it to a friend and he told me that my CPU was bottlenecking my GPU, and told me to cap the fps to 60 and use frame gen; after that my CPU usage went down and I no longer had the stuttering issues.
Yeah, CPU at 100% will definitely do that. I made a video a while ago showing how capping your framerate can resolve a lot of stuttering issues when that happens.
@madrain can I message you rq?
Shout out to my fellow South African🤙, great video buddy, very informative.
I’ve been out of the pc gaming scene for a while now, so I started searching for videos before I upgrade my hardware, stumbled onto your video.
Subscribed, I will definitely support your videos in the future.👍
Ha, another Saffa! Glad to have you here buddy, appreciate watching the video and subscribing 🙏
Brilliant video, very clear and concise! It's crazy how many people in comments on benchmarks spread false information. I think everyone watching benchmarks should be made to watch this video first. Thanks for the upload!
Thank you! Really appreciate the kind words! It's funny you say this because even on this video some comments are way out there 🤣
@Mostly_Positive_Reviews I have seen them. Some people are just stuck in their ways I suppose lol. Keep up the great content 👏🏻
@@BenchmarkGaming01 Thanks! Just subscribed to your channel as well, will check your videos out soon 👍
@Mostly_Positive_Reviews oh wow. Thank you very much! Highly appreciate it. Any feedback/ suggestions would be highly appreciated. Thank you again!
For the layman out there: if you see your GPU usage below 90-100%, then you most likely have a CPU bottleneck.
That CPU bottleneck, however, could be due to your CPU being underpowered compared to your GPU, or it could be due to the game/engine itself (like Dragon's Dogma 2, Jedi: Survivor, the Witcher 3 remaster).
All CPUs are currently bottlenecked at 4K, for example; it wouldn't matter much what CPU you have at 4K. If you have an older CPU, the biggest difference you would notice would be in the 1% lows. This man talks a whole lot, using so many words to describe something very simple.
Yeah, pretty much.
Lol!
Firstly, not every CPU will max out every GPU at 4K. There are plenty of benchmarks out there showing that even something as recent as a 5600X can bottleneck a 4090 at 4K. Even a 4080 can be bottlenecked at 4K by a slower CPU.
Secondly, I explained it in "a whole lot of words" because it's not always that simple. Yes, you can get a good idea by checking GPU usage, but using GPUBusy you can get a lot more info. And that's what the video was about...
Shut up kid, it depends on the settings or even the game's optimization. It doesn't make any sense otherwise; there are games where your GPU is at 90-100% and some where it isn't.
Thankfully there's a solution: the 7800X3D, the fastest gaming CPU on the market. Even Digital Foundry ditched Intel and went AMD. Do the right thing, people.
fastest is the 14700K / 14900K atm
@@iikatinggangsengii2471 Specifically in gaming, no, the 7800X3D is better.
i dont want amd thanks
@@IBaknam amd shit
@@raiden_131 You'd rather enjoy blue screens, I get it, to each their own.
Very informative video, thank you for your efforts !
My pleasure! Thank you for watching and for the kind words, it is much appreciated!
This might sound like a nitpick, but as an RTSS overlay user myself (or at least yours looks pretty much the same), it bugs me to see reversed usage and temp placements for CPU and GPU: GPU is temp - usage and CPU is usage - temp @.@
Hahaha, you aren't the first to point this out. It really was an honest mistake. The default order in CapFrameX is slightly different and I didn't pick it up. It has been fixed in subsequent videos.
Since I discovered this video over a month ago, thanks to you, my games now have better FPS. Yeah, it's bad if the GPU usage is only 50% or lower even though the CPU usage is also still at 50% or lower.
Getting the GPU usage to 100% does in fact give me way better FPS and way less stuttering. Taking the load off of the CPU really helps significantly.
What is really strange is that going from 1080p to 1440p in some games makes almost zero difference; the lower your resolution & graphics settings are, the more likely you are to come across a CPU bottleneck.
I am really glad I could help at least one person. And you are 100% correct, if you go from 1080p to 1440p and there is no difference you have a very big CPU bottleneck. Not many people understand that fact.
And another important one is that when you are GPU bound your CPU has some breathing room, resulting in better lows. So even though you might not always gain 50% FPS by getting rid of a CPU bottleneck, you will almost always have better lows / less stuttering.
Thanks for leaving this comment, it is really appreciated!
@@Mostly_Positive_Reviews Yeah, it's better to be GPU bound; CPU bound, no thanks. I also subbed. You do know what you're talking about when it comes to PCs, you are the man!
Edit: I do like the fact that if you go from low to max graphic settings, in some cases there's little to no penalty, sometimes you might even get an fps increase because you're taking the load off of the CPU.
@@niko2002 Thank you for the sub. Just hit 3000 yesterday so I am very grateful for that!
I've been around the block when it comes to PCs, but I am still learning every day, and that's what makes it so enjoyable for me!
@@Mostly_Positive_Reviews Yeah since me and my brother Built a PC with a Ryzen 7600x with an Rx6700 (non-xt), I've been very interested in computers.
But I also love cars & motorcycles, it's just more fascinating to learn outside of highschool. I'm glad TH-cam exists, that's how I got interested in computers, cars & motorcycles.
Because of that, I know how to build computers, which I have learned from reading a manual. I mean you can make custom power switches for PC and it can go anywhere you want, make a PC case out of cardboard, lmao the creativity just never stops, and that's what I love about computers.
@@niko2002 In the olden days of TH-cam you could actually find videos on how to make an atomic bomb, so yeah, there really is something for everyone, and it is a great place to learn for free!
Nice video. I have i5-13600K with RX 7800 XT and 32 GB DDR5 5600 Mhz RAM. Running everything on 1440p native. Im GPU bound all the time.
Yeah, decent pairing. Not much difference between a 13600K and 14600K really.
You can also have a CPU bottleneck even if only one core hits 100%. On the other side, if you play online games where you need lower input lag, it's better to have a CPU bottleneck. The alternative is to cap your FPS (or use Reflex & Anti-Lag), but a CPU bottleneck is preferable if you can get much higher FPS than the monitor's Hz. Actually, both CPU% & GPU% should be as low as possible while maintaining high FPS (below 60-70% on the GPU and on the individual CPU cores). Even if the monitor can't show more frames than its refresh rate, the input lag improves (higher 1% & 0.1% FPS).
Yeah, agree! In multiplayer shooters most people prefer much higher fps and don't care about screen tearing, as it is all about the lowest input lag and high framerates.
It is not preferable to have a CPU bottleneck as this results in frame pacing issues. Always always always use a variable refresh cap just under your CPU limit for the best combination of input latency and motion fluidity.
The exception to the VRR cap rule is if your game runs well in excess of your VRR window (600 FPS on 240hz display for example). In this case cap your game to a multiple of the refresh rate for even lower input lag.
Do not. Ever. And I mean ever. Leave your framerate free to do whatever it wants.
@@edragyz8596 Im talking exactly about much higher fps than yout monitor hz. You're talking about average gamer. What you saying make sense but Im talking about competitive gaming (3v3, 5v5, 8v8 ect.) Especially if you play with worse connection than your enemies. Fluid or consistent Frame pacing doesnt mean much if its higher & get worse input lag in a game where you need every possible ms. VRR has inherant latency penalty depending on your system. If you have expensive PC & good monitor around 1ms is the best possible penaly. If the FPS is around the monitor refresh rate I will prefer driver based Ultra Latency Mode if the game doesnt have build-in Nvidia Reflex option. I play on 360hz, 780fps ingame cap which allows my CPU cores to stay around 55-75% max & GPU around 60-70%. 720p. This is the best setup I get and while its not always consistent it gives me best possible input lag. When i cap on 354FPS I get the same 1% lows but I get worse results even though the games feels better. You need at least 4-500fps in these games. If you dont feel the difference you probably play with low ping which compansates & that slight delay isnt going to get a big impact on your game but every player whos location is far from the server will feel a difference. Also it doesnt matter whether can feel a difference or not. If you have an oponent you have trouble killing you will find that way you have better chances to kill him in 1v1 situations.
@@n1kobg Cap your framerate at 720 for the best results with your setup man.
The main problem here might not be what you described. Frame generation has a lot of different GPU units doing different types of stuff; shaders or RT are usually the slowest, so the rest of the GPU rests. And then after that there's also post-processing for a ready frame... that's how you get an underloaded GPU in most cases. And BTW, the logic of how and in what order a frame is generated, and what the game engine does, is different in every game, so you can get the same or a totally different CPU load on different GPUs!
Great video. Very informative and enjoyable!
Thank you! And thanks for watching and commenting, really appreciate the support and engagement!
I don't understand why they just don't put a powerful cpu into a GPU, add 128 gigabyte of ddr 7 shared ram and 4 ssd slots directly on the graphics card! It would get rid of the ridiculous bottlenecking! It could future proof for many years also. We are almost in 2025, we should have 8k gaming graphics! Nothing should be 1080p or 1440p! Its like these people still are using Betamax and VCR's in their home entertainment systems!
Amazing video.
How do you make it so that RTSS shows the correct CPU % utilization? Task Manager shows the correct value while RTSS is wrong for me.
Thanks!
I didn't do anything special, but make sure you have the latest version installed. I think it's 3.2.7 now.
4:33 What have you done? I don't understand 😅
I was bottlenecked with my Intel 9900X and my 3080 Ti. I simply doubled my FPS in all games. And now I use the Lossless Scaling software to get frame generation and run all games at 150-250 FPS; that works greatly.
But why do you need that when your monitor refresh rate probably isn't even that good? You don't even **see** a difference past 120 so... wtf?
@Dyanosis my Samsung G9 is 240hz. And you can have 60hz and have 150fps, game work really better...
RivaTuner has built-in options to show "GPUBusy" time in milliseconds and "frame time" in milliseconds; whichever takes longer is the limit. It uses a PresentMon preset to do the math and spits out a "Limited by: GPU/CPU" message during the benchmark.
I've seen this in some videos but haven't played around with it before; I've only used CapFrameX for the GPUBusy metric. Will check it out, thanks!
HWinfo64 does frame time /busy too
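The "Limited by: GPU/CPU" logic these tools expose boils down to comparing GPU busy time against total frame time; if the GPU sat idle for a big chunk of the frame, it was waiting on the CPU. Here's a rough sketch of that idea (the 10% idle threshold and function name are my own assumptions, not what PresentMon actually uses):

```python
# GPUBusy is the time the GPU actively worked on a frame; if the full
# frame time is much longer, the GPU spent the difference idle,
# waiting for the CPU to feed it.

def limited_by(frametime_ms: float, gpu_busy_ms: float,
               idle_threshold: float = 0.10) -> str:
    idle_fraction = (frametime_ms - gpu_busy_ms) / frametime_ms
    return "CPU" if idle_fraction > idle_threshold else "GPU"

print(limited_by(10.0, 9.8))  # GPU: busy for nearly the whole frame
print(limited_by(10.0, 6.0))  # CPU: GPU sat idle 40% of the frame
```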
You can also have GPU-Z running on your second screen. Go to the Sensors tab and it will give you a PerfCap Reason. It might be Thermal, which is obvious, but it can also give voltage-related reasons, such as Reliability Voltage, which means the card cannot boost higher at the current voltage, or something else it will indicate.
In a huge amount of cases where there's low CPU usage and less than 100% GPU usage with an unlocked framerate, it's either a software problem (not properly optimized) or a memory bottleneck. If the same CPU had much more and faster L1/L2/L3 cache I'm sure it would get much higher framerates, and much faster RAM with lower timings also counts toward increasing CPU performance.
Your 2nd scenario isn't necessarily CPU bound. All it shows is that you aren't GPU bound. Your limiting factor could be the CPU, but not usage related, like the cache or something. Or it could be engine limitations, or IO. Or it could be GPU memory bound; that won't show up in utilization. Typically if you're CPU bound at less than 100% utilization you'll still see one of the threads pegging 100%. I'd imagine frame gen also moved the limiting factor to your GPU memory speed as well.
Yeah, for sure a lot of variables in the second scenario, and I touched on it briefly to say it's not necessarily just the CPU in this case, but many people still just call it a CPU bottleneck for simplicity.
@@Mostly_Positive_Reviews it’s definitely a rabbit hole I got sucked down for a while. Really the only conclusion is how much we lack in monitoring capabilities tbh.
@@BulkybearOG 100%. If there was an app to tell me definitively "RAM bottleneck" or "PCIE Bandwidth bottleneck" or "SSD bottleneck" I would buy it in an instant!
How could it be an engine limitation if changing to a better and faster CPU gets you higher GPU usage and frame rates?
One of my computers has a 12400F, an RTX 3050 and 3200MHz RAM; no matter what settings, I was never able to saturate the GPU to anything close to 100%, whether it was running 1920x1080 or in the 720p range. Great video, great information. Thanks.
I have a 12400F in my recording system. It's a very underrated CPU and it doesn't break the bank. It should be able to fully saturate a 3050; I use mine with a 3060 and it can keep it at 99% usage in most games I play.
@@Mostly_Positive_Reviews Must be the $100 motherboard. It was a build I threw together to replace a system from 2007 that I had plugged the 3050 into. If I had the time I would do some investigating as to why that second system cannot peg the GPU.
@@videocruzer could be. Might have some PCIE limitations or very poor VRMs holding the CPU back or something.
You are amazing, your explanation is clear 😊
Thank you! 😃 I really appreciate the kind words, a lot!
Thanks, I learnt a lot from your video. But when I checked this with Tomb Raider, even when the GPU Busy Deviation was lower, the CPU usage became higher. Is this inconsistency exactly what you said about this method not being 100% accurate?
But I don't understand how it got fixed at 9:55 just by disabling the upscaler????
Using DLSS, or any upscaler, renders the game internally at a lower resolution. Here I was using DLSS Ultra Performance, so 540p, to force a CPU bound scenario. When I started rendering at native 1440p by disabling the upscaler, the load on the GPU became high enough for the CPU to no longer be the bottleneck, as the resolution went from 540p to 1440p.
@@Mostly_Positive_Reviews so then which one has more frames despite the low gpu usage and high cpu usage, dlss quality fg or native fg?
@@Summorial2 DLSS Frame Generation works best when CPU limited, and thus, the more CPU limited you are, the better it can work. I personally always use DLSS Quality with Frame Generation just to bump up the frame rate and make it a bit more CPU bound, and then FG can take care of the rest. In some games there won't be a difference, like Spider-Man Remastered for example: there, at 1440p High, I get the same frame rate whether I use DLSS upscaling or not. In others it makes a bigger difference.
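The internal-resolution effect described in this thread can be sketched with a quick calculation. The scale factors below are the commonly cited DLSS defaults (Quality ~2/3 per axis, Performance 1/2); exact values can vary per game, and the helper name is my own:

```python
# An upscaler renders internally at a fraction of the output resolution,
# shifting load off the GPU and making a CPU bottleneck more likely.

DLSS_SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def internal_res(width: int, height: int, mode: str) -> tuple[int, int]:
    s = DLSS_SCALE[mode]
    return round(width * s), round(height * s)

print(internal_res(2560, 1440, "Quality"))      # (1707, 960): 1440p -> ~960p
print(internal_res(2560, 1440, "Performance"))  # (1280, 720): 1440p -> 720p
```

So at 1440p output, Quality mode renders roughly a quarter of the pixels of native 4K, which is why enabling an upscaler can push a borderline system from GPU bound into CPU bound.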
Great video dude, well explained. I have a 5600G paired with an rtx 4070 super and definitely get a cpu bottleneck.
Thank you, appreciate it! Yeah, the 5600G is definitely holding that 4070S back. Good thing is that you can just slot in a 5800X3D without changing anything else and you should be good.
@@Mostly_Positive_Reviews Or just play at 4K DLAA and problem solved lol, the CPU won't be the bottleneck.
I have a 4070 paired with a 12700K, and at 1080p I am CPU bound, but not at 4K. If you aim for 4K 60 then a bottleneck is a much rarer scenario. If you aim for 1080p 240 fps then you really need a strong CPU for sure, maybe even a 7800X3D.
There's also something else that isn't often mentioned - GPU usage impacting input delay even in CPU bound scenarios.
Explanation: it applies if you use an FPS limiter in competitive games. For example, I use V-Sync (set to On globally in NVCPL) + G-Sync with a 200Hz monitor in Overwatch 2. That automatically locks my FPS to 189 to prevent V-Sync from ever engaging.
This results in a certain amount of input delay. If I then reduce my render resolution below 100% and use FSR, for example, I get the same 189 FPS cap but lower GPU usage, and therefore lower input delay, because GPU Busy times are reduced.
This is why people who play competitive titles like Overwatch, Valorant or CS would still use low graphics settings even on a RTX 4090.
Additionally, features like NVIDIA Reflex + Boost result in even lower input delay in these scenarios, because the +Boost setting keeps your GPU clocks at the minimum of the Default Max Clock.
It's the same value as if you used the Prefer Maximum Performance setting in NVCPL. This reduces your GPU Busy times even further, on top of the reduced render queue, which is the main purpose of NVIDIA Reflex itself without +Boost enabled.
Yeah, you're right, this topic does not get a lot of airtime, and I think it's because it's not that well known.
I might think of a way to string a video together to explain the basics.
Many people aren't aware that Reflex + V-Sync actually caps your framerate just below your monitor's refresh rate for this exact reason. It only needs a little extra GPU headroom to reduce latency. For example, my 165Hz panel gets capped at 158 fps with V-Sync + Reflex + G-Sync. It also prevents the framerate from hitting the V-Sync limit, which would also increase input latency, as you said.
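NVIDIA doesn't publish the exact cap formula, but a community-derived approximation reproduces the numbers quoted in this thread to within a frame (189 at 200Hz, ~158 at 165Hz). Treat the formula itself as an assumption, not an official spec:

```python
# Community-derived approximation of the auto-cap Reflex + V-Sync applies,
# a few fps below refresh so frames never queue up at the V-Sync limit.
# The exact formula is an assumption inferred from reported values.
def reflex_fps_cap(refresh_hz: float) -> int:
    return round(refresh_hz - refresh_hz * refresh_hz / 3600)

print(reflex_fps_cap(200))  # 189, matching the 200Hz Overwatch example above
print(reflex_fps_cap(165))  # 157 (the 165Hz panel above reports 158; it varies by a frame)
```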
Can we talk about the CPU power use too? I can see it's not 100% so either clock or memory isn't at full tilt within that "100% usage"
During the first scenario I only enabled 4 cores, no hyper-threading, and no E-cores; that's why at 100% CPU usage the power draw is lower than it would be with all cores enabled.
@@Mostly_Positive_Reviews thank you
Very useful info, thanks
Glad it was helpful!
I've done this with Intel PresentMon and GPU/CPU Wait to see what next I would change
Yeah, this is using Intel's Presentmon just tied to the overlay. But you can use Intel PresentMon standalone too like you did to get the same result.
Can't believe my Ryzen 2700X came out 6 years ago and can't handle the 1% lows anymore. Time flies so fast.
Crazy indeed! The 2700X was such a good CPU when it launched, and still isn't terrible, but it's amazing to see how far things have improved since then. You do have a saving grace in the 5800X3D though, which you can just plop in.
The Ryzen 2000 and 3000 series were still slower than Intel's Core i9 9900K when they came out, so it's not really a surprise.
What I'm hearing is: unless the CPU is maxed out at 100% and the GPU is chilling, which indicates an obvious CPU bottleneck, anything else will be damn near impossible to tell due to the game/software being used.
Yeah, pretty much. You can check other things, but not just with normal software. For instance, if you reduce your CAS latency and the framerate increases, you know it's more RAM related. Similar thing when you increase your RAM speed. I mean, on a very basic level. It obviously goes a bit deeper than that, but there are things you can do to somewhat determine why the GPU is not reaching its max potential.
Rule of thumb for games: less than 100% GPU utilization means you are CPU bottlenecked. More than 90% of games are graphics heavy.
So in the case of scenario 2, what would be the bottleneck? You mentioned that it could be anything system related bottlenecking the CPU and thus bottlenecking the GPU. How would you go about diagnosing the cause of the bottleneck on the CPU?
Identifying exactly what the issue is becomes a bit more tricky, but it can be done. You can first overclock the CPU a bit and see if it improves your framerate. You can also tune memory a bit to see if that helps. But don't do two things at the same time, otherwise you won't know what improved it. It can be that the CPU is just not fast enough, memory speed, memory latency, or all three. But in the case of the second scenario I wouldn't worry about it too much unless your performance is severely impacted. In most cases all that will fix it to a greater degree is a new CPU and/or new RAM, which is rarely worth the cost unless, as I said, your performance is severely impacted.
If someone has an i3 CPU then obviously that person is playing at 1080p. Playing at 1440p with an i3 is obviously bottlenecking. I have an i3 and play Cyberpunk at 1080p with no bottlenecks whatsoever.
One of the ways to alleviate a CPU bottleneck is to increase resolution. The lower your resolution, the more frames your GPU can render, and the more is required from the CPU.
I dont have any fancy programs.. but could I use Task manager as a "somewhat reliable" way of looking at component usage?
Yeah, task manager is good enough if you just want to see usages. It updates slower than some of these overlays but will give you a good idea.
@@Mostly_Positive_Reviews thanks
Hello brother, if the Intel i5 13600K has better 1% lows than the AMD R5 7600X in gaming with an RTX 4070 or RX 7800 XT, then sir please tell me the right answer: should I buy Intel or AMD?
Tough one. If you go 13600K you won't be able to upgrade much, except to a 14700K for example, whereas the AM5 platform should get a few more upgrades. It also depends on price. If you don't care about the upgrade path then go for the 13600K, as it generally has better memory overclocking as well.
@@Mostly_Positive_Reviews But what if I never overclock either of them, the AMD R5 7600X or the Intel i5 13600K, because I am afraid I'll burn my CPU by overclocking? Then which processor should I buy?
Brother, if you give me your Instagram ID or any contact for choosing the right components, I'll pay you some money.
@@goluop0982 If you don't want to overclock then I'd say go for the 7600. That way you can still upgrade to a 9000 series CPU later without having to change motherboards. The lows on 13th gen are only slightly better anyway; they become much better with overclocking, but if you aren't going to overclock I'd go for the system with the better upgrade path.
I don't have IG or anything else except Twitter, but that account is on hold for now. You really don't need to pay me ;)
You'll be perfectly fine with a 7600X, a B650 motherboard, and 32GB of DDR5 6000MHz CL30 memory. If you want to save a bit of money then go 5600MHz CL28 / CL30, but the price difference shouldn't be that big between these kits anyway.
You can then get a 750W power supply, which would be perfect for the 7800 XT / 4070, whichever you decide to buy. I will say that if you buy the AMD CPU, go for the AMD GPU as well, as Smart Access Memory on AMD is supported in more games than Resizable BAR on Nvidia. But really, either system will be perfect for 1440p gaming. The 7800 XT should last a bit longer due to more VRAM, but the 4070 will be slightly faster in RT, and it also has DLSS, which currently has better image quality than FSR.
You can check if the Ryzen 7500F is available in your area. It will save you some money, and it's basically the same processor as the 7600 but without the iGPU, which you don't need since you have a GPU.
@@Mostly_Positive_Reviews Brother, tell me: is there any reason to buy the Intel i5 14600KF? Is it faster in gaming than the AMD R5 7600X at stock speeds?
How did you get CPU Busy in MSI Afterburner/RTSS? I would like that functionality. Shout out from SA BTW!
There are more and more South Africans finding these videos, and I love it!
In this video I used CapFrameX instead of MSI Afterburner. It still uses RTSS but it has additional overlay options 👍
@@Mostly_Positive_Reviews How did you get GPU Busy and GPU Busy Deviation? I can't see them in CapFrameX.
@@Sebastianino Hmm, make sure you have the latest version installed perhaps? Other than that, check for the correct labels in the Overlay tab. They are called GPU Active Time Deviation, GPU Active Time Average, and Frame Time Average.
@@Mostly_Positive_Reviews I did not have the latest downloaded... Funny, because the search page says 1.7.1 is the latest, but inside the website 1.7.2 is the newest.
10:10 - I don't want that. The graphics card has much higher power consumption. Does a bottleneck negatively affect the lifespan of CPU and GPU components?
Sorry if I didn't understand something. English is my second language and I won't always grasp everything right away.
If in all scenarios the frame rate is sufficient (let's say it is 100 frames and I only need 60), then with or without a bottleneck it will come out the same? Is it a matter of individual preference, for a person who, for example, prefers less power consumption over dealing with a bottleneck?
Thanks for video! That was important.
---------------------------------
Edit: I plan to use the 12400F in the future along with the 4070S. I have seen that this processor, despite its low price and power consumption, performs nicely in company with this powerful graphics card. I wonder if there will be any problems. However, I haven't seen tests related to the bottleneck of these two parts, yet it's a much weaker processor than the one in your video. In order to put the GPU on par with the weaker CPU, an undervolt of the GPU would be a good idea, right? Because I only play at FHD with 60fps. I'm old school ;D
If your main aim is 1080p 60 fps at all times then yeah, this doesn't really mean anything to you. This is only for people who want to maximize their framerate by not having the CPU limit the GPU, as the GPU is generally the most expensive component in your system.
The 12400F is a great little CPU. I have one in my recording PC, and yes, you will be bottlenecked by it with a 4070 Super at 1080p. But in your case that doesn't matter, as the CPU is more than capable of doing 60 frames per second, and if you cap your framerate to 60 fps both the CPU and GPU will have lots of idle time.
In your case I would say don't worry about this video at all. Most modern CPUs are capable of doing 60 fps, and if you cap it to 60 fps the full potential of the components isn't important at all.
This is a very good video!
Thank you bud. Except I ruined it with typos 🤣🤣🤣
As long as my FPS is above 120 and there are no stutters, I don't really mind if there's a bottleneck.
Yeah, that's the right approach for sure. Set yourself a framerate target, and if you can reach it that's all that matters. It doesn't matter if you barely reach it or if you have 50% headroom; as long as you are happy with your performance nothing else really matters.
Man, could you help me? I need help and nobody has been able to yet. I have an RX 6650 XT and a Ryzen 5 3600. Other people with the exact same setup, or an even better GPU, don't have the huge stutter I get in Battlefield 5, Battlefield 2042, Palworld, etc. I tried EVERYTHING and it looks like a CPU bottleneck to me. But why do other people with a slight bottleneck not get stutter, while mine makes games unplayable?
Like, it's the exact same setup? I've got fast 3600MHz CL18 RAM, an NVMe M.2 980 Pro, temps well under the limit, and my mobo is great too. XMP on, SAM / ReBAR on, etc...
Hey man, sure, let's see if I can help. Can you perhaps upload a video while using an overlay to show CPU / GPU usages etc? You can upload to your TH-cam, list it as "Unlisted" and mail me a link so I can have a look? My email address can be found under my info on my channel 👍
@@Mostly_Positive_Reviews I will do that in a couple of days. I found the program called PresentMon and I'm trying to figure out if I just have an insane bottleneck. But thank you so, so much dude. I will do that :)
Anytime! If you need help, I have a video on here about setting up RTSS with MSI Afterburner. It shows you step-by-step how to set it up to show CPU and GPU usages etc.
Once you are ready in a few days, send that email and we'll take it from there 👍
But what do I do when both the GPU and CPU are chilling at ~50% load and I'm still not hitting my fps cap?
You can still be CPU bound in that case, but it's more likely that the game is just not utilizing the hardware well, which means it's an engine / software issue.
I’ve got an i5-10400F and a 3060, and my CPU usage is 100% while my GPU doesn’t go past 46-47% usage. I honestly don’t know if this is okay or what to do about it. Maybe my CPU is just too weak for the game?
Which game is this happening the most in? I have a 12400F and a 3060 system as well, and even the 12400F holds back the 3060 in many games at 1080p.
@@Mostly_Positive_Reviews Cyberpunk. It’s done this in other games too, but there I was able to change settings to get it to stop. In Cyberpunk, no matter what I change, it stays at 100% usage.
Okay, that makes sense in Cyberpunk. I had a 9600K and it too would run at 100% usage all the time, with my 3070 running at around 80%.
Unfortunately Cyberpunk is just too heavy on your CPU, and it's holding your GPU back. 100% CPU usage also results in more stuttering. The only solution is to upgrade your CPU if you want better performance. If you are fine with the performance as it is, then don't worry about it. The CPU running at 100% in games is not going to cause issues, apart from not making full use of your GPU and the increased stuttering.
I have an i9 14900K and 4080 Super, but when playing games the CPU is usually at full 100% load. I've tried everything but it's useless :(
I wish someone would create an algorithm/chart in an app where you selected a CPU and GPU, then selected a desired fps, and it would automatically print the graphics settings you need to apply for your game.
This would indeed be a good idea, and would be popular as well. Would take a lot of effort, but I think it can work.
Which program do you use for the benchmark overlay (top left corner)? :)
Busy uploading a short video now to show you how to set it up!
@@Mostly_Positive_Reviews I have an i7 6700K with an RTX 3060 Ti and I don't know if I have a bottleneck or not, hahah. My GPU is at 95-99% and my CPU at 70-80% with ultra settings with RT. Thanks, DLSS frame generation. Or should I thank AMD for open source FSR 3.0 🤣🤣
@@toolzgosu Here you go: th-cam.com/video/EgzPXy8YYJw/w-d-xo.html
very informative, thank you so much!
Thanks go to you for watching!
How do I tell if I am GPU or CPU bound in newer games at 1080p? As a noob, I have no idea what is what. I went with a build suggested by a friend 🤷‍♂️
My build:
Motherboard: Gigabyte x570 UD
CPU: 5700x3D
GPU: PowerColor RX 6600 (or something)
RAM: 24GB @ 2133MHz (soon to be 2x 16GB @ 3600MHz)
PSU: CoolerMaster 850W Gold
Cooler: Peerless assassin 120mm (dual fan)
Case: Corsair 4000D + 2x Corsair LL 120mm intake fans
That CPU is definitely good enough to keep that GPU fed, so you don't have to worry about it. You'll see a decent uplift in performance in CPU heavy games with the faster RAM for sure. But at 1080p you'd even be fine with a 6700 XT in that system in GPU heavy games.
@@Mostly_Positive_Reviews awesome! Thanks for your time! 😁
I don't mind if I'm CPU bottlenecked, as long as I get the performance I'm after.
Yeah, that's the correct approach. Depending on the performance you're getting, it's not worth upgrading a whole platform for a 10% improvement, kind of thing.
What is the name of that monitoring software? I’ve been looking for something with granular detail.
In this video I used CapFrameX which is a free utility you can download 👍
Is that MSI afterburner/riva tuner to check all the info? How did you manage to get it spaced out like that?
This is Riva Tuner combined with CapFrameX for the GPU Busy metric 👍 I used the default layout found in CapFrameX.
Great video !!
How do you get the RAM and VRAM to appear in GB instead of MB, and how do you put the labels on screen explaining what is what?
Can you provide your afterburner profile ? 😅
Busy uploading a video now as there have been a few people that asked 👍
Here is a quick video of my CapFrameX setup: th-cam.com/video/EgzPXy8YYJw/w-d-xo.html
How did you get your MSI afterburner to display like that?
This was using CapFrameX instead of MSI Afterburner overlay 👍
Simple: if your CPU is at 100% but your GPU is not even close to 90%, you're CPU bound. Although with dynamic boost on both AMD and Nvidia based GPUs, that 80% could simply be the P2 state rather than P1.
Sure, but not all CPU bottlenecks present themselves as 100% CPU usage. Your CPU usage can be 40% and you can still be CPU bound. It's possible that a single thread is running at 100% while the overall usage is reported lower because it is averaged out across all cores. Or your most used thread can be sitting at 60% and you're still CPU bound, because the CPU just isn't fast enough to keep the GPU fully utilized.
Hi, I have a weird situation where I can't find much help (due to a lack of players, maybe). In Avatar: Frontiers of Pandora, when benchmarking, the reported CPU usage is just 11%. It's crazy low, but I think the CPU is already doing all it can (stuttering heavily even with all graphics at the absolute lowest plus frame gen). So why might it read 11% usage or less? There are 4 physical cores in my old Ryzen 5 3550H, so at least a 25% reading would make sense if only a single core were being used... but 11%?
What GPU is paired with that CPU in your laptop?
@@Mostly_Positive_Reviews A 1650 4GB, it's even less powerful than an iGPU today :). But the usage on it is under 50% at low, and 30% at lowest. So... my GPU isn't really doing anything (maybe) 😅
Hmm, thinking about it, maybe that 4GB is the issue. Nowhere near the minimum required VRAM, maybe, right? The reported VRAM usage is 3.3GB out of 3.8GB, but maybe in reality it's not enough? Not sure though... A lot of smoother games ran with usage around 3.6-3.7GB (FC6, Cyberpunk 2077 PL, Satisfactory).
@@dzxtricks This game is quite heavy on VRAM, so I wouldn't be surprised if it is that. That CPU should be sufficient to fully utilize your GPU, but if the GPU runs out of VRAM there's not much the CPU can do, so I think you might be right.
I always looked at it as: GPU at or near 100% utilisation, you're GPU bottlenecked. GPU consistently not at or near 100%, CPU bottleneck. This way does fall apart if, for example, a game is fps capped.
Does capping frames affect CPU or GPU usage? And does it help with G-Sync and the overall smoothness of the game?
I am 99% sure it limits CPU performance, as the GPU frametimes still show, say, 6ms, while the CPU frame times show 16.6ms for example. But it reduces all usage if you cap well below the maximum your system is capable of rendering.
It can help with smoothness indeed. G-Sync does not limit the framerate, so if you go above your monitor's refresh rate you will see tearing. Limiting it a few frames below your monitor's refresh rate will prevent that, and with G-Sync enabled you will get a much smoother presentation.
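The 6ms vs 16.6ms comparison above is essentially the GPU Busy check from the video: if the GPU was busy for almost the whole frame time, it's the limiter; a big gap means it sat idle waiting on the CPU or a frame cap. A minimal sketch of that reasoning, with a threshold I picked for illustration (not a PresentMon value):

```python
# Rough bottleneck check from overlay numbers. "GPU Busy" is how long the
# GPU actually worked per frame; the frame time is the whole frame.
# The 1 ms tolerance is an illustrative choice, not an official threshold.
def classify(frametime_ms: float, gpu_busy_ms: float, tolerance_ms: float = 1.0) -> str:
    idle = frametime_ms - gpu_busy_ms   # time the GPU spent waiting
    if idle <= tolerance_ms:
        return "GPU bound"
    return f"CPU/frame-cap bound (GPU idle {idle:.1f} ms per frame)"

print(classify(16.6, 6.0))   # big gap: CPU (or cap) is the limiter, as above
print(classify(10.2, 9.8))   # GPU busy nearly the whole frame: GPU bound
```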
@Mostly_Positive_Reviews
Thank you for responding. I'm using a 165Hz Dell monitor; when G-Sync is enabled I'm getting 158 fps without any frame capping. Should I still limit frames in games to, let's say, 155, or limit to 158? I'm getting strange tearing while gaming: I'm getting 120 fps and more, but I can still see the tearing. 😞
I have this problem with Cyberpunk 2077: CPU at 100% all the time, GPU at 50-80%.
The AMD overlay says I have 70-80 fps, but when I use a mod that shows fps it is 30-40, and it feels like 30-40.
What are your specs? And do you use AFMF? When you enable AFMF your final output will be double what is shown in normal FPS overlays, with latency similar to 30-40 fps if the AMD overlay says 80 fps.
@@Mostly_Positive_Reviews Intel Core i7-3770K CPU @ 3.50GHz × 4, Radeon RX 6600.
I don't know what AFMF is...
@@zerenxx8841 AFMF is AMD Fluid Motion Frames. There is a toggle for it in the AMD Radeon Software, Adrenaline. See if that is enabled?
Also, the 3770K is for sure holding back the 6600 unfortunately.
Thanks. This showed me that my GPU is sitting idle waiting for frames to be generated, and it justifies my CPU upgrade if anyone asks :P
My 3070 Ti was at 6ms where my 11700K was at 16ms, and the worst part is I am getting a 3090 later this week, so it will just get worse.
Oh right, you already saw this video hahaha. Just replied to your other comment 🤣
I have a 7800x3d paired to a 4080 OC.
I am currently playing Skyrim at 1440p without a frame rate cap (a mod).
And with enough mods (textures, grass mods, DynDOLOD) you can absolutely see the CPU bottleneck everything.
The FPS will be below the 165Hz V-Sync cap, and the GPU will be at 90% with the CPU at 20%.
I think it's because the game only really uses one thread.
So while it seems the CPU is chilling, it's actually not able to let the 4080 reach 99% on Skyrim's game engine.
Yeah, Skyrim being quite an old game at this point means it's not optimized for multithreading. And mods can be brutal too!
Skyrim is horribly optimised.
Interesting
@@Greenalex89 yeah I figured it was at least noteworthy
On your setup, do you have random stutters? I have literally the same specs (CPU, RAM, motherboard, GPU) and all games randomly stutter for me. The one difference between your setup and mine is the PSU (I have a Corsair CX650W).
Depends on the game. Many games have inherent traversal and shader compilation stutters. Others run perfectly fine. Which games do you have issues with?
@Mostly_Positive_Reviews Like 80% of games: KovaaK's, Quake Live, Tomb Raider, etc. And if I check with CapFrameX, I see exactly the same stutter in these games; the moving average jumps up and down during a stutter. I already tried reformatting my system, and stock vs optimized BIOS, and nothing works.
Maybe a stupid question, but will increasing the resolution be enough to reduce a CPU bottleneck in some cases? For example, I have a 2700X with an RX 6600 and I was thinking of buying a 1440p 144Hz monitor; that may help me reach a stable 60 more easily by reducing the CPU bottleneck. Is it a good idea?
Not a stupid question at all. If you are getting 50 fps and are CPU limited at, say, 1080p, then even after increasing the load on the GPU by raising the resolution, you will still only get 50 fps max, as the CPU can only prepare 50 frames per second regardless of resolution. But shifting the load to the GPU will most likely get you better frametimes and less stutter.
In the above example, where your CPU can only put out 50 fps, you will be CPU bottlenecked until you increase the graphics settings enough to drop below 50 fps. You can only ever get the highest fps of your weakest part.
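The "weakest part wins" rule above can be written as a toy model: the CPU's frame rate is roughly resolution-independent, the GPU's falls as pixel count rises, and the final fps is the minimum of the two. The pixel-throughput number below is made up purely for illustration:

```python
# Toy model: final fps = min(CPU ceiling, GPU rate at this resolution).
# gpu_pixels_per_sec is a hypothetical throughput, not a real GPU spec.
def achievable_fps(cpu_fps: float, gpu_pixels_per_sec: float, width: int, height: int) -> float:
    gpu_fps = gpu_pixels_per_sec / (width * height)
    return min(cpu_fps, gpu_fps)

# CPU tops out at 50 fps, hypothetical GPU shades 250M pixels/sec:
print(achievable_fps(50, 250e6, 1920, 1080))  # 50 -> CPU bound at 1080p
print(achievable_fps(50, 250e6, 3840, 2160))  # ~30 -> at 4K the GPU becomes the limit
```

Raising resolution never raises the fps ceiling; it only moves the limit from the CPU to the GPU, which is why it can smooth frametimes without adding frames.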
For the situation on the right at the beginning of the video, it's not quite a CPU bottleneck but a RAM speed bottleneck: the RAM is not fast enough to deliver all the data the CPU needs. That is why AMD makes 3D V-Cache CPUs, which stack a very large L3 cache for a higher memory hit ratio.
For most RAM speed bottlenecks, it's really a RAM latency problem. RAM read and write speed isn't a serious problem for the majority of games. Imagine copying files: if you copy and paste a single big file like a zip or a movie, it is very fast. But if you copy a lot of small files like pictures, it's slow. Same thing with RAM.
Some games are optimized: they try to combine many small pieces of data into a large one, or simply cut down unnecessary processing. Others are not, especially indie games.
To determine a CPU bottleneck, besides obviously looking at GPU load, also enable per-core CPU monitoring. Whole-CPU load means nothing, because one core may be doing 95% of the job while the others sit without any load, or are even sleeping (parked).
100%. I spoke about this as well. If you have 20 threads and 1 is running at 100% while the rest do nothing, CPU usage will be reported as 5%.
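The arithmetic behind that 5% figure is just the average across logical cores, which is what the headline CPU usage number typically shows:

```python
# Why overall CPU usage hides a single-thread bottleneck: the headline
# figure is the mean across all logical cores, as in the 20-thread
# example above.
def overall_usage(per_core: list[float]) -> float:
    return sum(per_core) / len(per_core)

cores = [100.0] + [0.0] * 19  # one thread pegged, nineteen idle
print(overall_usage(cores))    # 5.0 -> looks idle, but the game is fully CPU bound
```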
My Ryzen 7 5700G only pulls 39% in Elden Ring. My 3060 needs just a tad more for full settings. Very GPU heavy. I was going to test the onboard graphics to see what it's like at 1440p. Might have more usage, or maybe shared cache.
Do you have the mod that unlocks the framerate? If not, Elden Ring is locked to 60 fps, so your usages will be lower.
How do I show frame time average and GPU Busy average? I use Afterburner to check performance; is it available there?
You will have to use CapFrameX for those. Once installed you can go to the overlay tab and select everything you want to see. I'll see if I can do a tutorial on how to set it up.
Do you happen to know why my GPU usage is 90-99% in games like Destiny and Warframe, but in games like Cyberpunk and Black Myth: Wukong it won't go past 85%? I have a 4090 and an i9 13900KF and I just can't seem to find the reason.
What resolution do you game at? Cyberpunk is quite CPU heavy, so there it makes sense, but Wukong is very GPU heavy, so I am very surprised it doesn't hit 99% usage there.
@@Mostly_Positive_Reviews 4K is the resolution I play at. Cyberpunk was getting good GPU usage like 6 months ago, then I started it yesterday and it won't go past 80%. I'm hoping it's not the Intel degradation thing.
@@Mostly_Positive_Reviews I can get Wukong to 95% if I use DLAA, but I don't know why with DLSS it just won't go above 85%.
Very strange. They introduced a new feature called P-Core Preference or something like that. In the latest version it's under the Utilities tab in the settings menu. Make sure it's not set to on, as that caused many issues on my end. I can't remember the exact name now, but it'll stand out when you see it.
@@Mostly_Positive_Reviews ok thanks I’ll give it a try
Good video. Cyberpunk is a good game for this because it's been out in some form for 4 years, so it's probably pretty well optimized. Driver issues and engine issues could probably cause some bottlenecking.
Thanks again! Yeah, there are many other reasons for a "CPU bottleneck", and software / engine is just one of them. I touched on it briefly, but I was more focused on showing how to identify a bottleneck via CapFrameX, so I didn't go into too much detail, and based on the comments some people wanted me to.
@@Mostly_Positive_Reviews The video was good. You'll never be able to hit every single point. Your videos would be hours long.
Yeah, not a lot of people realize that. Appreciate the kind words and watching the video 👍
The best way of identifying whether you're CPU or GPU bound is by looking at the GPU usage. In general you want your GPU usage to be pinned at 95-100%. If it's significantly lower on average, then you're most likely CPU bound.
The GPU may be underutilized (not due to CPU bottlenecking) in the following cases:
1. Old games. They usually need a fraction of modern GPU power to run smoothly.
2. Games that are more CPU bound, like Stellaris. Graphically speaking there isn't much to render in Stellaris, but the CPU calculations and loads are tremendous.
In graphically challenging games you should see 95-100% of GPU utilization. If the number is lower, then:
1. Poor optimization.
2. CPU bottleneck.
Yeah indeed, many cases where GPU can be underutilized.
What do you find better: an i7 13700KF with an Nvidia 4090, or the other PC, a 4080 Super with an i9 14900K? Which one gives better FPS?
The 4090 / 13700KF system should perform better in pretty much all games, except heavy CPU bound games like Dragon's Dogma.
@@Mostly_Positive_Reviews thanks for the quick response
I sold my computer (i9 14900K, 4080 Super) yesterday.
Now I want to buy a Predator Orion X POX-950 Gaming with an i7 13700KF and a 4090.
Just worried about a bottleneck.
If you say that i7 with 4090 gets more fps, I'm happy then :) thx
@@Reflex3745 I know quite a few people with 4090 / 13700K systems, and though it is slightly CPU bound at 1440p, it will still give a higher framerate than the 14900K and 4080 Super system. Except, as I said, in games that heavily utilize the CPU, like city builders / sims, but even then the difference between the 13700K and 14900K won't be that big unless heavily tuned.
@@Mostly_Positive_Reviews Thank you for the answer. I normally only play PUBG.
I know that it is a bottleneck (bad), but I'm not sure how bad it is. My spec is a 6th gen i3 with a 3060 12GB. My other PC is an i7 8700 with an RX 6800 XT.
What resolution do you play at? The 8th gen 8700 won't be too bad, but the 6th gen i3 is holding back the 3060 quite a lot. I made a video on how to set up RTSS with MSI Afterburner if you want to check your GPU usage. That'll at least give you an idea 👍
@@Mostly_Positive_Reviews So you're saying that my i7 8700 with the RX 6800 XT is fine?
Not really, no, but it is indeed much better than the 6th gen i3 and 3060 pairing. It also depends what resolution you play at on both. What resolution do you use?
@@Mostly_Positive_Reviews 1080p. So the solution is to pair my 3060 with the i7 8700?
That would be a better pairing, yeah, and then get something like a 5600X or above for the 6800 XT.
So I have a question. I have a Ryzen 5 7600X (bought it about 3 months ago to tide me over until the 9800X3D is available at a normal price, at which point I'd sell it, so I didn't want to spend much) paired with a 4070 Ti Super. Is it possible for a CPU bottleneck to cause stutters even when I lock my fps and CPU usage stays under 50%, sometimes even under 30%? I get stutters in most games. For example, I get around 400 fps in CS2, so I lock it at 280 fps, but I still stutter, even against bots, so it's not a network problem (CS2 does sometimes have network/loss issues).
I also tested my old GTX 1070 in the new PC, and there seem to be no stutters in CS2 and far fewer in other games like PUBG. I tested some games at the same settings with the fps locked to the 1070's minimum fps at ~85-99% GPU usage, then tested the same games with the same settings and the same fps lock on the 4070 Ti Super (so if the 1070 got a minimum of 70 fps, I locked both cards at 70 fps). I had far more stutters on the 4070 Ti Super even though GPU usage was ~60% and CPU usage ~40%. So with similar CPU usage (never above 60%) and of course much lower GPU usage on the 4070, I still got much more stuttering. I'm not sure if it's a GPU (or driver) problem or just a bottleneck. I don't know if it's even possible to get CPU-bottleneck stuttering at ~30-40% CPU usage with a frame cap. Without the fps lock it's around 50% CPU usage (GPU 75-85%), so still not 100% like in the first part of the video. I saw some stutters in that first part, so I decided to ask whether they're possible at 80% GPU and 50% CPU usage, because I see no stutters in the second case in the video.
I also noticed that if I lock CS2 to 220 fps, it never holds 220; it's more like 195-209 in a match (I can get 220 on a map with no players or bots). With the GTX 1070, a 220 lock holds 220 with some drops, and even the 1% and 0.1% lows are lower on the 4070 Ti Super.
Without an fps lock the lows are terrible: 400+ fps average with a 95 fps 1% low and a 60 fps 0.1% low. The 1% and 0.1% lows are awful in other games too.
And of course I tried reinstalling Windows, a clean driver reinstall with DDU (I tested my old 1070, so I had to anyway), different chipset drivers, some optimization guides (Game Bar rename/disable, deleting DXCache files, disabling NVIDIA HD Audio — that one helped with stuttering that made some games unplayable, but there are still frustrating frametime drops, fewer but still there), and simple BIOS changes like disabling the EXPO profile and Resizable BAR.
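For anyone wondering how those 1% and 0.1% low figures are derived: they are typically computed from the slowest frames in a frametime capture. Here is a hedged sketch of one common definition (averaging the slowest N% of frames); the exact method varies between tools like CapFrameX and Afterburner, and the sample numbers below are made up for illustration:

```python
# 1% low: average fps of the slowest 1% of frames in a capture.
# (Some tools instead report the fps at the 99th-percentile frametime.)

def percent_low(frametimes_ms, percent=1.0):
    """Average fps over the slowest `percent` of frames."""
    slowest = sorted(frametimes_ms, reverse=True)  # longest frametimes first
    n = max(1, int(len(slowest) * percent / 100))
    worst = slowest[:n]
    return 1000.0 / (sum(worst) / len(worst))

# 990 smooth 2.5 ms frames (400 fps) with ten 16 ms hitches mixed in:
frames = [2.5] * 990 + [16.0] * 10
print(round(percent_low(frames, 1.0), 1))  # 62.5 — the hitches dominate the 1% low
```

This is why an average of 400+ fps can coexist with a 1% low near 60 fps: a handful of long frames barely move the average but define the lows.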
In Space Marine 2, my GPU Busy is around 9ms and frametime around 22ms.
RTX 4070 Super and Ryzen 5 1600X.
Got any recommendations for a good AM4-socket processor to pair with my GPU?
If you have the cash to spare I'd slot in a 5700X3D. It will be a great pairing with that GPU. Otherwise even a 5600X will get you a great boost, especially in Space Marine 2 as it is very CPU dependent.
@@Mostly_Positive_Reviews I just bought a Ryzen 7 5800X before the parts store closed for the day. Good choice?
@@dankuspanku4650 Yeah, solid choice indeed! Had one with my 3080 for a long time.
@@Mostly_Positive_Reviews I have now installed the Ryzen 7 5800X. My frames pretty much doubled in Space Marine 2 to about 78fps. Frame time is now down to about 12.5ms and gpu busy is still around 9ms. The Ryzen 5 1600X was a massive bottleneck.
Yeah, going from a 1600X to 5800X is a big jump! You'll see pretty big differences in other games too! Enjoy 👍
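The numbers in this thread line up neatly with how frametime and GPU Busy relate: the gap between them approximates time the frame spent waiting on the CPU/driver rather than rendering. A minimal sketch using the figures quoted above (the helper names are just for illustration):

```python
# Rough model: frametime = total time per frame; GPU Busy = the part the GPU
# actually spent rendering. The difference approximates CPU-side wait.

def fps(frametime_ms: float) -> float:
    """Average fps implied by an average frametime in milliseconds."""
    return 1000.0 / frametime_ms

def cpu_wait_ms(frametime_ms: float, gpu_busy_ms: float) -> float:
    """Approximate per-frame time not spent rendering on the GPU."""
    return frametime_ms - gpu_busy_ms

# Before the CPU upgrade: 22 ms frametime, 9 ms GPU Busy
print(round(fps(22), 1), cpu_wait_ms(22, 9))      # 45.5 fps, 13.0 ms waiting on CPU

# After the 5800X: 12.5 ms frametime, still ~9 ms GPU Busy
print(round(fps(12.5), 1), cpu_wait_ms(12.5, 9))  # 80.0 fps, 3.5 ms
```

When GPU Busy sits close to the full frametime, the GPU is the limit; a large gap, as with the 1600X, points at the CPU.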
While I do love the video and the way you explained it, you're always going to be bottlenecked by either the CPU or the GPU, because not all games are designed the same. Some games rely on heavy CPU calculations, others on heavy graphical fidelity, and some on both. So no matter which way you go, you're always going to have a bottleneck.
Thank you, and yeah, 100% correct. The only reason to really care about a CPU bottleneck in particular is because your GPU is almost always the most expensive component and it would be better overall if that is then your bottleneck as it means you are getting the most out of your system.
That said, you shouldn't rush out and buy a 7800X3D if you are 10% CPU bottlenecked. I'd say even up to 20% is fine and not necessarily worth upgrading a whole platform over.
But what does it mean when my PC just freezes?
That means get a new one 😅
@naniurquia9276 It only does it sometimes, and in certain games.
For me I don't mind as long as it isn't stuttering. If I'm stuttering due to a bottleneck, then it's time to get a new CPU. As we know, GPU bottlenecks aren't as bad as CPU bottlenecks.
Also, a CPU bottleneck usually isn't because the game itself is demanding; it's usually the CPU not being able to keep up with feeding the GPU. That's usually what a bottleneck is. A game bottlenecking the CPU, or being CPU limited, usually doesn't affect much.
Like here, the game was more than likely still running fine, so for me that's not as bad.
Good explanation on shifting the load.
I love the comments on Facebook and Reddit from the non-believers. They legitimately think bottlenecks aren't real, haha. They don't believe any CPU can bottleneck a 4090. They really think their 7800X3D can keep up — which yes, it performs well, but there's still more of a bottleneck than they may think. The 4090 is just that powerful.
Or like me, a 4080 Super with a 13400F. I have a bottleneck, and it even causes stuttering at times, which is why I'll be upgrading soon. But yeah, a lot of people go "nah, that's not bottlenecked", haha.
Appreciate you taking the time to write this comment. Yeah, it's amazing how many people deny stuff like this. But I agree 100% with you: unless it's really impacting your experience, don't worry too much about it. If you still get framerates that are acceptable to you with no stuttering, then it's most probably not worth upgrading your whole platform. But also, a 13400F is definitely holding back that 4080, so in your case you'll get a very decent bump by going for a 13700K or similar. I have a 12400F system as well that I test more entry-level GPUs with, and I once tested the 4070 Ti on there, and it was quite bad. GPU temps were pretty low though, as the GPU hardly got used 🤣
@Mostly_Positive_Reviews Yep, exactly haha. I will say I'm getting over 90% utilization at 4K, which is OK — still bottlenecked, but it's been running fine at 4K. It definitely has some stutters though, so I'll definitely be upgrading soon.
Most times the issue isn't the CPU and the game; it's the background apps. For example, you run Discord and a browser, but the game already needs 100% of your CPU, so it stutters. A good example: my CPU is an i5 8600K @ 4.8 GHz, but PUBG stutters when Discord and my browser are open. Close them and the stutters are gone, and my fps is stable at 120+ at 1440p.
Yeah, background tasks can absolutely cause undesirable performance for sure. I remember when I had my 9600K I had to close everything before playing Cyberpunk just to reduce the stuttering a little bit!
One way you can try to resolve this issue or at least minimize the effects is to use ProcessLasso or other similar tools to set the process affinity. You can see which core is the least effective when playing the game and then set the discord and browser affinity to only that core, and the game to the rest of the cores. This will significantly decrease the stuttering, as your game process won’t be slowed down to wait for the discord/browser processes, as they’d be on different cores.
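As a rough illustration of what tools like Process Lasso do under the hood: an operating system lets you restrict which cores a process may run on. On Linux, Python exposes this directly via `os.sched_setaffinity` (on Windows, the third-party `psutil` package's `cpu_affinity` does the same job). A minimal sketch, pinning the current process the way you might pin Discord or a browser:

```python
import os

# 0 = the current process. Save the original mask so we can restore it.
original = os.sched_getaffinity(0)

# Restrict this process to CPU core 0, leaving the other cores free
# for a more important process (e.g. the game).
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # {0}

# Restore the original affinity mask.
os.sched_setaffinity(0, original)
```

Pinning the background apps (rather than the game) to one weak core is the idea described above: the game's threads no longer compete with them for the remaining cores.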
Do you think running a Ryzen 5 5500 CPU and an ASUS RTX 3060 12GB GPU would be OK? 🤷🏻♂️
Yes for sure. I actually had that exact system that I built for my wife. Was a very decently paired system. Some games would still be slightly CPU bound at 1080p, but very minimal.
Hey, I have 16GB of RAM, an i5 11400F and an RX 6600 XT. My fps in Fortnite is generally low (DX12, competitive settings, 1080p). The CPU hits 100% usage while dropping in from the sky and I get awful stutters. I've tried optimizing, reinstalling Windows, everything — couldn't fix it. Do I need to upgrade my CPU or my RAM? Any suggestions?
Does it improve the more you play? Fortnite suffers from terrible shader compilation stutter but it gets better over time.
Have you tried the DX11 mode perhaps?
It does sound like your CPU is the issue here. The problem is that you'll have to do a whole platform upgrade at this point unless you go for something like an 11700K.
@@Mostly_Positive_Reviews Yeah, it does improve over time, but I still get occasional 100% CPU usage and lag, and I can't really play with my browser open because I lag a lot then as well. Would going for an 11700K fix my problems? That would be the most budget-friendly upgrade. Also, I have a 2.5-inch SSD with slow speeds — do I need to buy a budget NVMe drive, and maybe upgrade to 32GB of RAM so I can keep my browser open while playing games?
@@spookytv4044 All those things will definitely make a bit of a difference. An NVMe drive not so much during gameplay, as a SATA SSD with ~450MB/s read/write speeds is still good enough for most games. RAM speed will also make a difference, so if you have plain 2666MHz RAM now, getting something like 3600MHz CL18 combined with an 11700K will definitely help a lot in CPU-bound scenarios.
Stuttering while a browser tab is open is a bit on the strange side, especially if it is only one or two. Is there not maybe something else eating up CPU cycles? A Windows reload might also help, depending on the issue.
@@Mostly_Positive_Reviews Thanks for the answer. I currently have 3200MHz CL16 2x8GB RAM; I'll probably get two more sticks of the same RAM since that's cheaper, and I'll upgrade to the 11700K when I have the means. I really haven't noticed anything eating up the CPU — only that Chrome and Fortnite usually max out my RAM.
This applies more to older games: many games have an engine limit where, no matter your CPU, you can't push past a certain frame rate. Normally these limits sit at absurdly high framerates, and you wouldn't want more than 200fps in most games anyway unless they are super-competitive esports titles. Something like GTA 5 will let you push into the 180-200fps range, for example, but it gets extremely unstable, which is why most people suggest capping it at around 120-144fps. It's worth doing research on certain titles, especially if you have a high-end machine that is more likely to run into that sort of problem.
Yeah, GTA V was notorious for breaking over 180 fps, and then some Bethesda games also have their physics tied to the framerate, so if it goes over 60 fps things start to freak out. There are patches that help with that though, but agreed, best to find out if a game has an engine limit or not and try to stay below that.
Many new games do have uncapped frame rates though. If you do all you can with frame generation, DLSS and overclocking, you can push Cyberpunk to 300 fps or more. But there definitely are games that have engine limits, or issues when going above certain framerates.
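The frame caps discussed above are simple in principle: measure how long the frame took and sleep away the remainder of the target budget. A toy sketch of that idea (a real limiter like RTSS works at the driver level and is far more precise; `run_capped` and its arguments are just illustrative names):

```python
import time

def run_capped(render_frame, cap_fps: float, frames: int) -> float:
    """Run `frames` iterations of `render_frame`, sleeping so the loop
    never exceeds `cap_fps`. Returns total elapsed seconds."""
    budget = 1.0 / cap_fps
    start = time.perf_counter()
    for _ in range(frames):
        frame_start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - frame_start
        if elapsed < budget:
            time.sleep(budget - elapsed)  # idle instead of racing ahead
    return time.perf_counter() - start

# A trivial "frame" that renders instantly, capped at 100 fps for 10 frames:
total = run_capped(lambda: None, cap_fps=100, frames=10)
print(total)  # roughly 0.1 s — the cap, not the workload, sets the pace
```

This is also why capping below an engine's unstable range (e.g. 120-144 fps for GTA 5) smooths things out: the pacing loop, not the engine's breaking point, dictates frame delivery.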
I like to cap my CPU so it doesn't go over a certain temperature (laptop), so the CPU and GPU aren't always at their best. I think I have a good pairing: Ryzen 7 7735HS, RTX 4060, 2x8GB (4800) at 1080p. Nice explanation, thanks!
There are definitely instances where limiting components is a good thing indeed. But yeah, your components are pretty well paired. Would even be able to power a more powerful GPU with that CPU as well, so you are fine.
@@Mostly_Positive_Reviews Keep up the good videos!
Thank you, appreciate it a lot 🙏
How can I get "GPU Busy Avg", "Frametime Avg" and "GPU Busy Deviation" in MSI Afterburner? Very good video!
Thank you!
The easiest way is to use CapFrameX. It has it built-in already. You can do it with Afterburner but you have to go add plugins in RTSS and configure it from there. It's possible but a bit of a mission.
@@Mostly_Positive_Reviews Oh thank you!!
Thinking of getting this GPU for a Ryzen 5 7600. Will I have a CPU bottleneck at 1080p?
It'll be fine. A slight CPU bottleneck, depending on the game, but nothing to worry about too much.
Thanks for the answer. My main game is Counter strike 2 with 240 hz monitor.
@@Mostly_Positive_Reviews Will I get a constant 240 fps without stuttering, and how do I prevent it if it happens?
Counter-Strike does have some inherent stutters that you can't do much about, especially at the start of a match, but they normalize very quickly. You should be able to get 240 fps, yeah.