I love wendell's computing science videos explaining how stuff works and asking questions. Might not be as popular as performance reviews on x but really appreciate it dude!
Core parking conjures up a negative feeling for some gamers. mostly because Bulldozer core parking would make an 8 core CPU appear to run better or worse in a super case by case way. Toss into the mix the sheer number of Windows 7 and 8 pirates who never updated to the Bulldozer friendly new Scheduler so their default assumption is its still broken / breaking their games.
@@jetrpg22 What wendel seems to like about AMDs current implementation, is that no tasks are ever pushed to one chiplet or another. All games/apps are free to use the "entire" system if they so choose. However it's windows and the scheduler (and I guess the chipset drivers?) that will determine how "big" the entire system is, ie. how many cores to keep parked. The metrics right now seem to be if an app uses less than 8cores (1CCD) then it will keep the other CCD entirely parked. Which CCD is parked is set via gamebar and perhaps the system setting of prefer cache vs frequency. However once an app asks for more than 8 cores, then the system will be forced to wake up the other CCD and both will be available. But that is a very user hands off approach (compared to say affinity) so if you wanted to run two separate apps, each on their own CCD, you can't really do that unless one of the apps launches more than 16threads. And even then, without affinity, once the entire system (both ccds) are awake/unparked, you have no control over what app runs where and when. So in some ways it's less elegant and more crude. But surprisingly effective in a large majority of scenarios.
Great explanation! This also clearly establishes why there is a difference in price and value between the 3 models. It is fortunate that if you are primarily a gamer and don't need to compile a lot of code or do 3d work, then you can get a great product for just that. If you want both, with the additional performance per watt, the 7950x3d does a really good job at delivering a solution that can be versatile, and you can enjoy a smaller energy bill without too much inconvenience compared to the non 3D. I think some reviewers misunderstands this usecase for 7950x3d because the parking concept was not well understood by all. And while it loses in multi core benchmarks to the non 3d, it is easier to cool -> lower fan noise -> happy wife. And saving 50-100$ on the energy bill per year is also a benefit in the long run (double for some power hungry Intel CPUs). If the price rises again in Europe, it could end up saving its own cost on energy by one year.
Thanks for the explanation. Waiting for stock back and this will help as I do love tweaking. I was thinking about doing a misaligned affinity mask to see the l3 hop latency but you've gone and tested this already. Which is why having third party reviews is so great for the industry. So excited for this level of power soon. I'm in the rare class of developers that needs gaming and compilation etc. performance.
this is the kind of information i've been wanting about these new CPUs and how they work. now I can think about how I will be using them for my own use case and choose an appropriate one for myself, thank you wendell
AMD has mentioned the huge work they have done with Microsoft over the past 10 months to optimize the new cpus for windows 11. It has more to do with Windows than it does with the architecture of ZEN
Yes and no. Intel and Microsoft had to work together to make the scheduler work correctly for the hybrid parts. And frankly there are still issues or you'd be able to have AVX-512 enabled for 12th and 13th gen Intel parts. Microsoft isn't under any obligation to optimize performance for AMD's Vcache parts. And this is where AMD has in the past fallen behind Intel and Nvidia. Nvidia for instance has more SOFTWARE ENGINEERS than AMD has in their entire Radeon division. This is also why Nvidia GPUs tend to work better, with less problems. The same has been true with the Intel vs. AMD situation. Intel has more software engineers than AMD. It's why THEY could come out with a 1st gen GPU that uses AI for XeSS. So this is also on AMD, AMD IS getting better at getting more software engineers on their teams because they KNOW these parts are becoming ever more complex and you can't keep pushing these problems off on other companies to deal with. Microsoft isn't a black box to these hardware companies. They can get access to the software they need to do the development they need to do, and Microsoft can HELP, but it's not on Microsoft to do all this work.
@@johndoh5182 Yep, this really is a almost bulldozer level issue. I mean its not that bad because it works, but the solution so far seems so crude. And maybe i am wrong but i keep seeing people saying well if need it will un park them.... But i have not seen someone play a game start up YT, record, and something else and what cores are doing said work. If they are using the vcache cores, thats bad. I guess the good part is windows and system firmware updates could fix this or expand use later but still in its current form its just not good. I WANTED to go AMD for the 2nd time... Atm i will go intel because their solution works (maybe a 7900x if the price tanks with the x3d). The 7800x3d is a different story so ill wait for it and i do like the idea of it (even if id rather prefer more cores, i like cores, because when i bought my bother's 3xxx and my 4790k) people were saying really single core maybe duo is all you need. But today my brother is still playing modern wow on his 3xxx and my niece is using my 4790k (they are oc'ed and all that jazz). But still 8 cores is enough but even then 6-10 years it will suffer vs a 12+ core machine.
@@johndoh5182 That AVX512 problem came from software trying to use all cores for AVX-512. The E-cores lack of AVX512 in Alderlake was the main problem. It had nothing to do with the scheduler. That was a software problem. No the reason why Nvidia has better software is having a bigger market makes it easy to get volume of different system setups. size of software department has nothing to do with talent inside of it either, it's still about volume.
@@johndoh5182 Odd, I have had more problems with Nvidia software than AMD. Those engineers were often employed to do dumb things like add massive tesselation to games to win on benchmarks. When you're running a mix of softare, some using older APIs and continue using older hardware Nvidia are a massive PITA, their effort is put into the latest releases and Jensen's goals to rule the metaverse. I've lost so much time due to bad practices by Nvidia, which are a matter of deliberate policy. An example is features missing from the WHQL drivers, so when support is dropped by them, you lose functionality provided in Experience. Another example trick Nvidia used was to punt scheduling from the GPU onto the unused cores of CPUs, that allowed the GPU to borrow power budget from the CPU and score higher. The trouble with that is, it is troublesome for async compute and when that CPU roundtrips cannot match hardware scheduling, then you have games that a heapened to do stream decompression on a heap of cores while the console the game was developed on had a decompression engine built into its disk management silicon. Of course if you're into the latest game releases, your best option is Sony Playstation or Xbox , but if you are on PC then Nvidia are better tested on release because they're the market leader. Want to carry on using that 1080ti or play old classic games then you may well discover that bitrot is a characteristic feature of Nvidia, you MIGHT find time consuming work rounds for the API they aren't interested in anymore.
The scheduler is MS's, but MS need strategies to deal with high core count CPUs, otherwise they'll be relegated to containers running in virtual machines, like in many server deployments. There was always a problem of threads core bouncing, you really don't want to evict a thread from the primed L1 & L2 cache they're using, but Windows appeared to like mixing it up, while the high boost ST is reasonable to migrate to a cooler 2nd core, you really don't want to do that when thermals are not constraining things. Fundamentally having CCX's with different properties and cache sizes is a coming problem that MS needs answers for. I can imagine having 4 power efficient but slower cores intended for OS tasks and assisting drivers, integrated into the IOD. They'd be like the ARM little cores in phones or the original design intent of Intel's Atom cores, where not waking up main cores, but doing light processing at lower frequency could benefit the overal design. In future something like the Zen2 compact 4 CCX block, might have a compact variant on a cheaper process node, than the main flagship cores intended to run applications and heavier tasks.
Hes so right, parking cores brings down thermal load alot , and once software gets perfect it will run as least amount of cores needed and see if it can just keep everything at closer hardware because ITS FASTER if it can just do the job one one core instantly. its getting pretty good now.
One missing concept was, IMHO, mentioning "Context Change", core/threads switching. Not just the caching advantage that 3D brings. Most single threaded processes really benefit from not being preemptively "core switched" for no reason. That is when the cache between the cores really comes into play, especially regarding high performance single threading gaming loads. I might have missed something and glad to learn something new, but IMHO a parked core is something that the scheduler can pull in if it is really needed.
The issue i am having is, what trips this. So far i haven't seen a video of someone running chrome with a yt video, recording, and say discord, while playing a game and looking at app and core behavior. If all those run on vcache cores, until its like nearing 80-90% use, thats not good. You are going to have negative effect from this... still any current issues could be improved via windows and other updates so there is a lot of room for improvement.
@@jetrpg22 I don't know if the following is the case (especially given how kludgy the current Windows workarounds are), but the information that really needs to be collected (if it even can be, since collecting this info is very nontrivial) in order to make good scheduling decisions about mapping threads to V-Cache cores versus ordinary cores (including the important case you mention, that is, thinking twice before switching a thread to a different core complex for no justifiable reason) - includes how big the memory space is that is referenced by the set of threads that comprise each process, relative to the putative L2+L3 cache size is (for a given proposed mapping), along with the level of locality of memory reference (as opposed to its opposite namely scatteredness of memory reference), as well as the CPU-usage intensity cumulative over the threads comprising each process (i.e., do these threads cumulatively want to consume all available CPU cycles), plus the CPU-usage intensity of each of the latter threads comprising a given process. Even if you had all the above info at your fingertips, you (in the role of a scheduler) would have a hard time making optimal scheduling decisions. An algorithm to do so is very non-trivial (and I have some historical expertise in this area from which to compare, having contributed to a classical scheduler that was the best of its era, and knowing how much simpler the requirements were back then versus the mess of a scheduling problem we face now, given that Intel has contributed one axis of hybridity to the scheduling problem via its Apple-like E-cores and P-cores, while AMD has just now contributed an orthogonal axis of hybridity to the scheduling problem via its normal-cache-cores versus big-cache-cores). Short of a beautiful clean scheduling algorithm for what we have to deal with these days, one can at least state some general rules of thumb. First, per @jetrpg22, don't switch a thread from a normal-cache-core to a big-cache-core for no good reason - you would do so only for the betterment of all threads. Currently, the latter rule in the context of X3D CPUs is equivalent to saying don't switch to a different core complex for no good reason; but this could be even more complicated if a future CPU has 3 core complexes with a mix of 3D V-Cache or not. Second, at a high level, your goal is to give the big-cache-cores (on a core complex containing 3D V-Cache) to those threads of those processes that would benefit the most from having the massive amount of L2+L3 cache available to them. This leads to two obvious cases and one non-obvious contra-case. In a thread within a process that has a large amount of memory being accessed, and furthermore this thread is itself using a large chunk of available CPU, it is beneficial to put that thread on a big-cache core residing in the core complex that has 3D V-Cache. Especially if there's sufficient locality of memory reference, the massive L3 cache (plus sizable L2 cache) will hopefully make it such that most memory references are L3 cache hits, with a fairly large chunk of those actually being L2 cache hits, and only in the worst case do you have to load from memory. In DDR5 memory that's a huge speedup - a cache hit versus a memory access. It could as much as triple the effective CPU speed over memory-only accesses. The second case is not so much when memory is huge, but rather when memory accessed is sufficiently small such that all memory fits into L3 cache. That's even better actually. You might also say that you should put the sister threads of that process onto the same core complex - if possible. But counter-intuitively, in some cases - namely sister threads that, say, only run one millisecond out of every second - might best be put elsewhere (to make room for threads from some other process) since they execute so few instructions that that they macht nichs). In general, the rule is to keep sister threads together on the same core complex if you can, but don't worry about that for threads that are mostly idle - you're generally better off putting those elsewhere if their slot can be producticely used in some thread in some other CPU-intense process. You can see how complex this scheduling decision process "should be" if done right. And ideally, you want to know what the cache hit rate would be for each thread under the two alternate scenarios of putting the thread on one of the big-cache cores vs one of the normal-cache cores - along with how much CPU that thread uses in general. For dealing with the hybridity of Intel big.LITTLE cores, Intel has added useful data collection features to help out the Windows scheduler. I don't know whether AMD has (or has not) added the required data collection features (such as memory sizes and cache hit rates and whether a thread is a big CPU consumer or not). If they haven't added these features, they should do so in their next iteration of 3D V-Cache CPUs.
Hi Wendell, the new AMD chipset drivers (5.08.02.027) have drastically changed the way my 7950X3D works. First: performance in games is massively boosted and Second: cores don't park anymore. Would love to see an indepth look at these new drivers. With the old drivers Metro Exodus Enhanced was a stuttery mess and CP2077 had issues during some cutscenes. Now, everything is smooth as butter!
This is awesome Wendell. Any idea what the default Linux scheduler behavior is? Does Linux park cores for games or can is accurately identify if a game is running? Or will we have to go into BIOS to change the settings to “prefer cache” or “prefer frequency” based on what we’re doing?
@@thetj8243 and Benjamin Lynch, this makes sense to me - I would expect the Linux engineers to probably have better abilities to properly address the scheduling issues here, and the Microsoft engineers to probably be somewhat less facile at this task. This is on the (almost certainly true) assumption that Dave Cutler no longer leads this team at Microsoft. Where's Dave when you really need him (lol)? Since lots of people play games on Windows, it might be that the existing solution approaches in Windows (as so excellently explained by Wendell) are sufficient to solve the complex Windows scheduling issues presented by what amounts to the hybrid-but-different-type-of-hybrid-than-Intel architecture of the AMD 7900X3D and 7950X3D chips, at least on systems mostly used for gaming. The advantage that gamers have (that implies that the current "partial Windows solution to the X3D scheduling problem" is probably good enough for gamers) is that when you're actually playing a game currently, your focus is pretty much completely on the game. Sure there's a few system kernel threads that have to run at the same time, but you're not likely to be streaming TH-cam and TikTok videos at the same time. Admittedly, the stuff Wendell describes here is a gigantic kludge. But it's a kludge that is likely to kinda-sorta work for gamers at least. On the other hand, if you're not gaming but running a host of simultaneous productivity applications that all need lots of threads and lots of memory, I'm betting that the beta Linux scheduler might do a better job. For those folks, gamers or otherwise, who are committed (or stuck) using Windows, and who would like to use an X3D AMD CPU in their next system build, I'd recommend the following decision matrix. If money is no object, go ahead and buy the 7950X3D. By saying money is no object, what you're really saying is that I can afford the delta cost of a 7950X3D over a 7900X3d to get the delta +4 (=16) cores over the 7900X3D part just for those cases where I really need the extra CPU horsepower, and especially for those cases where I have 8 cores (16 threads) using lots of memory that can benefit from the 3D V-Cache to increase the cache hit rate markedly, while having say 8 other cores (16 threads) worth of work that can comfortably run on the other core complex without the 3D V-Cache. Hopefully the core parking and other crazy features will make pretty good scheduling decisions for the borderline cases. This scenario it tantamount to buying the Chevy Suburban because occasionally you need to haul around your kids basketball team, even though most of the time you're just driving a family of four. For the person who wants a very high-end system but chokes on the extra delta cost of a 7950X3D over the 7900X3D, consider buying the 7900X3D. The advantage of that CPU part is that whenever it does have to park cores (cuz it judges it better to do so, which may or may not have been a good decision, but hey, Windows is at least trying its best), it's only "wasting" (by parking) a max of 4 cores typically (8 threads). In other words, the only money you wasted (and only in those rare scenarios where such a waste is actually most likely to yield better performance on that task mix) is the delta cost of the 7900X3D over the cost of the 7800X3D (which has 8 cores instead of 12 cores). This leaves the option of buying the 7800X3D to many buyers that want to save a substantial couple hundred dollars on the CPU part of their system cost. My guess is that about half or more of customers fall into this bucket. For most gamers in particular, they might be better off spending that extra cash on a better GPU. A 7800X3D plus a 4090 or a 7900XT is probably going to game better than a 7900X3D with a 4080, for instance. Clearly, the reason AMD is delaying the release of the 7800X3D a couple months just to incentivize those customers that are thinking seriously about a 7900X3D or a 7950X3D to pull the trigger on that rather than waiting for the 7800X3D to arrive. Finally, for the buyer that wants 3D V-Cache on the cheap, with no wait, there's always the 5800X3D with the advantage of a cheaper motherboard and cheaper memory.
Thanks for delving into these tests. Can I ask you to try running two instances of a multiboxer game that IS catered for by the game bar, if you've got a second screen available, and see whether it is a simple matter to specify each instance to inhabit a different CCD? In games such as e.g. Eve online, I ran my main character screen on higher graphical settings than my alt screen, hence the thought that on a 7950x3d I'd want to run main toon on the vcache cores and the alt on the other cores. My old EO box was a crossfire setup so dual gpus helped out a lot in making this work.
yeah im super curious how the 7900x3d performs since it has only 6 cores of 3d cache. Really disappointed that amd didnt seed any to reviewers for launch day coverage
@@TheTechhX According to some of the reviews I watched, AMD wasn't sampling anything other than the 7950X3D to reviewers, meaning the reviewers will have to buy the part themselves now that it is released to test it. Reviews should come in a couple of days once they've had time to benchmark everything. I can only speculate why AMD chose to only sample the 7950X3D to reviewers; best case scenario is they wanted to put their best foot forward and wanted to maintain as high consumer stock as they could on both parts while doing so. It could also be that the 7900X3D performs almost identically to the 7950X3D in most workloads (but certainly not production/extremely core heavy workloads) and they were concerned that the reviews would cause sales of the 7950X3D to be cannibalized by their lower margin part. This is what I'm betting on, as I've already ordered a 7900X3D to finish up my newest computer build once it and the GPU arrive. Worst case scenario is the 7900X3D has serious performance problems compared to the 7950X3D due to the four less threads on the CCD with the extra L3 cache, requiring most workloads to utilize both at the same time. Considering how this is a problem with every multi-CCD Zen processor (latency communicating between cores) and most workloads have the 7900X similar to the 7950X, I'm doubtful that this is the reason, but there's always the possibility that the heterogeneous core design causes even more disparity between utilizing one CCD versus both in any particular workload... Again, this seems unlikely given the multicore performance of the 7950X3D versus the 7950X, but there's always the possibility of Murphy's Law coming into effect and that the lack of enough threads on the CCD with the L3 cache becomes an issue for the cheaper part.
Core parking also loves to core park wrong cores, and also loves to switch off cores during actual use of that core. It's been an issue since 11+ years ago, and I always remove core parking On AMD it may work, but turning cores of and on to save power (how it does it now) actually creates micro-stutter to begin with Process lasso can easily fix that issue by putting cores that are needed to use, and letting Windows 11 scheduler to take a smoke break, like a long break, with lots of cigarettes
I think Microsoft has a work to do. At least to give us ability to permanently set affinity mask to specific executable or shortcut (yes, I know about start.exe but this is not reasonable way to do stuff). Having possibility to set core preference for a process would be awesome. I think good time to introduce it was when first Threadrippers were available.
For a gaming PC would it make sense to direct the Windows OS to use the frequency cores while the game uses the cache cores to minimize competition between the OS and Game? Are there any special considerations like certain windows services which need to cross talk with the game services and should be run on the same chip to optimize communications? I am surprised no one is talking about this potential for optimization. Hopefully AMD will have the OS/Game contention issue resolved automatically one day, but in the meantime, is there an obvious approach to implementation?
AFAIK Bergamo will be simply lots and lots of dense zen4c cores. No heterogeneous mixing there that need to be handled, so I don't see how this applies?! I'd also guess the primary OS the main target customer base will use with those parts is not desktop Windows.
Here’s the thing I don’t yet understand. If Windows 11 parks the frequency cores because a game is detected in the foreground, and then a background process (which will presumably be running on the cache CCD along with the game) reaches its threshold for “too many resources to share available cache cores”, it will wake the parked frequency cores. But will that also effectively manage affinity? Does it unpark _all_ of the frequency cores, or just the _n_ cores necessary for that process alone, shifting it to the frequency CCD and leaving the game threads on the cache CCD? I fear you could end up in a situation where some game threads get moved to the newly-unlocked frequency cores, introducing stutters and lowering performance simply because a background task spun up. As far as I can tell, AMD is not using affinity masking for their heterogenous behavior on the 79XXx3D, perhaps to avoid a situation like the Alder Lake release where anti-cheat engines freaked out over Intel’s Thread Director changing the process’ affinity. From my limited exposure to the technical details, it seems AMD instead instructs the CPU to reorder the list of “preferred cores” that it reports to the Windows scheduler, in order to place either the cache cores or the performance cores first among them. But it still seems like setting affinity manually to prevent cache-hungry games from _ever_ spreading onto the frequency cores and incurring cross-die latency would be wise, even with core parking enabled. Right?
I believe you are correct with everything except for one important thing. AMD's software only changes which cores get parked with the selector. So, it would be interesting to see what would happen when the setting is set to park the cache cores, but the game has a manual affinity set to only the cache cores. I have a hypothesis that the game would suffer from major stuttering.
@@mattparker7568 AMD's software doesn't change which cores get parked. It changes from the Windows default of "no cores can park" to "up to half of cores can park". Windows always parks the higher numbered cores first, and AMD numbers the cores from 0-7 on the cache CCD and 8-15 on the frequency CCD. See the slides from AMD in Techpowerup's review, and the Microsoft documentation on Processor Power Management Options.
100% it will move your games to the non vcache cores currently. If it was working otherwise, they wouldnt need to park at all. This is a dumb solution where windows/xboxbar sees a game and says, "game". Then it parks the non vcache CCd (or vcache CCD if it sees a game that it knows likes hz). If it was smart like intels you could just keep all of them unparked and push most non game process to the desired CCD (like intel typically does with E cores). More so you have to shift these processes and shifting cores is bad for lows (you get jumps).
This is a very good explanation, but I'm left with some questions. It appears to me that the ideal process scheduling would have processes that get a benefit from the V-Cache (ie games) running on CCD0, and all other processes running on CCD1 (the reality is probably more nuanced than this). That being said, I'm left feeling lost as to the level of tinkering I should do on my 7950X3D. As I see it, there are basically 2 approaches: 1. I can trust the Windows Scheduler + Game Bar + Chipset driver combo that AMD and Microsoft have worked to optimize 2. Manually assigning different core affinities. ie setting processes to default to frequency cores (CCD1) in BIOS, and then manually assigning games to CCD0 through process lasso or something similar. I'm a hopeless tinkerer, so I'm very tempted to run option 2. But I am wondering, which do you think is ideal?
pauls hardware and you are the only ones that have talked about it properly. I assumed it worked this way but all the others I've seen gave me the impression that the non 3d ccd was parked for every game.
One of them is parked, okay he didnt say otherwise. That is what the gamebar does. Now if you go to a hiogh load on those cores it will unpark and move process to the other CCD, that not a great thing. It better than not having any more cpu left. But it also means you are now moving processes from the CCD you want to the one you dont, and they can be game processes.
A directly indirect response to Frame Chasers who is completely unaware of what's going on. Can we next talk about Sub-NUMA clustering in enterprise? 😊
Thank you so much Wendell. Picking a new CPU has been a hassle for me. I've been going back and forth between v cache or no v cache. I want that gaming performance. So the 7800x3d makes sense. However, I also do workloads that make use of more than 8 cores. But I've watched several videos that made using the 7900x3d or 7950x3d seem like a nuisance because of the core parking. So I would decide against a v cache chip. But your video explains it way better and it now seems obvious to me. The 7950x3d seems like the best of both worlds option for me. It has the v cache available to 8 cores when I'm gaming, but has the 16 cores needed for when I need to work. Finally have my mind made up.
The thing is, the 7800X3D is basically the same as a 7950X3D in gaming. So it's actually better overall, unless you absolutely need 16 cores, or unless you want to do heavy workloads while gaming. You basically only gain productivity performance with the 7950, which is what hurts its value a bit. You don't really want a situation where in gaming you have all 8 cores on the VCACHE to be in use + 2 cores not on the VCACHE, because then the other cores will not really have the benefit of VCACHE, so they'll be slowed down some. That's why the 7800X3D is better, you will never get that scenario. Games also aren't all that multithreaded to begin with. Even if a game uses 8 cores, it can only do so much with them, usually underutilizing some cores.
@@peoplez129You're missing the point of the 7950x3d. I said in my original comment I need more than 8 cores for work, but also wanted 3d vcache for gaming. So explain to me how the 7800x3d is better for my use case? You just rambled on about how the 7800x3d is better for gaming. And nobody is saying that it doesn't make more sense if you're just gaming. But I'll remind you yet again that I'm not just gaming.
Is this supposed to become independent of the Xbox Game Bar eventually.. Or is it intended that you always have to run this for your CPU to work correctly on Windows?
this is amd's solution for now. i'm guessing that in time they could make it work with bios/windows/amd driver updates and without the gamebar but who knows. 1st generation products are always overcomplicated and may not always work properly
@@Level1Techs This is really helpful! Thank you so much! here is my conclusion on the 7950X3D: Everyone seems to say that 7950X3D is a more expensive 7800X3D, but for 250$ more you get all the productivity performance and if you are willing to play around in your system and optimize scheduler per game basis you could end up getting the best of both world. So basically 7950X3D is like having a 7800X3D and a 7700X together but "binned" so you get a slightly better CCDs for both. At the end of the day, 7800X3D will be a plug and play, great gaming experience for most games, 7950X3D will be for advanced mixed users who have the budget and like to tweak thing for a slightly better performance.
@@Level1Techs Thats the Thing. As far as i can tell no Reviewer did test this or tell ppl this or make Charts for this with and with out X3D. All Charts i see are always only the 7950X3D but only the X3D Part of the CPU is tested. No one testet the 7950X3D with the none V Cache CCD. I have seen some Tests where Reviwers did also Test an normal 7950X besides the 7950X3D and tell ppl that it is better than the 7950X3D in some Cases. But that doesnt make any Sense to me at all (maybe a few FPS sure but not a few %). While its correct in Theorie... when you make Tests and an out of the Box Experience... i think it should very well be highlighted that you can also just get the same Performance always on the 7950X3D compared to the 7950X. I know its unfair to Rant (i am not so sorry if that sounds like i am) and very much Work but it seemed like no one really cared to even test this Theorie to tell ppl... look, if you have an 7950X3D you basically can also have the same FPS as an 7950X. At least ONE TEST to prove the Point would be nice. Because honestly now it seems everyone is kinda biased and against AMD because there should never be the Case where the 7950X3D is (Way) worse than the 7950X. If it is it should be solved since its basically just an 7800X3D and an 7800X. What i heard from every Review is that you should 100% wait a Month for the 7800X3D. No Question about it. And thats somethin i honestly think is unfair in a Way towards AMD and the Consumer who thinks the 7800X3D will always be the better choice against the 7950X3D. And thats simply not the Case. There are Games out there where the 7950X3D will be the better Choice because you can just force the none V Cache to do the work. With the 7800X3D there is no such Choice. Of corse i get that the 7800X3D is Way cheaper and also almost no Difference will be in most Games. But still i think that it just is not the whole Story. Just because 98% of ppl wont care and know... There are some who want the best of the best and them telling the 7800X3D will be even bit better seems... strange. When you could tell them, look it depends on so many Things. (Sorry for so much Text and maybe its not 100% Clear either what i meant. English isnt my main Language. Also again i am not mad against any Reviewer but i just feel like everyone was missing that. If i am wrong just say so. I might be not getting it)
@@slimmkawar9780 Oh. I just wrote a similar Thought. Well, thanks. lol. Yeah, i too think its strange no one pointed that out. Its like everyone thinks the X3D Part will always be the best Choice for Games which isnt the Case and there is a good Chance that the Scheduler improves as Time goes by and that at some Point we dont need to mess around anything anymore but the 7950X3D just automatically knows whats better to use. the CCD with the 7800X3D or the 7700X. Right now it seems kinda bugged if i am totally honest. Not really but you get the Idea what i mean.
Some games you physically cannot do some of these tricks if they're EAC protected (presumably other anti-cheats as well) as they block affinity changes. Very frustrating when you watch a single thread heavy game land on the worst core or a HT core and can do nothing about it.
In that case, at least on Intel, there is a thing you can do: disable the other cores from BIOS. Well, not the best, but you can maximize your performance that way.
Ty for pointing out the affinity configuration. Screw using these features when I can go full autism on every game testing different CCD configurations on this 7950x3D...
So this is pretty much a 5800x3d and a 5800x “glued” together. That’s crazy. This is probably the most creative way the CCDs/CCXs have been used. Imagine having a dedicated gpu core paired with a 7800x3d.
Its bad in its current iteration tho, and maybe this is due to the poor Task scheduler performance, but its bad. Because unlike with intels E cores (which are lame but do work) You arent off loading non game processes to the 7700x ccd part of the chip. Instead, its all on the selected core (typically 7800x3d part , or the vcache part), unless it its super full then it unparks the other CCD (7700x ) for current processes. It unparks them for not just the background process, but all process, INCLUDING THE GAME PROCESSES. Thus, you can now have your game running on the 7700x if you have a lot running, or if say its 6-8 years from now and well you need both CCD's to keep up with the current gen games. But intel isnt much different in its poor long term prospects, with a max of 8 P cores. Because say in 6-8 years it could leverage 12 p cores.. well there is no 12 p core option. This is why a 7900x isnt the worst option if you are looking long term (but really long term like i said 6+ years). I would wait for the 7800x3d to drop if the 7900x price drops a good amount its not a bad long term option, high hz (still matter) 12 full cores.
@@jetrpg22 Intel’s iteration was horrible with the 12000 series as well. The e-cores were Intel’s poor excuse to counter AMDs higher core counts. Some games wouldn’t even run because the DRM didn’t know what the e-cores were. Thankfully Intel and Microsoft fixed those issues. AMD will also fix those issues without a doubt. Ironically Intel did the initial legwork. Honestly though, I think AMDs solution has much better potential compared to intel. Intel is packing weaker cores that will always be weak when compared to their power cores, but AMD has cores that are both powerful but optimized for different workloads. AMD still has a ways to go but they’ll get there eventually. Imagine a world, which is kind of already here, where the windows scheduler perfectly parses out a gaming task and a production task and directs them to the right CCXs. For example, a streamer who is using OBS with the production cores and playing a game with the x3D cores. That would be amazing since they would get the best of both worlds. I know you were discussing long term prospects but I believe you missed out a huge factor in AMDs AM5 platform, and that is its amazing longevity. You can hop onto any of their current CPUs, and easily sway it with a better one further down the line. It’s trivial just how easy it is to do so, especially since AM5 CPUs are LGA now.
@@greensleeves8095 Well i mis-clicked and lost it all. Anyway, Intels e cores are lame, but thats a design issue. The solution is better (on chip scheduler and current algs). AMD isnt saying they will get 7xxx3d to work this way. But i think the possibility is there with Xbar, even with this being a really crappy way of doing this. Intel only having 8 p cores means 7900x, and higher, has better longevity anyway. Odd in that while 7800x3d will probably appear to be the best option today BY FAR. The 7900x3d with its extra full cores means in 5-6 years its going to smoke the 7800x3d. This also means the 13700k is probably the better option than the 7800x3d, price depending, regarding longer term and current performance. The socket point is a good one.
11:48 Is that a hint of an upcoming video? Performance setting in windows when you're running more than just a game (eg: Discord+Spotify+Browser+Game)? Is it better to be on Balanced vs Ultra?
What happens when I game and encode/stream? If it parks the others cores instead of using them for OBS, it will slow my game. If it unparks them then the game can bounce on the other cores? For games to me it seems neither is the solution. Not parking, not affinity, but rather "preference". So if the game can run only the preferred cores then run only on them, but if it maxxes them then use the non preferred ones as well.
Yes, this is how it currently works. And whythe "if it needs to unpack it will" narrative isnt exactly helpful. Unless this info is also being included. Its a bad solution. The solution works now. But by 2029 and your cpu is the bottleneck, its a bad solution. But because this is mostly a software issue, it may not be an issue by then at all. And work like intels P cores vs E cores (but even a bit better).
Its true. When u have an 8 core cpu and gaming, youll notice that some cores are more utilized than others are significantly less percentage. Thats because game devs only use those cores and not all 8cores.
So having your PC running at balanced power while gaming is a good thing, if understand correctly? Windows under system>Display>Graphics has custom options for apps that always choose performance for games.
I've had problems with core parking and virtualization. I have found it necessary to tweek Windows into holding cores active and setting both minimum and maximum values for CPU utilization in power management.
How would you go about finding out what's causing a micro stutter on a PC? I have this weird issue where there's random stutters, but my systems only at like 20% utilization while gaming on the CPU and GPU, and nothing shows what's causing this. I've tried everything I can think of.
if you don't know what is causing something from your own past experience, then the direction you can turn to are Profilers like Intel Vtune and AMD uprof. these are the same tools that Software Engineers use to profile their own Software. you can get all the data you could ever need out of the CPU to see exactly what it is doing and ergo what the bottleneck is. if it's Anti-Cheat protected games though, then you're kinda out of luck if you don't have the experience to know yourself, since you probably can't hook there Profilers to those games (without getting banned, atleast).
@@taiiat0 so I've tried AMDs profiler. It only does processes, and I tested it on a game, specifically VRChat and it won't even show the process for the game when I run it as aministrator. Idk what's going on but this is driving me crazy.
@@nathantron hmm i don't think VR should make a Process special, and i'm pretty sure VRchat doesn't have Anti-Cheat... past that, beats me, sorry. i don't have any experience using AMDs' tool.
@@taiiat0 I didn't think so either. I tried it in VR mode and Desktop mode. It's so weird, but the game has an issue where it grinds to a hault when anything avatar related is loaded. I even moved the avatar cache folder to ram and it didn't help at all.
glad you answered the parking question as i was thinking of why not just un park them. i'm curious on how well the game star citizen would do with intel vs amd as it is a very cpu and memory intensive game.
I REALLY wish CCD1 was the frequency one, and CCD2 was the cache one. That way, the operating system and all background tasks could default to those and then you can launch games and give them CCD2 by themselves to minimize the amount of effort needed to manage this manually. Really frustrating decision on AMDs part.
Well AMD's preferred core logic already works independently from this, so its going to put other things by default on the frequency cores. It uses XB game bar to otherwise specifically flag the cache cores.
I have a question, if in the future, any game would need more cores, the cpu will unpark them right ? Is there any downside of using unparked cores instead of just a cpu like i9 13900k or i7 13700k which uses all the cores ?
As much as i enjoyed watching this video, there is 100% issues with the core parking and the scheduling atm and it sucks, some games will only utilize 50% of the 3d v cache cores and then also use frequency cores which will result in lower fps/performance i have tested this with a few games.
thanks for telling us the new normal LOL I am hoping that the 7950x3d will be better for me in space sim as running Tobii eye tracker , Dual Joysticks, and discord can run on the other cores and leaving the heavy load for the V-cache and from what I understand Star citizen uses as many core as it can so not too sure what will happen if it helps the game or it will think its only a 8 core CPU, or just put the main hungry loads like the render thread on V-cache and spread the rest over the mix . this was one reason why I wanted both CCD's with V-cache lol
Neat theory video - but 1% on practical how-to. You have anything that actually answers questions like how-to get your 7950X3D to work in games? And/Or how to know your system is properly handling games?
Question, in case of non-gaming (game bar) apps, how the task scheduler will know to use the faster CCD2 for the main threads to benefit from speed and not caching? From what I understood parking is only for games and parks CCD2 cores.
There's something called CPPC "preferred cores", which tells Windows which cores are the highest performing. In normal use, the second CCD is used more often.
@@Aashishkebab CPPC doesn't work across CCDs, it is working inside a CCD, all those cores ratings are specific per CCD when they are created and evaluated at the factory level. But maybe they did something like instructing OS to use CCD2 as the main CCD with that driver.
@@andreiga76 I'm guessing you don't have one. If you look in BIOS with the latest update, you can specify the CPPC to prefer cache, frequency, or driver/auto.
Couldn't the scheduler automatically decide whether a process is best served by faster cores or more cache by looking at the amount of cache misses? It could start on the faster cores but if it observes a lot of cache misses it could move it to the cores with more cache. If then it does not improve (e.g. too much data even for the bigger cache) it could move back?
Allot of this was way too scientific for me but I get the gist of it. So basically this is the best cpu overall is my take :) I have actually ordered it. Was 50 / 50 between this or non 3D or 13900k. I think list power in combination of AM5 being more future proof won me over. Plus I don't care about 1080p and ALL these cpus do great. Part 100 FPS won't matter anyone really.
Lots of Easy Anti Cheat titles do not let you set affinity. Some do if you set affinity for the start_protected_game.exe using process lasso with 0 delay, but some won't and if the windows game bar doesn't work you're kind of screwed and have to disable the cores.
I was wondering something similar, will I see any benefit while using my 7950x CPU ? I've already installed the latest driver package from AMD that has these functions.
compiles are more cores more better. compilers themselves seem not to benefit for huge caches up to and including projects as large as openEmbedded... which is quite large.
@@Level1Techs so if running and testing multiple VM's at the same time would also benefit from more active cores too ? Would I still see any benefits from changing my power plan to balance from Performance using the 7950x or is that going to be a case by case basis ? Seems like it shows better gains from apps like gaming vs. production type of work.
For the chart at 17:40 where the clocks/prices are shown, one has to remember that for the 7900X3D and 7950X3D, those higher clocks are for the OTHER CCD that doesn't have Vcache. NOTHING you can do will change the fact that the CCD with Vcache has a substrate sitting above the cores and it's harder to dissipate heat from it and that's not made perfectly clear here, or anywhere else frankly when I've watched videos on these 3D parts. So yes of COURSE the 7800X3D shows much slower clocks, but if AMD listed the specs properly for the 7900X3D and the 7950X3D there would actually be a listing for base and boost clocks for each CCD. The Vcache CCDs aren't running faster JUST BECAUSE there's another CCD. It CAN'T. That doesn't change the fact that a layer thrown on top of the Vcache CCD creates a heat issue on THAT CCD and nothing can change that. It's going to run as fast as it can while keeping temps under the listed spec, and the same will be true for the 7800X3D. So, the CCDs that have Vcache on ALL THREE PARTS are running about the same speed, and having another CCD on the CPU doesn't change anything. I keep hearing this usage of "offloading" but it doesn't apply. You can't "offload" heat except UP through the substrate to the cooler, and HEAT is the limiter.
First off love this comment, so few are mentioning this. Second i agree AMD really pulled a fast one not listing clock speed per CCD, I was hoping they had fixed the extra cache slowing max clocks, but the frequency on the 7800X3D killed that dream. "The Vcache CCDs aren't running faster JUST BECAUSE there's another CCD. It CAN'T."
@@chainingsolid I still have yet to see anyone having background apps run naturally on the non vcache side without an affinity mask. I dont think they do, unless Ts thinks vcache cores are mostly loaded... but thats a bad thing, because that means you continually load up your game cores with background shit until it gets bad.
The correct solution is an option to just put a program on a core and leave it there. not to disable a core just because the active process works better with a lot of cache. If you have 2 cores with different cache, you put the game on the one with lots of cache and the OS on the other core, not force the OS and the game to fight over the one core with extra cache that the OS doesn't even need. I haven't seen anyone doing tests to see if that's what's being done, but that's what parked core does on older CPUs and its dumb tricks for benchmarks.
would be great to understand how this would effect game streaming. can you have the benefit of the 'game mode' for playing the game and instead of parking the other cores could you use them for video encoding ?
IMHO: the confusion regarding V Cache on 16 core or higher CPUs has to do with how the chip itself is perceived. V Cache has been portrayed as a "gaming only feature" that gives the assumption that when any game is loaded, the game only runs on the V Cache cores and everything else is to shunt to the non V Cache cores. Recently an acquaintance of mine said she doesn't bother with any of the 3D V Cache chips because 5 to 10fps isn't worth it. While she went on to talk about the human eye versus FPS: in her mind, anything over 144fps is a waste as the human eye can't keep up with motions on a screen above 60fps. While I have no experience with any of the X3D CPUs: there are many who advocate using the non X3D CPUs due to the scheduling issues. When a CPU is advertised as a drop in upgrade: that is what the user expects. However, as other streamers have commented: there is a 47 page guide on how to get certain higher core count X3D CPUs to run correctly or the "way they're supposed to run." From a computer science stand point: core parking makes sense as a "soft disable." However, users who expect 16 core X3D CPU to work a certain way are often disheartened because of have to "tweak" this or that to get it to hopefully run the way it is supposed to. In many respects: the CPU cores with 3D V Cache are also liken to Intel's E Cores as they cut run as fast and a non 3D V Cacje CCD. In a way: AMD CPUs with 3D V Cache oprrate backwards from Intel's E-Core/P-Core setup. From personal experience with my 7700X processor: the games I play run really good for not having V Cache. At the time also and with what was being said about the X3D CPUs: the 7700X made more sense for what I do. IMy PC does run games very well. The productivity stuff I do runs very well. The ability to play a game and have audio or video encoding running in the background doesn't cause a perceived performance loss. In the end though: all AMD users (IMHO) need to evaluate their purchases based on research versus what AMD sugar-coated with the X3D CPUs. The reason why 8 core CPUs with V Cache are hailed as the "king of Gaming CPUs" is that it is drop in and go with no "user intervention" apart from having the lastest chipset drivers for their respective motherboards. Yes, many AMD users don't trust AMD or Microsoft as bar as allowing software to dictate how their CPU is supposed to handle itself. Yes, there has been talk that AMD is working on ways to decrease the latency between multiple CCDs (maybe Ryzen 9000 will give a hint of that direction). True, AMD's language can be easily misinterpreted (precursor boost vs precision boost overdrive is just one example) that users get caught up in the hype without actually knowing what it is they are getting. Like I said: I have no experience with any of the X3D CPUs; but, if I were to go that route: I'd only stick to a single CCD chip are is kinda has no choice but to do what it is advertised as doing without user intervention. I am reminded of an exercise in my basic networking class in college where the student's were to spec out a file server. One student advocated running a server (this was back during token ring networks) with 4 486 CPUs. The instructor laughed as and "What is Windows NT or Novell actually going to do with 4 processors versus just one?" Given how modern CPUs are: there is still this disconnect between the CPU and the expectations of what it can deliver. One thing that hasn't been talked about is multiple games running simultaneously. Yes, TH-cam is full of benchmarks of X game running on X CPU. However, what if your play style has you running multiple games at the same time? As one streamer commented: Gamebar should run applications that are not games on the non 3D V Cache CCD and keep the CCD with B Cache as "parked" until a listed game is loaded. Yet, that would still be same issue but in reverse. Perhaps AMD needs to design CPUs that function more like GPUs. I don't think I've ever seen someone talk about GPUs with multiple chipsets having latency issues between the GPU processing unit and the zRAM on the GPU itself. Yes, GPUs are only responsible for a set type of tasks whereas a CPU has to manage everything within itself and the PC as a whole. Perhaps AMD should have focused on the latency between CCDs first before making an unbalanced CPU. True, CPUs have come a long way from the single digit megahertz days. However and as I've heard a few others say: AMD should have just increased the caches in all CCDs versus just doing it on one CCD. That is probably why so many advocate for the non 3D V Cache cpus eventhough there is sometimes a mild drop in FPS. Now, I will say this from observation alone and seeing it with my own eyes. A friend of mine stayed with for a few days. Him and I have almost identical systems apart from the CPU (everything else is identical right down to the case). While his system has the 7800X3D: he noted that the game looked "clearer and cleaner" even though the game was running 15fps lower that his 7800X3D (my system has the 7700X in it). While CPU and GPU manufacturers what an do push X number of FPS in a game: sometimes quality is more important that quantity. Yes, competitive games/gamers want max FPS all the time. However, how many people can perceive 144 or higher FPS? There has become this imbalance between what a CPU should be "good at." Back in the 486 days or even 386 days: those were "jack of all trades and master of all trades." CPUs these days seem to be a "jack of all trades and a master of none." Back then: the OS or program would issue instructions to the CPU and the CPU would comply. A much simpler time versus now having applications that was this or that before executing the instructions given to the CPU. Maybe that is why some older CPUs still perform as good as they do today: they run the instructions given regardless of cache type or size. Yes, a 386 or 486 CPU ran differently back then versus today's CPUs. Perhaps AMD and Intel both are guilty of making CPUs doing tasks that should be handled by hardware and not software. Sound Blaster days come to mind. A game or application requires a sound to be played. The CPU would say "nope" and send those instructions to the Sound Blaster card to generate the sound. Perhaps, going back to those simpler days would see a CPU that is the "jack of all trades and master of all trades." No one CPU is a master of everything: it' is either this utilization or that utilization. .. Gaming versus Productivity versus General PC use. No one processor from AMD or Intel is perfect in this respect. AMD nor Intel have CPUs now that do everything perfectly. This X3D CPU runs games exceptionally good; but, falls behind on certain non-gaming tasks. This non-X3D CPU runs non-gaming tasks exceptionally good; but, falls behind on gaming tasks. Perhaps that is why non-C3D CPUls are advocated for more than their X3D parts. While not completely prefect for gaming and not completely perfect for non-gaming work loads: they are still exceptionally good at doing both without any fuss or additional steps or programs needed. In the end though: the concept of the X3D CPUs is great when they run as intended. However, their performance is more volatile when compared to their non X3D counterparts: especially with 16+ core CPUs. That is one advantage of the non X3D CPUs. A non-X3D CPU will run the same thing the same way almost every time with a very small margin of error. An X3D chip has an element of OS confusion as there is always a chance a game will run on a CCD without the 3D V Cache and perform completely differently that what is considered "normal." Alas, I am no CPU engineer; bitut, common 18:46 sense has to play a part as well in viewing CPUs these days. CPUs need to become more streamlined as they grow in capabilities. There is too much focus on just one aspect of what a CPU should do instead of making a CPU that can do it all without temporarily nerfing the CPU just to make it work a particular way. It' is like some automobiles that have 8 cylinder motors that cut to only 4 cylinders when it is going down the road. Those too have a lag if more power is needed. In this case as it relates to CPUs: perhaps the cores should be listed as idle versus parked. Parked gives the connotation that the core is stopped and unavailable for work. Idle would say: I'm not busy; but, I'm ready to go if needed. That is an assurance afforded by the non X3D CPUs:. When it comes to gaming: developers have (IMHO) been given a very long leash in how they develop a game versus what CPU it runs on. I can't count thr number of times that I've heard someone say that game X or program X "favors" one CPU manufacturer over another. This gives the impression that favorites are been implemented at the software level instead of the hardware level. When it comes to gaming especially: the GPU should be the determining factor: not the CPU. X3D CPUs above the 7800X3D are (as Mr. Gump would say) are like a box of chocolates: you don't really know for sure what you're going to get from one box to the next. A 16 core X3D CPU could run fame X like it should for days, weeks, or even months; but, there is always a risk that the OS or Gamebar will make a mistake and cause game X to underperform or possibly perform better in some cases. Personally: I'll stick with just the non-X3D V Cache CPUs as I already know that they will run the same way day in and day out.
Is now the time to upgrade to Windows 11? I have been holding out on Win10 for many reasons. Including it seemed like in the beginning AM5 was struggling with the Win11 scheduler. What do the benchmarks say between 10 and 11?
I have a question. I play MSFS20 in VR almost exclusively, but I also need to use addons like planes as well as selected airports, also apps such as VPilot and other things which add to my experience. These addons run while MSFS is running. It would all come under ‘gaming’ in my mind, but do these apps (outside the core MSFS game) use the same ‘gaming’ part of the CPU or the ‘productivity’ part? (5800x3d/ 4090). I know that the 5800x3d is a gaming cpu, but it surely can do productivity work if needed, albeit far less efficiently than other CPUs designed for this type of work.
I assume this is the same for the 7900X3D as well? I built a system for someone with that CPU and he wanted to run Linux (arch fork). I was concerned that it wouldn't be updated to properly use the correct cores with either more frequency or cache. It runs fine and works well though. I am not sure if it's fully optimized yet, I can install and use Linux but I am far from anything more than a novice.
I see you use process lasso, im wondering how that app handles the new CPU, I use the pro version with current 7700X and it's perfect, but about to update my system with an X3D CPU.
I love wendell's computing science videos explaining how stuff works and asking questions. Might not be as popular as performance reviews on x but really appreciate it dude!
These are SO much better than performance videos. There’s hundreds of performance review channels… so few do these explanation videos
We are talking about The All-knowing Wendell
@@tst6735 THE neuro-atypical Wendell
If he doesn't type out his points in notepad on screen i don't believe him TBH
Exactly. And where do we know reviews from x tubers turn to when they tech support? Ding, ding, ding! To Wendell.
Thanks for going over this stuff, I figured that's how it behaves but nobody else was talking about how the "core parking" worked.
I still dont know at what rate or when other tasks are pushed to the chiplet.
Core parking conjures up a negative feeling for some gamers. mostly because Bulldozer core parking would make an 8 core CPU appear to run better or worse in a super case by case way. Toss into the mix the sheer number of Windows 7 and 8 pirates who never updated to the Bulldozer friendly new Scheduler so their default assumption is its still broken / breaking their games.
@@jetrpg22 What wendel seems to like about AMDs current implementation, is that no tasks are ever pushed to one chiplet or another. All games/apps are free to use the "entire" system if they so choose. However it's windows and the scheduler (and I guess the chipset drivers?) that will determine how "big" the entire system is, ie. how many cores to keep parked. The metrics right now seem to be if an app uses less than 8cores (1CCD) then it will keep the other CCD entirely parked. Which CCD is parked is set via gamebar and perhaps the system setting of prefer cache vs frequency. However once an app asks for more than 8 cores, then the system will be forced to wake up the other CCD and both will be available. But that is a very user hands off approach (compared to say affinity) so if you wanted to run two separate apps, each on their own CCD, you can't really do that unless one of the apps launches more than 16threads. And even then, without affinity, once the entire system (both ccds) are awake/unparked, you have no control over what app runs where and when. So in some ways it's less elegant and more crude. But surprisingly effective in a large majority of scenarios.
Now this is some gourmet content.
Another amazing video, great information explained so well that anyone can understand it.
Just love this channel.
My first thought was how good would that extra L3 be for factorio, this is really interesting thank you!
Factorio absolutely VORES that cache
@@GewelReal
Vore me daddy
Hardware unboxed tends to include factorio in there cpu reviews. Check there review.
@@GewelRealIt's so good it almost looks like bad data (were it not a repeatable result~) lol
Great explanation! This also clearly establishes why there is a difference in price and value between the 3 models. It is fortunate that if you are primarily a gamer and don't need to compile a lot of code or do 3d work, then you can get a great product for just that. If you want both, with the additional performance per watt, the 7950x3d does a really good job at delivering a solution that can be versatile, and you can enjoy a smaller energy bill without too much inconvenience compared to the non 3D.
I think some reviewers misunderstands this usecase for 7950x3d because the parking concept was not well understood by all. And while it loses in multi core benchmarks to the non 3d, it is easier to cool -> lower fan noise -> happy wife. And saving 50-100$ on the energy bill per year is also a benefit in the long run (double for some power hungry Intel CPUs).
If the price rises again in Europe, it could end up saving its own cost on energy by one year.
Its still not. This didnt answered or show really how or when unparking is occurring for non-game apps in the background.
These deep dives make the difference. Thanks Wendell!
Thank you so much for breaking that down for us. That helps solidify my purchase of the 7950 X3 d
AMD esta triste jajaja y Intel es mejor bestia brutal!!
Why have you chosen it over a regular 7950X or the upcoming 7800X3D?
@@Anankin12 I like the energy efficiency that it has as well as the extra cores. I'm upgrading from a 5900X
Good luck getting the non 3d cores to actually park. i have done it “by the book” and have been totally unsuccessful. Frustrating.
@@techluvin7691Change your BIOS CPPC Preferred Core setting to Driver and verify you have the AMD V Cache driver installed. should work fine.
So, what I got out of this is that Windows is capable of Parkour.
Thanks for sharing. I actually did learn a lot from the content!
Windows sometimes drives me up the wall, motivated parkour.
❤ you Wendell. Frantically awaiting UPS to drop off my 7950X3D today.
@@MusicChann3l I got mine. Smooth sailing from here.
Thanks for the explanation. Waiting for stock back and this will help as I do love tweaking. I was thinking about doing a misaligned affinity mask to see the l3 hop latency but you've gone and tested this already. Which is why having third party reviews is so great for the industry. So excited for this level of power soon. I'm in the rare class of developers that needs gaming and compilation etc. performance.
this is the kind of information i've been wanting about these new CPUs and how they work. now I can think about how I will be using them for my own use case and choose an appropriate one for myself, thank you wendell
AMD has mentioned the huge work they have done with Microsoft over the past 10 months to optimize the new cpus for windows 11. It has more to do with Windows than it does with the architecture of ZEN
Yes and no. Intel and Microsoft had to work together to make the scheduler work correctly for the hybrid parts. And frankly there are still issues or you'd be able to have AVX-512 enabled for 12th and 13th gen Intel parts.
Microsoft isn't under any obligation to optimize performance for AMD's Vcache parts. And this is where AMD has in the past fallen behind Intel and Nvidia. Nvidia for instance has more SOFTWARE ENGINEERS than AMD has in their entire Radeon division. This is also why Nvidia GPUs tend to work better, with less problems. The same has been true with the Intel vs. AMD situation. Intel has more software engineers than AMD. It's why THEY could come out with a 1st gen GPU that uses AI for XeSS.
So this is also on AMD, AMD IS getting better at getting more software engineers on their teams because they KNOW these parts are becoming ever more complex and you can't keep pushing these problems off on other companies to deal with.
Microsoft isn't a black box to these hardware companies. They can get access to the software they need to do the development they need to do, and Microsoft can HELP, but it's not on Microsoft to do all this work.
@@johndoh5182 Yep, this really is a almost bulldozer level issue. I mean its not that bad because it works, but the solution so far seems so crude. And maybe i am wrong but i keep seeing people saying well if need it will un park them.... But i have not seen someone play a game start up YT, record, and something else and what cores are doing said work. If they are using the vcache cores, thats bad. I guess the good part is windows and system firmware updates could fix this or expand use later but still in its current form its just not good. I WANTED to go AMD for the 2nd time... Atm i will go intel because their solution works (maybe a 7900x if the price tanks with the x3d). The 7800x3d is a different story so ill wait for it and i do like the idea of it (even if id rather prefer more cores, i like cores, because when i bought my bother's 3xxx and my 4790k) people were saying really single core maybe duo is all you need. But today my brother is still playing modern wow on his 3xxx and my niece is using my 4790k (they are oc'ed and all that jazz). But still 8 cores is enough but even then 6-10 years it will suffer vs a 12+ core machine.
@@johndoh5182 That AVX512 problem came from software trying to use all cores for AVX-512. The E-cores lack of AVX512 in Alderlake was the main problem. It had nothing to do with the scheduler. That was a software problem. No the reason why Nvidia has better software is having a bigger market makes it easy to get volume of different system setups. size of software department has nothing to do with talent inside of it either, it's still about volume.
@@johndoh5182 Odd, I have had more problems with Nvidia software than AMD. Those engineers were often employed to do dumb things like add massive tesselation to games to win on benchmarks.
When you're running a mix of softare, some using older APIs and continue using older hardware Nvidia are a massive PITA, their effort is put into the latest releases and Jensen's goals to rule the metaverse. I've lost so much time due to bad practices by Nvidia, which are a matter of deliberate policy. An example is features missing from the WHQL drivers, so when support is dropped by them, you lose functionality provided in Experience.
Another example trick Nvidia used was to punt scheduling from the GPU onto the unused cores of CPUs, that allowed the GPU to borrow power budget from the CPU and score higher. The trouble with that is, it is troublesome for async compute and when that CPU roundtrips cannot match hardware scheduling, then you have games that a heapened to do stream decompression on a heap of cores while the console the game was developed on had a decompression engine built into its disk management silicon.
Of course if you're into the latest game releases, your best option is Sony Playstation or Xbox , but if you are on PC then Nvidia are better tested on release because they're the market leader. Want to carry on using that 1080ti or play old classic games then you may well discover that bitrot is a characteristic feature of Nvidia, you MIGHT find time consuming work rounds for the API they aren't interested in anymore.
The scheduler is MS's, but MS need strategies to deal with high core count CPUs, otherwise they'll be relegated to containers running in virtual machines, like in many server deployments.
There was always a problem of threads core bouncing, you really don't want to evict a thread from the primed L1 & L2 cache they're using, but Windows appeared to like mixing it up, while the high boost ST is reasonable to migrate to a cooler 2nd core, you really don't want to do that when thermals are not constraining things.
Fundamentally having CCX's with different properties and cache sizes is a coming problem that MS needs answers for. I can imagine having 4 power efficient but slower cores intended for OS tasks and assisting drivers, integrated into the IOD. They'd be like the ARM little cores in phones or the original design intent of Intel's Atom cores, where not waking up main cores, but doing light processing at lower frequency could benefit the overal design.
In future something like the Zen2 compact 4 CCX block, might have a compact variant on a cheaper process node, than the main flagship cores intended to run applications and heavier tasks.
Hes so right, parking cores brings down thermal load alot , and once software gets perfect it will run as least amount of cores needed and see if it can just keep everything at closer hardware because ITS FASTER if it can just do the job one one core instantly. its getting pretty good now.
One missing concept was, IMHO, mentioning "Context Change", core/threads switching. Not just the caching advantage that 3D brings. Most single threaded processes really benefit from not being preemptively "core switched" for no reason. That is when the cache between the cores really comes into play, especially regarding high performance single threading gaming loads.
I might have missed something and glad to learn something new, but IMHO a parked core is something that the scheduler can pull in if it is really needed.
The issue i am having is, what trips this. So far i haven't seen a video of someone running chrome with a yt video, recording, and say discord, while playing a game and looking at app and core behavior. If all those run on vcache cores, until its like nearing 80-90% use, thats not good. You are going to have negative effect from this... still any current issues could be improved via windows and other updates so there is a lot of room for improvement.
@@jetrpg22 I don't know if the following is the case (especially given how kludgy the current Windows workarounds are), but the information that really needs to be collected (if it even can be, since collecting this info is very nontrivial) in order to make good scheduling decisions about mapping threads to V-Cache cores versus ordinary cores (including the important case you mention, that is, thinking twice before switching a thread to a different core complex for no justifiable reason) - includes how big the memory space is that is referenced by the set of threads that comprise each process, relative to the putative L2+L3 cache size is (for a given proposed mapping), along with the level of locality of memory reference (as opposed to its opposite namely scatteredness of memory reference), as well as the CPU-usage intensity cumulative over the threads comprising each process (i.e., do these threads cumulatively want to consume all available CPU cycles), plus the CPU-usage intensity of each of the latter threads comprising a given process. Even if you had all the above info at your fingertips, you (in the role of a scheduler) would have a hard time making optimal scheduling decisions. An algorithm to do so is very non-trivial (and I have some historical expertise in this area from which to compare, having contributed to a classical scheduler that was the best of its era, and knowing how much simpler the requirements were back then versus the mess of a scheduling problem we face now, given that Intel has contributed one axis of hybridity to the scheduling problem via its Apple-like E-cores and P-cores, while AMD has just now contributed an orthogonal axis of hybridity to the scheduling problem via its normal-cache-cores versus big-cache-cores).
Short of a beautiful clean scheduling algorithm for what we have to deal with these days, one can at least state some general rules of thumb. First, per @jetrpg22, don't switch a thread from a normal-cache-core to a big-cache-core for no good reason - you would do so only for the betterment of all threads. Currently, the latter rule in the context of X3D CPUs is equivalent to saying don't switch to a different core complex for no good reason; but this could be even more complicated if a future CPU has 3 core complexes with a mix of 3D V-Cache or not. Second, at a high level, your goal is to give the big-cache-cores (on a core complex containing 3D V-Cache) to those threads of those processes that would benefit the most from having the massive amount of L2+L3 cache available to them. This leads to two obvious cases and one non-obvious contra-case. In a thread within a process that has a large amount of memory being accessed, and furthermore this thread is itself using a large chunk of available CPU, it is beneficial to put that thread on a big-cache core residing in the core complex that has 3D V-Cache. Especially if there's sufficient locality of memory reference, the massive L3 cache (plus sizable L2 cache) will hopefully make it such that most memory references are L3 cache hits, with a fairly large chunk of those actually being L2 cache hits, and only in the worst case do you have to load from memory. In DDR5 memory that's a huge speedup - a cache hit versus a memory access. It could as much as triple the effective CPU speed over memory-only accesses. The second case is not so much when memory is huge, but rather when memory accessed is sufficiently small such that all memory fits into L3 cache. That's even better actually. You might also say that you should put the sister threads of that process onto the same core complex - if possible. But counter-intuitively, in some cases - namely sister threads that, say, only run one millisecond out of every second - might best be put elsewhere (to make room for threads from some other process) since they execute so few instructions that that they macht nichs). In general, the rule is to keep sister threads together on the same core complex if you can, but don't worry about that for threads that are mostly idle - you're generally better off putting those elsewhere if their slot can be producticely used in some thread in some other CPU-intense process. You can see how complex this scheduling decision process "should be" if done right. And ideally, you want to know what the cache hit rate would be for each thread under the two alternate scenarios of putting the thread on one of the big-cache cores vs one of the normal-cache cores - along with how much CPU that thread uses in general. For dealing with the hybridity of Intel big.LITTLE cores, Intel has added useful data collection features to help out the Windows scheduler. I don't know whether AMD has (or has not) added the required data collection features (such as memory sizes and cache hit rates and whether a thread is a big CPU consumer or not). If they haven't added these features, they should do so in their next iteration of 3D V-Cache CPUs.
I didn't know any of this thank you very much Wendell really informative.
Great explanation! Good call on this video. The core parking thing was baffling me too
Thank you for the explanation and great to see someone mention niche games like Factorio.
Well, I guess it's like they always say, why park cores when you can parkour.
Welp, there it is. All I could hear the entire video through was parkour.
Hi Wendell, the new AMD chipset drivers (5.08.02.027) have drastically changed the way my 7950X3D works. First: performance in games is massively boosted and Second: cores don't park anymore. Would love to see an indepth look at these new drivers. With the old drivers Metro Exodus Enhanced was a stuttery mess and CP2077 had issues during some cutscenes. Now, everything is smooth as butter!
I have the same chipset driver version but my 7950X3D cores are still parking properly.
@BenjaminRay Yeah, initially, it didn't seem like they were parking in the same behavior, but after several runs it seems they are.
This is awesome Wendell. Any idea what the default Linux scheduler behavior is? Does Linux park cores for games or can is accurately identify if a game is running? Or will we have to go into BIOS to change the settings to “prefer cache” or “prefer frequency” based on what we’re doing?
there is already a video on the level1linux channel where he discusses that the latest beta kernel already works mostly fine with the 7xx0X3D CPUs.
@@thetj8243 and Benjamin Lynch, this makes sense to me - I would expect the Linux engineers to probably have better abilities to properly address the scheduling issues here, and the Microsoft engineers to probably be somewhat less facile at this task. This is on the (almost certainly true) assumption that Dave Cutler no longer leads this team at Microsoft. Where's Dave when you really need him (lol)?
Since lots of people play games on Windows, it might be that the existing solution approaches in Windows (as so excellently explained by Wendell) are sufficient to solve the complex Windows scheduling issues presented by what amounts to the hybrid-but-different-type-of-hybrid-than-Intel architecture of the AMD 7900X3D and 7950X3D chips, at least on systems mostly used for gaming. The advantage that gamers have (that implies that the current "partial Windows solution to the X3D scheduling problem" is probably good enough for gamers) is that when you're actually playing a game currently, your focus is pretty much completely on the game. Sure there's a few system kernel threads that have to run at the same time, but you're not likely to be streaming TH-cam and TikTok videos at the same time. Admittedly, the stuff Wendell describes here is a gigantic kludge. But it's a kludge that is likely to kinda-sorta work for gamers at least. On the other hand, if you're not gaming but running a host of simultaneous productivity applications that all need lots of threads and lots of memory, I'm betting that the beta Linux scheduler might do a better job.
For those folks, gamers or otherwise, who are committed (or stuck) using Windows, and who would like to use an X3D AMD CPU in their next system build, I'd recommend the following decision matrix. If money is no object, go ahead and buy the 7950X3D. By saying money is no object, what you're really saying is that I can afford the delta cost of a 7950X3D over a 7900X3d to get the delta +4 (=16) cores over the 7900X3D part just for those cases where I really need the extra CPU horsepower, and especially for those cases where I have 8 cores (16 threads) using lots of memory that can benefit from the 3D V-Cache to increase the cache hit rate markedly, while having say 8 other cores (16 threads) worth of work that can comfortably run on the other core complex without the 3D V-Cache. Hopefully the core parking and other crazy features will make pretty good scheduling decisions for the borderline cases. This scenario it tantamount to buying the Chevy Suburban because occasionally you need to haul around your kids basketball team, even though most of the time you're just driving a family of four.
For the person who wants a very high-end system but chokes on the extra delta cost of a 7950X3D over the 7900X3D, consider buying the 7900X3D. The advantage of that CPU part is that whenever it does have to park cores (cuz it judges it better to do so, which may or may not have been a good decision, but hey, Windows is at least trying its best), it's only "wasting" (by parking) a max of 4 cores typically (8 threads). In other words, the only money you wasted (and only in those rare scenarios where such a waste is actually most likely to yield better performance on that task mix) is the delta cost of the 7900X3D over the cost of the 7800X3D (which has 8 cores instead of 12 cores).
This leaves the option of buying the 7800X3D to many buyers that want to save a substantial couple hundred dollars on the CPU part of their system cost. My guess is that about half or more of customers fall into this bucket. For most gamers in particular, they might be better off spending that extra cash on a better GPU. A 7800X3D plus a 4090 or a 7900XT is probably going to game better than a 7900X3D with a 4080, for instance. Clearly, the reason AMD is delaying the release of the 7800X3D a couple months just to incentivize those customers that are thinking seriously about a 7900X3D or a 7950X3D to pull the trigger on that rather than waiting for the 7800X3D to arrive. Finally, for the buyer that wants 3D V-Cache on the cheap, with no wait, there's always the 5800X3D with the advantage of a cheaper motherboard and cheaper memory.
Thanks for delving into these tests. Can I ask you to try running two instances of a multiboxer game that IS catered for by the game bar, if you've got a second screen available, and see whether it is a simple matter to specify each instance to inhabit a different CCD?
In games such as e.g. Eve online, I ran my main character screen on higher graphical settings than my alt screen, hence the thought that on a 7950x3d I'd want to run main toon on the vcache cores and the alt on the other cores.
My old EO box was a crossfire setup so dual gpus helped out a lot in making this work.
All we need now is a databasing website that lists games and whether that game prefers frequency or cache
looking forward to seeing the benchmarks between the all 3 of the new x3d chips
yeah im super curious how the 7900x3d performs since it has only 6 cores of 3d cache. Really disappointed that amd didnt seed any to reviewers for launch day coverage
@@TheTechhX According to some of the reviews I watched, AMD wasn't sampling anything other than the 7950X3D to reviewers, meaning the reviewers will have to buy the part themselves now that it is released to test it. Reviews should come in a couple of days once they've had time to benchmark everything.
I can only speculate why AMD chose to only sample the 7950X3D to reviewers; best case scenario is they wanted to put their best foot forward and wanted to maintain as high consumer stock as they could on both parts while doing so. It could also be that the 7900X3D performs almost identically to the 7950X3D in most workloads (but certainly not production/extremely core heavy workloads) and they were concerned that the reviews would cause sales of the 7950X3D to be cannibalized by their lower margin part. This is what I'm betting on, as I've already ordered a 7900X3D to finish up my newest computer build once it and the GPU arrive.
Worst case scenario is the 7900X3D has serious performance problems compared to the 7950X3D due to the four less threads on the CCD with the extra L3 cache, requiring most workloads to utilize both at the same time. Considering how this is a problem with every multi-CCD Zen processor (latency communicating between cores) and most workloads have the 7900X similar to the 7950X, I'm doubtful that this is the reason, but there's always the possibility that the heterogeneous core design causes even more disparity between utilizing one CCD versus both in any particular workload... Again, this seems unlikely given the multicore performance of the 7950X3D versus the 7950X, but there's always the possibility of Murphy's Law coming into effect and that the lack of enough threads on the CCD with the L3 cache becomes an issue for the cheaper part.
Just came across your videos and wow, the way you explain things make it very easy to understand. Instant sub!
I love the architecture of this CPU. I like the ability to manage processes across different cores and CCDs to get the best of both worlds.
Loved this video. This has finally clarified core parking and affinity for me!
Core parking also loves to core park wrong cores, and also loves to switch off cores during actual use of that core.
It's been an issue since 11+ years ago, and I always remove core parking
On AMD it may work, but turning cores of and on to save power (how it does it now) actually creates micro-stutter to begin with
Process lasso can easily fix that issue by putting cores that are needed to use, and letting Windows 11 scheduler to take a smoke break, like a long break, with lots of cigarettes
finally someone explained how it all works, other reviewers don't even bother (or don't even know)
Its more like they dont explain it as everyone would fall asleep
@@malborboss xD
This is an amazing video! Love deep dives into your knowledge
THANK YOU for the explainer. The oversimplification / mischaracterization of core parking by well intentioned people has been driving me crazy.
Wendell’s brain is on another freaking level (no pun intended). My God, this channel is beyond underrated.
I think Microsoft has a work to do. At least to give us ability to permanently set affinity mask to specific executable or shortcut (yes, I know about start.exe but this is not reasonable way to do stuff). Having possibility to set core preference for a process would be awesome. I think good time to introduce it was when first Threadrippers were available.
Same thing for a user friendly GPU override.
More Level1Linux pls Wendel
This video is exactly what I was asking
For a gaming PC would it make sense to direct the Windows OS to use the frequency cores while the game uses the cache cores to minimize competition between the OS and Game?
Are there any special considerations like certain windows services which need to cross talk with the game services and should be run on the same chip to optimize communications?
I am surprised no one is talking about this potential for optimization.
Hopefully AMD will have the OS/Game contention issue resolved automatically one day, but in the meantime, is there an obvious approach to implementation?
Thank you, Wendell, for this teaching.
I saw an AIO with tubes up. If GCN find out we'll say it was a "validation" experiment. Sssssssh! :p
This all feels like beta testing for Epyc Bergamo.
AFAIK Bergamo will be simply lots and lots of dense zen4c cores. No heterogeneous mixing there that need to be handled, so I don't see how this applies?! I'd also guess the primary OS the main target customer base will use with those parts is not desktop Windows.
Here’s the thing I don’t yet understand. If Windows 11 parks the frequency cores because a game is detected in the foreground, and then a background process (which will presumably be running on the cache CCD along with the game) reaches its threshold for “too many resources to share available cache cores”, it will wake the parked frequency cores. But will that also effectively manage affinity? Does it unpark _all_ of the frequency cores, or just the _n_ cores necessary for that process alone, shifting it to the frequency CCD and leaving the game threads on the cache CCD? I fear you could end up in a situation where some game threads get moved to the newly-unlocked frequency cores, introducing stutters and lowering performance simply because a background task spun up.
As far as I can tell, AMD is not using affinity masking for their heterogenous behavior on the 79XXx3D, perhaps to avoid a situation like the Alder Lake release where anti-cheat engines freaked out over Intel’s Thread Director changing the process’ affinity. From my limited exposure to the technical details, it seems AMD instead instructs the CPU to reorder the list of “preferred cores” that it reports to the Windows scheduler, in order to place either the cache cores or the performance cores first among them. But it still seems like setting affinity manually to prevent cache-hungry games from _ever_ spreading onto the frequency cores and incurring cross-die latency would be wise, even with core parking enabled. Right?
I believe you are correct with everything except for one important thing. AMD's software only changes which cores get parked with the selector. So, it would be interesting to see what would happen when the setting is set to park the cache cores, but the game has a manual affinity set to only the cache cores. I have a hypothesis that the game would suffer from major stuttering.
@@mattparker7568 AMD's software doesn't change which cores get parked. It changes from the Windows default of "no cores can park" to "up to half of cores can park". Windows always parks the higher numbered cores first, and AMD numbers the cores from 0-7 on the cache CCD and 8-15 on the frequency CCD.
See the slides from AMD in Techpowerup's review, and the Microsoft documentation on Processor Power Management Options.
@@Vegemeister1 Thank you for the information.
100% it will move your games to the non vcache cores currently. If it was working otherwise, they wouldnt need to park at all. This is a dumb solution where windows/xboxbar sees a game and says, "game". Then it parks the non vcache CCd (or vcache CCD if it sees a game that it knows likes hz). If it was smart like intels you could just keep all of them unparked and push most non game process to the desired CCD (like intel typically does with E cores). More so you have to shift these processes and shifting cores is bad for lows (you get jumps).
Very informative video, really thanks, taught me a lot about windows scheduler
This is a very good explanation, but I'm left with some questions. It appears to me that the ideal process scheduling would have processes that get a benefit from the V-Cache (ie games) running on CCD0, and all other processes running on CCD1 (the reality is probably more nuanced than this). That being said, I'm left feeling lost as to the level of tinkering I should do on my 7950X3D. As I see it, there are basically 2 approaches:
1. I can trust the Windows Scheduler + Game Bar + Chipset driver combo that AMD and Microsoft have worked to optimize
2. Manually assigning different core affinities. ie setting processes to default to frequency cores (CCD1) in BIOS, and then manually assigning games to CCD0 through process lasso or something similar.
I'm a hopeless tinkerer, so I'm very tempted to run option 2. But I am wondering, which do you think is ideal?
pauls hardware and you are the only ones that have talked about it properly. I assumed it worked this way but all the others I've seen gave me the impression that the non 3d ccd was parked for every game.
One of them is parked, okay he didnt say otherwise. That is what the gamebar does. Now if you go to a hiogh load on those cores it will unpark and move process to the other CCD, that not a great thing. It better than not having any more cpu left. But it also means you are now moving processes from the CCD you want to the one you dont, and they can be game processes.
@@jetrpg22 you are part of the problem. you do not understand what I said or what he said and you're just spewing bs
Couldn't get the 7950x3d but managed to get a 7900x3d from AMD today. Very useful video, thanks.
Same waiting mine tomorriw
A directly indirect response to Frame Chasers who is completely unaware of what's going on. Can we next talk about Sub-NUMA clustering in enterprise? 😊
Frame Chasers changes his opinions CONSTANTLY, in direct contradiction of his previous videos
Perfect lecture
As always
Holy shit 😂 i didn't realize some people actually went and turned it off and then went and recommended others to do it.
Geniuses all of them
I’m excited for Wendell to teach me about Parkour!
lol
Thank you so much Wendell. Picking a new CPU has been a hassle for me. I've been going back and forth between v cache or no v cache. I want that gaming performance. So the 7800x3d makes sense. However, I also do workloads that make use of more than 8 cores. But I've watched several videos that made using the 7900x3d or 7950x3d seem like a nuisance because of the core parking. So I would decide against a v cache chip. But your video explains it way better and it now seems obvious to me. The 7950x3d seems like the best of both worlds option for me. It has the v cache available to 8 cores when I'm gaming, but has the 16 cores needed for when I need to work. Finally have my mind made up.
The thing is, the 7800X3D is basically the same as a 7950X3D in gaming. So it's actually better overall, unless you absolutely need 16 cores, or unless you want to do heavy workloads while gaming. You basically only gain productivity performance with the 7950, which is what hurts its value a bit. You don't really want a situation where in gaming you have all 8 cores on the VCACHE to be in use + 2 cores not on the VCACHE, because then the other cores will not really have the benefit of VCACHE, so they'll be slowed down some. That's why the 7800X3D is better, you will never get that scenario. Games also aren't all that multithreaded to begin with. Even if a game uses 8 cores, it can only do so much with them, usually underutilizing some cores.
@@peoplez129You're missing the point of the 7950x3d. I said in my original comment I need more than 8 cores for work, but also wanted 3d vcache for gaming. So explain to me how the 7800x3d is better for my use case? You just rambled on about how the 7800x3d is better for gaming. And nobody is saying that it doesn't make more sense if you're just gaming. But I'll remind you yet again that I'm not just gaming.
He's THAT wendell? Teksyndicate wendell? Holy hell
Is this supposed to become independent of the Xbox Game Bar eventually.. Or is it intended that you always have to run this for your CPU to work correctly on Windows?
this is amd's solution for now. i'm guessing that in time they could make it work with bios/windows/amd driver updates and without the gamebar but who knows. 1st generation products are always overcomplicated and may not always work properly
These type of topics are the ones I enjoy the best! Thank you, Wendell...🇺🇸 😎👍☕
@Level1Techs, Theoretically if a game benefit more from higher clock (ex: CSGO), can you use core affinity to run it on the non 3DVcache cores?
yes, and you can script it so this "always" happens. Or a utility like process lasso can totally do this for you.
@@Level1Techs This is really helpful! Thank you so much! here is my conclusion on the 7950X3D: Everyone seems to say that 7950X3D is a more expensive 7800X3D, but for 250$ more you get all the productivity performance and if you are willing to play around in your system and optimize scheduler per game basis you could end up getting the best of both world. So basically 7950X3D is like having a 7800X3D and a 7700X together but "binned" so you get a slightly better CCDs for both. At the end of the day, 7800X3D will be a plug and play, great gaming experience for most games, 7950X3D will be for advanced mixed users who have the budget and like to tweak thing for a slightly better performance.
@@Level1Techs Thats the Thing. As far as i can tell no Reviewer did test this or tell ppl this or make Charts for this with and with out X3D.
All Charts i see are always only the 7950X3D but only the X3D Part of the CPU is tested. No one testet the 7950X3D with the none V Cache CCD. I have seen some Tests where Reviwers did also Test an normal 7950X besides the 7950X3D and tell ppl that it is better than the 7950X3D in some Cases. But that doesnt make any Sense to me at all (maybe a few FPS sure but not a few %).
While its correct in Theorie... when you make Tests and an out of the Box Experience... i think it should very well be highlighted that you can also just get the same Performance always on the 7950X3D compared to the 7950X.
I know its unfair to Rant (i am not so sorry if that sounds like i am) and very much Work but it seemed like no one really cared to even test this Theorie to tell ppl... look, if you have an 7950X3D you basically can also have the same FPS as an 7950X. At least ONE TEST to prove the Point would be nice. Because honestly now it seems everyone is kinda biased and against AMD because there should never be the Case where the 7950X3D is (Way) worse than the 7950X. If it is it should be solved since its basically just an 7800X3D and an 7800X.
What i heard from every Review is that you should 100% wait a Month for the 7800X3D. No Question about it. And thats somethin i honestly think is unfair in a Way towards AMD and the Consumer who thinks the 7800X3D will always be the better choice against the 7950X3D. And thats simply not the Case. There are Games out there where the 7950X3D will be the better Choice because you can just force the none V Cache to do the work. With the 7800X3D there is no such Choice. Of corse i get that the 7800X3D is Way cheaper and also almost no Difference will be in most Games. But still i think that it just is not the whole Story. Just because 98% of ppl wont care and know... There are some who want the best of the best and them telling the 7800X3D will be even bit better seems... strange. When you could tell them, look it depends on so many Things.
(Sorry for so much Text and maybe its not 100% Clear either what i meant. English isnt my main Language. Also again i am not mad against any Reviewer but i just feel like everyone was missing that. If i am wrong just say so. I might be not getting it)
@@slimmkawar9780 Oh. I just wrote a similar Thought. Well, thanks. lol. Yeah, i too think its strange no one pointed that out. Its like everyone thinks the X3D Part will always be the best Choice for Games which isnt the Case and there is a good Chance that the Scheduler improves as Time goes by and that at some Point we dont need to mess around anything anymore but the 7950X3D just automatically knows whats better to use. the CCD with the 7800X3D or the 7700X. Right now it seems kinda bugged if i am totally honest. Not really but you get the Idea what i mean.
Some games you physically cannot do some of these tricks if they're EAC protected (presumably other anti-cheats as well) as they block affinity changes. Very frustrating when you watch a single thread heavy game land on the worst core or a HT core and can do nothing about it.
In that case, at least on Intel, there is a thing you can do: disable the other cores from BIOS. Well, not the best, but you can maximize your performance that way.
Is this a denuvo thing?
@@BBWahoo Not, sure, you can try it with Task Manager directly and see if loads move to the cores you select.
Great explanation! Thank you. Have you considered tuning for your audience a Star Citizen rig?
Ty for pointing out the affinity configuration. Screw using these features when I can go full autism on every game testing different CCD configurations on this 7950x3D...
Wonderful video. Learned something new today
So this is pretty much a 5800x3d and a 5800x “glued” together. That’s crazy. This is probably the most creative way the CCDs/CCXs have been used. Imagine having a dedicated gpu core paired with a 7800x3d.
Well a 7700x and 7800x3d glued together, but yes you get the idea ;)
Its bad in its current iteration tho, and maybe this is due to the poor Task scheduler performance, but its bad. Because unlike with intels E cores (which are lame but do work) You arent off loading non game processes to the 7700x ccd part of the chip. Instead, its all on the selected core (typically 7800x3d part , or the vcache part), unless it its super full then it unparks the other CCD (7700x ) for current processes. It unparks them for not just the background process, but all process, INCLUDING THE GAME PROCESSES. Thus, you can now have your game running on the 7700x if you have a lot running, or if say its 6-8 years from now and well you need both CCD's to keep up with the current gen games.
But intel isnt much different in its poor long term prospects, with a max of 8 P cores. Because say in 6-8 years it could leverage 12 p cores.. well there is no 12 p core option. This is why a 7900x isnt the worst option if you are looking long term (but really long term like i said 6+ years). I would wait for the 7800x3d to drop if the 7900x price drops a good amount its not a bad long term option, high hz (still matter) 12 full cores.
@@jetrpg22 Intel’s iteration was horrible with the 12000 series as well. The e-cores were Intel’s poor excuse to counter AMDs higher core counts. Some games wouldn’t even run because the DRM didn’t know what the e-cores were. Thankfully Intel and Microsoft fixed those issues. AMD will also fix those issues without a doubt. Ironically Intel did the initial legwork.
Honestly though, I think AMDs solution has much better potential compared to intel. Intel is packing weaker cores that will always be weak when compared to their power cores, but AMD has cores that are both powerful but optimized for different workloads. AMD still has a ways to go but they’ll get there eventually. Imagine a world, which is kind of already here, where the windows scheduler perfectly parses out a gaming task and a production task and directs them to the right CCXs. For example, a streamer who is using OBS with the production cores and playing a game with the x3D cores. That would be amazing since they would get the best of both worlds.
I know you were discussing long term prospects but I believe you missed out a huge factor in AMDs AM5 platform, and that is its amazing longevity. You can hop onto any of their current CPUs, and easily sway it with a better one further down the line. It’s trivial just how easy it is to do so, especially since AM5 CPUs are LGA now.
@@greensleeves8095 Well i mis-clicked and lost it all.
Anyway, Intels e cores are lame, but thats a design issue. The solution is better (on chip scheduler and current algs). AMD isnt saying they will get 7xxx3d to work this way. But i think the possibility is there with Xbar, even with this being a really crappy way of doing this.
Intel only having 8 p cores means 7900x, and higher, has better longevity anyway. Odd in that while 7800x3d will probably appear to be the best option today BY FAR. The 7900x3d with its extra full cores means in 5-6 years its going to smoke the 7800x3d. This also means the 13700k is probably the better option than the 7800x3d, price depending, regarding longer term and current performance.
The socket point is a good one.
Thanks. Ppl are freaking out way too much.
11:48 Is that a hint of an upcoming video? Performance setting in windows when you're running more than just a game (eg: Discord+Spotify+Browser+Game)? Is it better to be on Balanced vs Ultra?
What happens when I game and encode/stream? If it parks the others cores instead of using them for OBS, it will slow my game. If it unparks them then the game can bounce on the other cores? For games to me it seems neither is the solution. Not parking, not affinity, but rather "preference".
So if the game can run only the preferred cores then run only on them, but if it maxxes them then use the non preferred ones as well.
This is basicly making me think all the time and im glad that im not the only one who thinks about this.
Yes, this is how it currently works. And whythe "if it needs to unpack it will" narrative isnt exactly helpful. Unless this info is also being included. Its a bad solution. The solution works now. But by 2029 and your cpu is the bottleneck, its a bad solution. But because this is mostly a software issue, it may not be an issue by then at all. And work like intels P cores vs E cores (but even a bit better).
Wow, I'd not heard of factoria. Will be downloading the demo tonight and getting familiar. Thanks Wendell!!
Say goodbye to your life
I have to ask, have you slept yet? 😜
@@ripgfa lol
@@optiquest86 No, not yet 🙂
This was a very informative video. Thanks :)
Its true. When u have an 8 core cpu and gaming, youll notice that some cores are more utilized than others are significantly less percentage. Thats because game devs only use those cores and not all 8cores.
Mid work week game, drink every time Wendel says "core parking".
I tried this while driving and ended up parking my car into a wall.
I love that you use powershell to run powershell.
Hey, I resent the scandalous class reference to the delicious and nutritious potato in all it's forms :).
So having your PC running at balanced power while gaming is a good thing, if understand correctly? Windows under system>Display>Graphics has custom options for apps that always choose performance for games.
I've had problems with core parking and virtualization. I have found it necessary to tweek Windows into holding cores active and setting both minimum and maximum values for CPU utilization in power management.
How would you go about finding out what's causing a micro stutter on a PC? I have this weird issue where there's random stutters, but my systems only at like 20% utilization while gaming on the CPU and GPU, and nothing shows what's causing this. I've tried everything I can think of.
if you don't know what is causing something from your own past experience, then the direction you can turn to are Profilers like Intel Vtune and AMD uprof.
these are the same tools that Software Engineers use to profile their own Software. you can get all the data you could ever need out of the CPU to see exactly what it is doing and ergo what the bottleneck is.
if it's Anti-Cheat protected games though, then you're kinda out of luck if you don't have the experience to know yourself, since you probably can't hook there Profilers to those games (without getting banned, atleast).
@@taiiat0 so I've tried AMDs profiler. It only does processes, and I tested it on a game, specifically VRChat and it won't even show the process for the game when I run it as aministrator. Idk what's going on but this is driving me crazy.
@@nathantron
hmm
i don't think VR should make a Process special, and i'm pretty sure VRchat doesn't have Anti-Cheat...
past that, beats me, sorry. i don't have any experience using AMDs' tool.
@@taiiat0 I didn't think so either. I tried it in VR mode and Desktop mode. It's so weird, but the game has an issue where it grinds to a hault when anything avatar related is loaded. I even moved the avatar cache folder to ram and it didn't help at all.
goddamn that's a fresh wendell
glad you answered the parking question as i was thinking of why not just un park them. i'm curious on how well the game star citizen would do with intel vs amd as it is a very cpu and memory intensive game.
I REALLY wish CCD1 was the frequency one, and CCD2 was the cache one. That way, the operating system and all background tasks could default to those and then you can launch games and give them CCD2 by themselves to minimize the amount of effort needed to manage this manually. Really frustrating decision on AMDs part.
Well AMD's preferred core logic already works independently from this, so its going to put other things by default on the frequency cores. It uses XB game bar to otherwise specifically flag the cache cores.
As an operating system engineer, trust the scheduler. It'll be more right, more of the time in the long-run.
I have a question, if in the future, any game would need more cores, the cpu will unpark them right ? Is there any downside of using unparked cores instead of just a cpu like i9 13900k or i7 13700k which uses all the cores ?
Just the facts ma'am...just the facts.
Awesome
As much as i enjoyed watching this video, there is 100% issues with the core parking and the scheduling atm and it sucks, some games will only utilize 50% of the 3d v cache cores and then also use frequency cores which will result in lower fps/performance i have tested this with a few games.
Are those issues fixed yet?
@@danijelb.3384 Yea they have actually
So funny that Wendell has so few subscribers and LTT has so many.
Looking good man
thanks for telling us the new normal LOL I am hoping that the 7950x3d will be better for me in space sim as running Tobii eye tracker , Dual Joysticks, and discord can run on the other cores and leaving the heavy load for the V-cache and from what I understand Star citizen uses as many core as it can so not too sure what will happen if it helps the game or it will think its only a 8 core CPU, or just put the main hungry loads like the render thread on V-cache and spread the rest over the mix . this was one reason why I wanted both CCD's with V-cache lol
Neat theory video - but 1% on practical how-to. You have anything that actually answers questions like how-to get your 7950X3D to work in games? And/Or how to know your system is properly handling games?
Question, in case of non-gaming (game bar) apps, how the task scheduler will know to use the faster CCD2 for the main threads to benefit from speed and not caching? From what I understood parking is only for games and parks CCD2 cores.
There's something called CPPC "preferred cores", which tells Windows which cores are the highest performing.
In normal use, the second CCD is used more often.
@@Aashishkebab CPPC doesn't work across CCDs, it is working inside a CCD, all those cores ratings are specific per CCD when they are created and evaluated at the factory level. But maybe they did something like instructing OS to use CCD2 as the main CCD with that driver.
@@andreiga76 I'm guessing you don't have one.
If you look in BIOS with the latest update, you can specify the CPPC to prefer cache, frequency, or driver/auto.
Couldn't the scheduler automatically decide whether a process is best served by faster cores or more cache by looking at the amount of cache misses? It could start on the faster cores but if it observes a lot of cache misses it could move it to the cores with more cache. If then it does not improve (e.g. too much data even for the bigger cache) it could move back?
There's a BIOS version of the scheduler which does something like that.
@@spankeyfish the BIOS has nothing to do with the scheduler in Windows apart from maybe providing some information to it.
I remember the first time I saw 128 threads on a CPU ~15 years ago (Sun's Niagara T1/T2) ... My jaw dropped.
Allot of this was way too scientific for me but I get the gist of it. So basically this is the best cpu overall is my take :)
I have actually ordered it. Was 50 / 50 between this or non 3D or 13900k. I think list power in combination of AM5 being more future proof won me over. Plus I don't care about 1080p and ALL these cpus do great. Part 100 FPS won't matter anyone really.
How you finding the cpu/build want to build this too
Lots of Easy Anti Cheat titles do not let you set affinity. Some do if you set affinity for the start_protected_game.exe using process lasso with 0 delay, but some won't and if the windows game bar doesn't work you're kind of screwed and have to disable the cores.
Is this CPU modification benefits only to games? Will I have any speedup in C++ compilation or the regular 7950X is better due to its higher freqs?
I was wondering something similar, will I see any benefit while using my 7950x CPU ?
I've already installed the latest driver package from AMD that has these functions.
compiles are more cores more better. compilers themselves seem not to benefit for huge caches up to and including projects as large as openEmbedded... which is quite large.
@@Level1Techs so if running and testing multiple VM's at the same time would also benefit from more active cores too ?
Would I still see any benefits from changing my power plan to balance from Performance using the 7950x or is that going to be a case by case basis ?
Seems like it shows better gains from apps like gaming vs. production type of work.
Thank you for the video, so much miss information right now about this. Great video to show folks how it actually works.
For the chart at 17:40 where the clocks/prices are shown, one has to remember that for the 7900X3D and 7950X3D, those higher clocks are for the OTHER CCD that doesn't have Vcache. NOTHING you can do will change the fact that the CCD with Vcache has a substrate sitting above the cores and it's harder to dissipate heat from it and that's not made perfectly clear here, or anywhere else frankly when I've watched videos on these 3D parts.
So yes of COURSE the 7800X3D shows much slower clocks, but if AMD listed the specs properly for the 7900X3D and the 7950X3D there would actually be a listing for base and boost clocks for each CCD. The Vcache CCDs aren't running faster JUST BECAUSE there's another CCD. It CAN'T. That doesn't change the fact that a layer thrown on top of the Vcache CCD creates a heat issue on THAT CCD and nothing can change that. It's going to run as fast as it can while keeping temps under the listed spec, and the same will be true for the 7800X3D.
So, the CCDs that have Vcache on ALL THREE PARTS are running about the same speed, and having another CCD on the CPU doesn't change anything. I keep hearing this usage of "offloading" but it doesn't apply. You can't "offload" heat except UP through the substrate to the cooler, and HEAT is the limiter.
First off love this comment, so few are mentioning this.
Second i agree AMD really pulled a fast one not listing clock speed per CCD, I was hoping they had fixed the extra cache slowing max clocks, but the frequency on the 7800X3D killed that dream.
"The Vcache CCDs aren't running faster JUST BECAUSE there's another CCD. It CAN'T."
@@chainingsolid I still have yet to see anyone having background apps run naturally on the non vcache side without an affinity mask. I dont think they do, unless Ts thinks vcache cores are mostly loaded... but thats a bad thing, because that means you continually load up your game cores with background shit until it gets bad.
The correct solution is an option to just put a program on a core and leave it there. not to disable a core just because the active process works better with a lot of cache. If you have 2 cores with different cache, you put the game on the one with lots of cache and the OS on the other core, not force the OS and the game to fight over the one core with extra cache that the OS doesn't even need. I haven't seen anyone doing tests to see if that's what's being done, but that's what parked core does on older CPUs and its dumb tricks for benchmarks.
Thank you for this
would be great to understand how this would effect game streaming. can you have the benefit of the 'game mode' for playing the game and instead of parking the other cores could you use them for video encoding ?
You didn't mention it. But what about if you have background tasks on purpose. Does core parking still outperform putting the other tasks on ccd 2
IMHO: the confusion regarding V Cache on 16 core or higher CPUs has to do with how the chip itself is perceived. V Cache has been portrayed as a "gaming only feature" that gives the assumption that when any game is loaded, the game only runs on the V Cache cores and everything else is to shunt to the non V Cache cores. Recently an acquaintance of mine said she doesn't bother with any of the 3D V Cache chips because 5 to 10fps isn't worth it. While she went on to talk about the human eye versus FPS: in her mind, anything over 144fps is a waste as the human eye can't keep up with motions on a screen above 60fps.
While I have no experience with any of the X3D CPUs: there are many who advocate using the non X3D CPUs due to the scheduling issues. When a CPU is advertised as a drop in upgrade: that is what the user expects. However, as other streamers have commented: there is a 47 page guide on how to get certain higher core count X3D CPUs to run correctly or the "way they're supposed to run."
From a computer science stand point: core parking makes sense as a "soft disable." However, users who expect 16 core X3D CPU to work a certain way are often disheartened because of have to "tweak" this or that to get it to hopefully run the way it is supposed to. In many respects: the CPU cores with 3D V Cache are also liken to Intel's E Cores as they cut run as fast and a non 3D V Cacje CCD. In a way: AMD CPUs with 3D V Cache oprrate backwards from Intel's E-Core/P-Core setup.
From personal experience with my 7700X processor: the games I play run really good for not having V Cache. At the time also and with what was being said about the X3D CPUs: the 7700X made more sense for what I do. IMy PC does run games very well. The productivity stuff I do runs very well. The ability to play a game and have audio or video encoding running in the background doesn't cause a perceived performance loss.
In the end though: all AMD users (IMHO) need to evaluate their purchases based on research versus what AMD sugar-coated with the X3D CPUs. The reason why 8 core CPUs with V Cache are hailed as the "king of Gaming CPUs" is that it is drop in and go with no "user intervention" apart from having the lastest chipset drivers for their respective motherboards.
Yes, many AMD users don't trust AMD or Microsoft as bar as allowing software to dictate how their CPU is supposed to handle itself. Yes, there has been talk that AMD is working on ways to decrease the latency between multiple CCDs (maybe Ryzen 9000 will give a hint of that direction). True, AMD's language can be easily misinterpreted (precursor boost vs precision boost overdrive is just one example) that users get caught up in the hype without actually knowing what it is they are getting. Like I said: I have no experience with any of the X3D CPUs; but, if I were to go that route: I'd only stick to a single CCD chip are is kinda has no choice but to do what it is advertised as doing without user intervention.
I am reminded of an exercise in my basic networking class in college where the student's were to spec out a file server. One student advocated running a server (this was back during token ring networks) with 4 486 CPUs. The instructor laughed as and "What is Windows NT or Novell actually going to do with 4 processors versus just one?" Given how modern CPUs are: there is still this disconnect between the CPU and the expectations of what it can deliver.
One thing that hasn't been talked about is multiple games running simultaneously. Yes, TH-cam is full of benchmarks of X game running on X CPU. However, what if your play style has you running multiple games at the same time?
As one streamer commented: Gamebar should run applications that are not games on the non 3D V Cache CCD and keep the CCD with B Cache as "parked" until a listed game is loaded. Yet, that would still be same issue but in reverse. Perhaps AMD needs to design CPUs that function more like GPUs. I don't think I've ever seen someone talk about GPUs with multiple chipsets having latency issues between the GPU processing unit and the zRAM on the GPU itself. Yes, GPUs are only responsible for a set type of tasks whereas a CPU has to manage everything within itself and the PC as a whole. Perhaps AMD should have focused on the latency between CCDs first before making an unbalanced CPU.
True, CPUs have come a long way from the single digit megahertz days. However and as I've heard a few others say: AMD should have just increased the caches in all CCDs versus just doing it on one CCD. That is probably why so many advocate for the non 3D V Cache cpus eventhough there is sometimes a mild drop in FPS.
Now, I will say this from observation alone and seeing it with my own eyes. A friend of mine stayed with for a few days. Him and I have almost identical systems apart from the CPU (everything else is identical right down to the case). While his system has the 7800X3D: he noted that the game looked "clearer and cleaner" even though the game was running 15fps lower that his 7800X3D (my system has the 7700X in it). While CPU and GPU manufacturers what an do push X number of FPS in a game: sometimes quality is more important that quantity. Yes, competitive games/gamers want max FPS all the time. However, how many people can perceive 144 or higher FPS?
There has become this imbalance between what a CPU should be "good at." Back in the 486 days or even 386 days: those were "jack of all trades and master of all trades." CPUs these days seem to be a "jack of all trades and a master of none." Back then: the OS or program would issue instructions to the CPU and the CPU would comply. A much simpler time versus now having applications that was this or that before executing the instructions given to the CPU. Maybe that is why some older CPUs still perform as good as they do today: they run the instructions given regardless of cache type or size. Yes, a 386 or 486 CPU ran differently back then versus today's CPUs. Perhaps AMD and Intel both are guilty of making CPUs doing tasks that should be handled by hardware and not software. Sound Blaster days come to mind. A game or application requires a sound to be played. The CPU would say "nope" and send those instructions to the Sound Blaster card to generate the sound. Perhaps, going back to those simpler days would see a CPU that is the "jack of all trades and master of all trades." No one CPU is a master of everything: it' is either this utilization or that utilization. .. Gaming versus Productivity versus General PC use. No one processor from AMD or Intel is perfect in this respect. AMD nor Intel have CPUs now that do everything perfectly. This X3D CPU runs games exceptionally good; but, falls behind on certain non-gaming tasks. This non-X3D CPU runs non-gaming tasks exceptionally good; but, falls behind on gaming tasks. Perhaps that is why non-C3D CPUls are advocated for more than their X3D parts. While not completely prefect for gaming and not completely perfect for non-gaming work loads: they are still exceptionally good at doing both without any fuss or additional steps or programs needed.
In the end though: the concept of the X3D CPUs is great when they run as intended. However, their performance is more volatile when compared to their non X3D counterparts: especially with 16+ core CPUs. That is one advantage of the non X3D CPUs. A non-X3D CPU will run the same thing the same way almost every time with a very small margin of error. An X3D chip has an element of OS confusion as there is always a chance a game will run on a CCD without the 3D V Cache and perform completely differently that what is considered "normal."
Alas, I am no CPU engineer; bitut, common 18:46 sense has to play a part as well in viewing CPUs these days. CPUs need to become more streamlined as they grow in capabilities. There is too much focus on just one aspect of what a CPU should do instead of making a CPU that can do it all without temporarily nerfing the CPU just to make it work a particular way. It' is like some automobiles that have 8 cylinder motors that cut to only 4 cylinders when it is going down the road. Those too have a lag if more power is needed. In this case as it relates to CPUs: perhaps the cores should be listed as idle versus parked. Parked gives the connotation that the core is stopped and unavailable for work. Idle would say: I'm not busy; but, I'm ready to go if needed. That is an assurance afforded by the non X3D CPUs:. When it comes to gaming: developers have (IMHO) been given a very long leash in how they develop a game versus what CPU it runs on. I can't count thr number of times that I've heard someone say that game X or program X "favors" one CPU manufacturer over another. This gives the impression that favorites are been implemented at the software level instead of the hardware level. When it comes to gaming especially: the GPU should be the determining factor: not the CPU.
X3D CPUs above the 7800X3D are (as Mr. Gump would say) are like a box of chocolates: you don't really know for sure what you're going to get from one box to the next. A 16 core X3D CPU could run fame X like it should for days, weeks, or even months; but, there is always a risk that the OS or Gamebar will make a mistake and cause game X to underperform or possibly perform better in some cases.
Personally: I'll stick with just the non-X3D V Cache CPUs as I already know that they will run the same way day in and day out.
So ultimately for a game like WoW, should I bother setting the affinity mask or just let windows/game bar do the work?
Is now the time to upgrade to Windows 11? I have been holding out on Win10 for many reasons. Including it seemed like in the beginning AM5 was struggling with the Win11 scheduler. What do the benchmarks say between 10 and 11?
awesome video thank you
I have a question.
I play MSFS20 in VR almost exclusively, but I also need to use addons like planes as well as selected airports, also apps such as VPilot and other things which add to my experience. These addons run while MSFS is running. It would all come under ‘gaming’ in my mind, but do these apps (outside the core MSFS game) use the same ‘gaming’ part of the CPU or the ‘productivity’ part? (5800x3d/ 4090). I know that the 5800x3d is a gaming cpu, but it surely can do productivity work if needed, albeit far less efficiently than other CPUs designed for this type of work.
Should you just use core unparking app and potentially fix it for Ryzen cpu? I've done it before on Intel ones, and it worked fine.
I assume this is the same for the 7900X3D as well? I built a system for someone with that CPU and he wanted to run Linux (arch fork). I was concerned that it wouldn't be updated to properly use the correct cores with either more frequency or cache. It runs fine and works well though. I am not sure if it's fully optimized yet, I can install and use Linux but I am far from anything more than a novice.
I see you use process lasso, im wondering how that app handles the new CPU, I use the pro version with current 7700X and it's perfect, but about to update my system with an X3D CPU.