Which chipset driver did you use for the 5800x3d. I ask because a new chipset driver became available for my AM4 MSI B550 Unify on MSI Product support page and its revision number is the same as the new AM5 chipset driver with x3d optimisation. I'm just wondering if there any improvements for 5800x3d
I'm surprised we didn't see any 4K FPS results for gaming. I'm looking to buy the 7900X3D, but even with the 7950X3D, the 4K is what's important to me. How many people are regularly gaming in 1080p when they don't have to? Can you update your pinned comment with 4K results, please?
Yes. But to be fair for Linux or whatever where your expected to be technical and have use for 8+ cores. I adore the idea of 8cores for games/cache applications. And 8cores dedicated with a extra + 8 lower clocked cores for compute tasks. Basically AMD uses big cores right now as there E-cores. And good on them for making something that good. But mainstream? Gamers? 5600(nonx), 5700x, 5800x3d, 7700x, 7800x3d. And Intel if they manage to deliver a value option or AMD is garbage. It depends really on AMD now. If the idea of free RAM and discounts on CPU and motherboards are not a sign of AMD failures then I do not know what else to say. I do like to mention how distasteful it is that AM5 launched with no real marketing about ECO mode. Did not ditch old coolers for better thermal dissipation benefits. Basically Ryzen 7000 is just Ryzen 5000 with the power usage of Intel. And bigger cache. PCIE 5.0 that is useless and almost DDR5 is also completely useless. And a premium for all of it that when there are 5700x and 5800x3d out there already with huge discounts on AM4 stuff. And since there is NO GPU out there worth buying that actually is bottlenecked by a 5700x. Let alone 5800x3d. AMD's biggest problem is AMD. Like ALLWAYS. Every single time outside of Ryzen 2000-5000 or something.
@@TheDiner50 DDR5 is starting to look pretty decent and does show performance benefits, but it has definitely been a rough launch. That's why I'm just kind of waiting for the 7800X3D to decide how interested I am in a platform upgrade. Zen 4 has been selling badly and it sure has been deserved. AMD is just focusing on Epyc right now.
I really appreciate the 1% and 0.1% lows in the charts, especially the discussion about 'prefer frequency'. Too often people look at average FPS as their only metric. If you have a 120hz monitor, a stable 120fps is far better than something that fluctuates from 100 to 150.
i actually mostly look at 1% lows. averages are meaningless. what matter to me is "does it drop from the frame rate i want?" .1% lows are equally meaningless, because the amount of data necessary for this to be statistically relevant would mean that a .1% low of 60fps needs a benchmark 3 hours long. a 60 second benchmark with an average of 100 fps literally only uses 6 frames to calculate .1% lows. that's... meaningless. i wish people would use 5% lows instead.
Workstation user here. I appreciate you clearing up everything and giving your honest recommendation. I didn’t know if the extra cache was worth it and I got my answer. Keep up the great work.
Same, sounds like perhaps Intel may be the route for workstation focus unless they do some optimization, or you have a very heavy gpu load which perhaps could benefit from the cache.
I have seen numbers that suggest machine learning tasks would profit from large amounts of cache, so that might be one niche production workload where this one excels
Only issue I've seen with Intel on the workstation front has been virtualization, specifically VMWare. It sounds like Workstation can't differentiate between E-cores and P-cores. My buddy's i5 system choked trying to run his usual VM. It's been surprising since they've had a year to figure it out with 12th gen. Would be curious to hear if anyone hasn't had this issue.
@@VirtualAirTour the numbers look silly, but it's purposely creating a CPU bottleneck. You want the GPU to be irrelevant. If a flight sim can create a CPU bottleneck then it would be a good addition, but if it acts like cyberpunk and maxes out the GPU then it's not too informative, all it tells you is if you have enough CPU or not, and anything past that is like a brick wall of pretty much identical numbers as the CPU waits for the GPU.
@@VirtualAirTour I like how people make videos to explain why they do it, people refuse to watch it, then just come to the comment section and complain about it
I'd absolutely LOVE to see an in-depth undervolt + curve optimizer w/ PBO video with the new 3D CPUs, and how efficient boosts can get with your guys' expertise. It'd be a really cool educational video for how CPUs achieve higher boosts efficiently nowadays, too, for those who have been out of the game a bit and are just used to throwing more volts around.
what a time to be alive, the r5 3600 and i3 12100 give out over 150 fps and we're looking at 700$ cpus for 300 fps . we need a channel to test more old chips against these results
Just wait, there will be a time very soon (10 years) where nearly everything is realtime raytraced at human eye resolution. It will be reality-bending, if it's not already.
@@DragonOfTheMortalKombat definitely, I’m so sick and tired of people complaining about new high end products being expensive for gaming when you can go on last generation for cheaper and get a good portion of the performance as the new stuff. On example of this is how I got a RX 6600 for $185 on Amazon, that card can run most games perfectly fine at 1080p and probably 1440p. (I haven’t tested it since I run a RX 6700 XT.) That RX 6600 would probably be perfectly fine for >50% of the community to play games on. (Albeit without Raytracing, but that’s a gimmick/marketing tool to sell new cards anyways.)
Would love to see the 7950X3D's performance on streaming. E.g. Tarkov is benefiting from 3D cache massively. If you assign the v-cache-less 8 cores exclusively to OBS while plaing on those with 3D-Cache this would propably make a great single PC streaming set-up. like having a 5800X3D and 7700X in one
yeah this. If one ccd is basically shot, while you game & can't stream IE like it pretends it's just 8 cores, this CPU makes no sense whatsoever. This is the reason why I wanted to buy it
@@TheHighborn same. I was hoping we would get best of both worlds but looks like the ccd with out the vcache is disabled while under gaming load. What if I want to multitask or stream while I'm gaming, I hope there's a way around this with future updates. if not then I'm just gonna wait for the 7800x3d not paying $700 for a cpu that turns into an 8 core while I'm gaming.
it's too bad that nobody else cares about the FPS in FPS games to the point where they will highlight the FPS in the FPS as much as this FPS chart full of high FPS FPS games.
Why do people even like these stupid comments?? Only Utuber? What a TERRIBLE statement!! Hardware unboxed, paul's hardware, der8auer, some others? nobody remembers them before liking such horrible comments, I'm surprised.
What I like about this channel compared to other is that you give us detail and logic behind your awsners. Most channels just do benchmarks and be like "buy this or buy that" but you tell us the fine details behind everything about why this would work this way compared to X for example. As an I.T analyst it helps me understand more about PC hardware and why certain things preform the way they do.
The only place I trust for in-depth, honest, reliable reporting these days. Thanks GN for your hard work and taking the time to go a step beyond what other outlets do.
That power efficiency chart was really interesting. Comparing power draw to time needed to complete a task makes sense but I hadn't considered it before. Going forward it might even be something that can be a deciding factor in the future when determining relative performance.
Great review! Thank you for including 5950x and 5800x3d in all your charts. I have been planning to get into video editing for Autocross this year and have been leaning towards upgrading to a 5950x from a 3700x instead of jumping to a 7950x, but maybe the x3D model might be a more balanced option for how I use my computer… I’ll have to think on it.
It depends. The 5950X is far better than the 5800X3D for video editing, and depending on the game, you may or may not see a difference. If its anything with a lot going on all at once like MMO's or simulators, then that would tip the scales for the X3D. But if its single player titles, honestly there's not much difference.
3:12 I'm with Jeremy. When they first announced that the 7800 would be a month later, my immediate thought was that they were going after the people who were looking at getting a 7800 but want to build NOW and once the 7900/7950 are out they'll be impatient and fork over the extra cash.
@@eliadbu True, AMD decided to please their shareholders at the expense of early adopters who's always willing to pay more. The real treat will be for those willing to wait 5 short weeks for the value-oriented product.
That was me but comments made me stick it up for another month. Not like if I don't have a PC right now, wanted to upgrade platform for issues and future proofing.
The lower power consumption compared to the i9-13900K is insane. Intel is literally consuming twice the power for the same or minor difference in performance.
For the vast majority of its use the Intel cpu will be using a lot less power. For typical users power consumption testing needs to be done during an actual workload, obviously if your using the cpu to do rendering or something then the likes of blender makes sense as a comparison, not so much for gamers.
Hey Steve, Im sure you heard about KSP 2's launch and its insane system requirements. the game is an awful mess right now, but in the future, i think it'd be fantastic to use it as a benchmark for more CPU heavy physics games. A lot of sim players are the people who dump a lot of money into their systems just for a specific game, and those games make interesting test grounds being that they sit somewhere between a workstation task and a game. case and point: -From the Depths (if you crash one boat into another all 8 threads get pegged for a good 5 minuets) -KSP 1 (game has terrible optimization, so its more of a single core test sadly) -Dwarf Fortress (drain the ocean) - X-plane and MS Flight Sim with settings cranked to max for sim realism. it would be nice to see something of the sort added to your benchmark list so we can see how exceptionally CPU heavy or physics based games behave on CPUs.
Well they are not that insane tbh. They are recommending a 3 year old mid tier CPU and minium a 7 year old mid tier CPU. On the GPU side, its more demanding but i can see the reasoning why and 2060 is 4 years old allready, and franky its not that powerfull, high end or expensive (new one under 300$). You can probably buy used PC that runs KSP2 just fine for 500$.
@@FINRaver then you clearly didn't play any ksp. The recommended (GPU, cpu probably does even worse) falls appart as soon as you have more than 5 engines at the same time or a few hundred parts, and it happens really quick as soon as you go out of the kerbin SOI
@@attilavs2 Yeah but i said i understand WHY the game is so DEMANDING. Its doing so many real time calculations on physics at once that it need the computin power of modern CPUs and GPUs. Dude here is complaning that it need so powerfull PC to run the game that they are "insane", wich they even actually arent. You can run the game without 4090 and 13900k. You can play it with mid tier computers. R5 3600/11500 & 2060 isnt mid tier anymore. It was mid tier 4 years ago. You can get PS5/XSX for 500$ and those will run KPS2 just fine. And you can buy used PC parts for 500$ that will roughly match PS5/XSX computing power. Of course you aint going to run it in 4K max settings +100 fps but 1080p/1440p with lower setting for sure.
@@FINRaver what i mean is that on console or minimum requirements you haven't got a playable experience as soon as you step out of beginner territory, and also ksp1 can litterally run on my 300$ school laptop doing similar physics calculations (smooth 24fps gameplay gang)
@@attilavs2 yeah i hear you. But the requirments arent that bad considering what the game is actually doing. Its not a shit show like op is saying and you dont need the +3K PC to enjoy it.
Can't wait to see this monster's performance in ACC. It's by far the game that takes the most advantage of the 3D V-cache. As for the 5800X3D. I fall in love over and over again every single day when i use it. Just yesterday found out that in my country it outsells the rest of the CPUs in the top 10 spots COMBINED. The sales of 9 models of CPUs can't beat the sales of the 5800X3D. AMD really struck gold with this one.
I will never regret buying my 5800X3D. Great price for awesome performance and I saved a boat load of money by not upgrading everything else. Thanks AMD
I get reminded of the old powermac machines that had aftermarket CPU drop-in upgrades. It required a driver - to turn on additional cores and scheduling for them - else the machine would run in a single core, underclocked state. I kind of feel nostalgic for needing a proper driver to run faster instructions.
Reminds me more of back in the Windows 2000/XP/Vista era, where you most certainly had to install your chipset driver or your system would run at 5% of its performance.
Thanks for this! I'm super keen to see how this performs on Linux, I think there will be some great usecases for the higher cache. Also excited about the stacked dram and how that''ll go. Thanks as always for showing code compiling benchmarks, I usually go to phoronix for that as well, user benchmarks there are pretty good, though their main review is also good.
According to phoronix, there are no kernel patches from AMD yet that change scheduling behavior. Benchmarks look still good though, 16% on avg over the 7950X in gaming. It is also easy to manually assign processes to cores (taskset in the console). Should be possible to add that to launch options in Steam and find the optimum placement for each game yourself. Though would be nice to know if AMD is working on the schedular.
You guys should add spider man remastered/miles morales to the testing mix. Very cpu intensive games, particularly with ray tracing turned on. They seem to become throttled with memory bandwidth because ddr5 has big performance gains. Would be interesting to see if the 3d chips help with the bandwidth. Cyberpunk rt can be the same way
L3 cache of course boosts bandwidth. The 5800X3D can beat an i7-12700K with DDR4 in a game, only when you pair the 12700K or 12900K with DDR5 that you offset the cache difference. Also, the iGPU on my i7-12700K is basicaly twice as fast with DDR5 vs DDR4. Considering that the 7950X3D has an iGPU, it would be interesting watching the boost for the iGPU with the 3D cache tech combined with DDR5 bandwidth.
The extra efficiency is one of the most interesting aspects to me. Maintaining the similar performance while cutting power would be very nice for reducing noise.
It depends, because if your workload is lightly threaded, Intel is using Big-Little. My i7-12700K draws 5 watts at iddle and the i5-13600K draws 3 watts. The medium for power efficency goes down on premiere pro, photoshop, and other similar programs. I recommended a friend a 5900X for blender and similar 3D stuff a long time ago. I have my i7-12700K for music production needs with a Corsair 5000D and a Deepcool Assassin 3, it's more silent than my refrigerator.
Could you throw in tests for simulation type games? Factorio for example has benchmarks which are not based on fps and it is a very cpu bound game. Would be interesting to see benchmarks for very cpu heavy games with this CPU.
@@atriusvinius319The point isn't fps, the point is scalability. The 3d vcache processors can build much bigger bases before your UPS (updates per second) starts to drop. Factorio is memory latency bottlenecked, so the enormous cache helps a ton.
Great and really detailed review,Steve ! However would have really liked to see the boosting behaviour of that monster . Like was it really 5.7 GHz single core or not , what was all-core boost frequency , all unparked core boost frequency and maybe how boost frequency changed in light-threaded (1-4 threads/cores) applications . Obviously I don't expect it to boost as much as 7950x since Tj max and TDP are lower with the same manufacturing node but still would have been intresting for future comparisons with other 7000 X3Ds when you'll finally get them .
so can everyone start talking about the massive disappointment amd has been this year? in both gpus and cpus they cant keep up with competition. even after releasing their top end cpu for gaming specifically it still loses in some games to the cheaper 13900k 😂take the L
@@luxemier not really a surprise. thats how 3D v-cache works. some games use the extra cache and some doesnt. this isnt a surprise. I wouldnt really say this is taking the L. It will probably get even better, with bios updates. possibly.
@@luxemier obv a fanboi. Always people talking about the loses when there are just as many wins. You just look at the numbers that you want to believe in don't you? I have zero allegiance.. I do however see through the shill bullshit.
@@luxemier it loses to an OC’d or higher ram mhz 13900K too. Prob a 13700K too. You have to remember you cant OC X3D or its stuck with 6000mhz ram at best while you can throw 8000mhz on a 13900K if youre willing to spend that much. This cpu is pointless, if you need thw extra cores buy a cheaper 13900K or 7950X
I would love to see some multitasking benchmarks for these high core count CPUs. For example streaming would be interesting to see the game running on the P/3D cores and encoding on the E/High Frequency cores. Also I've noticed that doing CPU intensive stuff like rendering has a surprisingly low impact on game performance when you lower their priority in task manager or process lasso, so that would be interesting to see as well.
At that point, assuming core affinity is setup properly, it would be the equivalent of two separate 8 core 16 thread CPUs being smashed together and working at the same time. Basically a two-in-one package.
@@darkl3ad3r Well ideally yes, but more resources are shared than just the CPU cores. So game perfomance would be impacted regardless, the question is just how much.
Intel implemented Thread Scheduler and telemetry on the CPU, it will beat AMD for multitasking. There no more CPU intensive task than a digital audio workstation, i was running FL Studio 20 with 48 extremely CPU intensive tracks with synthesizers and effects or overall 100% CPU usage at an audio latency of 1.6 ms (like running a game at 690fps), while running two 4K videos on the background and the audio doesn't stutter. I don't even have a graphics card on my PC, audio is 100% bottlenecked by RAM and CPU.
If AMD nexus did it then Intel will absolutely makes amd zen 4 looks pathetic. I'm not making things up but OptimumTech already did this with Alder Lake and it destroys every Amd chip in comparison. It just sad to see how the so called "trusted" channel like this one tend to be biased toward Amd because they often showed Amd advantages but non about Intel.
@@runninginthe90s75 "non about Intel" Exactly. I called out Steve for his AMD bias on the i7-11700K review (unlike the i9-11900K, this is actually a good chip).
The CCD priority problem could be easily fixed by Microsoft if they just added an option to tag programs, you could tag the game executables as "games" and the OS would assign the correct cores, but Microsoft doesnt want to give power to the users, you can assign cores to applications from the task manager, but it gets reseted every time
I really wish amd made an 7950x3dx2 with both ccds getting the extra cache. That would be incredibly interesting and beneficial for workstation users (per the performance of last gen epyc).
They'd have to lower the clocks even more cause 3D 7000 already sucks at heat draw even w/ at low wattage. Not to mention it'd add another $100 to the tag. Sounds great in practice, but it would be worse for most users.
@@OC.TINYYY Yes, but not for all workloads. Phoronix previously showed many CA he compute workloads greatly benefitting from more cache, with the 5800x3d beating out the 5950x. Ai workloads come to mind.
Glad I recently built on the 5800X3D paired with the 4090. Seems like minimal gains for the much higher cost of a new platform. Haven’t played a game that is CPU limited with this setup yet
Try Planetside2, one of the most CPU dependant games out there (basically battlefield with 3 teams instead of 2 and 300 players per team on a huge map with multiple bases). Benefits massively from the extra v-cache. On my R5 2600x it's roughly at 60 fps :( wanna upgrade to one of the new 7000X3Ds once they are all released and more info/updates are present. Also AMD and Microsoft might be able to work around parking the fast cores while gaming in the future. Then the 7950X3D might actually be worth it.
This basically really makes me even more interested in the upcoming threadripper pro 7000wx series. They have more cache (not 3d vcache) by default and if they retain similar base & boost speeds as the desktop 7900 series it would be a hands-down win for HEDT as a general use cpu.
They have more cache but not more cache per CCD usually, meaning neglible effect for games. (Except Star Citizen, the only game that can use 64 threads) And they didn't sample 7900X3D to reviewers because it's effectively a $600 6-core for games, heading off the $100 per core memes
@@Vinterloft games is only partially what I use them for. I need a 'workstation' not a 'game machine'. Mainly use VM's heavily and just 'game' between/waiting on work tasks. So high cycle speed is good for occasional gaming but when 'working' base freq across more cores is important. From what I can see with the X3D chips with cache not off the same ccx you get more latency issues which you would NOT have on a TRP or epyc. Anyway, will see when benchmarks come out. holding my breath. :)
Dying laughing coming over from Optimum's review, very stoic and straight to the point.. versus Steve instantly hitting us with The 7950x3d lookin a lil sus 😳
Holy smokes .. seems the 7950X3D would be a massive upgrade from my 3900X. I'm really tempted, because i can utilise the extra cores (compared to a 7800X3D) for code compiling as a Programmer.
@@Takyodor2 it's better at price, production and vast majority of games it's basically exactly same. So sounds like it's better over all. Microcenter has 13900k at $520 why buy this at $700. Makes no sense
After using 12900k I realized that power efficiency matters. I used a 360 liquid cooler and the temperature creeps up to 95+ after a while running cinebench r23. Initially I was able to hit 27000 in R23. But a year later, it could never pass 25000 (due to thermal throttles) and the temperature jumps to 100+ within a few minutes. I spent countless hours to reapply the thermal paste but I didn’t have any success. Now I decided to switch to 7950x3d. I’m sure any AIO and air cooler can handle 160W max TDP.
Agree. AMD wants 7950X3D to sell a lot of units at that high price for a month or two before they release the part that people actually want. If you're waiting for 7800x3D, you'll still wait. But maybe some will pull the trigger on the 16 core part. Ugh.
The bragging rights people who have more money than sense, that is who this is for (as well as the 13900k). 7800x3d should sell like gangbusters, as the 5800x3d did
@@deansmits006 Still will have better performance with multiple monitor setup and trying to game on one and watch a video on other or what ever task you are doing on other monitor. Noticeable difference when I went from the 5950X to the 5800X3d, I would rather have the ability of both, which is where the 7950X3D comes in or the 13900KS!
The 79050x is perfect for Gamers/Creatives like myself, and what matters on top of that for me is the power efficiency of AMD's architecture compared to Intel. I will wait for the next generation before moving away from my 5950x but this competition is a real benefit for performances boost from 1 generation to the other. So I expect the 8950x to be about at least 70% faster than my actual config for a similar TDP.
Thanks for this. Even though my common load (Factorio as a game mainly) favors the 3D cache very heavily (usually 5800X3D at least on par or better with 13900KS) I don't see a reason to do a whole system upgrade yet (which a switch from AM4 to AM5 would mean), for the 10-20% performance gain the 7950X3D got over the 5800X3D at that cost.
@@TheGuruStud Don't OC it, use the automatic per core curve optimizer. That did wonders for my 5800x and in reviews it does wonders for the 7950x3D, so I assume the same applies to 5800x3D. Temps go down and clock speeds go up in 1 click.
Holy crap. I thought the 7800 was coming out at the beginning of this coming month, not the month after that, They definitely did that to stagger sales and fomo
Flagships have launched first, at different times, across every single consumer technology generation from every company involved in the past 15 years.
@@CyberneticArgumentCreator Not typically on the CPU side though. GPU? Definitely. New cpu's are all usually released on the same day and not staggered.
Can't wait to see the whole stack. This really seems like a CPU for someone that does Production, but also gaming - which is more common nowadays, for sure. But for pure gaming, I'm REALLY excited for the lower SKUs.
If you're not doing heavy rendering or machine learning/AI (highly doubt it cause Quadro and 4000 series cards are ass for gaming). Why would you pay more for AMD when you could get a 13900K for cheaper and almost exactly the same performance except for in LIKE TWO GAMES that no one plays (tomb raider....) Intel is so much better for the consumer right now, they are disrupting the graphics market all while maintaining beastly CPU performance. AMD has become the enemy of the consumer, but go ahead and buy into that....
@@rbush83 why would you need above 700 fps in an mmo? Those are the numbers these chips will do in such easy to run games. Wouldn't anything above 277fps be negligible on almost every monitor in the consumer world?
I would love for a comparison of these kinds of performances at 4k, since it seems that graphics cards are at unreasonably high fps at 1080 p to the point that it is comedically unmeaningful.
I've been waiting for this review, to see how it fares against the 7950X. My workload is part productivity (Premiere Pro, After Effects, Handbrake and Topaz Video Enhance) and part gaming. It looks like the more flexible option for my use case is going to be the regular 7950X. And currently cheaper too. Thank you for the review.
Make sure you use Game Mode in Ryzen Master to make your 7950X to run like a 7700X on GN charts when gaming. Just one click. And a click and a reboot to go back to productivity.
@@cheshirster Tbh, at the moment this whole parking while gaming stuff really removes the benefit of having this difference in v3-cores and fast cores. It will hopefully get fixed in the future so you can get the best of both worlds. Also, this thing is like 33% more expensive (at least in EU, dunno about prices elsewhere) than the 7950x and the benefit of the 7950X3D isn't worth that money (atm). One massive benefit of the 7950X3D is the massively reduced power consumption under heavy load (which could make a financial difference on your energy bill :D)
@@zhanurdos Absolutely, Intel remains king in terms of predictability and overall results in games. The AMD's are wonky as always - killing in one game and sucking in the other.
Not really even Shadow of Tomb Raider Intel was actually 1 fps ahead in 1% lows which Steve ignored to point..... So someone who owns 700$ surely will not be using 1080P display in which 1%, lows more important at high res scenarios.... Not even 13900KS even 13900K is better, I'm suprised tbh but that's reality
@@puciohenzap891 That's cos the graphs use Intel as the baseline. If AMD gaming performance was the baseline it would make Intel processors looks like they were killing it in one game and sucking in another. They're optimised differently so their best and worst performance will be in different tasks.
What an interesting product, it seems like the extra threads don't really help with gaming workloads and the extra cache doesn't really help with workstation workloads which puts it in a bit of an awkward place of not really making purchasing sense for either of them, especially when 7800x3D launches soon and 7950x already exists. Now it makes more sense as to why a 5950x3D never launched, and makes me wonder why this launched first instead of 7800x3D. This is starting to look a lot like a "we made it because we can lol" type of product.
DOA CPU from AMD, this cpu cost more than already overpriced R9 7950X, it even worse because AMD need premium Mobo and premium DDR5 to be able to competes with i9-13900K. What a flop LOL
Cant wait for microcenter bundles for these that include memory and/or motherboard with CPU... It is crazy how they do 32GB memory, ROG Strix Motherboard and 7900x for $600. Wondering/hoping they do deals with these. Just might get me to upgrade from my 3900x setup.
If RT is CPU intensive, let's make the graphics of a game running entirely on a CPU... +20 000 cuda cores wasted for vector math not being used for light rays, let's just use a few CPU cores. Freaking lazy devs.
Amazing review, as always! I’m a little underwhelmed to be honest, though I’m happy there is good competition with Intel now. It’s a win/win for the consumer regardless of what camp you’re in/root for.
@@Hito343 Seems like that's the case for just about every product coming out, CPU or GPU, most could benefit from an undervolt. At least my last 2 GPUs, 5700XT and 3080 12GB both benefit, as does my 5800X, though I like to keep my system very quiet so there is definitely some preference in there.
@@frankguy6843 Yeah I do the same, few extra fps are not worth over significantly lower temps and power consumption. Skipping this gen tho, upgraded from 5600 to 5800X, gonna use this as a daily driver/games and also getting 13900K rig for Adobe apps and Handbrake, can just use my fast DDR4 kit and snatch Z690 mobo.
@@Hito343 Yeah I went from a 3600 to 5800X before they announced/released the 5800X3D which was sad lol was thinking about the 7800X3D when it comes out, but having to also upgrade the MOBO and get DDR5 sounds expensive, but it will have to happen eventually...
I think the most interesting one is that 7950X3D's 1% low perfs are generally lower than 13900k's, which should have more consideration for playing games in high resolution and in the value component.
@@leonrein7393 I've never understood the power consumption argument especially if its for a work application your talking the difference of max $30 a year between them. Which is nothing to a general consumer let alone for business.
@@Lethal_Intent I'm not sure where in the world you are, but in Europe it's currently closer to $150-$200 a year based on an 8hr working day, still not a lot for business use, but does offset the additional platform costs in a year.
@@Lethal_Intent For me its not about power usage, but the heat it creates. I cant have my AC on, because my family hates it in the summer so its either sit in a room with 30C with AMD or 34-35 with Intel.
@@tyrantworm7392 In Europe, it's actually a lot higher than that if the CPU is constantly used at close to 100% (and most of these CPUs are used like that in a workstation environment).
Streamers require **ALL** of the cores to be high performance. The cores need to be running the game, not to be parked, but also running their streaming apps, mixers, multiple monitors, chat and etc so not to reduce the frame rate of the game they're trying to stream. A 'Streaming Benchmark' needs to be done for this task!!! Streamers doing this exact task are a large part of the market for these high-end CPUs.
Wow the power consumption is such a massive focus for me personally, and seeing it beat the 13900K in a lot of scenarios for about half its power budget is CRAZY! Power has been really expensive in Europe so this is absolutely massive.
I think, that depends on a workload. I don't expect 13900K to consume much more power than 7950x when gaming or single-core tasks, for example. Another point is how much time do you use your CPU at max power. I.e. maybe you run blender rendering for 3 hours a day, and 13900k consumes 150w more than 7950x; that's 150Wh*3 of difference per day, 13kWh difference per month. Not an insignificant amount, but if you spend 500+ USD on your CPU, you can probably manage extra electricity cost (roughly 7USD assuming 0.5USD/kWh)
@@innocentiuslacrim2290 I'm still using my GTX 1080 and it's undervolted. I see no reason to upgrade to anything else since pretty much everything is either a negligible upgrade or the power consumption will have to be higher, and the GPU prices themselves are just disgusting.
What's really exciting me is how the competition is really bringing out innovation that makes a tangible difference. The fact that Intel had to stop resting on their laurels and come out with the change in architecture that allowed 12th and 13th gen to right the embarrassment that was 11th gen. The fact that AMD had to conceptualize and implement not only their chiplet design but also 3D cache that shows actual performance gains in gaming at the very least. If they can somehow find away for the extra v-cache to be leverage in areas outside of gaming then that'll be a true game-changer for sure.
13700K/13600K seems to still be the best value. I'm extremely happy with my 13700K and undervolting then overclocking it makes great performance gains. But can't wait to see what the 7800x3d brings to the table, seems the 7950x3d just isn't worth it considering the high price tag.
@@jasonhurdlow6607 More expensive than an i7-12700K? Nah. It's good but isn't plus 300 dollars. The i7-12700K was 250 dollars on Amazon last week. The thing is, the 12700K benefits from more cores, IPC and RAM speed that negates the 5800X3D's 3D cache, so it's a no compromise well rounded chip for the 300-400 dollar range, same happens for the i5-13600K.
@@big0bad0brad It's more expensive than a raw IPC increase by changing the interconnects and making the CPU cores themselves faster. The gaming perfomance increase from Zen 2 to Zen 3 is something that we won't see in a very long time.
Great chip if you are being ignorant AMD fanboy. This overpriced 79503D still lose to i9-13900K even with high end motherboard and ram. Zen 4 is totally disappointing.
This is kind of problematic for me. I was looking forward to these dual CCD chips with 3D Vcache because I wanted to test a number of modded RTS titles that benefit heavily from 12 or more cores in certain scenarios, but only some of the time. It sounds like, despite those games obviously benefitting heavily from leaving the whole CPU on right now, I might actually see a performance hit in those titles? Also games like Death Stranding, I noticed that hitting my 3900x for up to 70% CPU utilization, and I saw similar behavior on my 5950x after upgrading. I hope they get the scheduler sorted out to be CCD-aware because there are probably a lot of workloads that would benefit from it, not just games. I'm still looking forward to seeing how the 7900x3D behaves, what with having more cache per core than the 7950x3D
I wonder how would 7950x3d perform under combined workloads like gaming + streaming. Assigning that high capacity 3d cache ccd to games and the other one to video encode might make a great single pc streaming setup. Would you mind testing it?
Yes, I'm also curious if the parked cores can actually be used by other applications. If not it really does not make much sense because when you game you lose half you cores even if they can be used on other non gaming tasks in the same time.
It probably won't be be as great as you are thinking. If the current "fix" is to park the non-cache cores when gaming, then the option to run games on xcores while doing other things on ycores isn't even an option right now. It will probably run fine, I just wouldn't expect anything "great" compared to the non x3d versions if you try to game and do "x" at the same time. Everything is going to be forced onto the same 8x lower clocked cache cores.
@@joee7452 1080p 60hz encoding on x264 6kbit slow preset is achievable even in an 5700x while gaming, so I don't think having a bit slower (by slow i mean lower clocked) but unoccupied cores would have hard time encoding that. I don't know how AMD's new software works but if it's only assigning cores to do the stuff, it would be great. Also, assigning cores manually might also help. That's why we need tests.
@@OzMuGu I never said it wouldn't work. I said not to expect miracles. They are parking cores. That is disabling their use for the duration of the parking. Now, maybe they are actually doing something else and just using bad terms, but from their explanation, those cores are not available for use while gaming currently so that all the threads are forced onto the cache cores. I am just passing on what AMD's release seems to be saying. Having Windows parking core and then unparking them and putting them to use, then parking again each time a thread is created or continued would introduce a ton of latency at the Windows control level, so I don't see that happening. Remember, part of the issue here is Windows and how it runs. What AMD could do to make it work optimally is stuck with what Windows can actually work with unless they can bypass that and do it at the chip level. I have hope that this is a temporary situation and they will work out a way to have everything usable all the time and just have gaming threads directed to the correct cores. I am just pointing out that currently, that doesn't seem to be the case.
Honestly, when even those are above 250 fps, it really doesn't matter anymore. Nobody can tell the difference between these numbers as they are about 1ms.
Yes, we know. We're the ones who popularized that terminology after Tech Report debuted the frametime thesis, haha. We do when they matter. That's why we keep saying "proportional scaling" in these charts. There's nothing to say if they're working as intended.
@@prman9984 It's not only 1 ms (not even in theory, and even less so in practice), some people do notice. Especially when you start dropping below monitor Hz.
It's so refreshing to see a balanced approach to your reviews. I.E. Not an obvious Intel hater nor fanboy, nor an AMD hater or fanboy. We can all agree to be NVIDIA haters though. It is why this is one of the few channels I am actually subbed to.
I'm not an Nvidia hater. Why would I be? Because of them I sat comfortably on a 1080 Ti for nearly 6 years, and now I have a 4090 that's going to do the same or better with DLSS 3. What's to hate?
@KuraiShidosha 4090 users have nothing to complain about. You are getting unprecedented performance for just 100 uss more than 3090. Ironically 4090 is the most value for money card rn in the high end. However the rest of Lovelace stack is absolute dogshit ripoff.
@@darkl3ad3r its not about the products, its about the pricing. When you have no competition the worst comes out in a company. I still have my 1080 as well as a 3090 and love them both. I can love the products and hate the company
Thank you! Since I am still on AM4, this finally got me to order a 5800x3d. Anything else, especially a platform switch wouldnt make sense for gaming. Cool!
I did the same recently upgrading from a 3700x, very happy with the performance. It's fairly minimal in 4k on a 3070 for well optimised games (DLSS & G-sync pick up the slack already) but noticed a big difference on games like Fallout 76 and especially VR games. I suppose driving two high refresh displays on the VR headset requires a bit more from the CPU.
100%, the uplift is too little and you'd need a 4090 to see gains, also if your getting a 4090 you'd be playing at 1440 or 4k in which cpu difference is even less
It honestly seems like you'd only be paying for the extra V-cache if you're disabling 1 CCD to get the best performance compared to the 7800x3D. Guess the 7900x3D will just be a 7900x for core count that can *also* be a 7600x"3D" for gaming scenarios
@@thealien_ali3382 that makes sense, 7900x has a ton of cache to begin with, high peak frequency, impressive IPC, and is really only bottlenecked by 4080/4090 or 7900 XTX. Still, I was genuinely surprised to see the 7950x3D have meaningful improvement over the regular one seeing as pretty much all of the Zen 4 lineup already has even more cache than previous gen. But it goes to show, the CCD that has the extra cache is the one that matters most. I was also wondering how AMD was gonna approach that with the 12 & 16 core CPUs. Guess the infinity fabric will have to compensate for it, and as such higher speed DDR5 is necessary. And with the 7800x3D, it should be fairly competitive (despite clock speed deficit). I'm sure it will be capable of overclocking shortly after launch, just like the 5800x3D -- just "unofficially".
I'm not sure if I'm the only person that's noticed this, but I have to point it out to you guys at Gamers Nexus. Through out this whole video I can hear a very high pitch "squeak" when Steve talks. I may be completely wrong and it's only me, but just in case it isn't, that the editing team at Gamers Nexus can look into the high pitch audio and perhaps adjust for the next video. I do not mean this in any negative manner, just an observation. Thank you Steve and Gamers Nexus Team for all your hard work and informative video's. Keep up the good work.
Just one last note I'm using a Focusrite DAC, with DT770 Pro headphones with volume at around 60 - 70%. The high pitch "squeak" is prominent in the left channel. Again if this already have been mentioned, and been looked at, I do apologize.
Could AMD work this in a way where other programs can still run on the parked cores? Like discord sharing, OBS, all the background stuff that people run while gaming, not to mention streaming software/overhead.
this was discussed on the flight sim forums regarding 3rd party apps running alongside MSFS. Such as; Navigraph, SimBrief, Orbx... that sort of thing. I think I remember a user there saying his 7950x3D was using parked cores to run those apps, thereby there was no degradation to the running cores devoted to running the sim. Apparently, this processor is doing a very good job running the sim.
I spit my drink laughing at the rapid-fire FPS usage part for R6 Siege. Thank you, please include things like that in the future if possible! I averaged 23 frames per spittle with it.
Not feeling bad about getting a 5800x3D in October for a little over a third of this price lol Also I know it isn't the most popular game but I'd love to see GW2 being tested on these new x3Ds. That game is more CPU bound than any I can think of and to this day is the only game that humbled me since I've had my setup.
Super true. Bro, I got 5800x3d in December and i'm laughing my ass off for this new line up. Like why bother buying $700 CPU now, if there is such minimal difference in games. And that is the only CPU, if you want to build system for AM5 - rebuilding whole pc, lmao, good luck to those people. Best $300 I have spent on the CPU.
@@uglysedzh same for me totally happy with my 5800x3d the only thing i dont like are the temps i have a custom setup with a 420mm and a 280mm rad and cant get it below 90 on load
@@andreasstrauss5194 There's something wrong with your setup for sure, it should not be running that hot at all. It's not a hot chip in the first place.
That's what I don't get.. GN (this video) didn't have to manually disable the non-3D Vcache CCD and still got the same results as HUB! What gives? For example, in Cyberpunk 2077, HUB's simulated 7800X3D is slightly ahead of 13900K, which is the same result as HN's out of the box 7950X3D (not simulated 7800X3D). Edit: GN, not HN. Plus, the Shadow of the Tomb Raider benchmark results for GN were in line with PC World’s (Gordon’s) which is around 20% higher than 13900K, while for HUB the difference was much lower (< 10%).
@@MasterKoala777 Possibly installed the drivers wrong 😅 Seems like a major pitfall, unless there's some scheduling issues in software & performance is going to vary on a per time basis (which would be terrible).
@@MasterKoala777 "HN (this video) didn't have to manually disable the non-3D Vcache CCD and still got the same results as HUB" HUB just wanted to bash AMD for their prices that he sees as unfair. There is no need to disable cores, you can prioritize CCDs. TPU did it right this time.
Something that would be nice is if for an example in Ryzen Master, we could say which program you want to run on the part of the chip with the stacked cache, and which part you prefer to run on the part with the higher clocks, or if you don't mind and might let it run however it might find it being the most convenient option.
@@blackicemilitia7401 in workloads where the cache matters, they’re not. Where they do beat the 5800x3d significantly is due to the extra cores, but that’s not why you buy something like the 5800x3d in the first place. It’ll be interesting when the 7800x3d is released because that will be an apples to apples comparison. But it still costs almost 50% more…
Hey, that's fine. Not only is the 5800X3D a stellar performer that actually makes the 12900KS look silly, but if you decide to move to AM5, well, it would probably be best to wait for Zen 5 or Zen 6.
I went with the 7950x3d, as I'm not just gaming. Streaming and making videos will benefit from that insane performance and also "future-proof" me a bit longer hopefully
I'm still on the fence having to run the Xbox bar all the time just to play a game is moronic. I'm also not thrilled about having to turn off half the course while I'm gaming I often watch videos on the other monitor.
@@RGBeanie Gamers Nexus said it uses the XBox bar to determine if it should turn off the non-cache CCU and you need to set power mode to balanced to properly disable half the CPU so it will only use the part of the CPU with the enhanced cache. Also apparently if you use the old driver it kills your performance so you really want to make sure you're using the current version of the chipset driver. The future proofing is one of the reasons I'm interested but also the lower heat and the performance seems to be better on some games, but not all It's really a mixed bag this time.
@@lance4862 I think I already have Bar disabled for other reasons, I'm sure. I should probably check but performance has been insane thus far for me personally
Still running a 5600x and my next upgrade will be a 5800x3d in a few years before finally upgrading to am5. Thanks amd for letting us stay on am4 for so long
You should also check out our R9 7950X Eco Mode testing: th-cam.com/video/W6aKQ-eBFk0/w-d-xo.html
*takes deep sniff of your hair*
Smells like fresh silicon.
Really disappointed that the editor didn't put an FPS counter on screen during the Rainbow Six Siege portion of this review.
Which chipset driver did you use for the 5800x3d. I ask because a new chipset driver became available for my AM4 MSI B550 Unify on MSI Product support page and its revision number is the same as the new AM5 chipset driver with x3d optimisation. I'm just wondering if there any improvements for 5800x3d
so for 4K gaming there's still no point going above a 13th gen i5 or AMD R5 7000?
I'm surprised we didn't see any 4K FPS results for gaming. I'm looking to buy the 7900X3D, but even with the 7950X3D, the 4K is what's important to me. How many people are regularly gaming in 1080p when they don't have to? Can you update your pinned comment with 4K results, please?
The FPS chart on the FPS was my favourite anaylis of a chart, ever.
Effing high FPS FPS!
Steve breaks the monotony of all the numbers. His sarcasm is god level.
i guess you havnt seen the X`es chart? amazing work =D
That freaking game has so much fps at this point I don't mind him memeing the charts every time
@@GamersNexus Coil whine generator on GPU XD
I'm sure that AMD pushed back the R7 7800X3D because it will destroy the sales of the two other 3D parts.
Obviously
Yes. But to be fair for Linux or whatever where your expected to be technical and have use for 8+ cores.
I adore the idea of 8cores for games/cache applications. And 8cores dedicated with a extra + 8 lower clocked cores for compute tasks. Basically AMD uses big cores right now as there E-cores. And good on them for making something that good.
But mainstream? Gamers? 5600(nonx), 5700x, 5800x3d, 7700x, 7800x3d. And Intel if they manage to deliver a value option or AMD is garbage. It depends really on AMD now. If the idea of free RAM and discounts on CPU and motherboards are not a sign of AMD failures then I do not know what else to say. I do like to mention how distasteful it is that AM5 launched with no real marketing about ECO mode. Did not ditch old coolers for better thermal dissipation benefits.
Basically Ryzen 7000 is just Ryzen 5000 with the power usage of Intel. And bigger cache. PCIE 5.0 that is useless and almost DDR5 is also completely useless. And a premium for all of it that when there are 5700x and 5800x3d out there already with huge discounts on AM4 stuff. And since there is NO GPU out there worth buying that actually is bottlenecked by a 5700x. Let alone 5800x3d. AMD's biggest problem is AMD. Like ALLWAYS. Every single time outside of Ryzen 2000-5000 or something.
@@TheDiner50 personally I'd love a 7900X3D cos I do computer science and I can actually use the more cores, but I also game on the same machine
@@TheDiner50 DDR5 is starting to look pretty decent and does show performance benefits, but it has definitely been a rough launch. That's why I'm just kind of waiting for the 7800X3D to decide how interested I am in a platform upgrade. Zen 4 has been selling badly and it sure has been deserved. AMD is just focusing on Epyc right now.
Equivalent performance for less money, they know that. They want you to FOMO and buy what is available which is the more expensive models.
I really appreciate the 1% and 0.1% lows in the charts, especially the discussion about 'prefer frequency'. Too often people look at average FPS as their only metric. If you have a 120hz monitor, a stable 120fps is far better than something that fluctuates from 100 to 150.
"125 FPS average" with 1% lows of 115 FPS versus "150 FPS average" with 1% lows of 70 FPS.
i actually mostly look at 1% lows. averages are meaningless. what matter to me is "does it drop from the frame rate i want?"
.1% lows are equally meaningless, because the amount of data necessary for this to be statistically relevant would mean that a .1% low of 60fps needs a benchmark 3 hours long. a 60 second benchmark with an average of 100 fps literally only uses 6 frames to calculate .1% lows. that's... meaningless. i wish people would use 5% lows instead.
I look at them all at once and fail to comprehend anything
Never thought about it, thanks!
@@GraveUypothat’s great insight, thanks!
Workstation user here. I appreciate you clearing up everything and giving your honest recommendation. I didn’t know if the extra cache was worth it and I got my answer. Keep up the great work.
Awesome to hear it helped!
Same, sounds like perhaps Intel may be the route for workstation focus unless they do some optimization, or you have a very heavy gpu load which perhaps could benefit from the cache.
I have seen numbers that suggest machine learning tasks would profit from large amounts of cache, so that might be one niche production workload where this one excels
Only issue I've seen with Intel on the workstation front has been virtualization, specifically VMWare. It sounds like Workstation can't differentiate between E-cores and P-cores. My buddy's i5 system choked trying to run his usual VM. It's been surprising since they've had a year to figure it out with 12th gen. Would be curious to hear if anyone hasn't had this issue.
What was the answer?
Thanks for getting this review out right at the embargo time! Hope the team enjoyed their Winter break!
Thanks for being patient while we had fewer uploads the last few weeks! It's been good to get back up to speed!
@@GamersNexus i dont have anymore patience for your other 4090 brand reviews!!
@@GamersNexus The speed with which you can fire "FPS" is definitely impressive!
Thanks for keeping the R5 3600 on the charts. It reminds me how anemic my system is
😄
it's only anemic when you have a 4090 installed otherwise not really lmao
Cries in R5 1600....
I wonder where the 8700k would be on this chart.
im honestly wondering where my i5 10400 is on the chart... due for a upgrade after I've installed a gifted 3080 on it.
You should add a flight sim to the benchmark like MSFS or DCS, the 3d's absolutely slay those titles.
That would certainly be more interesting that comparing games achieving over 500fps.
doupt
@@VirtualAirTour the numbers look silly, but it's purposely creating a CPU bottleneck. You want the GPU to be irrelevant.
If a flight sim can create a CPU bottleneck then it would be a good addition, but if it acts like cyberpunk and maxes out the GPU then it's not too informative, all it tells you is if you have enough CPU or not, and anything past that is like a brick wall of pretty much identical numbers as the CPU waits for the GPU.
@@VirtualAirTour I like how people make videos to explain why they do it, people refuse to watch it, then just come to the comment section and complain about it
Those simulators actually use all the CPU cores too. Something that made the 5800X3D suffer a little on those titles.
I'd absolutely LOVE to see an in-depth undervolt + curve optimizer w/ PBO video with the new 3D CPUs, and how efficient boosts can get with your guys' expertise. It'd be a really cool educational video for how CPUs achieve higher boosts efficiently nowadays, too, for those who have been out of the game a bit and are just used to throwing more volts around.
I am still impressed by the 5800X3D and it's "refusal" to let the older AM4 chipset hold it back.
€315 right now, seems like a bargain.
Agree. I’m considering getting one instead of a platform upgrade.
@@platinums99 Where? I guess I will just sell my 5600 get a 5800x3D and be done for the next 3 years.
yeah guess its sort of gift for am4 buyers
Way too expensive vs the i7-12700K.
what a time to be alive, the r5 3600 and i3 12100 give out over 150 fps and we're looking at 700$ cpus for 300 fps . we need a channel to test more old chips against these results
It is certainly a good time to buy a CPU, 6 cores 12 threads for 100-150$, 96MB cache for 360$ and 14 cores for 300-350$. Just amazing.
Just wait, there will be a time very soon (10 years) where nearly everything is realtime raytraced at human eye resolution. It will be reality-bending, if it's not already.
@@DragonOfTheMortalKombat definitely, I’m so sick and tired of people complaining about new high end products being expensive for gaming when you can go on last generation for cheaper and get a good portion of the performance as the new stuff.
On example of this is how I got a RX 6600 for $185 on Amazon, that card can run most games perfectly fine at 1080p and probably 1440p. (I haven’t tested it since I run a RX 6700 XT.) That RX 6600 would probably be perfectly fine for >50% of the community to play games on. (Albeit without Raytracing, but that’s a gimmick/marketing tool to sell new cards anyways.)
@@Fractal_32 i say it even fine for 90% of games at 1080p ( witjout RT of course)
But its just a nitpick . Who cares. I agree with everything you said
sure except for the GPU problem..
This just makes the 5800X3D seem like an even better value than it already was. I"ll definitely be upgrading to one soon after seeing this video.
It has 1080ti-esque staying power
AMD accidentally made the 5800X3D too good, it just keeps cannibalizing Zen 4 sales lol
Would love to see the 7950X3D's performance on streaming. E.g. Tarkov is benefiting from 3D cache massively. If you assign the v-cache-less 8 cores exclusively to OBS while plaing on those with 3D-Cache this would propably make a great single PC streaming set-up. like having a 5800X3D and 7700X in one
yeah this. If one ccd is basically shot, while you game & can't stream IE like it pretends it's just 8 cores, this CPU makes no sense whatsoever. This is the reason why I wanted to buy it
@@TheHighborn same. I was hoping we would get best of both worlds but looks like the ccd with out the vcache is disabled while under gaming load. What if I want to multitask or stream while I'm gaming, I hope there's a way around this with future updates. if not then I'm just gonna wait for the 7800x3d not paying $700 for a cpu that turns into an 8 core while I'm gaming.
Now that's a reason to use Process Lasso.
@@prman9984 I'm not familiar with process lasso, how will that help?
It can pin process to cpu core
The only TH-camr providing crucial FPS FPS data to consumers. Bravo
GN and Hardware unboxed are both good.
it's too bad that nobody else cares about the FPS in FPS games to the point where they will highlight the FPS in the FPS as much as this FPS chart full of high FPS FPS games.
Hardware Unboxed is the best when it comes to FPS data
Why do people even like these stupid comments?? Only Utuber? What a TERRIBLE statement!! Hardware unboxed, paul's hardware, der8auer, some others? nobody remembers them before liking such horrible comments, I'm surprised.
even an AMD FX CPU has higher FPS in current FPS games than the IQ of some of these replies, lol
What I like about this channel compared to other is that you give us detail and logic behind your awsners. Most channels just do benchmarks and be like "buy this or buy that" but you tell us the fine details behind everything about why this would work this way compared to X for example. As an I.T analyst it helps me understand more about PC hardware and why certain things preform the way they do.
*i love you PC Jesus*
tech jesus*
imagine getting a Jesus themed pc build video
Lmaooo
“He gave us his only begotten son, Steve, to highlight every sin caused by tech companies.”
-Corsairs, 4:20
@@YamiMajic if this isn't already a copypasta it deserves to be
The only place I trust for in-depth, honest, reliable reporting these days. Thanks GN for your hard work and taking the time to go a step beyond what other outlets do.
That power efficiency chart was really interesting. Comparing power draw to time needed to complete a task makes sense but I hadn't considered it before. Going forward it might even be something that can be a deciding factor in the future when determining relative performance.
Techpowerup has always made performance per watt and performance per dollar charts in their reviews. I always check their reviews for this reason.
Great review! Thank you for including 5950x and 5800x3d in all your charts. I have been planning to get into video editing for Autocross this year and have been leaning towards upgrading to a 5950x from a 3700x instead of jumping to a 7950x, but maybe the x3D model might be a more balanced option for how I use my computer… I’ll have to think on it.
It depends. The 5950X is far better than the 5800X3D for video editing, and depending on the game, you may or may not see a difference. If its anything with a lot going on all at once like MMO's or simulators, then that would tip the scales for the X3D. But if its single player titles, honestly there's not much difference.
I'm getting the 7950X3D due to less power usage, runs cooler. The extra 100 dollar is nothing compared to everything else.
@@gamingallday9225 It’s crazy to think $100 is a drop in the bucket, but it’s the market we are in now.
3:12 I'm with Jeremy. When they first announced that the 7800 would be a month later, my immediate thought was that they were going after the people who were looking at getting a 7800 but want to build NOW and once the 7900/7950 are out they'll be impatient and fork over the extra cash.
Defenitly a cash grab, the usual tactic of releasing the more expensive part and later the more value oriented product.
@@eliadbu True, AMD decided to please their shareholders at the expense of early adopters who's always willing to pay more. The real treat will be for those willing to wait 5 short weeks for the value-oriented product.
Yeah, it's capitalizing on fomo 100%.
Yes and that's why I'm grabbing the 13900k now.
That was me but comments made me stick it up for another month. Not like if I don't have a PC right now, wanted to upgrade platform for issues and future proofing.
The lower power consumption compared to the i9-13900K is insane. Intel is literally consuming twice the power for the same or minor difference in performance.
Ye it’s awesome on load, but intel is way better at idle afaik. Idk what to get atm…. Probably wait for the 7800x3d
What a shame for Intel. Which has 10x employees as much as AMD. But still can't beat AMD at all.
For the vast majority of its use the Intel cpu will be using a lot less power.
For typical users power consumption testing needs to be done during an actual workload, obviously if your using the cpu to do rendering or something then the likes of blender makes sense as a comparison, not so much for gamers.
@@sb5253 really depends on what you do and what’s cheap at the time
@@sb5253 13600K vs 7800X3D will be a very tough choice
Hey Steve,
Im sure you heard about KSP 2's launch and its insane system requirements.
the game is an awful mess right now, but in the future, i think it'd be fantastic to use it as a benchmark for more CPU heavy physics games.
A lot of sim players are the people who dump a lot of money into their systems just for a specific game, and those games make interesting test grounds being that they sit somewhere between a workstation task and a game.
case and point:
-From the Depths (if you crash one boat into another all 8 threads get pegged for a good 5 minuets)
-KSP 1 (game has terrible optimization, so its more of a single core test sadly)
-Dwarf Fortress (drain the ocean)
- X-plane and MS Flight Sim with settings cranked to max for sim realism.
it would be nice to see something of the sort added to your benchmark list so we can see how exceptionally CPU heavy or physics based games behave on CPUs.
Well they are not that insane tbh. They are recommending a 3 year old mid tier CPU and minium a 7 year old mid tier CPU. On the GPU side, its more demanding but i can see the reasoning why and 2060 is 4 years old allready, and franky its not that powerfull, high end or expensive (new one under 300$). You can probably buy used PC that runs KSP2 just fine for 500$.
@@FINRaver then you clearly didn't play any ksp. The recommended (GPU, cpu probably does even worse) falls appart as soon as you have more than 5 engines at the same time or a few hundred parts, and it happens really quick as soon as you go out of the kerbin SOI
@@attilavs2 Yeah but i said i understand WHY the game is so DEMANDING. Its doing so many real time calculations on physics at once that it need the computin power of modern CPUs and GPUs. Dude here is complaning that it need so powerfull PC to run the game that they are "insane", wich they even actually arent. You can run the game without 4090 and 13900k. You can play it with mid tier computers. R5 3600/11500 & 2060 isnt mid tier anymore. It was mid tier 4 years ago. You can get PS5/XSX for 500$ and those will run KPS2 just fine. And you can buy used PC parts for 500$ that will roughly match PS5/XSX computing power. Of course you aint going to run it in 4K max settings +100 fps but 1080p/1440p with lower setting for sure.
@@FINRaver what i mean is that on console or minimum requirements you haven't got a playable experience as soon as you step out of beginner territory, and also ksp1 can litterally run on my 300$ school laptop doing similar physics calculations (smooth 24fps gameplay gang)
@@attilavs2 yeah i hear you. But the requirments arent that bad considering what the game is actually doing. Its not a shit show like op is saying and you dont need the +3K PC to enjoy it.
Can't wait to see this monster's performance in ACC. It's by far the game that takes the most advantage of the 3D V-cache.
As for the 5800X3D. I fall in love over and over again every single day when i use it. Just yesterday found out that in my country it outsells the rest of the CPUs in the top 10 spots COMBINED. The sales of 9 models of CPUs can't beat the sales of the 5800X3D. AMD really struck gold with this one.
Hardware Unboxed included ACC in it's video on this cpu,
From hardware unboxed, the performance seems almost the same
@@Valefor61 And it's kinda weird considered that Zen 4 has an 13% +/- IPC improvement, double L2 Cache and higher clocks compared to Zen 3 :/
I will never regret buying my 5800X3D. Great price for awesome performance and I saved a boat load of money by not upgrading everything else. Thanks AMD
Lmfao what is ACC??
I get reminded of the old powermac machines that had aftermarket CPU drop-in upgrades. It required a driver - to turn on additional cores and scheduling for them - else the machine would run in a single core, underclocked state. I kind of feel nostalgic for needing a proper driver to run faster instructions.
🙄
😁👍
Reminds me more of back in the Windows 2000/XP/Vista era, where you most certainly had to install your chipset driver or your system would run at 5% of its performance.
Steve and GN's entire team frigging rule. This is the best video covering the 7950X3D in great detail. You're simply the best, dudes!
Thanks for this!
I'm super keen to see how this performs on Linux, I think there will be some great usecases for the higher cache. Also excited about the stacked dram and how that''ll go.
Thanks as always for showing code compiling benchmarks, I usually go to phoronix for that as well, user benchmarks there are pretty good, though their main review is also good.
Seeing how the scheduling works with Xbox game bar, potentially it could be not great on Linux?
@@sm1rks I'm sorry, but what game bar has to do with scheduling?
@@sm1rks That's the beauty of Linux. You can write a new scheduler yourself. :)
@@alksonalex186 It's mentioned in the video that when game bar detects a game, it turns off the extra cores, boosting performance.
According to phoronix, there are no kernel patches from AMD yet that change scheduling behavior. Benchmarks look still good though, 16% on avg over the 7950X in gaming.
It is also easy to manually assign processes to cores (taskset in the console). Should be possible to add that to launch options in Steam and find the optimum placement for each game yourself. Though would be nice to know if AMD is working on the schedular.
You guys should add spider man remastered/miles morales to the testing mix. Very cpu intensive games, particularly with ray tracing turned on. They seem to become throttled with memory bandwidth because ddr5 has big performance gains. Would be interesting to see if the 3d chips help with the bandwidth. Cyberpunk rt can be the same way
L3 cache of course boosts bandwidth. The 5800X3D can beat an i7-12700K with DDR4 in a game, only when you pair the 12700K or 12900K with DDR5 that you offset the cache difference.
Also, the iGPU on my i7-12700K is basicaly twice as fast with DDR5 vs DDR4. Considering that the 7950X3D has an iGPU, it would be interesting watching the boost for the iGPU with the 3D cache tech combined with DDR5 bandwidth.
The extra efficiency is one of the most interesting aspects to me. Maintaining the similar performance while cutting power would be very nice for reducing noise.
It depends, because if your workload is lightly threaded, Intel is using Big-Little. My i7-12700K draws 5 watts at iddle and the i5-13600K draws 3 watts. The medium for power efficency goes down on premiere pro, photoshop, and other similar programs.
I recommended a friend a 5900X for blender and similar 3D stuff a long time ago. I have my i7-12700K for music production needs with a Corsair 5000D and a Deepcool Assassin 3, it's more silent than my refrigerator.
aaand would be Europe-friendly :)
Especially for me producing audio in a studio enviroment, it really is EVERYTHING.
Could you throw in tests for simulation type games? Factorio for example has benchmarks which are not based on fps and it is a very cpu bound game. Would be interesting to see benchmarks for very cpu heavy games with this CPU.
Hardware Unboxed has factorio data in their testing.
How many FPS do you need in Factorio?) It has simple graphics and the game is not demanding. You can play it on weak CPU and GPU without problems.
@atriusvinius319 its not fps is basically "turn time" like civ he means
@@ThePlebicide But they do FPS instead of turn time, missing the point.
@@atriusvinius319The point isn't fps, the point is scalability. The 3d vcache processors can build much bigger bases before your UPS (updates per second) starts to drop. Factorio is memory latency bottlenecked, so the enormous cache helps a ton.
Great and really detailed review,Steve ! However would have really liked to see the boosting behaviour of that monster . Like was it really 5.7 GHz single core or not , what was all-core boost frequency , all unparked core boost frequency and maybe how boost frequency changed in light-threaded (1-4 threads/cores) applications . Obviously I don't expect it to boost as much as 7950x since Tj max and TDP are lower with the same manufacturing node but still would have been intresting for future comparisons with other 7000 X3Ds when you'll finally get them .
You do not know how much i was waiting for this review. Thank you Steve.
*Thanks Steve
so can everyone start talking about the massive disappointment amd has been this year? in both gpus and cpus they cant keep up with competition. even after releasing their top end cpu for gaming specifically it still loses in some games to the cheaper 13900k 😂take the L
@@luxemier not really a surprise. thats how 3D v-cache works. some games use the extra cache and some doesnt. this isnt a surprise.
I wouldnt really say this is taking the L. It will probably get even better, with bios updates. possibly.
@@luxemier obv a fanboi. Always people talking about the loses when there are just as many wins. You just look at the numbers that you want to believe in don't you? I have zero allegiance.. I do however see through the shill bullshit.
@@luxemier it loses to an OC’d or higher ram mhz 13900K too. Prob a 13700K too. You have to remember you cant OC X3D or its stuck with 6000mhz ram at best while you can throw 8000mhz on a 13900K if youre willing to spend that much. This cpu is pointless, if you need thw extra cores buy a cheaper 13900K or 7950X
I would love to see some multitasking benchmarks for these high core count CPUs.
For example streaming would be interesting to see the game running on the P/3D cores and encoding on the E/High Frequency cores.
Also I've noticed that doing CPU intensive stuff like rendering has a surprisingly low impact on game performance when you lower their priority in task manager or process lasso, so that would be interesting to see as well.
At that point, assuming core affinity is setup properly, it would be the equivalent of two separate 8 core 16 thread CPUs being smashed together and working at the same time. Basically a two-in-one package.
@@darkl3ad3r Well ideally yes, but more resources are shared than just the CPU cores. So game perfomance would be impacted regardless, the question is just how much.
Intel implemented Thread Scheduler and telemetry on the CPU, it will beat AMD for multitasking.
There no more CPU intensive task than a digital audio workstation, i was running FL Studio 20 with 48 extremely CPU intensive tracks with synthesizers and effects or overall 100% CPU usage at an audio latency of 1.6 ms (like running a game at 690fps), while running two 4K videos on the background and the audio doesn't stutter.
I don't even have a graphics card on my PC, audio is 100% bottlenecked by RAM and CPU.
If AMD nexus did it then Intel will absolutely makes amd zen 4 looks pathetic. I'm not making things up but OptimumTech already did this with Alder Lake and it destroys every Amd chip in comparison. It just sad to see how the so called "trusted" channel like this one tend to be biased toward Amd because they often showed Amd advantages but non about Intel.
@@runninginthe90s75 "non about Intel"
Exactly.
I called out Steve for his AMD bias on the i7-11700K review (unlike the i9-11900K, this is actually a good chip).
The CCD priority problem could be easily fixed by Microsoft if they just added an option to tag programs, you could tag the game executables as "games" and the OS would assign the correct cores, but Microsoft doesnt want to give power to the users, you can assign cores to applications from the task manager, but it gets reseted every time
I really wish amd made an 7950x3dx2 with both ccds getting the extra cache. That would be incredibly interesting and beneficial for workstation users (per the performance of last gen epyc).
Not enough X's
@@fajaradi1223 7950X3DX2XX
They'd have to lower the clocks even more cause 3D 7000 already sucks at heat draw even w/ at low wattage. Not to mention it'd add another $100 to the tag. Sounds great in practice, but it would be worse for most users.
@@OC.TINYYY Yes, but not for all workloads. Phoronix previously showed many CA he compute workloads greatly benefitting from more cache, with the 5800x3d beating out the 5950x. Ai workloads come to mind.
@@OC.TINYYY I'm sure, but I'd like more options in either case.
Glad I recently built on the 5800X3D paired with the 4090. Seems like minimal gains for the much higher cost of a new platform. Haven’t played a game that is CPU limited with this setup yet
That's great! The AMD upgradeability seems to have greatly benefited you.
Try Planetside2, one of the most CPU dependant games out there (basically battlefield with 3 teams instead of 2 and 300 players per team on a huge map with multiple bases). Benefits massively from the extra v-cache. On my R5 2600x it's roughly at 60 fps :( wanna upgrade to one of the new 7000X3Ds once they are all released and more info/updates are present. Also AMD and Microsoft might be able to work around parking the fast cores while gaming in the future. Then the 7950X3D might actually be worth it.
This basically really makes me even more interested in the upcoming threadripper pro 7000wx series. They have more cache (not 3d vcache) by default and if they retain similar base & boost speeds as the desktop 7900 series it would be a hands-down win for HEDT as a general use cpu.
They have more cache but not more cache per CCD usually, meaning neglible effect for games. (Except Star Citizen, the only game that can use 64 threads) And they didn't sample 7900X3D to reviewers because it's effectively a $600 6-core for games, heading off the $100 per core memes
@@Vinterloft games is only partially what I use them for. I need a 'workstation' not a 'game machine'. Mainly use VM's heavily and just 'game' between/waiting on work tasks. So high cycle speed is good for occasional gaming but when 'working' base freq across more cores is important. From what I can see with the X3D chips with cache not off the same ccx you get more latency issues which you would NOT have on a TRP or epyc. Anyway, will see when benchmarks come out. holding my breath. :)
Dying laughing coming over from Optimum's review, very stoic and straight to the point.. versus Steve instantly hitting us with The 7950x3d lookin a lil sus 😳
Holy smokes .. seems the 7950X3D would be a massive upgrade from my 3900X. I'm really tempted, because i can utilise the extra cores (compared to a 7800X3D) for code compiling as a Programmer.
But you need a new motherboard and ram. At that point get a i9 13900k it's way cheaper and better overall.
@Jimmys TheBestCop It's not better overall, check the charts.
@@Takyodor2 it's better at price, production and vast majority of games it's basically exactly same. So sounds like it's better over all. Microcenter has 13900k at $520 why buy this at $700. Makes no sense
Why not go 5800x3d? Keep your AM4 board.
Just get a 13900K or 7950X for cheaper instead.the cache barely makes a realistic difference
After using 12900k I realized that power efficiency matters. I used a 360 liquid cooler and the temperature creeps up to 95+ after a while running cinebench r23. Initially I was able to hit 27000 in R23. But a year later, it could never pass 25000 (due to thermal throttles) and the temperature jumps to 100+ within a few minutes. I spent countless hours to reapply the thermal paste but I didn’t have any success. Now I decided to switch to 7950x3d. I’m sure any AIO and air cooler can handle 160W max TDP.
Agree. AMD wants 7950X3D to sell a lot of units at that high price for a month or two before they release the part that people actually want. If you're waiting for 7800x3D, you'll still wait. But maybe some will pull the trigger on the 16 core part. Ugh.
The bragging rights people who have more money than sense, that is who this is for (as well as the 13900k). 7800x3d should sell like gangbusters, as the 5800x3d did
@@deansmits006 Still will have better performance with multiple monitor setup and trying to game on one and watch a video on other or what ever task you are doing on other monitor. Noticeable difference when I went from the 5950X to the 5800X3d, I would rather have the ability of both, which is where the 7950X3D comes in or the 13900KS!
nothing better than start the morning with this review
The 79050x is perfect for Gamers/Creatives like myself, and what matters on top of that for me is the power efficiency of AMD's architecture compared to Intel.
I will wait for the next generation before moving away from my 5950x but this competition is a real benefit for performances boost from 1 generation to the other.
So I expect the 8950x to be about at least 70% faster than my actual config for a similar TDP.
Thanks for this.
Even though my common load (Factorio as a game mainly) favors the 3D cache very heavily (usually 5800X3D at least on par or better with 13900KS) I don't see a reason to do a whole system upgrade yet (which a switch from AM4 to AM5 would mean), for the 10-20% performance gain the 7950X3D got over the 5800X3D at that cost.
OC 5800x3d, problem solved.
@@TheGuruStud Don't OC it, use the automatic per core curve optimizer. That did wonders for my 5800x and in reviews it does wonders for the 7950x3D, so I assume the same applies to 5800x3D.
Temps go down and clock speeds go up in 1 click.
Holy crap. I thought the 7800 was coming out at the beginning of this coming month, not the month after that, They definitely did that to stagger sales and fomo
Because they know people mainly want the 7800X3D. They releasing the best ones first just like Nvidia releasing their best gpu's first for more money
@@TheBlueBunnyKen in gaming 7800x 3d will destroy 7950x3d thats why they pushed the higher 3d cpus first for sales
Flagships have launched first, at different times, across every single consumer technology generation from every company involved in the past 15 years.
@@CyberneticArgumentCreator Not typically on the CPU side though. GPU? Definitely. New cpu's are all usually released on the same day and not staggered.
Classic amd.
its been a minute since I've seen one of your videos and I gotta say your lighting has definitely improved. Much appreciated
Not a second lost in getting this up hahah. Waiting for the 7800x3D myself.
Can't wait to see the whole stack. This really seems like a CPU for someone that does Production, but also gaming - which is more common nowadays, for sure. But for pure gaming, I'm REALLY excited for the lower SKUs.
If you're not doing heavy rendering or machine learning/AI (highly doubt it cause Quadro and 4000 series cards are ass for gaming). Why would you pay more for AMD when you could get a 13900K for cheaper and almost exactly the same performance except for in LIKE TWO GAMES that no one plays (tomb raider....) Intel is so much better for the consumer right now, they are disrupting the graphics market all while maintaining beastly CPU performance. AMD has become the enemy of the consumer, but go ahead and buy into that....
@@billyberner X3D chips really outshine in flight and racing sims and MMOs. So for people who spend most of their time in those they're a no brainer.
@@rbush83 why would you need above 700 fps in an mmo? Those are the numbers these chips will do in such easy to run games. Wouldn't anything above 277fps be negligible on almost every monitor in the consumer world?
I would love for a comparison of these kinds of performances at 4k, since it seems that graphics cards are at unreasonably high fps at 1080 p to the point that it is comedically unmeaningful.
I've been waiting for this review, to see how it fares against the 7950X. My workload is part productivity (Premiere Pro, After Effects, Handbrake and Topaz Video Enhance) and part gaming. It looks like the more flexible option for my use case is going to be the regular 7950X. And currently cheaper too. Thank you for the review.
Make sure you use Game Mode in Ryzen Master to make your 7950X to run like a 7700X on GN charts when gaming. Just one click. And a click and a reboot to go back to productivity.
@@prman9984 Thanks
You cant be more flexible than 7950X3D cause it has clocks AND cache and it can be prioritized.
@@cheshirster Tbh, at the moment this whole parking while gaming stuff really removes the benefit of having this difference in v3-cores and fast cores. It will hopefully get fixed in the future so you can get the best of both worlds. Also, this thing is like 33% more expensive (at least in EU, dunno about prices elsewhere) than the 7950x and the benefit of the 7950X3D isn't worth that money (atm). One massive benefit of the 7950X3D is the massively reduced power consumption under heavy load (which could make a financial difference on your energy bill :D)
I would like to see w10vs w11 testing just a short test to see if there’s a difference between the schedulers
same.
Would be more interesting to see Linux too. Recent kernels have brought a lot of new scheduler changes that should benefit CPU heavy workloads.
Most likely win 10 results are worse unles these optimizations comes to it one day…
7800x3d is safe bett in win10…
Didn't watched it yet but level1tech has a vidéo on their Linux channel (level1linux iirc).
Me too.
30:19 just wanted to say I thought the CPUs resting in that keyboard was a nice touch. Great review, as expected.
So it was obvious from the start. 7800X3D would be the best gaming CPU out of the bunch.
13900ks still the best
@@zhanurdos Absolutely, Intel remains king in terms of predictability and overall results in games. The AMD's are wonky as always - killing in one game and sucking in the other.
Not really even Shadow of Tomb Raider Intel was actually 1 fps ahead in 1% lows which Steve ignored to point..... So someone who owns 700$ surely will not be using 1080P display in which 1%, lows more important at high res scenarios....
Not even 13900KS even 13900K is better, I'm suprised tbh but that's reality
Intel 💙 but i just have 13600k🤣
@@puciohenzap891 That's cos the graphs use Intel as the baseline. If AMD gaming performance was the baseline it would make Intel processors looks like they were killing it in one game and sucking in another. They're optimised differently so their best and worst performance will be in different tasks.
What an interesting product, it seems like the extra threads don't really help with gaming workloads and the extra cache doesn't really help with workstation workloads which puts it in a bit of an awkward place of not really making purchasing sense for either of them, especially when 7800x3D launches soon and 7950x already exists. Now it makes more sense as to why a 5950x3D never launched, and makes me wonder why this launched first instead of 7800x3D. This is starting to look a lot like a "we made it because we can lol" type of product.
DOA CPU from AMD, this cpu cost more than already overpriced R9 7950X, it even worse because AMD need premium Mobo and premium DDR5 to be able to competes with i9-13900K. What a flop LOL
Cant wait for microcenter bundles for these that include memory and/or motherboard with CPU... It is crazy how they do 32GB memory, ROG Strix Motherboard and 7900x for $600. Wondering/hoping they do deals with these. Just might get me to upgrade from my 3900x setup.
Why bother upgrading if you have a fancy rig like that already? You could just slot a 5800x3d in if you want a close-enough upgrade.
It would be interesting to see RT games in cpu benchmarks since it's considered a cpu intensive feature (at least by some).
If RT is CPU intensive, let's make the graphics of a game running entirely on a CPU...
+20 000 cuda cores wasted for vector math not being used for light rays, let's just use a few CPU cores. Freaking lazy devs.
@@saricubra2867 I think the one chip solution is coming but in the distant future, let's say in 20,25 years.
That FPS FPS high FPS had me laughing like an idiot... Great review as always.
Amazing review, as always! I’m a little underwhelmed to be honest, though I’m happy there is good competition with Intel now. It’s a win/win for the consumer regardless of what camp you’re in/root for.
Crazy how close it is to the normal 7950X in production with so much less power
7950X is overtuned out of the box, you are supposed to undervolt it.... whoala, same now.
You can run ECO mode on the 7950X, which is also close the the normal 7950X (about 1%-2%) with so much less power.
@@Hito343 Seems like that's the case for just about every product coming out, CPU or GPU, most could benefit from an undervolt. At least my last 2 GPUs, 5700XT and 3080 12GB both benefit, as does my 5800X, though I like to keep my system very quiet so there is definitely some preference in there.
@@frankguy6843 Yeah I do the same, few extra fps are not worth over significantly lower temps and power consumption. Skipping this gen tho, upgraded from 5600 to 5800X, gonna use this as a daily driver/games and also getting 13900K rig for Adobe apps and Handbrake, can just use my fast DDR4 kit and snatch Z690 mobo.
@@Hito343 Yeah I went from a 3600 to 5800X before they announced/released the 5800X3D which was sad lol was thinking about the 7800X3D when it comes out, but having to also upgrade the MOBO and get DDR5 sounds expensive, but it will have to happen eventually...
I think the most interesting one is that 7950X3D's 1% low perfs are generally lower than 13900k's, which should have more consideration for playing games in high resolution and in the value component.
@@leonrein7393 I've never understood the power consumption argument especially if its for a work application your talking the difference of max $30 a year between them. Which is nothing to a general consumer let alone for business.
@@Lethal_Intent I'm not sure where in the world you are, but in Europe it's currently closer to $150-$200 a year based on an 8hr working day, still not a lot for business use, but does offset the additional platform costs in a year.
@@Lethal_Intent For me its not about power usage, but the heat it creates. I cant have my AC on, because my family hates it in the summer so its either sit in a room with 30C with AMD or 34-35 with Intel.
@@tyrantworm7392 In Europe, it's actually a lot higher than that if the CPU is constantly used at close to 100% (and most of these CPUs are used like that in a workstation environment).
@@Lethal_Intent One word. Heat.
Streamers require **ALL** of the cores to be high performance. The cores need to be running the game, not to be parked, but also running their streaming apps, mixers, multiple monitors, chat and etc so not to reduce the frame rate of the game they're trying to stream. A 'Streaming Benchmark' needs to be done for this task!!! Streamers doing this exact task are a large part of the market for these high-end CPUs.
Waiting for that 7900X3D review.
Us too! We'll be trying to get one when they launch tomorrow.
me also
7800X3D for me
Looking forward to it!
Also interested in seeing if Jeremy's "let's see who will spend $700 before they launch the others!", comment rings true.
Wow the power consumption is such a massive focus for me personally, and seeing it beat the 13900K in a lot of scenarios for about half its power budget is CRAZY! Power has been really expensive in Europe so this is absolutely massive.
Yup, the same here. Energy prices are sky rocketing and I also need to keep the temps down.
It depends how long per day your CPU is seeing high demand.
I think, that depends on a workload. I don't expect 13900K to consume much more power than 7950x when gaming or single-core tasks, for example. Another point is how much time do you use your CPU at max power. I.e. maybe you run blender rendering for 3 hours a day, and 13900k consumes 150w more than 7950x; that's 150Wh*3 of difference per day, 13kWh difference per month. Not an insignificant amount, but if you spend 500+ USD on your CPU, you can probably manage extra electricity cost (roughly 7USD assuming 0.5USD/kWh)
Power consumption is big factor for you and you dont know anything about power consumption of 13900K. You are just ignoring the real world scenarios.
@@innocentiuslacrim2290 I'm still using my GTX 1080 and it's undervolted. I see no reason to upgrade to anything else since pretty much everything is either a negligible upgrade or the power consumption will have to be higher, and the GPU prices themselves are just disgusting.
15:05 my brain hurts now 😳 lol Good vid as always man
Thank you Steven and team for your honesty and transparency. Can't wait for your review of the 7800X3D.
What's really exciting me is how the competition is really bringing out innovation that makes a tangible difference. The fact that Intel had to stop resting on their laurels and come out with the change in architecture that allowed 12th and 13th gen to right the embarrassment that was 11th gen. The fact that AMD had to conceptualize and implement not only their chiplet design but also 3D cache that shows actual performance gains in gaming at the very least. If they can somehow find away for the extra v-cache to be leverage in areas outside of gaming then that'll be a true game-changer for sure.
13700K/13600K seems to still be the best value. I'm extremely happy with my 13700K and undervolting then overclocking it makes great performance gains. But can't wait to see what the 7800x3d brings to the table, seems the 7950x3d just isn't worth it considering the high price tag.
A 5800X3D is still cheaper (where I live) and has better or equal performance in most games.
I will drop in price like all AMD CPUs do
@@BOTYgaming Oh yeah the 5800x3d is amazing for what it is, $20 more than a 13600K here in Canada but it's a great gaming chip for sure!
Great work GN team. Always delivering the best hardware reviews.
I guess this is the time for me to upgrade! From i5-2500k to this. It should be a HUGE upgrade I guess.
Man that 5800x 3D is such an amazing chip 😂 kinda regret not getting that one when I had the opportunity, might wait for that 7800x 3D one though
whats stopping you? they're still around and are a steal at $323 on amazon in the US.
It's still for sale at just over $300.
@@jasonhurdlow6607 More expensive than an i7-12700K?
Nah. It's good but isn't plus 300 dollars.
The i7-12700K was 250 dollars on Amazon last week. The thing is, the 12700K benefits from more cores, IPC and RAM speed that negates the 5800X3D's 3D cache, so it's a no compromise well rounded chip for the 300-400 dollar range, same happens for the i5-13600K.
@@saricubra2867 Personally I'd care more about worst case performance and cache goes a long way to improve that
@@big0bad0brad It's more expensive than a raw IPC increase by changing the interconnects and making the CPU cores themselves faster. The gaming perfomance increase from Zen 2 to Zen 3 is something that we won't see in a very long time.
"Hi, I heard you like FPS, so I put FPS in your FPS."
Thanks Steve. Great review and fantastic work from you all as always. Like, so much quality!
I’m honestly surprised at the productivity benchmarks wow. What a great chip
Mediocre at best for the price
@@pancakegainz Pretty great since it's not even what the chip is designed for.
@@pancakegainz bro it was the same price as the original 7950x. It’ll only get cheaper 🤷♂️
Great chip if you are being ignorant AMD fanboy. This overpriced 79503D still lose to i9-13900K even with high end motherboard and ram. Zen 4 is totally disappointing.
@@runninginthe90s75 says the dude who is shilling for intel.
Im so excited to watch this. Ive been putting off upgrading from a 9900k to 13th gen i7, just to see how the X3D does. Let's go!
Same here, going from an 8700k here
I have a x670e and a 7900x and loving it ngl
im riding my 9900k out for another generation or so.
Using a 7950x rn, if you don’t want memory stability problems then I’d say go with intel
@@rozzbourn3653 I'm waiting until the rest of the X3D chips come out before either getting X3D, or a 13900k, or holding out
Many thanks for the compilation breakdown, and power consumption charts ! To (very) few of us that data actually matters, so again - THANKS !
This is kind of problematic for me. I was looking forward to these dual CCD chips with 3D Vcache because I wanted to test a number of modded RTS titles that benefit heavily from 12 or more cores in certain scenarios, but only some of the time. It sounds like, despite those games obviously benefitting heavily from leaving the whole CPU on right now, I might actually see a performance hit in those titles? Also games like Death Stranding, I noticed that hitting my 3900x for up to 70% CPU utilization, and I saw similar behavior on my 5950x after upgrading. I hope they get the scheduler sorted out to be CCD-aware because there are probably a lot of workloads that would benefit from it, not just games. I'm still looking forward to seeing how the 7900x3D behaves, what with having more cache per core than the 7950x3D
it sounds like marking any software as a game in the xbox game bar thing will make it use the cache CCD
I wonder how would 7950x3d perform under combined workloads like gaming + streaming. Assigning that high capacity 3d cache ccd to games and the other one to video encode might make a great single pc streaming setup. Would you mind testing it?
Yes, I'm also curious if the parked cores can actually be used by other applications. If not it really does not make much sense because when you game you lose half you cores even if they can be used on other non gaming tasks in the same time.
It probably won't be be as great as you are thinking. If the current "fix" is to park the non-cache cores when gaming, then the option to run games on xcores while doing other things on ycores isn't even an option right now. It will probably run fine, I just wouldn't expect anything "great" compared to the non x3d versions if you try to game and do "x" at the same time. Everything is going to be forced onto the same 8x lower clocked cache cores.
@@joee7452 1080p 60hz encoding on x264 6kbit slow preset is achievable even in an 5700x while gaming, so I don't think having a bit slower (by slow i mean lower clocked) but unoccupied cores would have hard time encoding that. I don't know how AMD's new software works but if it's only assigning cores to do the stuff, it would be great. Also, assigning cores manually might also help. That's why we need tests.
@@OzMuGu I never said it wouldn't work. I said not to expect miracles. They are parking cores. That is disabling their use for the duration of the parking. Now, maybe they are actually doing something else and just using bad terms, but from their explanation, those cores are not available for use while gaming currently so that all the threads are forced onto the cache cores.
I am just passing on what AMD's release seems to be saying. Having Windows parking core and then unparking them and putting them to use, then parking again each time a thread is created or continued would introduce a ton of latency at the Windows control level, so I don't see that happening.
Remember, part of the issue here is Windows and how it runs. What AMD could do to make it work optimally is stuck with what Windows can actually work with unless they can bypass that and do it at the chip level.
I have hope that this is a temporary situation and they will work out a way to have everything usable all the time and just have gaming threads directed to the correct cores. I am just pointing out that currently, that doesn't seem to be the case.
30:12 This is doing what it's supposed to do in gaming, indeed! Keep up the good work!
Wish you would highlight the 1% and .1% lows in gaming more, it’s really more important than the average or highs
but, those aren't as many FPS in those FPS. FPS.
Honestly, when even those are above 250 fps, it really doesn't matter anymore. Nobody can tell the difference between these numbers as they are about 1ms.
He did it, did you watch the video ? Lol
Yes, we know. We're the ones who popularized that terminology after Tech Report debuted the frametime thesis, haha. We do when they matter. That's why we keep saying "proportional scaling" in these charts. There's nothing to say if they're working as intended.
@@prman9984 It's not only 1 ms (not even in theory, and even less so in practice), some people do notice. Especially when you start dropping below monitor Hz.
It's so refreshing to see a balanced approach to your reviews. I.E. Not an obvious Intel hater nor fanboy, nor an AMD hater or fanboy. We can all agree to be NVIDIA haters though. It is why this is one of the few channels I am actually subbed to.
I'm not an Nvidia hater. Why would I be? Because of them I sat comfortably on a 1080 Ti for nearly 6 years, and now I have a 4090 that's going to do the same or better with DLSS 3. What's to hate?
@KuraiShidosha 4090 users have nothing to complain about. You are getting unprecedented performance for just 100 uss more than 3090. Ironically 4090 is the most value for money card rn in the high end. However the rest of Lovelace stack is absolute dogshit ripoff.
@@darkl3ad3r its not about the products, its about the pricing. When you have no competition the worst comes out in a company. I still have my 1080 as well as a 3090 and love them both. I can love the products and hate the company
I like how the cable in the sponsored case pops its head out to say hi at 1:23
Thank you! Since I am still on AM4, this finally got me to order a 5800x3d. Anything else, especially a platform switch wouldnt make sense for gaming. Cool!
I did the same recently upgrading from a 3700x, very happy with the performance. It's fairly minimal in 4k on a 3070 for well optimised games (DLSS & G-sync pick up the slack already) but noticed a big difference on games like Fallout 76 and especially VR games. I suppose driving two high refresh displays on the VR headset requires a bit more from the CPU.
100%, the uplift is too little and you'd need a 4090 to see gains, also if your getting a 4090 you'd be playing at 1440 or 4k in which cpu difference is even less
It honestly seems like you'd only be paying for the extra V-cache if you're disabling 1 CCD to get the best performance compared to the 7800x3D. Guess the 7900x3D will just be a 7900x for core count that can *also* be a 7600x"3D" for gaming scenarios
I am using a 7900x and ngl with my 3080ti I am getting over 100fps and above 5.3ghzt on all games, the CPU highest utilisation I have seen is 28%
@@thealien_ali3382 that makes sense, 7900x has a ton of cache to begin with, high peak frequency, impressive IPC, and is really only bottlenecked by 4080/4090 or 7900 XTX. Still, I was genuinely surprised to see the 7950x3D have meaningful improvement over the regular one seeing as pretty much all of the Zen 4 lineup already has even more cache than previous gen. But it goes to show, the CCD that has the extra cache is the one that matters most. I was also wondering how AMD was gonna approach that with the 12 & 16 core CPUs. Guess the infinity fabric will have to compensate for it, and as such higher speed DDR5 is necessary. And with the 7800x3D, it should be fairly competitive (despite clock speed deficit). I'm sure it will be capable of overclocking shortly after launch, just like the 5800x3D -- just "unofficially".
The other CCD is not disabled but deprioritized.
I'm not sure if I'm the only person that's noticed this, but I have to point it out to you guys at Gamers Nexus. Through out this whole video I can hear a very high pitch "squeak" when Steve talks. I may be completely wrong and it's only me, but just in case it isn't, that the editing team at Gamers Nexus can look into the high pitch audio and perhaps adjust for the next video. I do not mean this in any negative manner, just an observation. Thank you Steve and Gamers Nexus Team for all your hard work and informative video's. Keep up the good work.
Just one last note I'm using a Focusrite DAC, with DT770 Pro headphones with volume at around 60 - 70%. The high pitch "squeak" is prominent in the left channel. Again if this already have been mentioned, and been looked at, I do apologize.
Could AMD work this in a way where other programs can still run on the parked cores? Like discord sharing, OBS, all the background stuff that people run while gaming, not to mention streaming software/overhead.
this was discussed on the flight sim forums regarding 3rd party apps running alongside MSFS. Such as; Navigraph, SimBrief, Orbx... that sort of thing. I think I remember a user there saying his 7950x3D was using parked cores to run those apps, thereby there was no degradation to the running cores devoted to running the sim. Apparently, this processor is doing a very good job running the sim.
Pity you didn't include MSFS 2020 as the old X3D worked amazingly with this title . Hopefully this dose as well.
I spit my drink laughing at the rapid-fire FPS usage part for R6 Siege. Thank you, please include things like that in the future if possible! I averaged 23 frames per spittle with it.
Not feeling bad about getting a 5800x3D in October for a little over a third of this price lol
Also I know it isn't the most popular game but I'd love to see GW2 being tested on these new x3Ds. That game is more CPU bound than any I can think of and to this day is the only game that humbled me since I've had my setup.
Super true. Bro, I got 5800x3d in December and i'm laughing my ass off for this new line up.
Like why bother buying $700 CPU now, if there is such minimal difference in games. And that is the only CPU, if you want to build system for AM5 - rebuilding whole pc, lmao, good luck to those people. Best $300 I have spent on the CPU.
This video convinced me to get one! I have the 5600x now but just got a 4080 so this should do well
@@uglysedzh same for me totally happy with my 5800x3d the only thing i dont like are the temps i have a custom setup with a 420mm and a 280mm rad and cant get it below 90 on load
@@andreasstrauss5194 wait wat? The Temps are that high??
@@andreasstrauss5194 There's something wrong with your setup for sure, it should not be running that hot at all. It's not a hot chip in the first place.
HUB did some tests on only the 3d-vcache ccd to simulate a 7800x3d. It might be interesting to see it replicated by Gamers Nexus too.
We don't do simulations like that and will wait for the chip to come out.
That's what I don't get.. GN (this video) didn't have to manually disable the non-3D Vcache CCD and still got the same results as HUB! What gives? For example, in Cyberpunk 2077, HUB's simulated 7800X3D is slightly ahead of 13900K, which is the same result as HN's out of the box 7950X3D (not simulated 7800X3D).
Edit: GN, not HN. Plus, the Shadow of the Tomb Raider benchmark results for GN were in line with PC World’s (Gordon’s) which is around 20% higher than 13900K, while for HUB the difference was much lower (< 10%).
@@MasterKoala777 Possibly installed the drivers wrong 😅
Seems like a major pitfall, unless there's some scheduling issues in software & performance is going to vary on a per time basis (which would be terrible).
@@MasterKoala777 Could also just be they benchmarked different parts of the game and it's just variance.
@@MasterKoala777 "HN (this video) didn't have to manually disable the non-3D Vcache CCD and still got the same results as HUB"
HUB just wanted to bash AMD for their prices that he sees as unfair.
There is no need to disable cores, you can prioritize CCDs.
TPU did it right this time.
Great review. I liked the comment about the investors not having played a game since Pac-Man. :D
The performance per watt graph is insane. AMD's power consumption did a 180° since back in the day, now they are the most efferent compared to Intel.
Seriously. Nearly equivalent performance to the 13900K (and better in some cases) for nearly 50% less power is pretty big.
@@Damaniel3 half of 295w isn't 230w is it.. its not 50% less lol
@@TheBURBAN111 156w vs 295.2w is pretty close to half
I got this thing recently, and I'm not disappointed. It runs anything, anywhere lightning fast.
Something that would be nice is if for an example in Ryzen Master, we could say which program you want to run on the part of the chip with the stacked cache, and which part you prefer to run on the part with the higher clocks, or if you don't mind and might let it run however it might find it being the most convenient option.
Feeling pretty good about the $320 I spent on my 5800x3d. These are faster but not 100% faster!
they are tho
@@blackicemilitia7401 in workloads where the cache matters, they’re not. Where they do beat the 5800x3d significantly is due to the extra cores, but that’s not why you buy something like the 5800x3d in the first place.
It’ll be interesting when the 7800x3d is released because that will be an apples to apples comparison. But it still costs almost 50% more…
You guys should include star citizen in your benchmarks. The 5800x3d absolutely steamrolls any intel chip in there 😉
OMG Steve. Your logic around FPS is starting to feel and sound like lickable wallpaper. thanks
I went with the 5800X3D after I decided that I was going to skip upgrading this generation, at least in terms of platform. I don't regret it at all.
Hey, that's fine. Not only is the 5800X3D a stellar performer that actually makes the 12900KS look silly, but if you decide to move to AM5, well, it would probably be best to wait for Zen 5 or Zen 6.
5800x3D still holds it own perfectly, feels good!
I went with the 7950x3d, as I'm not just gaming. Streaming and making videos will benefit from that insane performance and also "future-proof" me a bit longer hopefully
I'm still on the fence having to run the Xbox bar all the time just to play a game is moronic. I'm also not thrilled about having to turn off half the course while I'm gaming I often watch videos on the other monitor.
@@lance4862 I haven't had to do any of these things?
@@RGBeanie Gamers Nexus said it uses the XBox bar to determine if it should turn off the non-cache CCU and you need to set power mode to balanced to properly disable half the CPU so it will only use the part of the CPU with the enhanced cache.
Also apparently if you use the old driver it kills your performance so you really want to make sure you're using the current version of the chipset driver.
The future proofing is one of the reasons I'm interested but also the lower heat and the performance seems to be better on some games, but not all It's really a mixed bag this time.
@@lance4862 I think I already have Bar disabled for other reasons, I'm sure. I should probably check but performance has been insane thus far for me personally
When are we going to stop testing at 1080p using $700 CPUs and $1,600 GPUs?
The results mean nothing because no one sane is running at these settings.
6:34 those are the first 18 threads, thread 0-17 are 18 threads. U use an index, but thread 17 with index starting with 0 is the 18th core
Still running a 5600x and my next upgrade will be a 5800x3d in a few years before finally upgrading to am5. Thanks amd for letting us stay on am4 for so long