The 9400X should've been the 6-core part for $249.99
The 9600X should've been the 8-core part for $329.99
The 9700X should've been the 12-core part for $459.99
The 9900X should've been the 16-core part for $599.99
-------------------
The xx50 names should be saved for a refresh.
I don't like companies changing specifications on the same model number but I wouldn't assume anything. It may end up making zero difference if it previously had more bandwidth than it needed.
34:10 I’m surprised and a bit proud of PC enthusiasts for not falling for marketing this time. I expected the portion of people who either didn’t do any research, fell for the marketing, or didn’t care about value and just wanted the latest thing to be higher. But it looks like 90-95% of buyers were clued in. Maybe the CPU launch week crowd is unusually informed and it’ll even out a bit.
My personal opinion on this whole Zen 5 thing, from what I have seen in all the YouTube reviews, is that it's a mixture of things:
1.) AMD has been consistent in sticking to efficiency. They usually operate at much lower wattages than the competition, so some generational leaps may not be as big of a performance gain as people want.
2.) There was that weird delay at launch time. I'm guessing there are some problems with microcode that will be worked out within 6 months, and we'll see performance gains.
3.) The Windows scheduler has always plagued AMD on launches. We've seen this a few times, so I wouldn't be surprised if some Windows updates (aside from the fixes already mentioned on the way) come in the next 6 months and we see performance gains.
With that said, there are some things I haven't seen explored that would make for cool testing, to see if there's more performance to be had from Ryzen 9000 (or numbers closer to AMD's original pre-launch claims). Some of these may have been done already and I just haven't found the videos yet.
A.) Fine-tuning PBO for max all-core clock speeds instead of just maxing out the power limit. What happens if the 9700X is running at 5.5GHz all-core all the time in Cinebench or game benchmarks? Either through undervolting plus overclocking, or via overclocking alone, or testing both. Not throwing everything at it to see what it can do, but trying to fine-tune it to run at max boost whenever the application/game will take it.
B.) What will the performance look like once AMD gets higher RAM speeds working? We all know Ryzen loves faster RAM. (That one we have to wait on AMD and BIOS updates for.)
This launch has definitely been weird, but I'd like to see where AMD's at in 6 months. It would definitely be a good revisit around Christmas time to see if bug fixes and other updates to Windows etc. have at least helped claw back some performance and rectified the situation. Only time will tell. It seems this is the norm for everything software and hardware now: release an incomplete product and fix customer perception of it through software updates over the next year... It kind of sucks.
AMD engineering ~ good. AMD marketing and PR ~ not so good.
Q: Why is Zen 5 so disappointing? A: Depends on who you ask. For you guys, with a gaming-focused channel, it's really no better than Zen 4.
I am an old guy who plays with Linux. I do hardly any gaming ~ I have an Intel i7 6700 which is almost a decade old, and a GTX 750 Ti in it ~ which genuinely is a decade old. I just want a new computer, because this one is getting a bit old and skanky ~ like me. I don't compile a Linux kernel every day, I do it in bursts, and I guess it averages out to about once a week. At some point it gets to be like trying to use an HQ Holden from 1975 as your daily driver. It just gradually gets more difficult and expensive and inconvenient. Maybe the cross-over for cars is 30 years ~ the cross-over for computers is about ten.
I do some web-surfing, watch some YouTube, and I compile a kernel every now and then, but I do play with VirtualBox and I have one copy of Windows and about a dozen guest installs of various flavours of Linux. I just like to keep an eye on things. A 9950X with 64GB of RAM and a 2TB gen-4 M.2 drive is going to suit me down to the ground. I am cynical about AI and the inclusion of a Neural Processing Unit, so getting the final generation before that shit becomes standard equipment ~ that's another reason to jump in here. This is a really nice point to do my once-in-a-decade complete platform upgrade.
All the stuff that makes Zen 5 a major disappointment for you guys ~ for me it's pretty much ideal. The fact it's not selling like hot-cakes and the price is going to drop ~ that's like the cherry on top! Just makes it even better.
Why is it so disappointing? Because they plainly lied in their marketing. They promised a new Porsche and delivered a refurbished VW Beetle. They should have marketed it as a Zen 4 refresh with better productivity. We would all have said it was better than expected.
Yeah... It's significantly more expensive and doesn't offer anything like the gaming performance they advertised. No mystery. If they do slash prices it's still a good product, just not what they advertised.
To add to the 8GB VRAM convo: Even my 3070 Ti is a REALLY good card; it's very easy for me to get very high frame rates in RE Engine titles for instance even at 1440p but as soon as I enable enough settings (typically the RT ones), I get massive frame rate drops and/or stuttering. Until it hits that ~8GB of VRAM usage it's always extremely smooth and playable though; really wish Nvidia would've equipped it with 12GB instead
wtf, I was just thinking that I hadn't seen a Hardware Unboxed Q&A video for ages, then came to the channel to look for one and saw one had been uploaded 1 minute ago. IT'S FATE!
That whole first comment is backwards. Zen 4 is pretty much maxed out on IPC & possible clock speeds, having already gained a substantial increase in clock speeds over its base design. There is no more room for Zen 4 to grow IPC, therefore the design needed a rework to allow for higher IPC from a new base point. For God's sake, Zen 4 is like 55% to 75% faster than the original Zen core design, & for the most part the main design of the original Zen cores is still in Zen 4.
28:04 VRAM requirements usually go up with consoles and available screen resolution. If you are developing a game that targets 1080p, it does not make sense to include 4K textures, since a single texture would be 4x the size that the "target resolution" can display; even if that texture filled 100% of your display, you would not be able to see every pixel of it. For example: if you are developing a game right now, you won't use a 4K texture on a small item that occupies 100x200 pixels on your screen.
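To put rough numbers behind this (my own back-of-envelope sketch in Python, not figures from the video; the RGBA8 and BC7 bytes-per-texel rates are the standard ones, everything else is illustrative):

```python
# A 4K texture has far more texels than a 1080p screen can even display,
# and it costs real VRAM: RGBA8 is 4 bytes/texel, a full mip chain adds
# roughly one third, and BC7 block compression gets it down to 1 byte/texel.

def texture_mib(size, bytes_per_texel=4.0, with_mips=True):
    bytes_total = size * size * bytes_per_texel
    if with_mips:
        bytes_total *= 4 / 3   # full mip pyramid ~ 1.333x the top level alone
    return bytes_total / 2**20  # bytes -> MiB

print(f"4096^2 texture: {4096 * 4096 / 1e6:.1f} M texels "
      f"vs {1920 * 1080 / 1e6:.1f} M pixels on a 1080p screen")
for size in (1024, 2048, 4096):
    print(f"{size}x{size}: {texture_mib(size):6.1f} MiB RGBA8, "
          f"{texture_mib(size, bytes_per_texel=1.0):5.1f} MiB BC7")
```

So even compressed, a single 4096² texture with mips is ~21 MiB, which is why an art pass full of 4K textures eats into a VRAM budget so quickly.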
Regarding VRAM size: I maintain that good art direction can make it happen on less than 8GB VRAM for the textures, depending on how much the rest of the tech stack is using up VRAM (Ray Tracing, Framegen, etc...). If the effect you want is close-up looks at objects that reveal tremendous minute details, that's gonna be expensive on texture size, sure. I'd argue a compelling visual design doesn't necessarily need that. Now, a super "hyper-real" vibe will probably want that, and artists should be allowed to go there. But it's a nice move to make a settings combination that looks reasonable within lower VRAM sizes. Since some people are gonna need to turn textures down, making it still look decent should be done if at all possible. Once Nvidia and AMD (and intel??) give us more VRAM on cards, we can maybe say it's time to move on. But that's a ways out, and devs should make games that look good for the most users at launch -- give us good big textures AND good small textures. Eventually, if artists want it, I do think that should happen. We should be able to move on to 12GB+ and not look back too much, eventually. But until that makes sense, IMO if devs want to save time/resources and only make one tier of textures, doing what WuKong did and prioritizing small textures is acceptable and even the right move. Only giving "huge textures" or else "low-effort potato textures" if you dial down settings is kind of a jerk move by comparison, since it'd just about ruin the game for a lot of people.
Shame on you for not using Special K for HDR. It allows for such in depth HDR tuning that you can easily make it better than most native HDR implementations
I agree that the normal textures should be where they are at with Black Myth Wukong, but if you're creating a game that requires the 16GB or 24GB cards to turn up the settings, it would be nice to have a texture pack to actually make better use of the cards. I'd take an HD texture pack for the game, over some of the heavy settings available. I'd be ok with allowing modding, because you know it would get a community made texture pack.
NO regressions? Good luck! Every major CPU had regressions in some benchmarks. The i386 was slower on some instructions than the i286. The Pentium was slower than the i486 in some benchmarks... It's been like this forever. You get an average speed increase, but some instructions/corner cases get slower. The only exceptions are clock speed boosts where the architecture is unchanged.
Guys, we don't even need new CPUs/GPUs.. we have very powerful hardware these days; we need developers to take advantage of it and deliver games that have new elements like physics or something. A new CPU gen every 2 years would be enough!!
Can't really blame developers when the frameworks are horrendously complicated to use and corporate/market is pushing for fast development and short deadlines.
I know we are in the disillusionment phase of the AI hype cycle, but I'd like to see more "AI" hardware (formerly AI/ML, e.g. TPUs/NPUs) be leveraged for better NPCs, better procedural generation, and more dynamic gameplay. Nvidia has showed off some of this with their ACE demo, and we've seen Skyrim modded to use ChatGPT, but it would be nice to see more games where characters act and talk more realistically. Open-world games in particular could benefit from this, but even FPS and other genres could, as well. It feels like NPC "AIs" haven't advanced much relative to graphics in the past two decades. The amount of TOPs in lower end GPUs or CPUs isn't much compared to higher GPUs, but if it's going to be taking up die space, anyway, it might as well be leveraged for games and not just the handful of prosumer applications or Windows features that currently support integrated AI processing.
@@zivzulander To be fair, AI hasn't even started being properly used where it's useful. But yeah, the hype train took it in a completely different direction from where it's actually good. I also think it's too early for games to start using NPUs, since there are only a few people who are going to have that hardware. It'd be pretty short-sighted to only make a game that can run on the newest CPUs, don't you think?
I tried black myth benchmark tool on my GTX 980M and was surprised it runs with low preset with FG at 60 fps and medium preset at 40. Without FG 25-30 fps. All tested at FHD resolution
I don't think anyone was actually upset about Zen5's lack of performance, only disappointed. It was AMD's doubling down on it's bullshit claims, waffling about blaming anyone and everyone but themselves, and gaslighting us all like we're f*cking stupid, *_that_* pissed people off. If they'd have just let the issues work themselves out it would have simply been a disappointing launch and been over within weeks, but AMD just couldn't help themselves to simply STFU and not make everything 10x worse. Everyone was already focused on Intel and the RPL instability drama, and upset that Intel wasn't saying enough publicly. But how much worse would it have been if Intel said there's no problems with Intel, there's no problems with Raptor Lake, and *YOU* just aren't using it correctly? The internet would f**king explode if Intel did what AMD is doing right now.
It's not just gaming GPU VRAM amounts holding back texture quality and game visuals. The largest factor is the hardware in videogame consoles. Those are what really hold back game visual quality, the biggest offenders being the Xbox Series S and, to a lesser extent, the Series X. Even with the unified memory on consoles, it's not all the same: on the Xbox Series X a portion of the memory is clocked significantly slower than the rest, so that portion cannot be leveraged in the same way.
Intel needs to strap in for the most intense reliability benchmarking a consumer CPU launch has ever seen, by a landslide. If they show a 10% performance increase over the 12900K but all reliability concerns are completely eliminated, I'll happily continue to invest in Intel for my machine. Too many bugs with Ansys and other software to change from the default architectures tested.
10% up from the 12900K would end up being a considerable drop in performance from the 13/14900K, but it would mean that their CPUs will probably at least be stable. They would seriously have to eat their pride to release CPUs that are _less_ powerful than their trashfire generations to deliver on stability. But if they focus on energy efficiency and value, that would win back a lot of trust.
The fact that I didn't even notice "worse textures" and I thought to myself "this game looks amazing" goes to show how much people like me would benefit from better performance rather than higher res textures.
Textures don't impact performance if you have enough VRAM, and you will notice texture quality more than just about anything else. If the textures were worse you'd notice, and if they were better you'd also notice.
@@Hardwareunboxed Not more than motion quality. Games are constantly moving pictures; how they look when they move matters more than how they look at a standstill.
@@Hardwareunboxed No, textures definitely impact load times, especially before the shaders cache. Lower textures will mean better load times as the game doesn't have to process as much. I realize you're benchmarking past those initial load times, but it still happens
Are we sure it isn't just rebranded 7000 series CPUs? Like they did with the GPUs back in the day (HD 7950 > R9 280). I have a Zen 3 CPU; I skipped Zen 4 because I previously experienced a lot of bugs being an early adopter. AM5 has now matured and Zen 5 isn't a real performance uplift. Now I have to wait for Zen 6, plus some extra time so I'm not an early adopter. I will not be a beta tester again.
I think it’s pretty straightforward why gaming is essentially unchanged on Zen 5: the I/O die is the same. Given the high sensitivity of AMD chips to memory latency in terms of gaming performance I wouldn’t be surprised if AMD CPU gaming performance is heavily bottlenecked by the I/O die. It would explain why the architecture changes that benefit more brutal cpu workloads just don’t do anything for gaming.
Agreed. Zen 4 clearly has the same problem, seeing the uplift from the 3D V-Cache. So the bottleneck is the same in Zen 5, and likely even worse with its better cores. Hence why I expect the 9000X3D parts to surprise positively.
Yeah definitely not complacency because of Intel's struggles. When the instabilities and oxidations started to happen Zen 5 was already ready for mainstream production.
Well, Intel's new round of stagnation started at least back at dropping MTL desktop in favor of RPL-refresh. Still not long enough to design a chip tho.
Thank you for pointing out the textures! It looks flat, that's the thing I've seen with my own eyes. It bothers me, yet so many go "omg it's so beautiful", but my eyes always notice textures! The rock looks soft, the wood looks soft, etc. I need more detail in the textures! But I've seen games with 8 to 9GB of VRAM usage with amazing textures, so that's something to think about. Also, I think UE5 just isn't great with textures; I think that is its downfall. Look at Snowdrop or other engines making games that use 8 or 9GB of VRAM with outstanding textures, so don't always say it's purely a VRAM thing; to my eye it's an issue with UE5 itself. I don't notice the softness or flat look in most engines or games like I do in Wukong. Think about older game textures: most used under 8GB and yes, they looked amazing. Play Spider-Man on a 1080 Ti and the textures still look incredible; even on a 4060 8GB they look incredible. So yeah, it's not all VRAM-related. Textures can still be made to look good; it's all about the choices the devs make. But you guys are correct that with UE5 we need the VRAM used: Alan Wake proved how good it can look, but we aren't seeing games use it like that. Talk about textures in 4K? Yeah, 14 to 15GB of VRAM, beautiful. So with UE5 it needs the VRAM, but again, I've seen games use 8GB of VRAM and look stunning on textures. IMO it's also based on the engine; it's not such a clean-cut and dry comparison.
Black Myth: Wukong... About the lighting, I'm still of the unpopular opinion that the simplest version looks best. Well, perhaps not best in terms of fidelity, but it certainly looks best to me considering that it's a game and I want to see as much of it as possible, and that doesn't include hiding things in deep shadows. In several of the pictures I have a hard time seeing the equipment, armor and weapon that the character has. But in the most simplified version of the lighting everything is plainly visible. That's what I want to see if I'm playing a game. If I'm watching a movie the shadows can look nice, but then I'm not directing the action, and getting surprised by things you can hardly see is a part of the movie ambience.
I just think gaming is becoming a last priority for AMD. Hopefully I’m wrong but I feel like these huge companies really just don’t care about us gamers anymore.
@@noahallen5046 plus they also care a whole lot about AI. At the end of the day they want to become a trillion dollar company and gaming is becoming an afterthought😂
I was surprised to be able to run Wukong on my 8GB 5700 at 1080p native and Cinematic textures while retaining 60 fps in almost all scenarios. If I enable framegen, it's a rock-solid 60. I can even push it to high details and still remain within the 8 GB frame buffer by locking the FPS at 30. I'm really impressed with what they were able to do with UE5, given my previous experience with the Matrix city demo on the same machine.
People don't seem to understand that Black Myth: Wukong is well optimized and scales across a wide variety of hardware while also keeping VRAM usage low; it is the engine that has the traversal stutter problem. Also, they often confuse demanding games with unoptimized ones: sure, the game is hard to run, but it also looks great.
Yeah, it can run at 4K DLSS Performance while still not breaking 8GB VRAM cards, so I really liked that. I'm playing it at 1440p DLSS Quality capped at 60fps on my 3070 Ti, no framegen, and it's running smooth.
Kitguru did a comparison with Zen4 and Zen5, while using Windows 24H2 and found some good improvements to performance. Maybe this is something HU could investigate with a bigger sample of games.
Eh, not really. Their marketing, even if it had been true, was mostly missing the mark anyway. The X3D CPUs are so much better for gaming; if you've got a normal budget for a gaming PC you will go for those anyway. The non-X3D ones are more for productivity users already.
@@reappermen sorry you don't see it. Gamers get new architectures they didn't ask for that barely increase performance (Ryzen 5, RTX) and we buy them up because we don't have alternatives. They use the revenue to finance R&D for more server/AI chips. They use us, a smaller segment of their income, to pay for R&D. Quite the definition of leverage.
@@Serious-Man except they are not really leveraging gamers. Anyone who wants an AMD CPU for gaming and has the budget for an x700 and up would just get an X3D variant, since they perform so much better for gaming. You can't leverage gamers with a non-gamer-targeted CPU.
Red Dead 2 is an example of a CPU nightmare: even at 4K with a 4090 and a 13900KF you get CPU bottlenecks in the heavy areas. I wonder why it isn't used more for CPU benchmarking on major channels, apart from game testing.
Certainly going to be interesting to see how the 9000 X3D parts stack up. I'd bet that the gains over the 7000 X3D will be as minimal as the non-X3D ones have been.
Zen 5 is more of a redesign than previous iterations, so the hope is that it will be a base to build upon for the future; otherwise it is quite the oof. Basically more of a 7000 replacement than an upgrade, at least in gaming.
Sadly for people that don't yet have an AMD PC it still makes more sense to buy the 7000 series. Currently I'm curious if the rumoured 7600X3D will be a thing.
There will be an update in the Windows 24H2 version. The performance increase is from 3 to 30% depending on the game; even the 7800X3D and other 7000 series chips gain FPS, and there you can see that the 9000 series is faster. It was tested by KitGuru; in their video you can see there was even a 70 fps increase with the 7800X3D and the 9000 series in one game.
Starfield has a mod that introduces native renderer level HDR that has a separate brightness value for UI, it works flawlessly and better than windows HDR, RTX HDR and SpecialK. It's been out for months.
I was seriously expecting that Starfield and the Fallout 4 "next gen" update were going to bring native HDR and ray tracing at least, as well as some new tech to reduce the frequency and severity of loading screens. I mean, these are all pretty space-y features, perfect for a new space game. HDR should really be native on all Xbox titles already. Really disappointed in Bethesda. And fans will swear up and down this is a 'new' game engine just because they gave it a different version number.
@@levygaming3133 outside of some situational global illumination, raytracing would be a complete waste of resources in Starfield given the sheer prevalence of static interior cells. gamers really need to stop insisting on slapping raytracing everywhere regardless of how little benefit it can bring - and then pulling a 'surprised Pikachu' when even the 4090 struggles to maintain solid and consistent framerates.
Can't wait until there's something else to talk about. All my favorite channels are telling me the same thing over and over again... ok, I'm not going to buy one... so what's next? Which DDR maker is crushing it, who's making the best cases, what's shaking in the motherboard world? Dozens of things we could be moving on to at this point.
@@fatidicusaeternus6498 why was he describing the clockspeed of a CPU on that reverse clear board that time then? most famous example of him showing higher clocks than they got
@@fatidicusaeternus6498 I am well aware that he was on the GPU side. I am also aware that he sucked! He caught over-promise syndrome from Raja and never got rid of it.
Using your optimized settings + weak motion blur + the HDR & over-sharpening fix I found on other YouTube channels + an OLED panel, I find the game is flawless in terms of graphics. But I'm sure a high-res texture pack + HDR support will be added later, just like path tracing was for Cyberpunk. The only bad thing about this game is my skill 🙂
Zen +5% has been an unmitigated marketing disaster at a time when AMD had clear air and not one single reviewer could in good faith recommend Intel. AMD would honestly have been better off not releasing it at all. AMD really needs to go through its marketing department the exact same way that Intel is culling jobs.
Something most people don't realize is that Intel is still on 10nm with Raptor Lake, while AMD was on a 5nm node (now 4nm). Intel is making a massive node jump from 10nm to 3nm by using TSMC. At the very least, Arrowlake by design should gain massive efficiency uplifts. I'm very much looking forward to how Arrowlake performs in reviews.
You can't say Zen 5, which encompasses _all_ 9000 series CPUs, when you haven't even reviewed the X3D *_gaming_* CPUs. Plus Zen 5 is better than Zen 4 at literally everything _except_ gaming, where it delivers equal performance with less power consumed; which means it's still better.
@@band0lero Those are on a very mature process node though, so it's not comparable. Isn't AMD using like the same N4P that Nvidia uses for their GPUs now? CPU design is quite different from GPU, & the maturing process will be different.
The 9000 series launch was so bad that the 7800X3D actually rose in price. Super annoying, as this was the week I was driving to Microcenter for my friend's new PC.
Wow, you are correct! They increased it $50.00 for the bundle. I think it is temporary. Keep checking, as the prices should go back down.
@@Dryloch alas, my friend is only waiting on me to send the bundle to build his PC, so I need to get it to him ASAP so he can test stuff within the return windows.
What a launch LMAO (and 😢)
Same happened with the 7000 series against the 5800X3D...
I was expecting this too. Here, at least (Germany), the price fell from around 370€ to around 355€ at the moment. Not much, but 7800x3d is still overpriced here anyway compared to US prices.
Mindfactory in Germany sold in total about 250 Ryzen 9000 CPUs since launch (about half a month)
For comparison, they sold in total 70,000 7800X3D CPUs (which averages to 1000 per week)
AMD jacked up the prices of the 7800X3D after the 9000 series launch failed and demand for the 7800X3D rose.
I think everyone is thinking either "7800x3d can do xyz" or "lets see the reviews for 9000x3d" rather than buying 9000 series just yet.
@@skilletpan5674 I'll wait for Zen 6, my 7600 is doing great.
Repost ryzen 7000 non x3d vs 9000
@@KenpachiAjax that’s how supply and demand works….not sure if you missed that memo.
I've been checking prices almost daily since the release of the 9000 CPUs. The 7000 CPUs are steadily increasing in pricing. The 7700X, the 7800X3D, the 7950X, all of them. I wonder why...
I bought a 7900X for $226 before Zen 5 launched... I feel lucky af
@@punishthemeatpocket I mentioned it before but I got a 7950X for 320 euros (used but fine with warranty) from a guy that was selling to upgrade to the 9000 series. Can't complain either.
Supply and demand: exists
Store raise price:
Fuck, I was hoping the X3D would go on sale, not move in the opposite direction! Dammit!
They cut them assuming they would need to with the new chips out... and now they are finding out they didn't need to.
From my understanding, Zen 5 was primarily engineered for enterprise/workstation environments, and that's where most of the transistor budget was spent: double the AVX-512 throughput (plus some ML-accelerating instructions) and a wider fetch and decode stage that apparently only benefit dense multi-threaded workloads. Every other improvement is pretty much incremental at best for the casual user -- 50% more L1 data cache and a slightly larger ROB.
Zen 4 customers need not apply for an upgrade this generation. Let's hope Zen 6 will bring a more general performance boost alongside.
I wonder if AMD is taking a leaf from Intel's book and going with something like "tick-tock".
Zen 5 we see the foundations uplift, and good improvements on workstation/server workloads.
Zen 6 we will see maturing, and will get a good uplift in gaming.
Zen 7 will be foundations and workstations/server again.
Could be it?
Not so. The largest advancement overall in the architecture is the branch prediction, which affects everything to some degree.
@@sysbofh Don't forget the Zen 4 TSMC process has matured a lot for the CPU. It takes time to get everything sorted out on nodes. CPUs are totally different from GPUs on nodes; they react differently to temperatures too. They may be on N4P like Nvidia's GPUs now, but it's a different type of design.
@@greasebob Which isn't showing up in single thread currently. I'm wondering if the Spectre prediction-injection patch is seeing the 2-ahead prediction as a "Trojan" instruction being injected & bugging out on Zen 5, flushing the micro-op cache & instructions prematurely.
I do agree you are right. But then why did they advertise this generation as a gaming-focused one? It's crystal clear it isn't. There must be some serious miscommunication going on between AMD's engineering and marketing teams.
Steve is sitting, I am calm
Position shows mood,
Steve is sitting, so he's calm,
It's Hardware Unboxed. 🗻
I'm so happy I upgraded to the 5800X3D last year..
7800x3D, no regrets either.
@@TotalXPvideos same $500 for the microcenter 7800x3d bundle was a steal for performance
Even if Zen 5 was actually good that doesn't take away the fact the 5800X3D and 7800X3D get hundreds of fps in games and would be good for many years.
My Ryzen 7 7700X is so fast that I am not going to need to upgrade for like the next 5-6 years.
I'm still on a 5600X 🤐
Zen 5 is an unmitigated awful marketing disaster. Their whole marketing dept needs to be fired. Every opportunity is squandered, every single time. Dr Lisa, do something about it!!!
Blind shareholder detected. It's not just a marketing issue.
Lisa: it's my decision
Maybe Dr. Lisa is the problem 😮 ?!?!?!
@@YuriMomoiro While Zen 5 didn't get the desired uplift in all applications, it still had significant gains in some production workloads. If the marketing team had focused on the areas where the processors actually showed improvement, rather than trying to show gaming gains that weren't there, the launch wouldn't have looked so terrible.
Besides that, the 3D chips are the best gaming CPUs there are, and a new one comes out every generation so we are going to see gaming gains this generation just a bit later.
The whole point of marketing is to showcase your products in a way that makes them attractive to customers, and it seems AMD's marketing department misunderstands what the chips can do and who they could benefit most.
Not actually a great ad for new hires: "oh, yeah! Those offices are empty because we fired everyone. Yup, we pay exactly the same as everyone else. Wait, where are you going?"
AMD shouldn't have marketed the 9000x series to gamers.
Wouldn't be an issue if they didn't lie and say it performed better than the 7800X3D or better than the 14700K. If they had priced the CPUs accordingly and come out with the truthful numbers, they would sell easily.
They shouldn't have marketed it to anyone, not just gamers. It's not even 5% better in normal production workloads. The only place I've seen it be even slightly better is in synthetic benchmarks, and who gives a shit about that.
they should have advertised the efficiency of ryzen 9000 instead
The 9700X unlocked (aka not gimped at 65W) and with PBO pretty much matches the 7800X3D overall in games, and the latter can't be overclocked at all. The 9700X wins with ease outside of gaming. I think the 9800X3D is going to be great -> higher clock speeds, unlocked for OC, and I can easily see this chip beating the 7800X3D by 10-15% if not 20% (tweaked) in gaming. There are also several Windows bugs holding back Zen 5 perf right now which are going to be fixed.
Wendell recently showed that disabling VBS (Virtualization Based Security), preferably through some obscure BIOS setting, benefited Ryzen 9000 in gaming more than Ryzen 7000.
For the question on memory issues with Zen 5: that's something that BIOS updates will fix via AGESA updates. Lest we forget, Zen 4 had issues running 6000MT/s RAM on a lot of chips & couldn't go above 2000MHz FCLK when it first came out. Now we've got a majority of chips with memory controllers that can handle 6200-6400MT/s RAM with 2100-2200MHz FCLK with fewer issues. I'm running 2200MHz FCLK (at 1.2V SoC) and 6200MT/s CL28 @ 1.4V on my 7800X3D and it's happy as a clam. Took some tuning, but it's tight timings without crushing it with voltage.
doubt BIOS can do anything about the cross CCD latency issue tho
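To put a number on why those tighter timings a couple of comments up matter (a quick sketch of my own in Python; the formula is just CL cycles counted at the memory clock, which runs at half the DDR transfer rate):

```python
# First-word CAS latency in nanoseconds: t = 2000 * CL / (transfer rate in MT/s).

def cas_ns(cl, mt_s):
    return 2000 * cl / mt_s

for cl, mt_s in [(30, 6000), (28, 6200), (32, 6400)]:
    print(f"DDR5-{mt_s} CL{cl}: {cas_ns(cl, mt_s):.2f} ns")
# DDR5-6000 CL30: 10.00 ns, DDR5-6200 CL28: 9.03 ns, DDR5-6400 CL32: 10.00 ns
```

So 6200 CL28 reaches the first word meaningfully quicker than either common alternative, which is the point of tuning timings rather than only chasing transfer rate.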
Thank you Steve for your amazingly honest reviews 😊
Thanks for the great show Steve & Tim
I feel as if there wasn't enough controversy though
I've got a quick fix though
Edits audio at 21:15 :
"Don't be poor........ Buy an RTX 4080... ....Get More Vram......That is absolutely .. ... what we've ever said"
- posts on reddit -
Personally, HDR makes more of a difference than RTX to me. HDR is VERY important; I don't get why a lot of PC devs don't add it. Even Windows Auto HDR looks great to me.
I used to game in HDR, but I no longer do so. I play in a dark room (on an LG OLED TV), so most games are too bright for me. Most implementations raise the overall brightness, not just highlights.
I love HDR in movies, though, but it still differs greatly. I've seen a few movies with horrible HDR, where everything is overblown.
The games that have aged the best are most definitely not the games that targeted photorealism. The games that have aged the best have been games that used a more stylized art style and focused on things like lighting. A great example of this is Halo 3. 19:37
Halo 3's character models other than Chief and Cortana look goofy, but everything else holds up and is very enjoyable.
They are saying that about the graphics, not whole games. For example, Arkham Knight & MGS: The Phantom Pain are going to be 10 years old in 2025, but their graphics hold up pretty well.
So, as an owner of a high VRAM GPU I do agree that I would like to see higher res texture options but not at the cost of other visual enhancements (like ray tracing and path tracing). I think the dev of Black Myth made some great choices to ensure performance scales across a broad array of GPUs and including higher res textures options would likely have only led to reviewers selecting those options and claiming the game was “badly optimized”.
No, higher res textures don't really have an impact on performance; that is the whole point of why they are so great. The lighting effects and such are dependent on your output resolution, not texture resolution, so those don't change. And stuff like ray tracing already usually runs at a far lower resolution (if memory serves correctly, the ray tracing layer tends to run at 40-60% of game resolution).
For Zen 5, the main issue is the marketing. Their marketing has been terrible lately. The numbers are correct, but they were presented in a very biased way.
The product itself does have a generational performance improvement, but most of it manifests in non-gaming workloads. In gaming, the performance increase is meh. The issue is that in the past few generations, we have been spoiled by CPU releases accompanied by clock speed increases and possibly cache increases (like with Intel 12th gen to 13/14th gen). Those two factors are actually the biggest contributors to gaming performance increases, rather than the actual architectural improvement. If you ask someone who understands how CPUs work, they will tell you that Zen 5 is better and worthy of a new generation. Gamers don't care about that. They just want to see their FPS go up by like 15 to 20%.
What also didn't help is the pricing. From a gamer's standpoint, it isn't worth the price increase. The issue of course is the fact that this CPU actually can deliver a large generational improvement for some non-gaming workloads (and even in some games!). I do think that the MSRP is just a bit too expensive (just a bit!), but gamers think that it is flat out too expensive. The difference in view comes from the fact that I value the productivity side a lot, since I also use my PC for work that definitely can take advantage of Zen 5's improvements.
Thing is it's not just gaming where you don't see much improvement, plenty of productivity applications don't care about the AVX-512 and L1 improvements.
In 2022 when Zen 4 was released, the 7600X in my country cost 300 euros; now the 9600X costs 370 euros... the 7700X cost 400 euros, now the 9700X is over 500... crazy
That sounds like a scam specific to Hrvatska. Here in Germany, the 9700X is already sold below €380 (and the 9600X below €300), and that includes 19% VAT.
Clicking around, I found offers just over €450 in Croatia. So on one hand you were exaggerating and on the other, the 7700X is still sold for €340 there while it's only €280 here, so CPUs just seem to be priced higher there in general and it's not limited to Zen 5.
@@Lightkie prices that I have mentioned are here in my country, Montenegro. Here prices are crazy high, GPUs especially, even in Serbia I can find some of the CPUs and GPUs for cheaper
The funny part is that even Zen+ had higher gains over Zen in gaming than Zen 5 has over Zen 4.
It's shocking really. Zen 5 is not some refresh; it's a new, ground-up, clean-sheet design with a ~30% larger core built on a new node. The AMD team had all the funds they ever needed and worked under no pressure from their competition, and yet they still failed, getting beaten in some cases by their old Zen 3 refresh, aka Zen 4. Zen 5 is a failed experiment and a disgrace.
@@ytctuser The goal seems to have been AVX-512 performance for AI workloads; the shareholders basically decided how it would be designed. Marketing didn't get the message xD
What? Zen+ had ~3% IPC improvement over Zen.
@@ytctuser I wouldn't call it a failure quite yet. At least not from AMD's perspective. Gaming is one of the smallest markets with servers and laptops being far more important.
Zen 5 seems to be good for heavy compute workloads, so data centers *might* be happy with it.
There is also the hope that Zen 6 will have a big uplift like Zen+ had. It is hard to beat a mature platform right off the bat; there is a good chance there are more than a few hardware and software oversights sapping performance (the Windows admin bug being an example).
I remember going from Zen to Zen+ was a big leap; going from Zen+ to Zen 2 was underwhelming.. I actually sold my 3600 after 3 days and went back to the 2700X. Zen 3 (5600X) was good again, and now I'm sitting on a 5800X3D and it seems like there's no urgent need to switch... So let's wait for the Zen 5+/X3D variant.
I would like to see benchmarks with Zen 5 on RPCS3 and how much the AVX512 improvements boost emulation performance, compared to last generation of cpus.
Not much; Zen 4 had AVX-512, just not the 512-bit ops at full speed. RPCS3 mostly loves AVX-512 for the non-512-bit ops it contains. Zen 5 has an erratum in that all SIMD ops have at least 2-cycle latency, which may make it slightly worse in some cases than Zen 4. Zen 4 vs Zen 5 mobile would be an interesting comparison: Zen 5 mobile remains the same as Zen 4 in not running the 512-bit ops at full speed, but I don't know if Zen 5 mobile has the 2-cycle SIMD erratum.
Not saying I don't want AVX-512 comparisons, just saying why expectations in RPCS3's case should be tempered.
@@tappy8741 Not quite. It's only SIMD integer addition ops that have had their latency increased to 2 cycles. There should actually be some pretty good performance gains otherwise.
@@DrewWalton So binary ops like XOR and the like are still 1-cycle latency? That's good if true. Alex Yee's AVX-512 analysis titled "Zen 5's AVX512 teardown + more" (which I would link if YouTube let me) has this quote: "Latency of all 1-cycle SIMD instructions (regardless of width) has regressed to 2 cycles." It's a very technical article and the guy is knowledgeable; I have no reason to doubt his findings. Do you have any sources to back up your claim?
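Whichever set of ops turns out to be affected, a toy model (my own sketch in Python, with an assumed issue throughput of 2 ops/cycle) shows why a 1 -> 2 cycle latency regression matters for dependency chains like emulator JIT output but barely touches throughput-bound code:

```python
# A chain of N dependent SIMD ops takes N * latency cycles, since each op
# waits on the previous result; independent ops are limited by issue
# throughput instead, so their latency is hidden.

def chain_cycles(n_ops, latency):
    return n_ops * latency

def independent_cycles(n_ops, throughput=2):
    return n_ops / throughput

n = 1000
for lat in (1, 2):
    print(f"latency {lat}: dependent chain {chain_cycles(n, lat)} cycles, "
          f"independent stream {independent_cycles(n):.0f} cycles")
```

The dependent chain doubles in cycle count while the independent stream doesn't move, which is roughly why RPCS3 expectations should be tempered even on otherwise stronger AVX-512 hardware.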
Techpowerup does RPCS3 benchmarking and the results are disappointing.
Actually, besides aiming for a wider base of 8GB GPUs, I think the Wukong devs also had to make the game work on the PS5.
TSMC quotes that N4P further improves N4's performance by around 6%.
...compared to N5, N4P will deliver an 11% performance boost, a 22% improvement in power efficiency, and a 6% improvement in transistor density. Importantly for customers, TSMC says that N4P features a simplified (and cheaper) manufacturing process, requiring fewer masks and less wafer turnaround time.
They used all that for the new accelerators that benefit servers.
To be fair, Zen5 average IPC gain is around 10% and power efficiency in the ballpark of 20%. But if N4P is cheaper than N5 then why is Zen5 much more expensive than the 65W TDP chips they're supposed to replace?
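As an aside, those node percentages compound rather than add; a quick check of my own on the figures quoted above:

```python
# From the quoted TSMC figures: N4P is +6% over N4 and +11% over N5,
# so the implied N4 -> N5 gap is 1.11 / 1.06 - 1, about 4.7%.
n4p_vs_n5, n4p_vs_n4 = 1.11, 1.06
print(f"implied N4 over N5: {(n4p_vs_n5 / n4p_vs_n4 - 1) * 100:.1f}%")  # ~4.7%
```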
Those performance claims are pure marketing garbage. The only performance-related metric the TSMC process affects, in theory, is latency. Otherwise the performance of an N4P transistor and a 192 nm transistor are the same.
Performance gains come from being able to pack more transistors together, both due to size and due to lower power draw per transistor, and thus fewer heat problems. And of course from different architectures and such, but those are unrelated to the fab node. In theory there are some latency barriers that need to be crossed for some architectures, but those are chip-designer-level considerations, not fab-node-level ones.
As far as I am aware, Zen 5 is a complete redesign, which is why I am not all that disappointed with it; it's a solid starting point for the next 6 or so years,
and it was most likely needed. Zen 4 was probably at the limits of what the original Zen base architecture could do.
The marketing sucked though, don't get me wrong.
KitGuruTech just tested a new Windows update that provides disproportionate gains for Zen 5 vs Zen 4.
SPOILER: 0:19 Tim summarising the entire Zen 5 launch
transcript for that time as follows: "Welcome back to Hardware Unboxed, it is time for our Q&A series. I don't actually recall the last time we did this, quite a while ago at least. Not here; we did one at Computex, but then before that, here, it's a long time ago. Not much has been happening. Must be at least 3 months since we got... I think so, yeah. So anyway, August. Crazy August Q&A, isn't it? That's wild. Yeah, the year has flown, I don't know."
From Zen 1 to Zen+ was only ~3% in IPC, which seems low.
But they had about 5% more frequency and 5% less latency, plus much better DDR4 OC frequency.
So in general, Zen 1 (Ryzen 1000 series) to Zen+ (Ryzen 2000 series) was about a 10% overall gain.
From Zen 4 to Zen 5 it's about 3% in IPC, while frequency, boost, latency, RAM OC and everything else stay the same; only efficiency improved.
I had a Ryzen 2600, 3600X, 5600X, and now a 7700X. I see no reason to pick the Ryzen 9000 series unless it's the same price as Ryzen 7000.
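The ~10% overall figure for Zen+ is consistent if you treat the gains as multiplicative, since performance scales roughly as IPC times frequency, with the latency and DDR4-OC improvements stacked on top. A tiny check using just the numbers from the comment above (the ~2% memory figure is my own rough placeholder):

```c
// Compounding per-axis gains: perf ~ IPC x frequency, plus memory effects.
#include <stdio.h>

int main(void) {
    double ipc = 1.03, freq = 1.05;   // Zen 1 -> Zen+ figures quoted above
    double core = ipc * freq;         // ~1.082x before memory improvements
    printf("IPC x frequency:         %.3fx\n", core);
    // A couple of percent from lower latency / faster DDR4 lands near ~1.10x.
    printf("with ~2%% memory uplift:  %.3fx\n", core * 1.02);
    return 0;
}
```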
For Intel to make Arrow Lake a success, it not only has to be an exceptional product; they should also double the warranty, or maybe even go to five years.
13th gen warranty got increased to 5 years
Not sure. With Zen 5 being so disappointing for many, I think if Arrowlake just slaps a solid 10% on top of 14th gen performance and lowers power draw, they'll sell better already.
I'm not sure Intel can afford to do anything other than knock it out of the park after two generations of defective CPUs and massively higher power draw compared to AMD.
@@quantum5661 2 "defective" generations can be fixed by locking your cores and undervolting. 🤷♂
Until they eventually break anyway @@ProVishGaming
Really appreciate your work on this over the last couple of weeks. Tough process, but super helpful to us.
Hardware Unboxed, I can't get enough of your videos
@@IOSALive Same here. You know you're getting good and unbiased information and facts about all the tech you want and need. 👍
To be honest, it's the channel I watch when waiting for new info about this topic.
They're beating a dead horse and making money off telling you what they already told you at launch.
Intel needs to *at least* double the generational support on its motherboards. At least. A minimum.
If the 9000X3D performs almost identically to the previous gen, I think people will opt to upgrade to the Ryzen 7000 series instead of jumping to 9000. And people already on Ryzen 7000 will keep it until Zen 6 comes out.
On the other hand, if Arrow Lake turns out to be decently better than X3D, it's going to be a clutch saving moment. Big if, though. It's an uphill battle for Intel to regain market share vs AMD.
Meanwhile the 7000 series is increasing in price while I’m trying to piece together a new system
All in all, the 9800X3D is supposed to be much better in productivity than the 7800X3D, which suffers a lot in productivity compared to the 7700X.
@@haukionkannel If that's true, that would be awesome. I ended up with Intel for the same reason: the 7800X3D is not as good with productivity apps.
I never understood why developers, who no doubt create high-res textures (they scale them down for 8 GB and smaller cards), don't include those high-resolution textures as an option. High-res textures are the single most impactful graphics quality setting.
9400x should've been the 6-Core Part for $249.99
9600x should've been the 8-Core part for $329.99
9700x should've been the 12-core part for $459.99
9900x should've been the 16-core part for $599.99
-------------------
The xx50 should be saved for a refresh.
The RTX 4070 with GDDR6 review is gonna be hilarious.
Might not matter if it's cheaper. Most folks still play at 1080p.
@@erhanozaydin853 Spoiler: it won't be cheaper and the naming will be obscured.
I don't like companies changing specifications on the same model number but I wouldn't assume anything. It may end up making zero difference if it previously had more bandwidth than it needed.
Testing is good and all, but the availability of more midrange GPUs should gradually lead to cheaper prices in general.
Will it run cooler though?
34:10 I’m surprised and a bit proud of PC enthusiasts for not falling for marketing this time. I expected the portion of people who either didn’t do any research, fell for the marketing, or didn’t care about value and just wanted the latest thing to be higher. But it looks like 90-95% of buyers were clued in. Maybe the CPU launch week crowd is unusually informed and it’ll even out a bit.
My personal opinion on this whole Zen 5 thing, from what I have seen in all the YouTube reviews, is that it's a mixture of things:
1.) AMD has been consistent in sticking to efficiency. They usually operate at much lower wattages than the competition, so some generational leaps may not bring as big a performance gain as people want.
2.) There was that weird delay at launch time; I'm guessing there are some microcode problems that will be worked out within 6 months, and we'll see performance gains.
3.) The Windows scheduler has always plagued AMD at launch; we've seen this a few times. I wouldn't be surprised if more Windows updates (aside from the fixes already on the way) land in the next 6 months and bring performance gains.
With that being said, there are some things I haven't seen explored that would make a cool piece of testing, to see whether there is extra performance to be gained from Ryzen 9000 (or whether the numbers can get close to AMD's original pre-launch claims). Some of these may have been done already and I just haven't found the videos yet.
A.) Fine-tuning PBO for max all-core clock speeds instead of just maxing out the power limit. What happens if the 9700X runs at 5.5 GHz all-core in Cinebench or game benchmarks? Either through undervolting plus overclocking, or straight overclocking, or testing both. Not throwing everything at it to see what it can do, but fine-tuning it to run at max boost whenever the application or game will take it.
B.) What will performance look like once AMD gets higher RAM speeds working? We all know Ryzen loves faster RAM. (For this, of course, we have to wait on AMD and BIOS updates.)
I mean, this launch has definitely been weird, but I'd like to see where AMD is at in 6 months. It would be a good revisit around Christmas time to see if bug fixes and Windows updates have at least helped claw back some performance and rectify the situation. Only time will tell. This seems to be the norm for everything software and hardware now: release an incomplete product and fix customer perception through software updates over the next year... It kind of sucks.
AMD engineering ~ good.
AMD marketing and PR ~ not so good.
Q: Why is Zen 5 so disappointing?
A: Depends on who you ask. For you guys, with a gaming-focused channel, it's really no better than Zen 4.
I am an old guy who plays with Linux. I do hardly any gaming ~ I have an Intel i7 6700 which is almost a decade old, and a GTX 750 Ti in it ~ which genuinely is a decade old. I just want a new computer, because this one is getting a bit old and skanky ~ like me. I don't compile a Linux kernel every day, I do it in bursts, and I guess it averages out to about once a week. At some point it gets to be like trying to use an HQ Holden from 1975 as your daily driver. It just gradually gets more difficult and expensive and inconvenient. Maybe the cross-over for cars is 30 years ~ the cross-over for computers is about ten.
I do some web-surfing, watch some YouTube, and compile a kernel every now and then, but I do play with VirtualBox, and I have one copy of Windows and about a dozen guest installs of various flavours of Linux. I just like to keep an eye on things.
A 9950X with 64GB of RAM, and a 2TB gen-4 M.2 drive is going to suit me down to the ground. I am cynical about AI and the inclusion of a Neural Processing Unit, so getting the final generation before that shit becomes standard equipment, that's another reason to jump in here. This is a really nice point to do my once in a decade complete platform upgrade.
All the stuff that makes zen 5 a major disappointment for you guys ~ for me it's pretty much ideal. The fact it's not selling like hot-cakes and the price is going to drop ~ that's like the cherry on top! Just makes it even better.
Why is it so disappointing? Because they plainly lied in their marketing. They promised a new Porsche and delivered a refurbished VW Beetle. They should have marketed it as a Zen 4 refresh with better productivity; we would all be saying it's better than expected.
Yeah... It's significantly more expensive and doesn't offer anything like the gaming performance they advertised. No mystery. If they do slash prices it's still a good product, just not what they advertised.
AMD engineering is also bad, you fanboy.
To add to the 8GB VRAM convo:
Even my 3070 Ti is a REALLY good card; it's very easy for me to get very high frame rates in RE Engine titles, for instance, even at 1440p. But as soon as I enable enough settings (typically the RT ones), I get massive frame-rate drops and/or stuttering. Until it hits that ~8 GB of VRAM usage it's always extremely smooth and playable, though; I really wish Nvidia had equipped it with 12 GB instead.
HUB got to 1mil subs milestone and they revived Q&A series, we missed those!
19:20 One game that aged really well is Star Wars Battlefront 2 with shaders. 8 years old and it looks amazing...
Wtf, I was just thinking that I hadn't seen a Hardware Unboxed Q&A video for ages, then came to the channel to look for one and saw one had been uploaded 1 minute ago. IT'S FATE!
That whole first comment is backwards. Zen 4 is pretty much maxed out on IPC and possible clock speeds, having already gained a substantial clock-speed increase over its base design. There is no more room for Zen 4 to grow IPC, therefore the design needed a rework to allow higher IPC from a new base point.
For God's sake, Zen 4 is like 55% to 75% faster than the original Zen core design, and for the most part the main design of the original Zen cores is still in Zen 4.
28:04 VRAM requirements usually go up with consoles and available screen resolution. If you are developing a game that targets 1080p, it does not make sense to include 4K textures, since a single texture would be 4x the size the "target resolution" can display; even if that texture filled 100% of your display, you would not be able to see every pixel of it.
For example: If you are developing a game right now, you won't use a 4K Texture on a small item that has the size of 100x200 pixels on your screen.
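To put rough numbers on that: even before block compression (BC7/ASTC, which cuts these sizes by 4-8x), the uncompressed footprint shows why shipping 4K textures for a 1080p target is mostly wasted bytes. A back-of-envelope sketch assuming RGBA8 and the usual ~4/3 mip-chain overhead:

```c
// Texture memory footprint, uncompressed RGBA8 with a full mip chain.
// The mip chain adds ~1/3 on top of the base level (1 + 1/4 + 1/16 + ... = 4/3).
#include <stdio.h>

static double tex_mb(int w, int h, int bpp) {
    return (double)w * h * bpp * 4.0 / 3.0 / (1024.0 * 1024.0);
}

int main(void) {
    printf("1K texture (1024^2): %6.1f MB\n", tex_mb(1024, 1024, 4));
    printf("2K texture (2048^2): %6.1f MB\n", tex_mb(2048, 2048, 4));
    printf("4K texture (4096^2): %6.1f MB\n", tex_mb(4096, 4096, 4));
    // For scale: an entire 1080p frame is only ~8 MB of pixels, so a single
    // uncompressed 4K texture holds ~10x more texels than the screen can show.
    printf("1080p framebuffer:   %6.1f MB\n", 1920.0 * 1080.0 * 4 / (1024 * 1024));
    return 0;
}
```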
At least AMD doesn't have a failure rate of 57% across two generations of CPUs like Intel's 13th and 14th gen.
Regarding VRAM size: I maintain that good art direction can make it happen on less than 8GB VRAM for the textures, depending on how much the rest of the tech stack is using up VRAM (Ray Tracing, Framegen, etc...).
If the effect you want is close-up looks at objects that reveal tremendous minute details, that's gonna be expensive on texture size, sure. I'd argue a compelling visual design doesn't necessarily need that.
Now, a super "hyper-real" vibe will probably want that, and artists should be allowed to go there. But it's a nice move to make a settings combination that looks reasonable within lower VRAM sizes. Since some people are gonna need to turn textures down, making it still look decent should be done if at all possible.
Once Nvidia and AMD (and intel??) give us more VRAM on cards, we can maybe say it's time to move on. But that's a ways out, and devs should make games that look good for the most users at launch -- give us good big textures AND good small textures.
Eventually, if artists want it, I do think that should happen. We should be able to move on to 12GB+ and not look back too much, eventually.
But until that makes sense, IMO if devs want to save time/resources and only make one tier of textures, doing what WuKong did and prioritizing small textures is acceptable and even the right move. Only giving "huge textures" or else "low-effort potato textures" if you dial down settings is kind of a jerk move by comparison, since it'd just about ruin the game for a lot of people.
Shame on you for not using Special K for HDR.
It allows such in-depth HDR tuning that you can easily make it better than most native HDR implementations.
Zen 5 reminds me of RDNA 3. AMD thought it would be good but things didn't really add up somewhere.
Can we start calling it Ryzen 14th gen since it's pretty much the same thing as previous gen?
I agree the normal textures should stay where they are in Black Myth: Wukong, but if you're creating a game that requires 16GB or 24GB cards to turn up the settings, it would be nice to have a texture pack that actually makes better use of those cards. I'd take an HD texture pack over some of the heavy settings available. And I'd be OK with allowing modding, because you know it would get a community-made texture pack.
As a consumer, I'd expect a new design to be faster than the prior ones it's replacing, always. No regressions in benchmarks.
They want more money!!!!
NO regressions? Good luck! Every major CPU had regressions in some benchmarks. The i386 was slower on some instructions than the 286; the Pentium was slower than the 486 on some benchmarks... It's been like this forever. You get an average speed increase, but some instructions/corner cases get slower. The only exceptions are clock-speed boosts on an unchanged architecture.
@GormHornbori Basically nothing modern is like this. It's been an extremely long time since those days
Zen 5 also had a smaller node jump: 5nm to 4nm is a refinement, whereas 7nm to 5nm was a bigger jump.
Guys, we don't even need new CPUs/GPUs... we have very powerful hardware these days. We need developers to take advantage of it and deliver games that have new elements like physics or something. A new CPU gen every 2 years would be enough!
I honestly would like to have both: better hardware and developers. But we live in a world without either
It's been 2 years since 7000 tho.
Can't really blame developers when the frameworks are horrendously complicated to use and corporate/market is pushing for fast development and short deadlines.
I know we are in the disillusionment phase of the AI hype cycle, but I'd like to see more "AI" hardware (formerly AI/ML, e.g. TPUs/NPUs) be leveraged for better NPCs, better procedural generation, and more dynamic gameplay. Nvidia has shown off some of this with their ACE demo, and we've seen Skyrim modded to use ChatGPT, but it would be nice to see more games where characters act and talk more realistically.
Open-world games in particular could benefit from this, but even FPS and other genres could, as well. It feels like NPC "AIs" haven't advanced much relative to graphics in the past two decades.
The amount of TOPs in lower end GPUs or CPUs isn't much compared to higher GPUs, but if it's going to be taking up die space, anyway, it might as well be leveraged for games and not just the handful of prosumer applications or Windows features that currently support integrated AI processing.
@@zivzulander To be fair, AI hasn't even started being properly used where it's actually useful. But yeah, the hype train took it in a completely different direction from where it's good. I also think it's too early for games to start using NPUs, since only a few people are going to have that hardware. It'd be pretty short-sighted to make a game that only runs on the newest CPUs, don't you think?
I tried the Black Myth benchmark tool on my GTX 980M and was surprised it runs at 60 fps on the low preset with FG, and 40 fps on medium. Without FG, 25-30 fps. All tested at FHD resolution.
The 7950X works; the 9950X has the potential to not work when you need it, because of core parking. That's a downgrade; I won't pay a single cent for that.
Exactly, I have no clue why they made it worse.
Nothing was wrong with Zen 4 non-X3D, but they "fixed" it and broke it. They clearly have not learned from the 7950X3D.
@@simptrix007 To make it better for servers, where they make all their money.
Think you're milking this a bit.
I don't think anyone was actually upset about Zen 5's lack of performance, only disappointed. It was AMD's doubling down on its bullshit claims, waffling about and blaming anyone and everyone but themselves, and gaslighting us all like we're f*cking stupid, *_that's_* what pissed people off. If they'd just let the issues work themselves out, it would have been a disappointing launch and over within weeks, but AMD couldn't help making everything 10x worse.
Everyone was already focused on Intel and the RPL instability drama, and upset that Intel wasn't saying enough publicly. But how much worse would it have been if Intel had said there are no problems with Intel, no problems with Raptor Lake, and *YOU* just aren't using it correctly? The internet would f**king explode if Intel did what AMD is doing right now.
Give us an example of a game that aged really well due to textures, please!
That anti-static modmat from Gamers Nexus better be signed by Stephen Burke.
On the last podcast, I was all smiles, I'm hoping we get spicy today also.
So glad I just upgraded to a 7800X3D from a 5600; the difference is just INSANE.
What is your GPU and which resolution are you playing at? Or are you talking about non game differences?
You must be a low-settings, high-refresh gamer. I'm on a 5600X, and something like Wukong stresses my RTX 4080 way before my CPU.
@@Pawnband Or... he could just play CPU-heavy games like Minecraft Java w/ mods.
It's not just gaming-GPU VRAM amounts holding back texture quality and game visuals. The largest factor is the hardware in video game consoles.
Those are what really hold back visual quality, the biggest offender being the Xbox Series S and, to a lesser extent, the Series X. Even with unified memory on consoles, it's not all the same: on the Xbox Series X a portion of the memory is clocked significantly slower than the rest, so that portion cannot be leveraged in the same way.
Intel needs to strap in for the most intense reliability benchmarking a consumer CPU launch has ever seen, by a landslide. If they show a 10% performance increase over the 12900K but all reliability concerns are completely eliminated, I'll be happy to keep investing in Intel for my machine. Too many bugs with Ansys and other software to stray from the default, well-tested architectures.
10% up from the 12900K would be a considerable drop from the 13900K/14900K, but it would mean their CPUs will probably at least be stable. They would seriously have to eat their pride to release CPUs that are _less_ powerful than their trash-fire generations in order to deliver on stability. But if they focus on energy efficiency and value, that could win back a lot of trust.
I have a 7950X3D / 4090 and a 5800X3D / 4070 and I use the 5800X3D the majority of the time. I love these two platforms.
I think even 10 and 12 GB of VRAM being made standard would be fine for a while. Still, all GPUs need to be more affordable.
Let me get this straight... guys who never coded a calculator app are getting serious about CPU performance.
The fact that I didn't even notice "worse textures" and I thought to myself "this game looks amazing" goes to show how much people like me would benefit from better performance rather than higher res textures.
Textures don't impact performance if you have enough VRAM, and you will notice texture quality more than just about anything else. If the textures were worse you'd notice, and if they were better you'd also notice.
@@Hardwareunboxed Not more than motion quality. Games are constantly moving pictures; how they look in motion matters more than how they look at a standstill.
@@Hardwareunboxed No, textures definitely impact load times, especially before the shaders are cached. Lower texture settings mean better load times, since the game doesn't have to process as much. I realize you're benchmarking past those initial load times, but it still happens.
@@paulcox2447 No, they are not constantly moving. You can stand still in games and look at the environment.
@@luzhang2982 Get an SSD.
Are we sure it isn't just rebranded 7000-series CPUs? Like they did with the GPUs back in the day (HD 7950 → R9 280).
I have a Zen 3 CPU. I skipped Zen 4 because I previously experienced a lot of bugs as an early adopter. AM5 has now matured, but Zen 5 isn't a real performance uplift. So now I wait for Zen 6, plus some extra time to avoid being an early adopter. I will not be a beta tester again.
I think it’s pretty straightforward why gaming is essentially unchanged on Zen 5: the I/O die is the same. Given the high sensitivity of AMD chips to memory latency in terms of gaming performance I wouldn’t be surprised if AMD CPU gaming performance is heavily bottlenecked by the I/O die. It would explain why the architecture changes that benefit more brutal cpu workloads just don’t do anything for gaming.
Agreed.
Zen 4 clearly has the same problem, judging by the uplift from the 3D V-Cache. So the bottleneck is the same in Zen 5, and likely even more pronounced with its better cores.
Hence why I expect the 9000X3D parts to surprise positively.
@@MacGuyver85 We'll see, although more likely the 9000x3d's will also have minimal gains compared to 7000x3d's
@@MacGuyver85 every cpu would see a massive jump if you slapped an extra 64MB of cache on top of it
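On the I/O-die latency theory above: memory latency is easy to measure directly with a pointer chase, where every load depends on the previous one, so prefetchers and out-of-order execution can't hide the trip through the I/O die to DRAM. A minimal sketch, assuming Linux/glibc (the 64 MB working set, hop count, and use of rand() are arbitrary choices for illustration); note this measures total load-to-use latency, not the I/O die's share in isolation:

```c
// Randomized pointer chase: each load's address comes from the previous load,
// so the measured time per hop approximates full DRAM access latency.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024 / sizeof(void *))  // 64 MB working set, well past L3

int main(void) {
    void **buf = malloc(N * sizeof(void *));
    size_t *order = malloc(N * sizeof(size_t));
    if (!buf || !order) return 1;

    for (size_t i = 0; i < N; i++) order[i] = i;
    srand(42);                                  // crude, but fine for a sketch
    for (size_t i = N - 1; i > 0; i--) {        // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < N; i++)              // link one cycle through all slots
        buf[order[i]] = &buf[order[(i + 1) % N]];

    const long hops = 20 * 1000 * 1000;
    void **p = &buf[order[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < hops; i++)
        p = (void **)*p;                        // serialized dependent loads
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / hops;
    printf("avg load-to-use latency: %.1f ns (end=%p)\n", ns, (void *)p);
    free(buf); free(order);
    return 0;
}
```

If the bottleneck story is right, this number should barely move between a 7700X and a 9700X on the same memory, even though core-bound benchmarks separate them.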
Flop is right. I upgrade every year, and I'm not planning on upgrading this time. Zen 5 X3D will have to show an improvement over my current 7950X3D.
Yeah, definitely not complacency because of Intel's struggles. When the instability and oxidation issues started to happen, Zen 5 was already ready for mainstream production.
Well, Intel's new round of stagnation started at least as far back as dropping MTL desktop in favor of the RPL refresh. Still not long enough to design a chip, though.
Starfield has a great in-game implementation of HDR with the Luma mod. Well worth the effort of the install.
It’s ridiculous that any Xbox titles are leaning on Windows Auto HDR instead of shipping native HDR.
Thank you for pointing out the textures! It looks flat, that's the thing I've seen with my own eyes. It bothers me, yet so many go "omg it's so beautiful", but my eyes always notice textures! The rocks look soft, the wood looks soft, etc.
I need more detail and better textures!
But I've seen games with 8 to 9 GB of VRAM usage and amazing textures, so that's something to think about.
Also, I think UE5 just isn't great with textures; that may be its downfall. Look at Snowdrop and other engines: games using 8 or 9 GB of VRAM for textures look outstanding, so don't just blame VRAM; I think it's also somehow inherent to UE5. To my eye, the issue is the engine. I don't notice the softness or flat look in most engines or games like I do in Wukong.
Think about older game textures: most used under 8 GB, and yes, they looked amazing. I'm sorry, but it's not purely based on that.
Play Spider-Man on a 1080 Ti and the textures still look incredible; even on a 4060 8GB they look incredible. So yeah, it's not all VRAM-related. Textures can still be made to look good; it's all about the effort from the devs.
But you guys are correct that with UE5 we need the VRAM. Alan Wake proved how good UE5 can be, but we aren't seeing games use it like that. Textures in 4K? Yeah, 14 to 15 GB of VRAM, beautiful. So UE5 needs the VRAM, but again, I've seen games use 8 GB of VRAM and look stunning on textures. IMO it's also down to the engine; it's not such a clean-cut comparison.
Black Myth: Wukong... About the lighting, I'm still of the unpopular opinion that the simplest version looks best. Well, perhaps not best in terms of fidelity, but it certainly looks best to me, considering that it's a game and I want to see as much of it as possible, and that doesn't include hiding things in deep shadows. In several of the pictures I have a hard time seeing the equipment, armor and weapon the character has, but in the most simplified version of the lighting everything is plainly visible. That's what I want to see when I'm playing a game. If I'm watching a movie the shadows can look nice, but then I'm not directing the action, and being surprised by things you can hardly see is part of the movie ambience.
I just think gaming is becoming a last priority for AMD. Hopefully I’m wrong but I feel like these huge companies really just don’t care about us gamers anymore.
They have an entire lineup of chips that are specifically focused on gaming, the x3d chips..
The X3D series is designed specifically for gamers; it's just that the CPU roadmap has become more specialized, with different CPUs for different needs. Honestly, that's not great, and I don't support it.
What makes you think they ever did care about gamers dude? They make chips for enterprise customers. Gaming is a side hustle.
@@noahallen5046 plus they also care a whole lot about AI. At the end of the day they want to become a trillion dollar company and gaming is becoming an afterthought😂
I was surprised to be able to run Wukong on my 8 GB 5700 at 1080p native with Cinematic textures while retaining 60 fps in almost all scenarios. If I enable framegen, it's a rock-solid 60.
I can even push it to high details and still remain within the 8 GB frame buffer by locking the FPS at 30.
I'm really impressed by what they were able to do with UE5, given my previous experience with the Matrix city demo on the same machine.
People don't seem to understand that Black Myth: Wukong is well optimized and scales across a wide variety of hardware while also keeping VRAM usage low; it's the engine that has the traversal-stutter problem. Also, people often confuse demanding games with unoptimized ones: sure, the game is hard to run, but it also looks great.
what???
Yeah, it can run at 4K DLSS Performance while still not breaking 8 GB cards, so I really liked that. I'm playing it at 1440p DLSS Quality capped at 60 fps on my 3070 Ti, no framegen, and it's running smooth.
Because it has a very limited number of assets.
@@knowpai9432 Because it's a UE5 game. UE5 games don't use much VRAM 🤦♂
You tried to put so many words in your sentence that you are contradicting yourself.
Sure, like has been given, haha. Good work boys, as always.
Kitguru did a comparison with Zen4 and Zen5, while using Windows 24H2 and found some good improvements to performance.
Maybe this is something HU could investigate with a bigger sample of games.
We have a 45 game benchmark coming next week.
@@Hardwareunboxed You didn't have to say that out loud for us to know you were already on it! :D
@@Hardwareunboxed Linux benchmarks when?
@@Zfentom better off going to Phoronix for that
@@Zfentom never, no one uses Linux (well, sorry 10% of ppl)
I still remember downloading those 8K textures for Oblivion, lol. It would wreck my 290X so bad, and that was pretty much the fastest GPU at the time.
AMD pulled an Nvidia. Leverage the gamer community to finance their AI chips.
Eh, not really. Their marketing, even if it had been true, was mostly missing the mark anyway. The X3D CPUs are so much better for gaming that if you've got a normal budget for a gaming PC, you'll go for those anyway. The non-X3D ones are more for productivity users already.
@@reappermen Sorry you don't see it. Gamers get new architectures they didn't ask for that barely increase performance (Zen 5, RTX), and we buy them up because we don't have alternatives. They use the revenue to finance R&D for more server/AI chips. They use us, a smaller segment of their income, to pay for R&D. That's pretty much the definition of leverage.
@@Serious-Man Except they are not really leveraging gamers. Anyone who wants an AMD CPU for gaming and has the budget for an x700 and up would just get the X3D variant, since those perform so much better in games. You can't leverage gamers with a CPU that isn't targeted at gamers.
Glad I went with a 4090/ 7800x3d/ 64 gig ddr5 all white build. Anticipating the 9000 series x3d and 5090 benchmarks.
Nothing has beaten Talos Principle 2 for texture resolution. Must-play on a QD-OLED at 1440p or 4K... INSANE!
Red Dead 2 is an example of a CPU nightmare: even at 4K with a 4090 and a 13900KF you get CPU bottlenecks in the heavy areas. I wonder why it isn't used more for CPU benchmarking on major channels, apart from occasional game testing.
Certainly going to be interesting to see how the 9000 X3D parts stack up. I'll hedge a bet that the gains over 7000 X3D will be as minimal as the non-X3D ones have been.
Yesterday's video ended with "we don't want to cover Zen5 anymore"
Steve and Tim being in the same room? We are so back.
Zen 5 is more of a redesign than previous iterations, so the hope is that it will be a base to build upon in the future; otherwise it is quite the oof. Basically more of a 7000 replacement than an upgrade, at least in gaming.
Sadly, for people who don't yet have an AMD PC it still makes more sense to buy the 7000 series. Currently I'm curious whether the rumoured 7600X3D will be a thing.
@@drago939393 Don't buy a 7600X3D. You want the benefit of the full 8-cores.
@@drago939393 I mean, for the price, and especially for gaming only, it makes more than enough sense, yep.
There will be an update for the Windows 24H2 version. The performance increase is 3% to 30% depending on the game; even the 7800X3D and other 7000-series parts gain fps, and there you can see that 9000 is faster. It was tested by KitGuru; in their video one game even showed a 70 fps increase on the 7800X3D and 9000 series.
Starfield has a mod that introduces native renderer-level HDR with a separate brightness value for the UI; it works flawlessly and better than Windows HDR, RTX HDR and Special K. It's been out for months.
I was seriously expecting Starfield and the Fallout 4 "next gen" update to bring native HDR and ray tracing at least, as well as some new tech to reduce the frequency and severity of loading screens. I mean, these are all pretty space-y features, perfect for a new space game. HDR should really be native in all Xbox titles already.
Really disappointed in Bethesda. And fans will swear up and down this is a 'new' game engine just because they gave it a different version number.
@@levygaming3133 Outside of some situational global illumination, ray tracing would be a complete waste of resources in Starfield, given the sheer prevalence of static interior cells.
Gamers really need to stop insisting on slapping ray tracing everywhere regardless of how little benefit it brings, and then pulling a 'surprised Pikachu' when even the 4090 struggles to maintain solid, consistent framerates.
Can't wait until there's something else to talk about. All my favorite channels are telling me the same thing over and over again... OK, I'm not going to buy one... so what's next? Which DDR maker is crushing it, who's making the best cases, what's shaking in the motherboard world? Dozens of things we could be moving on to at this point.
They are a masochistic company! I thought it was Herkelman, but he's gone now! It must just be in their RDNA!
Herkelman was working on Radeon, not Ryzen
@@fatidicusaeternus6498 Then why was he describing the clock speed of a CPU on that reversed clear board that time? The most famous example of him showing higher clocks than they actually got.
@@WayStedYou what time?
@@fatidicusaeternus6498 I am well aware that he was on the GPU side. I am also aware that he sucked! He caught over-promise syndrome from Raja and never got rid of it.
HDR on OLED is the real-life ray tracing 😁
If the game weighs 120 gigs with its current textures, then with high-resolution ones it would weigh 500 gigs.
Using your optimized settings + weak motion blur + the HDR & over-sharpening fix I found on other YouTube channels + an OLED panel, I find the game flawless in terms of graphics.
But I'm sure a high-res texture pack + HDR support will be added later, just like path tracing was for Cyberpunk.
The only bad thing about this game is my skill 🙂
Great time for developers to focus on story and gameplay and not on graphics.
Both of them together feels like a crossover
"Zen +5%" has been an unmitigated marketing disaster at a time when AMD had clear air and not a single reviewer could in good faith recommend Intel.
AMD would honestly have been better off not releasing it at all.
AMD really needs to go through its marketing department the exact same way Intel is culling jobs.
RTX HDR is just as good as native in most cases
After seeing how shady Intel was during the situation, I don’t ever want to give them my money
You could say the same about most capitalist companies
@@cesare4926 Yes, capitalist companies lie, but at least they deliver when there is competition.
Cry me a river 😂😂😂😂😂
Something most people don't realize is that Intel is still on 10nm-class silicon with Raptor Lake, while AMD was on a 5nm node (now 4nm). Intel is making a massive node jump from 10nm to 3nm by using TSMC. At the very least, Arrow Lake should by design gain massive efficiency uplifts. I'm very much looking forward to how Arrow Lake performs in reviews.
You can't say "Zen 5", which encompasses _all_ 9000-series CPUs, when you haven't even reviewed the X3D *_gaming_* CPUs.
Plus, Zen 5 is better than Zen 4 at literally everything _except_ gaming, where it delivers equal performance with less power consumed, which means it's still better.
I mean, even the lower power in gaming is hardly true, since the non-X Zen 4 parts perform very similarly and have similar power consumption.
@@band0lero Those are on a very mature process node though, so it's not comparable. Isn't AMD using roughly the same N4P that Nvidia uses for its GPUs now? CPU design is quite different from GPU design, and the maturing process will be different.
All of this is moot until we see the X3D variants. At that point we'll know exactly how "bad" this CPU launch is... for gaming.