@@ro55mo22 Still, Zen 1 delivered better performance than Bulldozer while using far less power. Intel Arrow-to-the-Knee Lake delivers at best the same, but often worse, performance with a bit less power, though still more than AMD.
The first-generation Ryzen cost pennies on the dollar, so it didn't matter that it wasn't the best. Arrow Lake costs way too much for what it offers, so I wouldn't pay a premium for something that at best matches the performance of other products (same with Ryzen 9000, btw). Dumb example: if you need a CPU for gaming and productivity in the 300-400 bucks range, you won't buy a 245K or a 9700X, you'll buy a Ryzen 7900, easily. It's still good for gaming and crushes the other two chips in productivity.
1st gen Ryzen was almost trading blows with Intel in gaming, smashed it in multi-core performance and cost less. It was actually a really good launch, people just forgot about it. Arrow lake costs more and loses in basically every benchmark, not to mention against 1-2 gen old Intel parts as well.
And Intel can't really compete on the low end either, as Moore's Law Is Dead has reported that AMD will be releasing new AM4 chips through the end of 2026. The data centers are buying Zen 3 based Milan chips like they're going out of style, so AMD is increasing, not decreasing, Zen 3 production, and every chip that doesn't hit the speed or power metrics required for Milan can just be sold as a desktop part. The sub-$250 CPU market will be AM4 until then, especially with the 5700X3D and 5600X being great for 1080p gaming.
1st gen Ryzen also completely dominated in productivity while costing less, and it did NOT regress in performance compared to the previous gen. Arrow Lake is closer to Bulldozer than to Zen.
@@gamingmarcus Very wrong. 1st gen Ryzen was not "almost trading blows" in gaming, it was losing consistently. It wasn't that good of a launch for gamers, except for those who came from an i3 and bought an R5 for the same money. It took two years, until Zen 2 arrived, for Ryzen to trade blows with Intel in gaming.
Based on the subtle comments and that laugh, the new 9800X3D is gonna blow the new Intels out of the water. I think that's the product they were testing when they mentioned it early in the video lol. They can't say much but there are some hints. The new X3Ds are gonna be fuckin sick.
I'm sure the 9800X3D will give us a decent uplift over the 7800X3D. But even if not, blowing Intel out of the water should be the easiest bar to pass, given that the 7800X3D already crushes anything Intel has to offer in gaming.
I read that another reviewer dropped similar hints last week about the 9800X3D being ridiculously fast, so, yeah, I'm sure that all of these major outlets have had them in shop for several days now and have already done a lot of benchmarking.
since i 50/50 games and productivity with heavy rendering needs like blender and premiere, i'm mostly interested in how the 9950x3D will do, and how it compares to 285k after some updates and fixes. x800x3D have historically been a bit too poop in multicore and productivity workloads for what i want.
It is incredible how in-depth testing has become. Back in 1990 when I started getting into PC hardware, the only information was that a new CPU had hit the market, and you just looked at how high the MHz was to judge its speed :D
I mean, they did manage to use only half the power of their prior generation. That's not a fail but a pretty good improvement. It's still lacking in comparison to AMD, but in terms of biggest generational improvement, I'd say Intel actually wins. Zen 5 isn't really more efficient than Zen 4, the performance improvement is small overall, and the 7800X3D is, for the time being, still the CPU to get. Halving the power draw is a way bigger improvement. That AMD is still ahead just shows how far ahead they were in general.
All you people are so poorly informed. Do you also have other sources that you watch, or only tech tubers that dislike Intel? Efficiency is great with Arrow Lake! In gaming it uses way less power. And one simple BIOS tweak, turning DLVR off and setting the voltage to 1.2 V instead of 1.5 V, will make the 285K consume 170-180 watts with a 44K Cinebench R23 score! Der8auer has a video on that. All you people are sheep if you ask me. Never trust YT channels like these, they are all sponsored! Windows 11 is garbage. This channel showed that turning off core isolation had a good impact on the 9000 series in gaming, but they fail to tell you that doing the same for the new Intel gives you even more FPS compared to AMD! Using fast DDR5 increases it even more. CUDIMM, which is coming out this month, gives over 10% extra performance on top of that. So it will pass the 7800X3D in average gaming and be close to the 9800X3D. But they want you to believe that Intel is bad. The new Arrow Lake CPU is extremely advanced. It's the first CPU where you can set the voltage of each individual P-core and each E-core cluster of 4! But hey, why be positive if you can manipulate the world? UserBenchmark is one of the worst sites out there, always bashing AMD. And the tech tubers thought "we can do that too."
14nm was originally only supposed to be in 5th and 6th gen, 10nm should have been ready by 7th gen, instead all they had after a year of delay was a broken dual core laptop CPU on 10nm.
@@juanme555
6th gen: It was great
7th gen: It's good
8th gen: It's ok
9th gen: Are they going to use it forever?
10th gen: They're gonna use it forever
11th gen: Jesus Christ
It was great but then it wasn't
@@Moin0123 you say that and yet the 9900K is one of the most legendary gaming cpus of all time, it absolutely crushed the 3700X despite being a year older.
Disagree about the 1080 Ti. Moving from high end to mid range to entry level is exactly what I do and what you should do. Amortized over time, your parts will last the longest and you'll save the most money, and you should learn to adapt anyway. I had my 1070 for 7 years before Alan Wake 2 made me move to the 7900 XT. If not for that game I'd still be on it. It's like the difference between leasing your car versus paying off your car and running it into the ground. Leasing is a way worse use of your money.
I still can't find a good reason to switch away from my 6700xt 5600G - primarily because I play on 1080p 144hz. The most sensible upgrade for me is a better display and a larger computer table lol, should get on that.
Same here. I'm on a 1060 6GB and I wish I had the 1070-1080 TI instead, since I've been happily playing all I want with the 1060 but have noticed some games where it's holding me back. Indeed, it's only games like Space Marine 2 that make me want to upgrade... Then again the market is terrible and every GPU on offer feels like a compromise or Faustian bargain lol.
I understand your feeling, but it was indeed still a bottleneck with the GTX 285 back in the day. I still use an X58 system with a Xeon X5670, which is far more powerful than the i7 920, and it STILL bottlenecked when I tried to benchmark the 2009 AAA title Resident Evil 5 at 1440x900 using a GTX 280 1GB 512-bit 😂
You could still use an i7-970 today if you really wanted to, since it's got 6 cores and 12 threads. It could only really do 1080p, but it's still usable, which is pretty unreal for a 15-year-old CPU, and it's still good for a NAS build. No way Arrow Lake is still going to be usable for what computers demand 15 years from now. Intel has fallen.
A good part of the blame goes to the Windows scheduler. It seems to prefer hopping threads from core to core for whatever reason, invalidating L1/L2 cache and predictor optimizations all the time. Heterogeneous architectures are hit the most, and AMD could tell some stories about it...
There are some performance reasons to move threads from core to core but there are a lot of subtleties to consider. eg. It can take advantage of higher boost speeds by switching from a thermally throttled core to a cool core, but it should try to time this to match with an existing context switch or pipeline bubble to avoid additional latency penalty. On a 20 core desktop there shouldn't be many context switches though. On multi socket Servers with NUMA sometimes not all cores have direct access to peripheral IO so threads need to take turns on the cores that do and more frequent switching helps this situation despite huge latency from cross socket cache transfers (More of a last decade problem). I bring this up because some multi-tile Xeons have an internal NUMA structure.
@@mytech6779 Basically, CPU performance gains have come from improving IPC, since pushing frequency is getting harder and harder. Better IPC means stalling the pipeline less, and that can be helped by tighter affinity to the cores. Windows is stuck in the previous decade in this regard. NUMA is not great on Windows either.
@@inflex4456 I never said windows was good, they probably reuse the same scheduler in desktop, laptop, and server. The point is that a layer of NUMA has developed within many CPUs to deal with multicore access to L3 which spans multiple tiles/chiplets. NUMA of any level is overhead for all operating systems and really needs custom scheduler configuration for the specific workload, which involves testing and tuning by experienced engineers which is mostly only economical on large deployments.
Yep, this really is the story here. To be fair, scheduling is a challenging task. But Windows hasn't always been great on that front anyway and has definitely lagged in terms of improvements. They really have relied heavily on "homogeneous" (i.e. all cores are equal and interchangeable) systems, which is no longer the case: first E-cores being wildly different in terms of performance, and now with a tiled architecture, moving threads between cores can have real performance implications (core-to-core communication, latency, cache topology, etc.). This sounds a lot like Zen 1. Gaming tasks seem pretty sensitive to these sorts of things: they don't just want raw parallelized power, they need heavily synchronized activities to finish predictably on time, all to fit under the ms budget for a given frame (e.g. 16 ms for 60 fps). Even back when hyperthreading was introduced by Intel, it wasn't an instant win for games (and often caused performance regressions), because while HT has a theoretical best average case of around a 40% improvement, that really is for unrelated threads in separate applications. In a game where you have tightly integrated threads, often with overlapping processing requirements, the potential HT gains can be much smaller, and since a "virtual HT core" is not the same as a true discrete core in terms of performance, it could actually cause performance losses. It took both Windows and game developers a while to get to a point where they actually benefit from HT (and even today, it's still not true of every game). So this all sounds like an even stronger version of that. If Windows can't develop a one-size-fits-all scheduling strategy, then we might need to expose this info to the games themselves and let them fine-tune their own scheduling preferences. As if game developers don't have enough work to do already ;)
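For anyone curious what "exposing this info to the games" could look like, here's a minimal Win32 sketch (my own illustration, nothing from the video): it walks the physical cores and reads each one's EfficiencyClass, which is how an engine could tell P-cores from E-cores and place its latency-critical threads itself.

```cpp
// Hypothetical example: list physical cores and their EfficiencyClass.
// Assumes a recent Windows SDK; EfficiencyClass is non-zero only for the
// faster core class on hybrid CPUs and 0 everywhere on homogeneous ones.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    // First call only reports the required buffer size.
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    auto* first = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data());
    if (!GetLogicalProcessorInformationEx(RelationProcessorCore, first, &len)) return 1;

    // Records are variable length, so advance by rec->Size each iteration.
    for (char* p = buf.data(); p < buf.data() + len;) {
        auto* rec = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(p);
        std::printf("core class %u, logical CPU mask 0x%llx\n",
                    static_cast<unsigned>(rec->Processor.EfficiencyClass),
                    static_cast<unsigned long long>(rec->Processor.GroupMask[0].Mask));
        p += rec->Size;
    }
}
```

An engine could combine that with thread affinities or QoS hints, but as the thread above says, getting it right per-game is exactly the extra work most studios don't have time for.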
@@aggies11 Not sure where "40%" came from. There is no generalized theoretical ideal for hyperthreading. The potential improvement is completely dependent on the specific set of tasks, and is generally most influenced by the rate of pipeline stalls due to memory access patterns. HT provides zero theoretical benefit for some tasks, while others will see even more throughput gains with 4-way HT (e.g. Xeon Phi).
Just watch that interview Wendell had recently with that Intel executive (whose name escapes me just now) and you know all you need to know. The level of arrogance is mind blowing! If that's how all of Intel operates, it's no wonder that ARL is such a dud.
That has nothing to do with how Arrow Lake performs. It's entirely possible that executive didn't even work at Intel by the time Arrow Lake was designed. The reason Arrow Lake sucks is because of memory latency because of its chiplet nature. Specifically, the IMC is on a separate tile from the cores. Period. Stop reaching.
@@remsterx the problem is, the average consumer (and lets be real, computer makers) ain’t going to be able to do that. Also, you have to buy brand new, expensive as fuck ram, literally 1 generation after ddr5. I can save 50% and just get a 7800 x3d
@@remsterx I'm not in the mood to overclock my cpu straight out of the box and risk having the silicon degrade and fail prematurely before I'm ready to upgrade. I only do it 1-2 years after I've bought it to stretch more life out of it before I move to a new platform.
The time stamps don’t seem to line up with the actual questions they’re assigned to. The “Where Are The Ryzen/Ultra 3s?” question seems to be the culprit.
Cannot compare Zen1 to ARL since Zen1 was much faster than the previous AMD CPU arch and much better priced than Intel's CPUs. ARL has none of those 2 advantages. Big fail in almost all CPU market points.
Your mind is so fried by AMD astroturf marketing. Zen1 was much SLOWER than its contemporary Intel CPUs and priced the same until later when they were forced to fire sell it at a much lower price. That's the actual story and timeline. Arrow Lake is the same MSRP pricing scheme as every generation for like 5-6 years. Let's "fine wine" this one rough Intel gaming performance at launch together, like we have for every AMD generation since 2012, shall we?
@@CyberneticArgumentCreator AMD's Zen 1 was bad only for gaming while destroying Intel in productivity tasks. That forced Intel to launch 6-core CPUs on desktop very soon afterwards in order to keep having expensive CPUs in the market. Most people do not need the fastest CPU for gaming since they do not have the fastest GPU to pair with it. So when a company has the best value for money, productivity, and efficiency, it gains market share, which is what Zen 1 did to Intel's CPU lineup back then. In which of those factors are ARL CPUs better than Zen 4 & 5? Face reality and don't try to distort it. Intel is in BIG trouble, ready to get bought.
@@CyberneticArgumentCreator lol the copium is real. Arrow Lake is Intel's Piledriver moment: improving efficiency over Bulldozer (RPL) but too little, too late. Still half the efficiency of Zen 4/5 and a third the efficiency of the X3D models. The 7800X3D beats the 14900K by 10% and the 285K is 6% slower than the 14900K. It's fucking embarrassing.
One thing I wonder about E-cores: for "gaming-focused" applications, wouldn't you still want one or two, assuming you have an amazing OS scheduler, so that OS background stuff can run on them? You can kind of do this already by forcing efficiency mode on processes, and it seems to have some measurable effect on how fast the cores that usually do OS stuff (0/1) work. That way you could offload it without sacrificing too much power.
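For reference, "forcing efficiency mode" on a process is roughly the following; a minimal sketch assuming the Windows 10/11 power-throttling (EcoQoS) API, and it's only a hint to the scheduler, not a hard pin to E-cores.

```cpp
// Hypothetical example: mark the current process as efficiency-class work.
// With this set, the scheduler prefers E-cores and lower clocks for it
// (the "green leaf" processes in Task Manager), but may still override.
#include <windows.h>

bool EnableEcoMode() {
    PROCESS_POWER_THROTTLING_STATE state = {};
    state.Version     = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED; // we want to control this knob
    state.StateMask   = PROCESS_POWER_THROTTLING_EXECUTION_SPEED; // ...and turn it on
    return SetProcessInformation(GetCurrentProcess(), ProcessPowerThrottling,
                                 &state, sizeof(state)) != 0;
}
```

Task Manager's "Efficiency mode" toggle does essentially this to other processes, which is why flipping it shows a measurable effect without touching affinities.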
Yes. That is one of the many, many reasons MacBooks get amazing battery life. Problem is that the Windows Scheduler is really bad so it doesn't work out in practice as well as it should
@@_shreyash_anand I've had the opportunity to experiment with the 24H2 on AMD's mobile "Ryzen AI" (xd) processors that have e-cores, and yeah it feels like a bit of a mess. OS processes/services seem to randomly wake multiple P-cores really frequently even on battery with best battery life overlay, and each time they wake up it's a noticeable hit to battery life.
@@_shreyash_anandThe scheduler works on straightfoward Big-Little CPUs like my 12700K. When Alder Lake was launched, the i9 with so many threads and cores was experiencing perfomance problems vs my i7, years latter and it looks like that hasn't changed. Effectively going above a 12700K for gaming is pointless and productivity is still more than enough.
@@saricubra2867 dude, you need to find a new identity besides the 12700K. Come on, we get it, you love the CPU. It's time for you to find a nice girl, or if you already have one, focus on her more as your primary infatuation. Love with a CPU never ends well.
5:14 because scheduling is way, way harder than it used to be. Windows has never really had to deal with cores that are so dissimilar in performance (beyond NUMA), and I suspect a lot of these regressions came in when they started having to mess around with it to accommodate Alder Lake.
Actually, the Vista thread scheduler was the main reason Vista was such a debacle. People said it was the drivers, but ridiculously enough, the thread scheduler was released in an unfinished state. There were edge cases where it'd simply halt. Particularly, cases involving drivers... This has been a long ongoing issue, but YT wasn't used for hardware news back then.
After the release of 9000 the price of 7800x3d rose by 25-30%, and after the release of intel it already rose another 8-10% and it continues to grow! It would be better if both of these generations never came out and we would have had a 7800x3D for 350 dollars!
Yeah exactly I was going to buy it that week as I got the motherboard the week prior. But now the 9800x3d is going to be 480 so I’ll just get that as the 7800x3d is the same price
Not saying no way AMD cashing in on retaining the gaming crown, but I wonder how much of these price increases are just supply/demand dynamics. I mean every gamer who was waiting for ARL and Zen5 is now rushing out to buy a 7800X3D instead (except those waiting for the 9800X3D). Would not surprise me if that leads to a supply shortage and retailers raise their prices in reaction to that steep increase in demand. This at least makes more sense to me, given that AMD already had the gaming crown for 1.5 years and actually lowered the price during that time.
@@Hugh_I Now the 7800X3D is just 6% better than a 13700K, and the latter has a massive 61% lead in multithreading. I can't believe that someone would overspend on AMD's i9 just to be destroyed in long-term usage 💀
@@saricubra2867 6%? lol. why lie? at 42:50 we can see that 7800x3d is 13% faster than 14900k! 💀 in my area the 7800x3d AFTER the price hike is only 10% more expensive than the 13700k, and it is on a good am5, not a crap LGA1700. so in the long run you can upgrade on am5, but not on 1700. or get a 7500f or 7600 for now for gaming and then drop in 7/9800x3d in a couple of years. and the only option with 13700 is to sell the entire computer. I can't believe that someone would overspend on 1700 just for not being able to upgrade or sell it at good price.
@@saricubra2867 People would be willing to spend that, because they're gamers and the MT performance is not their focus and good enough for them. Also the 13700K is in danger of cooking itself. And please stop making up numbers already. The 7800X3D was over 15% ahead of the 14700K in gaming in the latest HUB benchmarks (ARL review). I strongly suspect the 13700K wouldn't be faster than that. It is better in MT, true. But despite your nonsense take - It is not AMD's i9. That would be an R9. Compare the R9 7950X3D if you want AMD's "i9" that's best for mixed usage.
You may be surprised, but depending on what you do on your PC there could be little to no difference between AMD and intel in terms of power consumption. 13/14th gen idle at 5W, while all the new AMD chips including the 7800x3d avg around 30W idle. Moral of the story, if AMD PC is idle then turn it off, otherwise might be using more power than intel in the long run. Also intel is not that inefficient under load in stock configuration, its only under mild overclock of 13900k/14900k where you get to see 250-300W in gaming and up to 450W in synthetic all core workloads
@@juice7661 The total system power should be measured. A CPU cannot be used without a motherboard and RAM. There are also several idle power levels (sometimes called sleep states) that depend on individual configuration settings. Deeper lower power idle has more lag when a task arrives. CPU power in isolation only matters for cooling selection and ability of a motherboard to supply socket-power.
@hardwareunboxed By the look of it, Arrow Lake is built like two 4P/8E CPUs next to each other with two ring buses, a bit like how Zen 1 was two 4-core CPUs, and we know how that one was for gaming... That can explain why a 6-core CPU would be much worse too (way more jumping around between the 2 clusters). A core-to-core latency test would show that, and isolating one 4P/8E cluster may show interesting results compared to Raptor Lake with the same setup. It may also explain why 1 P-core + some E-cores show an improvement like you said.
Intel was never the gold standard, they simply paid oems enough money to not use amd products that amd almost went bankrupt and wasn't able to compete, so intel was the only standard. Very different than say Nvidia, who actually just dominated the market through sheer marketing
They weren't the gold standard at all. They were just the only option. They had no real competition, and then proceeded to completely shit the bed when faced with it.
@@james2042 Oh boy, do I have news for you. Nvidia did that too, but to even worse extents than Intel ever did. They did obviously have the faster cards at the time, yeah, just like Intel had faster CPUs. But Nvidia was the shadiest of the shady and still is.
The way AMD did the "gluing" not only helped with yield, it also allowed AMD to use the same chiplets to build EPYC server CPUs. The way Intel did the "gluing" doesn't allow for any such benefit. Every single chip design and manufacturing run costs an arm and a leg, so there's no way Intel's approach can be cheap.
In theory Intel's way should be closer to a monolith in speed than AMD's... let's see if Intel can benefit from it in the future. But Intel's way is definitely the more expensive one to make! There are definitely things Intel can tinker with in the Arrow Lake architecture to reduce bottlenecks, but whether it is enough is hard to say. I am sure Intel can squeeze more extra speed out of Arrow Lake than AMD can out of the 9000 series. But is it enough when Arrow Lake is so much slower...
@@haukionkannel This is all with AMD still being the dominant competitor, meaning the prices they use now are marked up from what they would be if they had proper competition and still made a profit. If Intel can't compete on price in this state, they are cooked.
@ The tech Intel use in Arrow lake is much more complex than what AMD use! And more complex = more expensive to produce! So yeah… this will not be easy time to Intel until they release the second gen for the arrow lake architecture. Nobody knows how much Intel can improve. AMD did get rather nice improvement from zen 1 to zen 2. Intel needs that kind of jump to get back in competition. But if AMD also can improve from 9000 series… Intel definitely is not out of the woods!
@ Complex ≠ better. Intel is playing catch-up to AMD; years of stagnation have finally caught up to them, and the main reason they are not bankrupt is government "loans" and payouts for empty promises and countless delays. They got too comfortable too quickly. The fact that Intel went from TEN nm to THREE nm (~7 node shrinks) and is still losing in efficiency means they are FAR behind the competition (the 5nm Ryzen 7000 series and even the 7nm Ryzen 5000 series are beating them). I want Intel to succeed, as the usual consumer would, but they dug themselves a hole too deep for fanboyism and shill hats to save them from alone.
From running Cinebench 2024 under Intel's Software Development Emulator, Cinebench 2024 heavily leverages the AVX extension. As with libx264 video encoding, scalar integer instructions still play a major role. Both libx264 and Cinebench contrast with Y-Cruncher, which is dominated by AVX-512 instructions. AVX-512 is used in Cinebench 2024, but in such low amounts that it's irrelevant. Although SSE and AVX provide 128-bit and 256-bit vector support respectively, Cinebench 2024 makes little use of vector compute. Most AVX or SSE instructions operate on scalar values. The most common FP/vector math instructions are VMULSS (scalar FP32 multiply) and VADDSS (scalar FP32 add). About 6.8% of instructions do math on 128-bit vectors. 256-bit vectors are nearly absent, but AVX isn't just about vector length: it provides non-destructive three-operand instruction formats, and Cinebench leverages that. Raptor Lake's E-cores have 128-bit vector hardware.
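To make the scalar-vs-packed distinction concrete, here's a small hand-written intrinsics sketch (my own illustration, not Cinebench code): the first function compiles to VMULSS/VADDSS-style scalar ops where only lane 0 does work, while the second does genuine 256-bit packed work.

```cpp
// Hypothetical example; compile with AVX enabled (e.g. -mavx) to get VEX encodings.
#include <immintrin.h>

// Scalar math under a VEX encoding: one float per instruction, but the
// three-operand form means no source register gets clobbered.
float scalar_mul_add(float a, float b, float c) {
    __m128 va = _mm_set_ss(a), vb = _mm_set_ss(b), vc = _mm_set_ss(c);
    return _mm_cvtss_f32(_mm_add_ss(_mm_mul_ss(va, vb), vc)); // VMULSS + VADDSS
}

// Genuine packed work: one VADDPS handles 8 floats at a time.
// x, y and out must each point to 8 valid floats.
void packed_add8(const float* x, const float* y, float* out) {
    __m256 vx = _mm256_loadu_ps(x);
    __m256 vy = _mm256_loadu_ps(y);
    _mm256_storeu_ps(out, _mm256_add_ps(vx, vy));
}
```

A renderer full of the first kind of code still shows up as "AVX" in an instruction-mix profile even though it's barely vectorized, which is the point being made above.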
In terms of future-proofing, your advice is correct. Buy what is best value today within your budget, and 80% of the time it will have aged better too.
In terms of dGPUs, there are lots of exceptions, mostly from Nvidia providing less hardware for your money but better overall performance due to their better drivers, while AMD can't provide software that's as good and leans on more hardware to make up the difference. Over time, AMD is able to close the gap on the software side, which makes their products age better. It's basically been happening since GCN vs Kepler. But if you ask most people, they'd rather enjoy their new dGPU while it's still new, because even if it ages better, its value will also have dropped by the time that matters. With that mentality, most people would just buy a larger dGPU from 1-2 generations ago. Example: why buy the RTX 5060 if you can get an RTX 4070 Super, RX 6800, or RTX 2080 Ti instead for cheaper?
Although there are some exceptions on the CPU front, and it all depends on your level of research. Should you get the i5-2500K value pick, or spend the extra for the i7-2600K? The i7 was actually the correct choice, unless you were able to snag a 3770K on holiday discount and sell your old i5 for a decent price. Then came the choice between the i7-6700K and the R7 1800X. The Intel was slightly cheaper, more efficient, and much faster for games, but if you went for the R7 1700 the AMD looked competitive; factor in the platform cost and, boom, the AMD was actually the smarter choice. Similar case with Zen+, where it didn't make sense.
Probably worse were Intel's 7600K/7700K products: the smarter buy was the older Zen 1 product, as it gave you the benefits of the platform while championing the value point. The 8700K was a mixed bag. Intel took a half measure and got crucified for it, as the R7 3700X launched soon after, and if your budget permitted, the higher R7 3800X even slightly beat Intel's next flagship i9-9900K overall. It was game over after that with Zen 3 and the R5 5600X and R7 5800X. We saw similar insights with other product launches: Intel 10th and 11th gen seemed problematic, Zen 3+ looked like a smart buy, Zen 4 was good but expensive, Intel 12th gen looked good but lacked the platform aspect, Intel 13th and 14th gen looked problematic, Zen 5 looks good and bad, and Intel 15th gen, or Core Ultra, just looks bad.
Based on what we know so far, the 9800X3D looks like something worth paying for in terms of future-proofing. If you want cheaper, stick with the 7800X3D instead. It is a similar dilemma to the i5-2500K vs i7-2600K: the smart play is to get the i7-equivalent, but it's even smarter to go the i5-equivalent route for cheap, get on the platform, and upgrade cheaply in the future. We can do that because Zen 6, coming in early 2026, will still be on the AM5 platform.
I have a question for you guys about your video: you did not cover CUDIMM memory coming into play. I've seen that AMD and MSI have tried this memory on the new CPUs. Do you think it'll be available for AMD next year or not?
6:49 they keep using E-cores because it just makes a lot of sense on a desktop. You only ever need a handful of fast cores, or as much multi-core performance as possible. Workloads where you need tons of cores AND for each of them to be as fast as possible individually don't really exist on desktop. However, workloads like this are very common in the datacenter (web servers, hypervisors, etc.), and thus mainline Xeons are all P-cores, no E-cores. AMD builds all of their chips from the same base 8-core chiplet - from desktop CPUs to 192-core server behemoths. So that base chiplet has to balance both needs well, and thus no E-cores. Intel is forced to make different monolithic chips for desktop and server anyway, so they don't interfere with each other and each can do their own thing. I would actually really love to see AMD try out a big-little design as well. Imagine a CPU where one chiplet is 8 fast cores with X3D, and the other is 16 or even 32 simpler cores. Once AMD figures out how to mix different types of chiplets inside the same CPU like Intel did on Arrow Lake, we're in for treats.
AMD already has the dense C-core chiplets they could use for a 8+16C cores config on desktop. And they already have a hybrid design on Strix Point with 4+8C cores. They could do what you suggest today on Ryzen Desktop. But AMD tends to release like one proof-of-concept kinda product before they apply a new design across their different product categories. I guess they're working on fixing kinks with our all favorite OS and its abysmal scheduler before they also do this on desktop. Other considerations could be that they determined: nah, ARL isn't really beating us, so no need to trouble us with yet another config to get working. And then there might be issues with memory bandwidth limitations to feed all those cores. If that is an issue, working on the memory controller to support faster clocked memory would be higher prio and also a prerequisite to make more cores give you actual tangible uplifts.
@@Hugh_I Windows truly is abysmal with big-little scheduling. Literally every other OS (except maybe BSD) has figured it out years ago. My guess is that AMD is letting Intel take the brunt of the development work for big-little support, and is going to slot their C-cores in later with just minor scheduling tweaks instead of having to build the whole damn ecosystem from the ground up.
Hi, just want to know if you guys will be doing another high-speed memory test, especially when the 9800X3D comes out, on different mobos: the 9700X, 9800X3D, 9900X and 9950X across all 21 of the 870-series mobos you tested earlier? Really want to know which combination works best for productivity and gaming, and whether there is any performance gain with 7200 MHz CL34 or 8000 MHz+ RAM. Thank you so much again for all the hard work you guys do in testing. Your channel is the first I go to, thanks again.
I remember the Ryzen hate on here and on GN when 1st and 2nd gen came out, saying AMD sucks for games and is a joke. Man, have times changed. Now Intel is viewed the way you all viewed AMD back in 2017.
Computers are a huge hobby for me and I have 2 systems and a server (Synology NAS, 160TB gross, 120TB net) running at all times in my home. I also have many hardware parts on hand as spares. I just gave my 12900K with a 3090 to my family because I'm building an Intel 285K computer. I have most of the parts and just took delivery of my Asus Maximus Extreme Z890 MB. Now I'm just waiting for that Intel CPU to hit the shelves here in Canada. It will complement my 7950X3D with 4090 system well when it's done. Can't wait for the 5090 to come out, have to buy 2. Thank you guys who host this channel for keeping people like me who enjoy this hobby excited about the latest news on all things computers!
@@gymnastchannel7372 LOL, thank for the comment. I'm sure whatever computer you currently own will continue to serve you and your family well. Have a great day!
Arrow Lake is really good for workstation applications. I'd still get a Ryzen system due to the platform longevity and the fact that my niche, music production with live MIDI input, heavily favours large cache and therefore the X3D CPUs, but to someone whose sole focus isn't gaming or lightly threaded productivity it's a compelling option.
That, and not having to worry about whether some task is gonna get scheduled on an E-core or a P-core. The heterogeneous architecture is fine for mobile stuff, but I wouldn't want it on a desktop processor (even if it supposedly helps with multithreaded performance within a specific power budget. I don't see a reason to care on the desktop unless I'm trying to maximize every last inch of power efficiency, and even then, it's not like AMD has failed to remain competitive even WITHOUT the E-cores.)
@@jermygod If Intel made a 12 P-core CPU instead of an 8P+16E one, it would consume much more power and have worse MT performance, and games do not really scale significantly past 6 cores anyway.
To add to the Core Ultra 3 discussion, Intel stated that Core Ultra would consist of 5, 7 and 9, while core 'non-ultra' would be 3, 5 and 7 (presumably lower TDP counterparts for the 5 and 7)
Currently the non-ultra are using older generation cores. Intel always keeps some previous generation marketing names for years after moving to a new series just to use on OEM trash-binned CPUs. They still used the Pentium and Celeron marketing names a decade after moving to Core2 and then Core-i (They had "Atom" for intentionally designed low power single core products.)
Windows scheduling loves to move threads between cores, which I've always considered a really stupid thing to do (for performance). This is true even when you're running a single-threaded process, because all the other Windows processes still need to run, and them getting CPU time ends up triggering a core switch when the single thread gets re-scheduled.
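If anyone wants to see what opting out of that migration looks like, here's a minimal Win32 sketch (illustrative only; it assumes a single processor group, i.e. fewer than 64 logical CPUs).

```cpp
#include <windows.h>

// Hard-pin the calling thread to one logical CPU so the scheduler can't
// migrate it (and throw away its warmed-up L1/L2 and branch predictor state).
bool PinCurrentThreadToCore(unsigned core) {
    DWORD_PTR mask = static_cast<DWORD_PTR>(1) << core; // one bit per logical CPU
    return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
}

// Softer option: name a preferred ("ideal") core but let Windows override it
// when it has a reason to, e.g. that core is thermally throttled.
void PreferCore(unsigned core) {
    SetThreadIdealProcessor(GetCurrentThread(), core);
}
```

Worth noting that hard pinning can backfire (you lose the "hop to a cooler core for more boost" behaviour mentioned elsewhere in this thread), which is part of why the scheduler migrates in the first place.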
Not sure if that one core would “burn out” earlier if they only ever ran one core? That would explain why there is that jumping… All in all, it would be interesting to hear why…
@ Yeah… I know… so I'm not sure why there is that core hopping. We also know that over a long time the CPU does get weaker because of heat/cold cycles and too much current, so there is some degradation… but it should not be huge. My best guess is that a core can “safely” keep super high frequencies only for a short time, and the core hopping is there to keep those max boosts going… but how useful it actually is… seems counterproductive…
Move from a hot, thermally throttling core to a fresh, cool, boostable core. There are some other reasons on the server side having to do with NUMA and asymmetric IO access, but MS may be lazily reusing the same scheduler for desktop.
26:45 Nvidia released RT cores in 2018 that are, 6 years later, slowly being used but still far from essential. Had Turing 2000 been released with only Tensor cores and more rasterizing power, games would have looked better. Same for Ampere 3000. Only with Ada 4000 could you say there are a handful of games that truly use those RT cores and the cores are powerful enough on 4080 and higher. Anything below 4080 is useless for RT unless you really get hard for reflections. RT is great for reflections! They should have called it Reflection Cores.
@@Dexion845 They are used for work applications far more than gaming. RTX in gaming is still a gimmick, and it's very, very rare to find a game that uses it. Most AAA games use upscaling though, but that just looks like low resolution but sharp, so it's ass. Raw 4K just looks so much better.
@ Oh? I thought those were the Tensor's not RT. But yeah 4K Native looks great but to be honest not even a 4090 can do that above 60fps on most graphically demanding games.
Microsoft has gone completely off the rails in the last couple of years. Judging by the absolute mess they made of simple stuff like Notepad, it is obvious they are to blame. I have a 5950X with a 4090 and the thing simply stopped working after 24H2, with constant reboots and freezes. Totally stable under Linux or Win 10, though. OneDrive is so useless we prefer to send SSDs by mail because the service is total useless crap. It is a hot mess over there.
Make sure you use Revo Uninstaller for the chipset and your graphics card drivers if you upgraded or previously had different hardware in your system. I had some issues with my 5900X going from the 3080 to the 7900 XTX and upgrading to 24H2, but after that everything runs fine.
@@Clutch4IceCream After doing all this and two clean installs I quit and went back to 10. There are a LOT of horror stories of people RMAing the entire PC with the problem persisting. To me, the answer is clear: Windows is the problem, especially after having zero problems with the exact same hardware on different OSes. Those late launches by Intel and AMD make it very clear too. Microsoft is not competent anymore.
@@OficinadoArdito I work with Microsoft Azure, and it’s been an absolute cluster*uck this year, it’s like they’re being run by the marketing team who have zero understanding of computing. I’ve ditched windows entirely now, have a MBP laptop (which destroys anything X86) and moved to Linux for my gaming pc and Plex server. I don’t regret it at all, windows sucks now.
33:11 gaming hardware nerds need this reminder when they start talking about broad industry decisions. There is so much more to CPUs than gaming, and especially 5-10% differences here and there.
Precisely. Believe it or not, Intel is actually a business… and they respond to what major customers ask them to provide. (Energy efficiency, ecosystem costs, etc.)
It’s memory latency… overclocking the Ring Bus to a 1:2 ratio compared to the memory speed also yields decent results. Tuning the memory to ~65ns or less is a must. No miracles but it does help a lot in most games.
It is not about scheduling issues. It is about available thread-level parallelism and overhead. When parallelism is available, E-cores are more performant as they have greater perf/area, but games are not there; roughly 8 threads is the maximum exploitable parallelism, because most of the computing (a very specialized kind) is done on the GPU anyway. Humbly I will add that Intel was always considered an excellent tweaker of pipelines and their critical paths, and they may have a problem realizing that on TSMC. I also think they failed to obtain the IPC they targeted with the wider 8-way microarchitecture with a 500+ entry reorder buffer and 20+ issue width, for whatever reason. And that is key, because frequency is not the way to go (Pentium 4 happened). Apple designed their M processors to be very complex from the beginning and only recently achieved 4.5 GHz on TSMC's best node. A couple of additional cycles of latency on top of a 200-300 cycle memory latency is not, I think, that important.
What's even crazier with Windows, scheduling, and optimising gaming performance: the Xbox uses Ryzen cores. It's quite interesting how the 'save state' works on Xbox; the game is essentially a VM, and they use some variant of Hyper-V to contain games. That's insane, and it sounds like it would be really interesting if Windows could do this: bare-metal optimising of game code with OS isolation… And yet the Ryzen Xbox team that must exist has no line or input into the Windows OS?? Is it still the same kernel on Xbox and Windows? Did they drift apart years ago… 🤷
They have drifted apart. The Xbox OS can be highly tuned since it's literally running on 2-3 SKU's at a time max and every unit is guaranteed to be identical. On general PC side you can't do that since hardware varies so wildly. Windows 10 can run on anything from a Core 2 Duo all the way through to Ryzen 9000 & Arrow Lake for example. Xbox Series S & X has 2 Ryzen based CPU's only to worry about.
Have you guys tried any testing of Arrow Lake with CUDIMMS? Someone said that the different memory might be needed to make it actually a good CPU. IT would still be cost prohibitive but it would be nice to know if the CPU is complete trash or just mostly trash.
@@pavelpenzev8129 That sounds extremely unlikely. I don't have the game but I just watched a performance test with HT off running 140 fps, you're saying HT will net you over 466 fps?
@@Wahinies 24H2 actually fixed a lot of random inexplicable dips while gaming I had experienced on previous windows builds on AMD CPUs and slightly improved frame rate a bit. The only downside of 24H2 is for benchmarking/ xoc stuff where you actually lose a bit of performance.
If you can't get it from microsoft then download the ISO online and install it. Certain people have uploaded the ISO which can basically be installed into any hardware.
I hate the E-core / P-core stuff. It's just a band-aid, and a poor one. Intel spent years throwing the kitchen sink at performance and made very poor power/performance trade-offs along the way. Now they need to clean all of that junk up. I don't blame the Windows scheduler one bit for not being able to perfectly figure out which thread is supposed to go where... it's an impossible task for an operating system.
Intel already knew it was gonna be a flop; that's why they cancelled the next-in-line 20A node and went all-in on 18A. 18A needs to be insanely good.
AMD had Ryzen 3 in the 1000 series with the 1200 and the 1300X, and these were based on the same desktop Zeppelin dies that the 1700X and 1800X were using. In the 2000 series, Ryzen 3 was relegated to the 2300G, a notebook APU on an AM4 substrate. Then we had the 3000 series, which gave us the 3100 and the 3300X, both based on the Matisse desktop design. There were also Ryzen 3 CPUs in the 4000 series like the 4300G and the 4350G, but these again were notebook APUs based on Renoir. The 5000 series and 7000 series completely skipped the Ryzen 3 line, and it looks to be the same with the 9000 line. I think the reason is that AMD has now made the APU line-up a separate naming scheme; with the 8000 series being APU based, it takes up the mantle of the Ryzen 3 line.
> The 5000 series and 7000 series completely skipped the Ryzen 3 line and it looks to be the same with the 9000 line. R3 5100, 5300G, 5300GE, and 8300G all exist, though not in retail packaging. Also, the 8000 series aren't a separate APU naming scheme - there are 2 without iGPUs (8400F, 8700F) and the 7000 non-F series are all APUs. 8000 series is 4nm instead of 5nm and have RDNA3 iGPUs instead of RDNA2.
I NEVER pay attention to first-party marketing. I ONLY rely on actual benchmarks and the resulting data. I also build a computer every 5-7 years, so it has to last. The only exception to my build cadence was because my daughter needed a new computer to run a game with settings her old computer couldn't handle, much less deliver a good experience with. In this case, I bought all AMD because I didn't want to spend a lot of money, and I wanted a computer that gave the best performance per dollar and per watt. The 4080 and 4090 were non-starters because of the risk of burning the VASTLY INSUFFICIENT 12VHPWR connector, so my alternatives were a 4070 Ti Super or a 7900 XT or XTX. Because the computer that is now my daughter's has a 7900 XT, the XTX was the remaining choice. 4K ultra gaming is now a reality.
People need to understand a few things about the CPU market, I think. First, AMD has introduced a two-tier CPU system: gaming is X3D, the others are mainly productivity. There is some crossover, but if you are shopping for gaming you get X3D. Intel has the issue that they only gained speed by pumping insane amounts of power into the CPU, if you look at their last 4 releases. Now they need to scale back the power and come up with a new design, which will take at least 2-3 generations (similar to AMD's Zen road). I would not expect anything competitive from Intel for at least the next 3 years.
How hard would it be for Intel to release a SKU that was just P-cores, spending the transistor budget of the 16 E-cores on another 4 P-cores? Personally I would pay more for 16 P-cores. Release it as an Extreme version of the processor. Is that feasible at all?
Partly by design, as it is what you should expect. Another part is bugs; it is really bad in some instances where it shouldn't be. If you run Arrow Lake with just one performance core, it is actually an amazingly well-performing CPU.
You say that people won't stay with something like a 1080ti throughout its life but that's exactly what I did. Now I've recently got a 4090 and plan on keeping it for quite a while. The performance of these things doesn't go backwards over time.
I went from a 1080 to 6800xt, double performance for the same price. You went from a 1080ti to a 4090, triple the performance for double the price. I get the 4090 price. What I don’t get is the 4070/4080 series pricing. The 4080s seems reasonable, so everything below that should have an msrp of 80% the original. That would bring a 4070ti to 650msrp(it’s 799) compared to the 1080ti 699.
They had lots of good stuff lined up for arrow lake, but cut most of it. Keller said in interviews he's more of a team builder and a consultant than an engineer at this point.
@libaf5471 yeah exactly and most importantly a 'direction setter'. If they got rid of Murphy early on, made Keller either CEO after the writing was on the wall for Swan or just gave him a very high up position so he could do whatever he wanted then Intel would be in a massively better place than it is today.
@@Speak_Out_and_Remove_All_Doubt A CEO's job in tech isn't to help the company make anything useful, it's to help the company's investors make money short term. That's unfortunately how it is.
I would be really interested to see what the modern Zen 5 built as an eight core monolithic would perform like. Mostly to see how much performance is left on the table from the chiplet design, and latency to the IMC.
I have just started looking to game on Linux. I wonder how fps will differ. The test that I have done so far was successful, booting from an ext SSD so far no issues.
@@G_MAN_325 I’ve moved to Nobara, you will lose some FPS but it’s not anything that destroys the experience. I don’t regret it at all. If you want an easier time setting it up go with Mint or Ubuntu, Nobara can be a bit fiddly. They’re also working on SteamOS being a proper desktop option, Linux gaming has come on leaps and bounds in the last few years
AMD only makes one type of CCD for desktop usage. The six-core part is a failed full 8-core CCD with two cores fused off. I guess AMD could fuse off two more cores, but my guess is that the number of dies suitable for this isn't large enough to make it economical, even for OEMs.
Pretty much this. The only time they did that was with zen2 and the 3300X/3100 parts. They released them at the end of the zen2 cycle to collect up enough defective dies that couldn't go into a 3600, and yet still those parts sold out instantly with no way for AMD to provide sufficient supply (other than by disabling perfectly good dies). And IIRC the yields on the current nodes are actually even better than they were on 7nm then. It just doesn't make any economic sense for them.
@@Hugh_I AMD could absolutely make a custom half-size 4-core CCD, yields would be even higher, and the economics would magically work. And they actually did do that: they made a custom quad-core just for a Ryzen/Athlon refresh that got "a new and revised box design to help consumers differentiate it" as opposed to, say, a new name.
I forgot who said it (maybe Jay?) but it goes like this: "Sometimes you need to take a step sideways before you can move forward again," and that may be what is happening with Arrow Lake and its architecture.
They've been going sideways for years on 14nm, and this is their fault lol. They're fucking up in every segment, and they just lost their deal with TSMC, so now they have to pay full price for the new wafers 😭 We are not seeing a good Intel chip until after 2026, and it will be mediocre at best at a ridiculous price, like this one was.
About the Intel’s reasoning for using economy cores, another important aspect here is that they were absolutely destroyed in the laptop space by Apple in terms of power consumption, thermals and fan noise. And the E-cores were a stopgap solution for this without having to do a more radical redesign of the entire CPU architecture
That’s what’s bad about the whole thing - Intel aren’t the only ones losing out because of this massive flop, all of the motherboard vendors are losing out just as much designing motherboards that people won’t buy
@@heinzwaldpfad Been on Intel all my life. Other than 13th & 14th gen self-destructing series, I was all ready to jump onto the 285K ... until I realized it may not even beat grandpa 5800X3D in gaming. LOL. So, it'll be my first time on either an AMD 9950X3D or 9800X3D. Shame the Z890 mobos are nicer than the X870E ones.
One of the points of chiplets for AMD is to use those same consumer chiplets on server products and vice versa, while Intel made chiplets for the sake of having chiplets (sort of): they spent lots of money getting them working, and unless they have some kind of brilliant plan, they can only use them for desktop CPUs (I suppose they are shared between multiple desktop SKUs).
I think Intel shot itself in the foot with the whole E-core stuff. Sure, the "initial" idea was great: just add some "low power cores" to get more multithreaded performance without increasing the power draw too much. But then Intel failed to realize that E-cores can be really good "if and only if" there are many of them. As with GPUs, single-core performance is not that important: a whole lot of low-power cores can rip through any multithreaded load without using that much energy. What Intel did instead was increase E-core performance (to match the P-cores from 5 years ago). Imagine the scenario: an Intel CPU with 8 P-cores and 64 (or even 128, 256, etc.) E-cores, where the E-cores clock at ~2 GHz and are optimized for low power consumption and small die area (instead of raw performance). A CPU like that would lead "all" multicore benchmarks by 100%+. We programmers are lazy people: if I can "omp parallel for" something without putting much thought into it, I'll do it. Switching to e.g. CUDA for massively parallel tasks brings a whole additional bag of problems with it. Even if it's 100x faster, you need a decent GPU, an additional CUDA compile step, pushing data across the bus, GPU memory that may not be enough, data structures that aren't GPU optimized, etc. etc. It takes a lot more time and puts a lot of restrictions on you and the customer (the one you are writing the program for). Give me the "omp parallel for E" with 256 E-cores and my productivity increases by a factor of 2.
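To show how low the bar for that kind of "lazy" parallelism is, here's a minimal OpenMP sketch of the sort of loop being described; nothing Intel- or E-core-specific, one pragma just spreads iterations over whatever cores the OS exposes.

```cpp
// Hypothetical example; build with -fopenmp (GCC/Clang) or /openmp (MSVC).
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;
    std::vector<double> data(n, 1.0);
    double sum = 0.0;

    // The runtime chops the loop across all available hardware threads,
    // P-core or E-core alike; no explicit thread management needed.
    #pragma omp parallel for reduction(+ : sum) schedule(static)
    for (int i = 0; i < n; ++i) {
        sum += data[i] * data[i];
    }

    std::printf("up to %d threads, sum = %.1f\n", omp_get_max_threads(), sum);
}
```

Throughput on a loop like this scales with total core count far more than with per-core speed, which is exactly the argument for many small cores above.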
The other problem is with task scheduling. Programs want all threads they put tasks on to be completely identical in performance. if the program starts dividing tasks up on p cores and e cores with wildly different processing speeds that causes a huge problem with stability and hangups. your task would work just fine if the program knew to ONLY use those e cores and stay off the P cores but as we have seen with the dogshit windows 11 scheduler that is clearly not the case. Another thing that enrages me is the "single core boost" scam. They push suicide voltages into 1-2 cores to clock them higher and this does nothing for real workloads or gaming or anything any user would want. All it does is make the Single Thread number score in synthetic benchmarks go higher to make their bullshit processor seem like it has more ST than it actually does on review sites/channels. To make it seem like the new CPU architecture is 10-15% faster when it is actually only 3-4% faster because otherwise no one would buy the garbage.
@@HoldinContempt You "can" run your threads on specific cores (e.g. look for "ThreadAffinityMask" under Win32; this might be more or less a "suggestion" to the system... it usually just works, so no questions asked), but because all cores were equal in the past, most people do not care and just assume all threads run at the same speed (and it is an "Intel only thing"). The next huge problem: if the code is a bit old (predating E-cores), it obviously does not select specific cores for specific tasks. Then the scheduler should do its magic, but realistically speaking, it can only guess what to do and never "know" what to do... Can't be helped... :)
The funny thing about the comparison between Zen 5 and Arrow Lake to me is that *both* are revamps in pretty significant ways, but AMD feels like more of a refinement because they achieved parity or a small uplift in most cases, whereas Intel struggled to even achieve parity in most cases. I guess Intel is dealing with even more of a change because it's their first chiplet-based CPU, but I think it's reasonable to expect AMD to also achieve significant refinements in generations to come based on their architectural reset with Zen 5. It's going to be tough for Intel.
I have not built an Intel-based PC in 16 years. The fact that you usually have to change the mobo for any future CPU upgrade has deterred me; not the case with AMD. My view is not that Arrow Lake is particularly bad, but that all Intel CPUs are a bad buy by platform design rather than on any performance criteria.
If you built like 2 computers over all these years, it's quite possible you didn't screw yourself by sticking with AMD. However, Intel CPUs were better for many of those years, and if you buried your head in the sand and settled for less, well, that's what you did.
@Conumdrum I'm about to upgrade my Ryzen 3600 to something considerably better for less than £200. No mobo required just a bios update. Did I really choose wrong?
I'm in a great position. I'm looking to upgrade from a 2014 cpu! i7 5960x! So I'm looking at a 300-400% boost! To be honest I'm only upgrading due to the Windows 10 going out of support next year.
Drop HT, only have 8 P cores with 8 threads. Drop frequency as well since the previous gen was cooking itself having to clock so high to remain competitive. I don't know what anyone was expecting there.
36:00 It's important to note here that dropping the GPU load of an application DOES NOT mean system performance becomes CPU limited. This is a big misconception. It can quite simply be software limited. You should be taking an application that you know is highly CPU demanding, i.e. one whose FPS is determined largely by CPU performance (e.g. CSGO, Valorant, etc.). Many applications, when you drop the GPU load, will simply perform based on OS scheduling, latency, and memory optimization, and at that level of demand you can potentially see application issues influence performance too.
I'll give you an example. Eve Online is a game that is neither CPU limited nor GPU limited; it is server-side limited. There are so many server-side checks going on, and so many entities being tracked, that server-side performance is the single biggest factor in your FPS a lot of the time. You will get better FPS simply by going to a lower-populated system in game. Some applications are simply designed not to hammer the CPU as hard as they possibly could. There have been times in the past when OS updates or game updates radically changed performance on a given platform.
You are better off finding games/apps that are single-thread clock-speed dependent and then multithread dependent. Otherwise you might find in 12 months' time that the 10% performance gap you saw was simply closed by software. If you ignore the software side of this whole equation you do so at your own peril, because there will be times when a new game comes out that catches you out.
Also, more cache !== better; for that matter, more memory doesn't necessarily mean better performance either. It can actually end up with worse performance in extreme cases. A lot of performance is governed by better access times and maintaining frequency, which comes down to thermals. The combination of faster cache access and faster RAM access latency via DDR5 has translated into better performance for both Intel and AMD. If you want to take away a lesson from that, it should be that performance does not solely come down to CPU core goes brrrr or GPU goes brrr or big number === better.
The only reason Intel used E-Cores was for power consumption and heat production reasons. If they had all P-Cores, their chips would have been melting using 500Watts or more.......
It's because their P-Cores are too large physically and use too much power for the amount of performance it gives. E-Cores were meant to tackle this, to get close to P-Core IPC (with Skymont) while being much smaller and using significantly less power.
That's not why Intel uses E cores or at least not the main reason. E cores are die space efficient. 4 E cores are roughly 130-150% the MT performance of a single P core while using the same or less die space. It means Intel can get roughly 14P cores worth of performance from an 8P + 16E core configuration rather than 12P while keeping the die size the same. And using more cores that are slower is generally more power efficient than a few cores that are more powerful (because of the diminishing returns of raising clocks). And like HUB has shown, games don't really care for more cores past 8, which is why Intel doesn't care to provide more P cores, but for applications that do scale (so professional tasks) E cores work just fine. Heck, the 5600X3D shows that games don't really scale past 6 cores.
Biggest problem is that games are not designed to take advantage of newer cpu types. It used to be that games would be designed to take the best advantage of modern hardware.
I think Ryzen 5 should become Ryzen 3, since six cores is the new standard. The other thing they could do is make a regular, plain six-core Ryzen. I think one of those would be the best solution.
9:56 Imagine a dual-CPU system: the left socket is for a P-core CPU optimized for gaming (a 285PK for example, with 12 P-cores), and the right socket is for an E-core CPU optimized for productivity (a 285EK for example, with 48 E-cores). You could populate one or both sockets to suit your needs.
C'mon guys... easy answer to the "why E-cores" question: to keep up with AMD's core count and nothing more. A smokescreen by Intel to say "hey, we have that many cores also". Most computer users are just that: they know the name, they look at the marketing, and they make their buying decisions on that. Intel pulled a fast one on consumers that are not computer savvy. E-cores were and are nothing more than a marketing ploy. Intel was forced into this type of architecture because their platform couldn't scale, and what do they do next? Come out with another bad idea in "Core Ultra". Not a road to success IMO. You use the excuse of core size limitations, "what is Intel to do?" And yet, just look at their previous 2011-3 socket. It was huge. I'm sure they could have returned to that socket size if core real estate was a problem; they've had a new socket every two gens anyway. Sadly, Intel absolutely will be gobbled up by another conglomerate within a few short years. The writing is on the wall for Intel. They got too comfortable as #1 and did not innovate. Their own fault, so lay the blame squarely upon Intel's shoulders and stop making excuses for their lack of leadership vision. BTW, your hardware reviews are top notch. Best on the internet... aka The Screensavers
Arrow Lake: 77 cycles to the memory controller
Meteor Lake: 80 cycles
Lunar Lake: 55 cycles
Raptor Lake: 55 cycles
I think if they just re-released Arrow Lake as a monolithic CPU without the iGPU, display controller and NPU, they'd have a solid competitor.
They should have included the memory controller on the compute tile at least, that would improve latency a lot. A nanosecond or two latency to the SOC tile does not matter.
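For a rough sense of scale, here is a hedged conversion of the quoted cycle counts into time. The clock domain those latency figures were measured in isn't stated here, so the 5.0 GHz value below is purely an illustrative assumption.
```c
// Rough conversion of the quoted IMC latencies from cycles to time.
// Assumption: the cycle counts are core-clock cycles at an illustrative
// 5.0 GHz; the real test conditions may differ.
#include <stdio.h>

int main(void) {
    const double clock_ghz = 5.0;               // assumed clock for illustration
    const char  *parts[]  = {"Arrow Lake", "Meteor Lake", "Lunar Lake", "Raptor Lake"};
    const double cycles[] = {77, 80, 55, 55};   // figures quoted in the comment above

    for (int i = 0; i < 4; i++)
        printf("%-12s %5.0f cycles ~= %4.1f ns at %.1f GHz\n",
               parts[i], cycles[i], cycles[i] / clock_ghz, clock_ghz);
    return 0;
}
```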
@@aos32 Making Alder Lake and Raptor Lake on Intel's nodes, which are way better than their overrated TSMC equivalents in MT/mm2, was the cost-effective solution. There's no way I would spend almost 400 USD on a 7800X3D if a 13700K is around 300 USD.
You mentioned 64 GB RAM. There are two titles I've read where 64 GB can help a lot: the flight simulator DCS (where 56 GB has been reported), and Cities: Skylines 2. And maybe Star Citizen, whenever that actually releases.
Looks like Intel aimed the performance arrow but it landed in the Dead Sea, and it simply offers better power efficiency than 14th gen. The major downside is needing a new mobo, when AM5 users can swap a 7800X3D out for a 9800X3D, or even a Zen 6 X3D if they stick with AM5.
It also brings very limited improvements outside of it, at a much higher price... The 7950X and 9950X mostly match it out there and the 9950X3D is incoming soon.
@@nolategame6367 I'm not saying it's great or anything, but there are workloads that it excels in. Just not gaming, so the YouTube creators say it sucks. Tile-based rendering, for example, is an extremely time-consuming workload; improvements there are more important than going from 120fps to 160fps. And not everyone, by a huge percentage, owns a 4090 and plays at 1080p. Talk about a niche market. And I'm not singling out Intel here. These guys truly believe that CPUs, unlike virtually everything else in the world, should just get cheaper and cheaper. Last year's closeout CPUs are a better deal, just like every other consumer item when the new models come out; if they weren't, they'd never sell the old stock. And this bothers them. These guys don't understand the economics of retail.
They switched places...I am so confused now.
I was going to say doesn't look right. 🤣
Tim wanted the captain's chair for this podcast.
I can't watch anymore😢
Imagine Tim standing up.
@@catsspatThe whole world would be shook, Tim got that dingo in em.
The more you arrow, the more you lake
This is genius 🤣
Thanks AKK5I
The more money tsmc makes
Huh?
And so arose the Error Lake: Gelsinger's arrow, poorly efficient and unstable, like his management.
First attempt at gluing cores together.
Yeah Tile-Node
That was Meteor Lake, and Lunar Lake does the same thing and is wayy better. There was something wrong with the core ARL design itself.
Yeah, low cache and high latency @@auritro3903
Yeah. It's Intel's Zen1. They HAD to try a different tack unless we were all going to need personal generators to run Intel based PCs.
@@ro55mo22Still Zen 1 delivered better performance than Bulldozer while using far less power. Intel arrow to the knee lake deliveres at best the same, but often worse performance with a bit less power, but still more than Amd.
The First generation Ryzen did cost pennies on the dollar, so it didn't matter if they weren't the best.
This arrow lake cost way too much for what they offer, so I wouldn't be willing to pay a premium for something that at best has the same perfomance as otger products (same with Ryzen 9000 btw)
Dumb example, you need a CPU for gaming and productivity between 300-400 bucks, you won't buy a 245k or a 9700x, you'll buy a ryzen 7900, easily, still good gor gaming and crushes the other two chips in productivity
1st gen Ryzen was almost trading blows with Intel in gaming, smashed it in multi-core performance and cost less. It was actually a really good launch, people just forgot about it.
Arrow lake costs more and loses in basically every benchmark, not to mention against 1-2 gen old Intel parts as well.
And Intel can't really compete on the low end either as Moore's Law Is Dead has reported that AMD will be releasing new AM4 chips through the end of 2026 because the data centers are buying Zen 3 based Milan chips like they are going out of style so they are increasing not decreasing Zen 3 production. Since every chip that doesn't hit the metrics for speed or power that is required for Milan can just be sold as desktop? The sub $250 market for CPUs will be AM4 until then, especially with the 5700X3D and 5600X being great for 1080p gaming.
Generally agree, although it's a bit of a moot point once the old inventory is constrained and dries up. Then you won't have a choice
1st gen Ryzen also completely dominated in productivity, while costing less, while NOT regressing in performance compared to their previous gen. Arrow Lake is closer to Bulldozer than Zen.
@@gamingmarcus Very wrong. 1st gen Ryzen was not "almost trading blows" in gaming, it was losing consistently. It wasn't that good of a launch for gamers, except those that came from i3 and bought an R5 for the same money. It would take 2 years, until Zen 2 to arrive for Ryzen to trade blows with Intel on gaming.
Intel used to be a warrior, until it took an Arrow to the knee.
Lake
And fell in the lake. There they can maybe find Alan Wake.
@@DragonOfTheMortalKombat well if the fell they definitely made a "wake"
Learn your Greek mythology mate, Achilles was hit in the ankle lol.
Possibly the worst cross meme to release a poor gaming CPU named arrow Lake
Not much of a lake this time. More like a puddle
🤣
Puddle left by a poodle 😂😂
all the mudd
Arrow puddle
Arrow pond
Arrow lake xD
Or a swamp..
they focused on adding more backdoors
Based on the subtle comments are that laugh, The new 9800X3D is gonna blow the new intels out of the water. I think that's what products they were testing when mentioning it early in the video lol. They can't say much but there's some hints. The new X3D's are gonna be fuckin sick.
I'm sure the 9800X3D will give us a decent uplift over the 7800X3D. But even if not, blowing Intel out of the water should be the easiest bar to pass, given that the 7800X3D already crushes anything Intel has to offer in gaming.
Well, it's replacing the part that already blows Intel out of the water, so it's expected haha.
I read that another reviewer dropped similar hints last week about the 9800X3D being ridiculously fast, so, yeah, I'm sure that all of these major outlets have had them in shop for several days now and have already done a lot of benchmarking.
since i 50/50 games and productivity with heavy rendering needs like blender and premiere, i'm mostly interested in how the 9950x3D will do, and how it compares to 285k after some updates and fixes. x800x3D have historically been a bit too poop in multicore and productivity workloads for what i want.
Meh, the 7800x3d bundle still went up like 30% in price at micro center when zen 5 launched. It doesn't bode well for pricing ultimately
It is incredible how in-depth testing has become. Back in 1990 when I started getting into PC Hardware there was only the information that a new CPU hit the market and you only looked at how high the MHZ is to determine its speed :D
Now clock speed is no longer the be-all and end-all; it very much depends on what you use it for.
This is how we ended up with the late single-core Pentium parts: they literally put everything into clock speed and prayed.
The good old days.
They focused on efficiency, so performance was off the table from the start, and then they failed on efficiency too. They messed up their one selling point.
Compared to raptor lake they didn’t fail on efficiency
Did they? They barely dropped TDP and went to a super expensive TSMC node and still use way more power than AMD does
I mean, they did manage to only use half the power of their prior edition. That's not a fail but a pretty good improvement.
It's still lacking in comparison to AMD. But in terms of most generational improvement, I'd say Intel actually wins. Zen 5 isn't really more efficient than Zen 4, performance improvement is small overall, with 7800X3D, for the time being, still being the CPU to go. Halving the power draw is a way bigger improvement.
That AMD is still ahead just shows how much ahead they were in general.
@@maskharat Exactly why most call it a failure: it's not efficient enough to beat a 7700/7600, but it's slower than last gen.
All you people are so poorly informed. Do you also have other sources that you watch, or only tech tubers that dislike Intel? Efficiency is great with Arrow Lake! In gaming it uses way less power. And one simple BIOS change, turning DLVR off and setting the voltage to 1.2 V instead of 1.5 V, will make the 285K consume 170-180 watts with a 44K Cinebench R23 score!
Der8auer has a video on that. All you people are sheep if you ask me. Never trust YT channels like these, they are all sponsored!
Windows 11 is garbage. This channel showed that turning off core isolation for AMD had a good impact on the 9000 series for gaming. But they fail to tell you that doing the same for the new Intel chips gives you even more FPS compared to AMD!
Using fast DDR5 increases it even more. CUDIMM, which is coming out this month, again gives over 10% extra performance. So it will pass the 7800X3D in average gaming and be close to the 9800X3D.
But they want you to believe that Intel is bad. The new Arrow Lake CPU is extremely advanced; it's the first CPU where you can set the voltage of each individual P-core and of each E-core cluster of 4!
But hey, why be positive if you can manipulate the world?
UserBenchmark is one of the worst sites out there, always bashing AMD. And the tech tubers thought, we can do that too.
"I used to milk 14nm++++, but then I lost arrow in the lake"
14nm was originally only supposed to be in 5th and 6th gen, 10nm should have been ready by 7th gen, instead all they had after a year of delay was a broken dual core laptop CPU on 10nm.
+ most companies that go woke go broke.
And probably a lot of brain damage because of 💉🧬........ 🤔
14nm was awesome.
@@juanme555 6th gen: It was great
7th gen: It's good
8th gen: It's ok
9th gen: Are they going to use it forever?
10th gen: They're gonna use it forever
11th gen: Jesus Christ
It was great but then it wasn't
@@Moin0123
you say that and yet the 9900K is one of the most legendary gaming cpus of all time, it absolutely crushed the 3700X despite being a year older.
Do we actually need 16 E cores for low power tasks😅😅
No, they are needed for more power efficient highly threaded tasks
Hyperthreading had its downsides especially security. The new design eliminates the need for hyperthreading.
@@igoresque E cores aren't power efficient tho
@@igoresque E cores are less power efficient than P cores, they're more space efficient as they occupy about a quarter of the space.
@@igoresque More power efficient...
Since when?
You guys misspelled Error Lake
You misspelled it as well: it's Error Leak 😅
Haha, it's actually Err-row Lake.
had me in the first half, not gonna lie 😂
Haha. Funny.
Hey! That's not a nice thing to say about Team Blue Screen.
Disagree about the 1080ti. Moving from high end to mid range to entry level is exactly what I do and what you should do. Amortized over time, your parts will last the longest and you'll save the most money, and you should learn to adapt anyways. I had my 1070 for 7 years before Alan Wake 2 made me move to the 7900 XT. If not for that game I'd still be on it. Its like the difference between leasing your car and paying off your car and running it into the ground. Leasing is a way worse use of your money
I still can't find a good reason to switch away from my 6700xt 5600G - primarily because I play on 1080p 144hz.
The most sensible upgrade for me is a better display and a larger computer table lol, should get on that.
Same here. I'm on a 1060 6GB and I wish I had the 1070-1080 TI instead, since I've been happily playing all I want with the 1060 but have noticed some games where it's holding me back. Indeed, it's only games like Space Marine 2 that make me want to upgrade...
Then again the market is terrible and every GPU on offer feels like a compromise or Faustian bargain lol.
Remember the i7 920? Now THAT was a powerful CPU..
I understand the feeling, but it was still a bottleneck for a GTX 285 back in the day.
I still use an X58 system with a Xeon X5670, which is far more powerful than the i7 920, and it STILL bottlenecked when I tried to benchmark the 2009 AAA title Resident Evil 5 at 1440 x 900 using a GTX 280 1GB 512-bit 😂
@@phatminhphan4121 games weren't as GPU heavy back in the day
And it has incredible overclocking headroom
You could still use an i7-970 today if you really wanted to, since it's got 6 cores / 12 threads. You could only really do 1080p, but it's still usable, which is pretty unreal for a 15-year-old CPU, and it's still good for a NAS build. No way Arrow Lake is still going to be usable for what computers demand 15 years from now. Intel has fallen.
A good part of the blame goes to the Windows scheduler. It seems to prefer hopping threads from core to core for whatever reason, invalidating L1/L2 cache and predictor state all the time.
Heterogeneous architectures are hit the most, and AMD could tell some stories about that...
There are some performance reasons to move threads from core to core but there are a lot of subtleties to consider.
eg. It can take advantage of higher boost speeds by switching from a thermally throttled core to a cool core, but it should try to time this to match with an existing context switch or pipeline bubble to avoid additional latency penalty. On a 20 core desktop there shouldn't be many context switches though.
On multi socket Servers with NUMA sometimes not all cores have direct access to peripheral IO so threads need to take turns on the cores that do and more frequent switching helps this situation despite huge latency from cross socket cache transfers (More of a last decade problem). I bring this up because some multi-tile Xeons have an internal NUMA structure.
@@mytech6779 Basically, CPU performance gains have come from improving IPC, as pushing frequency is getting harder and harder. Better IPC means stalling the pipeline less, and that can be helped by tighter affinity to the cores. Windows is stuck in the previous decade in this regard. NUMA is not great on Windows either.
@@inflex4456 I never said windows was good, they probably reuse the same scheduler in desktop, laptop, and server. The point is that a layer of NUMA has developed within many CPUs to deal with multicore access to L3 which spans multiple tiles/chiplets.
NUMA of any level is overhead for all operating systems and really needs custom scheduler configuration for the specific workload, which involves testing and tuning by experienced engineers which is mostly only economical on large deployments.
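For anyone curious what "not hopping" looks like in practice, below is a minimal sketch of pinning a latency-sensitive thread to one core on Windows with SetThreadAffinityMask. It is only an illustration: as noted above, pinning also gives up the boost-related migrations the scheduler might otherwise make, and the core index used here is arbitrary.
```c
// Minimal sketch: pin the current thread to a single core on Windows so the
// scheduler can't hop it around and thrash L1/L2. Core index 2 is arbitrary.
#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD_PTR mask = 1ull << 2;   // one bit per logical processor; here: CPU 2 only
    DWORD_PTR prev = SetThreadAffinityMask(GetCurrentThread(), mask);
    if (prev == 0) {
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Thread pinned to logical CPU 2 (previous mask: 0x%llx)\n",
           (unsigned long long)prev);
    // ... run the latency-sensitive work here ...
    return 0;
}
```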
Yep, this really is the story here. To be fair, scheduling is a challenging task, but Windows hasn't always been great on that front anyway and has definitely lagged in terms of improvements. They have relied heavily on "homogeneous" systems (i.e. all cores equal and interchangeable), which is no longer the case.
First with E-cores being wildly different in terms of performance, and now with a tiled architecture, moving threads between cores can have real performance implications (core-to-core communication, latency, cache topology, etc.). This sounds a lot like Zen 1.
Gaming tasks seem pretty sensitive to these sorts of things: they don't just want raw parallelized power, they need heavily synchronized activities to finish predictably on time, all to fit under the ms budget for a given frame (e.g. 16ms for 60fps). Even back when hyperthreading was introduced by Intel, it wasn't an instant win for games (and often caused performance regressions), because while HT has a theoretical best average case of around a 40% improvement, that is really for unrelated threads in separate applications. In a game with tightly integrated threads, often with overlapping processing requirements, the potential HT gains can be much smaller, and since a "virtual HT core" is not the same as a true discrete core in terms of performance, it could actually cause performance losses. It took both Windows and game developers a while to get to a point where games actually benefit from HT (and even today, that's still not true of every game).
So this all sounds like an even stronger version of that. If Windows can't develop a one-size-fits-all scheduling strategy, then we might need to expose this info to the games themselves and let them fine-tune their own scheduling preferences. As if game developers don't have enough work to do already ;)
@@aggies11 Not sure where "40%" came from. There is no generalized theoretical ideal for hyperthreading. The potential improvement is completely dependent on the specific set of tasks, and is generally most influenced by the rate of pipeline stalls due to memory access patterns. HT provides zero theoretical benefit for some tasks, while others see even more throughput gains with 4-way HT (e.g. Xeon Phi).
Just watch that interview Wendell had recently with that Intel executive (whose name escapes me just now) and you know all you need to know. The level of arrogance is mind blowing! If that's how all of Intel operates, it's no wonder that ARL is such a dud.
That has nothing to do with how Arrow Lake performs. It's entirely possible that executive didn't even work at Intel by the time Arrow Lake was designed.
The reason Arrow Lake sucks is memory latency caused by its chiplet design. Specifically, the IMC is on a separate tile from the cores. Period.
Stop reaching.
@lycanthoss 🤦♂️
Ahh, the times when reading comprehension was still a thing...
@@lycanthoss lmao sit the F down kid
@@danieloberhofer9035 no, I think you lack reading comprehension.
Can I have the video's link, please?
I think a gaming test using only E-cores would be informative, since some Twitter threads show 20ns less latency on the E-cores if the P-cores are disabled.
Jay did that with 8400 CUDIMM RAM, and he overclocked just the E-cores and got amazing results.
@@remsterx the problem is, the average consumer (and lets be real, computer makers) ain’t going to be able to do that. Also, you have to buy brand new, expensive as fuck ram, literally 1 generation after ddr5. I can save 50% and just get a 7800 x3d
@@remsterx I'm not in the mood to overclock my cpu straight out of the box and risk having the silicon degrade and fail prematurely before I'm ready to upgrade. I only do it 1-2 years after I've bought it to stretch more life out of it before I move to a new platform.
Can you actually do that? With Alder/Raptor lake, you had to have at least one P core left active.
The time stamps don’t seem to line up with the actual questions they’re assigned to. The “Where Are The Ryzen/Ultra 3s?” question seems to be the culprit.
Cannot compare Zen1 to ARL since Zen1 was much faster than the previous AMD CPU arch and much better priced than Intel's CPUs. ARL has none of those 2 advantages. Big fail in almost all CPU market points.
Your mind is so fried by AMD astroturf marketing. Zen1 was much SLOWER than its contemporary Intel CPUs and priced the same until later when they were forced to fire sell it at a much lower price. That's the actual story and timeline.
Arrow Lake is the same MSRP pricing scheme as every generation for like 5-6 years. Let's "fine wine" this one rough Intel gaming performance at launch together, like we have for every AMD generation since 2012, shall we?
@@CyberneticArgumentCreator AMD's Zen 1 was bad only for gaming while destroying Intel in productivity tasks. That forced Intel to launch 6-core CPUs on desktop very quickly in order to keep having expensive CPUs in the market. Most people do not need the fastest CPU for gaming, since they do not have the fastest GPU to pair it with. So when a company has the best value, productivity and efficiency, they gain market share, and that is what Zen 1 did to Intel's CPU lineup back then. In which of those factors are ARL CPUs better than Zen 4 & 5? Face the reality and don't try to distort it. Intel is in BIG trouble, ready to get bought.
@@CyberneticArgumentCreator Then your brain is also fried, by Intel's marketing.
Really? 4 cores for 5 GENERATIONS? And after that? 14nm+++++??
@@CyberneticArgumentCreator lol the copium is real. Arrow Lake is Intel's Piledriver moment: increasing efficiency over Bulldozer (RPL), but too little too late. Still half the efficiency of Zen 4/5 and a third of the efficiency of the X3D models. The 7800X3D beats the 14900K by 10% and the 285K is 6% slower than the 14900K. It's fucking embarrassing.
@@CyberneticArgumentCreator you're clueless
One thing I wonder about E-cores: for "gaming-focused" applications, wouldn't you still want one or two, assuming you have an amazing OS scheduler, so that OS background stuff can run on them? You can kind of do this by forcing efficiency mode on processes, and it seems to have some measurable effect on how fast the cores that usually do OS stuff (0/1) run. That way you could offload it without sacrificing too much power.
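For reference, the "forcing efficiency mode on processes" mentioned above corresponds to Windows' EcoQoS / power throttling, which is what Task Manager's efficiency mode toggles. A minimal sketch of a process opting itself in is below; it assumes a recent Windows SDK and Windows 10 1709+ or Windows 11, and on hybrid CPUs the scheduler then tends to prefer E-cores for that process.
```c
// Minimal sketch of opting the current process into EcoQoS ("efficiency mode").
// On hybrid CPUs the scheduler then tends to prefer E-cores and lower clocks
// for this process. Error handling kept minimal.
#include <windows.h>
#include <stdio.h>

int main(void) {
    PROCESS_POWER_THROTTLING_STATE state = {0};
    state.Version     = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = PROCESS_POWER_THROTTLING_EXECUTION_SPEED; // throttle on = EcoQoS

    if (!SetProcessInformation(GetCurrentProcess(), ProcessPowerThrottling,
                               &state, sizeof(state))) {
        printf("SetProcessInformation failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Process is now running under EcoQoS.\n");
    return 0;
}
```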
Yes. That is one of the many, many reasons MacBooks get amazing battery life. Problem is that the Windows Scheduler is really bad so it doesn't work out in practice as well as it should
@@_shreyash_anand I've had the opportunity to experiment with the 24H2 on AMD's mobile "Ryzen AI" (xd) processors that have e-cores, and yeah it feels like a bit of a mess. OS processes/services seem to randomly wake multiple P-cores really frequently even on battery with best battery life overlay, and each time they wake up it's a noticeable hit to battery life.
@@_shreyash_anand The scheduler works fine on straightforward big-little CPUs like my 12700K.
When Alder Lake launched, the i9 with so many cores and threads was having performance problems vs my i7, and years later it looks like that hasn't changed. Effectively, going above a 12700K for gaming is pointless, and its productivity is still more than enough.
@@saricubra2867 Dude, you need to find a new identity beyond the 12700K. Come on, we get it, you love the CPU; it's time for you to find a nice girl, or if you already have one, focus on her more as your primary infatuation.
Love with a CPU never ends well.
@@aravindpallippara1577 The i7-12700K is the modern-day 2600K; the only current decent CPU is the 7800X3D, and maybe some laptop chips.
5:14 Because scheduling is way, way harder than it used to be. Windows has never really had to deal with cores that are so dissimilar in performance (beyond NUMA), and I suspect a lot of these regressions came in when they started having to mess around with the scheduler to accommodate Alder Lake.
Actually, the Vista thread scheduler was the main reason Vista was such a debacle. People said it was the drivers, but ridiculously enough, the thread scheduler was released in an unfinished state. There were edge cases where it'd simply halt. Particularly, cases involving drivers...
This has been a long ongoing issue, but YT wasn't used for hardware news back then.
Hope everyone has a lovely day.
After the release of 9000 the price of 7800x3d rose by 25-30%, and after the release of intel it already rose another 8-10% and it continues to grow!
It would be better if both of these generations never came out and we would have had a 7800x3D for 350 dollars!
Yeah, exactly. I was going to buy it that week, as I got the motherboard the week prior. But now the 9800X3D is going to be 480, so I'll just get that, since the 7800X3D is the same price.
Not saying no way AMD cashing in on retaining the gaming crown, but I wonder how much of these price increases are just supply/demand dynamics. I mean every gamer who was waiting for ARL and Zen5 is now rushing out to buy a 7800X3D instead (except those waiting for the 9800X3D). Would not surprise me if that leads to a supply shortage and retailers raise their prices in reaction to that steep increase in demand. This at least makes more sense to me, given that AMD already had the gaming crown for 1.5 years and actually lowered the price during that time.
@@Hugh_I Now the 7800X3D is just 6% better than a 13700K, and the latter has a massive 61% lead in multithreading.
I can't believe that someone would overspend on AMD's i9 only to get destroyed in long-term usage 💀
@@saricubra2867 6%? lol. why lie? at 42:50 we can see that 7800x3d is 13% faster than 14900k! 💀
in my area the 7800x3d AFTER the price hike is only 10% more expensive than the 13700k, and it is on a good am5, not a crap LGA1700. so in the long run you can upgrade on am5, but not on 1700.
or get a 7500f or 7600 for now for gaming and then drop in 7/9800x3d in a couple of years. and the only option with 13700 is to sell the entire computer. I can't believe that someone would overspend on 1700 just for not being able to upgrade or sell it at good price.
@@saricubra2867
People would be willing to spend that, because they're gamers and the MT performance is not their focus and good enough for them. Also the 13700K is in danger of cooking itself.
And please stop making up numbers already. The 7800X3D was over 15% ahead of the 14700K in gaming in the latest HUB benchmarks (ARL review). I strongly suspect the 13700K wouldn't be faster than that.
It is better in MT, true. But despite your nonsense take - It is not AMD's i9. That would be an R9. Compare the R9 7950X3D if you want AMD's "i9" that's best for mixed usage.
Intel CPU owners are energy suppliers' favorite customers...
You may be surprised, but depending on what you do on your PC there could be little to no difference between AMD and intel in terms of power consumption.
13th/14th gen idle at 5 W, while all the new AMD chips, including the 7800X3D, average around 30 W at idle. Moral of the story: if an AMD PC is idle, turn it off, otherwise it might use more power than Intel in the long run. Also, Intel is not that inefficient under load in the stock configuration; it's only with a mild overclock of the 13900K/14900K that you see 250-300 W in gaming and up to 450 W in synthetic all-core workloads.
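To put the claimed idle gap in perspective, here is some quick arithmetic with openly assumed inputs (8 idle hours a day, $0.30/kWh); the wattage figures are the ones quoted in the comment above, not measured values.
```c
// Quick arithmetic on the idle-power claim above (5 W vs ~30 W at idle).
// Assumptions: 8 idle hours/day, $0.30/kWh electricity; adjust to taste.
#include <stdio.h>

int main(void) {
    const double idle_watts_delta = 30.0 - 5.0;  // claimed AMD-vs-Intel idle gap
    const double hours_per_day    = 8.0;         // assumed idle time per day
    const double price_per_kwh    = 0.30;        // assumed electricity price, USD

    double kwh_per_year  = idle_watts_delta * hours_per_day * 365.0 / 1000.0;
    double cost_per_year = kwh_per_year * price_per_kwh;

    printf("Idle gap: %.0f W -> %.1f kWh/year -> about $%.2f/year\n",
           idle_watts_delta, kwh_per_year, cost_per_year);
    // ~73 kWh and ~$22/year under these assumptions: real, but not dramatic.
    return 0;
}
```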
@@juice7661 The total system power should be measured. A CPU cannot be used without a motherboard and RAM. There are also several idle power levels (sometimes called sleep states) that depend on individual configuration settings. Deeper lower power idle has more lag when a task arrives.
CPU power in isolation only matters for cooling selection and ability of a motherboard to supply socket-power.
@@juice7661 I keep asking GN to add system idle-power to their tests but I'm sure my comments are lost in the crowd.
@@juice7661 So you mean it's only relevant when you're doing nothing? Is a 14900K at 280+ W in gaming good for you?
@hardwareunboxed By the look of it, Arrow Lake is built like two 4P/8E CPUs next to each other with two ring buses,
a bit like how Zen 1 was two 4-core CPUs, and we know how that one was for gaming...
That could explain why a 6-core CPU would be much worse too (way more jumping around between the two clusters).
A core-to-core latency test would show that, and isolating one 4P/8E half may show interesting results compared to Raptor Lake with the same setup.
It may also explain why 1 P-core + some E-cores show an improvement like you said.
that cpu took an arrow to the knee
Intel is trying to steal our sweet rolls 😂😂
arrow to the lake
@@GewelReal yea it defo didnt hit it's target
Label the title as Q&A?
I feel so let down by Intel, once the gold standard, now just coasting the highway of disappointment.
Intel was never the gold standard. They simply paid OEMs enough money not to use AMD products that AMD almost went bankrupt and couldn't compete, so Intel was the only standard. Very different from, say, Nvidia, who actually just dominated the market through sheer marketing.
@@james2042 amen
They werent the gold standard at all. They were just the only option.
They had no real competition. And then proceeded to completely shit the bed when faced with it
@@ricky2629 I need to agree with that! It’s disappointing to see them fall behind now that AMD’s raising the bar.
@@james2042 Oh boy, do I have news for you. Nvidia did that too, to even worse extents than Intel ever did. They did obviously have the faster cards at the time, yeah, just like Intel had faster CPUs. But Nvidia was the shadiest of the shady, and still is.
The way AMD did the "gluing" not only helped with yield, but it also allowed AMD to use the same chiplets to build EPYC server CPUs.
The way Intel did the "gluing" doesn't allow for any such benefit.
Every single chip design and its manufacturing cost an arm and a leg, so there's no way Intel's approach can be cheap.
In theory Intel's approach should be closer to a monolithic design in speed than AMD's... let's see if Intel can benefit from it in the future. But Intel's way is definitely the more expensive one to make! There are definitely things Intel can tinker with in the Arrow Lake architecture to reduce bottlenecks, but whether it's enough is hard to say. I am sure Intel can squeeze more extra speed out of Arrow Lake than AMD can out of the 9000 series, but is that enough when Arrow Lake is so much slower...
@@haukionkannel This is all with AMD still being the dominant competitor, meaning the prices they use now are marked up from what they would be if they had proper competition and still made a profit. If Intel can't compete on price in this state, they are cooked.
@
The tech Intel uses in Arrow Lake is much more complex than what AMD uses, and more complex = more expensive to produce!
So yeah, this will not be an easy time for Intel until they release the second gen of the Arrow Lake architecture.
Nobody knows how much Intel can improve. AMD got a rather nice improvement from Zen 1 to Zen 2; Intel needs that kind of jump to get back into the competition. But if AMD can also improve on the 9000 series, Intel definitely is not out of the woods!
@ Complex ≠ better. Intel is playing catch-up to AMD; years of stagnation have finally caught up with them, and the main reason they are not bankrupt is government "loans" and payouts for empty promises and countless delays. They got too comfortable too quickly. The fact that Intel went from TEN nm to THREE nm (~7 node shrinks) and is still losing in efficiency means they are FAR behind the competition (the 5nm Ryzen 7000 series and even the 7nm Ryzen 5000 series are beating them). I want Intel to succeed, as any consumer would, but they dug themselves a hole so deep that fanboyism and shill hats alone can't save them.
@@PelonixYT They got "loans" for their fabs, not for their CPU or other products, important distinction there.
Is NarrowFake a better name for them?
Error Lake?
No, no, it is not a better name lol. Error Lake is perfect.
Arrow Flake.
From running Cinebench 2024 under Intel’s Software Development Emulator, Cinebench 2024 heavily leverages the AVX extension. Like libx264 video encoding, scalar integer instructions still play a major role. Both libx264 and Cinebench contrast with Y-Cruncher, which is dominated by AVX-512 instructions. AVX-512 is used in Cinebench 2024, but in such low amounts that it’s irrelevant.
Although SSE and AVX provide 128-bit and 256-bit vector support respectively, Cinebench 2024 makes little use of vector compute. Most AVX or SSE instructions operate on scalar values. The most common FP/vector math instructions are VMULSS (scalar FP32 multiply) and VADDSS (scalar FP32 add). About 6.8% of instructions do math on 128-bit vectors. 256-bit vectors are nearly absent, but AVX isn’t just about vector length. It provides non-destructive three operand instruction formats, and Cinebench leverages that.
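A small illustration of the distinction being drawn here: AVX-encoded scalar FP math (which compiles to VMULSS/VADDSS-style instructions when AVX is enabled) versus genuinely packed 128/256-bit math, plus the non-destructive three-operand form. This is just a sketch of the instruction classes involved, not code taken from Cinebench itself.
```c
// Scalar-in-XMM vs packed AVX math. Build with -mavx (gcc/clang) so the
// scalar ops encode as VMULSS/VADDSS. The three-operand AVX form means the
// inputs 'a' and 'b' are not overwritten by the multiply.
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    // Scalar FP32 in an XMM register: only one lane does work (VMULSS-style).
    __m128 a = _mm_set_ss(3.0f), b = _mm_set_ss(2.0f);
    __m128 s = _mm_mul_ss(a, b);              // s = a * b; a and b untouched

    // Packed FP32 across a 256-bit YMM register: eight lanes at once (VMULPS).
    __m256 x = _mm256_set1_ps(3.0f), y = _mm256_set1_ps(2.0f);
    __m256 p = _mm256_mul_ps(x, y);

    float sf, pf[8];
    _mm_store_ss(&sf, s);
    _mm256_storeu_ps(pf, p);
    printf("scalar: %.1f, packed lane 0: %.1f\n", sf, pf[0]);
    return 0;
}
```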
Raptor Lake's E-cores have 128-bit vector hardware.
In terms of future-proofing, your advice is correct: buy whatever is the best value today within your budget, and 80% of the time it will have aged better too.
In terms of dGPUs, there's lots of exceptions.
Mostly from Nvidia providing less hardware for your money, but better overall performance due to their better drivers. While AMD not being able to provide as good software, and leaning on more hardware to make up the difference. And over time, AMD is able to close the gap on the software, which makes their products age better. It's basically happened since GCN vs Kepler.
But if you ask most people, they rather enjoy their new dGPU while it's still new. Because they may age better, but their value would also drop by the time it is worth it. Because with that mentality, most people would just buy a larger dGPU from 1-2 generations ago.
Example:
Why buy the RTX 5060...
...if you can get the RTX 4070 Super, RX 6800, or RTX 2080 Ti instead for cheaper?
Although there are some exceptions in the CPU front. And it all depends on your level of research.
Should you get the i5-2500k value pick, or spend the extra for the i7-2600k. Well the i7 was actually the correct choice. Unless you were able to snag a 3770k on holiday discount, and sell your old i5 for a decent price.
Then came the choice between the i7-6700K and the R7 1800X. Well, the Intel was slightly cheaper, more efficient, and much faster for games. But if you went for the R7 1700 instead, the AMD looked competitive; factor in the platform cost and boom, the AMD was actually the smarter choice.
Similar case with Zen+, where it didn't make sense. But probably worse were Intel's 7600K-7700K products. The smarter buy was to get the older Zen 1 product, as it gave you the benefits of the platform while championing the value point.
With the 8700K it was a mixed bag. Intel took a half measure and got crucified for it, as the R7 3700X launched soon after, and if your budget permitted, the higher R7 3800X even slightly beat Intel's next flagship i9-9900K overall. It was game over after that with Zen 3's R5 5600X and R7 5800X.
We saw similar insights with other product launches. Intel 10 and Intel 11 seemed to be problematic. Zen3+ looked to be smart buy. Zen4 was good but expensive. Intel 12 looked good but lacked the platform aspect. Intel 13 and Intel 14 looked problematic. Zen5 looks good and bad. Intel 15 or Intel Ultra just looks bad.
Based on what we know so far, the R7 9800X3D looks like something worth paying for in terms of future-proofing. If you want cheaper, stick with the R7 7800X3D instead. It's a similar dilemma to the i5-2500K vs i7-2600K: the smart play would be to get the i7, but even smarter to go i5 for cheap, get on the platform, and upgrade cheaply in the future. We can do that because Zen 6, coming in early 2026, will still be on the AM5 platform.
@@ekinteko A 3080 Ti is still better than a 4070 Super.
I have a question for you guys about something the video did not cover: CUDIMM memory coming into play. I've seen that AMD and MSI tried this memory on the new CPUs; do you think it'll be available for AMD next year or not?
CUDIMM memory isn't a silver bullet for Arrow Lake. We tested 8200 CUDIMM in all of our day one reviews.
6:49 they keep using E-cores because it just makes a lot of sense on a desktop. You only ever need a handful of fast cores, or as much multi-core performance as possible. Workloads where you need tons of cores AND for each of them to be as fast as possible individually don't really exist on desktop. However, workloads like this are very common in the datacenter (web servers, hypervisors, etc.), and thus mainline Xeons are all P-cores, no E-cores.
AMD builds all of their chips from the same base 8-core chiplet - from desktop CPUs to 192-core server behemoths. So that base chiplet has to balance both needs well, and thus no E-cores. Intel is forced to make different monolithic chips for desktop and server anyway, so they don't interfere with each other and each can do their own thing.
I would actually really love to see AMD try out a big-little design as well. Imagine a CPU where one chiplet is 8 fast cores with X3D, and the other is 16 or even 32 simpler cores. Once AMD figures out how to mix different types of chiplets inside the same CPU like Intel did on Arrow Lake, we're in for treats.
AMD already has the dense C-core chiplets they could use for a 8+16C cores config on desktop. And they already have a hybrid design on Strix Point with 4+8C cores. They could do what you suggest today on Ryzen Desktop. But AMD tends to release like one proof-of-concept kinda product before they apply a new design across their different product categories. I guess they're working on fixing kinks with our all favorite OS and its abysmal scheduler before they also do this on desktop.
Other considerations could be that they determined: nah, ARL isn't really beating us, so no need to trouble us with yet another config to get working. And then there might be issues with memory bandwidth limitations to feed all those cores. If that is an issue, working on the memory controller to support faster clocked memory would be higher prio and also a prerequisite to make more cores give you actual tangible uplifts.
@@Hugh_I That's false, Intel offers both a full E-core Xeon and a P-core one.
@@Hugh_I Windows truly is abysmal with big-little scheduling. Literally every other OS (except maybe BSD) has figured it out years ago. My guess is that AMD is letting Intel take the brunt of the development work for big-little support, and is going to slot their C-cores in later with just minor scheduling tweaks instead of having to build the whole damn ecosystem from the ground up.
@@subrezon I’m sure that DragonflyBSD probably has a handle on big.LITTLE cores even.
@@levygaming3133 might be, I haven't been up to date with the latest and greatest in BSD world in the last 3-4 years.
Hi, just want to know if you guys will be doing another high-speed memory test? Especially when the 9800X3D comes out, on different mobos, comparing the 9700X, 9800X3D, 9900X and 9950X on all 21 870-series mobos you tested earlier? I really want to know which combination works best for productivity and gaming, and whether there is any performance gain with 7200 MHz CL34 or with 8000 MHz+ RAM. Thank you so much again for all the hard work you guys do in testing. Your channel is the first I go to, thanks again.
Gents, your chapters are amiss; the "Where Are The Ryzen/Ultra 3s?" one isn't registering in the right location. Cheers
I remember the ryzen hate on here and on Gn when the 1st gen and 2nd gen came out saying AMD sucks for games and is a joke. Man, have times changed. Now Intel is viewed how you all viewed AMD back in 2017
Woah i never expected october 2025 q&a being so early.
Computers are a huge hobby for me and I have 2 systems and a server (Synology NAS, 160TB gross / 120TB net) running at all times in my home. I also keep many hardware parts on hand as spares. I just gave my 12900K with a 3090 to my family because I'm building an Intel 285K computer. I have most of the parts and just took delivery of my Asus Maximus Extreme Z890 MB. Now I'm just waiting for that Intel CPU to hit the shelves here in Canada. It will complement my 7950X3D with 4090 system well when it's done. Can't wait for the 5090 to come out, have to buy 2.
Thank you guys who host this channel for keeping people like me who enjoy this hobby excited on the latest news on all things computers!
You should give that 7950X3D with 4090 to me so a poor fellow can join in on your hobby. 😁😁
@@gymnastchannel7372 LOL, thank for the comment. I'm sure whatever computer you currently own will continue to serve you and your family well. Have a great day!
Why are you posting your life story on a YT comment?
@@mytech6779 For you to read. Thank you for reading
Arrow Lake is really good for workstation applications. I'd still get a Ryzen system due to the platform longevity and the fact that my niche, music production with live MIDI input, heavily favours large cache and therefore the X3D CPUs, but to someone whose sole focus isn't gaming or lightly threaded productivity it's a compelling option.
That, and not having to worry about whether some task is going to get scheduled on an E-core or a P-core. The heterogeneous architecture is fine for mobile stuff, but I wouldn't want it on a desktop processor (even if it supposedly helps multithreaded performance within a specific power budget; I don't see a reason to care on the desktop unless I'm trying to maximize every last inch of power efficiency, and even then, it's not like AMD has failed to remain competitive even WITHOUT E-cores).
I understand the E Core P Core situation on laptops or handheld devices, but desktop? Why?
they say why in the video.
cos you can do 8+16e or whatever cores, but 16-24p cores is too big, too expensive, too hot. amd split them into 8s
@@jermygod If Intel made a 12 P-core CPU instead of 8P+16E, it would consume much more power and have worse MT performance, and games don't really scale significantly past 6 cores.
Desktops are faster with e cores also!
Because programs that benefit from 16 P cores also perform well on 16 e cores.
To add to the Core Ultra 3 discussion, Intel stated that Core Ultra would consist of 5, 7 and 9, while core 'non-ultra' would be 3, 5 and 7 (presumably lower TDP counterparts for the 5 and 7)
Currently the non-ultra are using older generation cores.
Intel always keeps some previous generation marketing names for years after moving to a new series just to use on OEM trash-binned CPUs.
They still used the Pentium and Celeron marketing names a decade after moving to Core2 and then Core-i (They had "Atom" for intentionally designed low power single core products.)
They had the Core 100U series chips last year in laptops which were just a 2P+8E+96EU Alder Lake-U refresh.
Windows scheduling loves to move threads between cores, which I’ve always considered to be a really stupid thing to do (for performance). This is true even when you’re running a single threaded process, because all the other Windows processes need to run and those getting time ends up triggering a switch when the single thread gets re-tasked.
Not sure, maybe a single core would "burn out" earlier if they only ever ran that one core? That would explain why there is that jumping...
All in all, it would be interesting to hear why...
@@haukionkannel Nah, modern CPUs are designed to handle near-constant uptime.
@
Yeah, I know, so I'm not sure why there is that core hopping. We also know that over a long time the CPU does get weaker because of heat/cold cycles and too much current, so there is some degradation, but it should not be huge. My best guess is that a core can only "safely" hold super high frequencies for a short time, and the core hopping is there to keep those max boosts going... but how useful that actually is... seems counterproductive...
Moving from a hot, thermally throttling core to a fresh, cool, boostable core. There are some other reasons on the server side having to do with NUMA and asymmetric IO access, but MS may be lazy and reusing the same scheduler for desktop.
@ None of my machines run hot enough for any thermal throttling. And Windows has had this behavior since before thermal throttling was an issue.
26:45 Nvidia released RT cores in 2018 that are, 6 years later, slowly being used but still far from essential. Had Turing 2000 been released with only Tensor cores and more rasterizing power, games would have looked better. Same for Ampere 3000. Only with Ada 4000 could you say there are a handful of games that truly use those RT cores and the cores are powerful enough on 4080 and higher. Anything below 4080 is useless for RT unless you really get hard for reflections. RT is great for reflections! They should have called it Reflection Cores.
I agree, RT reflections is really the only thing that matters, and maybe shadows. But rasterized shadows are still very good.
@@Dexion845 They are used for work applications far more than gaming. RTX in gaming is still a gimmick, and it's very, very rare to find a game that uses it. Most AAA games do use upscaling, but that just looks like a low resolution made sharp, so it's ass. Raw 4K just looks so much better.
@ Oh? I thought those were the Tensor cores, not RT. But yeah, 4K native looks great, though to be honest not even a 4090 can do that above 60fps in the most graphically demanding games.
Microsoft has gone completely off the rails in the last couple of years. Judging by the absolute mess they made of simple stuff like Notepad, it is obvious they are to blame. I have a 5950X with a 4090 and the thing simply stopped working after 24H2, with constant reboots and freezes. Totally stable under Linux or Win 10, though. OneDrive is so useless we prefer to send SSDs by mail, because the service is total useless crap. It is a hot mess over there.
Make sure you use Revo Uninstaller for the chipset and graphics card drivers if you upgraded or had different hardware in your system before. I had some issues with my 5900X going from the 3080 to the 7900 XTX and upgrading to 24H2, but after that everything runs fine.
@@remsterx Right! Do a clean install. 24H2 gave me a huge performance bump on my 7950X3D.
@@Clutch4IceCream After doing all this and two clean installs, I quit and went back to 10. There are a LOT of horror stories of people RMAing the entire PC with the problem persisting. To me the answer is clear: Windows is the problem, especially after having zero problems with the exact same hardware on different OSes. Those late launches by Intel and AMD also make it very clear. Microsoft is not competent anymore.
@@OficinadoArdito I work with Microsoft Azure, and it's been an absolute cluster*uck this year; it's like they're being run by the marketing team, who have zero understanding of computing. I've ditched Windows entirely now: I have an MBP laptop (which destroys anything x86) and moved to Linux for my gaming PC and Plex server. I don't regret it at all, Windows sucks now.
33:11 gaming hardware nerds need this reminder when they start talking about broad industry decisions. There is so much more to CPUs than gaming, and especially 5-10% differences here and there.
Precisely. Believe it or not, Intel is actually a business… and they respond to what major customers ask them to provide. (Energy efficiency, ecosystem costs, etc.)
@@anzov1n not on a gaming cpu actually. And it’s 5% on their previous gen, not the competition. Weak defense
@@gymnastchannel7372 and they’re still losing in all those metrics, so what’s your point?
It’s memory latency… overclocking the Ring Bus to a 1:2 ratio compared to the memory speed also yields decent results. Tuning the memory to ~65ns or less is a must. No miracles but it does help a lot in most games.
the "late halloween q&a"
It is not about scheduling issues. It is about available thread-level parallelism and overhead. When parallelism is available, E-cores are more performant because they have greater perf/area, but games are not there: roughly 8 threads is the maximum parallelism they can exploit, because most of the computing (a very specialized kind) is done on the GPU anyway. Humbly I will add that Intel was always considered an excellent tweaker of pipelines and their critical paths, and they may have a problem realizing that on TSMC. I also think they have failed to obtain the IPC they targeted with the wider 8-way microarchitecture, with its 500+ entry reorder buffer and 20+ issue width, for whatever reason. And that is key, because frequency is not the way to go (Pentium 4 happened). Apple designed their M processors to be very complex from the beginning and only recently reached 4.5 GHz on TSMC's best node. A couple of additional cycles of latency on top of a 200-300 cycle memory latency I don't think is important.
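The point about games topping out around 8 exploitable threads is essentially Amdahl's law. Here is a tiny sketch with an assumed parallel fraction (the 70% figure is illustrative, not measured) showing why extra E-cores help throughput-style workloads far more than frame times.
```c
// Amdahl's-law sketch of the point above: if a game can only parallelize,
// say, 70% of its per-frame work (an assumed figure), extra cores stop
// helping quickly, which is why E-cores help productivity far more than games.
#include <stdio.h>

int main(void) {
    const double parallel_fraction = 0.70;   // assumption; varies per game
    int cores[] = {1, 2, 4, 8, 16, 24};

    for (int i = 0; i < 6; i++) {
        int n = cores[i];
        double speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n);
        printf("%2d cores -> %.2fx speedup\n", n, speedup);
    }
    // With p = 0.7 the ceiling is 1/(1-0.7) = 3.33x, no matter how many cores.
    return 0;
}
```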
What's even crazier with Windows, scheduling and optimising gaming performance: the Xbox uses Ryzen cores. It's quite interesting how the "save state" works on Xbox; the game is essentially a VM, and they use some variant of Hyper-V to contain games. That's insane, and it sounds like it would be really interesting if Windows could do this: bare-metal optimisation of game code plus OS isolation... And yet the Ryzen Xbox team that must exist has no line or input into the Windows OS?? Is it still the same kernel on Xbox and Windows? Did they drift apart years ago.. 🤷
Check windows sandbox
They have drifted apart. The Xbox OS can be highly tuned since it's literally running on 2-3 SKUs at a time max and every unit is guaranteed to be identical. On the general PC side you can't do that, since hardware varies so wildly: Windows 10 can run on anything from a Core 2 Duo all the way through to Ryzen 9000 and Arrow Lake, for example. The Xbox Series S & X only have two Ryzen-based CPUs to worry about.
Have you guys tried any testing of Arrow Lake with CUDIMMs? Someone said that the different memory might be needed to make it actually a good CPU. It would still be cost-prohibitive, but it would be nice to know whether the CPU is complete trash or just mostly trash.
It is not trash, lmao, it is about as good as the other processors that are available.
JayZ tested CUDIMMS. Didn't solve all the problems. And obv you wouldn't buy 500-600$ worth of ram for the i5 or i7.
@libaf5471 CUDIMM RAM should not cost much more than UDIMM parts. Probably initially it might be a lot more, but it should level out to similar costs.
They claim to have done that but did not get even close to the results that JayZ did so who knows what is happening here.
@@Tugela60 cringe shill
If they keep HT off for the next generations as well, doesn't that mean you need 16 P-cores for gaming?
No, you'd need maybe 8. HT doesn't add more than 30 percent average, and that's on games that utilize it. Some games perform faster with HT disabled.
@@nimrodery Now try HT off in Space Marine. Spoiler: 30% left. I wouldn't be so confident about HT.
@@pavelpenzev8129 That sounds extremely unlikely. I don't have the game but I just watched a performance test with HT off running 140 fps, you're saying HT will net you over 466 fps?
@@nimrodery "Don't trust your eyes". I did tests with and without it on my subject. Result 120 vs 90 online (12700kf rx7700xt 1080p)
@@pavelpenzev8129 Yeah so a slightly above average result. Good stuff. When you wrote the comment you said "30 percent left."
My PC’s still can’t get 24H2. When will it be available for everyone?
You don't want it
@@Wahinies 24H2 actually fixed a lot of random, inexplicable dips while gaming that I had experienced on previous Windows builds on AMD CPUs, and slightly improved frame rates.
The only downside of 24H2 is for benchmarking / XOC stuff, where you actually lose a bit of performance.
If you can't get it from microsoft then download the ISO online and install it. Certain people have uploaded the ISO which can basically be installed into any hardware.
Man, i wish 7800x3d would cost $370 in my country but the average price of it in my country is $600+
Same here in Canada
Then buy $87 7500F and overclock it to match the 7800x3d?
@@juice7661 The 7500F is like 200$ and it's not even close to the 7800X3D in terms of performance.
I hate the E-core / P-core stuff. It's just a band-aid, and a poor one. Intel spent years throwing the kitchen sink at performance and made very poor power/performance trade-offs along the way. Now they need to clean all of that junk up. I don't blame the Windows scheduler one bit for not being able to perfectly figure out which thread is supposed to go where; it's an impossible task for an operating system.
Intel already knew it was going to be a flop; that's why they cancelled the next-in-line 20A node and are going all-in on 18A. 18A needs to be insanely good.
AMD had Ryzen 3 in the 1000 series with the 1200X and 1300X, and these were based on the desktop Zeppelin dies that the 1700X and 1800X used. In the 2000 series, Ryzen 3 was relegated to the 2300G, a notebook APU on an AM4 substrate. Then we had the 3000 series, which gave us the 3100 and 3300X, both based on the Matisse desktop architecture. There were also Ryzen 3 CPUs in the 4000 series, like the 4300G and 4350G, but these again were notebook APUs based on Renoir. The 5000 and 7000 series completely skipped the Ryzen 3 line, and it looks to be the same with the 9000 line. I think the reason is that AMD has now made the APU lineup a separate naming scheme; with the 8000 series being APU-based, they take up the mantle of the Ryzen 3 line.
> The 5000 series and 7000 series completely skipped the Ryzen 3 line and it looks to be the same with the 9000 line.
R3 5100, 5300G, 5300GE, and 8300G all exist, though not in retail packaging.
Also, the 8000 series aren't a separate APU naming scheme - there are 2 without iGPUs (8400F, 8700F) and the 7000 non-F series are all APUs. 8000 series is 4nm instead of 5nm and have RDNA3 iGPUs instead of RDNA2.
Intel is the Boeing of chip makers.
I NEVER pay attention to first-party marketing; I ONLY rely on actual benchmarks and the resulting data. I also build a computer every 5-7 years, so it has to last. The only exception to my build cadence was because my daughter needed a new computer to run a game at settings her old computer couldn't handle, much less run as a good experience. In this case I bought all AMD because I didn't want to spend a lot of money, and I wanted a computer that gave the best performance per dollar and per watt. The 4080 and 4090 were non-starters because of the risk of burning the VASTLY INSUFFICIENT 12VHPWR connector, so my alternatives were a 4070 Ti Super or a 7900 XT or XTX. Because my daughter's computer now has the 7900 XT, the XTX was the remaining choice. 4K ultra gaming is now a reality.
Error Lake.. I love it!
People need to understand a few things about the CPU market, I think. First, AMD has introduced a two-tier CPU system: gaming is X3D, everything else is mainly productivity. There is some crossover, but if you are shopping for gaming you get an X3D. Intel has the issue that, looking at their last four releases, they only gained speed by pumping insane amounts of power into the CPU. Now they need to scale back the power and come up with a new design, which will take at least 2-3 generations (similar to AMD's Zen road). I would not expect anything competitive from Intel for at least the next 3 years.
How hard would it be for Intel to release a SKU that was just P-cores, i.e. spend the transistor budget of the 16 E-cores on another 4 P-cores?
Personally I would pay more for 16 P-cores. Release it as an Extreme version of the processor. Is that feasible at all?
just get AMD lol
They have all-P-core workstation chips: the Xeon w7-2595X with 26 (Alder Lake) P-cores and quad-channel memory.
Partly it's by design, and it's what you should expect. Another part is bugs; it's really bad in some instances where it shouldn't be.
If you run Arrow Lake with just one performance core, it's actually an amazingly well-performing CPU.
You say that people won't stay with something like a 1080ti throughout its life but that's exactly what I did. Now I've recently got a 4090 and plan on keeping it for quite a while. The performance of these things doesn't go backwards over time.
I went from a 1080 to a 6800 XT: double the performance for the same price. You went from a 1080 Ti to a 4090: triple the performance for double the price. I get the 4090 price. What I don't get is the 4070/4080 series pricing. The 4080 Super seems reasonable, so everything below it should have an MSRP of 80% of the original. That would bring a 4070 Ti to a 650 MSRP (it's 799), compared to the 1080 Ti's 699.
6:20 Could somebody please tell me the name of the case shown right there? I'm curious what it is.
MSI Project Zero; it's meant for back-connect motherboards.
Because they lost Jim Keller way too early!!
They had lots of good stuff lined up for Arrow Lake, but cut most of it. Keller said in interviews he's more of a team builder and a consultant than an engineer at this point.
@libaf5471 Yeah, exactly, and most importantly a 'direction setter'. If they had gotten rid of Murphy early on and either made Keller CEO once the writing was on the wall for Swan, or just given him a high enough position to do whatever he wanted, then Intel would be in a massively better place than it is today.
@@Speak_Out_and_Remove_All_Doubt A CEO's job in tech isn't to help the company make anything useful; it's to help the company's investors make money short term. That's unfortunately how it is.
I would be really interested to see how a modern Zen 5 built as an eight-core monolithic die would perform, mostly to see how much performance is left on the table from the chiplet design and the latency to the IMC.
The less you buy, the more you save, with Intel.
I think "the less you buy the more you save" is basic economics as taught in preschool
same with amd.
I have just started looking into gaming on Linux. I wonder how the FPS will differ. The testing I've done so far was successful, booting from an external SSD, and no issues so far.
@@G_MAN_325 I've moved to Nobara. You will lose some FPS, but it's nothing that destroys the experience; I don't regret it at all. If you want an easier time setting things up, go with Mint or Ubuntu, as Nobara can be a bit fiddly. They're also working on making SteamOS a proper desktop option. Linux gaming has come on leaps and bounds in the last few years.
AMD only makes one type of CCD for desktop use. The six-core parts are full 8-core CCDs with two cores fused off. I guess AMD could fuse off two more cores, but my guess is the number of dies suitable for that isn't large enough to make it economical, even for OEMs.
Now it's flipped, so without the heat constraint maybe they'll put 32 MB on both sets of cores now.
Pretty much this. The only time they did that was with Zen 2 and the 3300X/3100 parts. They released those at the end of the Zen 2 cycle, once they had collected enough defective dies that couldn't go into a 3600, and even then those parts sold out instantly with no way for AMD to provide sufficient supply (other than by disabling perfectly good dies). And IIRC the yields on current nodes are actually even better than they were on 7nm back then. It just doesn't make any economic sense for them.
@@Hugh_I AMD could absolutely make a custom half-size 4-core CCD; yields would be even higher, and the economics would magically work.
And they actually did do that: they made a custom quad-core just for a Ryzen Athlon refresh that got "a new and revised box design to help consumers differentiate it", as opposed to, say, a new name.
Can you test the 14900K with Hyper-Threading and undervolting turned off, to see if it matches the performance and power of the 285K?
I think the 14900K would pull ahead even more in quite a few games with HT off.
I forget who said it (maybe Jay?), but it goes like this: "Sometimes you need to take a step sideways before you can move forward again," and that may be what is happening with Arrow Lake and its architecture.
They've been going sideways for years on 14nm, so this is their own fault lol. They're fucking up in every segment, and they just lost their deal with TSMC, so now they have to pay full price for the new wafers 😭 We are not seeing a good Intel chip until after 2026, and it will be mediocre at best at a ridiculous price, like this one was.
About Intel's reasoning for using economy cores: another important aspect is that they were absolutely destroyed in the laptop space by Apple in terms of power consumption, thermals, and fan noise. The E-cores were a stopgap solution for that without having to do a more radical redesign of the entire CPU architecture.
But that's laptops. For desktop PCs they are pretty much useless.
@@dantebg100 Well, they are good for inflating multi-threaded benchmark scores 😅
Apple has E-cores.
The i7-12700H matched the M1 Pro in performance per watt, and the i9-13980HX at 120 watts is as fast as a 14700K at 250.
Apple uses ARM, and performance per watt is worse than x86 because it's harder to optimize for ARM on the desktop than for x86.
Wasted pretty Z890 mobos ..... 😭
You bought into Z890? After the 12 and 13 problems? Oof.
I'm eyeing MSI's lineup for Intel and yes, I'm a bit sad that they haven't brought some of those boards to AM5 as well.
@@heinzwaldpfad 13 and 14, 12 is fine. And the problems are easily solvable, if you're not afraid of a BIOS screen that is.
That’s what’s bad about the whole thing - Intel aren’t the only ones losing out because of this massive flop, all of the motherboard vendors are losing out just as much designing motherboards that people won’t buy
@@heinzwaldpfad Been on Intel all my life. Other than 13th & 14th gen self-destructing series, I was all ready to jump onto the 285K ... until I realized it may not even beat grandpa 5800X3D in gaming. LOL. So, it'll be my first time on either an AMD 9950X3D or 9800X3D. Shame the Z890 mobos are nicer than the X870E ones.
I can't recall, did you guys test with CUDIMM memory? Thanks.
Yes, CUDIMM memory isn't a silver bullet for Arrow Lake. We tested 8200 CUDIMM in all of our day one reviews.
Looks like Intel is making Windows get ready for ARM processors: P-cores, E-cores, X-cores, Y-cores, LGBTQ cores LOL
You forgot AI cores and Crypto cores
Mmm...gimme some XXX cores, R cores, and PG-13 cores while you're at it.
🥱
One of the points of chiplets for AMD is to use those same consumer chiplets in server products and vice versa, while Intel made chiplets for the sake of having chiplets (sort of): they spent lots of money getting them working, and unless they have some kind of brilliant plan, they can only use them for desktop CPUs (I suppose they are at least shared between multiple SKUs).
I think Intel shot itself in the foot with the whole E-core approach. Sure, the "initial" idea was great: just add some "low power cores" to get more multithreaded performance without increasing the power draw too much.
But then Intel failed to realize that E-cores can be really good "if and only if" there are many of them. Like with GPUs, single-core performance is not that important: a whole lot of low-power cores can rip through any multithreaded load without using that much energy. What Intel did instead was increase E-core performance (to match the P-cores from 5 years ago).
Imagine the scenario: an Intel CPU with 8 P-cores and 64 (or even 128, 256, etc.) E-cores, where the E-cores clock at ~2 GHz and are optimized for low power consumption and small die area (instead of raw performance). A CPU like that would lead "all" multicore benchmarks by 100%+.
We programmers are lazy people: if I can "omp parallel for" something without putting much thought into it, I'll do it. Switching to e.g. CUDA for massively parallel tasks brings a whole additional bag of problems with it. Even if it's 100x faster, you need a decent GPU, an additional CUDA compile step, pushing data across the bus, GPU memory not being enough, data structures not being GPU optimized, etc. etc. It takes a lot more time and puts a lot of restrictions on you and the customer (the one you are writing the program for). Give me an "omp parallel for E" with 256 E-cores and my productivity increases by a factor of 2.
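To make the "omp parallel for" point concrete, here is a minimal C++ sketch (my own illustrative example, not something from the video; the array contents and loop body are arbitrary, and it assumes a compiler with OpenMP enabled, e.g. g++ -fopenmp):

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const long long n = 1 << 24;          // ~16M elements, purely illustrative
    std::vector<double> data(n, 1.0);
    double sum = 0.0;

    // One pragma spreads the loop across every core the OS exposes,
    // P-cores and E-cores alike. No separate toolchain, no host/device
    // copies, no GPU memory limits to worry about.
    #pragma omp parallel for reduction(+:sum)
    for (long long i = 0; i < n; ++i) {
        sum += data[i] * data[i];
    }

    std::printf("sum = %.1f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}
```

That is the whole appeal: one line of annotation versus restructuring data and builds around CUDA. How well it scales on a hybrid CPU then depends on the OS scheduler, which is exactly the concern raised in the next comment.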
The other problem is task scheduling. Programs want all the threads they put tasks on to be identical in performance; if a program divides tasks across P-cores and E-cores with wildly different processing speeds, that causes huge problems with stability and hangups. Your task would work just fine if the program knew to ONLY use the E-cores and stay off the P-cores, but as we have seen with the dogshit Windows 11 scheduler, that is clearly not the case. Another thing that enrages me is the "single core boost" scam. They push suicide voltages into 1-2 cores to clock them higher, and this does nothing for real workloads or gaming or anything any user would want. All it does is make the single-thread score in synthetic benchmarks go higher, to make their bullshit processor seem like it has more ST than it actually does on review sites/channels, and to make the new CPU architecture look 10-15% faster when it is actually only 3-4% faster, because otherwise no one would buy the garbage.
@@HoldinContempt You "can" run your threads on specific cores (e.g. look up "ThreadAffinityMask" under Win32; it's more or less a "suggestion" to the system, but it usually just works, so no questions asked). Because all cores were equal in the past, though, most people don't care and just assume all threads run at the same speed (and it's an "Intel only thing"). The next huge problem is that if the code is a bit old (predating E-cores), it obviously does not select specific cores for specific tasks. Then the scheduler should do its magic, but realistically speaking it can only guess what to do and never "know" what to do... Can't be helped... :)
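For reference, a minimal sketch of what the Win32 affinity call looks like (my own example; the 0x0F mask is arbitrary, and which mask bits correspond to P-cores or E-cores is machine-specific, normally discovered via GetLogicalProcessorInformationEx):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Bit i of the mask allows this thread to run on logical processor i.
    // 0x0F = logical processors 0-3; whether those are P-cores or E-cores
    // depends entirely on the machine, so treat the value as illustrative.
    DWORD_PTR desired = 0x0F;

    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), desired);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Pinned; previous mask was 0x%llx\n",
                static_cast<unsigned long long>(previous));

    // ... work done here stays on the selected logical processors,
    // scheduler permitting ...
    return 0;
}
```

As the comment says, this is closer to a strong hint than a guarantee, and code written before hybrid CPUs existed never calls it, so the scheduler is left guessing.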
At this point you're describing a server CPU.
The funny thing about the comparison between Zen 5 and Arrow Lake to me is that *both* are revamps in pretty significant ways, but AMD feels like more of a refinement because they achieved parity or a small uplift in most cases, whereas Intel struggled to even achieve parity in most cases. I guess Intel is dealing with even more of a change because it's their first chiplet-based CPU, but I think it's reasonable to expect AMD to also achieve significant refinements in generations to come based on their architectural reset with Zen 5. It's going to be tough for Intel.
I have not built an Intel-based PC in 16 years. The fact that you usually have to change the mobo for any future CPU upgrade has deterred me; not the case with AMD. My view is not that Arrow Lake is particularly bad, but that all Intel CPUs are bad by platform design rather than by any performance criteria.
If you built like 2 computers over all these years, it's quite possible you didn't screw yourself by sticking with AMD. However, Intel CPUs were better for many of those years, and if you buried your head in the sand and settled for less, well, that's what you did.
@Conumdrum I'm about to upgrade my Ryzen 3600 to something considerably better for less than £200. No mobo required, just a BIOS update. Did I really choose wrong?
You're cherry-picking. What did you build 16 years ago, and how long did you use that hot mess?
@Conumdrum You forgot the quotation marks when you said "better".
@@Conumdrum That was answered. "I have not built an Intel based PC in 16 years" that means the "hot mess" was an Intel build.
I'm in a great position. I'm looking to upgrade from a 2014 CPU, an i7-5960X, so I'm looking at a 300-400% boost! To be honest, I'm only upgrading because Windows 10 goes out of support next year.
Drop HT, only have 8 P cores with 8 threads.
Drop frequency as well since the previous gen was cooking itself having to clock so high to remain competitive.
I don't know what anyone was expecting there.
HT isn't the end-all, be-all. We shall see.
The 285K seems to be using all 24 cores in Starfield.
36:00 It's important to note that dropping the GPU load of an application DOES NOT mean system performance becomes CPU limited. This is a big misconception; it can quite simply be software limited. You should be taking an application that you know is highly CPU demanding, i.e. one that will hit a high FPS (e.g. CS:GO, Valorant, etc.) based largely on CPU performance. When you drop the GPU load, many applications will simply perform based on OS scheduling, latency, and memory optimization, and at that level of demand you can potentially see application issues influence performance too.
I'll give you an example: Eve Online is a game that is neither CPU limited nor GPU limited; it is server-side limited. There are so many server-side checks going on, and so many entities being tracked, that server-side performance is the single biggest factor in your FPS a lot of the time. You will get better FPS simply by going to a less populated system in game.
Some applications are simply designed not to hammer the CPU as hard as they possibly could. There have been times in the past where OS updates or game updates radically changed performance on a given platform. You are better off finding games/apps that are clock-speed dependent and then multithread dependent. Otherwise you might find in 12 months' time that the 10% performance gap you saw was simply closed with software. If you ignore the software side of this whole equation, you do so at your own peril, because there will be times when a new game comes out that catches you out.
Also, more cache !== better; for that matter, more memory doesn't necessarily mean better performance either, and in extreme cases it can actually end up worse. A lot of performance is governed by better access times and by maintaining frequency, which comes down to thermals. The combination of faster cache access and faster RAM access via DDR5 has translated into better performance for both Intel and AMD. If you want to take away a lesson from that, it should be that performance does not solely come down to CPU core goes brrrr or GPU goes brrr or big number === better.
The only reason Intel used E-cores was power consumption and heat. If they had gone all P-cores, their chips would have been melting at 500 watts or more...
They still do
It's because their P-cores are physically too large and use too much power for the amount of performance they give. E-cores were meant to tackle this: to get close to P-core IPC (with Skymont) while being much smaller and using significantly less power.
That's not why Intel uses E-cores, or at least not the main reason. E-cores are die-space efficient: 4 E-cores deliver roughly 130-150% of the MT performance of a single P-core while using the same or less die space. That means Intel can get roughly 14 P-cores' worth of performance from an 8P + 16E configuration rather than 12P, while keeping the die size the same (rough math spelled out below). And using more cores that are slower is generally more power efficient than a few cores that are more powerful (because of the diminishing returns of raising clocks).
And as HUB has shown, games don't really care about more than 8 cores, which is why Intel doesn't bother providing more P-cores; but for applications that do scale (i.e. professional tasks), E-cores work just fine. Heck, the 5600X3D shows that games don't really scale past 6 cores.
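Spelling out that die-space arithmetic, taking the 130-150% figure above at face value (it's the commenter's estimate, not a measured number): 16 E-cores are 4 clusters of 4, each cluster worth roughly 1.3-1.5 P-cores of MT throughput but only about 1 P-core of area. So the E-cores contribute roughly 5.2-6 P-cores' worth of performance in about 4 P-cores' worth of space, and 8P + 16E lands at roughly 13-14 P-core equivalents in approximately the area of a 12 P-core die.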
@@lycanthoss Apologise more....
E-cores aren't power efficient tho
The biggest problem is that games are not designed to take advantage of newer CPU types. It used to be that games were designed to take the best advantage of modern hardware.
Error Lake is a fitting name.
I think Ryzen 5 should become Ryzen 3, since six cores is the new standard. Another thing they could do is make a regular, non-X six-core Ryzen. I think one of those would be the best solution.
Now imagine if AMD had gone bankrupt in the past and the Zen 1 moment had never happened... Intel would probably be getting completely destroyed by Apple and the first wave of ARM chips by now...
9:56 Imagine a dual-CPU system: the left socket takes a P-core CPU optimized for gaming (a 285PK, for example, with 12 P-cores), and the right socket takes an E-core CPU (a 285EK, for example, with 48 E-cores, optimized for productivity). You could populate one or both to suit your needs.
Latency would be atrocious. There's a reason dual CPU systems are only used in productivity stuff.
@juice7661 thank you for the explanation!
C'mon guys... easy answer to the "why E-cores" question: to keep up with AMD's core count and nothing more. A smokescreen by Intel to say "hey, we have that many cores also." Most computer users are just that: they know the name, they look at the marketing, and they make their buying decisions on that. Intel pulled a fast one on consumers who are not computer savvy. E-cores were and are nothing more than a marketing ploy. Intel was forced into this type of architecture because their platform couldn't scale, and what do they do next? Come out with another bad idea in "Core Ultra". Not a road to success IMO. You use the excuse of core size limitations, "what is Intel to do?" And yet, just look at their previous 2011-3 socket; it was huge. I'm sure they could have returned to that socket size if core real estate was a problem. They've had a new socket every two gens anyway. Sadly, Intel will absolutely be gobbled up by another conglomerate within a few short years. The writing is on the wall for Intel. They got too comfortable as #1 and did not innovate. Their own fault, so lay the blame squarely on Intel's shoulders and stop making excuses for their lack of leadership vision. BTW, your hardware reviews are top notch. Best on the internet... aka The Screensavers.
My question got picked! Thanks Tim and Steve, makes perfect sense too :)
Arrow Lake: 77 cycles for memory controller
Meteor Lake: 80 cycles for memory controller
Lunar Lake: 55 cycles for memory controller
Raptor Lake: 55 cycles for memory controller
I think if they just re-released Arrow Lake as a monolithic CPU without the iGPU, display controller, and NPU, they'd have a solid competitor.
They should at least have included the memory controller on the compute tile; that would improve latency a lot. A nanosecond or two of latency to the SoC tile doesn't matter.
Making a monolithic die with everything is probably not cost effective on 3nm.
@@aos32 Making Alder Lake and Raptor Lake on Intel's own nodes, which are way better than their overrated TSMC equivalents in MT/mm², was the cost-effective solution.
There's no way I would spend almost 400 USD on a 7800X3D when a 13700K is around 300 USD.
@@aos32 Apple can do that
Are the E-cores used for background tasks? As I understand it, that's why Intel removed Hyper-Threading? Are there any benchmarks for the E-cores on their own?
You mentioned 64 GB RAM. There are two titles I've read where 64 GB can help a lot: the flight simulator DCS (where 56 GB has been reported), and Cities: Skylines 2. And maybe Star Citizen, whenever that actually releases.
Looks like Intel aimed the performance arrow but it landed in the Dead Sea, and it simply offers better power efficiency than 14th gen. The major downside is needing a new mobo, when AM5 users can swap a 7800X3D for a 9800X3D, or even a Zen 6 X3D if they stick with AM5.
intel :(
Maybe we shouldn't call something a flop just because it's not great at gaming?
Agree, there is more to CPUs than playing games.
It also brings very limited improvements outside of gaming, at a much higher price... The 7950X and 9950X mostly match it there, and the 9950X3D is incoming soon.
@@nolategame6367 I'm not saying it's great or anything, but there are workloads it excels in. Just not gaming, so the YouTube creators say it sucks. Tile-based rendering, for example, is an extremely time-consuming workload; improvements there are more important than going from 120fps to 160fps. And not everyone, by a huge percentage, owns a 4090 and plays at 1080p. Talk about a niche market.
And I'm not singling out Intel here. These guys truly believe that CPUs, unlike virtually everything else in the world, should just get cheaper and cheaper. Last year's closeout CPUs are a better deal, just like every other consumer item when the new models come out; if they weren't, the old stock would never sell. And this bothers them. These guys don't understand the economics of retail.
The thing is, the percentage of consumers buying CPUs for gaming is larger than the percentage buying them for productivity work.
No Ryzen 3 parts since the 3000 series, but they did release a quad-core Zen 4 Epyc CPU that works on AM5 boards, the Epyc 4124P.
I hate win 11