People laughing at Zen 5%
Intel: i got you bro
Good one
Sanctions on China have limited their chips to old series..
Keep in mind that it's possible that one or both of these architectures could improve over time as their new architectural features are better taken advantage of in software. Just keep an open mind, and don't assume that benchmarks at launch are going to determine relative performance over the lifetime of the products. I know for sure that the Ryzen 9000 CPUs have some very major changes to the architecture, but I don't know whether Arrow Lake has any major architectural changes that could benefit from software being better designed to take advantage of them.
That being said, you can't count on performance improving a lot over time with better software optimizations, and even when this does happen, sometimes an architecture can be over 4 years old by the time a wide range of software starts to be well optimized for new architectural features that were introduced with that CPU.
If you remember when the first quad-core CPUs came out, you are probably aware of how long it took before a significant number of games could take advantage of them. The same is true for quad-cores with SMT: i7s were considered a poor value for gaming for a long time, and they were, but eventually a lot of games became well optimized for quad-cores with SMT. If you bought a first- or second-gen i7, you would have waited something like 6 years or more for a wide range of games to significantly benefit from the 4-core/8-thread architecture of those CPUs.
I really like Techpowerup's format.
Underrated channel
This video is straight to the point. I like it.
Wondering whether it's hardware latency, frequency, or task scheduling causing the slower performance in games.
Yes
I saw a Chinese review that disabled the E-cores and got around a 10-15% uplift in games. It's probably a bug and will be fixed. Games like Cyberpunk see massive FPS drops, down to 12600K levels. There are some outlier games like that.
This YouTube review really is a breath of fresh air in comparison to the shouty rest.
It's really informative without being obtrusive.
Great work, just like the website.
I thought this too. Yesterday I had to turn off a very popular channel reviewing these chips because the host spoke so fast and loud I could barely keep up. The scrolling graphs work very well here too.
@@PhilipBryden Gamers Nexus: “INNFOORRMAATIIOOONNN!!”
@@ChiquitaSpeaks I love GN's underlying content, but I get tired of all the meme baiting at times. I can only take, "Thanks, Steve!" so many times before I get a bit annoyed.
@@EbonySaints I was talking more about how they pack on so much detail and how the host talks fast nonstop through the video
I'm glad they changed the naming scheme so it's easier to differentiate the shittier SKUs.
The lower-tier 245K is the best value CPU out of the lineup so far. I'm not sure I'd call it a good value, but it's not terrible either. The cost of current motherboards is the bigger problem.
Until more affordable motherboards are available, these CPUs only make sense for people who just happen to be running the few applications in which these CPUs actually perform very competitively. That being said, a less informed buyer who buys a 245K or 265K out of sheer loyalty isn't getting a garbage product; they're just paying more than they need to. If they're really lucky, performance may actually improve significantly over time as various applications become better optimized for the architecture.
They couldn't call it 15900K because it's slower than 13900K
@@club4ghz that would have been the kicker 😂
I think the price for the 7800X3D in the perf-per-$ graph is from August or September =)
With it being a new architecture for Intel CPUs, I'm not surprised that there are no major performance gains, or even a slight regression.
What will be interesting is to see what Intel is able to do over the next couple of revisions or generations of the new design; only then will we really know whether the design and direction were the correct move or a mistake for Intel.
Look at pretty much any major change: the real performance gains often take a while to appear.
One of the easiest comparisons is when memory modules advance: when first introduced, you see nowhere near the performance you'll see after a year or two, once the process and technology mature.
The laptop Ultra 200 CPUs are the best in terms of efficiency for x86 and incredibly close to their ARM counterparts, much better than Ryzen AI. I was kind of hoping to see the same revolution on desktop, but hey, at least they've got the laptop division going strong, so it's not the end.
The jokes have been the best thing about Arrow Lake.
Arrow in the knee Lake 😂
So how does the NPU work, and how was it reviewed? This is unique to Ultra, right?
Keep making these videos please, they complement your articles.
7800X3D and 9800X3D CPUs are likely to be sold out soon.....
Like it's been said, it's the 2500K moment again, when enthusiasts all bought it and ran their CPUs at 4.5 GHz+, back in 2011.
AMD is competing with its own CPUs now; no competition.
What? TechPowerUp on YouTube? Nice! My favourite website for tech news!
Have you tried the Intel Core Ultra 7 265KF (Arrow Lake) Socket LGA 1851 processor in the UAE from GCC Gamers yet?
No, though it should have the same performance as the 265K just without the iGPU for ~$20 savings.
Underrated channel bruh
Nice way of showing that for most games in these sorts of benchmarks, all modern CPUs (7600X+, 12900K+) have similar gaming performance.
I would show an example where the CPUs perform differently and note that these are outliers.
e.g. Elden Ring (no RT) is significantly worse on Arrow Lake than on other CPUs and still shows differences at 4K.
Spider-Man Remastered (with RT) is significantly better on Arrow Lake than on other CPUs and still shows differences at 4K.
I'm also curious if there are game scenarios where the CPU does start bogging down performance at 4K.
The impact of computing updates for all units on a large RTS map or flight sim, or NPC activity in a crowded city, might show real differences,
i.e. favor faster main memory access or more cores.
At 4K it's all about the GPU; the CPU isn't doing much.
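To make the RTS/flight-sim point above concrete, here is a toy C sketch (all names and numbers are hypothetical) of a per-frame unit-update tick: its cost scales with the square of the unit count and is completely independent of render resolution, which is exactly the kind of work that can leave the CPU as the bottleneck even at 4K.

    /* toy_rts_tick.c -- hypothetical O(n^2) separation pass, a common
       pattern in RTS crowd logic; cost depends on unit count, not pixels. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { float x, y, vx, vy; } Unit;

    void tick(Unit *units, size_t n, float dt) {
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < n; j++) {      /* every pair: ~n^2 work */
                if (i == j) continue;
                float dx = units[i].x - units[j].x;
                float dy = units[i].y - units[j].y;
                float d2 = dx * dx + dy * dy + 1e-6f;
                if (d2 < 4.0f) {                  /* push apart when too close */
                    units[i].vx += dx / d2;
                    units[i].vy += dy / d2;
                }
            }
            units[i].x += units[i].vx * dt;
            units[i].y += units[i].vy * dt;
        }
    }

    int main(void) {
        size_t n = 5000;                          /* doubling this ~4x the CPU cost */
        Unit *units = calloc(n, sizeof *units);
        if (!units) return 1;
        for (size_t i = 0; i < n; i++) {
            units[i].x = (float)(i % 100);
            units[i].y = (float)(i / 100);
        }
        for (int frame = 0; frame < 10; frame++)  /* 10 simulated frames */
            tick(units, n, 1.0f / 60.0f);
        printf("unit 0 ended at (%.2f, %.2f)\n", units[0].x, units[0].y);
        free(units);
        return 0;
    }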
I had decided on buying an i7-14700K, but read about all the scary TDP problems, so was hoping for a good Core Ultra since it just launched. After this video, seems like Core Ultra is no better. I am thinking of going back to the i7-14700K, but not sure if the latest fixes from Intel are really effective. So I am stuck. Any advice?
Any chance BIOS and OS updates can salvage Arrow Lake?
Do those power numbers reflect the 40-50 W they seem to be pulling from the 24-pin?
That is what we avoided by using the MSI board to test power since it doesn't pull power from the 24 pin. The ASUS does.
"The ASUS Z890 Hero motherboard feeds four of the CPU VRM phases from the 24-pin ATX connector, instead of the 8-pins, which makes it impossible to measure CPU-only power with dedicated test equipment (we're not trusting any CPU software power sensors). You either lose some power because only the two 8-pin CPU connectors are measured, or you end up including power for the GPU, chips, and storage when measuring the two 8-pin connectors along with the 24-pin ATX. For this reason, we used an MSI Z890 Carbon motherboard exclusively for the power consumption tests in this review."
@@TechPowerUp Is it only the ASUS mobo doing this?
thanks! great to know :)
@@philmarsden9594 W1zzard here from TPU. I discussed this at length with all the major mobo vendors over the last two weeks. Only ASUS has switched their VRM design (which is a reasonable approach for a general consumer audience imo). My current info is that all "Maximus" branded boards from ASUS use it, I have a STRIX here, too, and I think that's affected as well.
But are they snappy like before?
This means that the 9800X3D will be out of stock for a long time.
I'd hold out until the 9900X3D and 9950X3D come out, because the 9800X3D will not be a big increase over the 7800X3D, and because now both CCDs of the Ryzen 9 X3D CPUs will have access to the L3 cache, likely increasing performance.
@@Hardcore_Remixer Incorrect. The 9800X3D will be 30% faster in multicore performance than the 7800X3D (verified in Cinebench) and will overclock much higher, because in Zen 5 the 3D V-Cache is now stacked on the bottom, not on the top, giving fewer restrictions on X3D OC headroom!
I understand that games are what most people are interested in, but I'd like to see more focused tests on single-threaded and multi-threaded performance. Tests involving arithmetic-heavy programs, compile times, FFmpeg encoding/decoding, benchmarks (compiled with march=arrowlake), just to name a few. If the 285K is able to run a scientific simulation 15% faster than the 14900K, I'd like to know that.
Check out the full reviews in the description! We have all that for each CPU
@@TechPowerUp I read it. It's a good writeup
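For anyone who wants to try the kind of test described a few comments up, here is a minimal, hypothetical arithmetic-heavy micro-benchmark (the loop, file name, and iteration count are just a sketch); build it twice and compare run times, noting that -march=arrowlake needs a recent compiler (GCC 14 or newer):

    /* bench.c -- hypothetical arithmetic-heavy micro-benchmark.
       gcc -O3 -march=x86-64-v3 bench.c -o bench_generic
       gcc -O3 -march=arrowlake bench.c -o bench_arl    (GCC >= 14)  */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        double acc = 0.0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long long i = 1; i <= 200000000LL; i++) {  /* dependent FP chain */
            double x = (double)i;
            acc += 1.0 / (x * x);                       /* converges to pi^2/6 */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("sum = %.9f in %.3f s\n", acc, secs);
        return 0;
    }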
Holy shit, this is a Bulldozer moment for Intel.
MTL was Bulldozer, but so bad on desktop that it was never released, and Intel chose the RPL Refresh instead.
ARL is Piledriver: improving efficiency so the chips are almost as good as the previous generation, rather than clearly behind it.
Nono. The 14900K was the Bulldozer moment for Intel, because it ate a lot of power and degraded itself for a small performance increase. This looks... meh... like a Ryzen 1000 moment, but with lower performance instead of better (because they reduced the power consumption).
Who knows, maybe they'll get something good next generation, but I wouldn't hold my breath.
@@Hardcore_Remixer It's a Zen 1 moment, but with way higher expectations. For AMD, all that Ryzen had to do during that time was be competent. For Intel, they needed to knock it out of the park. They barely missed the first one and completely missed the second.
@@EbonySaints Zen 1 was extremely competitively priced. Zen 1 CPUs offered far superior perf per dollar compared to Intel CPUs at the time, and the motherboards weren't very expensive either. Zen 1 CPUs were also quite cheap to make, whereas these Intel CPUs are more expensive to make than AMD's current Zen 5 CPUs, which helps explain why Intel is trying to price them so high.
These Intel CPUs are absolutely not competitively priced, and the motherboards are extremely expensive. Much cheaper motherboards should come out for these Intel chips sooner or later, but we'll have to see how affordable they end up being.
The real battle will be for OEM sales to laptop and pre-built system makers, and Intel will likely be having to decide between selling chips at cost or even at a loss, or continuing to lose market share to AMD.
Hopefully their next generation will be a lot better.
I agree that Ryzen AM5 is better price/perf, especially for a casual user who won't tweak anything. I think the 7600X and 7700X are cheap now, and they're fast.
They are back to Alder Lake power consumption at higher performance, so that's an improvement.
Can we have 8600G and 8700G charts?
Hasn't gone through our full suite of tests, though we did cover the 8600G in our ASRock X600 content. th-cam.com/video/Xeqt9_OEo9A/w-d-xo.html www.techpowerup.com/review/asrock-deskmini-x600-barebones-amd-ryzen-8600g-mini-pc/
W1zzard here. I don't have either of those processors, and AMD said they have too few samples. I bought an 8500G because it was an interesting SKU with the C-cores, but at this point in the cycle it makes little sense to buy an 8600G or even an 8700G just to have it in comparison charts.
@@TechPowerUp Okay, thanks for the work so far
No, they did not miss the mark; they went back in time and realized that a lot of gamers are not buying new hardware. With this Arrow Lake CPU, 1 core per 1 thread is not bad. Also, you have to realize the IPC is there for applications, so it is up to the PC user to understand the Arrow Lake CPU and what it can do for them. If I had to do it all over again, I would see myself using the Arrow Lake Ultra 7 265K with the 8GB Arc GPU, because 8GB of VRAM is all I need for my PC needs as well as retro gaming.
this is even worse than 11th gen
Well 11th gen wasn't worse than 10th gen, so that's already better than this gen is doing.
@@Fearless13468 It wasn't entirely better either. Remember i9 11900K (8c/16t) vs i9 10900K (10c/20t)?
In gaming the 11900K was usually better, but sometimes those 2 extra cores would turn the tables in all-core CPU-intensive tasks.
Ikr
11th gen backported an otherwise decent architecture from 10 nm to 14 nm, which made it suck. Meteor Lake and Arrow Lake seem to be fundamentally broken; I was hopeful Arrow Lake could brute-force a win on the new node even after Meteor Lake failed, but I was wrong.
@@Fearless13468 Partly correct. Most of the 11th gen was better than the 10th gen, but not all of it. Remember the i9-10900K with 10c/20t and then the i9-11900K with 8c/16t? Dropping 2 cores wasn't a wise move by Intel.
CPU production has hit the ceiling of what is physically possible with this tech. The differences with new CPUs are not worth even mentioning anymore.
Not at all. Arrow Lake is built on a smaller process node than Zen 5 (3 nm vs 4 nm), yet Zen 5 is still more powerful and efficient. Not only that, 2 nm process nodes exist, and it looks like they will be used in 2028 and later. There are not huge gains left to get, but there is certainly something. To say that CPU production has hit the ceiling is just corporate greed propaganda.
Actually, there is still room for improvement. Zen 5 also struggles against Zen 4, but only on Windows, because the Windows scheduler just won't keep program threads on the cores they were running on, making context switches more expensive than they have to be, especially for AMD's chiplet architecture. Linux doesn't have this problem and shows the actual performance.
At Hardware Unboxed I've seen the 9700X doing better than the 9950X in at least one game. That speaks volumes about the Windows scheduler.
And, as someone above my comment said, Intel used a newer node for their 15th gen than AMD used for Zen 5.
Even with this, these are both inefficient x86-64 architectures. If we look at the ARM side, we can see Apple getting crazy efficiency with their M chips, and Qualcomm at least getting somewhere there.
If ARM gets enough support then we might soon unlock new limits for performance without having to reach insane amounts of power.
This might not seem like a lot, but I think we can reach at least 500% of the performance we have now.
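As background for the scheduler discussion above, here is a minimal, hypothetical Win32 sketch of thread pinning, i.e. the thing the commenter says Windows fails to do automatically. SetThreadAffinityMask is a real Win32 call; the file name, mask value, and surrounding code are illustrative assumptions only:

    /* pin.c -- pin the current thread to logical CPU 0 so the Windows
       scheduler cannot migrate it between cores (or across CCDs).     */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        DWORD_PTR mask = 1;  /* bit 0 = logical processor 0; example value */
        DWORD_PTR prev = SetThreadAffinityMask(GetCurrentThread(), mask);
        if (prev == 0) {
            fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        /* ...hot loop would run here without cross-core migrations... */
        puts("pinned to logical CPU 0");
        return 0;
    }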
Finally some honest YouTuber with 1440p and 4K resolution benchmarks. I got a 59 FPS minimum and 52 FPS average increase in Far Cry 6 at 4K with RTX off… going from a 9700K to a 265K with a 4070 Ti. I can't stand people still talking about 1080p resolution in 2024; it's been over two decades already, time to move on. Also, some believe the 9700K is still good enough, but obviously my personal benchmarks prove them wrong, even at 4K.🤦‍♂️
As a result, we have an Intel processor from 3 years ago (the i7-12700KF), overclocked to 5.0/4.0, that is at the level of, or more powerful than, the newly released line. And then you say that Intel doesn't like its customers )))
Gonna stick with my Intel 12700, and if I need to upgrade, 14th gen will probably be fixed by then 😂
I keep my PC for 10+ years, if possible. Top-tier CPU performance today will be a bottleneck after a decade .........
Thus, I'm getting either the 9800X3D or 9950X3D when they get released with the 5090 in a few months.
TechPowerUp is legendary, unlike UserBenchmark, a.k.a. the AMD hater.
Userbenchmark is jaw-droppingly biased. It's really quite bizarre for such a successful and well established website. The guy who runs that website is literally mentally ill and needs help.
Even the r/Intel sub-reddit mods have banned all links and posts of information from Userbenchmark.
I like both AMD and Intel and have PCs based on both. I do the same with video cards having Nvidia and AMD and Intel Arc.
One thing Intel might deserve some credit for that I don't see them given in reviews of these "Ultra" CPUs:
Removal of Hyperthreading is an interesting architectural change that could result in greater security and efficiencies. May also be good for overclocking.
You know you can do the same thing on previous Intel CPUs and on AMD CPUs by disabling Hyperthreading/SMT in the BIOS. This architectural change didn't increase efficiency beyond what you'd observe from the node change, and it doesn't clearly help overclocking either. Removing Hyperthreading does not guarantee greater security, other than maybe against the vulnerabilities attributed to it.
Intel has tons of great engineers. The pricing is too high on these CPUs to make sense for me to recommend to anybody except for the few who need the best performance in the minority of applications that these CPUs perform best in, but, people don't understand just how incredibly difficult it is to design and manufacture a new CPU, or to get software to make the most of those CPUs.
People should at least keep an open mind. Much like with the Ryzen 9000 series, it's entirely possible that this new architecture will see a lot of performance improvements over time as software becomes better designed to take advantage of their new architectural features. I know for a fact that Ryzen 9000 has a lot of changes to the architecture which may take a lot of time before software can be well designed to take advantage of it, but the same could be true for these new Intel CPUs as well, and it's hard to predict how these CPUs will be seen in two or three years from now.
@@syncmonism Well, I agree, but most people just wanna laugh at Intel. What can you say? People are just people. It's quite a mixed bag of comments.
The 5800X3D clowning the new CPUs near the top of the stack is impressive and hilarious. 😂
GOAT.
That is officially Waste of Sand 2.0
** " Windows 23H2 " **
What a wet fart of a product launch.
Honestly, I would rather have a Core Ultra #1234... that lasts 5 years.
...than a 14900K that shits its pants if I run the calculator too hard!
Agreed. The 285K uses much less power, though still not close to comparable Ryzen CPUs.
Why do you complain so much? The power consumption has decreased a lot, and it's quite competitive in single-thread and multi-thread, even if it's not the fastest in gaming.
The power consumption is NOT very competitive in heavily threaded applications. The 285K offers very good heavily threaded performance, but in heavily threaded efficiency it is unfortunately still way behind CPUs like the Ryzen 9950X, and way behind even the Ryzen 5950X, a three-generation-old CPU on an older motherboard platform.
SKU here, SKU there
it's "model"
Terrible Gen from Intel
Only Ultra disappointing so far.
Ultra Meh.
Hahahaha
Intel=Ubisoft
Still, there is potential.
Right off the bat, they messed up the RAM. The correct RAM is CUDIMM 8200 or higher. Arrow Lake supports CUDIMM, and good motherboards support 10000+ CUDIMM speeds. If you reviewed with the correct RAM, the 285K would be on top. The specs list UDIMM up to 6400, but using that RAM you lose a fair bit of performance. Arrow Lake wasn't designed for UDIMM. There is a big difference in performance.
The only reason you would use old, slow UDIMM instead and throw the performance away is if you are paid by AMD, which doesn't support CUDIMM, or you just don't know any better.
So I presume all the tests show AMD on top, as from here it is a waste of time to watch the rest.
lol. Sure, spend an additional $500 and you might be only 1.9 times slower than a 7800X3D.
Steve from HUB tested the 285K with 8200 MT CUDIMM... it doesn't change anything.
@@laszlozsurka8991 HUB = Hardware Unboxed (in case someone doesn't know)
@@laszlozsurka8991 When BIOS updates are almost daily, required settings have to be set manually, and the OS doesn't fully support the new platform, it really comes down to the tester and their knowledge. Just getting the OS working takes tweaks and patience.
That said, if you think there is no difference with supported UDIMM 6400 that gets downgraded to 5600 (because the memory controller is designed for CUDIMM): even at the same speed, the difference between UDIMM and CUDIMM is noticeable. Another point, as I already said, is the BIOS, and there are big differences between motherboards. That said, it is already clear that in productivity the 285K is a clear winner. For gaming, looking at the design and architecture, it is not suited for AMD/Nvidia GPUs, even though on paper it takes top places depending on the game. It has big problems with frame time and latency. I see a clear attempt to push an all-Intel system, and from what I see, adding Battlemage has the potential to beat the 5080 for less coin. If so, Intel has a winner, because instead of selling just the CPU they are also selling the GPU and have their fingers in CUDIMM and motherboards.
So my recommendation is to wait and see if the problems with AMD/Nvidia GPU support are fixed, the motherboard/BIOS/RAM and OS issues are sorted out, and hopefully see what the end product is for Battlemage. There is no 285K stock, so my guess is that it is being held back until the issues are fixed, maybe even in anticipation of Battlemage. An early launch with no stock, I think, is meant to find the problems before actually releasing it to the public. Just my thoughts.
"The cope is strong with this one"
Intel Bulldozer moment.
Honestly, at 4K you are GPU bottlenecked, not CPU bottlenecked. I don't see why they show 4K results.
I don't see why they show 1080p results; it's been out for almost three decades, time to forget about it.
@NWOResistance Because anything higher than 1080p mainly shows GPU power and doesn't effectively show CPU performance at all; 1-3% is margin of error. Also, 1080p is 97% of the player base.