How does he have so few subs? Got to be one of the most trusted sources available.
I felt the same way when I found him a few years ago from GN.
This is his "new" English language channel. His main channel has more than 2x the subs as the EN one.
He's also the founder and creator of thermal grizzly, so that sponsor spot is literally his own product.
he split the channel for english and for german viewers, that is why this one is called der8auer EN, he has two main channels basically
Mention someone MORE trustworthy than Roman.
I'll wait.
The power consumption isn't that low though. It channels a lot of power through the board rather than the connectors, which is rather strange behavior. It seems like it uses around 50% more than reported, judging by Steve's numbers.
Yes. Steve saw about 40-50W from the ATX12V (24-pin) on top of what is pulled from the EPS (which is the only number reported in HWiNFO).
They're no strangers to underhanded tactics. Just another way to hide the true power consumption, one of the many facets where AMD beats them.
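For context on why software readouts miss this: total CPU power is the sum over every 12 V rail feeding the socket, and tools like HWiNFO typically only see the EPS side. A minimal sketch of the arithmetic, with made-up rail readings standing in for real clamp-meter measurements:

```python
# Hypothetical 12V rail readings (volts, amps) from an external measurement setup.
# Software sensors typically only account for the EPS-derived package power.
rails = {
    "EPS_1": (12.0, 8.5),
    "EPS_2": (12.0, 4.2),
    "ATX12V_24pin": (12.0, 3.8),  # roughly where the extra 40-50W would hide
}

eps_w = sum(v * a for name, (v, a) in rails.items() if name.startswith("EPS"))
total_w = sum(v * a for v, a in rails.values())

print(f"EPS only (what software sees): {eps_w:.1f} W")
print(f"Board-side total:              {total_w:.1f} W")
```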
Thank you for keeping the graph style, also excellent coverage as usual.
PC Builders: I've got 99 problems, using Intel ain't one... 🤣🤣🤣
Home server builder: this is perfect
would have been nice if you had also tested the 285k with 6000 CL30 memory so we could see the difference between faster memory and worse latency.
I looked at another reviewer, and depending on whether the program takes advantage of the faster memory speeds, this can sometimes make a big difference or none at all.
It would have been way worse. His testing config is both high bandwidth and low latency. CL is per clock, so a faster clock means faster ticks. So even with cutting-edge RAM this thing sucks. Imagine it tested against a 14900K at 5 GHz ring and 8000 CL36 RAM. Most 14900Ks do 7600 MT/s on Gear 2 now.
It always impresses me to see those 3D parts. They're so efficient!
Basically this release from Intel will just increase the sales of the X3D. Those who waited for the charts before deciding which way to go now have their answer.
So you managed to OC it to the advertised max boost of 5.7GHz? Astounding.
(My sarcasm is directed at Intel)
The power consumption numbers in all the graphs are really useful, thanks!
Can't wait to see the overclocking/tweaking results for this with a higher power envelope. So much room for performance with faster RAM, faster rings, and pure overclock.
I bet there's a lot of headroom, but remember, there's a reason Intel nerfed the ring clock so much. A lot of people suspect the high ring clocks on 13th/14th gen were the reason CPUs were degrading, and from HWU's video, 8000 MT/s RAM didn't offer any performance gain over 7200, so it'll probably be like Ryzen now where latency is more important than RAM speed. I'm just interested how long highly overclocked samples will last compared to Raptor Lake.
This is the only video I will watch on this CPU, all others are a waste of time really.
This is actually insane, seeing Intel run a full-load synthetic bench at 70C while consuming less than 200W and matching 14900KS performance or better. I wonder if direct die is even worth it for Arrow Lake.
In Cinebench, sure. In mostly everything else, it's a POS.
Cuz Jay has never had any idea what he’s talking about
JayzTwoCents fell off. He spits so much false information these days it's not even funny. He's a sellout now too.
@@Multimeter1 I feel the same way.
If you look up the definition of the Dunning-Kruger effect you'll find a picture of Jay.
Just buy EVGA cards! What a scam that was... This guy is a joke.
When I first got into PC and PC gaming, I ran into J2C and soon realized that a lot of his takes are really bad and he just is not well educated in PCs, even though he's a PC YouTuber. He's an embarrassment to the PC community.
In the gaming benchmarks, it would be interesting to see how much power the 285K is using, then constrain the 14900K to that power limit and see what FPS you get. Considering power usage over about 100W on 12th, 13th and 14th gen has always been a game of diminishing returns, I think with the 14900K held to the same power as the 285K there still wouldn't be much in it.
if you disable E-cores and hyperthreading on the 14900K and apply an undervolt, FPS will be the same on average and consumption will drop by half
In another review I saw a 50W difference from the wall, 420 to 470W, with FPS at around 100.
Wendell did that, that's why it's good to watch multiple reviews, as der8auer said.
@@Hermes_C Yup, that's what I'm getting at. I think I saw it on one of der8auer's videos, so I did it on my 12900K: dropping the power to 90 or even 50W wasn't making a lot of difference to FPS.
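For anyone who wants to repeat that experiment, the power limit can be capped without touching the BIOS. On Linux this can be done through the RAPL powercap interface; a rough sketch, assuming the package domain is intel-rapl:0 and root privileges (Windows users would use the BIOS or XTU instead):

```python
# Cap the long-term (PL1) package power limit via Linux RAPL powercap. Run as root.
# Assumption: intel-rapl:0 is the package domain -- check its 'name' file first.
RAPL = "/sys/class/powercap/intel-rapl:0"

def set_pl1_watts(watts: int) -> None:
    # constraint_0 is conventionally the long-term limit, expressed in microwatts
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(watts * 1_000_000))

set_pl1_watts(90)  # e.g. the 90W cap tried on the 12900K above
```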
Thank you for all these in-depth reviews that most of the others don't do, you have found your own style and you also make great products!
I agree tweaking the platform looks really interesting.
So what are my options then, as an uneducated, non-tech-savvy person, if I want the latest and greatest? Dip my toe into AMD for the first time and wait till January for the 9950X3D?
Depends on the usecase, for games 9950x3d will probably be worse than 9800x3d
But amd seems like a better option at the moment
Thanks Roman.
i think this is the most or even only positive review the 285k has seen so far
Will this platform last for more than one generation? I've heard the next gen will need a new socket. Great vid. Thanks.
Music to my ears like i said, this platform gen gonna be GOLD for all us TWEAKERS.
1. Jay has no idea what he's doing
2. Jay's video looked like a hostage/paid promotion. It was weirdly stupid :P
That's why I don't watch anyone else. This guy's videos come off as genuine, unlike all the other techtubers who are just basically promoting AMD.
@@Darkness-ud5wk Roman is a little biased towards Intel. AMD and Intel are kinda the same in terms of desktop CPUs, and it is easy to get slightly different results if you want to show one company as the winning one.
@@MarioAPN where do you get that from? Intel is a better target for fancy cooling solutions (which is his thing) because of the power issues.
But I see no reason for "bias" claims in benchmark and performance videos. Except of course this is social media and everything has to be a conspiracy theory.
I was going to get a 13600, an upgraded version of the 12700 with two more cores, but I read about the problems 13th/14th gen had. I actually just found out about the problems last week when I watched benchmark videos. I only watch the numbers, so I don't usually know about a GPU or CPU having problems. I just happened to ask on a forum whether I should buy a 12700 or 13600, and people were talking about the problems. Even people with new CPUs still get those problems even with the BIOS update, so I thought I might as well stay away from those and get a 12700. So I bought it last week but haven't used it yet because the cooler comes today.
Crazy gaming performance from the 7800X3D!!!
I really appreciate all the work you do, boss.
Watch GN on the power draw stuff. Literally ignore everyone else, though der8auer did a pretty good job on it too. Most didn't properly isolate/test accurately enough to show the actual true efficiency (or lack thereof).
Well, der8auer has the perfect tool to judge real power consumption: a German energy bill.
a new 289 euro 12900KS looks very very alluring
Great review, it was just strange how much praise you gave Intel for power draw. It should never have gotten so high in the first place.
12900KF here is feeling okay today knowing it's safe.
Even though performance is underwhelming, the tweaking potential and features are interesting to mess around with, and the cool temps and reduced power are nice. Also, I wouldn't really have expected cooler temps at the same wattage, even with it being effectively two nodes newer, but here we are.
1. How did you benchmark Valorant?
2. The direct die OC, is that 5.7 P-core and 5.1 ring? What are the D2D and NPU clocks for it? Also, did you find turning E-cores off to ever help? Or did overclocking the E-cores ever help?
1. It's a predefined path in practice mode that takes about 5min and it's done 3 times and the average is taken of it
2. The DD OC is 5.7P and 5.1E. The D2D and NPU are the same as maxed below in the chart. I didn't have enough space to squeeze it all on that row :D
@@der8auer-en That bit of gains from insane cooling and static P & E cores? Wow. And the ring is the same 4.2 as well, right?
Would also be interesting to see if disabling E-cores allows for a much higher ring.
Thanks, looking good for Intel. I'm not all for FPS boost and such but I absolutely love the power draw reduction!
Thanks, nice review. Now let's hope the new 9800X3D is a decent step up. We need something that gets a positive review. These meh launches are getting a bit tedious now...
Hello Roman, did you check how the 285K divides the load across cores, especially in the best- and worst-performing games? Also, it would be very interesting to see how the 285K does at different power limits, let's say 125 or 65W, compared with the 9950X and 14900K, and with E-cores turned off.
In general, it seems the platform is very raw and there will be more updates. A little Zen 1 vibe: the 285K is the R7 1800X, and the 7800X3D is the 7700K.
I do hope this is the start of Intel's return to the top and that it functions as a test platform for later gens, like Ryzen 1st gen was. Ironically, the low 1% lows might be due to a bad interconnect design between chips, so hopefully that gets ironed out later on.
I hope Jay responds back with a technical video from his Lab* 🤐
The AI processor should allow better background blurring, noise suppression, video editing, eye tracking, zero wake time, AI-modified detection and picture framing for commercial and gaming apps
Isn't the DLVR calculation pessimistic? I would think they use a buck converter, so the cores get more amperage with lower voltage. And then the power loss at a given power output depends on the efficiency of the DLVR at that output.
The "L" in DLVR is linear, means it is not a switching regulator. The voltage differential is low enough to provide so much current, but the power loss...must be something, given it is linear.
They can't use buck converters because they can't put appropriate inductors on the chip, so they have to use LVRs, which are inefficient for high-current applications.
@@LazyTurtle1988 & @ruikazane5123 Thanks for the clarification! I hadn't considered the limitations placed on inductors by the node/chip area, but it makes total sense they're using a linear reg given that.
@@LazyTurtle1988 Haswell FIVR did put inductors on the package
I wasn't expecting much tbh but having watched multiple reviews, something feels really off. On all reviews the P-Cores and E-Cores were reported completely wrong by monitoring software and it makes me wonder if intel messed up with the microcode and the system sees some P-Cores as E-cores and vice versa. Because, it does not make sense that the 285K is faster at both single and multi core tasks but completely falls apart in hybrid tasks like gaming. It might be possible that E-Cores get assigned P-Core tasks (and vice versa) just because the system sees them as such.
I also tested with E-Cores disabled in the early days and it didn't help at all. Seems not so easy
I have the same thoughts. Synthetics tell you what's possible in the CPU, so something is really falling apart here with games. I'm wondering if there's some tile-to-tile latency issue or some other bus latency. This is the stuff the team at AnandTech would have found eventually, RIP.
For me it makes sense because this CPU has much higher internal latency. Both in the AIDA64 latency test and PyPrime it has about a 40% higher (worse) score than the 14900K with similar memory settings.
@@der8auer-en Great testing and review as always! It seems that it was far from ready for a launch and intel rushed it. We'll see how it develops in the future.
It's the fabric. Intel is paying the latency penalty with the now-chiplet architecture. They paid the same performance penalty with the mesh fabric on the monolithic Skylake-X chips. Literally history repeating itself.
In scenarios that aren't latency bound you see very nice gains. In scenarios where you are primarily latency bound you see regression.
Was delidding easy and safe with the new multi-tile configuration? I'd be concerned about the strength of the solder connection to the substrate of the smaller subtiles.
So is it safe to say that I should just stick with getting a 9950x...or 9900x or wait for the X3D 9xxx chips
If you're a gamer, might as well wait for the 9800X3D as it comes out on November 7.
@@karehaqt Don't you mean 7800X3D Refresh™
Get a 7700 non-x with a cheap b650 board 😊❤
@@karehaqt Agreed.... Also no use going backwards to the 7xxx even though it will be cheaper. For gaming, editing and Blender you're better off going with a 12-core chip or higher.
yes
Intel has drawn a line in the sand and has chosen Business applications over gaming. Lower power, more versatility = a very business application friendly platform.
I think you’re right. Intel knows where their bread is buttered.
lol
Need to see some memory tuning with CUDIMM first. This is chiplet, so it needs latency as low as possible.
There's little reason to pay what CUDIMM costs for this performance. It's DOA for this launch, much like CAMM.
@@drewnewby Idk, that tech looks promising...for a consumer platform. If you have anything fancier ECC REG should be about the same performance-wise + extra stuff, like memory capacity which is basically unlimited.
Not sure why you guys think a clock redriver can fix signal quality and noise issues. Loss of clock signal to the DRAM chips does not cause unstable RAM, you just won’t POST at all
Did you turn off PCIE link power saving? A lot of ppl forget that one.
Jay is worried about board manufacturers abusing the modes, leading to failures and drawn out RMAs all over again.
I genuinely thought they were being euphemistic when they said 'linear voltage regulator' - I seriously can't believe this is a thing in 2024 tbh.
I thought modular nerds were about the only ones using LVRs for any serious amount of current, but here we are burning watts in linear regs in CPUs.
The additional switching noise probably made things too unstable for the CPU. A low-dropout regulator on the other hand can handle output voltages very close to what the VRMs are supplying and doesn't introduce switching noise while even having a fairly high power supply rejection ratio that can filter out higher frequency noise from a switching power supply.
@EkiToji I think the reason they went for an LDO is much simpler - an LDO can be implemented such that it requires only an external capacitor, while switchers also need external inductors, which are much more problematic.
@@asmi06 then the question becomes - why do it at all on chip
@@mycosys the closer the regulator is to the load, the better its transient response.
@@asmi06 I also get that, still seems counterproductive.
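To put a number on the PSRR argument above: rejection in dB translates to a linear attenuation of the incoming ripple, so even a modest figure shrinks VRM switching noise a lot. A small sketch with assumed values (the real DLVR figures aren't public):

```python
def ripple_after_ldo_mv(ripple_in_mv: float, psrr_db: float) -> float:
    # PSRR(dB) = 20*log10(ripple_in / ripple_out) => ripple_out = ripple_in * 10^(-PSRR/20)
    return ripple_in_mv * 10 ** (-psrr_db / 20)

# Assumed: 20 mV of VRM switching ripple, 40 dB of rejection at that frequency
print(f"{ripple_after_ldo_mv(20.0, 40.0):.2f} mV")  # -> 0.20 mV reaching the cores
```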
As a macro-world electrician, it always amazes me how many amps move around in a PC.
Finally we are over with 400W parts for a while...
The DLVR would save CPUs if the VRM on the board is questionable, but seeing no entry-level or basic motherboards out there that might take advantage of the linear regulator is pretty awful. Then we have rebadged 14th gen processors for this generation, we'll see how that pans out!
Yeah, the motherboards they showed first are more expensive than the cpus somehow
Could you test in games with only the P-cores active? Also to see how much cooler the P-cores get, allowing the clock to be increased even more.
I love the move, less heat is better! Now we need GPUs to do the same.
One thing you don't seem to cover is the idle power consumption. If you're just web browsing and not pushing it to the bleeding edge, what is the performance like in comparison?
Do you believe these will be harder to delid without damaging if they've chosen indium as the TIM now that they've abandoned monolithic chips?
Video about that is coming shortly :) Probably saturday. You can already read some stuff here: www.thermal-grizzly.com/en/blog/intel-core-ultra-200-new-products-for-intel-s-arrow-lake-processors
DLVR reminded me of the integrated voltage regulator (FIVR) added in Haswell CPUs. Those CPUs ran hotter than Ivy Bridge because of it.
I wonder if Intel faced the same bottleneck as AMD with the tile design and tile-to-tile interconnect. Pure guessing, but the interconnect under load may add too much latency to all CPU operations involving peripheral devices, mainly RAM and the PCIe video card. RAM, PCIe and cores are on different tiles. It may be that with gaming loads the interconnect layer can't feed RAM I/O and PCIe I/O at the same time with minimal delays, and those delays get added to frame time in high-FPS scenarios. In the case of AMD's 3D cache CPUs, their performance increase may come not only from more cache hits, but from lowered RAM I/O usage and therefore lower interconnect-layer delays for PCIe I/O.
Also there may be some interesting magic bus-to-bus, core-to-bus clock ratios like 1:1, 1:2, 2:1, etc.
Isn't the big point of DLVR (and, especially, restricting the bypass mode) to not let motherboard vendors have insane presets that end up toasting the CPUs?
If DLVR causes so much power wastage, then what's the point of having it? I see the low-to-medium load improvements, but even in your 80W example there's about 25% power wastage?
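That 25% figure is easy to sanity-check, since a linear regulator passes the full current and burns the voltage difference as heat. A back-of-the-envelope sketch, assuming the board VRM feeds the DLVR about 1.5 V while the cores run near 1.1 V (illustrative values, not Intel's actual rails):

```python
def dlvr_loss(v_in: float, v_core: float, p_core_w: float):
    # Linear regulator: current in == current out, so P_loss = (Vin - Vout) * I
    i = p_core_w / v_core          # current the cores draw
    p_loss = (v_in - v_core) * i   # dissipated inside the DLVR
    return p_loss, p_loss / (p_core_w + p_loss)

loss_w, frac = dlvr_loss(v_in=1.5, v_core=1.1, p_core_w=80.0)
print(f"{loss_w:.1f} W lost in the DLVR, {frac:.0%} of input power")  # ~29.1 W, ~27%
```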
How did u test Cyberpunk? Were u using the built-in benchmark?
No I'm using my own ingame scene. Always use my own for all games even if it has a benchmark built in
@@der8auer-en Yea cus Hardware uboxed had lower fps compared to 14900k
@@listX_DE I would not trust anyone to properly test the Intel platform besides der8auer and framechaser
Yeah, you have a Ferrari, and it goes faster than mine
...but my Ferrari gets better mpg
SAID NO ONE EVER!
Sorry Roman, I find comparing efficiency to the notoriously inefficient 14900KS to be disingenuous.
Also, what the heck is the point of the DLVR? Is it more efficient, faster or cheaper than doing the regulation on the mobo? Concentrating more heat in the socket is no bueno after all.
My take is this processor isn't as gimped as people are saying. Between the new design and these new features and quirks, it will take a while for people to learn how to properly run and overclock it. I suspect the abysmal game performance is related to needed optimization in the games and future drivers.
Does the DLVR intelligently switch on and off automatically? That would be smart.
I kinda think that if the chip replaced all the E-cores with P-cores, say 12 P-cores with more cache, it would make a far stronger gaming CPU.
Can you please do power measurements at idle and in low-power-use scenarios?
Feels like a "ZEN 1" moment for Intel, not a bad CPU but definitely a step in the right direction
One can only hope they have their zen 2 moment next year, cause with their share shrinking on DC products, their manufacturing not really working for 3rd parties and the burning CPU scandal, it's a rough year incoming. At least AMD could sell chips for consoles back then.
@@nekogami87 I do believe they will have a "Zen 2" moment as well. This CPU might not be the one we were hoping for, but in terms of technology it's pretty impressive what they were able to achieve for a 1st-gen tile-based desktop CPU: no HT, lower clock speeds, fewer threads, and all the hiccups that come with new tech. The amount of data and knowledge they're going to get from this is huge. Plus Intel 18A is a new cutting-edge node with backside power delivery and all that, so it can only get better from here, just like AMD did back then when they moved to a chiplet design and refined it over time to what it is today.
@@Core2 I sure hope so. They are indeed doing a lot of interesting things, but for most of them we are still not sure how they will pan out. The good thing about Zen was that they were able to reuse the chiplet to make other products quickly and at reduced cost; I don't think we've seen that accomplished yet (might have missed it though).
18A seems really nice, but when their current gen is still using TSMC, I highly doubt they will make it in time for their next one. Replacing HT is interesting, still waiting to see how it evolves, since they still use it on server when needed. So who knows how it will evolve, but definitely interesting.
@@nekogami87 Regarding 18A, I think it's a very interesting move from Intel. They could have spent a ton of resources getting good at 20A and building Arrow on that node, but, to me, it seems like they went in this direction: we could spend all that time getting 20A really good for Arrow Lake, or we could just outsource this one generation to TSMC and focus everything on our upcoming cutting-edge nodes such as Intel 18A.
Very interesting times ahead.
Tiles need tinkering! I expect more from the 395K next year… but let's see… seems the next gen is very different also…
Where are you testing the CPUs to get these Valorant results? Is this just like staring at a wall?
gaming needs to seriously get on linux - kernel development for these wild new cpus is going to be better on linux - unless hardware manufacturers stop pushing the limits of what they can put on one socket, os kernel development is going to be key to performance
Idle power consumption?
he forgot, lol
I never do/did Idle Power. Actually good point :D Will add it to my list for the future
@der8auer-en I looked on PCWorld/PCWelt.. at idle it's 2.5W, 3% more than before.. still... the chipset and other things make up the full picture. I want idle as low as possible with C-states maxed so it sips power, but I'm happy it can boost to 200W if needed.
no space heater anymore
Something a lot of reviewers don't talk about: why not try a head-to-head comparison? You can literally undervolt a 14900K to 5.2/4.3 at only 1.15V and get around 80-90W in gaming. So why not do a clock-for-clock comparison with only 8 cores + 16 E-cores without HT, the same clocks across ring, E-cores and P-cores, with an undervolt? Even if the node and the architecture are different, just to get an idea of the IPC gains, and also a decent idea of how an undervolted 14900K and an undervolted 285K compare and whether they are close.
14900ks with 6000 memory?! LMAO!! ✌️🇺🇲
Performance can be made up with a good tune, though it's still disappointing for out-of-box numbers. That efficiency is very tasty for a "24-core" though.
It's the other way around!! Even with the FASTEST RAM this thing sucks because of internal latency. He tested the 14900KS with 6600 MT/s RAM to put the 285K in a good light. Most do 7600 MT/s now and at least a 4.7 GHz ring. It would have been smoked. In games that like 16 hyperthreaded threads it would have been a bloodbath.
I think what they should drop is the E/P core architecture for desktop CPUs ASAP. If there were only P cores in the CPU then even this DLVR would not be necessary and the pain of software issues would be gone too.
Wish granted. For the die size, this is now a 12 P-core, 12-thread chip. Boost clocks don't really go up because the E-cores aren't holding that back. What has changed is that now everything ahead of the 7900X absolutely runs away with the multi-threaded crown.
i want low power draw
lol, the P-cores drain power like there is no tomorrow, that is why they put in so few
The power consumption was so high that they had to remove hyperthreading, not to mention they couldn't do chiplets and had to pay TSMC to do it for them.
Intel has to change so many things, and when they do, they will have an AMD clone product.
@@danielkowalski7527 You pack your desktop with an over-500-watt PSU, and it is not very efficient at low power draws, which E-cores are good for. Personally I prefer more P-cores and accelerators over all the E-cores.
Come on…6000 on an Apex Encore with a 14900KS? Then 8800 on the 285K? Could have at least run 8000 XMP on the Apex Encore.
The thing is that Arrow Lake probably uses those new CUDIMMs, and that's why it runs way higher MT/s but with higher latency.
My guess would be that Intel kinda mandated it for launch reviews.
@@Burbund Probably, but it's lopsided. I suppose for people who don't know or look at that, it would twist their view on the value of the 285K.
Sooooo... When's the direct die cooler coming out?
www.thermal-grizzly.com/en/intel-1851-mycro-direct-die-pro/s-tg-my-dd-p-rgb-i1851-v1
Already on sale :)
Ok……so it’s efficient……….and that is reflected in its performance against the 12900k. Intel simply reduced its power draw compared to the 12900k. 🤔…….now if Intel loses the “e” cores we’d be all set.
Interesting about the layout of 2 P-cores, then 4 E-cores, then 2 P-cores etc. Games are not that intelligent in how they distribute threads. Also, does Windows need a new CPU scheduler to optimise the workload?
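Until schedulers catch up, one blunt workaround is pinning a game to the P-cores manually. A sketch using psutil; the logical-CPU indices for the P-cores are an assumption here (the mapping depends on the SKU, so verify it in Task Manager or a topology tool first), and the process name is just a hypothetical example:

```python
import psutil

# Assumption: logical CPUs 0-7 map to the P-cores on this hypothetical layout.
P_CORES = list(range(8))

def pin_to_p_cores(process_name: str) -> None:
    # Restrict every matching process to the P-cores only
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(P_CORES)
            print(f"pinned PID {proc.pid} to CPUs {P_CORES}")

pin_to_p_cores("game.exe")  # hypothetical game executable
```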
But, but, but... when the CPU is not burning the power internally (no bypass), the VRMs on the mainboard have to do it (as they always do for normal CPUs). In the end the result shouldn't be that different, especially in high-load scenarios. Of course the temperature internally is higher when the CPU handles the voltage (it has to burn the difference off as heat), but the overall consumption shouldn't be much different.
Would be interesting if ASUS is measuring the CPU power in front of the VRMs, or behind them.
Of course multiple VRMs in series are worse than just one, and the DLVR is nothing but this extra step. So at high load, disable it. Should be the obvious decision for Intel.
yea it's different. A DLVR acts more like a resistor. The VRM involves mosfets, inductors, caps. So over the VRM the loss will be much less
@@der8auer-en Using resistors to burn away 80 watts is a hard task 🙂. Would be interesting to see how they did it in detail, very interesting topic. Technically, the main problem for VRMs inside the CPU would be getting the needed capacitors into the CPU. As fast as a regulator is, it has to have extra stabilization of some kind behind it to get rid of the pulses.
@@ronny332 Nah...there are LDOs for ASICs that can deliver something upwards of 4kA...those are used for cleaning up any switching noise from a switching "pre-regulator". Perhaps their fix to the voltage problem?
Edit: TI put wrong numbers in their catalog. They put "A" instead of "mA"...sorry. More like 4 amps for a single LDO. Though the IC package is so tiny, the silicon must be even tinier and if on-board the CPU it should get better cooling in theory...
so undervolting would increase power draw?
They should have just refreshed raptor lake again for a 15th gen and kept ultra to laptop
Who's gonna be first to delid an Ultra 9 285k and see how it runs overclocked?
he literally did that in the video :D
Interesting call out of the layout for ARL, P cores alternating with E cores physically. I wonder if this is the reason for inconsistent benchmarks for gaming where the scheduler may send requests to E cores instead due to the funky layout. Hopefully there’s an explanation because lack of competition is bad for all of us. AMD might charge $800 for a 9950X3D now.
And what about gaming power consumption in "power gate bypass" mode? Is it still the same as at default settings, or higher?
Maybe they want to disable "bypass" because they're afraid of the 13th/14th gen CPU degradation problem?
likely the scheduler needs tuning
skipping this gen, the mobos are insanely expensive vs Z790
If bypass mode can cut down consumption and temperature with the same benchmark scores, it's stupid to remove this feature...
I like the watts in every chart. Most US reviewers just sweep that aspect aside.
But also because it makes Intel's boasting about more efficiency look QUITE bad compared with AMD.
Having recently looked again at memory speeds in gaming, latency makes a considerably bigger difference than memory speed. ... If the videos I have looked at were right, then sadly you have borked your results. Going by what I have seen, 8800 C42 is going to be quite a bit slower than 6000 C30.
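One nuance worth checking before calling it borked: first-word CAS latency in nanoseconds is CL divided by the memory clock (half the transfer rate), and by that metric alone the two kits are nearly identical, so any real gap would come from the secondary timings and the fabric. Quick arithmetic:

```python
def cas_latency_ns(mt_per_s: int, cl: int) -> float:
    # DDR memory clock is half the MT/s rate; CL is counted in those clocks
    return cl / (mt_per_s / 2) * 1000

print(f"6000 C30: {cas_latency_ns(6000, 30):.2f} ns")  # 10.00 ns
print(f"8800 C42: {cas_latency_ns(8800, 42):.2f} ns")  # ~9.55 ns
```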
Guess there's no reason to upgrade my 12600k homelab test bench
How come he got the CPU weeks before the review date to play with, and HW Unboxed just 3 days ago?
Can you still use your delidder on it?
And doesn't the faster RAM needed also add to the cost of the total system? Quite a difficult situation for Intel. And then we have the 9-series X3D chips coming very soon. Intel does not make sense right now.
@der8auer How old is this camera you are filming the monitor with?
I ask because I see so many odd white pixels in the footage, maybe it's time to switch out the camera?
(viewing in 4K on LG OLED)
14:33 is that 1.501v vcore? and 1.619v on VLatch? I guess this is fine now on a new architecture...or?
This is the only review I'm going to bother with because it is professional and not "it sucks for gaming! it's so bad" BS. IMO the so-called gaming performance is a non-issue. I run a 5950X and I am never CPU bound because I don't game with potato graphics settings. The power consumption of this chip makes it very appealing to anyone who would rather dump that power into the GPU - like me. All this CPU has to do is keep up, and it seems to do that just fine. I can't wait to see how this CPU performs in real-world non-potato gaming scenarios, because I haven't gamed at that level in a DECADE. Come on other reviewers, join the second decade of the 21st century already! And don't give me the "Steam says" crap - the people gaming at potato levels aren't going to buy this chip because it isn't the cheapest, so go get your dumb clicks by talking about the potato junk.
Is the option disabling E cores available in the BIOS?
Yes. Tested it and at least in gaming it doesn't help. Also doesn't significantly reduce P-core temps.
What kind of magical e sports settings are these?
AMD thanks Intel for existing and selling more V-Cache chips for them
Direct die cooling and only a 5.7 ghz overclock??? What a regression Intel!!!
The more you buy the less fps you get $$$ Intel
Excellent video!