Just beware that power loss calculations may be incorrect in the video. A linear drop-out regulator, LDO, works as explained. I doubt they are using those. Other voltage regulators, like for example how the multiple stage buck converters work on the motherboard, achieve a higher efficiency as they can decouple magnitude of input current to the magnitude of output current.
@@der8auer-enquick question I have heard on former generations disabling the E-Cores was possible, does this save power, or heat and what practical performances is lost, and do those cores that are turned off still pull power.
@@RadarLeon with 13th and 14th gen, disabling E-cores still pulled miniscule levels of power since it was a monolithic CPU design, you just didnt use them. But the design is different with this new CPU
@@dankmemes3153 I've just think that in most cases outside browsing the Internet on web pages or office tasks there seems to be little benefit to E-Cores, they are supposed to be efficient but in doing so having the performance cores operating in a efficient lower power state seems to make more sense and having the chip park what cores it's not using instead of the junk intel is doing
@@der8auer-en Thank you for doing such an in-depth look in any case. It looks that there is some small niche for ARL-S afterall, looks like tweaker's dream. Looking forward to more videos with overclocking and memory tuning.
My fav review of this new platform so far. Very informative and cool at the same time. Thanks Roman. Its funny to see that ppl that just enjoying tinkering are doing better job at making videos than most of the reviewers xD
G'day Roman, I noticed some comments pointing out the difference in "Power in W" between channels. As GN Steve said until there is more understanding of how Power is Supplied to the VRM & CPU & there is a Consitent way to get the most accurate results "CPU Power Usage" will be all over the place between channels as the way people measure is going to be very different.
GN Steve did great work solving a problem in front of him but he is incorrect to imply that every board routes substantial CPU power through the 24-pin connector. Some boards still route most CPU power through EPS12V
Irrelevant to this video, but I just took delivery and installed your Intel Mycro Direct Die Pro block on my 14900KF. I had tried a competitors block, and found it to be an overly complex design, in addition to using the standard socket latch - resulting in the usual issues with RAM stability at high clocks. No such issues with the (superior) design of the Mycro Direct Die Pro. Installation was flawless, and easy. As the design is essentially a contact frame and direct die bock in one, my system immediately booted with RAM @ 8000Mt/s, and has remained stable at that speed. Temps have dropped from an all core load of 98 - 100C to nothing above 68C. Absolutely outstanding work, Roman. As always, Thermal Grizzly present an exceptional product that exceeds expectations. 10/10.
Thanks a lot for your feedback :) Really happy to hear. We had a lot of issues with that block before it came to that status but I'm so happy it works fine now!
@@der8auer-en I followed the whole way. You did a fuck up perfectly - you admitted the mistake, and looked after any customers affected. Once things were sorted, you were 100% transparent about what happened and why, and the measures you took to fix the situation. Much, much bigger companies can learn a lot from how Thermal Grizzly conducts business!
Basically this release from Intel will just increase the sales of the X3D. Those who waited for the charts before deciding which way to go now have their answer.
@@PAIN-ot4cjI am on 14900KS and mine is the third bonked one. I am just evaluating if it's worth getting a new mother and cooler bracket. Look like a new 14900KS is best bang for buck right now for me. The new Amd and Intel stuff are so small improvements over my 14900KS that I can't justify buying more parts to change platform.
I looked at another reviewer and depending on if the program takes advantage of the faster memory speeds this can sometimes be a big difference or none at all.
It would have been way worse. His testing config is both high bandwith and low latency. Cl is per clock, so faster clock faster ticks. So even with cutting edge ram this thing sucks. Imagine tested against 14900k at 5Ghz ring and 8000 cl 36 ram. Most 14900k does 7600Mhz on gear 2 now.
@@impuls60 I mean it really shouldn't be way worse. 8800 CL42 is only gonna net you a .5 nanosecond latency improvement over 6000 CL30. You don't usually see massive deviations until you are getting into 2+ nanosecond differences in cas latency.
I love the detail you go into. More technical than any other, including GN. I wouldn't compare the DLVR to a resistor though. The MOSFETs in the DLVR produce heat through switching losses and the internal resistance of the MOSFET (which is pretty small). I would expect the DLVR to be 90%-95% efficient, rather than the ~65% efficiency your showing. I still think the bypass mode can be really useful, but i love the added precision the DLVR gives you. Vdroop should be minimal, given you're talking the resistance of the socket and power plane out of the equation. If also love to see you test the overall system efficiency with DLVR bypass on vs off. My guess is the benefit of reducing power losses in the CPU will be offset by pushing more current, at a lower voltage, through the vcore MOSFETs, power plane and socket.
Your in depth high quality coverage has basically become standard at this point and it's greatly appreciated. I hope down the road here you'll do some more in depth looks at how all of the different settings affect the CPU performance. I know the overall memory latency is quite a bit higher than RPL so this seems like a good place to target with tweaking setups.
When I first got into PC and PC gaming, I ran into J2C and soon realized that a lot of his takes are really bad and he just is not well educated in PC even though he's a PC TH-camr. He's an embarrassment to the PC community.
@@thetheoryguy5544not really its still a perfectly fine modern 8c/16t cpu still perfectly viable. Not world beating, but totally fine obviously buy a work station cpu if you need one.
I love that you tested 8800C42 memory with the 285K to show its limits, as many other sources used the same RAM configuration across all platforms. The efficiency is pretty impressive, and the CPU Cores themselves look like they will work great with low latency memory, but I think this is exactly what's holding Arrow Lake back. The Chiplet architecture on Intel processors have proven to be great at controlling uncore power, which is really not the case with AMD's offerings (comes through as high idle power measurements and efficiency penalty due to less CPU Core power / package power). It seems like Intel still got a long way to optimize the memory controller for this new design, as some reviews point to a memory latency increase by around 20ns going from 14900KS to 285K with the same memory kit (referring to Tony Yu's data, which can be considered as official Asus numbers on some extent). This latency could be the main reason why we're seeing gaming performance regression from last gen, despite the IPC and benchmark increases
I saw somewhere (don't remeber where) that one board maker managed to fix this latency grow. Don't ask details, as I only remeber the idea, not where I saw this. :))
I fully agreed with the conclusion but would like a full efficiency test between the 9950x and 285k since once you start manually setting voltages etc then you might as well be undervolting and though this is like you said a good step in the right direction for intel which they should be commended for in some way at least, I don't quite think the 285k would outperform a 9950x undervolted and in ECO mode in a highest efficiency vs highest efficiency duel overall but would absolutely love some actual testing in this area at different power profiles like 65w, 80w, 120w 240w and maxed out with a underclock maybe.
It really seems like it would help a LOT to have the MB VRM supply 1.3 Volts, or something closer to what the DLVR needs, so there is less drop in the core, but you still have the advantage of that fine grained local control.
Thanks for the work. Performs as expected for new architecture. Will be interesting content tweaking it out and hopefully software will come up with some updates to improve stability and performance.
Can't wait to see the overclocking/tweaking results for this with a higher power envelope. So much room for performance with faster RAM, faster rings, and pure overclock.
I bet there's a lot of headroom but remember, there's a reason Intel nerfed the ring clock so much. A lot of people suspect the high ring clocks on 13th/14th gen was the reason cpus were degrading and from HWU's video, 8000mhz ram didn't offer any performance over 7200mhz ram so it'll probably be like Ryzen now where latency is more important than ram speed. I'm just interested now how long highly overclocked samples will last compared to Raptor Lake.
@@nateTrh Well the 8000/8800MHz memory has quite severe latency...i dont think we would see much difference until 10 000MHz cudimms finally start appearing.
In the gaming benchmarks, it would be interesting to see how much power the 285k is using, and then constrain the 14900k to that power limit, and see what FPS you get. Considering power usage over about 100W on the 12th, 13th and 14th gen has always been a game of diminishing returns, I think the 14900K using the same power as the 285k, there still wouldn't be much in it.
@@Hermes_C Yup that's what I'm getting at. I think I saw it on one of Derbauer's videos, so did it on my 12900K I was dropping the power to 90 or even 50W and it wasn't making a lot of difference to FPS.
@@yakacm Just locking the 13600k at 1.18v and setting all P cores to 5.2ghz drops stock power in gaming to 45-50W. Try and push 5.5ghz at 1.25v all P core will push that over 75W with a few % gained. That 5.5ghz basically gives you a 14600k lol. Interesting to see him be able to push 5.7ghz with a healthy drop in power draw. Maybe in-depth OCing makes a comeback.
The DLVR would save CPUs if the VRM on the board is questionable, but seeing no entry-level or basic motherboards out there that might take advantage of the linear regulator is pretty awful. Then we have rebadged 14th gen processors for this generation, we'll see how that pans out!
Recently bought a laptop with 185 H and 4060 inside. According the Intel 185 is the second year of production of the Ultra series. Seems like if you do not have the right cooling dynamics in the box including the fan / vapor chamber setup, you are basically screwed. The combo which I am sure both have power issues is running quite hot under 90 - 100 % load over several hours and causing some issues with graphics set at ultra settings and 3.2K resolution or above. With that said I am sure Intel made 2% improvement for the 285H, but be warned if you do not have the right airflow in your box it will be a big issue. Can't say AMD is better but there are definitely benefits and draws on both sides, needed to be considered these days before spending another 2 - 5K on a new setup.
Great job as usual Roman. This is why you are one of the few channels I subscribe to. Hopefully, Intel is paying attention to the people who actually have engineering brains like yourself. So many of my friends have recently left for greener pastures.
I could not tell. Did your testing factor in the new power delivery mechanism for arrow Lake? Seeing power consumption around 170 W could be because you measured at the EPS 12 V rail and not the other mechanism by which the CPU receives power. Of course gamer Nexus can explain it better than I can.
as far as I can tell yes, since he talks about the VRM's output power measurement; the split between EPS and 24-pin is for feeding the VRM. think of it like you have a bucket with a hole in bottom, but you're filling the bucket with 2 different flow taps/hoses/whatever, and he's just looking at what comes out of the hole in the bottom
So basicly they got it down efficient as AMD, but the preformance is just worse even then their own 14gen. And it requires a whole new platfrom, wow how interesting ! What a product, good job Intel !
No, read, watch it all again. Intel is more efficient than AMD. And performance just needs to be tuned more by the ring clock (which is standard quite low) and Windows 11. Then the 285K is a monster fast CPU. AMD also needed W11 to get somewhere better. Think in solutions not in problems.
i was just looking a this.. but im not sure if its worth going from my 12700k as i still only get 8p cores.. i think ill wait for the 9800x3d scores.. the 7800x3d would have been a good choice but the price went silly when it was announced Amd were stopping production. :-(
@@mindurbusiness-b3u No. Don't do it. If you're on Alder Lake, disable the E-cores and clock the ring bus higher . You should get better fps. Although you can't overclock to match the L3 cache of an i9.
I’d be more interested if mobo prices weren’t through the roof. In any case, thanks for always bringing your own perspective, I always find it illuminating.
Yeah you basically have to drop 300€ for a mobo and 600€ for the CPU, and you'd be stupid to pair this with anything but the fastest memory so that's another 300€ for 8200 or 8800 MT/s. So you have to pay like 1200€ to have performance comparable to a 7950X or 9950X on a B650 with 6600 MT/s memory which you can get for 600€ total. Or (if the prices weren't stupid) you can get beaten in gaming by a 7800X3D for 600€. Worse of all you can be beaten in both by a 7950X3D with a total platform cost of 800€/ upcoming 9950X3D. It's not even a high core but low power chip which you can use in a server to handle multiple connections or something, because it chugs alone as much as my whole 5800X3D+3060 12GB+fans+spinny+2 170 Hz overclocked monitors system (idle 170 W, under maximum load 270W total as measured by my UPS).
I agree with you. I don't think they should remove the DLVR feature. Intel can let the motherboard manufacturers include a switch for it but with their track record of burning down the house, that's not a good idea. Performance and power draw are equally important for business work loads. I'm willing to take a slight performance hit if it saves power. Jay's business is gaming and he wants unlimited power and is willing to fry his boards. Most of us can't afford it.
Nice overview, did miss the 1300K in the comparison like I've seen in other reviews. All in all nice its 'only' the V2 285. Wondering who will fabricate the possible V3, say 385. Intel must by then been able to put a few new design updates in such version. So for work it is a nice efficient platform / CPU for gaming on and off a lesser as... still lots of FUN for DIY. Danke schone Herr DeBauer. 😎👍🤛
It’s refreshing to see some balance here. I understand the frustration with some reviewers because there has been a level of beta testing done by reviewers… however there are genuine efficiency gains here which is a good change for Intel. Let’s hope they continue to get more efficient rather than cramming more wattage into the CPU to fake a generational uplift
Jay is a hot head and always goes full speed when he gets even a whiff of what he thinks is a big deal... and its always a non issue, or sometimes his own fault.
I loved working on Arrow Lake throughout this year. Unfortunately everyone only looks at performance; we did a lot of work on fixing the power and heat issues everyone complained about last gen, but not many seem to be talking about this.
Techtubers only care about "big number better!!!!" yes. Although this time, everything should be less important than "is ist actually usable now, unlike 13/14th?"
absolutely, Im baffled at the takes of 99% of YT reviewers bombing on it while not addressing its a power house of processing power with WAY less power consumption and really cool temps across the board against every single cpu out there... but hey, oh no it doesnt pull a billion FPS on every single game, its a flop!
@@HellPedre to be fair, it is still using more power than AMD, and providing only equivalent performance on non Gaming workloads, while losing badly in gaming workloads to both AMD and 14th gen and 13th gen.
@@EdDale44135 more power than an almost all 3d AMDs for sure, not more than 9xxx series and even 7900, 7950s and so on (from what I've seen) and it also runs cooler in the only benchmark I saw that mentioned (Puget) so idk where you seeing that. gaming performance is all over the place, it matches the 14xxx series sometimes, beats it sometimes, and looses to everything, depends on which game CPU and reviewer
@@Darkness-ud5wk Roman is a little biased towards intel. AMD and intel are kinda same in terms of desktop cpus and it is easy to get slightly different results if you want to show some company as the winning one.
@@MarioAPN where do you get that from? Intel is a better target for fancy cooling solutions (which is his thing) because of the power issues. But i see no reason for "bias" claims in benchmark and performance videos. . Except of course this is social media and everything has to be a conspiracy theory. Also pretty much everybody else went into this with "ha now we can dump on Intel". So neutrality may just look like bias.
@Thisandthat8908 he likes intel because it was the only solution for hardcore overclocking. I wrote slightly biased, as he prefers intel. It makes no difference in real life, anyone can get what they want. I was referring to the comment of user Darkness. He wrote that he watches only Roman and no one else, because everyone else are promoting amd. For me, I can see both sides of a coin. Intel is good. Amd is good. Customers win. I am going with amd for the last 7 years. I am buying the first x600 line of cpus and then after a couple of years slap stronger cpu into the same build. Convenient way for me.
@@MarioAPN i can see that no destryoning intel over this may already look biased. But he`s exited about how many things you can play with here as an overclocker. Not really an AMD thing,
So what are my options then, as an uneducated non-tech-savvy person like myself, who wants the latest and greatest? Dip my toe into AMD for the first time and wait till January for the 9950X3D?
It depends on how AMD does the 3d cache on the 9950x3d, if it is 3d cache on both 8 cores it will be a beast. If it is split like 7950x3d, with one normal and one x3d. It will not be as impressive.
Yes, currently AMD is definitely the way to go. 7950X or 9950X for productivity. 7800X3D for gaming and light-ish productivity. 7950X3D if you want both high productivity and high gaming performance. 280mm OR 360mm AIO cooler like the ARCTIC Liquid Freezer. Even a tower like Deep cool AK620 will do. Any good B650 motherboard, you probably don't need X670(E)/X870(E). As for RAM, get a kit of 6000 MHz, a kit of two sticks(not four) and the capacity you need. And there's the CPU side of your build done.
@@karehaqt agreed.... Also no use going backwards on the 7xxx even though it will be cheaper. For gaming and editing and blender better off going 12core chip or higher.
1. How did you benchmark valorant? 2. The Direct Die OC is that 5.7P 5.1 ring? What are the D2D and NPU clocks for it? Also did you find ecores off to ever help? Or did overclocking the ecores ever help?
1. It's a predefined path in practice mode that takes about 5min and it's done 3 times and the average is taken of it 2. The DD OC is 5.7P and 5.1E. The D2D and NPU are the same as maxed below in the chart. I didn't have enough space to squeeze it all on that row :D
@@der8auer-en that bit of gains from insane cooling and a static P & E core? Wow. And the ring is the same 4.2 as well right Would also be interesting to see if disabling ecores allows for much higher ring
I only tried to disable E-Cores to see if it helps performance but didn't see if it helps to push ring. Will try that for sure :) thanks for the great idea!
The power consumption isn't that low though. it channels a lot of power through the board rather then the connectors. This is rather strange behavior though. It seems like it is uses around 50% more than it seems judging by Steve's numbers.
@@AMabud-lv7hy And where AMD beats them to such an extent that even cutting power consumption in half isn't enough of a difference. The one positieve about this new CPU is that it does use a chiplet (or "tile") design which should allow Intel to release new CPUs with updated cores much more quickly. Which I seriously hope they do. AMD's advantage is slowly disappearing into overpricing and availability issues. Which aren't AMDs fault really, as the pricing and availability issues are again caused by scalpers.
That's why I just take the EC reporing from the mainboard that records the entire VRM Pout :) edit: I was also one of the first reviewers to point this out in my first arrow lake video to warn the other reviewers. So I'm very aware of this topic!
This is the only review I'm going to bother with because it is professional and not "it sucks for gaming! It so is bad) BS. IMO the so called gaming performance is a non-issue. I run a 5950x and I am never CPU bound because I don't game with potato graphic settings. The power consumption of this chip makes it very appealing to anyone who would rather dump that power into the GPU - like me. All this CPU has to do is keep up, and it seems to do that just fine. I can't wait to see how this CPU performs in real world non-potato gaming scenarios because I haven't gamed at that level in a DECADE. Come on other reviewers, join the second decade of the 21st century already! And don't give me the Steam says crap - the people gaming at potato levels aren't going to buy this chip because it isn't the cheapest so go get your dumb clicks by talking about the potato junk.
I understand, you did test with the maximum ram speed the new cpu can handle, unlike maybe previous gen. But why aren't you making the comparaison chart with the same ram speed? it doesn't help us make an idea how fast is the cpu ingame compare to the old one when you test them with different ram speed.
Even though performance is underwhelming, tweaking potential and features are interesting to mess around, also cool temps and reduced power are cool. Also wouldn't really expect cooler temps at same wattage and it being effectively 2 nodes newer but here we are
Well all the TH-cam channels almost destroyed Intel over the 14900k so obviously they had to bring power levels down to ensure reliability so it's not intels fault really.
Thanks for the deep dive on stuff like DLVR and how we can bypass the effect if they from move the actual bypass feature, I get what people are coming from not wanting board partners to go back to their sneaky ways boosting Voltages with their own MB tunes. The platform does look fun for thinkers, maybe it might impove a bit with time but in the now competion is fierce. OCing isnt useless by these results, you got some meaty Mhz their. Curious how the overall Ocing goes across the board, I'm sure more will investegate.
Sorry Roman, I find comparing efficiency to the notoriously inefficient 14900KS to be disingenious. Also, what the heck is the point of the DLVR. Is it more efficient, faster or cheaper than doing the regulation on the mobo? Condensing more heat in the socket is no bueno after all.
14900KS was the highest performing from last gen :) DLVR can supply voltage to each single P Core individually and also to E-Core Clusters. If you just do normal vCore supply, everything would get the same voltage
@der8auer-en thanks for the direct reply! This just reinforces my hatred for big.little architecture lmao. Maybe DLVR will look better when paired with an overall better product.
Hello Roman, did you check how does 285k divide the load across cores, especially in case of the best resulted game and the worst? Also, it would be very interesting to see how 285k shows itself at different power limits, let's say 125 or 65W comparing with 9950x and 14900k and with e-cores turned off. In general, it seems that the platform is very raw and there will be more updates. A little zen1 vibe, 285k is r7 1800x, and 7700k is 7800x3d.
Very surprised it's a purely resistive power drop from 1.4-1.5V to 1.2V. That is a ton of power loss for a generation that is supposed to be based heavily on efficiency.
Hi. There is one burning specific question I have. Is there a way to test the 'AI Denoise' feature in Lightroom Classic and compare the 285K or 265K to the AMD CPUs? The AI Denoise is currently a feature that takes the longest time for me and I use it often. With the built-in (although weak) NPU in the the Intel Ultra chips, I'm guessing there's a jump in performance with them, but I'd like to see how much of a jump. Thanks!
I would speculate that spreading performance cores around the CPU to reduce the thermal spike of a relatively cool core getting full thermal boost when a thread is passed to it from a hot performance core takes additional time because it is using different level 2 or level 3 caches.
I really don't understand why they went with this DLVR model; even at low voltage drops like this it's painfully inefficient (both because of the high current and low overall core voltage), and they could have achieved a similar result by instead specifying that motherboards provide a small number (say 2-3) of different power rails at programmable voltages that the processor can switch its various cores between. The motherboard-provided regulators could then be switch-mode converters (impossible for the die-embedded DLVR they're using) and they'd be way more efficient. Alternatively, they could have had the motherboard provide a "typical" Vcore (that can vary as normal under CPU control) and fed that via power switches directly into the cores running at high load, and only use the DLVR for feeding a lower voltage to other cores under lower load. Since the DLVR would only be used for a small fraction of the overall current, its losses would be much smaller and you don't have to saddle your motherboard partners with the complexity of providing multiple Vcores. The processor itself is fairly efficient, and then they're throwing away a huge chunk of those gains dumping waste heat through an on-chip linear regulator. Insane.
Off topic idea for a Thermal Grizzly product. I would really want to see a high quality well built Flow and temperature sensor for water cooled pc. I have seen you using a sensor for your Mycro CPU waterblock but I feel like they all need a little bit of Thermal Grizzly quality seasoning.
So if i understand it correct the DLVR is causing a lot of loss in watts? Is it nog possible to lower the Mainboard VRM voltage to say about 1.2 to 1.25volts instead of 1.5? Would it mean that cinebench could run at 42k points at about 170 watts powerconsumption if the loss is smaller? It seems there is a lot to be gained then if you ask me! And the gaming power usage is already very well compared to 14th gen. I think with some microcode updates and Windows updates these CPU's can perform better with even lower power usage.
27:47 It seems good, and a "step in the right direction" but it's important to remember these chips are not made using Intel's own fabs. They're on TSMC.
Thanks, nice review. Now let's hope the new 9800X3D is a decent step up. We need something that gets a positive review. These meh launches are getting a bit tedious now...
First slide with Cyberpunk made me say - so you're saying it's possible to cool 285K with a 140mm radiator, you say? Corsair one, the vertical one, could return with this generation!
Video about that is coming shortly :) Probably saturday. You can already read some stuff here: www.thermal-grizzly.com/en/blog/intel-core-ultra-200-new-products-for-intel-s-arrow-lake-processors
18:52 yeah because it's reaally user friendly to boot into bios and change cpu oc settings in order to run a specific workload, the thing should be automatic.
Don't bother X3D, most games are GPU bound so anything 7700X will do for that. Only high fps gamers should buy it, anyone else for example 4K gamers don't need X3D. It is too overhyped that you need one, you don't need one. Any 14600K games excellent as well already.
This is basically Intel's Zen 5 focusing on enterprise and efficiency, and that would be a perfectly respectable approach, if it was the immediate successor to Alder Lake. The problem is we instead had Alder Lake, Alder Lake with more L2 Cache, Alder Lake L2 slightly binned, and now we finally have a new architecture with the single core performance of... Alder lake. This is their 3rd CPU now that has failed to make a meaningful mainstream user performance jump from 12th gen, right off the heels of their very poor response to one of the biggest mass hardware failures intel has ever had. That's why people are pissed.
I think Jay feared that that setting could cause some kind of problem like the one present in 13th and 14th gen, maybe he misunderstood something because as you said stock is DLVR, and if manually override things, is not like you couldn’t break stuff before putting very high voltages
Interesting about the 2P cores then 4 ecores then 2 P core etc, games are not that intelligent in how it distributes threads also does windows need a new CPU scheduler to optimise the workload?
@11:37 I'm from another (5V and 20Ampers era... i just saw you calculation and result of 220 Ampers....crazy... are half of 1300 CPU pins are for power ?
Strange 9950X you got there, mine gets over 46k in Cinebench R23, and runs with 8000MT/s CL36 memory. After you tweak a few settings this thing stays cool, is extremely stable (zero blue screens or other issues in two months) and chews through any tasks you throw at it at godspeed. I think that kneecapping the 9950X with 6000MT/s CL30 ram is a bit of crime when giving Intel the beans...
How was power tested if you could share that methodology? GNs video shows normal power logging methods aren't all-telling in this gen and showed the real power draw they had
He is measuring from the VRM power reported by the ASUS embedded controller, which measures the amount of power the vrm is outputting. GN are measuring the power being drawn from the cables, which is why they ran into the issue.
i like the watts in every chart. Most us reviewers just swipe that aspect aside. Buit also because it makes Intels boasting about more efficiency look QUITE bad compared with AMD.
Watch GN on the power draw stuff. Literally ignore everyone else, though, Derbauer did a pretty good job on it. Most didn't properly isolate/test accurately to show the actual true efficiency (or lack thereof).
Derbauer was the one who warned about this issue in the first place. It is absurd to say watch only gn for this, there is nothing wrong with derbauers numbers.
How does AMD's Ryzen series handle per-core voltage downstepping? AMD's marketing materials mentioned a dLDO - is this comparable to Intel's DLVR? When comparing the two approaches, what are the pros/cons or in which scenarios is one better than the other? Does a dLDO waste less power?
Having recently looked again at memory speeds in gaming, latency may make a considerably bigger difference than memory speed. ... If the videos I have looked at were right, then sadly you have borked your results. Going by what I have seen, 8800 C42 is going to be quite a bit slower than 6000 C30.
DLVR seems like insanity to me. A linear regulator at those currents is something that you always discard in favor of a switching regulator. Of course, they can't put large inductors and capacitors in the die but still I think that a much better solution would be having individual Vcores coming from the motherboard. Of course that would mean multiple DC-DC converters with less phases and that means a bit more ripple but I'd take that over burning 70W for no reason. It may also increase the price of the cheapest motherboards that just don't have enough phases to repurpose them into separate Vcore lines.
Thank you for going into the DLVR detail, and so much more detail than others in general.
Just beware that power loss calculations may be incorrect in the video. A linear drop-out regulator, LDO, works as explained. I doubt they are using those. Other voltage regulators, like for example how the multiple stage buck converters work on the motherboard, achieve a higher efficiency as they can decouple magnitude of input current to the magnitude of output current.
Isn't that just like Haswell's FIVR and VCCIN settings ? It's just that we can't disable FIVR in Haswell. Now we can disable it on Arrow Lake.
So you managed to OC it to the advertised max boost of 5.7GHz? Astounding.
(My sarcasm is directed at Intel)
haha not bad :D Yea I wish they would do proper frequency tables so we'd see exactly which frequency at which workload
@@der8auer-enquick question I have heard on former generations disabling the E-Cores was possible, does this save power, or heat and what practical performances is lost, and do those cores that are turned off still pull power.
@@RadarLeon with 13th and 14th gen, disabling E-cores still pulled miniscule levels of power since it was a monolithic CPU design, you just didnt use them. But the design is different with this new CPU
@@dankmemes3153 I've just think that in most cases outside browsing the Internet on web pages or office tasks there seems to be little benefit to E-Cores, they are supposed to be efficient but in doing so having the performance cores operating in a efficient lower power state seems to make more sense and having the chip park what cores it's not using instead of the junk intel is doing
@@der8auer-en Thank you for doing such an in-depth look in any case. It looks that there is some small niche for ARL-S afterall, looks like tweaker's dream. Looking forward to more videos with overclocking and memory tuning.
My fav review of this new platform so far. Very informative and cool at the same time. Thanks Roman. Its funny to see that ppl that just enjoying tinkering are doing better job at making videos than most of the reviewers xD
G'day Roman,
I noticed some comments pointing out the difference in "Power in W" between channels.
As GN Steve said until there is more understanding of how Power is Supplied to the VRM & CPU & there is a Consitent way to get the most accurate results "CPU Power Usage" will be all over the place between channels as the way people measure is going to be very different.
GN Steve did great work solving a problem in front of him but he is incorrect to imply that every board routes substantial CPU power through the 24-pin connector. Some boards still route most CPU power through EPS12V
Thank you for keeping the graph style, also excellent coverage as usual.
Irrelevant to this video, but I just took delivery and installed your Intel Mycro Direct Die Pro block on my 14900KF. I had tried a competitors block, and found it to be an overly complex design, in addition to using the standard socket latch - resulting in the usual issues with RAM stability at high clocks. No such issues with the (superior) design of the Mycro Direct Die Pro. Installation was flawless, and easy. As the design is essentially a contact frame and direct die bock in one, my system immediately booted with RAM @ 8000Mt/s, and has remained stable at that speed. Temps have dropped from an all core load of 98 - 100C to nothing above 68C. Absolutely outstanding work, Roman. As always, Thermal Grizzly present an exceptional product that exceeds expectations. 10/10.
Thanks a lot for your feedback :) Really happy to hear. We had a lot of issues with that block before it came to that status but I'm so happy it works fine now!
@@der8auer-en I followed the whole way. You did a fuck up perfectly - you admitted the mistake, and looked after any customers affected. Once things were sorted, you were 100% transparent about what happened and why, and the measures you took to fix the situation. Much, much bigger companies can learn a lot from how Thermal Grizzly conducts business!
Basically this release from Intel will just increase the sales of the X3D. Those who waited for the charts before deciding which way to go now have their answer.
Na 14900k
@@PAIN-ot4cjI am on 14900KS and mine is the third bonked one. I am just evaluating if it's worth getting a new mother and cooler bracket. Look like a new 14900KS is best bang for buck right now for me. The new Amd and Intel stuff are so small improvements over my 14900KS that I can't justify buying more parts to change platform.
Already got a 7800x3d, beast of a cpu!
@@PAIN-ot4cj yeah, go buy an almost uncoolable degrading CPU instead of X3D of any generation and use air.
@@TheGuruStud Why the fanboyism leaking over someone else's purchase, pocket watching for free is mad cringe ngl
would have been nice if you had also tested the 285k with 6000 cl30 memory so we could se the different in faster memory but worse latency.
I looked at another reviewer and depending on if the program takes advantage of the faster memory speeds this can sometimes be a big difference or none at all.
It would have been way worse. His testing config is both high bandwith and low latency. Cl is per clock, so faster clock faster ticks. So even with cutting edge ram this thing sucks. Imagine tested against 14900k at 5Ghz ring and 8000 cl 36 ram. Most 14900k does 7600Mhz on gear 2 now.
Running ddr5-8800 just invalidates all these benchmarks for me. Why is it so hard to compare apples vs apples??
@@ncohafmutathe explanation is in the video
@@impuls60 I mean it really shouldn't be way worse. 8800 CL42 is only gonna net you a .5 nanosecond latency improvement over 6000 CL30. You don't usually see massive deviations until you are getting into 2+ nanosecond differences in cas latency.
I love the detail you go into. More technical than any other, including GN.
I wouldn't compare the DLVR to a resistor though.
The MOSFETs in the DLVR produce heat through switching losses and the internal resistance of the MOSFET (which is pretty small).
I would expect the DLVR to be 90%-95% efficient, rather than the ~65% efficiency your showing.
I still think the bypass mode can be really useful, but i love the added precision the DLVR gives you. Vdroop should be minimal, given you're talking the resistance of the socket and power plane out of the equation.
If also love to see you test the overall system efficiency with DLVR bypass on vs off. My guess is the benefit of reducing power losses in the CPU will be offset by pushing more current, at a lower voltage, through the vcore MOSFETs, power plane and socket.
The DLVR should also mean lower voltage in the logic since it's more granular and responsive; tmk that's the main reason for adding DLVR.
Your in depth high quality coverage has basically become standard at this point and it's greatly appreciated. I hope down the road here you'll do some more in depth looks at how all of the different settings affect the CPU performance. I know the overall memory latency is quite a bit higher than RPL so this seems like a good place to target with tweaking setups.
Cuz Jay has never had any idea what he’s talking about
Jay two cents fell off. He spits so much false information these days it’s not even funny. He’s a sell out now too.
@@Multimeter1 I feel the same way.
If you look up the definition of the Dunning-Kruger effect you'll find a picture of Jay.
Just buy EVGA cards! What a scam that was... This guy is a joke.
When I first got into PC and PC gaming, I ran into J2C and soon realized that a lot of his takes are really bad and he just is not well educated in PC even though he's a PC TH-camr. He's an embarrassment to the PC community.
as a macro world electrician it always amazes me, how many AMPs move around in a PC.
The voltages are soooo low. Robert Faranec has a great video on designing PCBs for this, it's a wild wild world.
It always impress me to see those 3D parts. They're so efficient!
amazing how good the 7800x3d has turned out to be. One for the ages for sure
Yea, these X3D parts are gonna be like the gtx1080ti of CPUs
@@floodo1 Its a one trick pony tho and gets its ass handed to it in anything but gaming.
@@thetheoryguy5544not really its still a perfectly fine modern 8c/16t cpu still perfectly viable. Not world beating, but totally fine obviously buy a work station cpu if you need one.
@@floodo1 good for only gaming
I love that you tested 8800C42 memory with the 285K to show its limits, as many other sources used the same RAM configuration across all platforms. The efficiency is pretty impressive, and the CPU Cores themselves look like they will work great with low latency memory, but I think this is exactly what's holding Arrow Lake back.
The Chiplet architecture on Intel processors have proven to be great at controlling uncore power, which is really not the case with AMD's offerings (comes through as high idle power measurements and efficiency penalty due to less CPU Core power / package power).
It seems like Intel still got a long way to optimize the memory controller for this new design, as some reviews point to a memory latency increase by around 20ns going from 14900KS to 285K with the same memory kit (referring to Tony Yu's data, which can be considered as official Asus numbers on some extent). This latency could be the main reason why we're seeing gaming performance regression from last gen, despite the IPC and benchmark increases
I saw somewhere (don't remeber where) that one board maker managed to fix this latency grow. Don't ask details, as I only remeber the idea, not where I saw this. :))
@@ContraVsGigiI remember hearing this too. I think it was a gigabyte board or something but I don’t know.
I fully agreed with the conclusion but would like a full efficiency test between the 9950x and 285k since once you start manually setting voltages etc then you might as well be undervolting and though this is like you said a good step in the right direction for intel which they should be commended for in some way at least, I don't quite think the 285k would outperform a 9950x undervolted and in ECO mode in a highest efficiency vs highest efficiency duel overall but would absolutely love some actual testing in this area at different power profiles like 65w, 80w, 120w 240w and maxed out with a underclock maybe.
It really seems like it would help a LOT to have the MB VRM supply 1.3 Volts, or something closer to what the DLVR needs, so there is less drop in the core, but you still have the advantage of that fine grained local control.
higher voltage on the input = fast transient response
but more losses. if input voltage same as output there is no meaning of using dlvr.
Thanks for the work. Performs as expected for new architecture. Will be interesting content tweaking it out and hopefully software will come up with some updates to improve stability and performance.
Crazy performance gaming 7800x3d!!!
The power consumption numbers in all the graphs are really useful, thanks!
Thank you for all these in-depth reviews that most of the others don't do, you have found your own style and you also make great products!
i think this is the most or even only positive review the 285k has seen so far
PC Builders: I've got 99 problems, using Intel ain't one... 🤣🤣🤣
Home server builder: this is perfect
@@zee-fr5kwUntil the 9950X3D launches.
@@zee-fr5kwYou have much better options already available for a home server given the price of the CPU and associated motherboard.
@@tringuyen7519 why would i use an X3D for a home server lmao,
Lol yeah amd launches 😂
I don't like this low power consumption trend. Soon I'll have to turn on my radiator in the winter again if this trend continues.
Can't wait to see the overclocking/tweaking results for this with a higher power envelope. So much room for performance with faster RAM, faster rings, and pure overclock.
I bet there's a lot of headroom but remember, there's a reason Intel nerfed the ring clock so much. A lot of people suspect the high ring clocks on 13th/14th gen was the reason cpus were degrading and from HWU's video, 8000mhz ram didn't offer any performance over 7200mhz ram so it'll probably be like Ryzen now where latency is more important than ram speed. I'm just interested now how long highly overclocked samples will last compared to Raptor Lake.
@@nateTrh Well the 8000/8800MHz memory has quite severe latency...i dont think we would see much difference until 10 000MHz cudimms finally start appearing.
Just spot on, and not that much subs, like how? U r one of few (if any) youtubers who published this kind of review. Easy sub
In the gaming benchmarks, it would be interesting to see how much power the 285k is using, and then constrain the 14900k to that power limit, and see what FPS you get. Considering power usage over about 100W on the 12th, 13th and 14th gen has always been a game of diminishing returns, I think the 14900K using the same power as the 285k, there still wouldn't be much in it.
if you disable E-cores and hyper threading on 14900k and make undervolt, fps will be the same on average and consumption will drop by 2 times
I other Review I see 50W difference from the wall 420 to 470W if FPS looked at 100.
Wendel did that, that's why it's good to watch multiple reviews as Derbauer said.
@@Hermes_C Yup that's what I'm getting at. I think I saw it on one of Derbauer's videos, so did it on my 12900K I was dropping the power to 90 or even 50W and it wasn't making a lot of difference to FPS.
@@yakacm Just locking the 13600k at 1.18v and setting all P cores to 5.2ghz drops stock power in gaming to 45-50W. Try and push 5.5ghz at 1.25v all P core will push that over 75W with a few % gained. That 5.5ghz basically gives you a 14600k lol.
Interesting to see him be able to push 5.7ghz with a healthy drop in power draw. Maybe in-depth OCing makes a comeback.
The DLVR would save CPUs if the VRM on the board is questionable, but seeing no entry-level or basic motherboards out there that might take advantage of the linear regulator is pretty awful. Then we have rebadged 14th gen processors for this generation, we'll see how that pans out!
Yeah, the motherboards they showed first are more expensive than the cpus somehow
Recently bought a laptop with 185 H and 4060 inside. According the Intel 185 is the second year of production of the Ultra series. Seems like if you do not have the right cooling dynamics in the box including the fan / vapor chamber setup, you are basically screwed. The combo which I am sure both have power issues is running quite hot under 90 - 100 % load over several hours and causing some issues with graphics set at ultra settings and 3.2K resolution or above. With that said I am sure Intel made 2% improvement for the 285H, but be warned if you do not have the right airflow in your box it will be a big issue. Can't say AMD is better but there are definitely benefits and draws on both sides, needed to be considered these days before spending another 2 - 5K on a new setup.
Great job as usual Roman. This is why you are one of the few channels I subscribe to. Hopefully, Intel is paying attention to the people who actually have engineering brains like yourself. So many of my friends have recently left for greener pastures.
I believe you mean pastures🤣
@@megapro125 yes, thanks
I could not tell. Did your testing factor in the new power delivery mechanism for arrow Lake? Seeing power consumption around 170 W could be because you measured at the EPS 12 V rail and not the other mechanism by which the CPU receives power. Of course gamer Nexus can explain it better than I can.
as far as I can tell yes, since he talks about the VRM's output power measurement; the split between EPS and 24-pin is for feeding the VRM. think of it like you have a bucket with a hole in bottom, but you're filling the bucket with 2 different flow taps/hoses/whatever, and he's just looking at what comes out of the hole in the bottom
WOW, This is the ultimate IN DEPTH review.
Thanks for this review, much appreciated.
as always the best reviews ever, man if i come to germany one day ill be visiting u 100% :D
I really appreciate all the work you do for boss.
Thank you for making my 14900ks usable with the new Mycro Direct Die Pro your the GOAT!
Thanks Roman.
So glad to see your video on this CPU release. Thank you very much for being an unbiased source of factual information.
So basicly they got it down efficient as AMD, but the preformance is just worse even then their own 14gen. And it requires a whole new platfrom, wow how interesting ! What a product, good job Intel !
No, read, watch it all again. Intel is more efficient than AMD. And performance just needs to be tuned more by the ring clock (which is standard quite low) and Windows 11. Then the 285K is a monster fast CPU. AMD also needed W11 to get somewhere better. Think in solutions not in problems.
a new 289 euro 12900KS looks very very alluring
Would be a solid choice indeed . Maybe even a 7700x
i was just looking a this.. but im not sure if its worth going from my 12700k as i still only get 8p cores.. i think ill wait for the 9800x3d scores.. the 7800x3d would have been a good choice but the price went silly when it was announced Amd were stopping production. :-(
I heard the 12900KS was end of life.
Problem is that 24H2 harms Alder Lake by a significant margin.
@@mindurbusiness-b3u No. Don't do it. If you're on Alder Lake, disable the E-cores and clock the ring bus higher . You should get better fps. Although you can't overclock to match the L3 cache of an i9.
I’d be more interested if mobo prices weren’t through the roof.
In any case, thanks for always bringing your own perspective, I always find it illuminating.
Yeah you basically have to drop 300€ for a mobo and 600€ for the CPU, and you'd be stupid to pair this with anything but the fastest memory so that's another 300€ for 8200 or 8800 MT/s. So you have to pay like 1200€ to have performance comparable to a 7950X or 9950X on a B650 with 6600 MT/s memory which you can get for 600€ total. Or (if the prices weren't stupid) you can get beaten in gaming by a 7800X3D for 600€. Worse of all you can be beaten in both by a 7950X3D with a total platform cost of 800€/ upcoming 9950X3D.
It's not even a high core but low power chip which you can use in a server to handle multiple connections or something, because it chugs alone as much as my whole 5800X3D+3060 12GB+fans+spinny+2 170 Hz overclocked monitors system (idle 170 W, under maximum load 270W total as measured by my UPS).
I really struggle to find a reason for this existing, although I still think the tech is pretty cool
X3D is cpu gaming monster
I agree with you. I don't think they should remove the DLVR feature. Intel can let the motherboard manufacturers include a switch for it but with their track record of burning down the house, that's not a good idea. Performance and power draw are equally important for business work loads. I'm willing to take a slight performance hit if it saves power. Jay's business is gaming and he wants unlimited power and is willing to fry his boards. Most of us can't afford it.
Nice overview, did miss the 1300K in the comparison like I've seen in other reviews. All in all nice its 'only' the V2 285. Wondering who will fabricate the possible V3, say 385. Intel must by then been able to put a few new design updates in such version. So for work it is a nice efficient platform / CPU for gaming on and off a lesser as... still lots of FUN for DIY.
Danke schone Herr DeBauer. 😎👍🤛
It’s refreshing to see some balance here. I understand the frustration with some reviewers because there has been a level of beta testing done by reviewers… however there are genuine efficiency gains here which is a good change for Intel. Let’s hope they continue to get more efficient rather than cramming more wattage into the CPU to fake a generational uplift
Finally we are over with 400W parts for a while...
Thanks, looking good for Intel. I'm not all for FPS boost and such but I absolutely love the power draw reduction!
Does the ASUS reported VRM POUT include the power that they’re now drawing through the 24 pin? I’m assuming it does?
yes it's the full VRM power out
I hope Jay responds back with a technical video from his Lab* 🤐
Jayz Nonsense
Jay is a hot head and always goes full speed when he gets even a whiff of what he thinks is a big deal... and its always a non issue, or sometimes his own fault.
Don't even mention his name, the dude needs to disappear
@@alexskywalker888 reminds me when those gpu cable adapters started failing and in a video he talked about it while having an ad for them in it lmao
@@RADkate He gets a lot inside info from sponsors, yes he is a giant sellout and people have called him out on that hundreds of times.
Appreciate the detail. Thanks.
I loved working on Arrow Lake throughout this year. Unfortunately everyone only looks at performance; we did a lot of work on fixing the power and heat issues everyone complained about last gen, but not many seem to be talking about this.
It is great that you addressed it. Now, will it still degrade over time like the 14 Gen chips?
Techtubers only care about "big number better!!!!" yes. Although this time, everything should be less important than "is ist actually usable now, unlike 13/14th?"
absolutely, Im baffled at the takes of 99% of YT reviewers bombing on it while not addressing its a power house of processing power with WAY less power consumption and really cool temps across the board against every single cpu out there... but hey, oh no it doesnt pull a billion FPS on every single game, its a flop!
@@HellPedre to be fair, it is still using more power than AMD, and providing only equivalent performance on non Gaming workloads, while losing badly in gaming workloads to both AMD and 14th gen and 13th gen.
@@EdDale44135 more power than an almost all 3d AMDs for sure, not more than 9xxx series and even 7900, 7950s and so on (from what I've seen) and it also runs cooler in the only benchmark I saw that mentioned (Puget) so idk where you seeing that. gaming performance is all over the place, it matches the 14xxx series sometimes, beats it sometimes, and looses to everything, depends on which game CPU and reviewer
1 Jay has no idea what he's doing
2 Jay video looked like hostage/paid promotion. It was weirdly stupid :P
That's why I don't watch anyone else. These guy's video comes off as genuine unlike all the other Techtubers whoa re just basically promoting amd.
@@Darkness-ud5wk Roman is a little biased towards intel. AMD and intel are kinda same in terms of desktop cpus and it is easy to get slightly different results if you want to show some company as the winning one.
@@MarioAPN where do you get that from? Intel is a better target for fancy cooling solutions (which is his thing) because of the power issues.
But i see no reason for "bias" claims in benchmark and performance videos. . Except of course this is social media and everything has to be a conspiracy theory.
Also pretty much everybody else went into this with "ha now we can dump on Intel". So neutrality may just look like bias.
@Thisandthat8908 he likes intel because it was the only solution for hardcore overclocking. I wrote slightly biased, as he prefers intel. It makes no difference in real life, anyone can get what they want. I was referring to the comment of user Darkness. He wrote that he watches only Roman and no one else, because everyone else are promoting amd. For me, I can see both sides of a coin. Intel is good. Amd is good. Customers win. I am going with amd for the last 7 years. I am buying the first x600 line of cpus and then after a couple of years slap stronger cpu into the same build. Convenient way for me.
@@MarioAPN i can see that no destryoning intel over this may already look biased.
But he`s exited about how many things you can play with here as an overclocker. Not really an AMD thing,
great information. thank you
So what are my options then, as an uneducated non-tech-savvy person like myself, who wants the latest and greatest? Dip my toe into AMD for the first time and wait till January for the 9950X3D?
Depends on the usecase, for games 9950x3d will probably be worse than 9800x3d
But amd seems like a better option at the moment
Yeah if you're a non-tech-savvy peep, there very likely isn't a use case where the 9950X3D would make any sense over the 9800X3D.
If you just wanna game buy the 9800x3d when it comes out and you'll be good for 3-6 years. ( Assuming it's at least as good or better than 7800x3d)
It depends on how AMD does the 3d cache on the 9950x3d, if it is 3d cache on both 8 cores it will be a beast. If it is split like 7950x3d, with one normal and one x3d. It will not be as impressive.
Yes, currently AMD is definitely the way to go.
7950X or 9950X for productivity.
7800X3D for gaming and light-ish productivity.
7950X3D if you want both high productivity and high gaming performance.
280mm OR 360mm AIO cooler like the ARCTIC Liquid Freezer. Even a tower like Deep cool AK620 will do.
Any good B650 motherboard, you probably don't need X670(E)/X870(E).
As for RAM, get a kit of 6000 MHz, a kit of two sticks(not four) and the capacity you need.
And there's the CPU side of your build done.
So is it safe to say that I should just stick with getting a 9950x...or 9900x or wait for the X3D 9xxx chips
If your a gamer might as well wait for the 9800X3D as it comes out on November 7.
@@karehaqt Don't you mean 7800X3D refreshTM
Get a 7700 non-x with a cheap b650 board 😊❤
@@karehaqt agreed.... Also no use going backwards on the 7xxx even though it will be cheaper. For gaming and editing and blender better off going 12core chip or higher.
yes
1. How did you benchmark valorant?
2. The Direct Die OC is that 5.7P 5.1 ring? What are the D2D and NPU clocks for it? Also did you find ecores off to ever help? Or did overclocking the ecores ever help?
1. It's a predefined path in practice mode that takes about 5min and it's done 3 times and the average is taken of it
2. The DD OC is 5.7P and 5.1E. The D2D and NPU are the same as maxed below in the chart. I didn't have enough space to squeeze it all on that row :D
@@der8auer-en that bit of gains from insane cooling and a static P & E core? Wow. And the ring is the same 4.2 as well right
Would also be interesting to see if disabling ecores allows for much higher ring
I only tried to disable E-Cores to see if it helps performance but didn't see if it helps to push ring. Will try that for sure :) thanks for the great idea!
The power consumption isn't that low though. it channels a lot of power through the board rather then the connectors. This is rather strange behavior though. It seems like it is uses around 50% more than it seems judging by Steve's numbers.
Yes. Steve saw about 40-50W from the ATX12V (24pin) on top of what is pulled from the EPS (that is the only numbers reported on HWinfo).
They're not strangers to underhanded tactics. Just another way to hide the true power consumption, one of the many facets where AMD beats them
@@AMabud-lv7hy And where AMD beats them to such an extent that even cutting power consumption in half isn't enough of a difference.
The one positieve about this new CPU is that it does use a chiplet (or "tile") design which should allow Intel to release new CPUs with updated cores much more quickly.
Which I seriously hope they do.
AMD's advantage is slowly disappearing into overpricing and availability issues. Which aren't AMDs fault really, as the pricing and availability issues are again caused by scalpers.
That's why I just take the EC reporing from the mainboard that records the entire VRM Pout :)
edit: I was also one of the first reviewers to point this out in my first arrow lake video to warn the other reviewers. So I'm very aware of this topic!
@@der8auer-enI think you should have mentioned you are aware of this in the review, so guys like this won't assume you didn't take this into account.
Jay is worried about board manufacturers abusing the modes, leading to failures and drawn out RMAs all over again.
Glad I got a 13600k a year ago, beast little cpu and no need to upgrade to the new stuff.
This is the only review I'm going to bother with because it is professional and not "it sucks for gaming! It so is bad) BS. IMO the so called gaming performance is a non-issue. I run a 5950x and I am never CPU bound because I don't game with potato graphic settings. The power consumption of this chip makes it very appealing to anyone who would rather dump that power into the GPU - like me. All this CPU has to do is keep up, and it seems to do that just fine. I can't wait to see how this CPU performs in real world non-potato gaming scenarios because I haven't gamed at that level in a DECADE. Come on other reviewers, join the second decade of the 21st century already! And don't give me the Steam says crap - the people gaming at potato levels aren't going to buy this chip because it isn't the cheapest so go get your dumb clicks by talking about the potato junk.
I understand, you did test with the maximum ram speed the new cpu can handle, unlike maybe previous gen. But why aren't you making the comparaison chart with the same ram speed? it doesn't help us make an idea how fast is the cpu ingame compare to the old one when you test them with different ram speed.
The more you buy the less fps you get $$$ Intel
Even though performance is underwhelming, tweaking potential and features are interesting to mess around, also cool temps and reduced power are cool. Also wouldn't really expect cooler temps at same wattage and it being effectively 2 nodes newer but here we are
Well all the TH-cam channels almost destroyed Intel over the 14900k so obviously they had to bring power levels down to ensure reliability so it's not intels fault really.
Thanks for the deep dive on stuff like DLVR and how we can bypass the effect if they from move the actual bypass feature, I get what people are coming from not wanting board partners to go back to their sneaky ways boosting Voltages with their own MB tunes. The platform does look fun for thinkers, maybe it might impove a bit with time but in the now competion is fierce. OCing isnt useless by these results, you got some meaty Mhz their. Curious how the overall Ocing goes across the board, I'm sure more will investegate.
Judging by those power consumption numbers, you could very well boost it into 6 GHz, which would lower the gap between 14900K and 285K.
@@alexturnbackthearmy1907 it’s likely up to silicon lottery but yeah the overlocking is interesting
Sorry Roman, I find comparing efficiency to the notoriously inefficient 14900KS to be disingenious.
Also, what the heck is the point of the DLVR. Is it more efficient, faster or cheaper than doing the regulation on the mobo? Condensing more heat in the socket is no bueno after all.
14900KS was the highest performing from last gen :)
DLVR can supply voltage to each single P Core individually and also to E-Core Clusters. If you just do normal vCore supply, everything would get the same voltage
@der8auer-en thanks for the direct reply! This just reinforces my hatred for big.little architecture lmao.
Maybe DLVR will look better when paired with an overall better product.
are you gonna test the 285k for gaming with e cores disabled ?
Hello Roman, did you check how does 285k divide the load across cores, especially in case of the best resulted game and the worst? Also, it would be very interesting to see how 285k shows itself at different power limits, let's say 125 or 65W comparing with 9950x and 14900k and with e-cores turned off.
In general, it seems that the platform is very raw and there will be more updates. A little zen1 vibe, 285k is r7 1800x, and 7700k is 7800x3d.
Very surprised it's a purely resistive power drop from 1.4-1.5V to 1.2V. That is a ton of power loss for a generation that is supposed to be based heavily on efficiency.
Perhaps different tiles require different voltages. Thus require the DLVR to have higher than expected voltage.
Excellent video!
Hi. There is one burning specific question I have. Is there a way to test the 'AI Denoise' feature in Lightroom Classic and compare the 285K or 265K to the AMD CPUs? The AI Denoise is currently a feature that takes the longest time for me and I use it often. With the built-in (although weak) NPU in the the Intel Ultra chips, I'm guessing there's a jump in performance with them, but I'd like to see how much of a jump. Thanks!
I would speculate that spreading performance cores around the CPU to reduce the thermal spike of a relatively cool core getting full thermal boost when a thread is passed to it from a hot performance core takes additional time because it is using different level 2 or level 3 caches.
Nice review.
I really don't understand why they went with this DLVR model; even at low voltage drops like this it's painfully inefficient (both because of the high current and low overall core voltage), and they could have achieved a similar result by instead specifying that motherboards provide a small number (say 2-3) of different power rails at programmable voltages that the processor can switch its various cores between. The motherboard-provided regulators could then be switch-mode converters (impossible for the die-embedded DLVR they're using) and they'd be way more efficient.
Alternatively, they could have had the motherboard provide a "typical" Vcore (that can vary as normal under CPU control) and fed that via power switches directly into the cores running at high load, and only use the DLVR for feeding a lower voltage to other cores under lower load. Since the DLVR would only be used for a small fraction of the overall current, its losses would be much smaller and you don't have to saddle your motherboard partners with the complexity of providing multiple Vcores.
The processor itself is fairly efficient, and then they're throwing away a huge chunk of those gains dumping waste heat through an on-chip linear regulator. Insane.
Off topic idea for a Thermal Grizzly product.
I would really want to see a high quality well built Flow and temperature sensor for water cooled pc. I have seen you using a sensor for your Mycro CPU waterblock but I feel like they all need a little bit of Thermal Grizzly quality seasoning.
it's on my to do list for next year :)
@@der8auer-en That's awesome, I can't wait to buy it for my loop.
I kinda think that if the chip replaced all the E cores with P cores, for 12 P cores with more cache it would make a far stronger gaming cpu.
So if i understand it correct the DLVR is causing a lot of loss in watts? Is it nog possible to lower the Mainboard VRM voltage to say about 1.2 to 1.25volts instead of 1.5? Would it mean that cinebench could run at 42k points at about 170 watts powerconsumption if the loss is smaller?
It seems there is a lot to be gained then if you ask me! And the gaming power usage is already very well compared to 14th gen. I think with some microcode updates and Windows updates these CPU's can perform better with even lower power usage.
27:47 It seems good, and a "step in the right direction" but it's important to remember these chips are not made using Intel's own fabs. They're on TSMC.
so what you're saying is they're not going to oxidise in 2-3 years?
Thanks, nice review. Now let's hope the new 9800X3D is a decent step up. We need something that gets a positive review. These meh launches are getting a bit tedious now...
Intel 265K test is coming? There are many intel 285k and 245k tests, but hardly is 265K!
First slide with Cyberpunk made me say - so you're saying it's possible to cool 285K with a 140mm radiator, you say? Corsair one, the vertical one, could return with this generation!
Do you believe these will be harder to delid without damaging if they've chosen indium as the TIM now that they've abandoned monolithic chips?
Video about that is coming shortly :) Probably saturday. You can already read some stuff here: www.thermal-grizzly.com/en/blog/intel-core-ultra-200-new-products-for-intel-s-arrow-lake-processors
so just a mild oc makes it way faster than any CPU niceee ok I might upgrade for sure now
in all core full load production use cases.
18:52 yeah because it's reaally user friendly to boot into bios and change cpu oc settings in order to run a specific workload, the thing should be automatic.
I am less worried about what ARL brings to the table compared to what ARL will do to X3D prices 😢
Don't bother X3D, most games are GPU bound so anything 7700X will do for that. Only high fps gamers should buy it, anyone else for example 4K gamers don't need X3D. It is too overhyped that you need one, you don't need one. Any 14600K games excellent as well already.
This is basically Intel's Zen 5 focusing on enterprise and efficiency, and that would be a perfectly respectable approach, if it was the immediate successor to Alder Lake. The problem is we instead had Alder Lake, Alder Lake with more L2 Cache, Alder Lake L2 slightly binned, and now we finally have a new architecture with the single core performance of... Alder lake. This is their 3rd CPU now that has failed to make a meaningful mainstream user performance jump from 12th gen, right off the heels of their very poor response to one of the biggest mass hardware failures intel has ever had. That's why people are pissed.
I think Jay feared that that setting could cause some kind of problem like the one present in 13th and 14th gen, maybe he misunderstood something because as you said stock is DLVR, and if manually override things, is not like you couldn’t break stuff before putting very high voltages
looking forward to see what some serious overclocking can get out of the CPU, on paper at least, it looks like there is plenty of headroom
Interesting about the 2P cores then 4 ecores then 2 P core etc, games are not that intelligent in how it distributes threads also does windows need a new CPU scheduler to optimise the workload?
Yes, considering that most stuff still bangs in core 0 even if every other core is free.
Hello Roman, When you review 9800X3D, could you also review 7800X3D with the X3D turbo mode?
@11:37 I'm from another (5V and 20Ampers era... i just saw you calculation and result of 220 Ampers....crazy... are half of 1300 CPU pins are for power ?
The fact this is nearly the same as the previous gen, but requires a new motherboard is the nail in the coffin.
How did u test cyberpunk? Was u using the benchmark?
No I'm using my own ingame scene. Always use my own for all games even if it has a benchmark built in
@@der8auer-en Yea cus Hardware uboxed had lower fps compared to 14900k
@@listX_DE I would not trust anyone to properly test Intel platform beside der8auer and framechaser
Strange 9950X you got there, mine gets over 46k in Cinebench R23, and runs with 8000MT/s CL36 memory. After you tweak a few settings this thing stays cool, is extremely stable (zero blue screens or other issues in two months) and chews through any tasks you throw at it at godspeed. I think that kneecapping the 9950X with 6000MT/s CL30 ram is a bit of crime when giving Intel the beans...
How was power tested if you could share that methodology? GNs video shows normal power logging methods aren't all-telling in this gen and showed the real power draw they had
He is measuring from the VRM power reported by the ASUS embedded controller, which measures the amount of power the vrm is outputting. GN are measuring the power being drawn from the cables, which is why they ran into the issue.
14:33 is that 1.501v vcore? and 1.619v on VLatch? I guess this is fine now on a new architecture...or?
It's over
i like the watts in every chart. Most us reviewers just swipe that aspect aside.
Buit also because it makes Intels boasting about more efficiency look QUITE bad compared with AMD.
Watch GN on the power draw stuff. Literally ignore everyone else, though, Derbauer did a pretty good job on it. Most didn't properly isolate/test accurately to show the actual true efficiency (or lack thereof).
Well Der8auer has the perfect tool to judge power real power consumptions: a german energy bill.
Derbauer was the one who warned about this issue in the first place. It is absurd to say watch only gn for this, there is nothing wrong with derbauers numbers.
the irony in saying ignore Der8auer when he was on the forefront of the issue lol
Tech Jesus would probably be the first to tell you not to ignore Roman.^
How does AMD's Ryzen series handle per-core voltage downstepping? AMD's marketing materials mentioned a dLDO - is this comparable to Intel's DLVR? When comparing the two approaches, what are the pros/cons or in which scenarios is one better than the other? Does a dLDO waste less power?
Add "i" after d in dLDO and that's what they really use!
@@D9ID9I Move along... grownups are talking.
Having recently looked again at memory speeds in gaming, latency may make a considerably bigger difference than memory speed. ... If the videos I have looked at were right, then sadly you have borked your results. Going by what I have seen, 8800 C42 is going to be quite a bit slower than 6000 C30.
DLVR seems like insanity to me. A linear regulator at those currents is something that you always discard in favor of a switching regulator. Of course, they can't put large inductors and capacitors in the die but still I think that a much better solution would be having individual Vcores coming from the motherboard. Of course that would mean multiple DC-DC converters with less phases and that means a bit more ripple but I'd take that over burning 70W for no reason. It may also increase the price of the cheapest motherboards that just don't have enough phases to repurpose them into separate Vcore lines.