Impressive and disappointing at the same time is a good way to describe Intel in general at this moment in time. They are impressive as they branch off into a lot of free and open-source projects, like SVT-AV1, GCC, Mesa, and Clear Linux, but disappointing in their lack of substantial advancements in their microarchitectures and the delays in their GAA transistors, like the 18A process.
Today Intel offers us the Z890 along with the 285K to make a jump of -3% compared to the currently castrated and degraded 14900K. Curiously, I feel somewhat better knowing there are those in worse situations than me 🚬
Take any i9 12900K-14900K: disable HT, put all P-cores at 5.5 GHz, then find the maximum OC for each core (0-6) and the maximum supported voltage drop. You should be faster than stock (or the same) in MT workloads, gaming will see a boost, and you can save maybe 20-80 watts. If you disable the E-cores and HT and have a good chip, you could hit 6/6.2+ GHz all-P-core. I've seen these pulling 180-200 watts at 6.2 GHz.
People won't like the new naming scheme because people don't like change. I remember when they named the Pentium instead of 80586. I was miffed, but I got over it. We'll get used to it in no time.
This is the first Intel CPU I'm interested in over the last decade. Efficient and can hit more than 5 GHz, doesn't require hyperthreading nor suffer from the exploits that HT enables, and has 24 cores of brute performance. This is a solid flagship tbh; even if gaming performance is the same as the 14900K, it's like a 14900K that doesn't eat PSUs for breakfast, doesn't get too hot, and doesn't degrade over time.
I will be very interested to see its idle power. I am one who never turns their PC off, so I have always loved Intel's ability to get to very low idle power usage. Also, the focus on gaming always amuses me. Even if your primary goal is gaming, there is really no reason to upgrade to any of these new CPUs from Intel or AMD. So for me, it comes down to productivity gains.
When the dust settles, it will be clearer that both companies had to invest in the future; all the ++++ from generation to generation without any major overhaul was not future-proof. Both Zen 5 and AL are focusing on maximizing performance with power draw in mind, which is a good thing, since ARM already showed that it's possible. So any advancements in DDR5 memory speed support, branch/cache prediction optimization for both speed and security, and lower power draw for the same or better performance while maintaining motherboard socket compatibility (if possible) are more than welcome.
I think it's happening. We are definitely at a standstill in the CPU department. Neither company is putting anything compelling out. Well, AMD does with 3d cache. The gaming industry needs some chill time anyway.
What makes me sad is that there are no easy performance jumps from buying the latest-gen CPU anymore. I went from an 8700K to a Ryzen 5700G (during the GPU shortage) to a 7950X, and each time I got a large hike in perf due to bigger core counts. I don't want to go Threadripper or Xeon just to get a decent uplift.
The issue is sometimes trying to read the "latest" gen CPUs. AMD randomly switched their naming scheme a while back, for example, so now it's a pain to read them if you're used to the old way. The bigger number isn't always better these days. That being said, always research what a CPU is capable of so it meets your needs; don't just buy it because it's new and cool. Research is the most important thing, which even AI can't replace today, as people's use cases vary so much.
@@Deja117 Actually, I don't buy CPUs by name. If you're a tech-curious person you will probably do your research, i.e. look at CPU specs, benchmarks and such. If you're not that interested, you'll probably buy on brand recognition and what your friends and/or dealer recommend. For three generations of AMD CPUs there has been no uplift in core count, so the only gain you can buy is their architectural uplift: Zen 3, Zen 4 and Zen 5 are all the same in core count. Intel also seems to stagnate by shuffling P and E cores around. Three generations is like six years. No major uplift.
I'm glad I don't feel compelled to upgrade from my 13900. I was genuinely upset that I'd need to upgrade mobo+CPU if I wanted to stay at the top of the performance pyramid of consumer PCs.
I guess we don't know yet whether a Gen 5 SSD and a GPU both in their respective primary slots can be run concurrently without detracting from GPU bandwidth.
It seems like only yesterday that I got my 12400F for my server and thought it would be the most efficient thing I could do for a while. I feel the urge to upgrade yet again!
Puh. I'm staying with my 3950X until Zen 6. Word on the street is it'll have 12 core CCDs. A 2nm single CCD 12 core with Vcache at higher frequency and with a new IMC (finally) sounds awesome.
17:50 Honestly, it's surprising to me that this makes cooling worse. 🤔 The high points are 45 µm, so I would have expected the pressure from the cooler to bend them down, since there is nothing beneath them. When applying pressure to the cooler they would flex down like a spring until the cooler makes contact with the center, where the CPU die makes the IHS much, much stiffer. Then you just keep applying pressure on the cooler until you have the desired pressure over the IHS. The low parts would probably still be low by about 45 µm, and that would have to be filled with thermal paste. But I would not have expected this to make any meaningful difference to cooling as long as a high force is applied to the cooler, even if the cooler were super stiff. (If the cooler were super soft it would make contact everywhere, no problem.) So I am surprised that the contact frames work as well as they do.
It's kinda weird. I mean, with Core 200 Intel just seems roughly able to compete with AMD, with no real strong selling point, and the loss of HT is a clear con. But the new architecture seems to have potential, so maybe in a couple of generations they can pull ahead again. What killed them was clinging to their old architecture.
@@asdf_asdf948 Exactly - we're rapidly reaching a ceiling that a 120V/15A breaker will limit. While some users might have better than that, product design will always aim for the largest markets. In another decade at the rates we've increased we could easily be reaching that tipping point. Eventually it could very well set the overall limits on tech.
@@asdf_asdf948 No, because CPUs are nowhere near power-hungry enough to make that a concern. The typical 120V breaker is rated for 15A, which gives you 1800W to play with. Even a 14900KS pulling 400W is only using 22.2% of that circuit's capacity, ignoring PSU losses.
@@wahidtrynaheghugh260 Correction: electrical codes do not let you use the full current rating of a breaker for continuous loads. The Canadian Electrical Code (and I think the National Electrical Code as well) limits you to 80% of the rated current, which is about 12A for a 15A circuit = 1440W @ 120Vrms AC. Also keep in mind some receptacles are daisy-chained, so you are sharing the power with whatever else is connected to the same breaker. Honestly though, thermals, stability and degradation are more of a concern, and all are affected by the chip's power consumption - not to mention that extra power = more heat in the room where the computer is.
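The breaker math in this thread can be sketched quickly. A minimal illustration in Python, using the 120 V / 15 A circuit and 80% continuous-load derate cited above; the 400 W CPU draw is just the worst-case figure from the earlier reply:

```python
# Household-circuit headroom, using the figures from the thread above.
# 120 V x 15 A = 1800 W raw; electrical codes typically derate
# continuous loads to 80% of the breaker rating.
def continuous_capacity_w(volts: float, amps: float, derate: float = 0.80) -> float:
    """Usable watts for a continuous load on a given circuit."""
    return volts * amps * derate

usable = continuous_capacity_w(120, 15)   # ~1440 W
cpu_share = 400 / (120 * 15)              # a 400 W CPU vs. the raw 1800 W rating
print(f"{usable:.0f} W usable, CPU takes {cpu_share:.1%} of the raw rating")
```

So even a worst-case CPU is nowhere near tripping a breaker by itself, though the whole system shares that same ~1440 W continuous budget.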
Higher efficiency is great, but it's a really brave move to launch a new socket, and all that entails, with a CPU that isn't promising higher performance in most cases. I'd wager AMD won't be the only company to see really weak sales for their newest generation.
Comparison with the 7800X3D is pointless. Previous i9s, with their 24 cores, are in a totally different league than the 8-core 7800X3D, and the same goes for this Core Ultra 9. That doesn't change the fact that the 7800X3D might be slightly ahead in most games, but its multithreaded performance is about half that of the i9 and Core Ultra 9 CPUs. That's the simple reason it's not on the comparison charts. The i7, and now the Core Ultra 7, are much better value for gaming anyway, so those are the CPUs the 7800X3D should be compared to.
Let's be clear: 99% of customers don't care about power efficiency, they care about better performance. The top tier only matching the gaming results of the 14th series should hurt them. They should concentrate on being faster, not more efficient, for the general consumer.
If I were able to get 1x16 and 1x8 with 2 NVMe and 10GbE, I would upgrade just for the PCIe lanes. Not being able to use my 2nd full-length PCIe slot unless I want my 4090 to get only 8 lanes is annoying.
I dropped what I was doing and lost my mind when I heard individual voltages per P-core - that is amazing. X3D still sounds faster for gaming, so I think I'll be sticking with that.
You already talked about the socket, but you forgot to mention whether existing cooler blocks (custom loop or AIO) are still compatible with this new socket.
My rig is 8 years old with an i7-6700K. I have to build a new rig asap, so I'm forced to buy the new Intel CPU. For me it will be an enormous upgrade in any case.
I think it's a great processor. I think the small difference in gameplay only occurs at low resolutions, which for me is not of any importance. If I were looking for a new platform, I would invest in this Intel. Great efficiency and great performance. But I'm using AMD 7950x and it's great for me.
Look at the comparisons made for power draw, notice what's missing? Edit: The extra 4 PCIE lanes from the CPU should be for connecting to the chipset...
I was also thinking about that. If you don't have the tool to grind metal, you could just "Hotknife™" that plastic nipple off of the board just like we used to do to plug an X8 card in an X4 slot ^^
I am severely disappointed in the lack of cat in this video
We know what's important.
This! 🎉 no videos without the kitties! Even if it’s just a small graphic at the bottom.
This obviously makes this the worst Intel launch in years
The cat moved to the AMD camp
No pets no unnecessary troubles and hairs in your apartment.
MB manufacturers can't wait to use that DLVR for daily driving these CPUs at 1.7V so it can ignore all this new power saving.
Add poorly made firmware and we'll be back to last year's problem: more burned CPUs, now made by TSMC!
@@arch1107 Yeah, gone are the times when a CPU would "work for 20 years even if you overclock"; now you're happy if it survives the warranty period at stock... And the problem is always pushing parts way above where they should be pushed. That's why we have CPUs from AMD and Intel that can cut their power consumption by 50% while losing only 10% of their speed.
...That's a dumb take, my man. Motherboard manufacturers weren't at fault in the end; Intel just used them as scapegoats.
@@jake20479 Oh, ASUS definitely does some garbage
@@jake20479 So completely removing all the CPU's limits isn't on them? Or that time the 7800X3D was killed with 1.3 V of SoC voltage (against any AMD recommendation - search "exploding 7800X3D"), then a new BIOS came out and the dying stopped? Or when ASUS and Gigabyte faked the power-consumption telemetry of Ryzen 5000 so the CPU boost would go all out, not knowing it was exceeding the intended TDP? What CPU manufacturers could do is hold MB manufacturers by the balls so they won't pre-overclock CPUs. This Intel degradation thing was partly Intel's own code, but a motherboard should never, ever OC anything by default, and nowadays they do.
"Arrow Lake is harder to delid, but we will talk about the solution when the embargo lifts." I just laughed, because of course you already have a solution for that!
If I had to hazard a guess, he's probably made a delidding tool that is slightly raised at the sides and comes down right near the IHS.
I think your idea is right
4% more PCB thickness isn't _quite_ as useless as it sounds, since bending stiffness scales with the cube of the thickness - so it's roughly a 12% stiffer PCB. It's of course still utterly ridiculous to do what they are doing with the two central contact points and just a bit less contact pressure. Well into facepalm territory there...
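The cube-law figure above checks out; a minimal sketch of the arithmetic (Python, purely illustrative):

```python
# Bending stiffness of a plate scales with thickness cubed, so a 4% thicker
# PCB should be (1.04^3 - 1) = ~12.5% stiffer in bending.
def stiffness_gain(rel_thickness_increase: float) -> float:
    """Relative bending-stiffness gain for a given relative thickness increase."""
    return (1.0 + rel_thickness_increase) ** 3 - 1.0

print(f"{stiffness_gain(0.04):.1%}")  # ~12.5%
```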
Easily the best tech YouTube channel, period. Always getting parts early or just straight-up exclusive tech.
Finally, Intel targets efficiency. AMD has been killing them in that arena.
If you said this 10 years ago, you'd be getting so many weird looks! Insane time!
@@BBWahoo Can't even run stock TDP with a cheap motherboard, while on Intel I could, back in 2014.
@@MrEdioss Any Z690/Z790 in the $150+ range can. I ran a 13900KS at 6 GHz all-core on a $180 MSI Z790 WiFi.
@@MrEdioss It's true, so many out there can't even run their 5800X except in eco mode due to heat. I had to limit the temperature to 85°C and turn on eco mode; running at 100% for more than a minute resulted in crashes. This isn't isolated: anyone using ANY motherboard without robust enough VRMs can't run the 5800X, and presumably any AMD processor that behaves the same way.
The 265KF still peaks at 250W at max clock. Efficiency my @$$.
I am so glad you have a channel; it's so much better than many of the other, larger tech YouTuber channels ... no BS, no drama ... just great quality.
Intel using all the competition's tricks: 1. tile gluing, 2. AI footnotes, 3. efficiency, aka performance/watt.
I think you need to check your history books. Look up Intel Clarkdale and Arrandale: they were MCM (multi-chip module) CPUs, so Intel has already done something "similar". And performance per watt... Intel used that in their marketing even back in 2005 :P
Bro is complaining about tech companies talking about tech things 💔💔
Next video: we made a semi-automatic razor-blade delidder for the new Intel CPUs!
Next next video: we made a new highly conductive thermal glue for after you delid your CPU
Even next video: we are investigating crunching noises on new Intel CPUs
If the iGPU supports AV1 encoding, this thing will be a rendering monster for the likes of streamers and YouTubers. The 245K will be really interesting to see how cool it can be run.
It does, but the media engine hasn't changed from the Alchemist days. The AV1 hardware encoder is disappointing.
You already have QuickSync and Nvenc which is more than enough and less resource intensive already.
@@kazuviking Good luck using Nvenc from an Intel cpu.
@@kazuviking Isn't he literally talking about QuickSync?
You have a "filler tile", you have a "filler tile", everyone has an Intel "filler tile"...
So, DLVR is the comeback of the integrated voltage regulator that we had in the past, but with an optional bypass feature. Sounds good. But how was it implemented: does every core have its own voltage regulator, or is it just one for the P-cores and another for the E-cores?
Is the DLVR bypass a requirement for board design or an optional feature? Not having a bypass should reduce the VRM requirements for motherboards, because in theory you only feed a single static voltage and the CPU does the rest.
This is the thing I will miss most about AnandTech. Those guys went very deep on these architectural details.
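For intuition on why the bypass question above matters: a DLVR is a linear (non-switching) stage, so whatever voltage it drops from the input rail turns directly into heat. A minimal sketch (Python; the 1.6 V input rail, 1.2 V core voltage, and 100 A load are made-up illustrative numbers, not Intel specs):

```python
# Linear regulation burns P = I * (Vin - Vout) as heat. In bypass mode
# Vin == Vout and that loss disappears, at the cost of the motherboard
# VRM having to supply the exact core voltage itself.
def linear_reg_loss_w(vin: float, vout: float, current_a: float) -> float:
    """Heat dissipated in a linear regulator stage (illustrative model)."""
    return (vin - vout) * current_a

print(f"{linear_reg_loss_w(1.6, 1.2, 100.0):.0f} W of heat")  # prints "40 W of heat"
```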
3:37 I'm excited for the filler tile! AMD can't top that!
Yeah, great, a piece of silicon that does nothing except trap heat. Nice move.
Reducing the power was an essential step.
Now they have room to grow on the next generation/s.
A shame, a year late, and a patchwork quilt to keep cost down. A real transitional product, not fully taking advantage of TSMC 3 nm. Hopefully they get another shot by the next release. They needed a 8P+8E SKU under 200 W and $300 US for this launch, likely binned for later.
Intel's 14AP is gonna be like that.
@kazuviking I'm not comfortable even calling that Intel. Since the fabs might very well be under new ownership by then, despite Gelsinger trying to claim they will be a subsidiary.
Oh look, another Intel CPU, another socket. How surprising!
16:50 This is why we watch der8auer videos: proper high-tech tools being used for gamers' entertainment 😎
Hey, your timestamps are not listed correctly in the description. Just a heads up, excellent content as usual
Can't believe he ended the video without talking about the 3-DIMM-slot motherboard
Greatly appreciated the detailed look which included socket information. Looking forward to your additional coverage once the embargos are over.
Good to know that contact frames are still going to work best. Kind of strange they didn't fix the issue with this new CPU, I agree. Thumbs up for great videos.
Same washer mod as Noctua then. Sooo, a contact frame is still preferred by the looks of it.
16:30 I'm familiar with the internal 'discussion' about the ILM. Basically it came down to cost and simplicity vs. the actual severity of the problem. It's considered a very small segment of the customer base that will care beyond the modified ILM, and there are consequences to coming up with a new mechanism or asking mobo makers to carry special ILMs for higher-performance boards.
Incorrect, Intel did have a 7950X3D slide for gaming. It's in the "A Balanced Enthusiast Experience" slide. It specifically said 7950X3D on the red line.
Was Core Parking enabled? That was not in the video tho?
@@RawBejkon Take the 1st party benchmarks with a grain of salt, but the main gripe by der8auer was that the slides "didn't include the 7950X3D." which is factually incorrect. We'll just have to see final numbers in 2 weeks, including faster memory speeds etc... At least it seems like only modest marketing claims by Intel and not outrageous ones like Zen5.
Compare it to the 7800 because that’s the real gaming champ
@@mikezappulla4092 It'll easily lose to a 7800X3D.
If they're losing to the 9950X in some games, then AMD will remain on top. Then there's 24H2, which wasn't as beneficial for Intel and is very likely the same for Arrow Lake. Did they test with that version of Windows? Doubt it.
That E cores perf change is HUGE
Nice to see the Filler tile so close to the Compute tile. This should drastically reduce Filler latency. /sarc
12:00 The CPU is blushing 😊
Intel needs a technology to compete with X3D ASAP.
Lol, who's even CPU-bottlenecked these days?
@@AndroidBeacshire Actually this is a valid point. Who cares about 200 vs 215 fps?
Cope @@AndroidBeacshire
@@AndroidBeacshire Grand strategy games generally involve very heavy CPU computation and little in the way of graphics. The Paradox strategy games, the Civilization franchise and others are mostly CPU-bottlenecked. If you happen to play those games you need a strong CPU. FPS doesn't matter because you'll be more concerned with tick rates / turn times in those games.
A 12P core 5.7ghz high cache cpu would be very interesting.
A 14900K undervolted at a 100W limit doesn't sacrifice much gaming performance vs. stock. So is there any point in Arrow Lake for gaming?
This Intel CPU has no point. This is DOA
@@freak777power It still beats the 14900K in multicore.
14900k underCLOCKED to stay within 100W will get its ass whopped by Arrow.
@@PolskiJaszczomb What are you even talking about? According to Intel's own slides, it is within 100W in most games, with the gap as small as 30W in Black Myth: Wukong, while performing worse on average. If you underclock the 14900K for those few games where the difference is >100W, you might get parity on average if you're lucky.
And remember that these are first party numbers. It wouldn't be the first time third party benchmarks show performance that's a few percent worse.
@@pmHidden Have you ever had a 14900K? If you stay below 100W you're severely GPU-bound; normally you're literally unable to average below 130W, and you constantly hit 160W.
I'm looking forward to the DLVR discussion, and how it differs from the FIVR we saw in earlier chips
Solid video as usual, Roman being very hands on and using his engineering know how. 👍
The 9% IPC gain for P-cores they are referring to is likely understated by the clock regression vs. the 14900K. This looks to be Broadwell vs. Haswell all over again, only this time they don't have enough IPC gains to make it better than its predecessor. The loss of HT is also not helping it in MT workloads. I never thought both AMD and Intel could produce a dud generation back to back.
I hope Roman will test the new CPU with the Alphacool Core 1 waterblock and der Cat ❤
"Low temp overvolting" sounds like something that didn't end well in 14th gen..
I bet they put this on the slide intentionally, to highlight that this feature that doesn't work safely on 13th/14th gen finally works on the 2xx series. But what if we see the same degradation after a year or so on the new gen?
Am I the only one to notice that Intel went from using 100% more power than AMD to boasting that they dropped power consumption by 40%, claiming it as some sort of victory, while they still use way too much power?!
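The arithmetic behind this complaint, as a quick sketch (Python; the 2x starting gap is the commenter's figure, taken at face value for illustration):

```python
# Start at 2x a competitor's power draw, cut 40%: you still land 20% above it.
def remaining_gap(start_ratio: float = 2.0, cut_fraction: float = 0.40) -> float:
    """Power draw relative to the competitor after the claimed reduction."""
    return start_ratio * (1.0 - cut_fraction)

print(f"{remaining_gap():.2f}x")  # 1.20x - a 40% cut from 2x still isn't parity
```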
It's up to the motherboard manufacturer on how to distribute the PCIe lanes, that's probably why you can't find any information about it in the cpu datasheet.
Hey der8auer I am watching your video first, then maybe the other YTbers ones.
I don't know if anyone noticed, but when comparing to the 9950X, both CPUs were set to a 125W TDP. Thing is, the 9950X by default runs at a 170W TDP, which allows up to 230W of power consumption; limiting it to 120-125W reduces that to 160-170W. Meanwhile, a 125W TDP for Intel still allows up to 250W of power consumption. The 9950X was severely underpowered in the multicore tests compared to the 285K.
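The numbers in this comment line up with AMD's usual PPT ≈ 1.35 × TDP rule of thumb. A sketch of that relationship (Python; the 1.35 multiplier is the commonly cited figure, treat it as an assumption rather than a spec):

```python
# AMD's package power tracking (PPT) limit is commonly ~1.35x the rated TDP.
def amd_ppt_w(tdp_w: float, multiplier: float = 1.35) -> float:
    """Approximate socket power limit for a given AMD TDP (rule of thumb)."""
    return tdp_w * multiplier

print(f"{amd_ppt_w(170):.1f} W")  # ~229.5 W -> the stock "up to 230 W"
print(f"{amd_ppt_w(125):.1f} W")  # ~168.8 W -> the ~170 W seen at a 125 W TDP
```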
These are the first desktop CPUs from Intel to be mainly produced by TSMC, right? The decrease in power consumption wouldn't really be a surprise even if they were made on the N4 or N5 processes.
It's TSMC 3 nm for the important chiplets, larger nodes for the rest.
If Intel have genuinely fixed the 14900K, it could make a surprising come-back... remains to be seen.
Thanks for the info. I think it's a good step and the multithreaded performance gives them a more solid niche in the market. As always, we will have to wait for independent reviews to give a verdict.
Intel - we're going to try to make it extremely difficult to delid our new CPUs
Roman - hold my beer
Looking forward to the microcode updates
No i10??? How boring from Intel.
You will get i9+++.
@@PREDATEURLT OMG HYPE!!!
Hyundai already makes those ;p
The number of PCIe lanes alone is incredible. No longer are we stuck at 24 lanes. We're finally nearing, at consumer prices, the lane counts we had on HEDT/server platforms (2011-3/X299) 10 years ago.
that will be disabled on most motherboards for one reason or another
especially knowing fewer people need those lanes now, because most use only one NVMe drive, one GPU and nothing more
@@arch1107 yeah, but that makes sense for cheaper boards especially
The stupid part is for more expensive boards, they will spend 16 to 24 lanes on NVMe slots
I just want 1x16 1x8 2x4, 2 NVMe, is that so hard
Lol, you totally misunderstood the slide. Previous-gen Intel CPUs had 20 PCIe lanes. The new CPUs look like they have either 24 or the same 20 as previous gens. The total of 48 PCIe lanes advertised on the slide is the combined count of the Z890 plus the CPU,
where the Z890 has only limited bandwidth to the CPU and acts as a PCIe hub. So the lanes the Z890 provides are only connections to the chipset. Even worse, the Z890 is still PCIe 4, not even PCIe 5.
We are nowhere near server-grade CPUs and mobos. Looks like nothing has changed.
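The bandwidth point above can be put in rough numbers. The DMI 4.0 x8 uplink width and the per-lane throughput figures below are approximations for illustration, not datasheet values:

```python
# Everything on the chipset's PCIe lanes shares one uplink to the CPU.
# Assumes a DMI 4.0 x8 uplink (PCIe 4.0-equivalent) and the usual
# ~1.97 GB/s of usable bandwidth per PCIe 4.0 lane.

GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # usable GB/s per lane

def link_bw(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe/DMI link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

uplink = link_bw(4, 8)    # chipset-to-CPU uplink: ~15.8 GB/s
gen4_ssd = link_bw(4, 4)  # one Gen4 x4 NVMe SSD: ~7.9 GB/s
print(f"uplink: {uplink:.1f} GB/s, one Gen4 x4 SSD: {gen4_ssd:.1f} GB/s")
# Just two Gen4 SSDs hitting the chipset at once can saturate the uplink,
# which is why chipset lanes don't count the same as CPU lanes.
```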
Yep still rocking a 10980xe
@@nebiforever Company fanboys misreading facts? I'm shocked!
Somewhat impressed by the efficiency, disappointed by the performance they showed vs the 9950X in gaming and vs the 7950X3D in content creation. It will be interesting to see real-world testing, and especially if you would like to probe the CPU voltage behavior.
Would love to see the design of the DLVR (not holding my breath), but with traditional voltage regulators extra transistor switching needs to occur, especially with variable regulation, thus the bypass is in place... hope we can get more tech specs on this.
I think it's still a wait and see; I bet the gaming regressions are limited to games that want more than 8 "fast" threads to get peak performance (looking at you, Cyberpunk...) while those that are either heavy MT or more lightly threaded will show either parity despite the reduced core clocks or a net gain thanks to the better E-cores.
It'll be interesting to see the Day 0 benchmarks. I don't think the 285K makes a great deal of sense, but the 265K seems to potentially be "more worse" than 14700K vs 14900K.
Impressive and disappointing at the same time is a good way to describe Intel in general at this moment in time. They are impressive as they branch off into a lot of free and open source projects, like SVT-AV1, GCC, Mesa, and Clear Linux, but are disappointing in their lack of substantial advancements in their micro-architectures and the delays in their GAA transistors, like the 18A process.
Today Intel offers us the Z890 along with the 285K to make a jump of -3% compared to the currently castrated and degraded 14900K. Curiously, I feel somewhat better knowing there are those in worse situations than me 🚬
The fact that the K is equal to the KS gives me hope that the new KS will be insane
Take any i9 12900K-14900K, disable HT, set all P-cores to 5.5 GHz, then find the next maximum OC for each core 0-6 and the maximum supported voltage drop. You should be faster than stock, or the same, in MT workloads, but gaming will see a boost, and you can maybe save 20-80 watts. If you disable the E-cores and HT, you could hit 6/6.2+ GHz all-P-core with a good chip. I've seen these pulling 180-200 watts at 6.2 GHz.
Can Intel put the E-cores to sleep?
If one could keep those cores asleep for specific games, it should lower wattage.
People won't like the new naming scheme because people don't like change. I remember when they named it Pentium instead of 80586. I was miffed, but I got over it. We'll get used to it in no time.
Looks like Intel matched the Ryzen 9000 series at being impressive and disappointing at the same time
We're at a CPU paradigm shift now.
this is the first intel CPU i'm interested in over the last decade
Efficient and can hit more than 5 GHz
Doesn't require hyperthreading nor does it suffer from the exploits that HT enables
24 cores of brute performance
this is a solid flagship tbh, even if the performance in gaming is the same as 14900k
it's like a 14900k that doesn't eat PSUs for breakfast and doesn't get too hot, nor does it degrade over time
I will be very interested to see its idle power. I am one who never turns their PC off, so I have always loved Intel's ability to get to a very low idle power usage. Also, the focus on gaming always amuses me. Even if your primary goal is gaming, there is really no reason to upgrade to any of these new CPUs from Intel or AMD. So for me, it comes down to productivity gains.
When the dust settles, it will be clearer that both companies had to invest in the future; all the ++++ from generation to generation without any major overhaul was not future-proof. Both Zen 5 and Arrow Lake are focusing on maximizing performance with power draw in mind, which is a good thing, since ARM already showed it's possible. So any advancements in DDR5 memory speed support, branch/cache prediction optimization (both for speed and security), and lower power draw for the same or better performance while maintaining motherboard socket compatibility (if possible) are more than welcome.
I think it's happening. We are definitely at a standstill in the CPU department. Neither company is putting anything compelling out. Well, AMD does with 3d cache. The gaming industry needs some chill time anyway.
5:40 WHAT THE HECK!!!
Filler tile??? Intel, you could have jammed so much L3 cache in there. Pat, what are your people doing???
But if they used that space to house "L4 cache" you would have to pay more for the processor.
@@zosimus_99 Paying $50 more for 50 more FPS is a BANGER WIN for high-end gamers. AKA this is why the 7800X3D is untouchable.
@@zosimus_99 GN has an excuse to say "waste of sand" again.
E-cores and empty spaces could be used to stack cache, but Intel is dumb nowadays
@@Saiohleet Maybe they are just working on it
What makes me sad is that there are no easy performance jumps from buying the latest-gen CPU anymore. I went from an 8700K to a Ryzen 5700G (during the GPU shortage) to a 7950X, and each time I got a large hike in performance due to bigger core counts. I don't want to go Threadripper or Xeon just to get a decent uplift.
The issue is sometimes trying to read the "latest" gen CPU's. AMD randomly switched their naming scheme a while back for example. So now it's a pain to read them if you're used to the way they used to do it. The bigger number isn't always better these days.
That being said. Always research what a CPU is capable of so it meets your needs. Don't just buy it because it's new and cool. Research is the most important thing that AI can't even replace today, as people's use cases vary so much.
@@Deja117 Actually, I don't buy CPUs by name. If you're a tech-curious person you will probably do your research, i.e. look at CPU specs, benchmarks and such. If you're not that interested, you will probably buy by brand recognition and what your friends and/or dealer recommend.
For three generations of AMD CPUs there has been no uplift in core count, so the only gain you can buy is the architectural uplift: Zen 3, Zen 4 and Zen 5 are all the same in core count. Intel also seems to stagnate by shuffling the P and E cores around. Three generations is like six years. No major uplift.
I'm glad I don't feel compelled to upgrade from my 13900. I was genuinely upset that I'd need to upgrade mobo+CPU if I wanted to stay at the top of the performance pyramid of consumer PCs.
I guess we don't know yet whether a Gen 5 SSD and a GPU both in their respective primary slots can be run concurrently without detracting from GPU bandwidth.
It seems as if it were yesterday I got my 12400f in my server, and thought it would be the most efficient thing I could do for a while. I feel the urge to upgrade yet again!
That doesn't seem very efficient
300 dollars for the lower sku
@@simocity99 but not compatible with DDR4 and LGA1700, right?
Wonder if this thing has a bunch of OC headroom since the power consumption is so relatively low
maybe but gaming performance is hurt by the tile design
I'd like to see an 8p+4e or 8p+6e version without the reduced clocks for the P cores at a lower price.
Puh. I'm staying with my 3950X until Zen 6. Word on the street is it'll have 12 core CCDs. A 2nm single CCD 12 core with Vcache at higher frequency and with a new IMC (finally) sounds awesome.
uses 165W less power than a 14900K in Space Marines II ... 🧐
17:50
Honestly it is surprising to me that this makes the cooling worse. 🤔
The high points are 45 µm, so I would have expected the pressure from the cooler to bend them down, since there is nothing beneath the high points.
Thus, when applying pressure to the cooler, they would bend down like a spring until the cooler made contact with the center, where the CPU die makes the IHS much, much stiffer.
Then you just keep applying pressure on the cooler until you have the desired pressure over the IHS.
The low parts would probably still be low by about 45 µm, and that would have to be filled with thermal paste.
But I would not have expected this to make any meaningful difference to the cooling as long as a high force is applied to the cooler, even if the cooler were super stiff. (If the cooler were super soft it would make contact everywhere, no problem.)
So I am surprised that the contact frames work as well as they do.
It's kinda weird. I mean, with the Core 200 series Intel just seems roughly able to compete with AMD, with no real strong selling point, and the loss of HT is a clear con.
But the new architecture seems to have potential, so maybe in a couple of generations they can pull ahead again. What killed them was clinging to their old architecture.
Considering they are also reducing the number of threads, it's definitely not bad.
-165W in space marines.
That's roughly the consumption of an Intel Xeon v3 or v4 CPU with a motherboard and 4 sticks of RAM. Under load!
@der8auer
My man.....
Would you mind telling me where you got those hexagon fake plants on the wall behind you?
Me Gusta !😊
Did they glue the tiles to the base 😂😂😂
but they used AMD glue, it seems, to glue tiles instead of chiplets 🤣
Intel Error Lake
(I'm not making fun of Roman's accent. I'm unlucky owner of buggy i9-14900K)
Claiming a serious gamer cares about power consumption is like claiming a Lamborghini Huracan driver cares about fuel consumption.
They do though, residential wiring only goes up to a certain amperage per socket
@@asdf_asdf948 true, and it is also country-specific.
@@asdf_asdf948 Exactly - we're rapidly reaching a ceiling that a 120V/15A breaker will limit. While some users might have better than that, product design will always aim for the largest markets. In another decade at the rates we've increased we could easily be reaching that tipping point. Eventually it could very well set the overall limits on tech.
@@asdf_asdf948 no, because CPUs are nowhere near power hungry enough to make that a concern. The typical 120V breaker is rated for 15A, which gives you 1800W to play with. Even a 14900KS pulling 400W is only using 22.2% of that circuit's capacity, ignoring PSU losses.
@@wahidtrynaheghugh260 Correction: electrical codes do not let you use the full current rating of a breaker for continuous loads. The Canadian Electrical Code (and I think the National Electrical Code as well) limits you to 80% of the rated current, which is about 12A for a 15A circuit = 1440W @ 120Vrms AC. Also keep in mind some receptacles are daisy-chained, so you are sharing the power with whatever else is connected to the same breaker. Honestly though, thermals, stability and degradation are more of a concern, and all of those are affected by the power consumption of the chip, not to mention that extra power = more heat in the room where the computer is.
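The circuit math from this thread, written out. The 80% continuous-load derate is the NEC/CEC rule cited above; the 90% PSU efficiency is just an illustrative assumption:

```python
# How much of a North American 120 V / 15 A branch circuit does a 400 W
# CPU actually occupy once the code derate and PSU losses are counted?

def continuous_limit_watts(volts: float, amps: float, derate: float = 0.8) -> float:
    """Usable continuous power on a branch circuit (code limits to ~80%)."""
    return volts * amps * derate

def wall_draw_watts(dc_watts: float, psu_efficiency: float = 0.90) -> float:
    """AC power drawn at the wall for a given DC load."""
    return dc_watts / psu_efficiency

limit = continuous_limit_watts(120, 15)  # 1440 W, not the naive 1800 W
cpu = wall_draw_watts(400)               # ~444 W at the wall
print(f"limit: {limit:.0f} W, a 400 W CPU uses {cpu / limit:.0%} of it")
```

That leaves the CPU alone at roughly a third of the circuit once a GPU, monitor, and anything else daisy-chained on the same breaker are added on top.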
Higher efficiency is great, but it's a really brave move to launch a new socket, and all that entails, with a CPU that isn't promising higher performance in most cases.
I'd wager AMD is not the only company to see really weak sales for their newest generation.
What's with the 3 ram slots ? Is it the UDIMM thing ?
That 3rd slot is for nvme drive adapter
17:20 They could make long notches all around the CPU to apply pressure around the whole IHS. But no, they still bend the IHS with the same 2 notches.
No cat in video, new Intel chips DOA.
Still waiting for the delidded 9950X and 9900X max OC results!!
Appears Intel continues not to target gaming. Makes sense. The new specs are definitely business friendly.
So if I own a 14900K, the earliest upgrade for me would be a 485K I guess
Comparing with the 7800X3D is pointless. Previous i9s with their 24 cores are in a totally different league than the 8-core 7800X3D, and the same goes for this Core Ultra 9. That doesn't change the fact that the 7800X3D might be slightly ahead in most games, but its multithreaded performance is about half that of the i9 and Core Ultra 9 CPUs. That's the simple reason it's not on the comparison charts. The i7, and now the Core Ultra 7, are much better value for gaming anyway, so those are the CPUs the 7800X3D should be compared to.
Let's be clear: 99% of customers don't care about power efficiency. They care about better performance. With the top tier only getting the same gaming results as the 14th series, this should hurt them. They should concentrate on being faster, not more efficient, for the general consumer.
If I could get 1x16 and 1x8 with 2 NVMe and 10GbE, I would upgrade just for the PCIe lanes. Not being able to use my 2nd full-length PCIe slot unless I want my 4090 to only get 8 lanes is annoying.
So 265KF OC to 6GHz will be the best performer for Gamers? (no GPU tile).
Why so conservative with gains? It would be 6.016,67 whopping ghz!
@@MrEdiossor 5.983.33 Ghz...
I dropped what I was doing and lost my mind when I heard about individual voltages per P-core, that is amazing. X3D still sounds faster for gaming, so I think I'll be sticking with that.
I concur, this is good for better power management.
Intel moving to a rectangle socket has not gone well
You already talked about the socket, but you forgot to mention whether existing cooler blocks, custom loop or AIO, are still compatible with this new socket.
My rig is 8 years old with an i7-6700K. I have to build a new rig ASAP, so I'm forced to buy the new Intel CPU. For me it will be an enormous upgrade in any case.
Impresappointing
ultraimpresappointing
I think it's a great processor. I think the small difference in gameplay only occurs at low resolutions, which for me is not of any importance. If I were looking for a new platform, I would invest in this Intel. Great efficiency and great performance. But I'm using AMD 7950x and it's great for me.
It needed a gaming uplift over 14th gen, and it doesn't have that. Eh
Performance per watt all depends on where you're pushing
You know what, I like it for now, not mind blowing or anything but it is an interesting step in the right direction
Look at the comparisons made for power draw, notice what's missing?
Edit: The extra 4 PCIE lanes from the CPU should be for connecting to the chipset...
How about modifying a contact frame?
I was also thinking about that. If you don't have a tool to grind metal, you could just "Hotknife™" that plastic nipple off the board, just like we used to do to plug an x8 card into an x4 slot ^^
From Zen 5% to Intel -5%... how exciting...
Hi Roman. Are you able to share if the existing thermal grizzly intel heatspreader is compatible with the new boards?