We desperately need direct-die water cooling, especially as we go down the 3D-stacked-chips road; stacking heat sources on top of heat sources severely limits clock frequencies. But even with 2D chips like this, it's very clear that the limiting factor is heat density. I know that TSMC has been testing direct-die water cooling, where micro-channels are etched directly into a silicon layer on top of the CPU for coolant to flow between layers. In their tests, channels were etched into a silicon layer with a silicon-oxide thermal interface material between the microfluidic system and the actual silicon of the TTV (thermal test vehicle). In a third option, the silicon-oxide TIM was replaced with a liquid-metal TIM. The results were really impressive and easily enabled cooling chips that drew more than 2,000 W.
Copper conducts heat about twice as well as silicon, and aluminum is also a better conductor, so silicon-layer cooling is already off to a bad start. Then there's the problem of cheaply making a super-thick silicon chip to leave room for the fluid channels, and of connecting a chiplet design like Epyc or Sapphire Rapids with fluid channels between the individually built, spaced-out chips - sometimes up to 13 individual dies. What happens if the fluid channels are no longer clean after a couple of years of use? And is the density even needed? We already have 96-core Epyc chips that are air-coolable, dual-socket capable, and mountable in a rack of 10 servers for a total of 1,920 CPU cores running at at least 2.4 GHz. In Geekbench, this 56-core Sapphire Rapids at a 1,000 W overclock can't beat a stock 96-core AMD Epyc 9374F at a 400 W TDP, so it's just a question of not pushing the overclock well past good efficiency and using the right chip for the job.
1000 W on a chip with 1400 mm^2 of die area isn't that hot. There's a lot of surface area and it's easily cooled with a radiator or two in a water cooling loop.
Water introduces so many problems and makes maintenance a pain. Those micro-channels will clog, and then for troubleshooting you have to drain the whole system. No thank you.
The chip is one thing but I'm blown away by that VRM. How on earth is that thing shifting 1kA with that tiny heatsink? Gotta be some fancy new bare-die SiC MOSFETs or something, with ludicrously high transconductance and ultra-low Rds(on). I don't see output caps, either - just a big array of pads, or maybe filled/plugged vias? Did they switch to a massive bank of MLCCs on the back or something? Maybe even embedded passives? Pretty wild. Edit: given that production boards for SR seem to be omitting polar caps on the low side too, I guess we just exceeded the ripple current limit for alupoly and solidpoly? I wonder how they're handling the inrush issue from low MLCC impedance at low load though... maybe that's part of why the idle consumption is so high. Can't wait to see some teardowns and analysis of these new designs.
I have that exact dewar at 1:28 on the right side of the screen. Love the low pressure valve for refilling the vacuum tight flasks for pouring LN2 into the pots
This is exciting! Thank you for sharing your time at Intel, and thank you Intel for sharing this news! G.Skill, wowowowow! I can't wait to see more of your HEDT overclocking.
6:42 - I have a question: why does the core overview put the "favored cores" in weird places in the grid chart, instead of at the top? My 7950X is the same way in Task Manager... I can definitely spot which cores are the favored ones even though they aren't marked by stars or anything. There seems to be no rhyme or reason to the layout in Task Manager compared to where the cores might actually be located on the chip/die.
I would love one of those if not for the fear of the electricity usage. It would be interesting to see some 'IPC comparison' testing for the lower core-count models against similar core-count parts (HEDT and consumer Intel and AMD). That Cinebench R23 score at only 2.9GHz is like 2.7x a Ryzen 9 5950X with PBO😲 Very steep platform entry cost, but it can pay for itself quickly in certain use cases.
Can someone explain how you would ever hope to cool this thing when it's running ~1000 W loads? Of course this kind of setup would have crazy custom cooling, but still - 1000 W?
The fact that the VRM didn't overheat in Geekbench 5 doesn't mean anything, because it's a very short peak load. Also, these CPUs use a FIVR, so the current going through the motherboard's VRM isn't as high.
Speaking of DDR5 ECC with XMP: I stumbled across some DDR4 ECC a few years ago that had XMP, and it seemed to work with everything. Sure, it was only 3200, but for a 128GB kit that's actually a pretty good speed for ECC at the time - and I think even now, because most ECC kits are still 2400/2666.
@@Sineseol True, but you're not getting 4 sticks of 64GB ECC 6000 for several years, at least not running at 6000, let alone for ~$400 total. IIRC it was $196 for a two-stick 64GB kit of DDR4 UDIMM ECC 3200, and less than $300 for 128GB.
I'd love to see overclocking results for the 2465X. 16 cores is plenty for me, even the 12-core X would be fine. I mostly need the PCI-e lanes, with excellent single-threaded performance, though some apps I use need lots of cores, so the 6-core wouldn't suffice. How far past 4.7GHz can it be pushed...
Yeah, and your platform (weighed against the 16-core SPR) was at most 60% of the price. I'm shocked to see people so excited by Intel's upcoming offering. Don't get me wrong, it will perform really well, but the price-performance ratio is objectively poor.
@@comrade171 imagine comparing consumer platforms to the professional variants and complaining about the professional variants being more expensive. There is a reason the 13900K forced the 7950X to have its price cut by over $100.
@@comrade171 This is not a platform for gamers or consumers. The people that buy these types of systems (like me) don't rate them in price to performance, but rather price to time. If buying this CPU + board can save me 2 hours a day and only cost 10k, then it will pay for itself in under 2 weeks; after that it is increasing how much money I make every week.
My 13900K is already getting 42k in R23... who cares. The CPU in the video gets 70k... literally almost 2x the score mine or your CPU gets, at a much lower clock and much lower temps - 60°C on air.
Us mere mortals could rarely find a use for such a cpu. Just a few years ago the average person had 2 or 4 cores and everything worked fine lol 56 alderlake cores 💥🔥💯
Out of curiosity, was this being done at the Folsom (California) campus? Growing up in the 90s, I worked at a computer store near the Folsom campus and we'd get Intel engineers coming in with CPU production samples and motherboards for new CPUs (think P2 era)... it was a pretty cool experience.
The OC lab is in Portland, OR. I also had a similar experience to yours, as Intel's development campus is here in Hillsboro (also Oregon), so we had engineers come by, and they also donated hardware (we had a lab full of engineering-sample P3s) to my high school.
I look forward to seeing the more budget w3/w5 versions of these chips with retail board pricing. I really like the idea of having a cheapish system with 64 PCIe lanes across all SKUs, with massive RAM support as well. I just hope the total platform cost isn't mind-blowing even at the low end!
Wow - for the past 10 years I had been planning to build a gaming PC and always wanted to go HEDT (RAM on each side of the socket, X-series boards, etc.), mostly for the aesthetics. I finally gave in and went with a Z690 board with a 13700K. So does this mean we should expect to see these boards making a comeback, or are they going to be available but at crazy high prices like some of the 'Extreme' boards we see today?
I don't see consumer and hedt overlapping ever again. Threadripper already is a "non-retail" only part, and Intel's stuff is too. It's not like sandy bridge to haswell era where HEDT was the high end gaming platform. It's all locked up in the enterprise market.
I saw someone in a different comment speculate that the minimum entry cost would be around $2k for a motherboard and CPU combo, since the below-12-core CPUs are OEM-only and the previous HEDT-class Intel boards were extremely expensive in their own right. In theory that would actually still be a great deal for what you're getting compared to Threadripper, but it depends on whether that's far above your price point or not.
@@h.b.5577 Yeah makes sense! I mean, if they are already asking for $1000-$1400 for the high end Extreme boards then somewhere around $2k is probably what the price would be. I just don't think that the manufacturers would find it prudent enough to produce at that price point for retail but, who knows, I could be wrong.
@@itstheweirdguy Isn't that kind of why Intel introduced the i9 in 2017 - to have a "high-end desktop CPU" that worked on mainstream consumer boards?
Is this a Windows edition that can actually use those cores? My understanding is that standard/Pro Windows doesn't handle extreme core counts well and doesn't really understand how to access the PCIe lanes and RDIMM ECC speeds the way the Workstation edition can. Am I wrong? The video seemed to gloss over the massive number of PCIe lanes that can address the memory and NVMe drives directly and in parallel.
Is the G.Skill Zeta R5 kit actually ECC? None of the marketing from G.Skill mentions ECC even once, and the label on the DIMMs mentions 8x chips when I'd expect 10x for a DDR5 ECC DIMM. This would be hugely helpful to confirm.
When I started feeling the system in my own sort of imaginary gui sense to help diagnose it like a car mechanic does / I was shown things and anything is possible is useful and voted into needs
I am really interested in getting an Intel W-2400 series 12-Core or 16-Core HEDT processor, but I need the DDR5 memory manufacturers to get on the ball and release larger DIMMs since I need 2TB of memory with it. I need memory over cores. I do special work with specific 3D software and game engines and the larger memory is a necessity. The main 3D software that I use supports up to 4TB, with a massive maximum capability of 18 Exabytes. I currently have two Intel X99 Extreme systems and an R9-5950X 128GB system (plus others).
@Michaels Carport - The W-3400 series would be nice, along with 4TB of memory, but they will probably be out of my budget price range. I already priced out an AMD Threadripper Pro with 2TB of memory and it was $30,000 CAD. I will probably have to settle on a W-2465X with 512GB to 1TB for starters, especially since DDR5 128GB/256GB DIMMs are really rare. If I can afford the W-3400 series processor and get the full memory later, that would be an option, depending on the cost of the processor in Canada.
@Michaels Carport - I noticed the Epyc processors around for good prices. Even NewEgg Canada has decent prices on the Epyc processors and server motherboards. I can get an Epyc 16-Core plus motherboard for $2000 CAD which is decent. The DDR4 ECC Server memory is $4000 CAD for 512GB though at NewEgg. So for me to get 2TB of memory will be $16000 CAD. That is still less than the Threadripper Pro 2TB I priced at $30,000 CAD. But with Epyc and DDR4 I feel like I'm buying old tech that is now outdated and a dead end. NewEgg Canada has nothing for DDR5 RDIMMs either which is what the new Intel W-2400/3400 uses. So if I go with a W-3400 I will have to source the memory somewhere else like a memory company direct. I am just going to wait until summer and see what the prices and availability are like in Canada. Even if I can get a W-3435X 16-Core plus ASUS motherboard with 256GB memory to start, that will be more memory than any current system that I have.
I'm glad I kept my above ground swimming pool in the back yard. With the proper chilling I might be able to keep this monster cool, once I finish the nuclear reactor to power it.
The AMD Threadripper 5995WX got 49k+ on Geekbench 5 at 3.2 GHz, and it's not even a current-gen CPU. Seems like an extra 25% higher clock would get you quite a bit over this CPU's performance.
Yeah, I mean, come on - this isn't really that mind-blowingly impressive. Threadripper is still very close on an older Zen 3 node. It's nice to see some competition from the sleeping titan called Intel, but I still think once we see next-gen Zen 4 TR parts they're gonna run circles around Intel again. And even though it's very funny to see how far you can push a multicore monster like this by overclocking it, that's never how these boards and CPUs are gonna be used. They're going to run at stock clocks in servers or workstations. No pro user is going to risk any chance of instability over a few percent of overclock on a workstation, and Intel still has a big issue with power draw, which also plays into deciding which platform to use. Intel does have an edge for certain loads, like music-production workstations on 12th and 13th gen, because the monolithic die doesn't have the CPU-to-RAM latency of AMD's chiplet design. But again, is 20-30% more processing power worth it if it comes with a +75% power bill? (Just a number - I don't have the exact values.) Not to bash Intel - it's good they actually managed to take up the fight they were about to lose. Competition is better for the end users. I still think Sapphire Rapids is gonna have a rough fight when it eventually comes out.
*New 56-core CPU, OCed on all cores, beats a year-old Zen 3 64-core* - that makes it not impressive. Being able to keep up with a 64-core Zen 4 "Storm Peak" releasing in the same year while consuming similar power, that would be impressive, and maybe in 2 years they'll do it.
I always used to build HEDT rigs, but switched when gaming became so core-frequency dependent (to a 10900K and now a 13900K, OC'd on water cooling). Good to see HEDT making a comeback, but it's only suitable for productivity workloads given the low clock speeds and the poor multithreading of games. 10K is probably acceptable to companies needing this level of performance (plus 8K for 4 x 4090 🙂).
Any chance we will see a follow up with a retail board and CPU? Shocking how well it seems to scale but it is so expensive I am actually a little nervous to try to OC it. Going with the EK block and try to see how a single 360mm P360M can handle things w/D5 pump.
Dude, you know that you HAVE to do some individual per-core tweaking/tuning with clocks and voltage - that's just too insane to pass up. When you look at any CPU there are always better cores; how cool would it be to pick them out and give them the extra oomph they can handle.
It's weird how it's Intel this time around who brings back excitement to this segment... Anyways, get better soon Roman, and I'm eagerly looking forward to your in-depth video on this new platform!
@@elvewizzy Oh, sure. I don't plan on getting one, but this doesn't mean I shouldn't get excited :) Lately, I've partially lost interest in tech products because of all the greed of the companies - there's been no substantial improvement in perf/$ in GPUs and CPUs and this kind of invokes in me the feeling that there's stagnation, although technically the 4090 offers a huge jump in absolute perf for example. But this new platform kind of represents a return to a class of PCs which has been more or less dead for the last several years and may force AMD to respond at some point, bringing competition back to this segment, and I find it exciting - something I've missed for a while :/ I'd be much more excited if the mainstream platforms get let's say 8 extra PCIe lanes to the CPU which will cover the needs for many enthusiast/prosumers which are currently forced to go HEDT, but yeah...
@@Freestyle80 Absolutely! I don't use reddit or follow blindly any trends, I form my own opinions. I was rooting for AMD back in the K6-2 and Thunderbird days, but I saw the true nature of corporations when Athlon64 came out. I learnt this painful lesson long ago, still in my teens, something some people never do sadly. Since then, I'm not a fan of any company, I'm a fan of consumer-friendly practices (like long-lasting platforms) and of particular products and the experiences they enable. I'm a fan of true competition - something I feel disappeared a decade ago, I feel like we're victims of a cartel, the worst possible type of duopoly. And I'm kinda depressed because nothing seems to have the power to stop it :( It's truly sad that a multi-thousand dollar platform is the thing that's actually exciting, to me at least :/
Can you please test energy efficiency with undervolting/underclocking this CPU? What's the sweet spot, like you did for the 40xx NVIDIA cards? Energy consumption is very important for such servers. I suspect it will draw half the power at 3.7-3.8 GHz with a 10-20% performance decrease.
Seeing it rip through Cinebench R23 MT in just a few seconds is insane haha. Breaking 70k points at stock when we saw it was clearly possible to *double* the all-core frequency?! Can't wait to see some real testing of these!
HEDT is alive again .. really looking forward to get this and review it on my channel! so glad Intel is bringing it back to life after AMD murdered it !! @Der8auer man could you please follow up with more videos on it.. even stuff like how to mount the cooler, would be nice .. Danke!
56 P-core CPU + 4x RTX 4090 + lots of RAM and HDDs etc. = 30 solar panels in the tropical zone to compensate for the CO2 emissions of such a system running 24/7. As a portable solution you could have the first computer with a petrol tank and generator - put two wheels on it and you'll have a server-motorcycle...
@@ianmoone8244 Jokes aside, it's a 1400 mm^2 chip - 1000 W isn't that much. You've got 4090s pushing 500+ W and they're only around, what, 600-700 mm^2? The power density is lower on the Intel chip than on the NVIDIA GPU, so for a given amount of cooling capacity the Intel CPU with 1400 mm^2 of die area will be about similar temperature-wise - maybe a little higher, because you have solder and a heat spreader in the way, reducing heat conductivity relative to a GPU that is direct-die cooled.
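The power-density comparison above works out to roughly this (the die areas are the comment's own ballpark figures, not official specs):

```python
# Back-of-envelope areal power density in W/mm^2.
# Die areas are rough assumptions from the comment, not measured values:
# ~1400 mm^2 total for the 56-core Sapphire Rapids package,
# ~600 mm^2 for an RTX 4090's AD102 die at ~500 W.
xeon_density = 1000 / 1400   # ~0.71 W/mm^2 for the CPU at 1 kW
gpu_density = 500 / 600      # ~0.83 W/mm^2 for the GPU

print(round(xeon_density, 2), round(gpu_density, 2))
```

So even at 1 kW the CPU's heat per square millimeter is somewhat lower than the GPU's, which is the comment's point about coolability.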
At least it's much better than AMD's CPU "glue", which requires a massive L3 cache to compensate for the performance degradation caused by the distance between the chips. Also, if you want to buy a Ryzen CPU, please go for one with 3D cache - they just artificially limit the performance of the non-3D-cache CPUs.
It seems like there's no way you can push that many amps through the socket pins. I think they'll have to redesign how power is delivered to the CPU from the socket - it's just too many amps for the (many) tiny little pins.
13:30 Is the board defective, or is CPU-Z not working properly? It should report 8-channel memory, because you're running a Xeon 3400-series and not the 2400-series.
Technically this isn't an HEDT platform - it's more of a pro/workstation-oriented one, and a very expensive one at that. And 500+ watts at stock isn't that attractive in this day and age, especially when you can get similar performance at lower consumption from the AMD 5000 Pro CPUs. Looking forward to seeing what the W5 and W7 can do when overclocked; up to 28 cores won't be as hard to cool as the 56-core monster.
Yeah, if I was running a medium-to-large VFX studio, Threadripper Pro would still be attractive because of the efficiency. That said, smaller shops, highly-paid freelancers, and high-frequency-trading guys will probably love these new Intel chips.
So, the Geekbench score indicates Sapphire Rapids is about 30% faster, core for core, than Zen 3. Now, if they can do something about that power draw...
Finally a platform that will fill my CaseLabs M8, lol - over excited to see HEDT make a comeback to the space (even if I have to sell a kidney to get my hands on it)
Any recommendation for the best single-core performance? The platform needs to support >=256GB. For running Wax nodes, single-core performance is the most important thing. They mostly use Xeon Golds pushed above 5 GHz now, but it's such an expensive platform 😅 I was hoping the new W-series would be an option, but their IPC and max clock on just 1-2 cores would mostly determine that 😅
I'm interested in the w7-2495X SKU - 24 cores is enough. 📌 I'm interested in the architecture of the cores: are they more in line with 12th gen or 13th gen?
Well, 14:28 - 70,000 in CB23 (for $6,000), when all of us CAN buy a 13900K in a regular store, which gives us ~38,000-40,000 in CB23. Personally, I'm not impressed - rather the opposite (remember, $6,000 in the best case is ONLY for the CPU)... and the 13900K is in regular stores...
It's basically like 7 sets of the P-cores from a 12900K stuck together, not a 13900K. If they had built it from a 13900K's cores, they would probably be able to touch 5 GHz on all cores at the same or lower power. So that's something to maybe look forward to in a year or two: a 64-P-core 5 GHz CPU from Intel.
The total throughput of this chip at 4 GHz can reach up to 7 TFLOPS (FP64). An i9-13900K with its 24 cores at 4 GHz only scores around, what, 1 TFLOPS (FP64). That's a pretty sizable difference. Even at 5 GHz, you only get up to around 1.2 TFLOPS (FP64), which is still about six times slower in max theoretical throughput than the Xeon at a 20% lower clock speed. You also have only 2 versus 8 memory channels, which means 4-8 times lower memory bandwidth depending on whether you use DDR4 or DDR5 on the i9-13900K, and you have only 20 lanes (80 GB/sec) of PCIe 5.0 and 8 lanes of DMI 4.0 (16 GB/sec), which is 96 GB/sec of IO bandwidth. The Xeon W9 has 112 PCIe 5.0 lanes (448 GB/sec) and 8 lanes of DMI 4.0 (16 GB/sec), or 464 GB/sec of IO bandwidth. You get 4.5 times the IO bandwidth, over 6 times the floating-point and integer throughput, and 4-8 times the memory bandwidth depending on whether DDR4 or DDR5 is used as the baseline for the i9-13900K versus the Xeon W9. I'd say that justifies the 10X asking price over the i9-13900K!
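The ratios quoted above follow from the same peak-throughput formula; a quick sketch (P-cores only for the i9, E-cores ignored, and all per-cycle FLOPS figures are the comment's assumptions - note the i9 is generously credited 32 FLOPS/cycle even though client parts lack AVX-512):

```python
# Peak FP64 throughput and IO bandwidth ratio, Xeon W9 vs i9-13900K.
xeon_tflops = 56 * 4.0 * 32 / 1000   # ~7.2 TFLOPS at 4 GHz
i9_tflops = 8 * 5.0 * 32 / 1000      # ~1.28 TFLOPS at 5 GHz, 8 P-cores

io_xeon = 112 * 4 + 16               # 464 GB/s (PCIe 5.0 + DMI 4.0)
io_i9 = 20 * 4 + 16                  # 96 GB/s

print(xeon_tflops / i9_tflops)       # roughly 5.6x throughput
print(io_xeon / io_i9)               # roughly 4.8x IO bandwidth
```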
@@yungnachty4474 A year for WS/HEDT refresh, Intel hopes to launch Emerald Rapids server SKUs by the end of this year. They're not the same, but by comparing server CPUs and extrapolating the results we might see potential performance and power consumption of ER WS parts.
@@NUCLEARARMAMENT Who are you replying too? I don't think anyone in this comment thread was advocating for getting a 13900K over the 56 core W9 for the types of things you would use this processor for? I mentioned the 13900K but only in thathis W series Xeons are using Alderlake and not Raptor lake P cores, as using the P cores from Raptor lake would allow for the chip to run higher due to being way more efficient.
@@yungnachty4474 I am talking to the person who started the comment thread. I provided a comparison to the 13900K in exact detail, and I got my point across. The Golden Cove cores used are about 10% slower than the Raptor Cove ones, but you get 4-8 times the memory bandwidth, and 2-4 times the floating-point and integer throughput per core on the Xeon when you compile your applications to take advantage of it or use ones that already are. The 10% IPC improvement from the Golden Cove core to the Raptor Cove core doesn't mean much outside of single-thread workloads, you also have no ECC support unless you get a W680 motherboard.
@@der8auer-en Noticed any improvements on AIDA's cache and memory latency? E3 stepping has a 40ns L3 and 100ns memory latency, which we assumed are the main causes of the underperformance in many real world tests.
You'd be tapped out around 1800 watts on a 15 A breaker, but could go 2400 on a 20 A. Maybe we'll see the rise of 30 A single-phase 120 V circuits to support our ever-more-powerful CPUs at 3600 watts! I see they have 30 A single-pole breakers at Home Depot for $6.98, but I'm wondering if you'd need larger-gauge Romex from the box to your server outlet, and what kind of outlet you'd have to use - guessing a 30 A twist-lock.
@@JimBronson Most house wiring is going to be around 12 AWG, good for 25 A max, though regulations limit continuous loads to 20 A. Quite a few circuits are only 14 AWG, good for 20 A max, with regulations limiting continuous loads to 16 A. If you want a legal and safe 30 A circuit you'd want to run at least 10 AWG. Alternatively, you could get a 240 V circuit and use slightly less than half the amps for a given load (PSUs are more efficient at 240 V than at 120 V).
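The circuit limits being discussed are just amps × volts, with the common 80% derating for continuous loads (the US NEC convention) applied to sustained draws - a minimal sketch:

```python
def circuit_watts(breaker_amps: float, volts: float = 120.0,
                  continuous: bool = False) -> float:
    """Max wattage on a branch circuit; continuous loads derated to 80%."""
    derate = 0.8 if continuous else 1.0
    return breaker_amps * volts * derate

print(circuit_watts(15))                    # 1800 W peak on 15 A / 120 V
print(circuit_watts(20))                    # 2400 W
print(circuit_watts(30))                    # 3600 W
print(circuit_watts(30, continuous=True))   # 2880 W for a sustained load
print(circuit_watts(30, volts=240.0))       # 7200 W on a 240 V circuit
```

So a 1 kW CPU plus GPUs fits on a 20 A circuit at peak, but a sustained full-load bench rig starts to justify the 240 V suggestion.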
Very exciting to see 64/112 PCIe lanes on Intel HEDT too! The lowest-end $359 6-core SPR CPU is attractive for building a cheap quad-GPU workstation for CFD; only the GPUs and the 64 PCIe lanes matter here. If only the mainboards and DDR5 memory weren't so expensive.
That 6 core won't be normally available outside OEMs. Same seems to be true with any non-X chip.
@@davidbuddy Pity, would definitely consider one of these for an all in one home lab server (NAS/file server, VMs, self-hosted services).
@@XenonG it's really a shame that Intel really wants enthusiasts to pay at least USD$1K for a CPU on this platform
@@davidbuddy Yeah, I would love to purchase a 6-core processor for a reasonable amount of money, alongside a $400-ish motherboard, and later upgrade to a 24-core or 3400-series CPU, upgrade the memory, and truly get the most out of the platform. But when the minimum price for just the motherboard and CPU for an HEDT system is $1,500, that really makes me not want to buy into this amazing platform, especially since $1,000 for a 16-core CPU is already not a great price.
DDR5 is getting cheaper by the second
Thanks Intel for bringing back HEDT monster overclocking!!! Great job der8auer... can't wait to see what this platform can really do when you have the proper water cooling hardware to push the limits of these chips.
In LINPACK this Xeon W9-3495WX CPU pushes 5.376 TFLOPS around 4 GHz (56 cores * 4 GHz * 32 FP64 FLOPS/cycle = 7,168 GFLOPS FP64 max theoretical throughput), assuming 75% efficiency in scaling. You also get 112 PCIe 5.0 lanes with 448 GB/sec of bi-directional bandwidth off the CPU; DMI 4.0 (8 PCIe 4.0 lanes) gives an extra 16 GB/sec of bandwidth to the W790 chipset, which has Wi-Fi 6 integrated, for a grand total of 464 GB/sec of IO bandwidth. And there are 8 DDR5-4800 memory channels, which at 40 GB/sec per channel translates into 320+ GB/sec of memory bandwidth. That's close to 800 GB/sec of total bandwidth for this chip.
A 5995WX at 4 GHz would score around 3 TFLOPS FP64, assuming 75% scaling (64 cores * 4 GHz * 16 FP64 FLOPS/cycle = 4,096 GFLOPS FP64 max theoretical throughput). You get 128 PCIe 4.0 lanes, or 256 GB/sec of IO bandwidth off the CPU directly, and that includes the WRX80 chipset lanes; and 8 DDR4-3200 memory channels, which with tuned memory timings will get you to about 25 GB/sec of memory bandwidth per channel, or 200 GB/sec total. That's about 456 GB/sec of bandwidth for the 5995WX.
Yeah, it turns out the Xeon W9-3495WX at $6,000 has significantly greater potential throughput and memory/IO bandwidth, plus AVX-512 and AMX instructions and Optane support, whereas the 5995WX costs $6,200, has inferior max theoretical integer and floating-point throughput, and lacks those extra instructions. Efficiency-wise (GFLOPS/W), the W9 wouldn't be any worse, because you only need 1.0 V to hold 4.1-4.2 GHz in an all-AVX-512/AMX workload saturating the dual 512-bit SIMD units of each core on the W9, versus the dual 256-bit ones on the 5995WX. They both average close to 700 W under full load around the 4 GHz/1.0 V mark.
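If anyone wants to sanity-check the napkin math above, here's a quick Python sketch; every input is the comment's stated assumption (clocks, per-lane and per-channel bandwidth, 75% scaling), not a measurement:

```python
# Back-of-the-envelope FP64 and bandwidth estimates from the comment above.
# All inputs are assumptions from the comment, not measured values.

def fp64_tflops(cores, ghz, flops_per_cycle, efficiency=0.75):
    """Peak FP64 = cores * clock * FLOPS/cycle, derated by an assumed scaling efficiency."""
    return cores * ghz * flops_per_cycle * efficiency / 1000  # GFLOPS -> TFLOPS

# Xeon W9-3495WX: 56 cores, dual 512-bit FMA units -> 32 FP64 FLOPS/cycle
w9 = fp64_tflops(56, 4.0, 32)      # ~5.38 TFLOPS at 75% scaling
# Threadripper 5995WX: 64 cores, dual 256-bit FMA units -> 16 FP64 FLOPS/cycle
tr = fp64_tflops(64, 4.0, 16)      # ~3.07 TFLOPS at 75% scaling

# Aggregate bandwidth in GB/s (PCIe 5.0 ~4 GB/s/lane, PCIe 4.0 ~2 GB/s/lane)
w9_io  = 112 * 4 + 8 * 2           # 112x PCIe 5.0 + DMI 4.0 (8x PCIe 4.0) = 464
w9_mem = 8 * 40                    # 8 channels of DDR5-4800 at ~40 GB/s   = 320
tr_io  = 128 * 2                   # 128x PCIe 4.0 lanes                   = 256
tr_mem = 8 * 25                    # 8 channels of tuned DDR4-3200         = 200

print(f"W9-3495WX: {w9:.2f} TFLOPS, {w9_io + w9_mem} GB/s aggregate")
print(f"5995WX:    {tr:.2f} TFLOPS, {tr_io + tr_mem} GB/s aggregate")
```

Under these assumptions that lands at roughly 784 GB/s aggregate for the Xeon versus 456 GB/s for the 5995WX.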
I just rebuilt my desktop (13700K) and my needs would line up with the lower end of the W-2400 line, but the thought of 64 PCIe lanes is seriously tempting. I know Intel is going to be harping on the higher-end SKUs, but I can't wait for the lower-end parts to get into the hands of reviewers so I can decide if I want to save my pennies for a while and build one of these. Haven't been this excited since Threadripper was announced. Feel better!
I'm interested in the comparison between these chips and threadripper
I've been really looking forward to the "OOPS ALL P-CORES" chip cycle for a while. People really don't realize how powerful 12th and 13th gen cores are in comparison to 10th and 11th gen.
2024 and onward will be an "OOPS ALL E-CORES" chip cycle for the server market
@@utubekullanicisi As long as they're not based on the Atom cores, that's fine with me.
@@utubekullanicisi should be interesting in terms of density considering how many E cores you can fit in the footprint of a P core
These new pro Intel chips make getting a 13900K for creative workloads senseless. All P-cores will definitely perform better than the P+E combo.
I'm interested in the results from various tests to know which platform I'm subscribing to
Powerful enough to still lose almost across the board to the ancient, years-old Zen 3-based Threadripper Pro 5000! 🤣 (See Puget's testing.)
We desperately need direct-die water cooling, especially as we go down the 3D-stacked-chips road, since stacking heat sources on top of heat sources severely limits clock frequencies.
But even with 2D chips like this, it's very clear that the limiting factor is heat density. I know that TSMC has been testing direct-die water cooling, where on-die microchannels are etched directly into the silicon layer on top of the CPU for a coolant to flow between layers. In their tests, channels were etched into a silicon layer with a silicon-oxide thermal interface material between the microfluidic system and the actual silicon of the TTV (thermal test vehicle). In a third option, the silicon-oxide TIM was replaced with a liquid-metal TIM. The results were really impressive and easily enabled cooling chips that used more than 2,000 W.
Copper conducts heat about twice as well as silicon, and aluminum is also a better heat conductor than silicon.
So silicon-layer cooling is already off to a bad start.
Then there is the problem of cheaply making a super-fat silicon chip with room for the fluid channels.
And then there's connecting up chiplets, like on an Epyc or Sapphire Rapids, with fluid channels between the individually built and spaced-out chips, sometimes up to 13 individual chips.
What happens if the fluid channels are no longer clean after a couple of years' use?
Is the density even needed? We already have 96-core Epycs that are air-coolable and dual-socket capable, mountable in a rack with 10 servers for a total of 1,920 CPU cores working at at least 2.4 GHz.
In Geekbench this 56-core Sapphire Rapids at a 1,000 W overclock is not able to beat a stock AMD Epyc 9374F at a 400 W TDP, so it is just a question of not pushing the overclock well past good efficiency and using the right chip for the job.
1000 W on a chip with 1400 mm^2 of die area isn't that hot. There's a lot of surface area and it's easily cooled with a radiator or two in a water cooling loop.
The really fine waterways would get blocked so easily.
mf has been watching ians channel.
Water introduces so many problems and makes maintenance a pain. Those micro-channels will clog, and then for troubleshooting you have to drain the system. No thank you.
Exciting to see competition in the HEDT market again! Nice video, can't wait for some proper OC on this baby, maybe some chilled water and even LN2?
The chip is one thing but I'm blown away by that VRM. How on earth is that thing shifting 1kA with that tiny heatsink? Gotta be some fancy new bare-die SiC MOSFETs or something, with ludicrously high transconductance and ultra-low Rds(on). I don't see output caps, either - just a big array of pads, or maybe filled/plugged vias? Did they switch to a massive bank of MLCCs on the back or something? Maybe even embedded passives? Pretty wild.
Edit: given that production boards for SR seem to be omitting polar caps on the low side too, I guess we just exceeded the ripple current limit for alupoly and solidpoly? I wonder how they're handling the inrush issue from low MLCC impedance at low load though... maybe that's part of why the idle consumption is so high. Can't wait to see some teardowns and analysis of these new designs.
Exciting news! I hope you will manage to get hold of a 12 or 16 core model and a lower-end motherboard and show us the performance.
I have that exact dewar at 1:28 on the right side of the screen. Love the low pressure valve for refilling the vacuum tight flasks for pouring LN2 into the pots
That UEFI menu is such a nice thing to see; I wish desktop UEFIs would give up on looking fancy and instead be useful.
Wow first sapphire rapids hands on (I’ve seen) I’m excited!
This is exciting! Thank you for sharing your time at Intel, and thank you Intel for sharing this news! G.Skill, wowowowow! I can't wait to see more of your HEDT overclocking.
6:42 - I have a question: why does the core overview put the favored cores in weird places in the grid chart instead of at the top? My 7950X is the same way in Task Manager... I can definitely spot which cores are favored even though they aren't marked by stars or anything. There seems to be no symmetry or rhyme or reason in Task Manager compared to where they might actually be located on the chip/die.
I would love one of those, if not for the fear of that electricity usage.
It would be interesting to see some 'IPC comparison' testing for the lower core-count models against similar core-count parts (HEDT and consumer, Intel and AMD). That Cinebench R23 score at only 2.9 GHz is like 2.7x a Ryzen 9 5950X with PBO 😲
Very steep platform entry cost, but can pay for itself quickly in certain use cases.
Now test speeds with 4x PCIe 5.0 NVMe in RAID 0. You could load a game in a tenth of a second. This really looks like a great option for streamer-gamers.
Thanks for these videos
Wooo, been waiting for this coverage. Good to see a teaser
Wow, I can't wait to see you pushing this thing to the limit, it's going to be very interesting no doubt.
Cool! Really glad to see that CPU development continues!
Can someone explain how you would ever hope to cool this thing when it's running ~1000 W loads? Of course this kind of setup would have crazy custom cooling, but still, 1000 W?
I would love to see this beast run RPCS3; it should absolutely rip in every title (GoW: Ascension and GoW3 come to mind, The Last of Us as well).
sounds interesting :D
Nvidia was ashamed when seeing the VRM size and how much it can handle
To be fair, a server chassis has insane airflow, so it doesn't really need an overkill VRM.
Handling that power for 10 seconds is not the same as handling it continuously.
Wait why? Current gen Nvidia FE cards have insanely overkill VRMs.
@@OTechnology Not really; if they could get away with less, they would. It's a fine balance of longevity and price for Nvidia.
The fact that the VRM didn't overheat in Geekbench 5 doesn't mean anything, because it's a very short peak load. Also, these CPUs use a FIVR, so the current going through the motherboard's VRM isn't as high.
Speaking of DDR5 ECC with XMP: I stumbled across some DDR4 ECC a few years ago that had XMP, and it seemed to work with everything. Sure, it was only 3200, but for a 128 GB kit that's actually a pretty good speed for ECC at the time, and I think even now, because most ECC kits are still 2400/2666.
With DDR5, there are already DDR5-6000 ECC kits; that's also pretty good.
@@Sineseol True, but you're not getting 4 sticks of 64GB ECC 6000 for several years, at least not running at 6000, let alone for ~$400 total
IIRC it was $196 for a two-stick 64 GB kit of DDR4 UDIMM ECC-3200, and less than $300 for 128 GB.
I'd love to see overclocking results for the 2465X; 16 cores is plenty for me, and even the 12-core X would be fine. I mostly need the PCIe lanes with excellent single-threaded performance, though some apps I use need lots of cores, so the 6-core wouldn't suffice. How far past 4.7 GHz can it be pushed...
Can't wait to see a later BIOS and your newer tests.
Thank you teams worked out splendidly
my 7950x scores 40k on Cinebench r23. I hope these intel CPUs can be overclocked enough to be impressed by :)
Yeah, and your platform (weighed against the 16-core SPR) was at most 60% of the price. I'm shocked to see people so excited by Intel's upcoming offering. Don't get me wrong, it will perform really well, but the price-performance ratio is objectively poor.
@@comrade171 imagine comparing consumer platforms to the professional variants and complaining about the professional variants being more expensive. There is a reason the 13900K forced the 7950X to have its price cut by over $100.
@@dex6316 imagine a clear argument, then imagine writing it down here and then consider actually doing that
@@comrade171 This is not a platform for gamers or consumers. The people that buy these types of systems (like me) don't rate them in price to performance, but rather price to time. If buying this CPU + board can save me 2 hours a day and only cost 10k, then it will pay for itself in under 2 weeks; after that it is increasing how much money I make every week.
My 13900K is already getting 42k in R23... who cares. The CPU in the video gets 70k... literally almost 2x the score mine or yours gets, at a much lower clock and much lower temps: 60°C on air.
did you ever do a follow up on this?
Man, I always want to build a system like these but I don't need so many features 😅
We mere mortals could rarely find a use for such a CPU. Just a few years ago the average person had 2 or 4 cores and everything worked fine, lol. 56 Alder Lake cores 💥🔥💯
@@christophermullins7163 funny how Threadripper was at some point extremely approachable to consumers by comparison lol.
Do you have the money to pay for one? :)
Anything with cpu or power compute control near hot merc or coil or caps noise and delay checking 3 forms of delay for classic power supply
Everyone looks so happy in the video; I guess everybody is glad that after many years we finally have a real HEDT platform again.
Out of curiosity, was this being done at the Folsom (California) campus? Growing up in the 90s, I worked at a computer store near the Folsom campus and we'd get Intel engineers coming in with CPU production samples and motherboards for new CPUs (think P2 era)... it was a pretty cool experience.
The OC lab is in Portland, OR. I also had similar experience to yours as Intel's development campus is here in Hillsboro (also Oregon) so we had engineers come by and they also donated hardware (we had a lab full of engineering sample P3s) to my high school.
Hot on the newest kit as always!
I look forward to seeing the more budget W3/W5 versions of these chips with retail board pricing. I really like the idea of a cheapish system with 64 PCIe lanes across all SKUs, with massive RAM support as well. I just hope the total platform cost isn't mind-blowing even at the low end!
All that heat in the palm of my hand!!
That is called a burn.
60°C isn't hot.
Wow, so for the past 10 years I had been planning to build a gaming PC and always wanted to go with the RAM-on-each-side-of-the-motherboard HEDT/X-series boards, mostly for the aesthetics.
I finally gave in and went with a Z690 board with a 13700K. So does this mean we should expect to see these boards making a comeback, or are they going to be available but at crazy-high prices like some of the 'Extreme' boards we see today?
I don't see consumer and hedt overlapping ever again. Threadripper already is a "non-retail" only part, and Intel's stuff is too. It's not like sandy bridge to haswell era where HEDT was the high end gaming platform. It's all locked up in the enterprise market.
I saw someone in a different comment speculate that the minimum entry cost would be around $2K for a motherboard and CPU combo, since the below-12-core CPUs are OEM-only and the previous HEDT-class Intel boards were extremely expensive in their own right. In theory that would still be a great deal for what you're getting compared to Threadripper, but it depends on whether that's far above your price point or not.
@@h.b.5577 Yeah, makes sense! I mean, if they're already asking $1,000-$1,400 for the high-end Extreme boards, then somewhere around $2K is probably what the price would be. I just don't think the manufacturers would find it prudent to produce at that price point for retail, but who knows, I could be wrong.
@@itstheweirdguy Isn't that kind of why intel introduced the i9 in 2017 to kind of have a "high end desktop CPU" that worked on mainstream consumer boards?
Dual PSU means that is an actual board for serious users, nice to have other options than e.g. supermicro for that
Is this a Windows workstation that can use those cores? My understanding is that standard/Pro Windows doesn't run extreme core counts well and doesn't really understand how to access the PCIe lanes and RDIMM ECC speeds the way the Workstation version can. Am I wrong? The video seemed to gloss over the massive PCIe lane count that can address the memory and NVMe drives directly and in parallel.
You're right. Jumping from 8 cores to 16 on Windows is not the big improvement you'd get on Linux, and above 32 cores it's nonsense on anything but Linux.
She has an amazing voice for ad reads maybe she should do it more often!
Is the G.Skill Zeta R5 kit actually ECC? None of the marketing from G.Skill mentions ECC even once, and the label on the DIMMs mentions 8x chips when I'd expect 10x for a DDR5 ECC DIMM. This would be hugely helpful to confirm.
When I started feeling the system in my own sort of imaginary gui sense to help diagnose it like a car mechanic does / I was shown things and anything is possible is useful and voted into needs
I still run old LGA 2011-v3 Xeon because they offer 40 PCIe lanes and are dirt cheap.
I'm so jealous... there is so much cool, amazing stuff in that building...
All I want is cheap old stuff.. not expensive new stuff
I am really interested in getting an Intel W-2400 series 12-Core or 16-Core HEDT processor, but I need the DDR5 memory manufacturers to get on the ball and release larger DIMMs since I need 2TB of memory with it. I need memory over cores. I do special work with specific 3D software and game engines and the larger memory is a necessity. The main 3D software that I use supports up to 4TB, with a massive maximum capability of 18 Exabytes. I currently have two Intel X99 Extreme systems and an R9-5950X 128GB system (plus others).
@Michaels Carport - The W-3400 series would be nice, along with 4TB of memory, but they will probably be out of my budget price range.
I already priced out an AMD Threadripper Pro with 2TB of memory and it was $30,000 CAD.
I will probably have to settle on a W-2465X with 512GB to 1TB for starters, especially since DDR5 128GB/256GB DIMMs are really rare. If I can afford the W-3400 series processor and get the full memory later, that would be an option, depending on the cost of the processor in Canada.
@Michaels Carport - I noticed the Epyc processors around for good prices. Even NewEgg Canada has decent prices on the Epyc processors and server motherboards. I can get an Epyc 16-Core plus motherboard for $2000 CAD which is decent.
The DDR4 ECC Server memory is $4000 CAD for 512GB though at NewEgg. So for me to get 2TB of memory will be $16000 CAD. That is still less than the Threadripper Pro 2TB I priced at $30,000 CAD.
But with Epyc and DDR4 I feel like I'm buying old tech that is now outdated and a dead end.
NewEgg Canada has nothing for DDR5 RDIMMs either which is what the new Intel W-2400/3400 uses. So if I go with a W-3400 I will have to source the memory somewhere else like a memory company direct.
I am just going to wait until summer and see what the prices and availability are like in Canada. Even if I can get a W-3435X 16-Core plus ASUS motherboard with 256GB memory to start, that will be more memory than any current system that I have.
I'm glad I kept my above ground swimming pool in the back yard. With the proper chilling I might be able to keep this monster cool, once I finish the nuclear reactor to power it.
I know it'll never happen, but I'd love to see der8auer let loose on something like the Cerebras WSE-2.
The AMD Threadripper 5995WX got 49k+ in Geekbench 5 at 3.2 GHz, and it's not even a current-gen CPU. It seems like an extra 25% higher clock would get you quite a bit over this CPU's performance.
Yeah, I mean, come on, this isn't really that mind-blowingly impressive. Threadripper is still very close on a Zen 3 node. It's nice to see some competition from the sleeping titan called Intel, but I still think once we see next-gen Zen 4 TR parts, they're gonna run circles around Intel again. And even if it's very funny to see how much you can push a multicore monster like this by overclocking it, that's never gonna be how these boards and CPUs are used. They are going to run at stock clocks in servers or workstations. No pro user is going to risk any instability over a few percent of overclock on a workstation, and Intel still has a big issue with power draw, which also plays into deciding which platform to use. Intel does have an edge for certain loads, like workstations for music production with 12th and 13th gen, because the monolithic die doesn't have the CPU and RAM latency of AMD's chiplet design. But again, is it worth 20-30% more processing power if it comes at a +75% power bill? (Just a number; I don't have the exact values.) Not to bash Intel, it's good they actually managed to take up the fight they were about to lose. Competition is better for the end users. I still think Sapphire Rapids is gonna have a rough fight when it eventually comes out.
It is impressive that this new 56-core cpu OCed on all core at 4.2 GHz is able to beat a year old Zen 3 64-core...
*new 56-core cpu OCed on all core.....beat a year old Zen 3 64-core* makes it not impressive.
Being able to keep up with a 64core Zen4 "storm peak" releasing in the same year while consuming similar power, that would be impressive and maybe in 2 years they'll do it.
@@tomstech4390 I was sarcastic in my comment, as I was not impressed at all.
Well said. :)
Yeah, with twice the power consumption.
Been trying but you understand it’s different to get the honor of a life time
Awesome. Pair these 4th-gen Intel Xeon CPUs with the ASUS W790-Sage and W790-Ace.
Get well soon. ;-)
I always used to build HEDT rigs, but switched when gaming became so core-frequency dependent (to a 10900K and now a 13900K, OC'd on water cooling). Good to see HEDT making a comeback, but it's only suitable for productivity workloads given the low clock speeds and the poor multithreading capability of games. 10K is probably acceptable to companies needing this level of performance (plus 8K for 4x 4090 🙂).
Any chance we will see a follow up with a retail board and CPU? Shocking how well it seems to scale but it is so expensive I am actually a little nervous to try to OC it. Going with the EK block and try to see how a single 360mm P360M can handle things w/D5 pump.
Feel better soon 😁
Can you also please make a video about the C741 chipset, as it should be much cheaper? All I'm looking for is a workstation.
I am very excited to see these in action. The performance should be great, and also the memory and PCIe bandwidths are nuts.
Dude, you know that you HAVE to do some individual core tweaking/tuning with clocks and voltage; that's just too insane to pass up. Every CPU always has some better cores, so how cool would it be to pick them out and give them the extra oomph they can use?
It's weird how it's Intel this time around who brings back excitement to this segment...
Anyways, get better soon Roman, and I'm eagerly looking forward to your in-depth video on this new platform!
@@elvewizzy Oh, sure. I don't plan on getting one, but this doesn't mean I shouldn't get excited :)
Lately, I've partially lost interest in tech products because of all the greed of the companies - there's been no substantial improvement in perf/$ in GPUs and CPUs and this kind of invokes in me the feeling that there's stagnation, although technically the 4090 offers a huge jump in absolute perf for example.
But this new platform kind of represents a return to a class of PCs which has been more or less dead for the last several years and may force AMD to respond at some point, bringing competition back to this segment, and I find it exciting - something I've missed for a while :/
I'd be much more excited if the mainstream platforms get let's say 8 extra PCIe lanes to the CPU which will cover the needs for many enthusiast/prosumers which are currently forced to go HEDT, but yeah...
Because AMD, Intel, Nvidia, they all want your money. AMD isn't special, nor are they your friend like Reddit makes you think.
@@Freestyle80 Absolutely! I don't use reddit or follow blindly any trends, I form my own opinions. I was rooting for AMD back in the K6-2 and Thunderbird days, but I saw the true nature of corporations when Athlon64 came out. I learnt this painful lesson long ago, still in my teens, something some people never do sadly.
Since then, I'm not a fan of any company, I'm a fan of consumer-friendly practices (like long-lasting platforms) and of particular products and the experiences they enable. I'm a fan of true competition - something I feel disappeared a decade ago, I feel like we're victims of a cartel, the worst possible type of duopoly. And I'm kinda depressed because nothing seems to have the power to stop it :(
It's truly sad that a multi-thousand dollar platform is the thing that's actually exciting, to me at least :/
It's strange to me that HEDT gets 8-channel memory, but even high-end consumer platforms are still stuck with 2-channel memory.
Because really nothing a consumer is going to do with a PC makes use of memory to the point where it makes a difference.
Wonder how latency on the mesh interface compares on the 2400 vs 3400 since that was always the shortfall of the x299 and 3175x.
Can you please test energy efficiency with undervolting/underclocking this CPU? What is the sweet spot, like you did for the 40xx Nvidia cards? Energy consumption is very important for such servers. I suspect it will draw half the energy at 3.7-3.8 GHz with a 10-20% performance decrease.
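For a rough feel of why such a sweet spot exists, dynamic power scales roughly as P ∝ f·V². Here's a toy Python estimate; the voltage/frequency points are illustrative guesses, not measured values for this chip:

```python
# Toy dynamic-power scaling model: P ~ C * f * V^2 (capacitance C cancels in the ratio).
# The V/f operating points below are illustrative guesses, not measurements.

def rel_power(f_new, v_new, f_ref, v_ref):
    """Power at the new operating point relative to the reference point."""
    return (f_new / f_ref) * (v_new / v_ref) ** 2

# e.g. dropping from 4.2 GHz at 1.05 V to 3.7 GHz at 0.85 V:
p = rel_power(3.7, 0.85, 4.2, 1.05)
print(f"~{p:.0%} of the power for ~{3.7 / 4.2:.0%} of the clock")
```

Under those made-up operating points you land near 58% of the power at 88% of the clock, which is in the ballpark of the "half the energy for 10-20% less performance" guess.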
Seeing it rip through Cinebench R23 MT in just a few seconds is insane haha. Breaking 70k points at stock when we saw it was clearly possible to *double* the all-core frequency?! Can't wait to see some real testing of these!
It's about time Intel got back into the HEDT market. Long live the compute wars!
HEDT is alive again... really looking forward to getting this and reviewing it on my channel! So glad Intel is bringing it back to life after AMD murdered it!! @Der8auer man, could you please follow up with more videos on it? Even stuff like how to mount the cooler would be nice... Danke!
56 P-core CPU + 4x RTX 4090 + lots of RAM and HDDs etc. = 30 solar panels in the tropical zone to compensate for the CO2 emissions of such a system running 24/7.
In a portable solution you could have the first computer with a petrol tank and generator; put two wheels on it and you'll have a server-motorcycle...
Probably a beast for emulation
Hey, he made a mobo bracket to help Intel. We did the same with Xbox 360s and DVD drives.
So nice for intel to also embrace cpu glue
But that glue will melt with 1KW+ heating the world!!!
@@ianmoone8244 Jokes aside, it's a 1400 mm^2 chip; 1000 W isn't that much. You've got 4090s pushing 500+ W and they are only around, what, 600-700 mm^2? The power density is lower on the Intel chip than the NVIDIA GPU, so for a given amount of cooling capacity the Intel CPU with 1400 mm^2 of die area will be about similar temperature-wise, and yes, maybe a little higher, because you have solder and a heatspreader in the way reducing heat conductivity relative to a GPU that is direct-die cooled.
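A quick check of that power-density argument; the 1400 mm^2 and the wattages are from the comments, and the ~609 mm^2 AD102 (RTX 4090) die size is my assumption:

```python
# Heat density in W/mm^2; die sizes and wattages as stated in the lead-in above.
spr_density = 1000 / 1400   # overclocked 56-core Sapphire Rapids at 1000 W
gpu_density = 500 / 609     # RTX 4090 (AD102, ~609 mm^2 assumed) at 500 W
print(f"Xeon: {spr_density:.2f} W/mm^2, 4090: {gpu_density:.2f} W/mm^2")
```

So even at 1000 W the CPU's heat density comes out a bit lower than the GPU's under these numbers.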
At least it's much better than AMD's CPU glue, which requires a massive L3 cache to compensate for the performance degradation due to the distance between the chips. Also, if you want to buy a Ryzen CPU, please go for one with 3D cache; they just artificially limit the performance of the non-3D-cache CPUs.
It seems like there's no way you can push that many amps through the socket pins.
I think they will have to redesign how power is delivered to the CPU from the socket. It is just too many amps for all those tiny pins.
13:30 Is the board defective, or is CPU-Z not working properly? It should state that you have 8-channel memory, because you are running a Xeon 3400-series and not the 2400-series.
Most likely CPU-Z since it's so new.
You should try overclocking the CPU using the Ice Giant cooler, with its patented thermosiphon technology.
I don't think that would go too well...
Excited to see what you can do with this chip. It would be cool to see a really high single-core clock and run some games on it.
Technically this isn't an HEDT platform; it's more of a pro/workstation-oriented one, and a very expensive one at that.
And 500+ watts at stock isn't that attractive in this day and age, especially when you can get similar performance for less consumption from the AMD 5000 Pro CPUs.
Looking forward to seeing what the W5 and W7 can do when overclocked; up to 28 cores won't be as hard to cool as the 56-core monster.
Yeah, if I was running a medium-to-large VFX studio, Threadripper Pro would still be attractive because of the efficiency. That said, smaller shops, highly-paid freelancers, and high-frequency-trading guys will probably love these new Intel chips.
Can't wait to grab one in a few years when they're only like $100 used.
Same!
Amazing performance results. But that's a lot of current: 1 kW power spikes. Getting well into fire-safety-concern territory...
Now imagine SPR dual socket with EVGA SR board.
Even a single socket new gen SR board would be awesome, I have an SR-3, which for me was the best looking motherboard ever.
Oh man... that's a super fun chip to do overclocking with and also a bonkers $$$.
INSANE POWER CPU :] Get well soon so we can see this beast singing!
1:37... is the motherboard big, or did you become small?
Good luck finding a case for this board.
That's what super towers are for.
Wow, electrolytics on a computer board, smells like early 2000's :)
So an external radiator is needed for wintertime.
Video encoding taken to another level... makes me think of when we first tried quad cores vs. one. Fast.
this.......probably awfully fast
So, the Geekbench score indicates Sapphire Rapids is about 30% faster, core for core, than Zen 3. Now, if they can do something about that power draw...
They could try reducing the clocks 30%, but then it wouldn't be faster core for core.
Exactly my thoughts.. and that's overclocked. Under normal operation, the % is way lower.
@@ryzenforce @Toms Tech Um, no, the Ryzen 5995WX was overclocked to 4350 MHz. I think the base clock is around 3.2 GHz.
These are certainly very expensive, but it's still cheaper than current Threadripper! Should offer *significantly* better value overall!
Finally a platform that will fill my CaseLabs M8, lol - over excited to see HEDT make a comeback to the space (even if I have to sell a kidney to get my hands on it)
Any recommendation for the best single-core performance? The platform needs to support >=256GB. For running WAX nodes, single-core performance is the most important. They mostly use Xeon Golds pushed above 5 GHz now, but it's such an expensive platform 😅 I was hoping the new W-series would be an option; their IPC and max clock on just 1-2 cores would mostly determine that 😅
Nice review. Does it have a competitor in the market, and if it does - how does it compare with performance and price?
I'd like to learn more about reference boards for CPU chipsets
I'm interested in the w7-2495X SKU; 24 cores is enough.
I'm interested in the architecture of the cores: are they more in line with 12th gen or 13th gen?
Golden Cove, 12th gen.
Well, 14:28: 70,000 in CB23 (for $6,000), when all of us CAN buy a 13900K in a regular store, which gives us ~38,000-40,000 in CB23.
Personally, I'm not impressed, rather the opposite (remember, $6,000 in the best case is for the CPU ONLY)... and the 13900K is in every regular store...
Whoa this could be a Crysis CPU render monster, please do a CPU-rendered Crysis test just for shits n giggles
We need to know if it can run Crysis!
On one hand, cool. In 3-5 years I might be able to get a used one.
On the other hand, is it really 6x the CPU of a 13900K?
It's basically like 7 sets of the P-cores from a 12900K stuck together, not a 13900K. If they had built it from a 13900K's cores, they would probably be able to touch 5 GHz on all cores at lower or the same power. So that is something to maybe look forward to in a year or two: a 64 P-core 5 GHz CPU from Intel.
The total throughput of this chip at 4 GHz can score up to 7 TFLOPS (FP64). An i9-13900K with its 24 cores at 4 GHz only scores around, what, 1 TFLOPS (FP64)? That's a pretty sizable difference. Even at 5 GHz you only get up to around 1.2 TFLOPS (FP64), which is still six times slower in max theoretical throughput than the Xeon at a 20% lower clock speed.
You also only have 2 versus 8 memory channels, which means 4-8 times lower memory bandwidth depending on whether you use DDR4 or DDR5 on the i9-13900K, and you only have 20 lanes (80 GB/sec) of PCIe 5.0 and 8 lanes of DMI 4.0 (16 GB/sec), which is 96 GB/sec of IO bandwidth.
The Xeon W9 has 112 PCIe 5.0 lanes (448 GB/sec) and 8 lanes of DMI 4.0 (16 GB/sec), or 464 GB/sec of IO bandwidth. You get 4.5 times the IO bandwidth, over 6 times the floating-point and integer throughput, and 4-8 times the memory bandwidth depending on DDR4 or DDR5 used as a baseline for the i9-13900K versus the Xeon W9.
I'd say that justifies the 10x asking price over the i9-13900K!
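The peak numbers in the comparison above all come from the same cores × clock × FLOPS-per-cycle product. A quick Python sketch of that arithmetic (the 75% LINPACK scaling factor is the commenter's assumption, not a measured value):

```python
# Peak theoretical FP64 throughput: cores * clock (GHz) * FP64 FLOPS per cycle.
# FLOPS/cycle assumes FMA counts as 2 ops per lane:
#   dual 512-bit FMA units -> 2 * 8 lanes * 2 = 32 (Xeon W9)
#   dual 256-bit FMA units -> 2 * 4 lanes * 2 = 16 (Threadripper 5995WX)

def peak_fp64_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    """Max theoretical FP64 throughput in GFLOPS."""
    return cores * ghz * flops_per_cycle

xeon_w9 = peak_fp64_gflops(56, 4.0, 32)   # 7168.0 GFLOPS, ~7.2 TFLOPS
tr_5995wx = peak_fp64_gflops(64, 4.0, 16) # 4096.0 GFLOPS, ~4.1 TFLOPS

# Assumed 75% scaling efficiency gives the quoted LINPACK estimate:
print(0.75 * xeon_w9)  # 5376.0 GFLOPS, the 5.376 TFLOPS figure above
```

Plugging in the 13900K's 8 P-cores (16 FLOPS/cycle, AVX-512 fused off) plus 16 E-cores (8 FLOPS/cycle) at 4 GHz lands near the ~1 TFLOPS figure quoted above.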
@@yungnachty4474 A year until a WS/HEDT refresh; Intel hopes to launch Emerald Rapids server SKUs by the end of this year. They're not the same, but by comparing the server CPUs and extrapolating the results we might see the potential performance and power consumption of ER WS parts.
@@NUCLEARARMAMENT Who are you replying to? I don't think anyone in this comment thread was advocating for getting a 13900K over the 56-core W9 for the types of things you would use this processor for.
I mentioned the 13900K only to point out that these W-series Xeons are using Alder Lake P-cores and not Raptor Lake ones, as using the P-cores from Raptor Lake would allow the chip to clock higher due to being much more efficient.
@@yungnachty4474 I am talking to the person who started the comment thread. I provided a comparison to the 13900K in exact detail, and I got my point across.
The Golden Cove cores used are about 10% slower than the Raptor Cove ones, but you get 4-8 times the memory bandwidth, and 2-4 times the floating-point and integer throughput per core on the Xeon when you compile your applications to take advantage of it or use ones that already do.
The 10% IPC improvement from the Golden Cove core to the Raptor Cove core doesn't mean much outside of single-thread workloads; you also have no ECC support on the 13900K unless you get a W680 motherboard.
Edit: E5 stepping is the final version?
the CPU I used in the end is retail status
@@der8auer-en Noticed any improvements in AIDA's cache and memory latency? The E3 stepping has 40 ns L3 and 100 ns memory latency, which we assumed were the main causes of the underperformance in many real-world tests.
I could tell you how to get rid of that pause and latency issues without doing a reinstall.
It would be interesting to see what kind of single core performance you could get by disabling some cores and overclocking.
I would love to see a benchmark of Star Citizen with this CPU!
Wow, that power draw is enormous... At least here in the States with 15 A 110-120 V (RMS) circuits, that's a seriously non-trivial amount of power draw.
You'd be tapped out around 1,800 watts on a 15 A breaker, but could go to 2,400 on a 20 A. Maybe we will see the rise of 30 A single-phase 120 V circuits to support our ever more powerful CPUs at 3,600 watts! I see they have 30 A single-pole breakers at Home Depot for $6.98, but I'm wondering if you'd need larger-gauge Romex from the box to your server outlet, and what kind of outlet you'd have to use; guessing twist-lock 30 A.
@@JimBronson Most house wiring is going to be around 12 AWG, good for 25 A max, though regulations limit continuous loads to 20 A. Quite a few circuits are only 14 AWG, good for 20 A max, with regulations limiting continuous loads to 16 A. If you want a legal and safe 30 A circuit, you'd want to run at least 10 AWG. Alternatively, you could get a 240 V circuit and use slightly less than half the amps for a given load (PSUs are more efficient at 240 V than at 120 V).
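The breaker figures in this thread are just volts × amps, with an 80% derate for continuous loads (the NEC-style rule the reply refers to). A minimal sketch, illustrative only and not electrical advice:

```python
# Usable watts on a household circuit: amps * volts, derated to 80%
# for continuous loads per the (assumed) NEC continuous-load rule.

def circuit_watts(amps: float, volts: float, continuous: bool = True) -> float:
    """Usable watts on a breaker; continuous loads derated to 80%."""
    derate = 0.8 if continuous else 1.0
    return amps * volts * derate

print(circuit_watts(15, 120, continuous=False))  # 1800.0 W, absolute breaker limit
print(circuit_watts(15, 120))                    # 1440.0 W continuous
print(circuit_watts(20, 120))                    # 1920.0 W continuous
print(circuit_watts(30, 240))                    # 5760.0 W continuous
```

This is also why a ~700 W CPU plus quad GPUs gets uncomfortable on a single 15 A/120 V circuit but is trivial on a 240 V one.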
I lolled so hard at the Tannenbaum sticker.