I clicked in because I thought you were holding a 3.5 inch floppy lol
Way too small to be a 3.5”. More like 5.25”. And yeah, I thought the same thing.
I also thought it is a floppy drive. 😆
same lol, but I thought it was a 5 1/4 floppy.
same tbh
I also thought it was a floppy, but a 5¼.
Thank you guys for doing this. Windows gets all the love and as a recent adopter of Fedora in February who also likes hardware knowing how well new hardware runs on day +1 is important to me.
It's crazy that Windows throttles new chips from both Intel and AMD
Microsoft probably reasons that most compute should be done in their Azure cloud and same for storage really.
They want to own your digital you and get rent for accessing it at the same time.
The last thing they want is an optimally performing non-cloud solution for everyone.
@@TheEVEInspiration 110% this, look at how autosave is only to the cloud on office apps
@@TheEVEInspiration I wish so badly this weren't the truth. Why are they so evil. Why do they need more money? Just make enough for everyone to have a nice life. We don't need socialism, but we do need to put a lid on capitalism.
It's not that crazy. They've always been incompetent, but Wendell hasn't always had a youtube channel.
It's mainly because most of Microsoft's business is in the corporate world, and they intentionally tune the power plans and settings to save power. It's also to keep the state of California off their backs because of the laws there.
I wonder if all that background telemetry, Windows Recall, ads all over the OS, and online integration nobody asked for is impacting Windows performance 😂
FBI! OPEN UP!!
DANG DANG DANG DANG!
I've seen a post on Twitter where someone disabled all but 1 P-core and performance in Cyberpunk actually went up, by like 10%. So I think it's scheduler-related. (And I definitely don't want to defend Microsoft's data vacuuming.)
Not only that. The Windows scheduler is dumb. It will literally move software threads across the physical threads of your CPU for no reason at all. This is why the Ryzen 7 9700X sometimes does better than the Ryzen 9 9950X (which has two CCDs instead of one).
Not to mention how poorly programmed those ad displays may be.
@@Moejoe647 Are you sure you're not mixing this up with someone disabling 1 P-core (core-0) and leaving the rest untouched? That's a post I remember seeing too. It's an old way to get better performance.
I searched Phoronix for a review on these processors as I had a strong hunch that Windows was the culprit for low performance.
And? What's the conclusion?
Wendell! Just wanted to say I am enjoying your work. I'm a recent Windows refugee, and have switched to Kubuntu 24.04. Hoping I can learn more about Linux from this channel.
KDE great choice 👍🏼
Welcome to Linux! The more, the better for all. Ubuntu here, but nevermind, it is the same family.
I've been waiting for this to drop since you posted the Windows review on the main channel. Thank you as always, Wendell.
I am glad Steve introduced me to your content. I wish it happened sooner. You have such amazing information! I can see why you two linked up.
Not surprised this works better on Linux; this platform was the first to come out with fixes/optimizations for Intel's big-little design, ahead of Windows. Microsoft just doesn't seem like they know what they're doing anymore, smh.
GameModeRun shouldn't need any special adjustments for Arrow Lake, since the E- vs P-core detection simply looks for differences in the max frequency among the cores and uses the set that reports the higher number, so that should already work out of the box. At least with v1.8.2, since my initial implementation in v1.8.0 had a bug for E- vs P-cores (I didn't have such a system, so I could never test it).
Edit: OK, spoke too soon. I added a 5% safety limit in the detection code, and it appears that some individual cores on at least the 13900K can boost more than 5% over the other P-cores' max frequency, leading to GameModeRun pinning the game to only those 4 cores that can boost.
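For anyone curious, the idea is easy to reproduce from sysfs. A rough Python sketch of the detection logic (an illustration only, not GameModeRun's actual C code), including the ~5% headroom that the favored-core boost trips up:

```python
# Sketch of frequency-based P/E-core detection: read each core's reported max
# frequency from sysfs and keep the set within `headroom` of the fastest core.
from pathlib import Path

def detect_fast_cores(headroom=0.05):
    freqs = {}
    for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        f = cpu / "cpufreq" / "cpuinfo_max_freq"
        if f.exists():
            freqs[int(cpu.name[3:])] = int(f.read_text())
    top = max(freqs.values())
    # If a few favored cores boost more than `headroom` above the other
    # P-cores, only they survive this filter -- the pinning bug described above.
    return sorted(c for c, f in freqs.items() if f >= top * (1 - headroom))

if __name__ == "__main__":
    print(detect_fast_cores())
```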
Correct me if I'm wrong, but wasn't the Ryzen 9000 series also sometimes better on Linux than Windows? Between similar performance at lower power, improved AI compute, and sometimes better Linux performance, do you think this is a consistent shift in the market strategy from the big manufacturers (perhaps trying to stay competitive with the rise of ARM?) or is it just a matter of economics (silicon yields down, geopolitics goofy, etc.)?
I'd say this is at least partially optimization. Linux optimizes better and faster, sometimes years before Windows does.
Open source, when widely adopted, is clearly a superior solution. Linux is more optimized in general (gaming suffers mainly due to low adoption, but it's still mostly pretty good).
@@MrFatpenguin I only game on Linux now, and I can attest to that with NVIDIA.
Except where anti-cheat gets in the way, and also some Unreal Engine games (Icarus being a big one).
Intel also provides code fixes and optimizations for Linux.
@cajonesalt0191 I don't think it's part of any new strategy. The server market has always been dominated by Linux, and business customers have been their biggest customers. They would have been fools to ignore their needs, and they didn't. You can look through the x86 instruction set for dozens of oddly specific single instructions where it's hard to see why they need to be implemented in hardware. These were requests from their big customers, because they improved performance or energy efficiency at scale.

I think the discrepancy in performance between Windows and Linux lies in their thread schedulers. Windows moves threads between cores more than it needs to, and thus wastes time doing nothing in those nanoseconds, which adds up over thousands of threads. This made the tech news when earlier Ryzen chips started running faster on Linux a few years ago. It's hard to know if anything has changed since then. Meanwhile, Linux has experimented with a few different thread scheduling algorithms and has multiple available as options depending on your needs.

Microsoft doesn't have the impetus to create a high-performance kernel for Windows because their big customers that care about performance are running Linux on Azure already. Windows users who do care about performance are a captive market: they're tied to Windows by the software they use but don't spend enough money with Microsoft to have any influence there. Could they close the gap? Yes. Will they? No, because high performance doesn't seem to be a priority for the Windows team right now. It's more about pushing Bing, telemetry, advertising, and "AI" features. As far as they care, the thread scheduler is good enough.
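On Linux you can at least take placement into your own hands and see how much it matters. A hypothetical sketch using os.sched_setaffinity (the taskset equivalent); the core IDs 0-7 are an assumption standing in for "the P-cores on your particular chip":

```python
# Pin this process (and the threads it spawns) to a fixed set of cores so the
# scheduler can't migrate the work onto E-cores or across CCDs.
import os
import subprocess

P_CORES = set(range(8))  # assumed P-core IDs; check lstopo/hwloc on your system

os.sched_setaffinity(0, P_CORES)          # 0 = the current process
print("running on cores:", sorted(os.sched_getaffinity(0)))

# The same idea from the shell, launching a game or benchmark already pinned:
subprocess.run(["taskset", "-c", "0-7", "mybenchmark"])  # hypothetical binary
```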
The hanging M.2 is cursed
It's even better that it's an optane, that module is becoming irreplaceable.
@@evildude109 There goes my dream of having ultra-high-speed, ultra durable (not Gigabyte) M.2 on my laptop...Optane died so soon
@@ruikazane5123 I took Wendell's advice when he made his first RIP Optane video, whenever that was, a couple years ago. I got a P1600X, 118 GB, for around $100. It's awesome; I use it as the boot drive for my laptop. They've almost doubled in price since then, and the used market is drying up.
Thanks for your balanced opinion - I value it.
The new E-cores are kinda exciting. You can do so much with an N100 mini-pc now. Can't wait til the replacement hits.
The N100 is great, but it has one and only one (but big) problem... single-channel memory.
If that thing had dual-channel memory capability, it would be such a great thing.
The sad thing? Its new brother, the N150, is still single-channel only...
@@Karti200 And only 9 PCIe lanes.
Thanks so much for the video; most of the time I found only useless Windows videos for it.
I am very interested in an ECC support comparison between current Intel and AMD. Please, most likable person on YT!
I'm so old. I thought Windell was holding up a 5.25" floppy disc in the thumbnail.
ngl, looks like a floppy
So old that people still measured things in inches!
Will be interesting to see these CPUs in NUCs. They're Asus now, but NUCs have always been interesting.
05:24 That old Arctic cooler :D Impressed that it works on the new socket.
You missed something. If you want to get the best performance out of these new CPUs, you will also need new memory.
A better experience on Linux does not surprise me. I may choose Intel for my next PC for code compilation.
This is an interesting step for Intel.
When it excels, it does really well, but also uses about the same power as the previous architecture.
When it performs okay, it's more efficient but nowhere near AMD at times.
Then sometimes it just seems bad.
I think it is a step in the right direction, but still needs work both on the software and hardware sides. 🤔
The same thing happened with Ryzen 1000.
@@tommyking626 Kinda, but not that close. I mean, Ryzen 1000 was miles ahead of previous AMD CPUs in everything, both performance and efficiency.
Intel 200, meanwhile, trades blows with 14th gen and is somewhat more efficient.
Still, it is true that both Ryzen 1000 and Intel 200 mark a change for each company. Let's just hope Intel has similar success with the following generations; otherwise AMD is just going to keep increasing prices and offering meh performance improvements each gen, just like Intel did from the 3000 to the 7000 series.
It wasn't ready for release and has a lot of bugs.
Ryzen 1000 was 8 powerful cores for $350 (we bought a 1700X), and Intel would sell you only 60% of that for $350.
Great video as usual thank you
Wendell!
When HDR is officially supported on Linux, I'll never ever ever look back at Windows.
I need a beginner Linux distro. Windows is sapping my performance.
As a complete aside, I was just pondering why they chose to go with SMT on the P-cores in 12th→14th gen. My gut tells me they should have had SMT on E-cores and non-SMT P-cores if they really wanted to get the best single threading and latency for foreground tasks and efficient use of die area for background tasks.
Imagine shrinking 2x and not gaining 2x in perf like it was before...
Have you guys done a video on Linux performance with lunar lake, e.g. 288V?
It's six cores for the Ultra 5; wish it was 5, I would have gotten it instantly!
JayzTwoCents actually found one of the biggest issues with the 285. Some performance issues got fixed by using CUDIMMs. I think the issue is having a task that takes a certain amount of time, splitting it into a pipeline, and then running it at a way lower frequency than it is designed to handle; so if you run traditional DDR5 you get extra latency that you wouldn't have if you used a CUDIMM.
Yup, and it's completely logical despite all the early knee-jerk bagging of Intel. Next-gen memory controller, next-gen RAM.
It's not the CUDIMM, it's the memory frequency. Jayz used RAM at 8400, and you don't need a CUDIMM for that frequency. Any higher than that might even hurt performance a lot, because the memory controller would run in Gear 4.
I see that Nobara wallpaper ;D
How are your kernel or other compile times? Any uplift over RPL-R or Zen 5?
On a 14700K, I can do Chromium in 1 hour, 50 minutes and 14 seconds.
I'm curious too.
On my i7-6700HQ this stupid Chromium takes 14 hours, if I'm not doing anything else on the computer.
I know you weren't interested, I just wanted to complain. How Firefox can be literally 10 times faster (between 1.0 and 1.5 hours) is beyond me. Stupid, stupid Chromium.
@@Winnetou17 Bloat, simple. Bloat and probably lots of hidden backdoors few actors know about.
@@Tudorgeable I wonder what the chances are that that's a coincidence?
What is the CPU frequency scaling doing? What happens to webXPRT if you `cpupower frequency-set -g performance`?
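Worth checking before any benchmark run. A small sketch that dumps each core's current governor (changing it needs root, e.g. with the cpupower command above); the default powersave/schedutil governor can skew short, bursty tests like webXPRT:

```python
# Print the cpufreq scaling governor for every core.
from pathlib import Path

cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
              key=lambda p: int(p.name[3:]))
for cpu in cpus:
    gov = cpu / "cpufreq" / "scaling_governor"
    if gov.exists():
        print(cpu.name, gov.read_text().strip())
```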
Intel All Access described how this generation of Thread Director would use E-cores first, and only move to a P-core if the workload was too big for an E-core.
I've seen it where E-cores give a big uplift in gaming, along with CUDIMMs and a cache overclock.
Meanwhile the 7800X3D in Windows: "ahem... cool"
linux mentioned 😎
A possible Primagen mentioned?
What's the SR-IOV functionality like on this platform?
What do you think of Jayz2Cents' overclocking, and also of the alleged (I didn't see it uncropped) 1P+16E-core gaming result that was faster?
Wait, can you run an M.2 without screwing it down?
Some laptops don't screw them down; they're just held down by the case.
When you insert it into the slot, there is a noticeable bite where the contacts "clip in". Just don't wiggle it around too much
The best takeaway from this video was the existence of lstopo (or hwloc).
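If hwloc isn't installed, a chunk of what lstopo shows is also readable straight from sysfs. A rough sketch that groups logical CPUs by physical core, handy for spotting which IDs are SMT siblings on P-cores and which are lone E-cores:

```python
# Group logical CPUs by (package, core) -- a text-mode stand-in for part of
# what lstopo visualizes.
from collections import defaultdict
from pathlib import Path

cores = defaultdict(list)
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    topo = cpu / "topology"
    if topo.exists():
        pkg = (topo / "physical_package_id").read_text().strip()
        core = (topo / "core_id").read_text().strip()
        cores[(pkg, core)].append(int(cpu.name[3:]))

for (pkg, core), cpus in sorted(cores.items(), key=lambda kv: min(kv[1])):
    print(f"package {pkg} core {core}: cpus {sorted(cpus)}")
```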
So is it a stretch to say that Microsoft has a hard time following all the changes coming from the CPU manufacturers?
Curious to know how the iGPU behaves on Linux
+
Wendell... timestamps, timestamps!
I never heard about PCI Express ECC before.
Actually, PCI Express encoding already has built-in forward error correction. I suspect this got implemented because of CXL, which requires PCI Express to behave like ECC memory. That means extra bits need to be transmitted for ECC memory over CXL to the CPU to work, but it will add overhead.
Maybe it's able to prevent retransmission in case of bit errors?
Hmm, I have noted a lot of work in Intel's Linux development, so I guess it's paying off. I'm just a casual user anyway. I wonder if, for now, making sure that the future of Intel works well on Linux is something that can sway some share away from AMD? Currently I feel that Intel has a lot of good future tech and ideas, but many need some tuning to work optimally; after that it's the balance of power/performance.
I'm stuck on a NUC 12 Extreme compute module. I'm guessing an upgrade to the 245K would work for me, considering I can't hit past 4.5 GHz with cooler limitations, and I expect multitasking would likely work better on these with Linux, so I would not be losing anything. I'm trying to aim for a more efficient build overall, and from the sounds of it, I think CUDIMMs are the optimal way for me to go as well.
The disadvantage of spacing the P-cores out is latency, which Arrow Lake suffers a lot from.
Soooo, they've made a more complicated chip. How many years to tune it?
Does a CUDIMM improve gaming? On Windows it seems to be that way, as per the Jayz2Cents video.
So this is Foveros? Lots of separate systems "working together" in one chip? Seems too complicated to work without headaches... Intel has worked on this for some time, and it appears to be beta.
i love this channel ❤
Wendell, you are way too optimistic @11:20. It's going to result in better shareholder outcomes, not lower prices for us consumers.
How about running 4 DIMMs of RAM?
wish it was hyper.
I was hoping for more charts but it's ok
I recently ran some tests in WSL2 on Windows on my 14th-gen rig and got, predictably, 20% worse performance than about 2-3 months ago. On native Linux I have the same performance as 3 months ago. I suspect that MS broke something in the recent updates, since they don't describe in detail the full list of changes in the updates.
For gaming it's a pass, but I'm still on a 12900KF on Linux. I would probably go 9000 series if I were to build again today, mostly for gaming: the 9800X3D is around the corner, or a 9950X/9950X3D depending on use case.
I'm also on 12900K, but won't build AMD until they widen their DMI
I think you misspoke when you said the Core Ultra 5 CPU has 5 P-cores; it is 6. But to my question: how many compute dies does Intel make for this series? Is it only one, with binning deciding what bracket it ends up in, or is it more than one die?
What do you mean about the P-cores? This clearly has 8 P-cores, and everybody has known that for months.
@@Winnetou17 At 0:49, talking about the Core Ultra 5 variant.
@@afre3398 Oh, ok, sorry, missed that.
Interesting that both Ryzen 5 and Arrow Lake do better on Linux than Windows.
Why not run the latest kernel (6.12-rc4) instead of such an old kernel?
Watching Windows users constantly complain about Windows being a pile of utter rubbish never gets old. As a non-gamer with very little interest in gaming outside of a couple of indie games, I find Arrow Lake quite nice.
Please secure that M.2 connection at 5:24.
Thanks for the unbiased and objective reviews. I really dislike how many tech reviewers are shitting so hard on this generation launch. Personally, I think it's impressive how good it is for such a different architectural layout. It's a 1st gen of its type and drivers are probably still being worked on. Performance will probably improve in the next few months. That being said, I can't wait for the 9000X3D launch.
So many channels are just gaming benchmark and test numbers channels. Few actually look into (or even understand) the actual HARDWARE and what it offers as a piece of technology.
I'll give Gamers Nexus a pass though, since they do other in-depth hardware tests around what they're testing, but the context of their presentations doesn't do them many favors.
Wendell .. you have got contacts to Intel, right? Could you ask them just one thing: "Why?"
0:46 "The Core 5 is 5 performance cores," you said. Haha
I really appreciate your comment about the NPU not being leveraged atm and that most comparisons focus on the CPU part.
Yes, the performance compared to previous generations and platforms is meh at best; however, when the software catches up and games and applications take advantage of the NPU, it will make a world of difference... maybe a bit scary too.
Companies need to innovate and build the platforms to develop the next generation of applications... and indeed people need to understand what their workload requires, what platform is most suited for it, and at what point to make the upgrades.
At least for now there are new tools coming to the fore, and it may make sense to make this point a little louder in commentaries. 😎
Didn't Jay just show a huge jump in gaming with CUDIMMs?
what is VID?
I'd still wait for the 9800X3D before judging what the best gaming CPU of this year is.
I mean, if the 9800X3D /isn't/ the best gaming CPU of this year, it's a pretty severe problem.
Gaming is literally the most useless metric ever used to measure CPU performance. Oh noo, my game doesn't run at 200 fps, I must now sacrifice everything else just to reach a barely noticeable change in frame time latency. What an investment.
@@roccociccone597 I agree. I was fuming over Zen 5 reviews and to a degree Arrow Lake ones (there was more regression there). Gaming is nice and quite useful as a chaotic benchmark of CPU functionality, but also takes ages to update. If it even is updated. I made this comment just in case someone who watched Wendell's video forgot that there will be 9800X3D dropping in 8 days.
@@RotaryJunkie For the gaming community, certainly. And for engineering as well. Though most games don't need the super insane FPS that only a handful of monitors can currently even take advantage of. An experienced player with a 120 Hz monitor will beat a hype-gamer with a 500 Hz monitor.
Don't get me wrong, fluidity is nice, but seeing CPUs only through the lens of FPS is a bit ridiculous (I'm not saying you do that).
I expect mild teething issues with the flipped cache to pop up. And we will see if it was a good idea to give users the option to OC. It needs to be made super clear that with OC of an X3D part the warranty is VOID.
What about the instability in Core chips with graphics enabled?
Man, I hope this isn't a widespread problem... we can't have Intel fumbling all the time, AMD is already stagnating...
@@roccociccone597 They just aren't giving their toys away to play with... can't find the bugs if no one has them yet. Makes me wonder what they are really going to do with these.
New Intel has a decompression problem too... after 3 months it gives blue screens in games.
Intel made good server CPUs, like Xeon. Only it's called Ultra now?
Tokens per second in Llama 3.1 405B using CPU+RAM? Just for the lolz.
Interesting you say memory latency is worse. Jayz2C was looking at overclocking and noticed that putting new CUDIMMs in gave a 15% uplift before any OC happened.
Memory latency is worse compared to RPL, if you run AIDA64 memory test the latency shown in nanoseconds is sadly higher than RPL. That’s why going with CUDIMMs helps because with apples to apples UDIMMs the same kit on ARL has higher latency than RPL.
This is a big topic of discussion on the overclocking forums.
Even with what Jayz2C did, and even though I didn't see his numbers, I saw others'; IIRC der8auer and somebody else had memory latency figures, and even after all the OC and ring bus OC and tuned memory, it was still higher latency than Raptor Lake. I think the lowest I saw was 70-something nanoseconds, while on RPL it was 60-something nanoseconds. And that 70 was miles better than stock, which was over 100 IIRC.
Mo betta blues...😂 Blue; Intel; get it?
Put the better ram in
Windows always was, is and will be crap compared to GNU/Linux (not "Linux", which is not an operating system, but a kernel).
Intel should reconsider some of their design choices. A product manager should have noticed that gaming is by far the most important workload to optimize for in the desktop space, yet they chose the tiled architecture approach with a ton of added latency. I guess the 9800X3D will win this fight against Arrow Lake by a large margin, and AMD will charge a premium for it. Thanks, Intel!
"Gaming is by far the most important workload to optimize for"
You're funny, real funny. Not like desktops are used for anything else in the world, right?
@@TheAmazingCowpig Maybe I should have added a bit of context to not trigger comments like yours. Of course there are other workloads people use their CPUs for, but I'm speaking about the market perception that drives sales. Have you seen Hardware Unboxed or Gamers Nexus basing their narrative around productivity tasks alone?! No. The narrative is shaped around gaming workloads, hence it matters to the bottom line of these CPU companies, or you would see more sales of 13th/14th gen, as they are already superior in multi-core workloads against AMD's X3D chips. It seems most people prefer V-Cache (and good gaming performance) over E-cores (multi-core performance). Over here in Germany, AMD has a solid lead against Intel at Mindfactory, with a split of 90 to 10 percent. These numbers should speak for themselves. Edit: Tom's Hardware just posted the bad sales data for Arrow Lake at Mindfactory, what a surprise.
@@seylaw People buying chips for their gaming desktops continue to be Not Important; what matters is how many OptiPlexes Dell can sell. If it's true that AMD is outselling Intel 9 to 1 in enthusiast markets, then Intel's 60% share of this quarter's sales overall should make this point extremely clear.
@@evildude109 You are talking about a totally different market. These Arrow Lake SKUs are meant for the desktop enthusiast market first and foremost. Intel should have come up with better memory latency, AVX 10.2 and a VCache alternative to compete with AMD at this point in time while keeping the amount of cores.
You're completely wrong. The primary place where desktop CPUs are used, by dollar amount, is in enterprise workstations. The secondary place where desktops are sold is boxed desktops, and then you get to gaming desktops, which split decently for Intel in prebuilts.
The number one thing the tile-based approach does is allow for component reuse. This means that desktop parts can become more of a commodity. There is less need for Intel to subsidize development with gaming parts.
Am I the only one that doesn't care about thermals beyond a point? Why do I care if a PC requires 400W instead of 800W? You're talking about pennies even if you left it on 24/7 pulling max power. I assume it's a much bigger deal in the enterprise space, because for the individual user it seems inconsequential.
It's not pennies though, especially if you live somewhere like California with incompetent government making electricity more expensive. One of the hosts of The Tech Pod, either Brad Shoemaker or Will Smith, recently disclosed that he was paying over $0.40 /kWh off-peak, and *over $0.70 /kWh* on-peak, with peak rates applied through the entire afternoon and evening. At that point, ridiculous things like running off-grid with a propane generator start looking cost-effective.
Even with more normal rates, say, $0.18/kWh, a 20 W difference in idle power adds up to $100 over 5 years, assuming you sleep the machine when you're not using it. That's the Intel idle advantage. If the difference were as bad as 400 W, even at only 1 hour/day that's $130. Power can be a significant fraction of TCO, especially if the computer is heavily used.
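For anyone who wants to sanity-check those figures, the arithmetic is simple. A quick sketch (the 14 hours/day here is an assumption standing in for "sleep the machine when you're not using it"):

```python
# Back-of-the-envelope cost of extra power draw over time.
def energy_cost(extra_watts, hours_per_day, years, dollars_per_kwh):
    kwh = extra_watts * hours_per_day * 365 * years / 1000
    return kwh * dollars_per_kwh

print(round(energy_cost(20, 14, 5, 0.18)))   # ~92: a 20 W idle gap over 5 years
print(round(energy_cost(400, 1, 5, 0.18)))   # ~131: a 400 W gap for 1 h/day
```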
ECC support again locked? So another useless CPU from Intel.
It's nice that throughout the Intel debacle, there has been a balanced and nuanced opinion.
Intel always with the "muh, need new mobo" scheme.
Pathetic, honestly.
These floppy disks were everywhere in the 90s; now they're obsolete.
Honestly both Intel and AMD have such atrocious chip names now I've lost interest.
Wow, a reviewer who's not shitting on ARL? That's rare and refreshing. I've been telling everyone that it's not that these chips are terrible or a waste of sand like people say. They are just not well optimized, especially on Windows, mostly because Windows is ass and 24H2 is not helping. The second reason is that this is a brand new architecture that probably needs some bugs ironed out, software- and architecture-wise. I hear Nova Lake will reintegrate the memory controller into the compute tile, and it also seems like boosting the ring bus and E-cores on ARL helps gaming performance a good amount, according to JayzTwoCents' findings. And before I get the usual AMD fanboy saying cope or something silly, do some research or just watch the damn video.
I bought my 11700k on sale in early 2022 with the idea of upgrading to 15th gen when it came out. I think I still might upgrade to the 285k.
9950x3d
They're not bad CPUs, their value is a bit off. But if you do need the productivity/workstation part (like I would for compilation, so sick of that stupid Chromium taking 14+ hours) then it's actually competitive. Intel needs all the help it can get now (literally to survive)
@@Winnetou17 I run a buttload of VMs, some with telephony programs, and a heavy multi-track audio recording, compression, and import/export workload, all of which my 11700K handles fine. For me, it's mainly about the improved I/O.
Not going near it until I see the bugs are fixed (and no ring bus failures)
E-waste cores... it was a junk idea from the start, but somehow they have to get rid of the manufacturing waste.
mb
This platform is a downgrade, not an upgrade.
Boycott Intel, for it's longstanding support of Eth nic cleansing and Jeno Side!
Sauce?
@@linrono It's an Israeli company
@@Felale that's not sauce.
@@linrono If you can't web search to find something that's on Intel's own website, this channel is too advanced for you. I recommend an LMG channel, like Techquickie, more your level!
@@linrono Intel have a page specifically for their support of Israel. How can you not find that yourself?
Intel Fanboy Yes Sir++++
AMD and Intel made this a year of CPU flops only!! Intel's 14th-gen CPUs now look pretty good on price, and their overall performance is better than the flop Zen 5 CPUs for the price.
The biggest thing keeping me from Linux is the gaming side and the complexity of getting it set up. Also, drivers aren't as good as official Windows drivers for games. Tried twice now. Just not ideal. Maybe I'll try Bazzite in the future, but Windows is so good for gaming, especially with a completely debloated / AI-free Windows install.
Intel made a cheaper version of Xeon hardware for everyone now... the more bandwidth the better for Linux users. AMD's tiny boards have far less PCIe Gen 5 bandwidth versus Intel's ton of bandwidth to support 6 M.2 drives and many PCIe 5.0 slots... it's a server CPU with workstation in mind, to get the best of both worlds. For gaming... CUDIMM 9600 is all that's needed to dethrone any AMD X3D chip.
Pheww more glued together snake oil.
Linux sucks
no
amd cry boys crying
Found the delusional propagandist who runs userbenchmark
As a dev on Linux, Arrow Lake and Zen 5 are both great gens. It's the gamers that keep crying about this shit.
maybe its good on android or iPhone OS or Huawei OS too!!
8:29 WTF was that huge NVMe?