It's certainly interesting. Intel is clearly trying to be more efficient, which is much needed to stay competitive with Ryzen.
I don't think they care about Ryzen as much as they want to make Apple come back to using their chips. I can see they're trying to pivot to better mobile chips, using E/P cores just like the big.LITTLE arrangement in Apple's ARM designs. AMD doesn't have that kind of flexibility to combine different types of cores, yet.
@@slimjimjimslim5923 Does Apple even make a ripple in the market to matter all that much? I always thought Apple's market share was peanuts compared to x86 corporate and enterprise. It's large enough to make Apple gajillions in profit, of course, but in terms of volume (even including phones), they're just a drop in the bucket in overall units sold.
@@slimjimjimslim5923 Apple will never go with Intel again. They have their own design and it works; why would they go back to making less profit on their own product? In-house development is way better for the product and for how long it takes to make.
@@slimjimjimslim5923 E/P cores only exist because they can't build one really efficient core, and if you've been following the latest SoC releases, people are even shifting away from that, e.g. the X Elite, AMD Ryzen, and now MediaTek with their all-big-core CPU design. And I don't think Intel's engineers are so delusional as to be trying to win Apple back; they're trying not to lose the contracts they have now with PC vendors, and even trying to open Intel foundries to competitors.
@slimjimjimslim5923 Even if Intel chips were twice as fast and efficient, Apple would not use Intel again. Apple wants complete control over every part of the hardware inside their devices.
I really appreciate that Intel prioritized efficiency and even used TSMC instead of stubbornly using their own silicon, which isn't quite at the same level yet.
@@curio78 It's coming in September.
@@curio78 September 😆 Idk when Amazon will have them available, but your local store should have them if you're in the US.
@@Nicky_TM Bro, TSMC 5nm has lower transistor density than Intel's 10nm... TSMC is way behind Intel in technology 🤡🤡. The problem is that Intel's production is struggling; give it some time and Intel will get back to normal.
I don't know if you can call Lunar Lake low-end when it will likely be the fastest chip to date at Windows tasks like those in PCMark 10. It will be extremely responsive in business-class laptops and ultrabooks while beating Qualcomm on power consumption, which per Notebookcheck is only slightly better than Meteor Lake's.
Lunar Lake is more like low-to-mid range; Strix Point will take up the high end. To compete, Intel will also bring high-end Arrow Lake to laptops. Lunar Lake is mostly meant for business-class laptops, for college students, and for people who want a portable option for AI, gaming, and productivity.
He was talking in the context of its sibling chips from the same generation.
They are using TSMC because TSMC has a better process than what Intel can ship at volume right now. 20A is better, but not at volume.
Also consider that chips get designed well ahead of time. Even if they had wanted to use Intel fabs, at that point two or so years ago they saw TSMC in a better position, so the choice was between playing it safe and gambling. I think Intel took the best option available at the time for these chips.
They’re using N3B for the Battlemage GPUs but the CPUs remain on 18A
@@PaulsTechSpace 18A is Panther Lake in 2025. No 18A chips are launching this year
@@moist_ointment On the original roadmaps, Lunar Lake was labeled as 18A
@PaulsTechSpace I've never seen any roadmap to suggest that.
18A, being early 2025, is 6 months ahead of the original schedule. Intel purchased a large volume of N3B capacity years ago.
The timelines don't align
I think 15th gen will be good. Intel knows they need to blow people's minds after the previous gen's failure.
Congratulations on 100K subs by the way.
Lunar Lake has 16GB or 32GB of memory built into the SoC package; that's quite exciting. If overall system performance and efficiency are what Intel claims, it will kill the Qualcomm Snapdragon X laptops and win large market share back from AMD.
The Snapdragon X Elite isn't going to fare very well against Lunar Lake even if Lunar Lake falls a bit short. It has marginally higher battery life than Meteor Lake, and while it does have decent single-core and multicore performance (in most cases at least), it struggles in gaming. Lunar Lake should be able to beat that quite easily. The real competition for Lunar Lake is low-to-mid-range Strix Point.
@@auritro3903 The X Elite is hobbled by Windows-on-ARM more than anything else. It just cannot gain x86's maturity in a short time; the emulation layer will have glitches and hiccups for many non-Microsoft apps.
I'm actually super excited for Lunar Lake, hoping it's super power efficient :)
Does this mean it will have better temps in gaming?
@@adriatics.3357 Yes
@@adriatics.3357 Yes, since it will be more efficient, it will use less power, and thus produce less heat.
@@n.d.n.e1805 Can't wait to see the new gaming laptops with Lunar Lake then.
Intel also said HT is modular. They removed it in Lunar Lake, but Arrow Lake and Xeon are a different story. Xeon will have hyper-threading, but we don't know if Arrow Lake will. Even if it's there, low-end Arrow Lake may not have it.
HT isn't necessarily going to the grave. It's just not present in the *mobile parts*, at least this time around. HT still makes a lot of sense in the desktop and server parts; I don't think we've heard that HT is completely, universally dead yet. On mobile, that 20% power for 30% IPC (remember, that figure is for the whole core, i.e., two threads running at the same time, not a single thread of execution) doesn't make as much sense when power consumption is the primary design consideration.
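A quick back-of-envelope check of those quoted figures, treating the ~20%/~30% numbers as given; this is illustrative arithmetic only, not measured data:

```python
# Rough numbers from the comment above: ~+20% core power for ~+30%
# whole-core throughput with HT on and both threads busy.
ht_perf = 1.30   # whole-core throughput with HT on, relative to HT off
ht_power = 1.20  # core power with HT on, relative to HT off

print(f"perf/W vs HT-off: {ht_perf / ht_power:.2f}x")  # ~1.08x

# So HT is still slightly perf/W-positive when both threads are loaded.
# The mobile argument is different: thin-and-light workloads are bursty
# and rarely keep two threads per core busy, so you pay the power and
# die-area cost of the second thread's state without seeing the 30% often.
```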
HT is axed on desktop for the time being, all reports suggest; "ripping out HT" is a ground-up thing. Even with HT still built in, turning it off on chips with 8+ P-cores already nets gains in a majority of games, along with the scheduler changes that removing HT allows (long pipeline vs. short and wide). If the E-cores are Raptor Lake level and we see them backfill the HT threads, Intel could have some very competitive CPUs again. Intel's current Core Ultra line has launched with HT; it's the upcoming desktop side that is being talked about, and the only crossover from Core Ultra is the new E-cores.
Great job on the video. Intel's poised to make a comeback? That's great news! We can't have AMD turning into Intel on the CPU side, and Nvidia needs more competition than just Radeon.
Intel should provide an option for users to choose whether to use the onboard low-latency 16/32GB memory as an L4 cache, and provide additional memory slots on the motherboard.
Several laptop chip designs didn't come to desktop. Ice Lake and Tiger Lake are somewhat recent examples.
Maybe memory manufacturers won't like this move from Intel to include the RAM inside the CPU package.
Intel has straight up stated that at the time of designing Lunar Lake TSMC had the better node for the job. Simple as that, apparently. A more humble Intel under Pat Gelsinger 🤷♂️
Meteor Lake never came to desktop because they had, and still have, mountains of 13th and 14th gen desktop CPUs in the distribution pipeline and didn't want to crush sales. It's that simple.
Also, Meteor Lake desktop probably wouldn't have been as fast as Raptor Lake; it would just have been more power efficient. In that case, it makes sense for Meteor Lake desktop not to exist. Power efficiency is more of a mobile-chip and server concern anyway, and for servers Intel already has the Sierra Forest E-core clusters on the way.
@@slimjimjimslim5923 Yup, Meteor Lake was both a clock speed and IPC regression. A pretty hard sell when Meteor Lake also costs more to produce than Raptor Lake.
The E-cores should have no problem accessing L3 cache, and they do have access to the 8 MB memory-side cache. Regarding the comparison of E-cores to Raptor Cove: that comparison is for E-cores connected to an L3 and the ring bus, i.e., Arrow Lake. This can be seen in the slides.
Amazing overview
I'm really looking forward to efficient hardware going forward; it's the only reason I'd upgrade my system at this point.
Good speculation and explanation of the topics, very down-to-earth and sober. It seems like good news that Intel is changing how the blocks are organized. Meteor Lake was still a quite decent leap, but it didn't live up to the hype. Given my urgency, I think I'll get a Meteor Lake machine now instead of waiting for the innovation and performance coming with Lunar Lake. Again, the efficiency will be great, but Meteor Lake is quite good for most people as far as I understand. The battery savings in Lunar Lake will be great, though. The nail in the coffin was that we didn't see Thunderbolt 5 in Lunar Lake.
I don't know if this is a silly question (I'm kinda new to all this gaming PC and streaming stuff), but if hyper-threading makes a core more powerful, how come the i7-9700K didn't beat the i9-9900K in single-core performance, given that hyper-threading is taken away on the i7? If you get what I mean.
"Single-core" is technically single-thread in some benchmarks, so you're comparing a 1C/1T setup against a 1C/1T setup. It will be a draw.
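To make that concrete, here's a toy sketch of the two kinds of scores; the workload, sizes, and scoring formula are invented for illustration. A 9700K and a 9900K would post near-identical single-thread numbers, and HT would only show up in the all-core one.

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # CPU-bound integer loop; stands in for a benchmark kernel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def score(workers: int, chunks: int = 8, n: int = 2_000_000) -> float:
    # Higher is better: total iterations completed per second.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [n] * chunks))
    return chunks * n / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"single-thread score: {score(1):.3e}")
    print(f"all-core score:      {score(os.cpu_count() or 1):.3e}")
    # On an HT/SMT machine, toggling HT mostly moves the second number;
    # the first barely changes, which answers the 9700K vs 9900K question.
```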
I don't know if this is right! Here's my conclusion: since the launch of the 12th-gen Alder Lake Core, the processor architecture has begun shifting toward multi-core, high-density processing and no longer focuses on single-core performance. Hyper-threading is mainly used to squeeze more throughput out of a single core. Now the new architecture focuses on optimizing multi-core performance and core density to get the best overall performance out of the processor. Not sure my thinking is correct.
It depends on where the bottleneck is. If the main limiter is utilization of CPU resources, then HT can provide a large boost. If you're thermally limited, then HT is slowing you down. Intel has been very thermally-limited lately, and that seems to be why the laptop and desktop chips are moving away from HT, at least for now.
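For anyone who wants to test that tradeoff on their own machine: recent Linux kernels expose a runtime SMT switch in sysfs. A minimal sketch follows; reading is safe for any user, changing the setting requires root, and it assumes a kernel new enough to provide the standard /sys/devices/system/cpu/smt interface.

```python
from pathlib import Path

SMT = Path("/sys/devices/system/cpu/smt")

def smt_active() -> bool:
    # "active" contains 1 when sibling threads are currently online.
    return (SMT / "active").read_text().strip() == "1"

def set_smt(on: bool) -> None:
    # Valid values for "control" include "on" and "off" (root required).
    (SMT / "control").write_text("on" if on else "off")

if __name__ == "__main__":
    print("SMT active:", smt_active())
    # Run your game or benchmark once with SMT on and once with it off
    # to see which side of the bottleneck you're on.
```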
@@IcyTorment The 12th gen is the most future-proof and has great value. 13th and 14th gen have pretty bad performance per TDP and the hottest CPU core temps. So that's why HT is an outdated technology going forward.
Arrow Lake will wipe the floor with Zen 5. Zen 5 is basically AMD's 14th gen moment. It's pretty much just Zen 4 with slightly better efficiency and better AI capabilities. Arrow Lake is a completely new architecture.
Sure. Believe it. Intel's on fire, literally, lol.
@@sterus2283 How so? It will be way more efficient, so it will use less power and thus produce less heat.
Will Intel's new gen use the LGA 1700 socket?
doubt it
No, it's moving to the 1851 socket and new Z890 chipset.
My concern is that Intel completely dropped the ball with 14th gen, and now we have to wait around for them to potentially just do it again.
If you watched the video, 15th gen should be a game changer vs. 14th gen.
@Yielar1 "Should" is the key word, the same as 14th gen "should" have been.
@@mahmahmahmahmah6699 Wasn't 14th gen always going to be a refresh? Though when it was released, it didn't refresh much of anything.
@@mahmahmahmahmah6699 It's a redesign of their CPU, so it won't be the same as 14th gen.
I'm curious. Why are they building AI into the CPU core if they already have the NPU? And why do they build the NPU if they already have AI support in the GPU? Do all *PUs need AI support? If so, are they aimed at different AI workloads? Which are good at what? So many questions...
The "AI" in the CPU isn't actually running AI workloads; it just finds the best way to use the cores. NPUs are better at AI workloads than GPUs, which also use more power; running AI on a GPU is like making a CPU do GPU things.
The AI allows them to change clocks on the fly. His explanation was wrong: what Intel is saying is that they moved from setting clocks with a fixed table to an AI model deciding what clocks are needed based on the current temperature and workload (see the sketch below). This also means Lunar Lake may be more inconsistent among reviewers.
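A toy contrast of the two approaches being described; all the numbers, the bins, and the tiny linear "model" are invented for illustration and are certainly not Intel's actual algorithm:

```python
FREQ_TABLE_MHZ = {          # (utilization bin, thermal bin) -> clock
    ("low", "cool"): 1200,
    ("low", "hot"): 1200,
    ("high", "cool"): 4800,
    ("high", "hot"): 3600,
}

def table_governor(util: float, temp_c: float) -> int:
    # Classic approach: coarse bins index into a fixed table.
    u = "high" if util > 0.6 else "low"
    t = "hot" if temp_c > 85.0 else "cool"
    return FREQ_TABLE_MHZ[(u, t)]

def model_governor(util: float, temp_c: float) -> int:
    # Stand-in for a trained model: a continuous prediction instead of
    # bins, so the clock tracks workload and thermals smoothly. Two review
    # units with slightly different cooling can land on different clocks,
    # hence the "inconsistent amongst reviewers" worry.
    mhz = 800.0 + 4200.0 * util - 60.0 * max(0.0, temp_c - 70.0)
    return int(min(max(mhz, 800.0), 5000.0))

print(table_governor(0.9, 90.0), model_governor(0.9, 90.0))
```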
So Lunar Lake won't be used as the CPU in high-end laptops in the future????
This niche is for Arrow Lake mobile chips
AMD lost it.
With 8c/8t, it's very unlikely that Lunar Lake will be remotely competitive with AMD in multicore performance.
@@albert-ly1tp Strix Halo and Lunar Lake will be neck-and-neck performance beasts for laptops and NPUs.
@@iLegionaire3755 NPU performance is irrelevant for most tasks.
Are you high?
@@essentials1016 Name one task you regularly do that uses the NPU lmao
We'll see about that when independent reviews arrive. The best reaction I can give right now is "sounds promising".
I enjoyed your detailed summary. I like how Intel is taking on ARM with their new and improved version of x64. Battery longevity is a must. However, manufacturers tend to reduce, not increase, battery capacity when these efficiencies arrive, which is ridiculous. I wouldn't mind if AMD & Intel dabbled in creating their own ARM SoCs. Definitely great times for Apple, Qualcomm, Intel, AMD & the consumer.
Not sure ARM is the direction they'd go. If they want to diversify away from x86, RISC V might be a better choice, since it's already replacing ARM in some markets and wouldn't require them to pay licensing fees.
@@IcyTorment You might eventually be right.
It's a shame that Intel is giving Lunar Lake the Xe2 GPU while Arrow Lake only gets a tweaked version of first-gen Xe.
It makes a lot of sense. Why waste transistors on something that the machines those chips are targeted at mostly won't use? Arrow Lake will be in desktop PCs and gaming laptops, where discrete GPUs will continue to dominate.
@@IcyTorment Arrow Lake will also come to laptops as the next upgrade from Meteor Lake; Lunar Lake is for a separate set of ultra-low-power laptops.
Only real side-by-side application testing can be trusted. Last year Intel already told everyone how good Core Ultra was on paper and how its benchmarks kicked AMD's ass, but we all know the outcome: Core Ultra can't even beat AMD's year-older product. Especially the iGPU: Core Ultra's 3DMark is 15-20% faster than the 780M, yet it loses almost all real gaming tests. Intel puts too much spin in their marketing, so I don't trust any information they announce.
I actually have pretty high hopes for this. I think in terms of real/scientific workloads these new CPUs are going to absolutely destroy the competition.
Will Arrow Lake or Lunar Lake support AVX10.2?
Congrats on 100k subs!
Yo this is such a high quality, informative video, thanks.
I really don't know if I should upgrade my PC
This is my rig:
MSI Creator 32 PS321URV
Intel Core i9-13900K
ADATA XPG Levante 360 AIO
Gigabyte Z790 AERO G
Vengeance RGB 64GB DDR5 5600MHz
Samsung 990 Pro 2TB
Samsung 980 Pro 2TB
Samsung SSD 840 EVO 1TB
Creative AE-5 Plus
Nvidia 3080Ti
PSU ROG THOR 850W
Everyone said the new CPUs would be slower than 13th/14th gen and that I shouldn't upgrade.
Do you have recommendations?
Thanks!
You should upgrade your GPU.
@gamingwithtihamshorts2628
I thought about a 4080 Super.
Do you think I shouldn't upgrade the CPU? It's a huge economic adventure...
Thanks!
Finally Intel chips are interesting again! They appear efficient, but that doesn't mean they won't still suck down tons of power under high-performance loads.
It would be nice if I could stop ripping on Intel for putting out enough heat to cook steaks with.
And I for one am glad hyper-threading died; it had a purpose before, but not anymore.
It appears the maximum wattage is much lower, so they won't be able to suck down tons of power.
Dense but great material. Subscribed right now. But I have a question: will Intel completely abandon the past architecture? Aren't they complementary to each other?
Will it burn out the motherboard like the 14th-gen 14900K?
It was AMD chips that burned motherboards, specifically the 7800X3D. There are no reports of any 14900K chips burning, and even the overall failure rate (as in chips stopping working completely) seems to be within the expected industry norm. At least I never heard that that's part of the reported issue; it always seems to be instability, and often the chips can be stabilized by turning down the power target and/or the maximum single-core boost.
No, they are fixing the efficiency problems. It will use way less power than 14th gen but be more powerful.
What I'm curious about: will the next DDR6 modules come in CAMM form?
I hope Intel or someone else stateside can catch up with TSMC. It's not good to have all of that capability centralized on one island.
TSMC is heavily investing in the US currently
The real question on everyone's mind is about the current issues with 13th and 14th gen: will these chips be subject to the same issues, and what is Intel going to do about all the people affected?
What I think people are missing is that Intel, along with Nvidia, is continuously moving toward a focus on the data center.
It's an all-new architecture at an all-new foundry using all-new VLSI design and verification tools (Intel engineers had 0 hours of experience with the TSMC tools)! What could possibly go wrong???
If I can't upgrade/choose RAM, I'm not gonna want it. Non-upgradable RAM is a NOPE from me. I might have to swap over to AMD to take advantage of the new LPCAMM2 memory. I'm much more comfortable choosing my RAM manufacturer, latency, and sizes, just like we do with SSDs (manufacturer, speed, and size) and GPUs.
It still puzzles me how higher-end laptops aren't as upgradable as desktops.
Arrow Lake plus Battlemage should be a pretty great combination.
Fun Fact:
No one knows anything about any -Lake...
We didn't know back then with the 13th and 14th gen...
How are you going to know now what they're going to build in?
The fact that Intel outsourced $10B worth of orders to TSMC...
@@n.d.n.e1805 Still, it doesn't change the fact that we are once again buying ourselves a very expensive surprise egg, probably with a remote kill switch... like with current Tesla vehicles and multi-brand solar systems...
@@instantdislikechannel5699 That's what happens when you buy a new architecture. It's a surprise egg: you don't know if it will have problems. When people bought the AMD 7000 series, it was a surprise egg; they didn't know whether it would have problems or not.
They recently announced layoffs, so they seem desperate for cash. And if employee satisfaction is an indicator of future success: per Intel's latest employee survey, 48% of their design workforce is thinking of finding a new job elsewhere.
Wonder if they'll fix the latency/stuttering? I get it on my 13th-gen Windows laptop. I thought it was BS until I experienced it myself.
I agree. Intel stepping away from Hyper-Threading is a millennial change in their technology, one they introduced to the entire world. I hope they know what they're doing. And hold on for a second, if I get this idea right... integrating the RAM modules into the CPU package? That reads to me like they found latency could break their chip's performance. Nothing happens without a reason. I really wish they'd show us engineering samples, one with the RAM modules outside the chip and one with them integrated.
Apple has RAM on package. Lunar Lake, if you look at the die shots, looks very similar to the base M-series design. It began development when the M1 was released.
I think most of us in the know are curious to see if they work at all.
I think Intel would destroy the competition if they allowed RAM upgradeability alongside the on-package RAM. The performance would be crazy for AI and gaming workloads.
Can't wait to put them into an Asus motherboard
Amazing info, subbed for life, ty for the video.
Great summary
It's so over for Qualcomm
Nah Qualcomm is just getting started
I don't care about power consumption, I care about performance!
Meteor Lake desktop was delayed so much that there was no point in delivering it anymore. That's why they're jumping to Arrow Lake instead. ❤😂
We all know what "up to..." means ^^ We'll see in practice...
AMD [No Mercy!] Responds With: Ryzen™ AI 9 HX 370
Would it be good for hacking purposes?
Lunar Lake looks nice, but is limited to only 32 GB RAM :(
It's aimed at low-to-mid-end laptops; I think it's sensible.
It's for thin, low-power laptops. They said higher-end chips from the next generation will be able to use upgradeable memory if the manufacturer decides to offer it. Frankly, at least for the end user, nothing is changing from what we currently have.
Considering that it's for low-to-mid-end light laptops that most people are going to use for everyday software and AI, with a tinge of gaming, 32 GB is honestly pretty good. Most gamers don't use over 32 GB of RAM on their desktops unless it's a high-end build like an i7/R7 + 4070S/7800XT.
The idea is that by the time you really need a memory upgrade, you probably need a CPU upgrade as well. I never thought soldered-in memory was a bad idea, especially when it's done right: you get higher performance. But yes, 8GB soldered in is stupid. Apple...
@@auritro3903 I currently have 32GB on my ThinkPad P16s with a 12th-gen Intel i7-1270P and would like my next ThinkPad to have a processor like Lunar Lake with 64GB of RAM. If Lunar Lake is limited to 32GB now, then perhaps next year there will be a next generation of such processors that supports up to 64GB.
It's true that 32GB is enough for almost everything, but for running AIs locally or WSL with Docker containers it's better to have more.
Bro, listen. Tell me everything about the Intel Ultra 3 105UL.
This CPU is like @_@ so good in my opinion.
It supports LGA 1851.
🥲 Do you have any info about it?
Anybody who owned an Intel 2019 MacBook Pro, like me, rightly understands that INTEL IS NOW IN 4TH PLACE. It will be a cold day in hell before I buy another Intel power-pig thigh-burner laptop! I bought an AMD 7940HS laptop 3 days ago!
You're saying the latest developments in the tech world make these laptops obsolete and useless real quick?
@@olukoyaopeyemioluwaseun8559 Intel has been failing in efficiency (MIPS per watt) ever since they failed to progress past 14nm. They offered 14nm+, 14nm++, 14nm+++, 14nm++++. Meanwhile the whole world passed them by! Check any review that measures Geekbench points per watt: Intel is DEAD LAST...
So you're saying 20% less power, the same single-core performance, and maybe a little more multicore performance :-p I hope Intel makes Arc perform better when an Arc GPU is paired with an Intel CPU.
Can't trust Intel now.
Why? The problem is in the architecture, not the code. 13th/14th gen would draw way too much power, killing the CPU, but 15th gen won't have that problem: it's way more efficient, so it won't use that much power.
Ask any of the 15,000 employees being laid off if they're going to buy Intel chips 😂. Investors lost 25% in one day. At this point it's doomed and no one gives a crap about them.
They need to get rid of more people, considering the complete change in their business model.
Hope it goes even lower, easiest long in history.
I really hope they don't fall apart like 14th gen; if that's the case, I'll get some new Lenovo gear :) Well done Intel... bad luck Dell, your CAMM2 memory modules are now scrap!
Tf is that branding? Are they trying to trick people out of buying Ryzen?
I am not so sure about Intel anymore. Unless they get their thermals in check and work with laptop OEMs to deliver decent battery life, they are not going to work in the ultraportable line of laptops.
My Lenovo Yoga 7X with Snapdragon X Elite has been amazing as a portable laptop for coding, productivity, and media consumption. I am getting 12-16 hours of battery life on average over the past 2 days of me using it.
Another person who drank the Windows-on-ARM Kool-Aid.
They are getting their thermals in check; that's what efficiency means. It's going to use less power, so it will create less heat.
I am so glad hyper-threading is going to be gone.
AMD vs Intel. Intel broke on the way to the fight
He's talking about Arrow Lake, so the joke doesn't work here.
@@n.d.n.e1805 That's not to say it won't have the same flaws as 13th and 14th gen, since Arrow Lake was technically designed and made before the 13th and 14th gen issues appeared.
@@stephendippenaar9986 13th and 14th generation CPUs had issues because they were essentially upgraded and overclocked 12th generation CPUs. The 15th generation is a completely new CPU with a new node and design, and it only draws a maximum of 200 watts, not 400 like the 13th and 14th gen.
@@n.d.n.e1805 Gonna be a bit hard to do with 15,000 fewer staff, over 10 billion in losses, and shares worth almost nothing. Intel has to regain their credibility after the fuck-up they just had, with people losing thousands building a big paperweight ornament. That's the reality of the situation.
@@stephendippenaar9986 That is why Intel is working with TSMC after the 10nm disaster, which led them to lay off 5,000 employees. They began working with TSMC on a 2nm process and have already invested over $20 billion in the 20A node, the same node that the 15th generation will use. The leaked finalized specifications for Arrow Lake indicate that the maximum TDP for the Core Ultra 9 285K is 250 watts.
It is interesting that the chip was designed in Israel.
Dothan, the precursor to the Core architecture, was also designed in Israel. I ran a 1.6GHz Dothan CPU on an Asus motherboard with a socket adapter, and it would run at 2.4GHz all day long. It was the fastest gaming CPU at the time. Let's hope they deliver again.
It reminds me of Nokia's N9. Nokia in the late 2000s in the smartphone industry was like Intel now: they had been the king, but Apple and Google came. Nokia tried not to lose to the new smartphone trend with their next-gen N9, but it failed miserably. I think the tide has turned to ARM. No matter what Intel does, I don't think they can catch up; Apple and Qualcomm won't stand still while Intel catches up.
Then I think you don't really understand how CPUs work and how they are designed. The ISA doesn't determine performance or efficiency; microarchitectural changes and design focus are what make the difference.
@@KP21530 That's the same thing Jim Keller himself has said: ARM will not magically give you efficiency. Everyone is sleeping on Lunar Lake when it looks like it will be the best laptop chip on the Windows side.
There's a reason big desktop games can't run on ARM laptops: ARM would end up drawing the same or more power than x86 to do it.
@@MrDevilex94 Yeah, but most people and YouTubers think that if it's ARM it's going to be more efficient, which is true only in some cases, and with node shrinks the difference becomes negligible.
Basically, the 7800X3D is still the way to go. Thanks Intel. 15th gen loses again, and with AMD's new 9800X3D coming soon, it's game over for Intel.
How can you trust a company introducing new models if they cannot even resolve the 13th and 14th gen chip failures??
The problem is in the architecture, not the code.
really?
AMD'S 9800X3D IS COMING SOON, BYE BYE 15TH GEN
More like Intel Fire Lake 😂😂
They should call their new uarch "Crash Lake"
No, they are fixing the efficiency problems. It will use way less power than 14th gen but be more powerful. The CPU is going to run as cool as AMD's CPUs.
Intel having the entire chip made at TSMC says something about where they were with 'Intel 3' a couple of years ago, not how the node is performing now.
If Arrow Lake gets pushed back, that would say something about Intel 3 today.
I hope it's a good chip that behaves like what Apple is offering.
Intel 3 literally just launched this month and was planned for the Xeon line. Arrow Lake and Lunar Lake were never Intel 3 parts.
Phoronix has (Intel 3 based) Sierra Forest reviews already.
TOPS / CPU Hz = ___________________?
Desperate times require desperate measures... Good luck Intel, you had your time and you wasted it... ARM!
All this talk is USELESS unless they ship it and it PERFORMS! If Ryzen keeps ripping, there is no chance!
You know that TSMC manufactures AMD CPUs, right? Also, TSMC is producing tiles for Arrow Lake on its nodes.
The more I learn about Arrow Lake, the less excited I am for it. Lion Cove has only a 14% IPC improvement over the worthless, regressive Meteor Lake despite a 1.5-node jump. There's no way Arrow Lake will be competitive with Zen 5 X3D while costing significantly more to produce. Dead on arrival.
"Significnatly more to produce "
What are the costs of each, specifically?
The Redwood Cove cores have higher IPC.
The actual regressions come from memory access, where Arrow Lake and Lunar Lake have lower latencies.
Intel pumps out a lot of tech specs, but what matters is real-world performance relative to competitors. Nothing suggests their next-gen chips will beat the new ARM chips in power efficiency, or AMD's iGPUs or EPYC. In other words, it seems like they will lose even more ground. It's in the fab space that Intel lost their edge. IMO, whether Intel can be the clear performance king again probably depends on whether their upcoming fabs, using ASML's Twinscan lithography machines, can finally beat the best TSMC can offer.
TSMC has a new 'fab' in the USA; that's why Intel is outsourcing. It makes financial sense.
TSMC’s US Arizona fab isn’t active yet, so this is incorrect. They’ve already said they’re using TSMC because TSMC has a better node at the moment. In the future Intel will switch to using their own nodes.
@@killercage2 I think they also said Arrow Lake will be using the new Intel node as well.
@@thetheoryguy5544 It hasn't been fully elaborated on, but the prevailing rumor is they're using Intel 20A for the CPU tile on certain midrange Arrow Lake chips. They'll still use TSMC for most of the other tiles, though.
@@thetheoryguy5544 20Ångström
Lunar Lake is the only CPU lineup by Intel using a TSMC node. While Lunar Lake was being designed, Intel's foundry was going through a major restructuring, where they were rapidly improving from 10nm Intel 7 to 1.8nm Intel 18A. Now that Intel 3, Intel 20A, and Intel 18A are up and running, Intel once again seems to have node superiority and will switch to using their own fabs again. Not to mention, Xeon 6 is being manufactured on Intel 3, while Arrow Lake is being manufactured on Intel 20A. The upcoming Panther Lake is also going to be manufactured on 18A, potentially giving node superiority back to Intel.
Game over for x86. Snapdragon is ripping through benchmarks across the board.
Don't make me laugh. Everyone is mocking it for how mediocre it is, delusional ARM fan.
And by ripping, you mean it's struggling against Meteor Lake and Hawk Point? It's barely even an Intel/AMD competitor, let alone a fucking M3 destroyer.
In benchmarks, an X Elite chip in the Vivobook has barely higher battery life than a 155H, and much lower than a 7840U (and from what I've heard, possibly even a 185H). The multicore performance is decent, but it gets mauled in gaming by Intel and AMD (and even Apple sometimes). Even though it has a strong point in AI, Lunar Lake and Strix Point are coming out soon anyway, both of which have better NPUs and much higher platform TOPS than the X Elite. And while there are better models of the X Elite, they aren't that significant an upgrade over the base X Elite and will likely be crushed by Strix Point and Lunar Lake.
Do people even know why x86 is still here? Because it has instructions that ARM, RISC-V, and other "reduced instruction set" CPUs are lacking. So how come ARM is so "popular"? The answer is simple: the devices that run ARM aren't designed for running applications that need those "other" instructions. And the truth is, if they do need those additional instructions, they either have to figure out some way to convert and take the performance hit, OR they have to add an "extension pack" to the CPU. Gaming, video editing, and other media creation or consumption aren't all that people use computers for.
@@iokwong1871 There's no real advantage to either RISC or CISC. While RISC needs fewer cycles per instruction, it needs more instructions to do the same thing, because some are missing (rough sketch after this comment). ARM-based SoCs are likely only ahead because the priority has always been low power consumption; I think all the other advancements in the microarchitecture of ARM SoCs are what make them more efficient, not the instruction set itself. Also, most apps and especially games are compiled for x86, so you'd need to run them under emulation, which requires more power. It's safe to say x86 isn't going anywhere for the next 10 years.
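A rough sketch of that instruction-count point, with invented cycle counts (real pipelines differ enormously; this only shows the shape of the argument):

```python
# Operation: memory_value += register
#   CISC-ish (x86): one read-modify-write add instruction
#   RISC-ish (ARM): separate load, add, and store instructions
cisc = {"instructions": 1, "cycles_each": 3}   # hypothetical numbers
risc = {"instructions": 3, "cycles_each": 1}   # hypothetical numbers

for name, isa in (("CISC", cisc), ("RISC", risc)):
    print(name, "total cycles:", isa["instructions"] * isa["cycles_each"])

# Both land in the same ballpark, which is the commenter's point: the
# efficiency wins come from the microarchitecture and the process node,
# not from the ISA label itself.
```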
@@niveZz- I would argue that most games probably won't need x86, though I don't know all the major game engines well enough to make that claim for sure. Look at how nearly all, if not all, open-source games can compile for ARM or RISC-V with no problems, except for those that use x86-specific assembly code. But yeah, 9/10 of the people who say x86 is dead don't really know what they're talking about.
Intel lost it.
They did, but under Pat Gelsinger they will come back, whether via their processors or via their fab business.
And now they're on the upswing back, I believe.
@@Andrew_Thriftimo This release would be their own version of the "Ryzen 3000 release".
Intel has not lost it