Thanks!
I do like lots of fast cores. And I'm excited to see AMD, Intel, and Ampere all getting in on this game. Just wish these Ampere chips were made available for testing...
One regret I have with their offering is that there's no high-frequency CPU at the low core counts. The 32-core was only 1.7 GHz, the 80-core 2.6, and the 128-core 3.3. That's the opposite of what's normally done. I'd have loved a 3.5 GHz 32-core for some use cases.
Strong ARM 💪
We need 256-core X Elite server CPU....
I like high memory bandwidth. Unless your application is floating-point bound, or otherwise CPU-bound, there are quickly diminishing returns to adding cores.
@@philmarsh7723 clearly. RPis are a good example of how extra cores provide almost nothing when memory bandwidth is too low.
@@philmarsh7723 For server CPU more cores = more customers in VM = more profit.
Seems too little too late. Their own slides barely show them beating Bergamo, which isn't new at this point. You now also have Intel back to firing on all cylinders, trading blows with AMD in DC, and adding accelerators to Xeons to target customers who need more than general compute. Ampere is competitive, but not competitive enough to sway a meaningful number of customers away from Intel and AMD, and those two are more competitive with each other now than they were in the last five years.
What's connectivity, platform cost, availability, accelerators, lifetime, efficiency like?
Isn't performance per watt what really matters? More performance per watt means more compute per rack, less cooling, etc. @MiesvanderLippe
@@MiesvanderLippe platform cost also includes the watts used
Their biggest advantage is you can just buy it without the contract and support bs that is very typical in this kind of industry.
@@clou09 particularly if you are Google or Amazon, who build in big enough numbers to do your own design and pay TSMC/Samsung to make it.
Then the Arm royalty fees will be what, 15-20 cents per core?
"you can buy this stuff" this is the future of commerce
7:38
Chuck Moore, inventor of FORTH, did a 144-core CPU back in 2009, still sold by GreenArrays Inc
Is Qualcomm set on keeping Nuvia founders dream of seeing Oryon cores in the enterprise dead?
Or has it shared some plans on entering the server market?
I think Qualcomm has a business model closer to Nvidia Grace than to Ampere as described in this video: they sell a hardware platform for partners to implement. For that to happen you need a chip, but also time to build relationships with partners. It won't be what most of us want to see, however.
@@hugevibez Qualcomm would need a decent partner to enter the server CPU market. Qualcomm partnering with IBM is a recipe for failure!!!
Notebook first, then smartphone. Other markets tbd, not taking anything off the table
With that many Cores, you really need to look at the Memory and Cache Coherency Architecture.
How I wish that I could get my hands on these systems! Looking forward to these Ampere chips eventually hitting the second hand market...
Or you rent one in the cloud.
Will it run Crysis or Minesweeper? I could bring that to its knees rendering. Love the cores.
One really great use case for these affordable chips is development. We have an 80-core at work and I use it every single day: for testing code for scalability, as a build machine when I need to bisect regressions since it builds super fast, to validate various assumptions, etc. The machine is small, super fast, not very expensive, and gives me way more service than the 24-core AMD EPYC next to it that I start once every 4 months.
Answer me this? Can it run... a tenstorrent wormhole card?
I like Altra. I also like that they don't do BS marketing or AI hype. They work, then deliver. Not super frequently, but very solid and industry-leading. (The "predictable performance" marketing, and the comparisons to x86, are a bit misleading, because it highly depends on workload.) I also love that you can buy motherboards in various form factors, and systems from various vendors (ASRock, Supermicro, Tyan, Asus, Gigabyte mostly).
I think most of the secret sauce is in SoC mesh and coherency fabric scaling, plus L3 partitioning and memory bandwidth partitioning, similar to Intel RDT. It is hard to believe their new CPU is based on a custom core IP.
Maybe the custom core IP is also a hedge, so they can pivot to RISC-V by changing the frontend and making some adjustments, if needed. (Sure, it would require a lot of software rework, but who knows.)
Extrapolating, the prospect of ~500 cores from Altra in 3 years is bonkers.
I've had my Arm-based computer for two years and it's life-changing
where would one go to buy one of these? asking for a friend
Probably have to be a business to contact them. With companies like these you have to send them an email for a quote.
If a customer is buying use of blocks of cores, having dynamic frequency but with a constant power limit available to each block seems fairer. Then serial code can run faster when block threads are idle, which Amdahl's law shows can make a large difference, while highly parallel sections can benefit from wider power efficient processing.
Determinism has limits because the data is generally changing; if not, the computation is redundant.
Fixed-frequency schemes are simpler for the CPU designer but pass the burden of effective job mixes onto programmers and sysadmins.
Yes, but there are significant limits to the performance scaling from frequency boosts.
@@geoffstrickler absolutely, otherwise we'd be blissfully enjoying the simplicity of uni-processors.
Certain types love "fixed", but in reality it leaves potential on the table, and you cannot control everything. A larger pool of shared resources should be a benefit.
Less dense logic and faster cells are required to hit high frequencies.
Fixed frequency makes for longer tails on jobs, so you'd need to schedule more different tasks simultaneously which often reduces cache effectiveness.
With a large pool of cores, you can run at higher utilisation while meeting service guarantees; it's only the idea of assigning customers specific low-core-count blocks that burdens them with a utilisation issue.
In the mainframe days, it was usual to bill by the CPU second on time sharing systems.
@@RobBCactive Would you say that once we reach the limits of Dennard scaling, multi-frequency designs will become more common or less common?
On the one hand, the smaller we go the higher the noise, so we've made stopgaps like backside power delivery and materials adjustments to reduce noise while reaching size targets. On the other hand, the noise may become so high at the higher frequencies and smaller sizes we expect processors to shift to that frequency scaling becomes much more difficult. Maybe static frequency is the future?
@@skunkwerx9674 Dennard scaling ended long ago, it used to be that a shrink was faster, cheaper and more efficient.
@@RobBCactive I think the next big steps will come in the forms of (roughly in this order):
1. Backside power delivery. Already in the works.
2. Asynchronous logic. Probably not a fully asynchronous (clockless) design, but asynchronous subunits with limited clocked or triggered gating. Already demonstrated in small circuits, but not used much or at all in existing LSI designs.
3. More/better parallel algorithms.
Is it correct that Nuvia was building ARM chips for the data center before Qualcomm bought them?
yup
Oracle did some DB acceleration in SPARC CPUs with DAX, so maybe in Ampere's case it's something similar
Subtle note... DB2 is IBM - not 'Orrible.
yup yup :)
Exactly my thoughts...
How much faster would DRC or LVS be?
Word on the street is Amazon might be stopping Graviton 4 development due to the competition heating up between the x86 vendors. With both of them yielding cloudy x86 server CPUs that are very Arm-like in performance and power characteristics, it doesn't seem like the R&D cost of going custom is as attractive as it was five years ago.
Graviton 4 has been in preview since 2023 and available to some customers, so why would they do that and waste all the investment made?
Maybe true Graviton 5?
How much does it cost to manufacture a Snapdragon X Elite chip at TSMC? And how much does Qualcomm charge for each chip?
TTP in soft focus? Is this a new thing?
You could go the whole hog and do sepia with a '40s film noir eye light slash - Morticia style ;-)
Or even better lots of fog and peasants eating mud like Monty Python and the Holy Grail
Either this is very cheap or it will be underwhelming. Barely faster than Bergamo and still not on the market. Did I get that correctly?
Those were some of the 8ch 192 core on 5nm numbers. We're expecting 12ch 256 core on 3nm to be better
@@TechTechPotato Well, AMD will shoot back too, so I guess they'd need to leapfrog them with a 512-core or something
Oh, then the 192 core looks quite nice! It has been hard to track benchmarks for those.
@@TechTechPotato We should expect Zen 5c on 3nm to be tasty too, up to 192c/384t (12x16c CCD) with AMD SMT and full-fat AVX-512/VNNI.
BTW I hope you can review the Zen5 on launch again, your Zen4 live presentation had details that others failed to reach 😉😉
Yes, but what's Bergamo power draw for roughly the same performance? Barely any DC owner has a per rack power budget for ~36u of Bergamo. Per rack or aisle performance is also a calculation you have to make as a cloud provider.
I wonder how they got away with registering the "Altra" brand, considering that Intel bought "Altera" years ago.
These days, compute efficiency while maintaining performance is important.
I can't wait to see an AmpereOne running 12-channel DDR5!
these days?
that has always been important lmao
@@clearlisted it's called market differentiation away from the old CPU bloat.
How much is it?
Oracle and DB2 are still most optimized on the s390x architecture: you want fewer, denser cores with high cache, high throughput, and I/O offloading
Will you test the Snapdragon X Elite?
If I get one. Haven't been sampled
I like memory bandwidth
Is this going to be in the galaxy s25?
While watching this, as you spoke about Oracle I got this notification: "AMD and Oracle The Art of Unrivaled Database Performance" 😅 Genoa-X, Siena and now Turin and Turin-Dense... at least everyone is hard at work.
Burning out your SSDs looks like a real problem... Communication is also going to be really expensive. I'd be matching these with DDR caching for the output, but that has problems with data retention laws requiring dumping onto massive platters.
I would love for Ampere to sell not only chips, but also their naming scheme to AMD and Intel 😇
Also, is there any chance that we get some insights about Atomic Semi anytime soon?
Will you be reviewing WoA for testing emulation performance and other things?
Is there ampere DTK for end user testing?
I wish to make my own CPU also.
Could we see AMD integrating Arm cores as a counter to the ecores on intel? We haven't had a small (cut function) AMD architecture since Jaguar IIRC. An AMD-Arm-Samsung partnership has been rumoured for a decade.
You might have missed AMD Bergamo using Zen 4c cores. Half the size, half the cache, and optimized for cloud, but otherwise functionally identical to Zen4. They beat Intel to the punch for enterprise E cores.
@@TechTechPotato Oh I'm aware I was just daydreaming.
(Personally not a fan of Ecores and wouldn't call ZenC an Ecore but that's my bias)
Intel's E-cores have half the IPC of P-cores, no SMT, reduced instruction support, and a shared L2 cache.
...IF AMD did the same thing could they make a core 1/2 the size of Zen4c/Zen5c?
How big would 4x 4core clusters of Cortex X4 be? (or Jaguar just for fun)
-CPU caches grew to 3 tiers... L1 to L2 to L3 and briefly L4
-Smartphones have 3 tiers... of X2 plus A710 plus A510
-Arrow Lake might be doing the same with the SOC ecores separate from the compute tile.
AMD is doing good with just Pcore and "not so big" Pcore, just some fun brainstorming what options they have and what it might look like.
Wendell (reading Phoronix) recently covered Xeon E; he mentioned that Bergamo's weakness was lots of tiny environments in containers. Hyperscalers might prefer more small cores (Arm) to more powerful ones with SMT, plus the added benefit of more power granularity.
Sorry for the essay, I don't have a doctorate but I'll still make a thesis while trying to understand.
Thank you for your time :D
Sorry if this is a repost, youtube is being a div.
AMD is also working on ARM; I think it's called Soundwave, but it seemed mostly mobile, IIRC.
If you're thinking of skybridge, that died several years ago. I asked the architect.
MLiD has reported, from AMD sources, future plans for more variants that could include cutting AVX-512 as well as cache sizes. Though designers might be pushing back against the increased workload.
But the semi-custom arm (pun intended) did cut down the fp units for the Zen2 variant in the PS5.
There are disadvantages to fragmenting the ISA; having a lot of chips out with the features encourages software support.
I can see Intel were forced into E-cores, but now they're dropping HT, while AMD SMT still offers a context switch free second thread that can aid core utilisation, so are they under pressure to reduce core area? At N2P leaks suggest 32c/64t CCDs so core count per socket is planned to grow.
I’ll wait for the Ampere pro max ultra😂
Isn't that just a GPU with more general-compute capacity?
Let me see.
the AMD Radeon 520 has 320 shader processors... so it is close.
Can't wait to solder that thing into my phone as an aftermarket mod! ;D
sir, please tuck your illuminated microphone and manly chest hair toupee into your shirt.
All for good cores
I just want one ginormous core at 10ghz. No hyperthreading.
So...Who's gonna play Google Play games on that thing?
Unfortunately for Ampere, Intel just launched Sierra Forest at Computex. 144 cores on the mainstream platform, but up to 288 cores on the advanced platform. They are E-cores, but those Crestmont cores are quite performant, pretty handily beating ARM designs. And combining E-cores with the Intel 3 process, the power efficiency is class-leading, beating all of Intel's existing Xeons *and AMD's* Epycs.
Bergamo has a higher thread count; the independent review I saw suggested SF only excels at very high utilisation, while Bergamo is better at burstier loads.
Turin dense on TSMC N3P is expected Q1/25 with a maximum of 12x16c, so Ampere has that competition too.
@@RobBCactive Bergamo has 128c/256t. SRF-AP has 288c/288t.
@@noname-gp6hk they have delivered the 144c to independent reviewers, but the larger one is "next year", and Intel changes/cancels many plans.
They're still weaker single thread cores.
I wouldn't be totally opposed to getting an AmpereOne based system, but I really need a plain-English guide on how to use ARM intrinsics / AVX-equivalent instructions, and a realistic performance comparison to the competition. There was a lot of marketing with Ampere Altra, and a lot of reference to "Ampere optimized PyTorch" and so on. But when I can pick up a 5th-gen Xeon Scalable or an Epyc 9004 series, just grab any software off the shelf and know it works, with a huge wealth of legacy resources online to learn how to use them effectively, it's just a bit too scary to take the plunge.
On Intel I could find no fewer than five guides on running or optimizing various AI workloads for the chip, and on AMD you can basically just use Intel's software (lol) but faster, or use AMD's specific implementations (ZenDNN, AOCL, etc.)
Yeah, too little too late. They need better performance or low enough price to make sense.
As a developer, I really just want cheap systems on which I can benchmark and test stuff. But these don't look anywhere close to affordable, let alone cheap.
Not that I'm going to defend an Arm chip lol, but just wanted to say: that it doesn't make sense for your use case and budget doesn't mean much. I mean, obviously when they designed this CPU they weren't thinking "Hmm, how do we cater to the single-guy devs who want to test random stuff on random CPUs?"
It depends on context. I could see this thing being perfect for our fintech CI/CD platform where we have thousands of automated tests to run ahead of deployment.
ARM + RISC-V is the future.
Please add Linux support and it will be fine.
FYI, all of AMD Epyc server CPUs run Linux.
It's like x86 + ARM vs RISC-V... and the future is adaptable
hair looks metallic
ur hair is gettin crazy,.lol, nice video
He is going full super saiyan
You are still going to care about noisy neighbours due to cache and io contention.
Arm and gpt AI , the most overhyped bubble buzzwords of our decade 😂
Yet, 90% of software engineers are using ChatGPT right now. And ARM is still the CPU of choice for smartphones, tablets, and ultra-thin laptops.
Yep, nothing called A.I. is actually A.I., and the sheep will not speak up. A.I. means the artificial equivalent of a human brain, and a human brain is sentient. ChatGPT and my 4090 are NOT sentient, sheep.
@@tringuyen7519 yeah. AI is interesting and useful when used appropriately. As for Arm... well, there's billions of mobile devices sporting Arm cores for a reason
@@tringuyen7519 I doubt this is actually true, but even if it was, most of the "software engineers" are useless copycats anyway, creating all that Play Store garbage or boilerplate websites 🤣 It's almost useless, and by design it will not be more useful, just better at the only useless thing it can do: parrot repetitive monkey tasks in a borderline acceptable way.
@@talkingonthespectrum AI in general, yeah, but this GPT/Copilot fad/bubble is nothing more than an overglorified mechanical parrot.
99% of users do not need 256 CPU cores to do anything at all. 8 to 12 cores can easily render 12K video.
The top 8 mega datacenters now account for more than 50% of global server shipments. Those 8 customers buy density and power efficiency. So yeah, only a few server customers need them but they represent the majority share of all servers shipped.
forth!
second!
hi frist
hi first
Ew
Intel will lose a big chunk of their datacenter market. Sell their shares
Still waiting for the Lunar Lake vs AMD AI 9 HX370 vs Snapdragon X comparison testing to see how poorly Lunar Lake performs!
@@tringuyen7519 AMD might have this generation. ARM still seems a bit lackluster (from Qualcomm's side) and I really don't expect Intel to come out swinging; going to be fun to see reviews.
I've got long investment positions in Intel. Same with AMD and Qualcomm, tbh. All the chip vendors have strong long-term outlooks. I don't know about Ampere, though. Too much strong competition from mainstream architectures in the space.
Lol it fails to even beat Bergamo