We don’t want Intel to fail. We want them to do better.
Well said.
We want more competition.
Nah, plenty of people wanted Intel to fail because they love drama and chaos.
Competition is the best. Intel must catch up.
Don't want Intel to fail, but I do want them to fail enough to make room for competition. Intel can fail for a few more years to balance out the market; that would be good for all of us.
In 10 years I will buy one of these on Aliexpress!
If they don't all die in 5.
I consider that a reasonable durability verification.
For under $100
If you're still alive
💀
we need some metrics, cat pictures generated per second or something
“Sequential access” generation of cats, and “random access” of all kinds of pictures.
And verified by GN CEO Snowflake.
cp/s would be an interesting metric. I second your notion!
Focusing on inferencing is a pretty good choice because it's a lot lower risk for customers. With an Intel Arc A770, I haven't run into any serious issues with inferencing; however, I have sometimes hit issues where training fails after a while and I've had to change a flag in Intel's PyTorch extension to get it working. Tiny numerical errors are probably going to be irrelevant for inferencing, but they'll accumulate over time in training until it breaks completely. Inference for hundreds of millions of users is also going to be more expensive than training, so there's a huge incentive to get another vendor that doesn't have absurd margins like Nvidia. I really hope AMD, Intel, and others can start to get some significant market share here.
What do you run on your A770? I had to sell mine because no models supported it at the time.
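(For anyone wondering what A770 inference looks like in practice: here's a minimal sketch using Intel's PyTorch extension and the "xpu" device. The toy model is purely illustrative and assumes a working XPU build of intel_extension_for_pytorch; it's not the commenter's actual setup.)

```python
import torch
import intel_extension_for_pytorch as ipex  # Intel's PyTorch extension (XPU build)

# Toy model standing in for whatever you actually run; purely illustrative.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

model = model.to("xpu")        # move to the Arc GPU
model = ipex.optimize(model)   # apply IPEX inference optimizations

x = torch.randn(8, 1024, device="xpu")
with torch.no_grad():
    y = model(x)
print(y.shape)
```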
Around the time the last CEO was ousted, there was a concerted effort to get Intel to give up their foundries. Seems like those sharks are back.
The parasites smell government subsidies on top of a lucrative asset they can make a quick buck on. Spinning off GloFo didn't really help AMD; it only staved off bankruptcy a little longer, and GloFo itself was floundering against TSMC, Intel, and Samsung. Intel's foundries would be worth far more for the parasites to liquidate, especially considering they're meeting all their milestones and are on the verge of regaining industry leadership. Anyone trying to unload the foundries at this point obviously doesn't have Intel's best interests at heart and just wants to make a quick buck by chopping it up and selling the pieces.
Shitty part is, if their stock slides any more, _everyone_ will be licking their chops to profit off a hostile takeover.
The reason Intel fell behind their competition is their insistence on using their own foundries...
@@kubotite9168 Intel _NEEDS_ their foundries. The thing so many people don't seem to get is that one of the biggest reasons the OEM, laptop, and server markets haven't embraced AMD as much as you'd expect is that AMD and TSMC simply _CAN'T_ supply them with enough chips.
While TSMC technically produces more wafers a year than Intel, a full HALF of that is legacy nodes making car parts, microwaves, appliances, etc. that don't need to be anywhere near cutting edge. Meanwhile, Intel isn't exactly still churning out Pentium 4s; almost all of its output has to be on the newest nodes, and TSMC isn't going to tell Apple, Nvidia, AMD, and Qualcomm off to make millions upon millions of cheap office and laptop CPUs for Intel. It will be interesting to see if TSMC can keep supply up for Arrow Lake and Lunar Lake as it is.
Regardless, it's NOT a good thing to have literally _everyone_ buying their cutting-edge silicon from just one country, especially when China keeps saying it wants to invade.
@@kubotite9168 Nice try, bot account.
@@kubotite9168 It was their insistence on using their own foundries when they were still stuck at 10nm. They couldn't get beyond that and still maintain good enough yields to bring the cost down; they had to go shopping with ASML.
Intel "in trouble" is much much more healthy than AMD "trouble" back in the mid 2010s. Intel WILL bounce back, wallstreet bros doom and gloom story is getting annoying.
can doom and gloom it all they want.. just meant cheaper shares for me because once the click hype dies off it's only upward baby! (*not financial advice* do your own research)
@@Lustanda The entire Bulldozer/Piledriver line was severely underrated. Bulldozer cores are real cores: not traditional unified cores, and not hyperthreads, but closer to the opposite. The problem is that most devs only want traditional unified cores or hyperthreads; it wasn't until well into Zen's lifetime that Bulldozer-style cores actually showed their capability.
@@mathew2214 I don't recall them oxidizing themselves to death, either.
Let's get real, they're just running a pump and dump but in reverse.
@@mathew2214 Piledriver was pretty good; I still use one as a general desktop (though the AM3+ platform lacks some protocol standards that I need for other tasks; mainly, it is not ROCm compatible). People don't realize the strong influence that Heavy Equipment had on Zen, because AMD marketing was so focused on "new and improved". Also, HE was somewhat held back by its node not being able to accommodate as many transistors as it needed, so some bits were half-hobbled.
Anyway, Heavy Equipment gave them a ton of information on which architectural bits do and don't work well together, better cache strategies, and pipelining bottlenecks.
Nvidia selling the shovels and Intel selling the gloves is what the Wall Street bros don't understand.
In a gold rush, always bet on the shovel makers
AMD is selling the gloves right now. DGX uses Epyc.
@@Demopans5990 I'm betting on Levis.
No, AMD is selling the gloves; Intel is selling spoons.
Negative, Intel is selling the bottled ice-cold water.
It's good news. Competition is all we need.
I love that you're a tech YouTuber who hasn't shunned talking about AI tech. The underlying hardware information is important to keep the technology open and accessible to the masses.
Hollywood level CGI graphics
Oh come on!!! Hollywood isn't giving us CGI that impressive these days. 😅
So dogshit?
Lawnmower man for everyone.
I think even better?
Wow, thanks YouTube, for finally recommending this channel to me!
I remember how the mainstream was complaining about AMD almost as much as about Intel now. Meanwhile, the engineering press was coming up with "not too bad, actually" reviews of the new Zen architecture. Then it took a year or two for gamers to notice AMD's progress, and then the server people got on board. Real technology and hard problems take time to overcome. I hope Intel makes it, despite this market, among other things because they are the best with software and standards.
You have to remember a lot of professionals wouldn't trust AMD for years after being burned. There's a reason the saying "No one ever got fired for buying Intel" exists. As much as the fanboys would have you believe otherwise, even at its worst Intel was _never_ in as bad a shape as AMD in the 2010s. AMD fully _earned_ the reputation they've been trying to get away from for years.
It's almost as if they put an engineer back in charge of the company.
I think at around 2:45 you meant 2.25 TB, not GB, unless you were talking about CPU cache and I misunderstood 😅
Like "muscle memory": when people talk about memory, GB comes out of their mouths automatically. He must have meant TB but didn't want to say 2,250 GB.
@ChrisP872 2304GB = 2.25TB
There aren't many snoozetubers who cover Xeon, thanks!
128 doesn't divide by 3. One tile is max 48 cores, for a total of 144 P-cores, but it seems like not all of them are active; maybe some cores are defective and disabled.
The upcoming Bartlett Lake-S 12 P-core CPU on LGA 1700 is really interesting. L1 testing would be amazing.
Thought for some reason this was an actual cpu review.
Looking forward to actual tests from Wendell soon.
that's some good gluing
More parallel, more memory, more close, more stuff that works. Is gonna win.
With GNR, Intel has launched itself into being competitive again after years of being lackluster. Can't wait to see how AMD's Epyc Turin performs!
It's going to be really fun if these trickle down to a w790 successor
I can't be sad at E cores. Hope to homelab that one day 🤣
Hopefully in about 10 years we'll be able to run something like llama 3.1 405B (is it 405?) on consumer hardware (that isn't super expensive.)
hopefully within 5 yrs
@@randyh4154 well hopefully, but I think it'll take longer.
@@randyh4154 Regardless of when, the average human won't know how to use an LLM to save their lives! Jarvis AI is great, but how many humans are Tony Stark?
AUDIENCE ENGAGEMENT!!!
MOAR AUDIENCE ENGAGEMENT!!!
Still loved my parallel system with the 68000 chipset... Now to get the OS to live in one.
The ability to define Xeon NUMA domains sounds very interesting. I would love to learn more.
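(Not Intel-specific, but as a rough sketch of how those domains surface on Linux: each sub-NUMA cluster shows up as an ordinary NUMA node under /sys, so you can enumerate CPUs and memory per domain before pinning anything. The paths below are the standard Linux sysfs layout.)

```python
from pathlib import Path

# Enumerate the NUMA nodes the kernel exposes (sub-NUMA clustering domains
# appear here as extra nodes) and show which CPUs and how much memory each has.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    mem_kb = 0
    for line in (node / "meminfo").read_text().splitlines():
        if "MemTotal" in line:
            mem_kb = int(line.split()[-2])  # value reported in kB
    print(f"{node.name}: cpus={cpus}, mem={mem_kb / 1048576:.1f} GiB")
```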
XEON 6900P - finally a gaming cpu!
I have been waiting for this humble 128 core CPU all my life
Been following you since back in the Tek Syndicate days; glad your bright brain is still bringing great perspectives. I'm not even a server guy and you make me excited for the new tech 😂
You went to Portland and got out unharmed? Solid.
500W per socket??!!?!?!?!?! Holy COW!
that's normal for servers
Chilled water is a service you have at a data center. And if it's a 6-socket/8-socket node, that's 3 kW/4 kW of heat just from the sockets per node!
Hope it comes with a smoke detector unit.
@@tourist6290 why are people here pretending this is something crazy?
Threadripper does that. Threadripper also has something like 3-4W per core, an efficiency only matched by Apple
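(Napkin math on that, assuming the ~500 W socket figure and all 128 cores loaded, and ignoring that uncore/IO eats a chunk of the budget:)

```python
socket_w = 500   # assumed socket power under all-core load
cores = 128
print(f"~{socket_w / cores:.1f} W per core")  # ~3.9 W/core, same ballpark as the Threadripper claim
```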
I remember when my boss gave me the credit card and said to go wild: a dual-core overclocked Xeon desktop workstation, maxed-out memory, and a maxed-out RAM disk. Things have come so far since I was building a workstation for printing and databases.
Dual-socket Xeon with a RAM disk is the best print server ever lol. Good work.
Loved the intro
I can't see clearly from the slides what the memory bandwidth would be on that platform in GB/s. It would be interesting to see how well some of the larger MoE models perform on the Xeon 6900 platform. The lower activated-parameter count should mean a lighter workload, while being able to use system memory should make it a lot easier to fit the larger models, all ready for access by the router model. Keen to see these in action, same with Gaudi 3!
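(Rough back-of-the-envelope for that, assuming the 12 memory channels per socket and the DDR5-6400 / MRDIMM-8800 speeds quoted for the 6900P platform, plus a hypothetical MoE with ~40B active parameters at 8-bit. Real throughput will be lower; this only shows why MoE plus lots of system RAM is attractive:)

```python
channels = 12            # per socket on the 6900P platform (assumed from launch material)
bytes_per_xfer = 8       # 64-bit channel

for name, mts in [("DDR5-6400", 6400e6), ("MRDIMM-8800", 8800e6)]:
    peak = channels * mts * bytes_per_xfer          # bytes/s, theoretical peak
    print(f"{name}: ~{peak / 1e9:.0f} GB/s peak")

# MoE decode is roughly bandwidth-bound: each generated token streams only the
# *active* expert weights. Hypothetical model: 40B active parameters at int8.
active_bytes = 40e9
peak = channels * 8800e6 * bytes_per_xfer
print(f"upper bound: ~{peak / active_bytes:.0f} tokens/s per socket")
```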
Do they use AI to design better AI silicon and general compute?
Where do you purchase CXL devices? Anything that can augment GPU vram?
Not yet to my knowledge.
Also, I doubt NVIDIA/AMD would engage in intensive development of solutions that would cut into their profit margins (more HBM3 memory on individual AI accelerators) by offering such a product any time soon.
I'd expect such innovations once the AI bubble is bursting and the market is over-saturated.
Yes, but does it come with RGB?
Intel finally caught up on the core count without settling for E-cores.
Intel cancelled Jim Keller's idea of rentable cores. Without Keller, who left Intel in 2020, Intel's engineering staff can't make rentable cores work!
@@tringuyen7519 It'd only work if they were hidden from Windows.
@@tringuyen7519
Some ideas are born to burn.
I want to get my hands on a Gaudi 3 PCIe card. I hope it's affordable enough for devs to develop for it.
NEC is building a system for Japan's National Institutes for Quantum Science and Technology (QST) using Intel's Xeon 6900P processors and AMD's Instinct MI300A accelerators.
Looking forward to the testing
So Gaudi 3 has a PCIe card... For 600W and also 24G Ethernet? Likely not something for my workstation
Why are the slides rendered at 480p
+50% power consumption and eye-catching benchmarks based on the AMX extensions... What about against many, many CUDA cores? Will they end up like AVX? ...And released just before Turin.
6900P? NICE
I'm looking forward to a Lunar Lake laptop. I currently have an M1 Pro MacBook Pro, but I have a couple of Windows programs that run poorly on Apple Silicon macOS.
If you also need multicore performance, just wait 2-3 more months for Arrow Lake, then decide what is best for you. It is a very interesting period.
@@ContraVsGigi I do not need a lot of performance and my i7-10700 is fine on the desktop. My ideal laptop would be 17 inches with a 4k screen. The programs that I use do not need a lot of CPU horsepower but do need RAM and a strong display.
@@movdqa OK, then the performance seems to be enough. Is 32 GB of memory good enough? That is the maximum you can get with Lunar Lake. Maybe something more future-proof is better? I would wait 3 more months anyway, so Arrow Lake laptops are also an option. BTW, I have a Dell XPS 17; at the time I said it was exactly the same size as my old 15.6-inch laptop, but now I wish it were smaller, as it is a bit cumbersome to carry or to use on my lap.
@@ContraVsGigi 32 GB is my sweet spot for RAM. I currently have a MacBook Pro 16 and it's big. But most of my travel is by car with a little travel by air so I can manage a large laptop. My laptop bag (Swissgear Carbon) was designed for the 2008 MacBook Pro 17 which would be thicker and heavier than anything outside of gaming laptops today. I use laptops for trading and need the screen real estate. One program I use runs really horribly on Apple Silicon - though it still runs. It runs fine on old Intel CPUs.
@@movdqa Oh, travel and light tasks. I guess this is perfect for you.
I'll be able to afford one after the next upgrade cycle when they are considered e-waste.
Garage S3-compatible storage?? Is it better than MinIO?
And we just finished upgrading to 5th gen... See you in gen 8.
Do you know if these have already been designed with industry standard EDA tools, or still Intel internal tools? Looking forward to your review.
EDA
Man, I would love to run the Xeon 6900P in a home server, think of all the workloads I could run... Too bad I'll probably not be able to afford one like that.
Dell will have a PowerEdge XE system based on Gaudi 3. I don't know the exact details, but it looks promising next to the existing and planned Nvidia-based XE systems as well as an AMD XE box. And because demand for Nvidia-based systems exceeds supply, these alternatives look promising. (I work @ Dell, but not on the compute side.)
You do know that CMOS is inherently quantum mechanically defective. Every clock cycle, there is a transient short between power and ground.
I mean, one good enterprise product release doesn't undo the damage they've done to their brand lately. They're gonna be on the ropes for a long while. Cutting a quarter of their staff is not a minor speedbump. I certainly hope they don't nosedive as a company, because we need those high-tech fabs to be built and we need it to happen before China gets all grabby with Taiwan.
That said, I'm wondering if they might try the Lunar Lake method and axe hyperthreading on the P-cores. It seemed to do wonders for efficiency. Would it even be a tangible benefit for these kinds of workloads?
Yo, Garage looks sick, I'm pumped.
Interesting to hear that Intel focuses on inference with Gaudi 3; at PyTorch Conf last week they demoed Gaudi 2 with a fine-tuning use case…
Fine-tuning can work too! The point was more that training is conceded ground for now... and that's it. Folks are making a bigger deal out of that than reality warrants.
On the product side Intel is SO BACK...
But can they get the foundry making money again before it drags everything else down with it?
It's going to be VERY expensive.
If it’s cheaper than NVIDIA…
@@Workaholic42
Top of the line Xeon processors broke the $10k mark back with the Xeon Platinum 8170M on 11 July 2017.
Wiki doesn't show the original MSRPs for Nvidia GPUs, so I can't tell when Nvidia broke the $10k mark, but I am going to guess that it wasn't in 2017. It was probably later than that.
@@ewenchan1239 I was referring to Gaudi 3, which claims 2x better performance per dollar than the H100.
@@Workaholic42
You make no mention of Gaudi 3, in your original reply/comment.
"claims to have a 2x better performance per dollar than H100" = it's still going to be VERY expensive.
When do we see the randomx benchmark?
did they fix their QC issues?
I said not so long ago that this is Intel getting back to about parity with AMD, which is a huge step for both of them. They're even ahead with Sierra Forest, but that's not quite such an easy comparison to carry out because of the SMT that AMD's high-density parts have, as well as other differences.
I want AMD and Intel to do well, along with Qualcomm and hopefully more in the future. The only one I hate is Nvidia; their Linux drivers have cost me my sanity.
why is your audio always so compressed?
Wall Street is upset because Intel is investing money in node/fab upgrades and building new fabs, instead of what Wall Street got used to before Pat: all the money going to dividends and stock buybacks. If Intel had started the EUV node work years earlier, we wouldn't be here. Without the EUV nodes, Intel would never be competitive on wafer processing costs.
Rendering and FLIP simulation; any heavy VFX workloads.
Didn’t IBM just introduce a new processor with AI acceleration? IBM Telum 2 on the z16 architecture?
I'll be impressed when I get to play games on it while it works... and can edit and encode on the processor, without a graphics card, at monitor refresh rate with no latency changes.
3:50 I think you meant "coming Q1 of 2025" not 2024 for the "e-cores on the 6900 platform".
Bro, get to the part about how much. That's all I need to know.
$17K
Looking forward to getting these for $100 in 2030 from eBay.
Woohoo, in 20 years I'll own one of these for $2!
People say x86 is dead and ARM is the future. What do you say is the future?
Meanwhile, AMD with 192 cores on Turin.
Those are Zen 5C, but yeah should be stellar competition nevertheless
@@SAKTHITech c for champion?
@@ragesmirk Compact. They fiddled a bit with the core design; I think it has less cache per core, and that way they shrunk the cores a bit and can fit more cores per cm², i.e. more density.
The cores are a bit slower individually too, but there are more of them. Basically, some trade-offs have been made for more density.
@@TheHighborn Isn't their IPC practically identical, but they can't be pushed as high (clock speed) as a regular Zen 5 core because of cell density?
🔥
But will it burn?
This channel is single-handedly curing my chronic depression 🤌
why? have u bought intel stocks? xD
need a great software layer with that
Hmm, yes, 128/3 does not compute. I wonder how they are doing that?
They have 3x 44-core tiles; the middle one has 2 cores disabled, and the ones on the sides have one core disabled each (3 × 44 = 132, minus 4 = 128).
Wall Street has never been in touch with reality...
Thanks for calling out the wall street Bros. It's gotten annoying.
SO annoying.
Intel‘s actions haven’t been helping much. The finance dooouuuches have just been burnt by the whole Raptor Lake (Refresh) debacle since they are one of the target markets for maximum single-thread performance.
Never underestimate pettiness.
It's only annoying if you pay attention to them
Implying Wendell has an overly positive opinion of Intel compared to AMD is a bit "far-fetched"…
This video isn’t a review, just a news segment with more technical details.
@@xephael3485 You must be new to his channel.
500w? And you say they're not on fire? Oh; they be on fire!
Yeah, great, but how well do games run.
very badly
In virtual machines for neighborhoods
I wonder if anyone has tried mixing an all-P-core and an all-E-core Xeon in the same dual-socket server. 😂
This is not feasible if some dumb OS is used to dispatch threads in a dual-socket server.
Some server applications like P-cores, and some server applications like E-cores. Who is going to explain this preference to the OS dispatcher? Only the sysadmin knows which server application should be started on particular server hardware.
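(Which is why placement usually ends up being explicit. A minimal sketch of doing it by hand from Python, assuming a made-up layout where CPUs 0-127 are the P-core socket; check lscpu or sysfs for the real numbering:)

```python
import os
import subprocess

# Hypothetical layout: CPUs 0-127 = P-core socket, 128-271 = E-core socket.
P_CORE_CPUS = set(range(0, 128))

# Pin this process (and anything it spawns) to the P-core socket before
# launching the latency-sensitive service; the scheduler can no longer
# wander onto the E-core socket.
os.sched_setaffinity(0, P_CORE_CPUS)
print("running on CPUs:", sorted(os.sched_getaffinity(0))[:4], "...")
subprocess.run(["/usr/bin/env", "true"])  # stand-in for the real server binary
```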
They just need wins for Gaudi, because there's only one huge supercomputer for Intel and it goes online sometime next year.
An interesting workload for me would be anything FEA, like ANSYS or Code_Aster, as it is ~80% matrix math, like all the AI stuff. I expect people to think about moving FEA workloads to AI hardware in the coming years, as there are many similarities. The reason nobody does FEA on GPUs now is that we need much more RAM/VRAM than affordable GPUs offer, so we get no benefit from GPUs, and thus it'd be reasonable to move to AI hardware. FEA could also be a second life for old AI hardware in the future. I think that's what we will see soon. And of course, being an Intel stockholder myself, I hope they catch up in the next few months.
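(A crude sketch of why VRAM is the wall there, using assumed numbers: 50M degrees of freedom and ~80 nonzeros per row for a 3D solid mesh, stored in double-precision CSR. The point is only that the assembled matrix alone already exceeds a consumer GPU's memory, before any factorization fill-in:)

```python
ndof = 50_000_000    # assumed degrees of freedom
nnz_per_row = 80     # assumed average nonzeros per row (3D solid elements)

nnz = ndof * nnz_per_row
# fp64 values + int32 column indices + int64 row pointers (CSR layout)
csr_bytes = nnz * (8 + 4) + (ndof + 1) * 8
print(f"assembled stiffness matrix alone: ~{csr_bytes / 1e9:.0f} GB")  # ~48 GB
```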
Can't wait till I can get an MI300 off eBay for $200 and pair it with an old CPU or something.
Self hosted cloud alternatives that are homelab friendly = $$$$
To fight Nvidia, could Intel (and AMD) try to offer a very performant and efficient CPU + GPU combo? Something that runs better "together".
Intel has to start kicking ass again. Just like I said they would, they will be splitting their foundry into a subsidiary. Now the US government and Intel need to train people.
I think I saw some glue that held those PCBs together...
While it isn't exactly wrong to call it glue, the more appropriate name is underfill; it helps with the stresses of thermal cycling and reduces fatigue failure modes.
8:23. That Gaudi die looks suspiciously like an Apple 'Max' double die. Hope they slice half that off and make a consumer GPU 😂
Gaudi 3 & Apple M3 Max are both made by TSMC! You’re not wrong.
@@tringuyen7519 Wrong! It's on an Intel process.
@@foch3 I see tons of sources claiming otherwise (that they use TSMC N5), including the Intel Gaudi 3 AI Accelerator White Paper (ID 817486) on Intel's website.
Can I use these to make a plex machine?
It's a bit disappointing with only 96 PCIe lanes when AMD has 128 across their entire 9000-series Epyc line.
We can use some good Intel news for a change, competition is good!
I'm missing PCIe 6.0. It's time.
There is nothing that would fully saturate PCIe 6.0 (yet). An RTX 4090 doesn't even fully saturate PCIe 4.0 ...
2027
@@kyu9649 Server chips use network adapters... and those can saturate PCIe 4.0 quite easily.
If you use a PCIe switch in the configuration, you are severely bandwidth-starved.
As you are with the QPI links between Xeons in multi-socket servers.
@@kyu9649 nothing? You might leave your gaming chair and take a look at current high speed network adapters...
@@kyu9649 No, I'm talking about I/O: storage, network, and especially CXL.
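(The napkin math behind that, per direction and theoretical peaks only, ignoring protocol overhead beyond line encoding:)

```python
# Per-lane rate in GT/s and encoding efficiency for each PCIe generation.
pcie = {"4.0": (16, 128 / 130), "5.0": (32, 128 / 130), "6.0": (64, 1.0)}  # 6.0: PAM4/FLIT, ~full rate

for gen, (gts, eff) in pcie.items():
    x16 = gts * eff * 16 / 8          # GB/s for an x16 slot, one direction
    print(f"PCIe {gen} x16: ~{x16:.0f} GB/s")

nic = 400 / 8                          # a 400 GbE adapter moves ~50 GB/s
print(f"400 GbE NIC: ~{nic:.0f} GB/s -> overflows Gen4 x16, fits in Gen5 x16")
```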
Intel needs to do Optane again.
They sold the memory division to SK Hynix.
@@kartikpintu The Optane division is with Micron, if I'm not remembering wrong.
@@enthuscimandiri1640 Micron also dropped out of 3D XPoint; the patents were bought by Texas Instruments, from what I read last (sad, since I'm pretty sure TI won't actually use them).
Dude, have you been doing plumbing lately? You used "spigot" several times... 🤣😉
Particularly in the tech sector, the share price is just a religious belief measurement.
The biggest saving is if you don't spend money at all 😊
Good to see Intel seemingly changing their ways. It will keep them alive. Let's hope they have learned their lesson and don't go back to complacency, living on milking a monopoly.
Sure hope Intel stocks recover once the controversy and fire are extinguished. Pokes Pat with a stick, "get that fab business spooled up"
Okay. I'm gonna take my 12900k and go find someplace else to play ... 😔
(P/S: You're lookin kinda svelte there, Wendellman. 👍🏼)