All this is great...Chips, XPUs etc...but who will do the training ? Who will provide data from various industries ? Who will do the inference ? Which software firms (and stocks 🙂) will be involved in this crucial piece that makes AI possible. Would love to hear your commentary on this some day.
From what I have heard Hock Tan say, he knows exactly what he is doing. He understands the importance of inference requirements really well, imo. I was intrigued by how he focused specifically on companies that will actually make it to the next step in this AI race (i.e., using inference to make money), like Google, Meta, Tencent, etc. Hopefully AVGO will join the trillion-dollar club soon and then keep on mooning.
Great video about an under-the-radar event. However, maybe I understood you wrong, but at least Google's TPUs are used for both training and inference. Excluding the software stack, TPUs are a drop-in replacement for GPUs.
True, but they're probably not training the most intense AI systems on them. This brief PR piece is high-level, but there are enough tidbits to see that the TPU is far behind Nvidia's top systems for training heavy lifting, which explains why Google continues to be a top Nvidia customer. cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-tpu-v5p-and-ai-hypercomputer
@@chipstockinvestor Indeed. At least judging from MLPerf training submissions, Google does not show off large LLMs on TPUs (except one example that is also rather poor compared to Nvidia, but still better than Gaudi2).
@@shefudgrupa Absolutely, but who knows what they're planning. Maybe later generations of those TPUs can do more heavy lifting. But as of now, they appear to be for LLM fine-tuning and basic inference.
Do you think broadcom will continue to grow well, while nvidia is still such a hot stock in the AI market? I understand they are different. Just curious how it could affect the other?
I'm an Engineer at Broadcom, gotta say we are gonna dominate :)
Y'all are already! Keep up the stellar work!
I hold a nice portion of broadcom. Fantastic dividend growth stock too
Excellent, as usual. Be Well !!!
Nothing spells trouble for Nvidia stock
Thanks! Great video!
Nvidia is building a warp drive using its C100
Can you please do an update on AEHR? I know that’s not related to today. But would appreciate your input.
There's an update on the community board
@@chipstockinvestor What’s the community board?
Thanks you!💯
My understanding is that Broadcom is a fabless company that makes XPUs. So which fab companies make the XPUs? TSM?
These are two older videos that break down the business model a bit more. Broadcom does a little in-house manufacturing, and primarily uses TSMC for the rest. th-cam.com/video/Luk92Arz1dw/w-d-xo.html th-cam.com/video/9DlYMfp3iN0/w-d-xo.html
@@chipstockinvestor Thanks.
Alphabet, Intel and Qualcomm just announced an AI alliance ("UXL foundation") and want to develop a software suite that allows AI programs to run on different kinds of AI chips from different manufacturers. This seems to be an equivalent to Nvidia's Cuda!?
Intel and Broadcom are manufacturers, which gives them an advantage over others who rely on outside manufacturers to make their products, i.e., Nvidia?
@@lc285 Nvidia relies on TSMC, which is top-notch at the moment and maybe for the next few years as well. But the main risk is that their most advanced plants are in Taiwan. TSMC will produce for anyone, if you are a big enough customer to be able to book their limited production capacity.
They announced this months ago :) It's intended to be open source.
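The idea behind a cross-vendor software suite like that can be sketched as a dispatch layer: model code calls one API, and each chip vendor registers a backend that supplies the implementation. This is a toy illustration in Python with invented names, not the actual UXL (or CUDA) API.

```python
# Toy sketch of a vendor-neutral dispatch layer: one API on top,
# pluggable per-vendor backends underneath. All names are invented
# for illustration.

BACKENDS = {}

def register_backend(name):
    """Class decorator that registers a backend under a device name."""
    def deco(cls):
        BACKENDS[name] = cls()
        return cls
    return deco

@register_backend("cpu")
class CpuBackend:
    def matmul(self, a, b):
        # naive reference implementation; a real backend would call
        # the vendor's kernel library instead
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

def matmul(a, b, device="cpu"):
    # the model code is identical no matter which chip is underneath
    return BACKENDS[device].matmul(a, b)

print(matmul([[1, 2]], [[3], [4]]))  # -> [[11]]
```

A vendor would ship its own `@register_backend("...")` class, and existing model code would run unchanged — which is exactly the lock-in CUDA currently enjoys.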
Hello Guys. can you comment on AEHR test systems again? They just cut their projection again. Is it still a buy?
there's a comment on the community board
The XPU switch looks to function the same role as NVLink.
Could be, but it's proprietary to that XPU customer, so not a true NVLink competitor.
I don’t think there is a standard. Without dedicated inter-GPU or inter-XPU data transfers, PCIe becomes the bottleneck for large-scale ML models. For example, Intel may claim better performance on a 7B-parameter model, but Nvidia's solution is scalable to 100B parameters.
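A back-of-the-envelope sketch of that bottleneck: time to move one full set of fp16 gradients between accelerators at PCIe-class vs. dedicated-fabric bandwidth. The bandwidth figures are rough assumptions for illustration, not vendor specs, and a real ring all-reduce moves roughly 2x this data.

```python
# Order-of-magnitude estimate: gradient sync time per step at two
# interconnect bandwidths. Bandwidths below are illustrative assumptions.

def sync_seconds(params_billions, bytes_per_param, bw_gb_s):
    """Time to move one full gradient copy at the given bandwidth (GB/s)."""
    data_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return data_gb / bw_gb_s

PCIE_BW = 64     # GB/s, assumed roughly PCIe Gen5 x16
FABRIC_BW = 900  # GB/s, assumed NVLink-class fabric

for params in (7, 100):
    t_pcie = sync_seconds(params, 2, PCIE_BW)     # fp16 = 2 bytes/param
    t_fast = sync_seconds(params, 2, FABRIC_BW)
    print(f"{params}B params: ~{t_pcie:.2f}s over PCIe vs ~{t_fast:.2f}s over fabric")
```

At 7B parameters the PCIe penalty per sync is tolerable; at 100B it is several seconds per step, which is why scaling to the largest models demands a dedicated interconnect.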
Good shot. The third customer, I think, is OpenAI, Tencent, Apple, or Tesla.
Why has AEHR dropped almost 26% today? Couldn't find any news on the internet so far.
I am looking for an answer too.
Check the community board on our main page.
AEHR tomorrow?!
What happened to the audio? Sorry, but the word separation sounds like the synthetic voice of a robot 🤖
Yay!
A 12-stack HBM3 accelerator offers 50% more memory capacity and 25% higher bandwidth compared to NVIDIA's flagship Blackwell B200.
There is an issue with using CoWoS-L: to get to 12 stacks you need three gargantuan chips, which causes NUMA-related weirdness... There is also a yield issue: one defect and the entire gargantuan chip is lost.
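The yield point can be illustrated with the classic Poisson die-yield model, where the fraction of good dies falls exponentially with die area. The defect density below is an assumed, illustrative number, not a foundry figure.

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of defect-free dies under a simple Poisson model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.1  # assumed defects per cm^2, illustrative only
small = poisson_yield(1.0, D0)  # a modest ~1 cm^2 chiplet
huge = poisson_yield(8.0, D0)   # a reticle-busting "gargantuan" die
print(f"small die yield ~{small:.0%}, huge die yield ~{huge:.0%}")
```

Even at a low assumed defect density, an 8x larger die loses most of its yield advantage, which is why one defect on a gargantuan chip is so costly.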
The more HBM you add, the larger the model (with more parameters) you can run.
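As a rough sketch of that capacity argument: at fp16 each parameter takes 2 bytes, plus some overhead for activations and KV cache, so HBM capacity directly caps model size. The capacities and the 20% overhead factor below are assumptions for illustration.

```python
def max_params_billions(hbm_gb, bytes_per_param=2, overhead=1.2):
    """Rough estimate of the largest model (billions of parameters) that
    fits in HBM for inference, with a fudge factor for activations/KV cache."""
    return hbm_gb * 1e9 / (bytes_per_param * overhead) / 1e9

for hbm in (96, 144, 192):  # hypothetical capacities for various stack counts
    print(f"{hbm} GB HBM -> ~{max_params_billions(hbm):.0f}B params (fp16)")
```

So a 50% bump in memory capacity translates almost directly into a 50% larger model servable from a single accelerator, before any model-parallel tricks.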
Nvidia will be the next Nvidia!
NVIDIA will be a $1,500 stock in 2024 whether you buy it or not, so you might as well buy it and get a 50% return on your investment.
SMCI: you missed one of the best stories. I know you don't change your mind. Bad luck.
So where does SMCI go from here?