Does Broadcom's AI Event Spell Trouble For Nvidia Stock? (AVGO & NVDA)

  • Published Dec 31, 2024

Comments • 46

  • @1ckt157
    @1ckt157 8 months ago +7

    I'm an Engineer at Broadcom, gotta say we are gonna dominate :)

    • @chipstockinvestor
      @chipstockinvestor  8 months ago +3

      Y'all are already! Keep up the stellar work!

  • @jeffj8825
    @jeffj8825 9 months ago +10

    I hold a nice portion of Broadcom. Fantastic dividend growth stock too.

  • @elroy1836
    @elroy1836 9 months ago +3

    Excellent, as usual. Be Well !!!

  • @eversunnyguy
    @eversunnyguy 9 months ago +1

    All this is great... chips, XPUs, etc. But who will do the training? Who will provide data from various industries? Who will do the inference? Which software firms (and stocks 🙂) will be involved in this crucial piece that makes AI possible? Would love to hear your commentary on this some day.

  • @matt.stevick
    @matt.stevick 9 months ago +12

    Nothing spells trouble for Nvidia stock

  • @OnlyOneNagaBABA
    @OnlyOneNagaBABA 1 month ago

    From what I have heard Hock Tan say, he knows what he is doing really well. He understands the importance of inference requirements really well, imo. I was intrigued by how he had his focus specifically on companies that will actually make it to the next step in this AI race (i.e., using inference to make money), like Google/Meta/Tencent etc. Hopefully AVGO will join the trillion-dollar club soon and then keep on mooning.

  • @shefudgrupa
    @shefudgrupa 9 months ago +2

    Great video about an under-the-radar event. However, maybe I understood you wrong, but at least Google's TPUs are used both for training and inference. Excluding the software stack, TPUs are an in-place replacement for GPUs.

    • @chipstockinvestor
      @chipstockinvestor  9 months ago +1

      True, but probably not for training the most intense AI systems. This brief PR piece is high-level, but there are enough tidbits here to see that the TPU is far behind the training heavy lifting of Nvidia's top systems. Which explains why Google continues to be a top Nvidia customer. cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-tpu-v5p-and-ai-hypercomputer

    • @shefudgrupa
      @shefudgrupa 9 months ago

      @@chipstockinvestor indeed, at least judging from MLPerf training submissions, Google does not show off large LLMs on TPUs (except one example that is also rather poor compared to Nvidia, but still better than Gaudi2)

    • @chipstockinvestor
      @chipstockinvestor  9 months ago

      @@shefudgrupa absolutely, but who knows what they're planning. Maybe later gens of those TPUs can do more heavy lifting. But as of now, it appears to be for LLM fine tuning and basic inference.

  • @asafgr
    @asafgr 9 months ago +1

    Thanks! Great video!

  • @randomsitisee7113
    @randomsitisee7113 9 months ago +3

    Nvidia is building a warp drive using its H100

  • @JaneWarren-m7c
    @JaneWarren-m7c 9 months ago +1

    Can you please do an update on AEHR? I know that’s not related to today. But would appreciate your input.

    • @chipstockinvestor
      @chipstockinvestor  9 months ago

      There's an update on the community board

    • @caffeinej2691
      @caffeinej2691 9 months ago

      @chipstockinvestor what's the community board?

  • @владши-о8з
    @владши-о8з 9 months ago +1

    Thank you! 💯

  • @eversunnyguy
    @eversunnyguy 9 months ago +1

    My understanding is that Broadcom is a fabless company that makes XPUs. So which fab companies make the XPUs? TSM?

    • @chipstockinvestor
      @chipstockinvestor  9 months ago +1

      These are two older videos that break down the business model a bit more. Broadcom does a little in-house manufacturing, and primarily uses TSMC for the rest. th-cam.com/video/Luk92Arz1dw/w-d-xo.html th-cam.com/video/9DlYMfp3iN0/w-d-xo.html

    • @eversunnyguy
      @eversunnyguy 9 months ago

      @chipstockinvestor Thanks.

  • @valueinvestor8555
    @valueinvestor8555 9 months ago +3

    Alphabet, Intel, and Qualcomm just announced an AI alliance ("UXL Foundation") and want to develop a software suite that allows AI programs to run on different kinds of AI chips from different manufacturers. This seems to be an equivalent to Nvidia's CUDA!?

    • @lc285
      @lc285 9 months ago

      Intel and Broadcom are manufacturers, which gives them an advantage over others who rely on manufacturers to make their products, i.e., Nvidia?

    • @valueinvestor8555
      @valueinvestor8555 9 months ago

      @lc285 Nvidia relies on TSMC, which is top notch at the moment and maybe for the next years as well. But the main risk is that their most advanced plants are in Taiwan. TSMC will produce for anyone if you are a big enough customer to be able to book their limited production capacities.

    • @amyc7467
      @amyc7467 4 months ago

      They announced this months ago :) It's intended to be open source.

  • @NGh_teh
    @NGh_teh 9 months ago

    Hello guys, can you comment on AEHR Test Systems again? They just cut their projection again. Is it still a buy?

    • @chipstockinvestor
      @chipstockinvestor  9 months ago

      There's a comment on the community board

  • @jeffj8825
    @jeffj8825 9 months ago

    Do you think Broadcom will continue to grow well while Nvidia is still such a hot stock in the AI market? I understand they are different. Just curious how one could affect the other?

  • @gzhang207
    @gzhang207 8 months ago +1

    The XPU switch looks to serve the same role as NVLink.

    • @chipstockinvestor
      @chipstockinvestor  8 months ago

      Could be, but it's proprietary to that XPU customer, so not a true NVLink competitor.

    • @gzhang207
      @gzhang207 8 months ago +1

      Don't think there is a standard. Without fast inter-GPU or inter-XPU data transfers, PCIe becomes the bottleneck for large-scale ML models. For example, Intel may claim better performance on a 7B-parameter model, but Nvidia's solution is scalable to 100B parameters.
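The bottleneck point in the comment above can be illustrated with rough arithmetic. This is a sketch with ballpark bandwidth assumptions (the 64 GB/s and 900 GB/s figures are illustrative, not vendor specifications):

```python
# Illustrative arithmetic: how long one full gradient exchange takes
# at different interconnect speeds. Bandwidths are rough assumptions.

def transfer_seconds(params_billion: float, bytes_per_param: int, gb_per_s: float) -> float:
    """Seconds to move all parameters/gradients once at a given link speed."""
    total_gb = params_billion * bytes_per_param  # (1e9 params * bytes) / 1e9 = GB
    return total_gb / gb_per_s

# 100B-parameter model, fp16 gradients (2 bytes each)
pcie_like = transfer_seconds(100, 2, 64)     # assume ~64 GB/s PCIe-class link
fast_link = transfer_seconds(100, 2, 900)    # assume ~900 GB/s GPU interconnect

print(f"PCIe-class link:       {pcie_like:.2f} s per exchange")
print(f"Fast GPU interconnect: {fast_link:.2f} s per exchange")
```

The gap per exchange compounds over thousands of training steps, which is why slow inter-chip links dominate scaling at large parameter counts.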

  • @Jai-qf8lw
    @Jai-qf8lw 9 months ago

    Good shot. The third customer, I think, is OpenAI, Tencent, Apple, or Tesla.

  • @chrisveer9174
    @chrisveer9174 9 months ago

    Why has AEHR dropped almost 26% today? Couldn't find any news on the internet so far.

    • @gopalkc3514
      @gopalkc3514 9 months ago +1

      I am looking for the answer too.

    • @chipstockinvestor
      @chipstockinvestor  9 months ago +2

      Check the community board on our main page.

  • @markvanbrunschot1435
    @markvanbrunschot1435 9 months ago +1

    AEHR tomorrow?!

  • @juansolo-w5g
    @juansolo-w5g 8 months ago

    What happened to the audio? Sorry, but the word separation sounds like the synthetic voice of a robot 🤖

  • @Jai-qf8lw
    @Jai-qf8lw 9 months ago

    Yay!

  • @Jai-qf8lw
    @Jai-qf8lw 9 months ago +1

    A 12-stack HBM3 accelerator offers 50% more memory capacity and 25% higher bandwidth compared to NVIDIA's flagship Blackwell B200.

    • @haseebgatsby
      @haseebgatsby 9 months ago +1

      There is an issue with using CoWoS-L: to have 12 stacks you need three gargantuan chips, which causes NUMA-related weirdness... There is also an issue with yields: one defect and the entire gargantuan chip is lost.

  • @Jai-qf8lw
    @Jai-qf8lw 9 months ago

    The more HBM you add, the larger the model (with more parameters) you can run.
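The capacity point in the comment above can be sketched with simple arithmetic. Stack capacity and precision below are illustrative assumptions (24 GB per HBM3 stack, fp16 weights), and this counts weights only, ignoring activations and KV cache:

```python
# Illustrative: how many parameters' worth of weights fit in a given
# amount of HBM. Per-stack capacity and byte width are assumptions.

def params_that_fit_billion(hbm_stacks: int, gb_per_stack: float, bytes_per_param: int) -> float:
    """Billions of parameters whose weights alone fit in total HBM capacity."""
    total_gb = hbm_stacks * gb_per_stack
    return total_gb / bytes_per_param  # GB / (bytes per param) = billions of params

# fp16 weights (2 bytes/param), assuming 24 GB per HBM3 stack
print(params_that_fit_billion(8, 24, 2))   # 96.0  -> ~96B-param model
print(params_that_fit_billion(12, 24, 2))  # 144.0 -> ~144B-param model
```

So going from 8 to 12 stacks raises the largest weights-only model that fits by 50%, matching the proportional capacity increase.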

  • @juansolo-w5g
    @juansolo-w5g 8 months ago

    Nvidia will be the next Nvidia!

  • @jeffrose5622
    @jeffrose5622 9 months ago

    NVIDIA will be a $1,500 stock in 2024 regardless of whether you buy it or not, so you might as well buy it and get a 50% increase on your investment.

  • @hubertusgumpert8463
    @hubertusgumpert8463 9 months ago

    SMCI: you missed one of the best stories. I know you don't change your mind, bad luck.

    • @jacqdanieles
      @jacqdanieles 9 months ago +1

      So where does SMCI go from here?