You don't need Supercomputers for AI!

  • Published 29 Oct 2024

Comments • 13

  • @TechEnthusiastInc
    @TechEnthusiastInc  3 months ago

    💥 Check out my NEW COURSE "Introduction to Enterprise IT [2024]" and learn the fundamentals of Enterprise IT in one go and one day! 💥
    academy.techenthusiast.com/p/introduction-to-enterprise-it

  • @aryehbarron4067
    @aryehbarron4067 1 year ago +1

    Great video as usual, Sir!

  • @RossCooper-Smith
    @RossCooper-Smith 1 year ago +1

    Another critical need for AI training is that the source dataset has to be on flash. It's a small-I/O, random-read workload across the entire training dataset. If you do the math (see the sketch below), a single NVIDIA H100 GPU needs the IOPS of around 8,000 hard drives to keep it busy. You can't even use hybrid storage, since caching doesn't work for totally random I/O.
    NVIDIA have a certification program for storage arrays, and for AI training they don't certify anything that isn't all-flash.
    And I totally agree with your conclusion: HPE have a really strong portfolio for AI workloads. They're the only vendor with a full-stack in-house solution proven at every level of AI training and deployment. From Supercomputing to Cloud, Datacenter to Edge. They've made some very good strategic choices with their focus over the last few years.
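
    A quick sketch of that back-of-envelope math. Every number below is an illustrative assumption (GPU ingest rate, average I/O size, per-drive IOPS), not a figure taken from the comment above:

    ```python
    # Back-of-envelope estimate of how many drives it takes to feed one GPU
    # with a small-block, fully random read workload.
    # All constants are illustrative assumptions, not vendor-published figures.

    GPU_INGEST_BYTES_PER_S = 4 * 10**9   # assumed sustained read rate one GPU consumes (~4 GB/s)
    AVG_IO_SIZE_BYTES      = 4 * 1024    # assumed small random-read size (4 KiB)
    HDD_RANDOM_READ_IOPS   = 150         # assumed random-read IOPS of one 7.2K RPM hard drive
    NVME_RANDOM_READ_IOPS  = 1_000_000   # assumed random-read IOPS of one enterprise NVMe SSD

    required_iops = GPU_INGEST_BYTES_PER_S / AVG_IO_SIZE_BYTES

    print(f"Random-read IOPS needed per GPU: {required_iops:,.0f}")
    print(f"Hard drives needed:              {required_iops / HDD_RANDOM_READ_IOPS:,.0f}")
    print(f"NVMe SSDs needed:                {required_iops / NVME_RANDOM_READ_IOPS:,.1f}")
    ```

    With these assumed numbers the requirement lands in the thousands-of-hard-drives range, while a single NVMe SSD covers it on its own, which is the gap the comment is pointing at.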

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago

      Hi Ross! You are absolutely right, very good point! And thanks for all the detailed specs, appreciated. Indeed, there are quite a lot of critical requirements with AI compared to traditional workloads.
      Agreed. HPE has all it takes to be one of the key players in the field, and it has shown it's bold enough to make the risky first moves needed, too. Very interesting to follow!
      By the way, congrats on the super interesting VAST Data Platform announcement! We need to get back to that. 😉

  • @svrangarao1224
    @svrangarao1224 1 year ago +1

    Thanks for the information

  • @papi-jor9239
    @papi-jor9239 1 year ago +1

    Hi Markus, thanks a lot for another informative video. With HPE GreenLake for AI, is the data stored in your own datacenter or in an HPE datacenter? Thanks!

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago

      Thanks, my pleasure! The data needs to be stored close to the CPUs and GPUs for the fastest access, so it has to live within the “HPE Cloud”. In North America HPE is using a Canadian colocation provider called QScale, where they are building their HPC/supercomputing facilities, and all the training data will be located there.
      I actually just made a video about all this. Check it out!
      HPE just announced AI public cloud! (with HPE GreenLake for Large Language Models)
      th-cam.com/video/dM7HxcPMDZo/w-d-xo.html

    • @RossCooper-Smith
      @RossCooper-Smith 1 year ago +1

      HPE GreenLake for Large Language Models is an HPE AI Cloud service that runs within HPE's own datacentres, but they also have on-premise solutions all the way from Enterprise to Supercompute scale.
      For example: HPE GreenLake for File (GL4F) is an on-premise enterprise solution running a software stack that's already proven for Top-10 HPC workloads and some of the world's largest AI clouds. There's a UK deployment of that stack running a 60PB single namespace for data and 100,000 Kubernetes containers, all powered by around 15,000 HPE CPU & GPU compute nodes. GL4F will handle on-prem workloads from 200TB to 200PB and beyond.
      And of course HPE have their Cray supercompute division as well. Doesn't matter if you want on-prem or cloud, large or small, HPE have you covered. :-)

  • @ingridstrombeck8221
    @ingridstrombeck8221 7 months ago +1

    Thanks!