Squeeze more juice out of a single GPU in deep learning

  • Published on Sep 15, 2024
  • It’s well known that GPUs can significantly accelerate neural network training. However, not everyone knows that a single GPU is sufficient to train most neural networks, except for a few very large ones (such as LLMs). In fact, the GPU is under-utilized in most cases. In this talk, we address this under-utilization and propose a way to make full use of the GPU's capacity, with the goal of increasing training throughput on a single GPU. We will use the training of a small neural network as an example to illustrate how to achieve this by splitting a physical GPU into multiple logical GPUs and then running a separate training process on each logical GPU.
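    The abstract does not specify a framework; as one possible illustration (not necessarily the approach used in the webinar), the sketch below uses TensorFlow's logical-device API to split one physical GPU into two memory-limited logical GPUs and then runs an independent toy training loop on each, one thread per logical GPU. The memory limits, model, and data are placeholders chosen only for this example.

import threading
import tensorflow as tf

# Split the first physical GPU into two logical GPUs.
# The 4 GB memory limits are illustrative; size them to fit your models.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096),
         tf.config.LogicalDeviceConfiguration(memory_limit=4096)],
    )

logical_gpus = tf.config.list_logical_devices("GPU")

def train_small_model(device_name):
    """Run a tiny linear-regression training loop pinned to one logical GPU."""
    with tf.device(device_name):
        w = tf.Variable(tf.random.normal((32, 1)))
        opt = tf.keras.optimizers.SGD(learning_rate=0.01)
        x = tf.random.normal((1024, 32))   # placeholder data
        y = tf.random.normal((1024, 1))
        for _ in range(100):
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
            opt.apply_gradients([(tape.gradient(loss, w), w)])

# One independent training job per logical GPU, run concurrently in threads.
threads = [threading.Thread(target=train_small_model, args=(d.name,))
           for d in logical_gpus]
for t in threads:
    t.start()
for t in threads:
    t.join()

    For fully separate training processes rather than threads within one process, NVIDIA's Multi-Process Service (MPS) or MIG partitioning on supported GPUs are common alternatives; see the recording for the approach actually used in the talk.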
    ________________________________________
    This webinar was presented by Weiguang Guan (SHARCNET) on November 15th, 2023, as part of the series of weekly Compute Ontario Colloquia, and was hosted by SHARCNET. The colloquia cover a range of advanced research computing (ARC) and high performance computing (HPC) topics, are approximately 45 minutes in length, and are delivered by experts in the relevant fields. Further details can be found on this web page: www.computeont... Recordings, slides, and other materials can be found here: helpwiki.sharc...
    SHARCNET is a consortium of 19 Canadian academic institutions that share a network of high performance computers (www.sharcnet.ca). SHARCNET is part of Compute Ontario (computeontario.ca/) and the Digital Research Alliance of Canada (alliancecan.ca).
