How Fully Sharded Data Parallel (FSDP) works?

  • Published Nov 4, 2024

Comments • 63

  • @tinajia2958
    @tinajia2958 7 months ago +4

    This is the best video I’ve watched on distributed training

  • @yixiaoli6786
    @yixiaoli6786 7 months ago +2

    The best video of FSDP. Very clear and helpful!

  • @quaizarvohra3810
    @quaizarvohra3810 11 days ago

    I have been looking for a resource which would explain FSDP conceptually. This one explains it very clearly and completely. Awesome!

  • @chenqian3404
    @chenqian3404 1 year ago +6

    To me this is by far the best video explaining how FSDP works, thanks a lot!

  • @mahmoudelhage6996
    @mahmoudelhage6996 2 months ago +3

    As a Machine Learning Research Engineer working on fine-tuning LLMs, I normally use DDP or DeepSpeed, and wanted to understand more about how FSDP works. This video is well structured and provides a detailed explanation of FSDP; I totally recommend it. Thanks Ahmed for your effort :)

  • @MrLalafamily
    @MrLalafamily 10 months ago +1

    Thank you so much for investing your time in creating this tutorial. I am not an ML engineer, but I wanted to build intuition around parallelizing computation across GPUs and your video was very helpful. I especially liked that you provided multiple examples for parts that were a bit more nuanced. I paused the video many times to think things over. Again, gratitude as a learner

  • @dhanvinmehta3294
    @dhanvinmehta3294 2 months ago +1

    Thank you very much for making such a knowledge-dense, yet self-contained video!

  • @abdelkarimeljandoubi2322
    @abdelkarimeljandoubi2322 7 days ago

    Well explained. Thank you

  • @abhirajkanse6418
    @abhirajkanse6418 25 days ago

    That makes things very clear! Thanks a lot!!

  • @xxxiu13
    @xxxiu13 3 months ago

    A great explanation of FSDP indeed. Thanks for the video!

  • @lazycomedy9358
    @lazycomedy9358 9 months ago

    This is really clear and helped me understand a lot of details in FSDP!! Thanks

  • @AntiochSanders
    @AntiochSanders 1 year ago +1

    Wow, this is a super good explanation; it cleared up a lot of misconceptions I had about FSDP.

  • @yuxulin1322
    @yuxulin1322 7 months ago

    Thank you so much for such detailed explanations.

  • @saurabhpawar2682
    @saurabhpawar2682 9 months ago

    Excellent explanation. Thank you so much for putting this out!

  • @tharunbhaskar6795
    @tharunbhaskar6795 3 months ago

    The best explanation so far

  • @bharadwajchivukula2945
    @bharadwajchivukula2945 1 year ago

    crisp and amazing explanation so far

  • @pankajvermacr7
    @pankajvermacr7 1 year ago

    thanks for this, I was having trouble understanding FSDP; even after reading a research paper it was hard to understand. I really appreciate your effort, please make more such videos.

  • @yuvalkirstain7190
    @yuvalkirstain7190 9 months ago

    Fantastic presentation, thank you!

  • @mandeepthebest
    @mandeepthebest 3 months ago

    amazing video! very well articulated.

  • @NachodeGregorio
    @NachodeGregorio 8 months ago

    Amazing explanation, well done.

  • @dan1ar
    @dan1ar 1 month ago

    Great video!

  • @amansinghal5908
    @amansinghal5908 4 months ago

    great video - one recommendation: make 3 videos, one like this, one that goes deeper into the implementation (e.g. the FSDP code), and finally one on how to use it (e.g. case studies)

  • @gostaforsum6141
    @gostaforsum6141 2 months ago

    Great explanation!

  • @coolguy69235
    @coolguy69235 10 months ago

    very good video ! seriously keep up the good work !

  • @ElijahTang-t1y
    @ElijahTang-t1y 2 months ago

    well explained, great job!

  • @yatin-arora
    @yatin-arora 4 months ago +1

    well explained 👏

  • @amirakhlaghi8143
    @amirakhlaghi8143 5 months ago

    Excellent presentation

  • @p0w3rFloW
    @p0w3rFloW 11 months ago

    Awesome video! Thanks for sharing

  • @RaviTeja-zk4lb
    @RaviTeja-zk4lb 1 year ago

    I was struggling to understand how FSDP works and your video helped me a lot. Thank you. After looking into what these backends are, I see that FSDP definitely requires a GPU: for CPU we use 'gloo' as the backend, and it doesn't support reduce-scatter. It would be great if you also covered parameter-server training using the RPC framework.
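
A minimal sketch of the backend point above (assuming PyTorch's torch.distributed, two GPUs, and a torchrun launch; this is not code from the video): the reduce-scatter collective that FSDP uses to shard gradient reduction is available with the "nccl" backend, while the same call is unsupported under "gloo".

```python
# A minimal sketch (not from the video): FSDP relies on reduce-scatter to give
# each rank its shard of the summed gradients. The collective exists on the
# "nccl" (GPU) backend; the "gloo" (CPU) backend does not support it.
# Assumes a launch such as: torchrun --nproc_per_node=2 reduce_scatter_demo.py
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # with "gloo" the reduce_scatter below is unsupported
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    torch.cuda.set_device(rank)

    # Pretend this is a full (unsharded) gradient of 8 values living on every rank.
    full_grad = torch.ones(8, device="cuda") * (rank + 1)
    # After reduce-scatter, each rank holds only its 8 / world_size slice of the sum.
    my_shard = torch.empty(8 // world_size, device="cuda")
    dist.reduce_scatter_tensor(my_shard, full_grad, op=dist.ReduceOp.SUM)

    print(f"rank {rank}: shard = {my_shard.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```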

  • @phrasedparasail9685
    @phrasedparasail9685 1 month ago

    This is amazing

  • @TrelisResearch
    @TrelisResearch 7 months ago

    Great video, congrats

  • @clarechen1590
    @clarechen1590 2 months ago

    great video!

  • @bennykoren212
    @bennykoren212 11 months ago

    Excellent !

  • @adamlin120
    @adamlin120 1 year ago

    Amazing explanation 🎉🎉🎉

  • @aflah7572
    @aflah7572 3 months ago

    Thank You!

  • @AIwithAniket
    @AIwithAniket 1 year ago

    it helped a lot. thank you so much

  • @dhineshkumarr3182
    @dhineshkumarr3182 11 months ago

    Thanks man!

  • @ManishPrajapati-o4x
    @ManishPrajapati-o4x 1 month ago

    TY!

  • @louiswang538
    @louiswang538 4 months ago

    How is FSDP different from gradient accumulation? It seems both use mini-batches to get 'local gradients' and sum them up to get a global gradient for the model update.

  • @hannibal0466
    @hannibal0466 6 months ago

    Awesome Bro! One short question: in the example shown (24:06), why are there two consecutive AG2 stages?

    • @ahmedtaha8848
      @ahmedtaha8848  6 months ago

      Thanks! One is for the forward pass and the other for the backward pass. I suppose you could write a special handler for the last FSDP unit to avoid freeing the parameters and then re-gathering them. Yet, imagine if FSDP unit#0 had another layer (layer#6) after FSDP unit#2, i.e., a total of (layer#0, layer#3, layer#6). The aforementioned special handler wouldn't be wise then.
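
To make the unit layout in this example concrete, here is a minimal sketch (hypothetical layer sizes; it assumes PyTorch's torch.distributed.fsdp API rather than code shown in the video) of the nested wrapping that yields unit 0 = {layer 0, layer 3}, unit 1 = {layer 1, layer 2}, and unit 2 = {layer 4, layer 5}. Each wrapped unit is all-gathered once for its forward pass and once more for its backward pass, which is why AG1 and AG2 each appear twice.

```python
# A minimal sketch (hypothetical sizes) mirroring the 6-layer example from the
# video: layers 1-2 form FSDP unit 1, layers 4-5 form unit 2, and the root wrap
# (unit 0) owns the remaining layers 0 and 3.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def build_model() -> nn.Module:
    layers = [nn.Linear(1024, 1024, device="cuda") for _ in range(6)]  # layer0 .. layer5

    # Unit 1: layers 1 and 2 wrapped into their own FSDP unit (the AG1 all-gathers).
    unit1 = FSDP(nn.Sequential(layers[1], layers[2]))
    # Unit 2: layers 4 and 5 wrapped into their own FSDP unit (the AG2 all-gathers).
    unit2 = FSDP(nn.Sequential(layers[4], layers[5]))

    # The outermost (root) wrap becomes unit 0 and owns whatever is not already
    # wrapped: layer 0 and layer 3. Its parameters stay materialized for the
    # whole iteration, which is why the backward pass has no AG0.
    return FSDP(nn.Sequential(layers[0], unit1, layers[3], unit2))

if __name__ == "__main__":
    # Assumes a distributed launch, e.g. `torchrun --nproc_per_node=2 fsdp_units.py`.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank())
    model = build_model()
    out = model(torch.randn(8, 1024, device="cuda"))  # forward: AG1, AG2 as units are reached
    out.sum().backward()                              # backward: AG1, AG2 again, but no AG0
    dist.destroy_process_group()
```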

  • @santiagoruaperez7394
    @santiagoruaperez7394 8 months ago

    Hi. I want to ask you something. At 3:01 you also include the optimizer state in the multiplication for each parameter. I want to ask: isn't the optimizer state just one for the whole model? What I mean is: a 13B model will have more gradients than a 7B model, but does the optimizer state also depend on the number of parameters?

    • @ahmedtaha8848
      @ahmedtaha8848  8 months ago +1

      The optimizer state is not just one for the whole model. A 13B model has both more gradients and more optimizer state compared to a 7B model. Yes, the optimizer state depends on the number of parameters. For the Adam optimizer, the optimizer state (Slide 4 & 5) includes both a momentum (first moment) and a variance (second moment) for each gradient, i.e., for each parameter. (A rough byte-count sketch follows this thread.)

    • @santiagoruaperez7394
      @santiagoruaperez7394 8 months ago

      Amazing video, super clear @ahmedtaha8848
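
The byte-count sketch referenced in the reply above: a rough, assumption-laden estimate (following the common mixed-precision Adam accounting of roughly 16 bytes per parameter, not numbers taken from the video) showing how gradients and optimizer state both scale with the parameter count.

```python
# A rough per-parameter memory estimate (an assumption-laden sketch, not exact
# numbers from the video): with mixed-precision Adam, each parameter carries an
# fp16 weight + fp16 gradient + fp32 master weight, momentum and variance.
def training_state_gb(n_params: float) -> dict:
    bytes_param = 2        # fp16/bf16 weight
    bytes_grad = 2         # fp16/bf16 gradient
    bytes_adam_state = 12  # fp32 master weight + momentum + variance (4 + 4 + 4)
    gib = 1024 ** 3
    return {
        "params_gb": n_params * bytes_param / gib,
        "grads_gb": n_params * bytes_grad / gib,
        "optimizer_state_gb": n_params * bytes_adam_state / gib,
        "total_gb": n_params * (bytes_param + bytes_grad + bytes_adam_state) / gib,
    }

for size in (7e9, 13e9):
    stats = {k: round(v, 1) for k, v in training_state_gb(size).items()}
    print(f"{size / 1e9:.0f}B model:", stats)
# ~104 GiB for 7B vs ~194 GiB for 13B, before activations -- which is why this
# state gets sharded across GPUs.
```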

  • @Veekshan95
    @Veekshan95 11 months ago

    Amazing video with great visual aids and an even better explanation.
    I just had one question: at 24:45 you mentioned that FSDP layer 0 is never freed until the end. So does this mean the GPUs will hold layer 0 all the time and, in addition to that, gather the other layers as needed?

    • @ahmedtaha8848
      @ahmedtaha8848  11 months ago

      Yes, Unit 0 (layer 0 + layer 3) -- which is the outermost FSDP unit -- will be available across all nodes (GPUs) during an entire training iteration (forward + backward). Quoting from arxiv.org/pdf/2304.11277 (Page #6), "Note that the backward pass excludes the AG0 All-Gather because FSDP intentionally keeps the outermost FSDP unit’s parameters in memory to avoid redundantly freeing at the end of forward and then re-All-Gathering to begin backward."

  • @mohammadsalah2307
    @mohammadsalah2307 1 year ago

    Thanks for sharing! 19:19 The first FSDP unit to run the forward pass is FWD0; however, this FSDP unit contains layer 0 and layer 3. How could we compute the result of layer 3 without computing the results of layers 1 and 2 first?

    • @ahmedtaha8848
      @ahmedtaha8848  1 year ago +1

      Layer 3 is computed only after computing layers 1 and 2. Please note that there are two 'FWD0' stages: the first one computes layer 0; the second one computes layer 3 after FWD1 (layers 1 and 2) finishes.

  • @amaleki
    @amaleki 2 months ago

    There is a bit of a discrepancy with how model parallelism is defined in the NVIDIA literature. Namely, in NVIDIA's literature, model parallelism is the overarching idea of splitting the model across GPUs, and it can take two forms: i) tensor model parallelism (splitting layers between GPUs, so each GPU gets a portion of each layer) and ii) pipeline parallelism (each GPU is responsible for computing some layers entirely).

  • @maxxu8818
    @maxxu8818 7 months ago

    Hello Ahmed, if it's 4-way FSDP in a node, does that mean only 4 GPUs are used in that node? Usually there are 8 GPUs in a node, so how are the other 4 GPUs used? Thanks!

  • @richeshc
    @richeshc 9 months ago

    Namaste, a doubt. For pipeline parallelism (minutes 10 to 12) you mentioned that while we send the weights from the 1st GPU's mini-batch 1 training to GPU 2, we start the forward pass on GPU 1 for mini-batch 2. My doubt is: isn't it supposed to be a forward pass, followed by back-propagation and a weight update, and only then training on batch 2? The wording suggests we start the forward pass on GPU 1 with mini-batch 2 as soon as we transfer the result of mini-batch 1 from GPU 1 to GPU 2.

    • @ahmedtaha8848
      @ahmedtaha8848  9 months ago

      For mini-batch 1, we can't do back-propagation until we compute the loss, i.e., until mini-batch 1 passes through all layers/blocks. Same for mini-batch 2. After computing the loss for mini-batch 1, we can back-propagate one layer/block at a time on different GPUs -- and of course update the gradients. Yet, again, other GPUs will remain idle if we are processing (forward/backward) a single mini-batch. Thus, it is better to work with multiple mini-batches, each with a different loss value. These mini-batches are forwarded/backwarded on different GPUs in parallel.
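
A toy illustration of that last point (made-up sizes, a GPipe-style fill-and-drain schedule, equal per-stage time; not code from the video): the snippet prints which micro-batch each GPU works on at each forward step, showing several micro-batches in flight at once instead of all but one GPU sitting idle.

```python
# A toy GPipe-style forward schedule (hypothetical sizes, equal per-stage time):
# GPU g processes micro-batch m at forward step t = g + m, so after the initial
# "fill" phase every GPU is busy with a different micro-batch.
NUM_GPUS = 4          # pipeline stages, one group of layers per GPU
NUM_MICROBATCHES = 4  # one mini-batch split into this many micro-batches

for step in range(NUM_GPUS + NUM_MICROBATCHES - 1):
    slots = []
    for gpu in range(NUM_GPUS):
        mb = step - gpu
        slots.append(f"GPU{gpu}:mb{mb}" if 0 <= mb < NUM_MICROBATCHES else f"GPU{gpu}:idle")
    print(f"forward step {step}: " + " | ".join(slots))
# The backward passes are interleaved similarly once each micro-batch's loss is
# available, so no GPU waits for a full forward + backward of the whole
# mini-batch before doing useful work.
```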

  • @RedOne-t6w
    @RedOne-t6w 8 months ago

    Awesome

  • @DevelopersHutt
    @DevelopersHutt 7 months ago

    TY

  • @piotr780
    @piotr780 8 months ago

    How is the whole gradient calculated if the weights do not fit in a single GPU's memory?

  • @parasetamol6261
    @parasetamol6261 1 year ago

    That's a great video.

  • @hyunhoyeo4287
    @hyunhoyeo4287 4 months ago

    Great explanation!

  • @adityashah3751
    @adityashah3751 7 months ago

    Great video!