LLM Fine Tuning - Explained!

  • Published Oct 2, 2024

Comments • 10

  • @blancanthony9992 3 days ago +1

    Yes, good question. There is nothing like a peak; honestly, the only peak is the human imagination to create incredible new models and math. Remember, before transformers there was a peak, and before diffusion models the peak was GANs. Now, yes, we are stuck with transformers and diffusion and everyone has adopted them, so we have to wait for someone working on other concepts.

  • @souravjha2146 3 days ago +1

    What are the chances that I was searching for the exact same content and got the notification at the same time?

  • @deepakkushwaha818 2 days ago +1

    One of the best PEFT explanations to date. Deserves more subscribers.

    • @CodeEmporium 2 days ago

      Thank you for watching and for the compliments!

  • @susantachary7456 3 days ago

    Loved it

  • @sudlow3860 1 day ago

    I may be missing something, but in the second quiz, why would full fine-tuning increase the number of trainable model parameters by 100%? Wouldn't it just act further on 100% of the original trainable model parameters?

    • @CodeEmporium 1 day ago

      Yeah, maybe I could have phrased that better. You are still training the same parameters, but the number of parameters to store would be 100% for each task. So for a model with 1,000 parameters that we fine-tune on 15 tasks, we would need to store 15 * 1,000 = 15,000 parameters. I was going for "number of trainable model parameters to be stored per task" (see the sketch after this thread).

    • @sudlow3860 1 day ago

      @CodeEmporium OK, that makes sense. Thanks for all your excellent videos on the subject.
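
To make the storage arithmetic in the reply above concrete, here is a minimal Python sketch. The 1,000-parameter model and 15 tasks are taken from the comment; the 1% adapter fraction for the PEFT case is an illustrative assumption, not a figure from the video.

```python
# Storage arithmetic from the thread above: full fine-tuning vs. PEFT.
# The 1% adapter fraction is an assumed, illustrative value.

base_params = 1_000  # parameters in the base model (from the comment)
num_tasks = 15       # number of downstream tasks (from the comment)

# Full fine-tuning: each task stores its own complete copy of the weights.
full_ft_stored = base_params * num_tasks  # 15 * 1,000 = 15,000

# PEFT (e.g. LoRA-style adapters): store the frozen base model once,
# plus a small set of task-specific adapter weights per task.
adapter_fraction = 0.01  # assumption: adapters are ~1% of the base size
peft_stored = base_params + int(base_params * adapter_fraction) * num_tasks

print(f"Full fine-tuning stores: {full_ft_stored:,} parameters")  # 15,000
print(f"PEFT stores:             {peft_stored:,} parameters")     # 1,150
```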

  • @jameslucas5590 3 days ago

    We haven't peaked. Now, technology or hardware will have to get stronger, better, faster.

  • @Patapom3 3 days ago

    Your question of whether AI has peaked is not well formed. What AI are we talking about?
    If it's general AI, then no: clearly, today's AI models don't come close to doing everything humans are capable of.
    If it's ML and language models, there does seem to be some kind of plateau, with no clear advantage of one solution over the next. Maybe salvation will come from a new learning algorithm, but that clearly requires a change of paradigm.