Building with Instruction-Tuned LLMs: A Step-by-Step Guide

  • Published Dec 17, 2024

Comments • 58

  • @steveking5858
    @steveking5858 1 year ago +1

    Great session. Really helps in starting to understand the key building blocks and considerations required to do model fine-tuning. Great job Chris and Greg - and thanks!

  • @redfield126
    @redfield126 1 year ago +1

    This is very educational content. I found almost all of my main questions answered. Fantastic wrap-up. Thank you guys!

  • @prizmaweb
    @prizmaweb 1 year ago +2

    Outstanding! I was looking around for exactly this for the last week.

  • @archit_singh15
    @archit_singh15 1 year ago

    Such excellent explanations, perfect understanding achieved! Thanks.

  • @chukypedro818
    @chukypedro818 1 year ago +1

    Awesome webinar.
    Thanks Chris and Greg!

  • @fal57
    @fal57 1 year ago

    Thank you so much; you've made the idea very simple.

  • @fabianaltendorfer11
    @fabianaltendorfer11 1 year ago

    Love the energy. Thanks for the session!

  • @wangsherpa2801
    @wangsherpa2801 1 year ago +1

    Excellent session, thanks!

  • @RaymonddeLacaze
    @RaymonddeLacaze 1 year ago +17

    That was an excellent presentation. I feel like I learned a lot. I am frequently disappointed by these 1-hr webinars. I really appreciated the way both of you complemented each other. It was great to get the top-level view, and Chris did a great job of walking through the code. He understandably moved a bit fast, so it was hard to ingest all the code, which is normal, and then I really appreciated Greg giving a recap and the takeaways of what Chris had demoed. It really helped me retain something constructive from the code demo. All in all, I think you both did a great job. Thank you for doing this. I would love to get a copy of the slides and the code that was demoed to walk through it at my own pace and try it out.
    Will you guys be making the slides and code available?

    • @Deeplearningai
      @Deeplearningai 1 year ago +1

      We'll be following up with the slides!

    • @Jyovita1047316
      @Jyovita1047316 1 year ago

      @@Deeplearningai when?

    • @lysanderAI
      @lysanderAI 1 year ago +1

      You can find a link to the slides in the chat around the 45-minute mark in the video.

  • @membershipyuji
    @membershipyuji 1 year ago +2

    The session was great and informative. For the 2nd part, I would like to see inference results before fine-tuning as well. Bloomz is instruction-tuned already and might be good at writing marketing emails even before feeding it the 16 examples.

    • @temp_temp3183
      @temp_temp3183 1 year ago +3

      100% agree, it wasn't clear what the value add of the unsupervised training was.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      Great question!
      If you load up the model without the fine-tuning, you will see that it does "fine" on the task - but that it doesn't achieve the same "style" as we're training with the unsupervised fine-tuning. You can imagine it as more of an extension of pre-training - which uses a largely unsupervised process.
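
      For anyone who wants to try that comparison, here is a minimal sketch (not the webinar notebook) of generating from the base checkpoint and then from the fine-tuned weights, assuming the fine-tuning was saved as a PEFT adapter; the model and adapter names below are placeholders.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import PeftModel

      base_name = "bigscience/bloomz-560m"                 # stand-in for the base checkpoint
      adapter_repo = "your-username/marketing-email-lora"  # hypothetical fine-tuned adapter repo

      tokenizer = AutoTokenizer.from_pretrained(base_name)
      base_model = AutoModelForCausalLM.from_pretrained(
          base_name, torch_dtype=torch.float16, device_map="auto"
      )

      prompt = "Write a short marketing email announcing a new productivity app."
      inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)

      # Generation from the untouched base model: usually "fine", but not in the target style.
      before = tokenizer.decode(
          base_model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True
      )

      # Attach the adapter produced by the unsupervised fine-tuning and generate again.
      tuned_model = PeftModel.from_pretrained(base_model, adapter_repo)
      after = tokenizer.decode(
          tuned_model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True
      )

      print("--- before fine-tuning ---\n", before)
      print("--- after fine-tuning ---\n", after)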

  • @llohannsperanca
    @llohannsperanca 1 year ago +3

    Dears, great presentation! Thank you very much!
    I wonder where the material will be made available?

  • @seyedmohammadseyedmahdi8913
    @seyedmohammadseyedmahdi8913 1 year ago +1

    thanks!

  • @anujanand6
    @anujanand6 1 year ago +1

    That was a great presentation! Brief yet clear and to the point!
    I have a question on the make_inference function - based on the code, both outputs (the good and the bad) seem to be coming from the same fine-tuned model. In the inference function, the good outputs are from 'base_model' and the bad outputs are from 'model'. But base_model is the model that was fine-tuned and pushed to the hub, and later we import that model and store it in the object 'model'. The only difference seems to be that max_new_tokens is smaller when predicting the bad outputs. Please correct me if I'm wrong. Thanks!

  • @amortalbeing
    @amortalbeing 10 months ago

    Thanks a lot, really appreciate it. To what extent does quantizing affect the training, or the output of the model in terms of its generation capabilities? Does it dumb it down? Does it affect the loss?

  • @MS-en7el
    @MS-en7el 1 year ago +4

    Hi! Thank you for the valuable content! I still have a question, though.
    @chrisalexiuk Do I correctly assume that in both cases (instruct tuning and "unsupervised" fine-tuning) the model during the training (tuning) phase actually performs the next token prediction task and calculates loss based on that (as in typical autoregressive training of decoder) ? My point is that in both cases we simply create the text input in different formats (e.g., input combined with response [or target] in the first case) and we pass it through the base model. Is there any crucial "technical" difference underneath in presented cases?
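
    A tiny sketch of the mechanics this question describes: once the formatted string is built, the causal-LM forward pass with labels = input_ids computes the same shifted next-token cross-entropy in both setups (whether a given trainer additionally masks the prompt tokens out of the loss is a separate, configurable detail). The checkpoint and prompt template below are illustrative, not the notebook's.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloomz-560m"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Case 1: instruction-style formatting (prompt and response packed into one string).
    instruct_text = (
        "### Instruction:\nSummarize the plot of Hamlet in one sentence.\n\n"
        "### Response:\nA Danish prince seeks revenge for his father's murder."
    )
    # Case 2: "unsupervised" formatting (just raw text in the target style).
    unsupervised_text = "Subject: Meet your new favorite productivity app!\n\nHi there, ..."

    for text in (instruct_text, unsupervised_text):
        batch = tokenizer(text, return_tensors="pt")
        # labels = input_ids -> the model shifts them internally and computes the
        # standard next-token prediction loss, identically for both formats.
        outputs = model(**batch, labels=batch["input_ids"])
        print(round(outputs.loss.item(), 3))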

  • @seulkeelee4655
    @seulkeelee4655 1 year ago +2

    Thanks for the great session! Only one question... I tried the supervised instruct-tuning exactly the same way. After the training was complete, I tried to push the model to the hub, but I got an error message: "NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported." But you seemed to have no issue with pushing. Do you have any insight? Any advice? Thank you!

  • @amortalbeing
    @amortalbeing 10 months ago

    Where are the slides? I want to read the paper suggested at 30:10.
    What am I supposed to do?
    Thanks a lot in advance.

  • @fox_trot
    @fox_trot 1 year ago +7

    Will you guys be making the slides and code available?

  • @karrtikiyer1987
    @karrtikiyer1987 1 year ago

    Thanks for the nice tutorial. How do you create a custom dataset for the second part (single-task unsupervised learning)? Say I have a bunch of documents; is there some framework or library available to create a single-task dataset for unsupervised learning?

  • @akibulhaque8621
    @akibulhaque8621 11 months ago

    For the supervised instruction tuning, can I use any model? Like a Llama 2 base model, and train it?

  • @bhaveshsethi6876
    @bhaveshsethi6876 1 year ago +2

    @chris Alexiuk how did you push the 4-bit model to Hugging Face?

    • @weizhili6732
      @weizhili6732 1 year ago

      I got the same error today: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported

    • @bhaveshsethi6876
      @bhaveshsethi6876 1 year ago

      @@weizhili6732 I looked into it; it can't be saved, and loading the 4-bit model requires more GPU, so you have to go through the same process again and again.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Hey!
      You'll just want to save and push the adapter - I'll modify the notebook to a format that includes that process. You can expect that to be pushed tomorrow!
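
      A minimal sketch of the save-just-the-adapter route, assuming a LoRA/QLoRA setup via peft; the model name, LoRA hyperparameters, and repo id are placeholders, not the notebook's.

      from transformers import AutoModelForCausalLM, BitsAndBytesConfig
      from peft import LoraConfig, get_peft_model

      base = AutoModelForCausalLM.from_pretrained(
          "bigscience/bloomz-560m",  # placeholder base model
          quantization_config=BitsAndBytesConfig(load_in_4bit=True),
          device_map="auto",
      )
      peft_model = get_peft_model(
          base, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)
      )

      # ... training happens here ...

      # Saving/pushing the PeftModel writes only the small adapter weights and config,
      # which avoids calling `save_pretrained` on the 4-bit base weights themselves.
      peft_model.save_pretrained("marketing-email-lora")
      peft_model.push_to_hub("your-username/marketing-email-lora")  # hypothetical repo id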

  • @MauricioGomez-e9e
    @MauricioGomez-e9e 1 year ago +1

    Magnificent!

  • @MrLahcenDZ
    @MrLahcenDZ 1 year ago +2

    I think there's an error in formatting_func: it should be example.get("context", ""), not example.get("input", ""), since I assume the key is "context". So in your case the function will always go to the else branch, and all the data will be formatted with only an instruction and response, never with a context. Or maybe I'm missing something.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +3

      Excellent catch! This is a relic of trying it out on a few different datasets! It should be updated as of now.
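
      For anyone reading along, a hedged reconstruction of the fix being described; the field names "instruction", "context", and "response" are assumptions based on the thread, not necessarily the dataset's exact keys.

      def formatting_func(example: dict) -> str:
          # The fix: look up "context" (the dataset's key), not "input".
          context = example.get("context", "")
          if context:
              return (
                  f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Context:\n{context}\n\n"
                  f"### Response:\n{example['response']}"
              )
          return (
              f"### Instruction:\n{example['instruction']}\n\n"
              f"### Response:\n{example['response']}"
          )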

  • @androide551
    @androide551 1 year ago +2

    wen slides sir?

  • @ashsha-y5f
    @ashsha-y5f 1 year ago

    @chris - I wanted to fine-tune a Llama model on my Mac M1, but it seems bitsandbytes does not have a package for Apple Silicon yet. Any suggestions?

  • @pec8377
    @pec8377 1 year ago

    Your first model is repeating itself and does tons of weird things. What would you do to correct this? More steps? A larger dataset?

  • @ChiliJ
    @ChiliJ 1 year ago +1

    If I'm looking to teach the LLM a new programming language, should I go with instruction tuning or fine tuning?

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Fine-tuning will likely have the best results!

    • @ChiliJ
      @ChiliJ 1 year ago

      @@chrisalexiuk Thank you for being responsive. Got to check out your channel as well. Very informative!

  • @ashwinrachha1694
    @ashwinrachha1694 1 year ago +2

    I tried instruction-tuning on a custom dataset and got this error:
    ValueError: num_samples should be a positive integer value, but got num_samples=0

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      This has been corrected in the notebook now; there were some changes to the libraries that caused a few errors.

    • @ShyamSunderKumarNITD
      @ShyamSunderKumarNITD 1 year ago +4

      @@chrisalexiuk Where can I access the notebook?

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago

    I get this error when I try to push it to the hub - NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported. Has anyone seen this error?

  • @Pouya..
    @Pouya.. 8 months ago

    Are these notebooks available?

  • @prayagpurohit148
    @prayagpurohit148 1 year ago

    Hey guys, I come from a non-data-science background and am trying to automate a task: giving feedback to students. However, I am having a hard time coming up with the logic for fine-tuning. If anyone is interested in helping me out (please do), please reply to this comment. I'll give you more context about the problem if you decide to help me (please do).

  • @fintech1378
    @fintech1378 1 year ago

    I fine-tuned Llama 2 on Colab, but it says CUDA ran out of memory. What is the problem here? The video says it's possible.

  • @paparaoveeragandham284
    @paparaoveeragandham284 7 months ago

    Look it

  • @EXPERIMENTGPT
    @EXPERIMENTGPT 1 year ago +1

    @Chris Alexiuk I am getting this WARNING:accelerate.utils.modeling:The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      On which notebook is this occurring?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk Google Colab

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Is it in the Supervised Fine-tuning notebook?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk yes sir

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Hey! Sorry for the late reply, I don't wind up getting notifications on these comments: I didn't encounter this issue - could you send me a copy of your notebook?

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago +1

    I am using openlm-research/open_llama_7b_v2. The training starts with a loss around 1.26, and after 5K steps the loss goes down to 1.02. I am not sure why the numbers are so different from the presentation and why the model is learning so slowly. Any suggestions?