Building with Instruction-Tuned LLMs: A Step-by-Step Guide

  • Published May 30, 2023
  • Welcome to the Building with Instruction-Tuned LLMs: A Step-by-Step Guide workshop!
    We will be taking questions during the event. Please submit your question or upvote others' here:
    app.sli.do/event/erFLUz3s8yWh...
    Speakers
    Dr. Greg Loughnane, Head of Product & Curriculum at FourthBrain
    / gregloughnane
    Chris Alexiuk, LLM Instructor at FourthBrain
    / csalexiuk
    Let us know how we're doing! We'll be giving out discount codes to a select number of people who fill out the survey:
    forms.gle/qzBWfovxPTLPeVRN6
    Looking to connect with your peer learners, share projects, and swap advice? Join our AI community:
    community.deeplearning.ai/inv...
    To learn more about us and sign up for future events:
    www.deeplearning.ai/events/

Comments • 58

  • @steveking5858 1 year ago +1

    Great session. Really helps in starting to understand the key building blocks and considerations required to do model fine-tuning. Great job Chris and Greg - and thanks!

  • @redfield126 1 year ago +1

    This is very educational content. I found most of my main questions answered. Fantastic wrap-up. Thank you, guys!

  • @prizmaweb 1 year ago +2

    Outstanding! I was looking around for exactly this for the last week.

  • @fabianaltendorfer11 11 months ago

    Love the energy. Thanks for the session!

  • @wangsherpa2801 1 year ago +1

    Excellent session, thanks!

  • @fal57 10 months ago

    Thank you so much; you've made the idea very simple.

  • @chukypedro818 1 year ago +1

    Awesome Webinar.
    Thanks Chris and Greg

  • @archit_singh15 7 months ago

    Such excellent explanations, perfect understanding achieved! Thanks.

  • @seyedmohammadseyedmahdi8913 1 year ago +1

    thanks!

  • @RaymonddeLacaze 1 year ago +17

    That was an excellent presentation. I feel like I learned a lot. I am frequently disappointed by these 1-hr webinars, but I really appreciated the way you two complemented each other. It was great to get the top-level view, and Chris did a great job of walking through the code. Understandably, he moved a bit fast, so it was hard to digest all the code, which is normal, and then I really appreciated Greg giving a recap and the takeaways of what Chris had demoed. It really helped me retain something constructive from the code demo. All in all, I think you both did a great job. Thank you for doing this. I would love to get a copy of the slides and the code that was demoed to walk through it at my own pace and try it out.
    Will you guys be making the slides and code available?

    • @Deeplearningai 1 year ago +1

      We'll be following up with the slides!

    • @Jyovita1047316 1 year ago

      @@Deeplearningai when?

    • @productivitylaunchpad 1 year ago +1

      You can find a link to the slides in the chat around the 45-minute mark in the video.

  • @user-pn7jh5xr3n 1 year ago +1

    Magnificent!

  • @seulkeelee4655 1 year ago +2

    Thanks for the great session! Only one question... I tried the supervised instruction-tuning exactly the same way. After the training completed, I tried to push the model to the hub, but I got an error message: "NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported." You seemed to have no issue with pushing, though. Do you have any insight or advice? Thank you!

  • @MS-en7el 1 year ago +3

    Hi! Thank you for the valuable content! I still have a question, though.
    @chrisalexiuk Do I correctly assume that in both cases (instruction tuning and "unsupervised" fine-tuning) the model during the training (tuning) phase actually performs the next-token prediction task and calculates loss based on that (as in typical autoregressive training of a decoder)? My point is that in both cases we simply create the text input in different formats (e.g., input combined with response [or target] in the first case) and pass it through the base model. Is there any crucial "technical" difference underneath in the presented cases?
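    As far as I know, yes - both setups go through the same causal-LM machinery, and only the text formatting differs. A minimal sketch of that point, assuming the Hugging Face transformers API, an illustrative bloomz checkpoint, and a made-up prompt template (all assumptions, not necessarily the workshop's exact choices):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Illustrative small checkpoint (an assumption, not the workshop's model).
      checkpoint = "bigscience/bloomz-560m"
      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = AutoModelForCausalLM.from_pretrained(checkpoint)

      # Instruction-style sample: prompt and response are concatenated into one
      # string, so the model still just predicts the next token over the sequence.
      text = (
          "### Instruction:\nWrite a short marketing email.\n\n"
          "### Response:\nHi there! Check out our new product line."
      )
      inputs = tokenizer(text, return_tensors="pt")

      # labels=input_ids gives the standard next-token cross-entropy loss, the
      # same machinery for both instruction tuning and "unsupervised" tuning.
      # (Some setups mask prompt tokens with -100 so loss is computed only on
      # the response; that is the main technical knob that can differ.)
      outputs = model(**inputs, labels=inputs["input_ids"])
      print(outputs.loss)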

  • @anujanand6 9 months ago +1

    That was a great presentation! Brief yet clear and to the point!
    I have a question on the make_inference function: based on the code, both outputs (the good and the bad) seem to come from the same fine-tuned model. In the inference function, the good outputs are from 'base_model' and the bad outputs are from 'model', but base_model is the model that was fine-tuned and pushed to the hub, and later we import that model and store it in the object 'model'. The only difference seems to be that max_new_tokens is smaller when predicting the bad outputs. Please correct me if I'm wrong. Thanks!

  • @membershipyuji 1 year ago +2

    The session was great and informative. For the 2nd part, I would like to see inference results before fine-tuning as well. The bloomz model is instruction-tuned already and might be good at writing marketing emails even before being fed the 16 examples.

    • @temp_temp3183 1 year ago +3

      100% agree; it wasn't clear what the value add of the unsupervised training was.

    • @chrisalexiuk 1 year ago

      Great question!
      If you load up the model without the fine-tuning, you will see that it does "fine" on the task - but that it doesn't achieve the same "style" as we're training with the unsupervised fine-tuning. You can imagine it as more of an extension of pre-training - which uses a largely unsupervised process.
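      To make that comparison concrete, here is a minimal sketch that generates from the base model before any fine-tuning, so its style can be compared side by side with the tuned model's output on the same prompt (the checkpoint and prompt are illustrative assumptions, not the notebook's exact values):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Illustrative checkpoint (an assumption); substitute the notebook's model.
        checkpoint = "bigscience/bloomz-560m"
        tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        base_model = AutoModelForCausalLM.from_pretrained(checkpoint)

        prompt = "Write a marketing email announcing a new product line."
        inputs = tokenizer(prompt, return_tensors="pt")

        # Generate from the *base* model only; run the same prompt through the
        # fine-tuned model to see the difference in style rather than ability.
        output_ids = base_model.generate(**inputs, max_new_tokens=128)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))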

  • @llohannsperanca 1 year ago +3

    Dears, great presentation! Thank you very much!
    I wonder where the material will be available.

  • @akibulhaque8621 6 months ago

    For the supervised instruction set, can I use any model, like a Llama 2 base model, and train it?

  • @MrLahcenDZ 1 year ago +2

    I think there's an error in the formatting_func: it should be example.get("context", "") rather than example.get("input", ""), since I assume the key is context. So in your case the function will always go to the else branch, and all the data will be formatted with only instruction and response, never with a context. Or maybe I'm missing something.

    • @chrisalexiuk 1 year ago +3

      Excellent catch! This is a relic of trying it out on a few different datasets! It should be updated as of now.
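      For reference, a sketch of what the corrected branch presumably looks like, keyed on the context field (the prompt template and the Dolly-style field names instruction/context/response are assumptions):

        def formatting_func(example):
            # Assumes Dolly-style fields: "instruction", "context", "response".
            if example.get("context", ""):
                return (
                    f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Context:\n{example['context']}\n\n"
                    f"### Response:\n{example['response']}"
                )
            # No context: format with instruction and response only.
            return (
                f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['response']}"
            )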

  • @karrtikiyer1987 11 months ago

    Thanks for the nice tutorial. How do you create a custom dataset for the second part (single-task unsupervised learning)? Say I have a bunch of documents; is there some framework or library available to create a single-task dataset for unsupervised learning?
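    One possible approach (an assumption, not something shown in the workshop): the Hugging Face datasets library can turn a folder of plain-text documents into a dataset directly, which is all single-task unsupervised fine-tuning needs:

      from datasets import load_dataset

      # Each line becomes one example with a single "text" field (pass
      # sample_by="document" to keep whole files together instead), ready
      # for unsupervised causal-LM fine-tuning.
      dataset = load_dataset("text", data_files={"train": "docs/*.txt"})
      print(dataset["train"][0])  # {'text': '...'}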

  • @user-bg2yo8uq1k 9 months ago

    @chris - I wanted to fine-tune a Llama model on my Mac M1, but it seems bitsandbytes does not have a package for Apple silicon yet. Any suggestions?

  • @amortalbeing 5 months ago

    Thanks a lot, really appreciate it. To what extent does quantization affect the training, or the output of the model in terms of generation capabilities? Does it dumb it down? Does it affect the loss?

  • @pec8377 9 months ago

    Your first model is repeating itself and doing tons of weird things. What would you do to correct this? More steps? A larger dataset?

  • @amortalbeing 5 months ago

    Where are the slides? I want to read the paper suggested at 30:10.
    What am I supposed to do?
    Thanks a lot in advance.

  • @ashishsharma-fy7ox 9 months ago

    I get this error when I try to push it to the hub: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported. Has anyone seen this error?

  • @ashwinrachha1694 1 year ago +2

    I tried instruction-tuning on a custom dataset and got this error:
    ValueError: num_samples should be a positive integer value, but got num_samples=0

    • @chrisalexiuk 1 year ago

      This has been corrected in the notebook now; there were some changes to the libraries that caused a few errors.

    • @ShyamSunderKumarNITD 1 year ago +4

      @@chrisalexiuk From where can I access the notebook?

  • @ChiliJ 1 year ago +1

    If I'm looking to teach the LLM a new programming language, should I go with instruction tuning or fine-tuning?

    • @chrisalexiuk 1 year ago +1

      Fine-tuning will likely have the best results!

    • @ChiliJ 1 year ago

      @@chrisalexiuk thank you for being responsive. Got to check out your channel as well. Very informative!

  • @androide551 1 year ago +2

    wen slides sir?

  • @bhaveshsethi6876 1 year ago +2

    @chrisalexiuk how did you push the 4-bit model to Hugging Face?

    • @weizhili6732 1 year ago

      I got the same error today: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported

    • @bhaveshsethi6876 1 year ago

      @@weizhili6732 I searched it; the 4-bit model can't be saved, and loading in 4-bit requires more GPU, so you have to go through the same process again and again.

    • @chrisalexiuk 1 year ago +1

      Hey!
      You'll just want to save and push the adapter - I'll modify the notebook to a format that includes that process. You can expect that to be pushed tomorrow!
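      A sketch of that workflow with PEFT (the repo names are placeholders and the base checkpoint is just an example): pushing the PeftModel uploads only the small adapter files, which avoids calling save_pretrained on the 4-bit base weights.

        from peft import PeftModel
        from transformers import AutoModelForCausalLM

        # `model` here is the trained PeftModel wrapping the 4-bit base.
        # Pushing it uploads only the adapter weights and config.
        model.push_to_hub("your-username/your-adapter-repo")  # placeholder name

        # To reload later: re-quantize the base model, then attach the adapter.
        base = AutoModelForCausalLM.from_pretrained(
            "openlm-research/open_llama_7b_v2",  # example base; use your own
            load_in_4bit=True,
            device_map="auto",
        )
        model = PeftModel.from_pretrained(base, "your-username/your-adapter-repo")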

  • @fox_trot 1 year ago +7

    Will you guys be making the slides and code available?

  • @paparaoveeragandham284 2 months ago

    Look it

  • @Pouya.. 3 months ago

    Are these notebooks available?

  • @prayagpurohit148 9 months ago

    Hey guys, I come from a non-data-science background and am trying to automate the task of giving feedback to students. However, I am having a hard time coming up with the logic for fine-tuning. If anyone is interested in helping me out, please reply to this comment and I'll give you more context about the problem.

  • @fintech1378 8 months ago

    I fine-tuned Llama 2 on Colab, but it says CUDA ran out of memory. What is the problem here? The video says it's possible.
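    A free-tier Colab GPU often can't hold Llama 2 7B in full precision; here is a sketch of the usual memory savers (the gated Meta checkpoint and the settings shown are typical assumptions, not the workshop's exact config):

      import torch
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig

      # Load the base model in 4-bit so it fits a ~15 GB Colab GPU.
      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,
          bnb_4bit_quant_type="nf4",
          bnb_4bit_compute_dtype=torch.float16,
      )
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",  # gated repo; requires approved access
          quantization_config=bnb_config,
          device_map="auto",
      )
      model.gradient_checkpointing_enable()  # trades compute for memory

      # Also keep per_device_train_batch_size at 1-2 and raise
      # gradient_accumulation_steps in TrainingArguments to compensate.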

  • @ashishsharma-fy7ox 9 months ago +1

    I am using openlm-research/open_llama_7b_v2. The training starts with a loss around 1.26, and after 5K steps the loss goes down to 1.02. I am not sure why the numbers are so different from the presentation, and the model is learning very slowly. Any suggestions?

  • @user-rh3cy6xf3v 1 year ago +1

    @chrisalexiuk I am getting this warning: WARNING:accelerate.utils.modeling:The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.

    • @chrisalexiuk 1 year ago

      On which notebook is this occurring?

    • @user-rh3cy6xf3v 1 year ago +1

      @@chrisalexiuk google colab

    • @chrisalexiuk 1 year ago

      @@user-rh3cy6xf3v Is it in the Supervised Fine-tuning notebook?

    • @user-rh3cy6xf3v 1 year ago +1

      @@chrisalexiuk yes sir

    • @chrisalexiuk 1 year ago

      @@user-rh3cy6xf3v Hey! Sorry for the late reply; I don't wind up getting notifications on these comments. I didn't encounter this issue - could you send me a copy of your notebook?