Great session. Really helps in starting to understand the key building blocks and considerations required to do model fine-tuning. Great job Chris and Greg - and thanks!
This is very educational content. Most of my main questions were answered. Fantastic wrap-up. Thank you, guys!
Outstanding! I was looking around for exactly this for the last week.
Such excellent explanations, perfect understanding achieved! Thanks!
Awesome Webinar.
Thanks, Chris and Greg
Thank you so much; you've made the idea very simple.
Love the energy. Thanks for the session!
Excellent session, thanks!
That was an excellent presentation. I feel like I learned a lot. I am frequently disappointed by these 1-hr webinars. I really appreciated the way both of you complemented each other. It was great to get the top-level view, and Chris did a great job of walking through the code. He understandably moved a bit fast, so it was hard to ingest all the code, which is normal, and then I really appreciated Greg giving a recap and the takeaways of what Chris had demoed. It really helped me retain something constructive from the code demo. All in all, I think you both did a great job. Thank you for doing this. I would love to get a copy of the slides and the code that was demoed to walk through it at my own pace and try it out.
Will you guys be making the slides and code available?
We'll be following up with the slides!
@Deeplearningai when?
You can find a link to the slides in the chat around the 45-minute mark in the video.
The session was great and informative. For the 2nd part, I would like to see inference results before fine-tuning as well. Bloomz is instruction-tuned already and might be good at writing marketing emails even before being fed the 16 examples.
100% agree, it wasn't clear what the value add of the unsupervised training was.
Great question!
If you load up the model without the fine-tuning, you will see that it does "fine" on the task - but it doesn't achieve the same "style" that we're training for with the unsupervised fine-tuning. You can imagine it as more of an extension of pre-training, which uses a largely unsupervised process.
Dear all, great presentation! Thank you very much!
I wonder where the materials will be available?
thanks!
That was a great presentation! Brief yet clear and to the point!
I have a question on the make_inference function. Based on the code, both the outputs (the good and the bad) seem to be coming from the same fine-tuned model. In the inference function, the good outputs are from 'base_model' and the bad outputs are from 'model'. But base_model is the model that was fine-tuned and pushed to the hub, and later we import that model and store it in the object 'model'. The only difference seems to be that max_new_tokens is smaller when predicting the bad outputs. Please correct me if I'm wrong. Thanks!
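For readers untangling the same thing, here is a minimal sketch, not the notebook's exact code, of how a before/after comparison would look if the two objects were genuinely different models. All names (make_inference, the checkpoint, the adapter repo) are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloomz-560m"   # assumed base checkpoint
adapter_id = "your-username/your-adapter"  # assumed fine-tuned adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# One copy of the base weights left untouched...
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
# ...and a second copy with the trained LoRA adapter attached,
# so the comparison really is base vs. fine-tuned.
tuned_model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_model_id), adapter_id
)

def make_inference(prompt: str, model, max_new_tokens: int = 100) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

prompt = "Write a marketing email for a new running shoe."
print("base :", make_inference(prompt, base_model))
print("tuned:", make_inference(prompt, tuned_model))
```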
Thanks a lot, really appreciate it. To what extent does quantizing affect the training, or the output of the model in terms of generation capabilities? Does it dumb it down? Does it affect the loss?
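For context, this is roughly what a QLoRA-style 4-bit load looks like with bitsandbytes: the frozen base weights are quantized while the trainable LoRA adapters stay in higher precision, which is why the QLoRA paper reports only a small quality hit. The checkpoint name here is just a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Standard 4-bit NF4 quantization config from bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-560m",       # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```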
Hi! Thank you for the valuable content! Although I still have a question.
@chrisalexiuk Do I correctly assume that in both cases (instruct tuning and "unsupervised" fine-tuning) the model during the training (tuning) phase actually performs the next-token prediction task and calculates loss based on that (as in typical autoregressive training of a decoder)? My point is that in both cases we simply create the text input in different formats (e.g., input combined with response [or target] in the first case) and we pass it through the base model. Is there any crucial "technical" difference underneath in the presented cases?
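This intuition can be made concrete with a tiny illustrative sketch (field names and the prompt template are assumptions, not the notebook's schema): in both setups the trainer ends up computing the standard causal-LM cross-entropy loss over a formatted string.

```python
def format_instruct(example):
    # Supervised/instruct case: prompt and response are concatenated
    # into one training sequence.
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )

def format_unsupervised(example):
    # "Unsupervised" case: the raw text itself is the training sequence.
    return example["text"]

# In both cases the trainer tokenizes the resulting string and computes
# next-token cross-entropy on the shifted tokens; only the string differs.
```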
Thanks for the great session! Only one question... I tried the supervised instruct-tuning exactly the same way. After the training was complete, I tried to push the model to the hub, but I got an error message: "NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported." But you seemed to have no issue with pushing. Do you have any insight? Any advice? Thank you!
Where are the slides? I want to read the paper suggested at 30:10.
What am I supposed to do?
Thanks a lot in advance
Will you guys be making the slides and code available?
They have said they will share that in an email.
@ShyamSunderKumarNITD thank you
Has this email been sent?
I have not received it yet.
Thanks for the nice tutorial. How do you create a custom dataset for the second part (single-task unsupervised learning)? Say I have a bunch of documents; is there some framework or library available to create a single-task dataset for unsupervised learning?
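One straightforward route is the Hugging Face `datasets` library. A minimal sketch, assuming a folder of plain-text files (the path and file pattern are placeholders):

```python
from pathlib import Path
from datasets import Dataset

# Read every .txt file in an assumed "my_docs" folder into memory.
docs = [p.read_text(encoding="utf-8") for p in Path("my_docs").glob("*.txt")]

# A single "text" column is all a causal-LM (unsupervised) trainer needs.
dataset = Dataset.from_dict({"text": docs})
print(dataset)
```

`load_dataset("text", data_files="my_docs/*.txt")` is a built-in shortcut for the same thing.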
For the supervised instruction set, can I use any model? Like a Llama 2 base model, and train it?
@Chris Alexiuk how did you push the 4-bit model to Hugging Face?
I got the same error today: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported
@weizhili6732 I searched it; the 4-bit model can't be saved, and loading in 4-bit requires more GPU, so you have to go through the same process again and again.
Hey!
You'll just want to save and push the adapter - I'll modify the notebook to a format that includes that process. You can expect that to be pushed tomorrow!
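For anyone hitting the `save_pretrained` error above, the adapter-only save looks roughly like this, assuming `model` is the trained PeftModel from the notebook and the repo name is a placeholder:

```python
# Saves only the small LoRA adapter weights, not the 4-bit base model.
model.save_pretrained("my-adapter")
model.push_to_hub("your-username/my-adapter")

# Later, reload by attaching the adapter to a freshly loaded base model:
# from peft import PeftModel
# model = PeftModel.from_pretrained(base_model, "your-username/my-adapter")
```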
Magnificent!
I think there's an error in the formatting_func: it's not `example.get("input", "")` but `example.get("context", "")`, since I assume the key is context. So in your case the function will always go to the else branch, and all the data will be formatted with only instruction and response, never with a context. Or maybe I'm missing something...
Excellent catch! This is a relic of trying it out on a few different datasets! It should be updated as of now.
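For readers following along, the corrected branch would look something like this sketch (the prompt template and field names are assumed from the discussion above, not the notebook's exact code):

```python
def formatting_func(example):
    # Use the "context" key, as noted in the comment above, not "input".
    context = example.get("context", "")
    if context:
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Context:\n{context}\n\n"
            f"### Response:\n{example['response']}"
        )
    # Without a context, fall back to instruction + response only.
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )
```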
When will the slides be available, sir?
@chris - I wanted to fine-tune a Llama model on my Mac M1, but it seems bitsandbytes does not have a package for Apple Silicon yet. Any suggestions?
Your first model is repeating itself and does tons of weird things. What would you do in order to correct this? More steps? A larger dataset?
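Not from the session, but a common first aid before retraining is to constrain decoding; these are standard Hugging Face `generate` arguments (the surrounding `model` and `inputs` objects are assumed from the notebook):

```python
output = model.generate(
    **inputs,
    max_new_tokens=128,
    repetition_penalty=1.2,   # penalize tokens the model has already emitted
    no_repeat_ngram_size=3,   # forbid repeating any 3-gram verbatim
    do_sample=True,
    temperature=0.7,
)
```

If the repetition survives decoding tweaks, then more steps or a larger, cleaner dataset is the usual next lever.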
If I'm looking to teach the LLM a new programming language, should I go with instruction tuning or fine tuning?
Fine-tuning will likely have the best results!
@chrisalexiuk thank you for being responsive. Got to check out your channel as well. Very informative!
I tried instruction-tuning on a custom dataset and got this error:
ValueError: num_samples should be a positive integer value, but got num_samples=0
This has been corrected in the notebook now; there were some changes to the libraries that caused a few errors.
@chrisalexiuk From where can I access the notebook?
I get this error when I try to push it to the hub - NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported. Has anyone seen this error?
Are these notebooks available?
Hey guys, I come from a non-data-science background and am trying to automate a task. I want to automate the task of giving feedback to students. However, I am having a hard time coming up with the logic for fine-tuning. If anyone is interested in helping me out (please do), please reply to this comment. I'll give you more context about the problem if you decide to help me (please do).
I fine-tuned Llama 2 on Colab, but it says CUDA ran out of memory. What is the problem here? Because the video says it's possible.
Look it
@Chris Alexiuk I am getting this WARNING:accelerate.utils.modeling:The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
On which notebook is this occurring?
@chrisalexiuk Google Colab
@EXPERIMENTGPT Is it in the Supervised Fine-tuning notebook?
@chrisalexiuk yes, sir
@EXPERIMENTGPT Hey! Sorry for the late reply; I don't wind up getting notifications on these comments. I didn't encounter this issue - could you send me a copy of your notebook?
I am using openlm-research/open_llama_7b_v2. The training starts with loss around 1.26, and after 5K steps the loss goes down to 1.02. I am not sure why the numbers are so different from the presentation and the model is learning so slowly. Any suggestions?