This is the best channel for learning CS ❤❤
Thanks a lot - please share with friends as well.
Thanks for the amazing video.
Hi Aman, do you mean that during back propagation some weights are ignored, using something like no_grad?
Hmm, yes. During back propagation in the transformer some weights are ignored (kept frozen), and to handle this they introduce additional learnable parameters.
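To illustrate what "ignored during back propagation" means in practice, here is a minimal PyTorch sketch (toy layer sizes, not the transformer from the video) where one layer is frozen with requires_grad = False and therefore gets no gradients:

```python
import torch
import torch.nn as nn

# Toy model: freeze one layer so back propagation skips its weights,
# while the other layer still receives gradients.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Freeze the first Linear layer; its weights are "ignored" during training.
for param in model[0].parameters():
    param.requires_grad = False

x = torch.randn(8, 16)
loss = model(x).sum()
loss.backward()

print(model[0].weight.grad)        # None: no gradient for the frozen layer
print(model[2].weight.grad.shape)  # torch.Size([4, 32]): trainable layer gets gradients
```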
Are you asking about LoRA, or something else that we discussed in the video?
Why is get_peft_model not used in the Trainer's model parameter?
I will check the script and get back.
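For reference, the common pattern with the peft library is to wrap the base model with get_peft_model first and then pass the wrapped model to Trainer, so only the LoRA adapter weights are trained. A minimal sketch; the model name, LoRA settings, and dataset placeholder are illustrative and not taken from the video's script:

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

# Load a base model and inject LoRA adapters into its attention projections.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")             # placeholder model
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                         task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()   # sanity check: only LoRA params are trainable

trainer = Trainer(
    model=peft_model,                      # pass the wrapped model here, not base_model
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2),
    # train_dataset=tokenized_dataset,     # plug in your tokenized dataset before training
)
# trainer.train()
```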
Thank you Sir!
I really like the way you simplify these complex concepts.
For future in-depth videos, I would request that you go deeper into what is commonly used in industry projects. ❤
Yes sure - will do
Sir, thank you for the helpful explanation. Looking forward to more detail regarding Azure OpenAI. Thanks!
As soon as possible
LoRA stands for Low-Rank Adaptation.
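As a rough sketch of the idea (sizes and initialization below are illustrative): the pretrained weight W stays frozen, and only two small low-rank matrices B and A are trained, so the learned update B @ A adds far fewer parameters than W itself.

```python
import torch

d_out, d_in, r = 768, 768, 8                    # illustrative sizes, rank r much smaller than d

W = torch.randn(d_out, d_in)                    # frozen pretrained weight
A = torch.randn(r, d_in, requires_grad=True)    # trainable low-rank factor
B = torch.zeros(d_out, r, requires_grad=True)   # trainable; starts at zero so the update is zero

x = torch.randn(4, d_in)
y = x @ W.T + x @ (B @ A).T                     # frozen path + low-rank adapter path

print(W.numel())               # 589824 parameters in the full matrix
print(A.numel() + B.numel())   # 12288 trainable LoRA parameters (~2%)
```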
Thank you.
Can you make a full LLM series?
Many videos are coming. Please press the bell icon to get all notifications.
I wanted to reach out to you, sir. What is the procedure for that?
You can email me. My email ID is on the channel page.
Don't use a simple dataset; use one with at least 2 to 4 fields such as instruction, input, and output. For fine-tuning an LLM it helps a lot.
Many thanks for watching, sure will do
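For illustration, a single record in such a richer dataset might look like the sketch below; the field names follow the common instruction/input/output convention and the values are made up:

```python
# One record with instruction, input, and output fields (values are made up).
record = {
    "instruction": "Summarize the text in one sentence.",
    "input": "LoRA fine-tunes large models by training small low-rank adapter matrices.",
    "output": "LoRA adapts big models cheaply by learning small low-rank matrices.",
}

# A common way to flatten such a record into a single training prompt.
prompt = (
    f"### Instruction:\n{record['instruction']}\n\n"
    f"### Input:\n{record['input']}\n\n"
    f"### Response:\n{record['output']}"
)
print(prompt)
```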
Planning to fine-tune the GPT-2 small LLM (roughly 124 million parameters).
Will a laptop with an i5 13th Gen CPU and an Nvidia RTX 3050 6GB graphics card be able to handle this task?
Use small batch sizes and gradient accumulation.
Enable mixed-precision (FP16) training in PyTorch via torch.cuda.amp.
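A minimal sketch of both tips together; it uses a toy model instead of GPT-2, requires a CUDA GPU, and the batch size and accumulation steps are only illustrative:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(128, 2).cuda()                 # toy stand-in for GPT-2
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scaler = GradScaler()                                  # handles FP16 loss scaling
accum_steps = 8                                        # effective batch = 2 * 8 = 16

optimizer.zero_grad()
for step in range(80):
    x = torch.randn(2, 128, device="cuda")             # small micro-batch to fit in 6 GB VRAM
    y = torch.randint(0, 2, (2,), device="cuda")

    with autocast():                                    # forward pass in mixed precision
        loss = torch.nn.functional.cross_entropy(model(x), y) / accum_steps

    scaler.scale(loss).backward()                       # accumulate scaled gradients
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                          # unscale and apply the update
        scaler.update()
        optimizer.zero_grad()
```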
Thank you Sir!
I really like the way you simplify these complex concepts.
For future in-depth videos, I would request that you go deeper into what is commonly used in industry projects. ❤
Thanks a lot - many videos on similar topics are on the way. Just ensure you press the bell icon