This is one of the easiest to follow explanations of LoRA that I’ve seen. Thanks a lot.
Glad you found it useful!
Another great video. I appreciate that you don't skip on giving context and lay a good foundation. Makes understanding a lot easier. Thanks!
Amazing video, feel like I finally understood every aspect of LoRA, thank you!
Glad it was helpful :)
I've been scouring for a video like this. You're the best explanation so far!
The best explanation I have found on the internet
This is a perfect explanation of LoRA, the best I have found so far!!!
Thanks!
Thank you!
Bro u killed it. Best explanation. Trust me I have watched all tutorials but all other explanations were shitty.
Please create one video on quantization.
Awesome! Waiting for a video on implementing LoRA from scratch in pytorch.
Nice job with summarizing transfer learning and LoRA!
thank you for your great explanation
Thanks for the video!!
What an excellent video!! Congrats!!
Thanks a lot! Amazing explanation, very clear and straightforward.
Very good explanation
Amazing explanation! Thanks a lot!
Explained quite well!
you make the best explanation videos everrrr! Is there one on how to train custom models using LoRA?
great video to explain lora! thanks
gold
perfect video
Another great video, keep it up!
Great video! Liked and subscribed
Really Helpful!
Good explanation
Brilliant
please make a video on QLoRA
Good summary! Next time it would be great if you added headings to the tables you show in the video. Sometimes it is hard to follow. For example, what is computational efficiency? Is it inference time, or the increase in inference time relative to the increase in performance (e.g. accuracy, recall, etc.)? Thanks.
After watching many LoRA videos, this one finally satisfied me. I have a question: I see that in the fine-tuning code they talk about merging LoRA adapters. What is that? Is it h += x @ (W_A @ W_B) * alpha? Can you mix and match adapters to improve the evaluation score?
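Merging an adapter just means folding the trained low-rank update into the base weight once training is done, so inference needs no extra matrix multiplications. Here is a minimal PyTorch sketch using the shapes implied by the comment (W_A of size d_in x r, W_B of size r x d_out); the names and the alpha / r scale are illustrative assumptions, not taken from the video:

import torch

d_in, d_out, r, alpha = 512, 512, 8, 16
scale = alpha / r  # common convention; the exact scale depends on the implementation

W   = torch.randn(d_in, d_out)        # frozen base weight
W_A = torch.randn(d_in, r) * 0.01     # low-rank factor A
W_B = torch.randn(r, d_out) * 0.01    # low-rank factor B (the LoRA paper initializes this to zero; random here so the demo is non-trivial)
x   = torch.randn(4, d_in)            # a batch of inputs

# un-merged forward pass: base path plus adapter path, i.e. h += x @ (W_A @ W_B) * scale
h_unmerged = x @ W + (x @ W_A @ W_B) * scale

# "merging" precomputes the same update directly into the weight
W_merged = W + (W_A @ W_B) * scale
h_merged = x @ W_merged               # identical output, no adapter branch at inference time

print(torch.allclose(h_unmerged, h_merged, atol=1e-5))  # True

As for mixing and matching: several adapters can be summed into the same weight in the same way, but whether that improves the evaluation score depends on how compatible the tasks they were trained on are.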
good job!
Thank you very much for this amazing video. However, although this was probably only for demo purposes of a forward pass after LoRA fine-tuning, the modified forward pass method you've shown might be misleading, since the forward pass of the function is assumed to be entirely linear. So, does the addition of the LoRA fine-tuned weights to the base model weights happen directly within the model weights file (like .safetensors), or can it be done at a higher level in PyTorch or TensorFlow?
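The merge is exact only for the linear projections the adapter wraps; the surrounding nonlinearities are untouched, which is why the demo treats that layer as linear. And the merge is normally done at the framework level: you load the tensors, add the low-rank product to the base weight in PyTorch (or TensorFlow), and only then serialize the merged result back out, e.g. to .safetensors. A rough sketch, assuming plain PyTorch and the safetensors package (file name and values are made up):

import torch
import torch.nn as nn
from safetensors.torch import save_file

layer = nn.Linear(512, 512, bias=False)   # frozen base layer; weight shape is (out_features, in_features)
r, alpha = 8, 16
A = torch.randn(r, 512) * 0.01            # trained low-rank factors (illustrative values)
B = torch.randn(512, r) * 0.01

# merge at the PyTorch level: add the low-rank update to the existing weight tensor
with torch.no_grad():
    layer.weight.add_((B @ A) * (alpha / r))   # (512, r) @ (r, 512) matches the weight shape

# only afterwards is the merged weight written to disk
save_file({"layer.weight": layer.weight.detach().contiguous()}, "merged_layer.safetensors")

Libraries such as PEFT expose the same idea as a single merge call on the wrapped model, but nothing needs to be edited inside the .safetensors file itself.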
Do you know what \Delta W is? How is it defined?
What software do you use to make videos?
Yes, it was indeed helpful! Do you have a video on quantization?
How does LoRA fine-tuning track changes by creating two decomposition matrices? How is ΔW determined?
same question
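For the ΔW questions above: in the LoRA paper's formulation (a sketch of the standard definition; the notation may differ slightly from the video), ΔW is never computed as a full dense matrix. It is defined implicitly as the product of the two small decomposition matrices, which are the only parameters that receive gradients while the base weight stays frozen:

% weight update parameterized by two low-rank factors
W' = W_0 + \Delta W, \qquad \Delta W = B A,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)

% forward pass with the scaled update; only A and B are trained, W_0 stays frozen
h = W_0 x + \frac{\alpha}{r}\, B A x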
Amazing!
XAI techniques on LLMs are a really interesting topic! When would you consider covering it?
I think the LoRA update is scaled by the square root of the rank, not the rank.
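For reference, both conventions exist, assuming this refers to the factor applied to the low-rank update: the original LoRA paper divides by the rank, while the rank-stabilized LoRA (rsLoRA) variant argues for dividing by its square root:

h = W_0 x + \frac{\alpha}{r}\, B A x            % original LoRA scaling
h = W_0 x + \frac{\alpha}{\sqrt{r}}\, B A x     % rank-stabilized LoRA (rsLoRA) scaling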
I'm interested in fine-tuning a Large Language Model to specialize in specific knowledge, for example about fish species, such as which fish can be found in certain seas or which are prohibited from fishing. Could you guide me on how to prepare a dataset for this purpose? Should I structure it as simple input-output pairs (e.g., 'What fish are in the Mediterranean Sea?' -> 'XX fish can be found in the Mediterranean Sea'), or is it better to create a more complex dataset with multiple columns containing various details about each fish species? Any advice on dataset preparation for fine-tuning an LLM in this context would be greatly appreciated.
Thanks in advance!
Thanks
Good video
How do you make the illustrations?
As clear as mud
Is it a compliment or no? Cause mud is not clear.
@anudeepk7390 😂😂😂😂😂😂
It was pretty clear why u hating
Idiot, read the paper. Lol
thanks