My favorite data science YouTuber these days! Thank you. So many channels now are pure hype, delivering AI news with no substance... but you are an inspiration. Damn, I want to read at least a paper a day now!
Do it! Smile ....
I just learned that the PiSSA method from PEFT probably solves this problem: it initializes the LoRA with singular vectors taken from the model, so it does not have to add new intruder dimensions.
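If anyone wants to try it: recent versions of the peft library expose PiSSA through the LoRA init option. A minimal sketch (the model name and target modules below are just placeholders, check that your peft version supports this):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model, just to illustrate the setup
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# PiSSA initializes the LoRA A/B matrices from the top singular vectors of the
# pretrained weight (the residual stays in the frozen base weight), instead of
# the usual random-A / zero-B initialization.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    init_lora_weights="pissa",  # or e.g. "pissa_niter_4" for a faster approximate SVD
)
peft_model = get_peft_model(model, config)
```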
How do you measure a model's generalization capability? It is a really fuzzy and vague concept, and we keep using it without having a clear way to measure it.
How does fine-tuning a vision model go with custom OCR and VQA datasets?
If I understand it correctly, doing a full fine-tune and then running SVD on the difference between the fine-tuned and the original weights would create a LoRA that does not suffer from this problem. Is that correct?
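That's how I read it too: take the truncated SVD of the per-layer weight delta. A rough sketch of the idea, per weight matrix (names are mine, not from the paper):

```python
import torch

def lora_from_delta(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int):
    """Best rank-r approximation of the full fine-tuning update delta = w_ft - w_base,
    returned as LoRA-style factors so that w_base + B @ A ~= w_ft."""
    delta = (w_ft - w_base).float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]   # shape (out_features, rank)
    A = Vh[:rank, :]             # shape (rank, in_features)
    return A, B
```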
Could you train LoRAs at different ranks, scaling up and measuring performance? Then, when you reach an acceptable level of performance, or when the improvement falls below a certain level, you stop the process. It might involve some upfront cost, but I assume you would save on inference down the line because the 'acceptable' LoRA would be computationally more efficient than the fully fine-tuned model. It would depend on the use case: if you are doing lots of inference, it would definitely pay off down the line. It would be interesting to compare the cost of training multiple LoRAs this way with the cost of full fine-tuning.
LoRA is not more efficient for inference. Either you merge the LoRA into the model, in which case it is exactly the same, or you compute the LoRA separately, in which case it is less efficient.
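To make that concrete: merging just folds the low-rank update back into the base weight, so the merged model has the same shape and inference cost as the original. A sketch with torch (the helper name is mine):

```python
import torch

def merge_lora(w: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               lora_alpha: float, r: int) -> torch.Tensor:
    # W_merged = W + (alpha / r) * B @ A; same shape and inference cost as W
    return w + (lora_alpha / r) * (B @ A)

# Keeping the adapter separate instead adds an extra B @ (A @ x) per adapted
# layer on every forward pass, which is strictly more work.
```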
@vladimirnadvornik8254 OK, then perhaps a better approach would be to train a LoRA on different model sizes: 1B, 3B, 8B (the smaller ones being computationally more efficient), and stop when acceptable accuracy is reached or improvement falls below a certain level.
What determines whether the task is the same? Is it the instruction prompt? And what defines the size of a task that is a good fit for LoRA?
Your intuition, basically.
It’s tricky because some tasks will be in distribution, even when dealing with unique data, while some tasks will explicitly not be in distribution. Here are a couple of things to consider:
For simple math, let’s say addition, subtraction, multiplication and division, do you think that a new equation outside of the example equations is in distribution or out of distribution?
For logical reasoning problems, do you think that a problem with a similar structure to a problem in the training set is in distribution or out of distribution?
For creative writing, do you think that a model being asked to write stories in the same genres as the training examples is in distribution or out of distribution?
It gets really nuanced, and I think the only way to really understand this is to approach them on a model-by-model and dataset-by-dataset basis.
What about undertraining LoRAs on each block and merging as you go? You update all the parameters, and no single LoRA "overpowers" the original data vectors.
??? If you "undertrain" a fine-tuning mechanism, then you have a broken fine-tuned weight tensor structure. Why merge something that is not working into the pre-trained model?
I'm guessing the Lamini AI company is doing something like this to achieve what they claim is better than RAG...