Fantastic video. Thank you very much
Great job! Keep up the excellent work!
Quite informative!
Well done! I would like to see a comparison, in terms of quality and scale, between in-house trained models and LLMs for classification!
Here's a great blog post that hopefully answers your question. They compare the results of an LLM (Llama-3.1-8B) with a small model and demonstrate that the small trained classifier outperforms the LLM, especially in few-shot settings. Here's the link:
huggingface.co/blog/sdiazlor/custom-text-classifier-ai-human-feedback
But in general, scaling an LLM for classification is hard; dealing with latency, cost, etc. is challenging.
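For anyone curious what a "small trained classifier" can look like in practice, here's a minimal sketch: a multinomial Naive Bayes over bag-of-words, in pure Python with no dependencies. This is just an illustration of the idea, not the approach from the linked blog post, and the labels and example texts are made up.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label) pairs. Returns (priors, per-label word counts, vocab)."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def predict(model, text):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the probability
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Tiny hypothetical dataset, just for illustration
data = [
    ("refund my order now", "complaint"),
    ("terrible service very slow", "complaint"),
    ("great product thanks", "praise"),
    ("love it works well", "praise"),
]
model = train(data)
print(predict(model, "slow service refund"))  # -> complaint
```

A model like this trains in milliseconds and serves predictions with negligible latency and cost, which is exactly the trade-off the comment above points at when comparing against prompting an LLM per request.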
For some reason, my comment doesn't show up here since it has a link. Search for "How to build a custom text classifier without days of human labeling"; it's a blog post that compares an LLM (Llama 3.1) with a small trained model and shows that the small model actually outperforms the LLM.
Thanks guys, very useful.
Thank you. Which is better: RAG or fine-tuning? And is there any hope for fully open-source development? What do you think?
Great talk!
One small point I’d like to mention: at around 17:55, Angelina “hmm”s five times within 15 seconds, which is quite distracting.
While this habit might work well in an offline meeting, where such sounds signal active listening, in an online setting it can actually interrupt the flow and impact the quality of the talk, especially when I’m trying to focus on Mehdi’s insights.
A little nodding or some sign language with the mic muted would be really appreciated!
Anyway, it was a very insightful talk; I’m just nitpicking.
Thank you for your feedback!
I guess anyone is an AI guru these days 😂
@123456crapface Anyone willing to try and use AI can be a guru, especially with more and more low-code/no-code tools. Anyone can be enabled.