How Good is LLAMA-3 for RAG, Routing, and Function Calling
- Published Jun 20, 2024
- How good is Llama-3 for RAG, query routing, and function calling? We compare the capabilities of the 8B and 70B models on these tasks, using the Groq API to access them.
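The video accesses Llama-3 through Groq's OpenAI-compatible chat endpoint. As a minimal sketch, here is how a request body for such an endpoint can be assembled; the model ids (`llama3-8b-8192`, `llama3-70b-8192`) and `build_chat_request` are assumptions for illustration, not taken from the notebooks:

```python
# Sketch of building a chat request for an OpenAI-compatible endpoint
# such as Groq's. Model ids below are assumed and may have changed.
MODEL_8B = "llama3-8b-8192"    # assumed Groq id for Llama-3 8B
MODEL_70B = "llama3-70b-8192"  # assumed Groq id for Llama-3 70B

def build_chat_request(model: str, question: str, context: str = "") -> dict:
    """Assemble the JSON body sent to the chat completions endpoint."""
    if context:
        system = "Answer using only the provided context."
        user = f"Context:\n{context}\n\nQuestion: {question}"
    else:
        system = "You are a helpful assistant."
        user = question
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": 0.0,  # deterministic answers for RAG-style tasks
    }

request = build_chat_request(
    MODEL_8B,
    "What is query routing?",
    context="Routing sends a query to the best index.",
)
print(request["model"])  # llama3-8b-8192
```

The same body works for either model size; only the `model` field changes, which is what makes the 8B-vs-70B comparison in the video easy to run.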
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
Notebooks
RAG, Query Routing: tinyurl.com/3s6jzmuw
Function Calling: tinyurl.com/4299fjn5
TIMESTAMPS:
[00:00] LLAMA-3 Beyond Benchmarks
[00:35] Setting up RAG with llamaIndex
[05:15] Query Routing
[07:31] Query Routing
[10:35] Function Calling [Tool Usage] with Llama-3
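The query-routing segment uses LlamaIndex's router over an LLM selector to send each question to the right query engine. As a toy stand-in that shows only the control flow (the keyword heuristic below replaces the LLM selector and is purely illustrative):

```python
# Toy illustration of query routing: choose which query engine should
# answer a question. The real setup in the video uses an LLM selector
# via LlamaIndex; this keyword heuristic is a simplified stand-in.
def route(query: str) -> str:
    """Return which engine should answer: 'summary' or 'vector'."""
    summary_cues = ("summarize", "summary", "overview", "overall")
    if any(cue in query.lower() for cue in summary_cues):
        return "summary"  # whole-document questions -> summary index
    return "vector"       # specific questions -> vector (semantic) index

print(route("Summarize the paper"))     # summary
print(route("What dataset was used?"))  # vector
```

An LLM selector does the same job more robustly: it is shown the description of each engine and asked to pick one, rather than matching keywords.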
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
Thank you, bro. Just today I switched the LLM in my RAG pipeline to Llama-3 8B. It is performing really well.
Want to learn RAG beyond the basics? Make sure to sign up here: tally.so/r/3y9bb0
Excellent presentation.
thank you.
I found that the llama-3-70b from Groq does not do as well on the test RAG task I ran compared to a local version, so they might have quantized it heavily on Groq.
I have seen people saying that. That might be the case.
Would love to see a reliable way to use function calling with a completely local model.
I saw a fine-tuned model on HF designed for function calling, but users said it had issues.
Has anyone gotten this working locally, relatively reliably?
Function calling without Groq would be cool. We are looking to self-host 70B with OpenAI-compatible functions/tools. So far there is nothing promising except Trelis' models.
Use grammars and any model can do function calling.
You just need to struggle to get the BNF grammar perfect.
Try to hard-code as many characters as possible; it improves the quality of the output.
You'll quickly realize how inefficient the OpenAI JSON-style format is, and you'll go down the parsing rabbit hole, trying YAML, TOML, etc.
Good luck!
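The grammar approach above (e.g. llama.cpp-style GBNF) constrains output at decode time. A complementary, model-agnostic fallback is to validate whatever JSON the model emits against the tool schema. A minimal sketch, where `get_weather` and its schema are hypothetical examples:

```python
import json
import re

# Hypothetical tool registry: tool name -> required argument names.
TOOLS = {"get_weather": {"required": ["city"]}}

def parse_tool_call(text: str) -> dict:
    """Pull the first {...} block out of raw model text and validate it."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    call = json.loads(match.group(0))
    schema = TOOLS.get(call.get("name"))
    if schema is None:
        raise ValueError(f"unknown tool: {call.get('name')!r}")
    missing = [a for a in schema["required"]
               if a not in call.get("arguments", {})]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return call

# Models often wrap the JSON in chatty text; the regex skips past it.
raw = 'Sure! {"name": "get_weather", "arguments": {"city": "Paris"}}'
print(parse_tool_call(raw)["arguments"]["city"])  # Paris
```

With a grammar, the validation step becomes redundant because malformed output cannot be generated in the first place; without one, a parser like this is the safety net.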
I have seen a few finetunes for function calling. Will cover some of them soon.
Can I run an LLM on a laptop with 4 GB of RAM and a 16-bit processor? Please tell me.
You can run one, but an open-source model running locally will not perform well on that hardware. Instead of running Llama-3 locally with Ollama, use the Groq API; it is insanely fast.
No module named 'packaging' - does not work on Windows or in WSL.
Is it fully uncensored?