High-performance RAG with LlamaIndex
- Published on Oct 9, 2023
- GPT-4 Summary: "Master Advanced AI Retrieval Techniques in Our Upcoming Event! Are you ready to elevate your AI game with high-quality data engineering? Join us to learn how to craft robust, scalable retrieval pipelines using hierarchical embeddings for maximum relevance. Discover the secrets of optimizing context precision and recall with the RAG Assessment (RAGAS) framework. Gain insights on fine-tuning embedding models to enhance retrieval metrics and see the remarkable improvements in generations with advanced retrieval techniques. Ideal for AI engineers, learners interested in advanced retrieval methods, and LLM practitioners focusing on industry-standard metrics. Participate in live sessions with access to all GitHub repositories and Colab notebooks. Don't miss this chance to become an expert in building high-performance RAG applications!"
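For intuition on the context precision and recall metrics mentioned above, here is a toy sketch of the underlying idea. This is not the actual RAGAS implementation (which uses an LLM to judge relevance of each retrieved chunk); the chunk IDs are made up for illustration.

```python
def context_precision(retrieved, relevant):
    """Fraction of the retrieved chunks that are actually relevant."""
    if not retrieved:
        return 0.0
    return sum(1 for chunk in retrieved if chunk in relevant) / len(retrieved)

def context_recall(retrieved, relevant):
    """Fraction of the relevant chunks that made it into the retrieved set."""
    if not relevant:
        return 0.0
    return sum(1 for chunk in relevant if chunk in retrieved) / len(relevant)

# Hypothetical retrieval result for one query.
retrieved = ["chunk_a", "chunk_b", "chunk_c"]
relevant = ["chunk_a", "chunk_d"]
print(context_precision(retrieved, relevant))  # 1 of 3 retrieved chunks is relevant
print(context_recall(retrieved, relevant))     # 1 of 2 relevant chunks was retrieved
```

In RAGAS the relevance judgments come from an evaluator LLM rather than a fixed gold set, but the precision/recall arithmetic over retrieved contexts is the same.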
Event page: lu.ma/llamaRAG
Have a question for a speaker? Drop them here:
app.sli.do/event/g217LbPE8rQP...
Speakers:
Dr. Greg Loughnane, Founder & CEO, AI Makerspace
/ greglough. .
Chris Alexiuk, CTO, AI Makerspace
/ csalexiuk
Join our community to start building, shipping, and sharing with us today!
/ discord
Apply to our upcoming LLM Ops: LLMs in production course on Maven today!
maven.com/aimakerspace/llmops
How'd we do? Share your feedback and suggestions for future events.
forms.gle/Kt6vF6NSGAJA7fyi8
Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing
Slides: www.canva.com/design/DAFw0D8y038/5Yh9MA2XXd2Lfr2thcsuLA/edit?DAFw0D8y038&
Great tutorial! Thanks. Definitely try it out 👏
Fabulous overview and demonstrations.
Awesome stuff. You guys are killing it!
Camels are my favorite animal, thanks for this.
YES! You're so welcome @Kingromstar 🤗! We had a blast putting it together!
Nice presentation, subscribed! Keep it up!
Some retrievers use libraries in the background, like pypdf, beautiful soup, and so on.
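To make the point above concrete: document loaders strip markup down to plain text before anything gets chunked and embedded. A minimal sketch of that step using only the standard library's `html.parser` (BeautifulSoup-based loaders do something similar, with much more robustness):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from HTML, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

page = "<html><body><h1>RAG</h1><p>Retrieval matters.</p><script>x=1</script></body></html>"
parser = TextExtractor()
parser.feed(page)
print(" ".join(parser.parts))  # RAG Retrieval matters.
```

The extracted text is what the pipeline then splits into nodes and embeds; the parsing library is an implementation detail of the loader, not of the retriever itself.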
solid
It was another amazing event with great content - thank you so much! Could you please share the slides and the notebook used for the event?
Thanks and of course!
Slides: www.canva.com/design/DAFw0D8y038/5Yh9MA2XXd2Lfr2thcsuLA/edit?DAFw0D8y038&
Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing
Would it be worth renaming this RAFT, as it sounds very similar to retrieval-augmented fine-tuning? Or is it different from what you've done here?
Compared to OpenAI's ada-002, how well does the fine-tuned embedding model work? My application is fully dependent on OpenAI and I prefer working with better models (sentence-transformers is good but can't really stack up to Ada). Also, is it possible to apply this fine-tuning to Ada?
In the specific case we examined, the fine-tuned embeddings performed slightly better than ada-002 on the aggregate hit-rate metric.
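For readers unfamiliar with the hit-rate metric referenced above: it's the fraction of queries for which the expected (gold) document shows up in the top-k retrieved results. A minimal sketch with made-up document IDs:

```python
def hit_rate(results, gold, k=5):
    """Fraction of queries whose gold document appears in the top-k retrieved docs.

    results: list of ranked doc-id lists, one per query
    gold:    list of expected doc ids, one per query
    """
    hits = sum(1 for retrieved, expected in zip(results, gold) if expected in retrieved[:k])
    return hits / len(gold)

# Three queries: ranked retrieved doc IDs, plus the gold doc ID per query.
retrieved_per_query = [["d1", "d7", "d3"], ["d2", "d4", "d9"], ["d5", "d6", "d8"]]
gold = ["d7", "d9", "d0"]
print(hit_rate(retrieved_per_query, gold, k=3))  # 2 of 3 queries hit
```

Comparing two embedding models then amounts to running the same query set through each and comparing the resulting hit rates.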
Great content. I really learned something.
Nice!
Great session. I can take a lot of this and use it directly. However, one concern of mine is using GPT-4 as the evaluator, since we are not allowed to send our data outside the company. What would be your suggestion for a model that has to run on-prem? Would that be a llama-2-70b model? Again, thanks for a really great and informative session...
Thanks for the question! The best evaluator out there is GPT-4, although if you're very keen on using an open-source model instead, we'd recommend one of the instruct-tuned Llama models today.
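Whichever judge model you pick, the LLM-as-evaluator pattern is the same: send the question, a reference answer, and the candidate answer to the model and ask for a score. A hypothetical prompt template (the wording and 1-5 scale are illustrative, not from the session):

```python
def build_judge_prompt(question, reference, answer):
    """Build a grading prompt for an evaluator LLM (GPT-4, or an
    instruct-tuned Llama model running on-prem)."""
    return (
        "You are grading an answer for factual correctness.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {answer}\n"
        "Reply with a single integer from 1 (wrong) to 5 (fully correct)."
    )

prompt = build_judge_prompt(
    "What is RAG?",
    "Retrieval-augmented generation: retrieving context to ground LLM answers.",
    "A pipeline that retrieves documents before generating.",
)
print(prompt)
```

The prompt string would then be sent to whichever chat-completion endpoint your deployment allows; only the judge model changes between the hosted and on-prem setups.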
I appreciate the effort and the information. But there is always a reason when they don't actually RUN the notebook in a demo. Add `cohere` to the package list before the openai package.
You don't actually *need* Cohere for this notebook, you can safely ignore the pip error!
Thanks for wonderful sharing! I would like to ask if I can use GPT-3.5 to evaluate the answer from GPT-3.5 (because I am unable to use GPT-4.0 right now). Does it induce any risks when I use GPT-3.5?
Great question @tramanhphamthi5218! You can absolutely use GPT-3.5 for everything that you saw here! However, whenever you can while doing evaluation or automatically generating code, you should opt for the most powerful model that you can get your hands on. If it's GPT-3.5 for now, then go with that!
@AI-Makerspace Thanks a lot for your response!
Could you share the URL for the notebook in the description box? Super session! 🎉
Thank you, and of course!
Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing