Autonomous RAG | The next evolution of RAG AI Assistants
- Published May 1, 2024
- Let's build an Autonomous RAG Assistant where we let the LLM automatically pull the data it needs.
Code: git.new/auto-rag
⭐️ Phidata: git.new/phidata
Questions on Discord: phidata.link/discord
Here's the flow:
🦋 The user asks a question.
🤔 LLM decides whether to search its knowledge, memory, internet or make an API call.
✍️ LLM answers with the context.
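The flow above can be sketched in plain Python. This is an illustrative stub, not the Phidata API: the tool functions and the routing heuristic are hypothetical stand-ins for the vector-database lookup, chat-history search, and web search that a real assistant would expose to the LLM via function calling.

```python
# Illustrative sketch of the Autonomous RAG loop.
# All names here are hypothetical; in the video, Phidata's Assistant
# wires the equivalent tools up for the LLM automatically.

def search_knowledge(query):
    """Stub for a vector-database lookup over the knowledge base."""
    return f"[knowledge hit for: {query}]"

def search_memory(query):
    """Stub for searching the stored chat history (memory)."""
    return f"[memory hit for: {query}]"

def search_internet(query):
    """Stub for a live web search."""
    return f"[web hit for: {query}]"

TOOLS = {
    "knowledge": search_knowledge,
    "memory": search_memory,
    "internet": search_internet,
}

def llm_choose_tool(question):
    """Stand-in for the LLM's tool choice. A real assistant describes the
    tools to the model and lets it pick via function calling; this keyword
    heuristic only mimics that decision for the sketch."""
    if "earlier" in question or "we discussed" in question:
        return "memory"
    if "latest" in question or "news" in question:
        return "internet"
    return "knowledge"

def answer(question):
    tool = llm_choose_tool(question)
    context = TOOLS[tool](question)
    # A real LLM would now generate an answer grounded in `context`.
    return f"({tool}) {context}"
```

Calling `answer("latest news on RAG")` routes to the web-search stub, while a generic question falls through to the knowledge base, mirroring the decide-then-answer loop described above.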
"Autonomous Rag" sounds like a Marvel super-villain...
OMG, thanks a lot! I was trying to figure out how to build this with my limited programming skills... and there is so much more in your solution. Thanks a lot!
This is great! Is there a way to add a lot of documents (hundreds or thousands) into the vector database so that we can have the agent query a large corpus?
Hello. This is a great tool. I'm new to coding, but logically I completely follow you. Could you point me to instructions for adding 'text' as an option alongside the 'pdf' option you mentioned? Thank you.
I am gonna use it a lot, thank you
Can I use any open-source LLM, like Llama or Mistral, instead of OpenAI's GPT?
Yes, it can use any LLM, but the results might not be as good :)
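Swapping models works because the RAG loop only depends on a "complete this prompt" interface. Here is a minimal sketch of that idea with hypothetical stub classes (not the Phidata API); the stubs stand in for an OpenAI call and a local open-source model such as Llama or Mistral served locally.

```python
# Illustrative sketch: the RAG loop only needs a `complete(prompt)` method,
# so an open-source model can replace GPT. Class and method names here are
# hypothetical stubs, not a real client library.

class OpenAIBackend:
    def complete(self, prompt):
        # A real implementation would call the OpenAI API here.
        return f"gpt answer to: {prompt}"

class LocalBackend:
    def __init__(self, model="llama3"):
        # e.g. Llama or Mistral running behind a local server.
        self.model = model

    def complete(self, prompt):
        # A real implementation would call the local model server here.
        return f"{self.model} answer to: {prompt}"

def ask(backend, question):
    """The assistant is agnostic to which backend answers."""
    return backend.complete(question)
```

With this shape, `ask(LocalBackend("mistral"), ...)` and `ask(OpenAIBackend(), ...)` are interchangeable; only answer quality differs, as the reply above notes.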
Make a video on open-source models, and on PDFs that have images and tables.
Which part of the system extracts the search term from the user input?
Can you show us how to weight the choice between the three sources? I want to give more weight to the knowledge base.
I got the answer at 6:01