A fundamental question arises when creating the augmented query. It is mentioned that relevant information is fetched from the internal source documents based on the original query. Since we have already retrieved the relevant information from the sources, why do we pass the augmented query to the model again, given that we have already received the requested information from the internal source?
Hi there, and your question has been heard! 👂 Perhaps this official doc regarding RAG may help: go.aws/4bMRzA3. However, you always have the option to get on re:Post: go.aws/aws-repost. You can take your question directly to AWS industry experts who have a lot of tools to help you out. 🧰 ^DC
It's a late reply, but here's an example: suppose we ask the LLM how much sunscreen I should carry for my upcoming trip. For this user query, the external data would be how many days the trip lasts and where the user is going. The augmented prompt carries this information, and the LLM can then apply basic math (and other models, if needed) to return an answer. Without such external data, the LLM will hallucinate.
The LLM ensures that the retrieved information is interpreted, synthesized, and presented in a meaningful and contextually appropriate manner. Without this step, you might only get raw, unfiltered data back from the retriever.
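To make that concrete, here is a minimal sketch of the augmentation step in Python. The retriever, the prompt template, and the sunscreen figures are all illustrative assumptions I've made up for this thread, not a real AWS API:

```python
# Minimal sketch of RAG prompt augmentation, assuming a hypothetical
# retriever over internal documents. Names and figures are illustrative.

def retrieve(query: str) -> list[str]:
    # Stand-in for a vector-store lookup over internal source documents.
    return [
        "Trip itinerary: 7 days in Phuket, Thailand.",
        "Sunscreen guidance: roughly 30 ml per full day outdoors.",
    ]

def build_augmented_prompt(query: str, chunks: list[str]) -> str:
    # The retrieved chunks are raw text; the LLM is still needed to
    # interpret them and synthesize an answer to the actual question.
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

query = "How much sunscreen should I carry for my upcoming trip?"
prompt = build_augmented_prompt(query, retrieve(query))
print(prompt)
# This prompt is what gets sent to the LLM. The model reasons over the
# context (7 days x ~30 ml -> about a 210 ml bottle) and returns a
# natural-language answer rather than the raw retrieved snippets.
```

This is why the augmented query still goes to the model: retrieval only finds relevant text, while the LLM turns it into an actual answer.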
Hi there. 👋 Our scope for supporting technical queries is limited on this platform, but please reach out to our community of cloud experts on re:Post: go.aws/aws-repost. ℹ️ You can also check out these additional options for assistance: go.aws/get-help. 🔗🤝 ^RW
Great overview of RAG and details of implementation. Thank you.
How do you protect a company's information with this technology?
Why is transactional data not a good fit for RAG?
Hi there. 👋 Our scope for supporting technical queries is limited on this platform, but please reach out to our community of cloud experts on re:Post: go.aws/aws-repost. ℹ️ You can also check out these additional options for assistance: go.aws/get-help. 🔗🤝 ^RW
The animations are far too busy, and they lack sufficient highlighting.
Hello there, thanks for letting us know what's on your mind! I've sent your feedback over to our YouTube team for review. ^RM