RAG From Scratch: Part 4 (Generation)
- Published May 18, 2024
- This is the fourth video in our series on RAG. The aim of this series is to build up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation. This video focuses on generation, covering the process of RAG prompt construction and passing the prompt to an LLM for answer generation.
Code:
github.com/langchain-ai/rag-f...
Slides:
docs.google.com/presentation/...
There cannot be enough of these. Lance, you're incredible at teaching this.
Excellent overview. Many thanks!
This came at a very opportune time, when I started building something using LangChain and wanted to understand this whole hot mess known as RAG, and this series by Lance makes it more approachable. Love 'em or hate 'em, you can't ignore RAG if you are building with LLMs. At least if you are GPU-poor like me. So 🙏 LangChain.
Thanks for the tutorial, Lance! I have learned a lot. I have a question, though: when I set the nearest-neighbor parameter `k` in the retriever to 5 for the same example provided in the video, `retriever = vectorstore.as_retriever(search_kwargs={"k": 5})`, I see in LangSmith that the first 3 of the 5 neighbors returned are identical. Shouldn't all 5 neighbors be different, or, if 5 distinct neighbors don't exist, shouldn't the output contain only the unique neighbors (3 in this case)? Can you please help me understand why this happens?
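Duplicate neighbors usually mean the same chunk was written to the vector store more than once (for example, re-running the indexing cell appends the documents again rather than overwriting them). A minimal sketch of deduplicating retrieved results by content; `SimpleDoc` here is a hypothetical stand-in for LangChain's `Document`, not the real class:

```python
from dataclasses import dataclass

@dataclass
class SimpleDoc:
    # Hypothetical stand-in for langchain_core.documents.Document
    page_content: str

def dedupe_by_content(docs):
    """Drop retrieved documents whose text has already been seen."""
    seen, unique = set(), []
    for d in docs:
        if d.page_content not in seen:
            seen.add(d.page_content)
            unique.append(d)
    return unique

# Three of the five "neighbors" are the same chunk indexed twice over.
hits = [SimpleDoc("chunk A"), SimpleDoc("chunk A"), SimpleDoc("chunk A"),
        SimpleDoc("chunk B"), SimpleDoc("chunk C")]
print([d.page_content for d in dedupe_by_content(hits)])
# → ['chunk A', 'chunk B', 'chunk C']
```

The cleaner fix is to rebuild the index from scratch so each chunk is stored only once; the dedup step above is a band-aid on the retrieval side.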
How can you specify another chain_type in the retriever? For example, if you want to use "refine"
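In the legacy chain interface, `chain_type="refine"` was an argument to `RetrievalQA.from_chain_type`; in LCEL there is no direct equivalent, but the refine pattern is just a loop that feeds each retrieved document plus the running answer back to the model. A sketch under that assumption, with `fake_llm` as a toy stand-in for a real model call:

```python
def refine_answer(question, docs, llm):
    """'Refine' pattern: draft an answer from the first document,
    then iteratively update it with each remaining document."""
    answer = llm(f"Answer '{question}' using: {docs[0]}")
    for doc in docs[1:]:
        answer = llm(
            f"Refine this answer to '{question}': {answer}\n"
            f"using additional context: {doc}"
        )
    return answer

def fake_llm(prompt):
    # Toy model: echoes which step it is in so the loop is visible.
    return "refined" if prompt.startswith("Refine") else "draft"

print(refine_answer("q", ["d1", "d2", "d3"], fake_llm))  # → refined
```

With one document the loop never runs and you get the draft; each extra document triggers one refine call, which is exactly what the legacy `refine` chain did under the hood.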
How do you build a ConversationalRetrievalChain with a custom prompt in LCEL?
I have a question: can we try out different embedding techniques and vector databases, i.e., methods other than HNSW?
Lance from langchain has a nice ring to it😂 , from blog
Like Jake, from State Farm
How can we do this with csv files?
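LangChain provides a `CSVLoader` (in `langchain_community.document_loaders`) that turns each row into a `Document`, after which the same indexing/retrieval flow from the video applies. The idea, sketched with only the standard library (the field names are illustrative):

```python
import csv
import io

# Illustrative CSV; in practice you'd open a file instead.
raw = "name,role\nLance,teacher\nAda,engineer\n"

# Each row becomes one "document" string of "column: value" lines,
# which is roughly what CSVLoader produces as page_content.
docs = []
for row in csv.DictReader(io.StringIO(raw)):
    docs.append("\n".join(f"{k}: {v}" for k, v in row.items()))

print(docs[0])  # prints "name: Lance" then "role: teacher"
```

These per-row strings can then be embedded and stored in a vector store exactly like the web-page chunks in the video.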
I'm having a hard time understanding `{"context": retriever, "question": RunnablePassthrough()}`.
If I have 3 different inputs to the chain, and a prompt containing these inputs plus an explanation of how to respond to them,
how can I write that step in the chain?
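That dict is shorthand for a `RunnableParallel`: every value in the dict is invoked with the same chain input, so `retriever` receives the question and fetches context, while `RunnablePassthrough()` forwards the question unchanged. A plain-Python mimic of that behavior (`fake_retriever` is an assumption for illustration, not LangChain code):

```python
def run_parallel(mapping, value):
    """Mimic LCEL's dict-to-RunnableParallel coercion: call every
    runnable in the mapping with the *same* input value."""
    return {key: runnable(value) for key, runnable in mapping.items()}

fake_retriever = lambda q: [f"doc about {q}"]  # stand-in for a vector-store retriever
passthrough = lambda x: x                       # stand-in for RunnablePassthrough()

out = run_parallel({"context": fake_retriever, "question": passthrough},
                   "what is task decomposition?")
print(out)
# → {'context': ['doc about what is task decomposition?'],
#    'question': 'what is task decomposition?'}
```

For multiple named inputs, the usual LCEL approach is to invoke the chain with a dict and pick fields out with `itemgetter`, as the reply below this comment shows.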
I think you can create a string beforehand that can be formatted with the inputs you want, and then pass it to chain.invoke().
@lauther_27 I solved it like this... however I'm not sure if it's ok:
```python
from operator import itemgetter

rag_chain = (
    {
        "context": itemgetter("description") | retriever,
        "issues_and_opportunities": itemgetter("issues_and_opportunities"),
        "business_goals": itemgetter("business_goals"),
        "description": itemgetter("description"),
    }
    | prompt
    | llm
    | output_parser
)
```