The best explanation on the web!
Great summary of everything 🙌🏾💜
You are always doing great, huge fan of your videos ❤ keep it up!
Hi Lance, just wanted to drop a thank you from me and my team for always being on top of the RAG game. This is a complex field with fast-evolving concepts, and LangGraph seems to be the tool we have been looking for.
What is your take on GraphRAG: is it production ready, and will it eventually replace or complement current RAG systems?
I'm shaken awake nightly, drenched in a cold sweat from nightmares about contemplating this exact concept.
Please do a video series about GraphRAG 💜💜💜
Thanks, this was interesting!
Great video, Lance. Thanks for sharing it!
Looking at the evaluation results, it seems that the custom agent always performs a web search before generating the answer. Does that mean the grader agent always scores the output received from the RAG agent as 0? That would be interesting, because the ReAct agent sometimes skips the web search (meaning the score is 1).
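In case it helps frame my question, this is roughly the kind of binary grader I'm picturing (the model choice and prompt here are my own guesses, not the actual code from the video):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Grade(BaseModel):
    """Binary score: does the generated answer address the question?"""
    score: int = Field(description="1 if the answer addresses the question, otherwise 0")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # model choice is my assumption
grader = llm.with_structured_output(Grade)

def grade_answer(question: str, answer: str) -> int:
    """Return 1 if the answer is judged sufficient, 0 if a web-search fallback is needed."""
    prompt = (
        "You are grading whether an answer addresses a question.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Return 1 if it does, otherwise 0."
    )
    return grader.invoke(prompt).score
```

If a grader like this always returns 0 for the RAG output, the custom agent would indeed hit the web search every time.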
Trying to learn more about these types of processes. Am I correct in understanding that the agent loop would also need to make more LLM calls (thus being more expensive), since it needs to make an extra call to decide which step to take next? Whereas with the mixed method you only make that extra call when it is grading.
Yes, that's correct, and the chance of the LLM calling tools incorrectly also makes it less reliable. You can add retry loops, but those will increase LLM calls, since you will have to include a correction prompt as well.
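To make the cost point concrete, here is a rough sketch of my own (not from the video) of a retry loop where every correction prompt costs one more LLM call:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # model choice is my assumption

def call_with_retries(messages, tools, max_retries=2):
    """Return the model response plus how many LLM calls it took."""
    llm_with_tools = llm.bind_tools(tools)
    calls = 0
    response = None
    for _ in range(max_retries + 1):
        response = llm_with_tools.invoke(messages)
        calls += 1
        if response.tool_calls:          # model picked a tool, we're done
            return response, calls
        # appending a correction means one more LLM call on the next pass
        messages = messages + [
            ("user", "That was not a valid tool call. Call one of the provided tools."),
        ]
    return response, calls
```

So the failure mode and the retry cost grow together: each recovery attempt is an extra invocation.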
Lance,
I really enjoy your videos. One thing I have noticed in all demos, not just yours, is that compound requests/questions are not used.
In the examples below, a decomposition of the multiple sentences must take place to elicit the desired outcome. A chain-of-thought or reasoning process is necessary to address the compound request, and I do not see how using LangGraph would be suitable for that initial step (a rough sketch of what I mean follows the examples).
For example:
I am looking for information on garlic. I want to understand the health benefits as well as studies that have been conducted. Provide the list of resources used in your research.
Generate a report on Katherine Johnson and Johns Hopkins. Review the report and address shortfalls. Compare the background to that of John F. Kennedy.
Are there known latency issues with Milvus? If there are, what are the workarounds?
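To be concrete about what I mean by decomposition, here is a rough sketch of a pre-processing step I can imagine running before the graph; the model and prompt are my assumptions, not anything shown in the demos:

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class SubQuestions(BaseModel):
    """A compound request split into independent sub-questions."""
    questions: list[str] = Field(description="Sub-questions, in the order they should be answered")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # model choice is my assumption
decomposer = llm.with_structured_output(SubQuestions)

compound = (
    "I am looking for information on garlic. I want to understand the health "
    "benefits as well as studies that have been conducted. Provide the list "
    "of resources used in your research."
)
result = decomposer.invoke(
    "Break this compound request into independent sub-questions:\n" + compound
)
for q in result.questions:
    print(q)   # each sub-question could then be routed through the graph separately
```

Each resulting sub-question could then be handled by the existing LangGraph flow; whether that first decomposition step belongs inside the graph or in front of it is the part I'm unsure about.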
🎉 enjoyed
Which software did you use to record the video? Thanks.
Thanks, was the workshop recorded?
What is the conference that you gave this presentation at?
It would be very helpful if LangGraph had built-in code interpreter support: the LLM is prompted to generate code instead of calling predefined functions (tools), and the framework executes the code and returns the results back to the LLM.
Both the OpenAI Assistants API and AutoGen have this.
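For now I work around it with a hand-rolled tool the model can call with generated code; a minimal sketch (sandboxing is hand-waved here and would matter in anything real):

```python
import contextlib
import io

from langchain_core.tools import tool

@tool
def run_python(code: str) -> str:
    """Execute Python code and return anything it prints."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})               # untrusted code: use a real sandbox in practice
    except Exception as exc:             # feed errors back so the LLM can retry
        return f"Error: {exc}"
    return buffer.getvalue() or "(no output)"
```

Binding a tool like that to the model or handing it to a prebuilt ReAct agent gets part of the way there, but it is not the same as first-class code interpreter support.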
What version of LangChain is being used in this video?
I keep playing with fancy agent packages, the so-called one-tool-for-everything kind, but when I try to get them to do actual simple work I find they can't even correctly tell me the current time without several rounds of trial and error. Reliability and consistency are very important if we want to implement such tools in real business; there is no tolerance for errors. Thanks.
Tools have docstrings as part of the prompt. How do you manage these docstrings?
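For context, what I mean is that with the `@tool` decorator the docstring becomes the description the model sees, so editing docstrings is effectively editing the prompt:

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city. Use this for weather questions."""
    return f"Sunny in {city}"            # stub body; only the docstring reaches the model

print(get_weather.name)                  # "get_weather"
print(get_weather.description)           # the docstring above
```

I'm curious whether you version and evaluate these descriptions like any other prompt, or treat them as ordinary code comments.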