Amazing overview - and I really didnt expect the CEO of langchain to suggest "not taking an off-the-shelf architecture" 😀 (100% agree)
Super helpful! Personally I feel like this type of content separates Langchain/Langgraph from other frameworks. You're really lowering the barrier to entry and making it so much easier to get started.
It is great to have more agent-related LangChain videos!
Thanks for the guide! Seems like multi-agent hierarchical systems are the most sophisticated path, with the recommendation to add custom wiring among them as necessary (for example, a reflection sub-graph).
I've got one follow-up question for everybody. What if all the LangGraph work we're doing turns out to be only short-term? Similar to how complicated encoder+decoder architectures were precursors to the simpler decoder-only architecture? From that perspective, since newer models like `o1` have `test-time-compute`, they are already a bit agentic in themselves. Therefore, two questions:
1/ Is LangGraph here to stay, or will graphs like these be consumed by increasingly powerful test-time-compute-driven LLM architectures? And,
2/ Will swarm architectures beat every other sophisticated architecture, where we offload the planning and collaboration aspects to the model itself?
I understand LangGraph currently argues that reliability increases as more control comes to the user. Just wondering whether that claim will hold long term.
I think we have not yet reached the limit of LLMs, which is why we still see enormous improvements in these models, but that does not have to interfere with the use of frameworks like LangGraph. These frameworks let us create personalized agentic systems with our own tools or with data retrieved using techniques like RAG. The control flow will continue to be orchestrated by an LLM, and the better the LLM, the more reliable the system will be, but I think a lot of the reliability and security lies in the design of that control flow, and that control flow should not be stored directly in the LLM, nor should it be executed within it. Is LangGraph here to stay? I don't know, because this field changes very quickly and maybe someone will find a better abstraction for building these systems. What I do think is that an LLM is an important part of an agentic system, but agentic systems have more components than just an LLM.
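To make that last point concrete, here's a minimal sketch (mine, not from the video) of keeping the control flow in plain Python inside the graph rather than inside the LLM. The state fields, node names, and the stubbed retrieval/answer logic are all made up for illustration; real nodes would call your retriever and an LLM.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    docs: list
    answer: str

KNOWLEDGE_BASE = ["LangGraph lets you define agent control flow as a graph."]

def retrieve(state: State) -> dict:
    # Stub retrieval; a real node would query a vector store (RAG).
    hits = [d for d in KNOWLEDGE_BASE if "langgraph" in state["question"].lower()]
    return {"docs": hits}

def generate(state: State) -> dict:
    # Stub generation; a real node would call an LLM with the retrieved docs.
    return {"answer": f"Based on {len(state['docs'])} doc(s): {state['docs'][0]}"}

def fallback(state: State) -> dict:
    return {"answer": "Sorry, I couldn't find anything relevant."}

def route(state: State) -> str:
    # The branching decision lives here, in code we can read, test, and secure,
    # not inside the model's weights or its chain of thought.
    return "generate" if state["docs"] else "fallback"

builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_node("fallback", fallback)
builder.add_edge(START, "retrieve")
builder.add_conditional_edges("retrieve", route, {"generate": "generate", "fallback": "fallback"})
builder.add_edge("generate", END)
builder.add_edge("fallback", END)
graph = builder.compile()

print(graph.invoke({"question": "What is LangGraph?", "docs": [], "answer": ""}))
```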
For production cases, I would nearly always choose a dedicated agent system rather than just dumping everything into large models like o1. 1/ The cost implications are to be considered (yes, the PoC might be cheap, but those models run on the most expensive hardware we have in data centers atm). 2/ It is a lot easier to debug and improve if you break the task up into subtasks.
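On point 2, one nice side effect of splitting the work into subtasks is that each step is just a function over shared state, so you can unit-test it on its own before wiring it into a graph. A tiny made-up example (the state fields and stub logic are hypothetical; real nodes would call an LLM):

```python
from typing import TypedDict

class State(TypedDict):
    text: str
    summary: str
    category: str

def summarize(state: State) -> dict:
    # Stub; a real subtask node would call an LLM here.
    return {"summary": state["text"][:60]}

def classify(state: State) -> dict:
    # Stub; a real subtask node would call an LLM or a small classifier.
    return {"category": "billing" if "invoice" in state["text"].lower() else "general"}

# Debug one step in isolation: no graph, no model, no end-to-end run needed.
print(classify({"text": "Where is my invoice?", "summary": "", "category": ""}))
```

Once each step behaves on its own, wiring the functions together with `add_node`/`add_edge` is the easy part.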
Great summary of multi-agent systems. Loved it. Thanks!
amazing stuff guys!
Really very good explanation.
Keep pushing!!
Excellent explainer! Many thanks!
Hey, any recommendations on how to actually implement this in production? These types of agents add way too much latency.
Human in the loop gives the user a sense of speed and grounds the LLM.
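For what it's worth, here's a rough sketch of what that can look like in LangGraph, using the `interrupt_before` option on `compile()` so the graph pauses for a human before an irreversible step. The node names and stubbed draft logic are invented; a real `write_draft` node would call an LLM.

```python
from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    draft: str

def write_draft(state: State) -> dict:
    # Stub; a real node would have an LLM draft the reply.
    return {"draft": "Hi! Your refund has been processed."}

def send(state: State) -> dict:
    # The irreversible step we want a human to approve first.
    print("sending:", state["draft"])
    return {"draft": state["draft"] + " [sent]"}

builder = StateGraph(State)
builder.add_node("write_draft", write_draft)
builder.add_node("send", send)
builder.add_edge(START, "write_draft")
builder.add_edge("write_draft", "send")
builder.add_edge("send", END)

# Pause before "send" so a human can review (and the user sees progress early).
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["send"])

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"draft": ""}, config)  # runs write_draft, then stops
graph.invoke(None, config)           # human approved: resume and run send
```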
What do you guys use to make these diagrams?
Great Information, thanks!
Great tutorial! I have a question: can Langgraph be used to orchestrate the openai realtime API? If so, could you give me an example?
Question: for a contact center, is it a good idea to use the supervisor-with-tools approach?
Very useful video
Would you say that there is some kind of analogy between the problem of designing single-agent vs. multi-agent architectures and the problem of monolithic architectures vs. microservices? I think they are different faces of the same problem.
Seems like the supervisor architecture has the same problems as the tool calling agent ("makes poor decisions about which tool to call next") in that it also makes poor decisions about which agent to call next. Is there something fundamentally different in the supervisor architecture that mitigates this?
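My understanding (could be wrong) is that the routing decision doesn't disappear, it just gets smaller: the supervisor only ever picks from a short, named list of workers, and each worker hides its own prompt and tools behind that name. A hand-rolled sketch, not the official supervisor implementation; the worker names and the keyword-based routing stub are invented for illustration (a real supervisor node would ask an LLM to choose):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    task: str
    next_worker: str
    result: str

def supervisor(state: State) -> dict:
    # Stub decision; a real supervisor would ask an LLM to pick exactly one
    # worker from a short list, instead of choosing among dozens of raw tools.
    choice = "math" if any(ch.isdigit() for ch in state["task"]) else "research"
    return {"next_worker": choice}

def research(state: State) -> dict:
    # Stub worker; a real one would be its own agent with its own tools.
    return {"result": f"researched: {state['task']}"}

def math(state: State) -> dict:
    return {"result": f"computed: {state['task']}"}

builder = StateGraph(State)
builder.add_node("supervisor", supervisor)
builder.add_node("research", research)
builder.add_node("math", math)
builder.add_edge(START, "supervisor")
builder.add_conditional_edges(
    "supervisor",
    lambda state: state["next_worker"],
    {"research": "research", "math": "math"},
)
builder.add_edge("research", END)
builder.add_edge("math", END)
graph = builder.compile()

print(graph.invoke({"task": "What is 17 * 23?", "next_worker": "", "result": ""}))
```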
Really only 5-10 tools? How many multi-turn convos or sequential reasoning steps do you need? I've seen 100 tools stay accurate for single-step Q&A tasks.
I would assume it depends on the LLM used and the complexity of the tools. 5-10 may just be the range where they've seen a higher level of success across all the stats they have on it.
Looking forward to the day when Studio is available for Linux. Until then, I'll play around with OpenAI Swarm.
This was nice but I think it could have been edited to be a bit shorter
5:37 "Agent 1 makes a tool call to determine was agent 2 sees" - little typo, I guess. Thanks for the vids!
LangGraph vs. AWS Bedrock Agents: LangGraph is so much better in all aspects.