Are these updates available in typescript package as well ?
you guys really rock ! always ahead of the curve bringing innovations in this space ;-)
Thanks for the tutorial. Brilliant.
Thanks a lot for your information. Is it compatible with all LLMs? And what is the difference between bind_tools and create_react_agent? Does the agent think before it chooses the tool?
Thanks for sharing.
BTW, the links that point to the resource page for "Tool calling agent shows how to create an agent that uses the standardized tool calling interface:" are invalid.
Very clear explanation! Love Python! Keep em coming 🎉
Chester, thank you for a very clear walkthrough!
Guys, can somebody please clarify the difference between just invoking the model with bound tools versus creating an agent with tools using the method shown in the tutorial? Especially in the context of LangGraph. I see so many tutorials on LangGraph, but only a few of them use AgentExecutor. Do I even need to use AgentExecutor with LangGraph?
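For what it's worth, the difference can be sketched in plain Python (no real LLM here; fake_model, the tool name, and the message shapes below are purely illustrative, not LangChain's actual API): invoking a model with bound tools gives you one response that may merely request tool calls, while an agent adds the loop that actually executes them and feeds the results back.

```python
# Sketch only: a scripted stand-in replaces the real LLM.
def multiply(a, b):
    return a * b

TOOLS = {"multiply": multiply}

def agent_loop(model, messages):
    """What an agent adds on top of bind_tools: call the model,
    run any requested tools, append results, repeat until done."""
    while True:
        reply = model(messages)  # {"content": ..., "tool_calls": [...]}
        if not reply["tool_calls"]:
            return reply["content"]  # final answer
        for call in reply["tool_calls"]:
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})

def fake_model(messages):
    """Scripted model: first it asks for a tool call (note the empty
    content at that step), then it answers from the tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"content": "",
                "tool_calls": [{"name": "multiply", "args": {"a": 3, "b": 12}}]}
    return {"content": messages[-1]["content"], "tool_calls": []}

print(agent_loop(fake_model, [{"role": "user", "content": "What is 3 * 12?"}]))  # prints 36
```

Note that the intermediate reply that requests the tool call has empty content; the final answer only appears after the loop has run the tool and called the model again.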
Unfortunately, it doesn't work with Ollama yet :( it raises "object has no attribute 'bind_tools'"
How have you tried to use Ollama? Have you tried the following way:
from langchain_community.chat_models import ChatOllama
llm = ChatOllama(model="llama2", format="json", temperature=0)
@nikoG2000 Yes, I tried; it doesn't work, as the community version of Ollama has yet to implement binding functions and tools.
Hello, why is the content of your GPT-4 call result blank? I'm using 4o-mini and am also getting a blank "content". How would I get the result 3 * 12 = 36? Thanks.
Same here. Could you find any solution?
In case we need to use an open-source LLM like Llama 2: I can see that you deprecated the previous agent library that enabled us to do so.
@LangChain is there a way to self-heal the invalid tool args created by an LLM? Groq Mixtral appears to suffer here.
Can you support open-source models like Llama 3 and Phi?
Can you please confirm this? Does the LLM actually invoke the tool to obtain info and use it in the response? Or does the LLM indicate the need to use a tool, and the agent does it on its behalf?
Did you notice that when choosing OpenAI GPT-4 as the model, the result was empty? Am I missing something? I was implementing similar code and faced the same problem: the result showed empty content despite the function being called successfully. Thanks.
Same, were you able to resolve it?
@severlight Yes, thanks.
@lionelshaghlil1754 how did you resolve it?
Resolved
When should we use bind_tools, and when create_tool_calling_agent?
Very nice, but how do I call and execute the tool from the JSON that was returned?
LangChain's AgentExecutor is built for this: python.langchain.com/docs/modules/agents/agent_types/tool_calling/
You can always "manually" pass the parameters back into the original tool, as well.
We are also supporting more advanced agent workflows in LangGraph (github.com/langchain-ai/langgraph), and have uploaded a cookbook on using the new tool calls with LangGraph here: github.com/langchain-ai/langchain/blob/master/cookbook/tool_call_messages.ipynb. More to come on this!
Good vid. Could you elaborate on the difference between `create_tool_calling_agent()` and other commonly used agents such as create_react_agent?
As far as I know, create_react_agent builds ReAct-based agents, whereas this is more customizable for users: they can convert a normal LLM into an agent.
Some LLMs are tuned to output these "tool calls" with an expected format. This agent takes advantage of those features, whereas ReAct typically relies on prompting the LLM to follow a certain natural language pattern (e.g., "Thought: ", "Action: ", etc.).
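The practical difference shows up in what has to be parsed. An illustrative sketch (the exact ReAct prompt wording varies by implementation):

```python
import json
import re

# ReAct-style: the model emits free text that the framework must
# regex-parse, and any formatting slip breaks the parse.
react_output = (
    "Thought: I need to multiply the numbers.\n"
    "Action: multiply\n"
    'Action Input: {"a": 3, "b": 12}'
)
match = re.search(r"Action: (\w+)\nAction Input: (.*)", react_output)
name, args = match.group(1), json.loads(match.group(2))

# Tool-calling style: the model's API returns structured calls directly,
# so there is nothing to parse (and nothing to mis-parse).
tool_call = {"name": "multiply", "args": {"a": 3, "b": 12}}

print(name == tool_call["name"] and args == tool_call["args"])  # prints True
```

Both paths recover the same call here, but the tool-calling format removes the fragile text-parsing step, which is why it tends to be more reliable with models trained for it.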
So would this be more stable than the ReAct pattern?
I've had trouble with dates. Do you have any references on using different pydantic field types in tool calling? Any help would be great!
My use case:
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
from datetime import date
class CompanySobject(BaseModel):
    incorporation_date: Optional[date] = Field(None, description="Incorporation date (format: YYYY-MM-DD)")
Exception in parsing company SObject: 1 validation error for CompanySobject
incorporation_date: invalid datetime format (type=value_error.datetime)
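One common workaround (my suggestion, not from the video) is to declare the field as a plain string in the tool schema, since models emit strings, and parse it yourself with the standard library, which gives you full control over the accepted format:

```python
from datetime import date

def parse_incorporation_date(raw):
    """Parse the model's string output into a date; tolerate a missing value."""
    if raw is None:
        return None
    return date.fromisoformat(raw)  # expects "YYYY-MM-DD"

print(parse_incorporation_date("2021-06-30"))  # prints 2021-06-30
```

If the model sometimes emits other formats, you can catch the ValueError here and re-prompt, instead of failing inside schema validation.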
I'd like to know how to implement chat function calling with streaming=True and as an API endpoint. It feels like LangChain and LlamaIndex are optimizing for Colab notebooks without streaming, and none of the examples I've seen so far are real-world use cases. So I would appreciate it if you could come up with some real-world examples.
Why is my content empty, though?
Thanks for sharing the informative content. I want to implement tool calling using a Hugging Face model, like Llama 3 or 3.2. How can I achieve the same task with Hugging Face?
Thanks
agentExecutor = AgentExecutor(
agent=self.chain,
tools=self.tools,
verbose=True,
memory=memory,
handle_parsing_errors=True,
)
I build my agent with the above code; can you tell me the difference between your method and my code?
👍
First
Very poor explanation; executing 6 lines of code in a Jupyter notebook was not what I expected.