Tool Calling with LangChain

  • Published on Jan 14, 2025

Comments • 40

  • @EveryDayAIProgrammer
    @EveryDayAIProgrammer 9 months ago +9

    Are these updates available in the TypeScript package as well?

  • @unhandledexception1948
    @unhandledexception1948 8 months ago +3

    you guys really rock ! always ahead of the curve bringing innovations in this space ;-)

  • @kuriankattukaren
    @kuriankattukaren a month ago

    Thanks for the tutorial. Brilliant.

  • @waneyvin
    @waneyvin 9 months ago +2

    Thanks a lot for the information. Is it compatible with all LLMs? And what is the difference between bind_tools and create_react_agent? Does the agent think before it chooses a tool?

  • @pazarazi
    @pazarazi 9 months ago

    Thanks for sharing.
    BTW, the links that point to the resource page for "Tool calling agent shows how to create an agent that uses the standardized tool calling interface:" are invalid.

  • @juliustuckayo8973
    @juliustuckayo8973 9 months ago +1

    Very clear explanation! Love Python! Keep 'em coming 🎉

  • @SashaBaych
    @SashaBaych 9 months ago +2

    Chester, thank you for a very clear walkthrough!
    Guys, can somebody please clarify what the difference would be between simply invoking the model with bound tools and creating an agent with tools using the method shown in the tutorial, especially in the context of LangGraph? I see so many tutorials on LangGraph, but only a few of them use AgentExecutors. Do I even need to use AgentExecutors with LangGraph?

  • @MavVRX
    @MavVRX 9 months ago +2

    Unfortunately, it doesn't work with Ollama yet :( "object has no attribute 'bind_tools'"

    • @nikoG2000
      @nikoG2000 9 months ago

      How have you tried to use Ollama? Have you tried the following way:
      from langchain_community.chat_models import ChatOllama
      llm = ChatOllama(model="llama2", format="json", temperature=0)

    • @MavVRX
      @MavVRX 9 months ago

      @@nikoG2000 Yes, I tried; it doesn't work, as the community version of Ollama has yet to implement binding functions and tools.
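
On the Ollama thread above: the community ChatOllama referenced here did not implement bind_tools at the time, but the newer langchain-ollama partner package does, provided the local model itself supports tool calling. A minimal sketch, assuming langchain-ollama is installed and a tool-calling-capable model such as llama3.1 has been pulled; the multiply tool is illustrative, not from the video.

    from langchain_core.tools import tool
    from langchain_ollama import ChatOllama

    @tool
    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    # Assumes `ollama pull llama3.1` has been run and the model supports tool calling.
    llm = ChatOllama(model="llama3.1", temperature=0)
    llm_with_tools = llm.bind_tools([multiply])

    ai_msg = llm_with_tools.invoke("What is 3 * 12?")
    print(ai_msg.tool_calls)  # e.g. [{'name': 'multiply', 'args': {'a': 3, 'b': 12}, ...}]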

  • @terryliu3635
    @terryliu3635 3 months ago +1

    Hello, why is the content of your GPT-4 call result blank? I'm using 4o-mini and I'm also getting a blank "content". How would I get the result 3 * 12 = 36? Thanks.

    • @debatradas9268
      @debatradas9268 a month ago

      Same here. Did you find any solution?
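
On the empty "content" question in this thread: when a tool-calling model decides to call a tool, it returns an empty content string and puts the structured request in tool_calls; to get the final "3 * 12 = 36" you execute the tool yourself and send the result back for a second model call. A minimal sketch, assuming langchain-openai; the multiply tool and the model name are illustrative assumptions.

    from langchain_core.messages import HumanMessage, ToolMessage
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools([multiply])

    messages = [HumanMessage("What is 3 * 12?")]
    ai_msg = llm.invoke(messages)
    print(ai_msg.content)     # "" -- empty by design when the model chose to call a tool
    print(ai_msg.tool_calls)  # [{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': ...}]

    # Run the tool and pass the result back so the model can produce the final answer.
    messages.append(ai_msg)
    for call in ai_msg.tool_calls:
        result = multiply.invoke(call["args"])
        messages.append(ToolMessage(content=str(result), tool_call_id=call["id"]))

    final = llm.invoke(messages)
    print(final.content)  # something like "3 * 12 = 36"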

  • @lshagh6045
    @lshagh6045 8 months ago

    In case we need to use an open-source LLM like Llama 2, I can see that you deprecated the previous agent library which enabled us to do so.

  • @hiranga
    @hiranga 9 months ago

    @LangChain is there a way to self-heal the invalid tool args created by an LLM? Groq Mixtral appears to suffer here...

  • @Lamoboos223
    @Lamoboos223 28 days ago

    Can you support open-source models like Llama 3 and Phi?

  • @amotorcyclerider3230
    @amotorcyclerider3230 2 months ago

    Can you please confirm this? Does the LLM actually invoke the tool to obtain info and use it in the response? Or does the LLM indicate the need to use a tool and the agent does it on its behalf?

  • @lionelshaghlil1754
    @lionelshaghlil1754 8 months ago +1

    Did you notice that when choosing OpenAI GPT-4 as the model, the result was empty?
    Am I missing something, please?
    I was implementing similar code and faced the same problem: the result showed me empty content despite calling the function successfully. Thanks.

    • @severlight
      @severlight 6 months ago

      Same, were you able to resolve it?

    • @lionelshaghlil1754
      @lionelshaghlil1754 3 months ago

      @@severlight Yes, thanks.

    • @RamarK-m9w
      @RamarK-m9w 2 months ago

      @@lionelshaghlil1754 How did you resolve it?

  • @anirban3974
    @anirban3974 5 months ago

    When should we use bind_tools and when should we use create_tool_calling_agent?

  • @GeoffLadwig
    @GeoffLadwig 9 months ago

    Very nice, but how do I call and execute the tool from the JSON that was returned?

    • @LangChain
      @LangChain  9 months ago

      LangChain's AgentExecutor is built for this: python.langchain.com/docs/modules/agents/agent_types/tool_calling/
      You can always "manually" pass the parameters back into the original tool, as well.
      We are also supporting more advanced agent workflows in LangGraph (github.com/langchain-ai/langgraph), and we uploaded a cookbook on using the new tool calls with LangGraph here: github.com/langchain-ai/langchain/blob/master/cookbook/tool_call_messages.ipynb. More to come on this!
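
To make the AgentExecutor route mentioned in the reply above concrete, here is a minimal sketch, assuming the langchain and langchain-openai packages; the multiply tool, model name, and prompt text are illustrative assumptions, not taken from the video.

    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    # The prompt must leave a placeholder for the agent's intermediate tool calls.
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    tools = [multiply]
    agent = create_tool_calling_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    print(executor.invoke({"input": "What is 3 * 12?"})["output"])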

  • @AngelMartinez-ge2gs
    @AngelMartinez-ge2gs 9 months ago

    Good vid. Could you elaborate on the difference between `create_tool_calling_agent()` and other commonly used agents such as create_react_agent?

    • @maruthikonjeti4572
      @maruthikonjeti4572 9 months ago

      As far as I know, create_react_agent builds ReAct-based agents, whereas this one is more customized to users, and they can convert a normal LLM into an agent.

    • @LangChain
      @LangChain  9 months ago +2

      Some LLMs are tuned to output these "tool calls" with an expected format. This agent takes advantage of those features, whereas ReAct typically relies on prompting the LLM to follow a certain natural language pattern (e.g., "Thought: ", "Action: ", etc.).

    • @Kenykore
      @Kenykore 9 months ago

      So would this be more stable than the ReAct pattern?
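
To illustrate the contrast described in the LangChain reply above (and why structured tool calls tend to be more stable than ReAct): a ReAct agent asks the model to emit a natural-language pattern that the framework must parse, while a tool-calling model returns structured calls directly. The snippet below only mocks up the two output shapes; the multiply tool name and values are illustrative.

    # ReAct-style: free text the framework must parse (fragile if the model
    # drifts from the "Thought:/Action:" format).
    react_style_output = (
        "Thought: I need to multiply the numbers.\n"
        "Action: multiply\n"
        'Action Input: {"a": 3, "b": 12}'
    )

    # Tool-calling style: models tuned for tool calling return structured calls,
    # so there is no free-text parsing step.
    tool_calling_output = [{"name": "multiply", "args": {"a": 3, "b": 12}, "id": "call_1"}]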

  • @brilliant332
    @brilliant332 7 months ago

    I've had trouble with date fields. I'm curious if you have references for using different Pydantic field types in tool calling? Any help would be great!
    My use case:

    from typing import Optional
    from datetime import date
    from langchain_core.pydantic_v1 import BaseModel, Field

    class CompanySobject(BaseModel):
        incorporation_date: Optional[date] = Field(None, description="Incorporation date (format: YYYY-MM-DD)")

    The error:
    Exception in parsing company SObject: 1 validation error for CompanySobject
    incorporation_date: invalid datetime format (type=value_error.datetime)
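
One possible workaround for the date error above, offered as a sketch rather than a confirmed fix: accept the raw string and coerce it with a pre-validator, so non-ISO strings from the model get a chance to parse before Pydantic v1's own date validation raises. This assumes langchain_core.pydantic_v1 re-exports validator (otherwise import it from pydantic.v1 directly), and the extra date formats are illustrative.

    from datetime import date, datetime
    from typing import Optional

    from langchain_core.pydantic_v1 import BaseModel, Field, validator

    class CompanySobject(BaseModel):
        incorporation_date: Optional[date] = Field(None, description="Incorporation date (format: YYYY-MM-DD)")

        @validator("incorporation_date", pre=True)
        def _coerce_date(cls, value):
            # The model sometimes returns a non-ISO string; try a few common
            # formats before falling back to Pydantic's own parsing.
            if isinstance(value, str):
                for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y"):
                    try:
                        return datetime.strptime(value, fmt).date()
                    except ValueError:
                        continue
            return value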

  • @svenst
    @svenst 5 months ago

    I'd like to know how to implement chat function calling with streaming=True and as an API endpoint. It feels like LangChain and LlamaIndex are optimizing for Colab notebooks without streaming, and none of the examples I've seen so far are real-world use cases. So I would appreciate it if you could come up with some sort of real-world examples.

  • @ggg9gg
    @ggg9gg 3 months ago

    Why is my content empty, though?

  • @AmeerHamza-u9e4i
    @AmeerHamza-u9e4i 8 days ago

    Thanks for sharing informative content. I want to implement tool calling using a Hugging Face model, like Llama 3 or 3.2. How can I achieve the same task with Hugging Face?
    Thanks

  • @lamkhatinh8344
    @lamkhatinh8344 9 months ago

    agentExecutor = AgentExecutor(
        agent=self.chain,
        tools=self.tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
    )

    I build my agent with the code above; can you tell me what the difference is between your method and my code?

  • @gordeyvasilev
    @gordeyvasilev 6 months ago

    👍

  • @abdullah_sdq_priv
    @abdullah_sdq_priv 9 months ago

    First

  • @kaustuvchakraborty7372
    @kaustuvchakraborty7372 9 months ago

    Very poor explanation; executing 6 lines of code in a Jupyter notebook was not what I expected.