AutoGen function calling - A more reliable Solution

  • Published Nov 7, 2024

Comments • 26

  • @93cutty
    @93cutty 11 months ago +1

    I had to pause just before the 3-minute mark to say I wish I'd known that trick about showing appreciation. I never thought I could get so mad watching the AIs say thank you to each other over and over lol

    • @business24_ai
      @business24_ai 11 months ago +1

      Glad you liked it! Yes, this saves time and tokens.
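
The trick of cutting off the agents' endless thank-you exchange can be sketched with a termination check (a sketch of the common AutoGen pattern, not necessarily the exact trick from the video): the assistant is instructed to end its final reply with `TERMINATE`, and the user proxy stops the chat when it sees it.

```python
# Sketch, assuming the legacy pyautogen API (AssistantAgent / UserProxyAgent).
def is_termination_msg(msg: dict) -> bool:
    """Return True when a reply signals the end of the conversation."""
    return msg.get("content", "").rstrip().endswith("TERMINATE")

# Passed to the user proxy so the agents stop instead of thanking
# each other forever (saving time and tokens):
#
#   user_proxy = autogen.UserProxyAgent(
#       name="user_proxy",
#       human_input_mode="NEVER",
#       is_termination_msg=is_termination_msg,
#   )
```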

  • @autoxue
    @autoxue 10 months ago +2

    This is the clearest video on function calling, thank you for explaining it so clearly at a comfortable pace😊
    Why is the function_map placed within the user_proxy instead of in the assistant definition, given that it's actually the assistant that calls the function?

    • @business24_ai
      @business24_ai 10 months ago +1

      Thanks for your feedback. The function config must be passed to the AssistantAgent, while the corresponding functions must be passed to the UserProxyAgent, which executes any function calls the AssistantAgent makes. For more information, check the description in the AutoGen documentation at this link: github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb
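
The split described in the reply can be sketched as follows (based on the legacy pyautogen API; the function name and schema here are illustrative): the JSON description goes to the AssistantAgent so the LLM knows the function exists, while the callable goes to the UserProxyAgent, which actually runs it.

```python
# Illustrative function -- the LLM never sees this body, only its schema.
def get_stock_price(symbol: str) -> str:
    """Dummy lookup used only to illustrate registration."""
    return f"{symbol}: 123.45"

# 1) Description only -- this is what gets sent to the LLM.
llm_config = {
    "functions": [
        {
            "name": "get_stock_price",
            "description": "Return the latest price for a ticker symbol.",
            "parameters": {
                "type": "object",
                "properties": {
                    "symbol": {"type": "string", "description": "Ticker, e.g. AAPL"},
                },
                "required": ["symbol"],
            },
        }
    ],
}

# 2) The callable itself -- executed by the UserProxyAgent when the
#    AssistantAgent emits a function call.
function_map = {"get_stock_price": get_stock_price}

# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
# user_proxy = autogen.UserProxyAgent("user_proxy", function_map=function_map)
```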

  • @juliovac
    @juliovac 1 year ago +1

    So simple and good! I'm not a coder; I just learned to use Python with GPT. I have little programs and functions I'd like to create to facilitate my AutoGen setup. LangChain as functions would be extremely helpful :) thanks and cheers

  • @zakaria20062
    @zakaria20062 1 year ago +2

    Great video, thanks.
    I think the community needs to focus on open-source LLMs, because running AutoGen with OpenAI can be costly.

    • @business24_ai
      @business24_ai 1 year ago +2

      Yes, I agree. We are looking for the best Open Source LLMs optimized for function-calling. It is just a question of time.

  • @proterotype
    @proterotype 10 months ago

    New sub here. I like your delivery a lot. I'd be interested in seeing that LangChain video you mention at the end

    • @business24_ai
      @business24_ai 10 months ago +1

      Thanks for the sub! Good reminder. I will consider it in our planning.

  • @thankqwerty
    @thankqwerty 10 months ago

    Thank you very much for the tutorial.
    Have there been changes to the autogen package? I can't even get the official AutoGen function calling example notebook to work :(((
    agentchat_oai_assistant_function_call.ipynb
    Hmm .... the only difference is that I'm using Mistral via Ollama. Would that be the reason?

    • @business24_ai
      @business24_ai 10 months ago

      Welcome. Yes, the tools evolve fast and AutoGen has a new version, but I am not sure that is why your script doesn't work. We have already used Mistral with AutoGen (though using LM Studio as the backend).
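
Pointing AutoGen at a local model served by LM Studio can be sketched like this, since LM Studio exposes an OpenAI-compatible endpoint (the model name, port, and key value below are assumptions to match to your own setup):

```python
# Sketch: an OpenAI-style config list aimed at LM Studio's local server
# (default port 1234). The api_key is a placeholder; the local server
# does not validate it.
config_list = [
    {
        "model": "mistral-7b-instruct",        # whatever model LM Studio has loaded
        "base_url": "http://localhost:1234/v1",
        "api_key": "not-needed",
    }
]

# assistant = autogen.AssistantAgent(
#     "assistant", llm_config={"config_list": config_list}
# )
```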

    • @thankqwerty
      @thankqwerty 10 months ago

      @business24_ai
      Thank you so much for your reply.
      I just switched to GPT-3.5 to test my script and it worked. So I suppose it's Mistral not being able to understand the function call (?).

    • @business24_ai
      @business24_ai 10 months ago

      Hi. Yes, the local LLMs are not ready for function calling yet. We tested even Gorilla. (gorilla.cs.berkeley.edu/) For the time being, we switched to Microsoft TaskWeaver for function calls. Later we will combine AutoGen and TaskWeaver.

  • @fasa5314
    @fasa5314 10 months ago

    How many functions can be put into function_list? Is there any limit, e.g. by word count? I want to put thousands of functions into function_list, which means a lot of words. Will the functions be sent to the LLMs? If so, function_list may be limited. Thanks

    • @business24_ai
      @business24_ai 10 months ago +1

      Hi, good question. I have not seen any documented limit on function count, but you need to inform the LLM about each function's description and parameters; this alone consumes tokens and is limited by the context window, as we tested with LangSmith. So you can have many functions, but how well the LLM will handle a vast number of them is a good question. With OpenAI, you must pay for all those tokens to find out. Keep in mind that only the function description is sent to the LLM, not the actual Python function: the LLM sees the function as a black box and only calls it when it decides it will be helpful.
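
The point that only the schemas cost tokens, not the Python code, can be illustrated with a crude estimate (character count of the serialized schemas as a rough proxy for tokens; the helper and ratio are illustrative, not an official tokenizer):

```python
import json

def schema_chars(functions: list[dict]) -> int:
    """Characters of schema text sent to the LLM per request.

    Only the serialized descriptions count toward the prompt;
    the function bodies themselves are never transmitted.
    """
    return len(json.dumps(functions))

functions = [
    {
        "name": "get_stock_price",
        "description": "Return the latest price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string"}},
            "required": ["symbol"],
        },
    }
]

# A very rough rule of thumb is ~4 characters per token, so thousands of
# functions would quickly eat into the context window.
approx_tokens = schema_chars(functions) // 4
```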

  • @caiomar
    @caiomar 1 year ago +1

    If the function doesn't have parameters, what do I set `parameters` to? It crashes if there is no definition.

    • @business24_ai
      @business24_ai 1 year ago +1

      Hi, I have not tested it yet, but you can try the solution described at community.openai.com/t/what-if-there-is-no-parameters-of-the-function-while-using-functions-call-in-the-complection-fixed/269097
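
The workaround discussed in that thread amounts to declaring an empty object schema instead of omitting `parameters` entirely; a sketch (the function name is illustrative):

```python
# Sketch: a parameterless function description for OpenAI-style
# function calling. "parameters" is present but declares no properties,
# which avoids the crash seen when the key is missing altogether.
no_param_function = {
    "name": "get_current_time",  # illustrative name
    "description": "Return the current time; takes no arguments.",
    "parameters": {
        "type": "object",
        "properties": {},  # empty, but present
        "required": [],
    },
}
```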

  • @paulandrew76
    @paulandrew76 1 year ago

    What's the best way to contact you for a consultation?

    • @business24_ai
      @business24_ai 1 year ago

      Hi, please use info@business24.ai to get in touch.

  • @staticalmo
    @staticalmo 1 year ago +1

    Remember to blur API keys or delete them as soon as you post the video

    • @business24_ai
      @business24_ai 1 year ago +3

      I deleted the key before uploading. In addition to deleting keys, we use email notifications when reaching a threshold, plus monthly budgets. Setting a hard limit is highly recommended, as working with AutoGen can sometimes bring unexpected costs. Thanks again for the reminder.

  • @samketola919
    @samketola919 1 year ago +1

    source code?

    • @business24_ai
      @business24_ai 1 year ago +2

      Hi, I set the visibility of the repo to public. You can find it here: github.com/business24ai/autogen-functions

  • @fanchuankang1228
    @fanchuankang1228 11 months ago

    What about the cache.db file? Can I run this for free?

    • @business24_ai
      @business24_ai 11 months ago +1

      Hi, yes, I use a cache, but to run it for free you have to use open-source local LLMs. We will cover this in upcoming videos.