I just had to pause just before 3 minutes and say I wish I knew that trick about showing appreciation. I never thought I could get so mad watching the AI say thank you to each other over and over lol
Glad you liked it! Yes, this saves time and tokens.
This is the most clear video of function calling, thank you for explaining it so clearly at a comfortable pace😊
Why is the function_map placed within the user_proxy instead of in the assistant definition, given that it's actually the assistant that calls the function?
Thanks for your feedback. A function config must be passed to AssistantAgent. The corresponding functions must be passed to UserProxyAgent, which will execute any function calls made by AssistantAgent. For more information check the description in the AutoGen documentation at this link: github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb
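To illustrate that split, here is a minimal sketch of the two halves: the JSON schema that goes into the AssistantAgent's llm_config (the only part the LLM ever sees), and the function_map that the UserProxyAgent uses to actually execute the call. The `get_weather` function and its parameters are hypothetical, and the dispatch function at the end only simulates what the UserProxyAgent does internally when it receives a function call.

```python
# 1) Schema passed to AssistantAgent via llm_config["functions"]:
#    this description is what gets sent to the LLM.
function_schema = {
    "name": "get_weather",  # hypothetical function, for illustration only
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# 2) The actual Python implementation, registered on UserProxyAgent
#    via function_map; the LLM never sees this code.
def get_weather(city):
    return f"Sunny in {city}"

function_map = {"get_weather": get_weather}

# When the assistant emits a function call, the UserProxyAgent looks
# the name up in function_map and executes it, roughly like this:
def execute_function_call(name, arguments):
    return function_map[name](**arguments)

print(execute_function_call("get_weather", {"city": "Berlin"}))
```

So the assistant decides *when* to call the function, but the user proxy is the agent that actually runs the Python code.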
So simple and good! I am a non-coder who just learned to use Python with GPT. I have little programs and functions I would like to create to support my AutoGen setup. LangChain tools as functions would be extremely helpful :) thanks and cheers
Thank you for your feedback.
Great video Thanks
I think the community needs to focus on open-source LLMs, because running AutoGen with OpenAI can be costly.
Yes, I agree. We are looking for the best Open Source LLMs optimized for function-calling. It is just a question of time.
New sub here. I like your delivery a lot. I’d be interested in seeing that Langchain video you mention in the end
Thanks for the sub! Good reminder. I will consider it in our planning.
Thank you very much for the tutorial.
Have there been changes to the autogen package? I can't even get the official AutoGen function-calling example notebook to work :(((
agentchat_oai_assistant_function_call.ipynb
Hmm .... the only difference being that I'm using Mistral via Ollama. Would that be the reason?
Welcome. Yes, the tools evolve fast and AutoGen has a new version, but I am not sure that is the reason your script doesn't work. We have already used Mistral with AutoGen (although with LM Studio as the backend).
@@business24_ai
Thank you so much for your reply.
I just switched to ChatGPT3.5 to test my script and it worked. So I suppose it's Mistral not being able to understand the function call (?).
Hi. Yes, the local LLMs are not ready for function calling yet. We even tested Gorilla (gorilla.cs.berkeley.edu/). For the time being, we switched to Microsoft TaskWeaver for function calls. Later we will combine AutoGen and TaskWeaver.
How many functions can be written into function_list? Is there any limit, e.g. by word count? I want to put thousands of functions into function_list, which means a lot of words. Will the functions be sent to the LLMs? If so, function_list may be limited. Thanks.
Hi, good question. I have not seen any documented limit on the function count, but you need to inform the LLM about each function's description and parameters; this alone consumes tokens and is bounded by the context window, as we verified with LangSmith. This means you can have many functions, but how the LLM will handle a vast number of them is a good question. In the case of OpenAI, you must pay for all the tokens to find out. Keep in mind that only the function description is sent to the LLM, not the actual Python function. The LLM sees the function as a black box and only calls it when it decides it will be helpful.
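A rough back-of-the-envelope sketch of why thousands of functions is a problem: every function schema is serialized and sent with the request. The ~4 characters-per-token ratio below is a common rule of thumb, not an exact tokenizer count, and the `do_task` schema is a made-up placeholder.

```python
import json

def estimate_schema_tokens(schemas, chars_per_token=4):
    """Rough token estimate for a list of function schemas
    (4 chars/token is a heuristic, not a real tokenizer)."""
    return len(json.dumps(schemas)) // chars_per_token

# Hypothetical schema, repeated to mimic a huge function_list.
schema = {
    "name": "do_task",
    "description": "Perform a task described by the user.",
    "parameters": {
        "type": "object",
        "properties": {"task": {"type": "string"}},
        "required": ["task"],
    },
}

many_schemas = [schema] * 1000  # a thousand functions
print(estimate_schema_tokens(many_schemas))  # quickly exceeds most context windows
```

Even with short descriptions, a thousand schemas already land in the tens of thousands of tokens, so the practical limit is the model's context window (and your token budget), not AutoGen itself.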
What if the function doesn't have parameters, what do I set `parameters` to? It crashes if there is no definition.
Hi, I have not tested yet but you can try the solution described in community.openai.com/t/what-if-there-is-no-parameters-of-the-function-while-using-functions-call-in-the-complection-fixed/269097
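For reference, the workaround discussed in that thread amounts to passing an empty object schema instead of omitting `parameters` entirely, which some clients reject. A sketch, with a hypothetical function name:

```python
import json

# Function with no parameters: keep "parameters" present, but declare
# an empty object with no properties.
no_param_schema = {
    "name": "get_server_time",  # hypothetical function
    "description": "Return the current server time.",
    "parameters": {
        "type": "object",
        "properties": {},  # no parameters at all
        "required": [],
    },
}

print(json.dumps(no_param_schema, indent=2))
```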
What’s the best way to communicate with you for consultation?
Hi, please use info@business24.ai to get in touch.
Remember to blur API keys or delete them as soon as you post the video
I deleted the key before uploading. In addition to deleting the keys, we use E-Mail notifications on reaching the threshold and monthly budgets. Setting a hard limit is highly recommended, as working with AutoGen can sometimes bring unexpected costs. Thanks again for your reminder.
source code?
Hi, I set the visibility of the repo to public. You find it here: github.com/business24ai/autogen-functions
What is the cache.db file for? Can I run this for free?
Hi, yes, I use a cache, but to run it for free you have to use open-source local LLMs. We will cover this in upcoming videos.
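To clarify what cache.db does: in the pyautogen version used here, a cache seed in llm_config makes AutoGen store responses in a local SQLite file, so identical requests on a rerun are answered from disk instead of hitting the API again. The cache only saves cost on repeated runs; the first call is still billed. The exact parameter name and cache path below reflect the 0.2-era API and may differ in newer versions, and the API key is a placeholder.

```python
# Sketch of an llm_config with caching enabled (pyautogen 0.2-era names).
llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": "sk-..."}],  # placeholder key
    "cache_seed": 42,  # same seed -> the same on-disk cache is reused across runs
}

# Responses are cached in a per-seed SQLite file:
cache_path = f".cache/{llm_config['cache_seed']}/cache.db"
print(cache_path)
```

Changing the seed (or setting it to None) bypasses the existing cache, which is useful when you want fresh, non-deterministic responses.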