Just for clarity's sake: the function and the arguments to call it with are suggested by the assistant agent to the user proxy, and it is effectively executed by the user proxy, not the other way around. That's the reason the function must be declared to both: the assistant must know the function, its description, and its arguments to be able to suggest it to the user proxy agent (thus the registration for the LLM), and the user proxy agent needs to get a handle on the function to be able to execute it (thus the registration for execution).
Thanks for the good work!
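To make the split in that comment concrete, here is a minimal plain-Python sketch of the idea (this is not the actual AutoGen API, just a conceptual stand-in): one registry holds what the assistant sees (name plus description, so it can suggest a call), and a separate registry holds the callable itself, which only the user proxy side uses to actually execute.

```python
# Conceptual sketch of two-sided function registration:
# the assistant only *suggests* a call (it knows the function's
# name and description), while the user proxy side holds the
# callable and actually *executes* it.

llm_registry = {}   # what the assistant sees: name -> description
exec_registry = {}  # what the user proxy holds: name -> callable

def register_for_llm(description):
    """Record the function's schema so the assistant can suggest it."""
    def deco(fn):
        llm_registry[fn.__name__] = {"description": description}
        return fn
    return deco

def register_for_execution(fn):
    """Record the callable so the user proxy can run it."""
    exec_registry[fn.__name__] = fn
    return fn

# Registered for both, mirroring the double declaration described above.
@register_for_execution
@register_for_llm(description="Convert an amount between currencies.")
def currency_calculator(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# The assistant suggests a call by name and arguments...
suggested = ("currency_calculator", {"amount": 100.0, "rate": 0.92})

# ...and the user proxy looks up the callable and executes it.
name, args = suggested
result = exec_registry[name](**args)
print(result)  # 92.0
```

In AutoGen 0.2.3 the same double registration is done by stacking the two decorators (`register_for_execution` on the user proxy, `register_for_llm` on the assistant) on one function, which is what the video walks through.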
Thank you for making this comment! You explained this better than me 🤓
Re-reading my comment, I'm not sure I managed to explain it so well: English is my second language. Hoping someone else will jump in who is able to explain simple things in simple terms 😄 @TylerReedAI
00:01 AutoGen 0.2.3 update makes function calling simpler
01:05 Defining functions and parameters
02:12 Creating a function to calculate the exchange rate for currency conversion
03:25 Registering functions allows specific agents to run them
04:37 AutoGen 0.2.3 update simplifies function calling
05:54 Changes in the currency calculator function
07:04 Function calling made simpler with the AutoGen 0.2.3 update
08:25 Using decorators for function calling in AutoGen
Tyler, looks like I'm haunting you for the function call! 😆 I even tried this method, but the calling is not triggering for me. I read somewhere online from Adam Fourney, one of the main Microsoft researchers on AutoGen, that it's optimized to work with OpenAI and may fail with other LLMs. You may want to mention that so people don't try in vain. Thank you again, Tyler!
If anyone has got this working with local LLMs in LM Studio, please let us know.
Hey haha, no worries. Yeah, so I thought I mentioned it, but definitely not clearly enough. I have had a lot of issues with LM Studio and functions; they don't want to get called. I haven't tested a ton of different LLMs, but I have issues.
I've only had real success with OpenAI and not locally. But yeah, if somebody else has had success with a local LLM and function calling, that would be helpful to know 🙏
@TylerReedAI Aah, that's interesting to know about LM Studio, and that explains it. I have tried quite a few LLMs in LM Studio but got the same result. The agents become too chatty, and they're like, thank you, and sorry for the confusion, and all the mouthful discussion. And I am like, GUYS, JUST GET TO WORK! 😡 😃
Prompts do make a lot of difference though. It's funny (and amazing at the same time) to see what works for them and how you can give them personality. In a different example, I wrote something to the tune of the following for my DE (note the uppercase): "You are a NO NONSENSE data engineer. I will be REALLY MAD if you do not provide raw SQL and nothing else as your response."
And that guy just worked like a charm! 😜😀
Dude, I know. I spent all night one night last week, and no matter what I did, nothing worked; then I found an issue opened for LM Studio where others had function calling problems as well.
I've read that if you make it dire, then it tends to give you a better response as well 😅😅
@@TylerReedAI yup. So true!
Dude, your way of explaining is very easy to follow. You should do a video every time AutoGen comes out with updates and explain the top 3 changes this way :) Thank you for the content.
Amen!
Most valuable 9min video of my life!
Thank you
Thank you, will do! Appreciate it 🙌
Thank you for watching it 😀
Love your content man. Appreciate it!
I appreciate that so much, thank you 🙌
Would you be able to create an AutoGen example script that is a group chat with local LM Studio / local Qdrant-based Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) and function calling?
Basically a local setup with all AutoGen features without using OpenAI's ChatGPT. Would love to see something like that in a script. Thank you.
Yes! On Friday I have a video coming out with a full-stack application using LM Studio, making calls to it without using the OpenAI API, meaning it will be free to use locally.
Great Content!
Thank you 🙌
May I suggest using bigger fonts for the video? It's kinda hard to see what's on your screen on smaller-screen devices.
Yes, thank you for the suggestion! I will try to make them bigger; thanks for bringing this up.
Hi, I used the same code, but it shows this error: "python function not found". Can you help me with this?