You are the only one who explained that the functions are not actual functions - they're just references.
yeh - it's not the best piece of naming!
As a developer, this is great. Thanks for clarifying right at the beginning that it's us who have to make the function call, and 👍 to the sequence diagram.
Great, glad it was useful!
The best simple explanation I have found about Function Calling. Thanks for making it so easy to understand!
Nice video, with understandable real-time examples.
Jesus, I spent a whole day reading the OpenAI documentation and I could not figure it out. Even with their example, I was like... how the heck is their function working? Thanks!!
Very clear introduction to OpenAI function calls.
This video was super useful for me to understand function calls.
Great! Glad it was useful - let me know if there are any other topics in this area you'd like me to cover next.
Thanks for the explanation. Got a question... with tools being passed to the OpenAI API, how does the API handle the "tools" while calling the LLM?
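For what it's worth, the API doesn't run anything itself - the tools list is just JSON Schema metadata that gets passed to the model, and the model may answer with a tool_call asking *you* to run the function. A minimal sketch (the function name and schema here are illustrative, assuming the v1 openai Python SDK):

```python
from openai import OpenAI

client = OpenAI()

# The "tools" list is plain JSON Schema metadata - the API passes it to the
# model as context; nothing is ever executed on OpenAI's side.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # illustrative name
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "latitude": {"type": "number"},
                    "longitude": {"type": "number"},
                },
                "required": ["latitude", "longitude"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model decides a tool is needed, it returns the call it wants made -
# it's up to us to run the function and send the result back.
print(response.choices[0].message.tool_calls)
```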
Perfect explanation, all other videos were long winded
haha, thanks! I'm trying my best to explain things in under 5 minutes :)
I had to set your speech rate to 0.75 in order to catch up with you, lol
Thank you, very good explanation sir.
I have several functions implemented in my project, each responsible for a specific task. However, frequently, when the user requests the extraction of information that should trigger function A, other functions (like B or C) are called instead. Although function A is correctly triggered on some occasions, this does not happen consistently. Why is this happening? I am using OpenAI's GPT-4 model.
Thanks for the great explanation :)
I have one query: how many API calls does it support? Like in AWS (Bedrock), which supports 5 APIs per agent.
Your video is the best. I am just wondering: is there any way to set a fixed number of items in an array from an OpenAI response when you're using function calling?
Thanks
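One option worth trying - a sketch, not something I've verified against every model: minItems/maxItems are standard JSON Schema keywords, so you can put them in the function's parameters. In practice the model treats them as a strong hint rather than a guarantee, so it's worth validating the array length yourself:

```python
# Hypothetical "parameters" fragment for a tool definition - minItems and
# maxItems are standard JSON Schema, but the model may still ignore them,
# so validate the result client-side.
parameters = {
    "type": "object",
    "properties": {
        "items": {
            "type": "array",
            "items": {"type": "string"},
            "minItems": 5,  # illustrative fixed size
            "maxItems": 5,
        }
    },
    "required": ["items"],
}

def validate(items, expected=5):
    # Client-side check - re-prompt if the model ignored the constraint.
    if len(items) != expected:
        raise ValueError(f"expected {expected} items, got {len(items)}")
    return items
```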
If I have multiple functions, how does the "for tool_call in tool_calls" loop work with different functions? The function_response parameter has the latitude and longitude hardcoded as arguments. What if my other functions deal with other stuff and don't require lat and long as arguments, but other arguments? I'm really confused by that bit. If I have more functions, do I just add a "match case" check to pass the correct arguments for each function?
Yeh this is a bit hardcoded for a single function. The arguments are in that function_args variable, so you could pass them in using the kwargs syntax. Maybe I'll make another video to show how to do that.
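A sketch of what that could look like - the function names and bodies here are made up, but the pattern is real: look up the callable by the name the model returns and unpack function_args with **kwargs, so no match/case is needed:

```python
import json

# Hypothetical implementations - swap in your real functions.
def get_current_weather(latitude, longitude):
    return {"temperature": 17}

def translate_text(text, target_language):
    return {"translated": f"[{target_language}] {text}"}

# Map the names declared in the tools schema to actual Python callables.
available_functions = {
    "get_current_weather": get_current_weather,
    "translate_text": translate_text,
}

# Assume tool_calls = response.choices[0].message.tool_calls, as in the video.
for tool_call in tool_calls:
    # The model names the function it wants and supplies the arguments as a
    # JSON string matching that function's schema.
    fn = available_functions[tool_call.function.name]
    function_args = json.loads(tool_call.function.arguments)

    # **kwargs unpacking means each function receives exactly its own
    # arguments - no lat/long hardcoding and no match/case needed.
    function_response = fn(**function_args)
```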
How did you get the longitude and latitude to pass to the function? Were they in the response from the LLM?
Yes - the LLM works out the lat/long based on the location that we used in our prompt.
Hi Mark. I have an array of about 60 objects. Each object has a length property with a non-null value and a text property with a null value. I've been trying for weeks after work to craft a prompt that will force OpenAI to generate a word - any English word for now - with a character count equal to the length property. I have about an 80% error rate - rarely can it generate a word with characters equal to the length value. The error rate goes down to about 60% when I say it doesn't have to be a word. Do you feel that this function-calling double API request is the only way to fix this issue and guarantee a 100% success rate? Thanks.
I wonder whether something like Instructor might be better for trying to get structured output. I haven't tried it with something like what you're trying, but I might play around with it over the weekend - github.com/jxnl/instructor
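Something like this, maybe - a hedged sketch assuming a recent version of Instructor; the model choice, target length, and field names are all illustrative. The interesting part is that the length check runs client-side in Pydantic, where counting is reliable, and Instructor can re-ask the model when it fails:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

# Patch the OpenAI client so responses are parsed into Pydantic models.
client = instructor.from_openai(OpenAI())

class Word(BaseModel):
    text: str

    # Validation runs in Python; Instructor retries the request on failure.
    @field_validator("text")
    @classmethod
    def check_length(cls, v):
        if len(v) != 7:  # illustrative target length
            raise ValueError(f"expected 7 characters, got {len(v)}")
        return v

word = client.chat.completions.create(
    model="gpt-4",
    response_model=Word,
    max_retries=3,  # re-ask the model when validation fails
    messages=[{"role": "user", "content": "Give me a 7-letter English word."}],
)
print(word.text)
```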
@learndatawithmark Thanks for the reply, Mark. I should have been more clear: the structure is always intact; the issue is that LLMs don't seem to be able to count. Very ironic - the technology that will eventually "take over our planet" can't tell you how many characters are in this message, nor how many objects are in a given array, let alone create strings with lengths equal to a value you supply. Hopefully that issue gets solved before we hand over military systems to our AI bots ;)
@chriswooohoo4518 I know, it's not intuitive at all. It's got me thinking whether the problem could be solved with Guidance, another tool that tries to control the output of LLMs: github.com/guidance-ai/guidance
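Roughly the idea, as a sketch - note that hard constraints like this need a locally hosted model where Guidance controls decoding token by token, so it wouldn't work against the OpenAI API; the model name and API shape assume a recent Guidance release:

```python
from guidance import models, gen

# Constrained decoding only works where Guidance controls token selection,
# i.e. a local model rather than a remote API.
lm = models.Transformers("microsoft/phi-2")  # illustrative model choice

# The regex makes a 7-character lowercase answer a hard guarantee rather
# than a polite request - the sampler can't emit anything else.
lm += "Reply with a single seven-letter English word: " + gen(
    "word", regex=r"[a-z]{7}"
)
print(lm["word"])
```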
Couldn't we take the same approach with LangChain's agents?
Yes I think so. I don't know for sure, but I would think the agents might use this API if/when they make calls to OpenAI?
I think you can. But I like this approach more.
gooooooood
Then why LangChain agents?
You get more control by using OpenAI function calling directly.
I haven't played with LangChain agents yet. I assume they are more powerful than what we showed in this video.
@learndatawithmark I am developing an app where users can choose a topic or upload a book/PDF for conversation and create multiple personas to get responses. I am not using LangChain; I am using OpenAI function calling and high-level prompting.
For real AI, I don't think prompt engineering is needed. So that raises a question: are those GPTs real AI?
The definition of AI seems to evolve to be whatever we can't currently do! So I dunno, I suppose not.
Why the fuck is everybody on the web giving just this weather example? Can't you be more creative and authentic? What the hell do I do if, for example, I need to translate an array of strings and always get back a consistent, formatted result?
Hey - I thought it'd be easier to explain if I used the example that people are familiar with. I did make another video a while back where I had it generate an array of objects and their sentiment. You can see that here - th-cam.com/video/lJJkBaO15Po/w-d-xo.html
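For the translation case specifically, here's a hedged sketch of the usual trick: declare a "function" whose parameters describe the output shape you want, force the model to call it with tool_choice, and just read the arguments - the function never has to exist. The function name and schema are illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()

# A "function" whose parameters describe the output shape we want back.
# We never implement it - we only read the arguments the model fills in.
tools = [
    {
        "type": "function",
        "function": {
            "name": "record_translations",  # illustrative name
            "description": "Record the translation of each input string",
            "parameters": {
                "type": "object",
                "properties": {
                    "translations": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "original": {"type": "string"},
                                "translated": {"type": "string"},
                            },
                            "required": ["original", "translated"],
                        },
                    }
                },
                "required": ["translations"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Translate to French: ['good morning', 'thank you']"}
    ],
    tools=tools,
    # Forcing the tool call means we always get the structured shape back.
    tool_choice={"type": "function", "function": {"name": "record_translations"}},
)

args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(args["translations"])  # consistent list of {original, translated} objects
```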