How does function calling with tools really work?

  • Published Oct 25, 2024

Comments • 109

  • @Qwme5
    @Qwme5 3 months ago +9

    I'm truly impressed by your explanation. As a complete beginner in this field, I found your ideas very easy to understand. You deserve a larger audience and more support. I'm grateful for experts like you who can break down complex topics and make learning accessible for newcomers like myself.

  • @henrijohnson7779
    @henrijohnson7779 2 days ago

    I truly appreciate the time Matt took to provide a comprehensive and detailed explanation of the function/tool calling processes. His technical explanations were spot on and completely resonated with me. I found his breakdown to be clear and insightful, making it easy to grasp the concepts involved.

  • @sergeziehi4816
    @sergeziehi4816 3 months ago +16

    The way you explain things..... is soooo pedagogical. The tone and the voice nuance... musical in the ears 😊.

  • @ElvinHoney707
    @ElvinHoney707 3 months ago +2

    You are correct! Function calling is actually made possible simply by the reasoning capacity, such that it is, of the model. There is nothing more than that. It is a convenient abstraction for service interactions. Instead of function calling we could just call it "if you think you need it you may ask for the following ...". BTW, this type of process reasoning is also used for agentic interactions when deciding workflows.

    • @xspydazx
      @xspydazx 3 months ago

      If your model can write code, then it can call a function!
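The idea in this thread, that tool calling is just the model's reasoning plus a JSON convention, can be sketched in a few lines. Everything below (the tool name, the simulated model reply) is hypothetical and for illustration only:

```python
import json

# Hypothetical local tool the model may "ask" to use.
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# The whole trick is a prompt: describe the tools, ask for JSON only.
SYSTEM_PROMPT = (
    "If you need a tool, reply ONLY with JSON like "
    '{"tool": "get_weather", "args": {"city": "..."}}.\n'
    "Available: get_weather(city) - current weather for a city."
)

def dispatch(model_reply):
    """Parse the model's JSON reply and run the named local function."""
    call = json.loads(model_reply)
    return TOOLS[call["tool"]](**call["args"])

# Simulated reply; a real run would send SYSTEM_PROMPT plus the question
# to the model and pass its raw output to dispatch().
print(dispatch('{"tool": "get_weather", "args": {"city": "Berlin"}}'))  # Sunny in Berlin
```

The model never executes anything; the calling code does, which is why this pattern works with any model that can follow the JSON instruction.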

  • @emmanuelgoldstein3682
    @emmanuelgoldstein3682 3 months ago +12

    As you grow in popularity, you may experience that your closest supporters will apply the greatest scrutiny. It doesn't mean you're disliked, no matter the perception of tone.

    • @xspydazx
      @xspydazx 3 months ago

      @emmanuelgoldstein3682 The problem was only that it felt incomplete!.. as everybody has been giving the same incomplete tutorial..

  • @solyarisoftware
    @solyarisoftware 3 months ago +2

    Hi Matt,
    I watched this video again with pleasure, and it got me thinking again :-). First of all, please avoid the trap of dividing your followers into lovers and haters. You produce top-notch content, and there's no need to apologize or dramatize (appreciating your irony).
    Let me delve into a point that emerged in your demo/experiment, which is, in my opinion, more significant than the function calling "issue". You verified that the majority of open-source models available on Ollama are able to produce the expected JSON. That's somewhat surprising to me. This demonstrates, as you suggested, that the OpenAI function-calling fine-tuned models are just marketing, but wait. I remember that the old GPT-3.5 OpenAI "instruct" models like "text-davinci-003" were able to produce JSON (so function calling-JSON if you will), but subsequent chat COMPLETION models (fine-tuned for lists of system/assistant/user messages) weren't! So, my guess is that OpenAI released the function-calling fine-tuned models later to correct the chat-completion fine-tuning?! Ironic again.
    But back to the Ollama models: I'm still perplexed. Are these optimized for both (at the same run-time) CHAT completion and "function calling" (aka JSON outputs)? This could maybe be a topic for another video...?
    By the way, it would be kind of you to share the code on your GitHub repo as usual, but anyway the video is absolutely explanatory.
    I'll take a look at the "Tools" #5284 Ollama PR. In my opinion, standardization could help the community around Ollama, even if you demonstrated that any user-made schema does the job.
    Thanks always for sharing great content. I appreciate your effort.
    Chapeau
    Giorgio

  • @christopherseiler7230
    @christopherseiler7230 3 months ago +3

    The way you explained it before is way more robust than how most frameworks/providers accomplish things with a tool use abstraction.

  • @Cairos1014
    @Cairos1014 3 months ago +1

    Timely. I have been battling getting function calling to work right. Sadly, many of the examples out there don't work with different models; they seem to all assume OpenAI. I look forward to giving your approach a try!

  • @12wsaqw
    @12wsaqw 3 months ago +1

    Despite your aversion to a reasonable display mode, both of your 'tools' videos make me say 'Whoop, it's not just me.' Thank you.

    • @technovangelist
      @technovangelist  3 months ago +2

      I have no aversion to the reasonable display mode, which of course is light mode...

  • @jofus521
    @jofus521 1 month ago

    I love your videos, man. This is one big thing I’ve picked up along my journey:
    Always start from specific information or ideas, and go towards generalities (not the other way around).
    In a classroom setting, where people prepare their wallets and minds for a learning event, maybe starting with generalities is better. But in real life, there are so many distractions. Distractions lead to confusion. Confusion leads to annoyance and madness.
    Moderate specificity requires the least amount of a consumer’s time and attention to get started. Maximizes potential engagement. Minimizes annoyance or friction.
    I'm always telling support or client-facing folks this exact same thing when they ask wide-open questions, and then complain that devs are frustrated or taking a long time to respond. Or maybe the dev team is falling behind on a new feature (of course they will when always having to context switch due to wide-open queries or too many unresolved details).
    Goes back to starting from specific details, so the consumer remains locked into their original intent and context, as much as possible. Not everyone who clicks here and watches these videos is watching with pen and paper in hand taking notes for a college exam. In order to potentiate their involvement, specific repeatable easy details are key.

    • @technovangelist
      @technovangelist  1 month ago

      That wasn’t the goal here.

    • @jofus521
      @jofus521 1 month ago

      Sorry I don’t understand that response. Also I updated/clarified the message during your response. Not trying to preach. Just my observations of life applied to what was discussed.

    • @technovangelist
      @technovangelist  1 month ago

      I guess I didn’t understand the comment.

    • @jofus521
      @jofus521 1 month ago

      OK, well, if there is something that you don’t understand about it, can you point out the spot that is not understood? That’s kind of my case in point.

    • @jofus521
      @jofus521 1 month ago

      At any rate, thank you for your videos and contributions

  • @lucasbarroso2776
    @lucasbarroso2776 3 months ago +2

    Love your videos! Your last vid about function calling really cleared some things up.
    I used that knowledge to create a market research bot for my company! They loved it; now I've jumped from a frontend TypeScript dev to AI operations engineer.

    • @technovangelist
      @technovangelist  3 months ago

      Nice. Hope that came with a bit of a pay bump..... Let me know the next thing you need and I can try to cover that too.

    • @lucasbarroso2776
      @lucasbarroso2776 3 months ago

      ​@technovangelist I would be stoked to see a "top 5 ollama models for different tasks" style video. I'm just using Llama 3 for everything right now.
      Some tasks I would like to optimize for speed, others for depth.

  • @jeffsteyn7174
    @jeffsteyn7174 3 months ago +2

    I think the biggest problem for you is that most people will read something in docs or on a blog and then claim to understand. Then attack someone even though they don't actually understand what they're talking about. You only understand once you implement and use the functionality.
    And to your 2nd point: you're 100% correct, there's no reason for a model to be fine-tuned for function calling. I discovered function calling with GPT-3.5 about 3 months after ChatGPT's launch.

    • @technovangelist
      @technovangelist  3 months ago

      I wish. It’s pretty clear in the docs. They just see the feature name and assume from there. Thanks for the comment.

  • @jayakumark9213
    @jayakumark9213 3 months ago +1

    Ollama tools got merged, the day after you mentioned it :-). Thanks for the push

  • @HyperUpscale
    @HyperUpscale 3 months ago

    Awesome! I am glad there are people like you to simplify and RE-explain the basics to the "writers". ☺
    I really appreciate you coming and stepping on the trolls' feet. Perfect 👌
    I see no reason to get excited about incompetent comments.
    Just chill and explain nicely 👏

  • @favoratti
    @favoratti 1 month ago

    I've been playing with AI and tools for quite a while and I've come to the same opinion as you. It's not complex at all, but it does bring a lot to the table.
    Also, agent frameworks are not needed for most use cases.

  • @eggsdee9110
    @eggsdee9110 3 months ago +1

    When it comes to function calling with lots of highly specific parameters, I find that the models I can run in Ollama are simply incapable of following the schema I provide, whereas OpenAI and Claude do an excellent job following a large JSON schema. So when it comes to a model being "trained for function calling", I think they mean trained to follow large and strict schemas well, as that's really the main difference.
    You will see a difference if your function call (basically JSON output) needs to follow a large and strict schema. Everyone's examples are too small to notice the change.

    • @technovangelist
      @technovangelist  3 months ago

      Have you tried? I just tweaked my code to use 10 parameters per function and gave a more complicated prompt. Worked just fine. But if you are doing that, you probably have bigger issues outside of the model.
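For the large-schema case discussed in this thread, one workable pattern is to validate the model's output yourself and re-prompt with the error message on failure. A minimal stdlib-only sketch; the schema here is illustrative, not from the video:

```python
import json

# Illustrative strict schema: required keys and their expected types.
SCHEMA = {"name": str, "city": str, "units": str, "days": int}

def validate(raw):
    """Check raw model output against SCHEMA; return (ok, error_message)."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"not JSON: {e}"
    for key, typ in SCHEMA.items():
        if key not in obj:
            return False, f"missing key: {key}"
        if not isinstance(obj[key], typ):
            return False, f"wrong type for key: {key}"
    return True, ""

ok, err = validate('{"name": "forecast", "city": "Oslo", "units": "C", "days": 3}')
bad, err2 = validate('{"name": "forecast", "city": "Oslo"}')
# On failure, err2 can be appended to the next prompt as a correction hint.
```

Local models that drift from a big schema often recover after one such retry, and the loop costs nothing when the first attempt is already valid.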

  • @robtaylor796
    @robtaylor796 3 months ago

    Great springboard on the subject matter. Clear, to the point.

  • @RazorCXTechnologies
    @RazorCXTechnologies 3 months ago +1

    Pure gold! Always appreciate your concise explanation and humour.

  • @blee6782
    @blee6782 3 months ago

    that's amazingly simple, nice. I'm guessing one scenario where someone would still want an agent-framework is if the framework was a low/no-code workflow.
    I'd love to see a video on whether running models with GPTQ quantization is worthwhile. Most explanations I've seen amount to "GPTQ is for GPUs, GGML is for cpus" without saying why GPTQ is completely neglected in projects like ollama, or if there is even a meaningful advantage to either at this point.

    • @xspydazx
      @xspydazx 3 months ago

      Quantized models are fine.. they work as well as the original full precision in general!!
      Speed is ALWAYS dependent on the system!

  • @pythonlibrarian224
    @pythonlibrarian224 3 months ago

    The libraries are creating abstractions over a document and we can forget where the abstraction layer ends and where the substrate begins.
    I'm going to try out this pattern. Lots of libraries make it easy to swap out models expecting completions vs conversations... fewer libraries have a nice clean way to swap out models that handle function calling differently.

  • @MindForeverVoyaging
    @MindForeverVoyaging 3 months ago +1

    Welcome to the real world 🙂
    I suggest a disconnected engagement approach.
    Love your videos and your style.

    • @technovangelist
      @technovangelist  3 months ago

      what do you mean by disconnected approach?

    • @MindForeverVoyaging
      @MindForeverVoyaging 3 months ago

      @@technovangelist 'Disconnected Engagement'. Stay fully engaged with what you are working on and your goals but disconnected from trolls, detractors and negative feedback. All the best with your channel.

    • @technovangelist
      @technovangelist  3 months ago +1

      Got it. Thanks. Luckily the negative is a small fraction of the rest of the comments. And I don’t spend too much time on it. I had fun with this one though. Thanks for the comment.

    • @themax2go
      @themax2go 3 months ago

      Matt, it seems that you go manually through YT comments... would it be possible to use AI to help you with that somehow? 🤔

    • @technovangelist
      @technovangelist  3 months ago

      You make it sound like reading my comments is something I would want to avoid. Ideas come from comments. Connection comes from comments. This would be the last thing I would ever want to outsource to an ai or other human.

  • @researchandbuild1751
    @researchandbuild1751 19 days ago

    Would you mind answering a question? When you use this, do we essentially pass in the instructions each time we prompt? For example, if I tried to do this manually, I would just repeat my instructions each time (which also include the formatting) along with the new question? I wasn't sure what system prompt vs user prompt means; do they both end up in the same place anyway?
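To the question above: in a chat-style API the system prompt and user prompt are separate messages in the same request, and because the HTTP API is stateless, the standing instructions are indeed re-sent on every call. A sketch of how the message list is typically assembled (the system/user/assistant roles follow the common chat-API convention):

```python
# The system message carries the standing instructions (output format,
# tool descriptions); each user message carries only the new question.
SYSTEM = "Respond only with JSON."

def build_messages(history, question):
    """Assemble the full message list that is sent on every request."""
    return (
        [{"role": "system", "content": SYSTEM}]
        + history  # prior user/assistant turns, if any
        + [{"role": "user", "content": question}]
    )

msgs = build_messages([], "What is the capital of Germany?")
# Both messages end up in the same context window; chat-tuned models are
# simply trained to give the "system" role extra weight.
```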

  • @renemuller5823
    @renemuller5823 3 months ago

    Hello Matt, thanks for the updated example. I had been stuck there too, but never thought once about insulting you because of my lack of experience 🙂.

  • @IanScrivener
    @IanScrivener 3 months ago

    Thanks for your videos and demo code Matt... very helpful.
    And sorry that some people are nasty and hateful. There is no need for that. It is sad that some people feel they have permission to vent their anger and negativity and harm others.

  • @eyeseethru
    @eyeseethru 3 months ago

    So glad you made this video! Could you perhaps go into why apps like the ones that assist with coding or app creation that use function calling may fail with local models, but work seamlessly with the cloud models? I think this is an area where people are struggling, based on the many issues I see in GitHub repos.

    • @technovangelist
      @technovangelist  3 months ago

      I think a lot of folks don’t realize that function calling is possible in ollama. There are folks who seem intent on spreading the notion that function calling is more than it is. And so they kind of brute force their way through rather than taking the simpler approach. But that’s just a guess. Can you point me to some of the issues you have seen?

  • @DeanRIowa
    @DeanRIowa 3 months ago

    My favorite video of yours to date. Actually the example clarified some questions I had, so thank you. I personally hope you make more mistakes 😉

  • @MatiasBerrueta
    @MatiasBerrueta 3 months ago +1

    haters will hate, but you rock man! ty for your videos !

  • @johnkotchmusic
    @johnkotchmusic 3 months ago

    Matt - haters suck. You're doing great and it's awesome that you're willing to share your wisdom and knowledge. Please ignore the jerks; we're surrounded by assholes.

    • @technovangelist
      @technovangelist  3 months ago

      If I ignore them I don’t get to do fun things like this video.

  • @AshishBangwal
    @AshishBangwal 3 months ago +4

    Considering OpenAI as the ONLY solution is not smart. In a lot of use cases you can get away with open-source models like Llama 3, Mixtral, DeepSeek, etc. And try not to blame Ollama; it's just a library to run quantized open-source models locally and give you an API interface just like OpenAI 😆

    • @technovangelist
      @technovangelist  3 months ago +2

      It is incredible how some think OpenAI is the only solution that deserves to exist.

  • @Cheng32290
    @Cheng32290 3 months ago

    Can I say that, if my prompt is clear enough, I can have function calling using any model? Since it's just helping the software decide which functions to call, right?
    Thanks for the explanation; it's mind-blowing to me

    • @technovangelist
      @technovangelist  3 months ago +1

      The important part is to use format:json, and specify to output as json in the prompt.

    • @Cheng32290
      @Cheng32290 3 months ago

      @technovangelist Interesting, it makes me wonder how Ollama guarantees the output from any LLM model will be in JSON format?
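The format:json approach mentioned above goes into the request body. A sketch of what an Ollama /api/generate payload looks like (built but not sent here; the model name is only an example):

```python
import json

# "format": "json" asks Ollama to constrain the output to valid JSON; the
# prompt should still say "respond as JSON" so the model cooperates.
payload = {
    "model": "llama3",  # example model name
    "prompt": "Respond as JSON with keys 'tool' and 'args'. "
              "What is the weather in Berlin?",
    "format": "json",
    "stream": False,
}
body = json.dumps(payload)
# A real call would POST `body` to http://localhost:11434/api/generate.
```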

  • @arun._space
    @arun._space 3 months ago

    Can you explain, or share some useful resources to learn more about, introspection and reflection?

  • @DevasheeshMishra
    @DevasheeshMishra 3 months ago +1

    I think that Ollama's implementation of function calling works by forcing a `{` token at the start to make the model generate a function call.
    Correct me if I am wrong.

    • @technovangelist
      @technovangelist  3 months ago

      I don't know the details but I am 95% sure that has nothing to do with it. I am pretty sure it's a GBNF grammar that was set up back in October.
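For readers unfamiliar with the term: a GBNF grammar constrains which tokens the sampler may emit, which is how valid JSON can be guaranteed regardless of the model. A heavily simplified sketch of the idea (the real grammars/json.gbnf shipped with llama.cpp is more complete; treat this as an illustration, not a working grammar):

```
root   ::= object
object ::= "{" pair ("," pair)* "}"
pair   ::= string ":" value
value  ::= string | number | object
string ::= "\"" [a-zA-Z0-9_ ]* "\""
number ::= [0-9]+
```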

  • @hiddenkirby
    @hiddenkirby 2 months ago

    Haters are just insecure. Great work.

  • @VinCarbone
    @VinCarbone 3 months ago +1

    Please can you point to the websearch tool used?

  • @sean_vikoren
    @sean_vikoren 15 days ago

    Funny, but also three minutes of my life... I'll go back to watching the original. I was halfway to trying it when some dillweed distracted me from that excellent content.
    Update: The code eventually returned 200 success with an empty message. If I ask in the Ollama interface, I do get Berlin. Probably an Ollama update.

  • @mattgscox
    @mattgscox 3 months ago +2

    I don't use OpenAI function calling at all - it's just a wrapper for JSON conversion and interpretation of the output, and I'd rather keep control of that myself to make it more portable between LLMs. Why would anyone write something that is locked to an LLM interface definition when we live in such a turbulent world? I'd encourage everyone to do the same. I can't honestly see any benefit in using the "function call" feature versus rolling your own.

    • @coom07
      @coom07 2 months ago

      @mattgscox Not gonna lie, I ended up finding myself in your position. And realized OpenAPI specs are the answer to it. I don't want to give away my digital independence.
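The portability argument in this thread can be made concrete: keep tool definitions in your own neutral structure and render them per backend. A sketch; the OpenAI-style shape below approximates their published tools format, so treat the exact fields as assumptions:

```python
# Neutral, backend-agnostic tool description owned by your own code.
TOOL = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "params": {"city": "string"},
}

def as_prompt(tool):
    """Render the tool as plain prompt text for any JSON-capable model."""
    args = ", ".join(f"{k}: {v}" for k, v in tool["params"].items())
    return f'Tool {tool["name"]}({args}): {tool["description"]}. Reply in JSON.'

def as_openai(tool):
    """Render the same tool in an OpenAI-style schema (shape approximate)."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": {
                "type": "object",
                "properties": {k: {"type": v} for k, v in tool["params"].items()},
                "required": list(tool["params"]),
            },
        },
    }
```

Swapping backends then means swapping one render function, not rewriting every tool.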

  • @darksites
    @darksites 3 months ago +2

    I accept your apology.

  • @The_8Bit
    @The_8Bit 3 months ago +2

    Like a boss!

  • @solyarisoftware
    @solyarisoftware 3 months ago

    100% clear. thanks

  • @oliviere1215
    @oliviere1215 3 months ago

    Is it possible to use function calling with tools with Open-Webui?

  • @mrschmiklz
    @mrschmiklz 2 months ago

    lol. love it. spread more knows

  • @wavecoders
    @wavecoders 3 months ago

    Yeah, I am not getting consistent function names. Model keeps changing them. Parameters are good. So for me it’s not stable no matter the model I use

    • @technovangelist
      @technovangelist  3 months ago

      Interesting. Would love to see the code you are running. I haven't been able to get it to fail ever.

    • @wavecoders
      @wavecoders 3 months ago

      @technovangelist I got it now. Forgot to stringify the JSON object.
      So basically I have it working in JavaScript, including agents.

  • @flat-line
    @flat-line 3 months ago

    What is the name of the local search API you used?

  • @crism8868
    @crism8868 3 months ago

    Really, Gemma can do this? From the examples I've seen that model is pretty dumb, so if an SLM such as this can do function calling, I'm impressed

    • @technovangelist
      @technovangelist  3 months ago +1

      That was gemma2 I think.

    • @xspydazx
      @xspydazx 3 months ago

      Actually that bit was interesting: every single model produced not just correct output but the right output....
      Personally I have found that using such techniques means after you get your final response you will need to unload and reload the model, or clear the cache, so the model can prepare for the next question?

    • @technovangelist
      @technovangelist  3 months ago +1

      If you are having to unload and reload there must be something very strange with your setup. Is this with ollama? Have you updated to the latest versions? There is no need to do such things.

  • @BrokenOpalVideos
    @BrokenOpalVideos 3 months ago

    I don't know if I will ever forgive you for this. How could you do this to us 😂❤

    • @xspydazx
      @xspydazx 3 months ago

      lol

  • @jinil9002
    @jinil9002 3 months ago

    Great!!!

  • @jamazing1122
    @jamazing1122 3 months ago

    😆I felt like this was a dev version of this vid: th-cam.com/video/0Szj21arytU/w-d-xo.html. Really enjoyed this one. As always, thanks for putting out great and useful content!

  • @brian2590
    @brian2590 3 months ago

    🔥🔥🔥

  • @dtesta
    @dtesta 3 months ago

    Sooo, that was exactly what I wrote on your other video? That you can use ANY model for this, as long as it returns json in the response. As it's the "calling party" that actually runs the code/function, it has nothing to do with the model itself. But you claimed this was "added" to later models? I am confused.

    • @technovangelist
      @technovangelist  3 months ago

      Hmm not sure what other video you are referring to. But if I said something that sounded like I suggested it was added in the model I was simply not stating what I meant clearly. Function calling was added to ollama in October or November. So later than the initial release in June. That’s what I would have meant.

    • @dtesta
      @dtesta 3 months ago

      @technovangelist Ok, you wrote that it was added in Llama 2, which is a model. If you meant Ollama, it makes more sense. However, what exactly prevented me from doing this with the very first version of Ollama? As long as I make my own scripts that talk directly to the Ollama API, why would I not be able to "ask it to return json" and simply run functions in my script based on the response? That is the part that I still do not get. Why would any type of "support for function calling" need to be added to either the model or the "wrapper" (Ollama in this case) for it to work?

    • @technovangelist
      @technovangelist  3 months ago

      If you did that at the beginning the answer would have probably been something like: “sure, here is the json: {…”. It wouldn’t have been just the json. Folks were adding instructions like no prose etc to get the model to follow the instructions

    • @dtesta
      @dtesta 3 months ago

      @technovangelist That's odd. It worked perfectly fine for me to say "only respond with a json object, nothing else" even on the very first models. Anyways, doesn't really matter.
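The "sure, here is the json: {…" failure mode mentioned earlier in this thread can also be handled defensively by slicing the first balanced JSON object out of a chatty reply. A stdlib-only sketch (it ignores braces inside strings, which is fine for an illustration):

```python
import json

def extract_json(text):
    """Pull the first balanced {...} object out of a chatty model reply."""
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(text[start : i + 1])
                except json.JSONDecodeError:
                    return None
    return None

chatty = 'Sure, here is the json: {"city": "Berlin"} hope that helps!'
print(extract_json(chatty))  # {'city': 'Berlin'}
```

With a guard like this, even early models that wrap their JSON in prose can be used for function calling.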

  • @RickySupriyadi
    @RickySupriyadi 3 months ago

    Most youtubers don't care about what their viewers agree or disagree with (and how they say it), but you handle it as if they are part of a...? community...? a... companion along Ollama adventures...?
    In the first place, most youtubers don't care and move on to the next video; in the end those nasty words are just comments, and when newer videos come up those disagreers come back to watch the newer video... it also happens with Wes Roth's channel, David Shapiro's channel, even Kamph's channel...
    Well anyway, I've been watching TH-cam unreasonably long. I don't have local TV or Netflix; all I've got are smart TVs, Android boxes, and tablets all over my places - at the office, in my room, at my home, in my car - everywhere, mostly playing TH-cam videos 24 hours a day. And Matt, after all these years you're the only youtuber who really cares and is serious about what you say, and the recent event was handled to a surprisingly different degree. You're treating your channel in a different way; it is an interesting way of youtube-ing. Don't stop, Matt, unless you've got a private issue.

    • @technovangelist
      @technovangelist  3 months ago +1

      I think that is part of my background as an evangelist or as some companies call it, a dev advocate, though that’s a misleading name. Build a community, have conversations, relay feedback back to the team. I have incorporated so much feedback into my videos every time. Thanks for the comment and thanks for noticing.

  • @poisonza
    @poisonza 3 months ago +1

    Yeah, function calling is just making the LLM choose what function to use and specifying the required params as structured output. I am amazed how dumb people are... just try to code up a simple example and run it.

    • @xspydazx
      @xspydazx 3 months ago

      No, not dumb.... there are many components to an AI system; you can just use inputs and outputs... but there is a lot more you can do with a base model!
      As we may see in a tutorial or example of your Mistral model flying your RC helicopter!

  • @soonheng1577
    @soonheng1577 2 months ago

    Thank you, thank you. You just confirmed my thoughts and totally cleared my doubts. Thank you, thank you.