Instructor makes GPT Function Calling easy! 🚀

  • Published Sep 21, 2024

Comments • 36

  • @echohive
    @echohive  10 months ago +1

    Code files are available for FREE to download at Patreon:
    www.patreon.com/posts/92210687
    Exclusive deep dive for AI Architect+ Patrons: www.patreon.com/posts/instructor-for-92268385
    Everything GPT API crash course: www.patreon.com/posts/code-files-for-92075434
    Quick start if you are new to coding and GPT API:
    th-cam.com/video/YMhsatiXiGc/w-d-xo.html
    Instructor github: github.com/jxnl/instructor
    instructor video: th-cam.com/video/yj-wSRJwrrc/w-d-xo.html
    Voice controlled Auto AGI with swarm and multi self launch capabilities:
    th-cam.com/video/zErt3Tp7srY/w-d-xo.html
    Auto AGI original video: th-cam.com/video/jTC-6kBOfn8/w-d-xo.html
    Auto AGI original source code: www.patreon.com/posts/87530987
    Search 200+ echohive videos and code download links:
    www.echohive.live/
    Chat with us on Discord:
    discord.gg/PPxTP3Cs3G
    Follow on twitter(X) : twitter.com/hive_echo

    • @xspydazx
      @xspydazx 4 months ago +1

      Nice video; I also like the enthusiasm.
      I did not know about the dict response, i.e. the any-potential-query response: a good chain to build here manually.
      The validators could also be used to capture the input. For example, for uppercase responses the string can be updated while the form is loading, and the validator can even perform a calculation in the response, using the error message to return data for the next retry. That gives you auto-fields which populate while the LLM is filling out the form. For commercial use, you could be providing data from an internal data source based on the partial form.
      It's very useful.
      I would like to see how to use it with a local LLM not hosted on the API (or LM Studio)?
      Great video, and thanks for doing the demo, then the examples, then the retelling and more in-depth look. Great methodology. Thanks!
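
The validator ideas in this thread can be sketched with plain Pydantic, which Instructor builds on. A minimal sketch, assuming Pydantic v2; the model name `FormField` and its field are illustrative, not from the video. A validator can both reject a value (Instructor feeds the error text back to the LLM on retry) and rewrite it (the uppercase case mentioned above):

```python
from pydantic import BaseModel, field_validator

class FormField(BaseModel):
    answer: str

    @field_validator("answer")
    @classmethod
    def normalize(cls, v: str) -> str:
        # Raising rejects the value: with Instructor's max_retries, the
        # error message is sent back to the LLM for the next attempt.
        if not v.strip():
            raise ValueError("answer must not be empty; please restate it")
        # Returning a new value rewrites the field while the "form" fills,
        # e.g. enforcing the uppercase convention discussed above.
        return v.upper()
```

With Instructor you would pass `response_model=FormField` plus a `max_retries` count, and failed validations become retry prompts automatically.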

    • @echohive
      @echohive  4 months ago +1

      Thank you 🙏 If you wanted to use it with a local model, all you would have to do is run it with an OpenAI-library-compatible client and change the API URL to point at that model, as shown in the docs, and that is it.
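
For readers who want the concrete shape of that answer: a minimal sketch, assuming the `openai` and `instructor` packages and an OpenAI-compatible local server. The base URL, model name, and the helper `extract_from_local_model` are placeholders, not anything from the video; imports are deferred so the sketch reads even where those libraries are absent.

```python
from pydantic import BaseModel

class UserDetail(BaseModel):
    name: str
    age: int

def extract_from_local_model(text: str,
                             base_url: str = "http://localhost:1234/v1") -> UserDetail:
    # Deferred imports: only needed when actually calling a local server.
    import instructor
    from openai import OpenAI

    # Point the OpenAI client at the local server; most local servers
    # ignore the API key. Instructor patches the client so that
    # response_model works the same way it does against OpenAI.
    client = instructor.patch(OpenAI(base_url=base_url, api_key="not-needed"))
    return client.chat.completions.create(
        model="local-model",  # placeholder: whatever name your server exposes
        response_model=UserDetail,
        messages=[{"role": "user", "content": f"Extract the user from: {text}"}],
    )
```

The only real change versus the hosted API is the `base_url` (and a dummy key); everything else stays as in the video.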

    • @xspydazx
      @xspydazx 3 months ago

      @echohive Hi, I don't know if you could help me understand how to execute code on IPython / an IPython kernel from Python code, i.e. running a notebook from Python via a Jupyter client.
      Recently I discovered that all we need, for each session (in the chat, or for each task), is to execute the code in a Jupyter notebook cell and return the result to the model or response.
      So we can catch the function calls in the response with a detail field (functionlist) and separate the response out with a template.
      We template the response with Instructor, and for functions that need to return a value, the value can be returned via the validator.
      Quite simple: I think Code Interpreter, open-interpreter and Outlines use this type of process.
      With the Instructor version, we can take normal Python functions in a folder, convert them all to Pydantic functions, and send them with the prompt to the model.
      With the correct wrapper we can add intent detection to the prompt input and load only the functions we will require for the response, i.e. sort our Python functions into a folder hierarchy and load only, say, the text-formatting functions. That way we can tell the model the location of the functions to load instead of sending the functions themselves, so it would call for the right collection of functions to make a call, or even generate its own on-the-fly functions to be executed in Jupyter.
      As a coder, the model can then produce the expected outputs from the code it shares (using Jupyter); hence the Jupyter call is the same template. It can make the shell call (system control) and the code call (based on installed kernels).
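
The execute-a-cell-and-return-the-result loop described above can be sketched without a real kernel. In practice you would use `jupyter_client` (a kernel manager plus a blocking client) to talk to an actual IPython kernel; this stdlib stand-in, with the hypothetical helper `execute_cell`, only shows the loop shape: run the code, capture output or traceback, and hand the text back to the model as the tool result.

```python
import contextlib
import io
import traceback

def execute_cell(code: str, namespace: dict) -> str:
    """Run one notebook-style "cell" and return what it printed.

    The shared namespace persists across calls, which is what makes
    this behave like successive cells in a single kernel session.
    """
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, namespace)
    except Exception:
        # Errors go back to the model too, so it can repair the code.
        buf.write(traceback.format_exc())
    return buf.getvalue()

session = {}
execute_cell("x = 2 + 2", session)          # state lives on in `session`
result = execute_cell("print(x)", session)  # -> "4\n"
```

Note that `exec` here has none of a real kernel's isolation; the point is only the session-state-plus-captured-output contract that the model sees.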

  • @jxnlco
    @jxnlco 10 months ago +14

    Creator of Instructor here
    Thanks for sharing this; let's work together if you want to try going over some more complex examples.

    • @echohive
      @echohive  10 months ago +2

      Hi Jason. I love Instructor and I think you built something truly remarkable and useful. I would love to take you up on that. We can talk on Discord if you like: discord.gg/zYpJn8nhcw
      I will also message you on Twitter. Looking forward to talking with you.

    • @kevon217
      @kevon217 9 months ago +1

      @echohive Would love to see a collab between you two featuring more complex applications.
      Instructor is a super useful abstraction. I've been trying to build out more complex agentic applications and have been experimenting with a number of approaches, but the thing that always trips me up is how best to organize the foundation of my codebase and build upon it efficiently/effectively. I often end up going down rabbit holes of premature optimization and trying to modularize everything, but it gets messy pretty quickly... Instructor, imo, has been the most intuitive and cleanest approach to quickly creating useful functions without getting bogged down in the minutiae. I'm curious what all your ideas are for approaching more complex applications using GPT assistants and Instructor.
      Also, VRSEN has a nice demo on his channel ( th-cam.com/video/8fMnAZI1bdA/w-d-xo.html ) implementing that combo with a user proxy acting like a load balancer between other agents. Worth checking out.

    • @xspydazx
      @xspydazx 3 months ago

      The simplicity of executing our code in a Jupyter notebook will give the model the full abilities it needs: hence a wrapper can be made for the model (including the tokenizer and model) so the model can make function calls using the Jupyter client / IPython kernel.
      Then the template can be simple: Response / Functions.
      The validator can execute and return the code.
      Right now, complex structures really slow the model down.
      I found the model does not need functions to be supplied either; it will generate every function it needs. So we just need to collect the functions that need to be executed, and even just separate them from the output using Instructor!
      Hence we send only one function (the Jupyter call).
      Then we can also send one more function: API calls.
      Then calls can be made to an API and returned to the model!
      No more functions need to be sent to the model now, as it will have the two object models it requires.
      New templates entered into this model would sub-categorize the response field, leaving only the two or three top fields: response, apicalls, functioncalls.
      Hence API keys can be sent to the model and it should determine the correct call for itself using the username and password supplied!
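
The "two or three top fields" template described here maps directly onto a Pydantic response model for Instructor. A minimal sketch; the class and field names (`ChatReply`, `FunctionCall`) are illustrative, not from the thread's code:

```python
from typing import Dict, List
from pydantic import BaseModel, Field

class FunctionCall(BaseModel):
    name: str                       # e.g. "jupyter" or "api_call"
    arguments: Dict[str, str] = {}

class ChatReply(BaseModel):
    # The plain-text part of the answer.
    response: str
    # Code the model wants executed in the notebook kernel,
    # kept separate from the prose.
    function_calls: List[FunctionCall] = Field(default_factory=list)
    # Outbound API requests the model wants made on its behalf.
    api_calls: List[FunctionCall] = Field(default_factory=list)
```

Passed as `response_model=ChatReply`, Instructor would guarantee the output parses into exactly these fields, so the caller can route `function_calls` to the kernel and `api_calls` to an HTTP client.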

  • @nuclear_AI
    @nuclear_AI 8 months ago +2

    Can't believe I'm just watching this now!
    Great work!
    👏👏👏

    • @echohive
      @echohive  8 months ago +1

      Thank you 🙏 Instructor is very useful.

  • @kenchang3456
    @kenchang3456 8 months ago

    Happy New Year and thanks for this video. I've been working with Pydantic, and Instructor looks like the addition that would take my POC for conversational forms to the next level. Thanks again.

    • @echohive
      @echohive  8 months ago

      Happy new year to you too! I am glad you found the video useful 🙏

  • @balogunlikwid
    @balogunlikwid 10 months ago +1

    I have used this library and it is insanely good. It only works well with GPT-4, though.

    • @echohive
      @echohive  10 months ago

      I liked it very much as well. Will look more into it.

    • @balogunlikwid
      @balogunlikwid 10 months ago +1

      @echohive Would love to see more implementations.

    • @echohive
      @echohive  10 months ago +1

      Will work on more examples 🙂

  • @dawn_of_Artificial_Intellect
    @dawn_of_Artificial_Intellect 10 months ago

    This is awesome

    • @echohive
      @echohive  10 months ago

      Thank you 🙏

  • @brando2818
    @brando2818 10 months ago

    Excellent

    • @echohive
      @echohive  10 months ago

      Thank you 🙏

  • @nuclear_AI
    @nuclear_AI 8 months ago

    We could easily rephrase the comment "We can get the LLM to return an error that we can use"...
    to "Prompting your application framework how to proceed"...
    🤯🧠
    [09:15]

    • @echohive
      @echohive  8 months ago +1

      Yeah, very true. Instructor's goal is to convert a natural language prompt into code :)

    • @nuclear_AI
      @nuclear_AI 8 months ago

      @echohive Is it correct that Pydantic handles the memory side of things, and Instructor controls how the model proceeds/acts/behaves?
      Behaviour is taking on a whole new meaning. I feel this area of research may be important in the areas of #AGI and #ConsciousnessTheory.

    • @echohive
      @echohive  8 months ago +1

      Maybe a better way to say it would be "Pydantic handles the structure of the memory". These are all worth a good philosophical inquiry, as they are very open-ended. A lot of creativity is possible.

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 10 months ago

    Anything is a hammer if you want it to be. In this instance we are using a car as a hammer.

    • @echohive
      @echohive  10 months ago

      I am not sure if I understand what you mean. Would you like to explain?

    • @zyxwvutsrqponmlkh
      @zyxwvutsrqponmlkh 10 months ago

      @echohive It seems like overkill to use ChatGPT just to extract names and ages from a string. I bet a couple of regexes could get 90% of these, and ChatGPT could even be coerced into giving you the regex.
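
For simple cases the commenter has a point. A rough sketch of the regex approach; the pattern is illustrative and will miss plenty of phrasings, which is exactly the "90%" caveat:

```python
import re

text = "Alice is 31 and her brother Bob turned 29 last week."

# A capitalized word followed, within a few words, by a 1-3 digit number.
pattern = re.compile(r"(?P<name>[A-Z][a-z]+)\D{1,30}?(?P<age>\d{1,3})")

people = [(m["name"], int(m["age"])) for m in pattern.finditer(text)]
# -> [("Alice", 31), ("Bob", 29)]
```

It breaks on pronouns, relative dates ("turns 30 next May"), or anything stated by implication, which is where the LLM-based extraction in the video earns its cost.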

    • @echohive
      @echohive  10 months ago +4

      Oh, I see. This is a demonstration of the capabilities of the library: it starts simple, then covers more complex cases. It also extracts information from meaning; there couldn't be a regex that extracts pyramid names, for example.

  • @delightfulThoughs
    @delightfulThoughs 10 months ago

    Would this work with any open source models? Thank you for the video.

    • @echohive
      @echohive  10 months ago

      I don’t think it would because they wouldn’t accept function calling parameters. If I am mistaken, maybe one of the viewers can correct me.

    • @jaysonp9426
      @jaysonp9426 10 months ago +1

      Good question. I noticed that when running Mistral locally with AutoGen, it fails if I'm running in LM Studio, because of the way LM Studio's server structures its OpenAI call, which I believe uses function calling (looking at the server log).
      But if I use oobabooga, AutoGen works... my intuition is telling me it has to do with the way the call is structured. I'll try Instructor with both and see.

    • @echohive
      @echohive  10 months ago

      Kindly let us know what you find.

    • @joxxen
      @joxxen 10 months ago

      I think Hugging Face released function calling also.

  • @jasondogan
    @jasondogan 10 months ago

    Sorry to say, programming will be an extinct art form in 2-5 years.