Function Calling Local LLMs!? LLaMa 3 Web Search Agent Breakdown (With Code!)

  • Published Aug 3, 2024
  • With the power of LangGraph, we can add tool use and function calling to local LLMs! In this video, we go over my entire process and break down all of the code needed to do this yourself, using Llama 3 8B as a research agent with a web search tool. This idea can be extended to more advanced workflows and to additional tools beyond just web search. Enjoy! (A rough code sketch of this workflow is included below the chapter list.)
    Code: github.com/ALucek/llama3-webs...
    Ollama Tutorial: • How To Easily Run & Us...
    Ollama Llama 3: ollama.com/library/llama3
    FIX FOR ERROR 202 WITH DUCKDUCKGO_SEARCH PACKAGE:
    pip install -U duckduckgo_search==5.3.0b4
    Chapters:
    00:00 - Intro
    00:33 - Prerequisite Ollama
    00:59 - Workflow Diagram Overview
    02:11 - How This Simulates Function Calling
    03:17 - Code Walkthrough Starts
    04:10 - Defining LLMs
    05:15 - Setting Up DuckDuckGo API
    06:52 - Prompting Overview
    07:21 - Llama 3 Special Token Explanation
    09:07 - Generation Prompt
    11:29 - Routing Prompt
    13:23 - Transform Query Prompt
    14:45 - Putting It Together With LangGraph
    15:40 - Defining the State
    17:40 - Router Node
    18:48 - Transform Query Node
    20:13 - Web Search Node
    21:15 - Generation Node
    22:17 - Adding Nodes & Edges
    25:02 - Invoking the Agent!
    27:15 - LangSmith Trace Overview
    29:34 - Outro
  • Science & Technology
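
    As context for the chapters above, here is a minimal sketch of the kind of graph the video builds, assuming a locally pulled llama3 model in Ollama and the langchain-community, langgraph, and duckduckgo-search packages. The node names, prompts, and state fields below are illustrative, not the repository's exact code:

    from typing import TypedDict

    from langchain_community.chat_models import ChatOllama
    from langchain_community.tools import DuckDuckGoSearchRun
    from langchain_core.output_parsers import JsonOutputParser, StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langgraph.graph import END, StateGraph

    # Two handles on the same local model: JSON mode for the routing decision,
    # plain text mode for the final answer.
    router_llm = ChatOllama(model="llama3", format="json", temperature=0)
    answer_llm = ChatOllama(model="llama3", temperature=0)
    web_search_tool = DuckDuckGoSearchRun()

    class GraphState(TypedDict):
        question: str
        search_query: str
        context: str
        generation: str

    router_prompt = PromptTemplate.from_template(
        "Decide whether the question needs a web search. "
        'Return JSON like {{"choice": "web_search"}} or {{"choice": "generate"}}.\n'
        "Question: {question}"
    )

    def route_question(state: GraphState) -> str:
        # Conditional entry point: returns the key of the next node to run.
        decision = (router_prompt | router_llm | JsonOutputParser()).invoke(
            {"question": state["question"]}
        )
        return "websearch" if decision.get("choice") == "web_search" else "generate"

    def transform_query(state: GraphState) -> dict:
        # Rewrite the user question into a short keyword-style search query.
        prompt = PromptTemplate.from_template(
            "Rewrite this question as a concise web search query: {question}"
        )
        query = (prompt | answer_llm | StrOutputParser()).invoke(
            {"question": state["question"]}
        )
        return {"search_query": query}

    def web_search(state: GraphState) -> dict:
        # The simulated "function call": actually run the tool with the query.
        return {"context": web_search_tool.invoke(state["search_query"])}

    def generate(state: GraphState) -> dict:
        prompt = PromptTemplate.from_template(
            "Answer the question, using the context if it is provided.\n"
            "Context: {context}\nQuestion: {question}"
        )
        answer = (prompt | answer_llm | StrOutputParser()).invoke(
            {"context": state.get("context", ""), "question": state["question"]}
        )
        return {"generation": answer}

    workflow = StateGraph(GraphState)
    workflow.add_node("transform_query", transform_query)
    workflow.add_node("websearch", web_search)
    workflow.add_node("generate", generate)
    workflow.set_conditional_entry_point(
        route_question, {"websearch": "transform_query", "generate": "generate"}
    )
    workflow.add_edge("transform_query", "websearch")
    workflow.add_edge("websearch", "generate")
    workflow.add_edge("generate", END)

    agent = workflow.compile()
    print(agent.invoke({"question": "Who won the 2024 Super Bowl?"})["generation"])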

Comments • 45

  • @martintourneboeuf5282 • 19 days ago

    Awesome class, thank you!
    A very complicated and powerful topic, but you broke the problem down with great methodology; your code is easy to read and the presentation was very smooth.
    Congrats and thanks again, as this will certainly help me.

  • @madhudson1 • 2 months ago +5

    Absolutely fantastic. I've been trying to do something similar today and saw a lot of the model 'going rogue' with incorrect special tokens, until I followed your example.

    • @AdamLucek • 2 months ago

      Glad I could help!
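
    On the 'going rogue' issue above: Llama 3 Instruct expects its own special tokens around every turn, and leaving them out or misspelling them is a common cause of the model rambling past its turn. A small illustrative template (the instruction wording is a placeholder, not the video's exact prompt):

    from langchain_core.prompts import PromptTemplate

    # Llama 3 Instruct wraps each message in special tokens; <|eot_id|> marks
    # the end of a turn and the trailing assistant header cues the reply.
    generation_prompt = PromptTemplate.from_template(
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "You are a helpful research assistant. Use the provided context "
        "to answer the question.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        "Context: {context}\nQuestion: {question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )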

  • @redfield126 • 1 month ago

    Excellent video and a perfect hands-on introduction to LangGraph. You made my day. Thank you.

  • @Kidzwish • 1 month ago +1

    You're awesome, dude, neatly explained! Thanks!!

  • @matthewturnerphd • 2 months ago

    Thanks for this video! Several smaller details you emphasized were things I had missed in other tutorials, and really helped me.

  • @jimlynch9390 • 2 months ago

    I really enjoyed this video. I've seen lots of "how to" videos about programming with local LLMs, but this is by far the best one I've watched. I often get lost and have to re-read some of the steps; here, though, you moved along at exactly the right pace for me and answered pretty much all of the questions I was coming up with as I watched. Thank you! Ollama does have function calling, but this method seems more logical and easier to understand.

  • @user-rl9yz3rg7r • 2 months ago

    Thank you for the wonderful lecture and example source code. The example code ran nicely in my local environment, and the test code conveniently inserted along the way helped me understand the example a lot.

  • @petermorvay1842 • 2 months ago

    Excellent tutorial. Thank you Adam, your content is really valuable. I was able to run the code in my local environment.

  • @_codegod • 2 months ago

    Your presentation style is awesome bro.

  • @ofrylivney367 • 1 month ago

    That is really high-quality content! Thanks!

  •  2 months ago

    Excellent video, thank you!

  • @rajesharora27 • 2 months ago

    Awesome stuff!

  • @emporiumofthearcane • 2 months ago

    Good video. Thanks for making it. Subbed.

  • @TestMyHomeChannel • 2 months ago

    You are an awesome teacher. I am already running almost the same setup, with agents created automatically by another tool (PraisonAI), but I didn't fully follow what was going on; many times it wasn't working and I didn't know why. I loved the way you broke down and explained everything. Looking forward to seeing more videos from you. Best wishes.

  • @liosha2007 • 1 month ago

    Thank you very much 🙏

  • @JoshuaMillerDev • 2 months ago +1

    I like seeing these. Something to consider is having the web search differentiate between sponsored and non-sponsored results. I haven't seen anyone tackle that yet. It seems to me that search results and LLM outputs would be more accurate when steering away from sponsored data.

    • @AdamLucek • 2 months ago +1

      Good idea!

    • @JoshuaMillerDev • 2 months ago

      Just FYI, doing what I mentioned could be problematic in the long run as folks use search less and LLMs more. At some point there is a potential conflict where search engines get nothing back from billions of AI crawls. Not much of a worry, but a good thought exercise.

  • @dr.mikeybee • 2 months ago

    Nice work

  • @amanmeghrajani1 • 2 months ago

    Loving your content, thank you for sharing this. Learning a lot!
    Would you be interested in making a video showing how to deploy these models and access them from inputs like WhatsApp chat or email? It would be super helpful.

  • @WladBlank • 2 months ago

    Interesting concept. I try to force my LLMs to produce valid JSON, and this would make it easier.
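
    For reference, Ollama's JSON mode (exposed through ChatOllama's format parameter) constrains decoding to valid JSON, which makes parseable outputs much easier to get from a local model. A small sketch under that assumption, with an illustrative prompt:

    from langchain_community.chat_models import ChatOllama
    from langchain_core.output_parsers import JsonOutputParser
    from langchain_core.prompts import PromptTemplate

    # format="json" asks Ollama to constrain the model's output to valid JSON,
    # so it can be parsed directly instead of cleaned up with regexes.
    json_llm = ChatOllama(model="llama3", format="json", temperature=0)

    prompt = PromptTemplate.from_template(
        "Classify the sentiment of this review. "
        'Return JSON like {{"sentiment": "positive"}} or {{"sentiment": "negative"}}.\n'
        "Review: {review}"
    )

    chain = prompt | json_llm | JsonOutputParser()
    print(chain.invoke({"review": "The battery life is fantastic."}))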

  • @szpiegzkrainydeszczowcow8476 • 2 months ago

    Great job. Subscribing. Any chance you would make a video on long-term memory in a vector DB? Greetings.

  • @DakshSripada • 1 month ago

    Is there any other method of function calling using Llama 3, Ollama, and LangGraph? Is there a way that would allow us to add tool nodes into the graph, as we do with OpenAI agents, and then have Llama 3 use those tools? Or would that approach simply complicate things?

  • @Watdooyoumeen • 2 months ago

    Can you integrate your routing node with Perplexica, the Perplexity clone?

  • @JoshuaMillerDev • 2 months ago

    I wonder if anyone has gone through the thought exercise of how an AI model could benefit from having anyone "fold at home" in order to build it. In other words, instead of the owner (OpenAI, Llama, whatever) dedicating servers to build an LLM, could it not be distributed such that video cards in PCs around the world contribute idle cycles towards the training? Seems like a good way for an open-source model to get off the ground, or to build using more tokens. You could even have a (minor) reward system allowing X privileged API access per contributing node or whatever.

  • @_codegod • 2 months ago

    I don't understand the pipe operator. :(

    • @AdamLucek • 2 months ago +1

      Those are part of the LangChain Expression Language, their own way of setting up chains easily with their components. More on that here: python.langchain.com/v0.1/docs/expression_language/get_started/

    • @_codegod • 2 months ago +1

      @AdamLucek Got it. So it's an operator overload for LangChain. I got confused there for a bit.
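
    For anyone else wondering about the pipe: LCEL overloads Python's | operator on its Runnable objects, so components compose left to right into a single chain whose invoke() passes each output to the next step. A tiny example, assuming a local llama3 model via ChatOllama:

    from langchain_community.chat_models import ChatOllama
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate

    llm = ChatOllama(model="llama3", temperature=0)
    prompt = PromptTemplate.from_template("Explain {topic} in one sentence.")

    # The | operator is overloaded on LCEL Runnables: the left component's
    # output becomes the right component's input, yielding one composed chain.
    chain = prompt | llm | StrOutputParser()

    print(chain.invoke({"topic": "operator overloading in Python"}))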

  • @GlobalAiServices • 2 months ago

    What's the point, SORA is not released yet. Just a waste of time!

    • @ringpolitiet • 2 months ago +1

      Why are you here?