LangGraph Crash Course with code examples

  • Published Jun 11, 2024
  • Colab 01. Learning LangGraph Agent Executor: drp.li/vL1J9
    Colab 02. Learning LangGraph - Chat Executor: drp.li/HAz3o
    Colab 03. Learning LangGraph - Agent Supervisor: drp.li/xvEwd
    Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
    Time Stamps:
    00:00 Intro
    00:19 What is LangGraph?
    00:26 LangGraph Blog
    01:38 StateGraph
    02:16 Nodes
    02:42 Edges
    03:48 Compiling the Graph
    05:23 Code Time
    05:34 Agent with new create_open_ai
    21:37 Chat Executor
    27:00 Agent Supervisor
  • Science & Technology

Comments • 83

  • @samwitteveenai
    @samwitteveenai  4 months ago +6

    If you are interested in building LLM Agents, fill out the form below and tell us what types of agents you would like examples of.
    Building LLM Agents Form: drp.li/dIMes

    • @antwierasmus
      @antwierasmus 4 months ago

      Thanks Sam, I already did that

  • @viktor4207
    @viktor4207 4 months ago +20

    I really like the idea of integrating graph theory into this. You can experiment with different agents and tools for certain types of tasks. Then you can start playing around with network measures and give edges weights based on the successful completion of types of tasks. The network will essentially end up balancing itself out as you start to direct traffic along your high-weight edges. You can run another network and experiment with different models for different tasks. It's like a simulation of a workplace where people end up going to the most productive people to accomplish tasks.
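The self-balancing idea in this comment can be sketched with nothing more than a dictionary of edge weights. This is a hypothetical, stdlib-only illustration: the node names, `route`, and `reinforce` helpers are made up here and are not LangGraph API.

```python
# Hypothetical sketch of the weighted-edge idea above: each edge carries a
# weight that is reinforced on successful task completion, and the router
# sends traffic along the highest-weight edge.
edges = {"supervisor": {"researcher": 1.0, "coder": 1.0}}

def route(node: str) -> str:
    """Pick the outgoing edge with the highest weight."""
    return max(edges[node], key=edges[node].get)

def reinforce(node: str, target: str, success: bool, lr: float = 0.1) -> None:
    """Nudge an edge weight up on success, down on failure."""
    edges[node][target] += lr if success else -lr

# Two successful "coder" completions shift traffic toward that node.
reinforce("supervisor", "coder", success=True)
reinforce("supervisor", "coder", success=True)
print(route("supervisor"))  # coder
```

In a real graph the `reinforce` call would sit in whatever node evaluates task outcomes, so the routing table drifts toward the agents that actually complete their tasks.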

  • @tvaddict6491
    @tvaddict6491 3 days ago

    Thank you for going through the notebooks line by line. Helps noobs like me follow along.

  • @avidlearner8117
    @avidlearner8117 4 months ago +5

    This is ABSOLUTELY FANTASTIC!!! I've been dealing with a manual "orchestrator" that felt so dumb before... This is a game changer! You effin deliver on your content, man! Holyfack...

  • @rupjitchakraborty8012
    @rupjitchakraborty8012 1 day ago

    This is such a great intro, thank you so much for the effort.

  • @kenchang3456
    @kenchang3456 4 months ago

    This video is timely, as I was ready to start exploring LangGraph to get a feel for which use cases can fit. A deeper-dive video would be much appreciated.

  • @paulmiller591
    @paulmiller591 4 months ago +1

    Great video as usual. Yes, more videos and use cases on building agents with the updated version of LangChain would be great.

  • @joffreylemery6414
    @joffreylemery6414 4 months ago +2

    Awesome work once again!
    Very interested in LangGraph for more complex use cases! For those of us building a team-augmentation platform with many agents (some full agents, some just chains), it could allow us to have a big, powerful super-agent with a supervisor, as in your third part. To be continued

  • @luisguillermopardo7792
    @luisguillermopardo7792 4 months ago +1

    Sam, I watch all your videos from Colombia. They are awesome!! You explain really well.

    • @samwitteveenai
      @samwitteveenai  4 months ago

      Thanks, much appreciated!!

  • @guanjwcn
    @guanjwcn 4 months ago

    very insightful but heavy stuff to master. thank you, Sam. ❤

  • @narutocole
    @narutocole 4 months ago

    This is sick Sam! Keep it up!

  • @user-ew8ld1cy4d
    @user-ew8ld1cy4d 3 months ago +1

    You. Are. Fantastic... Thank you Sam

  • @shobhitagnihotri416
    @shobhitagnihotri416 4 months ago +5

    Sir, please make a full course using LangChain with OpenAI, Hugging Face, and Llama, plus fine-tuning models and chatbots. Keep it a little affordable, like $100, and it would be really great. Lots of love from India

  • @wuhaipeng
    @wuhaipeng 4 months ago

    Thank you so much for the course!

  • @ahmedennaifer3693
    @ahmedennaifer3693 4 months ago

    Thanks again for the awesome content; I've learned a lot from your videos, please keep doing what you do. I was also wondering if you plan on making videos about production-ready RAGs with the methods you talked about in your RAG series. Thanks a lot, and please keep enriching us with your content.

    • @samwitteveenai
      @samwitteveenai  4 months ago +1

      Yeah I will go back to the RAG stuff again.

  • @HoldMyData
    @HoldMyData 4 months ago

    Thanks again, Sam!

  • @micbab-vg2mu
    @micbab-vg2mu 4 months ago

    Thank you for the great video:)

  • @user-ye6ks6xn8l
    @user-ye6ks6xn8l 4 months ago

    Beautiful......... Thank you so much.

  • @andrewandreas5795
    @andrewandreas5795 4 months ago

    Thanks for the very informative video. Do you know which OSS models support function calling?

  • @AmarGupta-dz5wy
    @AmarGupta-dz5wy 3 months ago

    Interesting, Thank you!!

  • @RADKIT
    @RADKIT 4 months ago

    You mentioned something right on point to what I was wondering, Sam. In your experience, which of the open-source LLMs support function calling as of today? Which one would you try out first? And if you do, please make a video about LangGraph with an HF LLM and function calling maybe! ☺ Love your work btw!

  • @zhiyanliu7068
    @zhiyanliu7068 3 months ago

    Thanks for making this video. A question: IIUC, it would be perfect if the Coder node could be routed to by the supervisor and executed to generate the chart by leveraging the PythonREPLTool. Did you try removing the PythonREPLTool from the Lotto_Manager agent and providing it only to the Coder agent? Does that make sense?

  • @VibudhSingh
    @VibudhSingh 3 months ago +1

    Super useful. I would say this was explained better than on the official LangChain channel.
    Next video: it would be cool to build Perplexity's Copilot feature. So, ask clarifying questions if needed with the human-in-the-loop feature, then give it access to the internet to get the results.

  • @PrashantSaikia
    @PrashantSaikia 3 months ago

    Great! Do you have any example notebook showing how to use LangGraph for code generation in an externally compiled language? Like C, for example: how do you replace the "exec" call (which runs Python code only) with something that can invoke the C compiler, run it against the generated (and saved) code file, collect the compiler errors, feed them back into the LangGraph flow at the relevant node, and so on?
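The compile step this question asks about can be sketched with `subprocess`. This is a hedged, stdlib-only sketch, not from the video: it assumes `gcc` (or any compiler you name) is on PATH, and the function name and return shape are made up for illustration.

```python
import pathlib
import subprocess
import tempfile

def compile_c(source: str, compiler: str = "gcc") -> tuple[bool, str]:
    """Write generated C source to a temp file, invoke the external compiler,
    and return (success, diagnostics) so a graph node can feed the errors
    back to the code-generation node on the next pass."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "generated.c"
        src.write_text(source)
        proc = subprocess.run(
            [compiler, str(src), "-o", str(pathlib.Path(tmp) / "generated")],
            capture_output=True, text=True,
        )
        return proc.returncode == 0, proc.stderr
```

A "compile" node would call `compile_c` on the model's output; on failure it appends the diagnostics string to the graph state and routes back to the generator node, replacing the role `exec` plays in the Python-only examples.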

  • @jatinnandwani6678
    @jatinnandwani6678 14 days ago

    Thanks so much

  • @ankit85jain
    @ankit85jain 4 months ago

    Thanks Sam. For Colab 01, I tried inputs = {"input": "Give me a random number and then write in words", "chat_history": []}... it is still calling the to_lower_case tool. Is that expected, or do we have to be more explicit in our input?

  • @Chicle777
    @Chicle777 4 months ago

    Well explained. Could you please show these examples using VS Code with production file structure?

  • @fengshi9462
    @fengshi9462 3 months ago

    Thanks a lot. I want to know: how does the agent executor know to write the answer "4" as the capitalized "FOUR" and then send it to the lower_case tool? Is there another built-in LLM doing that?

  • @nattyzaddy6555
    @nattyzaddy6555 4 months ago +1

    At what timestamp is the demo where we can see it in use?

  • @HiteshGulati
    @HiteshGulati 4 months ago

    Hi Sam, your videos are always very insightful and have helped me keep up with the latest developments in the LLM space. I do have a question: when we pass a Python function as a tool to the LLM, how does the execution work? Let's say there is a very long function which is to be executed next. Is the whole function, along with its parameters, passed to the LLM (using precious tokens), with the LLM running the function on its server and returning the output? Or does the LLM just designate the function to run, the function then runs locally, and its output is provided to the LLM for the next action?
    Also, is the behaviour the same for Python REPL functions?

    • @pnhbs392
      @pnhbs392 20 days ago +1

      The functions are executed in the process that runs the runnable chain, not remotely on the LLM. The LLM only determines which function to run and what the parameters should be; then LangChain / LangGraph executes the code "locally."
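The loop the reply describes can be shown in a few lines of plain Python. This is a hypothetical sketch: the tool registry, the tool names, and the shape of the fake model response are made up for illustration and are not LangChain's actual message format.

```python
# Stdlib-only sketch of the tool-calling loop: the model returns only a
# structured call (name + arguments); this process looks the tool up in a
# registry and executes it locally.
def to_lower_case(text: str) -> str:
    return text.lower()

TOOLS = {"to_lower_case": to_lower_case}

# Pretend the LLM responded with this structured call (hypothetical shape):
llm_tool_call = {"name": "to_lower_case", "args": {"text": "FOUR"}}

# Only the small call spec above cost tokens; the function body, however
# long, never leaves this process.
result = TOOLS[llm_tool_call["name"]](**llm_tool_call["args"])
print(result)  # four
```

The result string is then appended to the conversation as a tool message, and the LLM decides the next action from that.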

    • @HiteshGulati
      @HiteshGulati 16 days ago

      @@pnhbs392 Thanks, this was really helpful.

  • @alivecoding4995
    @alivecoding4995 a month ago

    What do you think about Microsoft's Semantic Kernel and PromptFlow?

  • @stacy9698
    @stacy9698 2 months ago

    Can I ask what you used to draw the StateGraph slide? Looks cool

    • @samwitteveenai
      @samwitteveenai  2 months ago +1

      Excalidraw. It works very well for things like this.

  • @seththunder2077
    @seththunder2077 4 months ago

    At 9:30 you said you can have multiple agents with LangGraph, but isn't LangChain originally a single-agent framework, unlike AutoGen and CrewAI? I'm a bit confused.

  • @jessezwamborn6526
    @jessezwamborn6526 3 months ago

    I tried to follow along in the provided file, but in the third example my supervisor tells the coder to run, then keeps telling it to run over and over. The supervisor keeps choosing "coder" as the next step. Any idea why the result differs, even though I haven't changed anything about the code and simply ran it as-is?

  • @caiyu538
    @caiyu538 4 months ago

    Great. Great

  • @alizhadigerov9599
    @alizhadigerov9599 4 months ago +1

    How is it different from the agent executor?

  • @johnhopkins849
    @johnhopkins849 4 months ago +1

    What open source models support function calling? I must admit, I don't know what it is about function calling that needs to be supported. Recently, I tried AutoGen function calling with a few different _local_ 7B parameter models without any luck.

    • @robxmccarthy
      @robxmccarthy 4 months ago +1

      There aren't many that excel. The ones that do are fine-tuned on the task. First was Gorilla; now we have Functionary. And apparently Qwen 1.5 (even the 0.5B model) can reach near-GPT-4 reliability (though not in my personal testing).
      As far as I know there aren't any great drag-and-drop solutions. You may also need to use multiple models (one for function calling and one for higher-level reasoning).

  • @ShearyTan
    @ShearyTan 4 months ago

    How does this work with an open-source LLM instead of OpenAI?

  • @emanueleielo6660
    @emanueleielo6660 2 months ago

    Amazing! But it's not clear how the agent knows to repeat the random_number() function 10 times. Every time it finishes, does it call OpenAI again and ask whether the task is accomplished? If so, why don't we see that in LangSmith?

  • @AdamTwardoch
    @AdamTwardoch 4 months ago

    Pretty much every LLM API has a large set of parameters: temperature, max output length, top P, [top K], frequency penalty, presence penalty.
    Shrink-wrapped UIs like ChatGPT don't give access to these. The defaults differ in some APIs: sometimes temperature is set to 1, sometimes 0.8.
    Some experiments I've done indicate that changing these parameters has a serious impact on the results. But I've hardly ever seen benchmarks, papers, or videos that discuss this. As far as I can tell, most LLM benchmarks only test the "default" settings.
    I'd love to see some more in-depth experiments that compare models and change these parameters.
    The community has been trying a lot of elaborate optimizations to get the most desired results out of LLMs. But my partial experiments suggest that there's a fair bit of untapped potential with the model parameters.

    • @AdamTwardoch
      @AdamTwardoch 4 months ago

      Another matter is the way the community discusses ChatGPT: late last year OpenAI added a new model to the ChatGPT app: GPT 4 Turbo. This is a model that's as different from GPT-4 as GPT 3.5 is from GPT-4. Or at least, it's different. It's smaller, distilled, simplified, dumber.
      Yet some discussions have shown that users didn't accept that as a fact: they thought GPT-4 Turbo is just some "faster version" of GPT-4. Magically :)
      But there's no magic. In the ChatGPT app, you can select the "ChatGPT" mode, which is GPT-4 Turbo with Vision, DALL-E and tool switching, or you can choose ChatGPT Classic mode, which is the real GPT-4 model. They're very different, and should be treated as separate models in comparisons.

    • @Ken129100
      @Ken129100 4 months ago

      How do you change to other LLMs? I tried, but it was not successful.

    • @jessezwamborn6526
      @jessezwamborn6526 3 months ago

      @@Ken129100 You can simply import a new model when setting llm (for example, llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0, verbose=True, streaming=True)), or use Gemini or Claude 2 (don't forget to include the API key at the top).

  • @RaviRanjan1989
    @RaviRanjan1989 4 months ago

    Can we use a local LLM via HuggingFaceTextGenInference?

  • @8eck
    @8eck 4 months ago

    Wait, why do you need to explicitly define an array of tools and forward it to the agent creator if you have decorators there? What is the use of the decorators then? I'm confused.

  • @luisguillermopardo7792
    @luisguillermopardo7792 3 months ago

    Hey Sam, do you know if it is possible to integrate memory into a graph, and how to do it?

    • @samwitteveenai
      @samwitteveenai  3 months ago +1

      Yeah, you can save and load etc. and use the normal approaches. I will make some more vids for LangGraph when I get a chance.

  • @manishmandal5240
    @manishmandal5240 3 months ago

    This is a really excellent tutorial. Suppose I want to develop a use case where I have to call an API (e.g., the Google Maps API) based on a location and use the returned result to filter down customers around that location (within 2 kilometers), where the customer and ordering information is stored in a relational data store (say SQLite, PostgreSQL, or MySQL). Can you provide any implementation suggestions? Just to clarify, user input could lead to 3 query scenarios: 1) API only, 2) API + RDBMS, 3) RDBMS only.

    • @samwitteveenai
      @samwitteveenai  3 months ago

      Put the effort into making it a tool, and then the agent just uses the tool with simple commands. E.g., put the heavy lifting on the tools side. I have a CrewAI tutorial coming out later this week that goes into this.

  • @user-yy9bl3ds1p
    @user-yy9bl3ds1p 4 months ago

    Please do a deep dive on DSPy also.

  • @souvickdas5564
    @souvickdas5564 4 months ago

    I have been doing research on NLP and software engineering for 6 years. I have some good research publications as well, in IEEE Transactions on Software Engineering, the Journal of Systems and Software, and the Requirements Engineering conference. I have also developed skills in RAG and agent-based frameworks. Can I get a good job in the field of GenAI / LLM orchestration? If you have one, please ask for my CV. Thanks in advance.

    • @samwitteveenai
      @samwitteveenai  4 months ago

      I would say yes. I have some research background with papers at EMNLP and NeurIPS workshops etc., and I see that as an advantage for a lot of the new skills. Understanding the basics of NLP and NLU really helps for a lot of these skills. That said, you certainly need to update your skills etc.

  • @hasani511
    @hasani511 4 months ago

    This is a great video. It seems overly complicated, though, compared to AutoGen, which seems to hide a lot of the complexity. We built something similar using regular agents as tool nodes, which then have their own tools. A more dynamic agent with multiple personalities can be built this way, but it would be hard to manage.

  • @ramp2011
    @ramp2011 4 months ago

    I am curious how you compare this with CrewAI for setting up agents. I feel setting up an agent with LangGraph has too many steps...

    • @samwitteveenai
      @samwitteveenai  4 months ago +1

      CrewAI is much higher level, yes, but it is not as flexible as LangGraph. That said, both are LangChain, so they should be able to do much of the same stuff. I will make some vids soon about that.

  • @bhargavchoithwani3961
    @bhargavchoithwani3961 3 months ago

    @samwitteveen very nicely explained

  • @SashaBaych
    @SashaBaych a month ago

    By the way, why use agents and agent executors?
    I have seen so many tutorials with just models with bound tools. What is the benefit/difference of using AgentExecutor?
    What would I do with memory if I am using an agent executor? Create an agent executor with memory, or create memory that saves the state of the graph? How do multiple agents access the memory then?... omg, LangChain...

    • @samwitteveenai
      @samwitteveenai  a month ago

      The AgentExecutor was more the old way of doing agents, before LangGraph. Think of the graph as a big state machine that you just pass around; multiple agents can be different nodes on the graph. I am still thinking of some simple examples to show off the basics, but these are great questions and I will address them in a video.
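The "big state machine" framing above can be sketched without LangGraph at all: nodes are functions from state to state, and a router picks the next node. A stdlib-only sketch; the node names, state keys, and toy stopping condition are invented for illustration, not LangGraph's actual API.

```python
# Sketch of "the graph as a big state machine": every node takes the shared
# state, mutates it, and returns it; a router decides where the state goes
# next, with "END" terminating the loop.
def agent(state: dict) -> dict:
    state["steps"].append("agent")
    state["done"] = len(state["steps"]) >= 3  # toy stopping condition
    return state

def tools(state: dict) -> dict:
    state["steps"].append("tools")
    return state

NODES = {"agent": agent, "tools": tools}

def router(state: dict) -> str:
    if state["done"]:
        return "END"
    return "tools" if state["steps"][-1] == "agent" else "agent"

def run(entry: str = "agent") -> dict:
    state, node = {"steps": [], "done": False}, entry
    while node != "END":
        state = NODES[node](state)
        node = router(state)
    return state

print(run()["steps"])  # ['agent', 'tools', 'agent']
```

Multiple agents are just more entries in `NODES`; they all read and write the one shared state dict as it is passed around.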

    • @SashaBaych
      @SashaBaych a month ago

      @@samwitteveenai Thank you so much for responding to the comments) Thank you for your attention. Keep up the great work, while I am integrating self-querying RAG for my startup based on your tutorial)

  • @antwierasmus
    @antwierasmus 4 months ago +1

    Great job, sir. All the examples in the docs use OpenAI; can you please do a video where you use a different model, like Gemini, for this? Also, if I have a complex input like a list of objects containing messages from different users and I want to work on each of them, can you show us how to go about this? Maybe send a response message to each of the users in the list after reading their messages?

  • @SashaBaych
    @SashaBaych a month ago

    I really respect what Sam does. He is one of the few YouTubers who avoids just hyping up trendy things and simply makes very useful videos.
    But am I the only one who thinks that LangChain's syntax is just insane? Looking at the 3rd notebook, I find that I create an agent that has tools, then create an agent executor, passing in the agent and the aforementioned tools (why again?). Then I create an agent node that invokes some kind of agent, then I pass the created agent executor as the agent argument to the node... How can anyone understand this Russian doll...

    • @samwitteveenai
      @samwitteveenai  a month ago +1

      😀 Hey Sasha, I can totally relate to how you feel. It is very low level, and they have also changed some things, I think, since this video. Also, they are finally supporting function calling better across multiple models. I have been playing with some new notebooks for this and will make a video about it soon. LangGraph is good at a low level, but I agree it can be insanely frustrating at times. You can use something like CrewAI if you want to stay really high level, but I find that frustrating when it runs into issues as well. I promise I will try to get some new vids on this out soon, hopefully with some open-source models like the new Llama etc. as well.

  • @abdelkaioumbouaicha
    @abdelkaioumbouaicha 4 months ago

    📝 Summary of Key Points:
    📌 LangGraph is a graph-based system for building custom agents in the LangChain ecosystem. Nodes represent different components of an agent, and edges connect these nodes to enable decision-making and conditional routing within the agent.
    🧐 The video provides coding examples to demonstrate LangGraph's functionality. Examples include building an agent executor using custom tools, using a chat model and a list of messages for more complex conversations, and creating an agent supervisor to route user requests to different agents based on predefined conditions.
    💡 Additional Insights and Observations:
    💬 "LangGraph is a powerful tool for building custom agents with decision-making capabilities."
    📊 No specific data or statistics were mentioned in the video.
    🌐 The LangChain ecosystem and LangGraph provide a flexible framework for creating various types of agents.
    📣 Concluding Remarks:
    LangGraph is an innovative tool within the LangChain ecosystem that allows users to build custom agents with decision-making capabilities. The video showcases coding examples to demonstrate the functionality of LangGraph and encourages viewers to explore different use cases. LangGraph provides a flexible and powerful framework for creating agents, making it a valuable tool for developers.
    Generated using TalkBud

  • @jordenvanforeest8741
    @jordenvanforeest8741 4 months ago

    It would be a great idea to make an agent that makes other agents.

  • @user-yq8yp3nk2d
    @user-yq8yp3nk2d 4 months ago

    ok

  • @khadiravanabv7417
    @khadiravanabv7417 4 months ago

    Is it just me, or does he sound very close to @3Blue1Brown?

  • @IdPreferNot1
    @IdPreferNot1 4 months ago

    Nope…. Still lazy after a while. It takes time just to search the output code to see where they are summarizing etc. instead of giving full code. It really breaks the flow when you're actually working well together, then the program generates a NEW error and you've copied over good code with a bunch of fixed code interspersed with some random "put in your stuff here" sections. 😮

  • @thecooler69
    @thecooler69 a month ago

    Thanks for the video + code examples, Sam. I have consistent trouble with early stopping; is there any way to prevent it? Like, the should_continue function receives an AgentFinish message, but the output will look like this: 'agent_outcome': AgentFinish(return_values={'output': 'Please call the following function: {"function":{...
    So it knows it should keep calling functions, but fires a Finish anyway. I've tried changing the system prompt to make it not finish until all its functions are done, but it still does this. Any suggestions?

    • @samwitteveenai
      @samwitteveenai  a month ago +1

      Try adding another self-check step, i.e. having another node check, and then if it thinks all is done it can trigger the agent END etc.
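The self-check pattern suggested in this reply can be sketched in a few lines. A hedged, stdlib-only illustration: the `checker` node, the routing function, and the `planned`/`called` state keys are all hypothetical names, not LangGraph or LangChain API.

```python
# Sketch of the self-check pattern: instead of trusting the agent's own
# AgentFinish, a separate checker node verifies the work and only then
# routes to END; otherwise the state goes back to the agent node.
def checker(state: dict) -> dict:
    """Mark the run as done only when every planned tool call has run."""
    state["verified_done"] = set(state["planned"]) <= set(state["called"])
    return state

def route_after_check(state: dict) -> str:
    return "END" if state["verified_done"] else "agent"

state = {"planned": ["fetch", "summarize"], "called": ["fetch"]}
print(route_after_check(checker(state)))  # agent (summarize still pending)

state["called"].append("summarize")
print(route_after_check(checker(state)))  # END
```

In a real graph this would be wired in as a conditional edge after the checker node, so a premature AgentFinish just bounces the state back to the agent.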

    • @thecooler69
      @thecooler69 a month ago

      @@samwitteveenai Your response is much appreciated. If I understand you correctly, it would mean looking for isinstance of AgentFinish in the outcome returned by agent.invoke(), then ignoring that msg and generating an AgentAction manually inside run_agent. I couldn't figure out how to create the AgentAction yet, but that also almost feels like a hack to me; maybe the better solution is to split the tasks better among multiple agents, using your supervisor code (I will try this next). However, it feels like one agent should be able to handle a few tools each.
      Additional information on my setup: I consistently get the early-stop problem when trying to get the same agent to call the same tool twice (for different inputs). Presumably the agent looks at the log, sees it has already called the tool, then gives up. I have tried altering the sys/user prompts to avoid that behavior, with no success.
      Let me know if there is an error in my comprehension.

    • @thecooler69
      @thecooler69 a month ago

      @@samwitteveenai Looking at the other examples, I think I see what is happening. Instead of putting the function request in the additional_kwargs of the last message, the agent sometimes puts 'Please use function x' in the body of the response, which results in an AgentFinish firing.

  • @amirbehbehani4844
    @amirbehbehani4844 3 months ago

    Hi @samwitteveenai I sent you a LI request :). Great video!

  • @FranckePeixoto
    @FranckePeixoto 4 months ago

    great