Mistral AI
Advance RAG control flow with Mistral and LangChain: Corrective RAG, Self-RAG, Adaptive RAG
github.com/mistralai/cookbook/tree/main/third_party/langchain
Views: 14,398

Videos

Function Calling with Mistral AI
18K views · 7 months ago
- Notebook: colab.research.google.com/github/mistralai/cookbook/blob/main/mistral/function_calling/function_calling.ipynb
- Docs: docs.mistral.ai/guides/function-calling/
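For readers skimming the description, here is a minimal sketch of the function-calling flow the notebook and docs walk through, using the mistralai Python client. It is a sketch under assumptions, not code copied from the notebook: the tool name, its schema, and the model choice are made up, and it assumes the v0.x mistralai client interface.

```python
# Hedged sketch of Mistral function calling (not copied from the notebook).
# Assumes the mistralai v0.x Python client; tool name and schema are illustrative.
import json

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key="YOUR_API_KEY")  # placeholder key

# Illustrative tool schema (JSON Schema describing the function's parameters)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_payment_status",
            "description": "Look up the status of a payment by transaction id",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {"type": "string", "description": "The transaction id"}
                },
                "required": ["transaction_id"],
            },
        },
    }
]

messages = [ChatMessage(role="user", content="What's the status of transaction T1001?")]

# Step 1: the model decides whether to call a tool and with which arguments.
response = client.chat(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto",  # let the model decide; "any" forces a call, "none" disables tools
)

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Step 2 (not shown): run the function yourself, append the result as a tool
# message, and call the model again so it can produce the final answer.
print(tool_call.function.name, args)
```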

Comments

  • @sergiovasquez7686 · 1 month ago

    Hey, can we implement it with all together?

  • @RUSHABHPARIKH-vy6ey · 1 month ago

    Does structured output work with LLM calls using Bedrock?

  • @TheInternet81 · 2 months ago

    It would be better if you could store several versions of a perspective and calculate the benefit of one perspective over another, because in the academic world there are several perspectives for solving a problem. What you have built here only enhances a single perspective. Even so, we should all appreciate that this is a BIG STEP forward for the field of AI knowledge. Cheers...

  • @davidvukotic · 2 months ago

    The best 💯

  • @davidvukotic · 2 months ago

    Mistral will be a big one!

  • @awakenwithoutcoffee · 3 months ago

    Thank you for the wonderful insights into the latest RAG developments. Can someone explain in simple terms the benefit of implementing "LangGraph"? From what I understand, it allows for more accurate LLM executions by limiting the "routes" the output of a given LLM flows through, improving its reliability in execution. But why can't we empower LangChain "Agents" with the same functionality? Wouldn't the ideal agent have LangGraph capabilities built in?

    • @awakenwithoutcoffee · 2 months ago

      After diving deep for 3 weeks, I know the answer to my own question (lol): LangGraph is an extension to LangChain that allows for managing "state", i.e. controlling what goes into and out of your LLM. By restricting what the LLM is able to do, we increase its reliability, which lets us build better, safer and more complex LLM systems. Instead of one massive prompt technique, we use the techniques of traditional system design to map out our software in a more traditional sense.
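As a concrete illustration of that "managing state" idea, here is a minimal LangGraph sketch. It is not taken from the video: the node names, state fields, and stubbed node logic are assumptions. The typed state is what flows between nodes, and the graph, rather than one big prompt, decides what the LLM sees next.

```python
# Minimal LangGraph sketch: a typed state plus explicit nodes and edges.
# Node logic is stubbed out; in a real RAG graph each node wraps an LLM or retriever call.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str


def retrieve(state: GraphState) -> dict:
    # Stub: fetch documents for state["question"] from a vector store.
    return {"documents": ["doc about LangGraph"]}


def generate(state: GraphState) -> dict:
    # Stub: call the LLM with the question and the retrieved documents.
    return {"generation": f"Answer based on {len(state['documents'])} documents"}


workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "generate")
workflow.add_edge("generate", END)

app = workflow.compile()
print(app.invoke({"question": "What is LangGraph?", "documents": [], "generation": ""}))
```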

  • @aipt32 · 3 months ago

    What happens if the graph gets stuck in a loop? (web search > not useful > web search > not useful > ...) Do I have to add a "tries" counter to my state and end after x tries to prevent an infinite loop?
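That "tries" counter is the usual escape hatch. A hedged sketch (not from the video; node and field names are made up): keep a retry count in the graph state, increment it in the web-search node, and have the conditional edge route to END once a maximum is reached or the grader is satisfied.

```python
# Hedged sketch of a retry cap in LangGraph state (names are illustrative).
from typing import TypedDict

from langgraph.graph import END, StateGraph

MAX_RETRIES = 3


class State(TypedDict):
    question: str
    answer: str
    useful: bool
    retries: int


def web_search(state: State) -> dict:
    # Stub: run the search; bump the counter on every attempt.
    return {"answer": "search result", "retries": state["retries"] + 1}


def grade(state: State) -> dict:
    # Stub: an LLM grader would set `useful` here.
    return {"useful": False}


def decide(state: State) -> str:
    if state["useful"] or state["retries"] >= MAX_RETRIES:
        return "finish"  # stop: either a good answer or out of retry budget
    return "retry"       # loop back to web search


graph = StateGraph(State)
graph.add_node("web_search", web_search)
graph.add_node("grade", grade)
graph.set_entry_point("web_search")
graph.add_edge("web_search", "grade")
graph.add_conditional_edges("grade", decide, {"retry": "web_search", "finish": END})

app = graph.compile()
print(app.invoke({"question": "q", "answer": "", "useful": False, "retries": 0}))
```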

  • @hxxzxtf · 3 months ago

    🎯 Key points for quick navigation:
    00:00 - Advance RAG control flow with Mistral and LangChain
    00:12 - Combining small steps into comprehensive control flow for large language model applications
    00:25 - Flow engineering uses a flow diagram to check response intent and construct answer iteratively
    01:06 - Corrective RAG uses retrieval evaluator to assess document quality and trigger web search for additional information
    02:14 - Hallucination node checks answer support by document, and answer question node checks generated answer relevance
    10:31 - Bind MRAW to schema
    10:43 - Convert JSON output
    10:59 - Mock retrieval example
    11:12 - Grading documents relevance
    11:25 - Confirm binary score
    11:39 - Define RAG chain
    12:05 - Graph State explained
    21:11 - Adversarial Tax Routing
    21:52 - Hallucination Grader Defined
    22:18 - Router Conditional Edge
    22:47 - Web Search Fallback
    24:03 - Control Flow Implemented
    Made with HARPA AI

  • @pvp8349 · 4 months ago

    What other types of functions are there? Is there any good documentation?

  • @ryanfarran · 4 months ago

    Mistral rocks!

  • @kuldeepsinhjadeja3668 · 4 months ago

    In the last part, when the flow went to the web search tool twice, it basically searched on the same query, so how did it produce a valid result the second time and not the first time? How do you ensure that it does not get stuck in the loop, since it basically does the same thing again and again without changing anything, hoping to get the correct result?

  • @RajaSekharaReddyKaluri · 4 months ago

    Thank you Sophia and Lance!

  • @ayubsubhaniya6516 · 5 months ago

    How does a request get translated into LLM input? Are you using special tokens to denote function call or response messages? Thanks for the help.

  • @nicolaspellerin2207 · 5 months ago

    Thanks for this! Learned a ton of good stuff, very well explained; I will definitely be playing with your notebooks 😊 You're fantastic for sharing such high-quality work.

  • @eddyjens4948 · 5 months ago

    nice

  • @chuanjiang6931 · 5 months ago

    Calling the API costs money, right?

  • @AlbertJinkuGu · 5 months ago

    Awesome job! Thank you for sharing! What's the best way to do RAG based on a relational database? We need to understand the question, go to the correct table of the database, and find the most relevant records. It looks like we should support both keyword search and semantic search. For the keyword search, we need to extract the parameters, like the keyword, the date in the question, the person who generated the record, etc.
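One hedged way to do the "extract the parameters" step described above is structured output: ask the model to fill a small schema, use those fields on the SQL/keyword side, and send the raw question to the vector store for the semantic side. This is a sketch under assumptions, not an official recipe: the class, its fields, the example question, and the availability of with_structured_output on the Mistral chat model are all assumed.

```python
# Hedged sketch: extract keyword-search filters from a question as structured output.
# Field names and the example question are illustrative assumptions.
from typing import Optional

from langchain_mistralai import ChatMistralAI
from pydantic import BaseModel, Field


class RecordFilter(BaseModel):
    """Filters to apply on the relational (keyword) side of the search."""

    keyword: Optional[str] = Field(None, description="Keyword to match in the record text")
    date: Optional[str] = Field(None, description="Date mentioned in the question, if any")
    author: Optional[str] = Field(None, description="Person who generated the record, if named")


llm = ChatMistralAI(model="mistral-large-latest", temperature=0)
extractor = llm.with_structured_output(RecordFilter)

filters = extractor.invoke("Find the shipping invoices that John created in March")
# `filters` can drive a WHERE clause, while the original question is embedded and
# used for semantic search over the same records; merge or re-rank the two result sets.
print(filters)
```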

  • @8eck · 5 months ago

    Glad to see you in Mistral AI! 🥰

  • @Taskade · 5 months ago

    Excited to bring Mistral into Taskade with our upcoming Multi-Agent update! 😊

  • @Taskade · 5 months ago

    Can't wait to incorporate Mistral into Taskade in our next Multi-Agent update :)

    • @camaycama7479 · 5 months ago

      Keep us posted!

    • @choiswimmer · 5 months ago

      Is this an ad? It's an ad

  • @luanorionbarauna8555 · 5 months ago

    What if this document were a CSV file? How can I do that?

    • @eeee8677 · 5 months ago

      It's impossible.

  • @pabloe1802 · 5 months ago

    What is the purpose of the Mistral client? Can we replace it with a model run locally?

  • @deathdefier45 · 5 months ago

    You guys are amazing ❤❤

  • @NarendraChennamsetty · 5 months ago

    This is an amazing tutorial. So much valuable information packed into 30 minutes. Subscribed, thank you!

    • @bqmac43 · 5 months ago

      Lance's videos always have great insights. I'd recommend checking more of his videos out if you liked this one.

  • @ChrisSMurphy1 · 5 months ago

    Smokin hott

  • @RiteshKumar-xo3ll · 5 months ago

    I don't want to use it as an open-source LLM; instead, I want it running locally, deployed in my own cloud service. If I deploy it in the Azure cloud, what are the CPU and GPU requirements? And can I use LangChain?

  • @nikitakuznetsov4592 · 5 months ago

    Guys, this is crazy good! Please don't stop your demos and explanations of concepts. If you read this, can you explain a little bit more about the concept of action tools (usage, your own implementations, and so on)? Thanks in advance!

    • @bqmac43 · 5 months ago

      Tools are functions that the agent can call. To decide which tool to use, an agent can send the available tools to the LLM and ask "which one should I use?" Once a tool has been selected, the LLM can then provide arguments to pass into the tool's function. The agent takes the information from the LLM to call the tool, and then goes back to the LLM to ask "what tool should I use now?"

      In the example shown here, the agent is given specific routes to take. This simplifies each step because the agent is focused on a specific outcome at each step, so the available tools are scoped down to the task at hand. An alternative to this flow is ReAct agents: they are given a set of tools and a task and can reason for themselves how to accomplish the task with the tools they have. Each type of flow has its place (as Lance points out nicely with his pros and cons). Personally, I start with ReAct agents because they're easier to set up, and if I find myself getting frustrated by the steps they take, I move to a more deterministic flow (i.e. LangGraph, what Lance does in the video).

      That's a long explanation, but hopefully it makes sense. You can read more on how to implement tools with LangChain here: python.langchain.com/docs/modules/tools/custom_tools/
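To make the "tools are functions the agent can call" point concrete, here is a minimal hedged sketch of a custom LangChain tool bound to a Mistral chat model. The tool and the question are made-up examples, not taken from the video or the linked docs page.

```python
# Hedged sketch: define a custom tool and let the model choose to call it.
# The multiply tool and the prompt are illustrative; error handling is omitted.
from langchain_core.tools import tool
from langchain_mistralai import ChatMistralAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatMistralAI(model="mistral-large-latest", temperature=0)
llm_with_tools = llm.bind_tools([multiply])

# The model returns a tool call (name + arguments); the agent loop is what
# actually executes the function and feeds the result back for a final answer.
ai_msg = llm_with_tools.invoke("What is 6 times 7?")
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])
```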

    • @choiswimmer · 5 months ago

      The LangChain channel has more.

  • @AurobindoTripathy · 5 months ago

    There's a mention of tool execution on the "model" side. What's the use case for that?

  • @shackyalla · 5 months ago

    I tried to do this with the OpenAI client and base_url set to my local Mistral-7B endpoint, basically using Mistral-7B as a stand-in replacement for the OpenAI models. The tools format should be the same, right? It works with the GPT models but not with Mistral. Any idea why?

    • @MistralAIOfficial · 5 months ago

      Function calling is currently only available for mistral-small and mistral-large.
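For reference, this is the kind of OpenAI-compatible setup the question describes: point the standard OpenAI client at a local server via base_url. The endpoint, model name, and tool schema below are placeholders, and, per the reply above, whether tool calls actually come back depends on the locally served model, not on this client code.

```python
# Hedged sketch: reuse the OpenAI client against a local OpenAI-compatible server.
# URL and model name are placeholders; the local server decides whether tools are honored.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local endpoint
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # placeholder name of the locally served model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)
print(response.choices[0].message.tool_calls)
```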

  • @MaximoPower2024 · 6 months ago

    Are function calling and a system prompt compatible features? With tool_choice set to "auto" and a use case that demands a function call, the model writes the JSON to call the function but includes it as part of the content, instead of using tool calls explicitly.
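When the model answers with JSON in the content instead of a proper tool call, one hedged thing to try is forcing tool use with tool_choice="any" (the Mistral docs list "auto", "any", and "none"). The sketch below assumes the v0.x mistralai client; the tool schema and prompts are made up.

```python
# Hedged sketch: force a tool call alongside a system prompt (names are placeholders).
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key="YOUR_API_KEY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "lookup_order",
            "description": "Look up an order by id",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }
]

messages = [
    ChatMessage(role="system", content="You are a support assistant. Use tools for any order lookup."),
    ChatMessage(role="user", content="Where is order A42?"),
]

response = client.chat(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="any",  # force a structured tool call instead of free-form JSON in the content
)
print(response.choices[0].message.tool_calls)
```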

  • @ramikanimperador2286 · 6 months ago

    *Mistral AI impressed me a lot because it gave me good code to train an AI without a GPU, using the technique of splitting the data.txt dataset into mini-batches and freeing the memory after training each mini-batch. It did work, but unfortunately with an "index out of range in self" error at the end of training the last mini-batch, which I could not solve at all. Still, I was very impressed that it gave me this good code with only a few prompts (about 3)...*

  • @derax3878 · 6 months ago

    Is there function calling support for TypeScript and Next.js, or is it only possible with Python?

    • @MistralAIOfficial · 6 months ago

      We have JS support: github.com/mistralai/client-js/blob/main/examples/function_calling.js

  • @parkersettle460 · 6 months ago

    Does Mixtral 8x7B have function calling, or just the Large API model?

    • @MistralAIOfficial · 6 months ago

      Currently it's available for Mistral-small and Mistral-large

    • @parkersettle460 · 6 months ago

      @MistralAIOfficial Can you speak to the plan for open-sourcing future models? Also, are there any thoughts on releasing datasets, such as a function calling dataset, for the current open-source models?

  • @g0d182 · 6 months ago

    cool

  • @Techonsapevole · 7 months ago

    Great! Is Mistral 7B capable of function calling?

    • @MistralAIOfficial · 6 months ago

      Currently we only have function calling for Mistral-small and Mistral-large.

    • @NouhaBelhajyoussef · 3 months ago

      @MistralAIOfficial Is there a bind_tool() for Mistral 7B v0.3? I can't get it to use the tool.

  • @anton9690 · 7 months ago

    I am getting this error at step 10, both in the Colab and in my local interpreter. Any clue?
    ValidationError: 1 validation error for ChatCompletionResponse
    choices.0.finish_reason
      Input should be 'stop', 'length', 'error' or 'tool_calls' [type=enum, input_value='tool_call', input_type=str]

    • @MistralAIOfficial · 7 months ago

      Could you try it again? It should work now.

    • @anton9690 · 7 months ago

      @MistralAIOfficial Fixed, thanks! :)

  • @andfanilo · 7 months ago

    Congratulations on the launch of the channel ☺ Great video, looking forward to the next ones!

  • @joelwalther5665 · 7 months ago

    Great! Can we change the ENDPOINT to "localhost" (or base_url) and set api_key="NONE"? That would be excellent!

    • @nbbhaskar3294 · 7 months ago

      I think Mistral Large is a proprietary model that cannot be run locally. You either have to use the Mistral API, or use Microsoft Azure, which has this model in its AI Studio services; I am using the latter at work. But you could always run a 7B variant locally and use function calling as described in this video: th-cam.com/video/MQmfSBdIfno/w-d-xo.html

  • @mhsnalm · 7 months ago

    Good stuff, the tools lifecycle looks clean and straightforward. I'm trying it out by porting existing OpenAI tool-calling models to Mistral. Thanks for sharing, and please keep sharing...

  • @DevsDoCode · 7 months ago

    Hey, that's great, Mistral. But I think the API is not free and also doesn't offer any free tier. Right?

    • @MistralAIOfficial · 7 months ago

      Thanks! The API is not free.