Excellent. Love you Mistral! Thank you for your lack of censorship and treating customers like adults.
Excited to bring Mistral into Taskade with our upcoming Multi-Agent update! 😊
Mistral rocks!
Congratulations on the launch of the channel ☺ great video, looking forward to the next ones!
Good stuff, the tools lifecycle looks clean and straightforward. Trying it out by porting existing OpenAI tool-calling code to Mistral. Thanks for sharing, and please keep sharing!
Very cool, a thousand thanks!
What is the purpose of the Mistral client? Can we replace it with a model run locally?
I don't want to just use it as an open-source LLM; I want to run it locally, deployed in my own cloud service. If I deploy it on Azure, what are the CPU and GPU requirements? And can I use LangChain?
There's a mention of tool execution on the "model" side? What's the use-case for that?
thank you
Are function calling and system prompts compatible features? With tool_choice set to "auto" but a use case that demands a function call, the model writes the JSON to call the function but includes it as part of the content, instead of using tool calls explicitly.
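For anyone hitting the same thing: a minimal sketch of the request body, with field names assumed from the public chat/completions schema. Setting tool_choice to "any" instead of "auto" is supposed to force a tool call, which may work around the model emitting the call JSON in content (the get_weather tool here is hypothetical, purely for illustration):

```python
import json

# Sketch of a chat/completions request body (field names assumed from
# the public docs, not verified against the official spec).
payload = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # "auto" lets the model decide; "any" forces it to call some tool.
    "tool_choice": "any",
}
print(json.dumps(payload, indent=2))
```

If "any" still produces JSON inside content, that would suggest a model-side issue rather than a request-format one.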
How does a request get translated into LLM input? Are you using special tokens to denote function-call or response messages? Thanks for the help.
I tried to do this with the OpenAI client and base_url set to my local Mistral-7B endpoint, basically using Mistral-7B as a drop-in replacement for the OpenAI models. The tools format should be the same, right? It works with the GPT models but not with Mistral. Any idea why?
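For reference, the wire format does appear to be the same shape on both sides, so the same raw request can target either base URL. A minimal stdlib sketch (the endpoint, key, model name, and tool are placeholders; the request is only constructed here, not sent):

```python
import json
import urllib.request

# Same JSON body shape as the OpenAI-style chat/completions endpoint
# (field names assumed from the public docs).
body = {
    "model": "mistral-small-latest",  # placeholder model name
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
}

# Only the base URL and credentials change between providers.
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer NONE"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it.
```

One caveat: a base 7B checkpoint served locally may have no function-calling fine-tune, in which case it can emit the call JSON as plain content rather than structured tool_calls even though the request format is accepted.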
Function calling is currently only available for mistral-small and mistral-large
Thank you for using the same format as OpenAI. Could the integration be seamless using the OpenAI JS/TS client SDK?
Should be. But let us know if you have any feedback.
What other types of functions are supported? Is there any good documentation?
Great! Can we change the ENDPOINT to "localhost" (or base_url) and set api_key="NONE"? That would be excellent!
I think Mistral Large is a proprietary model that cannot be run locally. You either have to use the Mistral API, or Microsoft Azure has this model in its AI Studio services; I am using the latter at work. But you could always run a 7B variant locally and use function calling as described in this video: th-cam.com/video/MQmfSBdIfno/w-d-xo.html
Really exciting, guys 🙌 Is the new function calling only available via the new Mistral-Large model, or is it also available with the smaller models?
it's available for mistral-small as well.
@@MistralAIOfficial Amazing!! Will start testing right away tomorrow. Can't wait. Thanks for the reply.
@@MistralAIOfficial With the python client it says "Function calling is not enabled for this model" when trying to use "mistral-small" with function calling. Is it possible to check in the official documentation which models have function calling?
Does Mixtral 8x7b have function calling or just the Large API model?
Currently it's available for Mistral-small and Mistral-large
@@MistralAIOfficial Can you speak to the plan for future open-sourcing of models?
Also, are there any thoughts on releasing datasets, such as a function-calling dataset, for the current open-source models?
cool
Hey, that's great, Mistral. But I think the API is not free and doesn't offer any free tier, right?
Thanks! The API is not free.
Why isn't the context window size specified anywhere in the documentation for each model? It's very inconvenient :(
Thanks for the feedback! We will add it to the docs soon.
Is the API format for function calling identical to the OpenAI format?
Yes
Great, is Mistral 7B capable of function calling?
currently we only have function calling for Mistral-small and Mistral-large
@@MistralAIOfficial Is there a bind_tool() for Mistral 7B v0.3? I can't get it to use the tool.
Where is the API reference doc for the Function Calling?
We will update the API specs docs soon.
@@MistralAIOfficial
It works nicely with the LARGE and SMALL models!
Looks like "parallel function calling" is not supported at the moment? And if I'm not mistaken, the ToolCall doesn't provide/support "tool_call_id"?
Thank you for the great models!
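On the tool_call_id question, here is a sketch of the response-handling loop assuming the response follows the OpenAI-style shape (field names are assumed, not taken from the official spec). The stub below omits an id field entirely, so the tool-result message is matched by call order instead, which is one way to cope if ToolCall really doesn't expose an id:

```python
import json

# Stub response in the assumed OpenAI-style shape; a real response
# comes back from the chat endpoint.
response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "role": "assistant",
            "content": "",
            "tool_calls": [{
                "function": {
                    "name": "get_weather",  # hypothetical tool
                    "arguments": "{\"city\": \"Paris\"}",
                },
            }],
        },
    }]
}

# Stub executor standing in for real tool implementations.
TOOLS = {"get_weather": lambda city: f"18C and sunny in {city}"}

messages = [response["choices"][0]["message"]]
for call in response["choices"][0]["message"]["tool_calls"]:
    fn = call["function"]
    result = TOOLS[fn["name"]](**json.loads(fn["arguments"]))
    # Feed the result back as a role="tool" message for the next turn;
    # with parallel calls, order (or an id, where available) pairs
    # results to calls.
    messages.append({"role": "tool", "name": fn["name"], "content": result})

print(messages[-1]["content"])  # → 18C and sunny in Paris
```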
I am getting this error at step 10, both on the colab and in my local interpreter. Any clue?
ValidationError: 1 validation error for ChatCompletionResponse
choices.0.finish_reason
Input should be 'stop', 'length', 'error' or 'tool_calls' [type=enum, input_value='tool_call', input_type=str]
Could you try it again? It should work now.
@@MistralAIOfficial fixed, thanks! :)
Calling the API costs money, right?
*Mistral AI impressed me a lot: it gave me good code to train a model without a GPU, using the technique of splitting the data.txt dataset into mini-batches and freeing memory after training on each mini-batch. It mostly worked, but the training of the last mini-batch unfortunately ended with an "index out of range in self" error that I could not solve at all. Still, I was very impressed that it produced this code in only about 3 prompts...*
Is there function-calling support for TypeScript and Next.js, or is it only possible with Python?
we have JS support: github.com/mistralai/client-js/blob/main/examples/function_calling.js