Mistral AI API Usage | LlamaIndex 🦙
- Published on Oct 1, 2024
- After the mysterious release of the latest model by Mistral AI, they released an API for using the different models. In this video, I will show you how to use the API from Mistral AI with and without LlamaIndex. Happy learning 😎
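As a companion to the video, here is a minimal sketch of calling the Mistral AI chat completions REST API directly, without LlamaIndex. The endpoint and payload shape follow Mistral's public API; `MISTRAL_API_KEY` is a placeholder you must replace with your own key from the console, and `build_request` is just an illustrative helper name.

```python
import json

# Mistral AI chat completions endpoint (no LlamaIndex involved).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt, model="mistral-tiny"):
    """Build the headers and JSON body for a chat completion call."""
    headers = {
        "Authorization": "Bearer MISTRAL_API_KEY",  # replace with your real key
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(body)

headers, body = build_request("What is LlamaIndex?")
print(json.loads(body)["model"])  # mistral-tiny
```

To actually send the request you would POST `body` with `headers` to `API_URL` (e.g. with the `requests` library); the sketch stops at building the request so it runs without a key.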
👉🏼 Links:
Mistral Website: mistral.ai/
Get API key: console.mistra...
Colab code: github.com/sud...
OpenAI pricing: openai.com/pri...
------------------------------------------------------------------------------------------
☕ Buy me a Coffee: ko-fi.com/data...
✌️Patreon: / datasciencebasics
------------------------------------------------------------------------------------------
🔗 🎥 Other videos you might find helpful:
🔥 Databricks playlist: • 30 Days Of DataBricks
⛓️ Langflow: • ⛓️ langflow | UI For 🦜...
⛓️ Flowise: • Flowise | UI For 🦜️🔗 L...
GPTs: • HoW TO Create YOUR OWN...
OpenGPTs: • OpenGPTs FROM LangChai...
RAGs: • LlamaIndex RAGs: Built...
🦜️🔗 LangChain playlist: • LangChain
🦙 LlamaIndex Playlist: • LlamaIndex
------------------------------------------------------------------------------------------
🤝 Connect with me:
📺 YouTube: www.youtube.co...
👔 LinkedIn: / sudarshan-koirala
🐦 Twitter: / mesudarshan
🔉Medium: / sudarshan-koirala
💼 Consulting: topmate.io/sud...
#mistral #mistralai #llm #mistralapi #llamaindex #chatgpt #ai #datasciencebasics
ImportError: cannot import name 'MistralAI' from 'llama_index.llms' (unknown location)
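Regarding the ImportError above: the import path for `MistralAI` has moved between llama-index releases, so a sketch like the following can resolve it across versions (assuming the `llama-index-llms-mistralai` integration package is installed for 0.10+; the fallback logic itself is just an illustration).

```python
# Try the newer import path first, then fall back to the older one.
try:
    from llama_index.llms.mistralai import MistralAI  # llama-index >= 0.10
except ImportError:
    try:
        from llama_index.llms import MistralAI  # older llama-index releases
    except ImportError:
        # Neither layout is available; install llama-index-llms-mistralai.
        MistralAI = None

print("MistralAI import resolved:", MistralAI is not None)
```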
This video is helpful for me. Please also show the Mistral API in another library, like AutoGen or Open Interpreter. Please do it; I am trying very hard to achieve this task but have not managed it.😇
Super video....Yes, many things happened....waiting for your video...would like to discuss improved RAG versions
Can we have an end-to-end example of a llama2.ggml model running locally (CPU/GPU) or deployed in SageMaker with LlamaIndex? Thank you
Thanks bud