Ollama + Phi3 + Python - run large language models locally like a pro!
- Published Oct 4, 2024
- Large Language Models are popular these days. In this video I'll cover what Ollama is and how you can use it to pull and run local LLM models such as Phi3, Mistral, Llama, or similar ones. We'll also cover the setup, do a bit of model customization, and finish with a little bit of Python programming. I'll be doing a follow-up video on Ollama + RAG if there is enough interest.
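To give a feel for the Python part, here is a minimal sketch (not the exact code from the video) of calling a locally running model, assuming the Ollama server is running on its default local port and the `ollama` Python package is installed (pip install ollama); the model name "phi3" matches the model mentioned above.

```python
import ollama

# Pull the model if it is not already available locally
# (same effect as running `ollama pull phi3` on the command line).
ollama.pull("phi3")

# Send a single chat message to the local Phi3 model.
response = ollama.chat(
    model="phi3",
    messages=[{"role": "user", "content": "Explain what Ollama does in one sentence."}],
)

# Print the model's reply text.
print(response["message"]["content"])
```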
Let me know how you like this video; questions are always welcome! And of course, click the like button if you liked it. If you liked it a lot, please subscribe to my channel for more, and click the bell icon to get notifications of new content.
Here are the links in video:
ollama.com/
github.com/oll...
Great content.
The tutorial is very easy to understand.
+1 like and +1 subscriber
Thank you! Appreciated!!
Documents Q&A mini project with Phi3 mini
I am interested in this. Have you created a project for this?
Hahaa, just what I was looking for. Now I'll make an anime waifu with a PNG shown beside its response: if the response is positive the PNG will be one image, and if it's negative the waifu PNG will be a different one, a bit like fastfetch.
Hahaa, omg I think I created a monster!! ;)