Ollama + Phi3 + Python - run large language models locally like a pro!

  • Published on Oct 4, 2024
  • Large Language Models are popular these days. In this video I'll cover what Ollama is and how you can use it to pull and run local LLM models like Phi3, Mistral, Llama, or similar ones. We'll also cover the setup, do a bit of model customization, and finish with a little bit of Python programming. I'll be doing a follow-up video on Ollama + RAG if there is enough interest.
    Let me know how you like this video, and any questions are always welcome! And of course, click the like button if you like. And if you like a lot, please subscribe to my channel for more, and click that bell icon to get notifications of new content.
    Here are the links in video:
    ollama.com/
    github.com/oll...
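
As a taste of the Python part described above, here is a minimal sketch of talking to a locally running Ollama server from Python. It assumes Ollama is installed and serving on its default port (11434) and that the `phi3` model has already been pulled (e.g. with `ollama pull phi3`); only the standard library is used, so no extra packages are needed. The function names are my own, not from the video.

```python
import json
import urllib.request

# Ollama serves a local REST API on localhost:11434 by default;
# /api/generate is the simple one-shot completion endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server with the phi3 model pulled.
    print(ask("phi3", "Explain what Ollama is in one sentence."))
```

Swapping `"phi3"` for `"mistral"` or `"llama3"` is all it takes to target a different pulled model, which is part of what makes Ollama convenient for local experimentation.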

Comments • 6

  • @talktotask-ub5fh 17 days ago +1

    Great content.
    Very easy to understand the tutorial.
    +1 like and +1 subscriber

    • @DevXplaining 17 days ago +1

      Thank you! Appreciated!!

  • @suraj_bini 4 months ago +2

    Documents Q&A mini project with Phi3 mini

    • @MrPkmonster 19 days ago +1

      I am interested in this. Have you created the project for this?

  • @27meghtarwadi99 3 days ago +1

    Haha, just what I was looking for! Now I will make an anime waifu with a PNG shown beside its response: if the response is positive the PNG will be one image, and if it's negative the waifu PNG will be a different one, and it will be more like fastfetch.

    • @DevXplaining 3 days ago

      Hahaa, omg I think I created a monster!! ;)