Run AI models locally without an expensive GPU

  • Published Feb 2, 2025

Comments • 31

  • @samueladam7693 · 6 days ago +1

    I don't usually comment, but the video is very informative and I want to support you. Keep going :)

  • @alwayssmile1778 · 10 days ago

    Will try it surely!

  • @nufh · 9 days ago

    Hey, thanks for the video.

  • @Gatterman · 5 days ago

    Hello, I'm confused about whether I can run any model on my Ryzen 3 3200G PC with 16 GB of RAM.

    • @zenvanriel · 5 days ago

      Did you try it out for yourself with the repository in the description? I expect Phi 3.5 to work "okay" on that machine, but not very fast.

  • @clemngene2313 · 10 days ago +1

    What's the difference between using docker and ollama? If you could give a detailed explanation, I'd like that

    • @zenvanriel · 10 days ago +3

      The important distinction here is that this guide shows you how to use LocalAI with Docker. Similarly, you can use Ollama with Docker. Docker just gives you an easy, one-command way to start an environment consistently.
      So I'll take the liberty of answering the difference between Ollama and LocalAI, because those are the systems you're actually interested in comparing. For large language models, Ollama is just as well supported by the community, if not better. You can use Ollama just fine in Docker as well: hub.docker.com/r/ollama/ollama
      However, LocalAI supports more types of AI models than just language models in one package. I aim to show more of those possibilities later on, hence my focus on LocalAI. If you just want to use language models, definitely try out Ollama!
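The one-command Docker start described above can be sketched as follows. The `docker run` invocation in the comments comes from the linked Ollama image page; the model name (`phi3.5`), the helper function, and the prompt are illustrative assumptions, not something from the video.

```python
# Sketch: after starting Ollama in Docker, talk to its REST API on port 11434.
# Shell commands (from the ollama/ollama Docker Hub page), shown here as comments:
#   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
#   docker exec -it ollama ollama pull phi3.5
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a (not yet sent) request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(f"{host}/api/generate", data=body,
                                  headers={"Content-Type": "application/json"})

req = build_generate_request("phi3.5", "Why is the sky blue?")
# Only send it once the container is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because Ollama and LocalAI both expose local HTTP APIs, swapping between them is mostly a matter of changing the port and endpoint path.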

    • @clemngene2313 · 10 days ago

      @zenvanriel oh, LocalAI. I understand now. Thanks

  • @zenvanriel · 11 days ago

    How does this model perform on your device? Let me know!

  • @zensajnani · 9 days ago +1

    nice name zen

    • @zenvanriel · 9 days ago +1

      We share the same first name based on your handle? If so, love it!

    • @zensajnani · 9 days ago

      @ yes we do 🤝🏼

  • @toby645pa · 8 days ago

    I always start my prompting with the question “Was 2024 a leap year?”. On simple local machines it tells you fairy tales. That's disappointing. But thanks for your good explanation 👏

    • @zenvanriel · 8 days ago

      Yeah, for local devices a RAG implementation makes more sense. I'll explain that later on.

  • @Ai-inarabic · 10 days ago

    How can I contact you, Zen?

    • @zenvanriel · 9 days ago

      In a few weeks I'll make sure we can all get in touch. Stay tuned.

  • @CarlosRodriguez-xk9ot · 9 days ago

    Awesome content! Could you make a video where you customize a reasoning model further, like connecting it to a folder of PDF files as a database?

    • @zenvanriel · 9 days ago +2

      This is indeed all in the backlog of the next videos! Stay tuned.

  • @CarlosRodriguez-xk9ot · 9 days ago

    How can you make an interface or make it receive input files?

    • @mahendrachoudhary3063 · 9 days ago

      There are open-source web UIs for that.

    • @zenvanriel · 9 days ago

      This is coming up in a future video!

  • @ultrasound1459 · 8 days ago +2

    The voice is not synchronised with your video

    • @zenvanriel · 8 days ago

      Oops, looks like it is slightly off indeed. Will make it better next time.

  • @BadalPradhan-h1v · 9 days ago +1

    Kindly make a video on how to set up your own DeepSeek R1 API.

    • @NightHawk_Pro · 8 days ago

      It's like $0.0014 per prompt, just get it lol. The thing's unbelievable. I was stuck for 3 days with Claude, ChatGPT, and Cascade. All of them couldn't handle the size of my algo any more. This thing did what I was trying to figure out in three days. Did it in 3 prompts.

    • @zenvanriel · 8 days ago

      DeepSeek R1, certainly the full model you can access via web/API, is really difficult to host on your own hardware. You'll need to start with a smaller model like Phi-3.5/4 or a smaller Llama model.

    • @NightHawk_Pro · 7 days ago

      @zenvanriel I bought it and it's the best model, hands down. Nothing compares.

    • @BadalPradhan-h1v · 6 days ago

      @zenvanriel I have installed DeepSeek R1 7B using LM Studio, but how do I create an API to access it inside my program?

    • @zenvanriel · 3 days ago

      LM Studio has a server mode; check the main menu icons. It's the green one that looks like a CLI.
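Once that server mode is running, LM Studio exposes an OpenAI-compatible HTTP API, which is how a program can talk to the local model. A minimal sketch, assuming LM Studio's default port of 1234 and an illustrative model identifier (the exact name shown in LM Studio may differ):

```python
# Sketch: build a chat-completions request for a local OpenAI-compatible server
# such as LM Studio's server mode (default http://localhost:1234/v1).
import json
import urllib.request

def build_chat_request(model: str, user_message: str,
                       base_url: str = "http://localhost:1234/v1") -> urllib.request.Request:
    """Build a (not yet sent) request for the /chat/completions endpoint."""
    body = json.dumps({
        "model": model,  # model identifier as shown in LM Studio (assumed name below)
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(f"{base_url}/chat/completions", data=body,
                                  headers={"Content-Type": "application/json"})

req = build_chat_request("deepseek-r1-distill-qwen-7b", "Hello!")
# With the server running in LM Studio, send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API shape, any OpenAI client library pointed at the local base URL should work the same way.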

  • @엠케이-p3p · 8 days ago

    Phi... OK, but not that powerful. I thought there might be something I didn't know in this video. I would rather use Open WebUI or AnythingLLM with Ollama if you just need a chat interface.

    • @zenvanriel · 8 days ago

      That's right. I will show in later content that LocalAI lets you use other types of models, not just LLMs. If you just want a language model, you can use Ollama.