Run DeepSeek-R1 Locally | Complete Beginner's Guide

  • Published Jan 22, 2025

Comments • 20

  • @makers_lab
    @makers_lab 2 days ago

    Thanks for making this, and great that it works well on a 4090. The wording of the model's thinking from their example output on HF was also amusing:
    "Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? ..." :)

  • @kryptomac101
    @kryptomac101 2 days ago +1

    Can it be used in Docker via its API, with the thinking and web-search features?

    • @professorpatterns
      @professorpatterns 1 day ago

      @@kryptomac101 Yep! Could you elaborate on the use-case some more? I can create a video on it.

    • @professorpatterns
      @professorpatterns 1 day ago +1

      Yes! Let me know if you need help setting it up.
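For readers following this thread: a minimal sketch of the Docker setup being discussed, assuming Ollama's official image and its documented defaults (port 11434, the `deepseek-r1:7b` tag is an example choice, not one named in the video):

```shell
# Start the Ollama server in Docker, persisting downloaded models
# in a named volume (requires the NVIDIA Container Toolkit for --gpus)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and run a DeepSeek-R1 distill inside the container
docker exec -it ollama ollama run deepseek-r1:7b

# Call the HTTP API; the model's reasoning appears inside
# <think>...</think> tags in the response text
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "What is 1 + 1?",
  "stream": false
}'
```

Note that web search is not built into Ollama itself; that feature typically comes from a front end layered on top, such as Open WebUI.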

  • @makers_lab
    @makers_lab 2 days ago +1

    So I've just been running some past GPT queries on the model you used, covering coding, food, email revisions and so on. It's impressive. Coding was not quite as good as 4o, making some errors with QML, some issues in a tensorflow coding task but minor, not fully knowing about the numerai cli for a shell script task (forgivable as that's esoteric, and 4o and claude weren't 100% accurate either), plus some other coding shortcomings, but still impressive. Email wording was a bit cheesy, but probably correctable with more guidance. Combi boiler fault diagnostics for delayed ignition was pretty good, though it missed an obvious reason that was the actual cause. Food related queries were good. Super impressive for something that's only a few GB though, and very fast. The term "game changer" is so overused, but this almost is, and might turn out to be.

    • @professorpatterns
      @professorpatterns 1 day ago +1

      @@makers_lab it’s just really REALLY impressive.

  • @N1ghtR1der666
    @N1ghtR1der666 1 day ago +1

    Can you cover ways to stitch multiple GPUs together to run larger models?

    • @professorpatterns
      @professorpatterns 1 day ago +1

      @@N1ghtR1der666 good idea! Let me add that to my to-do list

  • @will2u
    @will2u 1 day ago

    Which specific DeepSeek model for a 3050 6 GB mobile GPU + 24 GB RAM on Ollama?
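Several commenters ask which distill fits their GPU. A rough rule of thumb (my assumption, not from the video: these are the distill tags in the Ollama library, and actual VRAM needs vary with quantization):

```shell
ollama run deepseek-r1:1.5b   # smallest distill; ~2-4 GB VRAM (fits a 6 GB RTX 3050)
ollama run deepseek-r1:7b     # ~5-8 GB VRAM (RTX 3070 class)
ollama run deepseek-r1:14b    # ~10-16 GB VRAM
ollama run deepseek-r1:32b    # ~20-24 GB VRAM (RTX 4090 class)
```

With only 6 GB of VRAM, the 1.5b distill should run entirely on the GPU; the 7b may partially offload to CPU/RAM and generate noticeably slower.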

  • @JeffKGabriel
    @JeffKGabriel 1 day ago

    value - ty sir

  • @Technology_Forum
    @Technology_Forum 2 days ago

    What machine specifications are required to run this locally, and what are your machine specs, operating system, etc.?

    • @professorpatterns
      @professorpatterns 2 days ago

      @@Technology_Forum I currently have a new prebuilt computer with an NVIDIA 4090 GPU. What's your operating system and hardware? I can recommend a specific distill version.

    • @OA-Wanted
      @OA-Wanted 2 days ago

      @@professorpatterns Hey, I'm trying to figure out which one I can download. I have an RTX 3070 and a Ryzen 5 3500 CPU on Windows 11. Which one can I download?

  • @ZeebieZebra
    @ZeebieZebra 1 day ago

    I have an AMD 5950X, 64 GB RAM, and an RTX 4090. In LM Studio I'm only getting 2.08 tokens/s; I should be getting a heap more than that. What settings do you recommend for LM Studio?

  • @zhanezar
    @zhanezar 3 days ago +1

    Love all your videos. I was wondering, would you consider buying the NVIDIA DIGITS PC for AI?

    • @professorpatterns
      @professorpatterns 3 days ago

      @@zhanezar the price is pretty steep, but I’ll definitely give it a shot to see if it’s useful.

  • @Josecunyat
    @Josecunyat 2 days ago

    I've never run a local model; does the GPU matter a lot? I have an old NVIDIA Quadro for CAD modelling... not sure if that will make the cut. What happens if the GPU is weak?

    • @professorpatterns
      @professorpatterns 2 days ago

      @@Josecunyat It just won't load, or it'll fall back to the CPU. Basically, it'll be very slow to generate text.

    • @Josecunyat
      @Josecunyat 12 hours ago

      @ Responses take really long, yep…