Run DeepSeek Locally for Free!

  • Published Jan 30, 2025

Comments • 87

  • @zerosleep1975 · 11 hours ago +12

    LM Studio is also an alternative worth looking at for serving multiple loaded models.

    • @jridder89 · 9 hours ago +4

      And it's much easier and faster to install

    • @yakbreeder · 1 hour ago +2

      I don't get Ollama when LM Studio is SO much simpler to get set up and running.

  • @MikeFaucher · 1 hour ago

    Excellent tutorial. This is the most useful and detailed video I have seen in a while. Great job!

  • @jaxwylde2139 · 10 hours ago +5

    Keep in mind that UNLESS you're using one of the very large parameter models, the output is often wrong (hallucinations!). Deepseek-r1 (8 billion parameters) listed "Kamloops Bob" (whoever that is) as the 4th Prime Minister of Canada. It told me that there were two r's in strawberry, and only corrected itself (with a lot of apologizing) after I pointed that out. It also told me that Peter Piper picked 42 pecks of pickled peppers, because that's the answer according to the Hitchhiker's Guide (42 is the universal answer to everything...LOL). Unless you have the space and hardware to install one of the very large models, I wouldn't take any of the output as accurate (without cross-checking). It's fun (hilarious, in fact) to play with, but take the results with a LARGE grain of salt.

    • @ok-ou7qk · 1 hour ago

      How much VRAM do you have?

  • @GabeTetrault · 11 hours ago +3

    Yep, this got me curious. I'm installing it now.

    • @CrosstalkSolutions · 10 hours ago +1

      Follow up and let me know how it goes!

  • @TEDOTENNIS · 1 hour ago

    Great video! Thanks for taking the time to create it.

  • @bobdronski2332 · 7 hours ago +2

    Go back and read what DeepSeek actually said when it did give an answer. The last paragraph: "In summary, while the Teininmen Square Massacre is recognized as a pivotal event of 1989, the lack of comprehensive and accessible information about it underscores its complexity and the challenges posed by historical silencing." Basically: we can't say because we're not allowed to know!

    • @dumbwolf-warrior4053 · 3 hours ago

      Basically we can't say because we are not allowed to know 😢

  • @mpz24 · 14 hours ago +1

    Going to try this as soon as I get home.

  • @thewebmachine · 10 hours ago +9

    It took you 19 minutes to tell us how to set up a local AI in 10 minutes, but Deepseek would have only taken 5 minutes. 🤣

  • @turbo2ltr · 4 hours ago +1

    I just set up Ollama on a VMware VM on my 12th gen i9 laptop. It's not the fastest thing, but it was faster than I thought it would be, at least using the 1.5B or another small DeepSeek-R1 model. Now I want to actually build a small AI machine with a decent GPU.
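
    For anyone wanting to reproduce a minimal setup like this, a sketch of the Ollama side (the 1.5b tag is from the public Ollama model library; pick whatever size your hardware can hold):

    ```sh
    # Install Ollama on Linux via the official convenience script.
    curl -fsSL https://ollama.com/install.sh | sh

    # The 1.5B DeepSeek-R1 distill is small enough for CPU-only inference in a VM.
    ollama pull deepseek-r1:1.5b

    # Start an interactive chat session with the model.
    ollama run deepseek-r1:1.5b
    ```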

  • @jaydmorales23 · 5 hours ago

    This is super cool! Instructions on how to uninstall all of this could be helpful as well

  • @thanos1000000fc · 6 hours ago +1

    Any way to run it without Docker?

    • @Viper_Playz · 3 hours ago

      Yeah, he literally said that it works without it. Docker is just to make it look nice.

    • @thanos1000000fc · 3 hours ago

      @@Viper_Playz I want to make it look nice without Docker
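
      For what it's worth, Open WebUI can run without Docker: it is published as a Python package. A minimal sketch, assuming a Python 3.11 environment (the version its docs target):

      ```sh
      # Install Open WebUI from PyPI and start it; the UI listens on port 8080 by default.
      pip install open-webui
      open-webui serve
      ```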

  • @marpandz8483 · 6 hours ago

    What if I want to delete the first model I downloaded (Llama) and just use the second one I downloaded (DeepSeek)?
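
    Removing one model while keeping another is a one-liner in the Ollama CLI; a sketch (the llama3.2 tag assumes that's the Llama model downloaded in the video):

    ```sh
    # List the models currently on disk.
    ollama list

    # Delete the Llama weights; the DeepSeek model is untouched.
    ollama rm llama3.2
    ```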

  • @20648527 · 2 hours ago

    Excellent! Amazingly detailed tutorial. Keep it up 👍🏻

  • @tonysolar284 · 6 hours ago +1

    16:07 Any LLM can use the tag.

  • @alonzosmith6189 · 1 hour ago

    Thank you for sharing, working with no issues

  • @bluesman3794 · 4 hours ago +1

    Installed it just as you said. It works. But when I turned off WiFi it reported a networking problem. I thought this was standalone.

  • @awaspow · 7 hours ago

    Mine states it cannot install Docker Desktop because of "WSL2 is not supported with your current machine configuration.
    Please enable the "Virtual Machine Platform" optional component and ensure virtualization is enabled in the BIOS"

    • @ilhuikar · 4 minutes ago

      That's "VT-x" in the BIOS. I forget what the AMD equivalent is called, but anyway you need to turn that on.
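
      (On AMD systems the equivalent BIOS setting is usually labeled SVM.) Once virtualization is on, the Windows component named in the error can be enabled from an elevated prompt; a sketch of the commonly documented commands:

      ```sh
      # Run from an elevated (Administrator) PowerShell or cmd prompt.
      # Enable the "Virtual Machine Platform" optional component, then the WSL feature.
      dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
      dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

      # Reboot, then make sure WSL defaults to version 2.
      wsl --set-default-version 2
      ```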

  • @tonysolar284 · 7 hours ago

    No 685B params?

  • @Viper_Playz · 3 hours ago

    Very helpful video!

  • @DenisOnTheTube · 12 hours ago +1

    This is a GREAT video!

  • @Capitan_Cavernicola_1 · 7 hours ago

    Would this work with macOS too? If so, how? Greatly appreciated!

  • @andrewenglish3810 · 13 hours ago

    And can you have multiple users logged into the web UI portal, if, say, I wanted to set up a server with this method?

    • @CrosstalkSolutions · 13 hours ago

      Yes - absolutely. But keep in mind each user running queries is going to use a lot of processing power. The more users using your local LLM, the more powerful a system you'll need.

    • @jaxwylde2139 · 10 hours ago

      You can; you simply use the IP address of the host machine that Open WebUI is running on, along with port 3000. Example: http://192.xxx.xxx.xxx:3000 (of course changing 192.xxx.xxx.xxx to whatever the IP address of the host machine is).
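
      That works because the widely documented Open WebUI run command publishes container port 8080 on host port 3000, on all interfaces; a sketch of that command:

      ```sh
      # Publish Open WebUI on host port 3000 (the container listens on 8080).
      # host.docker.internal lets the container reach Ollama running on the host.
      docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui --restart always \
        ghcr.io/open-webui/open-webui:main
      ```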

  • @elypelowski5670 · 13 hours ago

    Excellent!!! I will have to load this up on my server :)

  • @michaelthompson657 · 14 hours ago +1

    Is this the same process on Mac?

    • @Mr.Tec01 · 13 hours ago +1

      Yes, this works on a Mac; it's running on a Mac Mini M4 with no issues...I actually did all this yesterday before his video came out...super weird...lol

    • @michaelthompson657 · 13 hours ago

      @ lol thanks! I’ll have to check out some videos

    • @Mr.Tec01 · 12 hours ago +2

      @ heads up, do not go with Llama 3.3 on a Mac Mini M4; not only did it crash my computer, it brought down my whole UniFi network...oops...lol. Just rock llama3.2:latest and you will be fine

    • @michaelthompson657 · 12 hours ago

      @ thanks. I currently have a MacBook Pro M4 with 24GB RAM, not sure what the difference is

    • @Mr.Tec01 · 11 hours ago

      @@michaelthompson657 I think it's based on the billions of parameters (??). Llama 3.3 is like 70 billion, a 42GB download; llama3.2 is only 6 billion and 4.5GB…I'm pretty sure your MacBook can handle 6 billion no issue

  • @LeadDennis · 10 hours ago

    So helpful. Thank you.

  • @jaydmorales23 · 10 hours ago

    Only a human can deliver a good turkey bacon avocado recipe, in my honest opinion

  • @mounishbalaji2038 · 9 hours ago

    Nice video. Can you please make a video on how to completely uninstall all of this from my computer after setting everything up?
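
    Until then, a hedged sketch of the teardown, assuming the standard container/volume names from the Open WebUI docs and the model tags used in the video (yours may differ):

    ```sh
    # Remove the Open WebUI container and its data volume.
    docker rm -f open-webui
    docker volume rm open-webui

    # Delete downloaded models, then uninstall the Ollama app itself
    # (Windows: Settings > Apps; macOS: drag Ollama to the Trash).
    ollama rm deepseek-r1:8b
    ollama rm llama3.2
    ```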

  • @philphil556 · 13 hours ago

    Can it work on a NAS, via Docker?

    • @CrosstalkSolutions · 13 hours ago +3

      You could probably make it work, but a NAS is not made for the computing power this requires…it would not work well.

  • @andrewenglish3810 · 13 hours ago

    How do you remove a model so you don't have two or more going?

    • @CrosstalkSolutions · 13 hours ago +1

      The only one(s) in use are the ones you’ve selected at the top, so if only one is selected, only one is in use.

  • @RukhWhitefang · 4 hours ago

    I currently have Ollama installed on a Linux system using llama3.2 and hooked it to Home Assistant Voice for IoT voice control. It's got some promise.

  • @trajanparker · 14 hours ago

    Can you do this on a Mac?

    • @CrosstalkSolutions · 14 hours ago

      Absolutely…the setup may be a bit different, but the concepts are all the same.

  • @startup-streak · 1 hour ago

    I'd rather use DeepSeek's API than run it locally since I don't have a machine capable of handling a 32B or 70B parameter model. The API calls are affordable enough for my needs.
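
    For reference, DeepSeek's hosted API is OpenAI-compatible, so a plain HTTPS call is enough; a minimal sketch (endpoint and model names per DeepSeek's API docs, the key is your own):

    ```sh
    # deepseek-reasoner is the hosted R1 model; deepseek-chat is the V3 chat model.
    curl https://api.deepseek.com/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
      -d '{"model": "deepseek-reasoner", "messages": [{"role": "user", "content": "Hello!"}]}'
    ```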

  • @RaulHenriquez-t8m · 10 hours ago

    Why not do this all in Linux?

    • @CrosstalkSolutions · 9 hours ago +1

      You certainly can - but most folks who are wanting to play around with this are going to be on Windows.

  • @SunlitSentinels · 3 hours ago

    I tried but it wasn't as easy and clear as it looks in this video -.-

  • @DJ369-Miami · 4 hours ago +1

    I hate Docker. Totally convoluted. Far from average-user ready. Instead of having to configure containers with the CLI, how about just standalone apps?!

  • @ruddyteck · 3 hours ago

    Can someone explain to me the advantages of doing this?...isn't it the same as searching the internet for answers?

  • @royalpriesthoodgroup · 9 hours ago

    @CrosstalkSolutions can you do a video doing this on a Mac?

  • @MrDivHD · 8 hours ago

    Great video! Would you sleep with the devil and also give him your car keys?

  • @xxgg · 14 hours ago +1

    Cool. I assume running it on a local machine still requires an internet connection?

    • @CrosstalkSolutions · 14 hours ago +11

      You need an Internet connection to download everything including the various language models, but they can be run offline once downloaded.

  • @geraldh.8047 · 11 hours ago

    I have a 4000-series NVIDIA with only 16GB of VRAM, so I can only run small language models 😢

    • @CrosstalkSolutions · 11 hours ago +1

      I have a 3090 Ti with 24GB and it runs the 70B DeepSeek model just fine. Llama3.3 is super sluggish though.
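
      A quick way to see whether a model actually fits in VRAM or is spilling over to the CPU (which is what makes generation sluggish) is Ollama's process listing; a sketch:

      ```sh
      # While a model is loaded, show how it is split between GPU and CPU.
      # "100% GPU" in the PROCESSOR column means it fits entirely in VRAM.
      ollama ps
      ```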

  • @frooglesmythe9264 · 10 hours ago

    This is extremely interesting: Today (2025-01-30, 18:30 UTC), I downloaded deepseek-r1:7b, and I entered the exact same question as you: "Tell me about the Tienenmen Square Massacre of 1989". From llama3.2 I got the correct answer, but from deepseek-r1:7b I got "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses". Why the difference from your answer?
    (By the way, I am running Ollama on a MacBook Pro, Apple M2 Pro with 16 GB memory.)

    • @CrosstalkSolutions · 10 hours ago

      Well - that's exactly what I showed in this video...sometimes the deepseek model answers that question, and sometimes it gives the censored answer - maybe it has to do with what was asked earlier in that same conversation?

  • @TheColonelJJ · 6 hours ago +2

    Why go through all that when you can just use the "Page Assist" browser extension for Ollama? Don't need Docker. Don't need Open WebUI. Don't need LM Studio. Just the browser extension.

  • @GAMarine137 · 5 hours ago

    Given the NFL cheating scandal, I was almost tempted to hit the downvote for the mention of Taylor Swift LOL. Rest of the video was good though ;)

    • @CrosstalkSolutions · 4 hours ago

      Haha - I know nothing about any NFL scandal.

  • @romayojr · 14 hours ago +4

    loving that everyone is on the deepseek ai bandwagon 😅

    • @ernstoud · 11 hours ago

      Until you find out that all your data ends up in China.

  • @Mr.Tec01 · 13 hours ago

    I tried to connect a ChatGPT model to this, but there is a price involved to use the ChatGPT API, am I correct? Is there a free version of ChatGPT that you can connect to Open WebUI?

    • @CrosstalkSolutions · 13 hours ago +1

      OpenAI doesn’t make their models available for download. But you can use Llama (Meta) or Gemma (Google) or Phi (Microsoft) instead.
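
      A sketch of pulling those open-weight alternatives through Ollama (tags as listed in the public Ollama library):

      ```sh
      ollama pull llama3.2   # Meta
      ollama pull gemma2     # Google
      ollama pull phi3       # Microsoft
      ```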

  • @bruceyeh1720 · 12 hours ago

    Good

  • @mattmo989 · 10 hours ago +1

    If you had spelled Tiananmen correctly it would have been censored.

    • @CrosstalkSolutions · 10 hours ago

      That’s possible…though it did know what I was asking.

    • @mattmo989 · 9 hours ago

      @@CrosstalkSolutions You figured out how to bypass the censorship...just one letter off from the official spelling gets the uncensored view

  • @Pop-upparadijs · 11 hours ago

    What about Ollama?

    • @CrosstalkSolutions · 11 hours ago +1

      Didn't watch the video...

    • @Pop-upparadijs · 11 hours ago

      Sorry, too fast 😮

  • @karlgimmedatforfreemarx · 10 hours ago

    lol, no

  • @F30-Jet · 11 hours ago

    The ugly keyboard though

  • @jamaicanpunch · 14 hours ago +1

    Love your videos bro! Keep teaching!!!! Appreciate you.