Ultimate Offline/Local AI: Deepseek R1 | Complete Crash Course/Ollama Guide

  • Published Feb 5, 2025

Comments • 162

  • @TroubleChute
    @TroubleChute  7 days ago +4

    Looking for Deepseek image generation/interpretation? See Deepseek Janus Pro: th-cam.com/video/6axAY9NV1OU/w-d-xo.html (Also free, and local/offline)

    • @TrippyVisual
      @TrippyVisual 1 day ago

      Could you please suggest whether I should use Ollama or LM Studio?
      I think LM Studio is the easier and better option because it's simple to set up and also includes a GUI.
      Any suggestions between Ollama and LM Studio?

  • @Wezza-c2j
    @Wezza-c2j 7 days ago +24

    It was a pretty straightforward tutorial, thank you.

  • @harrydurnberger
    @harrydurnberger 5 days ago +1

    Awesome guide thank you. This post-DeepSeek phase of AI is going to be fun

  • @MDLEUA
    @MDLEUA 7 days ago +3

    The video is on point! fast and easy to follow. Thank you!

  • @A7medzz0
    @A7medzz0 8 days ago +63

    It's time to test the 4090.

    • @venkatkaranth6855
      @venkatkaranth6855 7 days ago +4

      5090

    • @AYouTubeRambler
      @AYouTubeRambler 7 days ago

      It works with the steamdeck. The 4090 will easily do it lol

    • @srivarshan780
      @srivarshan780 6 days ago

      @@AYouTubeRambler what the helllll

    • @angelsv
      @angelsv 6 days ago +1

      Just tried with a GeForce 980 Ti and at least the first 1.5b worked OK for me, fast enough.

    • @AYouTubeRambler
      @AYouTubeRambler 5 days ago

      @ It's a bit tough on the steamdeck. But it can use the middle tier model of the AI so long as you don't have a substantial amount of tokens being used each generation. Unfortunately, it's a little too slow for my taste. Each response takes a solid 3-5 minutes.

  • @testy_cool
    @testy_cool 6 days ago

    This is a fabulous to-the-point and beginner-friendly video.

  • @univera1111
    @univera1111 7 days ago +2

    Thanks for mentioning the hardware. A lot of videos don't mention hardware, and they're usually using a Mac.

  • @luorhan
    @luorhan 7 days ago +2

    Easy and straightforward, thanks.

  • @Gorjuk
    @Gorjuk 1 day ago

    dude, thank u so much! It was interesting

  • @SangheiliSpecOp
    @SangheiliSpecOp 7 days ago +3

    Very very thorough guide, you went through every single dialog box. Thank you, it is much appreciated. I got an old water cooled 2080 super still chugging along and the weakest version of the ai works fine. Dare I step it up and make stuff catch on fire? lolol

    • @jnhkx
      @jnhkx 6 days ago

      I got a 1070 Ti with 8GB VRAM and it runs 8b perfectly and fast.

    • @SangheiliSpecOp
      @SangheiliSpecOp 6 days ago

      @jnhkx Nice, I have been noticing I can go up to that as well. Actually, though, I'm not too impressed with this AI, at least the lesser forms. I hear the true non-distilled one is good, but no one has the 150TB of RAM at home to run it lol. I have since tried Dolphin Llama 3 and it works faster and smarter, and I also tried a roleplaying AI for conversations and it works well.

  • @Ferruccio_Guicciardi
    @Ferruccio_Guicciardi 6 days ago +1

    Thanks a lot! Very handy!

  • @rashadb954
    @rashadb954 4 days ago

    Great tutorial, thanks!

  • @photovisionproject
    @photovisionproject 7 days ago +1

    VERY HELPFUL! THANK YOU SO MUCH!

  • @Turbo_Marvel_Boost
    @Turbo_Marvel_Boost 8 days ago +27

    I have locally deployed DeepSeek R1 (70B parameters). My goal is to develop an autonomous code-generation system that integrates code synthesis and automated error correction. The system will iterate until error-free code is generated. For testing, it will automatically install the required dependencies and streamline the code execution process. Can you create this project?

    • @sainsrikar
      @sainsrikar 8 days ago +6

      You can integrate deepseek r1 into VS Code

    • @donlopez09
      @donlopez09 3 days ago

      How? @@sainsrikar
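
Editor integrations generally work by talking to the local Ollama server's REST API, so any script (or extension) can do the same. A minimal sketch in Python, assuming Ollama's default endpoint on localhost:11434; the model name is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # POST the prompt to the locally running Ollama server and return the reply text
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# usage (needs `ollama serve` running):
#   print(ask("deepseek-r1:8b", "Why is the sky blue?"))
```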

  • @bu3d_efficiency
    @bu3d_efficiency 6 days ago

    Thank you! That was to the point.

  • @MiloLIV-w3b
    @MiloLIV-w3b 7 days ago +5

    I have a 4060 with 8 gigs of VRAM; the 8b model works pretty damn quickly, I would say.

    • @anonix4078
      @anonix4078 6 days ago

      Thank you. I have the same GPU. Thinking of trying it.

    • @Gamer-jc4kg
      @Gamer-jc4kg 6 days ago +1

      nvidia privileges

    • @kura_bot
      @kura_bot 5 days ago

      I'm using a 3070, which also has 8GB of VRAM, and the 8B model is running fine. One can append --verbose to the run command like:
      `ollama run deepseek-r1:8b --verbose`
      to get more information like the eval rate. I am getting a rate of 59.39 tokens per second, which is significantly faster than my reading speed.
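
For reference, the `--verbose` timings come from the same counters Ollama's API reports (durations in nanoseconds), and the eval rate is just tokens generated divided by generation time. A quick sketch of the arithmetic, with field names mirroring Ollama's API response:

```python
def eval_rate(eval_count: int, eval_duration_ns: int) -> float:
    # Ollama reports eval_duration in nanoseconds;
    # eval rate = tokens generated / seconds spent generating
    return eval_count / (eval_duration_ns / 1e9)

# e.g. 950 tokens generated over 16 seconds:
print(round(eval_rate(950, 16_000_000_000), 1))  # 59.4, in line with the figure above
```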

  • @EagleRandom
    @EagleRandom 7 days ago +2

    Can you make a tutorial to install DeepSeek Janus Pro local ? Thank you.

  • @thetechdog
    @thetechdog 7 days ago +22

    For people who want a nicer interface, as well as people using AMD GPUs (Ollama's AMD GPU support isn't as good), use LM Studio instead. You even get the option to expand and collapse the thinking process.

  • @comanandrei1993
    @comanandrei1993 6 days ago +2

    I was so stoked about trying this on my 1650Ti laptop until you got to the VRAM requirements. Now I'm wondering if it's even worth trying the 7B model on my rtx 3060 (12GB version) PC 😐

    • @kevinzhu5591
      @kevinzhu5591 6 days ago

      My RTX 3090 can run the 32B model seamlessly, while the 64B model does generate slowly due to maxing out VRAM usage.

  • @rgarlinyc
    @rgarlinyc 7 days ago +1

    Thanks a lot, very helpful!

  • @Lyaaaamlyaaaam
    @Lyaaaamlyaaaam 6 days ago +1

    can we hide the thinking process in the chatbox just like a normal assistant?

  • @JerelTeh
    @JerelTeh 6 days ago

    Thanks for the thorough explanation. This explains why my RX 6600 XT with slow 8GB is choking on the 32b model😂

  • @mamadouxchaco8197
    @mamadouxchaco8197 6 days ago

    Thank you for the tutorial.

  • @Bambabah
    @Bambabah 6 days ago

    Wonderful mate!

  • @rasyasejati
    @rasyasejati 8 days ago +5

    8b on my machine ran a bit slower; my 3050 Ti laptop only has 4GB of VRAM, but it's acceptable.

  • @thehighwayman78
    @thehighwayman78 7 days ago

    Great walkthrough thank you! +1 sub

  • @yuzuki-tangerine
    @yuzuki-tangerine 7 days ago +2

    Can you teach us how to download more ram next? Thanks :)

    • @reze_dev
      @reze_dev 7 days ago

      Looks like 16GB of VRAM is not enough for 2025. Lol

  • @YuvrajSingh-ls5ut
    @YuvrajSingh-ls5ut 5 days ago +1

    Which model should I install? My laptop specs: i9-14900HX, RTX 4060 with 8GB VRAM, 16GB DDR5 RAM. Please help me.

  • @tarviis4607
    @tarviis4607 7 days ago +2

    Is it possible to hide the thinking bar? Like, not just clicking it every time, but disabling it? Thanks for the video.

    • @TroubleChute
      @TroubleChute  7 days ago

      I would say yes, but it would still take ages to think before actually giving the response. The thinking text output is quite literally the AI "thinking" and is meant to stay part of the model, as far as I understand.

    • @thetechdog
      @thetechdog 7 days ago +1

      Use LM Studio if you want to be able to hide the thinking process.

  • @NumbaoneGuy
    @NumbaoneGuy 7 days ago +2

    If needed, how do we uninstall the LLMs? Do we just delete the applications? And where do they live on our computers?

    • @pyr0bee
      @pyr0bee 5 days ago

      ollama rm modelname

  • @ricardofranco4114
    @ricardofranco4114 7 days ago +2

    i am installing deepseek in my server. :D!

  • @pastuh
    @pastuh 7 days ago +2

    And now how to fine-tune model? ;)

  • @spectrotsu6629
    @spectrotsu6629 6 days ago

    But the 32B one is more powerful than 7b and 1.5b, as it answers some questions right and is on par with the 671b model.

  • @chamanshibha3961
    @chamanshibha3961 3 days ago

    I interrupted the PowerShell installation step by disabling the internet, and now I'm unable to install again. Please help.

  • @benjioverseas
    @benjioverseas 7 days ago

    Thank you so much

  • @AFewMomentLatte
    @AFewMomentLatte 1 day ago

    Can I run Janus locally using the same technique?

  • @wyktron
    @wyktron 5 days ago

    Hey TroubleChute, I have a CUDA GPU, but with this setup you showed it is still running on my CPU. Am I missing anything?

  • @its_eis
    @its_eis 6 days ago

    Thank you! How do I uninstall models?

  • @caterpilar
    @caterpilar 5 days ago

    Good vid!

  • @lidinande
    @lidinande 7 days ago

    I have been having weird problems on my laptop ever since I downloaded the interface you recommended. :[ I really hope you didn't give me malware...

  • @adzrocable
    @adzrocable 7 days ago

    are you the voice behind Kurzgesagt??

  • @bladethewitch1442
    @bladethewitch1442 6 days ago

    something tells me he's deepening his voice

  • @Lyaaaamlyaaaam
    @Lyaaaamlyaaaam 6 days ago

    thanks a lot

  • @SN-je5ej
    @SN-je5ej 7 days ago

    Excellent

  • @gordenfreemin4780
    @gordenfreemin4780 6 days ago

    You were lucky your *1.5b* model responded "16".
    The first thing I asked *8b* was "What is 8/2(2+2)" and it said "4" after reasoning.
    Then I asked if it was sure, and it changed its answer to "1".

    • @Gamer-jc4kg
      @Gamer-jc4kg 6 days ago

      XD

    • @watema3381
      @watema3381 6 days ago +1

      Finetune it with r1

  • @Krishna0201
    @Krishna0201 8 days ago +1

    Actually, I use Msty to run DeepSeek, adding the link from Hugging Face. It doesn't require any additional WebUI because it has one built in, it's easy to use, and no login is required. Hope you can try it later; it's way easier than Ollama.
    Edit: after installing Msty, add your GPU in the Local AI tab in settings.

    • @CatTheGato
      @CatTheGato 7 days ago

      thanks for the heads up broskie, im gonna check it out

  • @mutluemre93
    @mutluemre93 6 days ago

    I have 48 GB ram but 4 GB Vram. Which model should I choose?
    Display Memory: 3964 MB
    Shared Memory: 24447 MB.

  • @Baiii369
    @Baiii369 6 days ago

    Is there a local way to have R1 also search the internet?

  • @GurtGobain
    @GurtGobain 6 days ago +1

    Can somebody tell me which version I should choose? I'm confused about how he explained VRAM vs RAM. My System: Total memory 42904 MB, Dedicated memory 10240 MB, Shared memory 32664 MB. I have an RTX3080. Do you think I can run 14B since I have over 32GB total RAM, or should I stick to 7B since I only have 10GB of VRAM?

  • @b1gdan
    @b1gdan 7 days ago +1

    Which model is able to read images and documents like ChatGPT?

  • @northernmostchannel7059
    @northernmostchannel7059 6 days ago

    What should I do? My Ollama server is not working. After the ollama serve command there is an incomprehensible list and the server does not work. Instead, this message appears: Error: could not connect to ollama app, is it running?

  • @shanetravel
    @shanetravel 7 days ago

    thanks man

  • @MartinEduardoPerezGomez
    @MartinEduardoPerezGomez 7 days ago

    I would like to know how to install DeepSeek on my computer, but have it download the 400GB version to a hard drive separate from the operating system.
    I am willing to purchase a 4TB internal hard drive for this purpose; however, I don't know how to do it. Thanks for your help.

  • @IronMan-yg4qw
    @IronMan-yg4qw 8 days ago

    thx great stuff!

  • @IronMan-yg4qw
    @IronMan-yg4qw 8 days ago +3

    I didn't see DeepSeek in the model box even after I tried restarting, so I tried Firefox and it worked. I guess it doesn't work in Brave browsers. :\

    • @Panicthescaredycat
      @Panicthescaredycat 8 days ago +1

      Just disable the adblock for that website, works fine.

    • @TroubleChute
      @TroubleChute  7 days ago

      @@Panicthescaredycat That, and a few refreshes/restarting Ollama should help. Double-check the Logs folder by right-clicking the Ollama icon, and open (I think it's) server.txt to see if it complains about anything there.

  • @InfoTalksWithBilal
    @InfoTalksWithBilal 2 days ago

    How do I disable DeepThink?

  • @ChintuNama
    @ChintuNama 3 days ago

    I have a GTX 1650 4GB, can I run it?

  • @davidbatista1183
    @davidbatista1183 6 days ago

    4:50 the display is not weird; it looks like LaTeX, a typesetting system widely used in the scientific community.

  • @officeroinks8823
    @officeroinks8823 7 days ago

    how do we change the weights

  • @That-Albino-Kid
    @That-Albino-Kid 6 days ago

    wsarecv: An existing connection was forcibly closed by the remote host.
    and
    connectex: No connection could be made because the target machine actively refused it.
    any suggestions?

  • @nikluz3807
    @nikluz3807 7 days ago

    Is there a command to ensure that the model uses the GPU? Mine seems slow and I want to be sure.

  • @zapzapdelivery4280
    @zapzapdelivery4280 7 days ago +2

    Is the local DeepSeek limited by ToS too? Like when you use the online DeepSeek, there are certain prompts that won't go through, as it says the question goes against their ToS and whatnot.

    • @TroubleChute
      @TroubleChute  7 days ago +2

      As this is offline, you can download fine-tuned models that have certain features changed, like censors and the rest. These are just the base official releases, but there are more.

    • @thetechdog
      @thetechdog 7 days ago

      You can use the Abliterated version and that one is mostly uncensored.

    • @B_e-i_tter_World
      @B_e-i_tter_World 4 days ago

      @@TroubleChute
      Is there a way to send the real time responses of the model from the command line to, say, a program?
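Yes, in the sense that the local Ollama server streams its output as newline-delimited JSON, so any program can consume tokens as they are generated. A minimal parsing sketch, assuming the default localhost endpoint; here the parsing is shown on canned lines rather than a live stream:

```python
import json

def extract_text(lines):
    # Ollama streams newline-delimited JSON objects; each carries a "response"
    # fragment until a final object with "done": true
    out = []
    for line in lines:
        chunk = json.loads(line)
        if not chunk.get("done"):
            out.append(chunk["response"])
    return "".join(out)

# The same parsing works on a live stream (needs `ollama serve` running):
#   import urllib.request
#   body = json.dumps({"model": "deepseek-r1:8b", "prompt": "hi", "stream": True}).encode()
#   with urllib.request.urlopen("http://localhost:11434/api/generate", data=body) as resp:
#       for line in resp:
#           ...feed each line through json.loads as above, printing fragments as they arrive
```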

  • @zebusaqua4415
    @zebusaqua4415 4 days ago

    While Chatbox is cool, it is extremely heavy. A more lightweight option would be cool.

  • @heyangao927
    @heyangao927 7 days ago

    I'm on a 3070 Ti with only 8GB VRAM and 16GB RAM. I can still run the 8b model fine! Usage is 7GB/8GB.

  • @kira6353
    @kira6353 4 days ago

    04:54 "S R A W B E R R I N G" ??? 😭 it gets the final answer right but why does the word suddenly change

  • @ChaotixAsbestos
    @ChaotixAsbestos 7 days ago

    Or does the lower model know less than the ones above?

  • @purgedome2386
    @purgedome2386 9 hours ago

    Getting a 404 when updating Chatbox. It seems you can't download any version (404 error msg) as of this moment (25/02/05).
    EDIT: it's back up

  • @itsukyy
    @itsukyy 7 days ago

    Can you make a video on the Xbox app on PC, with Black Ops 6 stuck on the loading screen?

  • @therationallion
    @therationallion 6 days ago

    Whoa you're not actually a robot!?? 🤯

  • @knl654
    @knl654 7 days ago

    I am using the 7b version and I like how it works, but man, it doesn't know the file formats of certain software, even the popular ones. It mistakes the software for something else and doesn't even acknowledge its existence. I asked about Final Draft and the .fdx format it uses, and it didn't even know what that meant; it mistook it for Figma, which uses .fig. I asked the free version of ChatGPT and it gave all the right answers. I think I'd have to use a bigger model, but I am limited by hardware, and I have a 3080 Ti and 64GB RAM. I think this is not meant for regular users unless they have a very high-end setup; it's better to stick with a different model's free version. Please let me know your thoughts.

  • @srivarshan780
    @srivarshan780 6 days ago

    Guys, does it work with AMD?

  • @lazyhomeowner
    @lazyhomeowner 3 days ago

    seems some info was omitted and this is more complicated than it should be.

  • @ChaotixAsbestos
    @ChaotixAsbestos 7 days ago +1

    Why do the AI Boost cores on my new Intel CPU do nothing at all when this runs? What are they even for? I've never seen them doing anything. Maybe someone knows?

    • @paulossilveira
      @paulossilveira 7 days ago

      Great question!

    • @jm-sh6qr
      @jm-sh6qr 7 days ago +2

      If the application does not support your CPU's features, then it won't matter. I would look into finding tools that support them.

  • @shikikanaz4624
    @shikikanaz4624 8 days ago

    Is the uninstaller a simple uninstall?

  • @oden2011
    @oden2011 7 days ago

    Does the bigger model mean it's smarter, or is it just faster?

    • @visfotakvideos5202
      @visfotakvideos5202 7 days ago +1

      Bigger means it has more information to derive answers from, thus, usually slower.
      Bigger = smarter but slow
      Small = less smart but quicker
      Hope this helps

    • @oden2011
      @oden2011 6 days ago

      If it learns throughout its interactions with me, does that mean it gets smarter even though I'm still using 1.5B?

    • @uhfan22
      @uhfan22 6 days ago

      @@oden2011 It doesn't learn from you at all. It just remembers your conversations, in order to give you related answers to your previous prompts.

  • @temhirtleague-chess
    @temhirtleague-chess 1 day ago

    For anyone interested, I am running the distilled Llama 8B on 16GB RAM with a GTX 1060 (6GB VRAM).

  • @dxmilol
    @dxmilol 6 days ago

    So this is fully safe right?

  • @Snowaxe3D
    @Snowaxe3D 7 days ago +4

    here before they start saying "iT sTeaLs yO dAta",

    • @TroubleChute
      @TroubleChute  7 days ago +4

      Their web interface: yes, your data is being sent to their server and collected. The offline model? You could run it on a completely offline computer and prevent it from communicating in any way, if it ever tried.

  • @Blue_Razor_
    @Blue_Razor_ 8 days ago +1

    How does Ollama compare to something like LM Studio?

    • @ppie8001
      @ppie8001 8 days ago +1

      They run the same GPU software backend: llama.cpp. So performance should be the same.

    • @fixelheimer3726
      @fixelheimer3726 7 days ago +1

      LM Studio also supports DeepSeek, btw

    • @thetechdog
      @thetechdog 7 days ago

      LM Studio has a built-in UI and much better support for AMD GPUs. It also lets you easily download models from Hugging Face, while letting you choose which quantization to download.

    • @fixelheimer3726
      @fixelheimer3726 7 days ago

      @@thetechdog yes I can also recommend it👍

  • @unicorn1655
    @unicorn1655 7 days ago +2

    How can I delete a model that I don't want anymore?

    • @Nemeziis360
      @Nemeziis360 7 days ago

      I'd like to know too lol

    • @_roj
      @_roj 7 days ago

      Looking at the comments for answers, but all I get is people asking more questions

    • @cptninja7890
      @cptninja7890 6 days ago +1

      Go to C drive > Users > (the folder name you entered during Windows installation) > .ollama > models, then delete the model files (don't delete the files that are 1-2KB in size).

  • @kamransiddiqui6053
    @kamransiddiqui6053 6 days ago

    How do I uninstall DeepSeek from my PC completely?

  • @ArtFoSho
    @ArtFoSho 2 days ago

    I run 7b with 2060 Super 8gb and it runs fast and smoothly

  • @ordinarryalien
    @ordinarryalien 8 days ago +18

    That's what Xi said.

  • @yeahnope620
    @yeahnope620 3 days ago

    Uh, it seems to me the RAM requirements listed in this video are wrong. As far as I can tell, the models use just about exactly the amount of RAM equal to the file size of the model, so 14b = 9.7GB of VRAM.
    If I load the 14b model, you can see in Task Manager that the model is using exactly 9.7GB, not 32GB. I have a 12GB VRAM GPU and the model does not seem to need any additional memory beyond what I just mentioned.
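
That observation roughly matches how quantized models are sized: the download (and, approximately, the VRAM needed for the weights) is parameters × bits-per-weight ÷ 8, plus some extra for the context (KV cache). A rough sketch; the bits-per-weight figure below is only an approximation for a 4-bit quantization:

```python
def weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    # weights occupy (parameters * bits-per-weight / 8) bytes; result in GB (1e9 bytes)
    return params_billion * bits_per_weight / 8

# ~14.8B parameters at ~4.9 bits/weight (a rough 4-bit quant figure) ≈ 9 GB of weights,
# far less than the full-precision size the headline RAM numbers suggest
print(round(weight_size_gb(14.8, 4.9), 1))
```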

  • @skywater9607
    @skywater9607 7 days ago

    Is it normal for the CPU to go above 90 degrees when executing?

    • @stickfijibugs
      @stickfijibugs 7 days ago

      Look at Task Manager CPU usage, it may be near 100%. Programs like these can be heavy on the CPU, so it depends on the cooling arrangement you have on your CPU.

  • @berryvr7184
    @berryvr7184 7 days ago +3

    Is this actually offline? Like, fully? Can you use this model without being connected to the internet and it will give you answers?

    • @Gamer-jc4kg
      @Gamer-jc4kg 6 days ago

      yes

    • @triskaideka13
      @triskaideka13 6 days ago

      I've heard the offline version isn't censored like the online one

    • @berryvr7184
      @berryvr7184 5 days ago

      I've been testing it out and honestly, unless it's just this version of the AI, it's kinda dumb. It just feels like a very early version of ChatGPT, almost exactly how it used to respond to questions. It even gave an outdated answer to something I asked, which makes the theory that it's just a ChatGPT rip-off, probably of a very old model, make more sense.

  • @freakyninjaman3
    @freakyninjaman3 7 days ago

    I don't think the Haiku was correct 😅

  • @SoruxYT
    @SoruxYT 5 days ago +1

    4:54 ah yes strawberry is spelt as "strawberring"

  • @knl654
    @knl654 7 days ago

    DANG! I gave it my story to analyze but it messed up everything, even the relationships in the story and their dynamics. I'm having serious second thoughts about this model; it doesn't understand the story, even a short one. People, please try it: write a short story, ask for suggestions, then give your own suggestion, and it will mess everything up. Is there another, better model I can use to get feedback on the story?

  • @Vgmantra
    @Vgmantra 5 days ago

    That Haiku was not a Haiku

  • @elwinroyale
    @elwinroyale 6 days ago

    Sadly it doesn't support Intel GPUs.

  • @H4d3s
    @H4d3s 7 days ago

    Maybe I am dumb, but I don't know how the AI can have any knowledge when it's installed locally and not using the internet. No background information = no knowledge????

  • @NotSpartaGT
    @NotSpartaGT 7 days ago

    First time seeing you make an AI video!

  • @DanielThiele
    @DanielThiele 7 days ago

    Dude, that's not ultimate at all. When I press search online, it can't do it. When I give it a PDF file and ask it to print out all the text info, it hallucinates all kinds of stuff. The online version can do it perfectly, btw... I tried it with the 14b model.

    • @kevinzhu5591
      @kevinzhu5591 6 days ago

      This is an offline model; it does not support online functionality.

    • @DanielThiele
      @DanielThiele 6 days ago

      @@kevinzhu5591 Nah dude, this is a locally installed model that you can run offline, but there are a bunch of people who enable this locally installed model to do a web search. Stop spreading false information.

  • @RedGreene
    @RedGreene 6 days ago

    4:50 "SRAWBERRYING"
    It also messed up the haiku format.
    I'm sure the bigger models are way better but why anybody would use the smallest ones is beyond me.

  • @leggo15
    @leggo15 7 days ago

    \boxed{} is LaTeX.

  • @Kaylakaze
    @Kaylakaze 7 days ago

    I tried this and asked Deepseek some simple questions and it knows nothing.

  • @LOUXOWENS
    @LOUXOWENS 7 days ago

    Does this resolve the political censorship issue?

    • @ExySmexy
      @ExySmexy 7 days ago

      no

    • @B_e-i_tter_World
      @B_e-i_tter_World 4 days ago

      Yeah, the ToS and guidelines imposed on it no longer function when it is offline.
      And if it is built off skewed data, then it can be fine-tuned or corrected with new data.

  • @modredlew4190
    @modredlew4190 7 days ago

    I'll train this Chad AI being my AI GF in my local vps

  • @Jimbo544
    @Jimbo544 7 days ago +1

    Bruh, the math question at 4:12 isn't 16, it's 1. These things still can't do math.

    • @zurub
      @zurub 7 days ago +5

      The answer is 16. You still can't do math...

    • @B_e-i_tter_World
      @B_e-i_tter_World 4 days ago

      The convention is that you do math from left to right,
      So 8/2*(2+2):
      - After parenthesis is; 8/2*4,
      - And then you do (8/2) first, as it is the first operation on the left and get; 4*4,
      - And then you do 4*4 getting; 16.
      I hope this clarifies it.
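
For what it's worth, programming languages follow the same left-to-right convention, so the expression evaluates the same way there:

```python
# Division and multiplication share precedence and associate left to right,
# so 8 / 2 * (2 + 2) parses as ((8 / 2) * (2 + 2)), not 8 / (2 * (2 + 2))
result = 8 / 2 * (2 + 2)
print(result)  # 16.0
```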