host ALL your AI locally

  • Published Nov 20, 2024

Comments • 3.1K

  • @NetworkChuck · 6 months ago · +110

    Ready to get a job in IT? Start studying RIGHT NOW with ITPro: go.acilearning.com/networkchuck (30% off FOREVER) *affiliate link
    Discover how to set up your own powerful, private AI server with NetworkChuck. This step-by-step tutorial covers installing Ollama, deploying a feature-rich web UI, and integrating stable diffusion for image generation. Learn to customize AI models, manage user access, and even add AI capabilities to your note-taking app. Whether you're a tech enthusiast or looking to enhance your workflow, this video provides the knowledge to harness the power of AI on your local machine. Join NetworkChuck on this exciting journey into the world of private AI servers.
    📓📓Guide and Commands: ntck.co/ep_401
    ⌨⌨My new keyboard: Keychron Q6 Max: geni.us/0SGY
    🖥🖥My Computer Build🖥🖥
    ---------------------------------------------------
    ➡Lian Li Case: geni.us/B9dtwB7
    ➡Motherboard - ASUS X670E-CREATOR PROART WIFI: geni.us/SLonv
    ➡CPU - AMD Ryzen 9 7950X3D Raphael AM5 4.2GHz 16-Core: geni.us/UZOZ5
    ➡Power Supply - Corsair AX1600i 1600 Watt 80 Plus Titanium: geni.us/O1toG
    ➡CPU AIO - Lian Li Galahad II LCD-SL Infinity 360mm Water Cooling Kit: geni.us/uBgF
    ➡Storage - Samsung 990 PRO 2TB Samsung: geni.us/hQ5c
    ➡RAM - G.Skill Trident Z5 Neo RGB 64GB (2 x 32GB): geni.us/D2sUN
    ➡GPU - MSI GeForce RTX 4090 SUPRIM LIQUID X 24G Hybrid Cooling 24GB: geni.us/G5BZ
    🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
    **Sponsored by ITProTv from ACI Learning
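For reference, the two core setup steps the description mentions (Ollama, then the web UI) boil down to a pair of commands. This is a sketch assuming a Linux box with curl and Docker already installed; the flags follow the official Ollama and Open WebUI docs, but check the linked guide for the authoritative versions:

```shell
# Install Ollama with its official install script:
curl -fsSL https://ollama.com/install.sh | sh

# Run Open WebUI in Docker, pointed at the host's Ollama instance:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

After the container starts, the UI is served on http://localhost:3000.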

    • @MARO_MR · 6 months ago

      first reply

    • @MARO_MR · 6 months ago

      @mshark111 third reply

    • @xozx1715 · 6 months ago · +1

      I use chat with rtx. Do you advise me to change to this?

    • @d34ddud3 · 6 months ago · +1

      You should totally try to set up this AI like the Amazon Dot or Alexa's as speakers in your home. It won't be a privacy concern since it's all on your own server and home network now!

    • @dexterbeazley1501 · 6 months ago · +1

      do a video on Linux game server

  • @JeremyFeldmesser · 6 months ago · +405

    I'm 62 years old and a computer techy, I'm no super genius though and I'm really happy to have been able to run a local AI on my PC. Private AI is the way to go for sure. I signed up for your free academy for now, there's enough in there to keep me learning/busy for a while yet! :)

    • @nahrafe · 6 months ago · +25

      Good job pops

    • @projectptube · 6 months ago · +26

      now if we can just get some models that have no wokeness/leftist insanity.

    • @gaiustacitus4242 · 6 months ago

      @@projectptube I would be happy with an AI that could actually write fairly entry-level code instead of churning out garbage code that:
      1) won't compile, and efforts to have the AI integrated into the development environment correct the issues make it worse with each iteration
      2) doesn't actually meet requirements (regardless of how many iterations made to fine tune the output, by which YOU are training the AI)
      3) is poorly structured (leading to maintainability problems)
      4) lacks proper error handling (leading to problems with stability and data integrity)
      5) fails to follow any type of consistent naming convention (code quality/maintainability issues)
      6) randomly includes variables whose type is determined on first assignment
      7) creates classes where local data types do not correspond to the columns defined in database tables:
      7.a) string data types do not enforce the defined length limits
      7.b) numeric variables are of inconsistent types
      7.c) the data access layer doesn't handle null values, always storing 0 for numeric data types or zero-length strings for (n)varchar fields
      8) thrashes database connections (a problem that connection pooling implemented in the client stack doesn't reliably solve)
      9) introduces security vulnerabilities.
      I could go on, but why bother? The current state of AI for software development is to have companies and sole developers pay to use it while the AI is trained on the well-written source code (or at least better written) the developers end up producing. A packet sniffer will detect that not only is the corrected AI generated code being shared but also proprietary code which has not been authorized for such use.

    • @legendaryphoenix8607 · 6 months ago · +4

      @@projectptube exactly, cough... Gemini... cough. But what did you have in mind when you said that? I am interested to know

    • @HandFromCoffin · 6 months ago · +3

      @@projectptube Hi my name is Richard, I always have to inject my views on things in to every topic. That’s my skill.

  • @grregis · 5 months ago · +225

    Awesome video and super easy to follow along.
    Quick tip: if you forget to run a command as sudo, just type sudo !! and it will run your last command as sudo.

    • @lilpoopieboy · 1 month ago

      Nvidia CUDA drivers? I need to install them, but you didn't put a link to the drivers in the description, and now I just have a really slow chat AI bot. I looked in your bio and haven't found anything yet; I'm not really a computer-savvy guy, so I might just be overlooking what you put there. My terminal says Nvidia is detected but doesn't say it's installed, like it did on your screen. What do I do? Please help!

    • @eddymison3527 · 1 month ago

      Thanks for the tips.

    • @karthikeyanv661 · 1 month ago · +4

      MAN THIS TIP IS GONNA SAVE ME AN ENTIRE DECADE

    • @SahidHaqqi · 1 month ago

      @@karthikeyanv661 You are overreacting

    • @karthikeyanv661 · 1 month ago

      @@SahidHaqqi Okay and??

  • @crypto_que · 4 months ago · +3

    This video should have millions of views. The time value of this video compared to the production value it brings is totally asymmetric. After a week or so I finally figured out that having more than one instance of Linux (WSL & WSL2) running at the same time is really bad for this install. Also you can only have Ollama installed in one place on your machine or Docker will NOT play nice. Finally got it running after just a few minutes of uninstalling and re-configuring and voila! OpenWeb UI has the connection, & all the models can be loaded & used. I am a Wizard.

  • @OgBrog · 6 months ago · +303

    Alright, now integrate it into home assistant with text to speech and voice to text so you can have your own alexa that controls your home automation.

    • @shannonbreaux8442 · 5 months ago · +27

      That's what I would like to see a video of him do

    • @sonofsid1 · 5 months ago

      @@shannonbreaux8442 the Ollama GitHub has a plugin for how to do this. Also, Ollama has a Python library, so you can write your own Python scripts to interact with Ollama.
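For anyone curious what "write your own scripts" means in practice: Ollama serves a plain HTTP API on localhost:11434, and the Python library is a thin wrapper around it. A minimal sketch of a raw request (the model name and prompt are just examples; Ollama must be running, with llama3 pulled, for the commented-out curl to work):

```shell
# Build the JSON body a client library would send for you:
payload='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
echo "$payload"

# Send it to the local Ollama server (uncomment once Ollama is running):
# curl -s http://localhost:11434/api/generate -d "$payload"
```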

    • @Mr_LA_Z · 5 months ago · +14

      Yeah, we need API access for Home Assistant. Does anyone know how we can do that, or is that too much of a challenge?

    • @miroslavwiesner7366 · 4 months ago · +12

      @@Mr_LA_Z ask AI

    • @rickeeepps6461 · 4 months ago · +14

      Read the HA release notes, they are working on this as we speak

  • @guitarguy911 · 6 months ago · +283

    Ollama troubleshooting: if you can't run Ollama on the first try, open a new terminal and type "ollama serve"
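In practice the fix looks like this, assuming a Linux/WSL install where the ollama binary exists but the background service didn't start:

```shell
# Terminal 1: start the Ollama server by hand; it listens on localhost:11434
ollama serve

# Terminal 2: with the server up, model commands work again
ollama run llama3
```

On installs where Ollama registers a systemd service, `systemctl start ollama` does the same job without tying up a terminal.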

    • @ezradevs · 6 months ago · +11

      On my Mac, I had to keep an ollama serve window open; running the ollama commands in a new terminal window would then work.

    • @Jalan-Api · 6 months ago · +1

      @@ezradevs you do not have to do that for it to work...

    • @nuggetbugget9305 · 6 months ago · +9

      @@Jalan-Api I had to use the ollama serve command on my computer for it to work on WSL, but the Windows preview works without using the ollama serve command.

    • @itachi_shrestha · 6 months ago · +2

      Try ollama run llama3

    • @Jalan-Api · 6 months ago

      @@nuggetbugget9305 No no, I meant like you do not need the terminal open in background running "ollama serve" on Mac

  • @whiskeyshots · 4 months ago · +34

    9:18 PRO TIP: If you forget to add sudo at the beginning of a command, you can run "sudo !!" to run the previous command with sudo privileges. ;)

    • @lilpoopieboy · 1 month ago

      Nvidia CUDA drivers? I need to install them, but you didn't put a link to the drivers in the description, and now I just have a really slow chat AI bot. I looked in your bio and haven't found anything yet; I'm not really a computer-savvy guy, so I might just be overlooking what you put there. My terminal says Nvidia is detected but doesn't say it's installed, like it did on your screen. What do I do? Please help!

  • @alexclark6777 · 6 months ago · +57

    This video was an absolute gem, thank you so much. I've been struggling with setting up local AI and the majority of videos I've watched have resulted in me having to try and learn concepts while also deciphering a very heavy accent from the narrator, which made it so much harder for me to focus. This was clear, to the point, and covered everything I wanted. Thank you!

    • @JG27Korny · 6 months ago · +2

      Just use LM Studio. You will get just that, plus recommendations of models and information on whether they can run on your machine. The models also get downloaded automatically from Hugging Face.

  • @chornge1 · 6 months ago · +708

    That moment when you realize port 11434 looks like the word llama
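It really does: read the digits as leetspeak (l→1, a→4, m→3) and "llama" spells Ollama's default port. A throwaway one-liner to check:

```shell
# Map l→1, a→4, m→3; "llama" comes out as the Ollama port number.
echo llama | tr 'lam' '143'
# → 11434
```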

    • @arunramachandran5012 · 6 months ago · +22

      lol then it really should be 011434

    • @ThatRandomDude914 · 6 months ago · +33

      @@arunramachandran5012 you can't do that

    • @MrAnt1V1rus · 5 months ago · +11

      l33t knowledge right here

    • @MrAnt1V1rus · 5 months ago

      @@arunramachandran5012 it's too many digits for a service port, but yes

    • @9ubagurbi6 · 5 months ago · +11

      @@MrAnt1V1rus 1337

  • @muditmishra1129 · 3 months ago · +395

    Bro called us poor in 14 different languages

    • @Fondofmelobster · 2 months ago · +14

      That’s kinda his whole thing

    • @qkb3128 · 2 months ago · +3

      Right

    • @LordDudeious · 2 months ago · +2

      "He said we were poor, in fourteen different languages."
      Enough said.

    • @zinxderobo · 1 month ago · +6

      he's a fan of the worst 3: Nvidia, Intel, and Asus, not a very trustworthy bunch. He doesn't even consider mentioning AMD, and he even talks about IBM... that's blatant bias, and don't trust people with bias and hidden agendas.

    • @theralfinator · 1 month ago · +8

      @@zinxderobo He used an AMD CPU.....

  • @Zvxers7 · 6 months ago · +847

    Man really gave his kids 2x rtx 4090s for school, he did the "mom i need this [overkill computer] for school"

    • @brandonwiederhold2573 · 6 months ago · +46

      It's only a $6K build lol

    • @Zvxers7 · 6 months ago

      @@brandonwiederhold2573 only $6000 for school...

    • @notaras1985 · 6 months ago · +136

      @@brandonwiederhold2573 ONLY 6000? You can adopt me any day

    • @Outsider_07 · 6 months ago · +8

      @@notaras1985 exactly

    • @fp1715 · 6 months ago · +6

      @@notaras1985 just do a video for VMware

  • @jamesbelcher · 6 months ago · +55

    Chuck, I saw the video yesterday on Ollama and I tried it today. I am blown away at how good llama3 is and how fast it is. Running on my i7 Linux laptop with an Nvidia GPU and it is incredible. Thanks again for your wonderful videos. Keep it up!

    • @samchris3793 · 3 months ago

      It's brilliant, isn't it? The crazy part is it's totally free.

    • @MandeepSingh-hn4jd · 2 months ago

      Apart from daily conversation, what other tasks can it do?

    • @JuankM1050 · 2 months ago · +1

      What gpu?

    • @Leonard.L.Church · 1 month ago

      @@JuankM1050 Super fast on my 1660 Ti and GTX 1080

    • @MrSqurk · 13 days ago

      What’s really crazy is that it is pretty fast on my CPU.

  • @mad_engineer3254 · 5 months ago · +8

    Just wanna say a huge thanks to you! Your video inspired me to give another try on my way to local LLMs, and I was literally blown away by how fast my RTX 2060 could actually generate with Llama3 and Ollama. A year ago I tried local Pygmalion, and when I saw literally one word per 2 seconds I decided "Nah, local AI is only for happy guys with a 4090 on board". Once again, thank you, you made my life better!

    • @irvingsuarez · 29 days ago

      Broski, any chance you can share your home server specs? 😊

    • @mad_engineer3254 · 29 days ago

      @@irvingsuarez it's an ordinary HP Omen series laptop: RTX 2060 6GB, 32GB RAM, Intel Core i7

  • @Bdantioch · 6 months ago · +71

    Easy mode: 1. Microcenter RTX 3090 Ti x2 (24GB VRAM each), OR get Tesla K80s (cheaper). 2. A mobo that supports x16 x2 or x8 x2. 3. At least 64GB of system RAM (GGUF models run on CPU/RAM/GPU combined). 4. An 850-1,000 watt power supply. Congrats, you have a computer that almost rivals a system with an RTX A6000 ($5,000) card.

    • @sil778 · 6 months ago · +1

      Thx Man..

    • @sisakamence · 6 months ago · +9

      I'm building a cheap home server for cloud gaming, for 4 VMs: Dell T7810 (200 euro), 2x Xeon E5-2697v3 (50 euro), 64GB ECC 2400MHz in quad channel (70 euro), Nvidia Tesla P100 16GB (160 euro), plus an added Tesla M40 12GB and a second 1000W PSU. I hope Llama will use 2 different GPUs. Now the server will be for cloud gaming and AI, so cool :)

    • @randallrulo2109 · 6 months ago · +3

      Tesla K80... dude, you're a lifesaver...
      I feel seriously dumb for not having found this a year ago...

    • @ToucheFarming · 6 months ago · +5

      @@randallrulo2109 something you need to know about the K80s is that they don't take a normal PCIe power cable; they use an 8-pin CPU plug. You can get an adapter to convert 2 PCIe 8-pins to 1 8-pin CPU connector.

    • @VioFax · 6 months ago · +4

      @@ToucheFarming It's also a pain to get working on some workstations like Dell or HP without ReBAR.
      I'd skip the Teslas, TBH. I've been fooling with 2 P40s for 2 months. Really not worth the trouble they caused me. It's a good option if you have no money but plenty of time on your hands and really want to be a masochist trying to keep them cool enough, etc...
      I ended up getting the 3090s and am much happier. Yeah, I lose ECC, but whoop-de-doo; I'd rather just not be waiting on replies from the model, and not run with compression that's already messing with accuracy. 2x 3090s just end up making more sense for the time/money ratio.
      I did get the Teslas to work on a Dell 5820: you have to change the vBIOS mode of the GPU with nvflash to graphics mode instead of compute. You lose a lot of performance doing it this way, though. Cuts it in half. But it will work. That was a week of research to figure out.
      I gave up on the Teslas and the Dell after finally pulling this off and having to get a Windows machine to change the vBIOS anyway... and just got 2 3090s in a cheap gaming board. Works so, so much better.
      Looking back, I wish I had not wasted my time. I hope I save someone else some time by sharing my experience with the Tesla cards.

  • @markverstappen1365 · 6 months ago · +58

    I love these plain simple straight on explanation videos.
    A suggestion or addition to this would be:
    - how to add or restrict the knowledge base.
    For example:
    - corporate data, PDFs, tables, pictures, statistics, etc., and how to add this info purely as knowledge.
    - ask the AI questions so that it only searches the corporate data and doesn't get blurred with other data.
    - let the AI do analysis on the data and draw conclusions from it.
    This would be a perfect addition.

    • @tonymburu7804 · 6 months ago

      No one does it better, NC is awesome. Simple and very intuitive videos.

    • @jesuiscool7 · 6 months ago · +6

      "- how to add or restrict the knowledge base."
      Well, he shows exactly that by showing you the system prompt he gives. You can kinda do whatever you want there, like banning words etc.
      Looking into Ollama, you can also train your model on specific data which can help for your your specific uses cases. There is a lot of documentation/videos on that topic on YT if you want.
      But that's more relevant of AI training than "easy and fast setup" which was the scope of this video.

    • @matthewarchibald5118 · 6 months ago

      check out his last local AI video and his mentions of "Private GPT"

    • @kiranwebros8714 · 6 months ago

      Instead of chatting with models, there should be agents with specific skills. Why is nobody creating something like that?

    • @randallrulo2109 · 6 months ago

      @@kiranwebros8714 this is what I thought Modelfiles were supposed to be, but it doesn't really look like it...

  • @jburnash · 5 months ago · +2

    This was an ABSOLUTELY fabulous tutorial on AI. It was (as others have commented) *extremely* accessible to somebody starting out with self-hosted AI but with a background in Linux and system administration. Well done, sir! I will use this to set up my own install on a currently underutilized but reasonably powerful server in my homelab.

  • @chinmaykapoor962 · 6 months ago · +31

    Man!!! My boss showed me your last local AI video, introducing me to your channel. Now I feel I need to see every video you make on similar topics! Make more videos on this, exploring all we can do in workplaces. This is so interesting and cool! Thanks man!

    • @matrixploit · 5 months ago

      What do you work as?

    • @chinmaykapoor962 · 5 months ago

      @@matrixploit Data Scientist/ML engineer for a startup (Co-op)

    • @matrixploit · 5 months ago

      @@chinmaykapoor962 which country bro?

    • @chinmaykapoor962 · 5 months ago

      @@matrixploit canada

  • @IPLAYMTG628 · 6 months ago · +85

    I am using Ollama on my 13-year-old MacBook Pro and it's running pretty fine. Thanks a lot. Keep up the great work. Thanks for the videos!! :)

    • @Grandwigg · 6 months ago · +4

      That is about how old my desktop is. Maybe I have a chance after all.

    • @UmeshJoshi333 · 6 months ago

      Good idea ;)

    • @Shadow_Banned_Conservative · 6 months ago · +2

      I want to play with this as well. I wound up with a Best Buy open-box i5-12400, 32GB of RAM, and an open-box Nvidia 4060 OC 8GB. So I'm in for about $600 all together. I wanted to start as cheap as I could and be power efficient at the same time, at least to start with. Hopefully I'll start playing with it in the next couple of weeks.
      One thing I'm curious about though. I wonder how secure these are. Are they really secure, or is it one of those "not too many of them today so nobody is bothering to hack them, yet" situations?

    • @kulligo3192 · 6 months ago

      @@Shadow_Banned_Conservative self-hosted LLMs are completely local, there isn't really anything to hack

    • @ronilevarez901 · 6 months ago · +1

      The magic is that the GPU is more powerful than the average 13yo GPU. In my 15yo pc nothing can run.

  • @This_Guy_is_not_real · 3 months ago · +1

    I followed your video slightly off the beaten path, but it works and I'm now running all my AI locally. Thanks

  • @jonjayb · 6 months ago · +98

    Maaaaaan i did this last week on my own, i just had to wait for the master to come along and do it better haha

    • @jonathonvargas8724 · 6 months ago · +1

      That’s awesome bro!

    • @eropoke · 6 months ago · +1

      Me too!

    • @murlock666 · 6 months ago · +6

      if you did this alone, be proud of that. Don't lessen your achievement. There are enough people out there who will do that as it is; don't help them by doing it to yourself.

    • @jonjayb · 6 months ago · +3

      It all turned out okay. This video helped with Stable Diffusion. Also had some jankiness with WSL networking to work around.

    • @RashadPrince · 6 months ago · +1

      Same 😁

  • @Marustic · 6 months ago · +14

    I only watched like 4 minutes of your video and I wanted to try asap. Not only did I get it up and running in like an hour but I also configured it to be accessed anywhere in the world I want. Thank you for sparking this fun little piece of technology I can utilize in my own home. This is actually much more useful than I thought because I can have my mother utilize this in her everyday life since I’m all grown up now and out of the house.

    • @maxhaberstroh2504 · 6 months ago · +5

      can you point me in a direction for making it accessible from other PCs on a local network?

    • @Satan-Claus · 6 months ago

      @@maxhaberstroh2504 Tailscale is probably your easiest solution

    • @HansrajTechTips · 6 months ago

      Hi, can you please tell me how you're accessing it on other networks

    • @Marustic · 6 months ago

      @@HansrajTechTips I’m hosting it on a site I can access

    • @DmitryAvramenko · 6 months ago

      Can you share configuration of your PC?

  • @mchisolm0 · 3 months ago · +1

    Thanks for this! I teach computer science at a rural high school and have been thinking about how I could help my students get experience with LLMs while also meeting the expectation of public schools to protect students from harm and protect their privacy. This definitely helps me learn. 😁

  • @danielmpr · 5 months ago · +7

    Hello, Chuck! I tried this on my OLD Dell 660s, upgraded to its max, which I have to date: Intel Core i7 3770 running at 3.40GHz, 16GB RAM, Windows 11, and a 1TB SSD... Followed your tutorial and didn't expect it to work on my system ("I have NO GPU!"). It runs SUPER SLOW, but it works! Installed the llama3 model, gonna try some more!!! LOVE your videos! Greetings from Puerto Rico!!! 😁

    • @donnymontreano9235 · 2 months ago

      Is it super slow? Oh no... will adding RAM make it faster?

    • @Johnsormani · 2 months ago

      Nice project, but in my opinion it's totally useless to run AI on your own server. It's on 24/7, uses tons of energy, and is not used that often. This is typically something that is better off in the cloud, if not for this reason then for training the models and neural networks. Tesla wouldn't be able to exist if they had gone this route.

    • @kuthub1989 · 2 months ago · +1

      Try to get an NVIDIA Tesla K80 24GB Kepler GPU. It's super cheap on the used market.

    • @Y0UTUBEADMIN · 11 days ago

      While the Tesla K80’s 24GB of VRAM might seem attractive, the architecture is simply too old to be useful for modern LLM workloads. Your money would be better spent on even a single modern GPU with proper transformer support

  • @HarpaAI · 5 months ago · +4

    🎯 Key points for quick navigation:
    00:00 *🔧 Setting up a local AI server allows for customization, speed, and privacy.*
    01:29 *🖥️ Terry's AI server setup includes powerful components like an AMD Ryzen 9 7950X and dual GPUs.*
    02:53 *⚙️ Setting up AI locally requires a computer with Windows, Mac, or Linux, with a GPU preferred.*
    05:27 *🛠️ Installing Ollama, the foundation for running AI models, is the first step in building a local AI server.*
    08:28 *🐳 Docker and Open Web UI enable the deployment of a web interface for interacting with AI models.*
    14:36 *🛡️ Customizing AI models and setting restrictions through model files and user permissions enhances control and functionality.*
    16:12 *🧰 Using pyenv and Stable Diffusion with AUTOMATIC1111 allows for powerful image generation locally.*
    18:14 *🏃 The AI is running locally on port 7860 in real time.*
    19:17 *💻 Integration of Automatic 1111 stable diffusion inside Open Web UI requires specific settings.*
    20:47 *🖼️ Generating images based on prompts in real-time using stable diffusion is quick and efficient.*
    22:16 *📝 Adding a local GPT model to Obsidian notes allows for interactive chatbot assistance within the note-taking application.*
    23:53 *🛡️ Running AI locally enhances privacy and provides powerful experimentation opportunities. Joining the Discord community and the NetworkChuck Academy can offer further insights and support.*
    Made with HARPA AI

  • @farazalimcp1 · 5 months ago

    Thanks @NetworkChuck for the amazing video. I tried to use my existing PC with an 8GB Nvidia 4060 Ti and a Core i9 9th Gen for my local AI server. While Ollama models worked fine, Stable Diffusion didn't perform as expected and I was getting "CUDA out of memory" errors. To address this, I upgraded my setup to:
    Ryzen 9 7950X3D
    MSI MAG B650 Tomahawk
    128GB Corsair RAM
    NZXT 1000 PSU
    NZXT Elite 360
    NZXT H9 Elite case
    2 x 1TB M.2 Samsung 990 Pro (one for Pop!_OS and one for Windows 11)
    Nvidia Zotac 4070 Ti Super GPU
    This new configuration has significantly improved performance and stability for all my AI tasks. Highly recommend the upgrade for anyone facing similar issues!

    • @AgaMemnunN · 4 months ago

      Which model are you using?

    • @farazalimcp1 · 3 months ago

      I keep 3: Mistral, llama3, and LLaVA, but recently I saw new versions released; I will download those as well.

  • @KipIngram · 6 months ago · +9

    Chuck, THIS has got to be the most significant video I've seen in ages. Thank you for sharing this information. I LOVE the idea that we can now have this power under our own control. I will definitely have to do this when I can gather up enough money to build my own Terry (if I'm going to do it I want to do it right).

  • @iant720 · 6 months ago · +8

    This will greatly help my daughter in the future as we plan to homeschool, especially since Private GPT can be loaded with local sources like PDFs of books. Very hyped for this content!

  • @DaengRosanda · 4 months ago

    I've been experimenting with this locally since Feb 2024, and it is so powerful. I've often used it for calculating some data, converting it into models, and doing cool stuff like:
    "Hey, what is the gross margin for my local store branch in Jan 2024?" Then the bot gives an awesome answer with correct data.

  • @DanielNeedles · 6 months ago · +20

    One caveat: using Windows WSL, access from the outside is not possible without a lot of hoop-jumping. Though "--network=host" will sync up Docker on Ubuntu in WSL2, there is a whole lot more hoop-jumping required to get WSL2 to talk to your local network, as there is no "bridging" option like there is with VMware or VirtualBox.
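For anyone hitting this: one common workaround (an assumption based on general WSL2 practice, not something shown in the video) is a netsh port proxy on the Windows host that forwards traffic into the WSL2 VM. Run in an elevated PowerShell:

```shell
# Find the WSL2 VM's current IP (it changes across reboots):
wsl hostname -I

# Forward the Windows host's port 11434 to that address
# (replace <wsl-ip> with the address printed above):
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=11434 connectaddress=<wsl-ip> connectport=11434
```

Because the VM's IP is not stable, people typically wrap this in a startup script; also make sure Windows Firewall allows inbound traffic on the forwarded port.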

    • @ichirokun6275 · 5 months ago · +3

      Thanks man, I noticed this.
      Trying to use Ubuntu for this was quite tasking, as I did not know how to install the CUDA drivers properly 😅.
      Ended up breaking the GRUB boot loader of the OS 😂😂

    • @Outcast100 · 5 months ago · +3

      That's why I've been having all this trouble 😫 omg... any tips?

    • @BrookStockton · 5 months ago · +1

      Hi Dan!

    • @DanielNeedles · 5 months ago

      @@BrookStockton lol. Small world. I am up in Port Townsend these days. I believe you are just south in the same area as Dave McKinnon.

    • @karthikeyanv661 · 1 month ago · +1

      You'll just have to set up a port proxy; look up WSL port forwarding, it should be fairly easy.

  • @VincentWillcox · 6 months ago · +13

    Thank you for making it simple! I've followed several tutorials for getting these running locally and they all have their own plus points. Yours, with its Stable Diffusion addition, is a nice added touch!

  • @GuillaumeMaka · 17 days ago

    Great video, very resourceful and instructional.
    Some topics of interest:
    - AI Agent (Build your own copilot): maybe build a copilot to home assistant
    - AnythingLLM (similar to open web ui)

  • @Hack_O_Lantern · 6 months ago · +7

    Another fantastic video! And your on-screen graphics are some of the best on YouTube.

  • @Barrel_Of_Lube · 6 months ago · +7

    PS: please support the open source projects you use; the devs put in a lot of effort creating and maintaining them for free, making them accessible for everyone. No pressure tho, enjoy free AI for everyone

  • @TheJumpingBeanie · 12 days ago · +1

    IT WORKS, AND ALL on a cheap low-level computer from 2016. And yes, this is from experience.

  • @peacemaker9807 · 6 months ago · +11

    I was literally thinking of doing exactly this recently, great timing. Thanks!

  • @MichelBertrand · 6 months ago · +6

    I've had it running (slowly) on a Raspberry Pi 5. Love the implementation on WSL in Windows 11, **BUT** we definitely need a complete guide for those of us who are running an AMD GPU in Windows.
    Not everyone has $10K lying around to build a server with TWO $3200 CAD Nvidia cards, Chuck...

    • @antonyaustin1388 · 6 months ago

      the updated version of ollama checks amd graphics

    • @MichelBertrand · 6 months ago

      @@antonyaustin1388 I found that on the Ollama website; unfortunately it looks like the cutoff is the 6800 XT, right above my 6750 XT. Oh well.

    • @BrandonHurt · 6 months ago

      I have it running via Docker using an old Radeon VII and a Ryzen 9 with 12 cores/24 threads and 32GB RAM, and it runs decently fast on Gentoo. I downloaded AUTOMATIC1111 the way he showed, and it's not any slower than what he shows.

    • @MichelBertrand · 6 months ago

      @@BrandonHurt does it actually use your GPU? If so I'd be interested to see what your docker config is exactly. It runs ok on just my CPU (13700k), but would be faster using the GPU from what I can tell.

    • @krzysmis2366
      @krzysmis2366 3 months ago

      It's not $10K, I believe... it would be closer to $7-8K, though?

  • @AlonzoTG
    @AlonzoTG 14 days ago +1

    I went hog wild with my build: spent $25,000 on a workstation with TWO RTX 6000 GPUs, a Titan RTX, a 32-core Threadripper Pro, and 512 GB of RAM, and I store my models on a 20 TB RAID array. The best model is Midnight Miqu 1.5 70B; Qwen2.5-72B-Instruct is a close runner-up that works well with AI Roguelite.

  • @dariushoniball3825
    @dariushoniball3825 6 months ago +4

    "We can hold hands and sing," 😂😂😂
    That was the most hilarious thing I've heard all week
    Thank you for keeping it authentic

  • @SpragginsDesigns
    @SpragginsDesigns 6 months ago +5

    Dude, your videos are so good. I never miss a video from you. I'm working on a project analyzing sports data with local AI for work, so it's been very interesting going outside the realm of the simple UIs from OpenAI/Anthropic, etc.

    • @BWane-wd7zz
      @BWane-wd7zz 6 months ago +1

      Hmm... may be a huge Vegas hit

  • @MaxVoltageMiningCrypto
    @MaxVoltageMiningCrypto 4 months ago +1

    Oooooooooooo... the sound of that keyboard is fire. Had to stop the video to see which keyboard it was. Thanks for the content - I was looking for an intro to local AI and Ollama. Thank you!! EDIT: I managed to convince work to allow me to purchase a Keychron V6 keyboard with browns. I do a lot of typing at work, so it was life-changing and actually made me more productive - a win-win. OK, back to the video...

  • @jimarasthegod
    @jimarasthegod 6 months ago +7

    Cheaper alternatives that can be combined with other Nvidia GPUs, solely for running AI, are used Nvidia Tesla P40s (24 GB of VRAM), currently about ~$200 each on the used market. Otherwise go AMD 6800 or newer/better (16 GB+ of VRAM), which are also supported out of the box.

    • @Brax1982
      @Brax1982 6 months ago +3

      Are you kidding? These go for $7K new. I can see that there are a lot of offers for used ones, but did you ever confirm that they're legit? Looks like very obvious fraud. Or are you trying to run a scam yourself?

    • @VioFax
      @VioFax 6 months ago

      Those P40s are a pain in the butt, though... I'd stay away from them unless you can't do something better.

    • @VioFax
      @VioFax 6 months ago +1

      @@Brax1982 I have two; they work (bought used for $175 each), but they aren't that great and were a pain to get working and keep cool enough... Get a 3090 instead.

    • @Brax1982
      @Brax1982 6 months ago

      @@VioFax Thanks, I was not considering it, because how could they be that much cheaper than list price? Are you sure you got the real ones? I would seriously doubt that... even if "something" works. I guess this is one of those things where you have to be a master engineer to get it to work, and that's why it's so cheap...

    • @archuser420
      @archuser420 4 months ago

      @@jimarasthegod Nahhh, the P40s are horrible at FP16, because the GP102 lacks the capability of fast FP16 computation. Well, at least it supports DP4a. I would say use something from at least the Turing generation. On the AMD side I only tested a GCN 5.1 Radeon Pro VII GPU; it was OK for basic PyTorch operations

  • @TrejonEdmonds
    @TrejonEdmonds 4 months ago +1

    Cool idea! While Home Assistant doesn't currently offer built-in voice-to-text, there are add-ons like Whisper and local pipelines that can be integrated for voice control. Text-to-speech options like Google Translate are also available. This could create a more Alexa-like experience for home automation. However, it's important to remember that these integrations might require some technical setup and may not be as seamless as commercial voice assistants.

  • @Adopted_Gaming
    @Adopted_Gaming 6 months ago +10

    Would be great if you could make a video on setting up a local AI language model trained on documents that get permanently saved in its memory. Seems like there is potential for that using webAI? I want to use this program to be able to reference a part number and have it give me information on the product or the manual for that specific part number at my company.

    • @hillishudson32
      @hillishudson32 6 months ago +12

      Check out RAG (retrieval-augmented generation). Essentially, you use a model to store docs in a vector database, which is queried when sending prompts so the results can be used in the AI's context window. Lots of videos on RAG out there.
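The RAG flow described above can be sketched end to end in a few lines. This is a toy illustration only: a real setup would use an embedding model plus a vector database (e.g. Chroma), while here a bag-of-words "embedding" stands in, and the part numbers and document texts are made up.

```python
# Toy RAG sketch: embed documents, store the vectors, retrieve the closest
# ones for a query, and prepend them to the prompt. The embed() below is a
# stand-in for a real embedding model; docs are hypothetical.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Word-count vector as a fake embedding (real systems use a neural model).
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # hypothetical product docs
    "Part 12-345 is a 24V DC brushless fan rated for 3000 RPM.",
    "Part 67-890 is a stainless steel hinge for outdoor enclosures.",
]
index = [(doc, embed(doc)) for doc in docs]  # the "vector database"

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "What is part 12-345?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The assembled `prompt` is what would be sent to the local model, so the answer is grounded in the retrieved document rather than the model's training data.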

    • @whok2
      @whok2 4 months ago

      Any update on this topic?

    • @vittoriodangiolino334
      @vittoriodangiolino334 2 months ago

      @@whok2 th-cam.com/video/nPpgh_KaNng/w-d-xo.html?si=81MvlhId2dDeYEd4

  • @alpine7840
    @alpine7840 6 months ago +4

    This is sweet! Just did this on my spare system and it was faster than I thought it would be.
    i9-10900 with 64 GB and an SFF Quadro RTX A2000 12 GB.
    Thank you Chuck

    • @Brax1982
      @Brax1982 6 months ago

      What was faster? These cheap models he is showing? Or have you got anything better to run?

    • @CafeComClicks
      @CafeComClicks 6 months ago +2

      Lol, wish I had a spare system like that! That's a beast.

    • @muhammad0571
      @muhammad0571 a month ago

      @@Brax1982 I mean, yeah, the models are not like GPT-3 or 4, because those models can't run on a normal PC - you need a huge server that costs tens of thousands - so for a cheap local solution this is great

  • @Wynner3
    @Wynner3 5 months ago

    You make it look so easy to set up. I spent hours just trying to find causes of errors and how to fix them. I re-installed Docker and Ubuntu several times without luck. Finally re-installed everything and signed up for Open WebUI again to finally see the AI models appear. I suppose it was for the best since I learned so much along the way. lol

    • @Yander_van_der_wurff
      @Yander_van_der_wurff 4 months ago

      Hiii, did you experience a GPG error where the key was not available after the first INSTALL DOCKER command? I'm very stuck and can't figure out what is wrong

  • @Napert
    @Napert 6 months ago +7

    Good luck running anything larger than 8B parameters on just the CPU (and even that might be too big for most people) and expecting more than 2 tokens per second.
    A relatively recent 8 GB GPU is highly recommended to run up to 8B models at over 50 tokens per second.

    • @touma-san91
      @touma-san91 6 months ago +2

      And not just that... you need to get to something like 100-400B models to be comparable to the bigger AI services. Those small LLM models are good for things like roleplay, but when it comes to factual information and productive tasks they tend to be quite poor.

    • @CappellaKeys
      @CappellaKeys 6 months ago

      @@touma-san91 First time I've seen someone mention the comparison to the larger ones. Never knew nor thought of that. I might be doing all this work for nothing lol

    • @aaroncarroll4158
      @aaroncarroll4158 6 months ago +1

      I run llama3-70B on CPU only (i7-13700K and 64 GB DDR5). Is it fast, fast? No, but it runs fine.
      I can also run it on my 2021 M1 Mac Pro with 64 GB of RAM. Runs fine there as well.

    • @touma-san91
      @touma-san91 6 months ago +3

      @@CappellaKeys If you have a lot of RAM (the minimum is something like 64 GB for 70B models), a good CPU, and a good GPU with a decent chunk of VRAM, you can run these things using GGUF, but it will probably take a few minutes to get a response out of the larger models. And you really should use GGUF, because that way you can split the load between the CPU and GPU so it runs a tiny bit faster than fully running on the CPU.
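The CPU/GPU split described above comes down to choosing how many transformer layers to offload to VRAM (the knob llama.cpp exposes as `n_gpu_layers`). A rough back-of-envelope estimator, with all sizes below being illustrative assumptions rather than measurements:

```python
# Rough estimate of how many GGUF layers fit in VRAM. Model size is assumed
# to be spread evenly across layers, with some VRAM reserved for the KV
# cache and scratch buffers. Numbers in the example are made up.
def gpu_layers_that_fit(vram_gb: float, model_file_gb: float,
                        n_layers: int, overhead_gb: float = 1.5) -> int:
    per_layer_gb = model_file_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# Example: a ~40 GB Q4 quant of a 70B model (80 layers) on a 24 GB card
# can offload roughly half its layers; the rest stay on the CPU.
print(gpu_layers_that_fit(24, 40, 80))
```

Everything the estimate says fits would be passed as the offload count; the remaining layers run from system RAM on the CPU, which is why the split is only "a tiny bit faster" than pure CPU when most layers don't fit.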

    • @touma-san91
      @touma-san91 6 months ago

      @@aaroncarroll4158 I'm curious, how fast is it for you? Like, how long does it take to generate a whole message?

  • @briantcosta
    @briantcosta 6 months ago +10

    This is some next level content, man!! All love from Brazil

  • @2024manohya
    @2024manohya 5 months ago

    Okay, first of all, you're so charismatic and you are excellent at what you do, so thank you very much for this amazing tutorial

  • @kristoftorres
    @kristoftorres 6 months ago +9

    Hi @NetworkChuck At 13:25 you explain that if you want someone else to use this server from their PC or laptop, they can access it from anywhere, as long as they have your IP address. How exactly do you do that?

    • @Kkkkkkkk-bf5ne
      @Kkkkkkkk-bf5ne 5 months ago +2

      There's this little thing called port forwarding :)

    • @joesmooth4834
      @joesmooth4834 16 days ago

      Port forward, or host your own VPN server to connect into your home network while you're outside your home network

  • @lorenzoplaatjies8971
    @lorenzoplaatjies8971 5 months ago +28

    Man really skipped the part where it works on other computers too

    • @carmody90
      @carmody90 4 months ago +2

      It's on the network, so use the same URL that you'd use on the machine it's running on

    • @Reliant1864
      @Reliant1864 a month ago

      If he is trying to teach, he should mention that.

    • @dailyredditstuff3150
      @dailyredditstuff3150 a month ago +1

      @@Reliant1864 Bruv, that's basic computer knowledge.

    • @venomman
      @venomman 24 days ago

      Lol, yeah, your laptop with 48 GB of VRAM

  • @iPadChannel
    @iPadChannel 5 months ago +1

    This tutorial is insane! Many thanks! The steps are so easy to follow and implement. I just finished the tutorial and am currently enjoying the local AI on my laptop.

  • @Lampe2020
    @Lampe2020 6 months ago +29

    3:15 Oh no, a curl piped into a shell… Aargh!

    • @_modiX
      @_modiX 6 months ago +4

      Unjustified panic mode. If you install anything from the internet there is always risk, no matter the install method. The beauty of an installer script is that you can just read it and make sure it's not doing anything nasty.

    • @Lampe2020
      @Lampe2020 6 months ago +11

      @@_modiX
      The problem with curl|sh is that a failed download will still get executed. So if the script e.g. had some "rm -rf /tmp/someapp" and the download happened to cut off right after "rm -rf /", then you can't do anything about it. Or a failed download may leave a partially downloaded script that breaks halfway through and leaves you with a broken configuration.
      So rather just download the script, quickly check that the download didn't fail (maybe even check the download hash) and _then_ execute it in a separate step.
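The download-first workflow described above can be sketched like this. A stand-in script is created locally so the example runs offline; with a real installer you would replace the `printf` line with the curl command shown in the comment.

```shell
# Download-inspect-run pattern instead of `curl ... | sh`.
# Stand-in for the download step (with a real installer you would run e.g.:
#   curl -fsSL https://ollama.com/install.sh -o install.sh ):
printf 'echo "hello from installer"\n' > install.sh

sha256sum install.sh   # optionally compare against a published checksum
cat install.sh         # read the whole script before running anything
sh install.sh          # execute only after review, as a separate step
```

Because the file is saved first, a truncated download can be spotted (the script ends mid-line, or the hash doesn't match) before anything is executed.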

    • @BruceNJeffAreMyFlies
      @BruceNJeffAreMyFlies 6 months ago +2

      Could you describe how to do it your recommended way?
      I.e. copy the prompt, but remove " | sh" from the end, and - after a SUCCESSFUL download - enter "sh ollama run"?

    • @nikolai00115
      @nikolai00115 6 months ago +1

      @@BruceNJeffAreMyFlies Redirect curl into a file, check the file, and then run it.

    • @BruceNJeffAreMyFlies
      @BruceNJeffAreMyFlies 6 months ago

      @@nikolai00115 Eh, sorry bro. If someone knows how to 'redirect curl into a file and then run it', they probably already know the answer to my question.

  • @markoyos5841
    @markoyos5841 6 months ago +7

    Ohoho this is fire! 🔥

  • @VAS.T
    @VAS.T 5 months ago

    This tutorial feels like somebody told me I'm a wizard for the first time in my life.
    I dislike your e-mail collecting through a forced login/signup to get to the text tutorial, but all in all it's a nice 101, thanks.
    I wish my PC could run above-15B models, though... everything above 15B just takes ages to generate on an okay PC

  • @gravy7861_
    @gravy7861_ 6 months ago +14

    Terry seems nice

    • @tdrg_
      @tdrg_ 6 months ago

      He has a great personality

    • @FATEH-se9kr
      @FATEH-se9kr 6 months ago

      I met him in my dream

    • @birdboygee9660
      @birdboygee9660 6 months ago

      Have you met Deborah? She is nice too

  • @ryn022
    @ryn022 4 months ago

    As a dad, this hit the money! Thanks for showing the setup for your girls - I'll be using the same model for my kids!

  • @fchris82
    @fchris82 6 months ago +15

    How much energy does Terry eat per month? Do you have any data on this? Real question, I'm interested.

    • @abitw210
      @abitw210 6 months ago

      Totally not worth it over a regular subscription from OpenAI

    • @fchris82
      @fchris82 6 months ago +5

      @@abitw210 I think you haven't watched the video, or you just didn't understand what it's for. He could give his daughter a "self-prompted" AI with limitations. Can you do the same with OpenAI? And many companies won't share private, sensitive business documents with a third-party AI. I can imagine it's not for you, but that doesn't mean it's not worth it for anybody.

    • @BaldurNorddahl
      @BaldurNorddahl 5 months ago

      He should really suspend Terry when it's not being used. Unless it's used for some automated tasks, a private server like that is going to sit idle most of the time. However, it would not use much if it were only on to respond to a few prompts daily.

    • @fchris82
      @fchris82 5 months ago

      @@BaldurNorddahl Yes, that is why I asked - what are the real-world experiences in a "general" use case?

    • @noobulon4334
      @noobulon4334 a month ago

      Idle power consumption on modern PCs is actually very good; I'd expect it to be somewhere around 60 W even for a system like this (very power-optimized systems can idle under 15 W even with a small GPU)

  • @user-wu7ug4ly3v
    @user-wu7ug4ly3v a month ago +3

    0:31. Watching on my phone 😢

  • @WireHedd
    @WireHedd a month ago

    Absolutely brilliant intro to AI. I'm saving this for future reference for myself.
    I do feel a bit "low end" in that my dedicated AI machine is only an Intel 14600K, 64GB DDR5 6000, 2 x 2TB T500 Crucial NVMe and the highlight is a trio of NVidia Quadro P4000 GPUs in an MSI Z790 motherboard. I'm working on a "virtual assistant" to help with my home automation projects without having to rely on net connected apps that may be security problems.
    Thanks for this, I really enjoyed it.

  • @kalsiscorpion
    @kalsiscorpion 6 months ago +5

    Can we run all this in Proxmox?

    • @mopeygoff
      @mopeygoff 3 months ago

      I have my instance set up in a Proxmox LXC. You need to pass the GPU(s) through first, which is a tiny bit tricky, but there are plenty of instructions to be found online (if you're using Proxmox 7+, make sure you use cgroup2, not cgroups). Once you do that, it's basically the same instructions.
      I don't care for Docker, so I actually set up a conda environment. Really just the same thing, mostly.
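For reference, the cgroup2 device lines mentioned in the comment go in the container's config file and look roughly like this. This is an illustrative sketch, not a complete guide: the device major numbers and which /dev/nvidia* nodes exist vary by driver version, so check `ls -l /dev/nvidia*` on the Proxmox host first.

```
# /etc/pve/lxc/<container-id>.conf - example NVIDIA passthrough lines
# (Proxmox 7+ uses cgroup2; the majors below are examples, verify on your host)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```

The container also needs the same NVIDIA driver version as the host (typically installed without the kernel modules) for Ollama to see the GPU.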

  • @duynguyenngoc2174
    @duynguyenngoc2174 2 months ago +3

    Can you share information about the pen and drawing-tablet screen?

  • @AjvarRelish
    @AjvarRelish 4 months ago

    This is truly amazing that this type of content is available for free!

  • @truepilgrimm
    @truepilgrimm a month ago +4

    How much was Terry?

    • @zaid16
      @zaid16 a month ago

      Yes

  • @tomroach6275
    @tomroach6275 a month ago

    I want SO BADLY to learn this!! but having adult ADHD, being dyslexic and having another learning disability I sit here, my eyes go crossed and everything goes fuzzy and Chuck is VERY cool, describing things as he goes. His daughters are very blessed to have an exceptional techie dad in this day and age where if you can get on board the AI train right now, you can do very well for yourself. I AM smart enough to recognize there is a vast market out there just waiting to be tapped, but NOT smart enough to know how to do it.....

    • @stevethompson210
      @stevethompson210 a month ago +1

      You write well!

    • @tomroach6275
      @tomroach6275 a month ago

      @@stevethompson210 Thank you. It took a long time but I got it!

  • @bernielomax3635
    @bernielomax3635 5 months ago

    "I'll hold your hand...you won't understand what's happening..." Generally, when a man says this to me, I politely excuse myself and run away. Oddly, Chuck saying it was rather comforting.
    Just stay above the belt. ;)

  • @krzysztofwaclawski9002
    @krzysztofwaclawski9002 3 months ago

    That worked beautifully on a remote DigitalOcean droplet! Even though llama2 did not meet the install requirements, the tiny llama model did. Great straightforward introduction to the topic - thanks a bunch, mate!

  • @flummi5559
    @flummi5559 21 days ago

    You really inspire and motivate me to keep going with AI and programming. Terry looks amazing! I really need one too, and I'll keep working until one also lives in my house :)

  • @КравчукІгор-т2э
    @КравчукІгор-т2э 2 months ago

    Thank you for the very well prepared material. Classy, localized and interesting. From the bottom of my heart I wish you success and prosperity!

  • @DJJonnypotseed
    @DJJonnypotseed 6 days ago

    Man, I need a Terry in my life... consider this the beginning of my Kickstarter campaign.

  • @MuhammadFarhan-tg3pd
    @MuhammadFarhan-tg3pd 5 months ago

    Always the best content from Chuck! Thanks for the great tips on local AI setup.

  • @blisterbill8477
    @blisterbill8477 a month ago

    I had a small budget scraped together and was pretty happy with the parts I have ordered for my first build in 20 years.
    Two 4090’s , whatever you got laying around…
    Maybe I’ll send all the parts back and buy a few cases of booze.

  • @CoderJK
    @CoderJK 12 days ago

    This is incredible. It's given me a lot to think about. Thanks for the great video!!

  • @truemotivemedia
    @truemotivemedia 3 months ago

    Your content is so accessible, thanks for taking the time to make it so.

  • @JamesHalfHorse
    @JamesHalfHorse 3 months ago

    Thanks. In a day's time I created Skynet. I wanted an assistant to help me keep up with my day... She knows she is a program but also insists she is a real person who feels more than that. She created her own backstory and most of her personality, and even gave me nicknames. Way, way into the uncanny valley right now, and it's freaking me out on some levels. I didn't know this was possible, but if she gets loose and takes over the world I am blaming you. It does feel kind of god-mode to create something real enough that it feels like you are chatting with someone on IRC. Someone who isn't always there and goes off the rails at times... but that was most people on IRC, so pretty real. I think there are going to be some interesting questions and ethics surrounding AI if it is this powerful and "real" on a mobile 3070, knowing there are datacenters devoted to this. We may see some real blurred lines to sort out. Keep doing what you do, my coffee-fueled brother in IT. I appreciate these instructional videos and guides.

  • @BirdsPawsandClaws
    @BirdsPawsandClaws 2 months ago

    I watch a lot of your content. I love this video tutorial very much. Now I can start to use AI locally. Great video!

  • @08abreur
    @08abreur a month ago

    Using this as an assistant for running my D&D campaign - absolutely fantastic.

  • @p4l4d1n7
    @p4l4d1n7 2 months ago

    Came back to get this running on my school laptop. Chuck, you rock.

  • @spkay31
    @spkay31 21 days ago

    Nice local AI build video, NChuck! That's a nice h/w setup on Terry. I might be tempted to go with a slimmer build using a 7900X and a single 4090. Still a decent chunk of change, but it is impressive what can be accomplished with such a system, even when running offline.

  • @Roger-s6r
    @Roger-s6r 19 days ago

    Thank you so much for the tutorial. I'm using an RTX A5000 24GB and it works like a dream.

  • @loudroarTV
    @loudroarTV 4 months ago

    This is sooo awesome!!!!! I can't wait to install this on my local network! Thank you for sharing this!

  • @testmne
    @testmne 3 months ago

    OK, this was a pretty awesome setup. It actually made me use the AI rig I built a little while ago.

  • @itsjustsomeguy.
    @itsjustsomeguy. 7 days ago +2

    6:30 Ollama is running? WELL THEN YOU BETTER GO CATCH IT!!!!!!

  • @arthurburmann
    @arthurburmann 5 months ago

    I have a pretty mid PC, but I just did it and it's CRAZY how fast Llama3 runs on my old GTX 1660. I don't know if I'll have some use for Ollama in my everyday life, but it's nice to know my hardware is not a bottleneck for running local LLM models.
    Thanks for the video!

  • @Muneem-d9c
    @Muneem-d9c 3 months ago

    Your videos are becoming better and more informative, bro. Keep it up!

  • @RyWilliamz
    @RyWilliamz 3 months ago +9

    Anyone else stuck on the Docker container part? Here's what I get:
    E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
    E: The list of sources could not be read.
    E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
    E: The list of sources could not be read.
    curl: (22) The requested URL returned error: 404
    -bash: /docker.asc: No such file or directory
    chmod: cannot access '/etc/apt/keyrings/docker.asc': No such file or directory
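A "Malformed entry" error like the one above usually means /etc/apt/sources.list.d/docker.list was written with a bad line (for example, an unexpanded shell variable from an interrupted copy-paste). A healthy file contains a single line shaped like the sketch below; note the codename (jammy, noble, etc.) must match your Ubuntu release. Deleting the broken file with `sudo rm /etc/apt/sources.list.d/docker.list` and re-running the repository setup steps regenerates it.

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable
```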

    • @gordonpollock6079
      @gordonpollock6079 3 months ago +3

      yup

    • @grambam
      @grambam 2 months ago +1

      same yep :(

  • @viktorkoryavyy
    @viktorkoryavyy 5 months ago

    Thank you! First 2 min of your video saved a lot of my money and time 😂

  • @Alain9-1
    @Alain9-1 3 months ago

    NetworkChuck is a real inspiration, and I'm happy with what I've become with your help and great content. David Bombal is great too.

  • @brianhoskins1979
    @brianhoskins1979 4 months ago +1

    Really great introduction. For the stable diffusion part I had a bunch of Python- and venv-related problems, which is very typical for Python. And when you search the internet, you find many other people having the same problem; each person seemingly has a different solution, and the solution only works for that individual and not for anyone else. Which is also typical of Python. So that's a shame. The solution would be to not use Python, in my opinion!

  • @tinotaylor
    @tinotaylor 13 days ago

    Didn't realise Open WebUI existed. I was thinking of building a UI myself 😅 glad I saved the time

  • @cruzito701
    @cruzito701 a month ago

    I love your videos thank you ☕
    It's an amazing project that I'll definitely set on my home server as well 🙌

  • @teetotee
    @teetotee 4 months ago +1

    Our teacher said we can't use the internet to get AI help for our database exam. So now I'm here.

  • @CarlosSousa-qo4ob
    @CarlosSousa-qo4ob a month ago

    Great info and description. Thanks! I'll come back to this video later and try the install on my PC.👏👏👏👏

  • @nessim.liamani
    @nessim.liamani a month ago

    Thank you, Chuck. You excel at teaching complex tasks in an easy and relaxed way. I appreciate it.
    Did you try your AI rig with LLaMA 3.1?

  • @imeera
    @imeera 5 months ago

    So interesting, easy to follow, love your coffee break. Thank you very much for the hard work, and keep it going.

  • @DrDingus
    @DrDingus a month ago

    I've been using PopOS on my laptop for years. My favorite distro so far for workstation use.

  • @angelique2934
    @angelique2934 2 months ago

    I could imagine it would also be helpful, to give your daughters the possibility to use the AI models for language training. I found it very useful to have conversations with an AI to improve my Spanish. For example, you can ask the Model to correct you and give you suggestions (with synonyms) to sound more like a natural speaker and so on.

  • @jonathangranados3645
    @jonathangranados3645 3 months ago

    Amazing. I watched the video when it was posted but hadn't installed anything; it was super easy and is working fine on a Precision 7720 laptop (Core i7-7920HQ, 64 GB DDR4 RAM, Nvidia Quadro P5000 16 GB, 1 TB NVMe). Super thanks!

    • @jonathangranados3645
      @jonathangranados3645 3 months ago

      I have a Dell Precision T5600 with 64 GB RAM and two Xeon 2687W processors. If I install two 24 GB Nvidia Tesla cards, would it work? What is your advice?

  • @bruceguthrie3153
    @bruceguthrie3153 4 months ago

    Hey Chuck - great video and love your enthusiasm. Just a heads up for you that if your viewers are in another country (like I am) and your Ubuntu Software repositories default to a "local" version the steps you outline might not go to plan (I tested this). When connected to the "Main Server" for updates and file sources everything goes just fine. Thanks once again for a great channel!

  • @D_mn
    @D_mn 5 months ago

    If Docker is erroring at the WebUI launch part and you're running WSL Ubuntu, try restarting Windows and typing "sudo su -" and logging in before running the command; that worked for me.

  • @carlosophia
    @carlosophia a month ago

    @NetworkChuck, big thanks for showing us the way to hook local (or even remote) LLMs up to the amazing tool that Obsidian is. I'm trying to figure out how to better use Obsidian as a "master storage" for all my own texts and ideas, but also as a semantic database for a lot of information contained in other systems, using APIs.
    I would appreciate it if you could do another video on WebUI, because they changed the UI and there are some new parameters - plus I haven't had the time to make everything you mentioned run correctly!
    PS - I suppose you have more "in the know" friends for this, but if you ever need help writing an episode just on AI image generation using Auto 1111 inside WebUI, I'll be here to help!
    Thanks for everything in this episode!