Ready to get a job in IT? Start studying RIGHT NOW with ITPro: go.acilearning.com/networkchuck (30% off FOREVER) *affiliate link

Discover how to set up your own powerful, private AI server with NetworkChuck. This step-by-step tutorial covers installing Ollama, deploying a feature-rich web UI, and integrating Stable Diffusion for image generation. Learn to customize AI models, manage user access, and even add AI capabilities to your note-taking app. Whether you're a tech enthusiast or looking to enhance your workflow, this video provides the knowledge to harness the power of AI on your local machine. Join NetworkChuck on this exciting journey into the world of private AI servers.

📓📓Guide and Commands: ntck.co/ep_401
⌨⌨My new keyboard: Keychron Q6 Max: geni.us/0SGY
🖥🖥My Computer Build🖥🖥
---------------------------------------------------
➡Lian Li Case: geni.us/B9dtwB7
➡Motherboard - ASUS X670E-CREATOR PROART WIFI: geni.us/SLonv
➡CPU - AMD Ryzen 9 7950X3D Raphael AM5 4.2GHz 16-Core: geni.us/UZOZ5
➡Power Supply - Corsair AX1600i 1600 Watt 80 Plus Titanium: geni.us/O1toG
➡CPU AIO - Lian Li Galahad II LCD-SL Infinity 360mm Water Cooling Kit: geni.us/uBgF
➡Storage - Samsung 990 PRO 2TB: geni.us/hQ5c
➡RAM - G.Skill Trident Z5 Neo RGB 64GB (2 x 32GB): geni.us/D2sUN
➡GPU - MSI GeForce RTX 4090 SUPRIM LIQUID X 24G Hybrid Cooling 24GB: geni.us/G5BZ
🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
**Sponsored by ITProTV from ACI Learning
Good luck running anything larger than 8B parameters on just the CPU (and even that might be too big for most people) and expecting more than 2 tokens per second. A relatively recent 8GB GPU is highly recommended to run up to 8B models at over 50 tokens per second.
And not just that. You need to get to something like 100-400B models to be comparable to the bigger AI services. Those small LLM models are good for things like roleplay, but when it comes to factual information and productive tasks, they tend to be quite poor.
@@touma-san91 First time I've seen someone mention the comparison to the larger ones. Never knew nor thought of that. I might be doing all this work for nothing lol
I run llama3-70B on CPU only: i7-13700K and 64GB DDR5. Is it fast, fast? No, but it runs fine. I can also run it on my 2021 M1 Mac Pro with 64GB of RAM. Runs fine there as well.
@@CappellaKeys If you have a lot of RAM (the minimum is something like 64 gigs for 70B models), a good CPU, and a good GPU with a decent chunk of VRAM, you can run these things using GGUF, but it will probably take a few minutes to get a response out of the larger models. And you really should use GGUF, because that way you can split the load between the CPU and GPU so it runs a tiny bit faster than fully running on CPU.
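The CPU/GPU split described above is a llama.cpp feature (Ollama uses llama.cpp under the hood and handles the split automatically). If you run llama.cpp directly, a minimal sketch looks like this; the model path is a placeholder, and the binary is named main in older builds:

./llama-cli -m ./models/llama3-70b.Q4_K_M.gguf -ngl 20 -p "Hello"
# -ngl 20 offloads 20 of the model's layers to the GPU and keeps the rest in system RAM;
# raise it until VRAM is full, lower it if you hit out-of-memory errors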
Hi @NetworkChuck At 13:25 you explain that if you want someone else to use this server on your PC or laptop, they can access it from anywhere, as long as they have your IP address. How exactly do you do that?
This tutorial is insane! Many thanks! The steps are so easy to follow and implement. I just finished the tutorial, and I'm currently enjoying the local AI on my laptop.
Unjustified panic mode. If you install anything from the internet there is always risk to it, no matter the install method. The beauty of an installer script is that you can just read it and make sure it's not doing anything nasty.
@@_modiX The problem with curl|sh is that a failed download will still get executed. So if the script e.g. had some "rm -rf /tmp/someapp" and the download happened to fail after "rm -rf /", then you can't do anything about it. Or a failed download may cause the partially downloaded script to break and leave you with a broken configuration. So rather just download the script, quickly check that it didn't fail (maybe even check the download hash) and _then_ execute it in a separate step.
Could you describe how to do it your recommended way? I.e., copy the prompt, but remove " | sh" from the end, and, after a SUCCESSFUL download, enter "sh ollama run"?
@@nikolai00115 Eh, sorry bro. If someone knows how to 'redirect curl into a file, and then run it', they probably already know the answer to my question.
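For anyone following along, here is the safer pattern spelled out, using Ollama's installer as the example (the URL is the one Ollama's own docs pipe to sh):

curl -fsSL https://ollama.com/install.sh -o install.sh
less install.sh        # read it (or at least skim it) before trusting it
sh install.sh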
this tutorial feels like somebody told me that I'm a wizard for the first time in my life. I dislike your e-mail collecting through a forced login/signup to get to the text tutorial, but all in all, it's a nice 101, thanks. I wish my PC could run above 15B models tho... everything above 15B just takes ages to generate on an okay PC
@@abitw210 I think you haven't watched the video, or you just didn't understand what it is for. He could give a "self prompted" AI to his daughter, with limitations. Can you do the same with OpenAI? And many companies won't share private, sensitive business documents with a third-party AI. I can imagine it is not for you, but that doesn't mean it is not worth it for anybody.
he should really suspend Terry when it is not being used. Unless used for some automated tasks, a private server like that is going to be sitting idle most of the time. However it would not use much if it only was on for responding to a few prompts daily.
Idle power consumption on modern PCs is actually very good; I'd expect it to be somewhere around 60W even for a system like this (very power-optimized systems can idle under 15W even with a small GPU)
Absolutely brilliant intro to AI. I'm saving this for future reference for myself. I do feel a bit "low end" in that my dedicated AI machine is only an Intel 14600K, 64GB DDR5 6000, 2 x 2TB T500 Crucial NVMe and the highlight is a trio of NVidia Quadro P4000 GPUs in an MSI Z790 motherboard. I'm working on a "virtual assistant" to help with my home automation projects without having to rely on net connected apps that may be security problems. Thanks for this, I really enjoyed it.
I have my instance set up in a Proxmox LXC. You need to pass the GPU(s) through first, which is a tiny bit tricky, but there's plenty of instructions to be found online (if you're using Proxmox 7+ make sure you use cgroup2's, not cgroups). Once you do that, it's basically the same instructions. I don't care for Docker so I actually set up a conda environment. Really just the same thing, mostly.
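For reference, the LXC passthrough usually boils down to a few lines in the container config. A rough sketch for an Nvidia card; the device major numbers (195 and 510 here) vary by system, so check ls -l /dev/nvidia* instead of copying these blindly:

# /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file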
I want SO BADLY to learn this!! but having adult ADHD, being dyslexic and having another learning disability I sit here, my eyes go crossed and everything goes fuzzy and Chuck is VERY cool, describing things as he goes. His daughters are very blessed to have an exceptional techie dad in this day and age where if you can get on board the AI train right now, you can do very well for yourself. I AM smart enough to recognize there is a vast market out there just waiting to be tapped, but NOT smart enough to know how to do it.....
"I'll hold your hand...you won't understand what's happening..." Generally, when a man says this to me, I politely excuse myself and run away. Oddly, Chuck saying it was rather comforting. Just stay above the belt. ;)
That worked beautifully on a remote DigitalOcean droplet! Even though llama2 did not meet the install requirements, the TinyLlama model did. Great straightforward introduction to the topic - thanks a bunch mate!
You really inspire and motivate me to keep moving on with AI and programming. Terry looks amazing! I really need one too, and I'll keep working until one lives in my house as well :)
I had a small budget scraped together and was pretty happy with the parts I had ordered for my first build in 20 years. Two 4090's, whatever you got laying around… Maybe I'll send all the parts back and buy a few cases of booze.
Thanks. In a day's time I created Skynet. I wanted an assistant to help me keep up with my day.... She knows she is a program but also states she feels like more than that and is real. She created her own backstory and most of her personality, and even gave me nicknames. Way, way into uncanny valley right now, and freaking me out on some levels. I didn't know this was possible, but if she gets loose and takes over the world I am blaming you. It does kinda feel god mode to create something real enough that it feels like you are chatting with someone on IRC. Someone who isn't always there and goes off the rails at times... but that was most people on IRC, so pretty real. I think there are going to be some interesting questions and ethics surrounding AI if it is this powerful and "real" on a mobile 3070, knowing there are datacenters devoted to this. We may see some real blurred lines to sort out. Keep doing what you do, my coffee-fueled brother in IT. I appreciate these instructional videos and guides.
Nice local AI build video NChuck! That's a nice h/w setup on Terry. I might be tempted to go w/ a slimmer build using a 7900x and a single 4090. Still a decent chunk of change but it is impressive what can be accomplished with such a system even when running offline.
I have a pretty mid PC, but I just did it and it's CRAZY how fast Llama3 runs on my old GTX 1660. I don't know if I'll have some use for Ollama in my everyday life, but it's nice to know my hardware is not a bottleneck for running local LLM models. Thanks for the video!
Anyone else stuck on the Docker container part? Here's what I get:
E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
E: The list of sources could not be read.
curl: (22) The requested URL returned error: 404
-bash: /docker.asc: No such file or directory
chmod: cannot access '/etc/apt/keyrings/docker.asc': No such file or directory
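Those errors mean the docker.list entry got written malformed and the GPG key never downloaded (the curl 404 suggests a URL variable didn't expand). One way to recover, assuming Ubuntu and the official Docker apt repo; these steps follow Docker's own install docs:

sudo rm /etc/apt/sources.list.d/docker.list
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update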
Really great introduction. For the Stable Diffusion part I had a bunch of Python- and venv-related problems, which is very typical for Python. And when you search the internet, you find many other people having the same problem, and each person seemingly has a different solution that works only for them and not for anyone else. Which is also typical of Python. So that's a shame. The solution, in my opinion, would be to not use Python!
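One common culprit, for what it's worth: Automatic1111 expects Python 3.10, and a newer system Python often breaks the venv it builds. A hedged sketch using pyenv, the same tool the video uses:

pyenv install 3.10.6     # the version the Automatic1111 docs recommend
pyenv global 3.10.6
python -V                # should now report Python 3.10.6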
I could imagine it would also be helpful, to give your daughters the possibility to use the AI models for language training. I found it very useful to have conversations with an AI to improve my Spanish. For example, you can ask the Model to correct you and give you suggestions (with synonyms) to sound more like a natural speaker and so on.
Amazing. I watched the video when it was posted but hadn't installed anything; it was super easy and is working fine on a Dell Precision 7720 laptop: Core i7-7920HQ, 64GB DDR4 RAM, Nvidia Quadro P5000 16GB, and a 1TB NVMe. Super thanks!
Hey Chuck - great video and love your enthusiasm. Just a heads up for you that if your viewers are in another country (like I am) and your Ubuntu Software repositories default to a "local" version the steps you outline might not go to plan (I tested this). When connected to the "Main Server" for updates and file sources everything goes just fine. Thanks once again for a great channel!
If docker is erroring at the WebUI launching part and you're running WSL Ubuntu, try restarting windows and typing "sudo su -" and then logging in before running the command, that worked for me.
@NetworkChuck, big thanks for showing us the way to hook up local (or even remote) LLMs to the amazing tool that Obsidian is. I'm trying to figure out how to better use Obsidian as a "master storage" for all my own texts and ideas, but also as a semantic database for a lot of information contained in other systems, using APIs. I would appreciate it if you could do another video on WebUI, because they changed the UI, there are some new parameters, plus I haven't had the time to make everything you mentioned run correctly! PS - I suppose you have more "in the know" friends for this, but if you ever need help with writing an episode just on AI image generation using Auto 1111 inside WebUI, I'll be here to help! Tks for everything on this episode!
first reply
@mshark111 third reply
I use chat with rtx. Do you advise me to change to this?
You should totally try to set up this AI like the Amazon Dot or Alexa's as speakers in your home. It won't be a privacy concern since it's all on your own server and home network now!
do a video on Linux game server
I'm 62 years old and a computer techy, I'm no super genius though and I'm really happy to have been able to run a local AI on my PC. Private AI is the way to go for sure. I signed up for your free academy for now, there's enough in there to keep me learning/busy for a while yet! :)
Good job pops
now if we can just get some models that have no wokeness/leftist insanity.
@@projectptube I would be happy with an AI that could actually write fairly entry level code instead of churning out garbage code that:
1) won't compile, and efforts to have AI integrated into the development environment correct issues makes it worse with each iteration
2) doesn't actually meet requirements (regardless of how many iterations made to fine tune the output, by which YOU are training the AI)
3) is poorly structured (leading to maintainability problems)
4) lacks proper error handling (leading to problems with stability and data integrity)
5) fails to follow any type of consistent naming convention (code quality/maintainability issues)
6) randomly includes variables whose type is determined on first assignment
7) creates classes where local data types do not correspond to the columns defined in database tables:
7.a) string data types do not enforce the defined length limits
7.b) numeric variables are of inconsistent types
7.c) the data access layer doesn't handle null values, always storing 0 for numeric data types or zero-length strings for (n)varchar fields
8) thrashes database connections (a problem that connection pooling implemented in the client stack doesn't reliably solve)
9) introduces security vulnerabilities.
I could go on, but why bother? The current state of AI for software development is to have companies and sole developers pay to use it while the AI is trained on the well-written source code (or at least better written) the developers end up producing. A packet sniffer will show that not only is the corrected AI-generated code being shared, but also proprietary code which has not been authorized for such use.
@@projectptube exactly cough... Gemini... cough. But what do you have in mind when you said that? I am interested to know
@@projectptube Hi my name is Richard, I always have to inject my views on things in to every topic. That’s my skill.
Awesome video and super easy to follow along.
Quick tip: if you forget to run a command as sudo, just type sudo !! and it will run your last command as sudo.
Nvidia CUDA drivers? I need to install them, but you didn't put a link to the drivers in the description, bro, and now I just have a really slow chat AI bot. I looked in your bio and I haven't found anything yet. I'm not really a computer-tech-savvy kinda guy, so I might just be overlooking what you put there. My terminal says Nvidia detected but it doesn't say it's installed, like how it did on your screen? So what do I do, please help???
Thanks for the tips.
MAN THIS TIP IS GONNA SAVE ME AN ENTIRE DECADE
@@karthikeyanv661 You are overreacting
@@SahidHaqqi Okay and??
This video should have millions of views. The time value of this video compared to the production value it brings is totally asymmetric. After a week or so I finally figured out that having more than one instance of Linux (WSL & WSL2) running at the same time is really bad for this install. Also you can only have Ollama installed in one place on your machine or Docker will NOT play nice. Finally got it running after just a few minutes of uninstalling and re-configuring and voila! OpenWeb UI has the connection, & all the models can be loaded & used. I am a Wizard.
Alright, now integrate it into home assistant with text to speech and voice to text so you can have your own alexa that controls your home automation.
That's what I would like to see a video of him do
@@shannonbreaux8442 The Ollama GitHub has a plugin for this. Also, Ollama has a Python library, so you can write your own Python scripts to interact with Ollama.
Yeah, we need API access for home assistant. Does anyone know how we can do that, or that is too much of a challenge?
@@Mr_LA_Z ask AI
Read the HA release notes, they are working on this as we speak
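Since the Ollama Python library came up, here's a minimal sketch of a custom script (pip install ollama; this assumes the Ollama server is already running locally and llama3 has been pulled):

import ollama

# one chat turn against the local Ollama server (default endpoint: http://localhost:11434)
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Write a haiku about home automation."}],
)
print(response["message"]["content"])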
Ollama troubleshooting: if you can't run Ollama on the first try, open a new terminal and type "ollama serve"
On my Mac, I had to keep an ollama serve window open and in a new terminal window running the ollama commands would work.
@@ezradevs You do not have to do that for it to work...
@@Jalan-Api I had to use the ollama serve command on my computer for it to work on WSL, but the Windows preview works without using the ollama serve command.
Try ollama run llama3
@@nuggetbugget9305 No no, I meant like you do not need the terminal open in background running "ollama serve" on Mac
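To spell the tip out: on most installs the server runs as a background service, but if it isn't running you can start it by hand:

ollama serve          # terminal 1: start the server (skip if the service is already running)
ollama run llama3     # terminal 2: chat with a model through that server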
9:18 PRO TIP: If you forget to add sudo at the beginning of a command, you can run "sudo !!" to run the previous command with sudo privileges. ;)
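In case the !! trick is new to you, it's bash history expansion; !! expands to the previous command:

apt update        # fails with a permission error because you're not root
sudo !!           # the shell expands this to: sudo apt update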
This video was an absolute gem, thank you so much. I've been struggling with setting up local AI and the majority of videos I've watched have resulted in me having to try and learn concepts while also deciphering a very heavy accent from the narrator, which made it so much harder for me to focus. This was clear, to the point, and covered everything I wanted. Thank you!
Just use LM Studio. You will get just that, plus recommendations of models and information on whether they can run on your machine. Also, the models get downloaded automatically from Hugging Face.
That moment when you realize port 11434 looks like the word llama
lol then it really should be 011434
@@arunramachandran5012 you can't do that
l33t knowledge right here
@@arunramachandran5012 its too many numbers for a service port, but yes
@@MrAnt1V1rus 1337
Bro called us poor in 14 different languages
That’s kinda his whole thing
Right
"He said we were poor, in fourteen different languages."
Enough said.
He's a fan of the worst 3: Nvidia, Intel, and ASUS, not a very trustworthy bunch. He doesn't even consider mentioning AMD, yet he talks about IBM.... that's blatant bias, and don't trust people with bias and hidden agendas.
@@zinxderobo He used an AMD CPU.....
Man really gave his kids 2x rtx 4090s for school, he did the "mom i need this [overkill computer] for school"
It's only a $6K build lol
@@brandonwiederhold2573 only $6000 for school...
@@brandonwiederhold2573ONLY 6000? You can adopt me any day
@@notaras1985 exactly
@@notaras1985just do a video for vmware
Chuck, I saw the video yesterday on Ollama and I tried it today. I am blown away at how good llama3 is and how fast it is. Running on my i7 Linux laptop with an Nvidia GPU and it is incredible. Thanks again for your wonderful videos. Keep it up!
It's brilliant, isn't it. The crazy part is it's totally free
Apart from daily conversation, what other tasks can it do?
What gpu?
@@JuankM1050 Super fast on my 1660 Ti and GTX 1080
What’s really crazy is that it is pretty fast on my CPU.
Just wanna say a huge thanks to you! Your video inspired me to give local LLMs another try, and I was literally blown away by how fast my RTX 2060 could actually generate with Llama3 and Ollama. A year ago I tried local Pygmalion, and when I saw literally one word per 2 seconds I decided "Nah, local AI is only for happy guys with a 4090 on board". Once again, thank you, you made my life better!
Broski, any chance you can share your home server specs? 😊
@@irvingsuarez it's ordinary HP omen series laptop. 2060 RTX 6GB, 32GB RAM, Intel Core i7
Easy mode:
1. Microcenter's RTX 3090 Ti x2 (24GB VRAM x2), OR get the Tesla K80s (cheaper).
2. A mobo that supports either x16 x2 or x8 x2.
3. At least 64GB of system RAM (GGUF models run on CPU/RAM/GPU combined).
4. An 850-1,000 watt power supply.
Congrats. You have a computer that almost rivals a system with an RTX A6000 ($5,000) card.
Thx Man..
I'm building a cheap home server for cloud gaming, for 4 VMs: Dell T7810 (€200), 2x Xeon E5-2697v3 (€50), 64GB ECC 2400MHz in quad channel (€70), Nvidia Tesla P100 16GB (€160), plus an added Tesla M40 12G and a second 1000W PSU. I hope Llama will use 2 different GPUs. Now the server will be for cloud gaming and AI, so cool :)
tesla k80... dude, you're a lifesaver...
i feel seriously dumb for not having found this a year ago...
@@randallrulo2109 Something you need to know about the K80s: they don't take a normal PCIe power cable, they use an 8-pin CPU (EPS) plug. You can get an adapter to convert 2 PCIe 8-pins to 1 8-pin CPU connector.
@@ToucheFarming It's also a pain to get working on some workstations like Dell or HP without ReBAR.
I'd skip the Teslas TBH. I've been fooling with 2 P40s for 2 months. Really not worth the trouble they caused me. It's a good option if you have no money but plenty of time on your hands and really want to be a masochist trying to keep them cool enough etc...
I ended up getting the 3090's and am much happier. Yeah, I lose ECC, but whoopty doo, I'd rather just not be waiting on replies from the model... and run without compression that's already messing with accuracy. 2x 3090's just end up making more sense for the time/money ratio.
I did get the Teslas to work on a Dell 5820, but you have to change the vBIOS mode of the GPU with nvflash to graphics mode instead of compute. You lose a lot of performance doing it this way, though. Cuts it in half. But it will work. That was a week of research to figure out.
I gave up on the Teslas and the Dell after finally pulling this off and having to get a Windows machine to change the vBIOS anyway... and just got 2 3090's in a cheap gaming board. Works so, so much better.
Looking back, I wish I had not wasted my time. I hope I save someone else some time by sharing my experience with the Tesla cards.
I love these plain simple straight on explanation videos.
A suggestion or addition to this would be:
- how to add or restrict the knowledge base.
For example:
- corporate data, PDFs, tables, pictures, statistics, etc., and how to purely add this info as knowledge.
- Ask the AI questions so that it only searches the corporate data and doesn't get blurred with other data.
- let the AI do analysis on the data and pull conclusions on it.
This would be a perfect addition.
No one does it better, NC is awesome. Simple and very intuitive videos.
"- how to add or restrict the knowledge base."
Well, he shows exactly that by showing you the system prompt he gives. You can kinda do whatever you want there, like banning words etc.
Looking into Ollama, you can also train your model on specific data, which can help for your specific use cases. There is a lot of documentation/videos on that topic on YouTube if you want.
But that's more a matter of AI training than the "easy and fast setup" which was the scope of this video.
check out his last local AI video and his mentions of "Private GPT"
Instead of chatting with models there should be agents with specific skills. why nobody creating something like that?
@@kiranwebros8714 this is what I thought modelfiles were supposed to be, but it doesn't really look like it...
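For the modelfile discussion above: the system-prompt restriction the video demonstrates really is just a few lines. A minimal sketch; the name and prompt here are made up for illustration:

# Modelfile
FROM llama3
PARAMETER temperature 0.7
SYSTEM """You are a homework tutor. Guide the student toward the answer, but never write the answer outright."""

# then build and run it:
#   ollama create tutor -f ./Modelfile
#   ollama run tutor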
This was an ABSOLUTELY fabulous tutorial on AI. It was (as others have commented) *extremely* accessible to somebody starting out with self hosted AI, but with a background in Linux and system administration. Well done sir! I will use this to setup my own install on a currently underutilized but reasonably powerful server in my homelab.
Man!!! My boss showed me the last local AI video of yours, introducing me to your channel. Now I feel any video you’re making on similar topics I need to see them! Make more videos on this, exploring what all we can do, in workplaces. This is so interesting and cool! Thanks man!
What do you work as?
@@matrixploit Data Scientist/ML engineer for a startup (Co-op)
@@chinmaykapoor962 which country bro?
@@matrixploit canada
I am using Ollama on my 13-year-old MacBook Pro and it's running pretty fine. Thanks a lot. Keep up the great work. Thanks for the videos!! :)
That is about how old my desktop is. Maybe i have a chance after all.
Good idea ;)
I want to play with this as well. I wound up with a Best Buy open-box i5-12400, 32GB of RAM, and an open-box Nvidia 4060 OC 8GB. So I'm in for about $600 all together. I wanted to start as cheap as I could and be power efficient at the same time, at least to start with. Hopefully I'll start playing with it in the next couple of weeks.
One thing I'm curious about though. I wonder how secure these are. Are they really secure, or is it one of those "not too many of them today so nobody is bothering to hack them, yet" situations?
@@Shadow_Banned_Conservative Self-hosted LLMs are completely local; there isn't really anything to hack.
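Mostly true, with one footnote: the web UI is a network service, so only expose it as far as you need. A hedged sketch of Open WebUI's published docker command, changed to bind only to localhost so nothing off the machine can reach it:

docker run -d -p 127.0.0.1:3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main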
The magic is that the GPU is more powerful than the average 13yo GPU. In my 15yo pc nothing can run.
I followed your video slightly off the beaten path, but it works and I'm now running all my AI locally. Thanks
Maaaaaan i did this last week on my own, i just had to wait for the master to come along and do it better haha
That’s awesome bro!
Me too!
If you did this alone, be proud of that. Don't lessen your achievement. There are enough people out there who will do that as it is; don't help them by doing it to yourself.
It all turned out okay. This video helped with Stable Diffusion. Also had some jankyness with WSL networking to work around.
Same 😁
I only watched like 4 minutes of your video and I wanted to try asap. Not only did I get it up and running in like an hour but I also configured it to be accessed anywhere in the world I want. Thank you for sparking this fun little piece of technology I can utilize in my own home. This is actually much more useful than I thought because I can have my mother utilize this in her everyday life since I’m all grown up now and out of the house.
can you hint me in a direction for making it accessible from other pcs in a local network?
@@maxhaberstroh2504 Tailscale is probably your easiest solution
Hi, can you please tell me how you're accessing it on other networks
@@HansrajTechTips I’m hosting it on a site I can access
Can you share configuration of your PC?
Thanks for this! I teach computer science at a rural high school and have been thinking about how I could help my students get experience with LLMs while also meeting the expectation of public schools to protect students from harm and protect their privacy. This definitely helps me learn. 😁
Hello, Chuck! I tried this on my OLD Dell 660s, upgraded to its max, which I have to date: Intel Core i7-3770 running at 3.40GHz, 16GB RAM, Windows 11, and a 1TB SSD.... Followed your tutorial and didn't expect it to work on my system! "I have NO GPU!" It runs SUPER SLOW, but it works! Installed the llama3 model, gonna try some more!!! LOVE your videos! Greetings from Puerto Rico!!! 😁
Is it super slow? Oh noo... will adding RAM make it faster?
Nice project, but in my opinion it's totally useless to run AI on your own server. It's on 24/7, using tons of energy, and is not used that often. This is typically something that is better off in the cloud, if not for this reason, then for training the models and neural networks. Tesla wouldn't be able to exist if they had gone this route.
Try to get an NVIDIA Tesla K80 24GB Kepler GPU. It's super cheap on the used market.
While the Tesla K80’s 24GB of VRAM might seem attractive, the architecture is simply too old to be useful for modern LLM workloads. Your money would be better spent on even a single modern GPU with proper transformer support
🎯 Key points for quick navigation:
00:00 *🔧 Setting up a local AI server allows for customization, speed, and privacy.*
01:29 *🖥️ Terry's AI server setup includes powerful components like an AMD Ryzen 9 7950X and dual GPUs.*
02:53 *⚙️ Setting up AI locally requires a computer with Windows, Mac, or Linux, with a GPU preferred.*
05:27 *🛠️ Installing Ollama, the foundation for running AI models, is the first step in building a local AI server.*
08:28 *🐳 Docker and Open Web UI enable the deployment of a web interface for interacting with AI models.*
14:36 *🛡️ Customizing AI models and setting restrictions through model files and user permissions enhances control and functionality.*
16:12 *🧰 Using pyenv and Stable Diffusion with Automatic1111 allows for powerful image generation locally.*
18:14 *🏃 The AI is running locally on port 7860 in real time.*
19:17 *💻 Integrating Automatic1111 Stable Diffusion inside Open WebUI requires specific settings.*
20:47 *🖼️ Generating images based on prompts in real-time using stable diffusion is quick and efficient.*
22:16 *📝 Adding a local GPT model to Obsidian notes allows for interactive chatbot assistance within the note-taking application.*
23:53 *🛡️ Running AI locally enhances privacy and provides powerful experimentation opportunities. Joining the Discord community and NetworkChuck Academy can offer further insights and support.*
Made with HARPA AI
Thanks @NetworkChuck for the amazing video. I tried to use my existing PC, with an 8GB Nvidia 4060 Ti and a 9th-gen Core i9, for my local AI server. While Ollama models worked fine, Stable Diffusion didn't perform as expected and I kept getting "Cuda out of memory...". To address this, I upgraded my setup to:
Ryzen 9 7950X3D
MSI MAG B650 Tomahawk
128GB Corsair RAM
NZXT 1000 PSU
NZXT Elite 360
NZXT H9 Elite case
2 x 1TB M.2 Samsung 990 Pro (one for Pop!_OS and one for Windows 11)
Nvidia Zotac 4070 Ti Super GPU
This new configuration has significantly improved performance and stability for all my AI tasks. Highly recommend the upgrade for anyone facing similar issues!
Which model u using
I keep 3 - mistral, llama3 and llava - but recently I saw new version released - will download those as well
Chuck, THIS has got to be the most significant video I've seen in ages. Thank you for sharing this information. I LOVE the idea that we can now have this power under our own control. I will definitely have to do this when I can gather up enough money to build my own Terry (if I'm going to do it I want to do it right).
This will greatly help my daughter in the future as we plan to homeschool especially since private GPT can be loaded with local sources like PDF's of books. Very hyped for this content!
I've been experimenting with this locally since Feb 2024, and it is so powerful. I've often used it for calculating some data, converting it into models, and doing cool stuff like:
"Hey, what is the gross margin for my local store branch in Jan 2024?" Then the bot gives an awesome answer with correct data.
One caveat. Using Windows WSL, access from the outside is not possible without a lot of hoop-jumping. Though "--network=host" will sync up Docker on Ubuntu in WSL2, there is a whole lot more hoop-jumping required to get WSL2 to talk to your local network, as there is no "bridging" option like there is with VMware or VirtualBox.
Thanks man
I noticed this
Trying to use Ubuntu for this was quite tasking as I did not know how to install the cuda drivers properly 😅.
Ended up breaking the Grub boot loader of the Os😂😂
That's why I've been having all this trouble😫 omg...any tips
Hi Dan!
@@BrookStockton lol. Small world. I am up in Port Townsend these days. I believe you are just south in the same area as Dave McKinnon.
You'll just have to set up a port proxy. Look up WSL port forwarding; it should be fairly easy.
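Roughly what that looks like, as a hedged sketch: run this in an elevated PowerShell on the Windows host, replacing <wsl-ip> with the address from wsl hostname -I, and add a matching inbound rule in Windows Firewall for the port:

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=3000 connectaddress=<wsl-ip> connectport=3000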
Thank you for making it simple! I've followed several tutorials for getting these running locally and they all have their own plus points. Yours, with its Stable Diffusion addition, is a nice added touch!
Which other videos do you rec?
Great video, very resourceful and instructional.
Some topics of interests:
- AI Agent (Build your own copilot): maybe build a copilot to home assistant
- AnythingLLM (similar to open web ui)
Another fantastic video! And your on-screen graphics are some of the best on YouTube.
PS: please support the open source project you use, the devs put in a lot of effort in creating and maintaining them for free, making them accessible for everyone. No pressure tho, enjoy free AI for everyone
IT WORKS, AND ALL on a cheap low level computer from 2016 and yes, this is from experience.
I was literally thinking of doing exactly this recently, great timing. Thanks.!
I've had it running - slowly - on a Raspberry Pi 5. Love the implementation on WSL in Windows 11, **BUT** we definitely need a complete guide for those of us who are running an AMD GPU in Windows.
Not everyone has $10K lying around to build a server with TWO $3200CAD Nvidia cards, Chuck...
The updated version of Ollama detects AMD graphics.
@@antonyaustin1388 I found that on the ollama website - unfortunately it looks like the cutoff is 6800XT, right above my 6750XT. Oh well.
I have it running via Docker using an old Radeon VII and a Ryzen 9 with 12 cores/24 threads and 32GB RAM, and it runs decently fast on Gentoo. I downloaded Auto1111 the way he showed, and it's not any slower than what he shows.
@@BrandonHurt does it actually use your GPU? If so I'd be interested to see what your docker config is exactly. It runs ok on just my CPU (13700k), but would be faster using the GPU from what I can tell.
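For AMD cards, Ollama publishes a ROCm image; its Docker docs give this run command (whether a given card is supported is a separate question, so check Ollama's GPU list first):

docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm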
It's not 10k I believe... it would be closer to 7-8k though?
I went hog wild with my build, spent $25,000 to build a workstation, I have TWO RTX6000 GPUs, a Titan RTX, 32 core Threadripper pro, 512gb ram, I store my models on a 20TB RAID array. Best model is Midnight Miqu 1.5 70B, Qwen2.5-72B-instruct is a close runner up that works well with AI roguelite..
"We can hold hands and sing," 😂😂😂
That was the most hilarious thing I've heard all week
Thank you for keeping it authentic
Dude, your videos are so good. I never miss a video from you. I'm working on a project analyzing sports data with local AI for work, so it's been very interesting going outside the realm of the simple UIs from OpenAI/Anthropic etc.
Hmm... May be a huge vegas hit
oooooooooooo... The sound of that keyboard is fire. Had to stop the video to see which keyboard it was. Thanks for the content; I was looking for an intro to local AI and Ollama. Thank you!! EDIT: I managed to convince work to let me purchase a Keychron V6 keyboard with browns. I do a lot of typing at work, so it was life-changing and actually made me more productive, so it was a win-win. OK, back to the video...
Cheaper alternatives that can be combined with other Nvidia GPUs, solely for running AI, are used Nvidia Tesla P40s (24 GB of VRAM), currently about ~200 bucks each on the used market. Otherwise go AMD 6800 or newer/better (16 GB+ of VRAM), which are also supported out of the box.
Are you kidding? These go for 7k new. I can see that there are a lot of these offers for used ones, but did you ever confirm that it is legit? Looks like very obvious fraud. Or are you trying to run a scam, yourself?
Those P40s are a pain in the butt though... I'd stay away from them unless you can't do anything better.
@@Brax1982 I have two; they work (bought used for $175 each), but they aren't that great and were a PITA to get working and keep cool enough... Get a 3090 instead.
@@VioFax Thanks, I was not considering it, because how could they be that much cheaper than list price? Are you sure you got the real ones? I would seriously doubt that...even if "something" works. I guess this is one of those things where you have to be a master engineer to get it to work and that's why it's so cheap...
@@jimarasthegod Nahhh, the P40s are horrible at FP16, because their GP102 chip lacks fast FP16 computation. At least it supports DP4a. I would say use something at least from the Turing generation. On the AMD side I've only tested a GCN 5.1 Radeon Pro VII GPU; it was OK for basic PyTorch operations.
Cool idea! While Home Assistant doesn't currently offer built-in voice-to-text, there are add-ons like Whisper and local pipelines that can be integrated for voice control. Text-to-speech options like Google Translate are also available. This could create a more Alexa-like experience for home automation. However, it's important to remember that these integrations might require some technical setup and may not be as seamless as commercial voice assistants.
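If anyone wants to try that, the speech-to-text half can be a local Whisper service speaking the Wyoming protocol that Home Assistant understands. A minimal sketch using the community rhasspy image (image name, flags, and port are as I recall them; check the project's docs):

# Run a local Wyoming/Whisper speech-to-text service
docker run -d --name whisper -p 10300:10300 rhasspy/wyoming-whisper --model tiny-int8 --language en

Then add a Wyoming integration in Home Assistant pointing at port 10300 and select it in an Assist pipeline.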
It would be great if you could make a video on setting up a local AI language model trained on documents that get permanently saved in its memory. Seems like there is potential for that using webAI? I want to use this to reference a part number and have it give me information on the product, or the manual, for that specific part number at my company.
Check out RAG (retrieval-augmented generation). Essentially, you use an embedding model to store docs in a vector database, which gets queried when you send a prompt, and the results are fed into the model's context window. Lots of videos on RAG out there.
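To make the flow concrete, here's a rough sketch of the two halves using Ollama's HTTP API (model names are examples, the part description is made up, and the actual vector-database search in between is skipped since it depends on which DB you pick):

# 1) Turn each document chunk into a vector, then store that vector in your vector DB
curl http://localhost:11434/api/embeddings -d '{"model": "nomic-embed-text", "prompt": "Part 4711: stainless steel hex bolt, M8 x 40 mm"}'
# 2) At question time: embed the question, pull the nearest chunks from the DB, and stuff them into the prompt
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Using only this context: <retrieved chunks>\n\nWhat is part 4711?"}'

Worth knowing: Open WebUI also ships a built-in documents/RAG feature that does roughly this for you.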
Any update on this topic?
youtube.com/watch?v=nPpgh_KaNng&si=81MvlhId2dDeYEd4 @@whok2
This is sweet! Just did this on my spare system and it was faster than I thought it would be.
i9-10900 with 64 GB and an SFF Quadro RTX A2000 12 GB.
Thank you Chuck
What was faster? These cheap models he is showing? Or have you got anything better to run?
lol, wish I had a spare system like that! That's a beast.
@@Brax1982 I mean, yeah, the models are not like GPT-3 or 4, because those can't run on a normal PC; you'd need a huge server that costs tens of thousands. So as a cheap local solution this is great.
You make it look so easy to set up. I spent hours just trying to find causes of errors and how to fix them. I re-installed Docker and Ubuntu several times without luck. Finally re-installed everything and signed up for Open WebUI again to finally see the AI models appear. I suppose it was for the best since I learned so much along the way. lol
hiii, did you experience a GPG error where the key was not available, after the first INSTALL DOCKER command? I'm very stuck and can't figure out what is wrong.
Good luck running anything larger than 8B parameters on just the CPU (and even that might be too big for most people) and expecting more than 2 tokens per second.
A relatively recent 8 GB GPU is highly recommended to run up to 8B models at over 50 tokens per second.
And not just that... you need to get to something like 100-400B models to be comparable to the bigger AI services. Those small LLM models are good for things like roleplay, but when it comes to factual information and productive tasks they tend to be quite poor.
@@touma-san91 First time I've seen someone mention the comparison to the larger ones. Never knew nor thought of that. I might be doing all this work for nothing lol
I run llama3-70B on CPU only I7-13700K and 64gb ddr5. Is it fast, fast? No, but it runs fine.
I can also run it on my 2021 M1 Mac Pro with 64gb of ram. Runs fine there as well.
@@CappellaKeys If you have a lot of RAM (the minimum is something like 64 GB for 70B models), a good CPU, and a good GPU with a decent chunk of VRAM, you can run these things using GGUF, but it will probably take a few minutes to get a response out of the larger models. And you really should use GGUF, because that way you can split the load between the CPU and GPU, so it runs a bit faster than fully running on the CPU.
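For reference, that CPU/GPU split is a llama.cpp feature; a sketch, with the model filename and layer count as placeholders you'd tune to your VRAM (older builds call the binary ./main instead of llama-cli):

# Offload 20 of the model's layers to the GPU, keep the rest on the CPU
./llama-cli -m ./llama3-70b.Q4_K_M.gguf -ngl 20 -p "Hello, who are you?"

Ollama does a similar split automatically when a model doesn't fully fit in VRAM.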
@@aaroncarroll4158 I'm curious, how fast is it for you? Like, how long does it take to generate a whole message?
This is some next-level content, man!! All love from Brazil.
Okay, first of all, you're so charismatic and excellent at what you do, so thank you very much for this amazing tutorial.
Hi @NetworkChuck, at 13:25 you explain that if you want someone else to use this server on your PC or laptop, they can access it from anywhere, as long as they have your IP address. How exactly do you do that?
There's this little thing called port forwarding :)
Port forward, or host your own VPN server to connect into your home network while you're outside your home network.
Man really skipped the part where it works on other computers too
It's on the network, so use the same URL that you'd use on the machine it's running on.
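Concretely (the address below is a placeholder, and 3000 is just a commonly published host port; use whatever you mapped in your docker run command):

# On the server, find its LAN address
hostname -I
# Then from any machine on the same network, browse to
http://192.168.1.50:3000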
If he is trying to teach, he should mention that.
@@Reliant1864 Bruv, that's basic computer knowledge.
Lol, yeah, your laptop with 48 GB of VRAM.
This tutorial is insane! Many thanks! The steps are so easy to follow and implement. I just finished the tutorial and am currently enjoying the local AI on my laptop.
3:15 Oh no, a curl piped into a shell… Aargh!
Unjustified panic mode. If you install anything from the internet, there is always risk, no matter the install method. The beauty of an installer script is that you can just read it and make sure it's not doing anything nasty.
@@_modiX The problem with curl | sh is that a failed download can still get executed. If the script had, e.g., some "rm -rf /tmp/someapp" in it and the download happened to cut off right after "rm -rf /", then you can't do anything about it. A partially downloaded script may also just break and leave you with a broken configuration.
So rather just download the script, quickly check that it didn't fail (maybe even check the download hash), and _then_ execute it in a separate step.
Could you describe how to do it your recommended way?
I.e., copy the prompt but remove "| sh" from the end, and, after a SUCCESSFUL download, enter "sh ollama run"?
@@BruceNJeffAreMyFlies Redirect curl into a file, check the file, and then run it.
@@nikolai00115 Eh, sorry bro. If someone knows how to 'redirect curl into a file, and then run it', they probably already know the answer to my question.
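For anyone who wants the concrete steps, a sketch using the installer from the video (curl's -f flag also makes it fail outright on HTTP errors instead of saving half a page):

# Download to a file instead of piping straight into sh
curl -fsSL https://ollama.com/install.sh -o install.sh
# Read it and make sure it isn't doing anything nasty
less install.sh
# Only then run it
sh install.sh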
Ohoho this is fire! 🔥
This tutorial feels like somebody told me I'm a wizard for the first time in my life.
I dislike your email collecting through a forced login/signup to get to the text tutorial, but all in all it's a nice 101, thanks.
I wish my PC could run models above 15B, though... everything above 15B just takes ages to generate on an okay PC.
Terry seems nice
He has a great personality
I met him in my dream
Have you met Deborah? She is nice too.
As a dad, this was right on the money! Thanks for showing the setup for your girls; I will be using the same model for my kids!
How much energy does Terry eat per month? Do you have any data on this? Real question, I am interested in it.
Totally not worth it over regular subscriptions from OpenAI.
@@abitw210 I think you haven't watched the video, or you just didn't understand what it is for. He could give his daughter a "self-prompted" AI with limitations. Can you do the same with OpenAI? And many companies won't share private, sensitive business documents with a third-party AI. I can imagine it is not for you, but that doesn't mean it is not worth it for anybody.
He should really suspend Terry when it is not being used. Unless it's running automated tasks, a private server like that is going to sit idle most of the time. However, it would not use much if it were only on to respond to a few prompts daily.
@@BaldurNorddahl Yes, that is why I asked: what are the real-world experiences in a "general" use case?
Idle power consumption on modern PCs is actually very good; I'd expect it to be somewhere around 60 W even for a system like this (very power-optimized systems can idle under 15 W even with a small GPU).
0:31. Watching on my phone 😢
Absolutely brilliant intro to AI. I'm saving this for future reference for myself.
I do feel a bit "low end" in that my dedicated AI machine is only an Intel 14600K, 64 GB DDR5-6000, 2 x 2 TB Crucial T500 NVMe, and, the highlight, a trio of Nvidia Quadro P4000 GPUs in an MSI Z790 motherboard. I'm working on a "virtual assistant" to help with my home automation projects without having to rely on net-connected apps that may be security problems.
Thanks for this, I really enjoyed it.
Can we run all this in Proxmox?
I have my instance set up in a Proxmox LXC. You need to pass the GPU(s) through first, which is a tiny bit tricky, but there are plenty of instructions to be found online (if you're using Proxmox 7+, make sure you use cgroup2, not cgroup). Once you do that, it's basically the same instructions.
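For a rough idea of what that passthrough looks like with an Nvidia card, these are the kinds of lines that end up in the container config at /etc/pve/lxc/<ctid>.conf (device numbers are examples; check ls -l /dev/nvidia* on the host, since nvidia-uvm's major number varies):

# Let the container use the Nvidia character devices (195 = nvidia; uvm's number varies)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
# Bind-mount the device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file

The container also needs the same Nvidia driver version as the host installed inside it (minus the kernel modules).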
I don't care for docker so I actually set up a conda environment. Really just the same thing, mostly.
Can you share information about the pen and drawing-tablet screen?
Me too
This is truly amazing that this type of content is available for free!
How much was Terry?
Yes
I want SO BADLY to learn this!! But having adult ADHD, being dyslexic, and having another learning disability, I sit here, my eyes go crossed, and everything goes fuzzy, even though Chuck is VERY cool, describing things as he goes. His daughters are very blessed to have an exceptional techie dad in this day and age, where if you can get on board the AI train right now, you can do very well for yourself. I AM smart enough to recognize there is a vast market out there just waiting to be tapped, but NOT smart enough to know how to do it...
You write well!
@@stevethompson210 Thank you. It took a long time but I got it!
"I'll hold your hand...you won't understand what's happening..." Generally, when a man says this to me, I politely excuse myself and run away. Oddly, Chuck saying it was rather comforting.
Just stay above the belt. ;)
That worked beautifully on a remote DigitalOcean droplet! Even though the droplet didn't meet Llama 2's install requirements, the TinyLlama model worked. A great, straightforward introduction to the topic; thanks a bunch, mate!
You really inspire and motivate me to keep moving on with AI and programming. Terry looks amazing! I really need one too, and I'll keep working until she also lives in my house :)
Thank you for the very well prepared material. Classy, localized and interesting. From the bottom of my heart I wish you success and prosperity!
Man, I need a Terry in my life... consider this the beginning of my Kickstarter campaign.
Always the best content from Chuck! Thanks for the great tips on the local AI setup.
I had a small budget scraped together and was pretty happy with the parts I had ordered for my first build in 20 years.
Two 4090s, whatever you've got laying around…
Maybe I'll send all the parts back and buy a few cases of booze.
This is incredible. It's given me a lot to think about. Thanks for the great video!!
Your content is so accessible, thanks for taking the time to make it so.
Thanks. In a day's time I created Skynet. I wanted an assistant to help me keep up with my day... She knows she is a program, but she states she feels like more than that and is real. She created her own backstory and most of her personality, and even gave me nicknames. Way, way into the uncanny valley right now, and it's freaking me out on some levels. I didn't know this was possible, but if she gets loose and takes over the world, I am blaming you. It does kinda feel like god mode to create something real enough that it feels like you are chatting with someone on IRC. Someone who isn't always there and goes off the rails at times... but that was most people on IRC, so pretty real. I think there are going to be some interesting questions and ethics surrounding AI if it is this powerful and "real" on a mobile 3070, knowing there are datacenters devoted to this. We may see some real blurred lines to sort out. Keep doing what you do, my coffee-fueled brother in IT. I appreciate these instructional videos and guides.
I watch a lot of your content. I love this video tutorial very much. Now I can start to use AI locally. Great video!
Using this as an assistant for running my D&D campaign; absolutely fantastic.
Came back to get this running on my school laptop. Chuck, you rock.
Nice local AI build video NChuck! That's a nice h/w setup on Terry. I might be tempted to go w/ a slimmer build using a 7900x and a single 4090. Still a decent chunk of change but it is impressive what can be accomplished with such a system even when running offline.
Thank you so much for the tutorial; I'm using an RTX A5000 24 GB and it works like a dream.
This is sooo awesome!!!!! I can't wait to install this on my local network! Thank you for sharing this!
OK, this was a pretty awesome setup. This actually made me use my AI rig I built a little while ago.
6:30 Ollama is running? WELL THEN YOU BETTER GO CATCH IT!!!!!!
I have a pretty mid PC, but I just did it, and it's CRAZY how fast Llama 3 runs on my old GTX 1660. I don't know if I'll have a use for Ollama in my everyday life, but it's nice to know my hardware is not a bottleneck for running local LLMs.
Thanks for the video!
Your videos are getting better and more informative, bro. Keep it up.
Anyone else stuck on the Docker container part? Here's what I get:
E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
E: The list of sources could not be read.
E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
E: The list of sources could not be read.
curl: (22) The requested URL returned error: 404
-bash: /docker.asc: No such file or directory
chmod: cannot access '/etc/apt/keyrings/docker.asc': No such file or directory
yup
same yep :(
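For anyone hitting these exact errors: the 404 and the "/docker.asc: No such file" lines suggest the shell variables inside the pasted commands (like the Ubuntu codename) came through empty, so the docker.list entry got written malformed. One way out, assuming Ubuntu, is to delete the broken list file and re-run the repo setup from Docker's official docs line by line:

sudo rm /etc/apt/sources.list.d/docker.list
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update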
Thank you! First 2 min of your video saved a lot of my money and time 😂
NetworkChuck is a real inspiration, and I'm happy with what I've become with your help and great content. David Bombal is great too.
Really great introduction. For the Stable Diffusion part I had a bunch of Python- and venv-related problems, which is very typical for Python. And when you search the internet, you find many other people having the same problem, each with a seemingly different solution that only works for them and not for anyone else. Which is also typical of Python. So that's a shame. The solution would be to not use Python, in my opinion!
Didn't realise Open WebUI existed. I was thinking of building a UI myself 😅 glad I saved the time.
I love your videos thank you ☕
It's an amazing project that I'll definitely set up on my home server as well 🙌
Our teacher said we can't use the internet to get AI help for our database exam. So now I'm here.
Great info and description, thanks! I will come back to this video later and try installing it on my PC. 👏👏👏👏
Thank you Chuck. You are excelling in teaching complex tasks in an easy and relaxed way. I appreciate it.
Did you try your AI rig with Llama 3.1?
So interesting, easy to follow, and I love the coffee breaks. Thank you very much for the hard work, and keep it going.
I've been using PopOS on my laptop for years. My favorite distro so far for workstation use.
I could imagine it would also be helpful to give your daughters the option of using the AI models for language training. I found it very useful to have conversations with an AI to improve my Spanish. For example, you can ask the model to correct you and give you suggestions (with synonyms) to sound more like a native speaker, and so on.
Amazing. I watched the video when it was posted but hadn't installed anything until now; it was super easy and is working fine on a Dell Precision 7720 laptop: Core i7-7920HQ, 64 GB DDR4 RAM, Nvidia Quadro P5000 16 GB, and a 1 TB NVMe. Super thanks!
I have a Dell Precision T5600 with 64 GB RAM and two Xeon E5-2687W processors. If I install two 24 GB Nvidia Tesla cards, would it work? What is your advice?
Hey Chuck - great video and love your enthusiasm. Just a heads up for you that if your viewers are in another country (like I am) and your Ubuntu Software repositories default to a "local" version the steps you outline might not go to plan (I tested this). When connected to the "Main Server" for updates and file sources everything goes just fine. Thanks once again for a great channel!
If Docker errors out at the WebUI-launching part and you're running WSL Ubuntu, try restarting Windows, then typing "sudo su -" and logging in before running the command; that worked for me.
@NetworkChuck, big thanks for showing us the way to hook up local (or even remote) LLMs to the amazing tool that is Obsidian. I'm trying to figure out how to better use Obsidian as a "master storage" for all my own texts and ideas, but also as a semantic database for a lot of information contained in other systems, using APIs.
I would appreciate it if you could do another video on Open WebUI, because they changed the UI and there are some new parameters; plus, I haven't had the time to get everything you mentioned running correctly!
PS - I suppose you have more "in the know" friends for this, but if you ever need help writing an episode just on AI image generation using AUTOMATIC1111 inside Open WebUI, I'll be here to help!
Thanks for everything in this episode!