AI Unleashed: Install and Use Local LLMs with Ollama - ChatGPT on Steroids! (FREE)
- Published Sep 7, 2024
- Get My Course Linux Mastery Express:
linuxtex.think...
Download Ollama:
ollama.com/
Phi-3 3.8B (try first):
ollama.com/lib...
Llama 3.1 8B:
ollama.com/lib...
Download MSTY (Ollama not needed):
msty.app/
Follamac Flatpak:
flathub.org/ap...
Follamac on Ubuntu:
github.com/pej...
In this video, I’ll show you how to install and use Large Language Models, or LLMs, on your local computer using Ollama.
LLMs like ChatGPT are all the hype today. In fact, they have become essential. And the way they work, it feels like magic.
ChatGPT is great, but running an AI chatbot locally on your own machine is even better, and it really ramps up what you can do.
There are many advantages to using local LLMs. The first is that they’re free: you don’t pay a monthly subscription like you do for ChatGPT. You also get unlimited usage. And all your chats are completely private: they are never sent to any server and stay on your computer. In fact, you don’t even need an internet connection to chat, so nobody can spy on what you say to your chatbot.
You can even run uncensored LLMs that answer literally anything you ask them and never refuse.
I’ve been playing with local LLMs for a long time now; it’s become a huge hobby of mine. I’ve tried so many models, and it’s so much fun. Having your own ChatGPT inside your own computer, one that you have full control over, is absolutely amazing.
And I use Ollama, which makes this whole thing remarkably simple.
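The basic flow described above can be sketched as a few terminal commands (the install script URL and model names are taken from the links in this description; the last line is the only one actually run here, and it just checks whether Ollama is installed):

```shell
# Sketch of the install-and-use flow, assuming a Linux machine.
# 1) Install Ollama with its official install script:
#    curl -fsSL https://ollama.com/install.sh | sh
# 2) Pull a small model first, e.g. Phi-3:
#    ollama pull phi3
# 3) Chat with it in the terminal:
#    ollama run phi3
# List the models you have downloaded, or note that Ollama is absent:
command -v ollama >/dev/null 2>&1 && ollama list || echo "ollama not installed"
```

Phi-3 is a good first pick because the download is small and it runs on modest hardware; larger models like Llama 3.1 8B can be pulled the same way.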
Yes we need much more such videos
Okay, that's something good, and appreciable content
Thanks Nehal. 👍
Do more local llm installs videos
Useful information 💯🔥🔥🔥🔥🔥
Thank you👍
Do make a video regarding ai generated images local models
Definitely bro👍
BASED!
How do I make API calls to these offline LLMs, for use in projects?
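For reference, Ollama serves a REST API on localhost port 11434 by default, so a project can talk to it over HTTP. A minimal sketch, assuming the server is running and a model called "phi3" has been pulled (the actual request is left commented out so the snippet is safe to read without a server):

```python
import json
import urllib.request

# Build a request for Ollama's /api/generate endpoint.
# "phi3" is an example model name -- use whatever you pulled with `ollama pull`.
payload = {
    "model": "phi3",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for one JSON object instead of a token stream
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment when the Ollama server is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same endpoint works from any language with an HTTP client, which is what makes local models usable inside projects.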
Do more videos like this❤ good apps
Will do. Thanks for the comment👍
Quality content dude..👍 got a sub👆👍
0:39 The Holy Trinity 😂😂🤣
Helpful video!
Thank you Syed👍
I already use ollama on my Galaxy A54 with Kali Nethunter
hardware requirement?
Nvidia H100
@@m4saurabh seriously? that's not for everyone T_T
For Phi-3 3.8B: 8 GB RAM, no GPU needed.
For Llama 3.1 8B: 16 GB RAM. Most consumer GPUs will suffice; an H100 is not necessary. 😉
@@LinuxTex thank you!
How can I have my local LLM work with my files?
Sir, you need to set up RAG (Retrieval-Augmented Generation) for that. In the MSTY app that I've linked in the description below, you can create knowledge bases easily by just dragging and dropping files. Then you can interact with them using your local LLMs.
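The core idea behind RAG can be sketched in a few lines: retrieve the chunks of your files most relevant to the question, then stuff them into the prompt sent to the local model. Real tools like MSTY use embeddings for retrieval; this toy version just scores chunks by word overlap, and the document strings are made-up examples:

```python
# Toy RAG sketch: keyword-overlap retrieval + prompt assembly.
def retrieve(chunks, question, top_k=2):
    """Return the top_k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(chunks, question):
    """Prepend the retrieved context to the question for the LLM."""
    context = "\n".join(retrieve(chunks, question))
    return f"Use this context to answer:\n{context}\n\nQuestion: {question}"

docs = [
    "Ollama runs large language models locally.",
    "The capital of France is Paris.",
    "Phi-3 is a small model that runs well on 8 GB of RAM.",
]
prompt = build_prompt(docs, "How much RAM does Phi-3 need?")
print(prompt)
```

A real setup replaces the overlap score with embedding similarity and feeds the assembled prompt to the model (e.g. via Ollama's local API), but the retrieve-then-prompt shape is the same.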
I have an old file/media server which I am planning on rebuilding as a future project. Would I be able to run this on that server and still access it with other computers on my network or would it just be available on the server itself?
That's a great idea, actually. You could make it accessible to other devices on your network, since Ollama supports that.
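For the server setup above: by default Ollama only listens on 127.0.0.1, so a common approach (per Ollama's FAQ) is to bind it to all interfaces via the OLLAMA_HOST environment variable. A minimal sketch; the IP shown is the standard "all interfaces" address, and only the export/echo lines actually run here:

```shell
# Make Ollama listen on all network interfaces instead of loopback only.
export OLLAMA_HOST=0.0.0.0:11434
echo "$OLLAMA_HOST"
# Then start the server on the file server:
#   ollama serve
# And from another computer on the LAN, e.g. list available models:
#   curl http://<server-ip>:11434/api/tags
```

If Ollama runs as a systemd service, the same variable goes in the service's environment instead of an interactive shell.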
@@LinuxTex Thanks!
😮👍
Which Linux is perfect for my Samsung NP300E5Z, 4 GB RAM, Intel Core i5-2450M processor? Plz reply
Go with MX Linux, Xfce desktop environment.
Wow