I did some experiments with different Ollama AI models, measuring hard drive/SSD usage in gigabytes, memory (RAM) usage, and time in seconds to get an answer, using three LLaMA model sizes (in billions of parameters).
CONCLUSION: I suggest sticking with the smallest model (7B) for users who have an average laptop.
If you have more than 16 GB of RAM, you can use the 33B model, but it takes some time (a while on my old CPU) to get an answer.
Here is the raw data from my experiments:
[ { "Model": "LLaMA-7B", "Time_Calculation_seconds": "5-15", "RAM_GB": "10-12", "Disk_Size_GB": 13 }, { "Model": "LLaMA-33B", "Time_Calculation_seconds": "15-30", "RAM_GB": "30-40", "Disk_Size_GB": 65 }, { "Model": "LLaMA-70B", "Time_Calculation_seconds": ">30", "RAM_GB": "70+", "Disk_Size_GB": 130 } ]
Sorry for using JSON to show you my measurements.
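If anyone wants to reproduce rough timings like these on their own machine, here is a minimal sketch that times a single prompt against a locally running Ollama server over its HTTP API. It assumes Ollama's default local port (11434) and that you've already pulled a model; the model tag below is just a placeholder for whatever you actually downloaded.

```python
import time
import requests

MODEL = "llama2:7b"  # placeholder tag; substitute any model you've pulled with Ollama
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API endpoint

def time_prompt(prompt: str) -> float:
    """Send one prompt to the local Ollama server and return seconds until a full answer."""
    start = time.perf_counter()
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=600,  # big models on a CPU can take a long time
    )
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    print(resp.json()["response"][:200])  # print the start of the answer as a sanity check
    return elapsed

if __name__ == "__main__":
    seconds = time_prompt("Explain what a local LLM is in two sentences.")
    print(f"{MODEL}: {seconds:.1f} s")
```

Run it a few times with different prompts and average the numbers; a single measurement can swing a lot depending on prompt length and what else is using the CPU.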
Great video! Kinda glad this was recommended by Google...wonder how Google knew I was interested.
google shills its tools.
I’ve done this with locally based AI image processing software like comfyui, can’t wait to now try chat as well. I really appreciate the background and tutorial.
I'm glad you found it helpful!
Best content for Nov 1, 2024. You get my vote. Great job!
thank you for watching :)
I've been waiting for your take on AI and I was not disappointed. Great work as usual. Currently using Msty locally.
This right here can replace the internet
Omg I forgot about this! Glad you made another video for it!
Comprehensive overview, thanks. A few months back I saw a YouTube video by Data Slayer running one of these models on an 8GB Raspberry Pi 5 with impressive results.
I honestly can't appreciate this enough!
This is HUGE. Thank you!
YES! This is literally on my to-do list for next week, and now I have a great source of info to start from. You're the best Naomi :D
1:03 "a tutorial made by The Hated One" is such a hard line out of context
Cool Geeky video.
Thanks Naomi.
Glad you liked it! 🤓
Love it! Using this I won't need to agree to terms that allow someone to own MY data. It is mine.
This is helpful. Thank you. Subscribed ! : )
Thanks for subscribing!
I’d love to see a tutorial on running your own StableDiffusion as well
very well made tutorial! :)
I love this channel. Clear and easy instructions, even for people that aren't amazing with computers. I now have a private Ai!
That's wonderful!!
YouTube should be sued for all the lies in the ads.
What ads? Inform yourself... but googling for solutions will most probably not work... use a different search engine.
The YouTube shill didn't "luv" your comment, you should feel special.
Thank you Naomi.
So helpful, thanks
Great video. I must admit I gave up on local LLMs about 6 months ago, due to it being *so slow* to run them, even on my beefy desktop machine. I don't have an NVidia GPU, though. I was hoping to see more info on using GPUs in this video, but it's probably a bit advanced/out of scope. I've been meaning to go back and try again and this video has further encouraged me to do so. Thanks!
Very nice! Have a nice weekend!
You should talk about the several AIs worth considering for LLM art and more, especially those using the Pinokio installer engine for PC. Installing on a secondary drive, like D:, is recommended.
For Android, the 'Local AI Offline Chatbot' app is available as a fully offline option, with no internet required. Although this AI is limited to a G rating, it offers a solid offline experience for those seeking a private, all-ages model running locally. No offensive material allowed.
Naomi makes light blue look extra elegant.
If you have an Apple Silicon Mac, LM Studio is another option that also has a CLI tool (lms). It supports MLX (optimized for Macs) and GGUF model formats.
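For anyone wondering what talking to LM Studio from a script looks like: its local server exposes an OpenAI-compatible API, by default on port 1234. Here is a minimal sketch assuming you've started that server and loaded a model in the app; the model name below is just a placeholder.

```python
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local server

payload = {
    "model": "local-model",  # placeholder; LM Studio answers with whichever model you've loaded
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why do local LLMs help with privacy?"},
    ],
    "temperature": 0.7,
}

resp = requests.post(LMSTUDIO_URL, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])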
You didn't get a "luv", must not be on the approved list.
great video!
Great video. Please do another about local LLMs on Android. I don't know if that's possible at this time.
There are people doing it, but it's kind of complex.
You can install a Linux terminal with Termux, and from there you can install Ollama. But a few things don't make it easy: I tried with Debian and the Ollama directory was not added to the PATH, and I needed a second session because the Ollama server has to run separately. But it can be done.
Wow, you smashed it!
Excellent video. I should have seen it sooner, my bad.
Is it possible to download the AI model and install it in Ollama or Open WebUI?
I want to know if it's possible to get it when your home network is already cut off from the internet.
You should do another Ron Paul interview...been a while.
I was hoping to hear more about hardware requirements. What kind of GPU do we need for 40B models?
It can run on a CPU too, it's just slower. RAM depends on the model: small ones are fine with e.g. 2-5 GB, and really big ones need 40 GB+.
A 40B model at Q4_0 is going to be around 20 GB, so a 3090 or 4090 is needed, my friend!
Quantization is shrinking all the models quite nicely. It's kind of the only way to go with light consumer-scale models.
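As a rough back-of-the-envelope check on those numbers, here is a sketch of the arithmetic, assuming roughly 4.5 bits per weight for Q4_0-style quantization (4-bit weights plus scaling overhead). Actual usage will be higher because of the KV cache and runtime overhead.

```python
def approx_weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory estimate in GB; ignores KV cache and runtime overhead."""
    total_bits = params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 1e9  # bits -> bytes -> GB

for params in (7, 33, 40, 70):
    fp16 = approx_weights_gb(params, 16.0)  # unquantized half precision
    q4 = approx_weights_gb(params, 4.5)     # ~Q4_0: 4-bit weights + scale overhead
    print(f"{params:>3}B   fp16 ~ {fp16:5.1f} GB   Q4_0 ~ {q4:5.1f} GB")
```

That gives roughly 22 GB for a 40B model at Q4_0, which lines up with the ~20 GB figure above and explains why a 24 GB card like a 3090/4090 is the usual suggestion.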
How do you protect your computer from viruses when you download things from the command prompt?
Depends on your threat model, but a good start is sticking with the big-name sources that have tens of thousands of downloads.
Naomi Brockwell Thanks for posting this video
Thanks for watching!
2:35 the most statistically probable responses ;)
Thanks for the video. I missed any mention of the hardware I need for the Docker/LXC containers, or the backbone machine where I install the libraries for all this stuff. Also, in this context, the special NPUs (rated in TOPS) that are needed for these operations.
I have just found you and love you. I am pretty ignorant of this and am a bit overwhelmed. I started with what I thought was the first step and opened Firefox. Here is my question: in Firefox, I get a message to search using Google. Am I doing something wrong in the Firefox setup? I have disabled all tracking, cookies, and permissions in my Google account. Thank you. I promise I will get better. You just have a lot of info. Thank you very much.
Fascinating stuff, and I can see why a local model is good for privacy. However, if I've registered with an AI system (say ChatGPT) using a masked email address, I always use a VPN, and I never put personally identifiable information in chats, what is my privacy risk?
1) ChatGPT requires an active cell number (not VoIP).
2) Everything you do through your account is tied into a single profile and kept forever. I wouldn't bet against the ability for that information to eventually be tied to you.
3) If you don't reveal sensitive information in chat, I think using ChatGPT is fine. It's just about understanding what information is collected, and making smart usage decisions accordingly.
Hi NBTV, curious: how would users know that no telemetry-like connection happens in the background when the device goes online while using these offline options? Couldn't prompts, inputs, and end-user info still be collected that way?
I thought the same thing. I've set up multiple instances of different AI chat and image-generation tools on separate machines in a sandboxed VLAN that was blocked from internet access, with firewall logs monitored at the gateway. So far none of them have tried to call home or off-network for anything prompt/query related. For queries that needed very new information that wasn't available when the model was built, it didn't call out; it just reported that it didn't have the info to answer. Additionally, I didn't see it covered in this video, but there are uncensored models available that you can use; other public models/engines tend to censor very useful info. A quick example: in a SHTF moment, how to make useful compounds, for example KNO3CS.
I run Little Snitch, so I see every connection being requested.
"Timeless Naomi"
❤
Can we train the models and add parameters to them (forking our own model?)
Thanks Naomi, but is there a way to jailbreak AI or ChatGPT for privacy, or could I ask the AI itself to encrypt our chat?
You have to run uncensored models locally.
@@PracticalPcGuide Thanks for the answer
My concern is that I use LLMs to solve certain issues, and the arguably "not as smart" smaller local models, compared to the ones hosted by those 3rd parties online, will give worse solutions, won't they?
How do you update Open Web UI? Thank you
GREAT question, going to do a video about it!
@@NaomiBrockwellTV Thank you
Msty works well for Windows users.
What do you think about GPT4All?
Is the Grass DePIN project safe?
Naomi, be aware that these Ollama LLMs are not really open source. Meta, like others (Google recently corrected this), is abusing the Open Source Definition standard (OSI); these LLMs are trained on private data, and none of the sources are explained or given to the users.
Always a pleasure. (Forgive my lack of originality!) 😊
7:50 Ahem, what is that strange website you showed here? I have never ever seen it before.
Llama by Meta is not open source. It's wordplay to make you think that, but if you look at the legal terms, it isn't actually open source.
Honestly, seeing The Hated One using Nord or The Pirate Bay undermined his credibility.
I'd like to make an honorable mention of GPT4All: a turnkey UI with CPU-based models for more compatibility.
The only problem is that all these models use a lot of resources (some even need a GPU)... I think that until we can run a decent AI model locally without requiring a modern CPU and lots of RAM, this won't be feasible... I already tried, and it was too slow, or the "lower" models were too bad... :( Let's see in the future :)
Please review the InviZible Pro and Rethink DNS Android apps.
The piratebay was the shyt in its day.
Can I query my local files with this?
PLEASE, DO A VIDEO on how to keep my house and property safe.
I love you Naomi!
My question to anyone who wants to answer: even though I've set up this Llama, will my internet provider still be able to see my data? It is confusing to me as I am 1000 years old, help please.
If you are self hosting a model, you don't need internet access to use it. It's all stored locally on your machine.
I don't really like Docker; it's still not user-friendly enough. It's not like a regular application that you just install; you still need the command line.
Bonjour gorgeous 😊
I read a study about the crackling voice; it's only happening in the USA so far, as I remember 😊
Inb4 rugpull
The only problem I see with your tutorial is that, like most tutorials that tell people to use Docker, it has no instructions on how to install Docker.
Oh sure, I could do a web search like I have many times before, but they all tell me to run a command in my prompt that my computer says does not exist.
My biggest frustration with techies is that they make generalized assumptions that are only true if the given computer happens to have the latest, greatest, most advanced everything, including a monitor set to the highest resolution possible, which I cannot see and therefore must set mine to 1280 x 768.
Other than that, good video.
People don't care about their data until they're caught up in a crime.
I still think you're really pretty. I like LLMs, but yeah, I feel really concerned about them. I'm careful what I put on there, though: only academic, consumer-level stuff. A local LLM is out of reach of my hardware, because I can't afford to run it well enough. I only like Claude AI and GPT so far. I dislike Gemini and Meta's Llama, whatever. And "open source" is subjective when Meta does it; read the fine print.
It would have been nice if you had linked the actual LLM video; I couldn't find a single video anywhere in his playlist (The Hated One) about anything AI, much less LLMs. Also, 99% of his content looks like it's more likely to get you put on a government list first. You also didn't even mention TechXplainator, which seems to be where most of the video content is actually from (like the Docker clips), although all 8 of his videos I clicked on have a female AI voiceover, so I don't know where the male voice came from.
What actual LLM video? This is a project we put together in collaboration, this IS the video :)
How about instructions on removing AI slop from our lives, instead of adding more to it ?
You're being shadowbanned; I barely see your videos, and I like them.
[Title] How To Host AI Locally ?
OLLAMA ???
I was going to watch, but I auto-block channels with soy-face intro screens.
Bad tutorial. Almost nothing worked as you showed. Had to look it all up and figure it out on my own.
Hey.
Naomi, is Malwarebytes any good for virus protection on my phone? Please let me know.
Thank christ I have a brain and don't need these tools.
I don't trust it
Why are you so focused on running it on a laptop?
Thanks Mate Respect From Oz 🇦🇺🦘