I ran some experiments with different Ollama AI models, measuring hard drive/SSD usage in gigabytes, RAM usage, and time in seconds to get an answer, using 3 LLaMA model sizes (in billions of parameters).
CONCLUSION: I suggest sticking with the smallest model (7B) for users who have an average laptop.
If you have more than 16GB of RAM, you can use the 33B LLaMA model, but it takes some time (a while on my old CPU) to get an answer.
Here is the raw data from my experiments:
[ { "Model": "LLaMA-7B", "Time_Calculation_seconds": "5-15", "RAM_GB": "10-12", "Disk_Size_GB": 13 }, { "Model": "LLaMA-33B", "Time_Calculation_seconds": "15-30", "RAM_GB": "30-40", "Disk_Size_GB": 65 }, { "Model": "LLaMA-70B", "Time_Calculation_seconds": ">30", "RAM_GB": "70+", "Disk_Size_GB": 130 } ]
Sorry for using JSON to show my measurements.
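If anyone wants to reproduce my numbers, this is roughly how I collected them (a minimal sketch; the exact model tags on ollama.com may differ from the LLaMA names above):

```bash
# Pull a model and time one prompt; repeat for each model size.
ollama pull llama2:7b
time ollama run llama2:7b "Explain DNS in one paragraph."

# Disk usage of the downloaded models:
du -sh ~/.ollama/models

# Watch RAM from a second terminal while the model answers, e.g.:
top
```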
Thanks for the feedback! I agree, smaller models will do much better.
I've been waiting for your take on AI and I was not disappointed. Great work as always. Currently using Msty locally.
This topic just randomly appeared in my feed; yours is the 3rd video I clicked on and it's the only one that made sense to me. You explained everything so well, thank you.
Thanks!
This right here can replace the internet
I honestly can't appreciate this enough!
This is HUGE. Thank you!
If you have an Apple Silicon Mac, LM Studio is another option that also has a CLI tool (lms). It supports MLX (optimized for Macs) and GGUF model formats.
You didn't get a luv, must not be on the approved list.
You are doing a great job. I was surprised to see my other favorite channel. :D good luck. My other favorite IT Channels: NetworkChuck, David Bombal, The Hated One, and An0n Ali. :D
I’ve done this with locally based AI image processing software like comfyui, can’t wait to now try chat as well. I really appreciate the background and tutorial.
I'm glad you found it helpful!
Great video! Kinda glad this was recommended by Google...wonder how Google knew I was interested.
Google shills its tools.
Love it! Using this I won't need to agree to terms that allow someone to own MY data. It is mine.
Really helpful! I love the privacy videos! 💪
Thank you Naomi.
Comprehensive overview, thanks. A few months back I saw a YouTube video by Data Slayer running one of these models on an 8GB Raspberry Pi 5 with impressive results.
Best content for Nov 1, 2024. You get my vote. Great job!
thank you for watching :)
YES! This is literally on my to-do list for next week, and now I have a great source of info to start from. You're the best Naomi :D
Cool Geeky video.
Thanks Naomi.
Glad you liked it! 🤓
Nice to see you covering this topic! I already have a few of these installed, but I'm glad you're covering it, so I stopped in to hit like.
Omg I forgot about this! Glad you made another video for it!
I love this channel. Clear and easy instructions, even for people who aren't amazing with computers. I now have a private AI!
That's wonderful!!
Naomi makes light blue look extra elegant.
You should talk about the several AIs worth considering for LLM art, especially those using the Pinokio installer engine for PC. Installing on a secondary drive, like D:, is recommended.
For Android, the 'Local AI Offline Chatbot' app is available as a fully offline option, with no internet required. Although this AI is limited to a G-rating (no offensive material allowed), it offers a solid offline experience for those seeking a private, all-ages model running locally.
1:03 "a tutorial made by The Hated One" is such a hard line out of context
Wow, you smashed it!
very well made tutorial! :)
holy crap finally found you again
I’d love to see a tutorial on running your own StableDiffusion as well
So helpful, thanks!
This is helpful. Thank you. Subscribed ! : )
Thanks for subscribing!
love the work
great video!
Very nice! Have a nice weekend!
Great video. Please do another about local LLMs on Android. I don't know if it's possible at this time.
There are people doing it, but it's kind of complex.
You can install a Linux terminal with Termux, and from there you can install Ollama. But there are some things that don't make it easy: I tried with Debian and the directory was not added to the PATH, and I needed a second session to start the Ollama server separately. But it can be done.
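For anyone trying this, here is the rough sequence that worked for me (a sketch assuming Termux with proot-distro; package names and paths may vary):

```bash
# Inside Termux: set up a Debian environment
pkg install proot-distro
proot-distro install debian
proot-distro login debian

# Inside Debian: install Ollama with its official install script
apt update && apt install -y curl
curl -fsSL https://ollama.com/install.sh | sh

# Session 1: start the server (the separate session I mentioned)
ollama serve

# Session 2: if the binary wasn't added to PATH, call it by full path
/usr/local/bin/ollama run llama3.2:1b
```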
Good explanation, Naomi. However, most users (with up to 16GB of RAM and probably not a very powerful GPU) will have to stick with a single-digit-billion-parameter LLM, as stated in the video, whereas subscribing to cloud AI solutions lets users access much more powerful LLMs. Finally, I think it's also worth mentioning LM Studio as an alternative to Ollama, although I also use Ollama.
Brilliant explanation, you sound like an Australian tech Barbie and I love it!
(Not disparaging your technical knowledge, just saying it’s nice to see other feminine ladies in tech!)
YouTube should be sued for all the lies in the ads.
What ads? Inform yourself... but Googling for solutions will most probably not work... use a different search engine.
The YouTube shill didn't luv your comment, you should feel special.
lol you've never heard of adblocking?
edit: your playlists explain why
@@kiyoponnn so how do I block ads on my phone?
YouTube has ads???
"Timeless Naomi"
How do you protect your computer from viruses when you download things from the command prompt?
Depends on your threat model, but a good start is sticking with the big-name sources that have tens of thousands of downloads.
immediately formatting the entire disk and profit
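On a more practical note: verify downloads against the checksum the project publishes before running anything. A minimal sketch (the URLs and filenames are placeholders; use the real ones from the project's site):

```bash
# Fetch the installer and the checksum file published alongside it
curl -LO https://example.com/tool-1.0.tar.gz
curl -LO https://example.com/tool-1.0.tar.gz.sha256

# Compare the published hash against the one computed locally
sha256sum -c tool-1.0.tar.gz.sha256
```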
Naomi Brockwell, thanks for posting this video.
Thanks for watching!
What is the current best main processor for AI? Are there any other processors that AI would benefit from?
My concern is that I use LLMs to solve certain issues, and using the arguably "not as smart" smaller local models, compared to the ones hosted by those 3rd parties online, will result in worse solutions, won't it?
Great video. I must admit I gave up on local LLMs about 6 months ago, due to it being *so slow* to run them, even on my beefy desktop machine. I don't have an NVidia GPU, though. I was hoping to see more info on using GPUs in this video, but it's probably a bit advanced/out of scope. I've been meaning to go back and try again and this video has further encouraged me to do so. Thanks!
I was hoping to hear more about hardware requirements. What kind of GPU do we need for 40B models?
It can run on a CPU too, it's just slower. RAM depends on the model: small ones are fine with e.g. 2-5GB, and really big ones need 40GB+.
A 40B model at Q4_0 is going to be around 20GB, so a 3090 or 4090 is needed, my friend!
Quantization is shrinking all the models quite nicely. It's kind of the only way to go with light consumer-scale models.
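For a rough sizing rule (my own back-of-the-envelope math, not an official sizing tool): memory in GB ≈ parameters in billions × bits per weight ÷ 8, plus roughly 20% overhead for the KV cache and runtime.

```bash
# Example: 40B parameters at Q4_0 (~4 bits per weight)
params_b=40
bits=4
base=$(( params_b * bits / 8 ))   # 20 GB of weights
echo "base: ${base} GB, with overhead: ~$(( base * 12 / 10 )) GB"
```

That's why the ~20GB figure above lines up with a 24GB-class card like a 3090/4090.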
Can we train the models and add parameters to them (forking our own model?)
Thanks for the video. I missed any mention of the hardware I need for the Docker/LXC containers, or the backbone where I install the libraries for all this stuff. Also relevant in this context: the special NPUs that are needed for these operations (TOPS).
2:35 the most statistically probable responses ;)
How do you update Open Web UI? Thank you
GREAT question, going to do a video about it!
@@NaomiBrockwellTV Thank you
@@Hindsight101 if you're using a Docker version, just pull a new one.
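To spell that out (a sketch; the flags assume the standard install command from the Open WebUI docs, and your port and volume names may differ):

```bash
# Grab the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Recreate the container; the named volume keeps your chats and settings
docker stop open-webui && docker rm open-webui
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```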
Hi NBTV, curious: how would users know that no telemetry-esque connection happens in the background when the device goes online with these offline options? Couldn't prompts, inputs, and end-user info still be collected that way?
I thought the same thing. I've set up multiple instances of different AI chat and image-generation tools on separate machines in a sandboxed VLAN blocked from internet access, with firewall logs monitored at the gateway. So far none of them have tried to call home or off-network for anything prompt/query related. For queries that needed very new information not available when the model was built, it did not call out; it instead reported that it didn't have the info to answer. Additionally, I didn't see it covered in this video, but there are uncensored models available that you can use; other public models/engines tend to censor very useful info. Quick example: in a SHTF moment, how to make useful compounds, for example KNO3CS.
I run Little Snitch, so I see every connection being requested.
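If you'd rather verify than trust, watching the wire works too. A minimal sketch (the interface name and LAN range are examples; adjust for your network):

```bash
# Print any packet leaving the AI box for a destination outside the LAN.
# Chat with the model and watch for unexpected destinations.
sudo tcpdump -i eth0 -n 'not dst net 192.168.0.0/16'
```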
You should do another Ron Paul interview...been a while.
What do you think about GPT4All?
Can I connect my Postgres with Open WebUI?
Is the Grass DePin Project safe?
I don't really like Docker; it's still not user-friendly enough. It's not like a regular application that you just install: you still need the command line.
Thanks Naomi, but is there a way to jailbreak AI or ChatGPT for privacy, or what if I ask the AI itself to encrypt our chat?
You have to run uncensored models locally.
@@PracticalPcGuide Thanks for the answer
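For the record, here's a minimal sketch of running one locally with Ollama (dolphin-mistral is one example of a community "uncensored" model; check ollama.com/library for current names and availability):

```bash
# Pull and chat with an uncensored community model, fully local
ollama pull dolphin-mistral
ollama run dolphin-mistral
```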
Always a pleasure. (Forgive my lack of originality!) 😊
❤
Please review the InviZible Pro and Rethink DNS Android apps.
I have just found you and love you. I am pretty ignorant of this and am a bit overwhelmed. I started with what I thought was the first step and opened Firefox. Here is my question: in Firefox, I get a message to search using Google. Am I doing something wrong in the Firefox setup? I have disabled all tracking, cookies, and permissions in my Google account. Thank you. I promise I will get better. You just have a lot of info. Thank you very much.
Honestly, seeing The Hated One using Nord or The Pirate Bay undermined his credibility.
Msty works well for Windows users.
I love you Naomi!
Can I ask it about my local files with this?
7:50 ahem, what is that strange website you showed here? I have never ever seen it before.
Fascinating stuff, and I can see why a local model is good for privacy. However, if I've registered with an AI system (say ChatGPT) with a masked email address, I always use a VPN and I never use personally identifiable information in chats, what is my privacy risk?
1) ChatGPT requires an active cell number (not VoIP).
2) Everything you do through your account is tied into a single profile and kept forever. I wouldn't bet against the ability for that information to eventually be tied to you.
3) If you don't reveal sensitive information in chat, I think using ChatGPT is fine. It's just about understanding what information is collected and making smart usage decisions accordingly.
Why are we assuming everybody uses a laptop? Which kinds of desktop CPUs and GPUs, if any, are capable of running, say, a 70B-parameter model?
My question to anyone who wants to answer: even though I installed this LLaMA, will my internet provider be able to see my data? It is confusing to me, as I am 1000 years old. Help please.
If you are self hosting a model, you don't need internet access to use it. It's all stored locally on your machine.
Naomi, be aware that Ollama's LLMs are not really open source. Meta, like others (Google recently corrected this), is abusing the Open Source definition standard (OSI); these LLMs are trained on private data, and none of the sources are explained or given to users.
I'd like to make an honorable mention for GPT4All: a turnkey UI with CPU-based models for more compatibility.
PLEASE, DO A VIDEO on how to keep my house property safe.
The Pirate Bay was the shyt in its day.
Llama by Meta is not open source. It's word play to make you think that, but if you look at the legality, it isn't actually open source.
The only problem is that all these models use a lot of resources (some even need a GPU)... I think that until we can run a decent AI model locally without a modern CPU and lots of RAM, this will not be feasible... I tried already, and it was too slow, or the "lower" models were too bad... :( ... Let's see in the future :)
Inb4 rugpull
Excellent video. I should have seen it sooner, my bad.
Is it possible to download the AI model separately and install it in Ollama or Open WebUI?
I want to know if it is possible to get it when your home network is already cut off from the internet.
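It should be; the models are just files. One approach, sketched with Ollama (these are the default user-install paths; a service install on Linux keeps models under /usr/share/ollama instead):

```bash
# On a machine that has internet:
ollama pull llama3.2

# Copy the model store to removable media
cp -r ~/.ollama/models /media/usb/ollama-models

# On the offline machine, restore it and start the server
cp -r /media/usb/ollama-models ~/.ollama/models
ollama serve
```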
Hello gorgeous 😊
I read a study about the crackling voice; as far as I remember, it's only happening in the USA so far 😊
How about instructions on removing AI slop from our lives, instead of adding more to it ?
I still think you're really pretty. I like LLMs, but yeah, I feel really concerned about them. I'm careful what I put on there, though: only academic, consumer-level stuff. A local LLM is out of reach of my hardware because I can't afford to run it well enough. I only like Claude AI and GPT so far. I dislike Gemini and Meta's Llama, whatever. And "open source" is subjective when Meta does it; read the fine print.
You're being shadow-banned; I barely see your videos, and I like them.
The only problem I see with your tutorial is that, like most tutorials that tell people to use Docker, there are no instructions on how to install Docker.
Oh sure, I could do a web search like I have many times before, but they all tell me to run a command in my prompt that my computer says does not exist.
My biggest frustration with techies is that they make generalized assumptions that are only true if the given computer happens to have the latest, greatest, most advanced everything, including a monitor set to the highest resolution possible, which I cannot see and therefore must set mine to 1280x768.
Other than that, good video.
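Fair point about the missing Docker step. For what it's worth, on most Linux distros Docker's own convenience script handles the install (a sketch; Windows and macOS users would install Docker Desktop from docker.com instead):

```bash
# Docker's official convenience script; review it before running if cautious
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Sanity check
docker run hello-world
```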
Hey.
Naomi, is Malwarebytes any good for virus protection on my phone? Please let me know.
People don't care about their data until they're caught committing a crime.
Would have been nice if you had linked the actual LLM video; I couldn't find a single video anywhere in his playlist (The Hated One) about anything AI, much less LLMs. Also, 99% of his content looks like it's more likely to get you put on a gov list first. You also didn't even mention TechXplainator, which seems to be where most of the video content is actually from (like the Docker clips), although all 8 of his videos I clicked on have a female AI voiceover, so I don't know where the male voice came from.
What actual LLM video? This is a project we put together in collaboration, this IS the video :)
Bad tutorial. Almost nothing worked as you showed. Had to look it all up and figure it out on my own.
[Title] How To Host AI Locally?
OLLAMA???
Open WebUI installs OpenTelemetry. Every single thing that can be collected is collected: every word of every chat. You should take this video down. There is no opt-out and no way to disable it. This video is kinda what this channel is against, no?
Was going to watch but I auto block channels with soy face intro screens.
Thank christ I have a brain and don't need these tools.
Why are you so focused on running it on a laptop?
Thanks Mate Respect From Oz 🇦🇺🦘