Okay, that's some good and appreciable content.
Thanks Nehal. 👍
Yes, we need many more videos like this.
Do more local LLM install videos.
I have an old file/media server which I am planning on rebuilding as a future project. Would I be able to run this on that server and still access it with other computers on my network or would it just be available on the server itself?
That's a great idea actually. You could make it accessible to other devices on your network, as Ollama supports that.
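For anyone wondering what that looks like in practice, here's a minimal sketch. It assumes Ollama on the server was started with OLLAMA_HOST=0.0.0.0 so it listens on all interfaces, and that 192.168.1.50 is the server's LAN address (a hypothetical IP, not from the video); any other machine on the network can then hit the standard HTTP API on port 11434.

```python
# Minimal sketch: query an Ollama instance running on another machine on the LAN.
# Assumes the server runs with OLLAMA_HOST=0.0.0.0 and 192.168.1.50 is its
# (hypothetical) LAN address; adjust to your own setup.
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

payload = {
    "model": "llama3.1",  # any model already pulled on the server
    "prompt": "Summarize why local LLMs are useful in one sentence.",
    "stream": False,      # ask for a single JSON response instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))

print(body["response"])
```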
@LinuxTex Thanks!
Do make a video on local models for AI-generated images.
Definitely bro👍
How do I make API calls to these offline LLMs, for use in projects?
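For reference, a minimal sketch of how such API calls can look, assuming Ollama as the backend, the official ollama Python package installed (pip install ollama), and llama3.1 already pulled locally; the model name and setup are my assumptions, not from the video.

```python
# Minimal sketch: call a locally running Ollama model from a project,
# using the official ollama Python package (assumed to be installed).
import ollama

response = ollama.chat(
    model="llama3.1",  # assumes this model has already been pulled with `ollama pull`
    messages=[
        {"role": "user", "content": "Give me three project ideas that use a local LLM."},
    ],
)

# The reply text lives under message -> content.
print(response["message"]["content"])
```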
Quality content, dude 👍 Got a sub 👆👍
Hardware requirements?
Nvidia H100
@m4saurabh Seriously? That's not for everyone T_T
For Phi-3 Mini (3.8B), 8 GB RAM. No GPU needed.
For Llama 3.1 8B, 16 GB RAM. Most consumer GPUs will suffice. An H100 is not necessary.😉
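A rough way to sanity-check those numbers (my own back-of-the-envelope sketch, not from the video): the weights take roughly parameter count times bytes per parameter at the chosen quantization, plus some overhead for the KV cache and runtime buffers.

```python
# Back-of-the-envelope RAM estimate for a quantized local model.
# Assumptions: 4-bit quantization (a common default for local GGUF models)
# and ~30% overhead for KV cache and runtime buffers.

def approx_ram_gb(params_billions: float, bits_per_param: float = 4, overhead: float = 1.3) -> float:
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total * overhead / 1e9

print(f"Phi-3 Mini 3.8B: ~{approx_ram_gb(3.8):.1f} GB")  # fits comfortably in 8 GB RAM
print(f"Llama 3.1 8B:    ~{approx_ram_gb(8.0):.1f} GB")  # fits in 16 GB RAM
```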
@LinuxTex Thank you!
Useful information 💯 🔥🔥🔥🔥🔥
Thank you👍
How can I have my local LLM work with my files?
Sir, you need to set up RAG for that. In the MSTY app that I've linked in the description below, you can create knowledge bases easily by just dragging and dropping files. Then you can interact with them using your local LLMs.
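For the curious, a minimal sketch of the kind of thing a RAG setup does under the hood: embed your documents, find the chunks closest to the question, and pass them to the model as context. It assumes Ollama is running locally with the nomic-embed-text embedding model and llama3.1 pulled; those model choices and the example snippets are my assumptions, not from the video or MSTY itself.

```python
# Minimal RAG sketch over a few text snippets, using the ollama Python package.
# Assumptions (not from the video): Ollama runs locally, and both
# "nomic-embed-text" and "llama3.1" have already been pulled.
import math
import ollama

def embed(text: str) -> list[float]:
    # Turn a piece of text into a vector with a local embedding model.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# "Knowledge base": in an app like MSTY this comes from the files you drop in;
# here it's a few hard-coded snippets for illustration.
docs = [
    "The media server has 16 GB of RAM and runs Debian.",
    "Backups run every night at 2 AM to an external drive.",
    "The router assigns the server a static IP on the LAN.",
]
doc_vectors = [embed(d) for d in docs]

question = "How much memory does the server have?"
q_vec = embed(question)

# Retrieve the most relevant snippet and hand it to the LLM as context.
best_doc = max(zip(docs, doc_vectors), key=lambda dv: cosine(q_vec, dv[1]))[0]
answer = ollama.generate(
    model="llama3.1",
    prompt=f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}",
)
print(answer["response"])
```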
Which Linux is best for my Samsung NP300E5Z with 4 GB RAM and an Intel Core i5-2450M processor? Please reply.
Go with MX Linux, Xfce desktop environment.
Do more videos like this ❤ Good apps.
Will do. Thanks for the comment👍
Helpful video!
Thank you Syed👍
I already use ollama on my Galaxy A54 with Kali Nethunter
0:39 The Holy Trinity 😂😂🤣
😮👍
Wow