Thank you, good sir, I just made my Rabbit R1 a legit local LLM and LAM.
Dude you're brilliant!
Great tutorial
ERROR Unsupported architecture: armv7l
I get this error after pasting the link from Ollama. Can this be fixed?
Oh darn, it looks like armv7 is not supported by ollama: github.com/ollama/ollama/issues/1926
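If anyone else hits this, you can check what architecture your phone reports from inside Termux (just an extra tip, not something from the video):
uname -m
aarch64 means a 64-bit ARM userland, which ollama's Linux builds support; armv7l or armv8l means you're on 32-bit, which is what that issue is about.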
Btw, not sure if you've had the chance, but I would highly recommend trying out llamafile. I've been moving away from ollama to llamafile, and I love the CPU inference performance it has on edge devices, and the support is pretty good.
Nice! Can you run this in Docker though, creating a web UI like ChatGPT?
Hey! I haven't tried this yet, but Ollama does have a Docker image: hub.docker.com/r/ollama/ollama . In terms of setting up a UI, you could use something like this: github.com/JHubi1/ollama-app . If you run the model in Termux, you should have an endpoint that you can connect to with the ollama-app. These are just ideas, heh, but it might be worth checking out.
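If you do go the Docker route on a regular machine, the CPU-only setup from that image's page looks roughly like this (11434 is ollama's default port; the volume and container names are just examples):
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run tinyllama
After that, a UI like ollama-app should be able to point at http://your-ip:11434 as its endpoint, but again, I haven't tested that combo myself.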
Very good, congratulations!
Very helpful. Thank you
You might be able to get ollama without installing Debian. You'd have to build it yourself though. It might turn out better.
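Haven't actually done it, but since ollama is written in Go, the build inside plain Termux would be roughly this (package names and build steps are off the top of my head, so treat it as a sketch, not gospel):
pkg install golang git cmake clang
git clone https://github.com/ollama/ollama
cd ollama
go generate ./...
go build .
./ollama serve
Whether the llama.cpp backend builds cleanly against Termux's clang is the part I'm least sure about.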
Looks difficult to install. Does this have better performance than simple apps like Layla lite / MLC Chat / Private AI?
Hey! Yeah, so it really depends on your goal and your device specs. I have tried the MLC app and gave it to one school in Uganda to test, but the problem was it kept crashing on his phone (his phone only had 4 GB of RAM). My goal was to have my students experience an LLM, so having tinyllama work was a success for me based on the above tutorial. But yeah, the more RAM the better the experience. I have a Pixel 7a and I downloaded the MLC app, and that worked pretty well too, but I had 8 GB of RAM: llm.mlc.ai/docs/deploy/android.html . I did hear about Layla, and people say it's quite good, but I didn't want to pay the $20: play.google.com/store/apps/details?id=com.layla . So this is a private AI since it's being done offline. I've got to admit the terminal makes it quite intimidating. Appreciate the feedback, this is good to know!
Oh wait, I've never seen Layla lite before, fascinating. I need to check it out, thanks for sharing this.
Difficult? It's like 2 downloads and running 5 commands... you don't even have to root your phone man.
Also @thequacklearner, phenomenal job on the video man, you really should have more views!!
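For anyone who lands here wondering what those commands actually are, the rough flow (going from memory of the video, assuming the Termux + proot-distro Debian route, so exact steps may differ):
pkg install proot-distro
proot-distro install debian
proot-distro login debian
curl -fsSL https://ollama.com/install.sh | sh
ollama serve (leave this running in one tmux window)
ollama run tinyllama (in a second window)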
Termux 💪
How do you remove any of the models?
ollama rm {AI model name}
for example:
ollama rm llama3.1
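And if you're not sure of the exact name, this lists everything you've pulled:
ollama list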
@@mancapp thank you
You can use screen instead of tmux: after running ollama serve in a screen session (screen -S), press Ctrl-A then D to go back to your previous shell and run tinyllama.
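In case the screen flow isn't obvious to anyone, it looks roughly like this (the session name is just an example):
screen -S ollama
ollama serve
(press Ctrl-A then D to detach)
ollama run tinyllama
screen -r ollama (to get back to the serve log)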