Run Offline LLMs on Android

  • Published 17 Nov 2024

Comments • 21

  • @LebaneseJesus
    @LebaneseJesus 4 months ago +9

    Thank you good sir, I just made my Rabbit R1 a legit local LLM and LAM

  • @jeswer9
    @jeswer9 a month ago +1

    Dude you're brilliant!

  • @DeepDeepSouth
    @DeepDeepSouth 3 months ago +2

    Great tutorial

  • @Horatius_444
    @Horatius_444 26 days ago

    ERROR Unsupported architecture: armv7l
    I get this error after pasting the link from Ollama. Can this be fixed?

    • @TheQuackLearner
      @TheQuackLearner 26 days ago

      Oh darn, it looks like armv7 is not supported by ollama: github.com/ollama/ollama/issues/1926
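
      One quick way to confirm which architecture your device actually reports before installing; a minimal sketch using the standard uname command in a Termux shell:

      uname -m
      # aarch64 -> 64-bit ARM, supported by the ollama install script
      # armv7l  -> 32-bit ARM, not supported (see the issue linked above)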

    • @TheQuackLearner
      @TheQuackLearner 26 days ago

      Btw, not sure if you've had the chance, but I would highly recommend trying out llamafile. I've been moving away from ollama to llamafile, and I love the CPU inference performance it has on edge devices, plus the support is pretty good
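
      For anyone curious, a rough, untested sketch of what trying a llamafile in Termux could look like; the filename is a placeholder and the flags mirror llama.cpp's, so check --help on the build you download:

      chmod +x tinyllama.llamafile                    # make the downloaded single-file binary executable
      ./tinyllama.llamafile --help                    # list the flags this particular build supports
      ./tinyllama.llamafile -p "Hello from my phone"  # one-off CPU prompt, llama.cpp-style flag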

  • @chomanthapa
    @chomanthapa 3 months ago +3

    Nice! Can you run this in Docker though, creating a web UI interface like ChatGPT?

    • @TheQuackLearner
      @TheQuackLearner 3 months ago +3

      Hey! I haven't tried this yet, but Ollama does have a Docker image: hub.docker.com/r/ollama/ollama. In terms of setting up a UI, you could use something like this: github.com/JHubi1/ollama-app. If you run the model on Termux, you should have an endpoint that you can connect to with ollama-app. These are just ideas, but it might be worth checking out
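
      A minimal sketch of that endpoint idea, assuming the stock Ollama API on its default port; none of this is from the video:

      # In Termux: make the Ollama API reachable by other apps on the phone
      OLLAMA_HOST=0.0.0.0:11434 ollama serve
      # Then point ollama-app (or any client) at http://127.0.0.1:11434,
      # or at the phone's LAN IP if you're connecting from another device.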

  • @naoadiantasimular3498
    @naoadiantasimular3498 3 months ago +1

    Very good, congratulations!

  • @mojeime3
    @mojeime3 a month ago

    Very helpful. Thank you

  • @SCHaworth
    @SCHaworth a month ago

    You might be able to get ollama without installing Debian. You'd have to build it though. Might turn out better.
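
    A rough, untested sketch of what that native build could look like in Termux; the real steps are in ollama's developer docs and may differ:

    pkg install golang git cmake   # Termux packages for a Go + CMake build
    git clone https://github.com/ollama/ollama
    cd ollama
    go generate ./...              # builds the bundled llama.cpp backend
    go build .                     # produces an ./ollama binary for this device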

  • @とふこ
    @とふこ 4 months ago

    Looks difficult to install. Does this have better performance than simple apps like Layla Lite / MLC Chat / Private AI?

    • @TheQuackLearner
      @TheQuackLearner 4 months ago +2

      Hey! Yea, so it really depends on your goal and your device specs. I have tried the MLC app and gave it to one school in Uganda to test, but the problem was it kept crashing on his phone (his phone only had 4 GB of RAM). My goal was to have my students experience an LLM, so having tinyllama work via the above tutorial was a success for me. But yea, the more RAM, the better the experience. I have a Pixel 7a, and I downloaded the MLC app and that worked pretty well too, but I had 8 GB of RAM: llm.mlc.ai/docs/deploy/android.html. I did hear about Layla and people say it's quite good, but I didn't want to pay the $20: play.google.com/store/apps/details?id=com.layla. And this is private AI since it all runs offline. I've got to admit the terminal makes it quite intimidating; appreciate the feedback, this is good to know!

    • @TheQuackLearner
      @TheQuackLearner 4 months ago +2

      Oh wait, I've never seen Layla Lite before, fascinating. I need to check it out, thanks for sharing this

    • @LebaneseJesus
      @LebaneseJesus 4 months ago +1

      Difficult? It's like 2 downloads and running 5 commands... you don't even have to root your phone man.
      Also @thequacklearner, phenomenal job on the video man, you really should have more views!!
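
      For a sense of scale, a rough outline of the kind of commands involved; the exact steps in the video may differ slightly:

      pkg update && pkg install proot-distro          # 1. install the Debian container tool in Termux
      proot-distro install debian                     # 2. download a Debian rootfs
      proot-distro login debian                       # 3. enter Debian
      curl -fsSL https://ollama.com/install.sh | sh   # 4. install ollama
      ollama run tinyllama                            # 5. pull and chat with a small model
      # (run "ollama serve" in another session first if the server isn't already running)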

  • @ShaheenPyDev
    @ShaheenPyDev 20 days ago

    Termux 💪

  • @legendaryaadmi
    @legendaryaadmi 2 months ago

    How do you remove any llama model?

    • @mancapp
      @mancapp 2 months ago

      ollama rm {model name}
      For example:
      ollama rm llama3.1
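
      If you're not sure of the exact name, something like this should work (the model name here is just an example):

      ollama list           # shows every downloaded model and its exact tag
      ollama rm tinyllama   # remove the one you no longer want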

    • @JangWon-young-g5p
      @JangWon-young-g5p a month ago

      @@mancapp thank you

    • @marukocha7653
      @marukocha7653 3 days ago

      You can use screen instead of tmux: after running ollama serve inside screen -S, press Ctrl-a Ctrl-d to detach back to the previous shell and run tinyllama
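
      A minimal sketch of that screen workflow (the session name is just an example):

      screen -S ollama        # start a named screen session
      ollama serve            # keep the Ollama server running inside it
      # press Ctrl-a, then d, to detach back to the normal shell
      ollama run tinyllama    # chat with the model while the server keeps running
      # screen -r ollama      # reattach to the server session later if needed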