Chat with your Documents Privately with Local AI using Ollama and AnythingLLM

  • Published 25 Jul 2024
    In this video, we'll see how to install and use AnythingLLM, a desktop app that provides a ChatGPT-style AI chat running locally on your machine. While not the most user-friendly UI I have seen, the built-in document chat feature works very well. Use it with local LLMs such as Meta's Llama 3, Mistral's Mixtral, Google's Gemma, and Microsoft's WizardLM 2.
    This lets you try out different models, and even use uncensored models.
    Don't send your private data to OpenAI's ChatGPT or Anthropic's Claude.ai; keep it private on your PC or Mac.
    If you want to install Ollama, or try out an open-source UI with even more features, check out my previous video on how to install Ollama and Open WebUI: • Run your Own Private C...
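    As a quick sketch of the workflow covered in the video (assuming Ollama is already installed and running), you can pull and sanity-check a model from the command line before pointing AnythingLLM's Ollama provider at it. The model tag `llama3` is one example; any model from the Ollama library works:

    ```shell
    # Pull a local model (Llama 3 shown; any Ollama model tag works)
    ollama pull llama3

    # Quick sanity check from the CLI
    ollama run llama3 "Say hello in one short sentence."

    # Ollama serves its API on localhost:11434 by default;
    # AnythingLLM's Ollama provider connects to this endpoint.
    # Listing the installed models confirms the server is reachable.
    curl http://localhost:11434/api/tags
    ```

    In AnythingLLM, select Ollama as the LLM provider and use that same base URL so both apps share one local model install.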
    👍 Please like if you found this video helpful, and subscribe to stay updated with my latest tutorials. 🔔
    ❤️ You can support this channel by buying me a ☕: buymeacoffee.com/codesfinance
    🔖 Chapters:
    00:00 Intro
    02:18 Ollama installation
    03:11 AnythingLLM installation
    06:09 Chat
    09:08 Chat with Document
    14:40 Final thoughts
    🔗 Video links:
    - AnythingLLM: useanything.com/
    - Ollama: ollama.com/
    - Llama 3: llama.meta.com/llama3/
    - Paper: papers.ssrn.com/sol3/papers.c...
    🐍 More Vincent Codes Finance:
    - ✍🏻 Blog: vincent.codes.finance
    - 🐦 X: / codesfinance
    - 🧵 Threads: www.threads.net/@codesfinance
    - 😺 GitHub: github.com/Vincent-Codes-Finance
    - 📘 Facebook: / 61559283113665
    - 👨‍💼 LinkedIn: / vincent-codes-finance
    - 🎓 Academic website: www.vincentgregoire.com/
    #ollama #llm #llama3 #anythingllm #mixtral #dbrx #command-r-plus #wizardlm2 #chatgpt #llm #largelanguagemodels #openwebui #gpt #opensource #cohere #databricks #opensourceai #llama2 #mistral #bigdata #research #researchtips #professor #datascience #dataanalytics #dataanalysis #uncensored #private #mac #macbookpro #claude #anthropic #msty
  • Science & Technology

Comments • 10

  • @VincentCodesFinance
    @VincentCodesFinance  2 months ago

    👍 Please like if you found this video helpful, and subscribe to stay updated with my latest tutorials. 🔔
    ❤ You can support this channel by buying me a ☕: buymeacoffee.com/codesfinance

  • @englishmimics
    @englishmimics 2 months ago

    Thank you so much Vincent for taking the time to create and share that amazing video with us. Your effort is truly appreciated.

  • @drkvaladao776
    @drkvaladao776 2 months ago +2

    Very nice, easy to follow and set up even if you are not a programmer like me. I'm using it with Llama 3 at the moment. Subscribed for more content like this.

  • @englishmimics
    @englishmimics 2 months ago

    Vincent, I've seen all your tutorials on ChatGPT-like clones, but I'm still unsure which one to install on my computer! Could you please advise me on which one you recommend, considering I'm not a coding pro? I just want to avoid any unnecessary headaches!

    • @VincentCodesFinance
      @VincentCodesFinance  2 months ago

      So far the easiest to install and use is Msty. However, it does not support documents (yet); it is just a pure chat app for now.

  • @Alex29196
    @Alex29196 2 months ago +1

    AnythingLLM won't outperform the other local inference UIs: it lacks voice output, unlike the Ollama web UI and others, and inference runs much slower than in Msty, the ollama CLI, and LM Studio.

    • @VincentCodesFinance
      @VincentCodesFinance  2 months ago

      I agree that the UI is inferior to pretty much all the other ones I tried. As for inference speed, I configured it to use my local Ollama installation, so the speed is the same. The one use case where I found AnythingLLM better than WebUI is for document search (Msty doesn't have it yet). I haven't tried LMstudio yet but it's high on my list.

    • @Alex29196
      @Alex29196 2 months ago

      @@VincentCodesFinance Try Msty, inference speed is really fast. I run a low-end laptop with 4 GB VRAM and 16 GB RAM. Hope it helps.