Stop paying for ChatGPT with these two tools | LMStudio x AnythingLLM

  • Published May 18, 2024
  • In this video, we install two user-friendly tools that make it easy to download, run, and manage a powerful local LLM to replace ChatGPT. Seriously.
    Today, with only a desktop computer with a retail GPU and two free applications, you can get a fully private local LLM + RAG chatbot running in less than 5 minutes!
    This is no joke: the teams at LM Studio and AnythingLLM are now fully integrated for your convenience. Run models like Llama 2, Mistral, CodeLlama, and more to make your dreams a reality at no expense to your privacy.
    LMStudio: lmstudio.ai
    AnythingLLM: useanything.com/download
    Chapters:
    0:00 Introduction to LMStudio x AnythingLLM
    0:57 What is AnythingLLM?
    1:20 Installing LMStudio
    1:53 Installing AnythingLLM
    2:10 LMStudio Basic use tutorial
    4:28 Testing out our model
    5:32 How to level up your LLM chat abilities
    6:00 Connecting LMStudio to AnythingLLM
    7:53 Send a basic chat on AnythingLLM to our custom model
    8:26 Adding knowledge to our LMStudio model
    10:08 What the future of chat with local LLMs is going to look like

Comments • 429

  • @codygaudet8071 · 2 months ago +155

    Please do a dedicated video on training minimal base models for specific purposes. You're a legend. Also a video on commercial use and licensing would be immensely valuable and greatly appreciated.

    • @Al-Storm · 2 months ago +5

      +1

    • @akram5960 · 2 months ago +7

      Where should one start on the path of learning AI (LLMs, RAG, generative AI, ...)?

    • @fxstation1329 · 1 month ago

      +1

    • @vulcan4d · 1 month ago +1

      Yes!

    • @nasirkhansafi8634 · 15 days ago

      Very nice question, I am waiting for the same. Wish Tim would make that video soon.

  • @alanmcoll101 · 2 months ago +4

    Thanks mate. Had them up and running in a few minutes.

  • @TazzSmk · 2 months ago +8

    Thanks for the tutorial; everything works great and surprisingly fast on an M2 Mac Studio, cheers!

  • @claudiantenegri2612 · 2 months ago +3

    Very nice tutorial! Thanks Tim.

  • @olivierstephane9232 · 2 months ago +2

    Excellent tutorial. Thanks a bunch😊

  • @PCFix41 · 2 months ago +2

    I’m just about to dive into LM Studio and AnythingLM Desktop, and let me tell you, I’m super pumped! 🚀 The potential when these two join forces is just out of this world!

  • @autonomousreviews2521 · 2 months ago +8

    Fantastic! I've been waiting for someone to make RAG smooth and easy :) Thank you for the video!

  • @dipteshbosedb · 2 months ago +13

    Wonderful. Thanks Tim. You have solved a world of problems by publishing this video. I am sure AnythingLLM will keep adding more supported file formats, such as Excel, CSV, etc.

  • @bradcasper4823 · 2 months ago +4

    Thank you, I've been struggling for so long with problematic things like privateGPT etc., which gave me headaches. I love how easy it is to download models and add embeddings! Again, thank you.
    I'm very eager to learn more about AI, but I'm an absolute beginner. Maybe a video on how you would learn from the beginning?

  • @user-ld8sy9xu2v · 2 months ago +1

    Great stuff; this way you can run a good smaller conversational model like a 13B or even a 7B, like Laser Mistral.
    The main problem with these smaller LLMs is massive holes in some topics, or in information about events, celebrities, and other stuff; this way you can make your own database about the stuff you want to chat about.
    Amazing.

  • @sitedev · 2 months ago +10

    I'd love to hear more about your product roadmap, specifically how it relates to the RAG system you have implemented. I've been experimenting a lot with Flowise, and the new LlamaIndex integration is fantastic, especially the various text summarisation and content refinement methods available with a LlamaIndex-based RAG. Are you planning to enhance the RAG implementation in AnythingLLM?

  • @BudoReflex · 2 months ago +1

    Thank you! Very useful info. Subbed.

  • @AC-go1tp · 1 month ago

    Thank you so much for your generosity. I wish the very best for your enterprise. God bless!

  • @rowbradley · 1 month ago

    Thanks for building this.

  • @cee7004 · 1 month ago

    Thank you for making this video. This helped me a lot.

  • @williamsoo8500 · 26 days ago

    Awesome man. Hope to see more videos with AnythingLLM!

  • @wingwing2683 · 2 months ago

    It's very helpful. Thank you!

  • @moreloveandjoy · 2 months ago

    Brilliant. Thank you.

  • @Babble_Gum · 1 month ago

    Thanks dude! Great video

  • @craftedbysrs · 15 days ago

    Thanks a lot! This tutorial is a gem!

  • @boomerstrikeforce · 2 months ago

    Great overview!

  • @MCSchuscha · 2 months ago +1

    Changing the embedding model would be a good tutorial! For example, how to use a multilingual model!

  • @aiforall217 · 2 months ago

    Thanks a lot, very informative; you are my new mentor :)

  • @Helios1st · 19 days ago +1

    Wow, great information. I have a huge amount of documents, and every time I search for something it becomes such a difficult task to fulfill.

  • @fieldpictures1306 · 1 month ago

    Thanks for this; about to try it to query legislation and case law for a specific area of UK law, to see if it is effective in returning references to relevant sections and key case law. Interested in building a private LLM to assist with specific repetitive tasks. Thanks for the video.

  • @vivekkarumudi · 2 months ago +10

    Thanks a ton... you are giving us power over working with our local documents. It's blazingly fast to embed the docs, super fast responses, and all in all I am very happy.

    • @ashleymusihiwa · 2 months ago

      That's liberating! I was really concerned about privacy, especially when coding or working on refining internal proposals. Now I know what to do.

    • @BarryFence · 1 month ago

      What type of processor/GPU/model are you using? I'm using version 5 of Mistral and it is super slow to respond. i7 and an Nvidia RTX 3060 Ti GPU.

  • @thualfiqar87 · 2 months ago +1

    That's really amazing 🤩, I will definitely be using this for BIM and Python

  • @kylequinn1963 · 2 months ago +6

    This is exactly what I've been looking for. Now, I'm not sure if this is already implemented, but if the chat bot can use EVERYTHING from all previous chats within the workspace for context and reference... My god that will change everything for me.

    • @TimCarambat · 2 months ago +2

      It does use the history for context and reference! History, system prompt, and context, all at the same time, and we manage the context window for you on the backend.

    • @IrakliKavtaradzepsyche · 2 months ago

      @@TimCarambat But isn't history actually constrained by the active model's context size?

    • @TimCarambat · 2 months ago +4

      @@IrakliKavtaradzepsyche Yes, but we manage the overflow automatically so you at least don't crash from token overflow. This is common for LLMs: truncating or manipulating the history for long-running sessions.

  • @stanTrX · 22 days ago

    IMO AnythingLLM is much more user-friendly and really has big potential. Thanks Tim!

  • @BotchedGod · 2 months ago

    AnythingLLM looks super awesome; can't wait to set it up with Ollama and give it a spin. I tried Chat with RTX, but the YouTube upload option didn't install for me, and that was all I wanted it for.

  • @jimg8296 · 1 month ago

    Just got this running and it's fantastic. Just a note that LM Studio uses the API key "lm-studio" when connecting using Local AI Chat Settings.

    • @thegoat10.7 · 1 month ago

      Does it provide a script for YouTube?
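The "lm-studio" API-key tip above can be sketched in code. This is a hedged example, not from the video: it assumes LM Studio's local server is running at its default address (http://localhost:1234/v1) with an OpenAI-compatible /chat/completions endpoint, and that the key is just a placeholder string the local server does not actually validate. The `build_chat_request` helper and the `local-model` name are illustrative, not part of either tool.

```python
import json

# Assumed defaults: LM Studio's local inference server ("Start Server" in the
# app) listens on port 1234 and speaks the OpenAI chat-completions format.
BASE_URL = "http://localhost:1234/v1"
API_KEY = "lm-studio"  # placeholder key; the local server ignores its value

def build_chat_request(prompt: str, model: str = "local-model"):
    """Build headers and a JSON body for POST {BASE_URL}/chat/completions."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({
        "model": model,  # LM Studio serves whichever model is currently loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }).encode()
    return headers, body

headers, body = build_chat_request("Say hello in five words.")

# To actually send it (requires a running LM Studio server):
#   import urllib.request
#   req = urllib.request.Request(f"{BASE_URL}/chat/completions",
#                                data=body, headers=headers)
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

AnythingLLM's "Local AI Chat Settings" fills in the same base URL and key for you; the sketch just shows what travels over the wire.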

  • @Namogadget_ · 2 months ago

    Love your explanation.
    Love from INDIA 😊

  • @atharvadeshpande6647 · 2 months ago

    Great content❤

  • @continuouslearner · 2 months ago +3

    Also, how is this different from implementing RAG on a base foundation model and chunking our documents and loading it into a vector db like pinecone? Is the main point here that everything is locally run on our laptop? Would it work without internet access?

  • @drew5834 · 1 month ago

    Great work Tim, I'm hoping I can introduce this or anything AI into our company

  • @YannMetalhead · 1 month ago

    Thank you!

  • @djmccullough9233 · 1 month ago

    I had a spare 6800 XT sitting around that had been retired due to overheating for no apparent reason, as well as a semi-retired Ryzen 2700X, and I found 32 GB of RAM sitting around for the box. Just going to say flat out that it is shockingly fast. I actually think running ROCm to enable GPU acceleration for LM Studio runs LLMs better than the 3080 Ti in my main system, or at the very least so similarly that I can't perceive a difference.

  • @shabbirug · 1 month ago

    Excellent work. Please make a video on text-to-SQL and Excel/CSV/SQL support for LLMs and chatbots. Thank you so much ♥️

  • @LiebsterFeind · 2 months ago +7

    LM Studio's TOS paragraph:
    "Updates. You understand that Company Properties are evolving. As a result, Company may require you to accept updates to Company Properties that you have installed on your computer or mobile device. You acknowledge and agree that Company may update Company Properties with or WITHOUT notifying you. You may need to update third-party software from time to time in order to use Company Properties.
    Company MAY, but is not obligated to, monitor or review Company Properties at any time. Although Company does not generally monitor user activity occurring in connection with Company Properties, if Company becomes aware of any possible violations by you of any provision of the Agreement, Company reserves the right to investigate such violations, and Company may, at its sole discretion, immediately terminate your license to use Company Properties, without prior notice to you."
    Several posts on LLM Reddit groups with people not happy about it. NOTE: I'm not one of the posters, read-only, I'm just curious what others think.

    • @TimCarambat · 2 months ago +6

      Wait, so their TOS basically says they may or may not monitor your chats in case you are up to no good, with no notification?
      Okay, I see why people are pissed about that. I don't like that either, unless they can verifiably prove the "danger assessment" is done on-device, because otherwise this is no better than just cloud hosting, except you pay for it with your own resources.

    • @TimCarambat · 2 months ago +3

      Thanks for bringing this to my attention btw. I know _why_ they have it in the ToS, but I cannot imagine how they think that will go over.

    • @LiebsterFeind · 2 months ago

      Ancient idea clash between wanting to be a good "software citizen" and the unfortunate fact that their intent is still to "monitor" your activities. As you said in your second reply to me, "monitoring" does not go over well with some and the consideration of the intent for doing so, even if potentially justified, is a subsequent thought they will refuse to entertain. @@TimCarambat

    • @alternate_fantasy · 1 month ago

      @@TimCarambat Let's say there is monitoring going on in the background; what if we set up a VM that is not allowed to connect to the internet? Would that make our data safe?

    • @TimCarambat · 1 month ago

      @@alternate_fantasy It would prevent phone-homes, sure, so yes. That being said, I have Wiresharked LM Studio while running and did not see anything sent outbound that would indicate they can view anything like that. I think that's just their lawyers being lawyers.

  • @dadlord689 · 2 months ago

    Awesome!

  • @TheDroppersBeats · 1 month ago

    @Tim, this episode is brilliant! Let me ask you one thing. Do you have any ways to force this LLM model to return the response in a specific form, e.g. JSON with specific keys?

  • @WestW3st · 2 months ago

    I mean, this is pretty useful already; are there plans to increase the capabilities to include other formats of documents, images, etc.?

  • @bennguyen1313 · 2 months ago +4

    I notice some of the models are 25 GB+: BLOOM, Meta's Llama 2, Guanaco 65B and 33B, dolphin-2.5-mixtral-8x7b, etc.
    Do these models require training? If not, but you wanted to train one with custom data, does the size of the model grow, or does it just change and stay the same size?
    Aside from LM Studio and AnythingLLM, any thoughts on other tools that attempt to make it simpler to get started, like Oobabooga, GPT4All, Google Colab, llamafile, Pinokio?

  • @fxstation1329 · 1 month ago

    Thank you so much for the concise tutorial. Can we use both Ollama and LM Studio with AnythingLLM? It only takes one of them. I have some models in Ollama and some in LM Studio, and would love to have them both in AnythingLLM. I don't know if this is possible though. Thanks!

  • @lalamax3d · 1 month ago

    Thanks, I love it. Beautiful work.

  • @karlwireless · 1 month ago +1

    This video changed everything for me. Insane how easy to do all this now!

  • @valenciawalker6498 · 2 months ago

    Thank you

  • @immersift7856 · 2 months ago +1

    Looks so good! I have a question: is there some way to add a chat flow diagram like Voiceflow or Botpress?
    For example, guiding the discussion for an e-commerce chatbot and giving multiple choices when asking questions?

    • @TimCarambat · 2 months ago

      I think this could be done with just some clever prompt engineering. You can modify the system prompt to behave in this way. However, there is no voiceflow-like experience built-in for that. That is a clever solution though.

  • @Augmented_AI · 2 months ago +1

    How well does it perform on large documents? Is it prone to the lost-in-the-middle phenomenon?

    • @TimCarambat · 2 months ago

      That is more of a "model behavior" and not something we can control.

  • @FlutterXP · 27 days ago

    You are a superhero!

  • @milorad9301 · 2 months ago

    Hello Tim, can you make a video connecting Ollama with AnythingLLM?

  • @continuouslearner · 2 months ago +6

    So if in case we need to programmatically use this, does anythingllm itself offer a ‘run locally on server’ option to get an API endpoint that we could call from a local website for example? i.e. local website -> post request -> anythingllm (local server + PDFs)-> LMstudio (local server - foundation model)

    • @clinbrokers · 2 months ago

      Did you get an answer?

  • @jakajak1991 · 13 days ago

    I get this response every time:
    "I am unable to access external sources or provide information beyond the context you have provided, so I cannot answer this question".
    Mac mini
    M2 Pro
    Cores:10 (6 performance and 4 efficiency)
    Memory:16 GB

  • @cosmochatterbot · 2 months ago +6

    Absolutely stellar video, Tim! 🌌 Your walkthrough on setting up a locally run LLM for free using LM Studio and Anything LLM Desktop was not just informative but truly inspiring. It's incredible to see how accessible and powerful these tools can make LLM chat experiences, all from our own digital space stations. I'm particularly excited about the privacy aspect and the ability to contribute to the open-source community. You've opened up a whole new universe of possibilities for us explorers. Can't wait to give it a try myself and dive into the world of private, powerful LLM interactions. Thank you for sharing this cosmic knowledge! 🚀👩‍🚀

  • @CaptZenPetabyte · 1 month ago

    I'm on a Linux machine and want to set up some hardware... recommended GPU (or can you point me in the direction of good information)? Or better yet, can an old Bitcoin rig do the job somehow, seeing as they're useless for Bitcoin these days? Great tutorial too mate, really appreciate you taking the time!

  • @Chris.888 · 2 months ago

    Nice one Tim. It's been on my list to get a private LLM set up; your guide is just what I needed. I know Mistral is popular. Are those models listed by capability, top being most efficient? I'm wondering how to choose the best model for my needs.

    • @TimCarambat · 2 months ago +1

      Those models are curated by the LM Studio team; IMO they are ranked by popularity. However, if you aren't sure which model to choose, go for Llama 2 or Mistral. You can't go wrong with those, as they are all-around capable.

    • @Chris.888 · 2 months ago

      Thanks Tim, much appreciated.

  • @djkrazay7791 · 2 months ago

    This is an amazing tutorial. Didn't know there were that many models out there. Thank you for clearing the fog. I have one question though, how do I find out what number to put into "Token context window"? Thanks for your time!

    • @TimCarambat · 2 months ago +1

      Once the model is pulled into LM Studio and selected, it's in the sidebar: a tiny little section on the right that says "n_ctx" or something similar. You'll then see it explain how many tokens your model can handle at max, RAM permitting.

    • @djkrazay7791 · 2 months ago

      @@TimCarambat You're the best... thanks... 🍻

  • @apricotcomputers3943 · 2 months ago

    you are amazing 🎉🎉🎉

  • @CrusaderGeneral · 2 months ago +1

    That's great, I was getting tired of the restrictions in the common AI platforms.

  • @Jascensionvoid · 2 months ago +1

    This is an amazing video and exactly what I needed. Thank you! I really appreciate it. Now the one thing: how do I find the token context window for the different models? I'm trying out Gemma.

    • @TimCarambat · 2 months ago +3

      Up to 8,000 (depends on VRAM available; 4,096 is safe if you want the best performance). I wish they had it on the model card on Hugging Face, but in reality it's sometimes just better to google it :)

    • @Jascensionvoid · 2 months ago

      @@TimCarambat I gotcha. So for the most part, just use the recommended one. I got everything working, but I uploaded a PDF and it keeps saying "I am unable to provide a response to your question as I am unable to access external sources or provide a detailed analysis of the conversation." But the book was loaded, moved to the workspace, and saved and embedded?

    • @TimCarambat · 2 months ago +2

      For what it's worth, in LM Studio there is an `n_ctx` param on the sidebar that shows the maximum you can run. Performance will degrade if your GPU is not capable of running the max token context, though.

  • @Al-Storm · 2 months ago

    Very cool, I'll check it out. Is there a way to not install this on your OS drive?

  • @NaveenKumar-vj9sc · 2 months ago

    Thanks for the insights. What's the best alternative for a person who doesn't want to run locally, yet wants to use open-source LLMs for interacting with documents and web scraping for research?

    • @TimCarambat · 2 months ago

      OpenRouter has a ton of hosted open-source LLMs you can use. I think a majority of them are free and you just need an API key.

  • @gigawerx · 2 months ago

    Hi Tim, I am fairly new to this, but going to ask a silly question: will this method have up-to-date knowledge similar to GPT-4 using Bing, etc.? Thanks, this is a great video!

    • @TimCarambat · 2 months ago +1

      The model cutoff dates vary, so no. However! We are going to be adding live search via various means (from free to connecting to external paid services); "live web browsing" is the term for this. Some models are even "online" and have this baked in, but they are not private or something you can run yourself; see Perplexity AI for that kind of functionality.
      We want to unlock this for local LLMs for you, though.

  • @iiiBog · 2 months ago

    Does AnythingLLM have version numbers? How do I tell over time how current my installed version is, or whether I should update to receive new features?

  • @yvesvandenbroek6055 · 2 months ago

    Hmm, using it on a Mac: with Ollama it seems faster than with LM Studio, but the main issue is that, aside from the AnythingLLM question, it seems not to remember the content in a reliable way. It seems to not involve local docs all the time. I threw away all the settings and started over from scratch... it's more a miss than a hit?!

  • @HugoRomero-mq7om · 25 days ago

    Very useful video!! Thanks for the work. I still have a doubt about the chats that take place: is there any record of the conversations? For commercial purposes it would be nice to generate leads with your own chat!

    • @TimCarambat · 24 days ago

      Absolutely, while you can "clear" a chat window you can always view all chats sent as a system admin and even export them for manual analysis or fine-tuning.

  • @Djk0t · 2 months ago +1

    Hi Tim, fantastic. Is it possible to use AnythingLLM with GPT-4 directly, for local use, like the example you demonstrated above?

    • @thedeathcake · 2 months ago

      Can't imagine that's possible with GPT-4. The VRAM required for that model would be in the hundreds of GB.

  • @temp911Luke · 2 months ago

    Hi Tim,
    What GPU have you got ?

  • @MrAmirhk · 22 days ago

    Can't wait to try this. I've watched a dozen other tutorials that were too complicated for someone like me without basic coding skills. What are the pros/cons of setting this up with LMStudio vs. Ollama?

    • @TimCarambat · 22 days ago

      If you don't like to code, you will find the UI of LM Studio much more approachable, but it can be an information overload. LM Studio has every model on Hugging Face. Ollama is only accessible via the terminal and has limited model support, but is dead simple.
      This video was made before we launched the desktop app. Our desktop app comes with Ollama pre-installed and gives you a UI to pick a model and start chatting with docs privately. That might be a better option, since it is one app: no setup, no CLI, no extra application.

  • @xevenau · 2 months ago +10

    A software engineer with AI knowledge? You got my sub.

  • @Equality-and-Liberty · 1 month ago

    I want to try it in a Linux VM, but from what I see you can only make this work on a machine with a desktop OS. It would be even better if both LM Studio and AnythingLLM could run in one or two separate containers with a web UI.

  • @stevekirsch8284 · 19 days ago

    Very helpful video. I'd love to be able scrape an entire website in Anything LLM. Is there a way to do that?
    Is there a website where I can ask help questions about Anything LLM?

  • @MarsX69 · 2 months ago

    Thanks for this video. Does AnythingLLM do a good job of writing SEO-optimised blog articles?

    • @IrakliKavtaradzepsyche · 2 months ago

      It is the model that you choose in LM Studio that affects the output.

  • @MrNatzu · 2 months ago

    Very nice; will definitely try it. Is there, or will there be, an option to integrate an AnythingLLM workspace into Python code to automate tasks via an API?

    • @TimCarambat · 2 months ago

      Yes, but the API is only in the Docker version currently, since that can be run locally and in the cloud, so an API makes more sense for that medium.

  • @catwolf256 · 11 days ago

    To operate a model comparable to GPT-4 on a personal computer, you would currently need around 60GB of VRAM. That roughly means three 24GB graphics cards, each costing between $1,500 and $2,000. Equipping a PC to run a similar model would therefore cost roughly 19 to 25 years' worth of a ChatGPT subscription at $20 per month, or $240 per year.
    Although there are smaller LLMs (large language models) available, such as 8B or 13B models requiring only 4-16GB of VRAM, they don't compare favorably even with the freely available GPT-3.5.
    Furthermore, with OpenAI planning to release GPT-5 later this year, the hardware requirements to match its capabilities on a personal computer are expected to be even more demanding.

    • @TimCarambat · 9 days ago

      Absolutely. Closed-source and cloud-based models will always have a performance edge. The kicker is: are you comfortable with their limitations on what you can do with them, paying for additional plugins, and the exposure of your uploaded documents and chats to a third party?
      Or get 80-90% of the same experience with whatever the latest and greatest OSS model is, running on your CPU/GPU, with none of that concern. They're just two different use cases; both should exist.

    • @catwolf256 · 9 days ago

      @@TimCarambat While using versions 2.6 to 2.9 of Llama (dolphin), I've noticed significant differences between it and ChatGPT-4. Llama performs well in certain areas, but ChatGPT generally provides more detailed responses. There are exceptions where Llama may have fewer restrictions due to being less bound by major company policies, which can be a factor when dealing with sensitive content like explosives or explicit materials. However, while ChatGPT has usage limits and avoids topics like politics and explicit content, some providers offer unrestricted access through paid services. And realistically, most users (over 95%) might try these services briefly before discontinuing their use.
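The cost comparison a few comments up can be sanity-checked with quick arithmetic (all figures are the commenter's estimates, not current prices):

```python
# Commenter's estimates: three 24 GB GPUs at $1,500-$2,000 each vs. a
# ChatGPT subscription at $20/month.
num_gpus = 3
gpu_price_low, gpu_price_high = 1_500, 2_000
subscription_per_year = 20 * 12          # $240/year

build_low = num_gpus * gpu_price_low     # $4,500
build_high = num_gpus * gpu_price_high   # $6,000

# Years of subscription fees the GPU build is equivalent to
years_low = build_low / subscription_per_year
years_high = build_high / subscription_per_year
print(f"Break-even: {years_low:.1f} to {years_high:.1f} years")  # 18.8 to 25.0
```

At those prices the break-even works out to roughly 19 to 25 years of subscription fees, ignoring electricity and any resale value of the hardware.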

  • @reinerheiner1148 · 2 months ago

    Nice work! Unfortunately, after installing AnythingLLM on Windows 10 and running it, it's stuck at "loading workspaces".

  • @TrevorMatthews · 2 months ago

    Use with source code? This might be ideal: can this understand my VS database and ETL projects? Asking questions of source would be my #1 use case. I've installed it, and two things: I don't see a way to point it at a folder (or do I have to ingest each file individually?), and second, when navigating into the code folder, the SQL, dtsx, and ps1 file types are filtered out. I'm assuming that means it won't work with those file types, which would rule it out for a code chatbot...

  • @BrandonInsight · 1 month ago

    Great!

  • @uwegenosdude · 1 month ago

    Thanks, Tim, for the good video. Unfortunately I do not get good results for uploaded content.
    I'm from Germany, so could it be a language problem, since the uploaded content is German text?
    I'm using the same Mistral model from your video and added two web pages to AnythingLLM's workspace,
    but I'm not sure if the tools are using this content to build the answer.
    In the LM Studio log I can see a very small chunk of one of the uploaded web pages, but in total the result is wrong.
    To get good embedding values I downloaded nomic-embed-text-v1.5.Q8_0.gguf and use it for the Embedding Model Settings in LM Studio, which might not be necessary, because you didn't mention such steps in your video.
    I would appreciate any further hints. Thanks a lot in advance.

  • @zacca03 · 2 months ago

    Thanks a lot for this amazing tutorial. Unfortunately, I've tried it on my PC (no GPU) and it takes more than 30 seconds to generate the first token of the "how are you" query. Do you think the wait is justified by the fact that, using coding-specific models, I can just make progress faster? I'm asking this because even when using GPT-4, I often have to correct the work it does several times. Thanks again for the creation of AnythingLLM.

    • @TimCarambat · 2 months ago +1

      It's not atypical. My MacBook (Intel CPU) takes about that long for even a single-token query. It's rough. More tokens = longer wait. Model size does impact the time to load, but smaller models can often perform the work.
      Unfortunately, without a great GPU you may be limited to cloud-based models if you want really concise responses that are accurate.
      That's just the state of things currently.

    • @zacca03 · 2 months ago

      ​@@TimCarambatI'm studying to become a mechanical engineer, so it's not really my field. But something that comes to my mind is the following: if you cannot exploit the GPU, why not exaggerate on RAM or SWAP usage? I have 16gb of RAM with 512gb of NVME storage, there's plenty to be given to AnythingLLM but mostly LMStudio. Seeing your brilliance developing AnythingLLM, I'm sure you will figure out something. For me, having a laptop and being a heavy gpt user, it's not cost effective to actually get another PC for this usage. You have also to account for probably expensive energy bills. Still, the project has a LOT of potential! Keep it up with your work mate!

  • @brettgarnier107 · 2 months ago

    Thanks for the great tutorial! I was hoping I would be able to dump a GitHub repo in and talk to it about Entity Framework relationships. Saw that .cs and other files are not valid. I was going to iteratively go through my repo, dump all the source into a single .txt file, then put that in the vector DB. Do you have a better solution?

    • @TimCarambat · 2 months ago

      We have a Github repo connector that can pull an entire repo!

    • @brettgarnier107 · 2 months ago

      @@TimCarambat I spent a while getting that to work with my private repo. I made the temp token and gave it read access to my files. I paste my URL and access token, and "Could not prepare github repo for loading! Check URL" is what I get every time; tried for an hour.

  • @mohamedemarah9379 · 2 months ago

    Looks really clean, thank you! Quick question: I wanted to test with a 50 MB .txt log file, but after some time embedding I got an error "cannot create a string longer than 0x1" and didn't catch the rest. Any thoughts on how I could add big log files? I used the default embedder and vector store with Ollama CodeLlama 7B.

    • @TimCarambat · 2 months ago

      That is a bizarre error I have never seen. What operating system?

  • @maximmono1 · 2 months ago

    This is superb. Would it be possible to provide a web interface to a small number of people, protected by username and password? Do you provide paid webinars? Amazing stuff.

  • @FisVii77
    @FisVii77 2 months ago +4

    Can you do more of these demonstrations or videos? Is AnythingLLM capable of generating visual content like DALL·E 3 or video, assuming a capable open-source model is used? And is there a limitation other than local memory on the size of the vector databases created? This is amazing ;)
    Thanks for this video, truly appreciated, man. Liked and subscribed to support you.

  • @zeroxinfinity5816
    @zeroxinfinity5816 a month ago

    Which local LLM is the best and has the largest context window? Mine only shows around a 2k-token context window.

  • @nightmisterio
    @nightmisterio a month ago

    Being able to add PDFs in the chat and make pools of knowledge to select from would be great.

  • @inout3394
    @inout3394 2 months ago

    Thx

  • @Mursaat100
    @Mursaat100 2 months ago +1

    Thanks for the video!
    I did it as you said and got the model working (same one you picked). It ran faster than I expected, and I was impressed with the quality of the text and the general understanding of the model.
    However, when I uploaded some documents [in total just 150 KB of downloaded HTML from a wiki], it gave very wrong answers [overwhelmingly incorrect]. What can I do to improve this?

    • @TimCarambat
      @TimCarambat  2 months ago +1

      Two things help by far the most!
      1. Changing the "Similarity Threshold" in the workspace settings to "No Restriction". This basically allows the vector database to return all remotely similar results, with no filtering applied. The filter is based purely on the vector-database distance between your query and each snippet, and the resulting "score"; depending on the documents, query, embedder, and other variables, a relevant text snippet can be marked as "irrelevant". Changing this setting usually fixes the problem with no performance decrease.
      2. Document pinning (the thumbtack icon in the UI once a doc is embedded). This does a full-text insertion of the document into the prompt. The context window is managed in case it overflows the model; this can slow your response time by a good factor, but coherence will be extremely high.
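      To illustrate point 1, here is a toy cosine-similarity filter with made-up vectors and a hypothetical threshold (not AnythingLLM's actual code): a strict threshold can drop a borderline-but-relevant chunk, while "No Restriction" lets everything through ranked by score.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, threshold=0.25):
    """Return chunk texts whose similarity to the query beats the threshold,
    best first. A very low threshold behaves like "No Restriction"."""
    scored = [(cosine(query_vec, vec), text) for text, vec in chunks]
    scored.sort(reverse=True)
    return [text for score, text in scored if score >= threshold]
```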

    • @Mursaat100
      @Mursaat100 a month ago

      Thank you! But I don't understand what you mean by "thumbtack icon in the UI once the doc is embedded". Could you please clarify? @@TimCarambat

  • @mstew8386
    @mstew8386 a month ago

    How do I go in and change the LM Studio base URL and token context window in AnythingLLM after install?

  • @JasonStorey
    @JasonStorey 24 days ago

    Hey, great video. For some reason I don't have LM Studio as an optional provider in AnythingLLM. Any thoughts? Thanks.

    • @TimCarambat
      @TimCarambat  24 days ago

      That certainly isn't right... Where are you in the UI when you do not see LM Studio?

  • @adriAM55
    @adriAM55 2 months ago

    Does it work for understanding code projects?

  • @TheExceptionalState
    @TheExceptionalState 2 months ago +1

    Many thanks for this. I have been looking for this kind of solution for 6+ months now. Is it possible to create an LLM based uniquely on, say, a database of 6000 PDFs?

    • @TimCarambat
      @TimCarambat  2 months ago +2

      A workspace, yes. You could then chat with that workspace over a period of time, use the answers to create a fine-tune, and then you'll have an LLM as well. Either way, it works. There is no limit on documents or embeddings or anything like that.

    • @TheExceptionalState
      @TheExceptionalState 2 months ago

      @@TimCarambat Many thanks! I shall investigate "workspaces". If I understand correctly, I can use a folder instead of a single document and AnythingLLM will work with the content it contains. Or was that too simplistic? I see other people asking the same type of question.

  • @bhushan80b
    @bhushan80b a month ago

    That's a great one. I just got stuck in one scenario: after some time of use, asking any question gives the response "Could not respond to message. Request failed with status code 400." Please help!

  • @s.c.r.e.a.m.forp.e.a.c.e.3541
    @s.c.r.e.a.m.forp.e.a.c.e.3541 a month ago

    Downloaded. I'm excited to install it on my 3-GPU rig.

  • @anysobyspbyany7299
    @anysobyspbyany7299 2 months ago

    What is best for translating from English to other languages?

  • @MusicByJC
    @MusicByJC a month ago

    I am a software developer but am clueless when it comes to machine learning and LLMs. What I was wondering: is it possible to train a local LLM by feeding in all of your code for a project?

  • @bhushan80b
    @bhushan80b 24 days ago

    Hi Tim, the citations shown are not correct; it's just showing random files... Is there any way to sort this out?

  • @mstew8386
    @mstew8386 a month ago

    This all worked well until I closed it. Now it just says "loading workspaces" and never does anything.
    Do I need to run the setup exe again? How do I use this again?
    Where does this install on a Windows machine, and can I change it?

  • @aifortheworld7152
    @aifortheworld7152 2 months ago

    So it's free by using a GPU, correct? What is the minimum GPU, and how do I feed it data?

  • @viveks217
    @viveks217 2 months ago +3

    I have tried, but could not get it to work with the files that were shared as context. Am I missing something? It gives answers like "the file is in my inbox, I will have to read it," but it never actually reads the file.

    • @_skiel
      @_skiel a month ago

      I'm also struggling. Sometimes it refers to the context, but most of the time it forgets it has access even though it's referencing it.