Easily Run LOCAL Open-Source LLMs for Free

  • Published Oct 26, 2024

Comments • 19

  • @GrantLylick · 7 months ago +1

    Loading multiple models is nice, but you need a video card with more than 8 GB of VRAM. You can load two smaller models, but it obviously won't be as good as loading two 7-billion-parameter models.

    • @redamarzouk · 7 months ago

      That is true, decent hardware is the main consideration.
      But with a good enough Nvidia card, not only will you be able to run really good models like Mistral 0.2 for daily use, you'll be able to benchmark good models as well. I see it as an investment, to be honest.
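As a rough sanity check on the 8 GB point above: the VRAM needed to load a model is approximately parameter count times bytes per weight, plus some headroom for the KV cache and activations. The helper and the 1.2 overhead factor below are illustrative assumptions, not an exact rule:

```python
def estimate_vram_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rule-of-thumb VRAM estimate: weights dominate memory use,
    with ~20% extra assumed for KV cache and activations."""
    bytes_per_weight = bits_per_weight / 8
    return n_params_billion * bytes_per_weight * overhead

# A 7B model in 4-bit quantization fits comfortably in 8 GB...
print(round(estimate_vram_gb(7, 4), 1))   # roughly 4.2 GB
# ...but the same model in fp16 does not:
print(round(estimate_vram_gb(7, 16), 1))  # roughly 16.8 GB
```

This is why two 7B models at once push past an 8 GB card unless both are aggressively quantized.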

  • @ArisimoV · 5 months ago +1

    Can you use this for a self-operating PC? Thanks

    • @redamarzouk · 5 months ago

      Believe me, I tried, but my NVIDIA RTX 3050 4 GB simply can't handle filming and running Llava at the same time.
      Hopefully I'll upgrade my setup soon and be able to do it.

    • @ArisimoV · 5 months ago

      So it is possible; it's just a matter of programming and PC specs.

  • @AJ5 · 7 months ago +1

    Is LMStudio portable? Can I copy it to a USB disk and run it on any machine?

    • @redamarzouk · 7 months ago

      I think you can download it to any USB disk and install it on any machine, as long as you have admin rights on that machine.

    • @chipcookie707 · 7 months ago

      What sketchy reasons do people have for wanting a completely private LLM?

    • @redamarzouk · 7 months ago +1

      @chipcookie707 Nothing sketchy about privacy. A private LLM guarantees your data is not going to be used for training new models (say you're building an app and you want to keep the code private).
      You will have more control over the output (more configuration options).
      Better performance with the right hardware.
      A specialized LLM will yield better results (there are LLMs just for coding, for example).
      And data security: if those companies are breached, your data won't be there.
      I could go on about how good private LLMs are, but I'll stop here.

    • @AJ5 · 7 months ago +1

      Wow Reda, that is such a long, comprehensive list of reasons. Honestly, it deserves its own video instead of being buried in this reply section.

  • @handler007 · 7 months ago +2

    Too bad it does not allow referencing documents for responses.

    • @redamarzouk · 7 months ago

      You're right about that; I thought about it while I was filming.
      But they seem to have a good team in place, so maybe they'll introduce it in the next release.

    • @GrantLylick · 7 months ago

      I thought it had embedding. I will have to recheck that.

    • @GrantLylick · 7 months ago +1

      Yes, it does. I loaded Llava and asked it to describe a picture.

    • @GrantLylick · 7 months ago

      If you want something different, you could use AnythingLLM. It has an easier embedding setup that even lets you load a web page for it to look at. So use whatever you want to load and serve a model, then just use AnythingLLM to chat with it.

    • @redamarzouk · 7 months ago +1

      @GrantLylick Thank you for pointing that out.
      I have to try that myself; I usually go through Hugging Face Spaces to see how good Llava is getting.

  • @gregisaacson660 · 7 months ago +1

    I'm using Oogabooga; is this better? What would you say? I will try it out.

    • @redamarzouk · 7 months ago

      Oogabooga is a really good option for controlling all the configurations of your model.
      But I find LMStudio more user friendly, and you can also start a server for multiple models at the same time, which helps create highly expert agents for specific fields.
      So I prefer LMStudio now.
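For anyone who wants to script against those local servers: LM Studio exposes an OpenAI-compatible HTTP API (by default at http://localhost:1234/v1). Below is a minimal sketch using only the Python standard library; the helper names and the "local-model" placeholder are illustrative assumptions, not part of LM Studio itself:

```python
import json
import urllib.request

def build_chat_request(user_message, model="local-model",
                       base_url="http://localhost:1234/v1"):
    """Build the (url, body) pair for an OpenAI-style chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }
    return f"{base_url}/chat/completions", json.dumps(payload).encode("utf-8")

def ask_local_model(user_message, **kwargs):
    """Send the request to a locally running LM Studio server and
    return the assistant's reply text."""
    url, body = build_chat_request(user_message, **kwargs)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]
```

To talk to two models at once, start two servers on different ports and pass a different `base_url` to each call.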