HUGE LM Studio Update | Multi-Models with AutoGen ALL Local

  • Published Oct 26, 2024

Comments • 22

  • @UTubeGuyJK · 7 months ago +2

    I saw the update today and didn’t realize what the start server button was doing on this tab. Thank you for explaining! :)

    • @TylerReedAI · 7 months ago

      Awesome, glad it was helpful! Yeah I was a little confused as well

  • @DavidSchneir · 6 months ago +2

    The AI red car in the background looks pretty real. A new feature called TylerVision by TylerGen hahaha.

    • @TylerReedAI · 6 months ago +1

      Hey I do my best around here hahaha :D I like the sound of TylerVision...can have some interesting ideas

    • @DavidSchneir · 6 months ago +1

      @@TylerReedAI joking aside, it sounds cool. "TylerVision by TylerLabs" 🦾

  • @delightfulThoughs · 7 months ago +1

    Hi Tyler, great videos you put out every time. Thank you. A question, is it possible to have a different RAG for each agent in Autogen separately? Thank you again and keep it up!

    • @TylerReedAI · 7 months ago +1

      Hey, thanks for watching, appreciate it! So, a different RAG for each agent: I think this would be possible, because you can just instantiate multiple RAG agents. They're called RetrieveUserProxyAgent, I believe.
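
      For anyone curious, here's a rough sketch of that idea. It only builds the per-agent retrieval configs as plain dicts; the paths and collection names are made up, and the commented-out constructor calls assume AutoGen's `RetrieveUserProxyAgent` (from `autogen.agentchat.contrib.retrieve_user_proxy_agent`) is available:

      ```python
      # Sketch: one retrieval config per agent, each pointing at its own
      # document set and its own vector-store collection, so the agents
      # don't share retrieved context. Paths/collection names are hypothetical.
      coder_rag = {
          "task": "code",
          "docs_path": "./docs/coder",       # hypothetical docs folder
          "collection_name": "coder-docs",   # separate collection per agent
      }
      writer_rag = {
          "task": "qa",
          "docs_path": "./docs/writer",
          "collection_name": "writer-docs",
      }

      # Each config would then go to its own agent, e.g.:
      # RetrieveUserProxyAgent(name="coder_rag", retrieve_config=coder_rag)
      # RetrieveUserProxyAgent(name="writer_rag", retrieve_config=writer_rag)
      ```

      The key point is giving each agent a distinct `docs_path` and `collection_name`, so each one retrieves from its own index.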

  • @Marzuwq · 7 months ago +1

    Glad they updated it

    • @TylerReedAI · 6 months ago

      Yup agreed, good update

  • @Anthony-dj4nd · 5 months ago +1

    A.I. 🔥 so happy to be alive to see it🥹

  • @protovici1476 · 7 months ago +1

    Need to try this on my CrewAI frameworks.

    • @TylerReedAI · 6 months ago

      Let me know how this goes when you try it out!

    • @protovici1476 · 6 months ago

      @@TylerReedAI Absolutely! Likewise if you have time friend 🙂

  • @RetiredVet1 · 7 months ago

    I could not get any model to run under the Multi-Model mode of LM Studio, but I am running a Linux beta version on an older laptop, so I am not surprised I am having issues. It looks like they have taken a leap forward. I will keep trying, and maybe try it on a larger cloud server.

    • @TylerReedAI · 6 months ago

      Let me know! Yeah, that is the beta version; I just checked their Discord, and there are a few issues with the beta version right now.

  • @JaredWilliamScott · 6 months ago

    What would you recommend I run this on? I presume that some of the heftier models require about 64GB of RAM, yes?

    • @TylerReedAI · 6 months ago

      So I have 8GB of RAM, and I run the phi-2 model, which is one of the less demanding models hardware-wise, and that's okay. But as soon as I get to a 7B-parameter model like Mistral, it's really slow. These are Ollama's RAM recommendations:
      Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
      But I would add 8GB of RAM to each of those, in my opinion, for it to be smooth and quick.

  • @lomek4559 · 7 months ago

    1:20 LM Studio says the playground supports only full GPU offload, but in your case the top indicator shows that you're using RAM (with a maximum of 8 GB?) instead of VRAM. Is this an error on their side?

    • @TylerReedAI · 6 months ago

      Hmm, good question! I will have to look in their Discord to see if this has come up for other people. I didn't really notice this, tbh; I'll try to get back to you.

    • @lomek4559 · 6 months ago

      @@TylerReedAI OK, I'll be waiting for the response. Thanks in advance!

  • @handler007 · 6 months ago +1

    Is there RAG now? Otherwise the update is nothing more than bloatware.

    • @TylerReedAI · 6 months ago

      Not with this update; it's more about allowing multiple models to run locally off of one server, which is pretty helpful.
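
      To make the multi-model point concrete, here is a minimal sketch of how two AutoGen agents could share one local LM Studio server. It assumes LM Studio's OpenAI-compatible endpoint at its default address (`http://localhost:1234/v1`); the model names are just examples of what might be loaded:

      ```python
      # Sketch: two AutoGen llm_config entries sharing one LM Studio server.
      # In multi-model mode, the "model" field selects which loaded model
      # handles the request; the API key is unused by a local server.
      base = {"base_url": "http://localhost:1234/v1", "api_key": "lm-studio"}

      mistral_config = {"config_list": [{**base, "model": "mistral-7b-instruct"}]}
      phi_config = {"config_list": [{**base, "model": "phi-2"}]}

      # Each agent would then get its own config, e.g.:
      # autogen.AssistantAgent(name="coder", llm_config=mistral_config)
      # autogen.AssistantAgent(name="writer", llm_config=phi_config)
      ```

      Both configs point at the same `base_url`, so one local server process serves both agents, each talking to a different model.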