Popularity doesn't always mean Great, But Pretty Good is Possible

  • Published on 23 Nov 2024

Comments • 57

  • @debjit21 · 6 months ago · +5

    My gosh, Matt, the RAG is why I find this appealing. With agents as well, it is far from perfect, but it works.

  • @drmetroyt · 6 months ago · +3

    3:34 - to use RAG and fetch information from the vector database, you have to select Query mode instead of Chat in the workspace settings.

  • @figs3284 · 6 months ago · +9

    I feel like the creator should watch this video carefully and address all of these concerns 👍

    • @TokyoNeko8 · 6 months ago · +1

      @TimCarambat hi, this is a good video covering your tool!

    • @Borszczuk · 6 months ago

      @@TokyoNeko8 I do not think "@nick" notification works on YT.

  • @tiredofeverythingnew · 6 months ago · +1

    Agree with all your points. I wanted it to be so much better than it actually is. Needs more work imho, but I appreciate the work done by the devs.

  • @retronftgroup4133 · 6 months ago · +1

    I love AnythingLLM version 1.0 - it just needs image support, and if they improve the visuals I think we are good to go...

  • @bodyguardik · 6 months ago · +4

    LM Studio is popular because it was native on Windows before Ollama had the same, I guess.

  • @YorkyPoo_UAV · 6 months ago

    I've been using the cloud version because it was the easiest thing I could find for setting up a chatbot for my wiki-style website for firefighters. With the others, I could not put enough documents in the RAG without upgrading for $75 a month.

  • @aivy-aigeneratedmusic6370 · 6 months ago · +1

    Still haven't found an alternative with which I am fully satisfied. Still going back to ChatGPT...
    But it's also hard to keep an overview of everything.

  • @jayd8935 · 6 months ago

    I have to admit that I find the tool OK on Llama3, but less nice on Phi for my use case, which could just be my misunderstanding of how it 'primes' the model before chat. It hallucinates a lot, at one point suggesting something was purchased for 5000 pieces of eight. Go figure. It was still good enough that I have added it to my growing arsenal of tools. I hope the author keeps improving it - things like editing/changing responses and so on. Thank you for the video, Matt!

  • @TokyoNeko8 · 6 months ago

    Spot on... I tried it and saw some of the issues and hallucinations. It would tell me Llama3 is made by Microsoft, and when documents were not imported and I had Query mode on - where it should only answer if there is a hit on the RAG docs - it still just answered on its own from the language model.

  • @mrka_ · 5 months ago

    Agree with the conclusion. I've tried it twice already. The main point is that the RAG is still not delivering. The ChatGPT RAG-like feature is performing better these days.

  • @kiiikoooPT · 6 months ago · +2

    Comparing Ollama with LLMStudio is like comparing any Linux distro with Windows.
    Ollama is a CLI tool that does not have a GUI, and LLMStudio is a GUI tool that does not have a CLI. If all you need is the part where it serves the model, you can do that with both, but then you need to connect with code.
    I don't see a reason to use the server side of LLMStudio if you don't want to use the app GUI; that is what Ollama was created for. But LLMStudio still has one good thing if you are a novice coder, which is that, at least for me, I can use it as a server on one machine and go to another machine to do just the code.
    It has a weak spot, though: it keeps consuming RAM on your GPU even when you use it only from the interface.
    The easy part is that to do all this you just need to click a few buttons; in Ollama the only way is the CLI, or messing with environment variables, if you even know what those are.
    If you are using Ollama you are OK with using a CLI, so you probably know, but I am almost sure 99% of LLMStudio users are just users that want a chatbot that runs with a click, not to have to handle a CLI or code.
    And both of those apps share the same problem, which is that the GPU side only runs on recent hardware. For example, I unfortunately don't have the resources to buy another PC right now, so all I have is my old laptop with an NVIDIA 920M GPU that only has 2 GB of dedicated VRAM. Neither of those apps works with that. But if I download llama.cpp and the CUDA toolkit and build llama.cpp on my laptop, I can run models that fit in that VRAM through llama.cpp, yet I cannot in either Ollama or LLMStudio (a minimal example of this follows the comment).
    So for anyone saying one tool is better than the other: either you don't know how both of those tools work, or you are so comfortable with tech that you don't see that most people don't even know what a CLI is. Or you are taking sides, because you work with, or in relation to, one of those tools or the companies that make them.
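
A minimal sketch of the workaround described above: running a model that fits in roughly 2 GB of VRAM by offloading only some layers to the GPU and keeping the rest on the CPU, which llama.cpp supports. This assumes the llama-cpp-python bindings built with CUDA support rather than the bare llama.cpp build the commenter used; the model file name is a hypothetical placeholder.

```python
# Sketch only: assumes llama-cpp-python installed with CUDA/cuBLAS enabled
# (build flags vary by version) and a quantized GGUF model on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=8,   # offload only a few layers; tune down until it fits in 2 GB of VRAM
    n_ctx=2048,       # modest context window to keep memory use low
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what RAG is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```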

    • @technovangelist · 6 months ago

      I think you are referring to LM Studio, not LLM Studio. They are two different products. I don't know much about LLM Studio yet. But LM Studio and Ollama perform mostly the same function, just LM Studio does it poorly and far slower.

    • @kiiikoooPT · 6 months ago

      @@technovangelist Wait, I feel like an idiot now.
      You're telling me that there are two different tools with almost the same name? lmstudio and llmstudio? Clever devs, btw.
      I tried to find it, but all I see is people talking about lmstudio, the purple one, the app. I also found two different things: h2o-llmstudio on GitHub, and a video on YouTube from the channel TensorOps titled "Introduction to LLM Studio" - is that the one you are talking about?
      If so, I'm really sorry about the confusion; no wonder I never heard of it, given the little info there is about it, and also because they were smart enough to blow their own brand with a name almost identical to another app's.
      I really feel like an idiot lol, but I was assuming you were talking about the lmstudio app, and judging by some other comments in here I was not the only one; there is even a comment from someone saying lmstudio was made for Windows first, and then Ollama made the Windows version. So it made me even more sure you were talking about the same thing I was, lol.
      Thank you for your reply. Sometimes we are quick to get mad and start ranting about something before having the entire context, and I try not to do that, but given the info you gave and the popularity of lmstudio versus llmstudio, I never even realized there was another one. I still stand by what I said in my first comment, though: I have seen multiple YouTubers comparing lmstudio to Ollama, and it is always a bad comparison because they have nothing to do with each other; like you said in your reply, they are different tools for different use cases.
      Once again, sorry for the confusion ;)

    • @kiiikoooPT · 6 months ago

      @@technovangelist I replied to your comment here saying I feel like an idiot now for confusing two different products, but for some reason that comment is not showing; maybe I didn't send it right. But then I went to check your video again, and you clearly say lmstudio, not llmstudio, and you are talking about a tool that runs not on top of Ollama but instead of Ollama and has its own GUI, so we are talking about the same thing, right?
      Can you please explain what tool you are talking about if it is not the lmstudio app? Because I am totally lost here. :p

    • @technovangelist · 6 months ago

      Correct. I talked about lm studio which is comparable to ollama. Llm studio I believe runs on top of tools like ollama and lm studio.

  • @-........ · 6 months ago

    Creating custom system prompts on a per-chat basis is really easy in LM Studio, but that's about it, I think.

  • @mohammed333suliman · 5 months ago

    Thank you, I became a fan of your YT channel.

  • @ihaveacutenose · 6 months ago · +1

    A review of Perplexica would be great.

  • @Maisonier · 6 months ago

    Great video, it's great to learn about new software. It would be awesome to see a comparison with other similar apps.

  • @jofus521 · 6 months ago

    Idk about your version, but mine has the option to change the top left icon under Appearance.
    They also have a chat embed feature.

  • @vaughanjackson2262 · 5 months ago

    I see so many RAG videos, but there doesn't seem to be one that stands out as a "favourite" amongst the crowd.
    Is anyone actually using any open-source RAG (local models) in production?

    • @technovangelist · 5 months ago

      There is a lot of RAG used in production, but I doubt any of them are using these projects.

    • @vaughanjackson2262 · 5 months ago

      @@technovangelist Tend to agree... RAGFlow is one you may want to take a look at for your channel.

  • @mistersunday_ · 6 months ago

    Breathe life into our RAGdolls, master Geppetto!

  • @jofus521 · 6 months ago

    Did you say all the models support tool/function calling in Ollama? I'm not sure that is true out of the box with Ollama. If it is, can you go over that more? That must be a brand-new feature.

    • @technovangelist · 6 months ago

      I did say that. They have all worked well for that since October or November of last year. So nothing really new.

    • @jofus521 · 6 months ago

      Oh, cool. I didn't realize almost all the libraries are just that far behind. Some of them use JSON mode and/or a wrapper with special parsing. Not sure if any support streaming or multi-function calling either. Maybe soon.
      Just deployed AnythingLLM to Kubernetes today. Works well.
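
Since this exchange is about tool/function calling with Ollama-served models, here is a minimal sketch of what that looks like from Python. It assumes a recent ollama Python client whose chat() accepts a tools parameter (older clients rely on JSON-mode prompting, as noted above); the model name and the get_weather tool are hypothetical examples, not anything from the video.

```python
# Sketch only: requires a local Ollama server and a tool-calling-capable model.
import ollama

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)

# When the model decides to call a tool, the call appears here instead of plain text.
for call in resp["message"].get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])
```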

  • @new_artiko · 6 months ago

    Thank you, Matt!

  • @madytyoo · 6 months ago

    Thank you, very informative.

  • @claxvii177th6 · 3 months ago

    I love AnythingLLM though.

  • @mbarsot · 6 months ago · +1

    I cannot get the RAG on docs to work. I mean, following your video it is easy to understand how to do it, only... the results are not good, worse than a Python program using RAG directly. Anyway: 1 GB?? How can it be this large, when Ollama is much smaller?
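
For context, "a Python program using RAG directly" can be as small as the sketch below: embed a few documents with an Ollama embedding model, pick the chunk closest to the question by cosine similarity, and put it in the prompt. This is a rough illustration, not the commenter's program; it assumes a local Ollama server with nomic-embed-text (the embedder named at the end of this thread) and a llama3 model already pulled, and the documents are placeholders.

```python
# Sketch only: naive in-memory RAG against a local Ollama server.
import ollama

docs = [
    "Placeholder document: AnythingLLM keeps uploaded documents in per-workspace vector stores.",
    "Placeholder document: Query mode only answers when a retrieved chunk matches the question.",
]

def embed(text: str) -> list[float]:
    # One embedding per text; a real program would chunk long documents first.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

doc_vectors = [embed(d) for d in docs]

question = "What does Query mode do?"
q_vec = embed(question)
best_doc = max(zip(docs, doc_vectors), key=lambda dv: cosine(q_vec, dv[1]))[0]

resp = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}",
    }],
)
print(resp["message"]["content"])
```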

  • @anthony-fi · 6 months ago

    Discord is great.. if you want to have different channels about different topics.

  • @drmetroyt · 6 months ago

    It's the only well-working RAG project for AI... nothing comes close to AnythingLLM, and in the future it'll be even greater, as the project is constantly changing and upgrading, and it's open source 🎉

    • @technovangelist · 6 months ago

      Well that’s definitely not true.

  • @sammcj2000 · 6 months ago

    AnythingLLM looks like some melted plastic I pulled out of the fireplace 😂

  • @mikedavis1426 · 4 months ago

    Wow, Matt, two months on and this review is old and irrelevant, and I don't mean to sound so cruel. Wow!! AI developments are moving fast... This video reviewed AnythingLLM version 1.5.4, but it is up to v1.5.11 now, and just about everything you leaned on is fixed or improved. RAG is its superpower, and overall it is an outstanding UI for Ollama, but it's hard on a content provider when the software development is moving so fast!!!!!

    • @technovangelist · 4 months ago

      I looked and was going to do an update in the last week, but it didn't seem like it changed much.

    • @technovangelist · 3 months ago · +1

      You seem to be a super fan. I'll update the video when it changes enough to warrant it. The review is pretty spot on still.

  • @bodyguardik · 6 months ago

    Which embedding model did you use in AnythingLLM?

    • @adsfb5118 · 6 months ago

      It's in the video: nomic-embed-text.