Rivet: How To Run Multiple Local LLMs In Your Projects With Ollama! Easy Comparison - No Code

  • Published Oct 15, 2024

Comments • 20

  • @AIMadeApproachable
    @AIMadeApproachable  8 months ago +4

    Update 21.02.2024: The Ollama plugin now has a revamped "Ollama Chat" node which works out of the box (no prompt formatting necessary!).
    Also, if people have issues connecting in Rivet:
    1. Click on the "..." in the top right corner and choose the "node" executor.
    2. If that does not help, go to "settings" and also replace the "Host (ollama)" URL with 127.0.0.1:11434 (a quick connectivity check follows below).
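
    A minimal sketch of that connectivity check, assuming a default local Ollama install (Python standard library only; /api/tags simply lists the installed models):

    ```python
    # Verify that a local Ollama server answers on the address Rivet expects.
    import json
    import urllib.request

    OLLAMA_HOST = "http://127.0.0.1:11434"

    # /api/tags returns the locally installed models; a successful response means
    # the "Host (ollama)" setting in Rivet can point at this URL.
    with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags") as resp:
        models = json.load(resp).get("models", [])
        print("Ollama reachable, installed models:", [m["name"] for m in models])
    ```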

  • @GiovanneAfonso
    @GiovanneAfonso 9 months ago

    Incredible work with the prompt formatting! Thank you for sharing this

  • @holgergelhausen8616
    @holgergelhausen8616 9 months ago

    Fantastic! That was my goal, to build something like this for my use cases. Great explanation! Great.

  • @Gipeties
    @Gipeties 8 months ago

    Your videos are amazing. Thank you so much for sharing them!

  • @mayorc
    @mayorc 8 months ago +1

    It would be interesting to see a tutorial where the strengths of different models are stacked together to get the best possible output. For instance, you said Phi is good at reasoning; it could be useful to build on that, or to assign specific models a role (reasoning, coding, and other capabilities, using the benchmarks for reference) and create a graph similar to a multi-agent framework, with the main difference that you specify not only the roles of the agents but also the best local model tailored to each specific task/agent. A rough sketch of this idea follows below.
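
    One possible way to sketch this role-based routing, assuming Ollama's /api/generate endpoint and example model names (phi, codellama) that would need to be pulled first; in Rivet the same graph would use one Ollama Chat node per role feeding a combining node:

    ```python
    # Route sub-tasks to different local models, one model per "role".
    import json
    import urllib.request

    OLLAMA_HOST = "http://127.0.0.1:11434"
    ROLES = {"reasoning": "phi", "coding": "codellama"}  # example role -> model mapping

    def ask(model: str, prompt: str) -> str:
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(f"{OLLAMA_HOST}/api/generate", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

    # The reasoning model plans, the coding model implements the plan.
    plan = ask(ROLES["reasoning"], "Outline the steps to parse a CSV file and sum one column.")
    print(ask(ROLES["coding"], f"Write Python code for this plan:\n{plan}"))
    ```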

  • @deeplearningdummy
    @deeplearningdummy 8 months ago

    Is there a way to change the download location of the LLM within Rivet? Or, if I download it manually, can I change the path in Rivet? Thanks for all the Rivet videos!

    • @AIMadeApproachable
      @AIMadeApproachable  8 months ago +1

      Not really. You need to configure it in Ollama. They have implemented that feature, but I cannot find proper documentation for it: github.com/ollama/ollama/issues/680 (see the sketch below).
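
      For reference, the relevant setting appears to be the OLLAMA_MODELS environment variable (treat this as an assumption and check the linked issue); a hedged sketch with a placeholder path:

      ```python
      # Start `ollama serve` with a custom model storage directory.
      # OLLAMA_MODELS is assumed to control the storage location; the path is a placeholder.
      import os
      import subprocess

      env = dict(os.environ, OLLAMA_MODELS="/path/to/custom/ollama-models")

      # Models pulled or loaded through this server instance use the custom directory.
      subprocess.run(["ollama", "serve"], env=env, check=True)
      ```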

    • @deeplearningdummy
      @deeplearningdummy 8 months ago

      @AIMadeApproachable Thank you. Now I don't feel so bad about not figuring this one out!🙃

  • @holgergelhausen8616
    @holgergelhausen8616 9 months ago

    What are the HW specs for your Mac? How much memory is needed for this workflow?

    • @AIMadeApproachable
      @AIMadeApproachable  9 months ago +1

      M2 processor, no separate graphics card, 24 GB of RAM.
      But I think all of the models should be able to run if you have 8–16 GB of RAM. And Ollama makes sure that only one model is loaded at a time, so memory requirements do not grow with the number of models you use.

  • @turnerworks252
    @turnerworks252 4 months ago

    Can you supply the rivet-graph file?

  • @mudakisaa
    @mudakisaa 9 months ago +1

    For some reason it doesn't work under Linux. The nodes produce errors (Error from Ollama: Load failed) and it looks like Rivet doesn't want to communicate with Ollama. Anyway, thanks a lot for the video and especially for the project files.
    For the people who read this and run into the same problem: if you find a solution somewhere, please tell me about it.

    • @AIMadeApproachable
      @AIMadeApproachable  9 months ago +1

      Try changing the executor to "node". That often helps with these kinds of issues. Or did you already do that?

    • @mudakisaa
      @mudakisaa 9 months ago

      @AIMadeApproachable Thanks, man! In the end this solution helped, in conjunction with installing into the system instead of using the portable version.

    • @deeplearningdummy
      @deeplearningdummy 8 months ago

      @AIMadeApproachable I get the same errors, but when I change executor to "node", the run button disappears. Any thoughts?

    • @AIMadeApproachable
      @AIMadeApproachable  8 months ago +1

      @deeplearningdummy Those are CORS errors when using the browser executor. There is not much you can do about it, so I recommend just sticking to the node executor.
      The Rivet team is currently even thinking about removing the browser executor for reasons like this.

    • @deeplearningdummy
      @deeplearningdummy 8 months ago

      @AIMadeApproachable Thanks for the insights!

  • @connoradair
    @connoradair 7 months ago

    Is there a way to do this with GGUF models?

    • @AIMadeApproachable
      @AIMadeApproachable  7 months ago

      Sure. Ollama can run GGUF models. If you have a model that is not in the Ollama model list, you just need to manually create a Modelfile and use the "ollama create" command (a minimal sketch follows below). There are some good instructions here:
      otmaneboughaba.com/posts/local-llm-ollama-huggingface/
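
      A minimal sketch of that workflow, assuming a local GGUF file; the model name and paths are placeholders, and TEMPLATE/PARAMETER lines can be added to the Modelfile if the model needs a specific prompt format:

      ```python
      # Register a local GGUF file with Ollama so Rivet's Ollama nodes can select it.
      import subprocess
      from pathlib import Path

      gguf_path = Path("./models/my-model.Q4_K_M.gguf")  # placeholder path to the GGUF file

      # A Modelfile only needs a FROM line pointing at the GGUF file.
      Path("Modelfile").write_text(f"FROM {gguf_path.resolve()}\n")

      # Build the model under a name that Ollama (and Rivet) can reference.
      subprocess.run(["ollama", "create", "my-local-model", "-f", "Modelfile"], check=True)

      # Quick smoke test from the CLI; in Rivet, pick "my-local-model" in the Ollama Chat node.
      subprocess.run(["ollama", "run", "my-local-model", "Say hello in one sentence."], check=True)
      ```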