What is an LLM Router?

  • Published Dec 31, 2024

Comments • 75

  • @JohnBoen · 6 months ago +7

    Haha.
    Last night I was chatting with someone about how to come up with data to solve this problem. I have a story-writing engine that can blow through $10 of tokens in minutes. It is getting really expensive just to develop it.
    This morning I was going to look around to see if anybody had something like this.
    And the solution to my quest is in the first video I watched this morning.
    I hope the rest of my day is this awesome.

    • @toadlguy · 6 months ago +3

      You could probably modify the code as well so that you have both a "debug" context and a "production" context so that cheaper LLMs could be used when the final output doesn't require the most expensive tokens.
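
      A minimal sketch of what that debug/production split could look like, assuming the langchain-openai package (the env flag and model names here are illustrative, not anything from the video):

      ```python
      import os
      from langchain_openai import ChatOpenAI

      # Hypothetical switch: cheap model while developing, frontier model
      # only when the final output quality actually matters.
      IS_DEBUG = os.getenv("APP_ENV", "debug") != "production"

      llm = ChatOpenAI(model="gpt-3.5-turbo" if IS_DEBUG else "gpt-4o")
      print(llm.invoke("Draft the next paragraph of the story.").content)
      ```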

    • @JohnBoen · 6 months ago

      @toadlguy
      Thanks. You have pointed me to solutions several times.
      We were trying to figure out how to get prompt-plus-user-choice data. There are plenty of sites that run your prompt against two LLMs and ask which you prefer. That is some really valuable data...
      Now I don't need to - and that would have blocked me.
      My current algorithm involves looking at the size and complexity of the prompt. Most of my prompting is highly structured and templatized - I expect LLMRouter will pick GPT-4o most of the time.
      But later today I will find out :)
      Now I suspect I need to find a way to generalize the use of tokenizers and different embeddings and such... Do you have a quick and easy solution to this, too?
      Thanks for all the help.

    • @JohnBoen · 6 months ago +1

      @toadlguy
      It was a nice thought...
      The application I am making uses prompts that contain RDF - state/node/conditional edge... names, descriptions, pass messages, fail messages...
      The prompt then acts as a state engine outputting the proper messaging based on the non-templatized stuff passed in.
      This is ChatGPT 4o all the way.

    • @anantgupta3285 · 6 months ago

      @JohnBoen Do post how it went once you test it.

    • @JohnBoen · 6 months ago

      @anantgupta3285
      Sure.
      I get different results - undesired results - when I try using lesser models.
      My initial thoughts:
      * Templatized agent prompts work differently in different LLMs.
      * It would be quite a bit of work to revise things so I could dynamically select the LLM - this is not something I considered when I started playing with LangChain...
      Not gonna work for this project.
      However - I am writing an auto-tester to execute my standard test suite against new LLMs... I will plug it into that...
      However...
      GPT-4o on the $20 plan supports 8k tokens. I assume 750 words per 1,000 tokens, so roughly 6,000 words in the context window.
      Realistically this limits my story engine to about ten 275-word pages of text. A couple of chapters before it loses track of the story metadata and rules...
      A team account would get me 32k tokens, perhaps 50 pages...
      I think I might just need to use Google's million-token context window...
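
      A quick back-of-the-envelope sketch of that context-window math (the 750-words-per-1,000-tokens ratio is the commenter's assumption; the overhead figures are mine, to show how the page counts above could arise):

      ```python
      # Rough context-window budgeting at ~750 words per 1,000 tokens.
      WORDS_PER_TOKEN = 750 / 1000

      def pages_that_fit(context_tokens: int, words_per_page: int = 275,
                         overhead_tokens: int = 0) -> float:
          """Pages that fit after reserving tokens for the templated
          prompt, story metadata, and rules."""
          usable_words = (context_tokens - overhead_tokens) * WORDS_PER_TOKEN
          return usable_words / words_per_page

      print(pages_that_fit(8_000))                           # ~21.8 pages, no overhead
      print(pages_that_fit(8_000, overhead_tokens=4_000))    # ~10.9 once the prompt eats half
      print(pages_that_fit(32_000, overhead_tokens=13_000))  # ~51.8
      ```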

  • @jeffg4686 · 6 months ago +2

    Would be interesting to see how this and MoA (mixture of agents) could be used together.
    Perhaps the router could send a query to a different tier: several smaller models working together, medium models together, larger models together, and/or larger models mixed with smaller ones.

  • @KevinKreger · 6 months ago +1

    Great one, Sam. So, to make this all about me :-), I've been using GPT4x as the router/manager, under the theory that it is the smartest (this is a Mixture of Agents). Then the agents are cheaper. I can see this is much better. Thanks!

  • @toadlguy · 6 months ago +4

    Wow, it is great that this was released with the entire framework open source, as I believe that this (or something like it) will be part of the interface we will all be using soon. The other component is determining what data is required to respond. For instance, does the query require proprietary or personal data? This would first create a context (through RAG) for that data, but also determine which LLMs would be available to that context based on the required security (do you even want to send the proprietary data to a commercial LLM?). Also, with Llama3 8B, this could be done locally (at almost no cost). BTW, this is part of the framework that Apple will be implementing, but it can be tailored for many other applications now using this framework and LangChain (for instance).
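
    As a rough illustration of that security-aware routing idea (the sensitivity check and model names below are hypothetical, not from the video or any framework):

    ```python
    # Hypothetical router: queries touching proprietary or personal data
    # stay on a local model; everything else may use a commercial API.
    PRIVATE_MARKERS = ("salary", "customer", "internal", "confidential")

    def pick_backend(query: str) -> str:
        if any(marker in query.lower() for marker in PRIVATE_MARKERS):
            return "llama3-8b-local"   # data never leaves the machine
        return "commercial-llm-api"    # cheap hosted model is fine here

    print(pick_backend("Summarize our confidential Q3 revenue notes"))
    # -> llama3-8b-local
    ```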

  • @RolandoLopezNieto · 6 months ago +1

    Great open-source release, thanks for the video

  • @lydedreamoz · 6 months ago +4

    Claude 3.5 Haiku with this framework is gonna be insane. Nice video as always!

  • @bayesian7404 · 5 months ago

    Great work and excellent explanation. Thank you

  • @SanjaySingh-gj2kq · 5 months ago

    Good one. It will really help enterprises save costs.

  • @StephenRayner · 6 months ago

    Now that is OPEN 😮 wow. Great work!

  • @thirdreplicator · 6 months ago

    I'd like to see more examples of applications of LLMs.

  • @YuleiSheng · 6 months ago

    This makes so much sense

  • @andrew.derevo · 5 months ago

    This is good stuff 🙌♥️

  • @AdrienSales · 6 months ago

    This is actually very interesting. Concretely, when you use LangChain and have statically linked LLMs on some custom tools, how could we redirect this directly from LangChain so that the routing is made afterwards?

  • @themax2go · 5 months ago

    This might actually work really well in test scenarios, i.e. finding which LLM provides the best accuracy-vs-speed compromise, for example in RAG / knowledge-graph systems.

  • @yazanrisheh5127 · 5 months ago

    Can we have a code example of this using LangChain, since it's the most common framework people use for LLMs, please?

  • @VastCNC · 5 months ago

    Can this pair a local model with a cloud LLM and be even cheaper? Would love to see it with the new generations of Phi and Gemma.

  • @thesimplicitylifestyle · 6 months ago

    Very helpful! Thank you! 😎🤖

  • @armans4494 · 6 months ago

    Wonderful video. Thank you

  • @not_a_human_being · 5 months ago

    What about "function calling"? Can you really move between models?

  • @amj-9421 · 6 months ago +1

    What about the latency impact? Wouldn't this preclude a lot of production use cases?

    • @jarail · 6 months ago

      The router would be a very small and fast model, and the cheaper model would also be a smaller model. Since cheaper models are smaller, they respond much faster. Latency would only go up a tiny bit for the responses that get routed to the most expensive model. Overall, you'd see a massive improvement in latency.
      Consider cost to be a proxy for compute time spent. They say they save 85% of the cost while maintaining 95% of the benchmark score, so estimate an 85% latency reduction (not counting the fixed networking latency). This doesn't play out exactly, since expensive models are more parallelized as well, but you get the idea.
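
      A toy expected-latency calculation under that reasoning (all the numbers below are made up for illustration):

      ```python
      # Made-up latencies and routing split, just to show the shape of the math.
      cheap_s, expensive_s, router_s = 0.5, 5.0, 0.05
      frac_cheap = 0.8  # fraction of queries routed to the cheap model

      expected = router_s + frac_cheap * cheap_s + (1 - frac_cheap) * expensive_s
      print(f"{expected:.2f}s average vs {expensive_s:.1f}s always-expensive "
            f"({1 - expected / expensive_s:.0%} faster)")  # 1.45s vs 5.0s, 71% faster
      ```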

    • @longboardfella5306 · 6 months ago

      I would think so. Perhaps that's another parameter to be optimised for: the latency required for the type of query. E.g. conversational voice input would route to a low-latency model like the upcoming 4o voice mode.

  • @DanielJAHENY · 6 months ago

    Worth comparing how well it performs vs. the semantic-router lib, which is also free to use.

  • @hqcart1 · 6 months ago +5

    It's a good idea, but not practical in real-life apps.
    I use multiple LLMs for my app; I manually test them first and make sure the weaker models are suitable for my tasks, then I route each of the different tasks to different LLMs based on intensive test results.
    I am unsure how or where this router AI would be useful.

    • @tvwithtiffani · 6 months ago

      Maybe it can help you iterate faster: manually test your prompt + models more quickly and see which of the cheapest models are suitable for each task.

    • @JohnBoen · 6 months ago

      @hqcart1
      Initial thoughts...
      After poking around with example prompts...
      Highly structured and templatized prompts seem to always suggest frontier models.
      Not too useful for me either.

    • @hqcart1 · 6 months ago

      @tvwithtiffani No, all it does is add another layer of complexity and latency via a judge (router LLM) that is weak by nature, so it will guess which model to use based on unknown criteria. A manual route is way better: no complex systems, and reliable results.

    • @tvwithtiffani · 6 months ago

      @hqcart1 Oh well, don't use it then 🤷🏾‍♀️ And it's not unknown criteria. The video said it uses its LLM arena data.

    • @hqcart1 · 6 months ago

      @tvwithtiffani Exactly, that's the unknown.

  • @Mzulfreaky · 6 months ago +2

    Interesting 😮

  • @nickludlam · 6 months ago

    I don't know whether this data-oriented way of evaluating where to direct a query is going to be better than a task-based one. For my app, it would be far easier to route summarisation tasks vs. data-extraction tasks vs. other tasks.

  • @Viki_vigneshwaran · 6 months ago

    Good insight

  • @jarail · 6 months ago +1

    Haha, how is this new? I started doing this about 20 mins after trying GPT-4. I appreciate the formal framework and improvements they've made, though. That said, I use GPT-3.5 to filter first. And yeah, it saved me a ton of money. Not only is it the cheaper model, it also uses simpler (short) prompts, like "respond IGNORE if this message is not asking for a response." Then I'll only send the messages that need responses to GPT-4, with a full prompt. Use a simple model first. Save tokens. Save 20-50x on LLM costs (my use case). Profit.
    Also worth noting that ChatGPT has something similar. People have long known that some responses get routed to GPT-3.5 vs. 4.0+.
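
    A minimal sketch of that cheap-filter-first cascade (paraphrasing the commenter's approach with the openai Python client; the prompts and model choices are illustrative):

    ```python
    from openai import OpenAI

    client = OpenAI()
    FILTER_PROMPT = ("Respond IGNORE if this message is not asking for a "
                     "response. Otherwise respond ANSWER.")

    def maybe_reply(message: str) -> str | None:
        # Cheap, short-prompt pass first: most messages stop here.
        verdict = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": FILTER_PROMPT},
                      {"role": "user", "content": message}],
        ).choices[0].message.content
        if verdict and "IGNORE" in verdict:
            return None  # no expensive tokens spent
        # Only messages that need a reply pay for the big model + full prompt.
        return client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "system", "content": "<full production prompt>"},
                      {"role": "user", "content": message}],
        ).choices[0].message.content
    ```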

    • @GregoryMcCarthy123 · 6 months ago +1

      Haha, nice man, yeah I’ve been doing the same thing too, I also use the cheap guys for filtering. It’s very cool to bump into someone else who does the same thing!

  • @Leo-ph7ow · 6 months ago

    Thanks!

  • @BorisHrzenjak · 6 months ago

    That's basically like mixture of agents, or am I wrong?

    • @tvwithtiffani · 6 months ago

      They used their own data from the arena to produce the framework, so theoretically it should be more versed in which queries are OK for each of the models you select for MoE(xperts)/MoA(gents).

    • @BorisHrzenjak · 6 months ago +1

      @tvwithtiffani So, the same concept, just a bit more polished :)

    • @longboardfella5306 · 6 months ago

      Am I right that this could be combined with MoA to enable you to then optimise for cost / performance and accuracy?

    • @tvwithtiffani · 6 months ago +1

      @BorisHrzenjak I think so. More like a mixture of models, though, because agents are typically preset with a specific task or set of tasks, and each expert in a mixture has a specific set of preset knowledge.

    • @husanaaulia4717 · 6 months ago

      @BorisHrzenjak SambaNova calls it CoE, I guess.

  • @93simongh · 6 months ago

    Does it work for languages other than English?

    • @samwitteveenai · 6 months ago

      This is a good question. It may do ok out of the box, but you could certainly train one of their models to handle other languages.

  • @robcz3926 · 6 months ago

    Isn't a semantic router easier and faster?

  • @bastabey2652 · 6 months ago

    Flash is a better and cheaper reranker than the rerankers on the market (including Cohere).

  • @MattJonesYT · 6 months ago

    Requesting a vid on GraphRAG

  • @designstudiohq7764 · 6 months ago

    ❤❤❤

  • @SR-zi1pw · 6 months ago

    LiteLLM?

    • @samwitteveenai · 6 months ago

      This is different from LiteLLM; it dynamically changes between the two model choices.

  • @enriquebruzual1702 · 6 months ago +2

    I have a huge problem with how every solution must be a framework; at best this is a library, or even a function. Not saying you, but companies/developers.

    • @samwitteveenai · 5 months ago +1

      Yeah I do feel like that about a bunch of these things. This I would look at as more of a proxy you go through.

  • @husanaaulia4717 · 6 months ago

    I guess this idea is similar to the CoE that SambaNova uses.

  • @guanjwcn · 6 months ago

    No code walkthrough this time?

    • @samwitteveenai · 6 months ago

      I linked their GitHub with the code etc. in the description

  • @ATH42069 · 6 months ago

    this is a super sexy topic

  • @MrJinwright · 6 months ago

    Most people don't need this, because you'll know beforehand whether you can use the cheaper model.

    • @gabrielkeith3189 · 6 months ago

      Not very accurate. I have built a tool for my org that works on a custom-built SQL agent, and it could use a cheap model for over half of the questions being asked. I am building a "router" to check complexity and context need, along the lines of the sketch below.
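
      A bare-bones version of such a complexity check might look like this (the thresholds, keywords, and model names are made up for illustration):

      ```python
      # Hypothetical complexity gate for a SQL agent's incoming questions.
      def pick_model(question: str, tables_needed: int) -> str:
          is_complex = (
              tables_needed > 2                     # multi-join questions
              or len(question.split()) > 60         # long, nested asks
              or any(w in question.lower()
                     for w in ("trend", "compare", "forecast"))
          )
          return "expensive-model" if is_complex else "cheap-model"

      print(pick_model("How many orders shipped last week?", tables_needed=1))
      # -> cheap-model
      ```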

  • @jeffsteyn7174 · 6 months ago

    They could have saved a load of time by just using GPT-3.5 and function calling.

  • @christosmelissourgos2757 · 6 months ago

    Nice content, but too much complexity if you want to build a product that scales.

    • @samwitteveenai · 6 months ago +1

      The good thing is it's out there and open-sourced, so others can work on improving it for lots of use cases.

    • @nosult3220 · 6 months ago

      😢

    • @davidw8668 · 6 months ago

      Clearly not for (most) production cases. But it could be useful in dev as a heuristic.

  • @WeirdoPlays · 6 months ago +19

    First Comment should get likes 😅

  • @nAme-bf9uz · 6 months ago

    This is like a really old idea.

  • @reza2kn · 6 months ago

    ❤❤❤