Making Long Context LLMs Usable with Context Caching

  • Published on Jul 10, 2024
  • Google's Gemini API now supports context caching, aimed at addressing limitations of long-context LLMs by reducing processing time and costs. This video explains how to use the caching feature, its impact on performance, and implementation details with examples (a minimal code sketch is included after the description below).
    LINKS:
    Context Caching: tinyurl.com/4263z4da
    Vertex AI: tinyurl.com/yex8ua5h
    Notebook: tinyurl.com/2et8spkf
    Pricing: ai.google.dev/pricing
    💻 RAG Beyond Basics Course:
    prompt-s-site.thinkific.com/c...
    Let's Connect:
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    |🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Sign up for the newsletter, localGPT:
    tally.so/r/3y9bb0
    TIMESTAMPS
    00:00 Introduction to Google's Context Caching
    00:48 How Context Caching Works
    01:00 Setting Up Your Cache
    03:07 Cost and Storage Considerations
    04:46 Example Implementation
    08:57 Creating and Using the Cache
    11:06 Managing Cache Metadata
    12:53 Conclusion and Future Prospects
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Science & Technology
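
A minimal sketch of the workflow the video walks through, assuming the google-generativeai Python SDK and its caching module; the model name, document file, TTL, and prompts are illustrative placeholders, not values taken from the video:

```python
# Minimal sketch of Gemini context caching, assuming the google-generativeai
# Python SDK (pip install google-generativeai). Model name, document text,
# and TTL are illustrative placeholders.
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # or load from an environment variable

# Load the large document that will be reused across many requests.
# Note: the API enforces a minimum token count for cached content, so the
# document must be reasonably large.
with open("long_document.txt") as f:
    document_text = f.read()

# Create the cache: the document (plus an optional system instruction) is
# processed once and stored server-side for the given TTL.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="long-doc-cache",
    system_instruction="Answer questions using only the cached document.",
    contents=[document_text],
    ttl=datetime.timedelta(minutes=5),
)

# Point a model at the cached content and query it as usual.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Summarize the key points of the document.")
print(response.text)

# usage_metadata reports how many input tokens were served from the cache
# versus sent fresh with the prompt.
print(response.usage_metadata)
```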

Comments • 8

  • @unclecode
    @unclecode 9 days ago +3

    Thanks! I found the ability to update the TTL very interesting. Imagine building an assistant application for answering questions or customer service. On the server side, we could extend the TTL by, say, another 5 minutes. When a new user sends a question, we extend it again; when no new users arrive, the cache simply expires. Five minutes is just an example, but it's a great way to keep your cache ready and clear it when you don't need it (a rough code sketch of this pattern follows this thread).
    I think the minimum token requirement is likely about profit: they need a minimum number of tokens to offer the service economically. Below that threshold, it wouldn't be cost-effective for them. That's my guess.

    • @engineerprompt
      @engineerprompt 9 days ago +1

      Dynamically controlling the TTL can be really helpful, and I agree the token limit is probably related to cost. I hope they implement the latency reduction soon, since that's when this will really make sense.
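
A rough sketch of the sliding-TTL pattern described in the comment above, assuming the same google-generativeai SDK and that CachedContent.update accepts a ttl argument; the refresh window and handler function are hypothetical:

```python
# Sliding-TTL idea from the comment above: every time a user question arrives,
# push the cache expiry out by another few minutes, so the cache stays warm
# while traffic continues and expires on its own once it stops.
import datetime

from google.generativeai import caching

REFRESH_WINDOW = datetime.timedelta(minutes=5)  # illustrative value

def answer_question(model, cache: caching.CachedContent, question: str) -> str:
    # `model` is a GenerativeModel built with from_cached_content(cache).
    # Extend the cache lifetime before serving the request.
    cache.update(ttl=REFRESH_WINDOW)
    response = model.generate_content(question)
    return response.text
```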

  • @paraconscious790
    @paraconscious790 8 days ago

    This is very helpful, buddy. Very time-saving, and it quickly updates my own biological cache without me having to search for it explicitly. Thanks!

  • @boooosh2007
    @boooosh2007 9 days ago +2

    Seems similar to vector storage but more expensive. What am I missing?

    • @engineerprompt
      @engineerprompt 9 days ago +5

      A couple of things differentiate it from vector storage. When you retrieve info with vector-based search, you only get some "chunks," so the LLM doesn't have the whole context of the document; an approach like this provides the complete context to the LLM. Caching can also be really useful in combination with RAG. I agree it's going to be more expensive than vector stores, but it will potentially save on infra. Will be interesting to see how it evolves.

    • @boooosh2007
      @boooosh2007 8 days ago

      @engineerprompt Yeah, chunking would have to be perfect to match the full context. But if the vector representation and chunking are accurate, it should match in context quality. Time will tell, eh?

  • @DearGeorge3
    @DearGeorge3 9 days ago

    Great news! Thanks!!