Fixing RAG with GraphRAG

  • Published Feb 9, 2025

Comments •

  • @wayneqwele8847 • 7 months ago +1

    Thank you for the video; that was a great paper to go through. I find RAG research techniques offer so much insight into how we can identify and address our own cognitive impediments to judgement. Comprehensiveness, diversity of perspective, empowerment, and directness make such a good mental model to apply to our own human judgement.

  • @jordycollingwood • 8 months ago +2

    Really great explanation. I'm currently struggling to decide on my own KG structure for a corpus of 2,000 medical PDFs, so this was very helpful.

    • @awakenwithoutcoffee • 7 months ago

      Same here, brother. There are so many techniques; every day I learn something new, which is both good and terrifying, ha. What stack are you thinking of using? We are researching DataStax, Pinecone, and Weaviate, and are learning to build agents with LangGraph.

  • @rafikyahia7100 • 7 months ago

    Excellent content summarizing cutting-edge approaches, thank you!

  • @sasha297603ha • 9 months ago +1

    Very interesting paper, thanks for covering!

  • @awakenwithoutcoffee • 7 months ago +1

    Great presentation, Vivek. Some questions:
    - Is GraphRAG production-ready? If not, would it be difficult to upgrade our RAG methods once we are in production?
    - Is there a RAG provider/stack that you prefer? (DataStax, Pinecone, Weaviate, plus a bunch of others all competing for attention)
    - What are your thoughts on LangChain vs. LangGraph?

  • @dawid_dahl • 5 months ago

    Thanks so much, great content.

  • @themax2go • 7 months ago

    Very well "ragged"... both on the local domain (details) and the global domain (overview of pros and cons) 😉😎

  • @brandonheaton6197 • 7 months ago

    Can you pontificate on combining upcoming transformer-inference ASICs with deep agentic workflows employing GraphRAG-style strategies? It seems we will soon be close to our personal assistants writing a PhD thesis in the background whenever we ask a question. SOHU is reporting 500,000 tokens per second with Llama 3 70B....

  • @fintech1378 • 7 months ago

    super excellent video

  • @ashwinnair5803 • 6 months ago

    Why not just use RAPTOR instead?

  • @wanfuse • 7 months ago

    Wouldn't it cut to the chase to train an LLM on your own data? There's your graph. Use one of these:
    OpenAI's GPT-3/4
    Hugging Face Transformers (e.g., GPT-2, GPT-3 via third-party providers)
    Google's T5 (Text-to-Text Transfer Transformer)
    Meta's BART and BlenderBot
    Anthropic's Claude
    Then update the LLM every week. Summarization is the death of real data; better off with one level of summarization? Just a thought!

    • @mccleod6235 • 7 months ago

      Maybe you don't want to send all your valuable business data to third-party companies.

    • @wanfuse • 7 months ago

      @mccleod6235 That's true, but it's not necessary; there are open-source models you can train air-gapped on a Jetson.

    • @bohnohboh676 • 7 months ago

      "Then update the LLM every week" — yeah, no way, unless you have tons of cash, compute, and time.

    • @wanfuse • 7 months ago

      Maybe, maybe not — I'll let you know! You're probably right; we'll see if my idea pans out.