What Happens When You Combine RAG with Text2SQL?

  • Published Oct 25, 2024

Comments • 5

  • @limjuroy7078
    @limjuroy7078 2 days ago +5

    Great job! Keep up the excellent work!

  • @ChiTien-Hsieh
    @ChiTien-Hsieh 1 day ago +1

    Great talk!
    One small point I’d like to mention is that at around 17:55, Angelina “hmm”s five times within the next 15 seconds, which is quite distracting.
    While this habit might work well in an offline meeting, where such sounds signal active listening, in an online setting it can actually interrupt the flow and impact the quality of the talk, especially when I’m trying to focus on Mehdi’s insights.
    A little nodding or some sign language with the mic muted would be really appreciated!
    Anyway, it was a very insightful talk; I’m just nitpicking.

    • @TwoSetAI
      @TwoSetAI 22 hours ago

      Thank you for your feedback!

  • @DaviZucon
    @DaviZucon 1 day ago +1

    Well done! I would like to see a comparison, in terms of quality and scale, between in-house trained models and LLMs for classification!

    • @MehdiAllahyari
      @MehdiAllahyari 23 hours ago

      Here's a great blog post that hopefully answers your question. They compared the results of an LLM (Llama-3.1-8B) with a small model and demonstrate that the small trained classifier outperforms the LLM, especially in few-shot learning. Here's the link:
      huggingface.co/blog/sdiazlor/custom-text-classifier-ai-human-feedback
      But in general, scaling an LLM for classification is hard; dealing with latency, cost, etc. is challenging.
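
      To make that tradeoff concrete, here is a minimal sketch (not from the video or the linked post) contrasting the two approaches: a small trained classifier served locally versus prompting a general-purpose LLM for the same label. The checkpoint name is just a public stand-in for an in-house model, and `call_llm` is a hypothetical placeholder for whatever LLM inference client you use.

      # Sketch: small trained classifier vs. prompting an LLM for text classification.
      # Assumes the `transformers` library; `call_llm` is an illustrative placeholder.
      from transformers import pipeline

      # Option 1: a small trained classifier -- low latency, low cost, easy to scale.
      # (Public sentiment checkpoint used purely as a stand-in for an in-house model.)
      small_clf = pipeline(
          "text-classification",
          model="distilbert-base-uncased-finetuned-sst-2-english",
      )

      def classify_small(text: str) -> str:
          return small_clf(text)[0]["label"]

      # Option 2: prompting a general-purpose LLM (e.g., Llama-3.1-8B) for the same task.
      # Every request pays the LLM's latency and cost, which is what makes scaling hard.
      LABELS = ["POSITIVE", "NEGATIVE"]

      def classify_llm(text: str, call_llm) -> str:
          prompt = (
              f"Classify the following text as one of {LABELS}. "
              f"Reply with the label only.\n\nText: {text}\nLabel:"
          )
          return call_llm(prompt).strip()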