High-performance RAG with LlamaIndex

  • Published Oct 9, 2023
  • GPT-4 Summary: "Master Advanced AI Retrieval Techniques in Our Upcoming Event! Are you ready to elevate your AI game with high-quality data engineering? Join us to learn how to craft robust, scalable retrieval pipelines using hierarchical embeddings for maximum relevance. Discover the secrets of optimizing context precision and recall with the RAG Assessment (RAGAS) framework. Gain insights on fine-tuning embedding models to enhance retrieval metrics and see the remarkable improvements in generations with advanced retrieval techniques. Ideal for AI engineers, learners interested in advanced retrieval methods, and LLM practitioners focusing on industry-standard metrics. Participate in live sessions with access to all GitHub repositories and Colab notebooks. Don't miss this chance to become an expert in building high-performance RAG applications!" (a minimal RAGAS evaluation sketch follows this description block)
    Event page: lu.ma/llamaRAG
    Have a question for a speaker? Drop them here:
    app.sli.do/event/g217LbPE8rQP...
    Speakers:
    Dr. Greg Loughnane, Founder & CEO, AI Makerspace
    / greglough. .
    Chris Alexiuk, CTO, AI Makerspace
    / csalexiuk
    Join our community to start building, shipping, and sharing with us today!
    / discord
    Apply to our upcoming LLM Ops: LLMs in production course on Maven today!
    maven.com/aimakerspace/llmops
    How'd we do? Share your feedback and suggestions for future events.
    forms.gle/Kt6vF6NSGAJA7fyi8
  • Entertainment
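The summary above mentions RAGAS context precision and recall without showing what a run looks like. Below is a minimal, hypothetical sketch, not the event's notebook: the imports and column names follow the ragas API of that period (~0.0.x/0.1.x) and may differ in newer releases, the example rows are invented, and an OPENAI_API_KEY is assumed for the default judge model.

```python
# Hedged sketch of a RAGAS evaluation pass over a handful of RAG outputs.
# The metrics are LLM-graded, so the default setup calls OpenAI under the hood.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

# Invented example rows; in practice these come from running your pipeline
# over a question set (question, generated answer, retrieved contexts, reference).
eval_rows = {
    "question": ["What loss does the fine-tuning step use?"],
    "answer": ["It trains on (question, source chunk) pairs with a contrastive-style objective."],
    "contexts": [["The engine fine-tunes a sentence-transformers model on generated QA pairs ..."]],
    "ground_truths": [["It is trained on synthetic question/chunk pairs."]],
}

results = evaluate(
    Dataset.from_dict(eval_rows),
    metrics=[context_precision, context_recall, faithfulness, answer_relevancy],
)
print(results)  # e.g. {'context_precision': 0.9, 'context_recall': 1.0, ...}
```

Context precision and recall grade the retriever, while faithfulness and answer relevancy grade the generator, which is why the session treats retrieval quality as the main lever for better generations.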

Comments • 26

  • @AI-Makerspace
    @AI-Makerspace  9 months ago +10

    Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing
    Slides: www.canva.com/design/DAFw0D8y038/5Yh9MA2XXd2Lfr2thcsuLA/edit?DAFw0D8y038&

  • @MurselTasgn
    @MurselTasgn 9 months ago +1

    Great tutorial! Thanks. Will definitely try it out 👏

  • @kevon217
    @kevon217 9 months ago

    Fabulous overview and demonstrations.

  • @elevasis
    @elevasis 10 months ago +1

    Awesome stuff. You guys are killing it!

  • @Kingromstar
    @Kingromstar 10 months ago +1

    Camels are my favorite animal, thanks for this.

    • @AI-Makerspace
      @AI-Makerspace  10 months ago

      YES! You're so welcome @Kingromstar 🤗! We had a blast putting it together!

  • @peregudovoleg
    @peregudovoleg 7 months ago

    Nice presentation, subscribed! Keep it up!

  • @Canna_Science_and_Technology
    @Canna_Science_and_Technology 6 months ago +1

    Some retrievers use libraries in the background, like pypdf, Beautiful Soup, and so on.
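For context, in LlamaIndex those third-party libraries sit behind the data loaders (readers) that feed the retriever rather than the retriever itself. A minimal sketch, assuming a 2023-era llama_index install (pip install llama-index pypdf beautifulsoup4); newer releases relocate these imports, so treat the paths as illustrative:

```python
# Minimal sketch of LlamaIndex data loaders that wrap third-party parsers.
from llama_index import SimpleDirectoryReader, VectorStoreIndex, download_loader

# SimpleDirectoryReader picks a file-type-specific parser per file;
# PDFs go through pypdf under the hood.
pdf_docs = SimpleDirectoryReader(input_dir="./data").load_data()

# The LlamaHub web reader fetches pages and strips HTML with Beautiful Soup.
BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader")
web_docs = BeautifulSoupWebReader().load_data(urls=["https://example.com"])

# Everything ends up as Document objects, so both sources can share one index.
index = VectorStoreIndex.from_documents(pdf_docs + web_docs)
query_engine = index.as_query_engine()
```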

  • @calcs001
    @calcs001 7 months ago

    solid

  • @dmikas139
    @dmikas139 10 months ago +4

    It was another amazing event with great content - thank you so much! Could you please share the slides and the notebook used for the event?

    • @AI-Makerspace
      @AI-Makerspace  10 months ago +4

      Thanks and of course!
      Slides: www.canva.com/design/DAFw0D8y038/5Yh9MA2XXd2Lfr2thcsuLA/edit?DAFw0D8y038&
      Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing

  • @augmentos
    @augmentos 2 months ago

    Would it be worth renaming this RAFT, as it sounds very similar to retrieval-augmented fine-tuning, or is it different from what you've done here?

  • @peeturpain9379
    @peeturpain9379 9 months ago +1

    Compared to OpenAI's ada-002, how well does the fine-tuned embedding model work? My application is fully dependent on OpenAI and I prefer working with better models (Sentence Transformers models are good but can't really stack up to ada). Also, is it possible to apply this fine-tuning to ada?

    • @AI-Makerspace
      @AI-Makerspace  9 months ago +1

      In the specific case we examined, the fine-tuned embeddings performed slightly better than ada-002 on the aggregate hit-rate metric.
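For anyone curious what that comparison involves: ada-002 itself cannot be fine-tuned, so the fine-tuning is applied to an open-source sentence-transformers model, and the two are then compared on hit rate. A rough sketch, assuming the 2023-era llama_index.finetuning module; the model names, paths, and the tiny hit-rate loop are illustrative, and question generation defaults to an OpenAI LLM:

```python
# Hedged sketch: fine-tune an open-source embedding model on synthetic
# (question -> source chunk) pairs, then measure aggregate hit rate.
from llama_index import SimpleDirectoryReader, ServiceContext, VectorStoreIndex
from llama_index.node_parser import SimpleNodeParser
from llama_index.finetuning import (
    generate_qa_embedding_pairs,
    SentenceTransformersFinetuneEngine,
)

nodes = SimpleNodeParser.from_defaults().get_nodes_from_documents(
    SimpleDirectoryReader(input_dir="./data").load_data()
)

# Synthetic QA pairs act as training labels (uses an OpenAI LLM by default).
train_dataset = generate_qa_embedding_pairs(nodes)

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",        # base open-source embedding model
    model_output_path="finetuned_bge",   # hypothetical output directory
)
finetune_engine.finetune()
embed_model = finetune_engine.get_finetuned_model()

# Aggregate hit rate: is the source chunk among the top-k retrieved nodes?
# (In practice, score a held-out validation split rather than the training set.)
index = VectorStoreIndex(
    nodes, service_context=ServiceContext.from_defaults(embed_model=embed_model)
)
retriever = index.as_retriever(similarity_top_k=2)
hits = 0
for query_id, query in train_dataset.queries.items():
    expected_node_id = train_dataset.relevant_docs[query_id][0]
    retrieved_ids = [r.node.node_id for r in retriever.retrieve(query)]
    hits += expected_node_id in retrieved_ids
print(f"hit rate: {hits / len(train_dataset.queries):.2%}")
```

Running the same loop with ada-002 as the embed_model gives the baseline number the comparison refers to.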

  • @thomashsu5252
    @thomashsu5252 2 months ago

    Great content. I really learned something.

  • @mhylle
    @mhylle 8 months ago +2

    Great session. I am able to take a lot of this and use it directly. However, a concern of mine is regarding using GPT-4 as the evaluator, since we are not allowed to send our data outside the company. What would be your suggestion for which model to use if it has to run on-prem? Would that be a Llama-2-70b model? Again, thanks for a really great and informative session...

    • @AI-Makerspace
      @AI-Makerspace  8 months ago +2

      Thanks for the question! The best evaluator out there is GPT-4, although if you're very keen on using an open-source model instead, we'd recommend one of the instruct-tuned Llama models today.
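A rough sketch of what that can look like on-prem, pointing LlamaIndex's response evaluators at a locally hosted instruct-tuned Llama-2 instead of GPT-4. It assumes a 2023-era llama_index API (ServiceContext, HuggingFaceLLM, FaithfulnessEvaluator), a local embedding model, and enough GPU memory for whichever checkpoint you pick:

```python
# Hedged sketch: LLM-as-judge evaluation with a local instruct-tuned Llama-2,
# so neither the documents nor the generated answers leave the network.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM
from llama_index.evaluation import FaithfulnessEvaluator

# Local judge model; swap in a 70B chat checkpoint if your hardware allows.
judge_llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-13b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-13b-chat-hf",
    max_new_tokens=256,
)
judge_ctx = ServiceContext.from_defaults(llm=judge_llm, embed_model="local")

# Keep the pipeline under test on the same local stack.
index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader(input_dir="./data").load_data(),
    service_context=judge_ctx,
)
response = index.as_query_engine().query("What does the document say about X?")

# The judge checks whether the answer is grounded in the retrieved context.
evaluator = FaithfulnessEvaluator(service_context=judge_ctx)
result = evaluator.evaluate_response(response=response)
print(result.passing, result.feedback)
```

The same ServiceContext swap applies to the other response evaluators, so the whole eval loop can stay on-prem.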

  • @user-nv5zf1wt1f
    @user-nv5zf1wt1f 6 months ago +1

    I appreciate the effort and the information. But there is always a reason when they don't actually RUN the notebook in a demo. Add `cohere` to the package list before the openai package.

    • @AI-Makerspace
      @AI-Makerspace  6 months ago +1

      You don't actually *need* Cohere for this notebook, you can safely ignore the pip error!

  • @tramanhphamthi5218
    @tramanhphamthi5218 9 months ago +2

    Thanks for the wonderful session! I would like to ask whether I can use GPT-3.5 to evaluate the answers from GPT-3.5 (because I am unable to use GPT-4 right now). Does using GPT-3.5 introduce any risks?

    • @AI-Makerspace
      @AI-Makerspace  9 months ago +1

      Great question @tramanhphamthi5218! You can absolutely use GPT-3.5 for everything you saw here! However, when doing evaluation or automatically generating code, you should opt for the most powerful model you can get your hands on whenever you can. If it's GPT-3.5 for now, then go with that!

    • @tramanhphamthi5218
      @tramanhphamthi5218 9 months ago

      @@AI-Makerspace Thanks a lot for your response!

  • @DSumForce
    @DSumForce 10 months ago +2

    Could you share the URL for the notebook in the description box? Super session! 🎉

    • @AI-Makerspace
      @AI-Makerspace  9 months ago +2

      Thank you, and of course!
      Google Colab Notebook: colab.research.google.com/drive/1TDiWZtb6gsM9wVXCLQrR-7OEPaQ2n-JA?usp=sharing