Stop Prompt Engineering! Program Your LLMs with DSPy

  • Published on Jan 17, 2025

Comments • 9

  • @micams2009 · 1 day ago · +3

    Thanks for the breakdown. Hmm, with DSPy I am still missing options to work in a more focused way on the "for which cases didn't it work, and how could that be mitigated/tackled" route. Of course, you don't want to focus only on those; you'd certainly want to keep performance as good as possible for the ones that already worked. However, quite often blindly optimizing against a score is a kind of senseless exercise if the failures come from mislabeled data (which is the first thing I would look at if a large LLM can't solve such a task). This is just from experience: if you have flawed data, that might hurt the actual downstream application, because the optimization process might draw too much on it. Great content; keep it up!
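
A minimal sketch of the failure-triage idea raised above: collect the dev examples a DSPy program currently gets wrong and review them for label problems before handing the metric to an optimizer. The model name, toy dev set, and exact-match-style metric are illustrative assumptions, not anything from the video.

```python
import dspy

# Assumption: any LM you have credentials for; the video may use a different model.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

program = dspy.ChainOfThought("question -> answer")

# Tiny placeholder dev set; in practice this is your labeled data.
devset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="What is the capital of France?", answer="Paris").with_inputs("question"),
]

def metric(example, pred, trace=None):
    # Simple containment check; swap in whatever score you actually optimize.
    return example.answer.lower() in pred.answer.lower()

# Triage: gather the failures and eyeball them for mislabeled data
# before letting an optimizer tune prompts against this metric.
failures = []
for ex in devset:
    pred = program(question=ex.question)
    if not metric(ex, pred):
        failures.append((ex, pred))

for ex, pred in failures:
    print(f"Q: {ex.question}\n  gold: {ex.answer}\n  pred: {pred.answer}\n")
```

In practice you would run this over the full dev set and fix or drop mislabeled examples before running an optimizer such as dspy.MIPROv2 against the same metric.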

  • @kevinpham6658 · 2 hours ago

    Thanks for the breakdown. Have you had success with it in production? It seems like in your examples, the performance didn't go up significantly over baseline until you fine-tuned the actual weights. A trial-and-error prompt engineering approach might yield similar results if there is a test set to evaluate against.

  • @themax2go · 1 day ago · +1

    Can it create the JSON/struct (I haven't watched the full video yet) as part of the optimized prompt?

    • @aaronabuusama · 22 hours ago · +1

      Yesser
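
To make that answer concrete: in recent DSPy versions, structured/JSON output is typically expressed with a typed signature, and optimizers then tune the prompt and demos around that signature, so the structure stays part of the optimized program. The LM, field names, and Pydantic schema below are assumptions for illustration, not from the video.

```python
from pydantic import BaseModel
import dspy

# Assumption: any supported LM; configured the same way as in the earlier sketch.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class TicketInfo(BaseModel):
    # Hypothetical schema, purely for illustration.
    category: str
    urgency: int

class ExtractTicket(dspy.Signature):
    """Extract structured ticket info from a support message."""
    message: str = dspy.InputField()
    ticket: TicketInfo = dspy.OutputField()

extract = dspy.Predict(ExtractTicket)
result = extract(message="My server is down and customers are affected!")
print(result.ticket)  # parsed into a TicketInfo instance
```

An optimizer like dspy.MIPROv2 can then be pointed at a module built from this signature; the optimized prompts it produces still carry the structured output format.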

  • @CiaoKizomba · 15 hours ago

    Is DSPy harder to use for complicated prompts?

  • @themax2go · 1 day ago · +1

    Does it make sense to pair it with Pydantic AI?

    • @kallemickelborg · 4 hours ago

      Yes, definitely

  • @volker_roth · 6 hours ago · +1

    Thanks!

    • @AdamLucek · 3 hours ago

      Thank you for supporting the channel! 🙏