Couldn't appreciate more the "watch it happen right here, make it super simple, really easy to learn" motto - this will make you go far
Maybe staying zoomed in, instead of zooming out and in, would be an improvement
check check! thanks!
Looking forward to the upcoming DSPy videos in the series. Much appreciated. Thank you.
More to come!
I appreciate what you are doing here, with the grug angle. Smart, grug smart.
grug not smart, grug just learn hard lesson when grug walk into wall many times
Fantastic tutorial! Thanks so much!
You're very welcome!
Great walk-through. Subscriber no. 42 here 😄
Thank you!
Everyone starts somewhere, hope I get some momentum. I appreciate your support!!!
Nice example! Enjoyed your walk through. I’m wanting to dive into DSPy. Starting to work on a training dataset for my application. Looking forward to more videos from you!
That's great work! Awesome to see that you're already applying it!
Great stuff - thanks!
Thanks, Brian. It's the first one - let's keep it moving forward :)
Good vid - hope you post the notebook soon.
Notebook is here: github.com/bllchmbrs/learnbybuilding.ai/blob/main/dspy-grug-text/dspy-gentle-intro-part-1.ipynb
Please star the repo too!
What if you don't have the ground truth - can you still use DSPy? Generally, isn't ground truth hard to come by?
No! It's not - at least, it shouldn't be.
It doesn't need to be perfect, but you've got to collect data all the time when you're using these systems. The goal is to curate a dataset that you're deeming "ground truth". It may not be perfect, but it just has to be "good enough".
You can start with raw LLMs, but ground truth (in some sense) is what you should be moving towards!
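To make that concrete, here's a minimal sketch of what curating a tiny "good enough" ground-truth set can look like in DSPy - the field names and the example pairs are assumptions for illustration, not from the video:

```python
import dspy

# A minimal sketch of curating a small "ground truth" set as DSPy examples.
# The field names (question/answer) and the pairs themselves are illustrative.
raw_pairs = [
    ("What does grug fear most?", "complexity"),
    ("What does grug reach for?", "club"),
]

trainset = [
    dspy.Example(question=q, answer=a).with_inputs("question")
    for q, a in raw_pairs
]

# "Good enough" beats perfect: start small, and keep appending real
# question/answer pairs you collect while running the system.
```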
Won't the DSPy approach end up costing us more in OpenAI token usage, given how it tries to optimize?
Yes, but no.
1. Yes, because 'training' / 'optimization' is more expensive up front - the optimizer makes extra calls while it searches for better prompts.
2. No, because you can then use a much cheaper model to run your inference.
For instance, you 'train' Haiku to be better.
Haiku from Anthropic is so much cheaper than GPT-4 or Opus.
If you're doing a lot of inference, you're going to save money.
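As a rough sketch of that trade-off (the model id, metric, and tiny trainset below are assumptions for illustration, not the video's code):

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Run everything on a cheap model; compiling costs extra calls once,
# but every call after that stays on the cheap model.
cheap_lm = dspy.LM("anthropic/claude-3-haiku-20240307")  # assumed model id
dspy.settings.configure(lm=cheap_lm)

# Tiny illustrative trainset ("good enough" ground truth).
trainset = [
    dspy.Example(question="What does grug fear most?", answer="complexity").with_inputs("question"),
    dspy.Example(question="What does grug reach for?", answer="club").with_inputs("question"),
]

qa = dspy.Predict("question -> answer")

def answer_match(example, pred, trace=None):
    # Crude metric: did the expected answer show up in the prediction?
    return example.answer.lower() in pred.answer.lower()

# The "yes, it costs more" part: compiling makes extra LLM calls.
compiled_qa = BootstrapFewShot(metric=answer_match).compile(qa, trainset=trainset)

# The "no" part: inference afterwards runs on the cheap model.
print(compiled_qa(question="What does grug fear most?").answer)
```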
Nice explanation, just subscribed
Thanks for the sub!
Very helpful video, thanks! New subscriber here. One minor niggle: always helpful to have a link to the code/.ipynb!
YouTube doesn't let me post links yet because the channel is new.
All notebooks are here: github.com/bllchmbrs/learnbybuilding.ai
and here: learnbybuilding.ai/
@LearnByBuildingAI thanks!