🗂 ACCESS ALL THE CODE: bartslodyczka.gumroad.com/l/iosqn
📋 Take This Quick Survey: forms.gle/otAr1xUamgyYZE5y7
🛠 Hire me: bart@supportlaunchpad.com
👉 LinkedIn: www.linkedin.com/in/bartlomiejslodyczka
🤝 Sign up to Make.com: www.make.com/en/register?pc=bartslodyczka
Learn AI & Coding (20% off Pro plan with my link): v2.scrimba.com/the-ai-engineer-path-c02v?via=BartSlodyczka
So good! Keep it coming 🎉
thank you :)
Amazing explanation, thank you Bart!
Let's go!!
thanks :) I analyse PDFs every day - very useful :)
So awesome 💪💪
Wow, this is an absolute banger! Bart, can you share the repo for this?
thank you legend 💪 here it is: (Gumroad) bartslodyczka.gumroad.com/l/iosqn
How can we store the context so that it can be used later on? Also, is it possible to have multiple contexts going on?
I don't think there is built-in storage just yet, like OpenAI has for their threads. But I hope it comes out soon!
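Until built-in storage ships, one workaround is to persist the message history yourself. Here's a minimal sketch; the file name, `save_message` helper, and JSON layout are my own assumptions, not part of the Anthropic API:

```python
import json
from pathlib import Path

# Hypothetical local store: one JSON file mapping a conversation id
# to its list of messages.
STORE = Path("conversations.json")

def load_store() -> dict:
    """Read all saved conversations (empty dict if nothing saved yet)."""
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def save_message(conversation_id: str, role: str, content: str) -> list:
    """Append a message to a named conversation and persist everything."""
    store = load_store()
    history = store.setdefault(conversation_id, [])
    history.append({"role": role, "content": content})
    STORE.write_text(json.dumps(store, indent=2))
    return history
```

Because each conversation is keyed by an id, you can keep several contexts going at once ("pdf-chat", "invoices", etc.) and reload any of them as the `messages` list for a later API call.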
Hey, suppose I want to chat with a PDF file and I have a list of questions I want to ask the model. Why, when I load the PDF with prompt caching and start asking questions from this list, do I hit the tokens-per-minute rate limit after just the second question? I use the Tier 1 Anthropic API, but it can't be that bad, right? The document I use weighs 1 MB and is 2k tokens long. Do you have the same problem?
Yeah, I also hit the rate limit pretty quickly (I'm on Tier 2). If you go to your Anthropic account, you'll see the new Claude 3.5 PDF model listed with its own token limit, which is pretty low on Tier 1 and Tier 2. So either you'll need to slow down your requests, or you'll need to drop some more cash to bump up a tier. Hopefully they increase the limit in the near future 🙏
@BartSlodyczka got you, thanks!
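For anyone hitting the same wall: the "slow down your requests" suggestion above can be done by pacing calls against your tokens-per-minute budget. A rough sketch in Python; the TPM and per-request token numbers below are placeholders (check the actual limits in your Anthropic console), and `send` stands in for whatever function makes your API call:

```python
import time

# Placeholder numbers: adjust to the limits shown in your Anthropic
# console and to what each cached-PDF question actually consumes.
TPM_LIMIT = 20_000
TOKENS_PER_REQUEST = 8_000

def seconds_between_requests(tokens_per_request: int, tpm_limit: int) -> float:
    """Minimum spacing between requests so we stay under the TPM limit."""
    requests_per_minute = tpm_limit / tokens_per_request
    return 60.0 / requests_per_minute

def ask_all(questions, send):
    """Send each question via `send`, sleeping between calls to pace them."""
    delay = seconds_between_requests(TOKENS_PER_REQUEST, TPM_LIMIT)
    answers = []
    for i, question in enumerate(questions):
        if i > 0:
            time.sleep(delay)  # wait before every request after the first
        answers.append(send(question))
    return answers
```

With the placeholder numbers that works out to one request every 24 seconds, which is slow but keeps a long question list running unattended instead of dying on the second call.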
Oh dear, please Google the pronunciation of "cache" and "caching". There is no "i" after the "a", and it hurts listening to that. Please.
Thanks man, you’re 100% correct. Appreciate the feedback but don’t appreciate the tone, be kinder
@michabbb it hurts* You can't even phrase your feedback properly, so why don't you go bluff somewhere else; what an absolute loser. @BartSlodyczka keep up the good work!
lmao @michabbb you're overdue for your 5th booster fyi
You need to include "for advanced users" in your title. This is not for newbies.
Yes, it's a bit more advanced and mainly intended to showcase the range of the API and what you could do. Either way, download the code and try it out 💪