Aider + Claude-engineer + Cursor has taught me more about how to program than anything ever. It's all about (AI) time on the keyboard and familiarity
How do you manage with the Claude rate limits? Every project I start, I hit the rate limits with claude-engineer and have to wait a day.
@@pitpatgazorpazorp3356 I have a paid subscription 🤷 I use gpt-4o-mini with aider and switch between 4o and local LLMs with Cursor & VS Code. I've never had trouble with the Anthropic rate limit. Do you have a paid subscription?
Please share your setup and how you use it. Thank you for the tip!
Cool, love to hear that. Will test that myself for sure:)
We love creators who do awesome segues to their sponsors. When the segue is smooth and painless, it is the best
That visual prompting is so powerful! In my game project, I just screenshotted the situation where the bug was visible, pasted it to Claude, and saved a ton of time! Incredible tbh… 🤯
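For anyone who wants to do the same screenshot-pasting trick programmatically instead of through the chat UI: here is a minimal sketch of how a screenshot plus a question can be packed into a Claude message using the Anthropic Messages API content format. The helper name and the sample question are my own illustration, not from the thread.

```python
import base64


def build_screenshot_prompt(image_bytes: bytes, question: str) -> list:
    """Pair a PNG screenshot with a text question as a Claude
    Messages API content list (image block first, then text)."""
    encoded = base64.standard_b64encode(image_bytes).decode("utf-8")
    return [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": encoded,
            },
        },
        {"type": "text", "text": question},
    ]


# The list then goes into a single user message, roughly:
# client.messages.create(
#     model="claude-3-5-sonnet-20240620",  # pick whatever Claude model you use
#     max_tokens=1024,
#     messages=[{
#         "role": "user",
#         "content": build_screenshot_prompt(png_bytes, "Why is the sprite misaligned?"),
#     }],
# )
```

Sending the bug as an image often takes far fewer tokens than describing the broken UI state in prose, which is exactly why this trick feels so fast.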
Excellent segue into the sponsorship!
Thnx:)
Your tips and tutorials have been inspiring. It seems like the only thing limiting what we can build now is our imagination.
Excellent video that actually gives some tips. Were you a programmer before?
Thnx a lot :) Not really, I'm just a self-taught Electrical Engineer
Three questions. 9:50 1. I didn't see the terminal report the error? I wanted to see what that looks like. 2. Why did you type npm start? 3. What is the purpose of using the AI terminal to test your Omni prompts?
Do you run out of tokens while working on a project and then have to wait for additional ones? During debugging do you think using screenshots instead of text could reduce token usage and speed up project development?
On my projects I have not had any issues with the 200K token window. You can open a new chat if it gets too long to reset the attention. It is an interesting idea to use images with text, I might give it a test
It's funny how easy you make this look. As someone who historically has done all this manually, this really takes the toil out of it. How long do you think it will take local models to be able to reproduce these results at a fraction of the size of frontier models?
A few months
Yeah, I don't think open source is far behind. It just needs better vision, but I also think frontier models will move the goalposts. Interesting times ahead for sure
Your Pepe AI art is incredible... how did you make that???
🎉
Thnx for tuning in :)
I want one🥰✡️☯️✡️☪️🎉
Can I please get an invite?
I'm subscribed to your channel.
Thx, your stuff is so insightful, it's super great