For me, it was excellent that it is unable to do big tasks. It forced me to split my code into smaller, more specialised sections, which pushed me to follow recommended coding standards and broke the work down for me. I think this is a good thing.
Constraints breed creativity! 😄
One good habit I found myself: after every small change, I ask Cascade to make a git commit. Works just as intended, my commits are now more focused and only include the minimal, logically complete code changes. 😊
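For anyone curious what that habit looks like in practice, here's a rough sketch. Everything in it (file names, commit messages, the throwaway repo) is invented for illustration, not taken from the actual project:

```shell
# Hypothetical sketch of the "one focused commit per small change" habit.
set -e
repo=$(mktemp -d)          # throwaway repo so the example is self-contained
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Small change #1: add a helper, then commit just that.
echo 'def fare(km): return 2 + km' > pricing.py
git add pricing.py
git commit -q -m "feat: add fare helper"

# Small change #2: a separate, equally focused commit.
echo 'def eta(km): return km / 0.5' > eta.py
git add eta.py
git commit -q -m "feat: add ETA helper"

git log --oneline   # two minimal, logically separate commits
```

The nice side effect is that `git bisect` and reverts get much easier when each commit contains exactly one logical change.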
I'm testing Cascade by having it work on an existing ridesharing codebase. It is fast, but it seems to be slightly dumber than Claude. It tries to implement very big tests and fails; I had to suggest several times that it create smaller tests.
Yeah, tbh I’m not sure if it’s just too early for agents to handle big tasks on their own or if we’re just not good enough yet at phrasing the tasks to get the best results. From what I’ve seen, any kinda big task without some supervision ends up with half the project rewritten. Like, even in this vid (not sure if you caught it), Cascade changed the service just so a single test case would pass. 🤦♂️
@GrabDuck Oh yeah, that happens with all of the AI models, including Claude.
If it's their own code, I let them rewrite it haha
Once these models get much longer context memory, their coding skills will improve drastically. I think the core development is done; it will all come down to marketing, but even marketing will be infested with AI tools. We're entering a world of insanity already.
More than just increasing context memory, I'd love to see some critical thinking added. So far, I imagine this could work by introducing "manager agents" into the system, whose job would be to criticize everything generated before it gets to us. Something like, "Why the hell did you change code that was already written before you?!" You'd probably enjoy that idea. 😄
As for marketing, I’m not so sure. If these agents become self-learning, it won’t take long before their long memory includes: “Don’t throw marketing materials here.” 😂