Thank you for this demonstration. Look forward to learning more. I will take aspects of this video to help me code a custom UI for aider. I know that there's a browser command, but I look forward to customizing further. Would love to know if you have any additional tips !
That's a great idea! Using aider to create a custom UI that works and looks the way you'd prefer would be really cool! I'd love to know how that turns out for you. I plan to release a video soon with tips for using aider, along with a couple of other AI tools. I'm currently wrapping up a tutorial for using aider to build a RAG app using LangChain. I try to drop a tip here and there during tutorials, but if you have any areas in mind or run into any gotchas while building your custom UI with aider, please let me know and I'll see about working in some tips in those areas in future videos. Thanks for your feedback!
Hi @mesutsimsek35! Are you sure you're setting your ANTHROPIC_API_KEY environment variable to a Claude API key you created? If aider finds that key set to a valid value, when you run with the --sonnet option, it should just use Claude. Now, if it doesn't find that key, it might default to OpenAI. Please confirm that the API key is set correctly in the same terminal you launch aider from. E.g. either "echo $ANTHROPIC_API_KEY" if you're in git bash, or "echo %ANTHROPIC_API_KEY%" if you're running aider within a Windows command prompt. Here's the aider page on this: aider.chat/docs/llms/anthropic.html.
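If it helps, here's a minimal sketch of that check and launch in a terminal - the key value is a placeholder, so paste in your own:

```bash
# git bash / macOS / Linux
export ANTHROPIC_API_KEY=sk-ant-xxxxxxxx   # placeholder - use the real key from your Anthropic console
echo $ANTHROPIC_API_KEY                    # confirm it's set in the SAME terminal you launch aider from
aider --sonnet                             # aider prints the model in use on startup - it should be Claude 3.5 Sonnet

# Windows command prompt equivalents (shown as comments here):
#   set ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
#   echo %ANTHROPIC_API_KEY%
#   aider --sonnet
```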
Thank you so much for your kind words @maxhenriquez8819! That's another topic we'll be covering on the channel for sure! I haven't looked at micro-agents at all, but I've developed quite a few multi-agent apps. I do think agents represent the next big leap in AI capabilities and deserve separate treatment.
Awesome video 🎉 New sub! How long do you reckon it would have taken you if you'd tried to build the same app by hand? Including learning Next.js, since you mentioned you had never used it before?
I tried testing aider with OpenAI by building a rolling 3D map module using Three.js, mostly for games. Aider moved forward acceptably at the beginning, though rephrased prompts seemed to be frequently required in order to proceed - until we reached loading a hex-encoded map, which I asked it to complete. It clogged my output window with the hex representation even when I asked it to avoid doing that, draining my OpenAI credits. All in all, between the two extremes of totally useless (0) and very useful (10), I would give it a 7.
I appreciate you sharing that @Infinix2023-p8y! That's a very interesting scenario. Not sure, but I'm guessing that may be due to the LLM (I assume GPT 4o?) versus aider. But it could be something aider is doing - maybe one of its internal prompts. I'd be interested to know if you have that same experience using Claude 3.5 Sonnet. I know GPT 4o is also powerful, but my experience with Claude on different types of apps has been much better.
Thank you for that @J3R3MI6! No, I keep hearing how great Cursor is, but I haven't had time to check it out. Have you? If so, what's your experience so far?
Thank you so much @RandyRanderson404! You make a valid point. I have considered this as well. Having said that, a couple of thoughts: 1. All frontier models have been trained on "the entirety of human knowledge". Ok, not sure I totally buy that. But, for all intents and purposes, yeah, I do think that's true. 2. Can you think of a kind of app that would be both novel and useful to a wide range of businesses and/or consumers? If it doesn't meet both requirements, then I say, meh - no one will care. After nearly 29 years and about 10 different industries, I really can't say I've encountered what I personally would consider an employer or client trying to build something completely novel. Sure, they put their own twists on it, but otherwise they would probably just buy something. But I believe that all the LLMs already possess deep knowledge of virtually every domain any human would care about. Which means (I think) that the BEST LLMs should be able to take your novel requirements (which are really only a re-mixing of pieces of knowledge from various domains they know about) and do nearly as decent (not quite) a job with them as with this very common Task API. Now, that assumes that we break it down and use a similar "iterative dev" process to create it. It's gotta be apples to apples, except for the app requirements. If you try something like this, would you mind sharing? But I'm really speculating a bit here, because I truly haven't had the time to figure out the concept for a new app that would be truly revolutionary that I could use to test this theory. I'm not arguing your point. It's a great one! I love this type of feedback. Debating and testing these things makes us all better 😀
@@CodingtheFuture-jg1he Thank you for the long reply. I focus on building platforms for developers to ship what they work on to the customer. I spend a lot of time configuring Linux. I pay for ChatGPT as my go-to AI. I think it's the best for my work. That being said, I can barely trust it. When it comes to linux configurations, it's been trained on so many distributions, with so many ways of doing things, a lot of which are deprecated, it produces bad output. I work with a lot of new libraries and it doesn't have the training but it acts like it does. It makes me nervous to see boilerplate get generated like this. Dependency hell has been a challenge for my work and I'm always asking devs why they're including packages. Although, a positive is maybe we can generate the little functionality the devs need from a package and we don't need to import the whole thing. Where AI shines is when I can give it a very small scope but detailed task to write something I'm decent at but for something I'm an expert in, it slows me down. So maybe instead of asking for something novel, I would like to critique an SBOM of the AI generated boilerplate instead. Although, I've been working on a novel geospatial app. I exercised a bunch of prompts to generate ideas but it never suggested to me the technology I ended up going with.
@@CodingtheFuture-jg1he Claude has recently stumbled for me when making a basic GSAP slider, and took a loooong time and coaxing to get a React-Aria toast component working, so I don't think I'm as confident as you are about them when you take them off-piste. And Randy is right - a todo app with a basic crud api is very much boilerplate at this point. Still impressive, and looking forward to trying aider, so thanks!
Thanks for the tutorial! May I ask how long it took to build this? And how many tokens were spent on this task? Was all this a one-go thing, or was there back and forth to get to the right solution?
Thank you for that @smtkumar007! So, from the point where I came up with the initial idea of what I wanted to build and kinda the steps I planned to follow to get aider to build it out... guessing my first run through took maybe 45 minutes. I then tore it down and re-built it all several times, because I want some level of confidence that what I'm presenting to you is repeatable. Just starting over once I had all the prompts in order was like 5 minutes, end-to-end. Now, during my initial runs, I did run into a couple of minor issues that caused me to revise my prompts. This is normal. For instance, you'll see things like me telling aider "only do this part for now - don't touch anything else" in the prompts. Without those, I found that Claude was getting a bit eager with generating code and getting ahead of where I wanted to be - and the results weren't as solid. So, that little bit of prompt experimentation is why it took me 45 minutes to get it right initially. Now, keep in mind that I've been doing this for a bit, so a lot of what to do to get good results and what to do when things go south is becoming somewhat intuitive to me. I'll be releasing future videos that provide tips and tricks on this kinda stuff.
@@CodingtheFuture-jg1he Thanks for sharing. I still don't have a good idea of the token costs of doing this - roughly how many tokens were spent during the entire process, including the experimental prompts?
Yeah, but be sure to talk about all those API calls. If you're coding something a little more complex, like a phone app using Kotlin files, it seems to have trouble, but it was able to get some stuff working - even without aider. However, Claude doesn't have internet access yet, so that could really change a lot, since some of Claude's coding framework knowledge is a little behind recent updates. It can start to get annoying. Plus the message cap per day. It's still amazing. I'll be waiting till Anthropic grows and gets more support to see how it runs in a year or two, because if they continue down this path they're going to be a powerhouse. What they have now is really amazing already - just all the message limits and 4-hour wait times between 10 messages get old.
I really appreciate your feedback @TRFAD! I agree with all your points for sure. I'm sure this will all get better - and cheaper - quickly. Heck, as you pointed out, Claude has already changed the game in a very short time. As you say, Claude doesn't have browsing capabilities. To mitigate that a bit, most of the AI coding assistants now allow you to point to multiple URLs (haven't seen one that crawls a site yet) and have the content of those pages provided to the LLM as context, along with aider's map of your repo. Not as simple for the user as just searching the Internet for context, but on the other hand, it might give better results, if the dev has access to solid info sources in the form of websites. I'd be surprised if assistants don't soon have search ability. Getting better, bit by bit 🙂
Thanks for your request @hendoitechnologies! I am planning to launch a community soon (very likely on Skool.com) and start putting together more full-fledged courses. Could you please elaborate on what you mean by "API finetune"?
Hi, great demonstration you've got there, thank you. I just wonder if anyone can do a more realistic demonstration, which is adding a new function to an already established open-source project?
Thank you so much @ThangNguyen-ot8uz! A quick search didn't turn up a tutorial or demo showing aider specifically adding a function to an existing codebase. But it should be no problem. Not saying it'll get it right first try every time. Depends on the size and complexity of the codebase and also how you prompt it. Here's an explanation of how aider keeps track of all the files and functions in your codebase for context: aider.chat/2023/10/22/repomap.html
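To make that concrete, a session on an existing project might look roughly like this - the file and method names here are made up purely for illustration:

```bash
cd my-existing-project      # any git repo aider can scan
aider --sonnet              # aider builds its repo map of the codebase automatically

# inside the aider chat:
#   /add src/main/java/com/example/orders/OrderService.java
#   "Add a public method getOverdueOrders() that returns all orders past their
#    due date, sorted oldest first. Don't change any other files."
```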
@@CodingtheFuture-jg1he thank you for replying. I just wonder how much context the AI can really jam into the question when we ask it to modify a code section in an existing project. Is the current AI tech advanced enough to extract features that are logically connected in a codebase, as needed for the various requests we give it? Or, when a project reaches a certain point/length, is it just impossible for the AI to give a good answer, even when we try to avoid asking it too many things at once? Take RAG, for example: most RAG systems I know fail to answer questions that are not one-stop (questions that require multiple logical steps connecting multiple pieces of information to get the answer). I assume the same for coding agents - great for building things from the ground up, but after things get big enough, it slows down significantly because the AI is not smart enough to find all the related pieces of information and connect them together. How do you think AI tech will solve this problem in the near future? By increasing context length so that it can include the whole codebase in one go every time we ask a question? Or something else?
Hi, can you give your opinion on my question? Oh, and I have one more question: can you show how you continue from where you left off yesterday if it's a multi-day coding session? In my case, aider seems to create new files and not use the older files it created before @CodingtheFuture-jg1he
@@CodingtheFuture-jg1he hi, I would also love to see a tutorial on how to debug effectively in a multi-file project, if you can somehow demonstrate that. Thank you very much
Hi @PartneredAdmin! I haven't yet tested this with aider, but I think this aider web page might be of some help: aider.chat/docs/usage/images-urls.html. Let me know if that's what you were looking for.
Do you need to start aider with the --sonnet option? I've been using aider to help modify an Ansible playbook and it makes lots of mistakes - I wonder if it is not really using Sonnet. I typically start aider with "aider site.yaml", which adds the site.yaml file to its list of files. I then use the /add * command to add all files and sub-files into aider. aider has never prompted me to modify any existing file, nor has it asked me if I want to commit changes to git - it just does it.
@jefffogarty6470! I believe if you have a recent version of aider (like a release within the past 3-4 weeks), Claude 3.5 Sonnet is the default. I just started aider without any options and it's using that model. By default, aider wants to commit its changes to git. I think you can disable this with the --no-auto-commits command line option. Also, aider's behavior is to ask you whether it can create any NEW file, but not to ask permission to modify an EXISTING file. Please note that aider provides a /undo command, which lets you reverse the most recent commit.
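Here's a quick sketch of those options in practice - double-check aider --help for the exact flag names in your version:

```bash
aider --sonnet                     # default behavior: each accepted change is auto-committed to git
aider --sonnet --no-auto-commits   # I believe this flag stops aider from committing changes itself

# inside the aider chat, if the last change wasn't what you wanted:
#   /undo    reverses the most recent aider commit
```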
@@CodingtheFuture-jg1he this is incredible, way better than I could have expected! Amazing. Any thoughts on using Claude over ChatGPT? From my personal experiments, Claude has been doing a lot better recently.
Hi @shayanr01! To be honest, I have yet to try out aider on a complex existing codebase. I've used other coding assistants but haven't gotten around to using aider this way yet. A future video maybe? However, based on what aider's developers and the community say in the discord channel, existing code is where aider is supposed to shine. Since others have created some tutorials on using it on existing code and it seems to be the most common use case, for this tutorial, I decided to take the opposite approach and see how well aider does jump-starting a new project. What I can say is this: in my experience, no matter which AI coding assistant I choose, it's critical that I carefully direct the assistant - for example, approaching the assistant interactions in the same way I always develop myself, which is iterative. Never give the assistant a big complex task to perform - iterate, one bite at a time. Sorry I can't give you a more authoritative answer on this one. I'd love to know what you find, if you decide to try aider on an existing codebase. UPDATE: just remembered this... maybe this page could shed some light on aider's ability to deal with large existing codebases: aider.chat/2023/10/22/repomap.html.
Hi @corpsedad7368! aider requires an LLM to generate its code. The RAG app also requires an LLM to answer users' queries. I highly recommend using Anthropic's Claude 3.5 LLM for aider. And, you can use either Claude or OpenAI's GPT-4 LLM (as I do in the video) for the RAG app. You have to go to the links I put in the video description, create an account and pay a few bucks to be able to use these models. If you're trying to do this for zero cost, here are two options:
1. IF you have an NVIDIA GPU, you may be able to install Ollama locally, pull one of the LLMs that's best at coding into Ollama and point both aider and the RAG app at your local Ollama server. Without a GPU though, for coding, this won't work. The inferencing will be way too slow and you'd likely not get great results.
2. Create an account with Groq (console.groq.com/playground), create an API key there and point both aider and the app at Groq and use one of their hosted LLMs. For the moment, Groq isn't charging, BUT they impose some fairly restrictive rate limits, which means you may get throttled while working and have to wait a few seconds to re-try your prompt.
A couple of caveats here:
1. Every LLM is different. Some work better than others for certain tasks, so I make no guarantees as to whether LLMs other than those I use in the tutorial will give you the same results.
2. If you choose any LLMs other than those I used, you'll have to modify a couple of steps in the tutorial, because although the code differences to switch LLMs are minor, there are still changes required.
Hope that helps. At some point in the future, I'm going to create a video that focuses on this very topic.
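For anyone who wants to try those zero-cost routes, the setup looks roughly like this - the model names are just examples, so check the current aider, Ollama and Groq docs for what's actually available:

```bash
# Option 1: local Ollama (needs a decent NVIDIA GPU to be usable for coding)
ollama pull codellama                           # or another coding-oriented model
export OLLAMA_API_BASE=http://127.0.0.1:11434   # default local Ollama endpoint
aider --model ollama/codellama

# Option 2: Groq-hosted model (free tier, but rate limited)
export GROQ_API_KEY=gsk_xxxxxxxx                # placeholder - create a key at console.groq.com
aider --model groq/llama3-70b-8192              # example model id - may change over time
```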
Yes, you can. The tutorial uses Anthropic's APIs (for the Claude 3.5 LLM), so all the heavy lifting is done in the cloud and not on your PC 😀 If you were running aider against an LLM running locally on your machine, you'd likely need a GPU.
Hi @RyanJohnson! For this tutorial, I didn't track precisely, but I believe I spent a total of somewhere around *50 cents* on Claude 3.5 Sonnet. Haven't counted tokens, but that was for 15 prompts. Now, the prompts are very short. But Claude's output token count had to have been quite high, because a LOT of code was generated.
Thank you for pointing that out @marma6937! So sorry about that. Looks like the project settings in my video editor got skewed (they're at 1917 x 1032 and I think YouTube downgraded the video due to the non-standard setting) somehow and I missed it before publishing. I'm really glad you pointed it out before my next video 😀 I'll know to double-check that setting now and will do better next time.
I will certainly add that to my list of planned videos. Curious... what's your background with programming? And what are you working towards with regards to AI and coding? Your feedback is super valuable to me and other folks on this channel!
I have been using Claude Engineer to do this too. Everything works as advertised. It really does all the work for you. But for me it is useless, and nobody has mentioned the huge bill that these solutions build up. I work on a huge project. Just one task costs me $0.50; the smallest cost I have managed is $0.04 per task. So yeah, not cheap. When the costs go down by 90%, then the real fun will begin.
I really appreciate your feedback @gani2an1! I was hoping someone would provide some data on the costs they're experiencing. I agree that, for people who are doing hard-core dev on large complex codebases, the costs can get really high. I think this is a case for using other commercial AI coding assistants, such as Cody AI or Tabnine, where you likely get a lot more LLM use bundled for a set monthly fee. Now, the costs for Claude to do the kinds of things I'm doing in this tutorial are very low - I think to build the entire app once was around 40 cents. But, if I kept going and building out a super complex app, I'm guessing I'd end up spending at least tens of dollars.
Yes, I apologize for that @dilshadms6202! This was my first video on this channel and I had a bad setting in my video editor. I didn't notice that YouTube downgraded my resolution from 1080p to 720p until I had too many views to replace it. All my videos after this first one are in HD.
I wish Claude would remove their limits - that's the only limiting factor. Once you get deeper into a project it gets very expensive. On top of that, you keep getting rate limited every minute. Sigh.
Oh yeah, I totally agree! When we're using open-source AI coding assistants, like aider, we're on the hook for the cost of using the hosted LLMs. That's one reason why I use aider for certain things, like generating my initial project structure or making updates that other assistants aren't as well suited for, but then switch to other coding assistants, such as Continue or Cody AI, that let you choose between models like Claude for a single low monthly cost. Do you have any tips for controlling costs without a lot of fuss? Thanks!
Just use the OpenRouter API for Claude 3.5 - at least it solves the rate limiting issue. But yeah, aider gets expensive quick, especially when it doesn't write the file correctly.
Very cool @peterbabu936! Although AI-generated code doesn't always work, isn't it a great feeling when it does and you can get something useful done in a fraction of the time? 🍾
Coding is about knowing what you're doing. You can do it helped by an AI, but here you don't know what's happening - this method of dev is not suited for prod. That's why I really prefer Cursor as my IDE/coding assistant; I have much more visibility on where and how I'll achieve a task.
I really appreciate your feedback @nicolasivorrani! I hear ya. I use multiple AI coding assistants and switch between them depending on the task at hand. And, to develop apps of any complexity still requires:
1. An engineering mindset.
2. Experience developing complex apps.
3. Understanding how to prompt to get the best results (and how to back out or take over when you can't get good results).
I didn't spend time in this tutorial reviewing the code or aider's output in any detail, just due to time - and I think I'd lose most folks if I did, due to sheer boredom 😆 But I can assure you that you definitely know what aider is doing. First, while it's generating anything - code, README, etc. - it's explaining it all in gory detail in the terminal. I just didn't take the time to show all that. Apologies. Maybe in a future video, if you're interested? Also, of course, I'm looking at the actual source files as aider is updating them, so I see what it's updating. And, aider is committing to git and also has an "undo" command to reverse whatever changes it made. So, everything that git provides, you have access to with aider, since it (by default - you can disable this) auto-commits to git. In case this helps, I have 28 years of experience as a software engineer and over a decade architecting Java Spring-based enterprise apps, and I reviewed the Java code aider generated in the tutorial. It was really good code. I'll admit that it does much better with some technologies and app types than with others. Also, aider could well get worse as the codebase grows. As with ALL AI coding assistants, the more context they have to use for a task, the harder it is for them - this is almost completely a limitation of the LLM they're using, rather than the coding assistant itself. This is where learning the techniques and tricks comes in. It is suitable for production apps - it just depends on what your expectations are. It won't do it all, that's for certain. You're still the main brain behind it all 😀
Thank you for your comment! I get what you're saying @paulholsters7932. I don't think anyone will be deploying the snake game to production😆 I think that, if someone is saying "AI or coding assistants today can generate a complex app based on a single prompt", I question their agenda. They cannot. On the other hand, if someone is saying "No way you can use these AI tools to develop solid production-quality enterprise apps, they're also incorrect. IMHO, both the AI hypevangelists AND the AI naysayers are wrong when it comes to AI generating code. As with most things, the truth lies in the middle. The issue has mostly to do with realistic expectations. With a strong enough model (e.g. Claude 3.5), a good coding assistant AND lots of learning (especially strong prompting techniques) and experimenting, combined with an "iterative" development mindset, you certainly can (I do myself) develop some pretty complex enterprise apps. We still need to apply an engineering mindset when using AI. Had I carried the 2 apps shown in the tutorial forward towards production, you would have seen me shifting more towards a "coding assistant on the side" workflow style. And I typically switch to a different coding assistant for some tasks. None of them are a panacea. Gotta have an experienced dev in the mix for anything of any complexity.
@@CodingtheFuture-jg1he interesting. But I prefer normal coding. I don't have the time to learn how to craft something production-ready with AI. It's faster the normal way. I have spent a lot of time learning how to code. Now I don't want to learn something new with the same results (at best). AI suggestions like GitHub Copilot get in the way of my thought process. In the long run, I don't think AI is the way to go when it comes to coding.
I really appreciate your feedback @MikeRhodesIdeas! Yes, I realized after publishing it was off. Please forgive me - I'm quite new to the video editing process 😆 I think if you check out the more recent video, you'll see a big difference in quality. I appreciate when people are willing to point out what needs to be improved - otherwise, how will it get better?
I really appreciate your feedback @agsvk-com! I understand that video's a bit tough to watch. I didn't realize a setting in my editor got messed up until I'd uploaded that video and a thousand people had watched it. All my videos after that one are all 1080p. And, I'm being diligent in double-checking the resolution settings in my editor now before uploading. Hope you were still able to come away with something useful from that video.
I am using the DeepSeek Coder API with aider and it is not able to generate the file in VS Code. Is that a DeepSeek Coder limitation, or am I doing something wrong?
Hi @Ecomcodegenius! I haven't tried DeepSeek with these tutorials. But, based on what I read and hear from others using AI coding assistants, it's not performing as well as Claude 3.5 Sonnet. Another thing to note is that each LLM has its own peculiarities (based mostly on how it was trained), which means that to be really effective with any particular model, you have to experiment with that model and learn its nuances. For example, I was working with aider using Claude 3.5 on an actual project (not a tutorial) last night. It was doing really well. I decided to switch over to the Gemini 1.5 Pro model. Even though it has a far larger context window than Claude, it not only performed worse on coding tasks, it also just behaved differently. For example, aider would describe the code changes and print them to its console, but it stopped actually generating the source files. Lower-cost solid models are coming, but for now, it's not worth switching from Claude 3.5. Your time is worth more than the $20-30/month Claude will cost you (I spend less and code with it almost every day). Now, I'm assuming in that cost estimate that you've learned a bit about how to properly interact with that model. FWIW, I'm about to release a new video that's more of a "tips & techniques" for working with AI coding assistants.
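If you do want to keep experimenting with DeepSeek (or any other model), switching is just an API key plus aider's --model option. The identifiers below are examples only - check aider's model docs for the current strings:

```bash
# DeepSeek Coder (example identifier - verify against aider's docs)
export DEEPSEEK_API_KEY=sk-xxxxxxxx         # placeholder key
aider --model deepseek/deepseek-coder

# Gemini 1.5 Pro, which I mentioned trying (again, an example identifier)
# export GEMINI_API_KEY=xxxxxxxx
# aider --model gemini/gemini-1.5-pro-latest
```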
🎉Thank you for watching! 🎉 We've just uploaded a higher-quality 1080p version of this video for an even better viewing experience. 👉 Click here to watch the 1080p HD version: th-cam.com/video/pMXP2wwC5Ss/w-d-xo.htmlsi=F-L1LwH8asdvArPP.
Love it! I think the friction going forward is 'how to specify' and not 'how to code' systems. You have articulated this point beautifully. More of this stuff. pls :)
I really appreciate your feedback @programmingsiri5007!
And, oh yeah, I soooo agree with what you're saying! Right now, the ability to "specify" well requires some solid software engineering background. So, we need to KNOW how to code to get good results out of AI for complex apps.
Glad to hear that you're in touch with what's going on in this field 😀
@@CodingtheFuture-jg1he but what if u have no software engineering background.
I have no time to learn a SE degree.
don't say get AI to teach me...that's not what I'm looking for.
I prefer a guide over the detail of SE.
everything is process after all...learn process & that overcomes drawbacks like lack of skill (within reason I mean).
thoughts?
Hands down one of the best tutorial-type videos for devs looking to update their toolkit. Aider certainly looks the ticket!
Thank you so much for your kind words @KCM25NJL!
this is incredible. claude has been off the charts and something like aider is well needed at this point!
I really appreciate your feedback @henriquematias1986!
@@CodingtheFuture-jg1he any chance we can get it integrated into VS Code? Albeit I can run it in the terminal in VS Code myself, it could be nice to get a plugin for it, as most kids use VS Code these days
Despite my lack of prior coding experience, I have successfully utilized Claude to create two Android applications, one of which includes a Samsung watch app. Additionally, I have developed three Python/HTML and TradingView indicators. However, I have encountered two challenges: firstly, the maximum message limit is reached quickly, even after purchasing additional messages; secondly, when using long code, the output is interrupted halfway through with a notification of reaching the maximum output.
Wonderful. Clear, concise and immediately valuable. Please add more!
Thank you so much for your kind words @kosielemmer!
A benefit of using the model to create an API before implementation is that we (developers) can make sure the chosen path is correct. It is best to fix the design at the top level, and then fill in the implementation only when we are happy with the design.
Of course this is for projects where we already have sufficient domain knowledge and just want AI to do the grunt work (a good pattern I think).
I really appreciate your feedback @erikjohnson9112!
Agreed! I think we still need to apply our same iterative design/development practices when using AI. I feel like many of the frustrations folks are having might be due to thinking that engineering practices shouldn't be required when using AI to generate solutions.
I really appreciate all your feedback! I wanted to let you know that I'm seeing some common themes in the comments and I plan to try to address many of them in the coming weeks. Here's a short list of some recurring questions/suggestions:
- Costs related to the LLMs being used (e.g. Claude, GPT 4, etc.)
- Detailed instructions for setting up the development environment for these tutorials.
- Best practices for prompting the coding assistant to get the best results.
I hear you. Look for more information on all this in the coming weeks.
Thank you.
As well as providing content, enable or enforce a labelling system so one can jump around video to video in context, e.g. episode-guide style for your content.
It will help if you revisit in the future to add more content.
Thanks a lot. It would be wonderful to see best practices for prompts that work best with aider
Thanks for choosing Java. As you mentioned, it has always been Python examples out here.
Thank you for that @joshuaachoka6478! Yes, we need some more variety. We don't all code in Python 😀In fact, my guess is that there are still far more "enterprise class" systems developed in Java than Python. Don't get me wrong - I really enjoy coding in Python. But, there are many other languages and they're being used.
Awesome video, thank you for explaining your steps so clearly. My only request would be to record in better quality. I would love to be able to choose a higher quality other than 720p
I really appreciate your feedback @JasonPatton1980p! Yes, I overlooked a setting in my video editor that got changed before I published. I'm learning all the ins and outs of editing still. I think you'll find the video quality of the two videos I released after this one to be far better. But, still things I'm addressing, one video at a time ;-) Please keep this kind of feedback coming! It really helps me to get better quality content out there for you!
Great video, champ. I've been waiting for some newer videos on aider.
I appreciate that Zac! Please let me know if you have any specific topics related to aider in mind.
@@CodingtheFuture-jg1he I would love to see how it goes with more complex projects like machine learning. I tried aider about 6 months ago and just never fell in love. Would be interesting to see!
This is amazing! I have no coding background but this is a game changer. It's giving more people access to coding than ever before. It's mind blowing! Can you pleaaaseee create a video on how to create this setup? I know it is fairly easy because aider has documentation that tells you how to setup. But having a nice walkthrough would be really helpful! Looking forward to more videos from you!
Thank you so much @u.a3!
Yep, expect a video walk-through specifically on the setup process very soon - hopefully next week.
Great video, well done and pleasure to watch. Keep it up!
Glad you enjoyed it!
Good content, looking forward for the next videos
Much appreciated
By popular demand, I created a new video showing detailed steps for installing the tools and getting the API keys required for this tutorial. That guide is linked in the description below.
I'm a master AI coder/prompter - I now have 2 deployed web applications building wealth
@@SapCompanies Wow, can you explain how ?
This is super cool.❤ Please make a MERN Stack Typescript project.
Cool idea! I really appreciate your feedback @vishnuitsrocking! Sounds like you might want to follow Brandon: www.youtube.com/@bhancock_ai. Also, he runs a Skool community and is starting to offer courses on full stack with AI: www.skool.com/ai-developer-accelerator. He has more frontend expertise and it's kinda his thing.
Good job, great content. Just one constructive criticism for the next videos: when you use the terminal and it's at the bottom, the YouTube UI hides the terminal commands, so just move it up a bit.
Thanks for the tip!
Been using Aider for a while. It has a great workflow. Combine this with Claude 3.5 Sonnet and you have a very interesting proposition.
Oh yes, you do @nickmills8476!
Powerful, but not perfect. But then neither are we "human" engineers, right? 😉
Thanks for your comment!
Thank you for a good video - it was a pleasure to watch. But it just raised another question about cost. It could be interesting if you could make a video on the total cost of using these tools on a developer project from start to finish, so the focus will be on the real cost, and perhaps the cheapest vs. the fastest way.
I really appreciate your feedback @peterwagner9795!
Yes, I was just talking with a colleague about this yesterday. I'll be sure to address this in a future video.
FWIW, I tore down and rebuilt the 2 apps you saw in the video at least 10 times because I like to be as confident as I can that what I'm demo'ing wasn't a fluke (worked every time). I didn't track precisely, but I believe I spent a total of somewhere around $3-4 on Claude 3.5 Sonnet. Haven't counted tokens, but if we use the number of prompts I sent as a very loose metric, that's about 150 prompts. Now, the prompts are very short. But, Claude's output token count had to have been quite high.
Nice, clear tutorial. Liked, subscribed, and looking forward to seeing more.
Thank you so much for your kind words @bernard2735! And, for your like and subscribe!
I look forward to your feedback on future videos. Or, on any particular content that might help you.
Thank you very much for this amazing video. Sure I'll be looking forward to see more videos like this one. Regards!
Thank you so much for your kind words @deepdatasoftware2553!
This is fantastic. Folks have asked me to do something on aider before but I never really understood what was special about it. But this makes it so clear in a way I never got from the webpage. Has the way aider fundamentally works changed over the last few months?
Thanks so much @technovangelist!
Although new features have been added to aider over the past few months, I'm not aware of anything I'd consider a fundamental change to the way it works. Now, I know they did make some changes to the way it creates the context map for your codebase at some point - they moved away from ctags, so maybe that improved the context aider supplies to the LLM? Not sure.
aider's a great tool for sure, but the most glaring improvement I experienced came when I paired it with Claude 3.5.
BTW, love your Ollama videos Matt! Keep 'em coming!
That was solid, thank you, now I'm sold !!
Thank you for that @pnddesign!
Although not perfect, these coding assistants and LLMs are getting better at coding really quickly.
To be fair, there are bumps and some learning curve. For me, it was mainly repetition to build the intuition around just how to prompt them, how detailed to be and how much to ask for (and not ask for) at once. Once you start getting the feel, these tools really speed things up. They can even let you build apps in languages/frameworks you've never used, if you're already an experienced dev. You can just build a little with the assistant, learn a little from the assistant, and quickly build new knowledge! Way easier to become a "polyglot" programmer now! 😆
Great idea mixing Java and Next.js.
You made me discover aider. For sure I'm gonna try it.
Thanks for video. Great explanation btw
Thank you so much for your kind words @digitic3551!
I'd love to hear what you do with aider and how it works out for you.
Love it. I have a question.
I am gonna build websites that are similar to each other. Only changes in the UI theme and a bit of functionality.
Can I provide the codebase as a document (context) so that the AI can generate a similar clone based on my prompt?
This was awesome. Thank you so much. Looking forward to playing around with this tool myself.
Thank you so much @mpfiesty!
Just realized I left out the link for installing Node.js. Just updated the description to include that. It's required to run the Task UI.
Great video, thanks. It would help if you increased the font size in vscode (ctrl+=) and make the video at least 1080p
Thank you so much @JohnGodwin777!
Thanks for the pointers! I didn't realize the res was at 720p until I'd uploaded it and lots of folks had already viewed it. It was a setting in my video editor, which I now know to double-check before uploading 😀The 2 videos since are at 1080p and, you're right, they look way better.
I didn't increase the font yet, but will keep that in mind in the future.
wow. just cleared my weekend to play. thank you for sharing, this is exciting. C# dev here and lazy to learn anything else❤
Thank you for that @s11-informationatyourservi44!
Can you kindly share the prompts and the code repo generated in this video? Thanks
Hi @zsiddiqi! You can find all the prompts in the video description. But, I don't think I actually pushed the 2 projects in this particular video to GitHub. If I still have those on my machine, I'll push them this week and update the video description. I started pushing the tutorial projects to GitHub after this initial video.
Awesome vid! Could you make vid for backend beginners?
Thank you so much @bjoernzosel! Hey, my "first love" is backend software engineering. Mostly what I've been doing for a very long time. I typically try to have a good UX/UI person on my team to do the frontend stuff. Now, I do have a very long backlog and I'm just one dude here, but I'll certainly add it to the list. Or, maybe there's another way I can assist you? Can you tell me a bit more about where you are in your backend dev journey and the kinds of things that you're trying to learn - or what gets you stuck?
Thanks, waiting for the next video.❤
Thank you Dean! There's a ton of ground to cover and I have a long list of future topics in mind. But, would love to hear what might be helpful to you 🙂
@@CodingtheFuture-jg1he Future video in 1080p or higher please
That’s amazing - Nice video
Thank you so much @pedrolima-lr3lu!
My friend (not sure of your name), a quick question: if I ask aider to create a project using, for example, Next.js, is there any way the code generated is based on the latest documentation? I think it's very important, because some models are not updated. Thank you so much
Hi @bambanx! A great question! Similar to the way I pointed aider at the OpenAPI docs it generated for the Java API (I used aider's /web command and provided the URL), you could theoretically do the same and point aider at the online Next.js docs. Having said that, aider currently only scrapes a single web page at a time. Also, adding too much context at a time (like adding a hundred web pages) is likely to result in poorer output - this is a limitation of LLMs, not aider. Also, the quality of this context is 100% critical, as is the completeness of those docs for the current task at hand. For example, if you were to add a bunch of Next.js web pages to the context and those pages contain a lot of conflicting information, just like a human would, the LLM is likely to get confused.
Bottom line: suggest trying to locate a small handful of web pages that you believe contain the necessary Next.js info you think the LLM might need to perform your next set of tasks. Use aider's /web command to add those to the chat context. Be very clear in your prompts that you want aider/LLM to give priority to the pages you added over its training data (maybe also mention "Use version X of Next.js and the docs I provided").
You'll likely need to experiment with this approach to get the results you want for what you're trying to achieve. But, I hope I've given you enough to give it a try.
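For example, a hypothetical session might look roughly like this - the URLs and the exact wording are just placeholders, so swap in whatever doc pages you actually want the LLM to lean on:
/web https://nextjs.org/docs/app/building-your-application/routing
/web https://nextjs.org/docs/app/building-your-application/data-fetching
Using the Next.js docs I just added (give them priority over your training data), scaffold the routing for the task UI. Only create the page and layout files for now - don't touch anything else.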
@@CodingtheFuture-jg1he i will do, thanks so much for your kind answer.
Super nice tutorial. Very complete 👏
Thank you so much @robboerman9378!
Hi, I want only Java and Spring Boot - is that possible? A REST API and then MySQL or another database? Thanks
Hi @lordav5520! Are you asking whether you can just create the REST API I show in the tutorial? If so, sure. Just do the first part of the tutorial and then stop. The UI isn't a requirement - you'd still have a functional API that could be used by any UI or any other kind of API consumer. And, if you're asking about replacing the in-memory HSQL database with MySQL for the API... once you've built the API following the tutorial, you could just ask aider something like:
"Change the database the app is using from hsqldb to MySQL"
aider should update your Maven POM dependencies, your Spring application.properties file, etc. Now, you'll still have to set up your MySQL database yourself and then update the MySQL values in your application.properties file. But, you can also ask aider to help you out there:
"How can I setup a MySQL database for this API?"
Best of luck!
Wow! That is amazing! Thank you. I will try to use this. I need time haha
Thank you. Great video. Some parts are difficult for non-programmers like me. Can you please create a video for absolute beginners and create a basic full stack app using aider?
Thank you for that @sivakumarm3569!
I understand where you're coming from and others have also asked for a tutorial on just the setup process - i.e. how to install Python, VS Code, etc. and create API keys. All the stuff you need to actually implement the app yourself.
I'm planning to create a video very soon that will assume a person has none of these installed and has never worked with any of these tools and takes them up to the point where they can begin any AI-assisted coding project.
I probably won't get into how to actually build an app in that video, as I think the video will get too long - and will therefore take me much longer to get out there. Plus, breaking the content apart like this makes that video re-usable as a reference for lots of other videos.
Look for this probably within the next week or so.
Great video! Please make even more videos like these
Thank you so much @eatkhana-jd4fv!
I'll be publishing a similar video in a day or two, but using AI coding assistants to develop an AI app.
Let's say we have a project we're working on, and we've completed more than half of it. Is it possible to introduce aider in the middle of the project? From what I've seen in your videos, aider is typically implemented from the beginning...
Hi @miiihaaas! Actually, aider is used far more to work on existing codebases than for bootstrapping new projects. In the first couple of tutorials on aider, I wanted to point out that it can also create a fairly substantial app from zero. I'll be going into various use cases - like refactoring, fixing bugs, adding enhancements, etc. - for existing codebases in future videos. But really, you could imagine that you started with the codebase I ended up with in the tutorial. You come back later and fire up aider in that project and have aider help you fix a bug or whatever. Same workflow.
@@CodingtheFuture-jg1he I have subscribed for future videos you mentioned :)
Any AI tools that work well on an existing codebase?
Hi @geoffreythomazeau8497! Lots of devs are using aider on existing codebases. In fact, I think I may be one of the few using it to develop apps from scratch, based on comments I see on their Discord server.
The key to any coding assistant is how well it handles the context (i.e. info on all the files, functions, etc. in your codebase) and provides that to the LLM as it generates new code or updates existing code. aider handles this by maintaining a context map of your entire codebase as you code. It doesn't provide every line of code to the LLM (doing that wouldn't work well for a lot of LLMs), so it opts to provide enough info to the LLM to hopefully allow it to generate quality code for you.
If you think about this a bit... that means the quality of the LLM's output is highly correlated to the quality of your existing codebase. For example, if your existing classes, functions, etc. are poorly named, the LLM will struggle to understand your codebase - just like any human would ;-)
You can check out this link which describes how aider maintains a map of your repo: aider.chat/2023/10/22/repomap.html. Might also want to join their Discord server to ask more about this: discord.com/invite/Tv2uQnR88V.
Also, I don't just use aider. I also use a couple of other coding assistants. Cody AI is one that's a VS Code extension vs a terminal app. Like aider, it creates a context map of your codebase and constantly updates that index as you code, so it can provide info on your codebase to the LLM. I typically use something like Cody more once I have my project pretty well scaffolded. They all have their strengths and weaknesses.
Having said all this, results from any of these tools will greatly depend on devs learning the ins and outs of prompting them. We can watch all the tutorials we want. But experimentation and lots of practice is the only way to figure this all out. I'd love to hear what you find as you integrate these tools into your workflow!
Great job, thanks for telling me about Aider. I have been using cursor, but will give this a try. Thank you!
Thank you so much @Syntaxstic!
I gotta check out Cursor now. Lots of folks saying it's awesome.
Thank you for this demonstration. Look forward to learning more.
I will take aspects of this video to help me code a custom UI for aider. I know that there's a browser command, but I look forward to customizing further. Would love to know if you have any additional tips !
That's a great idea! Using aider to create a custom UI that works and looks the way you'd prefer would be really cool! I'd love to know how that turns out for you.
I plan to release a video soon with tips for using aider, along with a couple of other AI tools. I'm currently wrapping up a tutorial for using aider to build a RAG app using LangChain. I try to drop a tip here and there during tutorials, but if you have any areas in mind or run into any gotchas while building your custom UI with aider, please let me know and I'll see about working in some tips in those areas in future videos.
Thanks for your feedback!
I tried --sonnet but it was still asking me for an OpenAI API key and I couldn't find the solution. Do you have any idea?
Hi @mesutsimsek35! Are you sure you're setting your ANTHROPIC_API_KEY environment variable to a Claude API key you created? If aider finds that key set to a valid value, when you run with the --sonnet option, it should just use Claude. Now, if it doesn't find that key, it might default to OpenAI. Please confirm that the API key is set correctly in the same terminal you launch aider from, e.g. "echo $ANTHROPIC_API_KEY" if in git bash, or "echo %ANTHROPIC_API_KEY%" if you're running aider within a Windows command prompt. Here's the aider page on this: aider.chat/docs/llms/anthropic.html.
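For reference, the whole sequence in git bash looks roughly like this (the key value is just a placeholder for your real Claude key):
export ANTHROPIC_API_KEY=your-claude-api-key-here
echo $ANTHROPIC_API_KEY    # confirm it's set in THIS terminal
aider --sonnet             # aider should report a Claude 3.5 Sonnet model at startup
In a Windows command prompt, it would be "set ANTHROPIC_API_KEY=your-claude-api-key-here" instead of the export.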
Great video! You could go in depth about multi-agents and/or micro-agents.
Thank you so much for your kind words @maxhenriquez8819!
That's another topic we'll be covering on the channel for sure! I haven't looked at micro-agents at all, but I've developed quite a few multi-agent apps. I do think agents represent the next big leap in AI capabilities and deserve separate treatment.
@@CodingtheFuture-jg1he Thank you, I will wait attentively
Awesome video🎉New Sub!
How long do you reckon it would have taken you if you tried to make the same app?
Including learning Nextjs since you mentioned you had never used it before?
I tried testing aider with OpenAI by building a rolling 3D map module using Three.js, mostly for games. Aider moved forward acceptably at the beginning, though rephrased prompts seemed to be frequently required in order to proceed. Then we reached loading a hex-encoded map, which I asked it to complete, but it clogged my output window with the hex representation even when I asked it to avoid doing that, draining my OpenAI credits. All in all, between the two extremes of being totally useless (0) and very useful (10), I would give it a 7.
I appreciate you sharing that @Infinix2023-p8y!
That's a very interesting scenario. Not sure, but I'm guessing that may be due to the LLM (I assume GPT 4o?) versus aider. But it could be something aider is doing - maybe one of its internal prompts.
I'd be interested to know if you have that same experience using Claude 3.5 Sonnet. I know GPT 4o is also powerful, but my experience with Claude on different types of apps has been much better.
I agree. Claude ai is outstanding when it comes to coding
Solid video 🙏🏽💎 have you tried the newest version of Cursor AI with Claude 3.5 Sonnet?
Thank you for that @J3R3MI6!
No, I keep hearing how great Cursor is, but I haven't had time to check it out. Have you? If so, what's your experience so far?
Good demonstration. Try a novel full stack app though. There's probably thousands of task apps these AIs have been trained on.
Thank you so much @RandyRanderson404! You make a valid point. I have considered this as well. Having said that, a couple of thoughts:
1. All frontier models have been trained on "the entirety of human knowledge". Ok, not sure I totally buy that. But, for all intents and purposes, yeah, I do think that's true.
2. Can you think of a kind of app that would be both novel and useful to a wide range of businesses and/or consumers? If it doesn't meet both requirements, then I say, meh - no one will care.
After nearly 29 years and about 10 different industries, I really can't say I've encountered what I personally would consider an employer or client trying to build something completely novel. Sure, they put their own twists on things - otherwise they'd probably just buy something off the shelf.
But I believe that all the LLMs already possess deep knowledge on virtually every domain any human would care about. Which means (I think) that the BEST LLMs should be able to take your novel requirements (which are really only a re-mixing of pieces of knowledge from various domains it knows about) and do nearly as decent (not quite) a job with them as for this very common Task API. Now, that assumes that we break it down and use a similar "iterative dev" process to create it. It's gotta be apples to apples, except for the app requirements.
If you try something like this, would you mind sharing?
But I'm really speculating a bit here, because I truly haven't had the time to figure out the concept for a new app that would be truly revolutionary that I could use to test this theory.
I'm not arguing your point. It's a great one! I love this type of feedback. Debating and testing these things makes us all better 😀
@@CodingtheFuture-jg1he Thank you for the long reply. I focus on building platforms for developers to ship what they work on to the customer. I spend a lot of time configuring Linux. I pay for ChatGPT as my go-to AI. I think it's the best for my work. That being said, I can barely trust it. When it comes to linux configurations, it's been trained on so many distributions, with so many ways of doing things, a lot of which are deprecated, it produces bad output. I work with a lot of new libraries and it doesn't have the training but it acts like it does. It makes me nervous to see boilerplate get generated like this. Dependency hell has been a challenge for my work and I'm always asking devs why they're including packages. Although, a positive is maybe we can generate the little functionality the devs need from a package and we don't need to import the whole thing. Where AI shines is when I can give it a very small scope but detailed task to write something I'm decent at but for something I'm an expert in, it slows me down.
So maybe instead of asking for something novel, I would like to critique an SBOM of the AI-generated boilerplate.
Although, I've been working on a novel geospatial app. I exercised a bunch of prompts to generate ideas but it never suggested to me the technology I ended up going with.
@@CodingtheFuture-jg1he Claude has recently stumbled for me when making a basic GSAP slider, and took a loooong time and coaxing to get a React-Aria toast component working, so I don't think I'm as confident as you are about them when you take them off-piste. And Randy is right - a todo app with a basic crud api is very much boilerplate at this point. Still impressive, and looking forward to trying aider, so thanks!
This is a very good video! I hope to see new content from you!
Thank you so much for your kind words @paintenzero!
I'd love to hear what topics would be of interest to you.
Thanks for the tutorial. May I ask how long it took to build this, and how many tokens were spent on this task? Was this all a one-go thing, or was there back and forth to get to the right solution?
Thank you for that @smtkumar007!
So, from the point where I came up with the initial idea of what I wanted to build and kinda the steps I planned to follow to get aider to build it out... guessing my first run through took maybe 45 minutes. I then tore it down and re-built it all several times, because I want some level of confidence that what I'm presenting to you is repeatable. Just starting over once I had all the prompts in order was like 5 minutes, end-to-end.
Now, during my initial runs, I did run into a couple of minor issues that caused me to revise my prompts. This is normal. For instance, you'll see things like me telling aider "only do this part for now - don't touch anything else" in the prompts. Without those, I found that Claude was getting a bit eager with generating code and getting ahead of where I wanted to be - and the results weren't as solid.
So, that little bit of prompt experimentation is why it took me 45 minutes to get it right initially.
Now, keep in mind that I've been doing this for a bit, so a lot of what to do to get good results and what to do when things go South is becoming somewhat intuitive to me.
I'll be releasing future videos that provide tips and tricks on this kinda stuff.
@@CodingtheFuture-jg1he Thanks for sharing. I still don't have a good idea of the token costs of doing this - roughly how many tokens were spent during the entire process, including the experimental prompts?
Yeah, but be sure to talk about all those API calls. If you're coding something a little more complex, like a phone app using Kotlin files, it seems to have trouble, though it was able to get some stuff working - even without aider. However, Claude doesn't have internet access yet, so that could really change a lot, since some of Claude's framework knowledge is a little behind on updates. It can start to get annoying. Plus the message cap per day. It's still amazing. I'll be waiting until Anthropic grows and gets more support to see how it runs in a year or two, because if they continue down this path they're going to be a powerhouse. What they have now is really amazing already - just all the message limits and 4 hour wait times between 10 messages get old.
I really appreciate your feedback @TRFAD!
I agree with all your points for sure. I'm sure this will all get better - and cheaper - quickly. Heck, as you pointed out, Claude has already changed the game in a very short time. As you say, Claude doesn't have browsing capabilities. To mitigate that a bit, most of the AI coding assistants now allow you to point to multiple URLs (haven't seen one that crawls a site yet) and have the content of those pages provided to the LLM as context, along with aider's map of your repo. Not as simple for the user as just searching the Internet for context, but on the other hand, it might give better results, if the dev has access to solid info sources in the form of websites.
I'd be surprised if assistants don't soon have search ability. Getting better, bit by bit 🙂
Excellent
kindly post video about Claude 3.5 AI model and API finetune ... full course for developers.
Thanks for your request @hendoitechnologies! I am planning to launch a community soon (very likely on Skool.com) and start putting together more full-fledged courses. Could you please elaborate on what you mean by "API finetune"?
Hi, great demonstration you got there, thank you.
I just wonder if anyone can do a more realistic demonstration, which is adding a new function to an already established open-source project?
Thank you so much @ThangNguyen-ot8uz!
A quick search didn't turn up a tutorial or demo showing aider specifically adding a function to an existing codebase. But it should be no problem. Not saying it'll get it right first try every time. Depends on the size and complexity of the codebase and also how you prompt it.
Here's an explanation of how aider keeps track of all the files and functions in your codebase for context: aider.chat/2023/10/22/repomap.html
@@CodingtheFuture-jg1he: wouldn‘t that make it a perfect topic for another video?
@@CodingtheFuture-jg1he thank you for replying. I just wonder how much context the AI can really jam into the question when we ask it to modify a code section in an existing project. Is the current AI tech advanced enough to extract the logically connected features in a codebase that are needed for the various requests we give it? Or, when a project reaches a certain point/length, is it just impossible for the AI to give a good answer, even when we try to avoid asking it too many things at once? Take RAG for example: most RAG systems I know fail to answer questions that are not one-stop (questions that require multiple logical steps connecting multiple pieces of information to get the answer). I assume the same for coding agents - great for building things from the ground up, but after things get big enough, they slow down significantly because the AI is not smart enough to find all the related pieces of information and connect them together. How do you think AI tech will solve this problem in the near future? By increasing context length so that it can include the whole codebase in one go every time we ask a question? Or something else?
Hi, can you give your opinion on my question?
Oh, and I have one more question. Can you show how you continue from where you left off yesterday if it's a multi-day coding session? In my case, aider seems to create new files and not use the older files it created before @CodingtheFuture-jg1he
@@CodingtheFuture-jg1he hi, I would also love to see a tutorial on how to debug effectively in a multi-file project, if you can somehow demonstrate it. Thank you very much
How to use a screenshot of a sample app when generating a new app?
Hi @PartneredAdmin! I haven't yet tested this with aider, but I think this aider web page might be of some help: aider.chat/docs/usage/images-urls.html.
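If it works the way that page describes, a session might look something like this - keep in mind I haven't tried this flow myself yet, and the file name is just a placeholder:
/add sample-app-screenshot.png
Build a new page that matches the layout in the screenshot I just added. Only scaffold the UI for now - don't wire up any data yet.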
Let me know if that's what you were looking for.
This is awesome❤
Thank you for that @ajitharavindphotog!
Doing gods work 🔥
Thank you for that @marioa6942!
Thank you. This was amazing
Thank you so much @VPAT37!
Impressive!!
Thank you so much @diardelavega!
Do you need to start aider with the --sonnet option? I've been using aider to help modify an Ansible playbook and it makes lots of mistakes - I wonder if it is not really using Sonnet.
I typically start aider with "aider site.yaml", which adds the site.yaml file to its list of files. I then use the /add * command to add all files and sub-files into aider.
aider has never prompted me to modify any existing file, nor has it asked me if I want to commit changes to git - it just does it.
@jefffogarty6470! I believe if you have a recent version of aider - like, a release within the past 3-4 weeks - Claude 3.5 Sonnet is the default. I just started aider without any options and it's using that model.
By default, aider wants to commit its changes to git. I believe you can disable this with the --no-auto-commits command line option.
Also, aider's behavior is to ask you whether it can create any NEW file, but not to ask permission to modify an EXISTING file.
Please note that aider provides a /undo command, which lets you reverse the most recent commit.
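To make that concrete, here's a rough sketch of the flow - worth double-checking the flag name against aider --help for your version:
aider site.yaml                       # default behavior: aider auto-commits each change it makes
/undo                                 # inside the chat: reverts aider's most recent commit if you don't like it
aider --no-auto-commits site.yaml     # alternative: aider still edits files, but leaves committing to you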
Can you use other GPTs as well to act as agents? Like using Groq, Gemini, ChatGPT, and Claude to all debate each other?
@caseystar_! No, aider isn't agent-based. But, you can use Groq and Gemini with aider.
aider.chat/docs/llms.html
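For example - and please double-check these against that page, since model names change often and the exact strings here are just illustrative:
export GROQ_API_KEY=your-groq-key
aider --model groq/llama3-70b-8192
export GEMINI_API_KEY=your-gemini-key
aider --model gemini/gemini-1.5-pro-latest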
You Are Awesome, thanks for this.
Thank you so much for your kind words @nyacumo! And, you are very welcome!
If I run it on a project that already exists, will it take the whole project as context? How does that work in this case?
Hey @henriquematias1986!
Please check out this page on the aider site: aider.chat/2023/10/22/repomap.html
Let me know if that doesn't help.
Thanks!
@@CodingtheFuture-jg1he this is incredible, way better than i could expect! amazing.
Any thoughts on using Claude over ChatGPT? From my personal experiments, Claude has been doing a lot better recently.
How well does it work with existing codebases?
Hi @shayanr01! To be honest, I have yet to try out aider on a complex existing codebase. I've used other coding assistants but haven't gotten around to using aider this way yet. A future video maybe?
However, based on what aider's developers and the community say in the discord channel, existing code is where aider is supposed to shine.
Since others have created some tutorials on using it on existing code and it seems to be the most common use case, for this tutorial, I decided to take the opposite approach and see how well aider does jump-starting a new project.
What I can say is this: in my experience, no matter which AI coding assistant I choose, it's critical that I carefully direct the assistant - for example, approaching the assistant interactions in the same way I always develop myself, which is iterative. Never give the assistant a big complex task to perform - iterate, one bite at a time.
Sorry I can't give you a more authoritative answer on this one. I'd love to know what you find, if you decide to try aider on an existing codebase.
UPDATE: just remembered this... maybe this page could shed some light on aider's ability to deal with large existing codebases: aider.chat/2023/10/22/repomap.html.
@@CodingtheFuture-jg1he I use Claude Engineer for existing codebases
Let's always do alot of good ❤
Thank you for that @Mari_Selalu_Berbuat_Kebaikan! I agree!
Thanks for the video, your workflow is impressively fast
I really appreciate your feedback @ZuckFukerberg!
Is it free to do all this, or are there any paid services we're going to use?
Hi @corpsedad7368!
aider requires an LLM to generate its code. The RAG app also requires an LLM to answer users' queries.
I highly recommend using Anthropic's Claude 3.5 LLM for aider. And, you can use either Claude or OpenAI's GPT 4 LLM (as I do in the video) for the RAG app.
You have to go to the links I put in the video description, create an account, and pay a few bucks to be able to use these models.
If you're trying to do this for zero cost, here are two options:
1. IF you have an NVIDIA GPU, you may be able to install Ollama locally, pull one of the LLMs that's best at coding into Ollama and point both aider and the RAG app at your local Ollama server. Without a GPU though, for coding, this won't work. The inferencing will be way too slow and you'd likely not get great results.
2. Create an account with Groq (console.groq.com/playground), create an API key there and point both aider and the app at Groq and use one of their hosted LLMs. For the moment, Groq isn't charging, BUT they impose some fairly restrictive rate limits, which means you may get throttled while working and have to wait a few seconds to re-try your prompt.
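To make option 1 a bit more concrete, here's a rough sketch assuming a local Ollama install - the model name is just an example, and it's worth double-checking the exact syntax against aider's docs:
ollama pull codellama            # or another coding-focused model
ollama serve                     # start the local Ollama server (if it isn't already running)
aider --model ollama/codellama   # point aider at the local model instead of a hosted one
For option 2, it's just a matter of creating your Groq API key, exporting it as an environment variable, and passing one of Groq's hosted models to aider's --model option.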
A couple of caveats here:
1. Every LLM is different. Some work better than others for certain tasks and so I make no guarantees as to whether different LLMs than those I use in the tutorial will give you the same results.
2. If you choose any other LLMs than those I used, you'll have to modify a couple of steps in the tutorial because, although the code differences for switching LLMs are minor, there are still changes required.
Hope that helps. At some point in the future, I'm going to create a video that focuses on this very topic.
Thanks, can I run this on a CPU? Because I don't have a GPU.
Yes, you can. The tutorial uses Anthropic APIs (for the Claude 3.5 LLM), so all the heavy lifting is done in the Cloud and not on your PC 😀 If you were running aider against an LLM running locally on your machine, you'd likely need a GPU.
Subscribed and liked. Thanks so much, I wish to see more videos using aider.
Thank you so much @bambanx!
How much was the claude bill?
huge
Hi @RyanJohnson!
For this tutorial, I didn't track precisely, but I believe I spent a total of somewhere around *50 cents* on Claude 3.5 Sonnet. Haven't counted tokens, but that was for 15 prompts. Now, the prompts are very short. But Claude's output token count had to have been quite high, because a LOT of code was generated.
Why is the video in 720p?!!
Thank you for pointing that out @marma6937!
So sorry about that. Looks like the project settings in my video editor somehow got skewed (they're at 1917 x 1032 and I think TH-cam downgraded the video due to the non-standard setting) and I missed it before publishing.
I'm really glad you pointed it out before my next video 😀 I'll know to double-check that setting now and will do better next time.
Can you make a video about: how to install VS Code, how to install aider, and some basics for non-coding folks who wish to get their feet wet?
I will certainly add that to my list of planned videos. Curious... what's your background with programming? And what are you working towards with regards to AI and coding? Your feedback is super valuable to me and other folks on this channel!
I have been using Claude Engineer to do this too. Everything works as advertised. It really does all the work for you. But for me it is useless, and nobody has mentioned the huge bill these solutions build up. I work on a huge project. Just one task costs me $0.50; the smallest cost I have managed is $0.04 per task. So yeah, not cheap. When the costs go down by 90%, then the real fun will begin.
I really appreciate your feedback @gani2an1!
I was hoping someone would provide some data on the costs they're experiencing. I agree that, for people who are doing hard-core dev on large complex codebases, the costs can get really high. I think this is a case for using other commercial AI coding assistants, such as Cody AI or Tabnine, where you likely get a lot more LLM use bundled for a set monthly fee.
Now, the costs for Claude to do the kinds of things I'm doing in this tutorial are very low - I think building the entire app once was around 40 cents. But, if I kept going and built out a super complex app, I'm guessing I'd end up spending at least tens of dollars.
Very small fonts, not readable for me even though I'm on a rather large monitor.
Bad low resolution.
Yes, I apologize for that @dilshadms6202! This was my first video on this channel and I had a bad setting in my video editor. Didn't notice that TH-cam downgraded my resolution from 1080p to 720p until I had too many views to replace it. All my videos after this first one are in HD.
@@CodingtheFuture-jg1he No need to apologize, just a hint. Thank you for putting so much effort into a video to educate people.
I wish Claude would remove their limits, that's the only limiting factor. Once you get deeper into a project it gets very expensive. On top of that, you keep getting rate limited every minute. Sigh.
Oh yeah, I totally agree! When we're using open source AI coding assistants, like aider, we're on the hook for the cost of using the hosted LLMs. That's one reason why I use aider for certain things, like generating my initial project structure or making updates that other assistants aren't as well suited for, but then switch to other coding assistants, such as Continue or Cody AI, that let you choose between models like Claude for a single low monthly cost.
Do you have any tips for controlling costs without a lot of fuss?
Thanks!
@@CodingtheFuture-jg1he LLM Routing seems like the way to go. Have you checked out RouteLLM?
Just use the OpenRouter API for Claude 3.5 - at least that solves the rate limiting issue. But yeah, aider gets expensive quickly, especially when it doesn't write the file correctly.
Just yesterday I used 3 AIs to generate code - Gemini, Gemma and Claude. Now I can encode and decode with SHA-256, so cool, and I didn't write a line of code.
Very cool @peterbabu936! Although AI-generated code doesn't always work, isn't it a great feeling when it does and you can get something useful done in a fraction of the time? 🍾
It's fast like lightning, from prompt crafting and reinforcement, to working code, I am loving it
All fun and happiness until the jira tickets arrive
Great stuff
Thank you!
🤩
Coding is about knowing what you're doing. You can do it helped by an AI, but here you don't know what's happening - this method of dev is not suited for prod. That's why I really prefer Cursor as an IDE/coding assistant; I have much more visibility on where and how I'll achieve a task.
I really appreciate your feedback @nicolasivorrani!
I hear ya. I use multiple AI coding assistants and switch between them depending on the task at hand. And, to develop apps of any complexity still requires:
1. An engineering mindset.
2. Experience developing complex apps.
3. Understanding how to prompt to get the best results (and how to back out or take over when you can't get good results).
I didn't spend time in this tutorial reviewing the code or aider's output in any detail, just due to time - and I think I'd lose most folks if I did, due to sheer boredom 😆
But I can assure you that you definitely know what aider is doing. First, while it's generating anything - code, README, etc. - it explains it all in gory detail in the terminal. I just didn't take the time to show all that. Apologies. Maybe in a future video, if you're interested?
Also, of course, I'm looking at the actual source files as aider is updating them, so I see what it's updating. And, aider is committing to git and also has an "undo" command to reverse whatever changes it made. So, everything that git provides, you have access to with aider, since it (by default - you can disable) auto-commits to git.
In case this helps, I have 28 years experience as a software engineer and over a decade architecting Java Spring-based enterprise apps and I reviewed the Java code aider generated in the tutorial. It was really good code. I'll admit that it does much better with some technologies and app types than with others.
Also, aider could well get worse as the codebase grows. As with ALL AI coding assistants, the more context they have to use for a task, the harder it is for them - this is almost completely a limitation of the LLM they're using, rather than the coding assistant itself. This is where learning the techniques and tricks comes in. It is suitable for production apps - it just depends on what your expectations are. It won't do it all, that's for certain. You're still the main brain behind it all 😀
Curious. Complex apps without coding... every time, this seems like a lie, or it can only make useless apps that nobody needs. Maybe this is different…
Thank you for your comment!
I get what you're saying @paulholsters7932. I don't think anyone will be deploying the snake game to production 😆
I think that, if someone is saying "AI or coding assistants today can generate a complex app based on a single prompt", I question their agenda. They cannot. On the other hand, if someone is saying "No way you can use these AI tools to develop solid production-quality enterprise apps", they're also incorrect.
IMHO, both the AI hypevangelists AND the AI naysayers are wrong when it comes to AI generating code. As with most things, the truth lies in the middle. The issue has mostly to do with realistic expectations.
With a strong enough model (e.g. Claude 3.5), a good coding assistant AND lots of learning (especially strong prompting techniques) and experimenting, combined with an "iterative" development mindset, you certainly can (I do myself) develop some pretty complex enterprise apps.
We still need to apply an engineering mindset when using AI. Had I carried the 2 apps shown in the tutorial forward towards production, you would have seen me shifting more towards a "coding assistant on the side" workflow style. And I typically switch to a different coding assistant for some tasks. None of them are a panacea. Gotta have an experienced dev in the mix for anything of any complexity.
@@CodingtheFuture-jg1he interesting. But I prefer normal coding. I don’t have the time to learn how to craft something production ready with AI. It’s faster in the normal way. I have spent a lot of time learning how to code. Now I don’t want to learn something new with the same results (at best). AI suggestions like GitHub copilot get in my way of my thought process. In the long run I don’t think AI is the way to go when it comes to coding.
to get more subs, please fix the audio - it's out of sync with the video
I really appreciate your feedback @MikeRhodesIdeas!
Yes, I realized after publishing it was off. Please forgive me - I'm quite new to the video editing process 😆
I think if you check out the more recent video, you'll see a big difference in quality.
I appreciate when people are willing to point out what needs to be improved - otherwise, how will it get better?
Thank you. Great tutorial. Can you record in 1080p next time, please? It's harder to read at 720. God bless
I really appreciate your feedback @agsvk-com! I understand that video's a bit tough to watch. I didn't realize a setting in my editor got messed up until I'd uploaded that video and a thousand people had watched it. All my videos after that one are all 1080p. And, I'm being diligent in double-checking the resolution settings in my editor now before uploading.
Hope you were still able to come away with something useful from that video.
I am using the DeepSeek Coder API with aider and it is not able to generate the file in VS Code. Is it a DeepSeek Coder limitation, or am I doing something wrong?
Hi @Ecomcodegenius! I haven't tried DeepSeek with these tutorials. But, based on what I read and hear from others using AI coding assistants, it's not performing as well as Claude 3.5 Sonnet. Another thing to note is that each LLM has its own peculiarities (based mostly on how it was trained), which means that to be really effective using any particular model, you have to experiment with that model and learn its nuances. For example, I was working with aider using Claude 3.5 on an actual project (not a tutorial) last night. It was doing really well. I decided to switch over to the Gemini 1.5 Pro model. Even though it has a far larger context window than Claude, it not only performed worse on coding tasks, it also just behaved differently. For example, aider would describe the code changes and print them to its console, but it stopped actually generating the source files.
Lower cost solid models are coming, but for now, it's not worth switching from Claude 3.5. Your time is worth more than the $20-30/month Claude will cost you (I spend less, and I code with it almost every day). Now, I'm assuming in that cost estimate that you've learned a bit about how to properly interact with that model.
FWIW, I'm about to release a new video that's more of a "tips & techniques" for working with AI coding assistants.
@@CodingtheFuture-jg1he Yes sir, I felt the same. Today I used Claude 3.5 and it worked as expected. Btw, thank you for this detailed explanation 🫡.