Hey Dan, thanks for sharing this.
I've been using Cursor for a few months now, but in a very different way than how you demonstrated in the video.
Take a look at the chat pane; it allows you to @mention the entire codebase instead of individual files.
This leverages chain of thought prompting to determine the appropriate files based on your prompt.
Would most definitely be interested in tutorials on prompt engineering for coders/developers/software engineers. Thanks for spending the time sharing your expertise.
Second this. I would pay for a course that covers efficient prompt engineering for developers. Also, sources that you use to improve your own prompting skills would be great.
Great content as always. Big thanks!
I think it would be cool if there were a website or some central location where developers could share their prompting techniques for certain programming scenarios.
I second this.
I third this... and in fairness, it's a good suggestion, even for a paid app!
Let's build it!
@@francycharuto There's a research group that did a study on the entire 'prompt' thing and came up with some rather interesting findings. For one thing, including a line such as "This is really important, we could lose our jobs and be homeless if we make a mistake" in the prompt actually carries efficacy. Basically emotionally blackmailing the model... lol.
EDIT: It works, by the way; by my reckoning it improves GPT-3.5 by at least 50%.
The full paper on these principles is called "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4".
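That principle is trivial to try yourself. A minimal sketch (the constant and helper names below are my own illustration, not from the paper or any library): prepend the emotional-stakes line to whatever task prompt you were going to send to the model.

```python
# The "emotional stakes" line quoted in the comment above; the helper
# name is illustrative only, not an API from any real tool.
EMOTIONAL_STAKE = (
    "This is really important, we could lose our jobs and be homeless "
    "if we make a mistake."
)

def with_emotional_stake(task_prompt: str) -> str:
    """Prepend the stakes line to the actual task prompt."""
    return f"{EMOTIONAL_STAKE}\n\n{task_prompt}"

prompt = with_emotional_stake("Refactor this function to remove the SQL injection risk.")
print(prompt)
```

Whether it still moves the needle on newer models than GPT-3.5 is an open question; the paper measured older ones.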
Great talk, I am definitely interested in your learning material. Both AI Coding Assistant tutorials and courses.
I think where these tools are at now, they're allowing us to focus more at the pseudocode level. Indeed, I recently wrote a script pseudocode-first, then asked Aider to complete it, which worked nicely. Maybe not a 5x improvement, more like 3x.
Would love to see you engineer your own Aider with optimized error prompt chaining; I have worked on this before and was successful with complex SQL queries. Coding assistants are a good starter meal, but they lack the salt, spices, and marination. All of this can be achieved through self-automated context selection (file names with semantic descriptions of the functions; if you ask me, every function and its dependencies can be fetched), and attaching a description with it will lead to a one-prompt-for-everything approach. I believe in pure Python and Pydantic approaches rather than using external libraries. I haven't learned much from the video from a technical perspective; it's more a tutorial on using Aider. I loved your series on Chat with your database! I was waiting for an upgrade on that as your intuition kicks in for error prompt chaining and self context retrieval for code.
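For what it's worth, that context-selection idea can be sketched with stdlib dataclasses standing in for Pydantic (every name here is illustrative, not from any real assistant): index each function with its file, a semantic description, and its dependencies, then match prompt keywords against descriptions and pull in the dependency files too.

```python
from dataclasses import dataclass, field

STOPWORDS = {"a", "an", "the", "to", "of", "and"}

def keywords(text: str) -> set[str]:
    """Lowercase words minus common stopwords."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

@dataclass
class FunctionDesc:
    """One indexed function: where it lives, what it does, what it calls."""
    name: str
    file: str
    description: str
    dependencies: list[str] = field(default_factory=list)

def select_context(index: list[FunctionDesc], prompt: str) -> list[str]:
    """Pick files whose descriptions share keywords with the prompt,
    then also pull in the files of each match's dependencies."""
    wanted = keywords(prompt)
    by_name = {f.name: f for f in index}
    files: list[str] = []
    for fn in index:
        if wanted & keywords(fn.description):
            for name in [fn.name, *fn.dependencies]:
                dep = by_name.get(name)
                if dep and dep.file not in files:
                    files.append(dep.file)
    return files

index = [
    FunctionDesc("run_query", "db.py", "execute a sql query", ["connect"]),
    FunctionDesc("connect", "conn.py", "open a database connection"),
    FunctionDesc("render", "ui.py", "draw the results table"),
]
print(select_context(index, "fix the sql query timeout"))  # → ['db.py', 'conn.py']
```

A real system would use embeddings rather than keyword overlap, but the shape of the index is the same.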
What's the open-file context tool for Aider? I couldn't see it.
Hey Dan, you mentioned you've tried RAG across the whole codebase with unfavourable results. Have you considered knowledge graphing? Logically then, hybrid search across knowledge graph structures that represent vector indexes for individual scripts?
Do you think there would be a relative cost benefit to taking the time to create a KG of the codebase and then feeding it back to the LLM?
Do you think that would be better than using Universal Ctags like Aider does?
Great callout. I've considered a KB graph with some limited depth. I think KB graphs + Universal Ctags (Aider uses this, as @evetsnilrac9689 mentioned) PLUS Levenshtein distance on the input data is the winning combo.
The trick is getting this functionality into AI coding assistants so they can automatically reference files for you without intervention. Lots of value in this problem.
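The Levenshtein piece of that combo is easy to prototype. A minimal sketch (names are mine, not from Aider): rank repo files by the smallest edit distance between any prompt word and the file's base name.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def rank_files(files: list[str], prompt: str) -> list[str]:
    """Order files by the smallest edit distance between any prompt word
    and the file's base name (directory and extension stripped)."""
    words = prompt.lower().split()

    def score(path: str) -> int:
        base = path.rsplit("/", 1)[-1].rsplit(".", 1)[0].lower()
        return min(levenshtein(w, base) for w in words)

    return sorted(files, key=score)

files = ["src/billing.py", "src/auth.py", "src/users.py"]
print(rank_files(files, "fix the auth token refresh")[0])  # → src/auth.py
```

On its own this only catches near-exact name matches; that's why it's one leg of the combo rather than the whole answer.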
@@indydevdan Then let us consider how to make this happen.
I'm working on such a system that might fix some of the AI issues mentioned. It's about getting composables, and maybe all code, into a graph DB. I want to reduce dependencies too, so a fully open system, but in compliance: using code as shared digital assets plus getting paid micropayments. No stealing...
What do you use as System Prompt in Cursor (More -> Rules for AI)?
That would interest me.
Great Video!
I am interested in AI coding techniques. Been using Aider for a language I don't use often. Very helpful.
Yes, interested in tutorials on code prompting.
I'm not sure I agree about "the prompt" being the fundamental unit of programming. When working with these tools, I often feel that the work I put into the prompt is as time-consuming as the code it produces, and that it's better to have a 50/50 split of prompting and coding (and not necessarily to "start with the prompt"). Human language is flexible, but imprecise and verbose. Code is rigid and more difficult, but also more accurate and far more concise. I think our future involves a healthy balance of the two; otherwise (as you alluded to), there will be skill atrophy, and then the tools aren't able to produce quality results... garbage in, garbage out.
I agree with most of that. I think if you really fundamentally mess up a prompt, or the model doesn't have enough context, it can mess up for sure. Prompting is fundamentally dependent on the model, and the more capable models are amazing at extracting what you want and doing just that. There is no magical prompt or trick to it. As long as you know what you're doing and you provide enough context, the model will handle the rest if it is capable enough.
I think learning and using concepts is the be-all and end-all. Syntax and exact muscle memory are unimportant. Why learn 15 programming languages that all have the same concepts? They failed to standardize the conventions in the past, and now we all have to pay the price because of their mistakes. Can't wait to program semantically with an assistant.
And the usual "you will not be able to write your own code!" does not make sense. Soon I will be able to run local models that are capable and small enough to run offline. And if I don't have internet or my PC, the same logic applies. This is just a shortcut, like programming languages going from machine code to higher-level languages; now it's just semantics.
Rock solid take, and I think RIGHT NOW you're right. Over time though, I'll bet (literally) that the balance between coding & prompting will naturally lean more toward prompting over the next 5 years, and those who lean with that natural flow will output vastly more (easily more than 5x) and receive dividends for it.
@@indydevdan Two months late to your reply, but I'm in complete agreement. I find myself writing comments to outline my tasks in pseudo-code and letting Cursor take a first pass at it, mostly just to help myself get a mental model of what I should be going for. And there have been more times than not where I actually delete and discard much of what it provided and just write it myself once the "bones" are there. Then rinse & repeat as the project continues. I think this is basically what you are saying!
Please help review Copilot Workspace. Is it now as capable as Cursor/Aider?
Stay tuned in - As SOON as I get access I'll release a video on it.
Did you try Mentat? It's like Aider. I don't know if it's better or not. Could you make a video testing it?
Yes 👍 please on ai coding assistant courses / ideas / efficiencies / insights
I'm interested in growing my skills in using assistants
Have you tested Continue with a local LLM through LM Studio? I think some fine-tuned, open source, code-specific LLMs like DeepSeek can beat GPT-3.5, no?
Would be interested to see you do some videos on Aider prompting ;)
I think if we made this known to the AI, or a group of AIs, they could develop a workflow where you stay relevant or you upgrade yourself with its help. Nothing is stopping us from improving or developing new ways to improve.
Dude, that was well done. I will share it with all the engineers in our company.
I would buy your course in a heartbeat!
Great video with clear ideas, thoughts, and philosophies. I agree with you about our industry moving too slowly on this topic. I use AI in several places to help with my coding, and it's also integrated into the product I build. One step I would like to improve is automated code reviews. Do you know of any good tools that can automatically review commits and send a daily email with suggestions? I am thinking about building an automated code review application.
Microsoft copilot will be more integrated into VS code. It’s just a matter of time, but I think Microsoft has got this covered.
Yes please, AI coding assistant tutorials.
Btw, what are your thoughts on Mentat as compared to Aider?
Copy that. Currently experimenting with Mentat. If I see an advantage of Mentat over Aider I'll upload a video to share the value. For the time being though, Aider is 100% the AI Coding Assistant to use.
Very interested in AI coding assistance. I'm not a programmer, but I am a keen learner of prompt engineering.
Content is awesome, drop the tutorials bro
Very interested in AI Coding Assistants tutorials and courses!
Great content Dan. In your 2024 predictions video you talked about curating information sources. I agree that there is a lot of garbage news/content about the current AI revolution. I just wanted to ask you if you had any particular recommendations for staying up to date with high quality and relevant news/tech.
Thanks for the videos! Would be interested in AI coding assistants tutorial/course and would happily pay for those, really like the way you explain things
This is 100% in the works - stay tuned.
Well done. Best video today!
this was a great video, thank you!
Great video man, insightful, thanks
Love your content so much, mate.
Have you got a Discord yet? Or Patreon?
Would love to chat with you!
Just messaged you on Twitter :)
Nobody is talking about this? It is probably the most common subject among senior engineers, HRC, app architects, and CTOs in the last 4 months. 🤔
Great Video! I agree with your thinking
I watch Primitive Technology. There'll be people doing that, but for code, a decade from now.
tutorials 👍
aider and cursor are not great. I have solved their problems internally.
Maybe push an open source commit instead of flexing on YouTube?
I have a startup. It will be known to you soon. I do contribute a lot of code to the underlying LLM frameworks. I just don't contribute code directly enabling this particular use case, since it's the focus of my company. @@---Oracle---
Most definitely interested in learning how to prompt-gram ❤
I have no coding background, so us newbs need someone like you to cut through the noise and find the signal; to say what's BS and what's workable. Some other YouTubers like @techfrens do a really good job at it. Ultimately, we would like open source that can compete with closed source alternatives. Not everybody has money to burn on tokens with OpenAI etc.
I appreciate your comment. Next video will focus specifically on open source local vs closed source - stay tuned.