I think it is important to be able to forget some "core ideas" or "core programming skills". We have already accepted that some programming languages are so good that we don't need to know assembly.
Or: if you ride a bicycle, you don't need to know how it works if a professional takes care of it twice a year. Of course you can use your free time to figure out how to read assembly code or fix your bike, but it isn't for everyone.
Agree. Old activities tend to be romanticized and become sports. E.g. the majority drive mass-produced cars, but there are thriving communities of hot-rodders and equestrians.
I really love to give the classes I've just written to different LLMs to get ideas on how to improve them, or even rewrite them in a better way. Sometimes they give me really insightful ideas; not every time, but often enough that I learn lots of new approaches. The last one was the singleton idea in Python, which I'd never used before and now love to use and understand.
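For anyone curious, a minimal sketch of what the singleton pattern looks like in Python (one common variant, overriding `__new__`; the commenter doesn't say which variant their LLM suggested, and the `Config` class name here is just illustrative):

```python
class Config:
    """Singleton: every instantiation returns the same object."""
    _instance = None

    def __new__(cls):
        # Create the instance only once; reuse it afterwards.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Config()
b = Config()
print(a is b)  # both names point to the same instance
```

Other common variants include a module-level instance or a metaclass; which one is "better" depends on whether subclassing and testability matter.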
I wasn't sure where to ask about this, but I would like to hear your take on the current state of "emergent abilities" in LLMs. Some papers have argued against that notion, such as arXiv:2304.15004, and I wonder whether any new insights have come to light since. Thank you for your videos as always.
Great question -- and it's very subjective. There is the theoretical front, but in practice, LLMs + agentic behavior go pretty far. Everyone is clamoring for GPT 5, 6, 7... but if you look at actual day-to-day workflows, I don't think we've even tapped out GPT-4 (and other models at that level).
I still write most of my code from scratch, but I spend far less time refactoring and debugging. AI is best used for discussing requirements and spotting edge cases. The true benefit isn't code generation but mental clarity: typing out the code was never the hard part. You can still get all the productivity gains while your skills sharpen rather than atrophy.
I think the main reason code quality drops when I use LLMs is that I tend to rely on them more when I'm tired but still pushing to finish a feature. I’ve also noticed that I tend to gradually expand the scope of changes more than I used to.
Have you tried asking the LLM to review your code, then again to fix the feedback from the "reviewer"?
I agree with the notion that we should turn off the autopilot from time to time and see if we can still code. I personally feel that using LLMs has made me much worse as a coder. However, I've also managed to build a lot more than I used to, so it's really a trade-off. Perhaps we're witnessing a redefinition of what a software engineer is: we're coding less and less, and it doesn't seem to be stopping. We're reviewing more and more, which I personally don't like at all.
> I personally feel that using LLMs has made me much worse as a coder. However, I've also managed to build a lot more than I used to.
Can you elaborate? Worse in that you aren't doing the coding and you feel like your skills are atrophying?
Quantity vs quality
One basic question: do you think it is still advisable to learn programming in order to join a tech company at a junior level?
That's the main question! I tried to answer it here: th-cam.com/video/dNmisudM9IE/w-d-xo.html but there's a lot more to it...
Deeply insightful