I love to hear someone express some of the nuance of using AI. However, I think there’s more to the topic that should be understood.
For me, scrutinizing AI-generated code is far slower than writing it from scratch, so I’m not sure how viable it is to integrate into your workflow. I also have dyslexia, so maybe I’m in the minority, but given the error rate of AI, I REALLY have to understand the output, which often means re-reading it 15+ times to scan for security vulnerabilities & blunders.
I also agree with the point about learning, but I would advise that no one trust the output of an AI for that purpose, because it’s often wrong. It hallucinates, is sometimes easily gaslit while at other times doubling down on verifiably wrong answers, and comes up with “close enough” answers that aren’t necessarily accurate but sound believable, delivered with confidence. However, in today’s world, where students are forced to publish blogs on topics they don’t understand as required classwork and the internet is flooded with bad, short-sighted articles, AI can seriously help cut through the BS by giving you the keywords you couldn’t find on your own and helping you form better Google searches to get right down to what you’re looking for.
The main issue is that when we talk about AI, we’re almost always referring to LLMs, which don’t have the intelligence we think they do. They’re next-word-prediction machines, not problem solvers. They’re trained on Q&A pairs, not the steps required to get to the answer, and therefore don’t do their own problem solving. They’re just regurgitating someone else’s answer from the internet, replacing some words, and hoping it works.
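To make the “next-word prediction” point concrete, here’s a minimal toy sketch I put together: a bigram frequency model in plain Python. It’s nothing like a real LLM’s architecture or scale (real models use neural networks over long contexts), but the training objective is the same in spirit: learn which word tends to follow which, then sample something plausible-sounding with no check on whether it’s actually correct.

```python
# Toy illustration only: a bigram "next-word predictor" trained on a tiny
# made-up corpus. Real LLMs are vastly more sophisticated, but the core
# objective is the same: predict the next token, not solve the problem.
import random
from collections import defaultdict, Counter

corpus = (
    "the answer is to restart the server . "
    "the answer is to clear the cache . "
    "the fix is to restart the service ."
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, max_words=8):
    word, out = start, [start]
    for _ in range(max_words):
        options = follows.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it appeared after
        # the current one -- fluent-sounding, but with zero reasoning about
        # whether the resulting "answer" is correct.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the answer is to restart the cache ."
```

The output reads like advice, and sometimes it even happens to be right, which is exactly why confident-but-unchecked text is so easy to mistake for understanding.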