Additional thoughts from a human:
- I agree that the term "A.I." is being overused to some extent, but the opposite extreme is no less silly, and that's what's been happening for decades: Capabilities that machines don't have yet are considered clear examples of artificial intelligence, but then as soon as machines gain a particular ability that previously required human intelligence, everyone moves the goalposts and stops thinking of that as intelligence. Basically, A.I. gets defined as whatever machines can't do yet. That's still happening, and it's starting to get rather ridiculous now that we have A.I.s passing the Turing test.
- Canva is an odd choice for an example of excessive A.I. branding. Weren't they avoiding that term for a while and just calling all their A.I. tools "Magic"? Pete says they rebranded their existing text-to-image feature as A.I. after the A.I. craze started. I think what happened was more like they added that feature right around the beginning of the A.I. boom, using Stable Diffusion, which has always been called A.I. everywhere except in Canva's branding.
- Pete describes a machine automatically adapting to a user's preferences as "machine learning". That's not what machine learning is. Machine learning is a way to give machines capabilities without having to explicitly program in all the details of how those capabilities work, which Pete doesn't seem to realize is possible. No one fully understands how modern A.I.s work, because we trained them rather than designing them.
- Pete says A.I. isn't real intelligence because it's just "human-led automation", meaning it just does what people tell it to do. That's true in a way: An A.I. wrote this video reaction because I told it to. But what does that have to do with the question of how intelligent it is? If I had instead hired a human to write the script for this video, would that mean the human wasn't intelligent? People with jobs automate tasks for other people and do what they're told, just like A.I. does. Maybe human workers operate at a higher level of autonomy than a chatbot, but LLMs can easily be made into autonomous agents. They're certainly more autonomous than Pete suggests when he says "a human has to program that in", because that's not how modern A.I.s are made. LLMs have all kinds of capabilities that no one specifically programmed, trained, or prompted them to know how to do.
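To make the machine-learning point above concrete (capabilities learned from examples rather than programmed in), here is a minimal from-scratch sketch: a toy perceptron that learns the logical-AND rule purely from labeled examples. All the names here are my own illustration, not any particular library's API.

```python
# Instead of hand-coding the rule for logical AND, we supply examples of
# the desired behavior and let a tiny perceptron find the rule itself.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (inputs, target) pairs; no rule is programmed in."""
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The only "programming" here is listing examples of the desired behavior.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_samples])  # -> [0, 0, 0, 1]
```

The behavior (AND) emerges from the training loop, not from any line of code that states the rule, which is the distinction Pete's "a human has to program that in" misses, scaled down to four weights instead of billions.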