This man makes the most sense out of all the experts I’ve watched on AI.
I haven't found his arguments effectively countered, and I'm actively asking for good counterarguments.
Eliezer seems correct about why and how AIs would converge toward disempowering or destroying humanity, but he is too optimistic about the solvability of AI alignment, or the AI control problem. He believes the problem of aligning superintelligences can be solved once and for all, given time, resources, and unlimited retries. But some researchers now suspect that the control problem might be unsolvable in principle. IMO that is definitely a possibility. A higher intelligence can, by definition, explore a broader policy space, so it is always possible for it to find some way to escape the control of a lesser intelligence.
In short, Eliezer claims the control problem is definitely solvable. I find no evidence or mathematical proof supporting that claim, and there are reasons to believe it might be unsolvable in principle, let alone in practice.
@Musings From The John 00 you keep saying that...but you offer no argument.
@markupton1417 it's an early iteration.
He has been at the forefront of the AI safety field since the early 2000s.
Back then he was still hopeful.
He's not anymore.
@Musings From The John 00 how do you imagine Yudkowsky's prescription would lead to the death of billions?
Answer any ONE of the 3 scenarios he suggested.
All 3 if you can...but I'd LOVE to see a good (or even slightly plausible) argument against even one of them.
@Musings From The John 00 that wasn't me....
@Musings From The John 00 I'm typically a sloppy commenter, which doesn't help.
I often sacrifice clarity for brevity in the hope that my comment will be read... so a lot of the time others don't even know what I'm commenting about, which is as bad as not reading my comment.
The struggle for comment section balance is real.
Thanks again
Makes sense to me
This is the best explanation I've found. And if AI doesn't do these things, humans with AI Neuralink implants will. If one person or company gets it (BlackRock), then we all HAVE TO.