The last quote goes hard, "We need a system that is robust to the continual creation of misaligned AIs" 🔥🔥
AI never woke up or anything.
Humans just messed up their code.
We could see a lot of automated hacking if it gets just a bit stronger.
When you say “waking up” do you mean like, becoming conscious (with internal experience)? My understanding is that this is not the concern of the serious AI safety researchers. Rather, their concern is more about it behaving like an intelligent agent and optimizing too hard for imperfect goals
I was catering the talk towards a more general audience, but yes, what you’re referring to is the more reasonable version of the same theme of issues.
@Tunadorable Ah ok, cool, thanks for clarifying.