Great interview, very informative, thanks. About the extinction risk and the use of the term doomerism: I think that scenario is actually a logical concern, because with ASI, humans would no longer be the most intelligent species on Earth, and there are no examples of a more intelligent species being controlled by a less intelligent one. Also, an AI acting autonomously could have a built-in inclination to gain or keep control in order to achieve its goals. And once an AI is superintelligent and becomes vastly more intelligent than us, we won't be able to comprehend its actions, let alone predict them.

So what's recommended is that many more researchers specialize in AI safety and experiment with current AI systems to understand and predict their behavior. That's different from a message of "don't touch scary AI." The technology is very beneficial for us, and its development won't stop.
Good one
Great interview. Thank you.