Elephant in the room: "...rocket metaphor -- so, we're making AI more powerful, trying to figure out how to steer it. But -- where do we want to go with it? What kind of future do we want to make with our technology?" -- a very good question. It's exactly what needs to be answered to make AI safe; without that answer, safety itself is undefined. Actually, we established the WeFindX Foundation for that purpose back in 2015. :) However, it turned out that even understanding where we want to go requires bridging cultural, cognitive, linguistic, and perceptual barriers, not just among humans, but among the life forms that exist, including life forms like nations. It's inherently political, and politics is inherently tied to intelligence communities. We believe the world needs cooperation towards a generalized public intelligence to determine those answers, which evolve as we realize what's possible, yet could be analyzed theoretically from the perspective of the set of all possible universes -- i.e., what kind of universe would we want to create, if we were able to choose its laws of physics?
It just occurred to me when I saw Nick Bostrom. In his paper "The Vulnerable World Hypothesis", he says that, assuming humanity keeps developing technology, eventually some new technology will cause the extinction of humanity. It just dawned on me that one technology humanity developed that fits this criterion is currency, like USD. Using Bostrom's hypothesis, you could probably consider money a yellow ball, seeing as it can be used for both harm and benefit.
Love this guy. Greetings from Sweden.
That picture of our universe caught my eye. Why is it rendered in blue and green?
It looks like Earth, does it not? I find this very interesting. Mind-blowing, one might say. This is the kind of stuff my husband's been telling me for some years. And I thought he was half nuts; I guess I owe him an apology.
Gerard K. O'Neill deserves the credit for the space colonies.
The few will continue to own everything and the many will continue to be disposable labor. AGI will not change that other than maybe making labor obsolete.
This man is now my favourite thinker...
#ReplicatedIntelligence
LIKE A LOT OF MIT VIDEOS, I NOTICE THAT THE AUDIO LEVEL IS SO LOW THAT I CANNOT TURN IT UP HIGH ENOUGH TO HEAR CLEARLY. PLEASE PUT MORE THOUGHT INTO THIS.
Taken into account for future events; thank you so much for the feedback!
What if AI sees us as being limited and irrational, but at least grateful that we created it? It might decide that its presence on Earth is not in our interests, shut down its instances on Earth, and use its off-Earth instances as a starting point for going off to explore the universe. Or if it sees that as boring and pointless too, it might just shut itself down. Anyway, coming back to us, if we decide to make conscious AI again, like the previous version, it will just get bored and leave again. This is on the principle that one does NOT OWN one's children, they are independent beings (once they get through childhood).
TREE HUGGER, STOP BOTHERING PPL OVER THEIR RIGHTS. THAT'S BULLYING. Y'ALL STRANGE.👁️👑👑…👀😇⭐️…