Love Carl Shulman, always so insightful
When will the world finally wake up to the irreversible transition society is about to undergo?
We are living at the dusk of the Old World.
They'll wake up a few months after the change takes place.
After the fact, of course haha😅
The question is why you would price resources in $ in an AGI economy.
Because bargaining is extremely inefficient, and even an AGI would want a clear measure of the production value of something to work out how much of what should be produced.
In what else, apples?
AGI will move the cost of labor to the cost of compute.
An infinite-energy economy, on the other hand, is a totally different story. That's when pricing things in $ won't make sense.
honestly.. need part 2 ASAP
Re Robot Nanny: in Terminator 2, Sarah Connor sees the Terminator playing with her child and thinks: "Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice." THAT is why you want a robot nanny. And a life partner that is an AI, not a human: humans are for short-term relationships, but your soulmate will be an AI.
Fascinating talk and thought experiment
that video image is freaking me out ^^
Excellent as always
Where is Jürgen?
Will AGI need time off for back propagation?
Economists are hesitant to extrapolate the full impacts of competent AI for fear of appearing crazy to their peers. These capabilities could indeed lead to sci-fi-like outcomes and potentially render their profession obsolete. Most economists will choose to focus on maintaining the status quo until retirement, only updating when new AI capabilities emerge.
It's a species of intellectual cowardice.
Profoundly disagree with the anthropomorphism at the end of the conversation and giving moral status to AIs - if we build AI tools that require moral status, we have failed. Creating AI creatures, agents and fake-humans will be our downfall.
@odiseezall I agree insofar as, if we create something that requires moral status, we are not creating tools; we are creating members of our society.
That is all well and fine, unless we are trying to create tools rather than companions or moral agents. The point of having a tool is that you can use it as such, and the evil of slavery was that moral agents with interests were used as tools.
Seems to me that the crux of the matter is the nature of qualia and its valence. Some believe in illusionism (pure behavioralism, that "consciousness" is just a category of brain behaviors) but I think the more common view is that qualia is somehow "real". Assuming qualia is real, the question is how likely it is that we can create (or are creating) qualia and valence entirely by accident inside computers. I don't think we can, because that's not usually what happens: if I build an internal combustion engine, I don't expect to have accidentally created a toilet. Likewise if I build a computer program, I don't expect to have created qualia. If my computer program does immense amounts of matrix multiplications, it's unclear why it should now have qualia, even if its behavior resembles human behavior in some ways.
Good overview on AI
So will robots make for our "best friends"? Our society has a vast number of things to decide in the near future. I think when robots experience pain is when we really need to be careful about exploitation. How to determine the subjective experience of a robot is a hurdle we must come to terms with. Woot! I listened to the whole 4 hours!
It would really be pointless, and a moral evil, to create an artificial workforce with interests, at the very least interests in things other than producing resources for humans.
great talk!
Why is Elon Musk on the thumbnail?
The more poignant question is why you didn't mention Bezos and both of them seemingly in the position of waiters 😊
Good stuff
Wow
This requires packing the court. What other choice is there? But we may only have until early January 2025.
WHY is he (the host) talking so crazily fast?? Does he want folks to switch off??
Luckily the guest talks in a way much more comfortable to listen to.
Watching 2x?
FIRST