AI Doom Debate: George Hotz vs. Liron Shapira
- Published Oct 6, 2024
- Today I’m going to play you my debate with the brilliant hacker and entrepreneur, George Hotz.
This took place on an X Space last August.
Prior to our debate, George had done a debate with Eliezer Yudkowsky on Dwarkesh Podcast: • George Hotz vs Eliezer...
Follow George: x.com/realGeor...
If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - / @doomdebates
4. Follow me on Twitter - x.com/liron
Love the debate. Don't mind me, but I think you should have waited until George completed his whole sentence or thought for each of his arguments; most of the time his argument was cut off in the middle, and I was just curious to hear the rest of it. Otherwise it's great!
agree!!
Would love to see more debates/discussions. This debate is really about how much more efficiency an AI can gain in a feedback loop, and whether it can find one weird trick to exploit the rest of humanity. Because if hardware is the bottleneck (which I think it is), and hardware increases gradually, then society has time to keep the AIs aligned, and to build defenses when AI models point out potential vulnerabilities. For example, more powerful firewalls to protect the economy.
> Would love to see more debates/discussions
Have you seen my channel? :)
@@DoomDebates Excellent Work!
George seems to have been very confused about how well human intelligence parallelizes. Human parallelization is extremely lossy and we do not perfectly coordinate. He should know about Price's Law (or rather, its most common modern interpretation): the square root of the number of contributors generates about half the output. This difficulty of parallelization probably doesn't apply to a single neural net. I think a single system that is more generally intelligent than humanity on all metrics by about 2x is an existential threat, never mind 1000x.
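The commenter's point about Price's Law can be sketched numerically. This is a toy illustration of the common modern interpretation stated above (roughly the square root of the contributors produce about half the output); the group sizes are made up for the demo, not data from the source.

```python
import math

def price_core(num_contributors: int) -> int:
    """Number of contributors expected to produce ~half the output,
    per the common interpretation of Price's Law: sqrt(N)."""
    return round(math.sqrt(num_contributors))

# As the group grows, the productive "core" shrinks as a fraction of it,
# which is the lossy-parallelization point the comment is making.
for n in [100, 10_000, 1_000_000]:
    core = price_core(n)
    print(f"{n:>9} contributors -> ~{core} produce half the output "
          f"({core / n:.2%} of the group)")
```

Note how the core drops from 10% of a 100-person group to 0.1% of a million-person group, so adding humans scales output far worse than linearly.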
Super interesting - Hotz is obviously intelligent but appears to be completely missing your points and responding to his own simulation of your arguments, which misses the correct foundation entirely. Wondering why, even after reading LessWrong, he still does this.
Dying to ASI doesn't mesh with his vibe at all, so he's unconsciously rejecting chains of thought that would lead him to that conclusion.
He's not autistic enough.
"Right?"
really good debate. thanks.
@@meditatewithmike4105 thank you
Liron, I have to salute your patience. I am screaming inside at your every debate.
Some responses from the perspective of a math PhD student with an interest in comp sci:
Personally, I think the distinction between S-curve and super-exponential-foom curve is a superficial disagreement to get caught up on. My personal opinion is that if the AI you make is on par with or above human intelligence in its ability to create abstract thought, the speed/acuity of thought for these machines gives it such an edge that we don't have much hope of stopping the AI.
With regards to "Kasparov vs the World", it wasn't just the world to my knowledge. It was the world plus several rising young chess stars that were suggesting moves to the world. I could be wrong about this; however, that fact suggests that it is much closer to the "Magnus vs the Lesser 10" scenario he posed.
Lastly, I've not seen one convincing argument that demonstrates Pr(not doom) > epsilon. In an idealized space of all possible super intelligences, I see multiple depressing observations:
I think it's reasonable to assume the set of all possible SAIs is uncountable, and thus it seems to me that Pr(friendly AI) = 0, since I'm reasonably certain there's not some large swath of friendly SAIs in this set;
with regards to the subset of SAIs makeable using current methods (particularly computers + SGD + transistors etc.) - which would seem finite for some fixed computing power (binary) - I've yet to see a convincing argument for the existence of even one SAI which is friendly to us;
lastly, even if we make the friendly AI, I think it's reasonable to assume we are never in control again: why would it elevate us, possible competitors, to godhood when it still has its own goals?
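The first observation above is a measure-theoretic claim, and can be stated a bit more carefully. A sketch, under the commenter's own assumptions: $S$ is the (uncountable) space of possible superintelligences, $\mu$ is some probability measure on $S$ representing which one we end up building, and the friendly ones form a null set.

```latex
\text{Let } F \subseteq S \text{ be the friendly SAIs. If } \mu(F) = 0, \text{ then}
\Pr(\text{friendly SAI}) = \mu(F) = 0.
```

The load-bearing assumption is $\mu(F) = 0$: uncountability of $S$ alone does not give this, since $\mu$ could still concentrate mass on $F$ (e.g. if training methods systematically bias toward friendly systems), which is exactly what the second observation about current methods is probing.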
I agree! My p(doom) is >50%, if I want to be diplomatic. But what if "the SAI" would read Martin Buber's "I and Thou"? Is it reasonable to predict that it will be indifferent to these concepts?
If your ASI is built on a computer of finite size, the number of goals it can have is also finite.
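The finiteness claim above follows from a counting argument: a machine with n bits of memory has at most 2**n distinct internal states, so any goal encoded in that memory is one of finitely many. A toy sketch (n=3 is a hypothetical memory size chosen just to make the enumeration printable):

```python
from itertools import product

n = 3  # hypothetical memory size in bits, for illustration only

# Every possible content of an n-bit memory is one tuple of n bits,
# so the set of representable goal encodings is finite: exactly 2**n.
goal_encodings = list(product([0, 1], repeat=n))
print(len(goal_encodings))  # 2**3 = 8
```

Of course, finite does not mean small: at realistic memory sizes 2**n is astronomically large, so the argument constrains the goal space in principle rather than in practice.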
Intelligence is required to move the rock...
particularly to move it to the optimal position, in optimal time, with optimal energy use, given some specific goal (like hitting your prey in the head with it)
@@kevinscales What I would like to say is that someone dumb throws the rock with his hand, someone smart gets a piece of leather and makes a sling, and someone even smarter makes rock++ and a hollow tube and places some angry dust behind the rock++.
George was awful here