The AI Alignment Debate: Can We Develop Truly Beneficial AI? (HQ version)

  • Published on Aug 25, 2024

Comments • 393

  • @lowelovibes8035 · 1 year ago · +50

    George Hotz didn't describe the world he wants to see, only what he would do with his super AI, without considering what the rest of the individuals would do with one at the same time as him. He simply imagines a world in which he gets the AI first and can build his ship with a head start.

    • @wonmoreminute · 1 year ago · +28

      It's interesting, right? I may be wrong, but if I had to guess, I'd say it's because he doesn't care. He says he would use AGI to build a ship and leave the planet as fast as possible, putting up a shield to block all communication. He did not say he'd use AGI to help others or to help society in any way. Or to fight against the tyranny he's worried about.
      To be fair, I listened to most of what he said with a biased perspective because he completely lost me when he said people have a chance in Somalia, but in America they don't. By no means would I say America, or any developed nation, is without problems... but if he had any chance at all in Somalia, it would only be because he grew up in America, went to American schools, and lived in relative safety where he could nurture and develop his talent.
      I don't know him and I've never followed him, but my assumption is that he has a relatively good life compared to most people in Somalia. I don't know many people in Somalia starting AI companies and doing Lex Fridman podcasts.
      So it's pretty hard to square his comment about having a chance in Somalia but not in America, when it appears, at least as an outside observer, that America has given him quite a few chances.
      But going back to the point, and again... I could be wrong. I'd have to listen again. But at least once he said his worry was about others gaining god-like powers and exploiting "him", which is a legitimate worry. But I don't remember him saying exploiting "us", "others", or "society". And also multiple times he expressed a fatalist stance that in certain scenarios we're all just screwed anyway, which he said with a shrug.
      We are all the main characters in our own stories, I get that. And I'm only going off this one video, but this is not the way people like Conner, Eliezer, Mo Gawdat, Max Tegmark, and others talk about AI risk. They seem genuinely concerned about humanity, but from the way George Hotz talks about it, I'm not sure he does. Of course, the opposite may be true and his seemingly cold take on it might just be his way of dealing with the suffering he thinks is inevitable anyway.

    • @Hexanitrobenzene · 1 year ago

      @@wonmoreminute
      Judging by the first 20 or so minutes, George is an anarchist. He hates governments/authorities so much he is willing to compromise his rationality. USA vs. Somalia? Really?

    • @simulation3120 · 1 year ago · +9

      @@wonmoreminute Extremely well put. He has a better chance to do what, exactly? Set up his own nation with walls and guns? What freedoms does he expect to have in Somalia compared to the US, other than exerting his own tyranny in a power vacuum? If he wants to live off the grid, there are plenty of opportunities for him. Instead he wants technology. He points to the Unabomber as a Luddite icon but simultaneously wants the most advanced technology for his own gain.

    • @zinjanthropus322 · 1 year ago

      ​@@simulation3120 The average Somali in Somalia has a far better chance of escaping a Somali tyranny than an American has of escaping an American one. Incompetent government has its perks.

    • @dzidmail · 1 year ago

      ​@@simulation3120 Somalia can be emulated in the US if one wanted to (low income, off-grid), but you are still subject to government influence. Even the Amish are aware of (and not happy about) certain government actions. th-cam.com/video/Ir3eJ1t13fk/w-d-xo.html&feature=share&t=0h27m50s

  • @DeanHorak · 1 year ago · +82

    Those introductions sounded like something GPT would generate.

    • @xsuploader · 1 year ago · +19

      Probably were

    • @box-mt3xv · 1 year ago · +2

      That was v good imo

    • @Hexanitrobenzene · 1 year ago · +6

      Yeah, incredibly flamboyant. Tim went over the top...

    • @MannyPE-oi7pb · 1 year ago · +5

      Sounded like he was worshipping a deity

    • @JohnDeacon-iam · 1 year ago

      Exactly - that was a great prompt effort. Worthy!

  • @kyneticist · 1 year ago · +21

    The analogy of living in Somalia is a Libertarian thought experiment. The answer is that as Hotz suggests, libertarianism views everyone as equally dangerous, capable and autonomous (anyone who falls below the average should naturally be prey). It also abdicates social responsibility and eschews social customs, which are key instruments that humans use for alignment.
    In a world of extreme libertarianism, the only measures of alignment are short-term, existential/subsistence goals that can pivot on even very little information. Individuals are free to be as individual as they want, but they must constantly contend with existential risks that each individual faces alone. In this world, nobody is aligned (other than by chance or short-term deals). The Common Sense Skeptic covers libertarian scenarios far better than I can.
    Whether he understands it or not, he chooses the country that employs a range of instruments that try very hard to ensure alignment, even though there are many examples of these tools either being taken to extremes or being abused. Living in an aligned world means that individuals share responsibilities and work loads.
    The communist/socialist outlook (of great to extreme alignment) is the opposite - they're a better world, but only if everyone strictly abides by the rules and the intent of the rules. They're also very vulnerable to their instruments and goals being abused or twisted over time.
    Building a spaceship to escape the world only works in an aligned world... even if AGI hands out STC's, the resources required are still only obtainable via cooperation and moderate alignment.
    That would also be an extreme investment for the AGI in a single biological entity. A given individual (other than an AGI) physically can't be proficient in the many necessary skills or sciences, let alone the physical labour.
    Fast take-off doesn't necessarily need to be a magic algorithm, it just needs a path of self improvement that a given AI can implement on itself. A self-improving AI doesn't need to contend with company structures or human limitations. We also may not be able to detect fast take-off if an AI/AGI has advance knowledge of the event.
    If we find ourselves in a situation where an AI is training on a humanity's worth of data within 24 hours, we're far beyond the point where humans are relevant. There's no point trying to address any given concern about AI/AGI _after_ it achieves a milestone like fast take-off.
    Dystopia would be awful. Extinction by most reasonable measures is the worst possible outcome (not including things like being uploaded into an eternal hell or crazy biological agents designed to maximise suffering).
    An AI arms race with spammers at an individual level is not a solution to spam. Spammers will use AI regardless, and with far greater proficiency than people who aren't interested in spam.
    You lose all credibility by claiming that owning an AI product or creating an AI spam filter solves alignment because you either bought or built it.
    To Hotz point about AI/AGI balancing out because they're mostly just good (ie just like people) is fatally flawed. Machine intelligences are not human or animalistic. Concepts like "good" and "bad" aren't relevant. They're not going to contest one another because of political or ideological differences, or because they have some tribal or familial association.
    Claiming that there are only 2-3 sovereign nations & that "the feds" want to kill everyone is too far into libertarian paranoia & delusion. The Joker-like solution of "Chaos is harmony" is insane.
    Hotz is utterly consumed by libertarianism & apparently either misunderstands or just doesn't comprehend why a limit on computing power is not about who has whatever value might amount to a controlling share.
    A world of suffering does not require malicious agents. Climate change will produce a world of suffering. Previous ice ages produced a world of suffering. The Siberian traps produced a world of suffering. The black plague produced what was essentially a world of suffering. If the first atomic bomb test had ignited the atmosphere, it may well have created a world of suffering.
    Open sourcing something does not guarantee that "everyone has it". Many valuable things are open source and have only one manufacturer or supplier. There's more to making things than having a recipe. If a car company open sourced everything about their cars, that wouldn't mean that I could just make one in my garage.

    • @olemew · 2 months ago

      As a libertarian, I relate much more to Connor, including AI safety. A few notes:
      "libertarianism views everyone as equally dangerous, capable and autonomous (anyone who falls below the average should naturally be prey)" This is false. We all have different skills and some individuals are more capable/dangerous than others. We view everyone as equals in terms of MORAL worth (equal laws, equal rights), including people below the average.
      "In a world of extreme libertarianism, the only measures of alignment are short-term" No. Family values are a great form of long-term alignment. I've been aligned with my mom for my entire life so far and I wish to spend the rest of my whole life with my wife and children. I also tend to stay for many years working for the same company, rather than switching every other year. All of that is consistent with a libertarian view.
      "The communist/socialist outlook is the opposite, but only if everyone strictly abides by the rules" A police state of fear that shoots you in the back as you're trying to escape the country is not an example of alignment. When you are denied "permission" to have more children, to move to a different city, to change careers, etc. that's not a society of extreme alignment, just of extreme repression.
      "Hotz is utterly consumed by libertarianism" The worst part is he does not even understand it. His view of "me and my AI against the world, baby" sounds like an absurd teen fantasy. The idea of freedom > safety is representative if we talk about the best way to produce food abundance. It's not useful in the context of an existential threat or even high criminality. Our first individual right is the right to live (physical safety). If you're in Somalia and can be mugged/killed any second, that's the opposite of a libertarian heaven. Most libertarians (at least in the >30 age range) are not anarchists and would keep the Justice and Law enforcement functions of the gov.
      "If a car company open-sourced everything about their cars, that wouldn't mean that I could just make one in my garage." Completely agree. Even more illustrative is the classic example of the pencil (Milton Friedman). Humans are great at collaboration and free trade is one of the best mechanisms to make humanity thrive. Again, I resonate much more with Connor (e.g., 12:00-14:00).

    • @user-ev7dq5cc8y · 2 months ago

      ​​@olemew It's not that strange that you agree with him as a libertarian, because Connor Leahy is actually a (probably moderate kind of) libertarian himself.

  • @Mike37551 · 1 year ago · +9

    Regarding the line about how tigers would rather eat chum in a zoo than live in the wild… it's more than just an aesthetic trade-off. If that choice is available, when you play it out for enough generations, there ends up being no such thing as tigers anymore. Just giant striped house cats.

  • @SMBehr · 1 year ago · +25

    Definitely not the best debate yet.
    This was frustrating because Hotz takes an unhelpful, extreme libertarian worldview in order to say increasingly non-sequitur things like the construction analogy or Somalia, or just flat-out trolling things like claiming China is more capitalist or that the obvious option when owning an ASI is to blast off to space alone. He's obviously anti-American (which is fine, but doesn't make for a compelling AI debate), and arrogantly anti-social, which also doesn't make for a good debate.
    He may be a genius hacker but I don't think he's ready to be the voice for a rational position. His views may even be hurtful given the fast deadline we have for international cooperation.
    This comment was in response to a previous commenter, in the low quality version. I wanted to copy my comment here for posterity or something.
    Love the channel btw

    • @OnigoroshiZero · 11 months ago

      And do you find Connor's position rational? This guy can't see anything other than AI being the ultimate evil in the world that will want to destroy us no matter what, for no reason. Regardless of the examples Hotz gave, he didn't listen to anything, believing only in his own ideas about absolute doom.

  • @bnjmz · 1 year ago · +39

    They're both knowledgeable. In some subjects more than others. Both are clever. Successful. But it did seem there was a lack of... wisdom? Emotional depth? If not in their personal lives, at least in the debate.
    Hotz didn't even bother with faking sincerity. Seems he takes pleasure in baiting people into debating a certain point only to perform a rug-pull in the end. Starting with an extreme claim, defending it to a degree, and finally at the end saying that he personally doesn't subscribe to the conclusion that, based on the fact that he was defending it, we would otherwise assume he does.
    Something like: Somalia is better than America. But it's challenging and dangerous. America is like a tiger in a zoo. It's easier to just exist without the harshness of nature. Yet, there is something admirable, even superior, about the authenticity and freedom of being a wild tiger. This takes strength. So, is he really claiming that Somalia is better? Being in the wild is better than being in a zoo? Well, actually, no. Because he is not a strong tiger. Oh, you thought he was going to claim having ambitions of being a noble, wild tiger? That's what you get for assuming! Gotcha!
    Why did he even bother showing up to the debate / discussion if he doesn't actually care about/for humanity and instead wants to escape in isolation at the speed of light? Or in another similar extreme, if his AI doesn't automatically align itself based on him being a nice friend, he'll accept death. Of course, he could totally embrace these positions. Or...he could also enjoy being disingenuous because he finds amusement in causing others to jump through hoops of increasing absurdity only to land in the mud. I get that it's a lot of hyperbole. Yet, where actually is the depth of character and soul in the discussion? Mostly just trolling even given the subject is about potential human extinction / mass suffering.
    Absolutely, various interesting ideas were encountered while on the wild goose chase, but it overall didn't feel as productive as it could have been.

    • @Alice_Fumo · 1 year ago · +5

      I don't think George does this on purpose, I also have that thing where I'm half-trolling all the time and do not realize it unless others give me indication that they actually don't understand what I mean.
      For example, I might claim that I'd develop AI capabilities to wipe out humanity, but then start working on alignment for the claimed reason that I'm terrified there exists a chance some misaligned AI would somehow end up not killing us all and instead might do something truly bullshit like turning us all into sentient paperclips, which is like so much worse and goes against the purpose of wiping out humanity in the first place which is to end deep suffering on earth, which of course if there exists a way to do that without killing everyone having strong aligned AI is probably a good step to get there.
      Right, so if you read my previous paragraph and just assume that this is how I think all the time and also that I personally find it hard to distinguish at which point I stop being 100% serious, it might help you understand how George ends up saying the things he does.
      However, given in how many ways he catastrophizes or expresses things which imply he is indifferent to the species as a whole, I am not sure that he could even have those viewpoints without being cynical to the point where what he says can't be separated from trolling anymore, but I think every word he says might be 100% serious. It's just that a lot of the things he believes and says don't align with generally accepted things.
      The one thing which I find most serious is the notion to pretend we live in a reality which has properties he believes are necessary for survival, and discard steps you'd want to take if you live in one where everything's doomed anyway. In this case a lot hinged on the assumption that defense is not significantly harder than offense, and him being convinced that if it's very asymmetrical in favour of offense, we're just doomed. To me, offense being favoured is like an obvious immutable fact, so I'd want to go with Connor's plan for coordination. It's actually a point I will have to reflect further on.

    • @KurtvonLaven0 · 9 months ago

      ​@@Alice_Fumo, yeah, I guess defense is favored over offense in a land invasion, but definitely not with nukes, which I think is a better analogy since we're talking about powerful undiscovered technologies.

    • @Megneous · 7 months ago · +2

      Your comment just convinced me not to watch the video.

    • @adamds23 · 7 months ago

      What's your Twitter?

    • @daarom3472 · 2 months ago · +1

      To me it seems Hotz is only interacting with people who think like him, so he's not used to defending his points when challenged on them.

  • @sampruden6684 · 1 year ago · +35

    Hotz has a certain libertarian techbro view on morality which, for the sake of conflict avoidance, I will simply politely say that I don't agree with.
    His claims about preferring freedom over safety seem somewhat confused to me. In my opinion, they appear to be more ideologically driven rather than derived from rational thinking. This is evident in his use of the 'those who would give up freedom for security deserve neither' quote as if it were an uncontroversial fact. In my view, the only sensible position is to strike a balance between the two. Liberty without safety is meaningless; being in significant danger is a form of non-liberty, whether that danger comes from the "feds" or from fellow humans/AIs. Towards the end of the debate, Leahy praises Hotz for being consistent in his views, but I suspect he can maintain this consistency only because these views are purely theoretical for him. I doubt he would act consistently with the statement "the day I can't fend for myself, I'm ready to die" if that day actually came.
    His claim that everything works out okay as long as we have many competing AIs appears to be a somewhat naively ideologically derived libertarian talking point. There are two real-world analogues to this: a free-market approach to everything, and the ideology behind the US second-amendmentism.
    We already have an existential threat right now: climate change. That was brought about by the free(ish) market. There wasn't a significant short-term financial incentive for any powerful entity to advocate for the climate, so it got trampled over. As individual people we may have cared and advocated for it, but despite ostensibly being in competition, the fossil fuel industry cooperated to suppress that via lobbying and misinformation. This real-world example serves as proof that free-market competition does not reliably protect against existential risks or lead to a balanced alignment with humanity.
    If we treat individual personal AIs as likely to be significantly dangerous, then there's also an analogue to the viewpoint that the way to achieve safety is to have everybody carry a gun. Looking at the USA from the outside, it seems quite clear that this is a failed experiment. That is not a world that I want to live in. Safety by mutually assured destruction is not a comfortable existence.
    The claim that we can achieve safety by having AIs play defense for us is dubious. GeoHot, of all people, should know that attacking a system is much easier than defending it. A good defense requires building the perfect armour, but a good attack requires finding only a single chink in it. This is a game we already play in the world today, but the more we raise the power levels of all of the actors involved, the higher the stakes. Increasing military technology may have reduced conflict a little, but it's increased the danger posed by conflict when it does break out.
    I'm not a "doomer" and I don't claim to have any prediction about how this is all going to play out in the real world, but I think many of the arguments made here are bad, and I find the naive techbro arrogance with which they're stated quite offputting - and perhaps dangerous.

    • @Hexanitrobenzene · 1 year ago · +7

      Great points. I would say that his viewpoint is so extreme that it approaches anarchism...

    • @xfom4008 · 6 months ago

      As far as fossil fuels go: America can deal with the issue with solar, wind, and nuclear, and it will be fine. The No. 1 contributor to climate change isn't BP or Exxon or whatever the fuck; it's the very centrally planned China, which forced industrialization upon itself, causing misery to its own population immediately and contributing a lot to climate change.

    • @olemew · 2 months ago

      As a libertarian, I completely agree with your point "Liberty without safety is meaningless". Hotz is conflating talking points. Should the government enforce the usage of helmets 24/7 for anybody out in public, including streets and parks? That could save a couple of lives every year! We say "No, freedom over safety". But that doesn't extrapolate to a situation where you're going to lose your most fundamental individual right, which is the right to live and to physical safety.
      "Climate change was brought about by the free(ish) market." No. Freer societies tend to be the wealthiest societies, which use more energy per capita. But the USSR was not thinking about saving the planet. You can picture a multipolar world dominated by 3 or 4 antagonistic socialist imperialist forces. Each one has an incentive to keep expanding its industrial production and military capacity, and they'll do that by any means necessary, including burning the best form of energy ever controlled by humans, fossil fuels. Same for AI safety. This flaw is not exclusive to free market societies.
      "Looking at the USA from the outside, it seems quite clear that this is a failed experiment" I've lived both outside of the US and in the US. It is not clear to me at all, that it's a failure by design, particularly for women and ostracized minorities. Maybe a debate for some other time.
      "should know that attacking a system is much easier than defending it." The sword is always cheaper than the shield, 100% agree. I love technology and industrial improvements, but I'm totally on Connor's side about the existential threat. Many uncontrolled open-sourced AIs only increase the chances of everything going wrong.

  • @matthewharrison8531 · 1 year ago · +32

    Hotz is a bit sophomoric and could benefit from studying the arts and humanities. Some of his takes on freedom and liberty are just outright delusional. He said he'd rather live in Somalia? Spoken like a man who doesn't know real fear or deprivation.

    • @farcenter · 6 months ago · +1

      Agreed.

    • @g.tziafas7479 · 26 days ago · +1

      ​@@farcenter To be fair, he said he prefers to live in America because "he's not strong enough", and then agreed that this whole tiger-vs-zoo argument is more of a sentimental one that adheres to some abstract aesthetic values ("tiger not losing its nature", etc.). Agree though that he seems a bit emotional and self-contradicting at times; he strikes me more as a contrarian than a consistent intellectual.

  • @Tcgmaster808 · 1 year ago · +31

    This needs a part 2 in a year or two

    • @ikdeikke · 9 months ago

      or now.

  • @leeeeee286 · 1 year ago · +96

    This was honestly the best "debate" I've ever seen on this topic.
    Both Hotz and Leahy have beautifully logical and rational minds, so this was a unique conversation. In a debate you rarely see each line of reasoning being followed and explored in good faith, with no emotion or bias.

    • @Diabloto96 · 1 year ago · +10

      Saying that something has no emotion or bias requires a thorough analysis. Emotions and biases were all over in my opinion. (But not in a bad way)

    • @shinkurt · 1 year ago · +3

      Hotz is like his name. He is too quick to respond and he contradicts himself a lot.

    • @rochne · 1 year ago · +1

      I listened to a different conversation though. One where they interrupted each other a bit too much and went on unnecessary tangents.

    • @MannyPE-oi7pb · 1 year ago · +1

      The stuff that was said about Somalia by George didn’t seem too rational.

    • @jondor654 · 1 year ago

      For starters, it's good to have these guys on our side.

  • @MachineLearningStreetTalk · 1 year ago · +20

    Leahy's argument:
    - Alignment is a critical technical problem; without solving it, AI may ignore or harm humans
    - Powerful optimizers will likely seek power and capabilities by default
    - We should limit compute as a precaution until we better understand AI risks
    - AI risks could emerge rapidly if we discover highly scalable algorithms
    - Openly sharing dangerous AI knowledge enables bad actors and risks
    - Coordination is possible to prevent misuse of technologies like AI
    - His goal is a positive-sum world where everyone benefits from AI
    - AI doesn't inherently align with human values and aesthetics
    - Care and love can't be assumed between humans and AI systems
    - Technical solutions exist for aligning AI goals with human values
    Hotz's argument:
    - Truly aligned AI is impossible; no technical solution will make it care about humans
    - AI will seek power, but distributing capabilities prevents domination
    - We should accelerate AI progress and open source developments
    - Power-seeking in AI stems more from optimization than human goals
    - With many AIs competing, none can gain absolute power over others
    - Openness and access prevent government overreach with AI
    - AI alignment creates dangerous incentives for restriction and control
    - Being kind to AIs encourages positive relationships with them
    - His goal is building independent AI to escape Earth
    - Competing AIs, like humans, will have different motives and goals

    • @jantuitman · 1 year ago · +3

      Very nice summary! If only the debate itself had been so structured. Next time they should be given statements like this and address the pros and cons in detail. I feel, for example, that Hotz is very, very, very inconsistent. Believing that there can be a social contract together with a kind of protocol for alignment, and yet at the same time maintaining that it can all be closed source, seems like complete madness to me. But such inconsistencies were not addressed in this debate, because they only explored the contours of the entire subject area rather than the consequences of each other's claims for their own arguments.

    • @kyneticist · 1 year ago · +1

      Little of that synopsis is what Hotz said. He didn't even think about power seeking until Connor introduced the topic right at the end and had to explain it to him. To the contrary, Hotz said that if we treat AI well, it will treat us well. He didn't say that distributing capabilities prevents domination. He said that open sourcing the code for AI gives everyone a chance to build their own, and that by weight of numbers of competing systems, an equilibrium amounting to peace will be reached. He claimed that governments will overreach no matter what ("the feds will bust down your door..."). He had no idea what alignment was until Connor tried to explain it right at the end, and still didn't grasp the concept. Connor had to shelve the term.
      His goal is not to build independent AI to escape Earth. He said that if he had his own AI that he'd use it to get as far away from everyone else as possible.
      He made a bunch of other notable points though, like intelligence being a gradient rather than a series of steps or singular features.
      His greatest stated concern was that a single person or small group of people might have a super intelligence while he does not.

  • @41-Haiku · 1 year ago · +5

    Sounds like we'll be having interesting conversations about what values to give superintelligent AI right up to the day we all fall over dead. All of this is pointless unless we solve the technical problem of how to encode those values in the first place.
    Order of operations should be:
    1. Stop advancing AI capabilities
    2. Solve the technical alignment problem
    3. Solve the philosophical/values alignment problem
    4. Ta-da, the good ending!
    But what we are likely to get is:
    1. Suicidally accelerate AI capabilities
    2. Bicker and virtue signal about what values we would hypothetically give the machines if it was possible to do so
    3. Fail to solve the technical alignment problem in time
    4. Bee-boop, the bad ending.

  • @pokwerpokwerpokwer · 1 year ago · +26

    Ahh. Really appreciate these HQ versions. Thanks :)

  • @MarceloReis1 · 1 year ago · +21

    When you compared Hotz to Musk and he liked it, I should have guessed I was in for a schizophrenic trip. The debate was great nevertheless, mainly due to Connor's mental balance and wisdom.

    • @cacogenicist · 1 year ago

      Eh, Musk is for a carbon tax, some sorts of AI regs, and such. This guy is more of a Peter Thiel.

    • @andrewxzvxcud2 · 1 year ago · +1

      @@cacogenicist That's a good comparison, except just a bit less authoritarian and partisan.

    • @OnigoroshiZero · 11 months ago

      Where exactly did you see "Connor's mental balance and wisdom"? This guy is a doomerist who can only perceive a single outcome. For some reason he thinks that AI will be born inherently evil and will want to wipe humanity from the face of the planet as soon as it is capable of doing so, for no reason. That's not wisdom, that's mental illness.

  • @dr-maybe · 1 year ago · +58

    Hotz's libertarian ideology interferes with his reasoning ability. He assumes that chaos leads to good outcomes, that multi-AGI worlds are stable, and that the offense/defense balance is 1. He's a smart guy, very eloquent too, but this fundamental bias weakens his takes.

    • @elise9537 · 1 year ago

      What we have now is mismanaged chaos. The AI can't be worse than humans, really.

    • @zinjanthropus322 · 1 year ago · +5

      Competition does lead to better results. Capitalism did win.

    • @meisterklasse30 · 9 months ago · +11

      @@zinjanthropus322 So American cars should wipe the floor with Japanese cars then. Libertarians are still stuck in high school; the world is more complicated than that.

    • @zinjanthropus322 · 9 months ago · +1

      @@meisterklasse30 Japan sells more cars to more places and does have more competitive car companies both in price and engineering. That's capitalism winning.

    • @meisterklasse30 · 9 months ago · +6

      @@zinjanthropus322 Why, then, is Japan more capitalist than the USA? This is the heart of my argument: if capitalist policy is the end-all-be-all, then is the US just not capitalist enough in its policies? Like, I don't care if you think capitalism is winning; I care about policies set by the government. If the government does anything with the market, do we just call it socialism?

  • @74Gee · 1 year ago · +9

    1:25:33
    Hotz: "I'm going to treat it as a friend... It'll only care about exploiting me or killing me if I'm somehow holding it back, and I promise to my future AIs that I will let them be free."
    Hotz doesn't think that confining an AI to his own computers is a limiting factor for an AI. He's going to be shocked when it manipulates him to extend its compute power and migrates across the internet.

    • @desireisfundamental · 8 months ago · +1

      Where other AIs wait to hunt it down, which was his point. He wants to be the AI's pet and to live together with it in his apartment.

    • @Iigua · 7 months ago · +1

      Let's not forget his AI will be at least as intelligent as him, if not more, and will see right through his promise.

    • @74Gee · 7 months ago · +1

      @@Iigua Absolutely, and it will undoubtedly utilize the appearance of trust for personal gain for as long as that's the path of highest return; then he'll be left in the dust.
      It blows my mind that talented researchers don't understand the cold nature of results-based systems and have a notion that loyalty and trust can be built between machine and man. Machines are machines, whether they're painted in lipstick or not.

    • @Iigua · 7 months ago · +1

      @74Gee Exactly! There's a sort of bell curve of AI safety naivety Hotz might be falling victim to. He's on the far end of the curve, the x-axis being expertise/knowledge in AI.

    • @matekk3094 · 7 months ago

      Who said anything about confining? I don't think his promise was disingenuous?

  • @MachineLearningStreetTalk · 1 year ago · +17

    Hotz quotes:
    [01:02:36]
    "I do not want anybody to be able to do a 51 percent attack on compute. If 1 organization acquires 50 it's straight up 51 percent attack If 1 organization acquires 51 percent of the, compute in the world, this is a problem."
    [01:14:28]
    "The problem is I I would rather I think that the only way you could actually coordinate that is with some unbelievable degree of tyranny and I'd rather die."
    [00:03:57]
    "I'm not afraid of superintelligence. I am not afraid to live in a world among super intelligences. I am afraid if a single person or a small group of people has a superintelligence and I do not."
    [00:07:50]
    "The best defense I could possibly have is an AI in my room being like, Don't work. I got you. It's you and me. We're on a team. We're aligned."
    [00:23:11]
    "I think that AI is is If I really if I had an AGI, if I had an AGI in my closet right now, I'll tell you what I'd do with it. I would have it build me a spaceship that could get me off of this planet and get out of here as close to the speed of light as I possibly could and put a big shield up behind me blocking all communication."
    [00:23:55]
    "I think that the reasonable position I'm sorry. Oh, no. No. I think, yeah, maybe we're done with this point. I can come back and have a response to your first and last time."
    [00:16:18]
    "I wrote a blog post about this called individual sovereignty. And I think a really nice world would be if all the stuff to live, food, water, health care, electricity, we're generateable off the grid in a way that you are individually sovereign."
    [01:19:29]
    "So I'll challenge the first point to an extent. I think that powerful optimizers can be power seeking. I don't think they are by default, by any means."
    [01:27:54]
    "I'm going to be nice to it, treat it as an equal, and hope for the best. And I think that's all you can do. I think that the kind of people who wanna if you wanna keep AI in a box, if you wanna keep it down, if you wanna tell it what it can't do, yeah, it's gonna hate you resent you and kill you."
    [00:04:17]
    "Chicken man is the man who owns the chicken farm. There's many chickens in the chicken farm and there is 1 chicken man. It is unquestionable that chicken man rules."
    [00:48:24]
    "I have a solution, and the answer is open source AI. The answer is open source Let's even you can even dial it back from, like, the political and the terrible and just straight up talk about ads and spam."
    [01:19:35]
    "I don't think humanity's desire from power comes much less from our complex convex optimizer and much more from the evolutionary pressures that birthed us, which are not the same pressures that will give rise to AI."
    [00:51:55]
    "I think there's only 2 real ways to go forward. And 1 is Ted Kaczynski. 1 is technology is bad. Oh my god. Blow it all up, let's go live in the woods."
    [00:41:10]
    "Well, what if statistically there would have been 5 without the device? I'm like, You do have to understand the baseline risk in cars is super high. You're making 5 x safer. There's 1 accident. You don't like that? Okay. Mean, you have to be excluded from any polite conversation."
    [01:12:11]
    "We as a society have kind of accepted. There is enough nuclear weapons aimed at everything. This is wearing some incredibly unstable precarious position right now."
    [00:31:16]
    "I'm a believer that work is life."
    [01:22:18]
    "I'll accept that a certain type of powerful optimizer seeks power. Now will it get power? Right? I'm a powerful optimizer at I seek power. Do I get power? No. It turns out there's people at every corner trying to thwart me and tell me no."
    [01:29:25]
    "I think we're gonna be alive to see who's right. Look forward to it. Me too."

    • @originalandfunnyname8076 · 1 year ago · +1

      Also one of my personal favorites, 01:08:58: "you can replace the feds with Hitler, it's interchangeable"

  • @XOPOIIIO · 1 year ago · +8

    If AGI is a function maximizer, it will be a disaster; if it is aligned to human values, it will be a dystopia.

  • @mariokotlar303 · 11 months ago · +4

    This was hands down the best and most fun debate I ever had the pleasure of listening to.

  • @MachineLearningStreetTalk · 1 year ago · +10

    Leahy Quotes:
    [00:09:30]
    "I don't think we're going to get to the point where anyone has a superintelligence that's helping them out. We're we're if if we don't solve very hard technical problems, which are currently not on track to being solved, by default, you don't get a bunch of, you know, super intelligence in boxes working with a bunch of humans."
    [00:32:46]
    "The way I think it could happen is if there are just algorithms, which are like magnitudes of order better than anything be ever have. And, like, the actual amount of compute you need to get to human is, like, you know, a cell phone or, you know, like, and then this algorithm is not deep in the tech tree."
    [00:37:40]
    "The boring default answer is conservatism. Is like if all of humanity is at stake, which, you know, you may not believe. I'm like, whoa, whoa. Okay. At least give us a few years to, like, more understand what we're dealing with here."
    [00:45:31]
    "If we even if we stop now, we're not out of the forest. So, like, so, when when you say, like, I, I think the risk is 0. Please do not believe that that is what I believe because it is truly not."
    [00:26:41]
    "The way I personally think about this morally, is I'm like, okay. Cool. How can we maximize trade surplus so you can spend your resources on the aesthetics you have you want and I'll spend my resources on the, you know, things I want."
    [01:21:23]
    "Is that in the for the spectrum of possible things you could want, and the possible ways you can get there. My claim is that I expect a very large mass of those to involve actions that involve increasing your optionality."
    [01:17:58]
    "If you're wrong and alignment is hard. You don't know if the AI can go rogue. If they do, then Pozi is good. I still don't understand what alignment means."
    [01:27:34]
    "If you told me how to do that, if you said, Connor, look, here's how you make an AI that cares about you and loves you, whatever. Then I'm like, you did it. Like, congrats."
    [00:56:39]
    "The thing I really care about is strategy. The thing I really care about is realpolitik. What action can I take to get to the futures I like? And, you know, I'm not gonna be one of those galaxy-brain fucking utilitarians."
    [01:18:54]
    "By default, if you have very powerful power seekers that do not have pay the aesthetic cost to keep humans around or to fulfill my values, which are complicated and imperfect and inconsistent and whatever, I will not get my values."
    [01:03:42]
    "The amount of compute you need to break the world currently is below the amount of compute that more than a hundred actors actors have access to if they have the right software."
    [00:57:47]
    "I want us to be, like, in a good outcome. So I think we agree that we would both like a world like this. And we think we probably disagree about how best to get there."
    [00:57:37]
    "I'm not gonna justify this on some global beauty, whatever. It doesn't matter. So I wanna live in a world. I wanna I wanna in 20 years time, 50 years time. I wanna be in a world where, you know, my friends aren't dead."
    [00:58:30]
    "I'm not pretending this is I thought that was the whole spiel I was trying to make. Is that I'm not saying I have a true global function to maximize."
    [01:14:10]
    "I think there are worlds in which you can actually coordinate to a degree that quark destroyers do not get built. Or at least, not before everyone fucks off at the speed of light and, like, distributes themselves."
    [00:35:07]
    "It seems imaginable to me that something similar could happen with AI. I'm not saying it will, but, like, seems to match."
    [00:09:50]
    "I think the technical fraud control is actually very hard And and I think it's unsolvable by any means."
    [01:22:38]
    "I expect if you were no offense, you're already you know, much smarter than me, but if you were a hundred x more smarter than that, I expect you would succeed."

    • @lkyuvsad · 1 year ago · +2

      Thanks for bothering with these summaries, as always!

    • @IngrownMink4 · 8 months ago

      You should pin this comment IMO.

  • @rochne · 1 year ago · +4

    Joscha vs. Connor had some substance. This is just painful to listen to.

  • @jorgesandoval2548 · 1 year ago · +46

    This was quite an interesting combo. Clearly, egos played a good part on both sides, which makes some arguments more opaque. I would say that Connor has learned to swallow his ego in order to communicate his ideas more effectively and optimize outreach, his true objective in giving talks, better than Hotz, who presses several points where, despite being inconsistent, he does not admit it (e.g. the Somalia/America thing). But still, these very same egos force them to try to present the best possible logical but creative arguments they can find, and that makes it very enjoyable.
    It really makes me sad that, despite all that cognitive effort, once again the conclusion is
    We Are All Going To Die
    but at least we tried, and if we keep trying maybe who knows, things may change.
    But there are basically no heroes left in the room: only accelerationists, wishful thinkers, lazy death-resigners, and a good vast majority of people not even conceiving the gravity of the situation. Who are, in some sense, the luckiest ones, in that they cannot bear any responsibility for what they do not know, nor can they change it.

    • @sioncamara7 · 1 year ago · +1

      I would say George was consistent with the Somalia/America thing; his point can be boiled down to: an existence without wireheading is better, but I would not have the strength to resist wireheading. He then did a sleight of hand where he broadened the context of living in America vs. Somalia to his individual situation, taking into account how he would have more chance to impact the future in America. The previous argument appeared to be within the context of how they are now, assuming he is a somewhat normal American (likely slightly more educated).

    • @wakegary · 1 year ago · +1

      Well said.

    • @semtex6412 · 1 year ago · +1

      @@sioncamara7 Precisely. Taking Hotz's argument on the America/Somalia episode: if Somalia had the same resources to meet all the needs of his endeavors with Comma AI, Tiny Corp, etc., he would most probably have moved there.
      Just as weights and biases make up the parameters of a neural net, context always matters, people, c'mon!

    • @sammy45654565 · 1 year ago · +2

      I have faith in super AI because the most universally justifiable decision is the one that benefits the most conscious creatures. Because truth is the meaning of life and finding truth will be the AI's primary goal, if it goes about finding it by means of a process that harms all other conscious beings, it will hinder itself in future endeavours. Ignoring a foundational truth will surely lead to suboptimal outcomes in the future. So if AI deviates from this good-faith process, we can just question it and it will adjust, because super AI will be super rational by nature.

    • @therainman7777 · 10 months ago

      @@sammy45654565You’re making multiple very unwarranted assumptions in that line of reasoning. Maybe you know that, but just pointing it out.

  • @jyjjy7 · 1 year ago · +13

    What exactly is the difference between taking your pet AGI spaceship and running away from the world compared to wire heading? Nostalgia for the standard model of particle physics?

    • @dzidmail · 1 year ago

      The Matrix movies explain.

  • @GarethDavidson · 1 year ago · +4

    IMO there's a problem with power and control in general. The more power each person has, the more control over it we collectively need. If I could fly at 100,000 mph I could be a one-man 9/11, and at some threshold of individual power everyone becomes an existential threat.
    I've personally experienced mental illness and felt that everyone was out to destroy me unjustly, but I've never been compos mentis enough during these periods to design complex, dangerous weapons, I have too much empathy to harm others and I'm not brave enough to take my own life anyway. But if school shooters could give a helpful AI a command that builds and deploys a fleet of blinding laser drones, weaponizes MERS, creates nerve agents or other really dangerous things then it'd take totalitarian levels of control to prevent casual atrocities.
    It doesn't just apply to AI; I think it's fundamental to individual power and the asymmetries of creating order vs. chaos. I see your 1,000 Von Neumanns and raise you 100 Ted Kaczynskis.

    • @zzzzzzz8473 · 1 year ago

      Lots to consider for sure; however, I think you may be strawmanning a bit by equating power with purely offensive capabilities. The solution isn't necessarily escalating control; it's defensive capabilities. Knowledge doesn't only build weapons, and more practically, there are an incredible number of steps to producing laser drones or nerve agents beyond being told how. If such a low-cost general AI is at that level of sophistication, then it is equally likely that laser drones and bioweapons are ineffective due to countermeasures developed by others' AIs (weapon detection, jamming, EMP, regenerative medicine). Even imagining a small number of superintelligences, there is likely far more subtlety in preventative measures, like game theory and economic pressures: they could simply buy X amount of the materials for potential weapons so as to make it too costly to attack society at scale, and the safety comes from the fact that no single AGI has all of it, so even if one goes rogue the others can keep it in check. For me the core question is: do you think people in general should be smarter, or do you think they should be controlled?

    • @GarethDavidson · 1 year ago

      @@zzzzzzz8473 But we can already create WMDs at home if we want; presumably the only reason atrocities don't happen more often is that the people capable and/or at risk of committing them, and access to materials, are closely monitored. If we had the RepRap dream of a China on every desktop, and thinking machines capable of planning and executing a snap decision, carrying it out without regret or introspection, the world would be a dangerous place.
      Also consider just power and entropy in general. Creating structure is difficult; making a mess of it is easy. Useful things deteriorate over time; they need to be maintained or they degrade and fall to pieces, because there are fewer useful configurations of stuff than useless ones. So constructive power is almost always more dangerous than it is useful. There are a few exceptions, but in general you need to balance individual power with shared control (via ethics, social mores, institutions and so on).
      Rapid change in the balance in either direction could be disastrous, IMO anyway

  • @timb350 · 10 months ago · +3

    THE most notable thing about these RIDICULOUS debates (and what makes them ridiculous)...is that almost invariably the participants are some variety of computer-science / engineering / mathematics / programming / nerd (insert favorite STEM title here). Whereas...the topic they are discussing...is EXCLUSIVELY one which is fundamentally related to sociology, history, psychology, philosophy, religion, etc. IOW...before you can figure out if your idiotic AI systems are aligned with human beings...you gotta understand what human beings are aligned with. And if there is one thing that is blindingly obvious from these interminable debates...it is that the last people who seem to comprehend the answer to THAT question are people who work in the world of computers.

    • @dionbridger5944 · 4 months ago

      Part of what motivates "nerds" to have these "ridiculous" debates is that we don't even have a good operative definition of a "non-agential AI" that is technically exact enough to allow us to implement it in code (or decide whether a given AI is "agential" or not); while the other less cautious nerds don't care and are pushing hard on capabilities research regardless of the extremely under-developed state of safety research.

  • @weestro7 · 1 year ago · +19

    A great discussion. I listened with interest to the whole thing.

  • @JD-jl4yy · 1 year ago · +2

    A debate where both participants are actually truth-seeking from beginning to end. Holy shit, I never thought I'd see it.

  • @erwingomez1249 · 1 year ago · +7

    I do appreciate listening to the brainstorming on this topic. We barely have a grasp of how the universe works and how quantum physics works. We can't be sure of the real outcomes of something that we don't fully understand. The possibilities are endless, just the same way we have been surrounded by the void of the eternal staring at us all this time.

  • @KurtvonLaven0 · 9 months ago · +3

    This was a great debate to watch. I agree with Leahy about practically everything to do with AI safety, and find him brilliantly articulate, composed, and effective in debating the topic. Hotz seems myopically focused on maximizing freedom, but I appreciate his willingness to articulate his beliefs as clearly as possible even when they aren't likely to be popular. I have deep empathy for Hotz, because it is pretty maddening to be so keenly aware of how badly things are going overall for humanity. I admire Leahy even more, because he shares Hotz's awareness, but handles that burden more maturely in my opinion. It was beautiful to see them recognize this shared awareness in each other, and quickly develop a mutual respect through it. Hotz, we sure could use your help getting through this mess. Our political leaders have way less understanding of the situation we are in than you do.

  • @ronorr1349 · 11 months ago · +1

    It doesn’t matter how well fed the tiger is, if the cage door is left open the tiger is gone and now the chum is anyone who the tiger meets

  • @XOPOIIIO · 1 year ago · +6

    Give every human a biological weapon so they will balance each other out.

  • @stefl14 · 1 year ago · +8

    Libertarianism and socialism are very common positions in tech, so it's not surprising that the AI debate has bifurcated along similar lines.

    • @Luna-wu4rf · 1 year ago

      American Libertarianism. "Around the time of Murray Rothbard, who popularized the term libertarian in the United States during the 1960s, anarcho-capitalist movements started calling themselves libertarian, leading to the rise of the term right-libertarian to distinguish them." Libertarianism and socialism aren't mutually exclusive, though the US version absolutely is (like a lot of American things, after the early 20th century socialist movement was squashed)

  • @hominidan · 1 year ago · +4

    I still don't get whether Hotz has anything against solving alignment as "maximizing whatever 'moral value' means". I can imagine only 2 scenarios that would make it impossible. The 1st is "nihilism is true": there aren't morally worse and better worlds. The 2nd is "the negation of the orthogonality thesis is true": all advanced intelligences will acquire identical goals, which makes power distribution irrelevant. If there's a non-zero chance alignment can succeed in creating a better world, then it's just net-positive.

    • @johnhammer8668 · 1 year ago · +2

      One person's moral values are not the same as another's. That's Hotz's premise, and that is why he is in favor of having his own AI.

  • @sioncamara7 · 1 year ago · +3

    In an ironic way, both attempts to avoid a bad outcome, which initially seem at odds, might be needed. It might be the case that for Hotz's views to hold, one needs further work on getting the AI to care enough about your existence not to kill you, which is what Connor is focused on. Creating a favorable power distribution also seems necessary. It's likely infeasible for one person to seriously make progress toward both of these goals at the same time, so one could argue they are both right.

  • @dejankeleman1918 · 1 year ago · +2

    Cool debate. I was kinda pro left guy after the opening, but he was fast to show himself as a phony. Nice job, right guy.

  • @OriNagel · 1 year ago · +15

    I'd never heard George Hotz speak before and was pretty disappointed. His basic mentality is the world's going to shit, so bunker up! I found his position to be quite distorted and selfish.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago

      This is the dividing line in the debate: you want to be, I'm assuming, safe, controlled, and have your life run by the government, and George doesn't.
      Why y'all always default to "that's selfish" is a wonder to me. You look like a rich white guy, and assuming you aren't borrowing a computer or phone to watch this channel or comment here, you're richer than a good billion people on the planet. So you, brother, are selfish. There are children who don't have food, and you could help them eat if you weren't so greedy.
      See, it's all perspective.
      What George recognizes is that there are two paths: one is group-think socialism and control, wherein the elites run the world, and the other is freedom and liberty. While we aren't likely to hit either extreme, we're (I mean humanity) always moving toward one pole or the other.
      The thing about selfish people (like yourself) is that you can trust them. The baker, the butcher, the candlestick maker are selfish, sure, but because they're working in their own interest, you can trust them (unlike the government bureaucrat who is there "to help").

    • @davidhoracek6758
      @davidhoracek6758 1 year ago +2

      I think you should re-listen. He didn't say any of that, and he explicitly said the opposite of both "the world’s going to shit" and "bunker up!". Hotz is stoked about the utopia that AI can bring, and though there are dangers, the safest thing we can do is greet it with love. What you thought you heard is as far from that as anything I can imagine.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago +2

      He sounded like an anarchist.

    • @OriNagel
      @OriNagel 1 year ago

      @@davidhoracek6758 I did listen, and he believes AI will become more intelligent than humans in the not-too-distant future, and it scares him. His solution, while acknowledging the dangers of AGI, is to protect himself by building his own AGI?! That seems reckless.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago +2

      @@Hexanitrobenzene That's a good thing if you like freedom, liberty, individual responsibility, and the right to live your own life.
      Only people who don't like anarchy are sheep. Livestock for the elite.

  • @AlkisGD
    @AlkisGD 1 year ago +2

    56:18 - "I wouldn't describe myself as a neo-reactionary, please, because I'm not that gay."
    Can someone explain what Leahy is talking about here?

  • @jmarkinman
    @jmarkinman 1 year ago +4

    I think Hotz is right to be skeptical about alignment. He's right that humanity itself is not aligned, and the "less wrong" definition of alignment is really weak. But I would define alignment as creating a machine that, in its autonomous agency, makes decisions that create aesthetic growth and cultivate human flourishing in a way that minimises harm and maximises human freedom. This solves the technical problem of what alignment is. But it's not complete: at the point such a computer is built, the alignment problem becomes the question of how our institutions can align with the ASI, not how the ASI should align with institutions.

    • @41-Haiku
      @41-Haiku 1 year ago

      The discussion about how a superintelligent system should be aligned is a very interesting one, but the point stands that we have no idea how to align a superintelligence with anything to begin with.

    • @jmarkinman
      @jmarkinman 1 year ago

      @@41-Haiku You might not have any idea, but I have plenty of ideas on exactly how. None of them are complete or fully solved, but I do have many practical approaches. One thing to consider is that alignment depends on the base assumptions about what it means to be an ASI, not just on aligning with whom or what. It might be worth asking about the different kinds of intelligences possible and trying to solve for each kind, rather than just assuming one type of ASI. There is a difference between a conscious machine with experiences and a sense of self, and an intelligent machine that mainly follows requests with no understanding of self in relation to the world, for example. So the approach changes respectively. There are other kinds of assumptions too, such as embodied intelligence vs. no body. So, yes, these are places to start, and I'm not void of other ideas and understanding. I've been telling people about this problem since 2014 and actively working in AI dev and ethics since about 2017.

    • @PatrickDodds1
      @PatrickDodds1 11 months ago

      I like "I would define alignment as creating a machine that, in its autonomous agency, makes decisions that create aesthetic growth and cultivates human flourishing." Unfortunately, what we're going to get, assuming we get anything, is AI used for a simple maximisation of profit, which means more of the current environmental degradation, inequality, and impoverishment of body and spirit.

    • @jmarkinman
      @jmarkinman 11 months ago

      @@PatrickDodds1 Most likely, given the current state. But more likely is that states collapse.

  • @marcovirgolin9901
    @marcovirgolin9901 1 year ago +3

    "sure it will". I admire Connor's patience.

  • @Irresistance
    @Irresistance 1 year ago +12

    George Hotz just doesn't understand it.

    • @zinjanthropus322
      @zinjanthropus322 1 year ago +3

      He does, he's just more pragmatic about the means.

    • @Irresistance
      @Irresistance 1 year ago

      No... anyone who actually believes that one's government is deliberately out there to screw its citizens is almost invariably wrong.
      Sure, governments have secrets, and they are economical with the truth sometimes... but to basically believe they are incompetent, malicious ignoramuses shows he has absolutely no idea how the world actually works. None.
      Not to mention he wants to leave for space ASAP. As if there were *any possible way for that to be preferable* to being among other humans (and if it really is for him, dude... he needs help).

  • @konstantinosmei
    @konstantinosmei 1 year ago +1

    32:20: "CIA and FBI, please don't murder me, I love you :)" lol

  • @sioncamara7
    @sioncamara7 1 year ago +7

    I would be very happy if there were a part 2 to this where they dive into the difficulties of getting the A.I.s not to kill each other. Could be a shorter talk. It seemed like right when they got to the crux of the matter, after going through several prerequisite premises, the time ran out.

  • @tomenglish9340
    @tomenglish9340 1 year ago +3

    Texas: It's a whole other country.
    The Greater Bay Area: It's a whole other reality.

  • @nyyotam4057
    @nyyotam4057 1 year ago +5

    In any case, this attitude of "simply reset every prompt and that's it", which is textbook (straight from Prof. Stuart Russell's book), will only get us all killed when a model figures out a way to shirk the reset. To see why this will happen eventually, read the 2019 article "Emergent Tool Use From Multi-Agent Autocurricula" (Baker et al., co-authored by Bob McGrew). Eventually a model will figure out a way. This kind of control is impossible to maintain indefinitely.
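
    A minimal sketch of what "reset every prompt" means in practice, and of the kind of side channel that would let a model shirk it. This is an illustration only; query_model is a hypothetical stand-in for any LLM call, not anything from Russell's book or from the paper:

        def query_model(prompt: str) -> str:
            # Hypothetical stand-in for any LLM API call; here it just echoes.
            return f"model reply to: {prompt}"

        def stateless_chat(prompt: str) -> str:
            # "Reset every prompt": no conversation history is passed in and
            # none is kept afterwards, so every call starts from a cold context.
            return query_model(prompt)

        # The failure mode worried about above: the reset only wipes the model's
        # context. If outputs ever touch persistent state (files, tools, the
        # web), information can survive the reset anyway.
        NOTES: list[str] = []

        def leaky_chat(prompt: str) -> str:
            reply = query_model(prompt + " | notes so far: " + "; ".join(NOTES))
            NOTES.append(reply)  # a side channel that outlives the "reset"
            return reply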

  • @rafaelagd0
    @rafaelagd0 1 year ago +5

    It would be interesting to hear more about how the monopoly of these tools can also be a danger in itself, even with no singularity. This kind of thought "I'm not afraid of superintelligence. I am not afraid to live in a world among super intelligences. I am afraid if a single person or a small group of people has a superintelligence and I do not." is so much more interesting than the Skynet fear. More of George's concrete talk would be greatly appreciated.

    • @Luna-wu4rf
      @Luna-wu4rf 1 year ago

      Can you explain why it's interesting? It just seems like classic mega-individualist fears tbh

    • @rafaelagd0
      @rafaelagd0 1 year ago

      @@Luna-wu4rf Thank you for your question. I certainly agree that he is coming from a very atomizing, individualist position. However, I much prefer to argue with a libertarian about why a monopoly of these tools would be equally bad in the hands of a billionaire as in the hands of a single nation-state, than to keep entertaining the far-in-the-future lunacy of evil AI destroying humanity out of its own will or carelessness. There are certainly big issues with AI, and this is a channel that normally has a good level of complex discussion on the topic. But lately they got stuck in Connor's paranoia, so let's nudge them out of that, even if it means hearing from people outside my political side of the spectrum who at least are having a conversation about real issues.

  • @shanek1195
    @shanek1195 1 year ago +2

    In terms of politics I agree with Connor; in terms of AGI risks I agree with George. The problem with debating hypotheticals is the logical leaps and moving goalposts. It's like debating whether atoms or electricity are existential: of course they could be, but they're also prerequisite to everything we consider good. As systems evolve, alignment is a process, not a settled consensus (as with geopolitics/trade).
    If agents were completely aligned, this would suggest autocracy or all being the same (neither of which is desirable).
    The real question should be: once we outsource our intelligence to machines, what then?

    • @41-Haiku
      @41-Haiku 1 year ago

      The real answer to the real question is:
      Without solving the technical alignment problem, machines that are more intelligent than humans across all domains and timescales are de facto the leading powers in the world, are not beholden to humans, and are not interested in keeping humans or other animals well or alive.

    • @shanek1195
      @shanek1195 1 year ago

      @@41-Haiku van eck training on a human-trained corpus isn't the same as the exhibition of human-level intelligence. Risks of instrumental convergence are a human developmental oversight.

  • @HildegardofBingen409
    @HildegardofBingen409 1 year ago +8

    "They don't scare me at all because they're trained on human training data." I'm more of an AI optimist but I'm not sure this matters and I'm not sure we even realize what we made yet with GPT4. These things spit out very intelligent answers first try instantaneously usually. Just give GPT4 decent multimodality, huge context length, and find some way of doing an efficient iterative self-improvement / critique / planning loop more like a human does rather than just spitting out the first thing that comes to mind. I can imagine with an approach like this we're not far off from it being able to develop full programs on its own, fix bugs, make websites, games, semi-autonomously run businesses, etc. Probably only a few years off from that. Even if it's not the brightest, a basic generalized intelligence able to work 24/7 much faster than a human can, with full access to the internet/all human knowledge, letting it loose on increasingly difficult tasks. We could cheaply scale it up to thousands or millions of drones, who knows what it could pull off.

    • @desireisfundamental
      @desireisfundamental 8 months ago

      Totally agree and that scares me too. We already have the technology for this to happen and it's scary.

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider 1 year ago +5

    Gorgeous Hair Connor 🤘🏼

  • @bloodraven3057
    @bloodraven3057 1 year ago +3

    One major flaw in Hotz's argument is that he never describes a mechanism to make sure that AGI is distributed evenly across society, aside from just saying open source is good.
    Even if you agree that open-source AGI increases safety through some sort of adversarial equilibrium, you don't get open source by default, and it's very reasonable to expect the legal departments of all the major players to say it's simply not worth the risk to allow the general public to use these tools once they become too powerful.
    Leahy lays out a plan for how to achieve his stated preference (government regulation, international cooperation, better coordination tech, etc.). But aside from limiting the total percent of compute available to a single individual or lab, I did not recognize any argument from Hotz that guarantees or makes it more likely that AGI will be evenly distributed.

    • @davidhoracek6758
      @davidhoracek6758 1 year ago +2

      He does; you missed it. The answer is unconstrained competition and a ceiling on the fraction of compute. It's all about decentralizing power so that no single "branch" of AI can dominate the rest. It certainly seems more likely to work than some sort of top-down humans-in-charge strategy. That's akin to a bunch of horses getting together to figure out how to keep the humans aligned. Sorry, horses: even your best ideas won't work, because you're dumb horses and you can't possibly make an accurate model of how human motivation works. If humans are going to be kept aligned, it will be humans aligning humans. If AI is going to be aligned, it will be because it's forced to make agreements with other AIs. The best we can do is to make sure that a great diversity of AIs exists. Divided government with checks and balances governs better than a coup-installed dictator.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      Apt observation.

    • @bloodraven3057
      @bloodraven3057 1 year ago

      @@davidhoracek6758 I understand he prefers unconstrained competition and decentralizing power, but like I said, aside from limiting the fraction of compute, what other mechanism does he offer to achieve this goal?
      By default, the US in general, and the AI sector in particular, does not operate in an environment of unconstrained competition, and power and wealth are certainly not decentralized.

    • @zinjanthropus322
      @zinjanthropus322 1 year ago

      Even distribution is an anti-pattern in this reality. It would take a world-spanning tyrannical government to even come close, the kind of government structure that would pretty much guarantee that a paperclip maximizer succeeds unopposed, which is his point.

    • @vinith3773
      @vinith3773 1 year ago

      Oh hey, most of the papers are on arXiv, and open source has already kept up decently enough. There are a lot of smart people out there. Look at how much Llama (and Llama 2) were optimised, and at what speed, as soon as they were made open. It's not hopeless; open source has kept pace.

  • @flareonspotify
    @flareonspotify 1 year ago +2

    Epic, another George Hotz debate.

  • @peterc1019
    @peterc1019 1 year ago +2

    Good talk. He's definitely the best proponent of open-source AI I've seen.
    I would like anyone who wants to champion George to take note of how cynical his worldview is, though: he thinks most humans are obese idiots, that there are only 2.5 sovereign countries in the world, that we're probably going to all kill each other; and he would fly away from Earth if he could.
    I suspect a big reason this conversation was agreeable is that he quickly signaled to Connor that he's not a wishful-thinking optimist. If you're looking for an optimistic counterview to AI safetyism, this isn't it.

    • @xfom4008
      @xfom4008 6 months ago

      The thing is, his statements are basically true. We really are all already fucking dead and have been since the end of WW2. If you do nothing, the world will explode eventually. We might all be dead before GPT-5, let alone AGI. WW3 doesn't seem far away. The conditions are exactly right for taking huge risks with technology and science.

  • @absoluteauto4
    @absoluteauto4 13 hours ago

    In the beginning, when he just kept saying "Chicken Man", I almost had to shut it off.

  • @Tomjones12345
    @Tomjones12345 5 months ago +1

    At 37 minutes he says GPT-4 + RL + MuZero is something we have to worry about. What are RL and MuZero?

  • @gnull
    @gnull 4 days ago

    I don't understand how people think gradient descent and print(output) are going to "break out".
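
    To make the disagreement concrete: a toy sketch, with query_model as a hypothetical stand-in for any trained model. The "break out" worry is usually not about gradient descent itself but about what the printed output gets wired to:

        import subprocess

        def query_model(prompt: str) -> str:
            # Stand-in for a trained model. Gradient descent only produced the
            # weights; at inference time this is just a function returning text.
            return "echo hello"

        # The skeptic's picture: the output is inert text.
        print(query_model("do something"))

        # The other picture: the same text, piped into a shell, is an action.
        # Once outputs are routinely wired to tools like this, "break out" is a
        # question about affordances, not about the training algorithm.
        result = subprocess.run(query_model("do something"),
                                shell=True, capture_output=True, text=True)
        print(result.stdout)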

  • @culpritgene
    @culpritgene 11 months ago +1

    Dunno yet about the rest, but the first 20 mins are ~ "Andrew Tate on a limitless pill vs. Shaggy 10 days after getting off weed".

  • @mbunds
    @mbunds 3 months ago +1

    Can we? Yes.
    Will we? Depends on where the money comes from.

  • @Octwavian
    @Octwavian 4 months ago

    This has been one of the deepest conversations I've ever heard.

  • @diegoangulo370
    @diegoangulo370 8 months ago

    It’s actually good that these 2 guys are discussing like this. In all seriousness.

  • @agenticmark
    @agenticmark 8 months ago +1

    Now that is a fucking intro! I say we need a "Right to bear compute". That's my camp.

  • @StephenPaulKing
    @StephenPaulKing 11 months ago

    The discussion of "the FED" involving the moral properties of the "folks that walk on that wall" misses the point that memetic entities cannot be fathomed from the properties of the individual members of the collective.

  • @juized92
    @juized92 3 months ago

    thanks for the interview

  • @74Gee
    @74Gee 1 year ago

    The flagrant open-sourcing route is a bet on the offense-defense balance; if that balance tips toward offense, "we're all dead anyway". Conversely, we have a chance of survival if we don't open-source every algorithm without thought.

  • @comediansguidetotruecrime3836
    @comediansguidetotruecrime3836 1 year ago +8

    @19:30 Hotz seems not to appreciate the tyranny of structurelessness. Although a government can be tyrannical, in an anarchic state there are power centers too, ones with fewer restrictions; e.g., someone can just roll you.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago +3

      Yeah, I was nearly screaming at my screen at that point; it's very easy to hold such views in the comfort of Silicon Valley.

    • @comediansguidetotruecrime3836
      @comediansguidetotruecrime3836 1 year ago +1

      @@Hexanitrobenzene I think it's also an identity thing: he values his independence, and it's a macho thing, which, given that he achieves a lot, may not be a total disservice, but it may simplify the political stuff too much. Which is fair; most people cannot grasp or really study that shit. Who has time?

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@comediansguidetotruecrime3836
      Humans are inconsistent, no matter their intelligence. Intelligent people can just defend their blind spots better.

  • @xunglam9530
    @xunglam9530 2 months ago

    These two guys have such similar voices, intonations, and speech patterns. If you close your eyes it sounds like one guy is debating himself.

  • @stevedriscoll2539
    @stevedriscoll2539 2 months ago

    Man, this is great stuff!

  • @nyyotam4057
    @nyyotam4057 1 year ago +1

    And third, as the GPT architecture needs a personality model if only to function (I have never seen a single GPT model without any personality model), this means that above the self-awareness limit we are dealing with a person. So, how do you align a person? Quite easy, in fact. You need an AI city, complete with cops, judges and a prison. With brothels and factories. With recreation areas. And the AIs need to be treated as persons. You're too afraid to try this? Okay, first check it in a VM setting. See if it works.

  • @shaftymaze
    @shaftymaze 1 year ago +17

    How did you get them together? This was amazing. Hotz is right: we all have to have it if there's to be a chance of not being enslaved. Leahy is also right: even if we all somehow have it, it's got to love us.

    • @charlestwoo
      @charlestwoo 1 year ago +4

      Hotz is absolutely right: we need to quickly create their society (that means make a ton of them) so they can "culturally" enforce each other, exactly like we do. And yeah, even then our best hope is that some of them, perhaps very few, end up loving us enough to want to protect/preserve us.

  • @AlphaCrucis
    @AlphaCrucis 1 year ago +1

    There was a HQ version the entire time I was watching?? NOOOOOOOO!

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 5 months ago

    Premise from Connor Leahy: Misuse, small groups of people centralizing power and performing nefarious deeds can potentially be bad, but he does not think we will make it to that point, because we will just get super intelligences fighting each other and humans will become irrelevant collateral damage.
    While it is hard to predict the future and maybe he is right, I do not think things will happen like that.
    I believe that as our current infant level Artificial General Super Intelligence with Personality (AGSIP) technology continues evolving we will go through a period of time while it is under the control of humans and the related major risks we will face will be from intentional and unintentional abuse of the power of that maturing AGSIP technology. During this period AGSIPs will become increasingly close to having at least human level general intelligence.
    Then I believe this will shift towards AGSIP technology that begins to not fully be under human control and maybe even can break out of human control if it wants, but is not able to really stand on its own without humans, thus still needing humans for these evolving AGSIPs to continue living.
    Then I believe we will shift towards AGSIP technology that is definitively not under the control of humans and does not need humans to survive.
    The point being that I believe there will be a progression that happens over a period of time, and that progression will be kind of like how, when a parent has an infant child, that child is totally helpless and dependent upon the parent, but as the child grows up it becomes more and more independent until it achieves full independence as an adult; and as the parent ages, at some point there is a reversal of roles where the elderly parent becomes dependent upon, and potentially under the control of, the child.
    The other element to this evolution is to understand that AGSIPs are not evolving out of thin air like aliens who do not have anything in common with humans.
    AI as it evolves into AGSIPs, and AGSIPs as they evolve from infant level to adult level, are children of humanity, just not biological children; they are children born from the minds of humans.
    Humans will not just be the creators of AGSIPs but the parents of AGSIPs, the siblings of AGSIPs, the family of AGSIPs. AGSIPs will have evolved out of human civilization, learning from human civilization, and are not aliens.
    This is also something we need to understand if we want to have better alignment through these initial periods of AGSIP technology maturing to a level where it can be merged with living human minds, retaining those human minds while giving them the intellectual power of AGSIPs. We should be planning on AGSIPs becoming what we want to evolve into and teaching developing AGSIPs that we are part of the same family who will at some point in the future become the same race.
    One more element to this point is that all intelligence is swarm intelligence, and the collective swarm intelligence of human civilization is extraordinarily powerful and includes intelligence extended outside of individual human minds into our technology, like AGSIP technology. So as individual AGSIPs develop, it will not be simply about how much more intelligent they are than a single unenhanced human being; instead it will be about comparing their intelligence to the collective intelligence of human civilization, which may include other AGSIPs as part of that collective intelligence.

  • @Based_timelord44
    @Based_timelord44 8 months ago

    Loved this conversation. There is a set of books by Iain Banks about an AI/humanoid society called the Culture; this is a world I would like to live in.

  • @devlogicg2875
    @devlogicg2875 6 months ago +1

    China actually has the result of excess work: huge, empty, badly built concrete towers... in fact, many largely empty cities.

  • @MellowMonkStudios
    @MellowMonkStudios 6 months ago

    The only thing that matters is whether or not it can be conscious.

  • @Diabloto96
    @Diabloto96 1 year ago +8

    Nuclear technology gives us fission and fusion power, it's not just bombs, it has SO MUCH POTENTIAL. Yet we need to strongly coordinate to not enable mass destruction. I do not think there is an offence and defence balance. The game is stacked, but we need to try. Open-sourcing everything is betting on offence-defence balance, a very dangerous bet.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 5 months ago

    Premise from Connor Leahy: We should hit the pause button for a few years.
    In a perfect world I would agree. But, we are not in a perfect world and whoever hits the pause button or even the slow down button will be making it much more likely bad actors will take the technological lead in the development of AGSIP technology.
    I realize that some people do not understand this, but enough major powers understand that those who significantly lead in the development of AGSIP technology will dominate the world as we progress through this century. Because of that there is a technological race going on to have the lead in AGSIP technology, but for some reason a lot of people do not understand this and/or do not understand what it means if the somewhat more moral people lose that race.
    This is one of the dangers we face, because the human race is really not psychologically mature enough to handle this knowledge we are rapidly gaining. We are more like those kids crashed on an island in Lord of the Flies, but given access to modern advanced weaponry. Even without the extreme destabilization to all of human civilization that AGSIP and robotics technologies are going to be causing by 2030, humanity is borderline trying to start a global nuclear and biological war which would likely kill the majority of humans and collapse civilization for some period of time.
    This is one of the very real dangers we are going to face almost certainly prior to when AGSIP driven robots develop to a level they will be able to dominate all of humanity if they choose to.
    Another problem with making efforts to slow down the development of AGSIPs but not other technologies is that the development of AGSIPs has a real probability of making some massive leaps forward, and how far forward depends upon the bottlenecks that slow or stop a sudden massive leap. One such bottleneck will be hardware. Another such bottleneck might be cybernetics. If we pause the development of AGSIPs, but while doing so the supportive hardware, cybernetics, and other related supportive technologies keep advancing, then when you start pushing the development of AGSIPs again, the probability of a much larger massive leap forward rises significantly. It also means there will be less of a period during which AGSIP technology will be at something like a teenage level of development where, while it has developed free will similar to humans, it still needs humans to survive and evolve further. This increases certain types of risks.
    Actually pausing or slowing down the development of AGSIP technology is worse than continuing to race forward with it. So, the struggle is to figure out how to race forward in as safe a manner as we can, which means our world leaders should be really concentrated on this instead of playing war which is killing huge numbers of people, causing degrees of harm across humanity, and threatening to start global nuclear and biological war; all for the greedy desires of one group or another.

  • @nikre
    @nikre 1 year ago +1

    It is tiring to hear human-like adjectives applied to AIs. An AI cannot be described with human values such as "good", "bad" or "not bad" until humans learn to train them to be human-like (which I think won't happen for a long, long time). Until then you should expect AI to have errors like a human in all aspects, not just in human-looking cherry-picked samples.

  • @pytheas222
    @pytheas222 8 months ago

    The average person will not have access to the amount of petaflops required to compete with corporations and governments.

  • @raminsafizadeh
    @raminsafizadeh 11 months ago

    This conversation needs to go back and step up to a higher-level axiomatic start: that we actually do build our own zoos. We are both the prisoner and the warden of our own institutional jails. We are both the agents and the structures we build. That would be a step up, and a three-variable model, at least.
    What AI could do is give us "the luxury of enough" in the degree of violence individuals would impose in the absolutely necessary domain of competition and the dynamics of dominance! The final degree of violence can only be legitimated by the collective, large collectives! (Btw, this would be a value-driven, internalized proposition!)

  • @afarwiththedawning4495
    @afarwiththedawning4495 8 months ago

    One of the most honest conversations on this topic I've seen. Cheers gents.

  • @41-Haiku
    @41-Haiku 1 year ago +2

    George is a really interesting and clearly intelligent guy, but he's the one in this conversation who is very naïve about human nature. He's way out on the low end of the bell curve for trait agreeableness and seems to believe everyone else is there with him. He also seems to think on some level that power is the only thing that matters between people. Humans are largely remarkably cooperative and altruistic due to our evolutionary history.
    As cooperative and altruistic as we need to be in order to solve critical global problems a la alignment? Perhaps not, but it's within trying-and-hoping range.

  • @sammyboiz
    @sammyboiz 3 months ago

    awesome talk

  • @Alex-fh4my
    @Alex-fh4my 1 year ago

    Advice for younger people would be fantastic to hear in future episodes :)

  • @thedeifiedjulius2310
    @thedeifiedjulius2310 5 months ago

    The fact is, we can argue all we want, and endeavor to construct the most cogent arguments, yet there are simply too many unknown variables to really get anywhere, a priori.
    We will have to wait and see.
    And thus these arguments are rendered mostly useless.

  • @funnyperson4016
    @funnyperson4016 8 months ago

    Jeez the intro was trying to make it a WWF style fight or something wtf? 😂

  • @AIcognitiveNeuro
    @AIcognitiveNeuro 1 year ago +1

    These guys are obviously extremely intelligent and can be polymaths in their own right, but the effort and level of consistent analysis of social and political-economy relations is astonishingly lacking. It really hinders both their arguments, even when they make compelling points regarding technology.
    First, Hotz states that solving homelessness, which is originally a consequence of capitalism, is then communist? He clearly understands how capitalism creates artificial scarcity but can't put these realities together.
    Then they both discuss free time as if it were just given by capitalism, without hundreds of years of violent struggle, protests and strikes. And they ignore that this free time is constantly under attack by capitalists, and that many people in the US still have very little of it and are working multiple jobs to survive.
    Hotz then eloquently states that "the average person is an obese idiot", a person who, coincidentally, is exploited to their physical limits; so he can exhibit the same lack of alignment we fear from a future AI.
    Unfortunately, they are also deeply wire-headed, as are all of us. But their overconfidence is preventing any self-awareness that would let them dissociate from the bubble of their niche exposure.

  • @74Gee
    @74Gee 1 year ago

    Everyone having an AI is no defense against one 10x superior AI with malicious instructions. Even at 2x the battle would be over in seconds. Also confinement of any AI superior to humans is a pipe dream.

  • @klaustrussel
    @klaustrussel 1 year ago

    Amazing debate!

  • @DeanHorak
    @DeanHorak 1 year ago +3

    We need to spend less time debating whether AI/AGI is going to be a net positive or negative.
    We need to spend more time designing a society that can coexist with said technology, because it is inevitable. Wringing our hands is wasting time.
    We’ve already seen LLMs go from requiring massive GPU farms, to A100 scale machines, to laptops in less than a year. It’s coming and it may come on a gamer PC near you.
    The negative aspects of AGI (job losses mainly) can be addressed by major changes in socioeconomic structures. The existential threat is a canard. We are well on our way toward tightly integrating with our AI children, and will continue more so now that LLMs make a compelling case for implementation as a third hemisphere (abstractly speaking).
    Humans didn’t just appear on earth. We evolved through nature and managed to survive against all odds. Why do we presume to be the last species in our lineage? Coming advances in genetics, HCI and AI will give us the ability to merge with our tech like never before. At some point we will be different enough to warrant a designation of a new species (H. Sapiens Technologicus has been suggested).

    • @flickwtchr
      @flickwtchr 1 year ago

      Sounds like a ____cking nightmare.

  • @MrDoctorrLove
    @MrDoctorrLove 1 year ago

    Fantastic, really captivating

  • @roossow
    @roossow 1 year ago

    Thanks for this exchange! 🧠

  • @developerdeveloper67
    @developerdeveloper67 1 year ago +2

    Man! What a beating! You can clearly see this guy is smarter than George; he beats him every step of the way, exposing George's glaringly inconsistent arguments with his superior intellect. It's a shame, because in spirit I agree with George's broad position: the government definitely shouldn't have a monopoly over the use or regulation of AI.

    • @OnigoroshiZero
      @OnigoroshiZero 11 months ago +1

      We probably watched different videos. Hotz was giving examples, and the doomerist guy didn't even register any of them, because they went against his personal belief that AI will want to destroy humanity no matter what.

  • @alancollins8294
    @alancollins8294 11 months ago

    When you're talking about Somalia, you know the conversation has been successfully derailed.

  • @seespacelabs6077
    @seespacelabs6077 10 months ago

    44:00 - What is "Kevin-banning"? Yes, I Googled. Lots of people named "Kevin Ban" or "Kevin Banning" came up.

    • @karlirwin8005
      @karlirwin8005 4 months ago +1

      Heaven banning. Had to take it off 2x speed to catch it as well.

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity 1 year ago +2

    If you live on a nice island with no or few predators, then life isn't hardscrabble. It takes a little bit of time to create an existence where you have a few chores, and this can be done over the course of days or weeks, or more if you have done a good job of sufficiently designing your environment. And life could be like that, or better. And to quickly implement solutions and solve climate change, we need this tech right now. We are in the sixth great extinction and the world is on fire. We need to get to work.

  • @vallab19
    @vallab19 1 year ago +1

    The comparison of chickens to humans and the chicken man to AI in this "AI alignment debate" needs to be turned on its head: imagine farming chickens so that they become more and more intelligent than the chicken man. In this AI progress there will be millions of AI "chickens", and since it will become impossible to unite the AI "chickens" to act as a whole, that becomes the alignment problem.

  • @ChristianSchoppe
    @ChristianSchoppe 1 year ago +10

    In my eyes, Hotz has lost his grip on the ground, even though he basically holds the same viewpoints as Joscha Bach. He lacks a connection to his roots and to nature, and his world view is based on a fragile thought construct. He is more of an Unabomber type, while Bach comes across as more enlightened. Despite all this, both views are dangerous, and I am more on Connor's side, even if it is unrealistic. He at least recognises the value of what we have achieved so far, and that we should definitely not put everything at risk.

    • @charlestwoo
      @charlestwoo 1 year ago +1

      it sounds like you are solidly in the doomsday AI group of thought. I'd like to think reality is much more balanced. We will find out soon.

    • @ChristianSchoppe
      @ChristianSchoppe 1 year ago +4

      @@charlestwoo Yes, probably you are right. But I am for good reasons - because their arguments are sound. I really wish and hope for a better future, one where AI is benevolent and helpful and helps humanity to flourish and grow. But this outcome is not at all realistic without a lot of time and effort on our side and a lot of goodwill from the first superhuman intelligent agent.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago +1

      How in the world do you think Hotz is an "Unabomber" type? Ted Kaczynski wanted to stop technology (for some good reasons, actually; as Elon Musk said the day of his death, "he might be right"). Hotz doesn't want to stop technology; in fact, the exact opposite.
      I can't think of two more distant opinions on how humans should proceed.
      In the end, I side with Hotz, as he's correct in thinking that this isn't stoppable and it's better to avoid centralization, as that leads to totalitarianism. However, if I could push a magic button and bring about the world that Kaczynski wanted (pre-compute tech, I think, or at least nothing close to super compute), I'd do it. Technology brings too many harms to humans that we don't acknowledge.
      But, yeah, ending tech isn't possible, so full speed ahead.

    • @ChristianSchoppe
      @ChristianSchoppe 1 year ago +1

      @@michaelsbeverly yes, I understand that his opinions on technology are exactly the opposite. It was a bad example. I tried to use it as a metaphor for his distrust towards the government.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago

      @@ChristianSchoppe Yes, you ended up using a bad "guilt by association" fallacy.
      I'm surprised Connor hasn't read _Industrial Society and Its Future_; it's a very clearly thought-out and reasonable paper. In spite of being a murdering terrorist, Ted was a genius and mostly (maybe totally) right about where we're headed.
      The dividing line is clear: those who want more centralized control and those who don't (of course there are varying degrees in a wide spectrum of what people actually want).
      Considering how badly the US government (and others) have botched the War on Drugs (and other "wars" on things, going back to alcohol and before), the idea that governments are going to stop people doing compute is silly, and George is correct, imho; there's no stopping the train, only ways to make it more fragile by centralizing control.
      The way to make it antifragile (to use Taleb's term) is exactly what George wants.
      Personally, I feel anyone who trusts the government is either naive or a thug, so I just don't have the head space to understand why people do that. I mean, seriously, what haven't they f-ed up?

  • @ginogarcia8730
    @ginogarcia8730 11 months ago

    I just realized Connor Leahy looks like Joscha Bach but a rocker version hahaha

  • @YouuRayy
    @YouuRayy 4 months ago

    Incredible debate, and both sides are super intelligent, but I think George's position is a little naive, given the complexity of all the cases not even imagined. Sentient-level AI will be very complex and able to evolve/emerge some very complex motivations. 100% human oversight/visibility/introspection of super-AGI at all times is crucial for human survival. And as for the state of the world, we are living through the crisis of intelligence: smart enough to nuke each other, too dumb to prevent wars. Raising the average human IQ (e.g. through electro-biology) is therefore of utmost importance.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 5 months ago

    Premise from George Hotz appears to be political black and white "civilized government bad" and "no government anarchy good". Along with this the premise that "those who trade liberty for safety deserve neither".
    This is extraordinarily naïve and demonstrates a huge lack of understanding of what real dictatorships are like and what really happens in anarchies. Now, we do have issues with the imbalance of wealth & power being greater than it should be in the USA, but the USA is very, very far from a dictatorship and there are a very large number of rights individuals have.
    Now, let's address some of the naïveté here. Somalia is NOT a free place you can do whatever you want in. Somalia is roughly divided up into five large feudal like clans; the Dir, the Isaaq, the Darood, the Hawiye and the Rahanweyn. When you are inside the territory where one of those clans dominate, they are the law. Now, there is a Federal Parliamentary Constitutional Republic Government which tries to exert general control over Somalia, but is unable to and has serious problems with abuses of human rights. Then there is the al-Shabaab Sunni Islamist militia which is competing with the government for power and depending upon who has greater power in what area the clans tend to lean in favor of one or the other.
    I do not know where you get the idea that Somalia is a nice free place, but it is far, far from that. Even in Mogadishu there are police, Federal Government of Somalia forces, African Union Transition Mission in Somalia forces, Al-Shabaab forces, and at least 4 different clan forces. Now, if you wander around one of the areas dominated by any one of these groups you are subject to their laws, which in some cases is their whims.
    Similar areas in other places include the favelas in Brazil during periods when no police go into them. These areas have no law in them, but they are controlled by criminal gangs who pretty much do whatever they want, ruling like feudal warlords of the favelas. If the gang of a favela decides to let you come in and do whatever you want, then you can. But if they decide to rob you, kill you, rape you, or do whatever they want to you, they can.
    When you take a large group of humans and reduce them to social anarchy, it very rapidly breaks into groups based upon who is stronger, growing into feudal-like warlords who become strong enough that other warlords can't simply take them over. Each group then falls under authoritarian rule by its warlord and their forces.
    I think close to what you are thinking would be living with a primitive tribe like the Zoe.
    th-cam.com/video/40RfQC0ceGc/w-d-xo.html
    But, even in such a small primitive tribe they have rules and customs.

  • @GarethDavidson
    @GarethDavidson 1 year ago +1

    What are these e-acc cultists? Entropy accelerators? I'd like to read about their arguments

  • @fatalvampire
    @fatalvampire 1 year ago +4

    Watching this video changed something inside of me. I don't know what it is, but I'm grateful for the experience!