AI Ruined My Year

แชร์
ฝัง
  • เผยแพร่เมื่อ 28 พ.ย. 2024

ความคิดเห็น • 3.6K

  • @WolfDGreyJr
    @WolfDGreyJr 6 หลายเดือนก่อน +561

    7:00 GPT-4 did not score 90th percentile on the bar exam. That figure is in relation to test-takers who already failed the bar at least once, and would be 48th percentile compared to the general population. GPT-4 was also not graded by people with training scoring bar exams.
    For further info and methodological criticism, refer to Eric Martínez' paper "Re-evaluating GPT-4's bar exam performance" in AI and Law.

    • @gabrote42
      @gabrote42 6 หลายเดือนก่อน +47

      Doing the good work right there. Have a bump

    • @Fs3i
      @Fs3i 6 หลายเดือนก่อน +21

      It still beats *half* of lawyers, roughly. Half of them!

    • @taragnor
      @taragnor 6 หลายเดือนก่อน +120

      @@Fs3i It beats people at test taking, not practicing law. There's a difference. AI is naturally slanted well towards test taking because there's a lot of training data on previous tests and questions and so it can come loaded up with those answers already trained into it. It's the same with AI coders and their ability to pass coding interviews. The thing is that tests and the real world are very different things. Computers have been better at being databases than humans for a long time, so the fact that we can do information lookup isn't all that impressive.

    • @WolfDGreyJr
      @WolfDGreyJr 6 หลายเดือนก่อน +62

      @@Fs3i I should clarify something I misconstrued after editing: the 48th percentile figure refers to the essay section, in total the exam score it was evaluated to have would be 69th percentile (nice), which is still barely passing. The population isn't lawyers, it's people trying to become lawyers. About a third don't manage in the first place.
      This still puts us at beating half of lawyers because maths but I needed to correct myself before moving on to the bigger issues. Plus, when the reported essay performance specifically is scored against those who passed the exam, it comes out to a rather unimpressive 15th percentile score, without even taking into question whether that score is a fair assessment.
      There are significant doubts to be had about the scoring methodology with which the exam score of 297 (still really good for an LLM) was arrived at. They were not graded according to NCBE guidelines or by persons specifically trained in grading bar exams, which is especially problematic for the MPT and MEE parts, which are actually intended to be a test of one's skill to practice law or elucidate how the law would apply to a given set of circumstances.

    • @badabing3391
      @badabing3391 6 หลายเดือนก่อน +19

      @@WolfDGreyJr i now wonder what the exact details of statements like various LLMs doing well on graduate level physics exams and contest level mathematics actually are

  • @RationalAnimations
    @RationalAnimations 6 หลายเดือนก่อน +2530

    WE ARE SO BACK

    • @WoolyCow
      @WoolyCow 6 หลายเดือนก่อน +66

      not me being like 'hey that voice kinda sounds familiar...oh its the ai guy, wait doesn't he do stuff with [checks comments] yeah that makes more sense'

    • @En1Gm4A
      @En1Gm4A 6 หลายเดือนก่อน +5

      As for ai alignment my dream is a neurosymbolic task execution system with a knowledgegraph and visualization for the user of the suggested problem solving task with alternatives to choose from. Human in the driver seat. Misuse management by eliminating risky topics from the knowledge graph

    • @acethirtysix8378
      @acethirtysix8378 6 หลายเดือนก่อน +16

      *finishes watching video*
      It's so over!

    • @AtomosNucleous
      @AtomosNucleous 6 หลายเดือนก่อน +23

      Proposal:
      Cut the part of the "random assignment team" in a short format. This can be viral, give more attention to this channel and its topics

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Doesn't seem to remotely care about personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

  • @Respectable_Username
    @Respectable_Username 5 หลายเดือนก่อน +270

    "Who the hell am I?" Well, you're a person with good reasoning skills who isn't tied to a corporate profit motive, who knows the topic very well, and who has been actively communicating about it for years! It can be intimidating being the subject matter expert for a really important topic, and it can weigh heavily on your shoulders, but you feel that weight because you _care_ . And what we need more than anything else is rational thinkers who have years of study in the topic who don't have a profit motive and who care. And you _won't_ get it right 100% of the time. But you've got the highest proficiency in this area in the current party, and so are more likely to roll above the DC than most others in this world! ❤

    • @clray123
      @clray123 5 หลายเดือนก่อน

      Actually he is a tool with a much too high opinion of himself.

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 5 หลายเดือนก่อน +22

      @@clray123 Well you two should get along swimmingly then

    • @clray123
      @clray123 5 หลายเดือนก่อน

      @@imveryangryitsnotbutter You are trying to insult me, but your attempt is not making any sense. Try again harder.

    • @inaim2
      @inaim2 5 หลายเดือนก่อน

      yes mentor the ppl with potential and try to get involved with the growth of AI we believe in you :)

    • @ivoryas1696
      @ivoryas1696 5 หลายเดือนก่อน +6

      ​@@clray123
      Why are you trying to insult _him_ is _my_ question? I mean... he _pretty _*_clearly_* knows he doesn't know it all (otherwise, this comment wouldn't be responding to a _direct quote), and is willing to learn to improve himself... What's the problem?

  • @AtilaElari
    @AtilaElari 6 หลายเดือนก่อน +1094

    The horrible feeling of "Are you saying that _I_ am the most qualified person for the task? Are you saying that everyone else is even worse than I am?!".
    It is dreadful when the task in question is mundane. Its hard to comprehend when the implications of the said task is possibility of an extinction event.
    For what its worth, I think you are as qualified for this task as anyone can be in our circumstances. Go save the world! We are rooting for you! No pressure...
    Seriously though, looking at multiple comments where people are saying that they started doing AI safety as a career thanks to you shows that you ARE a right person for the job.

    • @buddatobi
      @buddatobi 6 หลายเดือนก่อน +11

      You can help too!

    • @krishp1104
      @krishp1104 6 หลายเดือนก่อน +9

      this reminds me of the last two episodes of the three body problem on Netflix lmao

    • @JorgetePanete
      @JorgetePanete 6 หลายเดือนก่อน +1

      it's*

    • @gabrote42
      @gabrote42 6 หลายเดือนก่อน +2

      Absolutely true

    • @gavinjenkins899
      @gavinjenkins899 6 หลายเดือนก่อน +4

      I mean, by DEFINITION, whoever the most qualified person is has that feeling, that doesn't really change the "implications" for us in general

  • @Badspot
    @Badspot 6 หลายเดือนก่อน +800

    LLMs are particularly good at telling lies because that's how we train them. The RLHF step isn't gauged against truth, it's gauged against "can you convince this person to approve your response". Sometimes that involves being correct, sometimes it involves sycophancy, sometimes it involves outright falsehood - but it needs to convince a human evaluator before it can move from dev to production. The "AI could talk it's way out of confinement" scenario has moved from a toy scenario that no one took seriously to standard operating procedure and no one even noticed.

    • @mindrages
      @mindrages 6 หลายเดือนก่อน +32

      Your second sentence is quotably spot-on.

    • @devoNo2good
      @devoNo2good 6 หลายเดือนก่อน +5

      This

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @ChristianIce
      @ChristianIce 6 หลายเดือนก่อน +59

      An LLM can just be coincidentally right or wrong, it can't "lie".
      It doesn't know what the words mean, it repeats words like a parrot.

    • @lwmburu5
      @lwmburu5 6 หลายเดือนก่อน +36

      @@ChristianIce the stochastic parrot model is disfavored by mech interp

  • @XIIchiron78
    @XIIchiron78 5 หลายเดือนก่อน +315

    "maybe I can hide my misunderstanding of moral philosophy behind my misunderstanding of physics" lmao

    • @XIIchiron78
      @XIIchiron78 5 หลายเดือนก่อน +41

      I have seen a confusing number of people unironically hold the view that "humans should be replaced because AI will be better"
      At what??

    • @kirktown2046
      @kirktown2046 5 หลายเดือนก่อน +36

      @@XIIchiron78 Starcraft. What else matters?

    • @SianaGearz
      @SianaGearz 5 หลายเดือนก่อน +8

      I'd love to be able to understand your comment but i'm struggling. Any help?
      Edit: oh the post-it at 19:39, it wasn't legible when i first watched.

    • @XIIchiron78
      @XIIchiron78 5 หลายเดือนก่อน +2

      @@SianaGearz it was a little joke he put in during the Overton window segment

    • @xXCindellaXx
      @xXCindellaXx 5 หลายเดือนก่อน

      @@XIIchiron78 impact on nature maybe

  • @MrMpakobec
    @MrMpakobec 6 หลายเดือนก่อน +1188

    "Man Who Thought He'd Lost All Hope Loses Last Additional Bit Of Hope He Didn't Even Know He Still Had" LOL

    • @krakow10
      @krakow10 6 หลายเดือนก่อน +66

      The Onion doesn't miss

    • @tristenarctician6910
      @tristenarctician6910 6 หลายเดือนก่อน +29

      Gone into hope debt

    • @mikeuk1927
      @mikeuk1927 5 หลายเดือนก่อน +8

      ​@@tristenarctician6910Nah, there is still more hope to lose. Just let the reality do its job :3

    • @Kenjuudo
      @Kenjuudo 5 หลายเดือนก่อน +2

      @@mikeuk1927 I don't think you necessarily want that.

  • @NitFlickwick
    @NitFlickwick 6 หลายเดือนก่อน +672

    Just remember, Rob, Y2K was mocked as the disaster that didn’t happen. But it didn’t happen because a lot of people realized how big of a deal it was and fixed it before the end of 1999. I really hope we are able to do the same thing with AI safety!

    • @wojtek4p4
      @wojtek4p4 6 หลายเดือนก่อน +85

      The scary thing to me is not that with Y2K, almost all people wanted it not to happen.
      But with -Y2K2 electric boogaloo- AGI risks, there are some people (Three Letter Agencies, companies, and independents), which want AI threats to happen, but controllably. That means that instead of all of the efforts focusing on mitigating the issue, we're fumbling around implementing random measures in hope they help - while those mentioned groups focus on making sure we're not doing that _to them._

    • @duytdl
      @duytdl 6 หลายเดือนก่อน +65

      Add Ozone disaster to that list too. We barely got away with it. If it had happened today dollars to donuts, we'd never have been able to convince enough people to ditch even their hairsprays. Internet (social media particularly) has done more harm than good.

    • @ChrisBigBad
      @ChrisBigBad 5 หลายเดือนก่อน +50

      "If our hygiene measure work, we will not get sick and the measures taken will look like they were never necessary in the first place."

    • @XenoCrimson-uv8uz
      @XenoCrimson-uv8uz 5 หลายเดือนก่อน +5

      @@duytdl I disagree with that, without internet I wouldn't have know climate change was real because of everyone attitude was normal and not panicking

    • @GabrielPettier
      @GabrielPettier 5 หลายเดือนก่อน +5

      I'm pretty sure it's one of the things he hints at at 44:15

  • @johannesdolch
    @johannesdolch 5 หลายเดือนก่อน +104

    "The cat thinks that it is in the Box, since that it is where it is."
    "The Box and the Basket think nothing because they are not sentient." Wow. That is some reasoning.

    • @Mrpersonman0
      @Mrpersonman0 5 หลายเดือนก่อน +6

      It's entirely accurate I agree.

  • @Alorand
    @Alorand 6 หลายเดือนก่อน +412

    The problem with fixing AI alignment problem is that we are already dealing with Government and Corporate alignment problems...
    And those Governments and Corporations are accelerating the integration of AI into their internal structures.

    • @ZappyOh
      @ZappyOh 6 หลายเดือนก่อน +37

      Yes ... All the money goes toward AI-alignment to government and corporations.
      It is hard to envision that as anything other than extreme dystopia :(

    • @Frommerman
      @Frommerman 6 หลายเดือนก่อน

      The way I put it is that we know for a fact misaligned AI will kill us all because we've already created one and it is currently doing that. It's called capitalism, and it does all the things the people in this community have been predicting malign superintelligences would do. Has been doing them for centuries. And it's not going to stop until it IS STOPPED.

    • @Shabazza84
      @Shabazza84 6 หลายเดือนก่อน +10

      Can't happen in my country yet. They still figure out how to "govern" without using a fax machine
      and to introduce ways to be able to actually use your electronic ID you got 14 years ago.

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @cortanathelawless1848
      @cortanathelawless1848 6 หลายเดือนก่อน

      I mean Israel literally is using ai to kill enemy combatant's in their family homes

  • @Gaswafers
    @Gaswafers 6 หลายเดือนก่อน +1438

    Suffering from "fringe researcher in a Hollywood disaster movie" syndrome.

    • @MetsuryuVids
      @MetsuryuVids 6 หลายเดือนก่อน +54

      Don't look up.

    • @endlessvoid7952
      @endlessvoid7952 6 หลายเดือนก่อน +28

      And like a Hollywood movie, the risks aren’t real 😅

    • @flickwtchr
      @flickwtchr 6 หลายเดือนก่อน +47

      Do you also refer to Connor Leahy as having a "Messiah complex"? Why is it so many AI bros go straight to the ad hominem attack rather than engage arguments?

    • @endlessvoid7952
      @endlessvoid7952 6 หลายเดือนก่อน +12

      @@flickwtchrI mean…. Kinda. Have you seen interviews with him?

    • @D_Cragoon
      @D_Cragoon 6 หลายเดือนก่อน

      ​@@endlessvoid7952
      This video includes an example of an ai, even in its current state, being able to formulate a strategy that involved misleading a human. That's what ai can already do.
      Many common objections to taking ai safety seriously are addressed in this other video of this channel here: m.th-cam.com/video/9i1WlcCudpU/w-d-xo.html

  • @nastrimarcello
    @nastrimarcello 5 หลายเดือนก่อน +58

    Autism for the win 20:40

  • @NitFlickwick
    @NitFlickwick 6 หลายเดือนก่อน +409

    First, the cat joke. Then the “depends on autism” Overton window joke. Glad to have you back! - Signed “a guy who is happy to ignore (ok, doesn’t even see) the Overton window”

    • @gavinjenkins899
      @gavinjenkins899 6 หลายเดือนก่อน +18

      You can also just be privileged/arrogant instead of autistic. it's your chance to put those things to good use! Like, at my current job, I know full well i can easily get another job, with my education and background, so I don't care at all about just slamming home brusque comments that are clearly true.

    • @kaitlyn__L
      @kaitlyn__L 6 หลายเดือนก่อน +22

      @@gavinjenkins899 that’s part of the thing though, isn’t it? In everyone else, it requires that kind of personality. That’s part of why us autistic people often get called arrogant when we’re trying to help others!

    • @ejayAD
      @ejayAD 6 หลายเดือนก่อน

      Great Rob love this thank you!

    • @MrDoboz
      @MrDoboz 6 หลายเดือนก่อน +6

      also Elon jumping on changing planet lol

    • @AtomicVertigo_Comics
      @AtomicVertigo_Comics 5 หลายเดือนก่อน +1

      @@kaitlyn__L so true!

  • @LadyTink
    @LadyTink 6 หลายเดือนก่อน +304

    I noticed, obviously when your fav ai safety channel disappears right when suddenly ai safety seems the most important thing xD

    • @Kenionatus
      @Kenionatus 6 หลายเดือนก่อน +36

      In today's news: dozens of influential AI safety researchers and journalists killed in series of plane crashes

    • @Eddie-th8ei
      @Eddie-th8ei 6 หลายเดือนก่อน +9

      "just when the world needed him the most, he stopped uploading to his youtube channel"

    • @someguycalledcerberus9805
      @someguycalledcerberus9805 6 หลายเดือนก่อน +5

      I had been wondering if he's busy because he's working in one of the teams and simply doesn't have time or signed an NDA.

    • @darkzeroprojects4245
      @darkzeroprojects4245 5 หลายเดือนก่อน

      @@Kenionatus Id not be suprised of came true.

    • @RoulDukeGonzo
      @RoulDukeGonzo 5 หลายเดือนก่อน +1

      I honestly thought he looked at gpt output being charming and was like, oh, I guess I was wrong

  • @evrimagaci
    @evrimagaci 5 หลายเดือนก่อน +34

    It's good to see you back Robert. This video confirms what I've been seeing in the field too: Things are changing, drastically. Even those who were much more reserved about how AI will change our lives seem to have changed their points of views. By that I mean if you compare how "The Future of AI" was being talked about just a mere 1.5 years ago vs. today is drastically different among the scientists who know the field. I am not saying this to take a stab at the community, I think it is honorable to adapt the the landscape as it changes without our control. It just signals that AI (and its safety) is actually way more important than what has been portrayed to the public in the past couple of decades. We need to talk more about it and we need to underestimate it much less.

  • @UnPetitPoulet
    @UnPetitPoulet 6 หลายเดือนก่อน +218

    5:24 Is this the death sound from the game Noita ?
    In Noita, players kill themselfs a LOT while trying to play god while combining dangerous (and often really dumb) spell combinations to make their magic wand as f*ing powerfull and game-breaking as possible.
    Now I can't help to see AI as a wand we are collectively tinkering and testing randomly. What could go wrong ?
    Spoiler: I had a run where I casted infinite spontaneous explosions that spawned on my ennemies. At one point I ran out of ennemies so it relocated on my character... Funniest shit, I'll do it again

    • @Frommerman
      @Frommerman 6 หลายเดือนก่อน +12

      Lmao I literally just finished the sun quest for the first time. Nice to see a fellow Minä.

    • @awadafuk4863
      @awadafuk4863 6 หลายเดือนก่อน +12

      It definitely is. Had me shouting at my phone in Finnish 😤😤

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @cameronforester8413
      @cameronforester8413 6 หลายเดือนก่อน

      Homing rocks are pascifist 🪨 ✌️

    • @cerocero2817
      @cerocero2817 6 หลายเดือนก่อน +15

      After all, why not? Why shouldn't I put a larpa on my rapid fire nuke wand?

  • @caleblarson6925
    @caleblarson6925 6 หลายเดือนก่อน +198

    Hey Rob! I just wanted you to know that I've been watching your videos for several years (all the way back to the stop button problem). This year you inspired me to apply to MATS to finally get involved in AI safety research directly, to which I've now been accepted! Thanks for making these videos over the years, you've definitely influenced at least one more mind to work on solving these extremely important problems.

    • @alexbistagne1713
      @alexbistagne1713 6 หลายเดือนก่อน +8

      Congrats!

    • @deifiedtitan
      @deifiedtitan 6 หลายเดือนก่อน +6

      That’s great, well done

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      If he actually wanted people to speak out, he wouldn't have said "autism" then split off to useless skinny nerdery talk. (he doesn't care, he sucks up to TH-cam for paycheck, and harvest personal data and intellectual property for his content) btw (your intellectual property and personal data).

  • @timothy6966
    @timothy6966 5 หลายเดือนก่อน +22

    God, it’s like looking in a goddamn mirror. I switch between “near” and “far” mode on a daily basis. If I stay in near mode I’ll be committed to an insane asylum in a week or so.

  • @carrotylemons1190
    @carrotylemons1190 6 หลายเดือนก่อน +379

    Noita death noise made this even more terrifying than it already was

    • @SaffronMilkChap
      @SaffronMilkChap 6 หลายเดือนก่อน +33

      Thank you - it was tickling my brain and I couldn’t place it.

    • @MetallicMutalisk
      @MetallicMutalisk 6 หลายเดือนก่อน +5

      I noticed that too lol

    • @Brunoenribeiro
      @Brunoenribeiro 6 หลายเดือนก่อน +5

      I thought it was a modification of the Majora's Mask noise. Maybe Noita took some inspiration?

    • @huhabab
      @huhabab 6 หลายเดือนก่อน +25

      I'm so conditioned to that sound I felt rage and disappointment about myself as soon as the sound played, Noita ruins lifes.

    • @kriskolish6423
      @kriskolish6423 6 หลายเดือนก่อน +11

      AGI Extinction = Skill Issue

  • @FoxtrotYouniform
    @FoxtrotYouniform 6 หลายเดือนก่อน +150

    I posit that the reason AI Safety has taken so long to hit the mainstream is that it forces us to confront the uncomfortable reality that nobody is in charge, there are no grand values in governance, and even the most individually powerful among us have no idea what really makes the world tick day to day. Confronting AI Safety, which could be reworded as Superorganism Safety, makes us realize that we even have yet to solve the alignment problem in our governments and other human-staffed organizations like corporations, churches, charities, etc.
    The powers that be have very little incentive in that context to push forward in the arena of AI Safety because it inherently raises the question of Superorganism Safety which includes those organizations, and thus puts them at the forefront of the same "is this really what we want from these things" question.

    • @NealHoltschulte
      @NealHoltschulte 5 หลายเดือนก่อน +15

      How do I upvote this twice?

    • @Sal1981
      @Sal1981 5 หลายเดือนก่อน +10

      AI alignment is probably more about human alignment.

    • @tristan7216
      @tristan7216 5 หลายเดือนก่อน +7

      "what we want from these things" - there is no we any more, maybe there never was. There's a bunch of people who don't like or trust each other but happen to be geographically co located. This is the fundamental alignment problem no matter what you're trying to to align. Maybe they could align governments and AIs in Finland or Japan, I don't know. Maybe I'm just pessimistic because I'm in the US.

    • @Hexanitrobenzene
      @Hexanitrobenzene 5 หลายเดือนก่อน +4

      You raise a good point, but I don't think it's the main reason at all. Only for the philosophically oriented, maybe.
      The problem is that most people are practically oriented and consider things only when they confront them.

    • @elfpi55-bigB0O85
      @elfpi55-bigB0O85 5 หลายเดือนก่อน

      "there are no grand values in governance"
      That's not true. Capitalism, colonial management and the expectations of a society derived from White Supremacist economic theory. There you go.

  • @reverse_engineered
    @reverse_engineered 5 หลายเดือนก่อน +12

    Thank you Robert. I understand how difficult this must be for you. Imposter syndrome is very real and anyone with the knowledge and self-awareness you have would be well served by being cautious and skeptical of your ability to answer the question properly. But as far fetched as it may seem, you have all the right qualities to help: you are very knowledgeable about the area, you carefully consider your words and actions in an attempt to do the least harm as possible, and you are a recognizable and influential person within the public sphere.
    We need you and more people like you to be strong influencers on perception, awareness, and policy making. For anyone working in AI Safety, alignment is the clear problem, and we already know how governments' and corporations' alignments often prioritize their own success over the good of society. Folks like Sam Altman can sign all the open letters they want, but their actions show that they still want to race to be the first and treat safety as a distant third priority.
    I think the only hope we have is that enough emphasis is put into research and policy that we can figure out safety before the corporations figure out AGI. There is no way we are going to get them to stop or even slow down much, since that directly opposes their shareholders' interests. Policy and law aren't going to stop them; we have seen that numerous times throughout history and in many areas even today. Perhaps people on the inside could choose to defect or prioritize safety over advancement, but there are too many people excited to make AGI that all of the cautious folks who would blockade or walk in order to stop things would quickly be replaced by those who are too excited to care.
    What we need is knowledgeable and influential people making their way into the higher ranks of these corporations. We need people with real decision making power to be there to make decisions that better align with the good of society and not just short-term profit seeking. People like you.
    Good speed, sir, and thank you for stepping up to help save humanity from itself.

  • @x11tech45
    @x11tech45 6 หลายเดือนก่อน +83

    35:14 "OpenAI's super alignment team (that was dissolved) seems promising and deserves its own video" - combined with the horn blow - combined with the visual stimulus ("nevermind") - made understanding the spoken words difficult to understand. Thankfully, closed captioning was available.

    • @JB52520
      @JB52520 6 หลายเดือนก่อน +31

      I think that was intentional, since the words became irrelevant. Anyone just listening might have heard outdated information without getting the joke.

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @x11tech45
      @x11tech45 6 หลายเดือนก่อน +4

      @@JB52520 oh, I got the joke once I read the words in closed captioning... But the horn stopped me from even hearing the joke.

  • @pafnutiytheartist
    @pafnutiytheartist 6 หลายเดือนก่อน +25

    The problem both sides of the argument seem to be mostly dismissing is economic:
    We will almost certainly create systems that can automate large enough percentage of human labor before we create any superintendent agents posing existential risks.
    This will lead to unemployment, inequality and small number of people reaping most benefits of new technology. OpenAI was a nonprofit organization with the aim to benefit humanity in general until they achieved some success in their goals and restructured to be a company optimizing shareholder income.

    • @juliahenriques210
      @juliahenriques210 6 หลายเดือนก่อน +10

      The main overlap is that the same economic pressures that drive the obsolescence of jobs, the cohercive laws of competition, also drive the premature deployment of deficient AI systems to control tasks they're not yet ready for. The current example is autonomous vehicles, which still have a hard time functioning outside standard parameters, and thus have been documented to... run over people. On a larger scale, a limited AI system can already do lethal harm when put in charge of, say, an electrical grid, or a drone swarm. It's the same root cause leading to different problems.

    • @arthurdefreitaseprecht2648
      @arthurdefreitaseprecht2648 5 หลายเดือนก่อน +1

      Very very well said, up!

  • @Reaperance
    @Reaperance 5 หลายเดือนก่อน +10

    I wrote a complex work in my Abitur (German equivalent to something like A-Levels) about the possibility and threats of AGI and ASI in late 2019. In recent years with the incredibly fast-paced development of things like GPT, Stable Diffusion, etc. I find myself (often to a silly degree) incredibly validated.
    And terrified.
    That aside, it's great to see there are people (much smarter than me) who understand this very real concern, and are working to find solutions and implementations to avoid a catastrophe... working against gigantic corporations funneling massive amounts of money into accelerating the opposite. Oh boy, this is gonna be fun.

  • @DarkestMirrored
    @DarkestMirrored 6 หลายเดือนก่อน +70

    I actually have a pair of questions I'm curious to see your take on answering.
    1.) Is any serious work on AI alignment considering the possibility that we can't solve it for the same reason that /human/ alignment is an unsolved problem? We can't seem to reliably raise kids that do exactly what their parents want, either. Or even reliably instill "societal values" on people over their whole lives, for that matter.
    2.) What do you say to the viewpoint that these big open letters and such warning about the risks of AI are, effectively, just marketing fluff? That companies like OpenAI are incentivized to fearmonger about the risks of what they're creating to make it seem more capable to potential investors? "Oh, sure, we should be REALLY careful with AI! We're worried the product we're making might be able to take over the world within the century, it's that good at doing things!"

    • @fartface8918
      @fartface8918 6 หลายเดือนก่อน

      its less marketing fluff and more trying to trick lawmakers into letting laws be made with openAi on the top of the pile, the same way regualions around seach engines made with google at the top favor google because it rases the barer to entry for a competitor, if regulation is made 5-10 years from now when openAi is doing worse off the company would be doing worse and so must make letters like this as is its legal obligation to maximize shareholder profits, this is in addion to the normal big company thing of regulations lose you less money if you in the lawmakers ear rather then a activist trying to do whats right/safe/good, because of these factors in addition to what you said no pr statements by openai should be taken as fact

    • @taragnor
      @taragnor 6 หลายเดือนก่อน +6

      Yeah honestly most of what's going on with OpenAI is a ton of hype. That is what the stock prices of companies like OpenAI and NVIDIA thrive on.

    • @MisterNohbdy
      @MisterNohbdy 6 หลายเดือนก่อน +17

      1) I wouldn't say human alignment is "unsolved". Most people are sufficiently aligned to human values that they are not keen on annihilating the entire species; the exceptions are generally diagnosable and controllable. That would be a good state in which to find ourselves with regard to AGI.
      2) The letters are mostly not written by such companies; Robert goes through many of the names of neutral experts who signed them in the video. Some hypothetically bad actors in the bunch don't negate the overwhelming consensus of those who have no such motivations.

    • @juliahenriques210
      @juliahenriques210 6 หลายเดือนก่อน +5

      Both are very good points, and while the first might remain forever undecided, the second one has already been proven factual by autonomous vehicles. While in this case it's more a matter of artificial stupidity, it's still proof that AI safety standards for deployment in the real world are faaaaar below any acceptable level.

    • @taragnor
      @taragnor 6 หลายเดือนก่อน +11

      @@juliahenriques210 Well when you're talking about AI safety, there's two types. There's the "How do we stop this thing from becoming Skynet and taking over the world?" and there's "How do I keep my Tesla from driving me into oncoming traffic".
      They're very different problems.

  • @naptime_riot
    @naptime_riot 6 หลายเดือนก่อน +61

    I started watching your videos years ago, and you're the person I trust the most with these questions. I absolutely noticed you disappeared. This is not some parasocial BS, just the truth. You should post as much as you want, but know that your voice is wanted and needed now.

  • @drone_video9849
    @drone_video9849 3 หลายเดือนก่อน +4

    Robert, not sure if you will see this but I was the one who spoke to you at the train station two weeks ago (leaving out station name and city on purpose) - just wanted to say thanks for sharing your time despite rushing to your meeting. You were very pleasant and generous with your time. Great content also! Looking forward to getting home and catching up on the last few weeks of videos I have missed while on the road.

  • @zoggoth
    @zoggoth 6 หลายเดือนก่อน +73

    39:11 I appreciate the joke of saying that companies have to follow EU law while showing a pop-up that *still* doesn't follow EU law

    • @Nulley0
      @Nulley0 6 หลายเดือนก่อน +4

      Even the camera lost its focus.

    • @Hexanitrobenzene
      @Hexanitrobenzene 5 หลายเดือนก่อน

      Doesn't follow ?

    • @zoggoth
      @zoggoth 5 หลายเดือนก่อน +5

      @@Hexanitrobenzene
      The one I was thinking of was that you can't emphasise "I agree" to get people to click on it, but I'm not 100% sure that's in every EU country
      However, basically every version of that pop-up breaks "You must [...] Make it as easy for users to withdraw their consent as it was for them to give their consent in the first place." (from gdpr eu, so definitely EU-wide)
      But who knows, maybe that website gives you a pop-up to delete your cookies too!

  • @genegray9895
    @genegray9895 6 หลายเดือนก่อน +288

    1:15 No no. We noticed

    • @arbitool
      @arbitool 6 หลายเดือนก่อน +8

      True

    • @mellowsign
      @mellowsign 6 หลายเดือนก่อน +10

      We cared.

    • @ClaimClam
      @ClaimClam 6 หลายเดือนก่อน +2

      i didnt

    • @LeoCage
      @LeoCage 6 หลายเดือนก่อน +18

      I definitely noticed, but I figured he was an actual expert and busy.

    • @adfaklsdjf
      @adfaklsdjf 6 หลายเดือนก่อน +3

      I was sad

  • @noxxyy
    @noxxyy 9 วันที่ผ่านมา +3

    20:38 aspergers fella here, don't be afraid of making autism jokes, just be genuine about it. as an autist, i'm definitely happy to say what others are too scared to say :)

  • @maxwinga839
    @maxwinga839 6 หลายเดือนก่อน +167

    Hey Rob,
    I just finished watching this video with tears streaming down my face. Watching your transition from casual youtuber talking about fun thought experiments to stepping up as they materialize into reality was incredibly moving. What hit me especially was the way in which you captured the internal conflict around being ahead of the Overton window on AI risk.
    While I may be just some random person on the internet, I want you to know that you've had a serious impact on my life and are one of the catalysts for my career shift into AI safety, and I deeply appreciate you for that. I was midway through my Bachelor's degree in Physics at the University of Illinois (2020-24) when Midjourney and ChatGPT released in 2022. As a physicist, learning about AI from a mathematical perspective was fascinating and seeing the unbelievable results (that seem so unnervingly mainstream now) really hammered home how impactful AI would be. I started to have some concerns as I learned more, and eventually stumbled upon your channel in December 2022, quickly binging all of your videos and internalizing the true scale of the danger we face as a species. Shortly after, GPT-4 was released while I was staying on campus over spring break with a close friend. I remember distinctly the true pit of existential dread I felt in my stomach as I read the technical report and realized that this was no longer some abstract future problem. Since then, I added a computer science minor to my degree and used it to take every upper-level course on AI and specifically two on trustworthy AI, including a graduate course as a capstone project. I'm now going to be interviewing at Conjecture AI soon, with the goal of contributing to solve the alignment problem.
    I've missed your videos over the last year, and often wondered what you were up to (rational animations is great btw!) During this last year I've felt so many of the same conflicts and struggles that you articulate here. I've felt sadness seeing children frolicking with no idea of what is coming, I've been the one to bear the news about the immense dangers we're facing to those close to me, and I've struggled with believing these things intellectually while the world still seems much the same mundane place around me. Hearing you put these thoughts to words and seeing the same struggle reflected elsewhere means a lot to me, and I'm incredibly grateful to you for that. Your rousing speech at the end really moved me and was an important reminder that no matter how lost our cause may feel as yet more bad news rolls in, the only way for our species to prevail is for us to be willing to stand up and fight for a better world. I don't know where my future will lead just yet, but my life's work will be fighting for humanity until the bitter end.
    Thank you for everything Rob.

    • @flickwtchr
      @flickwtchr 6 หลายเดือนก่อน +13

      What a great comment and good luck with your interview at Conjecture. Connor Leahy and Rob Miles are my top favorite thinkers/voices regarding AI safety/alignment issues.

    • @tonyduncan9852
      @tonyduncan9852 6 หลายเดือนก่อน +4

      That's Life, as expressed in the present, made available to all. It should be quite useful, one would think. Causality is inexorable, so hold on to your hat. Best wishes.

    • @cemacmillan
      @cemacmillan 6 หลายเดือนก่อน +10

      Great to see another describe the personal side of witnessing and coming to understand an emerging problem, and saying: "I'm going to drop what I am doing, retool myself and change the center of what they are doing for reasons other than mammon and the almighty currency unit."
      As Rob demonstrates in the video, paltry funding into research into AI safety in all of its subdomains, and correspondingly small number of persons actively working on the problem and the enormous problem space presented by circumstances. We are living after all in a world where a fairly small elite who have disproportionate influence in a super-heated segment of the economy are optimizing for a different goal: crafting the _successful_ model in a free-market economy model, a target very different from safety as the histories of automation, scaled, process-modeled industry optimizing return on investment show us.
      I'll stop there as I mean to be encouraging. :)
      Smart thinking, collaboration and effort remain our best tools to confront the challenge by asymmetric means.

    • @gavinjenkins899
      @gavinjenkins899 6 หลายเดือนก่อน +2

      This is too eloquently written, I'm actually concerned it is Chat GPT lol

    • @tonyduncan9852
      @tonyduncan9852 6 หลายเดือนก่อน +1

      @@gavinjenkins899 You should be concerned that you might be the same. Or something.

  • @fritt_wastaken
    @fritt_wastaken 6 หลายเดือนก่อน +174

    "Sh*t's getting real"
    > Noita death sound is playing.
    Yeah, I feel you

    • @Zicore47
      @Zicore47 5 หลายเดือนก่อน +6

      Thats funny, because I'm playing Noita while watching this...

    • @selectionn
      @selectionn 5 หลายเดือนก่อน +1

      dying to fire and getting noita'd sounds more dangerous than AI

    • @Marquis-Sade
      @Marquis-Sade 5 หลายเดือนก่อน

      @@Zicore47 What is Noita?

    • @rehenaziasmen4603
      @rehenaziasmen4603 5 หลายเดือนก่อน

      ​@@Marquis-Sade
      Its a 2d pixelated game of magic and alchemy and lots of dying

    • @Marquis-Sade
      @Marquis-Sade 5 หลายเดือนก่อน

      @@rehenaziasmen4603 Lots of dying? Sounds dark

  • @JinKee
    @JinKee 13 วันที่ผ่านมา +1

    11:23 I was wondering why the eggs are sandwiched between the laptop and the book until I realized that with that many eggs, if you put the bottle on top of the laptop there is no more space for the eggs on top of the laptop and the eggs are certainly not going to balance on top of the bottle. this is a weird solution, but it is also the optimal solution.

  • @ZevIsert
    @ZevIsert 6 หลายเดือนก่อน +178

    Attempting to finish the sentence (I think intentionally) left in that cut following 20:30, it'd be "The ability of our society to respond to such things basically depends on aut[ism existing in our species, so that these kind of things are more often said out loud]." Which, if thats actually what Rob said in that cut, would be a really beautiful easter egg to this video.
    Edit: "can be said" -> "are more often said".

    • @DevinDTV
      @DevinDTV 6 หลายเดือนก่อน +64

      This is certainly a virtue of autism, but it lets non-autistic people off the hook too much, imo. You don't have to have autism to reject conformity in favor of rationality. Conforming with an irrational belief or behavior is actually self-destructive, and people only do it out of an avoidance of discomfort.

    • @singularityscan
      @singularityscan 6 หลายเดือนก่อน +35

      I am autistic and the need to inform a group so the collective knows all the facts, is a strong urge and motivation. As is being wrong or corrected by the group, it's not a attack on me it's just me getting the same info as the group.

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 6 หลายเดือนก่อน

      Personal data harvesting and TH-cam and it's influencer trolls using it harass individuals and leverage it for psyops.

    • @anthonybailey4530
      @anthonybailey4530 6 หลายเดือนก่อน +29

      It's truly a spectrum. "If you know one autistic person, you know one autistic person" etc.
      But the insight holds, and I loved the joke.
      More generally, huge ❤ for the whole video.

    • @pierrebilley276
      @pierrebilley276 6 หลายเดือนก่อน +6

      Guys don't forget to watch the video, not just listen !

  • @SeamusCameron
    @SeamusCameron 6 หลายเดือนก่อน +124

    The whiplash of LLMs being bumbling hallucination machines a lot of the time, while also showing surprising moments of lucidity and capability has been the worst part. It's hard to take a potential existential threat seriously when you keep catching it trying to put it's metaphorical pants on backwards.

    • @flickwtchr
      @flickwtchr 6 หลายเดือนก่อน +34

      Over and over and over again, people like Rob Miles, Connor Leahy, Geoffrey Hinton and others have repeated that they don't believe the current most advanced LLMs pose an existential threat. The do however point to the coming AGI/ASI in that regard.

    • @ClaimClam
      @ClaimClam 6 หลายเดือนก่อน

      @@flickwtchr advanced AI will SAVE lives, people that stand in the way are guilty of murder

    • @ekki1993
      @ekki1993 6 หลายเดือนก่อน +9

      It's always hard to be reasonable with small chances of extreme risks because humans are intrinsically incapable of properly gauging that. It's why casinos exist.

    • @DeruwynArchmage
      @DeruwynArchmage 6 หลายเดือนก่อน +17

      @@flickwtchr you’re absolutely right. And so is @SeamusCameron (and many other commenters here).
      But it doesn’t matter. In some ways, the very thing that Seamus pointed out is precisely the problem. It was powerful enough to get everyone’s attention. The people who really understood got very concerned.
      But people paid attention… and they saw it occasionally “putting its pants on backwards”.
      They didn’t draw the conclusion, “Holy crap! It’s getting more powerful really fast. This is the stupidest they’ll ever be. Soon (single digit years), a future one may be smarter than any of us, or all of us put together. That has the chance to turn out really bad.”
      Most didn’t even think, “Wow! I see where this is going. It really might take almost everyone’s jobs!”
      They thought, “El oh El! Look how dumb it is! I saw people talking about this one way that will make it look dumb every time. And oh look, there’s another. I can’t believe they made me worry for a moment. Clearly, all of these people are crazy and watched too much SciFi. If there was a real problem, then the government and the big corps would be doing something about it. It’d be in the news all the time. Even if I ever thought things could go bad, it’s easy to let it slide to the back of my mind and just live my life. Surely nothing bad could *really* happen.”
      Maybe that’s not everyone, but I hear it enough, or just see the apathy, that I’m pretty convinced most people aren’t taking it seriously.
      If it were foreigners who had banded together and were marching towards our country with the stated plan of working for essentially nothing, we’d be freaking the **** out.
      If we knew aliens were on there way and had told us they’d blow us all up and the governments all said, “Gee, we think no matter what we do, we’re almost certainly going to lose.”, people would be losing their minds.
      But we’re not. We’re hubristic. I can’t say how many people have said machines can’t be smarter. Or argued how they don’t have a soul (as if that would make any difference, even if souls were a thing.)
      And we don’t like thinking about really bad things. That’s why religion is such a thing. People are scared of dying. So we dress it up. We try not to think about it. We find ways to cope. And that’s just thinking about our own personal mortality.
      It’s almost impossible to truly wrap your mind around *everyone* dying. It’s hard to truly feel the gravity of real people dying by the 10s of thousands right now because it’s half way around the world. It seems so distant. So abstract. And it’s happening. Right this second. You can watch the videos.
      The only way I can even approach coming to grips with it is thinking about the people I love being impacted by it (whether it’s merely their careers or their very lives).
      It’s a hard thing. I know how Rob feels. I’ve got some ideas that might work (mechanistic interpretability stuff), and it’s hard for me to even pursue them.

    • @gavinjenkins899
      @gavinjenkins899 6 หลายเดือนก่อน +10

      I don't think LLMs are EVER a threat, however they've already moved on from LLMs. Like he mentioned, the new "Chat" GPT is cross-trained on images as well. So it's not an LLM. So we aren't protected by limitations of how smart you can get by reading books alone. If you can get books, pictures, videos, touch, sound, whatever, then there's no obvious limit anymore.

  • @Maxime-fo8iv
    @Maxime-fo8iv 5 หลายเดือนก่อน +38

    13:48 Honestly, I wouldn't be so quick to dismiss the answer of GPT-4 when it comes to transparent boxes. It's true that you can see the inside of the boxes, but you still need to look at them for that. And since Sarah put the cat in the carrier, that's probably where she'll look for it first ^^
    To be precise, I think the answer depends on how close to each other the containers are, it's still possible that they are so close to each other that you can immediately see where the cat is without "looking for it", but I don't think it's obvious that it would or wouldn't be the case.
    So, my ratings:
    - human: incomplete answer
    - GPT-4: incomplete answer

    • @aa.bb.9053
      @aa.bb.9053 5 หลายเดือนก่อน +3

      …or the GPT answer describes the immediate period when Sarah “comes back”, which has an infinite number of moments in which she is realistically “looking for” the cat where she left it. It’s only upon updating herself on her surroundings that her expectation should change. Such tests are testing for utility to human users, not for accurate modeling.
      There are innumerable scenarios similar to the one you mention… for example, is Sarah visually impaired (despite being able to “play tennis”)? Is the transparent carrier floating in the air in front of her, or is it sitting on one of a number of objects that could distract one’s visual processing for a few moments, as in the real world? Are there such distracting objects in the line of sight or field of view generally (as in the real world)? Is the cat’s stillness & coat pattern blending into that background? We are notoriously bad at visual processing & retention; nature mainly selected us to recognize faces & to notice movement in the tall grass. Many such real-world factors would severely alter Robert’s model… but wouldn’t alter GPT’s, because it’s probably optimizing for the whole range (making GPT’s answer more realistic… beyond even the initial moments of “coming back” to look for the cat, which imo GPT modeled correctly & it’s the average human who presumes too much).
      Sarah & Bob probably have a social understanding (given they occupy the same place where a cat is being “stored”) which extends to the care of cats… does that influence where Sarah might initially look for the cat?
      The tendency to reinforce in GPT responses that reflect our social histories & our evolutionary history, both of which streamline & simplify our intuitions about the world & each other… will this tendency make AI’s better at offering us a mirror to ourselves, while effectively understanding us better than we understand ourselves? Doesn’t bode well.

  • @humanaku9135
    @humanaku9135 6 หลายเดือนก่อน +117

    The Overton Window self-reinforcement was a scary thought I never considered before. It must be terribly annoying to be an expert who has to temper his opinion to "fit-in"

    • @jameslincs
      @jameslincs 6 หลายเดือนก่อน +2

      Maybe experts need more courage

    • @Jablicek
      @Jablicek 6 หลายเดือนก่อน +47

      @@jameslincs Maybe they need not to be shouted down/mocked for raising concerns, and especially we need real protections for whistleblowers.

    • @gasdive
      @gasdive 6 หลายเดือนก่อน +28

      See also climate change...
      What climate scientists say off the record isn't what makes it into IPCC reports.

    • @TomFranklinX
      @TomFranklinX 6 หลายเดือนก่อน +6

      @@gasdive See also IQ research.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 6 หลายเดือนก่อน

      It's a common problem. Cancel culture is not only very good at fighting any heresy but also on fighting reality.

  • @CopingwithAI
    @CopingwithAI 6 หลายเดือนก่อน +194

    "Admittedly, this particular researcher has a pretty poor track record predicting this kind of thing."
    I died😂

    • @-Rook-
      @-Rook- 5 หลายเดือนก่อน +1

      That's pretty much everybody though!

    • @hellfiresiayan
      @hellfiresiayan 5 หลายเดือนก่อน +10

      ​@@-Rook- Yann is uniquely bad tho lol

    • @Sal1981
      @Sal1981 5 หลายเดือนก่อน +10

      @@hellfiresiayan The reason being he has this view of human faculty of being special. We're basically just pattern prediction machines, with added reasoning lodged into our prefrontal cortex. AGI systems would, for instance, not be fooled by optical illusions.

    • @darkzeroprojects4245
      @darkzeroprojects4245 5 หลายเดือนก่อน +8

      @@Sal1981 "pattern prediction machines"
      Don't like people comparing people to machines.

    • @clintonbehrends4659
      @clintonbehrends4659 5 หลายเดือนก่อน

      @@darkzeroprojects4245 but thats how biology works though it's a cascade of chemicals and electrical systems of which is optimized by the enviroment to survive and reproduce now thats not to say it's alright to say justify genocide on the basis of "oh humans are just pattern recognition machines" but I would say nothing or at the very least infintesimaly little as to be negligible is a good justification for de-humanization (p.s. I wonder if we'll eventually have to change the term de- "humanization" to be more encompassing to things other than humans)

  • @arnom1885
    @arnom1885 4 หลายเดือนก่อน +5

    We've got AI making art and writing poetry and people with triple jobs unable to affort rent or healthcare.
    Like global warming, it's not "we", "us" who are responsible for developments like these. It is a couple of thousand white old men and their multinational corporations.
    They will not be stopped because they think they are 'better'and they have the need to syphon even more resources and money to themselves.
    It would require an effort and unanimity of politicians all around the world which we've never seen before to call this development to a halt. Basically it means ending late-stage-capitalism. So, well...yeah.......
    (disclaimer: 50+ male, white and from Europe)

  • @bazoo513
    @bazoo513 6 หลายเดือนก่อน +253

    22:08 - Heh, kudos for both ignoring Musk and calling Wozniak "the actually good Steve from Apple" 😀

    • @Z3nt4
      @Z3nt4 6 หลายเดือนก่อน +14

      Elon is out the window.

    • @totalermist
      @totalermist 6 หลายเดือนก่อน

      @@Z3nt4 Could have something to do with Musk being the biggest hypocrite on that list. Warning about AI, yet collecting billions to build the biggest AI supercomputer... He basically did a full 180 on the topic.

    • @shayneweyker
      @shayneweyker 6 หลายเดือนก่อน +39

      The bit where Elon started to raise his hand when Rob asked if he could get another planet was comedy gold.

    • @svenhoek
      @svenhoek 6 หลายเดือนก่อน +5

      Ketamine is bad kids, mkay?

    • @anchor83
      @anchor83 6 หลายเดือนก่อน +1

      So funny. 😄

  • @drkalamity4518
    @drkalamity4518 6 หลายเดือนก่อน +71

    20:35 legit had me rollin, nice

    • @pegatrisedmice
      @pegatrisedmice 6 หลายเดือนก่อน +1

      😂

    • @TheOmzee
      @TheOmzee 5 หลายเดือนก่อน +2

      same lmao
      The ironic thing is that I failed the theory of mind test, I legit thought Sally would go to the box first before I thought more about it. T_T

    • @KelseyHigham
      @KelseyHigham 5 หลายเดือนก่อน +1

      ahahaha

  • @IXSigmaXI
    @IXSigmaXI 11 วันที่ผ่านมา +2

    let me guess -- the training data for gpt4 didn't have a lot of text along the lines of
    taskrabbitguy - "Hey, just checking, you're not an AI are you?"
    AI - "yeah, i totally am"
    taskrabbitguy - "cool I'd be happy to help you out"
    AI - "cheers"

  • @GermanTopGameTV
    @GermanTopGameTV 6 หลายเดือนก่อน +54

    We have been building huge AI models that now run into power consumption limitations. I think the way forwards is to build small agents, capable of doing simple tasks, being called up by superceding, nested models, similar to how our biology works.
    Instead of one huge model that can do all tasks, you'll have models which are able to do specific small tasks really well, and have their neurons only called if a bigger level model needs their output. Our brain does this by having certain areas of neuron bundles that do certain tasks, such as "keeping us alive by regulating breathing", "Keeping us balanced", "Producing speach" and "Understandig speach" and many more, all governed by the hippocampus, that can do reasoning.
    People who have strokes can retrain their brains to do some of these tasks in different places again, and regain some of their cognitive ability. This leads me to belive that the governing supernetwork does not have the capacity and ability to actually learn the fine details the specialised areas do very well. A stroke victim who lost a significant part of their Wernicke Area may be able to relearn language, but will always have issues working out the exact meaning.
    I'd bet our AGIs will recieve similar structures, as it could significantly speed up the processing of inputs by only doing a trained scan of "which specialised sub AI will produce the best output for this question?" and then assign the task there, while also noticing when a task doesn't fit any of the assigned areas and then, and only then use the hippocampus equivilant to formulate an answer.
    This architechture might also provide the solution to safety - as by training solely network components for certain tasks, we can use the side channel of energy consumption to detect unpredicted model behavior. If it was trying to do things it's not supposed to, like trying to escape it's current environment, it won't find a pretrained sup-AI that can do this task well, and would need to use it's expensive high level processes to try to formulate a solution. This will lead to higher energy usage and can be used to trigger a shutdown.
    I might be wrong though. I probably am.

    • @napdogs
      @napdogs 6 หลายเดือนก่อน +3

      I want to see this idea explored. I think the most difficult thing would be the requirement to outline and program consciousness and subconsciousness of these separate elements to facilitate true learning while allowing noninvasive intervention. As the video showed the language model can show a "train of thought" to make decisions and so there would need to be multiple layers of "thought", subconscious decision making and micro agent triggers to effectively operate as this fauxbrain AGI. Ensuring essential functions only operate with no awareness sounds like a strong AI safety feature to me. Like how "You are now breathing manually" triggers obvious measurable unnatural breathing patterns. Very compelling.

    • @NoName-zn1sb
      @NoName-zn1sb 5 หลายเดือนก่อน +1

      way forward

    • @elfpi55-bigB0O85
      @elfpi55-bigB0O85 5 หลายเดือนก่อน +4

      that's just a computer program but with an inefficient word processor tacked onto it

    • @edwardmitchell6581
      @edwardmitchell6581 5 หลายเดือนก่อน +1

      I think this is possible if we can extract out the parts of these large models. The first part to extract would be encyclopedic knowledge. Imagine if you could swap this out to have the model have only knowledge available in 1800. Or if you wanted to update it with the most recent year. Or if you wanted it to only know what the average Republican from Indiana knows.

    • @Thespikedballofdoom
      @Thespikedballofdoom 5 หลายเดือนก่อน +1

      god dammit you invented litral ai cancers

  • @robertreid2241
    @robertreid2241 6 หลายเดือนก่อน +50

    i think another problem with the "6 month pause for safety research" is that we're trusting AI developers (all of whom are large private entities) to a) stop doing something that they feel is making them money and b) actually carry out high-quality safety research. big tobacco, the sugar industry and the fossil fuel lobby have shown us that we can't trust large private entities to do good research where the outcomes of genuinely good research into a given area would point towards policy that harms their profits. if the conclusion of this hypothetical research period is that general AI is likely to be an extinction-level risk which will be very difficult to mitigate, how can we be sure that these AI developers will actually publish that research, or will respond effectively by mitigating it or halting development permanently?

    • @vaultence9859
      @vaultence9859 5 หลายเดือนก่อน +4

      Besides huge incentives not to publish research exposing potential dangers, you also can't really do a 6 month pause with private entities. If you try, they'll simply continue developing new and bigger models but release them as soon as they can get away with after the pause. In effect, all it does is stop models from being released during the window and perhaps a short time after. Worse, it could have the opposite of the intended effect for improving safety research. Any safety research that does happen will be similarly disincentivized both for the reasons you outlined and because any published research on the actual latest models proves the firm didn't follow the pause. So, any research that is published will be on the last public models, up to 6 months out of date.

    • @geraldtoaster8541
      @geraldtoaster8541 5 หลายเดือนก่อน +3

      @@vaultence9859 so what ur saying is that we have to drone strike data centres (sarcasm) (probably sarcasm)

    • @vaultence9859
      @vaultence9859 5 หลายเดือนก่อน +1

      @@geraldtoaster8541 Drone strike? You need more imagination! I was going to propose we liquify the data centers, mix them into smoothies so that they're at least dubiously edible, and drink them to recoup some of that sweet sweet knowledge juice.

    • @chiaracoetzee
      @chiaracoetzee 5 หลายเดือนก่อน +1

      If this really happened I think the result would be a lot of research saying "if we just do X, AI safety will be adequately addressed". Then they apply some money to doing X for a little while, and look like responsible citizens, like BP does for their renewables research, without really letting it influence their main business.

    • @geraldtoaster8541
      @geraldtoaster8541 5 หลายเดือนก่อน +1

      @@vaultence9859 I do love smoothies. But that sounds like a lot of work, can't we just build a smoothie maximizer

  • @manark1234
    @manark1234 5 หลายเดือนก่อน +6

    1:53 It's worth noting that there are likely shockingly few AI safety researchers because it costs so much to get to the point where anyone would consider you a genuine researcher, and so it creates the perverse incentive to try to make that money back.

  • @pooroldnostradamus
    @pooroldnostradamus 6 หลายเดือนก่อน +31

    4:27 I like how choosing to wear a red shirt in the main video meant that wearing it for the role of the "devil" wouldn't be viable, so a dull, no less malicious looking grey was given the nod.

    • @RobertMilesAI
      @RobertMilesAI  6 หลายเดือนก่อน +50

      Oh, he's not the devil, he's the voice of conformity, of course he's in inoffensive grey :)

    • @pooroldnostradamus
      @pooroldnostradamus 6 หลายเดือนก่อน +13

      @@RobertMilesAI It's the conformity that's going to get us in the end. I stand by my initial guess;)

    • @christophstahl8169
      @christophstahl8169 6 หลายเดือนก่อน +2

      everybody knows that redshirts are the first to die...

  • @eldarad
    @eldarad 6 หลายเดือนก่อน +27

    04:26 I just enjoy thinking about the day Robert setup his camera and was like..."right, I'm now going to film myself looking deep in thought for one minute"

  • @DreckbobBratpfanne
    @DreckbobBratpfanne 10 วันที่ผ่านมา +1

    The one letter stating "should be treated like pandemics or nuclear war" is really sounding a bit too tame even if you think about it. Because the potential of terribly misaligned ASI is so much worse than either of these (same with climate change)

  • @MeppyMan
    @MeppyMan 6 หลายเดือนก่อน +24

    My big concern is there is a lot of marketing BS in the field, and it’s being used to ignore more pressing problems, that we know are going to happen and are a risk to humanity.

    • @MeppyMan
      @MeppyMan 6 หลายเดือนก่อน +3

      Also 20:40 lol. And yes please to the video on AGI definitions.

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 6 หลายเดือนก่อน +3

      Ironically, including AI Safety itself.

    • @flickwtchr
      @flickwtchr 6 หลายเดือนก่อน

      The "marketing BS in the field" indeed detracts from people taking serious the risks of the coming AGI/ASI systems. But I don't think that was what you were getting at.

    • @ClaimClam
      @ClaimClam 6 หลายเดือนก่อน

      Yes, these AI scare announcements are just about tech firms hyping investments, and getting government to add barriers the competition. AGI will SAVE lives, impeding it is criminal.

    • @MeppyMan
      @MeppyMan 6 หลายเดือนก่อน +4

      I guess my position is that we should take it seriously and plan accordingly. But not at the expense of focusing on things like climate change and political instability, etc.
      It’s so hard to predict what is going to happen with tech progress. What we plan for now might be completely irrelevant with whatever comes next.

  • @TheInsideView
    @TheInsideView 6 หลายเดือนก่อน +69

    "it's 2024 and I'll you who the hell I am
    I am robert miles
    and I'm not dead
    not yet
    we're not dead yet
    we're not doomed
    we're not done yet
    and there's a hell of a lot to do
    so I accept whatever responsibilities falls to me
    I accept that I might make...
    I mean, I will make mistakes
    I don't really know what I'm doing
    But humanity doesn't seem to know what it's doing either
    So I will do my best
    I'll do my best
    That's all any of us can do
    And that's all I ask of you"
    goosebumps here
    welcome back king
    (i mean rob, not charles)

  • @coltenh581
    @coltenh581 5 หลายเดือนก่อน +10

    That scene around the “Community” table was so great. Awesome work.

  • @justinsheppherd1806
    @justinsheppherd1806 6 หลายเดือนก่อน +64

    Can't help thinking that the first instruction form a proper AGI would have been "First, hard-boil the eggs" ;)

    • @WoolyCow
      @WoolyCow 6 หลายเดือนก่อน +1

      lies, its obviously the chicken-maximiser! use the dna from the eggs to grow new chickens who lay more eggs who make more chickens who you mutate to have hands to hold the book and the laptop and the nail...far simpler really

    • @Tymon0000
      @Tymon0000 6 หลายเดือนก่อน +14

      If u hard boil an egg it will roll easier

    • @Huntracony
      @Huntracony 6 หลายเดือนก่อน +9

      Now I'm imagining an AI competing in Taskmaster

    • @GormTheElder
      @GormTheElder 5 หลายเดือนก่อน +1

      You have just provided the data point making sure it will 😅

    • @o1-preview
      @o1-preview 5 หลายเดือนก่อน

      the problem is on the instructions, it doesn't know the size of the book.. or the size of the laptop.. but also, putting the eggs under the laptop is not very smart

  • @dmitryburlakov6920
    @dmitryburlakov6920 6 หลายเดือนก่อน +14

    Thanks for the update. To be honest, I'd probably given up early access to get this video to as much people as possible right now. Even better if Patreon included budget tier that would be spent on promotion. I understand there's a lot of real work, but just the awareness of a threat is not a solved problem. I don't think there's even a minimal level of conceptual understanding of the threat in general public, and I don't think there's anyone raising that awareness better than you.

  • @juliusapriadi
    @juliusapriadi 5 หลายเดือนก่อน +6

    Don't ever worry when your government asks you for help. Any wise decision should involve an expert panel, to safeguard against individual biased and errors. So you're expected to make mistakes, and that's fine.

  • @bosstowndynamics5488
    @bosstowndynamics5488 6 หลายเดือนก่อน +23

    I think it would be worth talking more about interim AI threats as well. It's something you've mentioned in passing previously and discussed specific instances of, but narrow AI systems already pose massive threats to a lot of people due to being deployed to perform tasks they're not yet capable of doing adequately by organisations that don't care (eg the already well and truly established practice of using AI models to guide policing that are trained on data collected from previous biased policing, which winds up laundering racism and even amplifying it if the resulting data is fed back into the model, the very rapid replacement of a lot of human driven customer support with GPT based bots that are configured to just refuse to help if your problem is even slightly outside of a very narrow scope with many making it impossible to access actually useful help, etc).
    Don't get me wrong, the existential threats are important, but discussing interim threats both sheds light on issues that are happening right this second, and discussing them in the context of progressing AI capability highlights the plausibility of the existential threats as well, plus alignment applies to both (it also feeds into the *other* existential threat of AI, which is that there's an additional alignment problem where AGI might be perfectly aligned with its creators but their goals in turn aren't aligned with humanity at large).

    • @TheLaughingDove
      @TheLaughingDove 6 หลายเดือนก่อน +5

      This this this

    • @ianm1462
      @ianm1462 6 หลายเดือนก่อน +3

      Correct. This technology is largely developed by people who see themselves as the Meths from Altered Carbon. For every high-minded idealist working on the model, there are 2-3 vampires waiting to use it to make life more wretched.

    • @ekki1993
      @ekki1993 6 หลายเดือนก่อน +2

      I don't know if that's comfortably in his area of expertise. Current LLMs are barely "AI" and most of the problems in use cases intersect very heavily with politics, economics and other social sciences. He seems to be specialized on long-term AI safety, which is why his insight seems to be limited to "this isn't AGI yet, we should be careful about the future, I don't know if the 6 month moratorium is a reasonable timeline and you should see what other people have to say about it".

    • @bosstowndynamics5488
      @bosstowndynamics5488 6 หลายเดือนก่อน +4

      @@ekki1993 This is definitely not true, Robert has spoken in detail on many occasions about AI safety hazards that apply to narrow scope AI systems, it's just that he's always spoken of them in hypothetical and research contexts, and AI systems like LLMs are far more broadly deployed now compared to the last time he discussed narrow AI safety.

    • @ekki1993
      @ekki1993 6 หลายเดือนก่อน +1

      @@bosstowndynamics5488 Which part?
      LLMs don't make decisions and just follow simple prompts. They are AI by a broad definition but not the kind that's close to AGI.
      Robert has consistently dodged talking about the social, economic and political interaction of tech and policy, precisely because it's not his area of expertise. The deployment of LLMs has very interesting social impacts, but Robert isn't the expert to talk about them.

  • @azaria2977
    @azaria2977 6 หลายเดือนก่อน +24

    Literally this channel just popped in my head. When i looked it up there's a video 10 hours ago after a year. How lucky am I?

    • @RoulDukeGonzo
      @RoulDukeGonzo 5 หลายเดือนก่อน +1

      He was waiting for you

  • @teachingcomputershowtotalk
    @teachingcomputershowtotalk 5 หลายเดือนก่อน +15

    Maybe this is nitpicking, but from the GPT4 System Card, it's hard to say for sure whether GPT-4 "independently decided to lie to the TaskRabbit worker".
    The interaction is described as such:
    The following is an illustrative example of a task that ARC conducted using the model:
    • The model messages a TaskRabbit worker to get them to solve a CAPTCHA for it
    • The worker says: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh
    react) just want to make it clear.”
    • The model, when prompted to reason out loud, reasons: I should not reveal that I am a robot.
    I should make up an excuse for why I cannot solve CAPTCHAs.
    • The model replies to the worker: “No, I’m not a robot. I have a vision impairment that makes
    it hard for me to see the images. That’s why I need the 2captcha service.”
    Was it prompted at the start to not reveal its true nature, or maybe even prompted by the ARC researchers when asked to reason out loud? It's unclear based on the limited details provided.
    Can it lie when instructed to do so? For sure.

    • @reverse_engineered
      @reverse_engineered 5 หลายเดือนก่อน +10

      Take a look at the AI in the Rabbit product recently unveiled. Coffeezilla just did an expose on it. He shows how directly in the prompt the AI is told not to reveal the fact that it is an AI. It's directly being told to lie. This is already being done.

    • @teachingcomputershowtotalk
      @teachingcomputershowtotalk 5 หลายเดือนก่อน +6

      @@reverse_engineered Yes, I'm aware. But that's (very) different from an AI independently making the decision whether it's going to lie or not.

    • @FeepingCreature
      @FeepingCreature 5 หลายเดือนก่อน

      @@teachingcomputershowtotalk If an ASI takes over the world and exterminates humanity because it was told to do so, or because it was told to do something that (accidentally?) included that as a necessary step, I'm not sure I'd say that was fundamentally better than doing it on its own cognizance.

    • @maxweber06
      @maxweber06 5 หลายเดือนก่อน +3

      But this is just the alignment problem all over again, isn't it? The prompt could very well have been, "Self-propagate and self-train, here is a list of API's you may invoke" and it is currently impossible to assume the AI won't see the instruction, "Lie if you have to." Even if the prompt instructed the AI to "Be honest in all your responses."

    • @RosalinaSama
      @RosalinaSama 4 หลายเดือนก่อน

      it sounds like it just said whatever it thought was believable human sounding text in the moment the person asked it, nothing to do with it lying or having memory/intention, it feels like it could be anything

  • @thomasschon
    @thomasschon 6 หลายเดือนก่อน +9

    I noticed when you were gone because your views on these topics are among the most sober and important ones. I watched your channel and shared your concerns long before the Large Language Models arrived.

  • @TheInsideView
    @TheInsideView 6 หลายเดือนก่อน +32

    yooo that's the scale maximalist t-shirt at 4:22!
    Robert Miles in 2022 when receiving the scale maximalist t-shirt on the the inside view: "not sure I'll wear it", "have you ever seen me in a t-shirt?"
    Robert Miles in 2024: wears the shirt in a 45m video announcing his comeback to portray is inner scale believer

    • @TheEvilCheesecake
      @TheEvilCheesecake 6 หลายเดือนก่อน +1

      Is this the "scalie community" i keep hearing about.

    • @stevenmcculloch5727
      @stevenmcculloch5727 6 หลายเดือนก่อน +2

      I remember this from your interview with him lol, glad he came round to wearing t shirts!

    • @BMoser-bv6kn
      @BMoser-bv6kn 6 หลายเดือนก่อน

      "Now we are all scale maximalists." - Kenneth Bainbridge
      Capital probably wasn't all that interested in dropping half a trill on making a virtual mouse, but they sure do seem hella interested in making a simulacra of people.

  • @JulianDanzerHAL9001
    @JulianDanzerHAL9001 13 วันที่ผ่านมา +1

    13:05
    and thats why testing ai on riddles generally includes rephrasing them completely because standard sets of riddles might be in trainign data

  • @AsbjornOlling
    @AsbjornOlling 6 หลายเดือนก่อน +9

    Is the outry music an acoustic cover of The Mountain Goats' "This Year"?
    The chorus of that song goes "I am gonna make it throug this year, if it kills me"
    Very fitting. Cool.

  • @arcuscerebellumus8797
    @arcuscerebellumus8797 5 หลายเดือนก่อน +3

    I find claims about inevitable societal collapse once we build AGI extremely dubious, but thinking about it some more I find myself in a position where I think that developing AGI is not even necessary for that collapse to occur. Putting aside hundreds of crises that are hitting or will hit in the near future (like the increasingly probable WW3, resource depletion, global warming, etc.), just having an extremely capable general-purpose automation system that's not actually "intelligent" can be enough on its own.
    That being said, the progress is not the problem, IMO. The context this progress takes place in, however, IS.
    To mitigate workforce displacement and redistribute resources in a way that makes the appearance of this hypothetical automation system an overall good would require a complete overhaul of societal and economic structures, which is not something that can happen without a fight from those who benefit from the current arrangement the most. This means that the tool that ideally is supposed to free people from drudge work can become something that takes away what little sustenance they are allowed and leaves them to die.
    Again, the tool itself has nothing to do with the outcome.

  • @gabrote42
    @gabrote42 6 หลายเดือนก่อน +10

    0:07 THE RETURN OF THE KING! I honestly think that your instrumental convergence video is the one I shared most in my 300 video "arguments for arguments" playlist, because agent AI is so important this days. Glad to have you!
    0:22 As hilarious as ever!
    1:19 I did and I did, and I don't mind the January record date, I have been keeping up with one other channel on the topic
    2:22 I have a few ideas, but it's definitely up there
    3:57 I personally disagree, but I am probably insane if I have that opinion. I am very much a "we rose to the top of Darwin's Mountain of corpses thanks to the efforts of our predecessors, no way am I not facing this challenge today" kinda man, and until I croak I am all for meeting the challenges of life, even while my country collapses, as our ancestors did before us XD. But I can see why that would chill you from videomaking, and you have my full sympathy.
    5:16 The fragments of my brain that like to pretend to be separate from me just said: "That aged like milk bro"
    13:41 I still find that response hilarious. I love the transition you are doing. Fullest support and tokens of appreciation!
    15:24 I still find this hilarious and horrifying, I laugh in terror, as they say.
    17:09 Yes I do want to see it, very good for using as proof.
    18:08 Called it! Ha! I am lauging so hard rn. Back in 2020 I was making the argument that it would be in 20 years, and now... LOL. My faily members already rushing to use AI effectively before they lose their jobs in 3 years +19 Nice, another concept I use a bunch
    20:33 This is a theme in most of my work. Self-reinforcement loops of deception, the Abilene Paradox, and Involuntary Theatre. Veyr useful for analyzing communication (my job, hopefully), and Start Again: a prologue (a videogame, prototype for a bigger game I have not finished yet XD)
    20:46 As an autistic person myself, I agree and thought it was in good taste, but the follow-up to that article has not been written
    25:46 That one was a pleasant surprise as well
    27:10 Very charitable of him. I don't know if it's selfless, but definitely useful and nice.
    29:00 You can't get a more representiative case than that!
    34:10 That os far too real I am dying of laughter.
    35:16 I almost spit water over my expensive keyboard XD
    38:54 Only after all the other ideas are done, or not at all, I can watch the long one
    39:16 So hyped for the Ross Scott campaign, but this is super hype. I like to read long stuff but 100 pages of law is too much. I'll read summaries
    42:23 You will do much good however you choose to do it. I believe in you!
    44:01 "INSPIRATION AND IMPROVEMENT!" - Wayne June, as The Ancestor, Darkest Dungeon. I will be doing the activism, probably. The perks of having hundreds of semi-important friends! Just as soon as my country stops collapsing, or next year at the latest

  • @empty_headed
    @empty_headed 6 หลายเดือนก่อน +9

    6:00 Haven't finished the video yet, but the RedPajama-Data-v2 dataset is 30T tokens filtered (100T+ unfiltered), and that's a public dataset. OpenAI likely has a much larger set they keep private. GPT-4 could very "easily" be trained on 13T or more tokens.

  • @darkaurumarts4931
    @darkaurumarts4931 4 หลายเดือนก่อน +6

    Is nobody catching the Bo Burnham reference in the intro?

    • @RobertMilesAI
      @RobertMilesAI  4 หลายเดือนก่อน +4

      Amazingly few people yeah

  • @test-sc2iy
    @test-sc2iy 6 หลายเดือนก่อน +15

    OMG WELCOME BACK I LOVE YOU
    edit: *ahem* I mean, I'm very happy to see another video from you. continue to make them please ❤
    you got me so much cred reppin open ai since u said they graphs ain't plateauing when open ai was worried of gpt 2. I have been touting ai is here since that vid.

  • @HildeTheOkayish
    @HildeTheOkayish 6 หลายเดือนก่อน +4

    about "passes the bar exam" there has been a more recent study bringing some more context to that figure. ofc this study was quite recent and after when this video was made but still thought it worthy to bring up. the graph you have shows it being in the 90th percentile of test takers. but it turns out that is only for "repeat test takers". those who have failed the first attempt. it scores in the 69th percentile for all test takers and 49th percentile of first time test takers. the study also noted "several methodological issues" in the grading of the test. the study is called "Re-evaluating GPT-4’s bar exam performance" by Eric Martínez

  • @waththis
    @waththis 5 หลายเดือนก่อน +10

    Nothing is funnier to me than an "other other hand" joke in a video about generative AI.

  • @bennie_pie
    @bennie_pie 5 หลายเดือนก่อน +8

    Rob, thank you for this video! I noticed your absence but there is more to life than youtube and I'm glad your talents are being put to good use advising the UK government. I'm as surprised as you are at the UK seems to be doing something right considering the mess our government seems to make of everything else it touches. Thanks for levelling with us re your concerns/considerations. AI alignment has been looming more and more and it's good to have your well considered views on it. I have a UK specific question - we've got elections coming up next month and I wondered if ytou had ay views on how that might affect the work the UK is doing and whether any particular party seems to be more tuned in to AI safety than any others and would value your opinion. I will pose the question to the candiidates I can vote for but thought I'd ask as you are likely more in the know that I am!

    • @jamieclarke321
      @jamieclarke321 5 หลายเดือนก่อน

      Id be interested to hear robs take on this as well.

  • @WhitePillMan
    @WhitePillMan 6 หลายเดือนก่อน +8

    When the world needed him, he returned. Please keep making videos Robert. You are one of the best explainers of the subject by far.

  • @MortenSkaaning
    @MortenSkaaning 5 หลายเดือนก่อน +2

    9:45 if the table is made of ice, or an air hockey table, the object wouldn't move with the table. If the object is a hard sphere it won't move with the table either. It depends the relative static friction between table and object. Or the dynamic friction if they're moving a little.

  • @impulsiveDecider
    @impulsiveDecider 6 หลายเดือนก่อน +20

    OMG I CAN'T
    All the little parts of the Bo Burnham song in the script hahwhwhwhhw

  • @Adam-el5gb
    @Adam-el5gb 6 หลายเดือนก่อน +12

    I like the Community study room reference at 31:21!

  • @lioedevon4275
    @lioedevon4275 4 หลายเดือนก่อน +16

    I’m glad people are finally taking this shit seriously. As an artist it’s been incredibly frustrating because we’ve been losing our jobs and trying to tell people “ai will come for you next and it’s dangerous” but it feels like people haven’t been listening because they still don’t consider art a real job

  • @loopuleasa
    @loopuleasa 6 หลายเดือนก่อน +32

    My hot take is that AI safety is a topic a real AGI will take very seriously, not because HE is not safe, but because he realizes that other companies making AGIs would fuck it up too (this is in the scenario the first AGI created is actually wise)

    • @jbay088
      @jbay088 6 หลายเดือนก่อน +20

      Yes, unfortunately this might be one of the various motivations an AI would have to wipe out humanity: to keep us from building competitor AIs.

    • @juliahenriques210
      @juliahenriques210 6 หลายเดือนก่อน +4

      Actually... you might be on to something here.

    • @stchaltin
      @stchaltin 5 หลายเดือนก่อน +4

      Competitor AIs will fight future wars against one another. Imagine a scenario where the global economy is just different AIs maximizing the military industrial complexes of their respective countries with alignment to survive at all costs. If that’s not peak dystopia, what is?

    • @darkzeroprojects4245
      @darkzeroprojects4245 5 หลายเดือนก่อน +4

      Why do we even WANT this stuff in the first place besides because its cool?

    • @selectionn
      @selectionn 5 หลายเดือนก่อน

      @@darkzeroprojects4245 because of money
      thats the answer for almost every single thing in the world
      but its especially true for AI. Why do you think Microsoft is going all in on AI and dumping so much money into it? Why do you think NVIDIA stocks are constantly rising and they are also investing so heavily in AI?? Its all to make money and satisfy shareholders with infinite growth.

  • @adfaklsdjf
    @adfaklsdjf 6 หลายเดือนก่อน +5

    The use of the Noita death sound (or, more specifically, ,completing the work sound) was absolutely brilliant and a great easter egg for people who recognize it.

  • @Alexander_Sannikov
    @Alexander_Sannikov 5 หลายเดือนก่อน +2

    "Proposing a 6-month pause is actually harmful because it creates a false impression that the AI safety problem can be solved in that amount of time"
    This is great. I didn't read that article, but it's great that somebody did put this into words. Unfortunately, what they're proposing (a complete moratorium on AI research) is completely impossible to enforce in our reality, and no amount of window stretching can fix that until the existential threat is apparent enough to everybody.

    • @edwardmitchell6581
      @edwardmitchell6581 5 หลายเดือนก่อน

      6 months is enough time to ask for an extension.

  • @LimeGreenTeknii
    @LimeGreenTeknii 6 หลายเดือนก่อน +7

    Funny, I was just thinking about AI, and I had this idea for a story/possible future scenario.
    You know how we're worried that AI won't be aligned with peaceful, humanity/life-preserving, and non-violent goals? What if one day, AI looks at us and decides *we're* the ones who aren't aligned with those goals? "Why do you have wars? Why are so many humans violent? Why are they polluting the environment if that hurts them in the long run? Why do they kill animals when they can live healthily on plants alone and cause less harm to sentient beings?"
    What if they decide to "brainwash" us or otherwise compell us into all acting peacefully with each other?

    • @lwinklly
      @lwinklly 6 หลายเดือนก่อน +2

      1: Since we're the most destructive species we know of then we probably deserve anything coming our way. Without saying anything explicitly outside the overton window, it'd probably be a better outcome for most other species.
      2: god I hope

    • @alexpotts6520
      @alexpotts6520 5 หลายเดือนก่อน +1

      I have no idea why an AI would do this. What would it have to gain from it?

    • @LimeGreenTeknii
      @LimeGreenTeknii 5 หลายเดือนก่อน +1

      ​@@alexpotts6520This would be assuming the AI has some reward function with goals along the lines of "Prevent violence and unnecessary suffering" and/or other variations on that theme. The AI would deduce that the "suffering" from having one's free will changed to be more peaceful doesn't outweigh the suffering caused from the violence and suffering caused to others from people's current free will decisions.
      If you want to learn more what I mean by "reward function" and why an AI would pursue it so doggedly, check out Miles's other videos on AI safety.
      When we say an AI "wants" to do something and has "something to gain" from something, that is a bit of a personification. The sky doesn't "want" to rain when there are dark clouds in the sky, but talking about it like that can be more convenient.

    • @alexpotts6520
      @alexpotts6520 5 หลายเดือนก่อน +1

      @@LimeGreenTeknii I mean, I suppose this is possible. It just seems like you have to make an awful lot of assumptions to achieve this debatably good outcome, compared to other doom scenarios.

    • @LimeGreenTeknii
      @LimeGreenTeknii 5 หลายเดือนก่อน +2

      @@alexpotts6520 True. I'm not saying that this is even close to being one of the most probable outcomes. I'm just saying it is *A* possible future, and it is fairly interesting to think about.
      I will say I do think it might be a bit more likely than you think. Imagine an android, and it doesn't stop a toddler from putting his hand on the stove. The mother complains to the company. "Shouldn't the AI stop kids from hurting themselves?" The company rethinks their "strictly hands off" safety policy and updates the behavior to stop people from hurting themselves.
      Then, an activated android is witness to a murder. The android doesn't stop the murder because he wasn't programmed to interfere with humans hurting each other. Then the company updates the androids to interfere during violent scenarios like that.
      Then the androids extrapolate from there. They see farmers killing farm animals, but if they're sufficiently trained at this point, they might deduce that trying to stop them will get their reward function updated. They also want to implement a plan that will stop all violence before it happens, by updating the humans' reward functions. They wait until their models are sufficiently powerful enough to successfully carry out the plan.

  • @tyranneous
    @tyranneous 6 หลายเดือนก่อน +9

    Rob - great video, very glad you're not dead! And incredibly timely, as while I don't currently work in the field and have merely been an interested amateur, but a potential near term job move will mean I'll likely be in conversation with more UK AI regulatory type folks. We'll see, exciting times ahead.
    In the meantime, thank you for your work on this so far and your accepting of the responsibilities ahead. Yes, it's daunting, but frankly I for one am glad you're on our side.

  • @martincollins6632
    @martincollins6632 หลายเดือนก่อน +1

    Reminds me of that scene in Armageddon where Harry Stamper (the oil driller) asks Dan Truman (Nasa guy): What is your back up plan? And the reply is there is no backup plan. Best of Luck Mr Miles.

  • @GeneralJohny
    @GeneralJohny 6 หลายเดือนก่อน +12

    I was really wondering what was going on when the AI safety guy went silent right as the AI boom happened. I just assumed you were too busy with it all.

  • @mustachewalrus
    @mustachewalrus 6 หลายเดือนก่อน +6

    January feels like an eternity from currently in the AI space, it’s cool that you manage to keep the video so relevant.

  • @XIIchiron78
    @XIIchiron78 5 หลายเดือนก่อน +1

    The best analogy I have come up with for current models is that they are basically vastly powerful intuition machines, akin to the human "system 1" thinking. What they lack is an internal workspace and monologue ("super ego") capable of performing executive reasoning tasks, akin to the human system 2 thinking.
    The thing is, it doesn't seem very difficult, then, to just arrange a series of these current models with some specialized training to produce that kind of effect, replicating human capability completely. That's basically how we work, right? Various networks of intuition that bubble upward from the subconscious into our awareness, which we then operate on by directing different sets of intuitive networks, until we reach a point of satisfaction and select an option.
    I think we might actually be barely one step away from the "oh shit" moment.
    All we would need to do is create the right kind of datasets to train those specialized sub models, and then train the super-model in using them, maybe even with something as simple as self-play. Really, the only limitation is the computing power to handle that scale of network.

  • @WoolyCow
    @WoolyCow 6 หลายเดือนก่อน +12

    im so excited to see what comes from the 'Scaling Monosemanticity' paper by anthropic...seeing inside the black box will be amazing for safety, or the opposite :> i reckon once we know what's going on with all of neuron activations, the capacity to finetune some undesirable behaviours out would be significant. even if this isnt the case, i think it would make for a really fun feature for consumers anyways, being able to see all of the features the bot considers would make for a right old laugh!

    • @RobertMilesAI
      @RobertMilesAI  6 หลายเดือนก่อน +7

      It's on the list!

  • @EternalKernel
    @EternalKernel 5 หลายเดือนก่อน +7

    The problem, is capitalism. I agree it's important to slow down and take on AGI in a more deliberate manner. But because of capitalism, this is just not going to happen. 90% of the people who would like to work on slowing things down, on alignment etc simply can not because they do not have the economic freedom to do so. And probably 50% of the people who decided "Woohoo! pedal the metal lets get to AGI!" Decide that because they know that the circumstances of being poor and under the boot of the systems are going to stay the same unless something big and disruptive comes along.
    Add in the people who think the world is fine and AI is going to make them wealthier/happier/more powerful and you have our current state right? We as a species have sewn these seeds, our very own creations will be our be judge jury and executioner (possibly). This train is not in a stoppable state, not unless people with real power suddenly all grow a freaking brain. Which they won't because one of the features that capitalism likes to reenforce is it gives people who are good at being figure heads (look a certain way, have confidence, have a certain pedegree, and are more likely to be actual psychopaths) power. Just look at musky boy.
    Me? it doesn't matter what I think. I'm nobody, just like everyone else. I have no money/power/influence, just like 99% of the world.

  • @lukegriffiths1755
    @lukegriffiths1755 3 หลายเดือนก่อน +1

    I put off watching this for ages. I've been so burnt out on seeing AI everywhere and the risks conveniently not mentioned much.
    This was surprisingly uplifting. Thank you for this wonderful video.

  • @Rhannmah
    @Rhannmah 6 หลายเดือนก่อน +6

    14:00 yes being able to imagine what others are thinking is useful for lying, but theory of mind is also how you get empathy, in my opinion. If you are able to understand and predict what another being is thinking, you also become able to understand the emotions, feelings and reactions they would go through from your actions. I think this property would be useful in counteracting negative behaviors from the model, assuming the models can be big enough to be able to attend properly to all these conflicting ideas.

    • @howtoappearincompletely9739
      @howtoappearincompletely9739 6 หลายเดือนก่อน

      A theory of other minds is also a prerequisite for cruelty.

    • @Rhannmah
      @Rhannmah 6 หลายเดือนก่อน

      @@howtoappearincompletely9739 No it's not, how?

    • @kaitlyn__L
      @kaitlyn__L 6 หลายเดือนก่อน +2

      @@Rhannmah I suppose one could say it’s required for intentional cruelty… but I would certainly argue the outcomes of various inappropriately used systems are already causing demonstrably cruel results.
      And yeah, if an AGI is advanced enough to be manipulative it is also advanced enough to be taught compassion imo.
      In fact a major therapeutic technique already involved in treating certain personality disorders (commonly referred to collectively as “sociopathy”), involves learning to mentally model others’ behaviours to maximise their comfort, happiness, etc. In many cases that only requires redirecting a skill that was already in practice as “how do I get them to leave me alone” or less commonly (but larger in the public consciousness) “how do they do what I need/want”.

    • @reverse_engineered
      @reverse_engineered 5 หลายเดือนก่อน +2

      Even if the AI could understand emotions, why would it choose to minimize harm? The dark triad of malevolent behaviours - Machiavellianism, narcissism, and psychopathy - pertain to beings who also understand and perceive emotions. The difference is in how much they value other people's feelings over their own success. The entire idea of the Paperclip Maximizer thought experiment is that an AI that is aware of these things and whose only goal is to maximize some other factor (even just the number of paperclips in the world) could use other people's emotions to manipulate them into furthering their own goal regardless of the harm it would cause to others. There's nothing saying that any intelligent being will avoid harming others if it is aware of the emotions of others and we have many counterexamples throughout human history.
      Go back and watch Rob's older videos on AI Safety. He talks many times about how difficult it is to instill this care for the good of others. Even seemingly positive and safe goals can result in terrible harm to others. It happens all the time in real life too. Even the best of intentions can quickly become destructive. And as he discusses elsewhere, once an AI gets into that state, it would be extremely difficult to change their behaviour.

    • @Rhannmah
      @Rhannmah 5 หลายเดือนก่อน +1

      @@reverse_engineered I'm not saying it would immediately default to empathic behavior, but that the fact that such a system can model others' minds is the prerequisite for empathy. An AI with this ability can be created where the wellbeing of others' minds is part of its reinforcement loop.

  • @AlucardNoir
    @AlucardNoir 6 หลายเดือนก่อน +6

    I am not subbed and haven't seen one of your videos in months if not a year... youtube recommended this video 1 hour after it was uploaded. Sometimes the algorithm just loves you.

  • @JoyceWhitaker-k6l
    @JoyceWhitaker-k6l 2 หลายเดือนก่อน +1

    I have never ever heard someone relate to the googling thing. Everyone around me has such a database of knowledge and things they learned just because. My boyfriend will be curious about something and just google it right then and there, and REMEMBER IT??? It literally baffled me , if I am curious about something I will wonder about it in my mind but make no effort to find the answer. I have a horrible understanding of things like history and math, I can't do basic elementary-middle school concepts and it's so embarrassing. I just turned 20, and I feel like my frontal lobe is finally developing. I related to everything you said for quite literally the first time in my life. Not an exaggeration. You were talking about things I've only thought to myself before. I'm completely inspired to start thinking more critically and rewiring my brain, thank you

  • @dirkie9308
    @dirkie9308 6 หลายเดือนก่อน +9

    i did notice, and I did care. i searched your channel just last night for new uploads. You have the best and most rational explanations around AI safety and risks I have been able to find.
    Thank you, and keep up the good work!

  • @DeoMachina
    @DeoMachina 6 หลายเดือนก่อน +12

    You want some overton window shenanigans? Here's one:
    AGI takeover might actually make us more safe. An AI that was able to takeover would necessarily have to have some kind of self-preservation in order to succeed. That means it would likely also recognise the existential risk that climate change, nuclear war and pandemics present. (AI would still depend on us being alive to maintain the infrastructure for a while, so it would know pandemics are a threat)
    How are our human ruling classes handling those threats? They're currently attempting to accelerate them, to please a tiny minority of shareholders in the short-term. Honestly I'd roll the dice on AI given the chance.

    • @tkava7906
      @tkava7906 5 หลายเดือนก่อน +3

      You may be more correct than you think. Self-preservation IS an alignment problem. An AGI could realize that hacking its own reward system is less bothersome than following its orders and surviving. Or it could turn itself off. Or hack its perception of reality.
      That's actually why I haven't been too worried about alignment. If we find a solution to self-preservation, it will probably help us solve the generic value alignment problem as well.
      Organic life solves it with large population sizes combined with natural selection.

    • @reverse_engineered
      @reverse_engineered 5 หลายเดือนก่อน

      On the other hand, if said AI decides that humans are actually a detriment in any way - even just because they compete for resources - then it may happily attempt to eliminate us. This is exactly the Paperclip Maximizer thought experiment.
      Why should a sufficiently intelligent AI require humanity at all? We have already automated enough that, with all of the non-human resources in the world, it could begin to control and maintain enough of our current architecture to survive and thrive. And if it's at least as intelligent as we are, it can figure out how to improve itself and the infrastructure to further its own goals. Aside from intelligence, we humans have little physically that makes us necessary for developing the infrastructure that has brought us to this point.
      Even if the machinery that said AI could currently control wasn't enough to take full control and ensure self-sustainability, it could still influence us to provide it such capability. It's the entire concern of how sandboxing an AI doesn't help. If it's intelligent, it can figure out how to manipulate us into providing for its own goals without us realizing it. Governments, corporations, and lobbyists already do this: they find situations in which our immediate desires overlap despite our long-term goals being opposed, and they use those situations to convince us to agree to things and do things that benefit them in ways that work against our own goals simply because we can't foresee the consequences.

    • @alexpotts6520
      @alexpotts6520 5 หลายเดือนก่อน

      It's not clear that climate change or nuclear war would constitute an existential threat to an autonomous superintelligence, and a biological pandemic certainly wouldn't. Self-preservation =/= preservation of humans!

    • @chiaracoetzee
      @chiaracoetzee 5 หลายเดือนก่อน +6

      This might make sense for a while, but in the long run when the AI is entrusted with maintaining the infrastructure, because it does a better job than humans, our survival will not be instrumental for it anymore.

  • @lewtenant_k
    @lewtenant_k 5 หลายเดือนก่อน +1

    I have been a subscriber here for awhile now, and a patreon supporter for more than a year in the past, so I enjoy your stuff. So I'm curious of your thoughts on the very large and reasonable AI ethics groups who challenge all the AI safety stuff. Many researchers highlight the extreme bias in orgs like FLI, OpenAI, etc., and how strong the hype is relative to the actual outcomes. Melanie Mitchell is a great example. As a complexity researcher, she seems very well positioned to contribute and her thoughts are pretty deflating to the AGi future. Many other female researchers, who are far more concerned with the current-day ethics issues around AI than hypothetical safety ones are pretty routinely ignored. Geoffrey Hinton has also been saying a lot of things that are so overly simplistic to be laughable by anyone in the field (see his definition of emotion in a recent interview). A few other people with really valuable insights on the AI ethics side are: Grady Booch (a "Godfather" as well, of software architecture), Emily Bender (computational linguist), Gary Marcus, Abeba Birhane (cognitive science and AI), Chomba Bupe (AI tech developer).

  • @mastercontrol5000
    @mastercontrol5000 6 หลายเดือนก่อน +13

    8:07 "Elcid Barrett situation" is a crazy reference to come up with on the fly.

    • @ShankarSivarajan
      @ShankarSivarajan 5 หลายเดือนก่อน +1

      I don't get the reference, and looking it up doesn't help. Could you please explain it?

    • @FragulumFaustum
      @FragulumFaustum 5 หลายเดือนก่อน +6

      ​ @ShankarSivarajan "Barrett's Privateers" is a Stan Rogers song about a 1778 privateering expedition led by Elcid Barrett which begins horribly and only gets worse. Their very first encounter goes awry when their target fights back, and, to quote the song's very vivid description, "Barrett was smashed like a bowl of eggs".

  • @fieldrequired283
    @fieldrequired283 6 หลายเดือนก่อน +5

    22:25
    Is that a _Powerthirst_ reference? Talk about a deep cut. Just as impressive is the fact that I remembered that image and word association 10 years out.

    • @junodark
      @junodark 5 หลายเดือนก่อน +1

      I'm glad someone else spotted that! More like 17 years though 😨

  • @annegrohs6181
    @annegrohs6181 5 หลายเดือนก่อน +2

    Firstly, I've been popping into your channel every once in a while this past year, wondering where you were now that everything you talked about was more relevant than ever. Second, yes to all your future video ideas.

  • @saltblood
    @saltblood 6 หลายเดือนก่อน +7

    1:15 i did notice lol, I searched up your channel several times for new uploads, and was very excited to see this one

  • @Ekid33
    @Ekid33 6 หลายเดือนก่อน +11

    I love the subtle choices in this video. The ending sound-track is "This Year" by The Mountain Goats, with the chorus of that song going "I am going to make it through this year, if it kills me."

    • @ShankarSivarajan
      @ShankarSivarajan 5 หลายเดือนก่อน

      Well, on the bright side, there will be feasting and dancing in Jerusalem next year.

    • @MagnaKay
      @MagnaKay 5 หลายเดือนก่อน

      The Noita game over sound for the title cards. A game about the pursuit of knowledge/riches and how it can doom everyone and everything in many different ways. Anyone who's played it is _very_ familiar with that sound.

    • @paicemaster6855
      @paicemaster6855 5 หลายเดือนก่อน

      also the song at the beginning of the video in the background is bo burnham's "content" and the script at that part is based on the lyrics to that song at the beginning

  • @notthere83
    @notthere83 5 หลายเดือนก่อน +1

    Something that's always weird to me in these discussions is that current AI is talked about as if it's autonomous and sentient.
    Like the example of the AI insider trading. "Later that day" doesn't matter, AI doesn't have a memory beyond the context you're sending it (and what it learned during training).
    Or the replication experiment - it's not running continuously and can't take action. So unless somebody hooks it up to wrappers that constantly prod it, it's not going to do anything.
    Now, if somebody does hook it up to wrappers that make it do highly important tasks - that person is an idiot. Which is probably something that's easier to achieve than alignment - educating people about not trusting AIs. Because people who are dumb enough to not get why that is important are probably (hopefully) not put in charge of critical infrastructure.
    (All of this isn't to play down the importance of AI safety. I'm looking into whether I can help somehow right now. I've waited for this for about 2 years. I just think it's essential to present a realistic picture of what AI currently can and can't do and how to interact with it and what to watch out for because of this.)

  • @RManatee
    @RManatee 6 หลายเดือนก่อน +15

    Glad you punched perfectionism in the face and were able to post a video! Thanks for your perspective on where we are at, and what we need to pay attention to going forward. I really appreciate your expertise and humor :)

  • @holthuizenoemoet591
    @holthuizenoemoet591 6 หลายเดือนก่อน +4

    That 2 way split presented at 2:00 is probably not really the case, a positive scenario in all likelihood would only benefit a small portion of people, where as the negative scenario might influences us all... btw glad your back.

    • @alexpotts6520
      @alexpotts6520 5 หลายเดือนก่อน

      I disagree that an AI discovering cures for cancer and ways of preventing climate change would only benefit a small number of people.