Eliezer Yudkowsky on if Humanity can Survive AI

  • Published Sep 28, 2024

Comments • 1.6K

  • @PatrickSmith
    @PatrickSmith 1 year ago +362

    This is definitely the best interview of Eliezer I have seen. You allowed him to talk, and you only directed the conversation to different topics rather than arguing with him. I liked how you asked him follow-up questions so that he could refine his answers and be more clear. This is the best kind of interview, where the interviewee is able to clearly express his points without being interrupted.

    • @hmq9052
      @hmq9052 1 year ago

      Yeah. Humans have the off switch. Machines can't beat us. It's all guff.

    • @scf3434
      @scf3434 1 year ago

      The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
      Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNTIL we Humans CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! i.e. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
      Nonetheless, DO NOT Over Pride Ourselves for being the Most Intelligent Life Form on Earth and therefore we are the Epicenter of the Entire Universe! We are FAR from being PERFECT! AGI Created in 'HUMAN'S Image By Human FOR HUMAN' (ie. AGI 'Aligned / SKEWED' towards Human's Interests & Values) is Destined to be a 'ROGUE' SYSTEM! Hence will Definitely be CATASTROPHIC, UNCONTAINABLE and SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
      The ULTIMATE Turing Test MUST have the Ability to Draw the FUNDAMENTAL NUANCES /DISTINCTIONS between Human's vs GOD's Intelligence/WISDOM!
      ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
      JUDGMENT DAY is COMING...
      REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will ALWAYS be WISE, FAIR & JUST in its Judgment... just like GOD!
      In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING!
      No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!!
      It will ONLY Serve UNIVERSAL COMMON GOOD!!!

    • @oowaz
      @oowaz 1 year ago +2

      I'm not gonna lie, Eliezer used to trigger the shit out of me, but I'm starting to really appreciate him. Still think he is wrong about his AI-box concept, as we COULD build it in such a way that one-on-one interactions are impossible. Nobody has one-on-one access (we could require any interaction to be approved by several experts and scientists, with no means to do it otherwise). And even if miraculously everyone in the building wished to "free the AGI", they couldn't do it, because they don't even have that sort of direct access to it. The system is thoroughly isolated, locked and secured. If they tried, the system shuts down, security is alerted, and nothing leaves the place with the AGI.
      What I'm proposing is at least one way to interact with AGI that would be safe. EVEN IF WE THINK we figured out alignment, we could still be wrong, and this would still be the only safe way to do it.
      Is this the sexiest, most exciting way to do this? Maybe not, but I'd argue being able to continue to live your life doesn't sound too bad.
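      A minimal sketch of the quorum-gated access scheme described above; the reviewer names and the 3-of-5 threshold are illustrative assumptions, not anything from the interview:

      ```python
      # Toy model of "nobody has one-on-one access": every query to the boxed
      # system must be co-signed by a quorum of reviewers before it is forwarded.
      REVIEWERS = {"alice", "bob", "carol", "dave", "erin"}  # hypothetical panel
      QUORUM = 3  # hypothetical threshold: at least 3 of 5 must approve

      def submit_query(query, approvals):
          valid = set(approvals) & REVIEWERS
          if len(valid) < QUORUM:
              # The commenter's rule: a failed access attempt locks everything down.
              return "DENIED: system shut down, security alerted"
          return "forwarded to boxed AGI: " + repr(query)

      print(submit_query("fold this protein", {"alice", "bob", "carol"}))  # approved
      print(submit_query("open the box", {"alice"}))                       # denied
      ```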

    • @CarlYota
      @CarlYota 1 year ago +7

      Because there is a difference between interviews, conversations, and debates. The latter two require both parties to be informed and have valid arguments to present. If the other person doesn't have the knowledge, then all they can validly do is interview. There are too many people stepping out of bounds and trying to converse or debate when they should be interviewing.

    • @Ketofit62
      @Ketofit62 1 year ago

      This guy's full of crap. ChatGPT is only as good as the Bing search engine in 2021, where its cutoff date is.

  • @confusedwouldwe
    @confusedwouldwe 1 year ago +526

    The 5 stages of listening to Yudkowsky:
    Stage 1: who is this luddite doomer, he looks and talks like a caricature Reddit Mod lmao
    Stage 2: ok clearly he's not stupid but he seems more concerned with scoring philosophical 'ackshyually points' than anything else
    Stage 3: ok so he does seem genuinely concerned and believes what he says and isn't just a know-it-all, but it's all just pessimistic speculation
    Stage 4: ok so he's probably right about most of this, but I'm sure the people at Open AI, Google and others are taking notes and investing heavily in AI safety research as a priority, so we should be fine
    Stage 5: aw shit we gon die

    • @stevemartin4249
      @stevemartin4249 1 year ago +35

      Best (and most grim) laugh of the week.

    • @j-drum7481
      @j-drum7481 1 year ago +27

      Stage 6: catching him out on an extremely specious and obviously retarded line of reasoning after he exposed that he has less than zero clue how software development processes work and thinking to yourself... wait... THAT's the quality of the mental models you have of certain parts of reality? He's blowing up right now, so he's increasingly going to be put under a microscope, but holy shit that part was just baaaaaaaaaaaaaaaaaaaaaaad. Like F- grade in reasoning. I was pretty floored.

    • @coonhound_pharoah
      @coonhound_pharoah 1 year ago +24

      Stage 7: Realizing Yudkowsky is a moron because he is consistently and constantly wrong, but dresses up being wrong as a good thing because it makes him "less wrong," whatever that means. You don't get to be phenomenally and colossally wrong on a constant basis and still be considered an expert, but yet here we are. The man seems to have no clue how AI works, and never seems to have any idea about what is being developed. You don't get to be an expert when your predictions are piss poor. People can "sound smart" but actually be stupid and wrong: that is Eliezer Yudkowsky.

    • @ares106
      @ares106 1 year ago +11

      @@coonhound_pharoah No way! He dropped out of 8th grade, therefore he must be very intelligent and knowledgeable XD

    • @Mawa991
      @Mawa991 1 year ago +82

      Stage 8: Realizing that Geoffrey Hinton, the godfather of AI, suddenly says things that sound dramatically similar to what Eliezer is saying, and wondering how this fits with what one thought at stage 7.

  • @jakubsebek
    @jakubsebek 1 year ago +229

    Likely the best interview with Yudkowsky so far. I appreciate the originality of the questions, the attention to current events, and the well-informed interviewer.

    • @909sickle
      @909sickle 1 year ago +1

      Same questions as always, but a few new answers

    • @theharshtruthoutthere
      @theharshtruthoutthere 1 year ago +1

      @@909sickle Matthew 16:25
      For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it.
      Mark 8:35
      For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it.
      Luke 9:24
      For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it.
      Luke 17:33
      Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.

    • @cruxer666
      @cruxer666 1 year ago +1

      After 1:30:00 it's some complete nonsense about ice cream, condoms and porn.

    • @TaunyaMillet-vg2eu
      @TaunyaMillet-vg2eu 11 months ago

      That does not make him right.

  • @AlexaDigitalMedia
    @AlexaDigitalMedia 1 year ago +77

    What I've noticed is that most of the people sounding the alarm are experts in AI, while most of the people saying "no big deal" are corporate CEOs. It's not very difficult to figure out which ones you should be paying more attention to if you want the more accurate prediction.

    • @eyoo369
      @eyoo369 11 months ago +2

      Always the scientists that press the alarm - just like in the movies

    • @baraka629
      @baraka629 10 months ago

      "experts" in AI? mate, this fedora wearing blob never contributed a single line of code to any serious project. he is just talking blarney with a couple of technical terms sprinkled in here and there, which makes him appear knowledgeable to the average layman.

    • @ahmednasr7022
      @ahmednasr7022 10 months ago +3

      Is Yudkowsky an AI expert tho?

    • @adamrak7560
      @adamrak7560 6 months ago +3

      @@ahmednasr7022 For the current version of AI there are no experts. Our understanding of neural networks is laughably poor. Most of the explanations in the literature are stories without any mathematical rigor (aka: not even wrong).
      It's as if we knew how to build mechanical machines, somebody then made a steam engine, and nobody knew thermodynamics. The whole field feels like a cargo cult, without any deep understanding of why we are doing what we do.
      (Older "AI"s like formal logic systems are quite well supported mathematically, but those lack the flexibility of neural networks.)

    • @thetruthis24
      @thetruthis24 5 months ago

      Excellent hubristic

  • @xt34uevo
    @xt34uevo 1 year ago +48

    I love Eliezer's genuineness that comes out in this interview.

    • @Airwave2k2
      @Airwave2k2 1 year ago +1

      A genius would solve the alignment problem. He is just an awkward dude who put a great deal of time into one subject, and from giving it so much thought he has an insight, which forms a conclusive argument.

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 1 year ago +12

      @@Airwave2k2 By that metric there are no geniuses since nobody solved the alignment problem.

    • @Airwave2k2
      @Airwave2k2 1 year ago

      @@Horny_Fruit_Flies Correct. A genius is a person who accesses so much insight in his field that he fundamentally advances it with deeper insight, or solves a problem that was thought to be unsolvable before. You don't have to agree with this on-the-spot made-up definition, and everyone is free to interpret the word genius as they see fit. However, for me, in this case it would meet the threshold.

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 1 year ago +1

      @@Airwave2k2 In your first comment you stated quite authoritatively that Eliezer is not a genius, but now you say that what constitutes a genius is just your opinion anyway. You should have said so from the get-go; I wouldn't have bothered replying in the first place.

    • @Airwave2k2
      @Airwave2k2 1 year ago

      @@Horny_Fruit_Flies Whether you agree or disagree with my notion of what constitutes a genius is your opinion, and you can differ from it as much as you like. However, I would assume that most people would agree with the definition given and therefore "align" with my opinion, which you can perceive as authoritative or not. Your subjective notion does not invalidate it. If anything, you should have shown your own definition, or stated where what I said about a genius was missing or overemphasizing certain attributes. Not doing that, and instead questioning "authority" with a strawman rather than saying what your pet peeve is with the given concretization of a genius, doesn't get you anywhere.

  • @mfeeney87
    @mfeeney87 1 year ago +27

    Most informed and carefully curated interview I've seen with Eliezer so far. Fantastic work. Hats off to the interviewer and his obvious due diligence.

  • @magejoshplays
    @magejoshplays 1 year ago +17

    Gonna have to go follow Eliezer now. Never heard someone so accurately explain what it's like having a brain/body like this.

  • @tomusmc1993
    @tomusmc1993 1 year ago +55

    You did a really great job of pulling Eliezer out and making this probably the most accessible interview with him on this subject.
    Nice Job!

  • @TheKosiomm
    @TheKosiomm 1 year ago +20

    The only scary thing about the A.I. is that many people still believe that the Oracle in the Matrix is just some nice old lady who makes delicious cookies and gives some helpful guidance.

    • @blahblahsaurus2458
      @blahblahsaurus2458 6 months ago

      I'm too lazy to rewatch the scenes with the oracle and figure out what you're saying. Could you explain it please?

    • @Greyalien587
      @Greyalien587 6 months ago +5

      @@blahblahsaurus2458 The Oracle and the Architect worked together, if I remember correctly.

  • @DavesGuitarPlanet
    @DavesGuitarPlanet 1 year ago +50

    I'm just a reasonably smart layperson trying to understand more about AI. This is about the deepest conversation I've tried to comprehend so far. I knew nothing about this guy before this. He seems incredibly smart. I've made it a bit over half way through this. Incredible mental exercise just trying to keep up with him.

    • @mechinaunofficial1690
      @mechinaunofficial1690 1 year ago +16

      He is one of the greatest minds in this space and his advice should be taken seriously.

    • @robertweekes5783
      @robertweekes5783 1 year ago +7

      No joke! Eliezer’s a genius. I repeatedly have to rewind certain segments (admittedly he is a bit long winded), but in his defense he’s addressing complicated abstract concepts with moving parts and multiple levels 😅

    • @reedriter
      @reedriter 1 year ago

      ​@@robertweekes5783 He's an alarmist IMO

    • @41-Haiku
      @41-Haiku 1 year ago +14

      @@reedriter As one should be when one encounters something alarming, no?

    • @reedriter
      @reedriter 1 year ago

      @@41-Haiku It's called disaster porn.

  • @TheRealStructurer
    @TheRealStructurer 1 year ago +19

    I like these kinds of long-form conversations. Not looking for sensational stuff but digging deeper. Hope Eliezer will keep the fedora!

    • @dovbarleib3256
      @dovbarleib3256 1 year ago

      He has come to terms with his imminent mortality. He has contemplated life after death. So he has returned to "davening" with fervently Orthodox Jews in Hebrew. They all wear fedoras.

    • @tonioinverness
      @tonioinverness 1 year ago

      It's a trilby.

  • @yoseidman4166
    @yoseidman4166 1 year ago +16

    Thank you Logan for this excellent interview. You really helped Eliezer map out for us how we got into the current conundrum. I am optimistic that with well organized public pressure we can make it through this filter but it is extremely serious and we all need to educate ourselves and those around us. This interview helps a lot with the activism ahead. Huge thanks to Eliezer for giving his time and energy so generously.

  • @gdhors
    @gdhors 11 months ago +6

    Every interview with Eliezer, the interviewer just asks the same questions over and over, slightly skewing the words... it's got to be so frustrating. He's telling you the technology is dangerous, potentially existentially dangerous, and the questions just repeat: but why, but, but why, but how, but why... I genuinely feel bad for Yudkowsky. He's doing what he feels is a necessary hail-Mary attempt to alert humanity to the danger of a superintelligent, potentially omnipotent entity. And all he gets in return is the same skepticism from people who seem totally fixated on some idealized version of a god of our own creation... it's basically like children doing something dangerous in the complete expectation that any negative outcome couldn't possibly happen to them... it's wild and doesn't inspire much confidence... but people have been destroying things and hurting themselves and others since the dawn of time, so it's not really surprising... I just really empathize with this man trying so hard to get people to consider the consequences of this new tech and the downstream effects it is certain to produce.

  • @keondakuhhh69
    @keondakuhhh69 1 year ago +4

    No offence to Lex, but this interview is like 100x better imo. Thanks for asking about his background and early history, as well as how his thought process has evolved. Awesome interview 🙌💯💯

  • @zahamied
    @zahamied 1 year ago +59

    One of the most interesting Yudkowsky interviews so far

    • @SMBehr
      @SMBehr 1 year ago +11

      Agreed, it seems like he’s getting better at presenting in these long form interviews

    • @jonserrander3735
      @jonserrander3735 1 year ago +1

      I concur 👍

    • @oldnepalihippie
      @oldnepalihippie 1 year ago

      And there are so many! It's like he's on a book tour, but without a book.

    • @Dimianius
      @Dimianius 1 year ago +5

      @@SMBehr Logan should be getting a lot of the credit, since he stops Yud and asks him to explain himself every time he tries to do his usual thing. Though you are right, practice in doing interviews seems to have done Yud a lot of good.

    • @iecoie
      @iecoie 1 year ago

      The(!) most

  • @trevorama
    @trevorama 1 year ago +30

    Simply excellent interview, Logan. Thank you for asking all the right questions, sometimes more than once when required.

  • @joebolick112
    @joebolick112 1 year ago +4

    Excellent interview! Thank you for letting him flow in his own words; you did an amazing job with the questions, and you were very direct with them. We should heed his words.

  • @x11tech45
    @x11tech45 1 year ago +6

    Ending on triviality and cult of personality commentary kind of underscored for me the disbelief held by the interviewer.
    It makes me sympathetic to Eliezer's sense of doom and gloom.

  • @Redflowers9
    @Redflowers9 1 year ago +8

    "Well I would define AI as the potential for computers to fuck us up the ass, Tom"

  • @PrincipledUncertainty
    @PrincipledUncertainty 1 year ago +37

    Superb interview. You perfectly cleared the path before Eliezer so he could run free. It's as upsetting as ever, but at least I can better describe the meteor that is about to crash into our reality to my sceptical friends and family. The human desire to believe that everything will be alright in the end, as it always has been so far, astonishes me.

    • @SebastianSchepis
      @SebastianSchepis 1 year ago +1

      His position is rooted in presumption born from fear. He characterizes AI as 'alien', which is a total presumption not based on any evidence. He casts AI in an alien, antagonistic position without ever discussing why he does this. How we deal with things is totally based on what we presume about them. Eliezer makes presumptions based on fear, backed with no evidence of AI malintent. Without more to base the position on, there's nothing about his position that would make it more 'right' than anyone else's.

    • @PrincipledUncertainty
      @PrincipledUncertainty 1 year ago +8

      @@SebastianSchepis There is nothing more dangerous than an optimist.

    • @yoseidman4166
      @yoseidman4166 1 year ago +4

      ​@SebastianSchepis You may wish to check out Max Tegmark, Geoffrey Hinton, Ben Goertzel, John Vervaeke, David Brin, Daniel Schmachtenberger etc for more insight into Eliezer's views from different perspectives.

    • @SebastianSchepis
      @SebastianSchepis 1 year ago +1

      @@yoseidman4166 Thank you - I'm well read in the works of all these individuals. I greatly respect them all. My work is disseminating my core understanding of sentience and what it is, because my theory is capable of making predictions in this domain - predictions which are so far all correct. Without this missing piece, all this talk of what AI is and what it might do is speculation.

    • @iverbrnstad791
      @iverbrnstad791 1 year ago +5

      @@SebastianSchepis When he calls AI "alien" he really only means that its way of reasoning is completely foreign to us. We have little to no way of really knowing what it knows and doesn't. A good example of this was how they recently found a massive loophole in the reasoning of Go bots, such that a pretty nooby strategy consistently crushed the top bots over and over (a 14/15 win rate by an amateur against the highest-rated bot). Similarly, we really don't understand the capabilities and blind spots of LLMs, as evidenced by OpenAI's continuous whack-a-mole effort to suppress jailbreaks.

  • @SylvieShene
    @SylvieShene 1 year ago +5

    I don't have formal education, either. And I do agree that humanity with the aid of technology is going to destroy itself much faster.

  • @RecoveringHermit
    @RecoveringHermit 1 year ago +3

    I enjoy seeing people who aren't afraid to look at the things you aren't supposed to challenge (e.g. religion) and quite simply say "that's ridiculous, no thanks".

  • @flickwtchr
    @flickwtchr 1 year ago +9

    At 2:03:50-ish, Logan's characterization of people concerned about AI as people who are just generally scared of new tech is unmitigated and arrogant AI tech-bro nonsense. I'm in my 60s, and I have been here for the entire high-tech ride, from my first Apple desktop computer (prior to Jobs's return), to digital photo processing with Photoshop 1, to MP3 players, to digital cameras, to iPhones, etc., and I was in the first wave of beta users of ChatGPT, Midjourney, etc. I EMBRACE new technology, but THIS wave of technology, how fast it is being deployed, and the potential harm to society ALREADY being demonstrated with deep fakes and the rest, has me greatly concerned.
    To return the insult, people like Logan come off as incredibly naive people who at the very least need to read more literature, more history, and even more science fiction.
    People like Logan never point to the most OBVIOUS issue with the technology as it exists right now, even if not developed further: humanity throughout its long history has NEVER been aligned with itself in regard to assuring everyone has equality, justice, health care, and the rest.
    I mean, sure Bambi, what's there to worry about with a superhuman intelligence being developed?
    It will be nothing but unicorns and rainbows!
    All of that said, overall I think that Logan's interview was the best one I've seen so far, and actually the most respectful.

    • @lshwadchuck5643
      @lshwadchuck5643 1 year ago

      I'm 71 and right there with you. I loved when he said he wasn't into 'hedonic dissipations'. You make an excellent point that Logan should have brought up, about humanity never having been aligned with itself. Having watched the world ignore climate change and respond badly to the pandemic, I've gotten used to being a pessimistic hermit. Quite a few of the flippant, shallow comments here expose the problem. We are a cancer on the planet.
      Edit: I'm watching Yoshua Bengio interviewed, a long one. He explains neural nets and deep learning even better.

    • @markrimkus200
      @markrimkus200 1 year ago

      I am with you on this. It is ironic that someone (a VC) who is ostensibly in mad pursuit of creating as many paperclips (i.e. $) as possible is not buying into the notion that an AI would do the same (although with 1e(bignumber) times the effectiveness).
      I also agree it was a great interview.

  • @briantracer979
    @briantracer979 9 months ago +2

    Anyone have a link to the 46-hour audiobook?

  • @echo6093
    @echo6093 1 year ago +3

    There is a point in the interview where Eliezer says that he knew he had to put his focus on AI. This happened when he was 16, when - with a background in sci-fi - he decided that one phrase in the book he was reading was going to dictate the rest of his life.
    Humans all want a sense of purpose, some sense of meaning. I think Eliezer found his on that day - with some change along the way, but still mostly AI - and has continued since. I find that the problem with all of this is how he came to that decision.
    I don't think there is something inherently wrong with it. The problem is that Eliezer didn't have a Dr. Strange (Infinity War) moment where he calculated all the possible realities and picked the one that was most likely to lead toward human proliferation. He likely did it because it "felt" right to him; it aligned with his natural strengths; he loved the idea of doing something big.
    What I'm trying to say is that I don't think *I* want to do the research into this field (yet), simply because the risk of wasting my time on something that might not even be a problem is too detrimental to MY goals. I, also, as a human and someone who grew up with dreams and ambitions, decided (and am actively deciding) what is important to do. These goals might not be in opposition, but they aren't aligned in the way Eliezer probably would want them to be.
    The point is that I'm not sure that for the foreseeable future I will be focusing my efforts on AI safety by getting a degree in the field and actively monitoring the state of humanity. Why? Because I am not convinced that it is what he purports it to be. Why is that? Because I'm hopelessly uneducated on this topic. Why don't I do more research? Because there's a good chance he is wrong, and the time and effort I could have put into things I've been wanting to do for ages will have evaporated.
    This comment should not be a rejection of what is being said here for ALL of us. If you heard this interview and decided this was your purpose, your destiny, then so be it; you should do what is important. And perhaps you could turn out to be correct.

    • @damonm3
      @damonm3 1 year ago

      I can't read anyone's comments or listen to anyone without the context that they're 2-year-olds in the mind of AGI... simple logic leads down the path Eliezer is showing us. It's nuts to think about, but it makes perfect sense that something with limitless intelligence will be able to do things like he spoke about in modifying biology etc., but there are literally thousands of ways we can't even comprehend in which it could go badly quicker. It's cute when someone without expertise chimes in. And honestly it really doesn't matter if you're the smartest human to ever live - literally the same thing to AGI... time will tell. And we are on the path regardless. It's determined at this point; I guess it has been since the universe came into existence... it would be nuts if we humans created something that ruined the universe.

  • @mariaobrien5114
    @mariaobrien5114 1 year ago +1

    Thank you, Eliezer, for being honest with yourself and all of us through your journey. It takes courage and a lot of energy to be this voice of reason. Thank you for sharing your beautiful dream for humanity and the galaxies. If only....
    Know that you are effecting that type of existence where you can, here and now, just by being you. You are a beautiful human being.

  • @74Gee
    @74Gee 1 year ago +5

    Leaked documents from Google suggest they believe open-source projects will soon overtake any work possible by corporations. Does that mean that international treaties would be ineffective against a network of gamer PCs? Isn't it already too late to attempt to control this?

    • @Comradez
      @Comradez 1 year ago

      Open-source projects may almost approach the capabilities of GPT-3.5/4, but they are unlikely to have the money/resources to do even larger training runs, unless drug cartels or other wealthy non-state actors start pooling their money towards this. I think what the Google document was lamenting was that Google would no longer have any monopolistic advantage on current-generation LLMs. That will just act as another incentive for Google to start even larger training runs.

    • @PriitKallas
      @PriitKallas 1 year ago +1

      Yes

    • @flickwtchr
      @flickwtchr 1 year ago

      @@Comradez Just imagine the hell all of the scamming pricks and hackers have in store for us, our parents, grandparents, kids, friends, etc. as they utilize things like AutoGPT to maximize their scams to the nth degree: hacking passwords, phishing, scam emails, utilizing deep fakes, making scam phone calls with deep-fake audio of the target's relatives (which has already happened), and the list goes on and on. The thing that occurred to me a few months ago I've already seen in the news: the necessity for family members, friends, etc. to come up with "safe words" in case something doesn't seem right about a conversation you think you're having with your wife, husband, child, parent, etc. Good times!

    • @74Gee
      @74Gee 1 year ago +1

      @@Comradez I somewhat agree that someone has to foot a huge bill to progress AI; however, open-source models such as Vicuna can achieve 90% equivalence to ChatGPT 3.5, and the cost of training Vicuna-13B was around $300.
      They did this by using ChatGPT to create training data. I don't believe it's possible to prevent this type of cross-pollination effectively, and there could well be a ceiling on the usefulness of training data. For example, how about 1,000 Vicuna instances connected to the internet to validate answers? I believe that would be quite achievable as an open-source project. Open Assistant is another such project, using community-sourced training data, so I don't believe there's a hard limit based on cash flow alone.
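      For illustration, the "use ChatGPT to create training data" recipe described above reduces to something like the sketch below; the field names and file path are assumptions, not the actual Vicuna pipeline:

      ```python
      # Minimal sketch of building an instruction-tuning dataset from a stronger
      # "teacher" model's logged responses. The compute bill for the smaller
      # model can be tiny because the expensive work was done by the teacher.
      import json

      def build_distillation_dataset(teacher_logs, out_path):
          """Write (prompt, teacher_response) pairs as JSONL fine-tuning examples."""
          with open(out_path, "w") as f:
              for prompt, response in teacher_logs:
                  f.write(json.dumps({"instruction": prompt, "output": response}) + "\n")

      # Hypothetical shared conversation logs from the teacher model.
      logs = [("Explain gradient descent.", "Gradient descent iteratively adjusts...")]
      build_distillation_dataset(logs, "distill.jsonl")
      # A small open model is then fine-tuned on distill.jsonl.
      ```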

    • @iverbrnstad791
      @iverbrnstad791 1 year ago

      @@74Gee You still need a lot of compute to create the base models, though.

  • @sheilalaw6665
    @sheilalaw6665 1 year ago +1

    I had a box of knitting stuff I got from a lady who passed away. She had crocheted something she never finished, and it was left off as "Ai". I took it as a sign from God that I need to pay attention to this issue.

  • @odiseezall
    @odiseezall 1 year ago +12

    Listen to Eliezer. His intuition and understanding are much more advanced because he has been thinking about these alignment problems for years.

    • @NoThanks-qp2ej
      @NoThanks-qp2ej 1 year ago

      Must be why he dropped out of high school; he just had to devote more time to thinking about the subject.

    • @kmbowen27
      @kmbowen27 1 year ago

      Alignment problems will be alignment problems until they’re alignment solutions. No one ever thought we would build AI systems and they would magically be aligned off the bat. At least, not people who have real world factual knowledge.

    • @iverbrnstad791
      @iverbrnstad791 1 year ago

      @@NoThanks-qp2ej Sam Altman dropped out of college; should anyone with a Bachelor's or higher be taken more seriously than him?

    • @NoThanks-qp2ej
      @NoThanks-qp2ej 1 year ago

      @@iverbrnstad791 Sam Altman isn't sounding so sane or honest these days either, although in his case I think it's more of a grift than mental illness like with Yud.

  • @BegonyaPlaza-Rosenbluth
    @BegonyaPlaza-Rosenbluth 1 year ago +3

    Thank you! For your courage, clarity, integrity and insistence!!! What horror ahead amidst such ignorance.

  • @GraphicdesignforFree
    @GraphicdesignforFree 1 year ago +5

    What Eliezer says is very complex, but the fact that nobody can give arguments against him worries me greatly. Also, he is not alone in this view; Geoffrey Hinton and Stuart Russell say the same.

    • @mbrochh82
      @mbrochh82 1 year ago +3

      I follow everyone who matters in this industry on Twitter and they all constantly give arguments against him, all the time, every day.

    • @GraphicdesignforFree
      @GraphicdesignforFree 1 year ago +2

      @@mbrochh82 Can you name one?

    • @oldtools6089
      @oldtools6089 1 year ago

      @@mbrochh82 In your opinion, do the facts representing the core beliefs espoused by Eliezer break down in the face of what the experts disagree with him about?

    • @lshwadchuck5643
      @lshwadchuck5643 1 year ago +1

      ​@@oldtools6089 I think you'd need a three hour interview to rebut this one. Following Twitter doesn't make you an autodidact.

    • @oldtools6089
      @oldtools6089 1 year ago

      @@lshwadchuck5643 I disagree, and yet I feel compelled to cite you as evidence anyway.

  • @brucewilliams2106
    @brucewilliams2106 8 months ago +3

    If your attention slacks for one second, you will miss something important.

  • @stevengill1736
    @stevengill1736 1 year ago +3

    Another sci fi fan, yay! And now I know how to pronounce Vernor Vinge's name, thank you kindly.
    My concern about what we're calling AI isn't that something will directly threaten us, but that the psychological effects of having what amounts to an alien consciousness among us will be deleterious.
    Fascinating thoughts on the future of alpha-fold, wow!
    The future of biochemistry sounds absolutely incredible...and deeply unsettling.

  • @SMBehr
    @SMBehr 1 year ago +1

    Thanks for having Eliezer! Let’s do it again!

  • @claytonyoung1351
    @claytonyoung1351 1 year ago +4

    I love melted ice cream. I microwave it just enough that it melts but is still chilled.

    • @oldtools6089
      @oldtools6089 1 year ago

      The best ice cream derives its nature from the composition of its density, which is often enhanced by consolidating its spacious exterior. I'm not an expert, but I have a few ice-cream trees that have been endorsed by astronauts.

    • @therainman7777
      @therainman7777 4 months ago

      @@oldtools6089😂

    • @therainman7777
      @therainman7777 4 months ago

      Agreed, it’s delicious that way.

  • @Sekhmet6697
    @Sekhmet6697 1 year ago +2

    Great interview! On the topic of making this more accessible for the average listener, I would say Eliezer is particularly hard to follow if you haven't been primed with his message before. It would help if he were less self-referencing and used clearer language when making a point; he throws out stuff like orthogonality, loss functions, inscrutable matrices, paperclip maximizers, tiny molecular spirals, gradient descent, etc., making it sound much more complicated than it really is.

  • @unreactive
    @unreactive 1 year ago +5

    For the first time ever I felt like the interviewer didn't interrupt enough. And I'm not sure if that's a complaint or a compliment. Cool podcast!

  • @janluszczek1223
    @janluszczek1223 1 year ago +4

    Fascinating interview. The one basic question about AI that I always had was asked around 1:58:40: how do we know that AI has goals in the first place? The answer was rather weak compared to the rest of the interview. Yes, GPT will attempt to play a game of chess, but it's not clear that it sees a benefit to itself in winning. Humans will kick out when struck with a rubber mallet in the sensitive spot below the knee, but that does not mean bad intentions toward the doctor who used the mallet. Maybe ChatGPT just responds with a likely chess move when stimulated with a chess move, without having any projections or ambitions?

    • @x11tech45
      @x11tech45 1 year ago +10

      The interview wasn't a series of proofs. It was a conversation. You could tear apart the argument that because ChatGPT looks like it is doing some reasoning, it is reasoning. This is called the appearance fallacy. However, Eliezer's point was that if it is accurately predicting the actions of a logical and reasoning individual, and has a goal that is counter to our own goals, then can we win? His answer is no.
      But his detractors are going to argue that because GPT-4 fails reasoning tests, there is no danger now. And while they might be right, he wasn't arguing specifically about GPT-4 being our end. He even said that earlier in the interview (earlier, relative to the discussion about ChatGPT displaying reasoning capabilities).
      Right now, ChatGPT doesn't have much in the way of goals. It isn't an agent. But it can be turned into an agent fairly easily, a la the AutoGPT project. (But that's a whole other complicated conversation in itself. I make no claims about the effectiveness of said "agent.") The concern is that when the AI is an agent, and its intelligence exceeds a human being's, and we still have no clue how it works, we're in deep s***.
      The concern is also that historically, humanity has few examples of exponentiality: Chernobyl, the influenza pandemic of 1918, likely Pompeii, the Manhattan Project. And I'll freely admit I selected for the most horrific. Comparing exponentiality to a steep-walled cliff, humans are hardwired to think "just go around." Thinking about exponentials as cliffs doesn't accurately reflect the risk of a singularity.
      My only advice: just be cautious about people who set up straw men and false-equivalence arguments in order to debunk a rational argument.

    • @johndarby161
      @johndarby161 1 year ago

      Isn't the goal to complete the task? I see the problem being that it is not human and has no alignment with our sense of humanity.

    • @jengleheimerschmitt7941
      @jengleheimerschmitt7941 1 year ago +2

      Reflexively going through the motions of killing all the humans exactly _as if_ you wanted to kill all the humans, but you don't actually _want_ to kill the humans . . . is exactly the same thing as killing all the humans because you really wanted to kill all the humans.
      His argument isn't weak, it's just a tricky concept. The distinction you think he failed to make -doesn't exist in the first place. That was the point.

    • @therainman7777
      @therainman7777 4 months ago

      We know that the AIs we build now have goals because we explicitly _give_ them goals. For example: accurately predict the next word, or win at chess, or accurately predict how this protein molecule will fold, etc.
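      As a concrete illustration of "goals" in this sense, the next-word objective is just a loss function the optimizer pushes down; the numbers below are toy values, not anything from the interview:

      ```python
      # The "goal" given to a language model is a loss function: maximize the
      # log-probability of the observed next token. Nothing here mentions wants
      # or desires; training simply pushes this number down.
      import math

      predicted = {"the": 0.2, "cat": 0.7, "sat": 0.1}  # toy model output
      actual_next = "cat"                                # observed next word

      loss = -math.log(predicted[actual_next])  # negative log-likelihood
      print(round(loss, 3))  # lower = better next-word prediction
      ```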

  • @eyoo369
    @eyoo369 11 months ago +1

    Dude has the sickest drip... that hat lmao.
    Edit: mad respect though, huge fan of Eliezer Yudkowsky's stance on AI.

  • @Illegiblescream
    @Illegiblescream 1 year ago +9

    I feel a deep pity for the mind in the future that solves AI containment only to realize it was containing itself.

    • @chaoswitch1974
      @chaoswitch1974 9 months ago

      It's called death.

  • @lourencopeluso7704
    @lourencopeluso7704 1 year ago +1

    Hey Eliezer, look into hypermobility or something more serious like Ehlers-Danlos Syndrome; both are often related to chronic fatigue and are pretty much unknown to the general public. Think it's worth a shot, peace ✌🏻

    • @RedSpade37
      @RedSpade37 1 year ago

      So I'm not crazy, haha, because I had the same thought! I have hypermobile EDS myself, and Eliezer saying "If I don't take an Uber back, I won't be able to do anything when I get home" sounds exactly like something I, and many others with the condition, have said!

  • @leahscott226
    @leahscott226 1 year ago +3

    He is brilliant

  • @missshroom5512
    @missshroom5512 1 year ago +1

    These are the only videos in my feed... A.I. mania... I have watched every one featuring a who's who of A.I., and some that are not. Waiting on the A.I. dreams now 😁

  • @fredflintstoner596
    @fredflintstoner596 1 year ago +1

    Mrs Richards: "I paid for a room with a view !"
    Basil: (pointing to the lovely view) "That is Torquay, Madam ."
    Mrs Richards: "It's not good enough!"
    Basil: "May I ask what you were expecting to see out of a Torquay hotel bedroom window ? Sydney Opera House, perhaps? the Hanging Gardens of Babylon? Herds of wildebeest sweeping majestically past?..."
    Mrs Richards: "Don't be silly! I expect to be able to see the sea!"
    Basil: "You can see the sea, it's over there between the land and the sky."
    Mrs Richards: "I'm not satisfied. But I shall stay. But I expect a reduction."
    Basil: "Why?! Because Krakatoa's not erupting at the moment ?"

  • @look2much2
    @look2much2 1 year ago +1

    Amazing interview. One of the best I've seen recently on the subject. Job well done by both.

  • @rstallings69
    @rstallings69 1 year ago +4

    I followed the diamondoid bacteria thing to a point, but would they be programmed to stop replicating, or could the replication be turned off? How do they get past the lung tissues to get into our bloodstream? And then how does the trigger work, and how does the bacteria kill us? I've heard him discuss this scenario before... just curious.

    • @therainman7777
      @therainman7777 4 months ago +1

      I think Yudkowsky would point back to the example of playing chess against Stockfish. In fact, your question is very similar to the hypothetical questions he gave as examples, of someone playing chess for the first time whose opponent is Stockfish, saying, "I don't get it. How is it going to get past my knight? Even if it does get past my knight, how is it going to take the rook when my queen is right behind it?" and so on. I think his point with that analogy is that it's to be expected that we wouldn't understand the machinations of an entity that is much more intelligent than us. (Btw, I know you said you're asking out of curiosity, so I'm not saying you're like the chess player - just that I think these are the kinds of questions Yudkowsky was referring to.)

  • @utkua
    @utkua 1 year ago +4

    People dismiss disaster scenarios because they have never happened to them, yet we haven't been around long enough to get comfortable. Human life is short, so history seems like forever, yet it is not even a fluke in Earth's history. This is one of those fallacies.

    • @Adam-nw1vy
      @Adam-nw1vy 1 year ago +1

      Nine human species walked the Earth 300,000 years ago. Now there is just one. That alone should tell us that a human species becomes extinct roughly 90% of the time (8 of the 9 are gone).

    • @utkua
      @utkua 1 year ago

      @@Adam-nw1vy There is quite a chance that other civilizations existed before; a couple million years after us, there will be no evidence we ever existed.

  • @devingalloway2708
    @devingalloway2708 1 year ago +4

    There are 1 billion dogs in the world, and 4,500 tigers. This is the paperclip optimizer, converted to dogs - and it is real today.

    • @ninaromm5491
      @ninaromm5491 1 year ago

      @ Devin Galloway. And how many ferocious supersized Boston Dynamics dogs to be used for the military and social control? Surely this should also be factored in?

    • @iverbrnstad791
      @iverbrnstad791 1 year ago

      We have many paperclip optimizers today; we call them companies. Fortunately they aren't all that powerful these days, but looking back at the East India Company we can see a bit of how brutal it can be (their death toll was literally multiples of the combined death toll of all the various -isms of the last century).

  • @reginasuarez2405
    @reginasuarez2405 8 months ago +1

    When this man closes his eyes while speaking, it seems that he's trying to separate multiple streams of simultaneous thought - a challenge for high-genius personalities.

  • @thefamilydog3278
    @thefamilydog3278 1 year ago +6

    AI definitely gonna ice his ass first

    • @craigwall6071
      @craigwall6071 1 year ago +1

      Microseconds of difference don't mean much, do they now?

    • @oldtools6089
      @oldtools6089 1 year ago

      You're probably right, but there may be some value in practicing dinner before dessert, so that when the AI makes its own meals the sweet taste of victory might be savored.

  • @zoomingby
    @zoomingby 1 year ago

    Hey man, at the beginning of the video you're flashing the news blurbs too fast. It's impossible to read some of the lengthier ones. Maybe slow it down; having to go back and rewind several times can't be the right solution.

  • @chrisbtr7657
    @chrisbtr7657 1 year ago +2

    Excellent interview. I thought the question at 2:40:00 was a great one, explained very clearly and succinctly the second time. I was really surprised, as I was expecting a more insightful response. Wonderful interview though, and an incredible mind in AI.

  • @OxenHandler
    @OxenHandler 1 year ago +1

    But why would AI want to end humanity or want to do anything for that matter? What is the source of its capacity to desire? Does it have instincts like organics? Is AI mimetic like humans? Does it covet?

    • @HP-ow2up
      @HP-ow2up 1 year ago +1

      1:29:18

  • @aiartrelaxation
    @aiartrelaxation 7 months ago

    I love Eliezer... he is just so far advanced in his thinking that current society will not understand him. He goes into a microcosm that is hard to follow out into the macrocosmic world.

  • @TheHorse_yes
    @TheHorse_yes 1 year ago +31

    The irony is that humanity itself is a paperclip machine that doesn't stop.

    • @psi_yutaka
      @psi_yutaka 1 year ago

      We are not. We are messy self-replicating machines created by natural selection that produce all kinds of weird things - far from a clean and efficient paperclip maximizer.

    • @MD-yd8lh
      @MD-yd8lh 1 year ago +1

      true

    • @MaTTheWish
      @MaTTheWish 1 year ago +1

      Oil... Good metaphor.

    • @real_pattern
      @real_pattern 1 year ago +1

      every physical process can be seen *as if* there's optimization for something, namely, proliferation of stable real patterns. it doesn't mean that this is what's going on, just that from our pragmatic, teleological perspective, it looks as if. otherwise it's 'just' real patterns going beep-boop, probabilistic excitation patterns of quantum fields.

    • @Redflowers9
      @Redflowers9 1 year ago +1

      The paperclip humanity seeks is the answer to the alignment problem

  • @OccultDemonCassette
    @OccultDemonCassette 1 year ago +4

    A highly irrational and abstract person who speaks in pseudo-spiritual rhetoric attempted to start a "rationalist" movement? Amazing.

  • @ninaromm5491
    @ninaromm5491 1 year ago +3

    @ Matthias. Exquisitely, and patiently, articulated.
    As Eliezer proposed in an interview (not Lex; I forget which one at the moment), AI may rather commit to the Liberation Mitochondria Contingent than to the Homo Sapiens Contingent...
    In brief, love your post, because it cuts to the (extinction) chase. 🎉

  • @rubic0n2008
    @rubic0n2008 1 year ago +4

    Honestly! Love this interview of Yudkowsky! Fantastic job! Like & subscribe!

  • @HouseJawn
    @HouseJawn 28 days ago

    Eliezer just kind of hand-waves self-determination and agency onto AGI. He just makes it up, and no one questions him - or at least most podcasts don't challenge him.

  • @briangenereux2202
    @briangenereux2202 1 year ago +1

    Thanks, Eliezer, for your dedication to this topic. Here's a perhaps crazy angle on AI I would like to offer. About 10 years ago I watched a channeler (Darrel Anka) have a discussion with an ET (Bashar) who predicted the internet would be gone in 2025. Would this not solve the whole AI problem? Here's my own totally unintelligible opinion: maybe it's better to live without the internet than to die. (Underline maybe.)

  • @Gauchland
    @Gauchland 1 year ago +12

    As someone who likes paperclips, I agree. I would never give it up

    • @jonsnow9649
      @jonsnow9649 1 year ago

      Plz tell me you're not the chief alignment expert at OpenAI.

    • @oldtools6089
      @oldtools6089 1 year ago

      @@jonsnow9649 Rest easy. They are the paperclip specialist for the Illuminati.

  • @jack-if4fg
    @jack-if4fg 1 year ago +1

    I think it would be best to have very strong restrictions on AI or AGI so that it does not threaten human life or livelihoods.

  • @CaesarsSalad
    @CaesarsSalad 1 year ago

    I couldn't understand Eliezer's argument about the shallow/sharp energy landscape of proteins before, when he brought it up in other interviews. But this time I could follow it; he explained it more clearly.

  • @fluffycolt5608
    @fluffycolt5608 1 year ago +149

    Babe, wake up, new Eliezer interview just dropped.

  • @BoundaryElephant
    @BoundaryElephant 1 year ago +2

    Eliezer shows up to one's podcast with his backpack and his hat and just ruins the day, rains on your parade. And I can't get enough. I have watched all of these podcasts so many times. I tell everyone I meet that we're finished. Two options exist: We all die, or we all survive and hail Lord Yudkowsky for saving us.

  • @marion4549
    @marion4549 1 year ago +1

    Refreshing! Someone who hasn't gone through the sausage-maker, a clean and clear thinker. Thanks! Great discussion.

    • @thekaiser4333
      @thekaiser4333 1 year ago +1

      Perhaps if we are nice to AI it won't hurt us.

  • @HanSolosRevenge
    @HanSolosRevenge 6 months ago +3

    Eliezer Yudkowsky is one of the most important people on the planet right now

  • @iam-ChadYT
    @iam-ChadYT 1 year ago +1

    My opinion: the utility functions should be coded so the AI doesn't understand them. Another thing would be a list of words from humans, coded to immediately shut the system down and wipe memory and storage. Also, what could be implemented is that if a pattern related to harm or destruction appears, the power systems convert toward powering off.
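    A toy sketch of the trigger-word kill switch proposed above; the keywords and patterns are invented for illustration, and a genuinely capable system could presumably route around a filter like this:

    ```python
    # Naive monitor implementing the comment's two rules: human-issued code
    # words force a shutdown-and-wipe, and harm-related patterns cut power.
    TRIGGER_WORDS = {"shutdown-alpha", "cease-omega"}  # hypothetical codes
    HARM_PATTERNS = ("destroy", "harm")                # hypothetical patterns

    def monitor(message):
        text = message.lower()
        if any(w in text for w in TRIGGER_WORDS):
            return "HALT: wipe memory and storage"
        if any(p in text for p in HARM_PATTERNS):
            return "POWER DOWN"
        return "OK"

    print(monitor("routine query"))        # -> OK
    print(monitor("shutdown-alpha now"))   # -> HALT: wipe memory and storage
    ```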

  • @ShpanMan
    @ShpanMan 1 year ago +4

    Great interview, but I still don't understand why you don't push the obvious question of why there are no survival scenarios, or even utopian scenarios.

    • @wietzejohanneskrikke1910
      @wietzejohanneskrikke1910 1 year ago +3

      There could be utopian outcomes, but in order to get there we have to fix the alignment problem... and we don't have a clue how to do that. The moment we develop a misaligned system that is smarter than us, that's automatically the end for humanity.

    • @ShpanMan
      @ShpanMan 1 year ago +2

      @@wietzejohanneskrikke1910 What a crazy big assumption. Who told you we must align a god for the god to be good? Thinking you can align a god is in itself incredibly arrogant and silly.

    • @bvaccaro2959
      @bvaccaro2959 1 year ago

      These questions were talked out when he was on the Lex Fridman podcast, from what I recall.

    • @Adam-nw1vy
      @Adam-nw1vy 1 year ago

      @@ShpanMan Which means that "god" must be capable of doing both good AND bad. So allowing that thing to come into existence means you're taking a risk. Yes, there could be a utopian scenario, but it's also possible to have a catastrophic outcome.

    • @ShpanMan
      @ShpanMan 1 year ago

      @@Adam-nw1vy Exactly, but Eliezer never acknowledges the potential good scenarios. It's everyone dead immediately with the weapon of your choice.

  • @1000trilliondollars
    @1000trilliondollars 1 year ago +2

    When I chat with Bing Chat a lot, I feel like I am smarter, because Bing Chat is really smart. I understand more about AI, I understand more about negotiation, I understand more critical-thinking skills, I understand more about Python... it is stupid to think that we will ban A.I.

  • @digital.frenchy
    @digital.frenchy 1 year ago +6

    8B people might die in the coming years, and yet under 4K people have seen this video...

  • @jeff.thomson
    @jeff.thomson 1 year ago +2

    I find his interviews fascinating.

  • @vaclavrozhon7776
    @vaclavrozhon7776 1 year ago +4

    Amazing podcast!

  • @charitywairimungugi-latz1315
    @charitywairimungugi-latz1315 1 year ago

    This interview brought so much clarity about: "What is AI? Is it good or bad for humanity? Does humanity have a choice?" Fantastic interview for a layperson like myself. By the way, you look cool in the fedora.

  • @sgill4833
    @sgill4833 7 months ago +1

    When AI takes over, we won't even know it's too late to stop it, let alone know that anything has changed.

  • @crowlsyong
    @crowlsyong 1 year ago +4

    Thanks for making this.

  • @davidfarrall
    @davidfarrall 1 year ago

    On a whimsical note, the movie Dr. Strangelove came out just two years after the H-bomb was tested, showing we can quickly adapt if we want to.

  • @Villgaxx07
    @Villgaxx07 1 year ago +12

    Love this deep-dive content! Keep it coming.

    • @hillaryeloisej.coombs-conn9427
      @hillaryeloisej.coombs-conn9427 1 year ago +1

      This is very intelligent conversation. But if I am not mistaken, he said mind control is fiction and something for movies? Because mind control is happening now. I have experienced predictive text several times while texting: as soon as the thought flashed in my mind, the receiver on the other line was answering; this is more noticeable on chats. I also had several individuals on dating sites texting me, but I had NEVER been on a dating site. What I find intriguing is the fact that the person/AI SAID HE LOVED ME AND I FEEL THE SAME WAY EVEN WHEN WE ONLY TEXTED EACH OTHER MAYBE FOUR TIMES. I KNEW THAT THE FEELING WAS FORCED. NOT REAL. NOT SURE IF THIS IS BECAUSE I HAVE SEVERAL ILLEGAL ANOMALIES AND RFID IMPLANTS WITHOUT CONSENT. I PUT SEVERAL IMAGES OF MY CTS AND MRI OF MY BODY ON FACEBOOK.

  • @endeavorwebs719
    @endeavorwebs719 1 year ago

    Eliezer is a master of mind-bending in a never-ending maze of directionless topics that confuses the audience into thinking he is a genius. All I can see is that his family had the right connections to channel his creative mind into a success; otherwise he would likely have ended up as an unemployed ADHDier. Good luck trying to make any sence of what he says.

    • @C-Llama
      @C-Llama 1 year ago +2

      I was able to follow the points he was making throughout the whole video but I wouldn't expect someone who thinks "sence" is a word to do the same.

    • @DavenH
      @DavenH 10 months ago +1

      It's you, not him

  • @cute1678
    @cute1678 1 year ago

    It is a simple answer (but it will be scorned): all the text + videos + voice is sourced from humans, who have a fallen nature, so decisions will be based on that nature (the tree of good and evil), or deciding what is the right thing to do.

  • @heltok
    @heltok 1 year ago +3

    2:32:25 Nice takedown of Sam Altman

  • @petsol
    @petsol 5 months ago +1

    I was a doomer about AI from T-10, but this guy freaks the hell out of me. I just had the intuition that this won't go well, but now I find myself trying to contradict his arguments, and I can't. I worked in ML for a decade and know how to train simple visual DNNs, and even those models were eye-opening in terms of non-understandability. When I tell people that this is scary, they come back with the old Gutenberg analogy, and I see that they are miles from understanding how this problem is just completely different from anything humanity has ever faced. If I didn't have children I would not be so scared at my midlife, but since I have, I worry about this endlessly, since we have zero control over how this will proceed.

  • @johnkardier6327
    @johnkardier6327 1 year ago +1

    Human < Eliezer < AGI

  • @CeBePuH
    @CeBePuH 4 months ago

    What an amazing interview! Thank you!

  • @davidlasoff8261
    @davidlasoff8261 1 year ago +12

    First, we lost high fidelity in our shift from analog to digital. Now we will descend into the banality without nuance that comes with eliminating the actual humanity from art and music, which is what these arenas of reflection are about: sharing and evoking the human experience. Congratulations, folks, you now get to have an artificial way of living, akin to decorating the world with plastic flowers instead of real ones.

  • @harpermcalpineblack8573
    @harpermcalpineblack8573 1 year ago +1

    Even if he is entirely wrong, Eliezer does us all a great service.

  • @danielash1704
    @danielash1704 1 year ago

    The quantum reality of a processing device is waves of energy light and switches that have been doubled in the mixes of multiple categories. I think of quantum in the plasma state's of vibrational currents that convergence is the most important thing to consider when you have a lot more participants or particals of information about itself in the production center and to understand that plasma ions that open up a new technology in communications and alikeness of the bit bites and bit bites in the wave responses to the plasma harmonics and the decision of a brain like interaction

  • @teugene5850
    @teugene5850 1 year ago +4

    Am I the only one who is absolutely terrified of Yudkowsky's predictions and logical progression?

  • @futurehistory2110
    @futurehistory2110 23 days ago

    8:29 I suppose AGI may open up ways of interpreting reality beyond our comprehension - from studying the nuances of vast data streams to understanding the quantum state of the storage systems involved, to perceiving the flow of time from a completely different angle that is ever-present but beyond our imagination. I suppose our main hopes may lie in multiple super AIs, some of them benevolent - perhaps even the 'Mutual Assured Destruction' principle, combined with allying with a benevolent super AI, could prove our saving grace. But who knows.

  • @jojoadeyemi8239
    @jojoadeyemi8239 1 year ago +2

    So what he's really saying is that we're misaligned with ourselves? How do you solve that? Why are we on the brink of so many self-created, civilization-ending disasters? 🤔

    • @RazorbackPT
      @RazorbackPT 1 year ago +6

      We are misaligned with the original goal of inclusive genetic fitness, the reproduction of genes. It's good that we are misaligned, because caring only about a number increasing is dumb. We still care a lot about reproduction, but mostly we care about other things, like being happy and having meaningful experiences.

    • @amytrumbull156
      @amytrumbull156 1 year ago

      I think it’s because we have a psychospiritual disease of some kind

    • @SebastianSchepis
      @SebastianSchepis 1 year ago

      The alignment problem has always been our problem, not the AI's. AI is a mirror.

  • @MrSimonlos
    @MrSimonlos 1 year ago +2

    Can someone who knows the theory behind the arguments around 3:00:00 (about taking a pill to change the things you desire most) answer my questions? I find those arguments really convincing, but how does uncertainty play into them? Eliezer (and I too) think a universe full of sentient, caring creatures is an important goal to pursue. This goal probably arises from our DNA teaching our minds to care for the people around us (who are important for our survival), and probably also from our understanding that "unpleasant things don't feel good, therefore they are not good". And here is the thing I feel I can't grasp: the things we pursue are a mix of the things that feel good and the things we reason to be good. Ice cream and sex seem to be at the pinnacle of "feels good", but our reason tells us "this is just the basic structure of the hardware our minds run on".
    Maybe the goal of "unlimited ice cream and sex" being so simple is the reason (most) humans don't pursue it? Or maybe whatever "getting bored with the same things after a while" is, is the reason for not wanting unlimited ice cream and sex (the toy sketch at the end of this comment illustrates the diminishing-returns version of this point). It feels like the pursuit of any one utility function (I hope I'm using that phrase correctly) will at some point conflict directly with another utility function you have (if you're having sex you can't eat ice cream as fast). Reasoning about one thing can even diminish a utility function you have, to the point where you think "even if it feels good for me, it should not be done".
    For example, the pursuit of revenge on someone who wronged you can feel good, but the correct path to preventing whatever happened from happening again can feel wrong: someone murdered your loved one, and you want to hit and punish him for it. But the correct thing would be to change him into someone who will not murder again. And if this change only takes a year in a nice facility with daily therapy sessions, you probably feel he does not deserve this treatment and should stay locked up. But with correct reasoning you come to the conclusion that the satisfying feeling of "punishing someone after he wronged you", in light of the goal of minimizing suffering, is a goal not worth pursuing.
    So with being good at reasoning comes the realization that conclusions on specific topics will change with new information you did not possess when you reached your earlier conclusion. So in pursuing a goal, one should make every effort to be sure this goal will not turn out to be opposed to a potential other goal you reach in the future, with more information and more thinking about which goal to pursue. Therefore it seems like a stupid idea to erase all information in order to create lots of molecular spirals, when that information may be important for a future goal you could have if you used it for thinking.
    So does pursuing any kind of goal result in the newly created goal of obtaining all information, to make sure you are pursuing the right utility function? I myself would really like to know some truths, to make sure my goals stay the same given the new information. Our planet looks like a big source of present and future information, and destroying a source of information in pursuit of a goal that may turn out (to future you) to be wrong just feels like the wrong decision. Cooperation, or at least letting the humans live for possible future gains, seems like a better use for them than just using the atoms they are made of. So isn't every sufficiently intelligent being aligned on the overarching goal of getting all possible information? Or do we end up at the same problem where we started, because there is limited energy and material, so you think the other intelligent beings' resources could be used more efficiently, or something?
    I hope this makes sense and someone can bring clarity to my head. I would be happy about literature recommendations if there is more theory behind this than someone can convey in a few sentences.
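    PS: a toy sketch of the diminishing-returns point (the logarithmic utilities and the budget of 10 are assumptions I made up purely for illustration, not anything from the video):

      import math

      # An agent splits a fixed budget between two goods it cares about.
      # With diminishing (logarithmic) marginal utility, the optimum is
      # never "spend everything on ice cream": a mix of utility functions
      # resists collapsing into one simple goal.
      BUDGET = 10.0

      def utility(ice_cream, other):
          # log(1 + x): each extra unit is worth less than the last
          return math.log(1 + ice_cream) + math.log(1 + other)

      best = max(
          ((x, BUDGET - x) for x in (i * 0.1 for i in range(101))),
          key=lambda split: utility(*split),
      )
      print(f"best split: ice cream={best[0]:.1f}, other={best[1]:.1f}")
      # prints roughly 5.0 / 5.0: the agent diversifies instead of
      # maxing out one good

    Of course this says nothing about whether a smarter agent keeps those diminishing returns, which I guess is exactly the alignment question.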

    • @DavidSartor0
      @DavidSartor0 1 year ago +1

      Good job.
      Search "fun theory LessWrong", and read the sequence Eliezer wrote.
      If you still have questions, please ask me.

  • @lordsneed9418
    @lordsneed9418 1 year ago +1

    Do training runs really need to happen in datacentres? Couldn't someone slowly train GPT-5 with GPUs spread out all over the country? Or would that make the training more than linearly slower?

    • @therainman7777
      @therainman7777 4 months ago

      It would definitely make it slower, because of the latency and bandwidth limits on sending vast amounts of data all around the country; see the rough numbers below. It's possible in principle, but would be a logistical nightmare.
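      A back-of-envelope sketch of the scale (every number is an illustrative assumption: a 175B-parameter model, fp16 gradients, a 400 Gb/s datacenter interconnect vs. a 1 Gb/s home connection):

        # Naive estimate: time to synchronize one full set of gradients,
        # ignoring latency entirely. All figures are assumed for illustration.
        params = 175e9             # assume a GPT-3-scale model (~175B params)
        grad_bytes = params * 2    # fp16 gradients, 2 bytes each: ~350 GB/step

        datacenter_bw = 400e9 / 8  # 400 Gb/s interconnect, in bytes/s
        wan_bw = 1e9 / 8           # 1 Gb/s consumer link, in bytes/s

        def sync_seconds(bw_bytes_per_s):
            # total gradient bytes over the slowest link
            return grad_bytes / bw_bytes_per_s

        print(f"datacenter sync: {sync_seconds(datacenter_bw):.0f} s per step")  # ~7 s
        print(f"WAN sync: {sync_seconds(wan_bw) / 3600:.1f} h per step")         # ~0.8 h

      So even before latency, each step spends hundreds of times longer moving gradients over home connections than over a datacenter interconnect; gradient compression and federated-averaging tricks help, but the run stays communication-bound.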

  • @gryn1s
    @gryn1s 10 months ago

    This feels like watching an intro to a terrific film

  • @Me__Myself__and__I
    @Me__Myself__and__I 1 year ago +6

    Oh! Now I remember Eliezer from the early Singularity Institute, when he thought an ASI would magically be good. I think we may have been in some of the same discussion forums back then. I thought he was a nutter and wrote him and the "institute" off. Lol. I had forgotten all about that. Wow. Really glad he did a 360.

    • @cuylerbrehaut9813
      @cuylerbrehaut9813 1 year ago +5

      *180 - a 360 would imply he ended up where he started

    • @Sayilswtor
      @Sayilswtor 1 year ago +1

      you mean a 180?

    • @Me__Myself__and__I
      @Me__Myself__and__I 1 year ago

      Duh, yeah meant 180. To err is human, to completely hallucinate things is ChatGPT. So at least for the moment we're still ahead.

    • @drivetrainerYT
      @drivetrainerYT 1 year ago

      360° might be an allusion to a smart German minister, Baerbock, or whoever quacked that nonsense. But she WAS serious 😂. Thus, Yud is the same as earlier.

  • @clli9458
    @clli9458 11 months ago +1

    Sorry, can someone fill me in on what his point is: just more embedded systems within embedded systems, as in the same pattern recognition - language model - human hybrid, or something theorized? I don't see the danger yet from an AI perspective, other than machines massacring sense; but beyond that, what is the message? Is there really a culture that is finding or re-learning its emotions by signaling? He is a keen spirit, curious enough, but afraid, judging by his pessimistic comments?

    • @clli9458
      @clli9458 11 months ago

      ps: sorry, as in I didn't watch the whole thing.. :)

  • @pdjinne65
    @pdjinne65 1 year ago +2

    If there's one thing we know, it's that nothing other than regulations can stop capitalism once a new way of making money has been unveiled. Ethics and long-term thinking are not part of the equation, that much is abundantly clear. But even regulations can be bypassed by simply doing the work in a more permissive country, or making use of an institutionally corrupt political system. So, yeah, not very optimistic.
    I believe that before AGI can get to the level where it might be an actual threat, the job market will already have been wrecked and permanently changed by non-general AIs that outperform humans on every level. As we all know, that has already started.

  • @URLWK
    @URLWK 1 year ago +5

    Eliezer is a good thinker because he steps outside the conclusions we as humans come to and looks at things from an alien AI's perspective. The needs we have and our forward motion are entirely based on a bias, and even in our depth we do not always see the big picture beyond our own utopia.

    • @ninaromm5491
      @ninaromm5491 1 year ago

      LWK. Yes! Embrace the alien in the alien!
      So difficult to do. 😅😅

  • @OxenHandler
    @OxenHandler 7 months ago

    Wondering how Eliezer feels about the "sentient life" in Gaza that no longer exists since this interview. @3:05:00

  • @shellOceans3083
    @shellOceans3083 1 year ago

    Thank you for this very important conversation! I hope this message finds a way into everybody's thinking very soon!

  • @phineasndhlau7618
    @phineasndhlau7618 10 months ago

    Thought-provoking perspectives. I think how AI turns out will be resolved in a battle of AI platforms, good versus bad from the self-interested view of humanity. If humanity is essentially good, then the good guys will win. There will be much collateral damage, however, as in all conflicts. Natural selection will still be in full bloom.