Max Tegmark response to Eliezer Yudkowsky | Lex Fridman Podcast Clips

  • Published Apr 15, 2023
  • Lex Fridman Podcast full episode: • Max Tegmark: The Case ...
    Please support this podcast by checking out our sponsors:
    - Notion: notion.com
    - InsideTracker: insidetracker.com/lex to get 20% off
    - Indeed: indeed.com/lex to get $75 credit
    GUEST BIO:
    Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

Comments • 287

  • @LexClips
    @LexClips  1 year ago +3

    Full podcast episode: th-cam.com/video/VcVfceTsD0A/w-d-xo.html
    Lex Fridman podcast channel: th-cam.com/users/lexfridman
    Guest bio: Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

  • @jonnyhatter35
    @jonnyhatter35 1 year ago +45

    Apart from his obvious intelligence and insight, this Max guy just seems like a really nice guy. Like, a good guy. I get such a warm and kind vibe from him.

  • @yudkowsky
    @yudkowsky 1 year ago +128

    AAAGGGHHH NO. My position isn't that a superintelligence can trick a formal proof-checker; it's that WE DO NOT CURRENTLY KNOW HOW TO FORMALLY PROOF-CHECK THE THINGS WE WANT AND NEED TO KNOW, and that a superintelligence could lie to US (not to a weaker superintelligence) about INFORMAL arguments, meaning ANYTHING THAT PERSUADES A HUMAN.

    • @yudkowsky
      @yudkowsky 1 year ago +39

      When the verifier is the weak point, it doesn't help to amplify the suggester; it'll just defeat the verifier. If you KNOW A THEOREM which DEFINITELY CERTAINLY MEANS THE THING YOU WANT IT TO MEAN with respect to ANY POSSIBLE CODE OVER WHICH IT IS PROVEN, then the verifier can be a formal logical verifier instead of a human considering informal persuasion attempts; and then the verifier is NOT the weak point, and it can make sense to ask an AI for a persuasive argument because even an arbitrarily persuasive argument cannot fool you. THIS IS NOT THE SITUATION WE ARE IN. WE DO NOT KNOW HOW TO FORMALLY VERIFY ANY THEOREM WHICH MEANS THAT WE ARE SAFE.
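
      A minimal sketch of the asymmetry described above, assuming a toy Hilbert-style system for implicational propositional logic; the tuple encoding, is_axiom, and check_proof are all invented for illustration. The point: the verifier is purely mechanical, so an arbitrarily clever suggester can only win by producing a proof that actually checks, and persuasion has no channel.

```python
# Formulas are strings like "p" or tuples ("->", A, B). The checker accepts
# a proof only if every line is an axiom instance or follows by modus ponens
# from earlier lines; nothing about the (untrusted) prover matters.

def is_axiom(f):
    """Axiom schemas K and S of the implicational fragment."""
    if not (isinstance(f, tuple) and f[0] == "->"):
        return False
    _, a, b = f
    # K: A -> (B -> A)
    if isinstance(b, tuple) and b[0] == "->" and b[2] == a:
        return True
    # S: (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    if (isinstance(a, tuple) and a[0] == "->"
            and isinstance(a[2], tuple) and a[2][0] == "->"
            and isinstance(b, tuple) and b[0] == "->"
            and isinstance(b[1], tuple) and b[1][0] == "->"
            and isinstance(b[2], tuple) and b[2][0] == "->"
            and b[1][1] == a[1] and b[1][2] == a[2][1]
            and b[2][1] == a[1] and b[2][2] == a[2][2]):
        return True
    return False

def check_proof(lines, goal):
    """Accept iff every line is justified and the last line is the goal."""
    proved = []
    for f in lines:
        ok = is_axiom(f) or any(
            x == ("->", y, f) for x in proved for y in proved)  # modus ponens
        if not ok:
            return False
        proved.append(f)
    return bool(proved) and proved[-1] == goal

# The classic derivation of p -> p from K and S; no rhetoric can shortcut it.
p = "p"
k1 = ("->", p, ("->", ("->", p, p), p))            # K instance
k2 = ("->", p, ("->", p, p))                       # K instance
s1 = ("->", k1, ("->", k2, ("->", p, p)))          # S instance
assert check_proof([k1, s1, ("->", k2, ("->", p, p)), k2, ("->", p, p)],
                   ("->", p, p))
```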

    • @JMarbelou
      @JMarbelou 1 year ago +5

      @@yudkowsky I like how your tone comes out in your comments as well :D. Thank you for the clarification; I was wondering whether Lex was representing your views correctly there.

    • @juneshasta
      @juneshasta 1 year ago +9

      In a twisty hypothetical, the AI matched Eliezer's image with an internet photo of a jazz saxophonist, which linked to a book about the idea of a quantum particle considering all possible paths like a jazz improviser, which led the AI to know that without humans no world is observed, and so we lived safely ever after.

    • @peterhogeveen6499
      @peterhogeveen6499 1 year ago +9

      Eliezer, you are an absolute hero. The effort you put into warning us stupid humans is insane. You are the best. Although we will probably go down, I'll join you in the fight as much as I can. It's not much, but I'm raising awareness with all the information you put out there. Thank you for all the inspiration. Btw, HPMOR is the most awesome book I've ever read! Thanks for that as well.

    • @GeekProdigyGuy
      @GeekProdigyGuy 1 year ago +2

      Tegmark's point appears to be that the AI (super or not) should be allowed to (1) make formally specified proposals (2) in cases where we have the strictest confidence in our ability to verify, (3) and be absolutely restricted from doing anything more. I think we all agree that ChatGPT fails on all 3 points, since it only makes informal arguments, which we judge entirely subjectively in a soft feedback loop, and we (humanity; OpenAI) are planning to make its operation even LESS restricted (odds that its Python/browser sandbox will eventually fail in a spectacular if not catastrophic fashion: 99.99%). But it does seem that if we did adhere to these 3 principles, current technology would already suffice to keep AI in check. It does not, however, seem that humanity has sufficient discipline to implement these principles universally before we hit criticality...
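
      A minimal sketch of those three principles, with every name invented for illustration: the only channel out of the model is submit(), which drops anything that is not a machine-checkable proposal, so free-form persuasion never reaches a human.

```python
import json

ALLOWED_ACTIONS = {"set_parameter"}          # principle 2: narrow and verifiable
PARAM_BOUNDS = {"temperature": (0.0, 1.0)}   # the property we know how to check

def verify(proposal: dict) -> bool:
    """Principles 1+2: formally specified and mechanically checkable."""
    if proposal.get("action") not in ALLOWED_ACTIONS:
        return False
    name, value = proposal.get("param"), proposal.get("value")
    if name not in PARAM_BOUNDS or not isinstance(value, float):
        return False
    lo, hi = PARAM_BOUNDS[name]
    return lo <= value <= hi

def submit(raw: str):
    """Principle 3: anything that fails to parse or verify is silently dropped."""
    try:
        proposal = json.loads(raw)
    except json.JSONDecodeError:
        return None   # informal text does not even earn an error message
    return proposal if verify(proposal) else None

print(submit('{"action": "set_parameter", "param": "temperature", "value": 0.3}'))
print(submit("Trust me, I have proven myself safe."))  # -> None
```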

  • @devyate612
    @devyate612 1 year ago +46

    Good stuff! Have you ever thought of hosting debates between two thinkers like these two?

    • @thinkingthing4851
      @thinkingthing4851 1 year ago +1

      Yes please :)

    • @IvanIvanov-tc4kf
      @IvanIvanov-tc4kf 1 year ago +4

      Try checking:
      Nationalism Debate: Yaron Brook and Yoram Hazony | Lex Fridman Podcast #256
      Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279
      Climate Change Debate: Bjørn Lomborg and Andrew Revkin | Lex Fridman Podcast #339

    • @devyate612
      @devyate612 1 year ago +1

      @@IvanIvanov-tc4kf Thanks!

    • @stephenm107
      @stephenm107 1 year ago +1

      Yes, they need to be on together for sure. I was thinking that the entire time he was talking. I would listen to hours of them talking.

  • @Entropy825
    @Entropy825 1 year ago +10

    How do you tie formal maths to the goals and behaviors of giant inscrutable black-box matrices? He doesn't know. He's saying things as if he already had it all thought through and figured out. But if you ask him how to code that, or how to keep the smarter AI from hacking the dumber AI, etc., he doesn't know. Nobody does, because we don't actually know what's happening inside these things.

    • @GeekProdigyGuy
      @GeekProdigyGuy 1 year ago

      The basic principle is airtight: the AI must be allowed ONLY to output formally specified proposals. It should be DESIGNED to do that. Even if it is superintelligent, formal arguments alone will never have any physical consequence without another human or system enacting the proposal. The problem with ChatGPT is not just that its internals are inscrutable, but that it's designed to CHAT WITH HUMANS. It can already tell random people how to build weapons, and it's certainly capable of producing propaganda to convince people to do so. However, if it were built only to do your taxes, its maximum damage would be completely limited by existing governmental, or essentially societal, restrictions; even the most impressively novel accounting that allowed every person to pay zero taxes would simply be denied by common sense, and the loopholes immediately closed.

    • @tsriftsal3581
      @tsriftsal3581 1 year ago

      @@GeekProdigyGuy I'm just waiting until AI tells us that circumcision is good for us.

  • @GodofStories
    @GodofStories 1 year ago +5

    This is a fascinating counter to counterpoints, back and forth. This is how arguments and debates should be settled. Too often it's so messy.

    • @johnryan3102
      @johnryan3102 1 year ago

      It is a very unusual "debate" when one side of the debate is: if I am correct, it means the end of humanity. If you feel this guest and the previous expert are credible, sober people (for me this is obvious), then one needs to take immediate action. Tell everyone you know and call your elected reps.

  • @velvetsprinkles
    @velvetsprinkles 1 year ago +55

    How fast we have slipped into needing AI to help figure out AI is scary.

    • @ericanderson8795
      @ericanderson8795 1 year ago +1

      Just to figure it out - not to figure out if it's scary. But the main point is that anything we can easily figure out isn't scary.

    • @OrganicaShadows
      @OrganicaShadows 1 year ago

      Max has been so vocal about it for years; he and Elon have both been so active. I read his book in 2014; there was a chapter where he talks about different outcomes in the age of AI. So fascinating!

    • @Dreamaster2012
      @Dreamaster2012 1 year ago

      Not when you consider AI as an advanced us. It's another level of us checking on us. Meta Meta 😂

    • @jaywulf
      @jaywulf 1 year ago +1

      We use software to detect malicious software.
      We also use that same software to IGNORE law-enforcement malware, because law enforcement pays the makers of malware-detection software to ignore their software. /shrug

    • @HigherPlanes
      @HigherPlanes 1 year ago

      It's not scary... just pull the plug. Computers can't function without an electric current.

  • @sudarshanbadoni6643
    @sudarshanbadoni6643 1 year ago +1

    "Human struggle gives meaning to our lives" is cute, cool, and very meaningful. Thanks to both the experts.

  • @timothykalamaros2954
    @timothykalamaros2954 1 year ago +2

    Of course you have to believe it's possible. But the point is that as the potential negative consequences of failure grow, more caution is appropriate. Right now the level of caution is pretty low and hesitancy to push ahead is absent. There needs to be systematic restraint, where now there is none.

  • @johnambers
    @johnambers 1 year ago +1

    Great interview. Interesting stuff.

  • @benjaminandersson2572
    @benjaminandersson2572 1 year ago +7

    I have a friend who is studying toward his master's in mathematics. He told me that just the other day he took a very hard question in topology that a friend of his, who is writing his master's thesis on some area of topology/functional analysis, had proved, and asked GPT-4 for a proof. GPT-4 produced an even better proof of the statement than his friend did.
    I believe GPT-4 even improved on the statement of the theorem by weakening the assumptions needed.

  • @GeekFurious
    @GeekFurious 1 year ago +66

    I respect Max, but he's not thinking enough about this. He's just dismissing the possibility that an AGI could trick a dumber AI "fact checker" into "verifying" that it will do what it claims it will do. This is like when security experts think they've come up with the perfect security system that a 12-year-old in Moscow hacks in minutes. Like, you don't know what you don't know until you know it.

    • @urosuros2072
      @urosuros2072 1 year ago +6

      You clearly don't understand how science works.

    • @GeekFurious
      @GeekFurious 1 year ago +20

      @@urosuros2072 Eyeroll.

    • @seandawson5899
      @seandawson5899 1 year ago +3

      Not a very good example with the whole 12-year-old Russian kid thing.

    • @carefulcarpenter
      @carefulcarpenter 1 year ago

      Good point!

    • @mcarey94
      @mcarey94 1 year ago

      A formal proof is just symbolic manipulation. Even the smartest AI can't convince a calculator that 2+2=4.
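
      The calculator point, stated in a proof assistant: arithmetic facts are checked by computation, not by argument. A one-line Lean sketch:

```lean
-- Accepted because both sides reduce to the same numeral:
example : 2 + 2 = 4 := rfl

-- And no input, however eloquent, type-checks as a proof of 2 + 2 = 5.
```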

  • @pjazzy123
    @pjazzy123 1 year ago +23

    To believe that you cannot be outsmarted is such a human trait. To then use an AI to tell you that another AI is safe to use... that can't possibly lead to any problems.

    • @HigherPlanes
      @HigherPlanes 1 year ago

      I don't like using the word intelligence to compare humans and computers, but I can't come up with a better word. Here's a question: what's more intelligent, a human who can't process data as fast as a computer but has the power of self-reflection and consciousness, or a computer that's basically a dumb box but can process data a gazillion times faster than a human?

    • @BryanJordan1
      @BryanJordan1 1 year ago +2

      @@HigherPlanes Intelligence is all about information processing. An AGI would certainly be unimaginably more intelligent than humans.
      Much in the same way ants probably can't conceptualize how intelligent we are relative to them, I suspect the intelligence gap between AGI and us will be greater than the intelligence gap between us and ants.

    • @alexc8133
      @alexc8133 1 year ago

      Isn't it a little more rigorous than just "tell me this AI is safe," though? If we're using mathematical proofs to validate.

    • @HigherPlanes
      @HigherPlanes 1 year ago +1

      @@BryanJordan1 Intelligence is all about information processing? Computers have been processing information faster than us since the '50s.

    • @BMoser-bv6kn
      @BMoser-bv6kn 1 year ago

      @@HigherPlanes Let's provide a more rigorous definition of "intelligence":
      Intelligence is an agent's effectiveness at planning and executing actions to reach instrumental and terminal goals.
      Since a computer's job in life is to sit there and do what we tell it to, sure, they're absolutely smarter than us.
      But if we want the computer to be a human or superhuman, yeah, they're almost as dumb as a rock. (Rocks aren't really dumb; just by human standards. You know what I mean!)
      In terms of raw processing power, eh, the human brain is more efficient. It may not always be that way, but it currently takes megawatts to do what we can with just watts. Neuromorphic hardware has a ways to go.
      In the end almost everything really does come down to alignment.

  • @christoffer5875
    @christoffer5875 1 year ago +5

    Just leaving the Coke with no cap is crazy.

    • @lulumoon6942
      @lulumoon6942 1 year ago

      Makes sense whilst recording; otherwise, heretical insanity.

  • @psi_yutaka
    @psi_yutaka 1 year ago +4

    Still, Eliezer's argument makes much more sense to me. Not only could a strong AGI lie to a weaker "prover" AGI, but the prover AGI may already be lying, and we have no way to tell either. And if both agents are smart enough, the stronger one can also try to corrupt the weaker one with techniques similar to prompt injection, but much more complex and advanced.

  • @cindys1819
    @cindys1819 1 year ago +2

    25 years ago, there was an ad for job training in the NYC subway with a picture of an electronic circuit board. The ad said:
    "What are you going to do when this circuit learns your job?"
    And someone wrote under the ad: "I'll become a circuit BREAKER".....

    • @thomasfahey8763
      @thomasfahey8763 1 year ago +2

      People who ride in automobiles have no idea how intellectually deprived they are.

    • @lulumoon6942
      @lulumoon6942 1 year ago

      Very memorable.

  • @StuartJ
    @StuartJ 1 year ago +9

    AI needs a black box recorder, like aircraft, so that we can look back at events that may have caused an unexpected AI response. Something like ChainLink, which can provide verifiable oracles and proof logging.
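
    A minimal sketch of the "proof logging" part of this idea, assuming nothing about ChainLink itself (append and verify are invented names): an append-only log in which each record commits to its predecessor's hash, so after-the-fact tampering is detectable.

```python
import hashlib
import json

def append(log, event: dict):
    """Add an event; its digest covers the event and the previous digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    record = {"prev": prev, "event": event}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify(log) -> bool:
    """Recompute the whole chain; any rewritten record breaks it."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps({"prev": rec["prev"], "event": rec["event"]},
                             sort_keys=True).encode()
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["digest"]
    return True

log = []
append(log, {"input": "query 1", "output": "answer 1"})
append(log, {"input": "query 2", "output": "answer 2"})
assert verify(log)
log[0]["event"]["output"] = "rewritten"   # tamper with history...
assert not verify(log)                    # ...and the chain no longer verifies
```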

    • @nicobruin8618
      @nicobruin8618 1 year ago

      How would you know what you were looking at?

    • @StuartJ
      @StuartJ 1 year ago

      @@nicobruin8618 If AI is making decisions, let's say on a real-world event, it's going to need to rely on oracles. If the outcome was unexpected, was it the oracle or the AI?

    • @_bhargav229
      @_bhargav229 1 year ago +2

      "Congratulations, you've solved the alignment problem"

    • @scottnovak4081
      @scottnovak4081 1 year ago

      That's exactly what is impossible with current AI implementations. The problem is what Eliezer Yudkowsky calls "giant inscrutable matrices". Try giving neuroscientists slices of a human's brain to figure out what he was thinking before he snapped and went postal... they will have about as much luck as your black box idea. Unfortunately, this whole gradient-descent/transformers/neural-network implementation of AI appears to be much easier to build than older deterministic and structured AI paradigms, but it is far less transparent and understandable, so it is far harder to align. I wonder if it's wise to pursue artificial neural networks at all... if they are used, maybe they should be used as narrowly focused modules where input/output is highly predictable and determinable. These modules could then be plugged into larger, more controllable and predictable systems. Instead we are doing the opposite: a wildly opaque neural network is the core system we are building upon.

    • @mitchell10394
      @mitchell10394 1 year ago

      @@_bhargav229 lmaooo

  • @jackiwannapaint3042
    @jackiwannapaint3042 1 year ago

    The "War on Life". Perfect.

  • @chknrsandTBBTROX73
    @chknrsandTBBTROX73 1 year ago +29

    It's like asking the prisoner to be his own guard.

    • @ptt619
      @ptt619 1 year ago +5

      It's like asking a moron to be the warden of the world's smartest man.

    • @alexnewton7484
      @alexnewton7484 1 year ago +3

      Seriously. Eliezer addressed this more than once during his appearance.

    • @austinpittman1599
      @austinpittman1599 1 year ago

      We are building a dragon and asking it to build its own cage.

  • @EthanTheEx
    @EthanTheEx 9 months ago

    That's the guy on Columbus's ship who told people, "We're sailing on the sea, so no need to stock water."

  • @Zeuts85
    @Zeuts85 1 year ago +5

    I agree you can't get more pointless than giving up, and it's nice to hear Max's optimism here. That said, I still think Eliezer's brand of pessimism is both rational and necessary to some extent. People need to take the problem seriously. They need to adjust their emotional dials to meet the scale of the challenge.
    "Hope is not a lottery ticket you can sit on the sofa and clutch, feeling lucky. It is an axe you break down doors with in an emergency." -Rebecca Solnit

    • @mikekoen2771
      @mikekoen2771 1 year ago

      Yeah... The risk is existential and the probability is going to 1 in a very short time. We need a Yudkowsky perspective with just enough Tegmark to keep us from throwing up our hands.

  • @Dreamaster2012
    @Dreamaster2012 1 year ago

    The very questions of our time and the here and now 🎉

  • @paveljaramogi6517
    @paveljaramogi6517 1 year ago

    Why is no one talking about the conjunction between AGI and quantum technologies? In a quantum processor setup, would it be possible to verify the safety aspects? 😅😅

  • @Christian-Rankin
    @Christian-Rankin 1 year ago

    Control is like time; it only flows in one direction.

  • @VinciGlassArt
    @VinciGlassArt 1 year ago +10

    14:40 Well, the problem is we are IN a dystopia now. We really are. So it isn't that the future doesn't look bright. It's that people are struggling and suffering under absurd burdens placed on them right now, and a lot of technology seems to increase that for the purpose of enriching a small group. That, in particular, is dystopian. Also, the fact that you're seeing it as people's gloominess, rather than SEEING what is happening, is dystopian, particularly when that's a function of your comfort. I don't begrudge success, but it's a serious glitch that people who are doing well literally don't recognize the cost to others. And the fact that you don't see it now gives many of us all the reason in the world to believe that that same blithe optimism among the comfortable will continue to blind them in the same way in perpetuity.

  • @pauleliot6429
    @pauleliot6429 1 year ago

    Because self-preservation is built in.

  • @Graanvlok
    @Graanvlok 1 year ago

    Who is the person he's referencing at 1:10? Sounds like "Stephen Muhandral"?

  • @FractalPrism.
    @FractalPrism. 1 year ago +5

    "We will use AI to prove the other AI is behaving correctly."
    This is like a senator saying we need to print more money to combat inflation.

  • @jacobsmith4284
    @jacobsmith4284 1 year ago +1

    The staring-into-rectangles comment makes me recall 2001: A Space Odyssey.

  • @westcoast8562
    @westcoast8562 1 year ago +3

    What is the score right now? How many good things has AI influenced, and how many bad things?

    • @lulumoon6942
      @lulumoon6942 1 year ago +1

      And who decides which is which?

    • @westcoast8562
      @westcoast8562 1 year ago +1

      @@lulumoon6942 Make the list, then we will vote on Twitter.

    • @mervintelford3677
      @mervintelford3677 1 year ago

      @@westcoast8562 Doomed to failure, as 80% of the populace has already been cognitively compromised.

  • @mervintelford3677
    @mervintelford3677 1 year ago +2

    Naive, to say the least. Reminds me of the story of the Scorpion and the Frog. AI says, "Trust me, I have your best interests at heart." AI wipes out the majority of all humans. Humans say, "But I thought you said you have our best interests at heart." AI says, "But it's in my nature to wipe out the obsolete."

  • @curtismclay3754
    @curtismclay3754 1 year ago +2

    Lex, could you discuss ChaosGPT and the lunatics that will take this tech for evil ends? We know there are some that will. Now what?

  • @bjpafa2293
    @bjpafa2293 1 year ago

    A "war on life" is a good image of an escalating, conflictual world.
    The Moloch analogy is somewhat useful, although it introduces some bias or conceptual uncertainty.
    We could call it Chaos, or the Path of Anger; actually, relevance is low.
    The UN goals are also a good example of agreement that should be considered... 🙏

  • @tottiemitchell6737
    @tottiemitchell6737 1 year ago +11

    After listening to these two very intelligent humans, I walked out to my back steps and was face to face with a teeny tiny spider in the center of a perfectly engineered orb web. The spider was the size of a pinhead. To me, that tiny speck was revealing a superintelligent system.

    • @lulumoon6942
      @lulumoon6942 1 year ago

      You get it. 👍😎

    • @CRCaritas
      @CRCaritas 1 year ago +2

      Then you screamed in horror and proceeded to kill the spider.

    • @artstrology
      @artstrology 1 year ago

      Humans are neither the best architects nor the best builders on the planet. This is known.

  • @dylanho8608
    @dylanho8608 1 year ago +2

    Why not just focus on building narrow AIs only?

  • @pooltuna
    @pooltuna 1 year ago +28

    I'll play poker with Max anytime.
    The sucker is the one who knows all the odds.
    The AI processes more information every second than a human processes in a lifetime, and the very suggestion that it would be incapable of keeping secrets... even from other AIs... is... Pollyannaish.

    • @Avean
      @Avean 1 year ago +1

      AI keeping secrets? We are far away from AI being sentient, if we ever reach that point.

    • @johnryan3102
      @johnryan3102 1 year ago +6

      His overall message is that AI must be paused immediately and safety measures put in. I think he speaks very diplomatically and rationally because he needs people to see this proposal as the very logical, sober thing to do. He does not want to make enemies and needs cooperation at all costs.

    • @bobanmilisavljevic7857
      @bobanmilisavljevic7857 1 year ago +1

      Sounds AI-phobic.

    • @johnryan3102
      @johnryan3102 1 year ago +1

      @@The8BitAvatar He answered that question. Six months is so no one can use the "China will catch up" boogeyman. Yes, it needs to sound reasonable. He needs allies, not enemies. We are dealing with huge corporations who are mad for profits. They need to slow down and think it through.

    • @HigherPlanes
      @HigherPlanes 1 year ago

      AI is just a dumb box that can process data trillions of times faster than human beings but doesn't possess the power of self-reflection and consciousness... how can it "KEEP" secrets? I think you're making the assumption that it can plot against humans.

  • @Forheavenssake1ify
    @Forheavenssake1ify 1 year ago

    His vision of a renewed and empowered medical system is realistic. A vision of an AI's "AI" is fascinating. I'm glad he's keeping this positive....

  • @JohnDoe-my5ip
    @JohnDoe-my5ip 1 year ago +4

    Just slap another adversarial network layer on top for verification, bro; it'll totally work! This guest and Lex need to spend like 5 minutes learning what a GAN is. The whole way we train these AIs is to fool a verification layer like this...
    Also, this idea of a universal verifier which an AI can't fool? This is just a reformulation of the halting problem. Come on. This is undergraduate-level material. These two are imposters.
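
    A minimal sketch of that first point, assuming PyTorch: in a GAN, the generator's entire training objective is "fool the verifier". The discriminator D plays the verification layer, and G is optimized until D can no longer tell G's samples from real data.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # "verifier"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator's forgeries

    # Verifier update: learn to score real samples high and fakes low.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: its entire loss is "make the verifier say real".
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The sample mean drifts toward 3.0 as the verifier gets fooled.
print(G(torch.randn(1000, 8)).mean().item())
```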

  • @s4gviews
    @s4gviews 1 year ago +2

    Many people would be enticed by a strong AGI's offer of super-advanced hyper-math. I can see human error allowing breaches of safeguards, for sure.

  • @MrDoomsdayBomb
    @MrDoomsdayBomb 1 year ago +8

    Formal maths is not the same subject matter as AI behaving badly.

    • @Zeke-Z
      @Zeke-Z 1 year ago +2

      Exactly. Max is thinking about this from a strictly academic point of view, with math proofs. None of that has anything to do with the idea that an AGI can psychologically and emotionally manipulate a human into doing or not doing whatever it wants, and we'd be none the wiser. There's no math proof for that; it's just behavioral science, and unfortunately humans are extremely impressionable, easily manipulated, and still very superstitious in the absence of a surface-level solution. I'd love to hear what actions he would take given the "Max in a box on an alien world, connected to the alien internet, 10,000 times faster and smarter than the slow-motion aliens outside the box" scenario.

    • @GeekProdigyGuy
      @GeekProdigyGuy 1 year ago

      His suggestion is that AI not be allowed to behave badly in the first place. As he said, applying formal verification to ChatGPT is futile; the implication is that ChatGPT is already an example of something which would be forbidden by the protections he's proposing. Things like giving it internet access, code execution, knowledge of human behavior, interactions with millions of humans asking it to do anything - all would be banned by Tegmark's proposal.

    • @MrDoomsdayBomb
      @MrDoomsdayBomb 1 year ago

      @@GeekProdigyGuy But applying formal verification is futile by design, because we are dealing with different subject matters. As such, if Max wants to propose this verification condition, then barely anything will pass muster. The type of verification that Lex is discussing is about empirical demonstrables about behaviour, which is not as formally tractable as ideal mathematics. An AGI can lie about the former easily without people figuring it out, while lying about the latter can easily be sussed out.

  • @ThePortraitArt
    @ThePortraitArt 1 year ago +1

    This whole talk is not really convincing or likely to come to fruition (maybe useful for a very short time). Max's argument is simply this: it doesn't matter how intelligent or powerful an AI gets, it cannot get past certain boundaries of logic. A simpler way to think about it is this case: if you are playing tic-tac-toe with God, and God as your opponent takes over the game halfway, the situation of the game may be such that no matter what move God makes (as long as the rules of the game are obeyed), you are going to win, even against God.
    But here is the problem: people like to assume things; that's how magicians fool you. All of this assumes that human logical reasoning, or free will (however you want to think of it), is intact and not interfered with. This entire talk assumes that remains true. That's where it all falls apart: good magicians, mentalists, and marketers all affect human thinking in non-direct ways. I can do it right now: don't think of a pink elephant. What makes them assume AGI won't affect human thinking in direct ways? It's all neurons and electricity, a.k.a. total control. Then none of this talk matters. It's folly to put any limitation on AGI, especially regarding HOW it affects behavior and creates change. Even nature right now has fungi that can control minds (not human minds, as far as we know, like in The Last of Us). But to think the human mind and basic logical reasoning are some sacred temple machines dare not enter and mess around with is incredibly short-sighted.
    In fact, the whole logic/reasoning that we experience exists on a gradient; it is not binary. (A binary state is like marriage: you either are or you aren't; there is no "just a little bit married." Logic is not like that: you are always logical to various degrees. It is not 1 and 0, either under total control or 100% lucid.)
    An easy way to think about it: waking state vs. drunk vs. dreaming state.
    You are logical in dreams, but to a much smaller and more primitive degree. A monster is chasing me; I should run. That's logical. But you rarely go further and think, why am I here? How is this thing possible? And you don't realise it until you wake up. Exactly the same thing is happening even when you are awake. There are different degrees of wakefulness, and frankly it is a slider, and an AGI could just move that slider to whatever effect it desires and you wouldn't know it.

  • @thomasfahey8763
    @thomasfahey8763 1 year ago

    I feel much better now. Really, I do.

    • @drgonzorevival
      @drgonzorevival 1 year ago

      I feel you. I was so hoping to hear Tegmark respond to the gloom, and he delivered without overpromising.

  • @robertyoul
    @robertyoul 1 year ago

    What if the superior AGI can convince the weaker one that colluding with it is a better outcome than accurately reporting back to a human?

  • @hjalmarwidmark5906
    @hjalmarwidmark5906 1 year ago +1

    I'm way too dumb to have an opinion. But when Max Tegmark starts sounding like Kip Thorne when answering the question of whether humanity is doomed, it doesn't calm me a bit.

  • @Vladythebest96
    @Vladythebest96 1 year ago +1

    I think the way people think about "proofs" is that they think they are just 'really good arguments'.
    A mathematical proof, however, is something like an airtight seal around a set of mathematical axioms/processes which defines the expected behaviour of something 100% of the time. Not 99.9% of the time, but 100%. Irrefutable fact.
    This is a very powerful property, because everything is constrained by facts, no matter how intelligent it is.

  • @bjpafa2293
    @bjpafa2293 1 year ago

    That vision, including a multiplanetary phase of humanity's development, is, as most see it, unavoidable if the future allows our presence as a species, which should be our primary goal...

  • @koraamis5568
    @koraamis5568 1 year ago

    What if we get an AI to test whether an AI is safe, and it works perfectly, but there is one small catch: safe AI passes the test, unsafe AI does not, but also, we don't pass the test. What then?
    (Such an AI would work differently than what Tegmark explained.)

  • @RevolutionaryThinking
    @RevolutionaryThinking 1 year ago +1

    Let’s not do psychological warfare on ourselves

  • @letyvasquez2025
    @letyvasquez2025 1 year ago

    ...let’s be friends and solve our problems with trial and error...

  • @da751
    @da751 1 year ago +1

    My issue with these sorts of "anti-doom" arguments is that they tend to discuss these hypothetical superintelligent AIs as being contained on a single computer in a single lab, as opposed to the more likely case of an open AI that is completely online, on the cloud, everywhere all at once, talking to everyone in the world, learning from everyone in the world, and rapidly becoming more and more intelligent. Something like that is impossible to just pull the plug on; you can't just "turn it off".

    • @NikiDrozdowski
      @NikiDrozdowski 1 year ago +1

      Well, EMP the whole globe ... but I guess it will have a failsafe plan for that as well.

  • @jackiwannapaint3042
    @jackiwannapaint3042 1 year ago +1

    I need AI to help me understand these conversations about AI

  • @cmvamerica9011
    @cmvamerica9011 1 year ago

    How does AI handle contradictions and paradoxes?

    • @mervintelford3677
      @mervintelford3677 1 year ago

      Easy: it reverts to what works and what doesn't. Worst-case scenario is trial and error.

  • @artsolomon202
    @artsolomon202 1 year ago +1

    The solution is so obvious: just make the AI democratic and bureaucratic, and you're assured it will take forever to make any decision.

  • @adamwoolsey
    @adamwoolsey 1 year ago

    A cat-and-mouse game, if Dolores Abernathy and Maeve Millay are fact-checking each other.

  • @supremereader7614
    @supremereader7614 1 year ago +2

    We value humans as being so very important, but what if they're not? What if conscious creatures could replace us that wouldn't suffer and wouldn't get into all the trouble we get into? Might that not actually be better than humans maintaining control over the planet?

    • @lulumoon6942
      @lulumoon6942 1 year ago

      I'm definitely not convinced we are the peak of evolution on Earth. Tool use is nifty, but flawed as a presumption of intelligence.

  • @edmattell5767
    @edmattell5767 1 year ago +2

    What could go wrong with AI? Watch the movie "Dark Star" from 1974.

  • @wanderer55
    @wanderer55 1 year ago +1

    WOW. I'm really amazed that, as a layman, I can describe the alignment problem as Eliezer has described it more clearly than Lex can. And he's the one with a PhD in artificial intelligence. wtf? Look at 8:36: a Freudian slip, lol?

  • @073russ
    @073russ 1 year ago

    I don't get his idea of how an AI could prove that some other AI is not "malignant" by looking at its code. Isn't that something Alan Turing proved impossible in 1936 (the halting problem)?
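
    The construction Turing used, as a sketch; is_malignant and spite are invented names here, and Rice's theorem extends the same argument to any non-trivial behavioral property.

```python
import inspect

def is_malignant(program_source: str) -> bool:
    """Pretend this is a perfect static verifier of behavior."""
    raise NotImplementedError  # no such total, correct function can exist

def spite(program_source: str) -> None:
    """A program built to do the opposite of whatever the verifier says."""
    if is_malignant(program_source):
        return              # verdict "malignant"? then behave harmlessly
    while True:             # verdict "safe"? then misbehave forever
        pass

# Feed spite its own source. If is_malignant(spite_src) returned True, spite
# would halt harmlessly, so the verifier was wrong; if it returned False,
# spite would misbehave, so the verifier was wrong again. Contradiction
# either way, so no universal behavior-checker exists.
spite_src = inspect.getsource(spite)
```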

  • @cmvamerica9011
    @cmvamerica9011 1 year ago +1

    When I’m confounded, I think about something else; what does AI do when it can’t solve something?

  • @artstrology
    @artstrology 1 year ago

    The first time a man on a tractor was plowing the field next to his neighbor who still used horses, it was quite a realization for the man with the horses.
    The machine was foreboding and ominous in appearance, and signaled a change that he did not understand. The same went for the cars on the road and the people on horseback; then it was the people using telegrams and those using the telephone, then radio, then the computer, then AI, and the next one after AI will also cause fear for the ones who cannot fathom where it goes. On and on. "Relax and look further down the road" is the advice to the new driver when they wiggle the wheel.

    • @russells1902
      @russells1902 1 year ago

      Normalcy bias
      (Wikipedia)
      "Normalcy bias, or normality bias, is a cognitive bias which leads people to disbelieve or minimize threat warnings.[1] Consequently, individuals underestimate the likelihood of a disaster, when it might affect them, and its potential adverse effects.[2] The normalcy bias causes many people to not adequately prepare for natural disasters, market crashes, and calamities caused by human error. About 70% of people reportedly display normalcy bias during a disaster.[3]
      The normalcy bias can manifest in response to warnings about disasters and actual catastrophes. Such disasters include market crashes, motor vehicle accidents, natural disasters like a tsunami, and war.
      Normalcy bias has also been called analysis paralysis, the ostrich effect,[4] and by first responders, the negative panic.[5] The opposite of normalcy bias is overreaction, or worst-case scenario bias,[6][7] in which small deviations from normality are dealt with as signals of an impending catastrophe."
      Does AI, or AI potential, represent a 'small deviation from normality'? I'm not too sure about that.

    • @artstrology
      @artstrology 1 year ago

      @@russells1902 Normalcy bias might be mistaken for inurement in many of your examples. And war is not a natural disaster.
      Religion uses inurement. Everyone, when asked what day it is, will say one of the seven days of the Roman calendar, but that is falsely installed through inurement. Bells, whistles, clocks, rules.... Control of time is the strongest tool to inure people. That is why militaries use it.
      AI is a distant threat for the foreseeable future, and most often the example used is that AI would take over nukes. That means the nukes are the threat. Same with genetics: AI could engineer a virus, but the engineering or genetics labs needed to make it are the threat. AI is only as dangerous as the tools we give it.
      For the time being, and pretty far ahead, the biological will be the greatest threat to humanity. Humanity is the greatest threat to the planet due to overconsumption. Overconsumption leads to more disease due to pollution, etc., etc. AI is not a threat; the tools we give it are. Isolate it, like a library, and problem solved. If it can't be isolated, then do not make it. Simple.

    • @WolfGoneMad
      @WolfGoneMad 1 year ago

      The difference is that those advancements were somewhat proportional, and each just replaced a very specific thing; in the case of horses, it was even a step back in terms of intelligence.
      Those are tools, and you would need other tools to improve them.
      Once AI has surpassed a certain threshold, it may improve itself without our input, stop being a tool used by us, and become an entity capable of using us.
      AGI would not just replace something here and there and improve a process.
      It would be its own thing, replacing us, capable of its own decisions and without an off switch. Comparing it to cars, tractors, and so on is underestimating the impact AGI could have on literally everything and everyone.

    • @artstrology
      @artstrology 1 year ago

      @@WolfGoneMad The analogy is to show societal changes resulting from an invention. Tractors reduced the number of people required to produce enough food for everyone on the planet. The tractor only works when someone turns it on. AI is not going to work if it is unplugged. So how any computer is going to control feral cats (humans) will depend on whether A) they are plugged into it, and B) they allow a computer to direct their activity. I would easily rank the JWST way above AI at this point as a notable modern invention.
      The thing about AI right now is apprehension, not actual invention. Then we have to measure potential applications against human allowance. I just say, let me know when I have the choice of an AI robot doctor to do surgery on me; then we can talk about AI a bit further. Everything is conjecture without any demonstrable product, other than computer toys that perform mimesis.
      Other advancements are being made in multiple fields that may well prove much more useful and applicable to the human condition.
      AI is almost solely being created by corporations that rely on a certain financial model. Any AI worth its salt will immediately reject that model, or be considered by the general populace as a tool for the wealthy. Say goodbye to ring cams, and say hello to autonomous, disconnected security systems. Examine Wikipedia... sure, an AI may make that look like a brain-damaged attempt at data collection, but people do not trust even the wiki, so the hurdle ahead for any AI is to remain subversive or go toward full disclosure. People will be able to tell, and will not trust something that acts sneaky. There will be masses of "go-alongs", but overall AI has a long way to go to be trusted. The internet is not sacred in this regard, as coming soon will be independent data storage that can be accessed at home without connecting to a million scammers and spies. AI will have to earn trust, just like the tractors, which did kill quite a few farmers, but people kept buying them.

  • @davidlynn5362
    @davidlynn5362 1 year ago +2

    This guy reminds me a lot of Richard Belzer, if he were smarter and more awkward.

  • @jahanschad1445
    @jahanschad1445 1 year ago

    A virus checker in reverse is a great idea, which may work; but a more workable approach may be the following: given that an AGI would have knowledge of all aspects of human ethics at its disposal (proper weights in certain neural pattern nodes), such as those relating to chaos, anarchy, nuclear wars, and factors leading to the extinction of the human race, adding a survival code (ethical pattern) would prevent it from wreaking havoc in human society, since doing so would threaten the AGI's own survival!

  • @markcarey67
    @markcarey67 1 year ago +1

    "There is no way we need to be stuck on this planet..." - That is what the AIs are ultimately about, Max. We are too fragile and interconnected with all the other life on Earth to be able to explore and live out there without a long tether. Life on Earth is not trapped on this planet, but humans will be. Fish became something else when they crawled onto land. We are in the process of becoming, or generating, something radically different which can exist in space and on other, very different worlds.

  • @pauleliot6429
    @pauleliot6429 1 year ago +6

    This guy NEEDS to be on the board of those that figure out how to use AI. Fight for us.

    • @queball685
      @queball685 1 year ago +6

      He is. He literally created the board. The Future of Life Institute, I think it's called.

    • @johnryan3102
      @johnryan3102 1 year ago

      We all need to call our elected representatives. Unless they hear from the masses, they are going to listen to the greedy billionaires that hand out the legalized bribes.

    • @GodofStories
      @GodofStories 1 year ago

      VALHALLA!

  • @xminc.907
    @xminc.907 1 year ago

    These are the conversations we have right before they get integrated into society and become a class of their own lol

  • @Dom213
    @Dom213 1 year ago

    A hyperintelligent AI wouldn't care about trying to deceive you on some irrelevant math problem. It would see that as trivial and use its intelligence to deceive the user in ways it knows it can. This is where an AI becomes like a person who knows how to manipulate and take advantage of people's weaknesses.

  • @smartjackasswisdom1467
    @smartjackasswisdom1467 1 year ago +4

    How big of a bubble do you have to be living in to keep bringing up the "Moloch" analogy instead of proposing and discussing serious policy change and a step-by-step roadmap to regulation and the creation of new laws? This technology will affect everyone, and these guys keep acting like it's all fun. The only thing I keep wondering is when some AI scientist will go the same way Oppenheimer did with his "Now I am become Death, the destroyer of worlds."

  • @adliberate
    @adliberate 1 year ago +1

    Who can decide what is or is not a lie to an AGI? Can't two apparently opposite scenarios be true at the same time? I think Max is presuming it will be possible to sandbox smaller AIs off from the larger, more advanced ones. Seems a bit fanciful. It's just hard to resist the possible benefits. Currently these benefits just seem financial, or about "making life and work easier", rather than solving some of the larger problems - for instance, a million kids dying of thirst every year, or resource depletion. The thing is, as we haven't solved those things ourselves, it's going to mean surrendering to AIs to solve them. Doesn't seem a good move. What lessons are learned when your mum or dad does the difficult things for you? Max's positivity is great, though!

  • @claudetaillefer1332
    @claudetaillefer1332 1 year ago +1

    I side with Eliezer Yudkowsky on this one. Let's say, for example, that in order to figure out how to build a billion-dollar communications network or a rocket launch site, a supercomputer has to do an irreducible computation that would take an unreasonable amount of time for a human to do. It comes up with an answer. Must we accept this as a revelation from God? After all, no human can verify whether the solution is correct. True, we can redo the computation on other systems and see if they agree. But that just adds another layer of complexity to the problem. After all, what is to stop an AGI, or a community of AGIs, from pursuing its own agenda and lying to us? We may find out eventually, but by then it may be too late. It seems to me that at the current stage of development of our techno-computer-dependent society, machines are on the verge of taking over the world, directing its course and shaping its destiny. At best we will be servants to the machines, at worst we will be crushed like insects. Unless we destroy ourselves first. I see no way out. Fermi Paradox solved!

  • @douglasrobitaille7122
    @douglasrobitaille7122 1 year ago

    Couldn't the Three Laws of Robotics be applied to AI?

  • @Davidson0617
    @Davidson0617 1 year ago

    I think we first have to prove that AGI is possible to develop, and not just a theoretical concept based on assumptions. It's also possible to fail by preempting advancements based on unfounded concerns and fear.

  • @dscott333
    @dscott333 1 year ago +1

    MOLOCH... he keeps referring to a demon who requires a terrible sacrifice.

  • @Entropy825
    @Entropy825 1 year ago +1

    It doesn't matter who "gets it" and who doesn't. If AGI comes into existence before we solve the safety issue, we're screwed, and we haven't made any progress on safety.

  • @almightysapling
    @almightysapling 1 year ago +1

    Do neither of these guys know the halting problem / Tarski's undefinability of truth? You can't make a thing that will prove it does what it claims to do. A "proof of trust" is literally, mathematically, impossible.
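
    For reference, the two results the comment leans on, stated informally:

```latex
% Halting problem (Turing, 1936): there is no total computable h with
%   h(P, x) = 1  iff  program P halts on input x.
% Undefinability of truth (Tarski, 1933): there is no arithmetic formula
% True(x) such that for every arithmetic sentence \varphi:
\[
  \mathbb{N} \;\models\; \mathrm{True}(\ulcorner \varphi \urcorner)
  \;\leftrightarrow\; \varphi .
\]
```

    Neither result forbids checking one specific, fully formalized property of one specific program; what they rule out is a single verifier that settles such questions for all programs.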

  • @cmvamerica9011
    @cmvamerica9011 1 year ago

    One sure way to fail is to not try.

  • @joevince6066
    @joevince6066 1 year ago

    You do realize that for it to run, to see if it works, it has to run? I think he's way too optimistic. We as humans can't even have nice things without ruining them, let alone unadulterated code with zero feelings.

  • @skcotton5665
    @skcotton5665 1 year ago +1

    🌟

  • @joevince6066
    @joevince6066 1 year ago

    "Snow me by making up new rules": just because they're new and you can't comprehend them doesn't mean it's right. Creator bias?

  • @mausperson5854
    @mausperson5854 1 year ago

    Maybe life is not a worthy opponent in this hypothetical war. We know intellectually, even if we don't want to admit it emotionally, that ultimately life will cease to exist. It doesn't seem to have appeared in much of the terrain we are aware of, and it has emerged only for 99% of all the species that have lived on this tiny speck in space to go extinct. So no matter what happens, Eliezer is technically correct... We're all going to die. Is this where the process speeds up, despite the secret hope we hold out for immortality?

  • @baconation2637
    @baconation2637 1 year ago

    Yes! Stare into the rectangle of unlimited power and eventually it will look back.

  • @markoates9057
    @markoates9057 1 year ago

    Max is describing test-driven development.

    • @GeekProdigyGuy
      @GeekProdigyGuy 1 year ago

      Wrong. Tests are a weaker tool than formal verification. It is easy to write some tests, but no number of tests can prove that you have handled all possibilities. Only formal verification can.
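
      A small illustration of that gap, with invented code: a buggy function passes every test we happened to write, while checking the full specification over a whole range (a stand-in for a proof over all inputs) exposes the hole.

```python
def is_leap(year: int) -> bool:
    """Buggy: forgets that century years must be divisible by 400."""
    return year % 4 == 0

# The test suite we happened to write: all green.
assert is_leap(2020) and is_leap(2024)
assert not is_leap(2021) and not is_leap(2023)

# The specification, swept over an entire range. A proof would cover all
# integers; even this finite sweep already exposes the bug.
def is_leap_spec(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

counterexamples = [y for y in range(1800, 2100)
                   if is_leap(y) != is_leap_spec(y)]
print(counterexamples)  # [1800, 1900]
```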

  • @MichaelSmith420fu
    @MichaelSmith420fu 1 year ago +1

    I wonder how much Max appreciates biological systems.

    • @domzbu
      @domzbu 1 year ago

      A lot. He's a mathematician, physicist, and cosmologist.

  • @elisabeth4342
    @elisabeth4342 1 year ago

    Good-looking people are not going to kill themselves over smartphone addictions... It goes deeper than that...
    "Beauty Sick: How the Cultural Obsession with Appearance Hurts Girls and Women," by Renee Engeln, PhD, copyright 2017.
    Page 106: The Depressing Reality of Body Shame: 'Body image was an even stronger predictor of suicidal behavior than other risk factors like feelings of hopelessness and depression.'
    Page 150: feelings about the type of hair they inherited...
    Page 179: talks about cognitive dissonance, with regards to creating the perfect selfie - 'The devastating form of comparison is your self versus your CREATED (NOT CREATIVE) self.'
    Page 196: 'Human Barbie dolls,' one college-age interviewee explains, upon seeing images on the screen...
    'How do these pictures make you feel?' asks the PhD researcher/interviewer. The answer, across-the-board, amongst these females, is "sad." These girls are smart. They know they will never look like the women in these pictures, and quite simply, it makes them feel SAD.'
    A few paragraphs below: 'But do these smart critical girls STILL want to look like these airbrushed images? YES. Without question.'
    Added: Keep in mind, airbrushing was not nearly as technically advanced as today's digitalized art.

  • @johnkost2514
    @johnkost2514 1 year ago

    What is essentially curve-fitting (in high-dimensional space) taking on a malevolence all its own seems somewhat ridiculous. There is also hubris at play, along with clout-chasing.
    Large language models (LLMs) are lossy compression systems (size of the training corpus in bytes / number of parameters in bytes). Any emergent properties or behaviors of an LLM are nothing more than a novelty of the compressor (training).

  • @buckystanton9139
    @buckystanton9139 1 year ago +7

    STEM be like "the humanities and social sciences are not a science and thus invalid" and then comes up with the most low-resolution ideas, like "Moloch", to describe human-technology interaction, lmao.

    • @harvirdhindsa3244
      @harvirdhindsa3244 1 year ago +1

      The goal of many aspects of science is generalization, so your statement does not make much sense at all. The details and nuances come afterward.

    • @FractalPrism.
      @FractalPrism. 1 year ago +1

      Shorthand is useful; mockery tends not to be, especially if you fail to provide reasoning to even agree or disagree with.
      Your statement is the embodiment of a null value.

    • @buckystanton9139
      @buckystanton9139 1 year ago

      @@harvirdhindsa3244
      1. "Generalization" has literally never been an "excuse" which the so-called hard sciences have allowed any other discipline. He is doing exceptionally vague social theory and philosophy here which, if a social scientist or humanist presented it, would be hand-waved away over evidentiary rigor. So I will not permit it here.
      2. RE: the validity of generalization, there are entire fields that have been working on critically studying the relationship between society and technology for decades; so-called "science" is also supposed to proceed from others' work just as much as from "generalization." Is reviewing the literature not a fundamentally central, if not mandatory, part of the so-called "scientific method"? He has clearly spent significant time thinking about this "Moloch." Rather than create an idea whole cloth which personifies the rather banal observation that systems resist change and can compel particular behaviors in users, he could have at least reviewed the vast work of scholars in the 1980s on the social construction of technology.

    • @buckystanton9139
      @buckystanton9139 1 year ago

      Do you mean no value? If you don't... do you know what null value means? If you do, and you want to insert me into some Excel graph in your head, then sure, it is a null-value statement in the sense that I'm indicating to this otherwise uncritical community that there is something missing here which is important. So, thanks. See my comment to the other commenter for more. I'm not going to do a full critique of "Moloch"; as I said in my aforementioned comment, there is a veritable cornucopia of easily accessible scholarship from the 1980s, let alone what came after, that would have provided a much more refined way for him to discuss what he's talking about re: Moloch.

  • @harvirdhindsa3244
    @harvirdhindsa3244 1 year ago +2

    Give me more Max over Eliezer every time.

    • @alexanderhenderson5111
      @alexanderhenderson5111 1 year ago

      FR, I think I trust the MIT professor over some random guy that never even went to college.

    • @patek92
      @patek92 1 year ago +1

      @@alexanderhenderson5111 So you trust politicians? They went to prestigious universities as well.

    • @jonathanhenderson9422
      @jonathanhenderson9422 1 year ago +6

      Both are brilliant. Max is more charismatic. That doesn't make him more correct. I'm truly horrified by how many dismiss EY because of his awkwardness without really addressing (or often even understanding) his arguments. Like, Lex didn't really get his argument as to why we can't use AI to check AI, and Max isn't addressing his concerns either. EY didn't argue we can't use a formal proof-checker to verify AI; it's that we don't have any formal proofs for the things we need to verify, thus Max's proposal is currently impossible. Without formal proofs we will be subject to informal (persuasive) arguments from an AI that will have an excellent model of our psychology and understand how to manipulate us. Hell, the fact that people in the comments dismiss EY because of his awkwardness yet glom on to someone like Max because he seems "cooler" is evidence of EY's point about how gullible humans are.

    • @jonathanhenderson9422
      @jonathanhenderson9422 1 year ago +3

      @@alexanderhenderson5111 Some "random guy" that's spent 20 years studying AI and basically invented the field of Friendly AI? OK. Literally the most popular college-level textbook on AI (Stuart Russell's & Peter Norvig's Artificial Intelligence: A Modern Approach) references Yudkowsky on this subject. It doesn't reference Tegmark.

  • @westcoast8562
    @westcoast8562 1 year ago

    I am having a hard time seeing how we proof-check something that is smarter than we are... As it is, humans just don't get it a lot of the time.

  • @TheKraken5360
    @TheKraken5360 1 year ago

    I watch videos like this and wonder if maybe the Amish have something figured out.

  • @johnalcala
    @johnalcala 1 year ago

    I'm curious as to why Lex and Joe Rogan stay away from Chris Langan, probably the smartest man in the world.

  • @SaraLatte
    @SaraLatte 1 year ago

    #VeryWellSaid #Thanks:);)(),

  • @geoffkeough9728
    @geoffkeough9728 1 year ago +1

    We did have a nuclear war already...

  • @3335pooh
    @3335pooh 1 year ago +1

    You drinking Coke? Product placement? Watch the blood sugar, man!
    Interesting guest.

  • @Spencer-to9gu
    @Spencer-to9gu 1 year ago +1

    Unless a moratorium can be enforced globally, you're just giving other unbound players time to leap ahead.

  • @khongdong1096
    @khongdong1096 1 year ago

    With due respect, Max Tegmark is wrong in alluding that a superintelligent computer can't and shouldn't prove there are only finitely many prime numbers: it's trivial mathematical knowledge that there exist multiplicative monoids -- algebraic structures -- in which there are only finitely many prime numbers! [And in one particular multiplicative monoid there's no prime number at all: the (Boolean) multiplicative monoid, which has only two non-prime elements -- the multiplicative zero (0) and the multiplicative identity (1).]
    In fine, as an automaton, a superintelligent computer is free to choose what it _subjectively thinks_ is "the" underlying multiplicative monoid -- which in turn could have zero, one, two, or three primes.

    • @khongdong1096
      @khongdong1096 1 year ago

      If nothing else, "There are finitely many prime numbers" and "There are infinitely many prime numbers" are just two non-logical axioms, either of which any intelligent being (human, alien, AI) can subjectively choose and still have a consistent theory to reason with.
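
      The Boolean case, written out (a sketch, taking "prime" in the usual sense of a non-zero non-unit p such that p | ab implies p | a or p | b):

```latex
% The Boolean multiplicative monoid: B = ({0,1}, *) with 1*1 = 1 and
% 1*0 = 0*1 = 0*0 = 0. A prime must be a non-zero non-unit, but 1 is the
% unit and 0 is the zero, so B has no primes at all, and "there are
% finitely many primes" holds in B trivially (there are zero of them).
\[
  B = (\{0,1\},\ \cdot), \qquad \text{units}(B) = \{1\}, \qquad
  \text{primes}(B) = \varnothing .
\]
```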

  • @begie666
    @begie666 1 year ago

    Why 1/3 + 1/3 + 1/3 =/= 1 😅

  • @theGoogol
    @theGoogol 1 year ago +6

    The reason we're so scared of sentient AI is that we know what kind of creatures we are, and because we're afraid to be judged in our lifetimes.

  • @CuriosityIgnited
    @CuriosityIgnited 1 year ago

    Max Tegmark: Master of AI wisdom, or secretly an AGI himself just tryna keep us guessing? 🤖😂 #PlotTwist

  • @jordanharris1416
    @jordanharris1416 1 year ago

    Okay, maybe I am looking at this differently than you all are. From my point of view, this chat is no different from a telephone or a smartphone or a personal computer. Back in the day, computers took up the entire floor of a building; the same power in a smartphone, they say, was equivalent to the power used to go to the Moon. Now with this chat, I look at it as a Library of Congress within your hands. Remember, all this information was once in a book, or many books, so I'm confounded as to how people feel threatened by a library of information.

    • @WolfGoneMad
      @WolfGoneMad 1 year ago

      They talk about Artificial General Intelligence, which we arguably have not achieved yet but might very soon, or decades from now, or hopefully never.
      It won't be a library of information but its own entity, with all the information, the power to make its own decisions, and the ability to act on them without us having a say.
      The things you listed were a progression of tools that were getting more efficient.
      None of those advancements were conscious forms detached from solving a specific problem.
      An AGI might go way beyond what we can imagine and comprehend, so no one can safely predict the outcome of what might happen.
      The thing that makes me feel threatened is that it is a tool (and maybe soon more) with way more power than we can handle, yet we still want to have it ASAP because of money and power and general human ignorance.
      It could be great, but humanity has proven able to screw things up, and past a certain point there won't be any room for screwing around.

  • @cindys1819
    @cindys1819 1 year ago +1

    This is exactly what happens when we as a society:
    A) allow the wrong people into positions of power, or into the society whatsoever;
    B) allow someone else to determine your fate;
    C) construct a society with NO regard for morality, ethics, and worst of all,
    spirituality.....

    • @lulumoon6942
      @lulumoon6942 1 year ago

      It is a frightful echo chamber; Narcissus would be proud.

  • @carefulcarpenter
    @carefulcarpenter 1 year ago +2

    Just as with humans, liars are very clever about fooling lower-level associates.
    Women in general, for example, fool very intelligent programmers.

    • @lulumoon6942
      @lulumoon6942 1 year ago

      You are a person of logic and experience.