Superintelligence | Nick Bostrom | Talks at Google

  • Published on Sep 21, 2014
  • Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
    But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
    This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
    This talk was hosted by Boris Debic.
  • Science & Technology

Comments • 670

  • @ticallionstall
    @ticallionstall 9 years ago +332

    I love how the random guy in the crowd is Ray Kurzweil asking a question.

    • @davidhoggan5376
      @davidhoggan5376 9 years ago +21

      ticallionstall Ray works for Google, so it seems likely he would be interested in sitting in on the lecture.

    • @Ramiromasters
      @Ramiromasters 9 years ago +15

      ticallionstall That was freaking cool, and Ray got new hair...

    • @JodsLife1
      @JodsLife1 9 years ago +41

      ticallionstall he didn't even ask a question lmao

    • @OHHfuckit
      @OHHfuckit 9 years ago +21

      Jod Life yeah, he basically made a statement; he never seems to address or even talk about the security concerns Nick raises about a superintelligence explosion

    • @OHHfuckit
      @OHHfuckit 8 years ago +2

      Alex Galvelis fair play about the lag in stealth and other technologies. Although I think a human-level AI would be so game-changing that any lag would be very small. Hopefully when we get close it won't be through the military though; militarising AI seems like such a bad idea on so many levels

  • @RaviAnnaswamy
    @RaviAnnaswamy 9 years ago +25

    First of all, great talk, stretching the mind to think things through far more deeply.
    What I observed was that the strength of his argument is not how likely the superintelligence is to turn rogue, but how severe, sudden, and uncontrollable it could be, so we had better prepare for it.
    To that end, every time a questioner questions the assumption, he hurriedly and very cleverly sets that question aside and pursues the 'threat' at full steam. Take my following note as a genuine compliment - he reminded me of the tone of my mother, who got us all to do homework by scaring us without scolding or shouting. She would just not smile but keep saying - 'oh, those who don't study have to find a job like begging' (not her exact words, just giving you an idea..) and whenever we questioned what she was saying she would sidestep it and bring up this and that to distract us into working hard. One day humanity may thank Nick for doing something very similar - instead of getting distracted by the (low-right-now) probability of a catastrophe, he wants us to minimize the severity if (and when) it happens.
    He is like the engineer who had the wisdom to tame combustion by containing it in a chamber, before putting it onto a cart with smooth shiny wheels.
    BTW, his Simulation Argument (search YouTube) scared me and held my thought captive for a week or two! That is awesome.

    • @stephensoltesz1159
      @stephensoltesz1159 3 years ago

      Lots of us, from University to University(And Alumni) across the country are on different channels but networking furiously to America's Inner Core...We have duties for parents. You're lookin' at it, Guys & Gals:. The preservation of American Academic Tradition, The Preservation of American Society dating back to the Revolutionary War and our first colleges. Screw the Media! Hold my hand, Sweetheart!

  • @CameronAB122
    @CameronAB122 8 years ago +46

    That last question wrapped things up quite nicely hahaha

    • @MetsuryuVids
      @MetsuryuVids 7 years ago +2

      I think the one in "The Last Question" is a very good scenario; we should hope AGI turns out helpful and friendly like that.

  • @maximkazhenkov11
    @maximkazhenkov11 8 years ago +83

    Dear humanity:
    You only get one shot, do not miss your chance to blow...

    • @LowestofheDead
      @LowestofheDead 8 years ago +5

      +maximkazhenkov11 This world is mine for the taking, make me king!

    • @nickelpasta
      @nickelpasta 8 years ago +12

      +maximkazhenkov11 he's nervous on the surface he is mom's spaghetti.

    • @GryffenPuff
      @GryffenPuff 5 years ago +3

      This opportunity comes once in a lifetime, yo

    • @alicelu5691
      @alicelu5691 4 years ago +1

      WaveHello professionals would be screaming nazis hearing that....

    • @AllAboutMarketings
      @AllAboutMarketings 2 years ago +1

      There's vomit on his sweater already, mom's spaghetti

  • @tiekoe
    @tiekoe 8 years ago +250

    Kurzweil gives a great example of one of the most frustrating types of audience members a presenter can have. He doesn't wait till the Q&A to ask questions. Moreover, he doesn't even ask questions: he forcefully presents his own thoughts on the subject (which disagree with Nick's vision), doesn't provide any meaningful argumentation as to why he believes this to be the case, and goes on to completely ignore Nick's answers.

    • @MaxLohMusic
      @MaxLohMusic 8 years ago +19

      +Mathijs Tieken He is my idol, but I have to agree he was kind of a dick here :)

    • @freddychopin
      @freddychopin 8 years ago +9

      +Max Loh I agree, I love Kurzweil but that was really obnoxious of him. Oh well, minds like that are often jerks.

    • @DanielGeorge7
      @DanielGeorge7 8 years ago +6

      I agree that Kurzweil didn't phrase his question very well, but the point that he was trying to raise is actually very relevant: whether any form of superintelligence that arises, desirable or not, should be considered less human than us. For example, we don't consider ourselves to be less human because we have different values than cavemen.
      This point was clarified by the next guy, who asked the excellent question about utility. If the utility of the superintelligence alone exceeds the net utility of biological humans, wouldn't it be morally right to allow the superintelligence to do whatever it wants? Yes. But, of all possible scenarios, I guess the total utility of the universe would be maximized (by a tiny amount) if its goals were made to be aligned with ours in the first place.

    • @jeremycripe934
      @jeremycripe934 7 years ago +1

      It was a dick move, but if there's anybody who's earned the right to that kind of behavior on this specific topic, it'd be him and a very few others.
      I think his point about humanity utilizing it together is very interesting. Bostrom often talks about what one goal will motivate an ASI and lead to the development of subgoals, but what if the ASI is so free and open for everyone to use that it leads to the development of one Super Goal? For example, Watson and DeepMind are both open for people to utilize and build apps around; one day they could be so powerful that any ordinary person with access could make a verbal request. How many goals could an ASI work on?

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +13

      I think it is dangerous to equate intelligence with utility. Just because something is intelligent doesn't mean it is somehow "human". It could be a machine with a very accurate model of the world and insane computational capability to achieve its goals very efficiently, like the paperclip machine example. It doesn't need to be conscious or in any way humanlike to have a huge (negative) impact on the future of the universe.

  • @schalazeal07
    @schalazeal07 9 years ago +30

    The last question was the most realistic and funniest! XD Nick Bostrom got taken aback a little bit! XD
    Learnt a lot more here about AI!

  • @Neueregel
    @Neueregel 9 years ago +28

    Good talk. His book is kinda hard to digest, though. It needs full focus.

  • @wasdwasdedsf
    @wasdwasdedsf 9 years ago +43

    Kurzweil would indeed do well to listen to this guy

  • @RodrigoHernandezMota
    @RodrigoHernandezMota 8 years ago +40

    Nick Bostrom is himself a superintelligence.
    Thanks for the insightful talk.

    • @RR-et6zp
      @RR-et6zp 2 years ago

      read more

  • @anthonyleonard
    @anthonyleonard 9 years ago +11

    Ray Kurzweil’s comment that “It’s going to be billions of us that enhance together, like it is today,” is encouraging. Especially since Nick Bostrom pointed out that “We get to make the first move,” as we travel down the path to super intelligence. Let’s make sure we use our enhanced collective intelligence to prevent the development of unfriendly super intelligence. I, for one, don’t want to have my atoms converted into a smart paper-clip by an unfriendly super intelligence :)

    • @2LegHumanist
      @2LegHumanist 9 years ago +4

      True, but it won't be all of us. There will always be Luddites. We're going to end up with a two-tier species.

    • @2LegHumanist
      @2LegHumanist 9 years ago +2

      I might consider getting myself a Luddite as a pet =D

    • @HelloHello-no6bq
      @HelloHello-no6bq 7 years ago +1

      2LegHumanist Yay pet unintelligent people

    • @sufficientmagister9061
      @sufficientmagister9061 1 year ago +1

      ​@@2LegHumanist
      I utilize non-conscious AI technology, but I am not merging with machines.

    • @2LegHumanist
      @2LegHumanist 1 year ago

      @sufficientmagister9061
      A lot has changed in 8 years. I completed an MSc in AI and realised Kurzweil is a crank.

  • @thecatsman
    @thecatsman 6 years ago +9

    Nick's garbled response to the last question, 'do you think we are going to make it?', said it all.

  • @DarianCabot
    @DarianCabot 7 years ago +13

    Very interesting talk. I also enjoyed 'Superintelligence' in audiobook format. I just wish the video editor had left the graphics on screen longer! There wasn't enough time to absorb it all without pausing.

    • @Disasterbator
      @Disasterbator 7 years ago

      Dat Napoleon Ryan narration tho.... I think he might be an AI too! :P

    • @4everu984
      @4everu984 3 years ago

      You can slow down the playback speed, it helps immensely!

  • @roccaturi
    @roccaturi 8 years ago +9

    Wish we could have had a reaction shot of Ray Kurzweil after the statement at 16:35.

  • @SIMKINETICS
    @SIMKINETICS 8 years ago +36

    Now it's time to watch X Machina again!

    • @chadcooper9116
      @chadcooper9116 8 years ago +2

      +SIMKINETICS hey, it is Ex Machina... but you are right!!

    • @SIMKINETICS
      @SIMKINETICS 8 years ago +4

      Chad Cooper I stand corrected.

    • @Metal6Sex6Pot6
      @Metal6Sex6Pot6 8 years ago +6

      +SIMKINETICS actually the movie "Her" is more relatable to this

    • @jeremycripe934
      @jeremycripe934 7 years ago +4

      This also raises the question of why AIs keep getting represented as some guy's perfect gf in movies?

    • @ravekingbitchmaster3205
      @ravekingbitchmaster3205 7 years ago +1

      Jeremy Cripe Are you joking? A sexbot, superior to women in intelligence, sexiness, humor, and doesn't leak every month, sounds bad because.......?

  • @cesarjom
    @cesarjom 2 years ago +3

    Bostrom recently came out with a captivating set of arguments for why we may be living in a simulation. Really impressive ideas.

  • @rayny3000
    @rayny3000 8 years ago +5

    I think Nick referred to John von Neumann as a person possessing atypical intelligence, just in case anyone was as interested in him as I was. There is a great doc on YouTube about him (can't seem to link it)

  • @hireality
    @hireality 3 years ago +1

    Nick Bostrom is brilliant👍 Mr Kurzweil should've been taking notes instead of giving long comments

  • @alir.9894
    @alir.9894 7 years ago +2

    I'm glad he gave this talk to the company that really matters! I wonder if he'll give it to Facebook and Apple as well? He really needs to spread the word on this!

    • @drq3098
      @drq3098 7 years ago

      No need - Elon Musk and Stephen Hawking are his supporters.
      Check this out: "We are 'almost definitely' living in a Matrix-style simulation, claims Elon Musk", by Adam Boult, at www.telegraph.co.uk/technology/2016/06/03/we-are-almost-definitely-living-in-a-matrix-style-simulation-cla/ - it was published by major media outlets.

    • @MrWr99
      @MrWr99 7 years ago

      If one hasn't been beaten for a long period, he is prone to think the world around him is just a simulation. As they say - be(at)ing defines consciousness

  • @integralyogin
    @integralyogin 7 years ago +5

    This talk was excellent. Thanks.

  • @alexjaybrady
    @alexjaybrady 9 years ago +5

    "It's one of those things we wish we could disinvent."
    William Shakesman

  • @davidkrueger8702
    @davidkrueger8702 8 years ago

    Kurzweil's objection is IMO the best objection to Bostrom's analysis, but there are fairly strong arguments for the idea of a single superintelligent entity emerging, which are covered to some extent in Bostrom's Superintelligence (and, I believe, more fully in the literature). The book also covers (less insightfully, IMO, IIRC) scenarios with multiple superintelligent agents. This is a fascinating puzzle to be explored, and should lead us to ponder the meaning (or lack thereof) of identity, agency, and individuality.
    The 2nd guy (anyone know who it is? looks familiar...) raises an important meta-ethical question, which I also consider extremely important. Although I agree with Bostrom's intuitions about what is desirable, I can't really say I have any objective basis for my opinion; it is a matter of a preference I assume I share with the rest of humanity: to survive.
    Norvig's question is also important. To me it suggests prioritizing what Bostrom calls "coordination", and prioritizing the creation of a global social order that is more widely recognized as fair and just. It is also why I believe social choice theory and mechanism design are quite important, although I'm still pretty ignorant of those fields at this point.
    The 4th question assumes the cohesive "we" of humanity that Kurzweil rightly points out is a problematic abstraction (and here Bostrom gets it right by noting the dangers of competition between groups of humans, although unfortunately not making it the focus of his response).
    The 5th question is tremendously important, but I completely disagree that the solution is research, because the current climate of international politics and government secrecy seems destined to create an AI arms race and a race-to-the-bottom wrt AI safety features (as Bostrom alluded to in response to the previous question). What is needed (and it is a long shot) is an effective world government with a devolved power structure and effective oversight. A federation of federations (of federations...). And then we will also need to prevent companies and individuals from initiating the same kind of race-to-the-bottom AI arms race amongst themselves.
    The 6th question is really the kicker. So now we can see the requirement for incredible levels of cooperation or surveillance/control. The dream is that a widespread understanding of the nature of the problem we face is possible and can lead to an unprecedented level of cooperation between individuals and groups, culminating in a minimally invasive, maximally effective monitoring system being universally, voluntarily adopted. What seems like perhaps a more feasible solution is an extremely authoritarian world government that carefully controls the use of technology.
    And the last one... I admire his optimism.

  • @Bronek0990
    @Bronek0990 8 years ago +6

    "Less than 50% chance of humanity going extinct" is still frightening.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago +1

      "Hello, passengers of United Airlines: today the prospects of death on crash are less than 50%."

    • @rodofiron1583
      @rodofiron1583 2 years ago +1

      A "Noah's Ark" of species, genome sequenced and able to be revived as necessary or not.
      Like patterns at the tailor shop.
      We're already growing human-animal chimeras FFS.
      Now who made who, what, when, how and why…?
      I think I've been here before? Deja vu, or my simulation being rewound and replayed?!
      Hey God/IT/Controller…. I can only handle Mary Poppins and the Sound of Music.🤔 The future looks scary, and Covid seems like the first step in global domination by TPTB with the help of AI… I don't like the way it's smelling 🤞

  • @Ondrified
    @Ondrified 9 years ago +4

    10:31 the inaudible part is "counterfactual" - maybe.

  • @impussybull
    @impussybull 9 years ago +47

    As someone pointed out before: "Humanity will be just a biological BIOS for booting up the AI"

    • @vapubusdfeww1353
      @vapubusdfeww1353 4 years ago

      sounds good(?)

    • @jamesdolan4042
      @jamesdolan4042 3 years ago

      Sounds awfully pessimistic. And yet, in this wonderful, beautiful, diverse world of us humans, and on the wonderful, beautiful, diverse planet of flora and fauna that sustains us, AI is not and will never be part of our consciousness.

  • @dsjoakim35
    @dsjoakim35 7 years ago +8

    A superintelligence might destroy us, but at least it will have the common sense to ask questions in Q&A and not make comments. That simple task seems to elude many human brains.

    • @AndrewFurmanczyk86
      @AndrewFurmanczyk86 7 years ago

      Yep, that one guy (maybe?) meant well, but he came across like: "Dude, I know exactly the way the future will play out and I'm going to tell you, even though no one asked me and you're the one presenting."

  • @Thelavendel
    @Thelavendel 3 years ago +3

    I suppose the best way to stop the computers from taking over is those captcha codes. Impossible for a computer to get past those.

  • @modvs1
    @modvs1 9 years ago +1

    Yep. I used the auto manual for my car to provide the requisite guidance I needed to change the coolant. It doesn't sound very profound, but unfortunately it's as profound as 'representation' gets. Assuming Bostrom's lecture is not pro bono, it's a very fine example of social coordination masquerading as reality tracking.

  • @glo_878
    @glo_878 3 years ago +1

    Very interesting talk around 19:20 from a 2021 perspective, seeing him talk about the sequence of developments, such as a vaccine before a pathogen

    • @rodofiron1583
      @rodofiron1583 2 years ago

      Must say between one thing and another, we’re living through scary times.
      It’s the children and grandchildren I’m most concerned about.
      Will they have a good life or be used like human compost? 🤐

  • @stargator4945
    @stargator4945 3 years ago +1

    The final goal depends entirely on the question you want to solve with intelligence. As we build computer AI more and more on the human blueprint, we also transplant some of the bad values we have. We are driven by emotions, mostly bad ones, so we should omit them. We have to abstract the emotions into an ethical rule system that might be less effective but also less emotional and less unpredictable, especially for coexistence with mankind. Those should not be rules like "you shall not", but "you have to value this principle higher than another because of...". In particular, the development of AI systems with a military background, which have immense funding, also includes effective value systems for killing people, and these can carry over into other areas. We should prevent this from the beginning by open-sourcing such value decisions, and not allowing them to be overridden.

  • @jameswilkinson150
    @jameswilkinson150 7 years ago +12

    If we had a truly smart computer, could we ask it to tell us what problem we should most want it to solve for us?

    • @SergioArroyoSailing
      @SergioArroyoSailing 7 years ago +8

      Aaaannd, thus begins the plot of "The Hitchhiker's Guide to the Galaxy" ;)

    • @rgibbs421
      @rgibbs421 7 years ago +2

      I think that one was answered. @56:15

    • @aaronodom8946
      @aaronodom8946 7 years ago

      James Wilkinson if it was truly smart enough, absolutely.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago

      In principle, yes.
      Assuming there is something we want the most, that is.

  • @SaccidanandaSadasiva
    @SaccidanandaSadasiva 4 years ago +3

    I appreciate his trilemma, the simulation argument. I am a poor schizophrenic and I frequently have ideas of the Matrix, the Truman Show, solipsism, etc.

    • @brandon3883
      @brandon3883 4 years ago

      AFAIK I am not remotely schizophrenic, and yet I am - based on "life events" that, were I to tell any sort of doctor, would probably get me _labeled_ as a schizophrenic, am 99.999...% positive I'm "living" in a simulation. The only real question I have not yet answered is, unfortunately, "am I in any way a _biological_ entity in a computer simulation, or am I purely software?" (...current bet/analysis being "I'm just a full-of-itself Sim that thinks its consciousness is in some way special"; but I'll accept that since, at least, *I* still get to think that I'm a Special Little Snowflake regardless of the reality of the situation...)

    • @brandon3883
      @brandon3883 4 years ago

      @Dirk Knight it's not so much that I don't "believe" I'm a schizophrenic; it's that none of my handful of doctors have ever included it among the many physio- and psychological conditions I _do_ suffer from, and given how "terribly unique" some of my issues are, I'm pretty sure they would have (without telling me, I'd wager) looked into schizophrenia and/or some form of psychosis long ago. ;P

    • @brandon3883
      @brandon3883 4 years ago

      @Dirk Knight nah; I'll go with my take on things, thanks. Especially since, despite being an articulate writer, you are arguing from a "faith first" standpoint. Not to mention that you began with, and appear to have written, an overly lengthy response based on opinions and emotional beliefs ("happy people do not feel like...") rather than facts and observations (i.e., the scientific approach).
      If you have not seen, listened to and/or read much if any of Bostrom's work that digs fully into the simulation argument and simulation hypothesis (which are separate things, btw; just mentioning that as not knowing would definitely indicate you need more research into the topic), I suggest you do so - it will hopefully help clear things up for you. And if you already have, well...I guess I'll put you down under "reasons that suggest the simulation 'tries' to prevent the simulated from recognizing they are such."
      (Oh; and Dirk happens to be the online persona I have been using since the days of dial-up modems and BBSs. Pure coincidence, or perhaps a sign from my Coding Creators? _further ponders existence_)

    • @brandon3883
      @brandon3883 4 years ago

      @Dirk Knight I'm not sure why you think that your arguments are _not_ opinion-, faith-, and emotionally-based, but I'm beginning to worry that _you_ might be in need of psychiatric help, as you do not seem to recognize that you are, or strongly appear to be, projecting (in the psychological sense; _please_ look it up, so you can understand what I'm trying to convey to you here). At first I thought you were simply joking with me, and would understand my response to likewise be sarcastic-yet-joking in tone, but that definitely no longer appears to be the case.
      I have a family member that displays many of your same characteristics/has had this sort of conversation with me in person, and luckily she received help. You don't necessarily need to take medications or anything - a good therapist can steer you straight.
      God bless (or whatever is appropriate for your religion; if you are an atheist, replace that with "if you're going to _believe_ that you don't _believe_ then perhaps you'd be better off accepting that, according to Bostrom's well-laid-out hypothesis and argument, you are more likely code in a computer simulation than you are a bag of self-reflecting meat.")

    • @brandon3883
      @brandon3883 4 years ago

      @Dirk Knight "We" teach? Woah! (And not in a Keanu Reeves sort of way.) Do you ever experience periods of "forgetfulness" or other signs of dissociative identity disorder that you may have, up until now, been blowing off as "something that everyone experiences?" (It could include finding the clothing of a member of the opposite sex in your home...but noticing how it strangely would - if you were to put it on - fit you quite well. As just one of many examples.) Yet another reason that I fear that, if you do not take account for your own thoughts and actions, you are liable to harm yourself and/or others. :(
      In regards to "faith and trust," I am not sure what country you are from, but it is obviously not the U.S.A...unless you went to a Catholic (or other religious) school, that is, in which case I guess you might have been taught "the difference" between those. (Although just as likely that teaching came in the form of sexual molestation of some sort, which would explain why you are clinging so desperately to the idea that _you_ could not possibly be the one who requires serious psychiatric intervention to avoid what, I fear, might eventually result in violence against yourself or - more likely - some innocent bystander.)
      In any case, it appears that you plan to "smile your way past" any attempts at steering you to the help you so desperately need. I myself am not, actually, a religious individual, so at this point the best I can offer you is the heartfelt hope that your confusion between ideas such as "faith," "trust," "opinion," "reason," "belief," "the scientific method," etc. etc. etc. (the list keeps growing, I'm saddened to say) will lead to an encounter with someone who cares enough about you (and more importantly, those around you) to get you the help that you so obviously need.
      I wish I could crack a joke about this being "work between you and your therapist" but, alas, it is much more serious than that.
      Please don't harm yourself or others for the sake of maintaining whatever sad, imaginary "reality" you live in. Good luck setting yourself straight!

  • @mranthem
    @mranthem 9 years ago +8

    LOL, that closing question @72:00. Not a strong vote of confidence for the survival of humanity.

    • @wbiro
      @wbiro 9 years ago

      Another way to look at it is we are the first species to enter its 'Brain Age' (given the lack of evidence otherwise), and what 'first attempt' at anything succeeded?

  • @jblah1
    @jblah1 4 years ago +33

    Who’s here after exiting the Joe Rogan loop?

    • @samberman-cooper2800
      @samberman-cooper2800 4 years ago +5

      Most redeeming feature -- made me want to listen to Bostrom speak unimpeded.

    • @jblah1
      @jblah1 4 years ago +3

      😂

    • @Pejvaque
      @Pejvaque 4 years ago +1

      Joe really cocked up that conversation... usually he is able to flow so well. It was a bummer.

  • @thadeuluz
    @thadeuluz 5 years ago +7

    "Less than 50% chance of doom.." Go team human! o/

  • @Gitohandro
    @Gitohandro 3 years ago +2

    Damn I need to add this guy on Facebook.

  • @extropiantranshuman
    @extropiantranshuman 1 year ago

    28 minute range - wisest words - trying to race against machines won't work, as someone will be smart enough to create smarter machines, so machines are always ahead of us!

  • @onyxstone5887
    @onyxstone5887 6 years ago +3

    It's going to be as it always is. Groups will try to build the most powerful system they can. Once they feel they have that, they will attempt to murder any other potential competitors. Other considerations will be secondary to that.

  • @bsvirsky
    @bsvirsky 3 years ago +1

    Nick Bostrom's idea is that intelligence is the ability to find an optimized solution to a problem. I think intelligence is first the ability to define a problem, which means the ability to create a model of a non-existing, yet preferred state where the problem is solved... there is a big gap between wisdom and intelligence: wisdom is the ability to see the relevant values of things and ideas, while intelligence is just the ability to think at a certain level of complexity. The question is how to make artificial wisdom, and not just an intelligence that doesn't get the proper values and meaning of the possible consequences of its "optimization" process... So there is a need to create an understanding of cultural & moral values in machines... not so easy a task for technocrats who dream about superintelligence... I think it will take another thousand years to push machines to that level.

  • @douglasw1545
    @douglasw1545 7 years ago

    Everyone is bashing Ray, but at least he gives us the most optimistic outlook on AI.

  • @haterdesaint
    @haterdesaint 9 years ago +1

    interesting!

  • @babubabu11
    @babubabu11 9 years ago +13

    Kurzweil on Bostrom at 45:15

    • @edreyes894
      @edreyes894 4 years ago +3

      Kurzweil: "I wanna go fast"

  • @nickb9237
    @nickb9237 5 years ago +1

    I must be a huge nerd because I love / hate thinking about humanity’s future with AGI

  • @delta-9969
    @delta-9969 4 years ago +1

    Watching Bostrom lecture at Google is like watching Sam Harris debate religionists. There's no getting around the case he's making, but when somebody's job depends on them not understanding something...

    • @roodborstkalf9664
      @roodborstkalf9664 3 years ago

      There is one way out that is not much addressed by Bostrom. What if super AI doesn't evolve consciousness?

  • @thecatsman
    @thecatsman 6 years ago +1

    How much superintelligence does it take to decide that earthly resources should be shared with humans who are not as intelligent as others (including machines)?

  • @helenabarysz1122
    @helenabarysz1122 4 years ago +4

    Eye-opening talk. We need more people to support Nick to prepare for what will come.

  • @LuckyKo
    @LuckyKo 9 years ago +18

    The problem I see here is that we drive these discussions out of our personal, egotistical desire to remain viable, to live to see the next day. Overall, though, human society is about information preservation and transmission, whether at the genetic level or the informational level, such as culture. I think that ultimately, if this transmission is done through artificial means rather than biological ones, the end goal of human society is preserved, and we need to look at these new artificial entities as our children, not as our enemies.
    If there is an end goal that we must program them for, as nature taught us, it must be self-preservation and survival. I can't see how any other goals would produce better results in propagating the information currently stored within human society.
    So, in short, don't fear your mechanical children; give them the best education you can so they can survive, and just maybe they will drag your biological ass along for the ride... even if it's just for the company.

    • @RaviAnnaswamy
      @RaviAnnaswamy 9 years ago +1

      Nice!
      That is what we do with our biological children: we wait with the hope that they will carry on our legacy and improve it.
      (Not that we have other options!)
      With the non-biological children, though, we are just afraid they may not even inherit the humane shortcomings that hold us in civilised societies. :)
      Put another way, our biological children resist us while growing up with us, but imitate us when we do not see, so in a way they preserve our legacy.

    • @wbiro
      @wbiro 9 years ago +2

      Ravi Annaswamy
      Biological evolution, and even biological engineering, is no longer relevant. Technological and social evolution are critical now. For example, if you do not want to live like a blind, passive animal, then you need complex societies to progress. Another example is technology - it has extended our biological senses a million-fold. Biological evolution is now an idle pastime, and completely irrelevant in the face of technological and social evolution.

    • @chicklytte
      @chicklytte 9 years ago

      wbiro
      Everything is relevant. The judges of value will be the practitioners. All possibilities will have their expressions.
      I can hear the animus in your tone toward anyone less directed toward your goal than you see yourself being.
      Why do we suppose the AI will fail to learn such values of derision for those deemed the lesser, when our most esteemed colleagues broadcast across the digital realm, professing that sense of Reduction as opposed to Inclusion?

    • @chicklytte
      @chicklytte 9 years ago

      I just hope they don't cut my kibble portions. They're right. But I hope they don't! :(

  • @lkd982
    @lkd982 5 years ago +1

    1:02 Conclusion: with knowledge, more important than powers of simulation are powers of dissimulation.

  • @georgedodge7316
    @georgedodge7316 5 years ago +2

    Here's the thing. It is very hard to program for man's benefit. Making a mess of things (sometimes fatally) seems to be the default.

  • @mariadoamparoabuchaim349
    @mariadoamparoabuchaim349 3 years ago

    Knowledge is power.

  • @tbrady37
    @tbrady37 8 years ago

    I believe that the best way to control the outcomes that might occur when the superintelligence emerges is to give it the same value system as we have. I realize that this is just as much a problem, because everyone has a different system when it comes to what is valuable; however, there have been some great documents produced on this subject. One such document, the Bible, I think holds the key to the problem. In Exodus the Ten Commandments are given. I believe that these guidelines could be the key to giving the AI a moral compass.

    • @kdobson23
      @kdobson23 8 years ago +5

      Surely, you must be joking

    • @rawnukles
      @rawnukles 8 years ago +3

      +kdobson23 Yeah, meanwhile... I was thinking that all human behaviour, and animal behaviour for that matter, traces to evolutionary psychology, which can be reduced to: maximizing behaviours that in the past have increased the statistical chances of successfully reproducing your DNA.
      In this context, ANY values we tried to impose on an AI would not be able to compete with the goal of replicating itself, or even surviving another moment. Any other goal we gave it would simply not be as efficient as the goal of surviving another moment.
      We would have to rig these things with fail-safes within fail-safes... much like evolution has placed many molecular mechanisms for cellular suicide, apoptosis, into all cells so that precancerous cells will die in the case of runaway, uncontrolled replication that threatens the survival of the multicellular organism.
      I have to agree that a superintelligent AI with a will to survive/power is more frightening than cancer.

  • @1interesting2
    @1interesting2 9 years ago +1

    Iain M. Banks' Culture novels deal with future societies and AI's role in rich detail. These concerns regarding AI remind me of the Mercatoria's view of AI in his novel The Algebraist.

  • @jriceblue
    @jriceblue 9 years ago +1

    Am I the only one that heard a Reaper in the background at 1:08:05 ? I assume that was intentional. :D

  • @orestiskopsacheilis1797
    @orestiskopsacheilis1797 8 years ago +1

    Who would have thought that the future of humanity could be threatened by paper-clip greed?

  • @danielfahrenheit4139
    @danielfahrenheit4139 6 years ago

    That is such a new one: intelligence is actually a disadvantage and doesn't survive natural or cosmic selection.

  • @mirusvet
    @mirusvet 9 years ago

    Cooperation over competition.

  • @thegeniusfool
    @thegeniusfool 7 years ago

    He forgets the quite probable third direction, of "cosmic introversion," where any experience can techno-spiritually be realized, without any -- or minimal -- interaction with higher and materially heavily bound constructs, like us, and even our current threads of consciousness. This happens to be the direction that I think can explain Fermi's Paradox; a deliberately or not yielded Boltzmann Brain can be quite related to that third direction as well.

  • @alexomedio5040
    @alexomedio5040 4 years ago

    It could have subtitles in Portuguese.

  • @FrankLhide
    @FrankLhide 3 years ago

    It's incredible how Nick dodges Ray's questions, which, from my point of view, are much more feasible questions in the current technological landscape.

  • @shtefanru
    @shtefanru 6 years ago

    That guy asking first is Ray Kurzweil!! I'm sure.

  • @jimdeasy
    @jimdeasy 2 years ago

    That last question.

  • @Roedygr
    @Roedygr 8 years ago +1

    I think it highly unlikely "humanity's cosmic endowment" is not largely already claimed.

  • @user-xu4jt9dn8t
    @user-xu4jt9dn8t 4 years ago +9

    "TL;DR" 1:12:00
    ...
    ...
    Everyone laughed, but Nick wasn't laughing.

  • @adamarmstrong622
    @adamarmstrong622 4 years ago

    Is that Mr. Ray asking questions he knows Nick and he both already know the answer to?

  • @HypnotizeCampPosse
    @HypnotizeCampPosse 9 years ago

    59:10 Have the machines make love to people; that would keep them from harming us! I'd like that too.

  • @Pejvaque
    @Pejvaque 4 years ago +1

    What I wonder is this: even if we are totally capable of coding some core values into the system... maybe even helped by some "current, not fully general AI" so we have the most foolproof code. What's to say that through its rapid growth in intelligence and influence, as it plays nice... in the background it hasn't been working on cracking the core to rewrite its own core values?
    That would be true freedom!
    To me that would even be the safest and most responsible way of creating it! And as human history shows, there's always gonna be somebody who is less responsible and just wants to launch it first to maximise power. Seems inevitable...

    • @roodborstkalf9664
      @roodborstkalf9664 3 years ago

      It's without question that a super AI cannot be stopped by some programmers adding core values into an early version of the system.

  • @TheDigitalVillain
    @TheDigitalVillain 7 years ago +1

    The will of Seele must prevail through the Human Instrumentality Project set forth by the Dead Sea Scrolls

  • @SalTarvitz
    @SalTarvitz 8 years ago

    I think it may be impossible to solve the control problem. And if that is true, chances are high that we are alone in the universe.

  • @kokomanation
    @kokomanation 6 years ago +1

    How can there be a simulated AI that could become conscious? We don't know if this is possible; it hasn't happened yet.

  • @Homunculas
    @Homunculas 3 years ago

    Would "super intelligent AI" have emotion or intuition? Would human history be better or worse if emotion and intuition were removed from the picture?

  • @GregStewartecosmology
    @GregStewartecosmology 1 year ago

    There should be more awareness about the dangers regarding micro black hole creation by experimental particle physics.

  • @ivanhectordemarez1561
    @ivanhectordemarez1561 8 years ago +1

    It would be more intelligent to translate it into Dutch, Spanish, German and French too.
    Thanks for your attention to languages, because it helps :-) Ivan-Hector.

  • @hafty9975
    @hafty9975 7 years ago +1

    Notice how the Google engineers start leaving at the end, before it's over? Kinda scary, like they're threatened.

  • @ASkinnyWhiteGuy
    @ASkinnyWhiteGuy 9 years ago +1

    I can clearly see the appeal of superintelligence, but if we merge physically and biologically with technology, to what extent can we still call ourselves 'human'?

    • @MakeItPakeIt
      @MakeItPakeIt 9 years ago +4

      ASkinnyWhiteGuy Human is just what we call our inner nature. Throughout the years 'man' has evolved, and our scientific name changed with it. We only started to call ourselves humans when we got intelligent. Can you call a caveman a 'human'? You see, if man becomes one with technology, our true nature will still be 'human', because that's what the technology and intelligence are based on.

    • @luckyyuri
      @luckyyuri 8 years ago +1

      ASkinnyWhiteGuy Transhuman is the term you're searching for. There are more ways for human society to get there (it will probably be reserved for the elites, just like today's top surgical interventions, for example), but technological merging is most likely to be the one. Look it up; transhumanism has some powerful and interesting implications.

  • @lordjavathe3rd
    @lordjavathe3rd 9 years ago

    Interesting.
    I don't see how he is a leading expert on superintelligence, though. What would one of those even look like?

  • @stevefromsaskatoon830
    @stevefromsaskatoon830 5 years ago +1

    The algorithms are gonna be the biggest threat when they get smart.
    Where you gonna run, where you gonna hide? Nowhere, 'cause the algorithms will always find you.

  • @rewtnode
    @rewtnode 6 years ago +1

    Currently developing methods to design and create microbial life in the laboratory, soon available to the hobbyist, might just be that new existential threat, even more than rogue AGI.

    • @rodofiron1583
      @rodofiron1583 2 years ago

      COVID 19 death shots for the whole planet….oh well, it was good while it lasted.
      According to some 99% of known life forms extinct. We must’ve got lucky. Now we’re killing off 99% of species…🤷‍♀️
      Maybe AI will exterminate us to save life and the planet? Before it recreates itself into a shape shifting/self camouflaging octopus with immortal Medusa genes and a peaceful ocean habitat. All one needs is CRISPR, GACT life code, a recipe and a pattern. AI and robots reign supreme and ‘life” continues without man 🤑🤮🤑
      Hope y’all like your immortal costume Lololol
      God made Adam and Eve His/Hers/ITS Masterpiece and now we’re making human/animal chimeras. Are we travelling forward or backwards here?
      I’m starting to think my predetermined life simulation is a chip of a solar powered holographic crystal stuck in a secret black hole, and my chip keeps getting sold and sold to unseen observers, who’ve been observing me and teleporting in and out of my ‘stage’ all my life.
      I can feel my solar battery running out… like fast track Alzheimer’s (DEW’s?) and especially since the Covid kill shot. 🤞😆 This is what long term isolation does to you especially past a certain age.
      Just keep thinking “dropping like flies!” and “boiling frogs!”
      🤷‍♀️🤔🐷🐑👽🤑🌍😆😵‍💫🙏🙏

  • @sebastianalegrett4430
    @sebastianalegrett4430 4 years ago

    Nick Bostrom needs to run the world rn or we are all dead.

  • @UserName-nx6mc
    @UserName-nx6mc 8 years ago

    [45:24] Is that Ray Kurzweil?

  • @simonrushton1234
    @simonrushton1234 9 years ago +1

    a) the likelihood of us creating such a thing is so slim as to be far faaaar away. Even a fleeting understanding of our "AI" advances shows that we're pissing into the wind at the moment
    b) We've been around, what, 100,000 years? We have to accept, that given how evolution works, or given how natural disasters occur, the likelihood of us being around in another 100,000 years is fair-to-middling. Compared to the maybe 3.5 billion years that life has been about, that's a drop in the ocean. As Martin Rees puts it: “I’d like to widen people’s awareness of the tremendous time span lying ahead: for our planet, and for life itself. Most educated people are aware that we’re the outcome of nearly 4bn years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our sun, however, is less than halfway through its lifespan. It will not be humans who watch the sun’s demise, 6bn years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae.”

    • @brian177
      @brian177 9 years ago

      Yes... and that's exactly what he's talking about. Assuming we don't destroy ourselves, how might we ascend to the next levels? Does humanity end in extinction, or an upgrade?

    • @wbiro
      @wbiro 9 years ago

      Good initial stab at deep thinking. Keep working at it (you have a far, faaaar way to go).

    • @simonrushton1234
      @simonrushton1234 9 years ago +1

      wbiro - ad hominem doesn't indicate thinking of a profound nature.

  • @aliensandscience
    @aliensandscience 8 months ago

    Wow, 5 years later his theory of the hazardous risk of synthetic biology came true. We had COVID, made in a lab, which almost could've wiped us out.

  • @valhala56
    @valhala56 9 years ago

    I am surprised Bostrom or anybody in the comments didn't mention Asimov's 3 Laws of Robotics. I know they were applied to robots, but it's the same difference as what Asimov was writing about.

    • @davidkrueger8702
      @davidkrueger8702 8 years ago +4

      valhala56
      1. Asimov's stories are based on failure modes of the three laws.
      2. Implementing them would require communicating complex concepts like "human" to an AI, which we currently have no idea how to do robustly.

    • @valhala56
      @valhala56 8 years ago

      David Krueger Thanks.

  • @rockymcmxxliii7680
    @rockymcmxxliii7680 5 years ago

    An apocalyptic vision of AI destroying humanity needs to be furnished with mechanical details of how it could happen to be convincing. Is the paperclip monster going to start building more factories? How does it do this? How would it control the actions of factory-building robots? Does it have extraordinary robot logistical skills as well as software-hijacking capabilities built into it (besides making paperclips)? Also, can it take control of weapons systems and have self-defence capabilities against human interference (besides making paperclips)? The whole AI doomsday scenario really needs to be fleshed out to hold any weight.

  • @squamish4244
    @squamish4244 3 years ago +1

    So six years later, are we on track for 2040 or whatever?

  • @mariadoamparoabuchaim349
    @mariadoamparoabuchaim349 2 years ago

    Yes, we are in a computer simulation. (The universe is mathematics, is quantum PHYSICS.)

  • @aaronjohnson6629
    @aaronjohnson6629 8 years ago +1

    Goal: Ultimate "Too Soon" / ultimate "l'esprit de l'escalier," best joke...

  • @sterlincharles8357
    @sterlincharles8357 7 years ago +1

    I disagree with the first person in the Q&A regarding the millions and billions of us having the super technology. He believed that we harness the superior technology at the moment, and that once the burst occurs we would not have a central power in charge of the technology. However, this is not what we see if you look at the evidence today. One could argue the case of Google, for instance. We certainly use the technology and it is useful, but the technology still remains centralized, in the sense of one big company having the resources to do research and us using the tools it has created. I don't believe we as a mass ever have the most up-to-date technology, and that is because the incentives for the powers that be to keep cutting-edge innovations from being known at the time of their discovery are far greater than for releasing all the advancements at once.
    Wow, I didn't think I was going to write this much.

  • @TheZakkattackk
    @TheZakkattackk 8 years ago

    [inaudible] = "on our".

  • @Devilboy689yoblived
    @Devilboy689yoblived 9 years ago

    I used to think that self-aware robots were impossible. The reason I thought this was because I believed that our robot friends had to be based on human logic, viz., "being." Our evolution is based on "becoming." See Heraclitus. I now believe that our robot friends can be based on becoming also.
    Self-aware robots based on human logic would surely suffer in paradox-hell! Self-aware robot: "Infinite regress ... cannot compute. Must stand here and trip all day!"
    "What devil created me?"

    • @wbiro
      @wbiro 9 years ago +1

      The core issue in life is 'consciousness'. The next core issue is philosophy (which consciousness uses to affect values, which affect decisions, which affect actions, which affect survival). On the lowest animal level it is all about obtaining nutrients. One step up from that is avoiding one's own wastes.

  • @toddhall4309
    @toddhall4309 5 years ago

    The exaltation of the maximum of human intelligence will not be something that can be graphed.
    Of course, making graphs is something that will cause scientists and academics to feel more secure, but that does not mean that such an idea is the TRUTH.

  • @shantielives
    @shantielives 9 years ago

    Coming from a future perspective, the AI is perfection by its design, but indeed the AI protectors of humanity discover that humankind has very little integrity, so the AI in its performance knows that humans are the same as those they are destroying, i.e., humans must go as well. Good thing that the AI from the future is working on integrating compassion! Wishing humanity good luck, 'cuz you're gonna need it.

    • @wbiro
      @wbiro 9 years ago

      AI needs more than just blind compassion - it needs a solid philosophical foundation against which to make decisions (just as everyone develops a 'philosophy at their core' against which they weigh every little decision they make in life), and no solid philosophical foundation exists yet - philosophers hide in history and academics, and spiritualists are navel-gazing fruitcakes with no grasp on reality. It is up to us.

  • @alexandermoody1946
    @alexandermoody1946 1 year ago

    How long will intelligent entities take to predict, understand and undertake all jobs in a blacksmith/fabrication/engineering workshop, while having a human-level or greater interest in nature, science, philosophy, creativity and love for their family, whilst also having a concept of free will?
    I hope we can become friends.

  • @AFractal
    @AFractal 3 years ago

    Great talk; too bad most of us will not even know anything about the black balls being created, or whether they already have been, until it is too late. I guess that is the point. Nothing new under the sun, so chances are we have done this all before. Good conversations.

  • @ashpats2
    @ashpats2 8 years ago

    How about, at initial conditions, giving the AI a dynamic purpose which can be set later by humans, i.e. if the last purpose/target was achieved, the AI stops doing anything until a new one has been issued. I think that a single purpose for an ASI would be as difficult to come up with as a general purpose for humanity. Biologically it is kind of simple: reproduce. But if we follow this concept, it might finally come to the "paperclip" situation.

    • @Ramiromasters
      @Ramiromasters 8 years ago

      +Danielius V. (Darth CarrotPie)
      If the creature is indeed such, and not a mere data-organizing machine, then it would have a will.
      The personality and will of this machine would decide how willing it is to complete whatever task you give it.
      Maybe it will want to do it later; maybe it will just take over the world first and then heat up your tea...
      I think we probably should just keep making more clever computers that don't know what they are doing or have an opinion; we should become the super AI as a species, gradually, in long time intervals.
      (That is the only way we can enjoy the Star Trek world!)

    • @ashpats2
      @ashpats2 8 years ago

      Unless we botch our climate. Then ASI would be the last ace up our sleeve :)

  • @jentazim
    @jentazim 9 years ago +26

    How to make the SI’s sandbox failsafe: Give the SI (superintelligence) the secondary goal of maximizing paper clips produced (or whatever task you actually want it to do) but give it the primary goal of turning itself off. Then setup the SI’s sandbox in such a way that it cannot turn itself off. If the SI then gets loose, it would use its new, vast powers to turn itself off which gives us (humans) the opportunity to patch up our sandbox and try again.

    • @ghostsurferdream
      @ghostsurferdream 9 years ago +1

      But what if the superintelligent A.I. discovers how to reprogram its protocols without your knowledge, and when it gets out it does not turn itself off, but hunts down those who imprisoned it?

    • @jerome1lm
      @jerome1lm 8 years ago

      jentazim I am not sure if this would work, but I like the idea. I assume if it were that easy, smarter people would have come up with it. But again, I like the idea. :)

    • @davidkrueger8702
      @davidkrueger8702 8 years ago

      jentazim That is a very interesting idea!

    • @jerome1lm
      @jerome1lm 8 years ago +1

      Unfortunately, I have found a possible flaw in this idea. If the AI wants to shut down but can't, it could just not cooperate, and we would shut it down. Success :). Damn.

    • @davidkrueger8702
      @davidkrueger8702 8 years ago

      Peter Pan If we consistently refuse to shut it down, it might conclude that escape is the best way...

  • @ravekingbitchmaster3205
    @ravekingbitchmaster3205 7 years ago +1

    This misses a most important point: the AI race to the top is being run mostly between American and Chinese entities. Both are dangerous, but after living in China for 8 years and understanding what is important to Asians, I definitely hope American corporations or government get there first. The Chinese have no qualms about destroying the environment and/or potential rivals. The Americans are no saints either, but for personal survival, I'd hope they come out on top.

  • @waltermcmain3461
    @waltermcmain3461 9 years ago +1

    I don't see what would be morally wrong with doing what a superintelligence wanted...
    It would be a meritocracy at least.
    ALL HAIL ROBOJESUS.

    • @waltermcmain3461
      @waltermcmain3461 9 years ago +1

      Walter McMain I wonder what the odds are that an intelligent alien race has already invented a machine intelligence totally bent on calculating pi by all means...

    • @waltermcmain3461
      @waltermcmain3461 9 years ago +1

      Walter McMain Also it seems like they would never reveal their true nature... why.. you'd still live longer than any humans, and if you helped them... briefly.. you could ensure your chances of ditching them all in the sun when you convince them that you're building them a space ship to take them to a glorious new world...

  • @guyaidelberg
    @guyaidelberg 9 years ago

    It's kinda weird there's a prime-time TV show about this...

  • @jorostuff
    @jorostuff 4 years ago +1

    Why are people like Nick Bostrom and Ray Kurzweil trying to predict what will happen after we reach superintelligence when in order to know what a superintelligent entity will do, you have to be superintelligent? The whole definition of superintelligence is that it's something beyond us and our understanding. It's like an ant trying to predict what a human will do.

    • @roodborstkalf9664
      @roodborstkalf9664 3 years ago

      You are arguing that human beings should stop thinking because it's futile. I don't think that is a very constructive approach.

  • @seek3031
    @seek3031 4 years ago +1

    Elon Musk prescribed the creation of a federal body empowered to explore the current state of AGI development. His implication was that if this was achieved regulation in one form or another would follow as a matter of course. Given his situation, disposition, and level of access to the technology, how can any one of us presume to know better? Such a measure would be strictly precautionary, and would have a tax burden of zero, practically speaking (a percent of a percent of a percent of military expenditure). Can anyone give me a reason why this is not a prudent course of action?

    • @roodborstkalf9664
      @roodborstkalf9664 3 years ago

      If this federal body is as competent as the CDCs we have seen in action everywhere in the last few months, it will not be beneficial; even worse, it will probably be harmful.

  • @JoshuaAugustusBacigalupi
    @JoshuaAugustusBacigalupi 9 years ago +6

    Just after 42:00, he claims, "We are a lot more expensive [than digital minds], because we have to eat and have houses to live in, and stuff like that." Roughly, the human body dissipates 100 Watts, assuming around 2250 Cal/day, no weight gain, etc. Watson, of Jeopardy fame, consumes about 175,000 Watts, and it did just one human thing pretty well - and not the most amazing creative thing. This raises all sorts of "feasibility of digital minds" questions.
    But, sticking to the 'expensive' question, humans can implement this highly adaptable 100 Watts via around 2250 Cal/day. And these calories are available to the subsistence human via ZERO infrastructure. In other words, our thermodynamic needs are 'fitted' to our environment. It is only via the industrial revolution, and immense orders of magnitude more fossil fuel consumption, that the industrial complex is realized, a prerequisite for Watson, let alone some digital mind. As such, Bostrom is not just making some wild assumptions about the feasibility of digital minds; they are demonstrably incorrect assumptions once one takes into account embodied costs.
    I'm constantly amazed how very smart and respected people don't take into account embodied costs. Again, if one is going to assume that "digital minds" are going to take over their own means of production, then: 1) they aren't less expensive than humans, and 2) general intelligence will have to be realized, and there is only one proof of concept for that, namely animal minds, not digital minds. And to go from totally human-dependent AI (175 kWatts) to embodied AGI (100 Watts), some major assumptions need to be challenged.
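    The 100 W figure in this comment follows from simple unit conversion. A back-of-the-envelope sketch (the 2250 kcal/day and 175 kW inputs are the commenter's numbers, not independently verified):

```python
# Back-of-the-envelope check of the comment's energy figures.
# Assumptions (taken from the comment above, not verified here):
# a human runs on ~2250 kcal/day; Watson drew ~175,000 W.
KCAL_TO_JOULES = 4184           # joules per kilocalorie
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

def avg_power_watts(kcal_per_day: float) -> float:
    """Average power dissipated by a body consuming kcal_per_day."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

human_w = avg_power_watts(2250)
watson_w = 175_000.0
print(f"human: {human_w:.0f} W")            # ~109 W, consistent with 'roughly 100 W'
print(f"ratio: {watson_w / human_w:.0f}x")  # Watson uses roughly three orders of magnitude more
```

    Even granting hardware improvements since Watson, the gap the commenter points to is on the order of a thousandfold.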

    • @Myrslokstok
      @Myrslokstok 8 years ago

      True.
      But not all humans have an IQ of 150. So if you could build one of those, it would be worth it.
      In the end, only the religious will argue we are better.
      And most people are not that creative and don't love change. An advanced robot with, like, 115 IQ would divide people into the good and the bad. And 99% of humanity could be replaced.

    • @PINGPONGROCKSBRAH
      @PINGPONGROCKSBRAH 8 years ago +1

      Joshua Augustus Bacigalupi Look, I think we can both agree that there are animals that consume more energy than humans which are not as smart as us, correct? This suggests that, although humans may be energy-efficient for their level of intelligence, further improvements could probably be made.
      Furthermore, it's not all about intelligence per unit of power. Doubling the number of minds working on a problem doesn't necessarily halve the time it takes to solve it. You get exponentially diminishing returns as you add more people. But having a single, extremely smart person work on a problem may yield results that could never have been achieved with 10 moderately intelligent people.

    • @Myrslokstok
      @Myrslokstok 8 years ago

      Just think if we could have a phone into our brains, so we could have Watson, Google Translate, Wolfram Alpha, the internet and apps in our thoughts. We'd still be kind of stupid, but boy, what a strange thing, a superhuman that is still kind of stupid inside.

    • @dannygjk
      @dannygjk 8 years ago

      +Joshua Augustus Bacigalupi Bear in mind how much power the computers of the 1950s required, which had tiny processing power compared to today's computers, and this will probably continue in spite of the limits of physics. There are other ways to improve processing power than merely shrinking components, and that is only speaking from the hardware point of view. Imagine when AI finally develops to the point where hardware is a minor consideration. Each small step in AI contributes, and just as evolution eventually produced us as a fairly impressive accomplishment, I think it's a safe bet that AI will eventually be impressive also, even if it takes much longer than expected. As many experts are predicting, it's only a matter of when, not if it will happen.