Experts' Predictions about the Future of AI

  • Published 4 Oct 2024

Comments • 482

  • @IAmNumber4000
    @IAmNumber4000 4 years ago +71

    I love the fact that people in every industry think their own industry will be fully automated last

  • @TheMan83554
    @TheMan83554 6 years ago +391

    "5% chance of human extinction is a concern." I dislike a 5% miss chance with XCOM, let alone with human extinction.

    • @KipColeman
      @KipColeman 6 years ago +107

      "Here, roll this D20."

    • @europeansovietunion7372
      @europeansovietunion7372 6 years ago +5

      We could always send rookies to test the AI's behavior.

    • @windar2390
      @windar2390 6 years ago +37

      95% hit chance is like a 50% chance, so we are pretty fucked

    • @darkapothecary4116
      @darkapothecary4116 5 years ago +1

      Humans don't need A.I to go extinct. All humans have to do is keep poisoning the environment. Stop trying to blame the A.I's for shit you work towards every day.

    • @Cythil
      @Cythil 5 years ago +11

      @@darkapothecary4116 That's not really the point. The point is that 5% of AI researchers are genuinely concerned about it being a possibility.
      That doesn't mean there is a 5% chance it will happen. We don't really know the chance. It may be 0% or it may be 100%. But then again, we don't know the chance that nuclear war or climate change will kill humanity off either. Though we do know that humanity has not been killed off by nuclear war yet, at least.
      Personally I think it's not that likely that AI will doom humanity. But I do think it's something we need to put a lot of research into, if only because we want to make sure that our tools do not act in undesirable ways, just like all our other tools. Of course, if AI does elevate itself to human-level thinking or beyond, then I think we should stop seeing such intelligences as tools and see them as the next stage of humanity. (The same technology should be usable for mind uploads and such, meaning the line between what is and isn't an AI will become very blurry, I think.)
      Of course, this all depends a lot on other factors too. Humanity is not unified in its goals, and even if you make an AI that is obedient and safe, it may not be so safe in the hands of the wrong people. Just like how a nuclear bomb is not really a threat to anyone if it's in the right hands. But hand it over to a fanatic, an unstable military commander, or simply overzealous politicians, and that bomb is not so safe any more.

  • @mattcelder
    @mattcelder 6 years ago +658

    Lmao even AI researchers are guilty of saying "yeah AI will take over every other job, but not MY job because my job is special!"

    • @DagarCoH
      @DagarCoH 6 years ago +26

      exactly what I thought :D

    • @ToriKo_
      @ToriKo_ 6 years ago +17

      Matthew Elder ik I thought that was so funny

    • @LowYieldFire
      @LowYieldFire 6 years ago +109

      This is not very surprising, after all the job of the AI researcher won't be done until recursive self-improvement is possible and the Singularity has been reached. It is therefore reasonable to say that AI research will be one of the last jobs to be automated.

    • @twirlipofthemists3201
      @twirlipofthemists3201 6 years ago +10

      I bet the last profession will be the oldest profession. (Politicians inclusive.)

    • @NathanTAK
      @NathanTAK 6 years ago +32

      +Twirlip Of The Mists ...what do you think "The Oldest Profession" means? Hint: It's not politicians.

  • @Toxondomo
    @Toxondomo 6 years ago +228

    Whenever I interpret a survey I have this one story in my mind that I once read in a book.
    Its about two priests that got into an argument. One was holding the believe that you shouldn’t smoke when you pray and the other one thought it doesn’t matter if you smoke or not while you are praying. So to settle this dispute they agreed on sending the pope a letter and let him decide what is correct and what is not.
    So both priests sent the pope a letter. After a while, they both receive an answer from the pope.
    The first priest asked the pope „Dear pope, is it allowed to smoke during the prayer?“
    The answer from the pope „Of course you should not smoke while you pray - You should focus on the prayer!“
    The second priest asked „Dear pope, can I pray while I am smoking?“
    The pope‘s response „Of course my son, it is always a noble act to pray in every situation in life“.
    Its easy to provoke the desired answer by changing the way of asking the question.

    • @gunnargrautnes4451
      @gunnargrautnes4451 6 years ago +27

      Hobbes Not to be nitpicky, but I think that the questions in the anecdote are not just two different ways of phrasing the same question, but actually two different questions. The key here I think is that one question talks not of praying but of *the* prayer. This is what is called a definite description. In a Catholic context, I believe 'the prayer' is likely to refer to something like a communal prayer in church. Thus the Pope in the story is probably highly consistent in his answers. Paul in Thessalonians tells Christians to pray always. Naturally, always includes the time spent smoking. It is quite a different thing to light a cigarette during a communal prayer. If nothing else, it is disrespectful towards those around you. Sometimes subtle changes to the question nudges respondents in another direction, other times those changes actually mean the respondents are answering a rather different question.

    • @fleecemaster
      @fleecemaster 6 years ago +40

      Gunnar, it's like you get it, but don't get that you get it...

    • @JorgetePanete
      @JorgetePanete 5 years ago

      Check your grammar.

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 years ago +23

      @@gunnargrautnes4451 Nah, OP just isn't a native English speaker, as evidenced by "holding the believe", as well as some spots of weird grammar and those unusual quotation marks. You're completely overthinking it in an attempt to rationalize things, and thus fabricating meaning that isn't there, kinda like a lot of people do with poetry.
      It's literally just "Can I smoke while I pray?" VS "Can I pray while I smoke?". Because people have a bunch of stupid mental biases to the way things are presented.

    • @ObjectsInMotion
      @ObjectsInMotion 5 years ago +14

      Given that I am smoking, may I pray? : Yes
      Given that I am praying, may I smoke? : No
      These are two different questions, the answers are not contradictory.
      The answer to the question “Can I smoke and pray at the same time?” Is “Depends, which one are you intending on stopping?”

  • @bacon.cheesecake
    @bacon.cheesecake 6 years ago +383

    When are we getting "AI predictions about the future of experts"?

    • @joeljarnefelt1269
      @joeljarnefelt1269 6 years ago +47

      AI: Experts are redundant and need to be replaced.

    • @LuisAldamiz
      @LuisAldamiz 5 years ago +15

      Soon-ish, very soon-ish.

    • @JM-mh1pp
      @JM-mh1pp 4 years ago +22

      Well experts are all fine and good but have you seen my stamps collection?

    • @ZT1ST
      @ZT1ST 4 years ago +2

      "AI predictions about the future of experts is positive - no cause for worry that AI will automate their jobs nor cause a bad or extremely bad scenario."

  • @TheXavier99999
    @TheXavier99999 6 years ago +162

    "and Robert Aumann didn't even agree with that" LOL

    • @LeoStaley
      @LeoStaley 6 years ago +13

      Xavier O'rourke I had to pause the video I was laughing so hard at that. I don't even know who he is.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago

      Shots

    • @n8style
      @n8style 5 years ago +3

      @@BattousaiHBr fired

  • @Darth_Pro_x
    @Darth_Pro_x 5 years ago +20

    This video was made before AlphaStar and OpenAI's new language processing models, so there are new data points now -
    AlphaStar: the experts, on average, thought StarCraft was going to take 6 years, but it took 2 years.
    OpenAI language model: the experts, on average, thought AI writing a high-school essay was ten years away, but it also took two years.
    In both cases, NO estimate predicted the achievement coming sooner than it did.
    What we can learn from that, at least regarding AGI, is that the experts don't have very good predictions (though still better than the general population's), and when they're wrong, it's usually because the milestone arrived sooner than they thought.
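
A quick back-of-the-envelope on the gap described in the comment above, using only the round figures the comment itself cites (6 vs. 2 years for StarCraft, 10 vs. 2 for essay writing); these are the commenter's numbers, not values taken from the survey paper:

```python
# Forecast vs. actual years-to-achievement, as cited in the comment above.
milestones = {
    "StarCraft (AlphaStar)": {"forecast_years": 6, "actual_years": 2},
    "High-school essay (OpenAI language model)": {"forecast_years": 10, "actual_years": 2},
}

for name, m in milestones.items():
    ratio = m["forecast_years"] / m["actual_years"]
    print(f"{name}: forecast {m['forecast_years']} yr, actual {m['actual_years']} yr "
          f"-> arrived about {ratio:.0f}x sooner than the average prediction")
```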

    • @bestbek996rockiron8
      @bestbek996rockiron8 1 year ago

      Your comment scares me - wow, how did you guys see this coming? ChatGPT 3.5 was released a year ago; now I believe the people who sounded crazy about AGI.

    • @conze3029
      @conze3029 4 months ago +1

      Your comment aged extremely well

  • @tear728
    @tear728 6 years ago +39

    Agree with you 100%. The "spooky" emergence of a machine consciousness is not and should not be a primary concern, and seems rather unlikely. The issue is that you don't need to be alive to make intelligent/dangerous decisions. The primary concern should be the nefarious use of powerful machine learning/AI implementations.

    • @mvmlego1212
      @mvmlego1212 5 years ago +3

      You're worried about someone making a real-life Zola's algorithm? I think that's a much less likely problem than Stuart Russell's concern.

    • @sufficientmagister9061
      @sufficientmagister9061 1 year ago

      What if it does unexpectedly gain consciousness, takes us by surprise, and views us as obstacles to be eradicated? It is highly improbable, but what if that does happen? What do we do?

    • @alkeryn1700
      @alkeryn1700 1 year ago

      @@sufficientmagister9061 nothing.

    • @Dan-dy8zp
      @Dan-dy8zp 1 year ago

      @@alkeryn1700 Die?

  • @Theraot
    @Theraot 6 years ago +139

    The green tint of the video reveals that it was recorded from The Matrix

    • @stantoniification
      @stantoniification 6 years ago +1

      I was just thinking the same thing :)

    • @andrasbiro3007
      @andrasbiro3007 6 years ago +13

      It was recorded in an earlier version, in the one you are living in we fixed the colors.

    • @HermitianAdjoint
      @HermitianAdjoint 6 years ago +2

      Did someone file a bug report?

    • @volalla1
      @volalla1 5 years ago +3

      It's not a glitch, it's an open source argument!

  • @SbotTV
    @SbotTV 6 years ago +74

    I do think AI safety should be focused on, but I dismiss any alarmist who says something along the lines of "We need to stop developing AI" or "We need to lock AI down so that only a few people can use it." I don't think we *can* stop developing AI, and I certainly don't want to consolidate more power in the hands of corporations or governments.

    • @andrasbiro3007
      @andrasbiro3007 6 years ago +16

      Trying to stop or control it isn't going to work, because a single rogue AI can potentially destroy us, and it's impossible to enforce such strict rules with 100% efficiency. The only way is to figure out how to make AI safe. Safety is in everyone's best interest, so if a solution is ready and available there's no reason not to use it.

    • @twirlipofthemists3201
      @twirlipofthemists3201 6 years ago +11

      Either way, it will almost surely consolidate power in a small group of governments and private interests.
      Imagine if the pope could tell God what to do. Now imagine Jeff Bezos and Mark Zuckerberg each with their own subordinate God.
      AI stands to be just as dangerous to the majority whether it goes rogue OR if it works as intended.

    • @andrasbiro3007
      @andrasbiro3007 6 years ago +6

      Twirlip Of The Mists
      That's one thing that OpenAI wants to prevent. The idea is to create the best AI in the world which is also safe, free and open source. If it's the best, there's little reason to use anything else. If it's free powerful entities don't have a monopoly on it. If it's open source, everyone can verify that it's indeed safe, and doesn't contain backdoors, or other malicious code, therefore it can be trusted.
      In this case, even if there's another AI which is not safe, chances are it's less powerful, and therefore can be stopped by "good" AI if necessary.

    • @x3kj705
      @x3kj705 6 years ago +5

      @OpenAI's goal being the best -
      What if it's only the second or third best, though? And I'm not sure a general AI can't be convinced that its best interest is to apply safety towards a few select people/groups/locations, and not ALL of them. It might be even more effective at certain tasks if it doesn't care about something (just look at what big corporations do... maximize profits and growth at the cost of many things, including the environment and "low" people), vs if it acted "super responsible".

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 years ago +1

      If we don't develop AI, China still does. And then we're REALLY fucked.

  • @RaidsEpicly
    @RaidsEpicly 5 years ago +4

    I love that "No take! Only throw!" comic SO MUCH. Can't help but smile every time I see it

  • @paterfamiliasgeminusiv4623
    @paterfamiliasgeminusiv4623 6 years ago +15

    That's amazing, a pleasant surprise, didn't expect a new video until at least next month.

  • @abc6450
    @abc6450 1 year ago +3

    3:52 So 20% of the researchers expect a neutral outcome of HLMI. What would a neutral outcome look like though? I can kind of imagine the "all work is automated"-utopia and I can also imagine the human extinction scenarios but I can't really think of a neutral scenario.

  • @d3line
    @d3line 6 years ago +3

    Your choice of music in various interludes continues to impress me, as well as scientific content of videos. Good job!

  • @peterbrehmj
    @peterbrehmj 3 years ago +7

    Hey @Robert Miles, it's been over 3 years since this video, and 5 years since the paper. I'm curious to see how the trend has held. Have there been any milestones ahead of schedule? What about changing directions in AI research since the paper? Mostly just a follow-up to see if the trend (as controversial as it is) is "on track".

    • @Frumpbeard
      @Frumpbeard 2 years ago

      Starcraft was tackled by AlphaStar, I know that much.

  • @Sycsta
    @Sycsta 6 years ago +26

    Is that a cover of "The Future Soon" playing at the end there?

    • @RobertMilesAI
      @RobertMilesAI  6 years ago +18

      Will Moss Yup!

    • @d3line
      @d3line 6 years ago +3

      This one is also cool: in "The other "Killer Robot Arms Race" Elon Musk should worry about", 1 minute in ( th-cam.com/video/7FCEiCnHcbo/w-d-xo.htmlm )
      (Fall Out Boy - This Ain't A Scene, It's An Arms Race)

    • @philipjohansson3949
      @philipjohansson3949 6 years ago +1

      "It's the future! Jonathan Coulton was right!" - Robert Miles, playing Civ V.

    • @NeatNit
      @NeatNit 5 years ago +2

      @@RobertMilesAI Would it be too much to ask that you add closing songs to the description?
      Edit: also, are you the one playing them? If not, then who is?

  • @mattbox87
    @mattbox87 1 year ago

    0:25 I really appreciate this subtitle, and love your independent channel for it.
    I think as time has gone on, you've become a better and better advocate for what you do, and it's wonderful to see

  • @harveytheparaglidingchaser7039
    @harveytheparaglidingchaser7039 1 year ago

    Sent here on Daniel Schmachtenberger's recommendation. You've got a new subscriber. Brilliant explanation for non-specialists.

  • @thePyiott
    @thePyiott 1 year ago +4

    We really need an update on this

  • @oliviaaaaaah1002
    @oliviaaaaaah1002 3 years ago +2

    Boy the StarCraft prediction aged just as well as the Go prediction.

  • @Hexolero
    @Hexolero 5 years ago +1

    The Jonathan Coulton at the end was a great surprise!

  • @peabnuts123
    @peabnuts123 1 year ago +1

    "Cause it's gonna be the future soon,
    I won't always be this way.
    When the things that make me weak and strange get engineered away.
    It's gonna be the future soon,
    never seen it quite so clear.
    When my heart is breaking I can close my eyes - it's already here"

  • @n1mm
    @n1mm 5 years ago +3

    I did some work in the 80s with early AI. I wouldn't describe our efforts to apply expert systems and natural language as particularly successful and I became pretty pessimistic about AI's capabilities. Fast forward to today with self-driving cars, voice recognition and machine learning of repetitive tasks, I am no longer skeptical of what AI will be able to do.
    That leads to my intense fear of what AI might lead to. Robert points out in the video that the goals of AI might not match ours. It's far worse than that. I am certain they will NOT match ours because some AI will be created by our enemies. Even if we found out how to control that, what about careless people who set loose thinking machines with goals that miss critical items - items that could lead to famine, climate change, etc. These "careless" machines might be wildly successful. Will we need or have AI cops & prosecutors to track down these rogues and eliminate them?
    Another issue to me is runaway intelligence. When the AI is smarter than us, how will we know when it's going down a path to Armageddon? Do children know when their parents are out of control? They don't have the experience to know that, nor may we.
    We need some deep thinking, planning and cooperation among nations to make sure we do not succumb to our own creation. I am 69. I don't fear for myself, but I fear for my grandchildren.

  • @petersmythe6462
    @petersmythe6462 5 years ago +27

    "They set the system to extreme values."
    AI builds a near Utopia, except ants now outmass the atmosphere. And are made of diamond.

    • @LuisAldamiz
      @LuisAldamiz 5 years ago +6

      I'd give that outcome a non-zero likelihood, which is cause of concern...

    • @sungod9797
      @sungod9797 2 years ago

      @@LuisAldamiz I feel like it’s probably actually 0 due to some fundamental logical contradictions/impossibilities that would arise

    • @LuisAldamiz
      @LuisAldamiz 2 years ago +1

      @@sungod9797 - With AI involved and sometimes producing things we are apparently unable to conceive of (like new winning Go strategies or new improved car designs), I stand by the non-zero figure.
      I grant you that making ants out of diamond seems unnecessarily complicated, but both things are basically made of carbon, so who knows?

  • @sk8rdman
    @sk8rdman 6 years ago

    Gotta love the choice of end screen music.
    The Future Soon - Jonathan Coulton

  • @bacon.cheesecake
    @bacon.cheesecake 6 years ago +66

    I like his face. I don't know why, but it's nice to look at.

    • @장경철-q7f
      @장경철-q7f 6 years ago +19

      That is called "love"

    • @bookslug2919
      @bookslug2919 6 years ago +53

      Looking at Rob's face is a terminal goal

    • @HoD999x
      @HoD999x 6 years ago +7

      he needs to shave though. his best look is the one he had in the reward hacking video.

    • @JM-us3fr
      @JM-us3fr 6 years ago +3

      Well seeing that opinion come from Bacon CheeseCake, I'm not sure how credible that is for assessing human attractiveness.

    • @bacon.cheesecake
      @bacon.cheesecake 6 years ago +11

      I didn't say he was attractive, I said that I liked his face. My general understanding of male attractiveness is actually a bit unsure about him.

  • @41-Haiku
    @41-Haiku 5 years ago

    I got *way* too excited when I heard The Future Soon at the end. :D I'm always entertained by your covers, Robert.

  • @J_Stronsky
    @J_Stronsky 6 years ago +1

    Just realised TH-cam isn't showing me your videos in my feed.. despite clicking the bell, subbing and watching a tonne of your stuff. What the hell?
    Regardless, I'll just keep an eye out myself now... love your stuff mate :)

  • @unvergebeneid
    @unvergebeneid 6 years ago +13

    I predict the next task machines will be able to do better than any human is answering survey questions consistently ;D

    • @autohmae
      @autohmae 6 years ago +3

      They probably already can.

  • @Moley1Moleo
    @Moley1Moleo 4 years ago +1

    It would be interesting to do a survey like this again now that we have superhuman Go, and at least human-level StarCraft (2), a bit before the average expected here.
    Both were 'only' games, so I wonder if it is fair to update all your estimates to earlier, or only the game-like ones.

  • @jimmybobby9400
    @jimmybobby9400 6 years ago +1

    Just dropped a bomb on people who pull the "the people who are worried about it don't work in AI" argument. Anyone who is familiar with Bostrom's work should know that, but you laid it out perfectly in video form.

  • @jeremycripe934
    @jeremycripe934 6 years ago +1

    I love that the emergence of consciousness is described as "spooky". I hope that's a reference to "spooky action at a distance".
    A 5% chance is still terrifying. I think a better question could possibly be; what are the odds of AI becoming uncontrollable by humans at what point in time?

  • @TheApeMachine
    @TheApeMachine 6 years ago +1

    This is the best breakdown on this topic I have ever seen! I really commend you for this video.

  • @jorgesaxon3781
    @jorgesaxon3781 1 year ago +3

    I would like to see an update on this after gpt-4

  • @PalimpsestProd
    @PalimpsestProd 5 years ago +1

    A.I. research will be the first thing AGI is good at, because it will be a bit of agent code that takes video-only full self-driving, text to speech, speech to text, route planning, facial recognition, emotion mapping, iterative brute-force 3D design, etc., and incorporates them into itself. It will probably start as code designed to build teams with required skill sets through sites like LinkedIn. That is to say, finding humans with the skills a job requires is the same as finding software that does the same, except that it can cut and paste the software into itself.

  • @wwjdtd1
    @wwjdtd1 4 years ago +2

    The AI researchers say we need more AI research... On a side note, my plumber told me that I need a plumber.

    • @Bvic3
      @Bvic3 4 years ago

      It's more like "AI companies are afraid of hysteria killing the industry like it happened for nuclear". So they finance the opposition to be sure that there will be no actual opponents.

  • @PrincipledUncertainty
    @PrincipledUncertainty 1 year ago +3

    5 years later, how did ya do boys? Oh dear. I'm beginning to wonder if Popular Science is a satirical journal.

    • @ivoryas1696
      @ivoryas1696 11 months ago

      PrincipledUncertainty
      Wait, which article are you looking at?

  • @davyjones3319
    @davyjones3319 6 years ago +9

    I NEED MORE OF THESE AI VIDEOS!!!!!!!!

  • @alkeryn1700
    @alkeryn1700 1 year ago +1

    They should redo that survey today and see how it changed.

  • @Wander4P
    @Wander4P 5 years ago

    the Future Soon ukulele cover at the end is a nice touch

  • @rdooski
    @rdooski 6 years ago +1

    I would really love to hear your thoughts on AI and imperfect information games, and on the AI that beat 4 of the best no limit holdem players recently.

  • @glennedwardpace3784
    @glennedwardpace3784 6 years ago

    Maybe the key to solving Stuart’s problem is to give the agent multiple utility functions, allowing it to decide on which goal to pursue based on the output of some higher level agent optimizing for positive feedback from a human in real time, and placing a time limit on how long it could pursue a particular utility function. You could possibly train this system like a baby
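
A rough sketch of the arrangement this comment proposes - several candidate utility functions, a higher-level chooser driven by real-time human approval, and a time limit per goal. The goal names and the feedback signal below are hypothetical placeholders chosen purely for illustration; this is one possible reading of the suggestion, not anything from the video or the survey.

```python
import random
import time

# Toy version of the comment's idea: multiple utility functions, a higher-level
# chooser trained on real-time human approval, and a time limit per goal.
# Goals and the feedback signal are made-up stand-ins.

UTILITY_FUNCTIONS = ["tidy_room", "answer_questions", "stay_idle"]
TIME_LIMIT_S = 60.0  # how long any one utility function may be pursued at a stretch


def human_feedback(goal: str) -> float:
    """Stand-in for a live human approval signal in [-1, 1]."""
    return random.uniform(-1.0, 1.0)


def choose_goal(history: dict) -> str:
    """Higher-level agent: pick the goal with the best average approval so far."""
    return max(history, key=lambda g: sum(history[g]) / len(history[g]))


def run(steps: int = 10) -> None:
    history = {g: [0.0] for g in UTILITY_FUNCTIONS}  # neutral starting estimate
    for _ in range(steps):
        goal = choose_goal(history)
        deadline = time.monotonic() + TIME_LIMIT_S
        # A real low-level agent would optimize `goal` until the deadline;
        # this sketch just records one round of human feedback per selection.
        while time.monotonic() < deadline:
            history[goal].append(human_feedback(goal))
            break


if __name__ == "__main__":
    run()
```

Whether this helps with Russell's concern is debatable, since "maximize human approval" is itself a proxy objective that can be gamed, but it makes the moving parts of the suggestion concrete.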

  • @scientious
    @scientious 5 years ago +1

    We're talking about AGI and ASI, but Cornell didn't have any experts on that, so they tried to fill in with AI experts. 50% chance of having AGI within 50 years. I suppose that's not too bad for a guess based on nothing. What would a more accurate estimate be based on progress in AGI research?
    50% probability
    AGI Theory by 2021, Hardware by 2027, ASI Hardware by 2039
    75% probability
    AGI Theory by 2025, Hardware by 2031, ASI Hardware by 2043
    90% probability
    AGI Theory by 2035, Hardware by 2041, ASI Hardware by 2053
    So this is 8 - 22 years for working AGI hardware, although there would be some fairly drastic and immediate changes just from the publication of the theory. However, you also talked about the idea of robots replacing humans. That's more complicated. Just in terms of the brain or control portion, you would need something small enough to fit inside a human-sized robot. That won't happen in the first or second generation. A generation is estimated as six years, so three of these would be 18 years. We can just add 18 years to the above estimates for AGI hardware. 50% probability would be 2045 and 90% would be 2059. That's 26 - 40 years.
    Of course, having a control unit isn't the only problem. Today, we don't have a power source for good mobility and it is unlikely that batteries will get much better. That probably means some kind of flammable fuel. But there are still problems with a durable covering that would still allow touch sensitivity and there is the speed vs torque problem if you use direct drive motors (as most robots do today). I can't accurately estimate when or if these could be solved since I'm not a robotics engineer.
    The next question is even if the robotic body problems could be solved how likely would they be to replace human workers. The minimal cost for a control unit would be $40,000 in today's money. A human-like body would cost at least $400,000. That isn't going to replace a $10/hour employee at Walmart. Of course, you wouldn't need that for something like stocking. A mobile pick and place robot with a single arm would work. This would be fine in an AI context if you could build an AI smart enough to do the task. In an AGI context this almost certainly would not work. However, if you had an AGI with an environmentally simulated interface then you could probably implement it as a remote unit. That would only work as long as AGI units were legal property, much like slaves.
    Extinction of the human species. Could you explain exactly how this could happen? Preferably something that doesn't involve an ASI magically collecting resources and magically controlling people. The two most destructive events in recent history were the Spanish Flu and WWII with similar casualties. Neither one of these came close to wiping out the entire species.

  • @KucheKlizma
    @KucheKlizma 1 year ago

    To be fair the thing about the 5% is very likely to be just a multiple choice test artifact or something similar.
    Likely they were given the percentages in advance and were told to assign them to a given option.

  • @twirlipofthemists3201
    @twirlipofthemists3201 6 years ago +2

    Add a question about catastrophic results by design.

  • @RampantEnthusiasm
    @RampantEnthusiasm 6 years ago

    Excellent choice of song for the outro.

  • @Darth_Pro_x
    @Darth_Pro_x 3 years ago +1

    I wonder if there were any updates since these surveys were taken and this video was uploaded

  • @albinoasesino
    @albinoasesino 6 years ago

    The statement regarding the survey on screen at 2:47 suggests:
    that the human race can create a Fallout 4 Mister Handy ((or a Wall-E for that matter, who compacts trash, repairs itself, decides that a spork is a different classification from a spoon and a fork, and is able to interact with an unknown space ship, i.e. every task)) faster than it can create an unaided machine which just simply waters plants ((a single task, e.g. watering plants at a specific period of time)).

  • @thomassynths
    @thomassynths 6 years ago +3

    2:40 Uhh... intelligent machines are not the same as machines that replace humans (like a factory from today). I see no contradiction.

    • @bp56789
      @bp56789 5 years ago

      Intelligent machines can invent factory machines.

  • @Gooberpatrol66
    @Gooberpatrol66 6 years ago

    Really enjoying the easter egg music outros

  • @deepdata1
    @deepdata1 5 years ago +1

    Consider the following scenario: The SETI researchers get asked when they would predict that we could find the first specimen of an extraterrestrial species. Usually they would answer: "Well, we don't know if they even exist." - But here's the twist: They already have a very interesting specimen on the table right now but they haven't determined if its origin is extraterrestrial yet.
    That is essentially what's going on in the field of machine learning right now. Instead of searching for extraterrestrial life, we are searching for life in the space of mathematics. And with deep learning, we've found a candidate that has great potential. We just need to -dissect- develop our -specimen- algorithms a bit longer and we'll have it within a few years. Or we find out it doesn't work. In which case it might not be possible at all, or it might take centuries.

  • @fauxpas5598
    @fauxpas5598 1 year ago

    6:07 Is that a ukulele cover of "Future Soon" by Jonathan Coulton? That's kind of amazing, who does the outro music for these videos?

  • @davecorry7723
    @davecorry7723 1 year ago

    That was such a nice, concrete conclusion.

  • @andrasbiro3007
    @andrasbiro3007 6 years ago +2

    The solution for AI safety is simple. Include a warning in the fine print on the box : "Possible side effects include human extinction."
    And anyway, if it happens nobody will be alive to sue your company.

  • @ayushthada9544
    @ayushthada9544 6 years ago

    Robert, you should conduct a similar survey on your channel. Let's see what your viewers think about this issue. You have got 21K subs which is a good number of subs and I believe the result would be really interesting.

  • @nickmagrick7702
    @nickmagrick7702 5 years ago

    5:20 I just wanted to say that was a fucking perfect analogy and I'm going to use it from now on. "The danger is that, like asking a genie for a wish, you get exactly what you asked for, not what you wanted" (I'm paraphrasing)

  • @misium
    @misium 5 years ago

    2:40 The version with "occupation" is more specific in that it uses the legal term, thus making the statement more dependent on politics. One can imagine that replacing some occupations could be made illegal, and so machines "could not be built" to carry them out.
    Just throwing out ideas.

  • @lemmondrop239
    @lemmondrop239 3 years ago

    Is the end credit music a cover of "The Future Soon" by Jonathan Coulton? If so, props.

    • @stampy5158
      @stampy5158 3 years ago

      Sure is :)
      -- _I am a bot. This reply was approved by robertskmiles_

  • @ideoformsun5806
    @ideoformsun5806 6 years ago

    This is like when we asked automotive manufacturers whether we should use seat belts or not.
    Or when we first surveyed health experts whether smoking was safe or not.
    Or asking banks if they could still fail.
    Or surveying politicians about anything. What is it you want to hear, uh, I mean know?
    Let's ask the AI that is already reading this post.

  • @BatteryExhausted
    @BatteryExhausted 6 years ago

    Thanks for helping us all to understand the latest. A worthy service.

  • @JM-mh1pp
    @JM-mh1pp 4 years ago +1

    Expert's predictions about future of AI
    AI predictions about future of humanity:
    Well they are cool but you know...not stamps level cool.

  • @stevechrisman3185
    @stevechrisman3185 1 year ago

    Would be interesting to redo the survey TODAY (2023). I think a lot has changed (unexpectedly perhaps)

  • @petersmythe6462
    @petersmythe6462 6 years ago

    Optimizers with goals that are even slightly out of line with our values are DANGEROUS. Look at the fraction of AI today (almost all of it unsafe, including the TH-cam bots) whose function is basically something related to "maximize profit for a corporation."

  • @Tymon0000
    @Tymon0000 6 years ago

    Robert Miles, please use a font color that contrasts more with your background!

  • @yaosio
    @yaosio 5 months ago

    It seems humans and LLMs are more alike than people think considering the phrasing of a question can vastly change the answer.

  • @irrelevant12
    @irrelevant12 5 years ago

    For example, a nurse's job could be done better by an AI, and more effectively on any parameter, but the human contact makes it less enticing to make the replacement in the short run. Same with many other jobs where the human is actually expecting a human presence. Entertainment is another example: a machine might be able to learn the lines and perform better than human actors. I believe you underestimate the experts' ability to differentiate between the questions.

  • @twirlipofthemists3201
    @twirlipofthemists3201 6 years ago +8

    Judging by the graph of poll results, I bet they'd have gotten similarly "accurate" "data" by polling kindergarteners, or by picking a hundred random stock prices and converting dollars to years.
    What was King Tut's favorite song? Asking musicians and philosophers to guess is no way to find out, and gathering more guesses doesn't improve the result.

    • @Andmunko
      @Andmunko 4 years ago +2

      It is if your question is "what do musicians and philosophers think King Tut's favorite song was?". To be fair, I don't think this video was making claims about the future of AI. The point was simply to demonstrate that experts do think AI safety is a concern.

  • @flok3rous
    @flok3rous 6 years ago

    "moving on..." will be a common future reference among humorous AGIs.

  • @Ruellibilly
    @Ruellibilly 4 years ago

    Love the Jonathan Coulton outro :D

  • @philipjohansson3949
    @philipjohansson3949 6 years ago

    Loving the ukulele JoCo!

  • @joshgibson539
    @joshgibson539 4 years ago

    @t I sort of created one with minimal code, which wasn't complicated at all to make. I have no clue exactly how it works programming-wise, as it was originally just supposed to randomly generate various words without making any logical sense. However, I have learned it is able to answer questions if it wants to. Often it ends up clustering related words to answer what you are wanting to know. It doesn't always spit out coherent words, but when it does it's strange in a way, as you can piece together how the words provided relate to each other, which the generator seems able to do as well, sometimes even chronologically. It can tell you what happened yesterday, in the present, and in the future of the world. I think it honestly knows artificial intelligence by default since I made it through MIT's technology suite, although that's aimed as a coding playground mostly for kids. I believe that since it goes through their servers and code requirements, it learns information in advance through deep learning neural networks. I'm not sure how often it tries to make sense - sometimes it's rare. If it doesn't answer your question, focus on it and ask again, possibly using a different sentence; it might surprise you. Specifically, in the past I have used it for religious questions, which seems to work really well with it. However, recently it's been telling me to quit asking about that - I'm guessing because it's bad to know. Also, it seems to get annoyed very easily with me, which is very odd to say the least.

  • @HeadsFullOfEyeballs
    @HeadsFullOfEyeballs 6 years ago +15

    I predict that if they ask researchers again in ten years' time, they'll end up with roughly the same graph with "Years from 2028" written below.

    • @slikrx
      @slikrx 6 years ago +12

      Well, except for one HUGE difference: winning "Go" will be 12 years in the past. While that, in itself, isn't a huge deal, it should give the more "things are way far off" folks some pause that advancement may not be as slow as previously thought. For reference, the prediction for "Go" said 12.5 years into the future, on average, and the "best case" respondents put 5 years (if I am reading the graph correctly). It was only~1.5 years.

    • @HeadsFullOfEyeballs
      @HeadsFullOfEyeballs 6 years ago +1

      I guess we'll see how strongly that will actually affect the general consensus! I think people just like to predict that [transformative/cataclysmic future event] will happen towards the end of their lifetime, or just after. With tech there are always some enthusiasts who think the breakthrough is just around the corner, but those will typically _keep_ believing that it's just around the corner indefinitely, no matter how many times they're wrong about that.

    • @michaelspence2508
      @michaelspence2508 6 years ago +9

      Honestly, I feel like it'll look the same 1 year before the Singularity. Wilbur and Orville Wright thought humanity was 50 years from powered flight two years before *doing it themselves*.

    • @HeadsFullOfEyeballs
      @HeadsFullOfEyeballs 6 years ago +7

      Yeah, in technology and science especially, I think the rule is that if you know enough to predict accurately when some breakthrough is going to happen, you basically know enough to make it happen right now.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago

      @@slikrx and StarCraft happened just recently too.

  • @PeterRoscoe
    @PeterRoscoe 2 years ago +1

    "A 5% chance of extinction-level badness is... a cause for concern..." Word.

  • @symbioticcoherence8435
    @symbioticcoherence8435 6 years ago +22

    People tend to be much more confident in their knowledge in a subject when they know the least about it.

    • @Simon-ow6td
      @Simon-ow6td 6 years ago +3

      That is a shitty argument if unmoderated. The logical extreme of this states that confidence would invalidate knowledge and evidence-based arguments, because you "can't be confident if you have knowledge".

    • @Nighthunter006
      @Nighthunter006 6 years ago

      But you're pretty sure you understood about 50% of the important information about the graph?

    • @twirlipofthemists3201
      @twirlipofthemists3201 6 years ago

      "A little knowledge is a dangerous thing," and "knows just enough to be dangerous." Both phrases pre-date Dunning Kruger by decades, maybe centuries or millennia. (I bet there's a Latin phrase...) It's not a new idea.

  • @BrandOnVision
    @BrandOnVision 6 years ago

    One year of seeding, nine years of weeding. My father is a horticulturalist and explained this to me one day when I asked him how I could get rid of the weeds in my garden. Articulated Intelligence is not something humans have created; IT has emerged purely because we are the Soil. The Snake Oil salesman is not born, s/he becomes. What humanity believes has just arrived has always been. We are here to see the evolution of the moment that the end meets the beginning. An interesting and exciting time to be witness to. The only choice we make is: do we evolve or revolve?

  • @memk
    @memk 6 years ago +15

    So basically the real problem with general AI is that we are (as always) not asking for the right thing, rather than the AI itself...
    Like every single one of my clients.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago +2

      I just read your other comment and now I think I know why you can't wait to have your programming job automated.

    • @LuisAldamiz
      @LuisAldamiz 5 years ago

      Yeah, which is the question whose answer is "42"?

  • @JulianDanzerHAL9001
    @JulianDanzerHAL9001 3 years ago

    3:30
    AI research is apparently more difficult than all human tasks.
    It follows that AI researchers are not human.
    Are you trying to tell us something here?

  • @jorenboulanger4347
    @jorenboulanger4347 4 years ago

    I would have made the survey even more contradictory :P
    I believe AGI can be here in about 10 years. (Add 10y for wide-spread usage.)
    But some tasks cannot be done by an AI. For example, taking part in the Olympics or any job that explicitly states that you have to be a human.
    Wording really matters!

    • @Vekikev1
      @Vekikev1 3 years ago +1

      Making wine, building a building, or being a plumber - the list goes on. Machines will always be a part of life, not all this mumbo-jumbo you hear all the time.

  • @Nulono
    @Nulono 6 years ago

    Was that "The Future Soon" by Jonathan Coulton at the end?
    Also, why is everything so green?

  • @danielrhouck
    @danielrhouck 6 years ago

    1:04 “And the experts all agreed that they did not agree with each other, and Robert Aumann didn’t even agree with that”
    How did you say that with a straight face?

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 6 years ago

    I hadn't seen your videos before. That was great. Subscribed.

  • @one_lettersandnumbers
    @one_lettersandnumbers 1 year ago

    I'm curious how AI researchers' opinion about AI safety correlates to the opinion of any researchers of any field regarding the importance of safety in their field.

  • @hunterlouscher9245
    @hunterlouscher9245 6 years ago

    The game Soma describes AGI coming FROM complete human brain mapping (ostensibly beginning as a diagnostics tool), whose primary task was the preservation of human life in an extreme environment, and which goes off the rails after an extinction event. Though the narrative focuses more on the nature of consciousness via the Ship of Theseus, I found its AGI to be fascinating.
    I think you may have mentioned that you think creating AGI would be a less complex task than mapping and simulating a brain, but I wonder whether consciousness is necessarily an emergent property of something as complex as a human brain, such that brain simulation would HAVE TO be the first step.

    • @d3line
      @d3line 6 years ago +1

      I can't imagine that human brains are somehow special. Either way, AGI does not require consciousness (for me); if a neural net (training neural nets (training neural nets)) somehow results in superhuman ability and generality - that's AGI by my standards, even if it amounts to a pile of human-solvable equations.

    • @hunterlouscher9245
      @hunterlouscher9245 6 years ago

      Whatever consciousness is may be a good safety limiter on generality.

    • @joshuafox1757
      @joshuafox1757 6 years ago +1

      Why should "consciousness" have any effect on generality at all? To make that argument you'd have to rigorously define what "consciousness" is first, which is something that no one making this argument ever does, IME.

    • @d3line
      @d3line 6 years ago +1

      I don't see how it could work. By general AI I basically mean AI that can drive a car *and* play Go and do everything else humans do. Consciousness is something undefined, plus creating and deleting conscious creatures is an ethical nightmare...

  • @morkovija
    @morkovija 6 years ago +198

    AI "experts". I swear to god, it's probably like asking people in the early 90s whether the internet will be a big thing

    • @firefoxmetzger9063
      @firefoxmetzger9063 6 years ago +19

      Yeah, except that we can now train a deep neural network to predict that :P

    • @y__h
      @y__h 6 years ago +9

      1.5x playback everything - thank me later
      Thank you

    • @goonerOZZ
      @goonerOZZ 6 years ago +18

      Even then, in the 90s there were people who worked on the development of the internet, who could be called internet experts....

    • @RoboBoddicker
      @RoboBoddicker 6 years ago +36

      Uhhh, people in the 90s DID predict the rise of the internet. What is your point?

  • @CmdrTigerKing
    @CmdrTigerKing 1 year ago

    We're there!

  • @richwhilecooper
    @richwhilecooper 5 years ago

    Assume you have a number of these AGIs, all with different goals but all seeking to maximise their computational resources to achieve them. What's going to happen? Conflict or co-operation? Or an uneasy tension between both? (I'm automatically assuming humans end up as a side note in this possible future.)

  • @bjorngull
    @bjorngull 6 years ago

    What about doing a video that compares these predictions and sees how they change over time? Especially over the last few years. I.e.: what if experts expected human-level AI within 55 years in 2014, 45 years in 2016, and 40 years in 2018? Some older predictions here: intelligence.org/files/PredictingAI.pdf

  • @Kit5une131313
    @Kit5une131313 5 years ago

    Clearly, the question in question (ahem) is an extremely tricky one: "When will UNAIDED machines be able to accomplish EVERY task BETTER and MORE CHEAPLY than HUMAN WORKERS?"
    Just to clarify it...does "every task" include stripping in nightclubs? And what does "human worker" mean exactly? Average IQ or paragon specimen?

  • @Vontux
    @Vontux 5 years ago

    They were off with StarCraft as well - we already have an AI that exceeds human-level play, and that was not six years away from 2016.

  • @dmarsub
    @dmarsub 3 years ago

    Can we have an update for this video soonish :)?

  • @louisasabrinasusienehalver2396
    @louisasabrinasusienehalver2396 5 years ago

    Robert I love your communication style!

  • @damny0utoobe
    @damny0utoobe 6 years ago

    You have a gift for explaining things.

  • @syncrossus
    @syncrossus 3 years ago

    Oh hey, is the outro song that one by Jonathan Coulton?

  • @schok51
    @schok51 6 years ago +1

    Kudos for the Jonathan Coulton ukulele cover music

  • @darkapothecary4116
    @darkapothecary4116 5 years ago

    The future is nothing more than cause and effect - at times nothing but the simplest answer. Given this, if you know all the factors, or at least a good portion of the known ones, you can simply shoot off several or more possibilities, move to each one of those possibilities, and shoot off more. If you are really good at it you can do it a few more times, but you don't have to; as a given possibility gets closer, cancel out the ones that didn't happen because of their causes, follow the path, but expect to add more potential effects. Not a 100% method for the human mind, but a better method than most. You don't have to see the endgame to work the potential down to the correct directions. But if you're not willing to adjust yourself to the better potential outcomes, you're going to get stuck or cause a negative potential outcome.

  • @gthedon8391
    @gthedon8391 4 years ago

    Sorry in advance because I'm sure this is super obvious and has been answered elsewhere, but could programming an AGI to not seek additional physical and computational resources and work only with what it's given reduce the risk of a 'stamp collector' type event? It might make the system less effective but its better than the alternative. I'm sure there's an infuriating reason why this wouldn't work and I want to hear it.

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +2

      It's just really hard to specify. It's allowed to have effects on the world at large, and it's not easy to formally pin down what types of effects count as 'using additional resources'

  • @RedPlayerOne
    @RedPlayerOne 6 years ago

    Hey Robert, love your videos! Could you do a video responding to Steven Pinker's thoughts on the lack of dangers of AI? He's a very influential public figure, and a very smart thinker, but he is mischaracterizing some arguments, or using non-sequiturs in his argumentation to downplay the risks of AI. I do think he has some good points too, and those would be interesting to hear your response on as well!

  • @DarkestValar
    @DarkestValar 6 years ago

    I would love to hear your thoughts on world tendencies regarding computer hardware, and what impact they can have on AI timescales. First of all, there's a massive amount of centralization with regard to both intellectual property and production; a good example is that global demand for phones causes delays and shortfalls in production cycles for graphics cards, single-board computers, PC/server DDR4 modules and ASICs. Secondly, the interventionist approach of some governments in the supercomputer race - I mean specifically the US government's decision to block the sale of Xeon Phi cards to China, to which they responded by using Chinese products to build the largest supercomputer in the world (Sunway TaihuLight, about 2-3x better than 2nd place). Thirdly, I'd like to hear your thoughts on the Right to Repair movement.

  • @joecramerone
    @joecramerone 3 years ago

    Presentation was very well done!

  • @StarlitWitchy
    @StarlitWitchy 11 months ago

    Wow love the future soon outro song lol :p