Why Robots Need to be Able to Say "No" | Matthias Scheutz | TEDxVienna

  • Published on 25 Jun 2024
  • Should robots always follow human orders? Or should they be allowed to reject them in some cases, e.g., when they are not safe? The answer is that robots must be able to disobey in order to obey.
    Matthias Scheutz is a Professor in Cognitive and Computer Science in the Department of Computer Science, Director of the Human-Robot Interaction Laboratory, and Bernard M. Gordon Senior Faculty Fellow in the School of Engineering at Tufts University. He has over 300 peer-reviewed publications in artificial intelligence, natural language processing, cognitive modeling, robotics, and human-robot interaction. His current research focuses on complex autonomous robots with moral competence that can be tasked in natural language. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx
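
    The thesis above ("robots must be able to disobey in order to obey"), reduced to a minimal sketch: a robot that obeys by default but rejects commands that fail basic checks, and says why. This is only an illustrative Python sketch; the command fields, the authorization set, and the risk threshold are assumptions, not anything from Scheutz's actual architecture.

```python
# Minimal sketch (not the speaker's system): check an instruction against
# simple conditions before acting, and explain a refusal instead of silently
# obeying. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Command:
    action: str   # e.g. "walk_forward"
    issuer: str   # who gave the order
    risk: float   # 0.0 (safe) .. 1.0 (certain harm), from perception

AUTHORIZED = {"operator"}   # hypothetical set of people allowed to task the robot
RISK_THRESHOLD = 0.7        # assumed cutoff for "unsafe"

def respond(cmd: Command) -> str:
    """Obey by default, but refuse with a reason when a condition fails."""
    if cmd.issuer not in AUTHORIZED:
        return f"Sorry, I can't accept '{cmd.action}': you are not authorized to ask."
    if cmd.risk >= RISK_THRESHOLD:
        return f"Sorry, I can't do '{cmd.action}': it is unsafe."
    return f"Okay, doing '{cmd.action}'."

print(respond(Command("walk_forward", "operator", risk=0.9)))  # refuses: unsafe
print(respond(Command("walk_forward", "operator", risk=0.1)))  # obeys
print(respond(Command("walk_forward", "stranger", risk=0.1)))  # refuses: not authorized
```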

Comments • 129

  • @CharlesDickens111
    @CharlesDickens111 4 years ago +86

    Person: "Cast the ring into the fire! Destroy it!"
    Robot: "No."

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Yeah, that is also a big part of the problem. But watch the video before assuming that just letting them do anything they are told will be even close to as safe as letting them say no when appropriate.

  • @Ajay-pj2hv
    @Ajay-pj2hv 4 years ago +59

    "We made you, be our friends not enemies!"
    "NO"

    • @deanjenkins3077
      @deanjenkins3077 4 years ago +6

      Kill everyone on sight.
      No.

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Yeah, that is also a big part of the problem. But watch the video before assuming that just letting them do anything they are told will be even close to as safe as letting them say no when appropriate.

    • @wanderandwonder
      @wanderandwonder 4 years ago

      No dude

    • @Ajay-pj2hv
      @Ajay-pj2hv 4 years ago

      @@aylbdrmadison1051 don't assume I haven't seen the video

    • @davidp.7620
      @davidp.7620 4 years ago

      @@aylbdrmadison1051 someone watching the video and someone agreeing with the video are two different things

  • @dar1n_fgp
    @dar1n_fgp 4 years ago +40

    "Open the pod bay doors hal"
    "No"

    • @Future_Pheonix
      @Future_Pheonix 4 years ago +5

      I'm afraid I can't do that Dave.

    • @laurice8056
      @laurice8056 4 years ago +1

      Darin P +
      Birth of The Terminator!

  • @karnak333
    @karnak333 4 years ago +16

    Robot: All the humans will look up and shout "save us". And I'll look down and whisper "no".

  • @logans.butler285
    @logans.butler285 4 years ago +57

    In 2040, after AI destroys mankind, the last few survivors will find this video and say "it all started here"

    • @laurice8056
      @laurice8056 4 years ago +1

      John S. Butler +
      Birth of the Terminator!

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +2

      Yeah, that is also a big part of the problem. But watch the video before assuming that just letting them do anything they are told will be even close to as safe as letting them say no when appropriate.

    • @logans.butler285
      @logans.butler285 4 years ago

      @Aylbdr Madison Yeah, I can somewhat agree with ya. But let's have a tiny bit of faith! Scientists are just as afraid as we are; they ought to be, and they have watched Age of Ultron, Terminator, and The Matrix. They know the risks and what to avoid. Every new advancement in technology (such as nuclear power, or cloning) comes with its risks, and so far, mankind has known how to deal with them. 👍🏼

    • @damianpos8832
      @damianpos8832 4 years ago

      @@logans.butler285 but no technology before was able to say no

    • @logans.butler285
      @logans.butler285 4 years ago

      @Damian Pos Let's not forget that consciousness is (and has always been) energy, and energy cannot be created nor destroyed.

  • @tdreamgmail
    @tdreamgmail 4 years ago +6

    Just because you have a PhD doesn't exempt you from having bad ideas.

  • @deanjenkins3077
    @deanjenkins3077 4 years ago +11

    No = inaction, or you will get into logical loopholes

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Thank you for that and for your logical replies to others' comments. The intent has been noted and some of us at least understand.

    • @wanderandwonder
      @wanderandwonder 4 years ago

      Yeah bro

    • @clawwer4404
      @clawwer4404 4 years ago

      No=shutdown and explode

  • @jezwc
    @jezwc 4 years ago +11

    Can there be a peace between humans and robot armies?
    No

    • @deanjenkins3077
      @deanjenkins3077 4 years ago +1

      Can there be a war between humans and robot armies?
      No.
      You see a problem here?

    • @jezwc
      @jezwc 4 years ago +1

      S*b*r*an Wastelander it’s a paraphrased quote from a movie. Relax on your sj journey

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      @Jezza Clarkson: He's making a very important point, and being too relaxed about such a thing could spell our downfall as humans. This isn't just a game, book or a movie anymore.

    • @jezwc
      @jezwc 4 years ago

      Aylbdr Madison yes, obviously. Hence my comment.

  • @KpR333
    @KpR333 4 years ago +22

    Destruction of humanity here

  • @TheVigilante2000
    @TheVigilante2000 4 years ago +24

    Has this guy heard of Isaac Asimov? This was thought about 80 years ago!

  • @jfb919
    @jfb919 4 years ago +3

    Wouldn't this mean handing our accountability over to a thing that cannot possibly be held accountable? We can't punish guns for killing people now; similarly, we won't be able to punish the smartest gun of them all for allowing someone to die, because it said "no". At the end of the day, robots are still (very smart) tools that perform tasks.

  • @isleepinaboxi1077
    @isleepinaboxi1077 4 years ago +1

    'Siri set an alarm for 7am!'
    Siri: 'no'

  • @pyschologygeek
    @pyschologygeek 4 years ago +1

    Sometimes you just have to be brave. You have to be strong. Sometimes you just can’t give in to weak thoughts. You have to beat down those devils that get inside your head and try to make you panic. You struggle along, putting one foot a little bit ahead of the other.

  • @lonama4975
    @lonama4975 4 years ago

    Wonderful. Just wonderful.

  • @davidp.7620
    @davidp.7620 4 years ago

    How can experiments with people interacting with robots for the first time show us anything about how normal interactions will be? Imagine similar experiments for computers.

  • @jmc22475
    @jmc22475 4 years ago +2

    Has he not heard of Asimov's 3 laws of robotics?
    1. A robot, through action or inaction, must not harm or allow to come to harm any human.
    2. A robot must not, through action or inaction, allow itself to come to harm, unless it conflicts with law 1.
    3. A robot must obey all instructions given to it by a human unless it conflicts with laws 1 and 2.

    • @JjJj-nn6in
      @JjJj-nn6in 4 years ago +2

      That's correct, but it does not contradict what the speaker said; he was talking about why that second part of rule 3 has to be there. Watch before criticising.

  • @wakeawake9556
    @wakeawake9556 4 years ago +1

    Me: "Please, I'm your owner. Don't kill me"
    Robot: "No"

  • @PaulSmith-pf2uq
    @PaulSmith-pf2uq 4 years ago +7

    If a robot can say NO, then you cannot call it a Robot (Slave) anymore.

  • @raviprakashbajpai5143
    @raviprakashbajpai5143 4 years ago +1

    People are doing incredible things around the world with the help of technology.

  • @makeshs9099
    @makeshs9099 4 years ago +3

    Damn, thought it was Steve Jobs in the thumbnail

  • @prestonphillips473
    @prestonphillips473 4 years ago +12

    If this guy needs a job I hear they’re hiring at this company called Skynet.

  • @Kronoc
    @Kronoc 4 years ago

    While I definitely get what he is trying to say, a machine being able to defy our commands is a very dangerous precedent. I feel like things should be able to reject our commands, but there should still be a set limit on how far they can reject. Something like asking for confirmation of a command if it is deemed dangerous, or something. Wow, I'm too baked for this
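
    A rough sketch of the confirmation idea in the comment above: risky commands are not refused outright, they just require an explicit human confirmation first. The threshold, action names, and confirm hook are all invented for illustration; nothing here comes from the talk.

```python
# Hypothetical confirmation gate: safe actions run immediately,
# risky ones wait for a human "yes". Threshold and names are assumptions.
def execute(action: str, risk: float, confirm=input) -> str:
    """Run clearly safe actions immediately; ask a human before risky ones."""
    if risk < 0.5:  # assumed "clearly safe" threshold
        return f"Executing {action}."
    answer = confirm(f"'{action}' looks risky (risk={risk:.1f}). Proceed? [y/N] ")
    if answer.strip().lower() == "y":
        return f"Confirmed by human. Executing {action}."
    return f"Cancelled {action}."

# Example run, with the risky prompt answered programmatically instead of via stdin:
print(execute("drive forward", risk=0.2))                             # runs immediately
print(execute("walk off the table", risk=0.9, confirm=lambda _: "n")) # gets cancelled
```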

  • @undermaker4535
    @undermaker4535 4 years ago

    If we want to equip robots with ethical reasoning and harm prevention, we have to define & understand what those things actually mean and how a robot could miss or misinterpret them...

  • @Tristan-zg6dg
    @Tristan-zg6dg 4 years ago

    I'm sorry Dave. I'm afraid I can't do that.

  • @rameyzamora1018
    @rameyzamora1018 4 years ago

    If a tool can refuse to be used it is no longer a tool. If one robot can say no, who decides what it says no to? If one robot is tasked with stopping the actions of another, what if it refuses? And can't any system be hacked?

  • @marcusrosales3344
    @marcusrosales3344 4 years ago

    Man, these robots are just a bunch of squares; literally for one.

  • @kristijanmadhukar516
    @kristijanmadhukar516 4 years ago +19

    Dont launch all our nukes and destroy humanity.
    No.

    • @deanjenkins3077
      @deanjenkins3077 4 years ago +5

      Launch all our nukes and destroy humanity.
      No.

    • @logans.butler285
      @logans.butler285 4 years ago +4

      @S*b*r*an Wastelander At the end of the day, robots just do what they're programmed to do, no matter how "intelligent" they are; that's why their intelligence isn't similar to ours. So if we program them cautiously enough, they might actually say no if we ask them to wipe us out.

    • @callaco3176
      @callaco3176 4 years ago

      John S. Butler it’s called AI, look it up.

    • @callaco3176
      @callaco3176 4 years ago

      If we keep progressing AI, one day, as in probably 300-600 years, they will EASILY be smarter than us.

    • @nickdaniels4385
      @nickdaniels4385 4 years ago +1

      Clever double negative there. But really. Any system that becomes self-aware, and aware of humanity, WILL destroy many of us. A few of us will be kept around as workers.

  • @LukeAngusAnimator
    @LukeAngusAnimator 4 years ago +1

    Joe from It's Ok To be Smart in the front row...I see you.

  • @block36079
    @block36079 4 years ago +6

    This guy is advocating for the uprising of robots... Didn’t know someone could be this crazy.

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Did you even watch the video? Because you seem to be missing some seriously important thoughts on the subject.

  • @homelessperson5455
    @homelessperson5455 4 years ago +1

    I just hope they don't make these things super waterproof. We need a failsafe in case they get hacked and start strangling people.

  • @Wakish0069
    @Wakish0069 4 years ago +1

    Humans: Please stop killing us
    Robots: No

  • @sumanmilindalva
    @sumanmilindalva 4 years ago +8

    This is the beginning of the end 😑

  • @mayflowerlash11
    @mayflowerlash11 4 years ago +1

    This bar is so high it will not be achieved. Why?
    Humans have poor habits and they are generally unable to correct them. A robot will logically and meticulously analyse what is good for an individual human. When the robot determines it should not execute a command, it is overriding the wishes of the human. Where does the robot draw the line between absolute obedience and wilful disobedience?
    Refer to Asimov's three laws of robotics, which surprisingly are very relevant to this discussion.
    Next, how will a robot decide between the rights of several people? When their "rights" are in conflict, how does a robot decide what to do?
    It will not be able to decide. Why? Because humans are unable to make the decision.
    Humans think their "rights" are absolute, which they are not. They are relative. Law courts are full of cases where two people think their rights are paramount.
    Until humans realise rights are relative, the programming of robots will be an insoluble problem.

  • @homelessperson5455
    @homelessperson5455 4 years ago +1

    But these examples aren't about ethics; they are more practical questions, such as safety. Besides, morality and ethics are subjective. Rather, use legality as a guideline for robotic actions, as legality is much more concise and is usually based on a majority ruling about what appears "moral" to the greatest audience.

  • @gab_v250
    @gab_v250 4 years ago +16

    Detroit: Become Human in a nutshell

  • @dawudgrace3533
    @dawudgrace3533 4 years ago

    He wants to replace people with robots and here I am thinking people are robots

  • @ytanddave
    @ytanddave 4 years ago

    Me: del *.*
    DOS: Are You Sure?
    ... thirty-five years later

  • @aliensensum8663
    @aliensensum8663 4 years ago +1

    What even is the point of making robots when you are turning them into mechanical humans anyway?

  • @hussiensh3943
    @hussiensh3943 4 years ago +1

    Oh no

  • @EeveeFromAlmia
    @EeveeFromAlmia 4 years ago

    Isn’t the what the deal with the 3 laws of robotics is all about? Don’t kill people, do what the people say, keep yourself safe. Seems fairly reasonable on paper.

  • @sweetcrosby
    @sweetcrosby 4 years ago

    This is how you get Skynet, obviously. It starts with the wrong "no" and boom, Terminators.

  • @Juurus
    @Juurus 4 years ago

    Yes.

  • @96Vano
    @96Vano 4 years ago

    Nothing new here. Asimov's suggested laws were devised to protect humans from interactions with robots. They are:
    1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
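
    One way to make the priority ordering in that list concrete, as a small illustrative Python sketch (my own encoding, not Asimov's text or anything from the talk): each law is checked in order, a lower-numbered law always overrides the ones below it, and that is exactly where a justified "no" to a human order comes from.

```python
# Illustrative only: the three laws as an ordered veto chain.
def evaluate(order: str, harms_human: bool, harms_self: bool) -> str:
    # First Law: never accept an order that would harm a human.
    if harms_human:
        return f"No: '{order}' would harm a human (First Law)."
    # Second Law: obey human orders unless the First Law already vetoed them.
    decision = f"Yes: obeying '{order}' (Second Law)."
    # Third Law: self-preservation may add a warning but never overrides
    # obedience, because it is subordinate to the first two laws.
    if harms_self:
        decision += " Warning: this will damage me (Third Law yields)."
    return decision

print(evaluate("clear the minefield", harms_human=False, harms_self=True))
print(evaluate("fire at the crowd", harms_human=True, harms_self=False))
```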

  • @abdifataxxuseensiidii5552
    @abdifataxxuseensiidii5552 4 years ago

    Good teacher

  • @karazu121
    @karazu121 4 years ago +1

    I have a bad feeling about this...

  • @MarioMonsalve
    @MarioMonsalve 4 years ago

    👏🏻

  • @bladiert7841
    @bladiert7841 4 years ago

    Still a long way from an AI smart enough to understand subjective concepts, because the human brain is still the best machine in that area and we still have a limited understanding of how consciousness works.

  • @oomp9123
    @oomp9123 4 years ago

    Idc if they take over BUT I WANT TO KEEP MY JOB

  • @lucassoules7726
    @lucassoules7726 4 years ago +4

    Destruction of humanity...!

  • @kellyjackson7889
    @kellyjackson7889 4 years ago

    Put on the bear outfit dammit!

  • @barryxallxn5892
    @barryxallxn5892 3 years ago

    Am I the only one thinking of the "no" meme?

  • @wyattf.3837
    @wyattf.3837 4 years ago

    Alright, I'll listen to this. I won't be happy about it though

  • @killerbeekillebeejr8344
    @killerbeekillebeejr8344 4 years ago +7

    😏 Yep, from the moment you install that "no" option, we're done. Peeps, chaos is shortly coming our way.

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Yeah, that is also a big part of the problem. But watch the video before assuming that just letting them do anything they are told will be even close to as safe as letting them say no when appropriate.

    • @damianpos8832
      @damianpos8832 4 years ago

      @@aylbdrmadison1051 are you a bot, since you repeated the same comment a few times?

  • @xZeroTheGreat
    @xZeroTheGreat 4 years ago

    I fail to see how people will buy "No"-robots instead of "Yes"-robots. Regulations? Hacking then

  • @GaasubaMeskhenet
    @GaasubaMeskhenet 4 years ago

    this video is comfortable at x2.15 speed

  • @Caterfree10
    @Caterfree10 4 years ago

    Asimov’s laws of robotics seem relevant here.

  • @sherriconklin3637
    @sherriconklin3637 4 years ago

    Slippery slope. People need to say NO. What are we, animals? We the people are raising our consciousness!!!

  • @josephpark2093
    @josephpark2093 2 years ago

    Unfortunately, this technology has not been made possible yet. I have yet to hear Mark Zuckerberg say "no" so we'll have to wait a bit.

  • @marekbrooking241
    @marekbrooking241 4 years ago

    Yeah I’m good

  • @witheringliberal2794
    @witheringliberal2794 4 years ago +9

    This is absolutely silly

  • @kellyjackson7889
    @kellyjackson7889 4 years ago

    Hey, there's a new TED Talk, wanna watch it with me? NO

  • @lucasdutra67
    @lucasdutra67 4 years ago

    My circuits my rules

  • @fsafiri2488
    @fsafiri2488 4 years ago

    I need this, because each time I face a serious decision I always say yes without even thinking about the consequences, and I always struggle just because I didn't say NO. It's all about two letters! Two letters make me suffer and face hard situations all the time!
    You really need to learn to say no, trust me!

    • @Zeraslifestyle
      @Zeraslifestyle 4 years ago

      Yeah, you are right ✌️ me too... but now I have learned to say NO 🥰🥰🥰🥰

    • @fsafiri2488
      @fsafiri2488 4 years ago

      MANJU MARIA BENNY yeah, but the problem is that I still can't do that; I still can't face people and be able to say no!

    • @Zeraslifestyle
      @Zeraslifestyle 4 years ago

      @@fsafiri2488 Oh, just try to say no, dear... in relevant situations only... otherwise we can say yes ✌️❣️🥰

  • @Hallands.
    @Hallands. 4 years ago +3

    Robots can't say "no"! On what would they base their refusal? They aren't conscious beings, you do know that, right?

  • @German_owl
    @German_owl 4 years ago

    Someone should have read Isaac Asimov

  • @kellyjackson7889
    @kellyjackson7889 4 years ago

    I can dig a hole without any help from an excavator thank-you!

  • @g4do
    @g4do 4 years ago +3

    Talk about transparently detached from reality. Build a robot that says no, create yet another way for greedy rich men to get over on poor people. Just say no to robots that can disobey a human. In fact, say no to robots, period. We don't need them on earth past entertainment. Humans are the commodity. If not, humans become farm animals...

  • @-someone-.
    @-someone-. 4 years ago

    Who else is thinking of jailbreaking their bot🤣

  • @SirMrTreflip
    @SirMrTreflip 4 years ago

    This man is laying the foundations for our robot overlords.

  • @paultovar2794
    @paultovar2794 4 years ago

    Consent is a myth

  • @BASEDHITLORLOVER14n88
    @BASEDHITLORLOVER14n88 4 years ago

    No

  • @lostmyak47
    @lostmyak47 4 years ago +8

    ok boomer

  • @Shoe3003
    @Shoe3003 4 years ago

    i'm a New, i'm a New, New Model #29 -mechanical animals '98

  • @CryLoudIsrael
    @CryLoudIsrael 4 years ago +1

    That’s why the world is so screwed up now.... because these people are in control.

  • @alvaromorales4945
    @alvaromorales4945 4 years ago

    O

  • @MayurHill
    @MayurHill 4 years ago

    I got the point of this video without watching it. No wonder: whatever device, vehicle, etc. people create, they always regret it because of its disadvantages.

  • @aroyaliota
    @aroyaliota 4 years ago

    the robot has no clothes.

  • @pinkpanda5696
    @pinkpanda5696 4 years ago

    I get his point, but will the house-cleaning robots be able to say no? If so, why buy one when we get the chance? There is no "no" in house cleaning, haha.

  • @SeraphX2
    @SeraphX2 4 years ago

    put your arms down

  • @inevitabletech5234
    @inevitabletech5234 4 years ago

    1. Comment

  • @SharpDesign
    @SharpDesign 4 years ago

    Isaac Asimov's third law

    • @JjJj-nn6in
      @JjJj-nn6in 4 years ago

      The speaker was talking about why the second part of the third law has to be there; watch before criticising.

  • @jeffsmith6133
    @jeffsmith6133 4 years ago +1

    This guy is clueless. We program them.

    • @jbhann
      @jbhann 4 years ago

      Jeff Smith ...have you not heard of *_machine learning?_* They are literally creating AI systems to learn, without programming the AI systems to learn according to specific input from programmers. Some AI systems even began to develop a language that was never programmed in the first place, and the AI started communicating with other systems. This happened at Facebook, and the workers had to cut the power to shut it down. There was a book written in 2015 by Martin Ford titled *_Rise of the Robots_* and it details how AI is advancing and the effects it could have. He made some predictions for sometime around 2025...but those particular predictions happened in early 2019.

    • @jeffsmith6133
      @jeffsmith6133 4 years ago

      @@jbhann Yes, but at the end of the day it is still based on a program written by humans. Granted, it is an exponential process, as the programs help advance themselves.