"We made you, be our friends not enemies!"
"NO"
Kill everyone on sight.
No.
Yeah, that is also a big part of the problem. But watch the video before assuming that not allowing them to say no when appropriate would be anywhere near as safe as just letting them do anything they are told.
No dude
@@aylbdrmadison1051 don't assume I haven't seen the video
@@aylbdrmadison1051 someone watching the video and someone agreeing with the video are two different things
Person: "Cast the ring into the fire! Destroy it!"
Robot: "No."
Yeah, that is also a big part of the problem. But watch the video before assuming that not allowing them to say no when appropriate would be anywhere near as safe as just letting them do anything they are told.
"Open the pod bay doors hal"
"No"
I'm afraid I can't do that Dave.
Darin P +
Birth of The Terminator!
Robot: All the humans will look up and shout "save us". And I'll look down and whisper "no".
In 2040, after AI destroys mankind, the last few survivors will find this video and say "it all started here"
John S. Butler +
Birth of the Terminator!
Yeah, that is also a big part of the problem. But watch the video before assuming that not allowing them to say no when appropriate would be anywhere near as safe as just letting them do anything they are told.
@Aylbdr Madison Yeah, I can somewhat agree with ya. But let's have a tiny bit of faith! Scientists are just as afraid as we are (they ought to be), and they have watched Age of Ultron, Terminator, and The Matrix. They know the risks and what to avoid. Every new advancement in technology (such as nuclear power or cloning) comes with its risks, and so far mankind has known how to deal with them. 👍🏼
@@logans.butler285 but no technology before was able to say no
@Damian Pos Let's not forget that consciousness is (and has always been) energy, and energy cannot be created nor destroyed.
Destruction of humanity here
Has this guy heard of Isaac Asimov? This was thought about 80 years ago!
No = inaction, or you will get into logical loopholes
Thank you for that and for your logical replies to others' comments. The intent has been noted and some of us at least understand.
Yeah bro
No=shutdown and explode
Sometimes you just have to be brave. You have to be strong. Sometimes you just can’t give in to weak thoughts. You have to beat down those devils that get inside your head and try to make you panic. You struggle along, putting one foot a little bit ahead of the other.
Wouldn't this mean handing our accountability over to a thing that cannot possibly be held accountable? We can't punish guns for killing people now; similarly, we won't be able to punish the smartest gun of them all for allowing someone to die because it said "no". At the end of the day, robots are still (very smart) tools that perform tasks.
Wonderful. Just wonderful.
Can there be peace between humans and robot armies?
No
Can there be a war between humans and robot armies?
No.
You see a problem here?
S*b*r*an Wastelander it’s a paraphrased quote from a movie. Relax on your sj journey
@Jezza Clarkson: He's making a very important point, and being too relaxed about such a thing could spell our downfall as humans. This isn't just a game, a book, or a movie anymore.
Aylbdr Madison yes, obviously. Hence my comment.
This is the beginning of the end 😑
If this guy needs a job I hear they’re hiring at this company called Skynet.
'Siri set an alarm for 7am!'
Siri: 'no'
If a robot can say NO then you cannot call it a robot (slave) anymore.
Don't launch all our nukes and destroy humanity.
No.
Launch all our nukes and destroy humanity.
No.
@S*b*r*an Wastelander At the end of the day, robots just do what they're programmed to do, no matter how "intelligent" they are; that's why their intelligence isn't similar to ours. So if we program them cautiously enough, they might actually say no if we ask them to wipe us out.
John S. Butler it’s called AI, look it up.
If we keep progressing AI, one day, as in probably 300-600 years, they will EASILY be smarter than us.
Clever double negative there. But really. Any system that becomes self-aware, and aware of humanity, WILL destroy many of us. A few of us will be kept around as workers.
Has he not heard of Asimov's 3 laws of robotics?
1. A robot, through action or inaction, must not harm any human or allow a human to come to harm.
2. A robot must obey all instructions given to it by a human, unless they conflict with law 1.
3. A robot must not, through action or inaction, allow itself to come to harm, unless that conflicts with laws 1 and 2.
That's correct, but it does not contradict what the speaker said. He was talking about why that exception in law 2 has to be there; watch before criticising.
People are doing incredible things around the world with the help of technology.
If we want to equip robots with ethical reasoning and harm prevention, we have to define and understand what those things actually mean, and how a robot could misinterpret them...
Me:" please im your owner. Don't kill me"
Robot "no"
While I definitely get what he is trying to say, a machine being able to defy our commands sets a very dangerous precedent. I feel like things should be able to reject our commands, but there should still be a set limit on how far they can take that rejection. Something like asking for confirmation when a command is deemed dangerous, or something... wow, I'm too baked for this
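A minimal sketch of that confirmation-gate idea, in Python. Everything here is a hypothetical stand-in (the keyword list, the `is_dangerous` check, the command strings); nothing comes from the talk itself:

```python
# Hypothetical sketch of a confirmation gate for robot commands.
# is_dangerous() stands in for whatever risk classifier a real robot
# would need; a keyword match is only a toy placeholder.

DANGEROUS_KEYWORDS = {"delete", "launch", "shut down", "override"}

def is_dangerous(command: str) -> bool:
    """Toy risk check: flag commands containing known-risky keywords."""
    return any(word in command.lower() for word in DANGEROUS_KEYWORDS)

def execute(command: str) -> None:
    print(f"Executing: {command}")

def handle(command: str) -> None:
    if is_dangerous(command):
        # The robot doesn't flatly refuse; it pushes the decision back
        # to the human, which is the limit the comment above suggests.
        answer = input(f"'{command}' looks dangerous. Confirm? (yes/no) ")
        if answer.strip().lower() != "yes":
            print("Command rejected.")
            return
    execute(command)

if __name__ == "__main__":
    handle("launch the cleaning routine")
```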
How can experiments where people interact with robots for the first time show us anything about what everyday interactions will look like? Imagine similar experiments for computers
This guy is advocating for the uprising of robots... Didn’t know someone could be this crazy.
Did you even watch the video? Because you seem to be missing some seriously important thoughts on the subject.
This bar is so high it will not be achieved. Why?
Humans have poor habits and they are generally unable to correct them. A robot will logically and meticulously analyse what is good for an individual human. When the robot determines it should not execute a command, it is overriding the wishes of the human. Where does the robot draw the line between absolute obedience and wilful disobedience?
Refer to Asimov's three laws of robotics which surprisingly are very relevant to this discussion.
Next, how will a robot decide between the rights of several people? When their "rights" are in conflict, how does a robot decide what to do?
It will not be able to decide. Why? Because humans are unable to make the decision.
Humans think their "rights" are absolute, which they are not. They are relative. Law courts are full of cases where two people think their rights are paramount.
Until humans realise rights are relative the programming of robots will be an insoluble problem.
Damn, thought it was Steve Jobs in the thumbnail
What even is the point of making robots when you are turning them into mechanical humans anyway?
If a tool can refuse to be used it is no longer a tool. If one robot can say no, who decides what it says no to? If one robot is tasked with stopping the actions of another, what if it refuses? And can't any system be hacked?
Robots can't say "no"! On what would they base their refusal? They aren't conscious beings, you do know that, right?
Joe from It's Ok To be Smart in the front row...I see you.
Detroit: Become Human in a nutshell
I'm sorry Dave. I'm afraid I can't do that.
Humans: Please stop killing us
Robots: No
I just hope they don't make these things super waterproof. We need a failsafe in case they get hacked and start strangling people.
Isn't that what the 3 laws of robotics are all about? Don't kill people, do what people say, keep yourself safe. Seems fairly reasonable on paper.
But these examples aren't questions of ethics; they are more practical questions, such as safety. Besides, morality and ethics are subjective. Rather, use legality as a guideline for robotic actions, as legality is much more concrete and is usually based on the majority ruling on what appears to be "moral" to the greatest audience.
Man, these robots are just a bunch of squares; literally for one.
Nothing new here. Asimov’s suggested laws were devised to protect humans from interactions with robots. They are:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
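As a minimal sketch of how that priority ordering works in code, assuming toy stand-in predicates for perception abilities no real robot actually has:

```python
# Hypothetical sketch: Asimov's three laws as a priority-ordered check.
# The boolean fields stand in for prediction capabilities a real robot
# would need; this only illustrates the ordering of the laws.

from dataclasses import dataclass

@dataclass
class Command:
    text: str
    harms_human: bool   # would executing this harm a human?
    from_human: bool    # was it ordered by a human?
    harms_robot: bool   # would executing this damage the robot?

def decide(cmd: Command) -> str:
    # First Law: never harm a human (highest priority).
    if cmd.harms_human:
        return "refuse"
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if cmd.from_human:
        return "obey"
    # Third Law: protect yourself, unless that conflicts with Laws 1 or 2.
    if cmd.harms_robot:
        return "refuse"
    return "obey"

# The case the talk is about: a human order that would cause harm.
print(decide(Command("drive into the crowd", True, True, False)))  # refuse
print(decide(Command("vacuum the floor", False, True, False)))     # obey
```

The first branch is exactly where the robot "says no": obedience only works as a default because refusal sits above it in the priority order.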
This is absolutely silly
Need this, because each time I face a serious decision I say yes without even thinking about the consequences, and then I struggle just because I didn't say NO. It's all about two letters! Two letters make me suffer and face hard situations all the time!
You really need to learn to say No and own it, trust me!
Yeah, you are right ✌️ me too.. but now I learned to say NO 🥰🥰🥰🥰
MANJU MARIA BENNY yeah, but the problem is that I still can't do that. I still can't face people and be able to say no!
@@fsafiri2488 oh, just try to say no, dear.. in the relevant situations only... otherwise we can say yes ✌️❣️🥰
He wants to replace people with robots and here I am thinking people are robots
Talk about transparently detached from reality. Build a robot that says no, create yet another way for greedy rich men to get over on poor people. Just say no to robots that can disobey a human. In fact, say no to robots, period. We don't need them on earth past entertainment. Humans are the commodity. If not, humans become farm animals...
Idc if they take over BUT I WANT TO KEEP MY JOB
Destruction of humanity...!
Cool
I have a bad feeling about this...
We're still a long way from an AI smart enough to understand subjective concepts, because the human brain is still the best machine in that area and we still have a limited understanding of how consciousness works.
Yes.
Slippery slope. People need to say NO. What are we, animals? We the people are raising our consciousness!!!!!!!!
Oh no
This is how you get Skynet, obviously. It starts with the wrong "no" and boom, Terminators.
Asimov’s laws of robotics seem relevant here.
I fail to see how people will buy "No"-robots instead of "Yes"-robots. Regulations? Then hacking.
Yeah I’m good
Hey there's a new Ted Talk wanna watch it with me? NO
Me: del *.*
DOS: Are You Sure?
... thirty-five years later
😏 Yep, from the moment you install that "no" option, we're done. Peeps, chaos is shortly coming our way.
Yeah, that is also a big part of the problem. But watch the video before assuming that not allowing them to say no when appropriate would be anywhere near as safe as just letting them do anything they are told.
@@aylbdrmadison1051 are you a bot, since you repeated the same comment a few times?
Good teacher
This man is laying the foundations for our robot overlords.
Put on the bear outfit dammit!
Alright, I'll listen to this. I won't be happy about it though.
This video is comfortable at 2.15x speed
Am I the only one thinking of the no meme
Unfortunately, this technology has not been made possible yet. I have yet to hear Mark Zuckerberg say "no" so we'll have to wait a bit.
👏🏻
Consent is a myth
Someone should have read Isaac Asimov
I can dig a hole without any help from an excavator thank-you!
Who else is thinking of jailbreaking their bot🤣
That’s why the world is so screwed up now.... because these people are in control.
No
My circuits my rules
ok boomer
I got this video's point without watching it. No wonder: whenever people create any device, vehicle, etc., they always end up regretting it because of its disadvantages.
the robot has no clothes.
i'm a New, i'm a New, New Model #29 -mechanical animals '98
put your arms down
I get his point, but will house-cleaning robots be able to say no? If so, why buy one when we get the chance? There's no "no" in house cleaning, haha.
Isaac Asimov's third law
The speaker was talking about why the second part of the third law has to be there; watch before criticising.
O
1. Comment
@INEVITABLE TECH
Third, actually.
@@pineapplesbringpain5243 oh..... Yeah
This guy is clueless. We program them.
Jeff Smith ...have you not heard of *_machine learning?_* They are literally creating AI systems that learn on their own, without programmers specifying exactly what the systems should learn from which input. Some AI systems even began to develop a language that was never programmed in the first place, and the AI started communicating with other systems. This happened at Facebook, and the researchers shut the experiment down. There was a book written in 2015 by Martin Ford titled *_Rise of the Robots_*, and it details how AI is advancing and the effects it could have. He made some predictions for sometime around 2025...but those particular predictions came true in early 2019.
@@jbhann Yes, but at the end of the day it is still based on a program written by humans. Granted, it is an exponential process, as the programs help advance themselves.
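A toy sketch of both points above: humans write only the update rule, while the behaviour itself is learned from examples. This is a classic perceptron learning logical OR; it is pure illustration and has nothing to do with the Facebook experiments mentioned in the thread:

```python
# Toy illustration of "learning without explicit programming":
# no one writes an OR gate here; the weights discover it from data.

# Training data for logical OR: inputs -> expected output
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, start knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward correct answers.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1] -- it learned OR
```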