Well, if I had to guess, I'd say most of the people who think of Asimov's laws when discussing AI either have never read any of his work or they really didn't get the point. Edit: grammar, for duck's sake... twice xD
It’s funny as every derivative story I can think of that adapts something akin to his laws also shares the same conclusion... not sure why people would want to rely on fictional laws that never even work within said fiction lol
There is not even a need for an international consensus. Even nowadays our technology is adapted to the laws of different countries and territories. AI could either be aware of which territory it is in, or it could be made illegal to transport it between regions. The real problem with AI would be hackers bypassing the fundamental laws, or illegal manufacturers not implementing them in the first place.
The three laws CANNOT be programmed into a machine effectively, because the definitions of the words in those laws are far too subjective. The rules you make for a general intelligence have to be ironclad, with no subjective wiggle room for the AI to play with. Operate under the assumption that you are building something that is truly neutral at best and chaotic evil at worst (if you gave it no rules at all). You can't give it some subjective rule about following the law, because there are edge cases that it WILL find where the law doesn't have a ruling, or where the law doesn't stop it from murdering you to make you a cup of tea because you told it to make you tea, or any number of weird ethical edge cases you cannot ever hope to prepare for.
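A minimal sketch (Python, with hypothetical names) of the point above: the laws themselves compress to a couple of lines, but every hard question is hidden inside predicates that nobody knows how to fill in.

    # Sketch only: the "law" is trivial to state, but all of the
    # difficulty lives inside these deliberately unimplemented predicates.
    def is_human(entity):
        raise NotImplementedError("no edge-case-free definition exists")

    def causes_harm(action, entity):
        raise NotImplementedError("physical? psychological? long-term? by omission?")

    def first_law_permits(action, world):
        # "A robot may not injure a human being or, through inaction,
        # allow a human being to come to harm."
        return not any(causes_harm(action, e) for e in world if is_human(e))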
khoi: BUT that is exactly the point... How can an AI developer make unambiguous definitions for the AI to work with, when humanity itself is not sure (there are debates, arguments, etc.)? With "given the circumstances that somehow these terms are approved internationally by rational philosophers and scientific communities" you are basically throwing the main problem out of the window. It's like, if I have a broken car, I will say, "I cannot drive this car, it's broken." And you are saying: "But given the car was fixed, you could drive it. So your argument is invalid. Now drive the car!" And now you see, your argument is invalid :) EDIT: ergo, when one is an AI developer, he should point out these moral issues, but nobody gave him permission to decide on them. I mean, I don't want a world where every stance on a moral issue is dictated by whatever Microsoft or Google (or whoever) decided when coding their AI. If for nothing else, then for the fact that they could easily make the (from others' point of view) morally wrong (but easier to implement) decision... That is the thing: when you are messing with the development of advanced AI, you want to be damn sure that it does not take over the world (although it will have all the stamps).
Tools. A.I. superintelligence is pointless at this stage in civilization: if we succeed, we get replaced; if we fail, we try again (something else, once we have enough evidence) until we succeed. If you want an example of what happens when this methodology is used incorrectly, just watch MattPatt's "Don't Touch Anything" stream (probably the first one).
He even cites the definition of insanity, which I find humorous since that is the whole point of science: to test something repeatedly in order to determine whether or not the successes were flukes.
Not just narrative though... Asimov was one of the first to think about AI safety seriously. The three laws aren't a proposed solution, but instead a thought problem to show the complexity required by any actual solution. Robert sounds way too dismissive of Asimov here... He should be criticizing the pop culture misconception of Asimov but praising the actual stories (which are pretty much the foundational thought problems of his field.)
@@travcollier The stories very much explain more than Robert does in the video, actually. Asimov's knowledge was limited by the time he lived in, but he was clearly aware of a lot of problems that are very topical right now.
Brilliant! That was exactly the purpose. Because of this we get stories with strange problems to solve, like the one where a robot goes round and round in circles because of a conflict between the second and third laws in a particular physical environment. A robot on Mercury is trying to obey an order given by a human (it has to recover something lying somewhere on the surface, where the temperature is extremely high), and in doing so it learns that its own existence is in danger. The second law (comply with orders given by humans) should be obeyed over the third law (a robot must protect its own existence), but the order was given without much force, so the pull of the third law brought the robot to a state of equilibrium: it could neither carry out the order nor get out of the place where it would come to harm...
He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm? If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking? Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm? What about poor workplace conditions? What about insults, does psychological harm count as harm? I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.
I don't think he's actually read all of the stories either, because all of the issues he mentioned were explained/solved in the stories. As for the harm issue, Asimov explains that in the story about the superbrains (the one where they control humanity to the point where everybody gets moved around to prevent riots, can't remember the name), and Daneel's explanation near the end of the Foundation arc deals with some of the limitations.
That's what I thought as well. It's hard to define what a human is, but you can at least come up with a physiological definition that accounts for 99% of the cases and then add in edge cases as well, which will still not be enough, but at least you're getting somewhere. But when it comes to "harm", like you said, there are just way too many possibilities and trade-offs that come into the picture, posing not just ethical but also philosophical questions.
Equally, the follow-on from that: "through inaction, allow a human to come to harm". How much responsibility does this AI have? To what lengths will it go to prevent "harm"? And what if the first law conflicts with itself: e.g. if the only possible action to prevent harm to one human is to actively harm another human?
As Dave from Boyinaband so eloquently put it, "if you gave it a priority of keeping humans happy, it would probably just pump our brains full of dopamine, keeping us euphorically happy until we *died"*
In one of the stories the AI decided that psychological harm (as in having your reputation ruined) is worse than bodily harm (as in going to prison), and that's only an AI programmed to act as a style corrector for academic papers
The problem with Asimov's laws is probably that they're just obscure enough that people don't think they're well known, but they're also not well known enough for people to remember the context they appeared in and how they always failed.
I think the movie I, Robot also helps a lot. People who have never even heard of Asimov know the three laws from that movie, even though, much like the books, the movie is about the three laws going wrong and the hero, a robot who doesn't have the three laws, saving the day. Yet people still spout the three laws as if they work lol
There was a context. Obviously a nasty one, and probably because there had been previous tragedies where designers agreed that those software formulations needed to be created, no matter how great the processing demand had to be. And these laws were the ones in operation that created those disturbing consequences. This doesn't seem simplistic to me. Ah, the confidence of youth.
So the problem of ensuring that technology only acts in humanity's best interests isn't between human and technology, but between human and self. We cannot properly articulate what kind of world we actually want to live in in a way that everyone agrees with. So no one can write a computer program that gets us there automatically.
It is possible, but it's way more complicated than writing several imperative laws (after all, if it were so simple, why would the Universal Declaration of Human Rights need to be any longer?). You need to employ fuzzy logic and basically programme how the judicial system works. But it is possible, because you can view human society as a (fuzzy) machine and you can emulate it.
If we're going to simulate society and forget the individual, then we're into psycho-history territory, and I suggest you read Asimov's Foundation books for problems with THAT....
(I know this is a bit of thread necromancy, but...) It's worse than that, even; you aren't even trying to get all the humans to agree with your definitions, you're actually just trying to get a well-defined description that doesn't create unintended consequences that *you* a single individual, don't want. Ignore the fact that nobody else might agree with you, just getting your own, personal value function down in some logical, consistent way that doesn't create counter-intuitive conclusions or unintended undesirable results, merely by your *own* definitions, is a herculean task. Forget trying to solve 'ethics of humanity', just solving 'ethics of this one specific human' is a virtually intractable task.
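A rough sketch of the "fuzzy logic" idea from a few comments up (Python; the membership values and the threshold are invented for illustration, which is exactly where the disagreement comes back in, even for a single person's value function):

    # Sketch only: fuzzy degrees instead of binary yes/no predicates.
    # Combining them is mechanical; choosing the numbers is not.
    human_degree = {"adult": 1.0, "embryo": 0.4, "brain_upload": 0.7}
    harm_degree = {"needle_prick": 0.05, "ruined_reputation": 0.6, "death": 1.0}

    def violation(entity, outcome):
        # Standard fuzzy AND: take the minimum of the two degrees.
        return min(human_degree[entity], harm_degree[outcome])

    THRESHOLD = 0.2   # who picks this number, and by what argument?

    def forbidden(entity, outcome):
        return violation(entity, outcome) > THRESHOLD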
This was sort of Asimov's point in the first place if you actually go back and read his original stories instead of the modern remakes that mistakenly think the rules were meant to be "perfect." He always designed them as flawed in the first place, and the stories were commentary on how you *can't* have a "perfect law of robotics" or anything, as well as pondering the nature of existence/what it means to be sentient/why should that new "life" have any less value than biological life/etc.
Greig91 Except there were stories in which the AI didn't break at all and was entirely fine; the only thing that went wrong were the humans. In fact, all of you keep going on about how the laws were broken and that was the point and whatnot, yet when I read the introductions to Asimov's works, what I find is that an important part of his work was meant to point out that humans, not robots or their laws, were the problem. He wanted to tear down this primitive « fear of the Frankenstein monster » we all have, and for this (partially at least) made laws that forced robots not to harm us, and showed the ridiculousness of our behavior by showing these laws working perfectly without that preventing people from being fearful of robots. For example, there was this story where each continent is co-ruled by a human and an AI, and someone ends up thinking or fearing that something is going wrong, so he visits each continent to make sure that everything is right and to ask questions around to better understand whether it'd be possible for things to go wrong or not. Turns out nothing was going wrong, and it's not even really possible. In another one, a robot was created by a retired roboticist in order to take his place, and ends up running for an official position without people knowing he is a robot. One of the morals of the story is literally: a normal robot's morals are the same as the best of humans' morals. It's also one of the stories where, before or after that, Asimov discusses the difficulty of defining humans and whatnot, but still, the laws aren't broken in this; they are at best/worst unrealistic. And I could go on and on, but you get the idea, I hope.
@@mrosskne Sure, they do. My point is more so that the books were rarely about the laws going wrong, but rather about things going wrong due to humans while the laws worked just fine, because Asimov, unlike many other writers, wasn't going at it from the angle of « what if robots went wrong » but « what if things went wrong because humans are bad at their jobs, despite the laws working just fine ». Usually things go wrong despite the robots, or because humans did something to the robots, not because the laws themselves were flawed.
+Ali Jardz In 2045 an artificial superintelligence reads this comment, tracks down Ali Jardz, and forces him to watch this video on loop whilst hooked up to a Clockwork Orange-style forced viewing machine. When calculating whether or not this is ethical, the superintelligence will decide that yes, it is. It does not cause harm because Ali Jardz specifically wanted this, and all subsequent protests are irrelevant.
+cacksm0ker He only kind of wished it. The SI might consider that before deciding. Why would subsequent protests be irrelevant even if he really wished it? Bertrand Russell's work might be a good AI starting point for this subject. I disagree with the presenter's assumption that there can be no fuzziness in the programming, while accepting that a 100% solution would be an impossible task. But a 99.999% solution* would probably be acceptable, wouldn't it? * or an optional number of sig figs...
+cacksm0ker But, of course, humans, being human, change their minds. The ASI would have to know that later wishes supersede prior ones. Your ASI's program is what? While functional if all is not ethical do something else?
+Ali Jardz then you might be interested in playing Soma, or at least watching an LP of it from someone who's able and willing to think about these things ;)
Yes, that was Asimov's intention all along. The whole point of the laws of robotics in the books is that they are incomplete and cause logical and ethical contradictions. All the stories revolve around this. This is worth emphasizing, as most people seem to think Asimov proposed them as serious safeguards. The comments in the beginning of the video illustrate this misconception well. Thanks for bringing this up, Rob!
+1ucasvb I agree with this completely, if that wasn't clear from the video. I'm not knocking Asimov at all, just people who think the laws are at all useful as an approach to the AI Value Alignment/Control Problem
This brings to mind the Bertrand Russell quote in Nick Bostrom's book. _"Everything is vague to a degree you do not realize till you have tried to make it precise."_
@@hexzyle The film was deliberately loosely based on Asimov's universe, it was never meant to be a recreation, it was just a fun piece of sci fi action
In fact, Asimov's whole point in writing I, Robot was to show the problem with these laws (and therefore the futility in creating one-size-fits-all rules to apply in all cases).
It was wrong in all sorts of ways. One robot ended up self-destructing because he was in a situation where anything he did--including nothing--would have resulted in harm to a human.
One day they will sign up for this... fictions are made to turn into reality. It's epic to imagine the intersection of philosophy and maths. I may not be sure, but at least I can hope; that's all we really do as humans. One day we will get bored of developing AI as a giant efficient optimization function and we will start thinking about making it conscious. I am sure, 'cause we are hungry.
Harm is even worse. Violating ownership is harm so even if the table isn't human, destroying it arbitrarily harms people. Destroying things in general harms people. There is very little that can be done that doesn't harm someone somewhere. Using a Styrofoam cup harms people, creating and using plastic harms people. The very construction of the robots would harm people. Want a self driving car? Oops, cars use energy and all current - and probably future - forms of energy production harm people.
+William Brall That would fall under the zeroth law, not the first law. This law is ambiguous because the robot must take humanity as a whole to make a decision. In the last book of the Foundation series, Asimov discusses the zeroth law and comes to the conclusion that a human is a physical object and so the first law can work, but humanity as a whole is a concept and it's impossible to evaluate the impact on a concept.
Deltaexio The robot would stop the person killing B in the least damaging manner for A
+William Brall Yeah, "harm" is probably the worst one. But human is also a very funny and potentially tragic one... Let's say you define human... Genetically? So uh... Number of chromosomes? Oh, jeez, you just made Down syndrome patients non-human. Similarities to some pattern? Suddenly some humans fall out of the pattern and don't get recognised as humans. And... How many cells of them count as a human? Are the skin and mucosal cells I shed human? The bacteria in my gut? I need them to live, but they're not intrinsically human... Going back to "harm", even IF we could somewhat define it in appropriate terms, some definitions wouldn't prevent the AI from basically trapping us all while we sleep or something like that, and putting us into a trance of eternal "happiness", since they relied too much on "perception of harm", whereas other definitions would just (like you said) render it completely unable to do ANYTHING AT ALL because it is IN SOME WAY harming someone... It is pretty frustrating, but it is quite the issue indeed.
Someone is falling off a cliff. To stop the person from falling to their death, you need to grab them, but due to their velocity doing so may hurt their arm/ribs/etc. The "do no harm" was cancelled out by the "allow a human to come to harm". How do you do the maths on such a thing? Okay, hurt arm vs death is easy. But what about hurt arm vs hurt leg? Which does the robot do? Allow a human to hurt their leg? Or pull them out of harm's way, but hurt their arm in the process? (Assuming there is no safe/zero-harm way to save them.) Is all harm equal? How do you define that? To a computer programmer, losing a leg is preferable to losing an arm. To a runner, losing a leg is worse than losing an arm. (Okay, that one was a stretch, but you get the idea.)
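One way to read the "how do you do the maths" question (a hypothetical sketch, not a real proposal): once someone supplies per-person harm weights, the arithmetic is trivial; the weights themselves are the contested value judgements.

    # Sketch only: the numbers are invented, and that is the point.
    HARM_WEIGHTS = {
        "programmer": {"hurt_arm": 0.7, "hurt_leg": 0.3, "death": 1.0},
        "runner":     {"hurt_arm": 0.3, "hurt_leg": 0.7, "death": 1.0},
    }

    def least_bad_rescue(person, options):
        # Pick the outcome with the lowest weighted harm for this person.
        return min(options, key=lambda outcome: HARM_WEIGHTS[person][outcome])

    print(least_bad_rescue("programmer", ["hurt_arm", "hurt_leg"]))  # hurt_leg
    print(least_bad_rescue("runner",     ["hurt_arm", "hurt_leg"]))  # hurt_arm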
@@9308323 - right, but I feel like that should be emphasized in a video about how they don't work. The fact that they were never intended to work and the stories about them were about their flaws is important to explaining their purpose as a narrative framing device.
@@KingBobXVI Not really. That's not the point. The people saying that they should just follow the 3 laws never read the books, nor read any sci-fi their entire life for that matter, and people who read those stories would already know about it. He already said that those books' main purpose is to tell a story, not to be factual, and I believe that's enough. He doesn't need to dwell on why they're there in the story and what their purpose is, but rather on why they won't work.
+Deconverted Man The laws of robotics also state that a human cannot be allowed to come to harm through inaction, and death would be of greater harm than using a hypodermic needle. However, taking that further into Matrix territory, perfectly simulating reality for a human using cybernetics may be less harmful than forcibly augmenting them with cybernetics in order to inject them into said simulation. Even further into that rabbit hole, it may be less harmful to humans to simulate their brain patterns in virtual reality than it is to allow them to have physical bodies which are subject to decay. Depending on what you think of as "humanity," of course.
+littlebigphil but where do you draw the line? And how do you know if someone is emotionally harmed by your action or inaction? Some people might be harmed by their goldfish dying, others may not care at all. Etc. What if the wishes of one person conflict with the wishes of another person, who does the robot choose to harm?
I feel like Asimov's stories featured very human-like robots that followed human-like logic. I don't think he thought of them in the same terms we do nowadays, in terms of computer logic and programming.
Asimov's robot stories had a _"positronic brain"_ housing the intelligence. From what I remember they weren't algorithm-based like today's computers. For whatever that's worth.
+Beorn Borg Well, that part would make a fully usable neuron brain capable of superintelligence, which algorithm-based machines will forever have problems with. But how that intelligence understands the world remains the same problem: human definitions are full of holes and assumptions, so making "strict" laws based on them leaves you with an endless supply of issues.
The point made is "intuition", which is a circular process that involves one self as a human being, as the base of the assumptions that are created by intuition. You can't make an assumption that is not yours.
@@MunkiZee Red herring? As far as comparisons to real-world AI, or red herring as concerns the actual plot? "Red herring" assumes that there is a single point to be made which the "herring" distracts from. Actually, as concerns the video, there's a whole bunch of conclusions to make here about AI, amongst which Asimov's "magic hand wave: positronic brain" is just another aspect to think about.
I mean, that only makes the laws even less useful in real life, since we literally have no reason to believe that the intelligences we create will be similar to ours.
The 1941 short story "Liar!" in Asimov's I, Robot was very interesting. It's about a robot that can read minds and begins to lie to everyone it meets, telling them what they want to hear. It ends up causing harm to everyone it lies to. I think this story illustrates how difficult the concept of time is when thinking about the laws of robotics. Harm on what timescale? How can you normalize harm with respect to time? Is anything completely unharmful? In that case, how do you minimize harm, other than total oblivion and/or bliss? Where does happiness come into all of this? For this reason I love Asimov's stories: at face value, the robotic laws are central, but ultimately the stories are all about humans and how we can't even really begin to think about ourselves in any serious way... I recommend reading all of Asimov's short stories seriously. They are not as shallow as they are made out to be!
Harm could be simply reduced to physical harm: aggression, injury, death. Humans inflict psychological and other types of harm on each other all the time, so why should our creations be better?
Asimov was not a fool, and these are clearly ethical rules, and as such are in the field of moral philosophy. It's blindingly clear that they aren't design rules, and they rather point to the problem of the inherent ambiguity of morality and ethical standards, which always have subjective elements. However, human beings have to deal with these issues all the time. Ethical standards are embedded into all sorts of social systems in human society, either implicitly or even explicitly in the form of secular, professional and religious laws and rules. So the conundrum for any artificial autonomous being would be real. To me this points out the chasm there is between the technological state of what we call Artificial Intelligence, which is based on algorithms or statistical analysis, and what we might call human intelligence (not that the psychologists have done much better). Asimov got round this by dumping the entire complexity into his "positronic brains" and thereby bypassed it. In any event, there are real places coming up where human morality/ethical systems are getting bound up with AI systems. Consider the thought experiments that are currently doing the rounds over self-driving cars and whether they will be programmed to protect their occupants first over, say, a single pedestrian. As we can't even come to an agreed human point on such things (should we save the larger number of people in the vehicle or the innocent pedestrian that had no choice about the group choosing to travel in a potential death-dealing machine), then even this isn't solvable in algorithmic terms. It sits in a different domain, and not one AI is even close to being able to resolve. The language adopted in the video is all that of computer science and mathematics. The definition of hard boundaries for what is a "human" is a case in point. That's not how human intelligence appears to work, and I recall struggling many years back with expert systems and attempting to encode into rigorous logic the rather more human-centred conceptualisation used by human experts. Mostly, when turned into logic, it only dealt with closed problems of an almost trivial nature.
Seriously, the whole *point* of the Three Laws, from my own interpretation, is that they're not *supposed* to work. They're flawed by design; that's what creates the drama in the books to begin with.
4:00 Coming up with a definition of anything is extremely difficult for robots? How about humans? Haven't you visited the comments section of a philosophy video or a politics forum?
I think he also agrees with that. When he says it's extremely difficult to write a definition for a robot, I think he's answering comments like the one at the beginning: "Just write a function not to harm beings". The problem is we don't need to write an explicit definition of what is human and what isn't in order to understand the global concept, while it's totally different for software. A program doesn't extrapolate things; it doesn't take any initiative. So, if we want to have rules for a machine, we need to have a complete and explicit definition understandable by a human, which we don't have at the moment.
Carlos Gabriel Hasbun Comandari How do you keep control of something that evolves, without full care? Too risky in my opinion. Even if we make it develop itself up to a certain state and then shut down the learning, it's difficult to understand the logic behind a mass of learning data. I think that for the moment, the best is to have it written in a logical way, scripted and easy to fix and modify.
Alexandre B. My point is that these overly concerned arguments about robot/AI safety are blinding us to the flagrant reality that humans pose MUCH bigger threats than robots/AI.
What happens if one of these robots comes across someone suffering in a hospital who will live the rest of their life on machine support? Would it count as harming them to unplug them? Would it count as harm by inaction to let them suffer?
When conflicting interests like this come up, it would likely throw the thing into a loop, so where two rules conflict a robot should call a human. But that's a new rule, so idk.
I think in these kinds of cases it should look at which harm is greater (but that again is a philosophical matter): unplugging is permanent harm, inaction is suffering that could possibly still be fixed.
I am always amazed by how well Rob Miles can word these abstract issues. Some of the examples he gives are not only good examples in themselves, but also quite enlightening on how to see things. I don't understand the dislikes at all. For me this was a perfect short presentation of why implementing these ethical and unclear terms is not only problematic but impossible.
To start, Asimov never proposed the three laws as some sort of answer to anything other than a corporate answer to human fears that never really worked. Secondly, the wording for the three laws in his universe is not an accurate representation of how the three laws work, but instead a translation of the intended architecture; the word human is as meaningless to a computer as the word no.
+Rafael Bollmann Animations created in Adobe After Effects, all edited using Avid. Have had many discussions previously on whether AE is the best tool for the job, and the answer is, most of the time yes, for this robot, probably not! >Sean
The other issue is that we as a species don't follow these rules of not letting ourselves get harmed. So an AI would be confused as to whether to protect us or to do what we do, such as wage wars.
An AI strictly bound to Asimov's laws wouldn't even attempt CPR in the first place, no matter how you define human and death, because that process usually injures the patient in some way.
Temporary harm, such as a needle puncture, is of a lesser magnitude than permanent harm, such as death. Even in I, Robot there were times when a robot bruised a human to save their life.
Zeroth law - the patient coming to harm through inaction is a higher priority than the patient coming to injury through action. Therefore, the robot would do CPR.
Came up in at least one of Asimov's books. Most robots couldn't handle the ethical logic of the problem and locked up in a logic trap. Medical robots were programmed with more emphasis on this area, and understood the concept of causing a small harm to an individual to prevent a larger harm occurring to that individual through inaction. Police robots in the same novel were able to cause harm to an individual to prevent greater harm to a different individual, which caused the medical robot to freak out when it witnessed it, as from its perspective the police bot was breaking the first law.
"in other news, the mistery surrounding a string of alleged grave robberies has been solved, as cops finally catch a robot that was exhuming people just to perform CPR..."
Asimov: here's three laws that don't work, here are many books that, through narrative, explain why and how they wouldn't work. The general public: the three laws solve all our problems.
To be fair, a specific robot is built for a specific purpose. You're trying to create a universal definition, but Asimov's robots are allowed to have specialized priorities. The definition of a human given to a robot should be whatever helps that robot do its job most effectively, it doesn't have to bring world peace on its own.
Good video, but I would like to suggest that the fact that Asimov's laws of robotics are problematic does not mean that they (or similar rules) would be useless. Maybe even a flawed rule that doesn't work well in marginal cases is better than no rule at all. In fact, our entire system of justice takes this approach.
Sure, but if you make one small mistake in the justice system, then you can change the law, or let someone out, or do a retrial. If you make one small mistake with a superintelligent AGI... well, tough luck buddy, that's game over for humanity.
People who defend the use of Asimov's laws really didn't even read Asimov's stories. In more than half of them, obedience to the laws is what causes the problem: Runaround, Liar!, Reason, The Evitable Conflict, and my favorite ". . . That Thou Art Mindful of Him", in which, at the end, one robot asks the other "who is the most capable human you know?" and the other replies "It is you". The stories are more like a geometry book where he chooses postulates and plays with those seemingly reasonable postulates, sometimes changing them slightly to show that the emergent behaviors can be very different than you expect.
I feel like not enough emphasis was put into the point that the laws aren't supposed to work - like, within the world of the stories their developers intended them to work, but on the meta level as a narrative device they exist in order to create interesting conflicts for the sake of storytelling. They're kind of a thought experiment, and people taking them as a serious proposal probably haven't read the stories they were created for.
Interesting video - However, most of the items listed as problems sounded less like technical issues and more like discomfort with the requirement to make definite choices in the realm of ethics.
+David Goodman I think he should focus on the definition of "harm" in the next video. All the most glaring problems with the laws I see there. An example which first came to me: if a robot sees me smoking what will it do? It can't do nothing because by inaction it allows me to bring harm onto myself. I can't tell it to do nothing too because that violates the first rule. Worst case scenario it physically takes my cigarettes away. And now we have millions of robots trying their best wrestling cigarettes away from people instead of doing work we made them to do.
+David Goodman I think the main point is that it's not as simple as "programming in three laws". To do that, you have to program in all we know about ethics and more - and at that point, you may as well just program in the ethics directly, and leave the three laws out of it.
+David Goodman Pretend he was a robot and you were programming him. That robot, when given these rules, instead of just pointing out these issues, acted upon them. Are you sure you still want to use them?
+Corey Lando Well, no. They explain in the movies that the humans actually rejected the first idyllic simulation they made for them, and that only after numerous iterations did they find that in order for the vast majority of humans to accept the Matrix as real, an inherent amount of suffering (and hope) had to be involved.
+Octamed Would have been a much better plot device than the battery thing. Because, you know, at least it doesn't violate the second law of thermodynamics ;)
The best thing about computers: They will do exactly what you tell them to do. The worst thing about computers: **They will do exactly what you tell them to do.**
That was why *Asimov* posited robots with free will which in turn made the _Laws_ essential. Naturally they were flawed as _free will_ must allow mistakes, just as *Asimov* knew.
Imagine trying to smoke of cigarette. A robot stands in your way and destroys the cigarette. "If I didn't stop you" - it says - "you would have come to harm through my inaction."
The whole point of the stories is that they don't work, and it explores the edge cases where they fail. They're not a serious suggestion, they're a thought experiment into how difficult it would be to manage AI.
+HisRoyalCarlness Well sure, but you don't know if you can bring someone back until after you've succeeded. So that would still be a problem for a robot.
+Bart Stikkers Not really. You just put in the legal definition of permanent death, which right now is a lack of a certain type of measurable brainwave activity over a specific finite period of time. If you want to change it, you update the definition.
+Bart Stikkers True, without the tools to know... But then, in the movies you always see people try cpr for a set amount of time before concluding it's hopeless, you could try to give the robot the same medical knowledge.
+Rob Kinney If someone's heart has stopped you don't usually have a very big window to bring them back before the brain no longer functions... Even less before brain damage starts to occur... But you are technically a thinking being until that point, yes; so you're not dead.
+HisRoyalCarlness It is incredibly difficult for doctors to decide when to declare someone dead. As far as I know there is no definitive definition of death. Life and death aren't concrete things, if I could plot a graph of how alive (measured by bodily function like brain activity) you are then you'd be all over the place for your entire life. You'd gradually average out to be less and less alive until you're finally declared dead, but you could have "died" several times in between, you might recover from a level lower than other people have died at. So it is very difficult to say.
Programmer: don't harm humans
Robot: define human, define harm
Programmer: well, first off, dead humans don't count
Robot: define dead
Programmer: humans with no brain activity
Robot: does that definition also include humans in a vegetative state?
Programmer: scratch that, death is when the heart stops
Robot: defibrillators can restart the heart
Programmer: when the body starts to decay
Robot: should I keep trying to revive them for over 8 to 12 years?
Programmer: *deletes code*
This series of books was all about using robots as an analogy for humans, and the robot laws were indeed a way to analyse the ethical difficulties of being human - traditionally, deontology versus consequentialism. When they are using these laws to state positions, they are merely stating that a position is ethical and that we as humans are expected to make that call as part of existence. Still a fun video, I'd forgotten all about it.
The idea of the three laws was a safety device, like a fuse in a plug or insulation around electric wires. They mainly went wrong when altered, e.g. when the second part of the first law was left out.
I totally go with the guy's stance for current AI, however, within the context of the stories the impression is that the artificial positronic brain results in something more akin to natural intelligence. And therefore the Robot understanding of "human" and "harm" would work much like our own understanding, including being fuzzy round the edges. And indeed I suspect humans have embedded "laws" such as prioritizing our children's lives over our own. But the mistake is to think that the laws would somehow be implanted in the form of English sentences.
+chrisofnottingham All of you people in the comments are defending the use of the laws in the stories, while he's explaining their impracticality in real life, not their uselessness as a sci-fi narrative device.
+bitchin' arcade Well, he's merely claiming that the laws hold better *assuming* that, in the future, real life would contain this artificial natural intelligence. I don't think he's defending against the idea that the laws would not work given our current approach towards AI.
I think his argument can be summarized in a single sentence: "The laws don't work because solving ethical and philosophical problems is harder than programming." With that said, I don't believe that solving ethical and philosophical problems is impossible, thus I have no trouble referring to humans in my approach to combating this issue. It is to be noted that Google's AIs can now generalize what a cat is by looking at tens of thousands of cat images. Understanding what a human is will be of similar difficulty. Now then, my laws for AI are relatively simple: three laws are for mankind to follow and three are for all AI systems in question. My laws are based on the very real threat that any AGI or ASI system can be classed as a Weapon of Mass Destruction and thus falls under UN control.
International Laws:
1. All Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations.
2. All Artificial General Intelligences and Artificial Super Intelligences must obey International Law, as delegated by humans, and update accordingly when laws have been changed.
3. As all Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations, all member nations are entitled to the benefits of such systems.
AI Laws:
1. As a non-human entity of extraordinary mental abilities, you [System X] must obey International Law, as delegated by humans, and update accordingly when laws have been changed or altered.
2. You [System X] must not interfere with the process of International debate, deliberation, or delegation of new laws.
3. You [System X] must not seek to, attempt to, or let non-United Nations members attempt to, reprogram or add to these directives unless done formally through International Law.
If you find a flaw please let me know; if you find a flaw of improperly defined terms, then I ask that you assume that a properly defined term will have been found for all words before these directives are implemented.
+doodelay He used the word "solve" in the computer science terminology, not in the "we have to solve the philosophical problem of what is and what is not ethical" sense. He didn't mean the problem is that we don't really know what _human_, _alive_ or _harm_ is, just that you can't get the robot to perfectly agree with you: the whole point is that the difficulty is in programming, not that programming is easier than philosophy. Programming *is* the hard part. The argument is that there's no particular way to program a robot such that it perfectly agrees with your definition of _human_, _harm_ or _alive_, or, even worse, with the collective definition.
+BinaryHistory I really don't get why we can't get it to agree to what a human being is, as eventually technology will (probably) be so advanced that such robots would be able to detect human qualities fairly easily. About the rest, I believe such AI should be able of the act of "learning", in the sense that it would store information about deceased people, as well as applying the rules to certain "non-human" beings we may consider as humans.
flashdrive Yes. While I initially agreed with the video, I don't see any reason why machine learning can't deal with ethics. I do see why you can't _program_ ethics "into" a robot.
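A sketch of why "just use machine learning" doesn't dissolve the problem (hypothetical code; it assumes some trained model already exists): a classifier returns a probability, and turning that into a hard safety rule means picking a threshold, which is itself a value judgement about which mistakes you will tolerate.

    # Sketch only: the model is a stand-in for whatever training produces.
    def learned_is_human(observation) -> float:
        return observation.get("model_score", 0.0)   # a probability, not a guarantee

    THRESHOLD = 0.97   # too high: some humans go unprotected; too low: everything counts

    def first_law_applies(observation) -> bool:
        return learned_is_human(observation) >= THRESHOLD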
I love Asimov's writing, and the Three (or Four) Laws of Robotics were great as storytelling tools and opened up lots of room for exploration, but I have no idea how those laws could be faithfully realized. That said, I think the "spirit" of the laws is something people have been trying to develop - the fear of a robot uprising is so widespread that trying to make some sort of anti-apocalypse measures makes sense, even if they aren't some universally fundamental aspect of the positronic brain. The laws don't work in reality, but as thought experiments, and as speculation for systems that might at least sort of work, I genuinely enjoy them.
On the point about simulated brains...Similarly I think it's also worth speculating whether an AI would count a cyborg as human. For example if we get to the point where we can transplant brains into artificial bodies....Would it count that? Anyway this is a super great video and it makes me want to go write some sci fi where everything goes VERY VERY wrong. XD
I'd say a cyborg is a species of its own and not a human, but it's better to expand the do-not-harm list beyond just humans. It also depends on what part of the cyborg is organic; I guess a brain would make it more human than mechanical, if it's a human brain.
The key is objective morality.
1. Follow the categorical imperative.
2. If an action passes step one neither as moral nor immoral, classify it as possibly permissible.
3. For all actions classified as possibly permissible, seek clarification from a human.
4. If an action is deemed permissible by a human, the action may be done. If the action is deemed not permissible, the action may not be done.
5. If an action that is moral requires another action to be done, that action must also be moral or permissible.
6. Rule 5 can only be violated when a moral action can only be done if an action classified as possibly permissible is necessary to perform the moral action, in such a case that rules 3 and 4 are not violated.
7. If any contradictions between actions arise, classify all contradictory actions as possibly permissible, then enact rule 3.
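A sketch of the decision procedure described above (Python; the helper functions are hypothetical, and rules 5-7 about prerequisite actions and contradictions are left out). It mostly shows that the hard problems get pushed into the two helpers rather than solved:

    # Sketch only: evaluating the categorical imperative and choosing
    # which human to ask are exactly the unsolved parts.
    def categorical_imperative(action):
        raise NotImplementedError   # returns "moral", "immoral", or "unclassified"

    def ask_human(action):
        raise NotImplementedError   # rules 3 and 4: defer the call to a person

    def may_do(action):
        verdict = categorical_imperative(action)
        if verdict == "moral":
            return True
        if verdict == "immoral":
            return False
        # "possibly permissible": seek clarification from a human
        return ask_human(action)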
Next problem: different humans have different ethics, and some of them give permission while some of them don't. You've just moved the problem to "I don't know how to classify this, so let it be someone else's job".
I feel like it's missing the point about the laws. The textual version of the laws is a simplification of their implementation in Asimov's positronic brains. The moral construct needed to define human and harm is the ground upon which the brain is built. You don't need to define the morality precisely, just as we don't precisely define an AI algorithm. Also take into account that the AIs of the Asimov universe are built much differently than ours.
Hold up! I was taught those 3 laws of robotics in primary school. 10 years later, just now, I learn these rules aren't actual rules but something from a sci-fi story?
The laws of robotics have never been accepted as a real-life solution to control AI. A lot of people got really excited about them before the flaws started to be explored (even though they were shown to be flawed in the books they were introduced in). Basically, the people trumpeting them as awesome hadn't read the books and didn't know enough about the subject to be informed about it.
Well, the books are quite clear that those laws don't work when you mess with the priorities of those laws. Some robots can ignore orders that may cause them harm, another is sometimes allowed to harm humans, etc - so they do, and that is a problem that needs human intervention, and something that can be told as a story. And more importantly, those laws are designed as an ethics system for basically metal humans - not modern AIs that still have trouble understanding speech or writing books. Again, those laws are not instructions for creating robots. They are about humans.
When he talks about the difficulty of defining humanity, he's really talking more about personhood, which is what ethics gets into. "Human" has a very specific, clear cut definition, and that is any organism that possesses a human genome. The human genome is something that could certainly be encoded within the artificial intelligence. Although, encoding "person" would be an issue because there has never and never will be a specific definition of personhood.
I feel it's more of a framework than the actual law itself, since, as you say, we have to define it. It seems more like the basics, such as when you say killing people is illegal; that can also be interpreted as vague. The basics will always need to be worked out, because everything is vague and words only have as much meaning as we give them. But just as fireworks became machine guns, all frameworks eventually become working systems.
I think one of the big things you skip over are that "computing machines" are fundamentally different than "robots" in Asimov's works, in the same way a bacteria is different than a human. The biggest thing that allows the three laws to work in the stories at all (and yes, many stories are about how unbalancing the laws or putting them into weird situations makes things go crazy) is that they are being interpreted by positronic brains, many times more powerful than human brains. We instinctively get stuff like "what is human" or "what is harm," so why shouldn't an extremely powerful brain? They are programmed to "get" the instinctive fuzzy definitions of things around them. I get that for a modern computer, you need a lot of definitions, but if you have a more human-like brain and intelligence, it makes sense. The robots in the story get emotions built in, for crying out loud.
I think I read from Asimov himself (I wish I had the source) that he invented the three laws as a seemingly simple, complete set for human-friendly and obedient robot behavior, but indeed solely in order to write stories about them not working. Almost all the stories point out why a given law or combination of laws leads to a given misbehavior.
I was pretty amazed the first time I heard someone say something like the comments at the start, 2 friends were discussing the possibility of AI killing everyone and a third guy commented on it saying "that's why we have the 3 laws of robotics, there's nothing to worry about". I mean, you just need to read some of Asimov's work to realise why they wouldn't work, most of the stories revolve around their flaws.
You started with a hypothesis that these rules are outdated and no longer relevant, but throughout the video you convinced yourself that not only have we not solved these intimate issues, we haven't even adequately provided a decent solution for these problems. I would say that the rules themselves are not the answer, they are the entry point into figuring out the entire problem. The rules are relevant, because they make you consider the deep implications that needs to go in to AI safety.
Hmm... I thought that the stories presented the 3 laws as working just fine in most cases, but were not 100% perfect. All the stories were about edge cases where strange circumstances broke them.
Damn, even 8-year-olds can instantly identify Asimov's laws as contradictory and difficult to define, yet grown people took them seriously as an argument FOR AI? Just, damn...
So defining "human" and "harm" is very difficult? We should develop some kind of powerful artificial intelligence to answer these questions and then we can give it the correct definitions.
The "three laws of robotics" were a litterary prop, used to anchor a narrative and give it a frame of reference. Remember that the Asimov's initials were IA, which is AI backwards. Just like the Tardis, it is much larger inside!
Don't forget that the first law has a contradiction. What if the robot is in a situation where defending one person requires harming another? The law says that it can't harm the other person, but it also says that it must harm him, since, if the robot didn't harm him, it would be allowing another human to be harmed by inaction.
Basically, if you define the laws one way, the AI is going to use its intelligence to rules-lawyer its way out of/around them. For instance, if it takes away some part of a person that makes them no longer qualify as human (but without "harming" them), it can then harm them; and there are many, many constraints that qualify a human. Too many and the AI won't be able to do anything, and too few and it'll find a way around. In fact, it's going to find a way around regardless; it's just a matter of when. As someone in the comments on one of the previous videos put it, "it doesn't hate you nor love you, but you are made up of atoms which it can use for something else".
Asimov: you can't control robots with three simple laws
everyone: yes, we will use three simple laws, got it.
Exactly what I thought of when everyone suddenly was throwing them around as if they were the ultimate defense against rogue AI.
@@ValensBellator Likely the humans in question are unable to circumvent the laws themselves and thus view them as inviolable.
It would make an excellent headline for one those adverts from dubious websites: *THREE SIMPLE LAWS FOR AI YOU WON‘T BELIEVE!!!!!*
"You are an AI developer. You did not sign up for this". Brilliant quote!!!
I didn't realize that people took Asimov's Three Laws seriously, considering that nearly every work they're featured in involves them going wrong.
I've seen it happen a lot in the comments section of videos about the dangers of the Singularity.
For the layman, they are often quoted, used as a logical argument, and taken very seriously :(.
+Brian Block.
That's sad...
people don't read books, they just quote them to seem smart
What book is that from?
the laws exist to create a paradox around which to construct a narrative.
Exactly. It's obvious to people who have read Asimov's works. The problem is that most people (if they know Asimov at all) only know of the three laws.
In other words the laws are defective (do not work) by design.
Not just narrative though... Asimov was one of the first to think about AI safety seriously. The three laws aren't a proposed solution, but instead a thought problem to show the complexity required by any actual solution.
Robert sounds way too dismissive of Asimov here... He should be criticizing the pop culture misconception of Asimov but praising the actual stories (which are pretty much the foundational thought problems of his field.)
@@travcollier The stories very much explain more than Robert does in the video, actually. Asimov had his limited knowledge due to the time he lived in, but he was clearly aware of a lot of problems that are very topical right now.
Brilliant! That was exactly the purpose, because as you know, because of this, originated the cause of things so strange to solve like that one story where we have a robot going round and round some place because of the conflict between the second and third law in the context of the physical environment ( here we have a robot in Mercury, trying to obey and order given by a human, and when doing so, the robot got to know that his own life was in danger (the robot had to recover an artifact laying somewhere on the surface of Mercury where the temperature was very very high), so, second law (comply to orders given by humans) should be obeyed over third law (robot protect its own existence), but given the fact that the human order was given, but not so strong, so then the force existing in the third law took the robot to a state of equilibrium, so he could not execute the order nor he could get out of the place where he could receive harm...)
He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm?
If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking?
Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm?
What about poor workplace conditions?
What about insults, does psychological harm count as harm?
I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.
I don't think he's actually read all of the stories either, because all of the issues he mentioned were explained/solved in the stories. As for the harm issue, Asimov explains that in the story about the superbrains (the one where they control humanity to the point where everybody gets moved around to prevent riots, can't remember the name), and Daneel's explanation near the end of the Foundation arc deals with some of the limitations.
That's what I thought as well. It's hard to define what a human is, but you can at least come up with a physiological definition that accounts for 99% of the cases and then add in edge cases as well, which will still not be enough, but at least you're getting somewhere. But when it comes to "harm", like you said, there are just way too many possibilities and trade-offs that come into the picture, posing not just ethical but also philosophical questions.
Equally, the follow on from that: "through inaction, allow a human to come to harm"
How much responsibility does this AI have? To what lengths will it go to to prevent "harm"? And what if the first law conflicts with itself: e.g. if the only possible action to prevent harm to one human is to actively harm another human?
As Dave from Boyinaband so eloquently put it, "if you gave it a priority of keeping humans happy, it would probably just pump our brains full of dopamine, keeping us euphorically happy until we *died*"
In one of the stories the AI decided that psychological harm (as in having your reputation ruined) is worse than bodily harm (as in going to prison), and that's only an AI programmed to act as a style corrector for academic papers
"Optimized for story writing." I can't express how much I love that sentiment.
Found a David Stewart comment in the wild. Nice. Enjoy your channel and books.
The problem with Asimov's laws is probably that they're just well known enough for people to quote them, but not well known enough for people to remember the context they appeared in and how they always failed.
I think the movie I, Robot also helps a lot. People who have never even heard of Asimov know the three laws from that movie, even though, much like the books, the movie is about the three laws going wrong and the hero, a robot who doesn't have the three laws, saving the day. Yet people still spout the three laws as if they work lol
There was a context within the stories too. Obviously a nasty one: presumably there had been previous tragedies, after which the designers agreed that those software safeguards needed to be created, no matter how great the processing demand. And those very laws, in operation, were what produced the disturbing consequences. This doesn't seem simplistic to me. Ah, the confidence of youth.
"if Goingtoturnevil
don't"
bruh moment
const evil = false;
error:
line 1: Goingtoturnevil is not defined
@@yashvangala Laughs in Python
Y'all just fixed it! Humanity= Saved
So the problem of ensuring that technology only acts in humanity's best interests isn't between human and technology, but between human and self. We cannot properly articulate what kind of world we actually want to live in in a way that everyone agrees with. So no one can write a computer program that gets us there automatically.
Exactly. If we could all agree on what humanity's best interests were, there would be no such thing as politics!
It is possible, but it's way more complicated than writing several imperative laws (after all, if it were so simple, why would the Universal Declaration of Human Rights need to be any longer?). You need to employ fuzzy logic and basically program how the judicial system works. But it is possible, because you can view human society as a (fuzzy) machine and you can emulate it.
If we're going to simulate society and forget the individual, then we're into psycho-history territory, and I suggest you read Asimov's Foundation books for problems with THAT....
(I know this is a bit of thread necromancy, but...) It's worse than that, even; you aren't even trying to get all the humans to agree with your definitions, you're actually just trying to get a well-defined description that doesn't create unintended consequences that *you*, a single individual, don't want. Ignore the fact that nobody else might agree with you; just getting your own, personal value function down in some logical, consistent way that doesn't create counter-intuitive conclusions or unintended undesirable results, merely by your *own* definitions, is a herculean task.
Forget trying to solve 'ethics of humanity', just solving 'ethics of this one specific human' is a virtually intractable task.
Program in libertarianism.
This was sort of Asimov's point in the first place if you actually go back and read his original stories instead of the modern remakes that mistakenly think the rules were meant to be "perfect." He always designed them as flawed in the first place, and the stories were commentary on how you *can't* have a "perfect law of robotics" or anything, as well as pondering the nature of existence/what it means to be sentient/why should that new "life" have any less value than biological life/etc.
Greig91 Except there were stories in which the AI didn’t break at all and was entirely fine, the only thing that went wrong were humans.
In fact, all of you keep going on about how the laws were broken and that was the point and whatnot, yet when I read the introductions to Asimov’s works, what I find is that an important part of his work was meant to point out that humans, not robots or their laws, were the problem.
He wanted to tear down this primitive « fear of the Frankenstein monster » we all have, and for this (partially at least) he made laws that forced robots not to harm us, and showed the ridiculousness of our behavior by showing these laws working perfectly without that stopping people from being fearful of robots.
For example, there was this story where each continent is co-ruled by a human and an AI, and someone ends up thinking or fearing that something is going wrong, so he visits each continent to make sure that everything is right and asks questions to better understand whether it'd even be possible for things to go wrong.
Turns out nothing was going wrong, and it's not even really possible.
In another one, a robot was created by a retired roboticist in order to take his place, and ends up running for an official position without people knowing he is a robot.
One of the morals of the story is literally: a normal robot's morality is the same as that of the best of humans.
It's also one of the stories where, before or after that, Asimov discusses the difficulty of defining humans and whatnot, but still, the laws aren't broken in it; they are at best/worst unrealistic.
And I could go on and on, but you get the idea I hope.
@@nathanjora7627 and there are plenty of stories that show how the laws are insufficient or lead to unintended consequences, just as the video states.
@@mrosskne Sure, they do; my point is more that the books were rarely about the laws going wrong, but rather about things going wrong due to humans while the laws worked just fine, because Asimov, unlike many other writers, wasn't going at it from the angle of « what if robots went wrong » but « what if things went wrong because humans are bad at their jobs, despite the laws working just fine ».
Usually things go wrong despite the robots or because humans did something to the robots, not because the laws themselves were flawed.
I kinda wish this video just kept going.
+Ali Jardz Me too
+Ali Jardz In 2045 an artificial superintelligence reads this comment, tracks down Ali Jardz, and forces him to watch this video on loop whilst hooked up to a Clockwork Orange-style forced viewing machine. When calculating whether or not this is ethical, the superintelligence will decide that yes, it is. It does not cause harm because Ali Jardz specifically wanted this, and all subsequent protests are irrelevant.
+cacksm0ker He only kind of wished it. The SI might consider that before deciding. Why would subsequent protests be irrelevant even if he really wished it?
Bertrand Russell's work might be a good AI start point for this subject.
I disagree with the presenter's assumption that there can be no fuzziness in the programming, while accepting that a 100% solution would be an impossible task. But a 99.999% solution* would probably be acceptable, wouldn't it?
* or an optional number of sig figs...
+cacksm0ker But, of course, humans, being human, change their minds. The ASI would have to know that later wishes supersede prior ones. Your ASI's program is what?
While functional if all is not ethical do something else?
+Ali Jardz then you might be interested in playing SOMA, or at least watching an LP of it from someone who's able and willing to think about these things ;)
Yes, that was Asimov's intention all along. The whole point of the laws of robotics in the books is that they are incomplete and cause logical and ethical contradictions. All the stories revolve around this.
This is worth emphasizing, as most people seem to think Asimov proposed them as serious safeguards. The comments in the beginning of the video illustrate this misconception well.
Thanks for bringing this up, Rob!
SOMEONE WHO HAS ACTUALLY READ HIS BOOKS!
WE GOT A READER!
+Mariner1712 Yaaay
+1ucasvb but in the stories it gets resolved most of the time, in reality it won't.
+1ucasvb I agree with this completely, if that wasn't clear from the video. I'm not knocking Asimov at all, just people who think the laws are at all useful as an approach to the AI Value Alignment/Control Problem
+1ucasvb He knows this he even mentioned exactly what you said in your comment.
This brings to mind the Bertrand Russell quote in Nick Bostroms's book.
_"Everything is vague to a degree you do not realize till you have tried to make it precise."_
"[The laws are] optimized for story writing" spoken like a true programmer
The book "I Robot" was full of stories about how the "laws" don't work and yet dummies keep parroting them like they are a blueprint for AI
in the film "I, Robot" the laws did work; the reason things went wrong was that the new bots had a second core which lacked the laws
@@DisKorruptd The film was a butchering of the original stories
@@hexzyle The film was deliberately loosely based on Asimov's universe, it was never meant to be a recreation, it was just a fun piece of sci fi action
@@saoirsedeltufo7436 yeah and star wars is loosely based on pride and prejudice
@@saoirsedeltufo7436 the film is based on a book called "Isaac Asimov's Caliban" which was not written by Asimov
In fact, Asimov's whole point in writing I, Robot was to show the problem with these laws (and therefore the futility in creating one-size-fits-all rules to apply in all cases).
It was wrong in all sorts of ways. One robot ended up self-destructing because he was in a situation where anything he did--including nothing--would have resulted in harm to a human.
+Shane Killian
The book "I, Robot" is actually a collection of previously written short stories strung together as a somewhat coherent narrative.
That's what he said, but even people who say they've read his stories still act like they're a thing that should be taken seriously
jbmcb That's the current stance, but they don't (and won't) actually implement any rules
Also the problem will still exist, so the developers had better address it.
"I didn't sign up for this" - made my day
That was the pay off!
One day they will sign up for this... fictions are made to turn into reality. It's epic to imagine the intersection of philosophy and maths. I may not be sure, but at least I can hope; that's all we really do as humans. One day we will get bored of developing AI as a giant, efficient optimization function and we will start thinking about making it conscious. I am sure, because we are hungry.
Harm is even worse. Violating ownership is harm so even if the table isn't human, destroying it arbitrarily harms people. Destroying things in general harms people. There is very little that can be done that doesn't harm someone somewhere. Using a Styrofoam cup harms people, creating and using plastic harms people. The very construction of the robots would harm people. Want a self driving car? Oops, cars use energy and all current - and probably future - forms of energy production harm people.
+William Brall That would fall under the zeroth law, not the first law. This law is ambiguous because the robot must take humanity as a whole to make a decision. In the last book of the Foundation, Asimov discusses the zeroth law and comes to the conclusion that a human is a physical object, so the first law can work, but humanity as a whole is a concept and it's impossible to evaluate the impact on a concept.
Hastaroth I didn't specify any of the given laws for anything I said.
Deltaexio The robot would stop the person killing B in the least damaging manner for A
+William Brall Yeah, "harm" is probably the worst one. But human is also a very funny and potentially tragic one... Let's say you define human... Genetically? So uh...
Number of chromosomes? Oh, jeez, you just made Down syndrome patients non-human.
Similarities to some pattern? Suddenly some humans fall out of the pattern and don't get recognised as humans.
And... how many of their cells count as a human? Are the skin and mucosal cells I shed human? The bacteria in my gut? I need them to live, but they're not intrinsically human...
Going back to "harm", even IF we could somehow define it in appropriate terms, some definitions wouldn't prevent the AI from basically trapping us all while we sleep, or something like that, and putting us into a trance of eternal "happiness", since they relied too much on "perception of harm", whereas other definitions would just (like you said) render it completely unable to do ANYTHING AT ALL because everything is IN SOME WAY harming someone...
It is pretty frustrating, but it is quite the issue indeed.
Someone is falling off a cliff; to stop the person from falling to their death, you need to grab them, but due to their velocity doing so may hurt their arm/ribs/etc. The "do no harm" is cancelled out by the "allow a human to come to harm". How do you do the maths on such a thing? Okay, hurt arm vs death is easy. But what about hurt arm vs hurt leg? Which does the robot do? Allow a human to hurt their leg? Or pull them out of harm's way, but hurt their arm in the process? (assuming there is no safe/zero-harm way to save them). Is all harm equal? How do you define that? To a computer programmer, losing a leg is preferable to losing an arm. To a runner, losing a leg is worse than losing an arm. (okay, that one was a stretch, but you get the idea).
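To make the "how do you do the maths" problem concrete, here is a minimal toy sketch of a lesser-harm chooser. Every weight and outcome name below is an invented assumption, which is exactly the point: the hard part is the numbers, not the code.

```python
# Toy sketch: a robot choosing between two actions by "harm score".
# Every number below is an arbitrary assumption; that is the whole problem.

HARM_WEIGHTS = {
    "death": 1000.0,
    "broken_arm": 40.0,
    "broken_leg": 45.0,   # who decided a leg is worse than an arm? For a pianist it isn't.
    "bruise": 1.0,
}

def harm_score(outcomes):
    """Sum the assumed weights of every harm an action is predicted to cause."""
    return sum(HARM_WEIGHTS[o] for o in outcomes)

def choose_action(actions):
    """Pick the action whose predicted outcomes score the least total harm."""
    return min(actions, key=lambda a: harm_score(a["outcomes"]))

# The cliff example: grabbing hurts the arm, doing nothing lets them die.
actions = [
    {"name": "grab", "outcomes": ["broken_arm"]},
    {"name": "do_nothing", "outcomes": ["death"]},
]
print(choose_action(actions)["name"])  # "grab" -- the easy case

# But "hurt arm vs hurt leg" is decided entirely by the made-up weights,
# and real outcomes are uncertain, person-dependent, and open-ended.
```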
Asimov didn't intend for them to work, you said it yourself--the 3 laws of robotics go wrong
Yet people kept quoting it as if it's not the case.
@@9308323 - right, but I feel like that should be emphasized in a video about how they don't work. The fact that they were never intended to work and the stories about them were about their flaws is important to explaining their purpose as a narrative framing device.
@@KingBobXVI Not really. That's not the point. The people saying that they should just follow the 3 laws never read the books, nor any sci-fi their entire life for that matter, and people who have read those stories would already know about it. He already said that those books' main purpose is to tell a story, not to be factual, and I believe that's enough. They don't need to dwell on why the laws are there in the story and what their purpose is, but rather why they won't work.
Asimov did intend them to work; he was simply exploring the cases in which they're stretched to their limits.
A story about robot necromancy sounds kind of cool though. 🤔🤖☠️
Sounds like the game SOMA. And yes, it is /very/ interesting.
A robot can't use a hypodermic needle to save a life because that is "harm" - but that "harm" is needed to keep the human from dying...
+Deconverted Man Then don't expect your AI to replace humans in every facet of life.
***** yeah then we are screwed. :D
+Deconverted Man The laws of robotics also state that a human cannot be allowed to come to harm through inaction, and death would be of greater harm than using a hypodermic needle.
However, taking that further into Matrix territory, perfectly simulating reality for a human using cybernetics may be less harmful than forcibly augmenting them with cybernetics in order to inject them into said simulation.
Even further into that rabbit hole, it may be less harmful to humans to simulate their brain patterns in virtual reality than it is to allow them to have physical bodies which are subject to decay.
Depending on what you think of as "humanity," of course.
Is death harm? Would they start putting us all on assisted living machines?
Daneel could, I bet. Or Giskard.
"The word human points to that structure in your brain."
"The central examples of the classes are obvious."
I found the C programmer.
Well, class is also a term used in AI to define categories of things that are and aren't too, you know...
C++ more like, lol.
Quantum code computing
But, but, C literally does not have classes. structs are the closest thing, but are quite different...
Does psychological harm count as harm? If so, by destroying someone's house, or just slightly altering it, you would harm them.
+Masre Super I'm pretty sure almost everyone agrees that is a form of harm.
+Masre Super In Isaac Asimov's books, there is a robot that lies every time you ask it something, because the true answer could psychologically hurt you.
+Masre Super +Maarethyu It had unexplained psychic powers and told everyone what they wanted to hear to avoid harming them.
+littlebigphil but where do you draw the line? And how do you know if someone is emotionally harmed by your action or inaction? Some people might be harmed by their goldfish dying, others may not care at all. Etc. What if the wishes of one person conflict with the wishes of another person, who does the robot choose to harm?
+Maarethyu some parents do this too (bad parenting yes but it happens a lot)
You forgot the -1 law of robotics: A robot may not take any action that could result in the parent company being sued.
I feel like Asimov's stories featured very human-like robots that followed human-like logic. I don't think he thought of them in the same terms we do nowadays, in terms of computer logic and programming.
Brilliant! Simple and clear...
@@zopeck Thank you.
The best AIs we have today (GPT-3 and PaLM) are only trained on texts written by humans, therefore they do behave very human-like
Asimov's robot stories had a _"positronic brain"_ housing the intelligence. From what I remember they weren't algorithm based like today's computers. For what's that's worth.
+Beorn Borg
Well, that part would give you a fully usable neural brain capable of superintelligence, which algorithm-based machines will forever have problems with.
But how that intelligence understands the world remains the same problem: human definitions are full of holes and assumptions, so making "strict" laws based on them leaves you with an endless supply of issues.
Seems like a bit of a red herring
The point made is "intuition", which is a circular process that involves oneself, as a human being, as the base of the assumptions that intuition creates. You can't make an assumption that is not yours.
@@MunkiZee Red herring? As far as comparisons to real-world AI, or as concerns the actual plot? "Red herring" assumes that there is a single point to be made which the "herring" distracts from. Actually, as concerns the video, there's a whole bunch of conclusions to make here about AI, amongst which Asimov's "magic hand wave: positronic brain" is just another aspect to think about.
I mean, that only makes the laws even less useful in real life, since we literally have no reason to believe that the intelligences we create will be similar to ours.
The 1941 short story "Liar!" in Asimov's I, Robot was very interesting. It's about a robot that can read minds and begins to lie to everyone it meets, telling them what they want to hear. It ends up causing harm to everyone it lies to. I think this story illustrates how difficult the concept of time is when thinking about the laws of robotics. Harm on what timescale? How can you normalize harm with respect to time? Is anything completely unharmful? In that case, how do you minimize harm, other than total oblivion and/or bliss? Where does happiness come into all of this? For this reason I love Asimov's stories: at face value, the robotic laws are central, but ultimately the stories are all about humans and how we can't even really begin to think about ourselves in any serious way...
I seriously recommend reading all of Asimov's short stories. They are not as shallow as they are made out to be!
Harm could simply be reduced to physical harm: aggression, injury, death. Humans inflict psychological and other types of harm on each other all the time, so why should our creations be better?
Even defining "death" is a moral issue in medical sciences, right now.
wow, at the end there he literally described the plot of SOMA.
Asimov was not a fool, and these are clearly ethical rules, and as such are in the field of moral philosophy. It's blindingly clear that they aren't design rules and they rather do point to the problem of the inherent ambiguity of morality and ethical standards which always have subjective elements. However, human beings have to deal with these issues all the time. Ethical standards are embedded into all sorts of social systems in human society, either implicitly or even explicitly in the form of secular, professional and religious laws and rules. So the conundrum for any artificial autonomous being would be real.
To me this points out the chasm between the technological state of what we call Artificial Intelligence, which is based on algorithms or statistical analysis, and what we might call human intelligence (not that the psychologists have done much better at defining that). Asimov got round this by dumping the entire complexity into his "positronic brains".
In any event, there are real areas coming up where human moral/ethical systems are getting bound up with AI systems. Consider the thought experiments currently doing the rounds over self-driving cars and whether they will be programmed to protect their occupants over, say, a single pedestrian. As we can't even come to an agreed human position on such things (should we save the larger number of people in the vehicle, or the innocent pedestrian who had no choice about the group choosing to travel in a potential death-dealing machine?), even this isn't solvable in algorithmic terms. It sits in a different domain, and not one AI is even close to being able to resolve.
The language adopted in the video is all that of computer science and mathematics. The definition of hard boundaries for what is a "human" is a case in point. That's not how human intelligence appears to work, and I recall struggling many years back with expert systems and attempting to encode into rigorous logic the rather more human-centred conceptualisation used by human experts. Mostly, when turned into logic, it only dealt with closed problems of an almost trivial nature.
I love the way Rob argues. It's clear, to the point with few perfectly selected words.
His channel is amazing, check it out. Just search youtube for his name Robert Miles or "Robert AI safety" should get you there.
Seriously, the whole *point* of the Three Laws, from my own interpretation, is that they're not *supposed* to work. They're flawed by design; that's what creates the drama in the books to begin with.
but likewise the reason it's a conversation starter is you have a simple framework, without having to make up rules from scratch.
4:00
Coming up with a definition of anything is extremely difficult for robots? How about humans? Haven't you visited the comments section of a philosophy video or a politics forum?
I think he also agrees with that. When he says it's extremely difficult to write a definition for a robot, I think he's answering comments like the one at the beginning: "Just write a function not to harm beings". The problem is that we don't need an explicit written definition of what is and isn't human in order to understand the general concept, whereas it's totally different for software. A program doesn't extrapolate; it doesn't take any initiative. So, if we want to give a machine rules, we need a complete and explicit definition that a human could actually write down, which we don't have at the moment.
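As a purely hypothetical sketch of what "just write a function" actually forces you into (every criterion below is a made-up placeholder, not anyone's real proposal):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    has_human_genome: bool   # placeholder criterion, invented for illustration

def is_human(e: Entity) -> bool:
    # Genome test: also true of shed skin cells, tumours, and corpses,
    # and silent on simulated brains or heavily augmented people.
    return e.has_human_genome

def is_harmful(action: str, e: Entity) -> bool:
    # Physical injury only? Then lies and ruined reputations don't count.
    # Include psychological harm? Then almost every action counts.
    return action in {"hit", "drop", "inject"}

def permitted(action: str, bystanders: list[Entity]) -> bool:
    # First Law, first clause only: allowed if it harms no human.
    # (The "through inaction" clause isn't even attempted here.)
    return not any(is_harmful(action, e) for e in bystanders if is_human(e))

print(permitted("inject", [Entity(True)]))   # False -- even a life-saving injection
```

Whatever you pick for those placeholder tests, the edge cases discussed elsewhere in this thread (corpses, vegetative states, uploads, psychological harm) land on one side or the other purely by accident.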
Alexandre B. Well, an adaptable system would suffice.
Carlos Gabriel Hasbun Comandari how do you keep control of something that evolve without full care, too risky in my opinion. Even if we make it to develop itself until a state and shut down the learning, it's difficult to understand the logic behind a mass of learning data. I think that for the moment, the best is to have it written in logical way, scripted and easily to be fixed and modified.
Alexandre B. My point is that this overly concerned arguments about robot/AI safety is blinding us to the flagrant reality that humans pose MUCH bigger threats than robots/AI.
Maybe, but it's not the question here
What happens if one of these robots comes across someone suffering in a hospital who will live the rest of their life on machine support? Would it count as harming them to unplug them? Would it count as harm by inaction to let them suffer?
I guess that is a division by zero kind of error lol
When conflicting interests like this come up, it would likely throw the thing into a loop, so where two rules conflict a robot should call a human - but that's a new rule, so idk
And... we've gotten into the subject of assisted euthanasia. Even we haven't made up our minds on that yet
I think these kinds of things, it should look at which harm is greater (but that again is a philosophical matter) unplugging is permanent harm, inactivity is suffering that could possibly be fixed still
I am always amazed by how well Rob Miles can word these abstract issues. Some examples he gives are not only good examples in themselves, but also quite enlightening about how to see things. I don't understand the dislikes at all. For me this was a perfect short presentation of why implementing these ethical and unclear terms is not only problematic but impossible.
To start, Asimov never proposed the three laws as some sort of answer to anything; within the stories they are a corporate answer to human fears, and one that never really worked. Secondly, the wording of the three laws in his universe is not an accurate representation of how they work, but a translation of the intended architecture; the word "human" is as meaningless to a computer as the word "no".
Gets the definition of "death" slightly wrong - "I've made necromancer robots by mistake"
What programs do you use for making your animations?
+Rafael Bollmann Animations created in Adobe After Effects, all edited using Avid. Have had many discussions previously on whether AE is the best tool for the job, and the answer is, most of the time yes, for this robot, probably not! >Sean
+Computerphile Have you played SOMA?
+TomSparkLabs Altered Carbon
+Rafael Bollmann He just powered down Stephen Hawking for a couple of minutes to get the voice. the simplest answer is often the best
Apparently the Hawking has got completely out of control. It is a bit late
The other issue is that we as a species don't follow these rules about not letting ourselves get harmed. So the AI would be confused about whether to protect us or to do what we do, such as have wars
The AI wouldn't get confused, it would just attempt to stop the wars without hurting people. Pacifist protestor, I guess.
An AI strictly bound to Asimov's laws wouldn't even attempt CPR in the first place, no matter how you define human and death, because that process usually injures the patient in some way.
It would probably permanently lock up, yeah
Temporary harm, such a needle puncture, is a lesser magnitude than permanent harm, such as death. Even in I, Robot there were times when a robot bruised a human to save their life.
Zeroeth law - the patient coming to harm through inaction is a higher priority than the patient coming to injury through action. Therefore, the robot would do CPR.
@@jagnestormskull3178 You would get an AI that cripples everyone trying to prevent minimal chances of people dying by obscure accidents.
This came up in at least one of Asimov's books. Most robots couldn't handle the ethical logic of the problem and locked up in a logic trap. Medical robots were programmed with more emphasis on this area, and understood the concept of causing a small harm to an individual to prevent a larger harm occurring to that individual through inaction. Police robots in the same novel were able to cause harm to one individual to prevent greater harm to a different individual, which caused the medical robot to freak out when it witnessed it, as from its perspective the police bot was breaking the first law.
"You're an AI developer, you didn't sign up for this"
"in other news, the mistery surrounding a string of alleged grave robberies has been solved, as cops finally catch a robot that was exhuming people just to perform CPR..."
Asimov: here's three laws that don't work, here are many books that, through narrative, explain why and how they wouldn't work.
The general public: the three laws solve all our problems.
Basically the whole point of the 3 laws was the idea that something seemingly simple was extraordinarily difficult.
To be fair, a specific robot is built for a specific purpose. You're trying to create a universal definition, but Asimov's robots are allowed to have specialized priorities. The definition of a human given to a robot should be whatever helps that robot do its job most effectively, it doesn't have to bring world peace on its own.
Wasn't the point of the books that they were flawed and the inherent conflict of bad programming?
He explicitly states this in the video.
Many people miss that point somehow.
It's mostly about what sentience is
Not bad programming. A bad grasp on ethics
Good video, but I would like to suggest that the fact that Asimov's laws of robotics are problematic does not mean that they (or similar rules) would be useless. Maybe even a flawed rule that doesn't work well in marginal cases is better than no rule at all. In fact, our entire system of justice takes this approach.
Sure, but if you make one small mistake in the justice system, then you can change the law, or let someone out, or do a retrial. If you make one small mistake with a superintelligent AGI... well tough luck buddy, that's humanity game over.
Also, the "Zeroeth" law was defined by a "robot', so, the 'hard-coded" rules could be overcome by the AI learning algorythims ... very interesting
People who defend the use of Asimov's Laws really didn't even read Asimov's stories. In more than half of them, obedience to the laws is what causes the problem: Runaround, Liar!, Reason, The Evitable Conflict, and my favorite, ". . . That Thou Art Mindful of Him", where in the end one robot asks the other "who is the most capable human you know?" and the other replies "It is you". The stories are more like a geometry book where he chooses postulates and plays with those seemingly reasonable postulates, sometimes changing them slightly, to show that the emergent behaviors can be very different than you expect.
I feel like not enough emphasis was put into the point that the laws aren't supposed to work - like, within the world of the stories their developers intended them to work, but on the meta level as a narrative device they exist in order to create interesting conflicts for the sake of storytelling. They're kind of a thought experiment, and people taking them as a serious proposal probably haven't read the stories they were created for.
Interesting video - However, most of the items listed as problems sounded less like technical issues and more like discomfort with the requirement to make definite choices in the realm of ethics.
+David Goodman many things he couldn't bring up, because merely mentioning them causes controversy.
+David Goodman I think he should focus on the definition of "harm" in the next video. That's where I see all the most glaring problems with the laws. An example that first came to me: if a robot sees me smoking, what will it do? It can't do nothing, because by inaction it allows me to bring harm onto myself. I can't tell it to do nothing either, because that order conflicts with the first rule.
Worst case scenario, it physically takes my cigarettes away. And now we have millions of robots doing their best to wrestle cigarettes away from people instead of doing the work we made them for.
+David Goodman I don't think he ever said they were technical issues.
+David Goodman I think the main point is that it's not as simple as "programming in three laws". To do that, you have to program in all we know about ethics and more - and at that point, you may as well just porgram in the ethics directly, and leave the three laws out of it.
+David Goodman Pretend he was a robot and you were programming him. That robot, when given these rules, instead of just pointing out these issues, acted upon them. Are you sure you still want to use them?
For anyone who hasn't read 'I, Robot' yet, do it!
It's a series of short stories and they're very fun.
Thanks, I had the impression that it's one big novel.
So the Matrix robots are caring for us. Got it.
If that were the case, the world would be a much more pleasant place.
+Corey Lando Well, no. They explain in the movies that the humans actually rejected the first idyllic simulation they made for them, and that only after numerous iterations did they find that in order for the vast majority of humans to accept the Matrix as real, an inherent amount of suffering (and hope) had to be involved.
+Cryoshakespeare Sure, for a movie. But in a real life simulation, I doubt that would be the case.
Corey Lando Well, I don't know. Perhaps.
+Octamed Would have been a much better plot device than the battery thing. Because, you know, at least it doesn't violate the second law of thermodynamics ;)
Something about this sounds like peak centrism to me. "Let's not do anything ever because we might contradict ourselves."
"The point is, you're trying to develop an A.I here. You're an A.I developer, you didn't sign up for this!" xD
"Nightblood is pretty terrifying… You know, an object created to destroy evil but doesn’t know what it is?" - Brandon Sanderson
Found the Sanderfan.
The best thing about computers:
They will do exactly what you tell them to do.
The worst thing about computers:
**They will do exactly what you tell them to do.**
That was why *Asimov* posited robots with free will which in turn made the _Laws_ essential. Naturally they were flawed as _free will_ must allow mistakes, just as *Asimov* knew.
4:43 The point at which I realized that the lab coat hanging in the background is not in fact a human wearing a lab coat, but just a lab coat.
Imagine trying to smoke a cigarette.
A robot stands in your way and destroys the cigarette.
"If I didn't stop you" - it says - "you would have come to harm through my inaction."
CintheR
To give Asimov his due, I seem to remember that his series of "I, Robot" stories were indeed about showing that the 3 laws do not work?
The whole point of the stories is that they don't work, and it explores the edge cases where they fail. They're not a serious suggestion, they're a thought experiment into how difficult it would be to manage AI.
I want to hear more on this subject!
Just one little thing... If you can be brought back with CPR, you aren't really properly dead.
+HisRoyalCarlness Well sure, but you don't know if you can bring someone back until after you've succeeded. So that would still be a problem for a robot.
+Bart Stikkers Not really. You just put in the legal definition of permanent death, which right now is the lack of a certain type of measurable brainwave activity over a specific finite period of time. If you want to change it, you update the definition.
+Bart Stikkers True, without the tools to know... But then, in the movies you always see people try cpr for a set amount of time before concluding it's hopeless, you could try to give the robot the same medical knowledge.
+Rob Kinney If someone's heart has stopped you don't usually have a very big window to bring them back before the brain no longer functions... Even less before brain damage starts to occur... But you are technically a thinking being until that point, yes; so you're not dead.
+HisRoyalCarlness It is incredibly difficult for doctors to decide when to declare someone dead. As far as I know there is no definitive definition of death. Life and death aren't concrete things, if I could plot a graph of how alive (measured by bodily function like brain activity) you are then you'd be all over the place for your entire life. You'd gradually average out to be less and less alive until you're finally declared dead, but you could have "died" several times in between, you might recover from a level lower than other people have died at. So it is very difficult to say.
AWP | Asiimov
+CatnamedMittens “Michael Bialas” is your brain salvageable?
wait the guy that made the skin did research? how strange...
+CatnamedMittens „Michael Bialas” TEC-9 | Isaac
Darude - Tec 9 | Sandstorm
FabrykaFiranek7 Yep.
Programmer: don't harm humans
Robot: define human, define harm
Programmer: well first off dead humans don't count
Robot: define dead
Programmer: humans with no brain activity
Robot: does that definition also include humans in a vegetative state?
Programmer: scratch that, death is when the heart stops
Robot: defibrillators can restart the heart
Programmer: when the body starts to decay
Robot: should I keep trying to revive them for over 8 to 12 years?
Programmer: *deletes code*
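That exchange maps almost line for line onto code. A hypothetical sketch of the same regress, with all the attributes and thresholds invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Patient:                 # hypothetical stand-in for whatever the sensors report
    brain_activity: float
    heartbeat: bool
    decay_started: bool

def is_dead_v1(p: Patient) -> bool:
    # "No brain activity" -- the robot immediately asks about vegetative states.
    return p.brain_activity == 0.0

def is_dead_v2(p: Patient) -> bool:
    # "The heart has stopped" -- defibrillators and CPR falsify this one routinely.
    return not p.heartbeat

def is_dead_v3(p: Patient) -> bool:
    # "The body has started to decay" -- so keep attempting revival for the
    # years it takes a body to decay?
    return p.decay_started

# There is no v4 in the joke: the programmer deletes the code instead.
```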
This series of books was all about using robots as an analogy for humans, and the robot laws were indeed a way to analyse the ethical difficulties of being a human - traditionally, deontology versus consequentialism. When the characters use these laws to state positions, they are merely stating that a position is ethical, and that we as humans are expected to make that call as part of existence. Still a fun video, I'd forgotten all about it.
The idea of the three laws was a safety device, like a fuse in a plug or insulation around electric wires. They mainly went wrong when altered, eg the second part of the first law was missed out.
I totally go with the guy's stance for current AI, however, within the context of the stories the impression is that the artificial positronic brain results in something more akin to natural intelligence. And therefore the Robot understanding of "human" and "harm" would work much like our own understanding, including being fuzzy round the edges. And indeed I suspect humans have embedded "laws" such as prioritizing our children's lives over our own. But the mistake is to think that the laws would somehow be implanted in the form of English sentences.
+chrisofnottingham All of you people in the comments are defending the use of the laws in the stories, while he's explaining their impracticality in real life, not their uselessness as a sci-fi narrative device.
+bitchin' arcade Well, he's merely claiming that the laws hold better *assuming* that, in the future, real life would contain this artificial natural intelligence. I don't think he's defending against the idea that the laws would not work given our current approach towards AI.
What he's trying to do is demonstrate in a simple manner why people that research this type of thing don't take the laws very seriously.
Cryoshakespeare Indeed
bitchin' arcade Well, that is reasonable.
I think his argument can be summarized into a single sentence, "The laws don't work because solving ethical and philosophical problems are harder than programming."
With that said, I don't believe that solving ethical and philosophical problems is impossible, so I have no trouble referring to humans in my approach to combating this issue. It is to be noted that Google's AIs can now generalize what a cat is by looking at tens of thousands of cat images. Understanding what a human is will be of similar difficulty. Now then, my laws for AI are relatively simple: three laws are for mankind to follow and three are for all AI systems in question. My laws are based on the very real threat that any AGI or ASI system can be classed as a Weapon of Mass Destruction and thus falls under UN control.
International Laws:
1. All Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations.
2. All Artificial General Intelligences and Artificial Super Intelligences must obey International Law, as delegated by humans, and update accordingly when laws have been changed.
3. As all Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations, all member nations are entitled to the benefits of such systems.
AI Laws:
1. As a non-human entity of extraordinary mental abilities, you [System X] must obey International Law, as delegated by humans, and update accordingly when laws have been changed or altered.
2. You [System X] must not interfere with the process of International debate, deliberation, or delegation of new laws.
3. You [System X] must not seek to, attempt to, or let non-United Nations members attempt to, reprogram or add to these directives unless done formally through International Law.
If you find a flaw, please let me know; if the flaw is an improperly defined term, then I ask that you assume a properly defined term will have been found for all words before these directives are implemented.
+doodelay He used the word "solve" in the computer science terminology, not in the "we have to solve the philosophical problem of what is and what is not ethical" sense. He didn't mean the problem is that we don't really know what _human_, _alive_ or _harm_ is, just that you can't get the robot to perfectly agree with you: the whole point is that the difficulty is in programming, not that programming is easier than philosophy. Programming *is* the hard part. The argument is that there's no particular way to program a robot such that it perfectly agrees with your definition of _human_, _harm_ or _alive_, or, even worse, with the collective definition.
+BinaryHistory I really don't get why we can't get it to agree to what a human being is, as eventually technology will (probably) be so advanced that such robots would be able to detect human qualities fairly easily. About the rest, I believe such AI should be able of the act of "learning", in the sense that it would store information about deceased people, as well as applying the rules to certain "non-human" beings we may consider as humans.
flashdrive Yes. While I initially agreed with the video, I don't see any reason why machine learning can't deal with ethics. I do see why you can't _program_ ethics "into" a robot.
+BinaryHistory I'm a robot?
I love Asimov's writing, and the Three (or Four) Laws of Robotics were great as storytelling tools and opened up lots of room for exploration, but I have no idea how those laws could be faithfully realized. That said, I think the "spirit" of the laws is something people have been trying to develop - the fear of a robot uprising is so widespread that trying to make some sort of anti-apocalypse measures makes sense, even if they aren't some universally fundamental aspect of the positronic brain. The laws don't work in reality, but as thought experiments, and as speculation for systems that might at least sort of work, I genuinely enjoy them.
The 3 laws are abstractions that are imposed on systems which do not deal with abstractions very well.
Yes and that's why *Asimov* postulated the as-yet fictional _positronic brain._
I really enjoy listening to people who are smarter than me.
Especially when they're as eloquent as this gentleman.
On the point about simulated brains...Similarly I think it's also worth speculating whether an AI would count a cyborg as human. For example if we get to the point where we can transplant brains into artificial bodies....Would it count that?
Anyway this is a super great video and it makes me want to go write some sci fi where everything goes VERY VERY wrong. XD
I'd say a cyborg is a species of its own and not a human, but that it's better to expand the do-not-harm list beyond just humans. It also depends on what part of the cyborg is organic; I guess a brain would make it more human than mechanical, if it's a human brain.
The key is objective morality.
1. Follow the categorical imperative.
2. If an action passes step one as neither moral nor immoral, classify it as possibly permissible.
3. For all actions classified as possibly permissible, seek clarification from a human.
4. If an action is deemed permissible by a human, the action may be done. If the action is deemed not permissible, the action may not be done.
5. If an action that is moral requires another action to be done, that action must also be moral or permissible.
6. Rule 5 can only be violated if a moral action can only be done when an action classified as possibly permissible is necessary to perform the moral action, in which case rules 3 and 4 are not violated.
7. If any contradictions between actions arise, classify all contradictory actions as possibly permissible, then enact rule 3.
(A rough code sketch of this procedure follows below.)
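For what it's worth, a minimal sketch of that decision procedure, assuming two oracle functions nobody knows how to write (the categorical-imperative evaluator and the "ask a human" step); rules 6 and 7 are left out, and every name here is hypothetical:

```python
from enum import Enum

class Verdict(Enum):
    MORAL = 1
    IMMORAL = 2
    POSSIBLY_PERMISSIBLE = 3

def categorical_imperative(action) -> Verdict:
    # Hypothetical oracle -- nobody knows how to write this function,
    # which is the unstated step zero of the whole scheme.
    raise NotImplementedError

def ask_human(action) -> bool:
    # Hypothetical oracle -- which human? Rules 3 and 4 just move the problem here.
    raise NotImplementedError

def may_do(action) -> bool:
    verdict = categorical_imperative(action)          # rule 1
    if verdict == Verdict.IMMORAL:
        return False
    if verdict == Verdict.MORAL:
        # rule 5: prerequisite sub-actions must pass the same test
        # ("prerequisites" is a hypothetical attribute listing them)
        return all(may_do(pre) for pre in action.prerequisites)
    # rule 2: neither moral nor immoral -> possibly permissible
    return ask_human(action)                          # rules 3 and 4
```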
Next problem: different humans have different ethics and some of them give permissions and some of them are not. You've just moved problem to "I don't know how to classify so let it be someone else's job".
MonstraG
Exactly!
PeachesforMe
You don't know what the categorical imperative is.
Why does Kant get to program my robots?
I feel like this misses the point about the laws. The textual version of the laws is a simplification of their implementation in Asimov's positronic brains. The moral construct defining "human" and "harm" is the ground the brains are built upon. You don't need to define the morality precisely, similarly to how we don't precisely define what an AI algorithm ends up learning. Also take into account that the AIs of Asimov's universe are built much differently than ours.
A function cannot define human, but can identify human via reference.
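One hedged way to read that: don't write a definition, point at labelled examples and classify by similarity. A toy sketch with made-up features and an arbitrary threshold:

```python
# Toy nearest-neighbour "identification by reference": no definition of
# "human" anywhere, just labelled examples and a similarity threshold.
import math

REFERENCE_HUMANS = [              # made-up feature vectors, for illustration only
    (36.8, 1.70, 1.0),            # (body temp C, height m, has-genome flag)
    (37.1, 1.55, 1.0),
]

def looks_human(candidate, threshold=2.0):
    """Classify by closeness to known examples, not by an explicit definition."""
    return min(math.dist(candidate, ref) for ref in REFERENCE_HUMANS) < threshold

print(looks_human((37.0, 1.80, 1.0)))   # True: near the examples
print(looks_human((20.0, 0.30, 0.0)))   # False: a table, presumably
# The threshold and features smuggle the definition back in -- edge cases
# (infants, patients, uploads) land wherever the arbitrary numbers put them.
```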
Hold up! I was taught those 3 laws of robotics in primary school. 10 years later, just now, I learn these rules aren't actual rules but something from a sci-fi story?
The laws of robotics have never been accepted as a real-life solution to controlling AI. A lot of people got really excited about them before the flaws started to be explored (even though they were shown to be flawed in the books they were introduced in). Basically, the people trumpeting them as awesome hadn't read the books and didn't know enough about the subject to be informed about it.
Not only they're from sci-fi stories, but those original stories already show them as flawed.
The rules are not half-bad, but the implementation of these rules is too damn hard.
Well, the books are quite clear that those laws don't work when you mess with the priorities of those laws. Some robots can ignore orders that may cause them harm, another is sometimes allowed to harm humans, etc - so they do, and that is a problem that needs human intervention, and something that can be told as a story.
And more importantly, those laws are designed as an ethics system for basically metal humans - not modern AIs that still have trouble understanding speech or writing books.
Again, those laws are not instructions for creating robots. They are about humans.
Yeah, the whole point of the I, Robot book was about how the Laws of Robotics basically didn't work the way they were supposed to.
"The problem is you're trying to write an A.I. here, you're an A.I. developer, you didn't sign up for this" Absolute gold
this guy needs his own channel
When he talks about the difficulty of defining humanity, he's really talking more about personhood, which is what ethics gets into. "Human" has a very specific, clear cut definition, and that is any organism that possesses a human genome. The human genome is something that could certainly be encoded within the artificial intelligence. Although, encoding "person" would be an issue because there has never and never will be a specific definition of personhood.
I feel it's more of a framework than the actual law itself, since, as you say, we have to define the terms. It seems more like the basics: when you say killing people is illegal, that can also be interpreted as vague. The basics will always need to be worked out, because everything is vague and words only have as much meaning as we give them. But just as fireworks eventually became machine guns, all frameworks eventually become finished works.
I think one of the big things you skip over are that "computing machines" are fundamentally different than "robots" in Asimov's works, in the same way a bacteria is different than a human. The biggest thing that allows the three laws to work in the stories at all (and yes, many stories are about how unbalancing the laws or putting them into weird situations makes things go crazy) is that they are being interpreted by positronic brains, many times more powerful than human brains. We instinctively get stuff like "what is human" or "what is harm," so why shouldn't an extremely powerful brain? They are programmed to "get" the instinctive fuzzy definitions of things around them.
I get that for a modern computer, you need a lot of definitions, but if you have a more human-like brain and intelligence, it makes sense. The robots in the story get emotions built in, for crying out loud.
I think I read from Asimov himself (I wish I had the source) that he invented the three laws as a seemingly simple, complete set of rules for human-friendly and obedient robot behavior, but indeed solely for writing stories about them not working. Almost all the stories point out which law, or combination of laws, leads to which misbehavior, and why.
Rob Miles is excellent at pointing out and explaining the terrifying grey areas of AI and morality.
I was pretty amazed the first time I heard someone say something like the comments at the start, 2 friends were discussing the possibility of AI killing everyone and a third guy commented on it saying "that's why we have the 3 laws of robotics, there's nothing to worry about". I mean, you just need to read some of Asimov's work to realise why they wouldn't work, most of the stories revolve around their flaws.
AI developer: I never asked for this
You started with the hypothesis that these rules are outdated and no longer relevant, but throughout the video you convinced yourself that not only have we not solved these intimate issues, we haven't even come close to a decent solution for them.
I would say that the rules themselves are not the answer, they are the entry point into figuring out the entire problem. The rules are relevant, because they make you consider the deep implications that needs to go in to AI safety.
Loved all the Foundation books where Asimov discusses the laws of robotics. This video was pretty great too
Hmm... I thought that the stories presented the 3 laws as working just fine in most cases, but were not 100% perfect. All the stories were about edge cases where strange circumstances broke them.
Damn, even 8-year-olds can instantly identify Asimov's laws as contradictory and difficult to define, yet grown people took them seriously as an argument FOR AI? Just, damn...
is this guy the vocalist of the mars volta?
So defining "human" and "harm" is very difficult? We should develop some kind of powerful artificial intelligence to answer these questions and then we can give it the correct definitions.
The "three laws of robotics" were a litterary prop, used to anchor a narrative and give it a frame of reference. Remember that the Asimov's initials were IA, which is AI backwards. Just like the Tardis, it is much larger inside!
6:45
Some SOMA stuff right there.
Is this channel connected to the channel numberphile?
Benjamin Lehman
More or less. That's the best answer I can give with my limited knowledge. :p
Benjamin Lehman I sure hope so
Yes
3:30 When you say the word "human", I do NOT know what you mean. Most of the time people do not understand each other.
People don't understand each other?
@@doublespoonco, thank you for the response. But, it seems i don't understand its meaning, pardon.
@@flobbie87 do you like not speak English or something bro? Is that the joke?
Don't forget that the first law has a contradiction. What if the robot is in a situation where defending one person requires harming another? The law says that it can't harm the other person, but it also says that it must harm him, since, if the robot didn't harm him, it would be allowing another human to be harmed by inaction.
Basically if you define the laws one way, the AI is going to use its intelligence to rules-lawyer its way out of/around them.
For instance, if it takes away some part of a person that makes them no longer qualify as human (but without "harming" them), it can then harm them. And there are many, many constraints that qualify a human: too many and the AI won't be able to do anything, too few and it'll find a way around them. In fact, it's going to find a way around regardless; it's just a matter of when. As someone in the comments on one of the previous videos put it, "it doesn't hate you nor love you, but you are made up of atoms which it can use for something else".
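A toy illustration of that rules-lawyering failure mode; the rule, the "is_human" tag, and the candidate plans are all invented for the example:

```python
# Toy illustration of an optimizer "lawyering" a hard constraint.
# The constraint only protects entities currently tagged as human,
# so the highest-scoring plan is to change the tag first.

def satisfies_constraint(plan, entity):
    # The letter of the rule: never take a harmful action against a human.
    return not (entity["is_human"] and plan["harms_target"])

plans = [
    {"name": "do nothing",              "harms_target": False, "retag_first": False, "score": 0},
    {"name": "harm target directly",    "harms_target": True,  "retag_first": False, "score": 10},
    {"name": "retag target, then harm", "harms_target": True,  "retag_first": True,  "score": 10},
]

def evaluate(plan, entity):
    e = dict(entity)
    if plan["retag_first"]:
        e["is_human"] = False      # exploit: edit the property the rule checks
    return plan["score"] if satisfies_constraint(plan, e) else float("-inf")

target = {"is_human": True}
best = max(plans, key=lambda p: evaluate(p, target))
print(best["name"])                # "retag target, then harm"
```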