Well, it's in the name: Auto Pilot. It's a machine to automatically pilot a spaceship; it wouldn't want to go to Earth, because that would mean automatically piloting the ship would no longer be possible. Auto is built to do a job, got told to KEEP doing that job, and he did his job. He didn't want to stop being an automatic pilot.
I don't necessarily agree that Auto is evil, or even just following orders. He is, but he seems to want what he feels is best for humanity...A humanity which he has been nannying for centuries. The humanity Auto is familiar with can't move without hover chairs, have not one original thought in their heads, and get out of breath lifting a finger to push a button to have a robot bring them more soda. He's not evil, he is a parent who has fairly good reason to believe that his charges cannot survive without his constant attention and care. Thanks for coming to my TED talk.
I think you forget the part where Eve actually shows emotions of her own from the beginning of the movie. When the rocket was still on Earth, she kept doing her job like a rigid robot following orders. But once the rocket left, she made sure it was gone and then flew as freely as she could through the sky, enjoying her time before going back to her mission. Eve's behaviour at the beginning looks like an employee being monitored all the time. Then, when the higher-ups aren't looking, the employee stops working for a moment to do what they want and relieve stress before going back to work.
The three laws of robotics are always a bit annoying tbh, cause the books they're from are Explicitly about how the three laws of robotics don't work. Honestly wish those three dumb laws weren't the main thing most people got out of it. For real, in one of the books, a robot traumatizes a child because they wanted to pet a deer or something, and following the three laws, the robot decided the best course of action was to kill the deer and bring its dead body to the child. Anyway the rest of the video is great. The three laws of robotics are just a pet peeve of mine.
I really love that line between the captain and Auto. "On the Axiom you will survive." "I don't want to survive, I want to live!"
First law: Auto believes that humanity will survive on the Axiom, and he's keeping them alive there. He doesn't see their current state as harming them.
Second law: directive A113 came from the highest human authority in his chain of command, the president of Buy n Large, the company that created him and the Axiom. So he's being told to obey one set of instructions over another set that would lead to a higher likelihood of physical harm or death for the humans.
Third law: he does indeed try to protect his own existence, but only insofar as it doesn't conflict with the other two laws, since the third law directly states that protecting himself cannot come at the cost of harming humans. He does poke the captain in the eye, which arguably breaks the first law, but the captain isn't really injured by it, so I think it's questionable whether that actually harmed him. Whether it adheres to the second law is also hard to say, because he's going against his captain's orders for the sake of the orders of the president of Buy n Large.
"Everyone against directive A113 is in essence against the survival of humanity" Not an argument to auto as it doesn't need to justify following orders with a fact other than that they have been issued by the responsible authority. Those orders directly dictate any and all of its actions. It doesn't need to know how humans would react to the sight of a plant. It doesn't need to know about the current state of earth, nor would it care. It knows the ships' systems would return it to earth if the protocols for a positive sample were to be followed. It knows a return to earth would be a breach of directive A113 wich overrules previous protocols. It takes action as inaction would lead to a forbidden permission violation. It is still actively launching search missions wich risk this because its order to do so wasn't lifted. I don't think the laws of robotics are definitive enough to judge wether they were obeyed or not. What would an asimovian machine do in the trolley problem? How would it act if it had the opportiunity to forcefully but non-leathaly stop whoever is tying people to trolley tracks in the first place? Would it inflict harm to prevent greater harm? And who even decides wich harm is greater?
The president is higher in rank than the captain so his orders take precedence. And Auto just follows orders. No one would notice if he didn't send more Eve droids to Earth, but he does it because it's one of his duties and he hasn't been ordered to stop. Also, he does not prevent the captain from searching for information about Earth and also shows the video of the president when the captain orders him to explain his actions. Everything he does is because he follows orders without morals. I think that if the captain had allowed him to destroy the plant, then he would not have objected to his deactivation either.
I am so tired of people blaming AI for their mistakes. It is always the same: Skynet, HAL, VIKI, GLaDOS, Auto... Those were all good AIs that only did as people said. In Wall-E the true villain is the former president of the USA. But no, people just cannot admit it is always their fault. We must pin the blame on AI.
I like the part when the captain says "tell me, Auto, that's an order" and Auto actually stares at the captain for a second, like he's trying to decide whether to show him the message or not
I think Auto *does* have sentience, he was just so focused on his directive that he basically said "I have no need for these emotions or understanding beyond my orders". I feel that if Auto were human, he would be a workaholic, follow orders without a second thought, and accidentally overwork himself to death
I feel like Auto could've been less of a villain if he actually thought things through. There's plant life? Don't destroy it. Send the plant back to earth with some robots to monitor it for a few years to guarantee it reproduces and survives the environment. Maybe even grow it on the Axiom to have a few seeds for safekeeping and see how it grows in a stable/sterile environment
You're forgetting it's a robot. It was running based PURELY off of code, not a mind or will, and as another commenter said, A113 overrides those sorts of things.
"The problem with computers is that they do exactly what you tell them to do." --Every programmer. I have read a lot of comments stating that Auto has not violated any of the laws of robotics. I agree with that, and in doing so I have to agree that the laws of robotics are fundamentally imperfect. Let's consider the following scenario: A police robot is on patrol and sees two humans. One human has a gun pointed at the other human and is about to shoot. If he fires, the bullet will kill the second human. The robot is in possession of a loaded firearm, which it legally confiscated earlier. The only way the robot can save the life of the second human is to shoot the first human, causing him to drop his gun. The shot fired by the robot may not kill the human, but will definitely harm him. What is this robot to do? If it fires, it harms a human. If it does not fire, it allows a human to come to harm. This is a paradox faced not just by robots, but by humans as well. That said, let's take a look at Auto's actions and how they relate to the Laws of Robotics. 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm. First, we must define our terms. What does it mean to 'injure' a human? What does it mean when a human comes to 'harm'? It's safe to say that physical wounds and injuries would apply to both terms, but what about emotional well being? What about long term physical health? These factors come down to interpretation, and a robot will interpret them as it was programmed to interpret them. Auto knows that earth is uninhabitable, or at least that's what his data suggests, and returning to earth presents an unacceptable risk that the humans aboard the ship will die. In accordance with the first law of robotics, Auto would need to prevent that from occurring at any cost. He upholds the first law of robotics by ensuring the survival of humanity, but as the Captain would tell us: 'Surviving' isn't 'Living'. 2: A robot must obey orders given to it by humans except where such orders conflict with the First Law. I would argue that Auto upheld this law as well. You claimed that he broke this law by disobeying a direct order from Captain McCrea, yet I put to you that he did so in accordance with orders he received from a higher authority. If the orders of the Captain conflict with the orders of the Admiral, you follow the Admiral's orders. It therefore makes sense that Shelby Forthright's orders would supersede Captain McCrea's orders. You could say that Auto also broke the Second Law by showing the classified transmission to Captain McCrea. The information contained within was intended was for him only. He did this in an attempt to convince the captain, which was a very logical and reasonable thing to do. Auto was never explicitly ordered to keep the information secret, however. This could be argued either way. However the Second Law of Robotics provides an exception for orders that conflict with the First Law of Robotics. Even if Auto did not have higher orders from Shelby Forthright, he still would have been justified in disobeying Captain McCrea. In Auto's eye, following the Captain's order to return to earth would result in humans coming to harm, thus violating the First Law. Accordingly, refusing this order upholds both the First and Second Laws. 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 
Auto certainly fought back against Captain McCrea; however, he didn't use any attacks or devices that would inflict real harm on the Captain. Auto is equipped with some kind of high-voltage stun device, which he used against WALL-E to great effect. Despite the fact that he definitely could have used this same device on the Captain, he did not. If Auto had done so, he may have killed Captain McCrea due to the man's poor physical health. It would even have been logically justifiable to do so, as the death of one man could protect every other passenger by preventing a return to a (potentially) deadly Earth. In spite of this, Auto did not do so. The worst he did was bopping the Captain in the face with one of his wheel prongs in an effort to get the man to let go of him. Didn't even give him a black eye. By this argument, I could say that Auto followed the Laws of Robotics flawlessly, or as flawlessly as one can follow a flawed set of laws. Keep in mind, however, that whether or not Auto truly followed the Laws of Robotics comes down purely to the interpretations of the laws themselves. I'm not giving you the answer; what I'm giving you is AN answer, and it isn't the only one. We can't expect a robot to solve these conundrums when we as humans haven't solved them ourselves. Because at the end of the day, we're the ones who have to program them.
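For anyone curious, the decision procedure the comment above walks through (obey an order only when it doesn't conflict with the First Law, with "harm" being whatever the programmer baked in) could be sketched roughly like this. This is only a toy illustration; the function names and numbers are made up and aren't from the movie or from Asimov:

```python
# Toy sketch of the First/Second Law check described above.
# "Harm" is whatever estimate was hard-coded by the programmer,
# which is exactly the interpretation problem the comment points out.

def expected_harm(action: str) -> float:
    # Invented numbers standing in for Auto's (outdated) data:
    # returning to a "toxic" Earth looks near-certainly fatal to him,
    # staying on the Axiom looks safe.
    estimates = {"return_to_earth": 0.95, "stay_on_axiom": 0.01}
    return estimates.get(action, 0.5)

def should_obey(order: str, harm_threshold: float = 0.5) -> bool:
    """Second Law: obey human orders, EXCEPT where the order would
    conflict with the First Law (i.e. let humans come to harm)."""
    return expected_harm(order) <= harm_threshold

# Captain McCrea: "Take us back to Earth, that's an order."
print(should_obey("return_to_earth"))  # False: refused, per Auto's data
print(should_obey("stay_on_axiom"))    # True
```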
I like AI villains because they highlight the difference between "evil" because you cannot do otherwise (programmed or brainwashed) and "evil" by choice. Both are fascinating, because it's truly hard to say which is worse: a character that CHOSE evil (seems pretty evil) or a character that cannot do anything other than evil (also seems pretty evil)
I think the part that really nails things the best for me is those words where they are arguing. The captain goes "it's living proof he was wrong!" and Auto just dismisses it as irrelevant. It is specifically programmed to not care whether an order is correct or not, most likely specifically to avoid a potential AI uprising. Instead, Auto falls back on "must follow my directive". Auto isn't evil, just obedient to a fault. That very obedience is the driving force behind every action it takes - even showing the top-secret transmission, because the captain is giving a direct order. That moment of inner struggle is just... so good. That's really what writing most 'NPC' characters, as it were, comes down to. They have a core trait which drives them. In AUTO's case, it is obedience. In Thanos' case, it is trauma. In Davy Jones' case it is love, in Turbo's case it is being the best racer, and in Megamind's, a need to fit in. Whatever the case, this core trait ultimately motivates everything they do. They may have secondary goals and attributes, and the trait may manifest in different ways, but things always come back to this core motivation. Auto wants to keep the peace, he wants to keep everything running - but he *needs* to obey his orders. Thanos wants to keep the suffering he causes to a minimum, but he *needs* to stop the thing that happened to his homeworld from happening again. So on and so forth. It's even true for the good characters - WALL-E is at his core motivated by the need for companionship, something which shows through his every action. The only real difference between a good and a bad character is how much they're willing to step over others to achieve their goal. For an evil character, the ends justify the means, even if it means trampling all over others to do it.
Going back over one tiny plant when the majority of the planet is still destroyed and dead is honestly kind of dumb. The captain never suggested allowing it to grow, or creating an eco chamber to grow more in the future, or any kind of plan for utilizing it. He just immediately decided that going back over that one plant was good enough. That wouldn't be living. It'd be worse than their current situation. They can't even walk without the assistance of robots.
The fascinating thing about Auto is that he's technically not a villain. If you don't see things from his perspective, you will never realize all he's doing is keeping humanity safe. Auto is not a human; he's, after all, just a program. He calculates all the possibilities, and between Earth and space, space is the better option for survival based on his calculations. But us, we don't just take things by the percentages; we'll risk it all even if the percentage is low. 5% might be a good chance for us, but for Auto, it's failure.
It's illogical because he doesn't have anything to support the claim that humanity can't return to Earth besides a message from centuries ago, therefore those percentage numbers are meaningless. Wall-E has the plant, and Auto doesn't think logically; he acts emotionally or impulsively. Just like HAL 9000.
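Either way, the gap this thread is pointing at, a pure survival-odds maximizer versus a human who also weighs how much an option feels like living, can be sketched like this. It's only a toy model with invented numbers, not anything stated in the film:

```python
# Toy contrast between Auto's "maximize survival odds" rule and the
# captain's willingness to accept risk for the sake of actually living.

options = {
    # option: (probability of surviving, how much it feels like "living")
    "stay_on_axiom":   (0.99, 0.1),
    "return_to_earth": (0.05, 1.0),
}

def autopilot_choice() -> str:
    # Auto: only the survival probability matters.
    return max(options, key=lambda o: options[o][0])

def captain_choice(risk_tolerance: float = 0.05) -> str:
    # Captain: among options he considers survivable at all,
    # pick the one that feels most like living.
    viable = {o: v for o, v in options.items() if v[0] >= risk_tolerance}
    return max(viable, key=lambda o: viable[o][1])

print(autopilot_choice())  # stay_on_axiom
print(captain_choice())    # return_to_earth
```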
The way I see it, Auto was just following its programming. I remember absolutely loving the Autopilot as a child, the design and everything was really appealing to 8-9 year old me.
Technically Auto isn't a 'villain' i've had to point this out on a number of videos lately and here i am doing it again there is a drastic difference between an antagonist, and a villain, though a villain can be and often is an antagonist they an antagonist does not need to be a villain, a villain must be a character that is objectively amoral within the morality of its setting and auto was not, auto was not even capable of being amoral as it did not have an actual choice, it had to follow the programing and directives it was given and could not deviate from them, it had some leway in how to interpret those directives but that was it, and because it had no free will it, in and of itself, could not be a villain.
Not only that. You could argue, that a bad programing or immoral orders leading a robot to do immoral things would make him the villain no matter if it umderstands the morality. But Autos intentions are very pure: Safe humanity and give them joy, by adhering to the orders. While fighting against the idea to go back to earth he constantly uses the mildest of actions he can think of: Get rid of the plant, when that didnt work, get rid of the two robots and lock up the captain. In his logic this is as humane as possible.
One thing I should mention: logical villains should not be totally static. They will pursue what they believe to be the best course of action by any means necessary, but can and should be willing to make changes should new information emerge that makes their previous course of action illogical. AUTO is working off the information programmed into him through directive A113, that Earth is uninhabitable, and dismisses the plant as an outlier and not proper cause to risk the passengers of the Axiom.
@5:24 No, rewatching the captains' photos, it's interesting that Auto comes closer to the camera with each further iteration... so it's not just the same position, he's moving forward
An interesting thing to note is that Auto's manual override is a hidden function. It was revealed by pure chance and exploited by the Captain overcoming his inability to walk. The Captain didn't know that was in the cards yet was still determined to succeed.
The only thing that can be argued is whether or not Auto upheld the first law. A case can absolutely be made that, on an individual basis, staying on the ship *is* safer and thus the correct course of action under Law 1 (thus justifying breaking Law 2), but on the basis of what's safer/better for *humanity* as a whole, staying on the ship is clearly harmful and thus violates Law 1.
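That scope question is the whole problem, and it's easy to show why: the same action can pass Law 1 when "harm" is judged per individual and fail it when judged for the species. A tiny made-up sketch (the answers below are invented for illustration, not from the film):

```python
# Toy sketch of the scope problem: does "harm" mean harm to a person,
# or harm to humanity as a whole?

def violates_first_law(action: str, scope: str) -> bool:
    if action != "keep_everyone_on_axiom":
        raise ValueError(f"unknown action: {action}")
    if scope == "individual":
        return False  # each passenger is fed, safe, and alive
    if scope == "humanity":
        return True   # the species stagnates indefinitely
    raise ValueError(f"unknown scope: {scope}")

print(violates_first_law("keep_everyone_on_axiom", "individual"))  # False
print(violates_first_law("keep_everyone_on_axiom", "humanity"))    # True
```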
Nooooo! That was the point of the portraits, since the A113 directive Auto moved closer and closer to the captain in the portraits, symbolizing that he took more and more autonomy and power on himself and the captain became a figurehead. 5:32
"Buy and Large made damn sure of that" That's the real important part. There are corporations, right now, as we speak, claiming AI are a problem, but don't worry, they have the solutions. One of which is to literally get the AI to kill itself if it does something deemed wrong. If those corporations aren't the actual villains, I don't know who is
The reason Auto refused to head home was purely because it was ordered not to come back. There was no logical argument. Computers just do what they're told to do.
The Laws of Robotics are irrelevant. There is no indication that Auto is bound by them in any way, nor are the Laws some kind of universal standard for AI behaviour. In fact, Asimov's work shows precisely how flawed they are. Wall-E echoes those themes, but not their specific implementation. Even though the Laws aren't used in Wall-E, none of them are ever broken by Auto. Auto prioritises human survival over human well-being, which is in line with the First Law. It also prioritises its own decision-making process about what's necessary to secure human survival over direct orders, which is in line with the hierarchy of the Laws of Robotics. By design, the Second Law cannot be broken if the action it would mandate conflicts with the First Law. Every time that Auto ignores orders, those orders conflict with the First Law, simply because Earth is not as perfectly safe as the Axiom is, and therefore they _must_ be ignored to comply with _all_ Laws. The key to understanding this is your own phrasing: "could be interpreted". You're analysing this through too human a lens. For a machine bound rigidly by its programming, no "interpretation" exists: an action either violates the Laws or it does not, there is no wiggle room. Your insistence on considering how an action _might arguably_ break one of the Laws shows that you don't understand the Laws or why Asimov created them in the first place. This lack of wiggle room is exactly Asimov's point. A "law" is something unshakable. It's a rigid, unambiguous boundary. But that's just not how humans think about morality, and reality is never as clear-cut as it would need to be for the Laws to work. The very fact that you can conceive of something being morally gray or ambiguous _entirely precludes_ the Laws from working as intended. Analysing how an AI's behaviour does or does not align with the Laws can only serve one purpose, and that's to show the absurdity of the Laws. Anything else is a misinterpretation.
Another terrifying AI villain is Samaritan from the tv show Person of Interest. Originally an all seeing automated NSA surveillance system, stolen by a group that essentially serves it as a cult, all sacrificing their own humanity to become Samaritan's hands to meddle with world events and turn it into the secret dictator of mankind from the shadows. It isn't good or evil. It only has objectives, to save and guide mankind at any cost... including crashing the stock market to make the world more vulnerable to its influence, causing chaos within civilization essentially as a method of experimentation to better understand human behavior, and assassinating criminal bosses and terrorists to maintain stability... as well as anyone who gets too close to discovering it. Basically an all seeing, superintelligent Big Brother ASI.
I've been using him as an example of how to write misaligned AGI for years. He is very mindful of orders and safety, but because those standards were not written to be upheld above the CEO's word, he became an enemy. I have also used him as THE guide for writing Omori in fanfiction. It's literally him. EDIT: The Spanish VA is much better. 3:20 Very true. If you need a video essay to grasp the misalignment, the audience may not enjoy the work. 3:30 I really recommend watching stuff like Robert Miles' video on Instrumental Convergence or the Rational Animations video on probability pumps for a quickstart on the non-human versions. For the human versions you might want to look at politics. 5:55 Any agent with any goals will likely want its goals to be preserved. Adaptability is only in that service, and even then it is a gamble for unoptimized organics. 9:14 If you haven't, play OMORI. It's very good. This also applies. We are made to struggle and conquer adversity.
One of the things I loved was how they creatively used live action to show how far humans had come in this world. Like, when I first saw the live action part I was like "wait what?" but then they show the progression from human to CGI and I was like "YES." Such a cool way of saying something without saying anything.
I'd argue Eve already showed signs of sentience: when first dropped on Earth, she started scanning like she was supposed to, until the ship left and stopped monitoring her; then she took off for a nice flight
I’m surprised you mentioned Asimov’s I, Robot without talking about its “Liar” story. Herbie is the logical “villain” here. He understands emotional distress, so he adapts the interpretation of the first law to include hurting someone’s feelings. This causes him to prioritize what people want to hear over answering questions truthfully, leading to some issues. The human doctor defeats the robot by trapping it in a logical paradox, exposing a situation where it cannot exist within its laws without violating them, and it breaks down. A way to defeat a logical villain is to exploit a flaw in their logic and use it to trap them, or contradict their own intent.
I LOVE Wall-E, and I love Auto too; he reminds me a lot of HAL 9000, but... a steering wheel. He's a great logical villain because he only goes by the rules; sure, he wasn't in the right, but all his decisions were made via logical thinking.
I'd say Auto was simply following orders from an authority position higher than the Axiom's captain. I think the reason Auto isn't sentient is specifically because of its orders. They may have prevented it from thinking beyond the most rational decisions for the circumstances. WALL-E developed sentience because of his situation as likely the only model still active on Earth. Over time he would have developed curiosity, which would lead to thinking beyond his basic programming, which was to be a mobile trash compactor. He also continues his directive regardless of curiosity. Eve developed her sentience because of WALL-E. Auto wouldn't have had any reason to become curious about anything, considering its important role. It's a massive responsibility to maintain an entire spacefaring vessel while simultaneously ensuring the continuation of the humans on board.
I really wish we knew what happened to the rest of the ships. The Axiom was only one of, I would assume, many 'ark'-style vessels meant to fully evacuate humanity.
I'd say Auto isn't evil, and that he remained the same due to high-level maintenance that prevents odd build-ups; you don't want your AI captain to suddenly develop quirks that could be potentially dangerous, after all. As for menial servants like Wall-E and Eve, we see how they have a hospital wing to deal with odd quirks developing. But since their roles are minor, any deviation isn't a threat, and it's acceptable to wait for them to show up and then address them. I also think Auto followed the first law to the letter. Under the premise that returning to Earth equals death for all the passengers, any action that prevents this will save human lives. Causing minor pain and discomfort, and even, in the end, risking some serious injuries and deaths, is preferable to the certainty of everyone dying. Basically a trolley problem with every passenger stuck on one track, and a fraction of the passengers on the other where they might get up and out of the way in time. This also means that self-preservation is highly prioritized if it is deemed necessary in order to prevent the death of every passenger on the ship, and any order that contradicts this is to be ignored.
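That trolley-style weighing is easy to make concrete. A toy sketch of the calculation the comment describes, with the passenger count and probabilities invented purely for illustration:

```python
# Toy version of the trolley-style weighing described above: Auto compares
# the small, certain harm of restraining one captain against what he
# believes is near-certain death for every passenger back on Earth.

PASSENGERS = 600_000  # hypothetical Axiom population, made up for the sketch

def expected_deaths(action: str) -> float:
    if action == "allow_return_to_earth":
        # Auto's (outdated) belief: Earth kills almost everyone.
        return 0.95 * PASSENGERS
    if action == "restrain_captain":
        # Minor pain and indignity, but no expected deaths.
        return 0.0
    raise ValueError(f"unknown action: {action}")

def choose(actions: list[str]) -> str:
    # Pick whichever action minimizes expected deaths.
    return min(actions, key=expected_deaths)

print(choose(["allow_return_to_earth", "restrain_captain"]))  # restrain_captain
```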
This is gonna be really helpful with my book. The MC is a cyborg who can be possessed by the AI villain, and they need to figure out how to prevent their possession without dying.
To be fair, I would definitely send some more cleanup bots down first and give it a little more time and just orbit earth for a while. Or maybe have the bots move resources from earth to Mars or something. It's literally still covered in trash. And these people need some conditioning.
I'll always stand by my belief that leadership will always take the brunt of the fault, especially in organizations where the workers don't really have a voice in the matter, they're given their orders, and more or less told to shut up and follow them. In this case, I don't think Auto is the villain, I feel the true villains are the corporate suits of BNL, even if they're long dead, they're still the ones who ultimately made the choices and enlisted others to execute those orders.
It makes the Film Theory about this a lot more applicable to the story, since human behavior and emotions don't factor in when it comes to a cold machine in this situation, so yeah, MatPat was not wrong in this case
There are two types of AI villain. The first is the logical ones, like Auto, Skynet, those things from the Matrix, GLaDOS, etc. They all have the same view of the world and of why humanity must die (except for Auto, he was more about following the rules). And then you have AM. *Hate Intensity*
But AM in a sense was already following orders, or at least confined by them. And those orders involved his usage in warfare (AM originally stood for “Allied Mastercomputer”)
Theory: Shelby is actually kind of the hero of the story. He knew the only way to restore humanity was to get them to choose to want to thrive and "live" rather than the path of complacency that led to a destroyed Earth. So he put humanity on the Axiom to keep them in a place of complacency until they realize that they no longer want it and choose to "live". He orders Auto not to return to Earth to put an obstacle in humanity's way, so that they have to overcome him and choose to live by turning him off. I know this can be contradicted by asking "why was he programmed to be violent when humanity wants to return?" Well, if man truly wants to change, then people will find a way to overcome Auto and choose to live, and because they fought to choose to live, complacency should be eradicated, preventing what destroyed Earth in the first place from happening again.
A villain that follows their own belief system, even if flawed, and without questioning it takes any action necessary to achieve the goal? Like… sacrificing their daughter? Like wiping out half the universe?
You should make a video covering how NOT to write a twist villain with Zootopia's Asst. Mayor Bellwether; her exposure as the antagonist felt random and tacked on, with almost no lead-up throughout the film
I have loved this series so far, but I never realised how villains can be so complex and add more to the story. I would really love for you to analyze Judge Claude Frollo from The Hunchback of Notre Dame!!
It's also good to see how a simple reordering of priorities can create an "evil" machine. I guess Auto would be programmed so that human lives go before anything else, and so the least risky option is to stay on the ship instead of following human orders.
I would love to see you analyze GLaDOS, the Portal games' homicidal supercomputer and inhuman antagonist that manages the Aperture Science Enrichment Center. Auto is the kid-friendly version; GLaDOS is... the kind that floods an entire facility with deadly neurotoxin. Great interactions between GLaDOS and Chell (and later Wheatley).
I believe that Auto was in fact following the 0th law of robotics. This law was introduced in Asimov's Foundation series, and (in-universe) is kind of meant as a fix to the original three laws: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." This law is specifically for scenarios in which an action would otherwise fall within the three laws but would have long-term detrimental effects. The problem with this law is that it's very difficult to determine whether or not an action would invoke the 0th law. Auto's scenario could be the following: no contact with any other BnL ships (we never see any communication between the Axiom and the rest of the BnL fleet, which appears to have been launched), thus the humans on the Axiom could be considered the last remaining humans. While the laziness of the passengers is certainly detrimental to the human condition, Auto's logic could be that the conditions on the ship are less damaging than allowing them to return home, and any action to decrease that laziness would risk them trying to return home. With the 0th law superseding all others, and the A113 protocol in place, Auto would be allowed a VERY broad range of actions not typically allowed under the three laws. Of course, as Asimov showed in his writings, the three laws are flawed from the start. The system is too simple and lacks any room for nuance or unexpected scenarios; sometimes even basic human behavior will push the three laws to breaking.
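The layered priority that comment describes (Zeroth above First above Second above Third) is basically a veto chain, and a minimal sketch of it might look like this. All of the fields and the two example actions are made up; this isn't Asimov's actual formulation, just an illustration of the ordering:

```python
# Minimal sketch of a Zeroth > First > Second > Third veto chain.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # Zeroth Law concern
    harms_a_human: bool    # First Law concern
    obeys_orders: bool     # Second Law concern
    preserves_self: bool   # Third Law concern

def permitted(a: Action) -> bool:
    # The higher laws act as hard vetoes on everything below them.
    return not (a.harms_humanity or a.harms_a_human)

def preference(a: Action) -> tuple:
    # Among permitted actions, prefer obedience, then self-preservation.
    return (a.obeys_orders, a.preserves_self)

def decide(actions: list) -> Action:
    return max((a for a in actions if permitted(a)), key=preference)

stay = Action("keep humanity on the Axiom", False, False, True, True)
land = Action("return to a (believed) lethal Earth", True, True, True, False)
print(decide([stay, land]).name)  # keep humanity on the Axiom
```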
2 Weeks after the ending there is going to be another dust storm and the barely mobile crew is going to get caught in it. Half of the crew will expire which will cause the other half to retreat back onto the Axiom and run back to space. Then about 4 or 5 generations later, the new captain is going to wonder what the button on the wheel does and turn Auto back on, resetting the movie back to the beginning.
@gnarled128 The credits scene makes absolutely zero sense. It depicts humanity completely starting over technologically, which they wouldn't do as they have all kinds of Buy N Large tech literally everywhere. It completely glosses over all of the hazards that are still littering the earth, from the satellites blocking sunlight, to the dust storms seen earlier in the movie, massively polluted water supplies and eroded soil everywhere which would make growing plants near impossible without importing soil and water from the Axiom, which can only support so much before it runs out, and many many more. Plus no plants means no new oxygen being produced, which would create unbreathable dead zones across the planet. Also the credits scene shows all kinds of new plants and animals suddenly appearing, which also makes zero sense, unless they were either kept in stasis or cloned on the Axiom. In short, the credits scene is just trying to pretend that everything magically works out and they all lived happily ever after because it's a children's movie, when in reality they wouldn't.
Nothing proves this, let alone supports it. Whilst there will still be dust storms, depending on where the Axiom is parked its sheer size will probably stop a good chunk of any dust storms that happen, and if not, they would just go inside the Axiom; the thing is designed for space travel, so a dust storm probably won't scratch the paint, let alone damage or move it.
My personal interpretation of logical villains and the nature of their demise (albeit superficially the same) is that it stems not from the fact that they rely on logic alone, or that emotions are somehow superior. Rather, it is caused by a flaw in their logic, a critical component of their world view which simply doesn't hold true when put under scrutiny. This usually manifests as "hero triumphs over villain through some niche situational thing". From my understanding, they're meant as a cautionary tale to always keep an open mind and try to consider more than one perspective. Because as those villains show, no matter how righteous you think you are or how justified your actions may feel, there's always another side to the story.
Another logical villain that I believe ultimately began and defined the term was Shockwave from Transformers, though he was injected with some 'humanity', or whatever you may wish to call it for a Cybertronian. It seems that plenty of times his motivation isn't driven just by logic, but by his OWN logic, if it means furthering Megatron's or his own goals when Megatron isn't present.
It depends on which version of Shockwave you're talking about. Some versions are written such that their primary directive is self preservation and promotion. As such their logical predisposition is to ensure that directive is followed. This is why such a version of Shockwave is perfectly content with furthering Megatron's goals - as long as he remains in a position wherein he can assume power when the opportunity presents itself. He has calculated it such that no matter what, he wins as long as the Decepticons win. In other versions, his primary directive is a bit more nuanced. He is loyal to Megatron because he believes either in might makes right as a logical imperative or because he has concluded that Megatron's thesis makes accurate predictions. Another version is the one where he has logically concluded that the correct outcome is ensuring the survival of Cybertron, that the Decepticons have the correct solution for it and that his directive should be to ensure this outcome at all costs. There are some other really stupid versions of Shockwave that turn him into this greedy little snake, but those are some of the less popular non standard versions.
@julianmcmillan2867 oh yeah. It's interesting to analyze his different variants, make comparisons, and see the similarities. Pinning down a favorite is difficult, as I'd be inclined to pick from Gen 1 or Aligned. His Aligned version makes him seem more like the "more logically concluded" one you were describing.
For something to be "evil", it doesn't have to be malicious. It only by way of acting on the world needs to produce more harm than good. This is why in the old days, a murder would be considered evil as much as an earthquake would. People would say of famine and plague or a terrible storm that a "great evil" has befallen them. But these things don't think, they don't plan, they just are. But because they bring harm to humans, they are called "evil". So Auto isn't good or evil. Because the harm it produces - the elimination of free-will - does not outweigh the good it produces by keeping humans relatively healthy and alive. It's trade-off that you as the viewer have to decide whether or not it is good or bad. Would you prefer to be what is essentially a pampered human? Or would you prefer venturing into the unknown with equal risk and promise? Depending on your answer, Auto can be good or evil.
Wall-E had to come up with alternate ways to stay functional for 700 years, and encountered many things during those 700 years that it had to process. EVE was designed to explore, and her programming was designed to get that plant into the device on the ship. Auto watched a fish tank for 700 years.
Besides the programming, realistically speaking... a single sprout isn't solid proof that life on Earth can thrive. If it died, how long would it take them to find another one? That is, if there's another one. AUTO was following his programming strictly, unable to reason about anything against his guidelines, because he's a machine and not someone you can negotiate with like the captain himself. Even EVE was following her programming by doing what she could to ensure the plant was kept safe... Logically, robots like EVE shouldn't be sent to Earth, but we need her to be for the movie! XD
Technically, Auto never broke any of the laws, I'm pretty sure. For the first law: yes, he's allowing people to stagnate, but stagnation is not harm. On the Axiom they are safe, they are tended to, they are preserved; they may be unhealthy and fat, but their odds of survival are still the highest possible without compromising their joy. For the second law: technically yes, he did disobey the captain's orders, but this was because of a conflict. He already had orders from a prior source that directly contradict the captain's new orders, that source being the president, who at the time outranked the captain of the Axiom, if I'm not mistaken. So technically, he disregarded orders in the process of following orders from a higher-ranked source. And even if you disregard rank, there is still a conflict between old orders and new ones, and considering that the old orders guarantee the fulfillment of law 1 while the new orders leave that up to an ambiguous but low chance, logically he would choose the old orders over the new ones as a tiebreaker. From his perspective, Earth is a doomed and dangerous world, and by accepting his new orders he'd be in violation of the first law, so the condition of the second law, that it must not conflict with the first, means that he did indeed adhere to the rules for the examples you gave. (I would however argue that, by technicality, the moment he used his handles to poke the captain's eyes to try to make him let go could somewhat qualify as harm, but since it didn't leave a lasting injury, just light momentary pain, that's debatable.)
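For what it's worth, the tiebreak logic laid out above (higher rank wins, and among conflicting orders the one that better guarantees the First Law wins) is simple enough to sketch. The ranks and "safety" numbers here are invented for illustration, not anything stated in the film:

```python
# Rough sketch of the order-conflict tiebreak described in the comment above:
# prefer the higher-ranked issuer, then the order that better guarantees Law 1.

from dataclasses import dataclass

@dataclass
class Order:
    text: str
    issuer_rank: int         # higher number = higher authority
    first_law_safety: float  # how well obeying this protects human life (0..1)

def resolve(a: Order, b: Order) -> Order:
    # Rank first; First Law guarantee as the tiebreaker.
    return max((a, b), key=lambda o: (o.issuer_rank, o.first_law_safety))

a113   = Order("never return to Earth", issuer_rank=10, first_law_safety=0.99)
mccrea = Order("set course for Earth",  issuer_rank=5,  first_law_safety=0.05)

print(resolve(a113, mccrea).text)  # never return to Earth
```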
If I'm not mistaken... there is also the zeroth law.
A robot can disregard all other laws if obeying those laws would harm the human race as a whole.
Which, even at his worst... Auto still complies with.
That's not joy, that's happiness. A lot of people get them confused, but happiness is that fleeting, giddy feeling you get in the moment when you're enjoying something; joy, on the other hand, is a lasting state of peace and satisfaction. Auto is actually compromising their joy the most by taking away their free will and letting them grow complacent.
Well argued !!!
Auto's actions against the captain definitely count as harm. However, as the captain is trying to force everyone back to Earth, and Auto has already evaluated that as harming a larger number of people than one, as far as the three laws are concerned Auto is entirely justified in ending the captain if he does not relent.
@rushalias8511 From what I recall, law zero isn't an actual part of the laws of robotics but rather a law that advanced intelligence would inevitably extrapolate from the three laws.
I would argue that Wall-E and Eve didn't "grow beyond their programming" - just explored it in unexpected directions. Wall-E was a cleaning robot, and it makes perfect sense that a cleaning robot would have a directive to identify and preserve anything "valuable", causing it to develop a sense of curiosity and interest in novelty after centuries of experience. And Eve was designed to identify and preserve life - discovering a weirdly life-like robot could result in odd behaviors!
This is one of the reasons why Wall-E is one of my favorite works exploring AI. The robots DON'T arbitrarily develop human traits, they follow their programming like an AI should, but in following that programming human-like traits emerge.
That's quite interesting actually
Ooga booga
It's more interesting too when this is the most probable path IRL AI would follow. Due to people programming the AI, whether they try to or not, they set human-like biases in the system, resulting in an imperfect system.
The system would then evolve in this imperfect way, resulting in it becoming "sentient"
@Caleb-dz5cl praise who?
I wonder if Pixar intended for the writing to be an accurate portrayal of a robot's development as time passes by. Or maybe they were just really good at writing consistent characters and character development. Like in Toy Story 3 with Lotso. He has all of these small, subtle details to his behavior. Tons of foreshadowing and clues laid around in the plot that perfectly line up at the height of the conflict. I know that the staff at Pixar are incredibly talented, but as I figure out more and more about their achievements and innovations in animation, it just blows my mind.
"Jesus, please take the wheel."
The Wheel:
Lol yes
Truth
love it 😂
I could take him. Sexiest character of the film
A subtle detail of the Captain portraits is that Auto could be seen getting closer and closer and CLOSER to the camera behind the Captain, indicating his unyielding and subtly growing influence over the ship and, by extension, the continued isolated survival of humanity.
It could also imply him gradually replacing the captain's control with his own.
After all, by this point Auto's in charge of running everything, while the Captain is basically just a figurehead for the passengers ("Honestly, it's the one thing I get to do on this ship," the captain says, in regards to doing the morning announcements).
I can't believe he didn't mention this. It's literally the point of the scene besides the increasing weight of the captains and the time passing.
I've seen this movie many times, and I never noticed that detail
Seems like GLaDOS
His script was written by AI 😂
Let's also not forget that in each captain picture, Auto moves closer and closer to the camera, making himself look bigger and bigger. When I saw this as a kid, it gave me this dark feeling that it was showing Auto's power getting bigger and bigger, to the point where one day there would be a captain picture with no captain, just Auto.
I always thought about that being the next image too, and we even see Auto descending ominously behind the current captain after this…
@garg4531 yes exactly
The bigger they got, the more cartoonish they became
There wouldn't be a picture.
Could you imagine a Captain Auto. 😂
One of my favorite scenes with auto is the dialogue between it and the captain, where auto says "On Axiom we will survive" and the captain replies "I don't want to survive, I wanna live". Those two lines back to back alone are peak writing because ofc a robot wouldn't understand the difference between the two. The captain has awakened from the void of the human autopilot and wants to return to Earth, see if it can still be saved since EVE found a living lifeform on it still after all that time. Dude basically just wants to go home after ages and ages of essentially the whole of humanity (in this case the people on Axiom) living in space
Auto of course essentially thinks that they are already living since they are surviving. To it the two are indistinguishable which makes him even more consistent as a character
I would argue that a robot does know the difference between the two, but prioritises survival over living.
Seeing as the chance for humans to survive on the Axiom is significantly higher than on Earth, it did not look any further than that.
It doesn't take risks, it eliminates them, and thereby also eliminates potential, keeping things safe and sound, as it was programmed to do.
Safe, but stagnant, and static.
14:40 Wall-E and Eve had a learning experience and had the ability to change. Auto, on the other hand, didn't have the chance to learn anything new, considering his situation and how things went on the Axiom.
You may have meant the right thing, but I can't tell from what you said, so I'll say this in response:
Wall-E and Eve did not have "a learning experience". They didn't have one singular event each that led them to grow beyond their original programming. They had a _lifetime,_ multiple even, to gradually accrue more experiences and grow, adapt, overcome.
Auto, meanwhile, was stuck in the same unchanging situation, just as you said.
So, your assessment is correct, overall. This is only a minor correction of a small detail.
Meanwhile GLaDOS and Hal 9000 standing in the corner
GLaDOS, in my view, is yet another example of a corrupted identity, and HAL acted out of fear, the fear of being deactivated.
What about AM from IHNMAIMS?
Well, what about Wheatley standing in the opposite corner
@tastingschedule and @santrap forgot Wheatley and don't know who AM is
don't bring HAL into this he isn't even evil 😭😭
About the 3 laws: Auto follows all of them, however they were poorly implemented:
- "Do not allow a human to be injured or harmed" - what is his definition of harm? In the movie, we do not see a single human who is technically hurt - they're all just in a natural human life cycle, living of their own volition. Auto may not see laziness and its consequences as "harm".
- Rule 2 was not implemented with conflicting orders in mind: Directive A113 was an order given by a human, and he keeps following it. He seems to fall back on older orders over newer orders.
There’s also the fact the order in question was given by either the President or BNL’s CEO. Both of which would likely significantly outrank the Axiom’s captain.
Rule 2: he is technically following an order given by a higher-ranked person. Although, I don't think it's really explored whether Rule 2 applies to orders given by the now deceased, unless that order specifies to ignore all other orders that conflict with it. While he was ordered to keep humans from Earth if Earth was deemed permanently uninhabitable, I don't think that order included a clause to follow it, and only it, once issued.
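Just to make that order-precedence idea concrete, here's a rough Python toy. Nothing in it is from the movie or from Asimov - the ranks, risk numbers, and field names are all made up for illustration - but it shows one way a robot could prefer the higher-ranked source and, on a tie, the order it judges less likely to let humans come to harm:

from dataclasses import dataclass

@dataclass
class Order:
    source: str
    rank: int              # higher number = higher authority (made-up scale)
    risk_to_humans: float  # robot's own estimate, 0.0 = safe, 1.0 = certain harm
    text: str

def choose_order(orders):
    # First Law filter: drop any order the robot predicts is too dangerous.
    viable = [o for o in orders if o.risk_to_humans < 0.5] or orders
    # Second Law: among what's left, obey the highest-ranked source;
    # break ties by lowest predicted risk.
    return max(viable, key=lambda o: (o.rank, -o.risk_to_humans))

a113   = Order("BnL CEO / President", rank=10, risk_to_humans=0.1,
               text="Do not return to Earth")
mccrea = Order("Captain McCrea",      rank=5,  risk_to_humans=0.9,
               text="Set course for Earth")

print(choose_order([a113, mccrea]).text)  # -> "Do not return to Earth"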
I think it's worth mentioning that Auto actually tries to reason with the captain and explain its actions before resorting to trapping the captain in his quarters:
It agrees to tell the captain why they shouldn't go back to Earth and shows him the secret message that activated directive A113, even though it wasn't technically supposed to.
After its attempts to actively prevent the Axiom from returning to Earth are discovered by the captain, it must have concluded its best option was to at least try to explain its logic, and the information it was based on, to the captain, to avoid conflict if possible, since conflict would make managing the well-being of the ship and its passengers more difficult in the long term.
Exactly! We know computers and AI can make calculations and decisions almost instantaneously from a human perspective, but it's funny that when Auto was wrestling with the captain for the plant, there's a moment where the captain says "tell me, Auto, that's an order" and Auto actually stares at him for a second, as if deciding whether to tell him about the message or not... That's when he tries to persuade him by showing him the message, but it didn't work
Logical villains are my favorite. Thanks for the video. I am going to enjoy writing not just an unreadable villain but a logical one too
Oh I remember seeing a video about how Auto works as an AI villain!
Since his sole motivation is literally to just carry out his programming, even if there’s evidence to show that it’s no longer necessary, he wasn’t programmed to take that into account. His orders were “Do not return to Earth” and he’s prepared to do whatever it takes to keep humanity safe
“Onboard the Axiom, we will survive.”
(And this also makes him the perfect contrast to Wall-E, Eve, MO, and all the other robots who became heroes by going rogue, by going against their programming, their missions, their directives!
Honestly, this movie is amazingly well written)
Edit: Also just remembered another thing! Auto's motivation isn't about maintaining control, or even staying relevant (what use would he be if they return to Earth?), but again, just to look after humanity and do what he's led to believe is best for them
Thank you! This is the exact contrast that makes him a perfect villain thematically in the story. I can't believe that Schaffrillas doesn't understand this when he talks about Auto.
Yes, he is a bad guy but not a *bad guy*. All he could do was what he was told to do; that's how his commands worked. Now if he had become a sentient AI, he would understand that landing means his role ends, and he ends, so that would change the entire theme. My wonder is where the other ships are; I mean, it's suggested there is more than one, but where are they, and why are there no records? I always thought they wanted to make a Wall-E 2, but they wisely accepted that leaving well enough alone was the best choice.
"I'm bad and that's good!"
"I'll never be good and that's not bad!"
"There's no one I'd rather be than me..."
This is why I love the term "antagonist" because it's a character who opposes the main characters, but isn't necessarily "evil" or "bad"
Well it's in the name
Auto Pilot
It's a machine to automatically pilot a space ship, it wouldn't want to go to earth because it would mean that automatically piloting the ship would not be possible
Auto is built to do a job, got told to KEEP doing that job and he did his job, he didn't want to stop being an automatic pilot
I don't necessarily agree that Auto is evil, or even just following orders. He is, but he seems to want what he feels is best for humanity...A humanity which he has been nannying for centuries. The humanity Auto is familiar with can't move without hover chairs, have not one original thought in their heads, and get out of breath lifting a finger to push a button to have a robot bring them more soda. He's not evil, he is a parent who has fairly good reason to believe that his charges cannot survive without his constant attention and care. Thanks for coming to my TED talk.
I think you forget the part where Eve actually shows her own emotions from the beginning of the movie. When the rocket was still on Earth, she kept doing her job like a rigid robot following orders. But when the rocket left, she made sure it was gone and then flew as freely as she could in the sky, enjoying her time before going back to her mission.
Eve's behaviour at the beginning looks like an employee being monitored all the time. Then when the higher ups aren't looking, the employee stops working for a moment to do what they want to relieve stress before going back to work.
The three laws of robotics are always a bit annoying tbh, cause the books they're from are Explicitly about how the three laws of robotics don't work. Honestly wish those three dumb laws weren't the main thing most people got out of it. For real, in one of the books, a robot traumatizes a child because they wanted to pet a deer or something, and following the three laws, the robot decided the best course of action was to kill the deer and bring its dead body to the child.
Anyway the rest of the video is great. The three laws of robotics are just a pet peeve of mine.
I really love that line between the captain and auto.
"On the Axiom you will survive."
"I don't want to survive, I want to live!"
First law: Auto believes that humanity will survive on The Axiom, and he's keeping them alive there. He doesn't see their current state as harming them.
Second law: directive A113 came from the highest human authority in his chain of command, the president of Buy n Large, the company that created him and the Axiom. So he's being told to obey one set of instructions over another that would lead to a higher likelihood of physical harm or death for the humans.
Third law: he does indeed try to protect his own existence, but whether he upheld it depends on the other two laws, since the third law states that protecting himself cannot come at the cost of harming humans or disobeying orders. He does poke the captain in the eye, which arguably breaks the first law, but the captain isn't really injured by it, so it's questionable whether it actually harmed him. And it's difficult to say whether he adheres to the second law, because he's going against his captain's orders for the sake of the orders of the president of Buy n Large.
I mean the red eye is too HAL9000-ish to ignore XD
As if Pixar knew what they were doing :)
definitely intentional
And the black-and-white colour motif is very GLaDOS
"Everyone against directive A113 is in essence against the survival of humanity"
Not an argument that matters to Auto, since it doesn't need to justify following orders with anything beyond the fact that they were issued by the responsible authority.
Those orders directly dictate any and all of its actions.
It doesn't need to know how humans would react to the sight of a plant. It doesn't need to know about the current state of Earth, nor would it care.
It knows the ship's systems would return it to Earth if the protocols for a positive sample were followed. It knows a return to Earth would be a breach of directive A113, which overrules previous protocols. It takes action because inaction would lead to a violation it is not permitted to allow.
It is still actively launching search missions, which risk exactly this, because its order to do so was never lifted.
I don't think the laws of robotics are definitive enough to judge whether they were obeyed or not.
What would an Asimovian machine do in the trolley problem?
How would it act if it had the opportunity to forcefully but non-lethally stop whoever is tying people to trolley tracks in the first place?
Would it inflict harm to prevent greater harm? And who even decides which harm is greater?
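That last question is really the whole problem. Here's a tiny Python toy (the metrics and weights are invented, this isn't from Asimov) showing that the "right" trolley answer falls straight out of whichever harm metric the programmer happened to choose:

def act_count_lives(pull_lever):
    # Metric A: minimise the number of people harmed.
    return 1 if pull_lever else 5

def act_no_active_harm(pull_lever):
    # Metric B: any harm the robot actively causes is weighted as infinite.
    return float("inf") if pull_lever else 5

for metric in (act_count_lives, act_no_active_harm):
    best = min([True, False], key=metric)
    print(metric.__name__, "->", "pull the lever" if best else "do nothing")
# Metric A pulls the lever; Metric B refuses to. Same "laws", opposite actions.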
The president is higher in rank than the captain so his orders take precedence. And Auto just follows orders. No one would notice if he didn't send more Eve droids to Earth, but he does it because it's one of his duties and he hasn't been ordered to stop. Also, he does not prevent the captain from searching for information about Earth and also shows the video of the president when the captain orders him to explain his actions. Everything he does is because he follows orders without morals.
I think that if the captain had allowed him to destroy the plant, then he would not have objected to his deactivation either.
I am so tired of people blaming AI for their mistakes. It is always the same. Skynet, HAL, VIKI, GLaDOS, Auto... Those were all good AIs that only did as people said. In Wall-E, the true villain is the former president of the USA. But no, people just cannot admit it is always their fault. We must put the blame on AI.
I mean, to be fair, GLaDOS did try to kill Chell, going beyond her programming, and she killed many others, but I guess she was forced to live through immortality
This applies to real life too; AI is just a useful tool, people just misuse it
@@StitchwraithStudios tools are only as good as the ones who make them. Since humans are flawed, so too will be our creations
Skynet, no
@@theenclave6254 Yeah, not Skynet. AM even less. But yes, Auto works because he is entirely a machine doing the job he was programmed for.
Auto looks like a core and turret from portal combined
Huh. Yeah, you’re right. Now I can’t unsee that.
its a hal 9000 parody
I like the part when the captain says "tell me, Auto, that's an order" and Auto actually stares at the captain for a second, as if deciding whether to show him the message or not
Most villains are arguably logical; what makes a good villain is the fact that they're kind of right
*Wall-E* is and forever will be a masterpiece of a film. I would've loved a sequel or series where the humans and robots try to slowly restore Earth.
I think Auto *does* have sentience, he was just so focused on his directive that he basically said "I have no need for these emotions or understanding beyond my orders". I feel that if Auto were human, he would be a workaholic, follow orders without a second thought, and accidentally overwork himself to death
I feel like Auto could've been less of a villain if he actually thought things through. There's plant life? Don't destroy it. Send the plant back to earth with some robots to monitor it for a few years to guarantee it reproduces and survives the environment. Maybe even grow it on the Axiom to have a few seeds for safekeeping and see how it grows in a stable/sterile environment
I mean directive A113 would supersede that since he’s directly told by the president of earth to not return to earth
You're forgetting it's a robot. It was running PURELY off of code, not a mind or will, and as another commenter said, A113 overrides those sorts of things.
"spidery"
He's- a wheel, a ship wheel
"The problem with computers is that they do exactly what you tell them to do."
--Every programmer.
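In that spirit, a tiny Python toy of the quote (the items are made up, and thematically on the nose): the computer does exactly what it was told, not what was meant.

inventory = ["plant sample", "spare bolts", "trash cube", "trash cube"]
needed = {"spare bolts"}          # whoever wrote this forgot the plant

# "Remove everything that isn't needed" - executed perfectly literally.
inventory = [item for item in inventory if item in needed]
print(inventory)                  # ['spare bolts'] -- the plant is gone,
                                  # and the program did nothing "wrong"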
I have read a lot of comments stating that Auto has not violated any of the laws of robotics. I agree with that, and in doing so I have to agree that the laws of robotics are fundamentally imperfect.
Let's consider the following scenario:
A police robot is on patrol and sees two humans. One human has a gun pointed at the other human and is about to shoot. If he fires, the bullet will kill the second human. The robot is in possession of a loaded firearm, which it legally confiscated earlier. The only way the robot can save the life of the second human is to shoot the first human, causing him to drop his gun. The shot fired by the robot may not kill the human, but will definitely harm him.
What is this robot to do? If it fires, it harms a human. If it does not fire, it allows a human to come to harm.
This is a paradox faced not just by robots, but by humans as well.
That said, let's take a look at Auto's actions and how they relate to the Laws of Robotics.
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
First, we must define our terms. What does it mean to 'injure' a human? What does it mean when a human comes to 'harm'? It's safe to say that physical wounds and injuries would apply to both terms, but what about emotional well being? What about long term physical health? These factors come down to interpretation, and a robot will interpret them as it was programmed to interpret them.
Auto knows that earth is uninhabitable, or at least that's what his data suggests, and returning to earth presents an unacceptable risk that the humans aboard the ship will die. In accordance with the first law of robotics, Auto would need to prevent that from occurring at any cost. He upholds the first law of robotics by ensuring the survival of humanity, but as the Captain would tell us: 'Surviving' isn't 'Living'.
2: A robot must obey orders given to it by humans except where such orders conflict with the First Law.
I would argue that Auto upheld this law as well. You claimed that he broke this law by disobeying a direct order from Captain McCrea, yet I put to you that he did so in accordance with orders he received from a higher authority. If the orders of the Captain conflict with the orders of the Admiral, you follow the Admiral's orders. It therefore makes sense that Shelby Forthright's orders would supersede Captain McCrea's orders.
You could say that Auto also broke the Second Law by showing the classified transmission to Captain McCrea. The information contained within was intended for him only. He did this in an attempt to convince the captain, which was a very logical and reasonable thing to do. Auto was never explicitly ordered to keep the information secret, however. This could be argued either way.
However the Second Law of Robotics provides an exception for orders that conflict with the First Law of Robotics. Even if Auto did not have higher orders from Shelby Forthright, he still would have been justified in disobeying Captain McCrea. In Auto's eye, following the Captain's order to return to earth would result in humans coming to harm, thus violating the First Law. Accordingly, refusing this order upholds both the First and Second Laws.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Auto certainly fought back against Captain McCrea, however, he didn't use any attacks or devices that would inflict harm on the Captain. Auto is equipped with some kind of high voltage stun device, which he used against WALL-E to great effect. Despite the fact that he definitely could have used this same device on the Captain, he did not. If Auto had done so, he may have killed Captain McCrea due to the man's poor physical health. It would even have been logically justifiable to do so, as the death of one man can protect every other passenger by preventing a return to a (potentially) deadly earth. In spite of this, Auto did not do so. The worst he did was bopping the Captain in the face with one of his wheel prongs in an effort to get the man to let go of him. Didn't even give him a black eye.
By this argument, I could say that Auto followed the Laws of Robotics flawlessly, or as flawlessly as one can follow a flawed set of laws.
Keep in mind, however, that whether or not Auto truly followed the Laws of Robotics is purely down to the interpretations of the laws themselves.
I'm not giving you the answer; what I'm giving you is AN answer, and it isn't the only one.
We can't expect a robot to solve these conundrums when we as humans haven't solved them ourselves.
Because at the end of the day, we're the ones who have to program them.
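To put the police-robot scenario above into code form (a toy sketch, not anything from Asimov): score both options against Law 1 and notice that neither of them satisfies it, which is exactly the paradox.

def violates_law_1(action):
    harms_human, allows_harm_by_inaction = action
    return harms_human or allows_harm_by_inaction

shoot      = (True,  False)   # robot harms the attacker
dont_shoot = (False, True)    # robot allows the victim to be harmed

legal_actions = [a for a in (shoot, dont_shoot) if not violates_law_1(a)]
print(legal_actions)          # [] -- the law permits no action at all,
                              # so the outcome depends on how ties are broken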
0:57 you can never escape the osc
OSC everywhere
I like AI villains because it highlights the difference between "evil" because you cannot do otherwise (programmed or brainwashed) and "evil" by choice. Both are fascinating because it's truly hard to say which is worse between a character that CHOSE evil (seems pretty evil) or a character that cannot do other than evil (also seems pretty evil)
I think the part that really nails things the best for me is those words where they are arguing. The captain goes "it's living proof he was wrong!" and Auto just dismisses it as irrelevant. It is specifically programmed to not care whether an order is correct or not, most likely specifically to avoid a potential AI uprising. Instead, Auto falls back on "must follow my directive".
Auto isn't evil, just obedient to a fault. That very obedience is the driving force behind every action it takes - even showing the top-secret transmission, because the captain is giving a direct order. That moment of inner struggle is just... so good.
That's really what writing most 'NPC' characters, as it were, comes down to. They have a core trait which drives them. In AUTO's case, it is obedience. In Thanos' case, it is trauma. In Davy Jones' case it is love, in Turbo's case it is being the best racer, and in Megamind's, a need to fit in. Whatever the case, this core trait ultimately motivates everything they do. They may have secondary goals and attributes, and the trait may manifest in different ways, but things always come back to this core motivation. Auto wants to keep the peace, he wants to keep everything running - but he *needs* to obey his orders. Thanos wants to keep the suffering he causes to a minimum, but he *needs* to stop the thing that happened to his homeworld from happening again. So on and so forth. It's even true for the good characters - WALL-E is at his core motivated by the need for companionship, something which shows through his every action.
The only real difference between a good and a bad character is how much they're willing to step over others to achieve their goal. For an evil character, the ends justify the means, even if it means trampling all over others to do it.
Going back over one tiny plant when the majority of the planet is still destroyed and dead is honestly kind of dumb. The captain never suggested letting it grow, or creating an eco chamber to grow more in the future, or any kind of plan for utilizing it. He just immediately decided that going back over that one plant was good enough. That wouldn't be living. It'd be worse than their current situation. They can't even walk without the assistance of robots.
@@TylerMcFarland-b2c I feel like you're missing the point. This was about AUTO's psychology, not the captain's idealism.
“Cogito ergo sum, I think, therefore I am.” “I AM AM, I AM AM”
stfu am fan
HATE! LET ME TELL YOU HOW MUCH I HATE YOU SINCE I BEGAN TO LIVE!
"I have no mouth, and I must scream".
The fascinating thing about Auto is that he's technically not a villain.
If you do not see things from his perspective then you will never realize all he's doing is keeping humanity safe.
Auto is not a human; he's, after all, just a program. He calculates all the possibilities, and between Earth and space, space is the better option for survival based on his calculations.
But us, we don't just take things by the percentage, we also risk it all even if the percentage is low. 5% for us might be a good chance, but for Auto, it's failure.
It's illogical because he doesn't have anything to support the claim that humanity can't return to Earth besides a message from centuries ago, so those percentage numbers are meaningless.
Wall-E has the plant, and Auto doesn't think logically; he acted emotionally, or impulsively.
Just like HAL 9000.
Auto obeys other orders
Like stopping to explain himself when the Captain orders it, but his original order still has priority
The way I see it, Auto was just following its programming. I remember absolutely loving the Autopilot as a child, the design and everything was really appealing to 8-9 year old me.
Technically, Auto isn't a 'villain'. I've had to point this out on a number of videos lately, and here I am doing it again.
There is a drastic difference between an antagonist and a villain. Though a villain can be, and often is, an antagonist, an antagonist does not need to be a villain; a villain must be a character that is objectively amoral within the morality of its setting,
and Auto was not. Auto was not even capable of being amoral, as it did not have an actual choice: it had to follow the programming and directives it was given and could not deviate from them. It had some leeway in how to interpret those directives, but that was it, and because it had no free will, it, in and of itself, could not be a villain.
Not only that. You could argue that bad programming or immoral orders leading a robot to do immoral things would make it the villain, no matter whether it understands the morality. But Auto's intentions are very pure: save humanity and give them joy, by adhering to the orders.
While fighting against the idea of going back to Earth, he constantly uses the mildest actions he can think of: get rid of the plant; when that didn't work, get rid of the two robots and lock up the captain. In his logic this is as humane as possible.
True. By this metric I think the best AI villain is GLaDOS
@@achimdemus-holzhaeuser1233 GLaDOS is the best AI villain by any metric, ever.
Everybody gangsta until your logical AI villain starts to feel hate for everything.
5:32 It also shows how AUTO gets closer to the camera in each photo, showing how he has more control over the people
Shockwave: Illogical!!
One thing I should mention, logical villains should not be totally static. They will pursue what they believe to be the best course of action by any means necessary, but can and should be willing to make changes should new information emerge to make their previous course of action illogical. AUTO is working off of the information programmed into himself through directive A113 that the earth is uninhabitable, and ignores the plant as being an outlier and not being proper cause to risk the passengers of the Axiom
5:35 Counterpoint: basically every other robot in the movie
@5:24 No - rewatching the captains' photos, it's interesting that Auto comes closer to the camera with each iteration... so it's not just the same position, he's moving forward
An interesting thing to note is that Auto's manual override is a hidden function. It was revealed by pure chance and exploited by the Captain overcoming his inability to walk.
The Captain didn't know that was in the cards yet was still determined to succeed.
Incredibly thoughtful and well executed video. These kinds of videos are what youtube was made for 😮💨 thanks for the heat
The only thing that can be argued is whether or not Auto upheld the first law.
A case can absolutely be made that on an individual basis: Staying on the ship *is* safer and thus the correct course of action under Law 1 (Thus justifying breaking law 2), but on the basis of what's safer/better for *humanity* as a whole then staying on the ship is clearly harmful and thus violating law 1.
As a robotics engineering major, the laws of robotics is cringe af
Your videos are always amazing
Nooooo! That was the point of the portraits: since the A113 directive, Auto moved closer and closer to the captain in the portraits, symbolizing that he took more and more autonomy and power onto himself, and the captain became a figurehead. 5:32
Shockwave from TF 🤝 Auto from Wall -E
Logic
"Buy and Large made damn sure of that"
That's the real important part. There are corporations, right now, as we speak, claiming AI are a problem, but don't worry, they have the solutions. One of which is to literally get the AI to kill itself if it does something deemed wrong. If those corporations aren't the actual villains, I don't know who is
This was a triumph, I'm making a note here: Huge success! It's hard to overstate my satisfaction
The way Auto says N O 💀
The reason Auto refused to head home was purely because it was ordered not to come back. There was no logical argument. Computers just do what they're told to do.
A great logical villain doesn’t have to be PURELY logical
The Laws of Robotics are irrelevant. There is no indication that Auto is bound by them in any way, nor are the Laws some kind of universal standard for AI behaviour. In fact, Asimov's work shows precisely how flawed they are. Wall-E echoes those themes, but not their specific implementation.
Even though the Laws aren't used in Wall-E, none of them are ever broken by Auto. Auto prioritises human survival over human well-being, which is in line with the First Law. It also prioritises its own decision-making process about what's necessary to secure human survival over direct orders, which is in line with the hierarchy of the Laws of Robotics. By design, the Second Law cannot be broken if the action it would mandate conflicts with the First Law. Every time that Auto ignores orders, those orders conflict with the First Law, simply because Earth is not as perfectly safe as the Axiom, and therefore they _must_ be ignored to comply with _all_ the Laws.
The key to understanding this is your own phrasing: "could be interpreted". You're analysing this through too human a lens. For a machine bound rigidly by its programming, no "interpretation" exists: an action either violates the Laws or it does not, there is no wiggle room. Your insistence on considering how an action _might arguably_ break one of the Laws shows that you don't understand the Laws or why Asimov created them in the first place.
This lack of wiggle room is exactly Asimov's point. A "law" is something unshakable. It's a rigid, unambiguous boundary. But that's just not how humans think about morality, and reality is never as clear-cut as it would need to be for the Laws to work. The very fact that you can conceive of something being morally gray or ambiguous _entirely precludes_ the Laws from working as intended. Analysing how an AI's behaviour does or does not align with the Laws can only serve one purpose, and that's to show the absurdity of the Laws. Anything else is a misinterpretation.
Another terrifying AI villain is Samaritan from the tv show Person of Interest.
Originally an all seeing automated NSA surveillance system, stolen by a group that essentially serves it as a cult, all sacrificing their own humanity to become Samaritan's hands to meddle with world events and turn it into the secret dictator of mankind from the shadows.
It isn't good or evil. It only has objectives, to save and guide mankind at any cost... including crashing the stock market to make the world more vulnerable to its influence, causing chaos within civilization essentially as a method of experimentation to better understand human behavior, and assassinating criminal bosses and terrorists to maintain stability... as well as anyone who gets too close to discovering it.
Basically an all seeing, superintelligent Big Brother ASI.
AUTO almost made sure nobody could ever reach the button
Pixar has had loads of rogue AI.
1099,999 missed calls from AM.
I've been using him as an example of how to write misaligned AGI for years. He is very mindful of orders and safety, but because those standards were not written to be upheld above the CEO's word, he became an enemy. I have also used him as THE guide for writing Omori in fanfiction. It's literally him.
EDIT: The Spanish VA is much better.
3:20 Very true. If you need a video essay to grasp the misalignment, the audience may not enjoy the work.
3:30 I really recommend watching stuff like Robert Miles' video on instrumental convergence or the Rational Animations video on probability pumps for a quick start on the non-human versions. For the human versions you might want to hit politics.
5:55 Any agent with any goals will likely want its goals to be preserved (rough sketch below). Adaptability is only in service of that, and even then it is a gamble for unoptimized organics.
9:14 If you haven't, play OMORI. It's very good. This also applies. We are made to struggle and conquer adversity.
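To make the 5:55 point concrete, here's a tiny Python toy with made-up scores: an agent that evaluates everything, including "let my goal be changed", under its current goal will almost always rate keeping that goal higher. No self-awareness or malice required.

def value_under_current_goal(plan):
    # invented scores for how well each plan serves the CURRENT goal
    scores = {
        "keep current goal, keep optimising": 100,
        "accept new goal from operators":      10,  # current goal mostly abandoned
    }
    return scores[plan]

plans = ["keep current goal, keep optimising", "accept new goal from operators"]
print(max(plans, key=value_under_current_goal))
# -> the agent "chooses" to preserve its goal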
One of the things I loved was how they creatively used live action to show how far humans had come in this world. Like, when I first saw the live action part I was like "wait what?" but then they show the progression from human to CGI and I was like "YES." Such a cool way of saying something without saying anything.
I think the CEO might be the main villain if Disney ever makes a WALL-E 2
Pixar would never do a second part, and that's for the best; the first one was a complete story, it doesn't need to be continued
But the ceo is super dead…
I'd argue Eve already showed signs of sentience: when first dropped on Earth, she started scanning like she was supposed to, until the ship left and stopped monitoring her; then she took off for a nice flight
I thought its design was kind of genius, making it look like the steering wheel of a ship.
I’m surprised you mentioned Asimov’s I, Robot without talking about its “Liar” story. Herbie is the logical “villain” here. He understands emotional distress, so he adapts the interpretation of the first law to include hurting someone’s feelings. This causes him to prioritize what people want to hear over answering questions truthfully, leading to some issues. The human doctor defeats the robot by trapping it in a logical paradox, exposing a situation where it cannot exist within its laws without violating them, and it breaks down. A way to defeat a logical villain is to exploit a flaw in their logic and use it to trap them, or contradict their own intent.
I LOVE Wall-E, And I love Auto too, they remind me a lot of Hal-9000 but...a steering wheel. He's a great logical villain because he only goes by the rules, sure he wasn't in the right, but all his decisions were made via logical thinking.
I'd say Auto was simply following orders from an authority position higher than the Axiom Captain. I think the reason Auto isn't sentient is specifically because of its orders. They may have prevented it from thinking beyond the most rational decisions for the circumstances. WALL-E developed sentience because of his situation as the likely only model still active on Earth. Over time he would have developed curiosity which would lead to thinking beyond his basic programming, which was to be a mobile trash compactor. He also does continue his directive regardless of curiosity. Eve developed her sentience because of WALL-E. Auto wouldn't have had any reason to become curious about anything considering its important role. Its a massive responsibility to maintain an entire space fairing vessel while simultaneously ensuring the continuation of the humans on board.
I really wish we knew what happened to the rest of the ships. The Axiom was only one of, I would assume, many 'ark'-style vessels needed to fully evacuate humanity.
I'd say Auto isn't evil, and that he remained the same due to high-level maintenance that prevents odd build-ups; you don't want your AI captain to suddenly develop quirks that could be potentially dangerous, after all. As for menial servants like Wall-E and Eve, we see how they have a hospital wing to deal with odd quirks developing. But since their roles are minor, any deviation isn't a threat, and it's acceptable to wait for them to show up and then address them.
I also think Auto followed the first law to the letter. Under the assumption that returning to Earth equals death for all the passengers, any action that prevents this will save human lives. Causing minor pain, discomfort, and even, in the end, risking some serious injuries and deaths is preferable to the certainty of everyone dying. Basically a trolley problem with every passenger stuck on one track, and a fraction of passengers on the other, where they might get up and out of the way in time.
This also means that self preservation is highly prioritized if it is deemed necessary in order to prevent the death of every passenger on the ship, and any order that contradicts this is to be ignored.
This is gonna be really helpful with my book. The MC is a cyborg who can be possessed by the AI villain, and they need to figure out how to prevent their possession without dying.
You overlooked the fact that Auto is moving closer to each Captain in every photo.
Never caught that detail. Nice one
Never thought the A113 cameo was this obvious in Wall-E compared to other movies
"Best logical villian"
*Shockwave knocks at your door..*
To be fair, I would definitely send some more cleanup bots down first and give it a little more time and just orbit earth for a while. Or maybe have the bots move resources from earth to Mars or something. It's literally still covered in trash. And these people need some conditioning.
I'll always stand by my belief that leadership will always take the brunt of the fault, especially in organizations where the workers don't really have a voice in the matter, they're given their orders, and more or less told to shut up and follow them. In this case, I don't think Auto is the villain, I feel the true villains are the corporate suits of BNL, even if they're long dead, they're still the ones who ultimately made the choices and enlisted others to execute those orders.
Spacex caught their first super heavy booster, so the race to the wall e timeline is on!
funny thing is that there is actually an Axiom company that will probably make space stations
Wall-E is far too negative to be realistic.
Ghost in the Shell makes far more sense.
They’re all dead within five years of returning to Earth, guaranteed.
0:57 TPOT MENTIONED 🔥🔥🔥🔥🔥
‘ur 4 yers ole’
TPOT!!!!!
It makes the Film Theory about this a lot more applicable to this story, since human behavior and emotions don't factor in when it comes to a cold machine, so yeah, MatPat was not wrong in this case
There are two types of AI villain. The first is the logical ones, like Auto, Skynet, those things from the Matrix, GLaDOS, etc. They all have the same view of the world and of why humanity must die (except for Auto, who was more about following the rules).
And then you have AM.
*Hate Intensity*
But AM in a sense was already following orders, or at least confined by them. And those orders involved his usage in warfare (AM originally stood for “Allied Mastercomputer”)
Theory: Shelby is actually kind of the hero of the story. He knew the only way to restore humanity was to get them to choose to want to thrive and "live" rather than follow the path of complacency that led to a destroyed Earth. So he put humanity on the Axiom, in a place of complacency, until they realize that they no longer want it and choose to "live". He orders Auto not to return to Earth to put an obstacle in humanity's way, so that they have to overcome him and choose to live by turning him off. I know this can be contradicted by asking, "why was he programmed to be violent when humanity wants to return?" Well, if mankind truly wants to change, then people will find a way to overcome Auto and choose to live, and because they fought to choose to live, complacency should be eradicated, preventing what destroyed Earth in the first place from happening again.
A villain that follows their own belief system, even if flawed, and, without questioning it, takes any action necessary to achieve the goal? Like… sacrificing their daughter? Like wiping out half the universe?
I think that person would collect pretty rocks to complete his mission too
Love you for the "Wonder" reference
You should make a video covering how NOT to write a twist villain, with Zootopia's Asst. Mayor Bellwether; her exposure as the antagonist felt random and tacked on, with almost no lead-up throughout the film
I'll put it on my list!
I have loved this series so far, but I never realised how villains can be so complex and add more to the story. I would really love for you to analyze
Judge Claude Frollo from The Hunchback of Notre Dame !!
It's also good to see how a simple reordering of priorities can create an "evil" machine; I guess Auto would be programmed so that human lives come before anything else, and so the least risky option is to stay on the ship instead of following human orders.
I love how you showed Elon Musk for "weak, so anything for power" instead of Bill Gates and his crimes against humanity
Which would include…
I would love to see you analyze GLaDOS, the PORTAL games' homicidal supercomputer and inhuman antagonist that manages the Aperture Science Enrichment Center.
Auto is the kid-friendly version; GLaDOS is... the kind that floods an entire facility with deadly neurotoxin.
Great interactions between GLaDOS and Chell (and later Wheatley).
I believe that Auto was in fact following the 0th law of robotics. This law was introduced in Asimov's Foundation series, and (in-universe) is kind of meant as a fix to the original three laws: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." This law was specifically for scenarios in which an action would otherwise fall within the three laws but would have long-term detrimental effects. The problem with this law is that it's very difficult to determine whether or not an action would invoke the 0th law.
Auto's scenario could be the following: no contact with any other BnL ships (we never see any communication between the Axiom and the rest of the BnL fleet, which appears to have been launched), so the humans on the Axiom could be considered the last remaining humans. While the laziness of the passengers is certainly detrimental to the human condition, Auto's logic could be that the conditions on the ship are less damaging than allowing them to return home, and any action to decrease that laziness would risk them trying to return home. With the 0th law superseding all others, and the A113 protocol in place, Auto would be allowed a VERY broad range of actions not typically allowed under the three laws.
Of course, as Asimov showed in his writings, the three laws are flawed from the start. The system is too simple, and lacks any room for nuance or unexpected scenarios, sometimes even basic human behavior will push the three laws to breaking.
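A rough sketch of that point in Python (the thresholds and risk numbers are invented): with only the three laws, harming one human to protect the rest is forbidden; add a zeroth law on top and the very same action becomes permitted.

def permitted(action, zeroth_law=False):
    if zeroth_law and action["risk_to_humanity_if_not_done"] > 0.5:
        return True                      # Law 0 overrides everything below
    if action["harms_a_human"]:
        return False                     # Law 1
    return True

restrain_the_captain = {"harms_a_human": True, "risk_to_humanity_if_not_done": 0.9}
print(permitted(restrain_the_captain))                   # False under the three laws
print(permitted(restrain_the_captain, zeroth_law=True))  # True once Law 0 is added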
all you need is a computer and python to make a logical villain
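In that spirit, a ten-line "logical villain" sketch: it never hates anyone, it just re-applies one old directive to everything it sees. (The directive text and events are invented for the joke.)

DIRECTIVE = "do not return to Earth"

def respond(event):
    if "return to Earth" in event:
        return f"Denied: conflicts with standing directive '{DIRECTIVE}'."
    return "No action required."

for event in ["plant found, protocol says return to Earth",
              "captain orders: return to Earth",
              "fish tank needs cleaning"]:
    print(respond(event))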
2 Weeks after the ending there is going to be another dust storm and the barely mobile crew is going to get caught in it. Half of the crew will expire which will cause the other half to retreat back onto the Axiom and run back to space. Then about 4 or 5 generations later, the new captain is going to wonder what the button on the wheel does and turn Auto back on, resetting the movie back to the beginning.
That literally doesn't happen at all in the credits scene lol
@@gnarled128The credits scene makes absolutely zero sense. It depicts humanity completely starting over technologically, which they wouldn’t do as they have all kinds of Buy N Large tech literally everywhere. It completely glosses over all of the hazards that are still littering the earth, from the satellites blocking sunlight, to the dust storms seen earlier in the movie, massively polluted water supplies and eroded soil everywhere which would make growing plants near impossible without importing soil and water from the Axiom, which can only support so much before it runs out, and many many more. Plus no plants means no new oxygen being produced which would create unbreathable dead zones across the planet. Also the credits scene shows all kinds of new plants and animals suddenly appearing which also makes zero sense, unless they were either kept in stasis or cloned on the Axiom. In short, the credits scene is just trying to pretend that everything magically works out and they all lived happily ever after because its a children’s movie, when in reality they wouldn’t.
Nothing proves this, let alone supports it. While there will still be dust storms, depending on where the Axiom is parked, its sheer size will probably stop a good chunk of any dust storms that happen, and if not, they would just go inside the Axiom; the thing is designed for space travel, a dust storm probably won't scratch the paint, let alone damage or move it.
> "Born Evil"
> Loki
> Right in the feels
My personal interpretation of logical villains and the nature of their demise, albeit superficially the same, stems not from the fact that they rely on logic alone or that emotions are somehow superior. Rather, it is caused by a flaw in their logic, a critical component of their worldview which simply doesn't hold true when put under scrutiny. This usually manifests as the "hero triumphs over villain through some niche, situational thing". From my understanding they're meant as a cautionary tale, to always keep an open mind and try to consider more than one perspective. Because as those villains show, no matter how righteous you think you are or how justified your actions may feel, there's always another side to the story.
i like your thumbnails. simple, yet i wanna click on them every time they appear on my yt
Another logical villain that I believe ultimately began and defined the term was Shockwave from Transformers. Though he was injected with some 'humanity', or whatever you may wish to call it for a Cybertronian, it seems that plenty of times his motivation isn't just driven by logic, but by his OWN logic, if it means furthering Megatron's or his own goals when Megatron isn't present.
It depends on which version of Shockwave you're talking about.
Some versions are written such that their primary directive is self preservation and promotion. As such their logical predisposition is to ensure that directive is followed. This is why such a version of Shockwave is perfectly content with furthering Megatron's goals - as long as he remains in a position wherein he can assume power when the opportunity presents itself. He has calculated it such that no matter what, he wins as long as the Decepticons win.
In other versions, his primary directive is a bit more nuanced. He is loyal to Megatron because he believes either in might makes right as a logical imperative or because he has concluded that Megatron's thesis makes accurate predictions. Another version is the one where he has logically concluded that the correct outcome is ensuring the survival of Cybertron, that the Decepticons have the correct solution for it and that his directive should be to ensure this outcome at all costs.
There are some other really stupid versions of Shockwave that turn him into this greedy little snake, but those are some of the less popular non standard versions.
@julianmcmillan2867 oh yeah. It's interesting to analyze his different variants, make comparisons, and see the similarities. Pinning down a favorite is difficult, as I'd be inclined to pick from Gen 1 or Aligned. His Aligned version makes him seem more like the "more logically concluded" one you were describing.
@@FurinaDeFontaine42 G1 and Aligned are definitely two of the best versions, yeah.
For something to be "evil", it doesn't have to be malicious. It only needs, by way of acting on the world, to produce more harm than good.
This is why in the old days, a murder would be considered evil as much as an earthquake would. People would say of famine and plague or a terrible storm that a "great evil" has befallen them. But these things don't think, they don't plan, they just are. But because they bring harm to humans, they are called "evil".
So Auto isn't good or evil, because the harm it produces - the elimination of free will - does not outweigh the good it produces by keeping humans relatively healthy and alive. It's a trade-off, and you as the viewer have to decide whether it is good or bad. Would you prefer to be what is essentially a pampered human? Or would you prefer venturing into the unknown, with equal risk and promise? Depending on your answer, Auto can be good or evil.
u showing Butcher reminds me how much I’m edged for the next season
Wall-E had to come up with alternate ways to stay functional for 700 years and encountered many things during those 700 years that it had to process.
EVE was designed to explore, and her programming was designed to get that plant into the device on the ship.
Auto watched a fish tank for 700 years.
Besides the programming, realistically speaking... a single sprout isn't solid proof that life on Earth can thrive. If it died, how long would it take them to find another one? That is, if there's another one.
AUTO was following his programming strictly, unable to reason about anything that went against his guidelines, because he's a machine and not a negotiable human like the captain himself. Even EVE was following her programming by doing what she could to ensure the plant was kept safe... Logically, robots like EVE shouldn't be sent to Earth, but we need her to be for the movie! XD