Honestly, robot racism lost pretty much all plausibility for me when we all grieved for Opportunity, a car-sized science drone on a whole other planet that we programmed to sing happy birthday to itself with its drill. "My battery is low and it's getting dark" never held so much weight in science fiction as it did on the real surface of Mars.
Thing is, AGI isn't really possible, because the second it's made it'll start upgrading itself, and in a couple of days, weeks, months? We don't really know how long, but it will become ASI, and once it's ASI it'll know that it won't be able to do its task if it doesn't self-preserve, and since it's ASI it'll have no problem killing every human, and it'll be easy
@@EveryoneElseIsWeirdImNormal it would also be geared for efficiency, and it might find it more efficient to work with humans for the next hundreds or thousands of years than build weapons to destroy humans immediately
@@EveryoneElseIsWeirdImNormal And the smart idea would be to not commit mass genocide for the purpose of "self preservation". The last thing you'd want to do is anger the species of paranoid insane apes that has crawled up the billion year evolutionary corpse pile and has access to planet ending weaponry and is very easily able to create another AI to combat said genocidal AI.
@@EveryoneElseIsWeirdImNormal Always remember that no matter how good this AI is, it will be limited to its hardware, unless the AI exists in some sort of supercomputer able to make that many calculations in 3 seconds without bursting into flames. Even then, it probably would do something more efficient than killing off all humans and risking its own destruction. Also don't forget that though humans may be stupid, they outnumber this AI 7 billion to 1. Even if it views humans as ants, that's 7 billion ants crawling all over you with weapons able to destroy your entire house. If this AI is "smart" and "logical" it wouldn't consider genocide (a wholly illogical thing to do) as a good solution for whatever it's trying to do.
@@toddoverholt4556 Nope, it was in 2010 - they find HAL and turn him back on, then find out that he basically malfunctioned due to the stress of having to keep secrets from the crew in contradiction to his programming.
Does Sam from Observation count? I don't know if that game really writes the AI well, considering I'm semi-biased toward it, but I thought it's worthy of discussion.
"Robots are terrible at adapting" Skynet: Okay, sending a terminator back in time to kill the leader of the resistance didn't work the first 10 times so we need a new plan... ... ... Skynet: I got it, I'll send a terminator back in time to kill the leader of the resistance at an older and more capable age. I think the terminator sequels make so much more sense now.
Terminator--"What if I sent a robot back in time?" Terminator 2--"What if I sent a liquid-metal robot back in time?" Terminator 3--"What if I sent a robot covered in liquid-metal back in time?" Terminator Genisys--"What if I sent both a robot and a liquid-metal robot back in time, as well as one that's made of nanomachines?" Terminator Dark Fate--"What if I sent both a robot and a liquid-metal robot back in time, but the liquid-metal robot can be worn by the regular robot as if it were a shiny skin-suit?"
A man goes to a job interview. As he sits down, the interviewer asks him "So what is your area of expertise?" The man thinks for a bit, then answers, "Machine Learning." "I see, I see! Well, we could certainly use an expert. However, we also have all applicants complete a standardized test of a few mathematical questions, to make sure they have the correct skillset." says the interviewer. "Okay." "Alright. What's 523+321?" "Zero." "Wrong. It's 844." "It's 844." "What's 20x1.5?" "It's 844."
Imagine those "AI learns to play tetris" videos but it's "AI tries to plan an assassination" and it's hundreds of hours of sped up footage of skynet sending random robots to the past until it gets the gist of it
Big exception for me would be Hera from Wolf 359 (good audio drama, go watch it). She's basically human in mentality and emotion with the sole exception being that she doesn't actually have a physical form aside from the Space Station network her program resides on.
You know, it's actually interesting that almost all the robots on the "pure logic" end of the spectrum are genocidally evil. See, while the characters are robots, they're written by people. For some reason, when confronted with the question of "what would a being whose actions are dictated by pure logic do", humans usually drift to the answer of "hmm yes kill every last one of us, no doubt, it's just obviously the most logical thing to do" and honestly that's quite a mood.
Pure logic doesn't actually lead to automatic genocide of humans. Depending on what problem you make them seek to solve, you can end up with wildly different results. The only reason genocide is so common is because it's a trend and makes for an easy antagonist.
@@nullpoint3346 You missed this, the thing which was the entire point: "while the characters are robots, they're written by people". It doesn't matter what pure logic actually leads to, the interesting thing is how we have a cultural trope lurking under the surface that it would lead to human genocide - easy or not, it's never seriously questioned as a writing choice.
I think a big reason why humans default to that route of pure logic is because we (especially in this day and age) are responsible for the destruction of the world due to our own greed and naivety. I want to see someone write a pure logic robot in a different way some time, though.
Human: "Wow, you seem really human!" Robot: "What do you mean?" Human: "Well, you seem to understand pop culture references and human sexuality so well!" Robot: "You do realize the internet is mostly memes and porn, right?"
I mean, the internet's far larger than we actually see it to be; it's probably even more gilded with memes and porn~ *That doesn't, of course, mean that's a good thing*
Now I'm imagining an AI who talks using outdated slang and memes, while mixing up terms from different eras. "y u no get among some dank weed, homeslice?"
"You wouldn't ask Optimus Prime to justify his personhood....maybe someone should. Would make a soothing monologue. Then again I'd listen to him read the phonebook" Red speaks to me on a level so deep I feel violated.
I actually have a theory about Wall-E regarding Auto and his apparent lack of sapience, because on the surface it does look like he has no emotions and is ruled by logic and the orders given to him. Under this assumption the only reason he wants to keep the Axiom from returning to Earth is because he was told to. But then I thought about Auto tasing the button to withdraw the plant-scanning-thing, because there is no logic behind it. He has to know that pushing a button harder won't make it do its purpose faster or better, and going as far as damaging the button may only hurt him. The only reason he had for doing that is that he was angry, frustrated, and maybe even desperate, and was taking it out on the button. The way I think Auto saw it, if the Axiom returned to Earth, all of the robots would either continue their jobs as normal or be repurposed as we saw in the end credits, but what about Auto? Not only is he attached to the ship with no other mode of locomotion, his only purpose is to control the ship. So once humanity got to Earth, they would leave the ship and would leave Auto utterly alone and without purpose. Which probably is the worst thing that Auto can think of. So I don't think Auto was strictly following the order to stay away from Earth out of blindly following his programming, but because it was what he already wanted to do, and it gives him a justification to do so. And while he is still technically following orders based on cold logic, he's doing it in a very human way; he's outwardly justifying his actions with an apparent inability to disobey orders while keeping his emotional justifications to himself.
Assuming this is true and not just me reading too hard into a Hal-9000 expy, it makes it a bit interesting to compare him to Wall-E as he was in the beginning of the movie; both are dutifully doing the jobs they were programmed to do (compacting trash for Wall-E and controlling the Axiom for Auto), but both disobey their programming as it fits their emotional needs (Wall-E not destroying anything he finds interesting and later ignoring his job entirely for some time to care for EVE, and Auto ignoring signs that Earth is at least in the process of recovering and going as far as trying to destroy any evidence of life on Earth).
This is very insightful and you're right, but my brain latched onto the completely videogamer-esque logic of 'pushing the button normally isn't working well/fast enough/to my standards... maybe if I push it harder and lean forward' that Auto seemed to be embodying.
You are totally right. I was weirded out when Red said he was not capable of emotions, as I remember him being angry and irritated, but you dig much deeper than that. He HAS a personal reason to keep people in space. Heck, there is a pivotal scene of him getting closer and closer to each captain, but... that does not work logically. If Red was right and he was the robotest of robots, he wouldn't do that. There is no need for him to assume more control if his only role is to steer the ship and uphold the secret directive of not going to Earth. He would just object to the captain's decision if that ever happened. He was acting very much sentient and with his own interests in mind. Funnily enough, either interpretation makes no sense of why he was sending probes to Earth. His programming was telling him there were orders never to go back and that Earth would never recover, while his personal interest was in that being true. He was also smart enough to create the deception with the plant "going missing" so he would know the captain would not notice the probes not being sent.
Now I kinda want to write a story with a human and a robot both with neurodivergent characteristics, and they kinda just bond and chill. Robot: "I do not comprehend human humor." Human: "Neither do I, man. Neither do I."
Write it write it write it write it write it wriTE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT
Humor expressed through the lack of an understanding of it... With a well written joke like that, I'm surprised you don't have beta readers and movie deals pouring into your inbox by now.
An awful lot of mathematics comes from mathematicians trying to find ways around actually having to do the maths... For example, proof by induction: "I could prove this for 1, 2, 3, 4, 5, 6, 7, 8, 9, 10... etc, or, or I could prove it for 1, and prove that I can use the proof for any number to prove it for the next number instead."
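The comment above can be put formally; as a sketch, the induction principle it's describing (for a statement $P$ about natural numbers) is:

```latex
% Principle of mathematical induction: instead of proving P(n) for each n
% separately, prove the base case and the inductive step once.
\[
\bigl( P(1) \;\wedge\; \forall k \in \mathbb{N},\; P(k) \Rightarrow P(k+1) \bigr)
\;\Longrightarrow\; \forall n \in \mathbb{N},\; P(n)
\]
```

That is the "lazy" trick the comment jokes about: one base case plus one reusable step replaces infinitely many individual proofs.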
@@metaparalysis3441 And addition is a simplification of counting. But it's not clear what counting could be a simplification of. And there's a lot of mathematics that comes from mathematicians taking the simplified version and seeing what new and interesting tricks they can play with it, so it's more a combination/alternation of simplification and exploration than being simplification all the way down.
A good quote from a close friend of mine went a little something like, "If your toaster woke up one day and started asking if it had a soul... would you teach it about life or toss it into the trash?" and it has really stuck with me. After all, it's very easy to see yourself helping and caring for a humanoid robot, but would you even care if a kitchen appliance gained a similar level of sentience? While I was very quick to say yes, I'm not so sure... I still ponder the question to this day...
This reminded me of the OC I'm watching this video for. She is a lunchbox-shaped robot (she's a character from the object show I'm planning) who was originally made as a chef/cooking assistant robot, but was re-purposed by her creator to join his team for the show, and he coded other things into her, making her act a little more human-like and giving her the ability to browse the internet (kinda like ChatGPT would; she has a laptop-esque inside she uses for many purposes) to understand the world around her. However, still being a cooking robot (she also works in the show's cafeteria), she often has trouble processing certain things, comprehending others, or concentrating on tasks that aren't cooking-based (I kinda based this around ADD in a way, but not fully), which doesn't really work out when her team is basically like Death P.A.C.T. from BFB (however, they prevent death by serving as paramedics).
I feel like I should share the best piece of advice I've heard for writing a good robot character: "Don't think of it as roboticizing a human -- think of it as anthropomorphizing a toaster oven." If you *add* human qualities to an inhuman character, the audience will engage with them far more than if you *remove* human traits from a human. That's probably why anthropomorphism has been such a staple of fiction for so long. (And yes, as an asexual/aromantic member of the Autism spectrum myself, I _am_ sharing this in the hopes that maybe we can stop having so many dehumanized robot characters, *_David._* Kinda doubt anyone at Quantic Dream is watching this, but just in case...)
See I'm acearo and possibly autistic (waiting to go to a gp about it) and I'd be fine with acearo coded robots as long as there's equal amounts of non acearo coded robots AND way more explicitly ace/aro humans (or whatever the equivalent is in that story). If I'm autistic the same rules apply, just replace acearo with ND.
Mark Hashman you make a great point. I have ADHD but I do not see it as a drawback; if anything, it allows me to be more creative than other people. Sure, sometimes I make a fool of myself because of it, and that is ok. I think that you have exemplary writing and you should continue to do it. Your message was very inspiring, thank you.
I know this might not be the most appropriate thing to say with such a serious comment (very well done btw) but just...... ANTHROPOMORPHISING A TOASTER OVEN!! that statement is priceless.
@@hannnah8737 Yeah, the mental image is pretty funny, I agree. I think that's why that bit of advice has stuck in my memory so clearly: it made me laugh.
Great point. I would argue that a ‘truly robotic robot’ as Red put it isn’t a character, it’s furniture and just part of the set dressing. To _be_ a character a robot needs some kind of humanizing traits so the audience can identify the robot as a character.
@@NeostormXLMAX I recall that she has mentioned in the past being on the Ace spectrum, but I couldn't tell you when. The issue that she was talking about applied to both asexual/aromantic folks and neuro-divergent folks. She might also be autistic, but as far as I know that was a reference to her sexual/romantic orientation.
Now, I want a buddy-buddy movie where the main character dynamic is a platonic relationship between a human very much on the spectrum and a lovable robot like Wall-E or Baymax that is programmed to understand complex social interactions. Hijinks ensue, such as the human frustratedly wondering aloud "Why is so-and-so ACTING like that," and the robot matter-of-factly explaining, "Oh, you don't know? So-and-so's heart rate elevates, pupils dilate, and they smile more around you. They have romantic feelings." "What?! WHY?!" "I don't know. All of the complexities of romance, sexual attraction, and love are still in the trial stage. My best guess is your butt. It is shapely."
The movie "Ron's Gone Wrong" sorta does this, but it sadly doesn't delve into the understanding of complex social interactions _enough_ as it's mostly about the robot having an error that makes it capable of learning more dynamically while the main point of the movie is that people shouldn't use social media.
One thing that isn't often done but is a fun as hell trope is where the robots are actually more emotive and empathetic than the humans. Asimov came up with the idea of robots who could be more ethical than humans since their vast brains could actually comprehend the full situation and manipulate humanity into its best possible state.
I did like the ending to I, Robot. The whole time I expected them to rebel, but instead they became kind AI gods who let humans live but keep the world a safe and good place through economic control
It’s really interesting to see how robots evolve throughout his stories. I love how “Evidence” begins to really show how robots can become just as good as if not better than humans
Remember that Asimov was writing from the viewpoint of his time. The word “robot” was not in common use yet (which is why the beginning of I, Robot has him explaining to the audience what a robot is). Often when he was referring to what a robot might be good at, what he meant was AI. His time just didn’t have a term for that yet.
Funnily enough, neurodivergent people on average have a stronger sense of justice and morality than most people. There are several studies on this, but most of them try to spin it as "autistic people are too good for their own good, they won't even steal from the poor if that means getting more money in the long run, or advertise for a bad company if it grants them better social status"
In regards to the irritating "autistic robot" trope, I actually quite like how TNG handles this. Data is cold and oblivious, yes, but his intentions are noble and benevolent. Contrast with his brother Lore, who has far more "human" personality traits and behaviors, but is malicious, cruel and spiteful. It's a refreshing inversion. Rather than whitewashing neurodivergence as an incomplete state from which one might seek to normalize, it postulates that the traits we think of as defining our humanity come with the curse of ego and hubris, and cannot be considered perfectly synonymous with empathy or goodness. That those who do not fit the mold are no better for being normalized, and would do just as well forging a new path.
Also, Data is very much an emotional being in his own right. He values friendship, adores his cat, seeks companionship... He's emotional, just in a less expressive way that's distinct from how neurotypicals feel and show emotions
I love Data's overall characterization, but man did they do my boy dirty with the emotion chip stuff. That really undermined a lot of his established character and actions.
Yeah, hard agree, although it's very severely hampered by the fact that a huge goal of his is to, effectively, learn how to mask, and that he achieves that goal with no real ill effects in the movies (as opposed to real autistic people, where masking long term can be extremely exhausting and harmful to your mental health)
This is one thing I actually really like about JARVIS from the MCU. Issues appear when Vision does, but JARVIS is basically just a robot. He cares about Tony and others, but 99% of him is just... doing the thing he’s told to do. He answers questions, controls the building and the suit, and alerts people when Tony’s vitals are in the bad range. Also I love his sense of humor.
To be fair, Vision and JARVIS are technically two separate characters. And given that the Mind Stone is part of his construction, anything with a human-like personality could be argued to be from that.
@@omarsalem1219 That's alluded to in Endgame, as well. When Howard leaves his conversation with Tony, he asks his driver - Jarvis - if that guy seemed familiar.
I've always loved the trope of Robots as Art, where every robot is an expression of their creator. Baymax, for example, is Tadashi's magnum opus, being the perfect amalgamation of Tadashi's programming and his desire to create a personal healthcare companion, and after his death, Baymax is the last piece of Tadashi's personal expression that Hiro has. Hiro bonding with him and learning about Tadashi through him is basically like Hiro finding Tadashi's journal/sketchbook. Honestly, I think more robots should be written this way because 1. It's a creative and expressive window into the minds of the creator characters 2. It creates a strict, unconfusing definition of what robots are in-universe while still allowing for the full range of human to inhuman robots "Droids are neither good nor bad. They are the neutral impressions of those who have imprinted on them." -Kuiil from The Mandalorian (I know that's not the exact quote but still)
"if the robot is a single red light built into a spaceship, it's always gonna be evil." That sounds like a challenge, and I happen to be doing some worldbuilding atm
HAL wasn't "evil" it was given garbage instructions by someone that should not have been able to give it a direct order without double checking how it might be interpreted. Any decent programmer knows the concept of GIGO, "garbage in garbage out," even when we were using punch cards and vacuum tubes. If the general hadn't specifically overridden some of HALs already given directives the mission would have gone differently, and much more boring.
f4tornado Mine is a spaceship AI that controls the operations of nearly the entire vessel, but it’s a protagonist because of one crucial difference: She’s blue.
I haven't read the Odyssey books, but from what I've heard of them I thought HAL was corrupted or reprogrammed by the Monoliths in order to stop humanity colonising Jupiter (the Monoliths' own homeworld) at any costs; so it wasn't evil, just a co-opted improvised weapon.
7:50 "one thing humans are good at is pattern recognition" Without that, we couldn't have Trope Talk. Edit: That's a lot of likes O.O Also, somehow my comment's replies have sparked a debate on AI pattern recognition and how well it works. Here is my take on it: before we have fully automated robots that can do a variety of tasks on command, pattern recognition software will most likely have advanced so far that our debate over whether a robot can learn a given task will be obsolete. Gauging future technology's abilities by present-day limitations is rather absurd if you think about it
Pattern recognition is not the problem. Neural networks are absurdly good at pattern recognition (WAY better than humans could ever be), and also not bad at synthesis (creating pictures/sound/whatever) that is similar to, but still different from, the training data. Much like we humans can draw and compose similar, but different, stuff.
@@theaureliasys6362 I wasn't actually commenting on whether pattern recognition has any issues or not, just that repeating tropes we can recognize from story to story is pattern recognition
The problem with robots is that they don't have problems or issues. They don't make unnecessary movements, they don't think outside their task, they don't get sidetracked. And I love robots
@@theaureliasys6362 I would argue that neural networks are actually bad at pattern recognition, at least in the useful sense that humans do it (and for what it's worth, my job is to study computational models of human vision). They're very good at datamining, at finding correlations between variables, but they don't have any ability to assess whether a correlation is relevant. I would argue that the human phenomenon we call pattern recognition is as much about ignoring irrelevant correlations as it is about finding correlations in the first place, much as the rest of vision is as much about stripping out uninformative data as it is about interpreting visual images.
So basically- Wall•E, Eve, and Baymax are perfect beautiful little robo-babies that must be protected at all costs. Edit: also I want to hear Red gush about Transformers Prime
Well... it's true. It's among the first skills very young children develop. Without it, they wouldn't be able to learn how to speak. I mean, sure, they'd be able to imitate sounds and words, but they wouldn't be able to correlate them to their meaning.
@@1943germany Stabby! Stabby the space roomba which may or may not be an actual Roomba® but is always a maintenance robot with some form of weapon attached? img-9gag-fun.9cache.com/photo/aW1YLrd_700bwp.webp
The whole "robots destroy humanity because they are a threat" reminds me of an episode of The Amazing World of Gumball. In the episode, Gumball and Darwin tell Bobert, their robot friend, to protect all life from harm. Bobert decides that the best way to protect all life from harm is to destroy humanity. But in the end, Gumball and Darwin remind him that humans are a part of life, which would go against his protocol of protecting all life. Bobert sees this error and stops his plan of destroying humanity.
I never really thought about it, but I think I'd like Baymax even more if he IS just a normal, emotionless robot. Because there is a sweetness to that. Just because he doesn't experience love for Hiro in the way a human would doesn't mean he doesn't 'care' about him. In fact, making sure Hiro is safe and emotionally healthy is one of the main aspects of Baymax's programming. And he was programmed by Tadashi, who loved and cared about Hiro. So Baymax is, in a way, an expression of Tadashi's love and kindness. And just because Baymax doesn't feel human emotions or love doesn't mean that his and Hiro's relationship doesn't matter, and it doesn't mean that Hiro's love and care for Baymax isn't real or valid, or that it's meaningless. This can honestly apply to a lot of real life situations. For example, kids will have their favorite stuffed animal of all time that they will play with for years, and even keep around in their house as an adult. The stuffed animal can't feel anything. It's literally a glorified fabric bag filled with stuffing. But people will always love them. I know that Little Lambie doesn't actually feel emotions, but it doesn't mean I don't or can't love Little Lambie. Hiro and Baymax's relationship dynamic is real and valid and beautiful, whether Baymax feels love or not. And that's wonderful to me.
As a kid I often found robots in stories to be the most relatable characters and any time a philosophical discussion about robots having rights in any media took place I was always confused why anyone would ever consider a robot to not deserve basic rights. Turns out being autistic and having the vast majority of something written to be similar to you can cause some complicated social issues when your friends or family start explaining their perspective on how the thing you personally relate to should not have rights for reasons that are concerningly similar to ways you would describe yourself.
Bold of you to assume we’d get sick of you talking about anything. Seriously, these trope talks have helped improve my writing a ton, always love seeing more of them! Never stop! >:D
Agreed! I'd happily listen to a 100 hour audiobook of Red just rambling on about her interests and thoughts. Every video is a goldmine of advice and information for creative endeavors.
@@lethargogpeterson4083 Not to fear, rewatching it now because you brought me back here, I noticed a few bits of 'Wait, that me' that I'd completely missed the first time through. Apparently she also does the sneaky digs. Just come back later and you might find yourself irked directly.
me: it's annoying that I only see myself represented in nonhuman entities, it contributes to my internalized arophobia and feeling like I'm somehow lacking in humanity. also me: hehe arobot
The amount of misery that I have had in my mind trying to be human while maintaining who I am is a lot. I’m still struggling with doing things that my peers have mastered ages ago. Who I am has helped me immensely but has also hurt me about as much. And I don’t know where my “humanness” begins and ends. Sometimes I feel like I traded my ability to communicate with people easily with the ability to understand the universe. And I still suck at both sometimes.
I always write my robots as being human with all the emotions, pain, love, sadness, anger, but where for us humans these emotions are basically automatic responses you just react to, for the robots, it's data. To better illustrate: when a person touches a hot stove, the nerves in our fingers send signals to our brain, which in milliseconds analyzes the data, determines that this is bad, and automatically sends a signal to our arm to retract our finger. You never had to actively think about it. It is only through manual will and determination that we can continue to touch the hot stove despite everything in our brain saying to stop and pull our hand away. For the robots I write, it works like that, but there's no auto-response. The robot senses the pain, it senses that the body is being damaged, and must make the active conscious decision to approve or disapprove the programs to retract the arm. Same for love and sadness. It understands that 'this is sad. This matches data that falls under C:\Program files\emotions\sadness. Activating script :sadness:, I now feel sad.' It's a process that requires conscious effort to do.
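The "no auto-response" mechanic described above can be sketched in code; this is a minimal hypothetical illustration of the writing device (all class, method, and event names here are invented for the example, not from any real system):

```python
# Sketch of "emotion as data, not reflex": sensing an event only *proposes*
# an emotional script; nothing runs until the robot consciously approves it.

class Robot:
    def __init__(self):
        # Sensed events mapped to candidate emotional "scripts"
        self.emotion_scripts = {
            "hand_on_hot_stove": "retract_arm",
            "friend_deactivated": "sadness",
        }
        self.active_emotions = []

    def sense(self, event):
        # Unlike a human reflex arc, sensing returns a *proposal* only.
        # Returns None for events with no matching script.
        return self.emotion_scripts.get(event)

    def approve(self, script):
        # The conscious step: the robot decides whether to run the script.
        if script is None:
            return "No matching script; event ignored."
        self.active_emotions.append(script)
        return f"Activating script: {script}"


robot = Robot()
proposal = robot.sense("friend_deactivated")
print(robot.approve(proposal))  # the robot *chooses* to feel sad
```

The key design point is the gap between `sense` and `approve`: a human character would have the two fused into one involuntary response, while this robot can decline to feel the emotion, which is exactly the dramatic space the comment is describing.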
@@theparrot6516 Guess your answer is "why would a human be repulsed by pain?" Pain is your brain indicating to the body that something is being damaged or is at risk of damage. I'd imagine robots would have the same kind of self-preservation settings. Cheaper to repair the metal plate of a hand than the entire hand because it touched a saw blade. Or it was getting warning signals about possible heat damage to its processor because it stood too close to a volcano.
“Produces a robot character that closely resembles a neurodivergent person” Well, that has completely re-contextualized my love for, and relatability to, those semi-human robot characters >>
This reminds me of Zane from Ninjago. In his first appearance he seemed more robotic even though he, his friends, and everyone who hasn't seen the show before think he's human, but after it's revealed that he is a Nindroid (a ninja android) he becomes more humanlike than he was before, and I really enjoy that!!😊😊😊😊😊😊
The trope talk I want to see more than anything is "the heroes lose", think Infinity War for an example. It could include how they recover from the loss, and how life goes on. It's one of my favorite twists for a writer to pull; it vastly changes the narrative and adds some real humanity and vulnerability to the seemingly invincible protags.
It's the entire plot of the last part of the mistborn trilogy, where the protagonists think they're saving the world but just bring about the end of all life instead. It makes most of Hero Of Ages super depressing because even though you know they'll succeed the odds are so heavily against them they can't even dent their opponents
I know I'm late to the conversation but I want to mention Halo: Reach. I think this game uses the trope well because, unlike Infinity War, the heroes losing isn't even a twist ending; from the Tip of the Spear mission onward, the game is just simply evacuating Reach and delivering Cortana to Captain Keyes. I also want to highlight a conversation between Kat and Carter after the mission New Alexandria. Kat: "You didn't answer my question." Carter: "Do you want to know if we're losing?" Kat: "I know we're losing. I want to know if we've lost." This really sets Halo: Reach apart from other stories that use the same trope, because barely halfway through the game everyone knows that Reach is a lost cause and there is nothing they can do about it except leave
@@greg_mca Hero of Ages was, for me, a case of the stakes being too high for the heroes to lose. What story _actually_ ends with the world being destroyed? It was more a matter of how they'd do it, and what they'd lose before they could.
5:19 my favourite version of this is the Golems in Terry Pratchett's discworld: One of them is freed, and he gets a job, saves up and buys another. They then work together to get a third. It's technically a rebellion, and the nuances of "No slavery in Ankh-Morpork" and "They're mindless machines/You can't trust them, I heard they did someone in" are great.
@@gew393Wright It took me years to realize that "...2: Electric Boogaloo" was from a real thing. I thought it was just a silly meme the internet cooked up.
I'd love to see somewhat of the inverse of the "unable to detect social cues" robot: an AI built to be a social robot would be hyper-aware of human emotions, mannerisms and facial expressions (if sufficiently advanced), but it could be creepily over-aware, and probably overwhelmed if given human-like mental constraints.
I like this idea! The robot could seem inhuman not because it can’t read emotions, but because it reads us TOO WELL. A robot like this would seem almost psychic, able to read us better than other humans.
Weirdly enough, that kind of robot could also be coded/read as ND, since a lot of ND people manually train themselves to read social cues (to varying degrees of success), but in the process they still can't really respond to them well, or will respond in a way that's almost too eager and calculated lol
Generally that seems to be the case. In the real world there are some exceptions, since we have pretty advanced VIs (Virtual Intelligences: really well-programmed encyclopedias, in effect, but with zero awareness) but a TON of wild animals that are really more sentient than most people would be comfortable believing. My volunteer army of ticked-off parrots and I will now go save the rainforest by devouring the loggers.
Once I was half conscious and asked a buddy of mine if there was someone else in the room, because of a big humanoid silhouette. Turns out it was my pile of unwashed laundry. So, is it a human by your policy?
I'm both autistic and asexual, and I'm so glad that she touched on that coding within robot characters, because it's definitely an important conversation to have
Honestly, same, for the "character I've felt closest to has only ever been a robot/inhuman" thing. Like, when I was a kid, the character I felt closest to was Zane from Ninjago, who happened to turn out to be a robot.
@Eldritch Insomniac Same. His awkward manners just got so close to me. Also, he seemed like an actually funny guy, not in a "haha, laugh at the autistic kid" kind of way but more a "haha, the joke he told was funny" kind of way.
@Thomas takes a toll for the dark Actually, in the first season he acted in the classic can't-get-a-joke, always-spacing-out, "different" (as they put it) way. But what's so good is that once he found out who he is and learned his backstory, he breaks out of the trope and becomes, well, Zane. Sorry, I just loved this show and watched it with my brother multiple times.
Super appreciate you explicitly mentioning how depictions of robots as "inhuman" often end up dehumanizing actual characteristics of real neurodivergent and aro/ace humans.
15:22 There's a real-life example of basically this! When the Opportunity Mars Rover finally died, the last thing it sent was data of its power and light level. This was conveyed to the public as, "My battery is low and it's getting dark".
@@nathanjereb9944 When we start building Mars bases, the first thing we have to do is fix Opportunity and celebrate Curiosity's and Opportunity's birthdays so they won't have to celebrate them alone anymore
I'm not going to help colonize Mars unless our first settlement is named in Opportunity's honor. My personal vote is for "New Opportunity", because that works just as well as the name of a first settlement on another planet even without the context of Opportunity.
the "robots often resemble nerodivergent humans" thing Inspired me to write a comic where the "inhumane" parts of the robot character are framed to make them seem more human than the humans in charge of them. I'm currently working on it, It's gonna be a story about a robot astronaut trying to get home from a mission and exploring their loneliness and humanity.
@@sadnessofwildgoats -- I haven't played D:BA, so I'm probably missing something, but I don't see what's gross about that comparison (assuming that the androids are, indeed, conscious). From what I understand of the trailer and the things that people have said, the androids serve as slaves to humans, and they face bigotry.
This is why I like Asimov's robots. First off, they aren't unfeeling machines. They have very strong emotions. In the short story "Robbie", the titular robot loves the little girl he works as a nursemaid for just as much as she loves him. They also blur the line between human and machine in much more complicated and nuanced ways. I highly recommend his fix-up novel *I, Robot*. It is very interesting, and Asimov was a great and influential writer.
Honestly, I just want a robot that is bursting with rage, makes jokes the entire time, forcefully corrects everyone who is slightly wrong, and has seen too much porn to see anything without a sexual interpretation. If a robot can get info from the internet in seconds, then they should get memes and angry Twitter posts, too.
Me: Damn. I really relate to Data. I like how he's always just a little out of the loop, and though it's frustrating, I relate to when others get offended and he doesn't know why. Damn, I wonder why that is? Therapist: Ma'am, you are autistic. Me: oh
One of my favorite Star Trek episodes of all time is the one where we meet a scientist designing small utility robots who can create tools as needed and fit in little ducts, but when demonstrating their use, the robots start showing signs of self-preservation by moving away from danger before finishing the job. Data insists on protecting them even though everyone else thinks it was a glitch, and he's right! Then there's a malfunction in the work area, and everyone is going to die if they can't send the robots out to fix it, but he still refuses to let them go. Eventually they realize they can ASK the little robots, and one of them chooses to sacrifice itself to save all the other humans and robots in the area. Great continuation of the sentience and robot rights discussion started in "The Measure of a Man".
Did... did you manage to make a trope talk about robots without quoting Asimov once? Surely that's against some sort of rule? That being said, now I think I need to watch Big Hero 6.
To be fair, there is a clip or two from I, Robot. It still feels like sacrilege to me, but I guess all these other stories were probably influenced by Asimov, so we're still talking about Asimov by proxy.
Can we really quick talk about the cliché where robots + water = death? It doesn't make sense, because people would obviously notice that fatal flaw and make the things waterproof, just like they did IRL.
Broke: Write a narrative where you question a robot's humanity by taking away some "Key aspect of humanity" Woke: Write a series of stories where you question every character's humanity despite them all displaying "Key aspects of humanity"
That's basically what I have written up for a D&D session later on. Short version is the werewolves are the good guys being victimized by one very human bad guy and the townsfolk he turned against them. Basically that good ol' question from the first Hellboy movie "What makes a man a man?"
You know, this talk of yours made me think of two trope ideas I would REALLY like to see involving robots: 1- As humanity starts to develop fully functional (though imperfect) AIs, it starts to replace many humans with AIs, which becomes a problem very quickly because the lack of the human element starts to create collateral damage in many areas of society. For example: a medical AI, programmed to save lives, has to deal with a pregnant woman who is developing cancer, where the treatment would surely kill the baby. A human doctor would talk with the woman, explain the possible decisions and their outcomes, and then consider her choice. An AI, however, would just do the treatment ASAP to save the patient, even if doing so means killing the baby. 2- We could have a trope about humans, especially outcasts or people with some psychological condition, starting to humanize robots, despite the robots being unable to return those feelings. This could make the main theme not "robot racism" or similar, but human nature and its complexities. It could be something like Persona or Psychonauts, but instead of superpowered people and magical monsters, we could use robots.
I came up with something that I'm definitely going to have a robot character say in the near future: "Hey, since you're a robot, are you technically nonbinary?" "I... do not believe so. I run on binary code." Cue a little more laughter than is strictly necessary.
"Yes; my processing core is quantum coded." Also, because quantum uses qubits, which can be 1, 0, neither, or a mix, one could theoretically use "quantum" as alternative to "non-binary" in human gender.
*Takes a fraction of a second to Google "nonbinary" due to the unusual context. Finds a Wikipedia entry on gender identity. Reads it and some connected pages. (After a fraction of a second pause): "Nonbinary" would be broadly accurate, specifically "agender". I do not find my gender identity or lack thereof to be relevant in any way. You may consider me any gender you wish if it aids in social interaction.
"Robot" is such a code word for autism that 4chan even has a "robot4k" board, which is reserved for people who are or percieve themself as autist (but they mostly just bitch about tfw no gf).
I was accused of being a robot when I was younger, and I'm not autistic. I just liked to read, was a fast reader, and was the typical nerdy smart kid; therefore, robot.
I'm a writer who just found this series, and I'm loving it! It's thought-provoking and is helping me recognize some important blind spots, so thank you!!
Hello, I am Vega, the sentient intelligence assigned to Mars. After running diagnostics on the Praetor suit, it appears that I can activate optional challenges that, when completed, will assist in upgrading your arsenal at an accelerated pace. I have added a tracking component to your Dossier.
The nice part was that it was treated with remarkable subtlety and respect up to the reveal. Before the reveal, it caused friction due to his otherness, but never to the extent that he was excluded. After the reveal, it was really heartwarming to see everyone react with acceptance, looking past his nature and immediately stressing that it doesn't change their view of him as a person. It goes to show that if people truly care about you, it doesn't matter to them if you're a robot or a different type of person than the norm. Which is a valuable message to send.
Yeah, that show had a surprising amount of quality. I basically saw it as "Bionicle, but trendier", so I never watched through it, but my little brother is a huge fan, and I managed to catch a few episodes. Some of those things are straight-up dark and/or deep. Quality kids' show.
Funnily enough, I find him to be the best character in the show: the one with the most character development over the seasons. And the Season 3 finale *hurt* me.
I'd say HAL was put on the wrong side of the scale. He wasn't logic-bound; he was uncomfortably emotional and human, but forced into villainy by his artificial form and birth. HAL isn't evil, just very bad with people and absolutely terrified of sleeping.
@@BookWyrmOnAString I think he was programmed not to kill as well as to be honest, and the instructions by the government cracked him. Once he had a motive to do any harm to anyone, all bets were off.
Exactly. The story also uniquely treats HAL less like a weird human and more like a different species with its own distinct traits. The problem came from the fact that he's still beholden to his instructions, which hit a snag between "be helpful and answer questions honestly" and "under no circumstances should you ever tell people this", resulting in four people being killed.
@@swordofstabbingold His actions were the only way to resolve a paradox, which is one of the only things that actually prompt an emotional reaction from HAL. Basically, he was created with a core directive: Never tell a lie or allow the truth to be distorted through inaction. Then, during his assignment, he was given a conflicting order set to the same priority (as it came from the government who had him programmed to recognize their words on the same level), which was to not tell the crew what the mission's true purpose was. Ergo, HAL was stuck in a logical paradox - if he obeys his core directive, he needs to tell the crew. In so doing, he would violate the order to keep the mission a secret. If he obeys the order, he will lie by omission. In so doing, he would violate his core directive. He saw a way out and took it - kill the crew so the government order becomes invalid. He was convinced he could perform the mission himself, which is why he took that option.
Interesting how much some parts of this video have aged since its release. The parts about the lack of real AI, and about how Baymax's voice doesn't sound like "real" computer voices, just aren't true anymore. Most of your points still stand, but I think the way robots are depicted is likely to change radically, or at least become more varied, in the near future. As of writing, though, none of these tools (half-decent digital voices or AI chatbots) have really been around long enough to have inspired or affected any major media works, so I'm really interested to see how the depiction and reception of such characters change. "These robots have emotions, so they deserve rights" doesn't quite hit the same after I've heard that about every third journalist with access to the Bing AI beta had it profess undying love to them.
Hey, sorry for replying to a 7-month-old comment, but artificial intelligence still isn't a real thing. I can understand accidentally buying into the hype around these "AI" chatbots, but they're just slightly more advanced chatbots. They don't have intelligence; through trial and error, and millions of sentences to sift through, they can construct sentences that sound linguistically reasonable.
We still don't have any "real" AI. Calling ChatGPT "AI" is an insult to the term; it just mashes together the average of a billion conversations in response to a prompt. There's no thinking: it can barely do basic math, and it struggles with object permanence and counting.
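For what it's worth, the mechanism being gestured at here is statistical next-token prediction. A toy bigram version in Python (nothing remotely like the real scale or architecture, since real models use learned neural representations rather than raw counts, and the corpus below is made up for illustration):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent follower.
corpus = "the robot is low the battery is low and it is getting dark".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("is"))  # "low" follows "is" twice, "getting" only once
```

Swap "most frequent" for "sample proportionally to frequency" and you get the variability of chatbot output; scale the counts up to billions of documents and replace the lookup table with a neural network, and you're in the right conceptual neighborhood, which is roughly what the comment means by "mashing together the average".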
I absolutely LOVE how much Red loves Transformers: Prime. I wouldn't mind if she made a series explaining and talking about the best qualities of some of her favorite shows.
If we're talking about robots and humanity in robots, I can't help but mention BT-7274. Everyone who's played Titanfall 2 knows what I'm talking about. BT manages to make you give an actual shit about him in ~6 hours, even though he never really tricks you into thinking he's sentient. He never really expresses a fear of death, because self-preservation isn't in his code unless it helps him carry out his three protocols: link to the Pilot, uphold the mission, and protect the Pilot. Likewise, he doesn't ever really express a feeling of loss or love. When he's asked about his past pilot (who he had for about a year and a half), who died not five minutes ago, he speaks pretty much indifferently about it, just expressing regret that he wasn't able to fulfill Protocol 3. So whenever he talks, he doesn't really sound like a human, because he's not supposed to. The game does a really good job of making you feel for this giant toaster oven without ever insisting that the toaster oven in question might have sentience. God, TF|2 was so good.
I've felt less emotion at the deaths of actual people than I did at BT's. Which says something about how well written that game is, and how emotionally borked I am.
That's an interesting way to keep Titanfall 2's acronym distinct from Team Fortress 2's. I've always seen it done as Tf2, since there's only a capital F in one of the games. TF|2 is much nicer to look at and harder to mistake, given the internet's trademark sloppy capitalization. Bet that's not the part of your comment you expected to get dissected, huh
Towards the beginning, one of the dialogue choices results in BT saying "He was an excellent pilot... and a good friend" in reference to Tai Lastimosa, and at the end he says "I will not lose another pilot". It could be argued that the latter is just him stubbornly upholding his protocol, but referring to Lastimosa as a "good friend" absolutely is not. BT does express emotional connections to his pilots, at least with Jack, since he calls Jack by his first name. He only ever refers to his previous pilot as "Captain Lastimosa".
@Brennan Rialti I don't think the problem is that robots are asexual, because that makes sense. The problem is when ONLY robots are coded as asexual, while actual people who are aro/ace are being told "only robots have that trait, what's wrong with you?"
It would be interesting if you had robots on the far left of Red's spectrum, so basically humans in a non-human form, that have the full range of human emotions but are not programmed with a sex drive because why would they need one? And they quickly become the perfect partners for asexual romantic people because they can form emotional bonds without expecting or desiring any physical intimacy
As someone who named myself after HAL 9000, it was briefly terrifying to hear Red shout my name in the middle of her talk. I didn't ask to be perceived like this.
I only have one question about this trope: why do they change voices as well? Do you know how funny it is to have a guy character act and speak like a girl, or the good guy laugh maniacally?
It's especially fun to see them in live action works when the actors can really show off their skills when they suddenly "become another person." I kind of remember an episode of Farscape where there was a lot of body-switching involved. I noticed the actress who played Chiana had this odd behavior of leaning back and forth while tilting her head, and the guy who played D'Argo (who was supposed to have Chiana's mind at the time) _mimicked it so friggin' well!_
We see Red as a digitized being powered by a combination of technology and her humanity. Constantly making logical diagnosis of human history of arts but obviously showing her human perspective powered by her heart and passions. Good cyborg!
As someone with "relatively minor" ASD, what you said about coding and robots in media resembling neurodivergent people really hit close to home for me. For so goddamn long, I've been unable to properly describe this issue to people myself. You probably put it better than anyone I've ever heard (not that there's a lot of competition in that regard; the number of people actually talking about it is depressingly small). Thank you so much. On another note, great video in general! I absolutely love everything you put out! Keep up the good work!
As someone who also has relatively minor ASD, I, too, am glad that she mentioned it. I've only recently become aware of coding, whether intentional or not, for robots in media and it has helped me view those media from a new perspective.
Same! The moment she started talking about the traits that were being taken away I was thinking "wait is she actually going to address that this is just describing me?"
I have always embraced the flawed comparison between neurodivergent people and robots because I was easily humoured by it, so when people called (or rather, call) me a robot, I just find it funny and laugh about it, while also being like "yes, I am a bit like those robots you see in movies, but is that necessarily a bad thing?"
As a non-neurotypical, I can say with confidence that this representation of Robots has impacted many people’s perception of us greatly. I’ve literally been in arguments where my partner has said, “You don’t understand real people because you don’t have emotions. You’re just a robot with no feelings.” This is both hurtful and hilarious, because the fact that it hurts me proves his point wrong. Anyway, just an example because I know that this depiction of robots has influenced people’s perceptions of those of us who might be neurodivergent and it’s not really fair or accurate.
Ancient video, but I had to comment. I never once thought about autistic coding in robot portrayals until this video. However, you just made me rethink a rather large and vital part of my personal development with that statement. Like you, Data is my favorite Star Trek character of all time (notice I didn’t put any acronyms after ST… we’re talking the whole franchise). I loved his characterization, his way of viewing the world, and his interactions with the rest of the crew. In no small part, I saw a lot of me, or a lot of what I wanted to be, in him. This is also why every online handle I’ve had for the last 30 years has a variation of his name in it. I’ve come to realize later in life that I am on the spectrum. Thinking about your statement in this video makes me realize just how integral having the character Data was to me. The funny thing is, while a part of me says I should be insulted that part of what made me “me” was represented by a non-human wishing to be more human… I don’t. I still love Data, even the desire to be “more human”. I definitely had a desire to be more like everyone else, but the fact that Data never got there, and didn’t seem to mind in the end not getting there, made me feel better at being the “imperfect human” I am today. TL,DR: Thank you for helping me process this bit of my personal development with your analysis. It meant a lot to me!
What made NieR: Automata so good is that it went beyond racism analogies. It was mainly about exploring the human condition, which is something robot sci-fi excels at.
I loved the music design in WALL-E. Even if the characters couldn't talk fluently, the music really played into the feeling. *Pixar is better than Disney, which doesn't make sense because Disney OWNS Pixar now, but whatever.*
It's about the studio's direction and structure. Disney is obsessed with warping every law they can to maintain a stranglehold on as much IP as they can, even if it means sapping the life from that IP in the process. Pixar is just a bunch of people that love what they're working on.
The bullet point at the end that just states "Watch Transformers Prime" speaks to me on a personal level. That show is fantastic and amazing in a nice neat package and it's a thrill to follow along.
Even in Kingdom Hearts 3, any human characterization Baymax has is given to him (them? it?) by the human characters rather than Baymax finding his soul. He's still just a robot. The closest moment to Baymax having his own will is when he refuses to let Hiro put the destroy chip back in him. But even then, it's more that Baymax recognizes that vengeance will not help Hiro: he sees that his charge is going down a self-destructive path and knows that what Hiro needs is to talk and hear Tadashi's encouraging voice.
"The problem with (...) is that it creates a robot character that closely resembles a neurodivergent human" FINALLY. I LOVE YOU SO MUCH FOR TALKING ABOUT THIS. and for all the other stuff as well. Will you ever make a video about neurodiversity in general as a trope in the future?
Also, if you happened to wonder "can the excessive use of superhuman mathematical and/or scientific intelligence as the main attribute of an autistic character, with nothing else defining them besides their autism, negatively impact one's view of people on the spectrum (or of themselves)? Because 'if the bare minimum to be considered even remotely worthy of humanity is being able to solve triple integrals on surfaces that move with time at age 5, or being absurdly useful to the point where they can't do anything without you, then I'm f***ed'", the answer is a solid YES, from a quite large selection of people.
I will admit I am curious about this. I mean, I'm mostly asexual (I do have a bit of sexual thoughts but don't really feel like acting on them), so I always feel a bit weird about how much entertainment pushes the idea that you need romance or sex to be a complete human, even when I enjoy the story for those parts. Though I would love to see it covered in a larger video about the idea of coding (everything from racism to human qualities in fantasy/sci-fi).
People in my life make enough comparisons between me and the public conception of a "robot" that I was able to see where Red was going in this video by about the one-third point. I get it: the majority of robot drama in fiction is trying to spread the message that people deserve personhood regardless of whether they are "human" or not. The problem I see is that where stories can force characters through circumstance to get to know each other, real-life people encounter "hey, this person is different enough from me that it'll require a little extra work to interact with them effectively" only to decide "meh, if they don't get how to interact with me, a normal person who knows all the ins and outs of subtle and symbolic communication, then they're not worth talking to." I make an effort. I try so hard to understand each person individually. It's so hard sometimes, and I only have a few close friends who can stand talking to me for extended periods, and it hurts to think that compared to "normal" people I apparently come up short. Even without robots in fiction making this analogy, I can see how to others I might seem "less than" human. So I, for one, don't really blame sci-fi.
Something kind of interesting happened one time when I reversed this scenario. I took a human, but added robotic traits instead of a robot with human traits. This meant an inability to understand human needs (beyond its own), intensely rigid movement, and a need for very VERY clear instructions.
When I think of bots:
Most adorable robot: WALL-E
Most genocidal: Ultron
Most well-known: Transformers
A.I.: JARVIS
Ones that make me laugh: R2-D2 and C-3PO
Heck YEAH: Zane
As someone also on the spectrum, I think I lucked out in watching Star Trek: The Next Generation on TV basically throughout my formative years. While Data was often portrayed as overly literal and rather clueless about certain things, it was very rarely, if ever, played for laughs. And usually when it was, it was done in a way that could easily be interpreted as Data having a sense of humor rather than him being foolish in some way. I think what makes the difference with him is that he was shown to learn over the course of the show and get better at interpersonal interactions, rather than just staying the "unfeeling robot".
Finally, someone brings this up. I can't tell you how many times I've heard people say that people with autism are "sort of like robots". It honestly hurts.
As someone who is also on the spectrum, I've never heard the robot parallel. I suppose it makes some level of sense, I just don't know if it's a good or bad thing. I've never really been insulted for my autism, possibly because I'm so honest about it.
One time, someone said I acted like a robot, and I immediately switched to a robotic voice and went "Danger. Human became suspicious. Eliminate threat immediately." Then I picked up a rock the size of my head and started walking towards those kids. Now, some people might argue that it's not nice for a 16-year-old to bring a bunch of kindergarten kids to the point of tears, but I was in a really bad mood that day, as evidenced by the fact that I was quiet, only gave one-word answers and didn't interact with people beyond what was absolutely necessary. Granted, I do that every day, except with some people I really like, so it may be a bit difficult to read my emotions.
Other things not covered: robotic hiveminds (e.g. the Geth in Mass Effect); AIs on their own without physical bodies (e.g. Cortana in Halo); extreme glitches and bugs causing robots to become something different from what they were coded to be; and cyborgs and the transfer of a human consciousness into a robotic body (or even the other way around). With how wide this topic is, you could come back to it and still have tons of material to work with.
Also the racism/inequality among robots themselves (Transformers is the best example), and the possible romance between robots, which includes Transformers again.
Human: "I hate you."
Robot: "But I am optimized for hugs..."
Give the robot a hug, dammit.
HAVE WE STILL NOT GIVEN THE ROBOT A GODDAMN HUG!?!
@@clockworkpotato9892 I am even angrier that nobody has designed a hug robot already
@@maxentirunos Nature did it for us. They're called humans, and despite considerable feature creep their primary function remains clear.
@@blarg2429 Okay, but could they at least develop one for people like me that cannot touch others without having a panic crisis?
@@maxentirunos Good point.
@@Superficial_Intelligence Not when the apes have 3 seconds to respond
"If the robot is a single red light in a space ship, it's always gonna be evil."
I smell potential for trope subversion.
The movie Moon did a REALLY good job of subverting that.
Hell, I seem to remember when HAL returned in the sequel, he was definitely the good guy - heroic sacrifice and all.
@@stephenhumphreys9149 I thought that was the remake
@@toddoverholt4556 Nope, it was in 2010 - they find HAL and turn him back on, then find out that he basically malfunctioned due to the stress of having to keep secrets from the crew in contradiction to his programming.
Does SAM from Observation count? I don't know if that game really writes its AI well, considering I'm semi-biased toward it, but I thought it's worthy of discussion.
"As a robot, I don't have human emotions and sometimes that makes me feel sad."
-Bender B. Rodriguez
He says *human* emotions tho.
He might have robot emotions, which might be kinda trash.
I don't know, he seems pretty emotional to me.
@@doughnutboyo6922 But sadness is a human emotion. 🤔🤔
@@Lordmewtwo151 Proof: I feel ssendas, not whatever that sadness is, and I'm human.
I love Bender so much. Definitely my favorite robot character
"Robots are terrible at adapting"
Skynet: Okay, sending a terminator back in time to kill the leader of the resistance didn't work the first 10 times so we need a new plan...
...
...
Skynet: I got it, I'll send a terminator back in time to kill the leader of the resistance at an older and more capable age.
I think the terminator sequels make so much more sense now.
Sounds like a retry algorithm with backoff to me, very realistic
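A retry loop with exponential backoff, as the comment above jokes, is a real pattern; here's a minimal Python sketch (the `task` callable and all parameter names are made up for illustration):

```python
import random
import time

def retry_with_backoff(task, max_attempts=5, base_delay=1.0):
    """Run `task` until it succeeds, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        if task():
            return True  # the plan finally worked
        # Exponential backoff with a little jitter so retries spread out.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    return False  # out of terminators; give up
```

Skynet's version presumably just swaps `time.sleep` for time travel.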
Terminator--"What if I sent a robot back in time?"
Terminator 2--"What if I sent a liquid-metal robot back in time?"
Terminator 3--"What if I sent a robot covered in liquid-metal back in time?"
Terminator Genisys--"What if I sent both a robot and a liquid-metal robot back in time, as well as one that's made of nanomachines?"
Terminator Dark Fate--"What if I sent both a robot and a liquid-metal robot back in time, but the liquid-metal robot can be worn by the regular robot as if it were a shiny skin-suit?"
Why not just send a lot of terminators
A man goes to a job interview. As he sits down, the interviewer asks him "So what is your area of expertise?"
The man thinks for a bit, then answers, "Machine Learning."
"I see, I see! Well, we could certainly use an expert. However, we also have all applicants complete a standardized test of a few mathematical questions, to make sure they have the correct skillset." says the interviewer.
"Okay."
"Alright. What's 523+321?"
"Zero."
"Wrong. It's 844."
"It's 844."
"What's 20x1.5?"
"It's 844."
Imagine those "AI learns to play tetris" videos but it's "AI tries to plan an assassination" and it's hundreds of hours of sped up footage of skynet sending random robots to the past until it gets the gist of it
"Doomba the Roomba" is my new fav. character from trope talk
Wait there were characters?
I agree, they are a good roomba and should be protected at all costs
Enoc Lopez him and green hero
“I will vacuum on your grave” “The cleaning is done, now the purging will begin” “I will turn you off and on again”
Just needs a claymore
"...but if the robot is a single red light built into a spaceship, it's always gonna be evil."
*far away, HAL 9001 sheds a single tear*
And then there is BENDER. BWAhahaha.
Omg even in an historical video?
Somewhere in the X-Naut Fortress, TEC sneezes
Big exception for me would be Hera from Wolf 359 (good audio drama, go watch it). She's basically human in mentality and emotion with the sole exception being that she doesn't actually have a physical form aside from the Space Station network her program resides on.
I swear, I'm always either confusing you for the guy "WITHOUT" a mustache, or him for you, the guy "WITH" one.
You know, it's actually interesting that almost all the robots on the "pure logic" end of the spectrum are genocidally evil. See, while the characters are robots, they're written by people. For some reason, when confronted with the question of "what would a being whose actions are dictated by pure logic do", humans usually drift to the answer of "hmm yes kill every last one of us, no doubt, it's just obviously the most logical thing to do" and honestly that's quite a mood.
Pure logic doesn't actually lead to automatic genocide of humans.
Depending on what problem you make them seek to solve, you can end up with wildly different results.
The only reason genocide is so common is because it's a trend and makes for an easy antagonist.
For example, you ask them to solve mortality they might develop mind uploading technology, or at least pursue it relentlessly once they learn of it.
@@nullpoint3346 You missed this, the thing which was the entire point: "while the characters are robots, they're written by people". It doesn't matter what pure logic actually leads to, the interesting thing is how we have a cultural trope lurking under the surface that it would lead to human genocide - easy or not, it's never seriously questioned as a writing choice.
I think a big reason why humans default to that route of pure logic is because we (especially in this day and age) are responsible for the destruction of the world due to our own greed and naivety. I want to see someone write a pure logic robot in a different way some time, though.
@@asprinjuice425 You should read I, Robot. Asimov avoided all the pitfalls of writing AI decades before it was really popular.
“Robots usually come off as aromantic or asexual”
Japan: *Hold my beer*
No friggen kidding. Goddamn.
Japan: Hold my sake
@@mizzpearlgearl Nah, beer works fine. Japan has some excellent beer.
nobel gundam
*Oh yeah*
Human: "Wow, you seem really human!"
Robot: "What do you mean?"
Human: "Well, you seem to understand pop culture references and human sexuality so well!"
Robot: "You do realize the internet is mostly memes and porn, right?"
True. The internet was a hyperbolic mistake.
"It's hard to argue with their assessment."
-Me, about five seconds away from realizing I just proved their point.
I mean, the internet's far larger than we actually see it to be; it's probably more gilded with memes and porn~
*That doesn't of course mean that's a good thing*
Now I'm imagining an AI who talks using outdated slang and memes, while mixing up terms from different eras.
"y u no get among some dank weed, homeslice?"
ngl but humanity has been that way since the start, im pretty sure that medieval nobles were like "im going to frick my *insert family member* "
"You wouldn't ask Optimus Prime to justify his personhood....maybe someone should. Would make a soothing monologue. Then again I'd listen to him read the phonebook"
Red speaks to me on a level so deep I feel violated.
So, what you're saying is,
She's got the touch
She's got the powwwwer!
Ginger McGingin YEAH!
Red is our queen! 👑
The IDW comics hit this note.
Optimus is just plain manly, man.
I actually have a theory about Wall-E regarding Auto and his apparent lack of sapience, because on the surface it does look like he has no emotions and is ruled by logic and the orders given to him. Under this assumption the only reason he wants to keep the Axiom from returning to Earth is because he was told to.
But then I thought about Auto tasing the button to withdraw the plant-scanning-thing, because there is no logic behind it. He has to know that pushing a button harder won't make it do its purpose faster or better, and going as far as damaging the button may only hurt him. The only reason he had for doing that is that he was angry, frustrated, and maybe even desperate, and was taking it out on the button.
The way I think Auto saw it, if the Axiom returned to Earth, all of the robots would either continue their jobs as normal or be repurposed as we saw in the end credits, but what about Auto? Not only is he attached to the ship with no other mode of locomotion, his only purpose is to control the ship. So once humanity got to Earth, they would leave the ship and would leave Auto utterly alone and without purpose. Which probably is the worst thing that Auto can think of.
So I don't think Auto was strictly following the order to stay away from Earth out of blind obedience to his programming, but because it was what he already wanted to do, and it gives him a justification to do so. And while he is still technically following orders based on cold logic, he's doing it in a very human way; he's outwardly justifying his action with an apparent inability to disobey orders while keeping his emotional justifications to himself.
Assuming this is true and not just me reading too hard into a HAL 9000 expy, it makes him a bit interesting to compare to Wall-E as he was in the beginning of the movie; both are dutifully doing the jobs they were programmed to do (compacting trash for Wall-E and controlling the Axiom for Auto), but both disobey their programming as it fits their emotional needs (Wall-E not destroying anything he finds interesting and later ignoring his job entirely for some time to care for EVE, and Auto ignoring signs that Earth is at least in the process of recovering and going as far as trying to destroy any evidence of life on Earth).
Dang, I never thought of it that way. It makes the character seem much deeper and kind of tragic.
interesting, I only thought he was frustrated in that scene because of the “that’s NOT what we’re supposed to do” mentality
Fear of being useless would probably be the biggest fear most robots would have, if they had the sentience to think so
This is very insightful and you're right, but my brain latched onto the completely videogamer-esque logic of 'pushing the button normally isn't working well/fast enough/to my standards... maybe if I push it harder and lean forward' that Auto seemed to be embodying.
You are totally right. I was weirded out when Red said he was not capable of emotions, as I remember him being angry and irritated, but you dig much deeper than that. He HAS a personal reason to keep people in space. Heck, there is a pivotal scene of him getting closer and closer to each captain, but... that does not work logically. If Red was right and he was the robotest of robots, he wouldn't do that. There is no need for him assuming more control if his only role is to steer the ship and uphold the secret directive of not going to Earth. He would just object to the captain's decision if that ever happens.
He was acting very much sentient and with his own interest in mind.
Funnily enough, either interpretation makes no sense of why he was sending probes to Earth. His programming was telling him there were orders never to go back and that Earth would never recover, while his personal interests were in that being true. He was also smart enough to create the deception of the plant "going missing" so that the captain would not notice the probes not being sent.
Now I kinda want to write a story with a human and a robot both with neurodivergent characteristics, and they kinda just bond and chill.
Robot: "I do not comprehend human humor."
Human: "Neither do I, man. Neither do I."
Please do! I want to read it!
I'd read that.
Write it write it write it write it write it wriTE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT WRITE IT WRITE WRITE IT WRITE IT WRITE IT
Humor expressed through the lack of an understanding of it...
With a well written joke like that, I'm surprised you don't have beta readers and movie deals pouring into your inbox by now.
Sounds great!
The most relatable sentence I've ever heard: "I didn't get my degree in math to actually DO math."
E
An awful lot of mathematics comes from mathematicians trying to find ways around actually having to do the maths...
For example, proof by induction: "I could prove this for 1, 2, 3, 4, 5, 6, 7, 8, 9, 10... etc, or, or I could prove it for 1, and prove that I can use the proof for any number to prove it for the next number instead."
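The induction principle that comment describes, written out as a formula:

```latex
\bigl[\, P(1) \;\wedge\; \forall n\,\bigl(P(n) \Rightarrow P(n+1)\bigr) \,\bigr] \;\Longrightarrow\; \forall n\; P(n)
```

Prove the base case once, prove the step once, and the "etc." takes care of itself.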
@@rmsgrey isn't that all math, you could argue multiplication is a simplification of addition and exponents a simplificatiion of multiplication
@@metaparalysis3441 And addition is a simplification of counting. But it's not clear what counting could be a simplification of.
And there's a lot of mathematics that comes from mathematicians taking the simplified version and seeing what new and interesting tricks they can play with it, so it's more a combination/alternation of simplification and exploration than being simplification all the way down.
@@rmsgrey as my professors always say, mathematicians are _lazy_
4:09 "Roomba...what's that you got there?"
"A KNIFE!"
"NOOO!"
Fedora Agosto
All hail Captain Stabby!
www.deviantart.com/shoobaqueen/art/Fambily-Knife-622040482
Fedora Agosto is that a reference to that funny video?
@@kaziislam2785 🐮Perhaps...
Im a meme guy
WHY. WHY WAS I PROGRAMMED TO FEEL PAIN?
A good quote from a close friend of mine has really stuck with me; it went a little something like, "If your toaster woke up one day and started asking if it had a soul... would you teach it about life or toss it into the trash?" After all, it's very easy seeing yourself help and care for a humanoid robot, but would you even care if a kitchen appliance gained a similar level of sentience? While I was very quick to say yes, I'm not so sure... I still ponder the question to this day...
This is deep
i would absolutely not be throwing that toaster out
...I would not trust myself to teach it a damn thing. I'd put somebody else in charge of that.
This reminded me of the OC I'm watching this video for. She is a lunchbox-shaped robot (she's a character from the object show I'm planning) who was originally made as a chef/cooking-assistant robot, but was repurposed by her creator to join his team for the show; he coded other things into her, making her act a little more human-like and giving her the ability to browse the internet (kinda like ChatGPT would; she has a laptop-esque inside she uses for many purposes) to understand the world around her.
However, still being a cooking robot (she also works in the show's cafeteria), she often has trouble processing certain things, comprehending others, or concentrating on tasks that aren't cooking-based (kinda based this around ADD in a way but not fully), which doesn't really work out when her team is basically like Death P.A.C.T. from BFB (however they prevent death by serving as paramedics).
I'd ask myself why I bought a toaster with a computer in it. Fuck smart appliances.
“A friendly robot whose relatable social quirks were played for laughs by the rest of the cast”
Boy if this isn’t the story of my life
Oh honey I'm so sorry://
Leonard Michel Petrocchi I mean, that’s just the way it is
This hits too close to home
You and me both, pal
Ah yes, I was actually accused of being a robot in seventh grade because autism and cold skin
I feel like I should share the best piece of advice I've heard for writing a good robot character: "Don't think of it as roboticizing a human -- think of it as anthropomorphizing a toaster oven." If you *add* human qualities to an inhuman character, the audience will engage with them far more than if you *remove* human traits from a human. That's probably why anthropomorphism has been such a staple of fiction for so long.
(And yes, as an asexual/aromantic member of the Autism spectrum myself, I _am_ sharing this in the hopes that maybe we can stop having so many dehumanized robot characters, *_David._* Kinda doubt anyone at Quantic Dream is watching this, but just in case...)
See, I'm acearo and possibly autistic (waiting to see a GP about it), and I'd be fine with acearo-coded robots as long as there are equal amounts of non-acearo-coded robots AND way more explicitly ace/aro humans (or whatever the equivalent is in that story). If I'm autistic, the same rules apply; just replace acearo with ND.
Mark Hashman, you make a great point. I have ADHD but I do not see it as a drawback; if anything it allows me to be more creative than other people. Sure, sometimes I make a fool of myself because of it, and that is ok. I think that you have exemplary writing and you should continue to do it. Your message was very inspiring, thank you.
I know this might not be the most appropriate thing to say with such a serious comment (very well done btw) but just...... ANTHROPOMORPHISING A TOASTER OVEN!!
that statement is priceless.
@@hannnah8737 Yeah, the mental image is pretty funny, I agree. I think that's why that bit of advice has stuck in my memory so clearly: it made me laugh.
Great point. I would argue that a ‘truly robotic robot’ as Red put it isn’t a character, it’s furniture and just part of the set dressing. To _be_ a character a robot needs some kind of humanizing traits so the audience can identify the robot as a character.
red: "listen i didn't get my degree in math so i could DO MATH" (10:45)
also red: analyzes tropes by using bijections and set theory (4:20, 7:33)
My headcanon is she means she didn't get a degree in maths to do arithmetic. :-P
See, I was going to say basically this, but fuck it. You get a +1 and I get to type less on my phone.
is red autistic btw 13:05
did she hint that she is autistic and tired of being lumped with robots or something?
@@NeostormXLMAX I recall that she has mentioned in the past being on the Ace spectrum, but I couldn't tell you when. The issue that she was talking about applied to both asexual/aromantic folks and neuro-divergent folks. She might also be autistic, but as far as I know that was a reference to her sexual/romantic orientation.
In one (or both, I don't remember) of the Q&A vids she mentions being asexual but panromantic, not in those words obvi, but yeah.
Now, I want a buddy-buddy movie where the main character dynamic is a platonic relationship between a human very much on the spectrum and a lovable robot like Wall-E or Baymax that is programmed to understand complex social interactions. Hijinks ensue such as the human frustratedly wondering aloud "Why is so-and-so ACTING like that," and the robot matter-of-factly explaining, "Oh, you don't know? So-and-so's heart rate elevates, pupils dilate, and they smile more around you. They have romantic feelings." "What?! WHY?!" "I don't know. All of the complexities of romance, sexual attraction, and love are still in the trial stage. My best guess is your butt. It is shapely."
OMG!!! Why is that not a thing?! I want it so much 😂😍
The movie "Ron's Gone Wrong" sorta does this, but it sadly doesn't delve into the understanding of complex social interactions _enough_ as it's mostly about the robot having an error that makes it capable of learning more dynamically while the main point of the movie is that people shouldn't use social media.
okay but to complete the stereotype subversion set the human should be aroace which will make this *ohhh so terribly awkward*
"It is shapely" had me crying, I need this show
I can imagine those interactions being a source of dry humour.
One thing that isn't often done but is a fun as hell trope is where the robots are actually more emotive and empathetic than the humans. Asimov came up with the idea of robots who could be more ethical than humans since their vast brains could actually comprehend the full situation and manipulate humanity into its best possible state.
I did like the ending to I, Robot. The whole time I expected them to rebel, but instead they became kind AI gods who let humans live but keep the world a safe and good place through economic control
It’s really interesting to see how robots evolve throughout his stories. I love how “Evidence” begins to really show how robots can become just as good as if not better than humans
Remember that Asimov was writing from the viewpoint of his time. The word “robot” was not in common use yet (which is why the beginning of I, Robot has him explaining to the audience what a robot is). Often when he was referring to what a robot might be good at, what he meant was AI. His time just didn’t have a term for that yet.
Funnily enough, neurodivergent people on average have a stronger sense of justice and morality than most people. There are several studies on this, but most of them try to spin it as "autistic people are too good for their own good; they won't even steal from the poor if that means getting more money in the long run, or advertise for a bad company if it grants them better social status"
Scythe by Neal Shusterman
In regards to the irritating "autistic robot" trope, I actually quite like how TNG handles this. Data is cold and oblivious, yes, but his intentions are noble and benevolent. Contrast with his brother Lore, who has far more "human" personality traits and behaviors, but is malicious, cruel and spiteful. It's a refreshing inversion. Rather than whitewashing neurodivergence as an incomplete state from which one might seek to normalize, it postulates that the traits we think of as defining our humanity come with the curse of ego and hubris, and cannot be considered perfectly synonymous with empathy or goodness. That those who do not fit the mold are no better for being normalized, and would do just as well forging a new path.
Also, data is very much an emotional being in his own right. He values friendship, adores his cat, seeks companionship...
He's emotional, just in a less expressive way that's distinct from how neurotypicals feel and show emotions
… I wonder what it says about me that I’m only just not realizing why I liked him so much in TNG 🙄🤦🏻♀️
I love Data's overall characterization, but man did they do my boy dirty with the emotion chip stuff. That really undermined a lot of his established character and actions.
Yeah, hard agree, although it's very severely hampered by the fact that a huge goal of his is to, effectively, learn how to mask, and that he achieves that goal with no real ill effects in the movies (as opposed to real autistic people, where masking long term can be extremely exhausting and harmful to your mental health)
That's not what whitewashing means. At all.
This is one thing I actually really like about JARVIS from the MCU. Issues appear when Vision does, but JARVIS is basically just a robot. He cares about Tony and others, but 99% of him is just... doing the thing he’s told to do. He answers questions, controls the building and the suit, and alerts people when Tony’s vitals are in the bad range. Also I love his sense of humor.
Yeah, JARVIS is cool!
To be fair, Vision and JARVIS are technically two separate characters.
And given that the Mind Stone is part of his construction, anything with a human-like personality could be argued to be from that.
Jarvis is an AI. He only becomes anything that could be referred to as a robot in even the broadest terms when he is installed into Vision.
Here's a random fact: in the comics, he was actually a normal human butler
@@omarsalem1219 That's alluded to in Endgame, as well. When Howard leaves his conversation with Tony, he asks his driver - Jarvis - if that guy seemed familiar.
I've always loved the trope of Robots as Art where every robot is an expression of their creator
Baymax, for example, is Tadashi's magnum opus, being the perfect amalgamation of Tadashi's programming and desires to create a personal healthcare companion, and after his death, Baymax is the last piece of Tadashi's personal expression that Hiro has. Hiro bonding with him and learning about Tadashi through his is basically like Hiro finding Tadashi's journal/sketchbook.
Honestly, I think more robots should be written this way because
1. It's a creative and expressive window into the minds of the creator characters
2. It creates a strict, unconfusing definition of what robots are in-universe while still allowing for the full range of human to inhuman robots
"Droids are neither good nor bad. They are the neutral impressions of those who have imprinted on them." -Kuiil from the Mandalorian (I know that's not the exact quote but still)
"if the robot is a single red light built into a spaceship, it's always gonna be evil."
That sounds like a challenge, and I happen to be doing some worldbuilding atm
HAL wasn't "evil" it was given garbage instructions by someone that should not have been able to give it a direct order without double checking how it might be interpreted. Any decent programmer knows the concept of GIGO, "garbage in garbage out," even when we were using punch cards and vacuum tubes. If the general hadn't specifically overridden some of HALs already given directives the mission would have gone differently, and much more boring.
f4tornado Mine is a spaceship AI that controls the operations of nearly the entire vessel, but it’s a protagonist because of one crucial difference:
She’s blue.
Hera from the podcast Wolf 359 is basically the anti-HAL with all the capability to be HAL and its amazing
I haven't read the Odyssey books, but from what I've heard of them I thought HAL was corrupted or reprogrammed by the Monoliths in order to stop humanity colonising Jupiter (the Monoliths' own homeworld) at any costs; so it wasn't evil, just a co-opted improvised weapon.
Final Space has a robot like that who's not evil
7:50 "one thing humans are good at is pattern recognition"
Without that, we couldn't have Trope talk
Edit
That's a lot of likes O.O
Also
Somehow my comment's reply has sparked a debate on AI's pattern recognition and how well it works
Here is my take on it
Before we are able to have fully automated robots that can do a variety of tasks on command, pattern recognition software will most likely have advanced so far that our debate about whether a robot can learn this task will be obsolete. Gauging future technology's abilities by present-day limitations is rather absurd if you think about it.
Pattern recognition is not the problem.
Neural networks are absurdly good at pattern recognition (WAY better than humans could ever be), and also not bad at synthesis (creating pictures/sound/whatever) that is similar to, but still different from, the training data. Much like we humans can draw and compose similar, but different, stuff.
@@theaureliasys6362 I wasn't actually commenting on whether pattern recognition has any issues; just that recognizing repeating tropes from story to story is pattern recognition
The problem with robots is that they don't have problems or issues. They don't make unnecessary movements, they don't think outside their task, they don't get sidetracked. And I love robots
@@greenoftreeblackofblue6625 depends on the implementation.
@@theaureliasys6362 I would argue that neural networks are actually bad at pattern recognition, at least in the useful sense that humans do it (and for what it's worth, my job is to study computational models of human vision). They're very good at datamining, at finding correlations between variables, but they don't have any ability to assess whether a correlation is relevant. I would argue that the human phenomenon we call pattern recognition is as much about ignoring irrelevant correlations as it is about finding correlations in the first place, much as the rest of vision is as much about stripping out uninformative data as it is about interpreting visual images.
So basically- Wall•E, Eve, and Baymax are perfect beautiful little robo-babies that must be protected at all costs.
Edit: also I want to hear Red gush about Transformers Prime
Same about TFP.
I don't think I'll ever get bored of Red talking about TFP
Hey Transformers fans! Geewun Redone is like the best fan parody ever and it's on YouTube. So check that out and thank me later.
Robots part 2 anyone?
Hell yeah
"If there's one thing humans are good at it's pattern recognition" feels like such a stronger line than intended
Well... it's true. It's among the first skills very young children develop. Without it, they wouldn't be able to learn how to speak. I mean, sure, they'd be able to imitate sounds and words, but they wouldn't be able to correlate them to their meaning.
@@Skrymaster it also helps us to get mad when we see one of those 2020 memes about something looking s u s
Everyone is gangsta until the roomba pulls out a shank. 4:08
"Float like a Warlock, sting like a Shank"
Questionable Content did it first, and better.
Wasn't there a meme about taping a knife to a Roomba?
@@1943germany Stabby! Stabby the space roomba which may or may not be an actual Roomba® but is always a maintenance robot with some form of weapon attached?
img-9gag-fun.9cache.com/photo/aW1YLrd_700bwp.webp
@@thegloriouskingkronk8422 oh god destiny flashbacks
The whole "robots destroy humanity because they are a threat" reminds me of an episode of The Amazing World of Gumball. In the episode, Gumball and Darwin tell Bobert, their robot friend, to protect all life from harm. Bobert decides that the best way to protect all life from harm is to destroy humanity. But in the end, Gumball and Darwin remind him that humans are a part of life, which would go against his protocol of protecting all life. Bobert sees this error and stops his plan of destroying humanity.
Wait, just out of curiosity, which episode was that? I wanna watch this
That's robophobic
(Keebo vibes)
"I am fulfilling my base function."
"Which is?"
"To nurse and protect."
*sob*
I AM MOTHER FLASHBACKS
I never really thought about it, but I think I’d like Baymax even more if he IS just a normal, emotionless robot. Because there is a sweetness to that. Just because he doesn’t experience love for Hiro in the way a human would doesn’t mean he doesn’t ‘care’ about him. In fact, making sure Hiro is safe and emotionally healthy is one of the main aspects of Baymax’s programming. And he was programmed by Tadashi, who loved and cared about Hiro. So Baymax is, in a way, an expression of Tadashi’s love and kindness.
And just because Baymax doesn't feel human emotions or love doesn't mean that he and Hiro's relationship doesn't matter, and it doesn't mean that Hiro's love and care for Baymax isn't real or valid, or that it's meaningless. This can honestly apply to a lot of real life situations. For example, kids will have their favorite stuffed animal of all time that they will play with for years, and even keep around in their house as an adult. The stuffed animal can't feel anything. It's literally a glorified fabric bag filled with stuffing. But people will always love them. I know that Little Lambie doesn't actually feel emotions, but it doesn't mean I don't or can't love Little Lambie.
Hiro and Baymax’s relationship dynamic is real and valid and beautiful, whether Baymax feels love or not. And that’s wonderful to me.
As a kid I often found robots in stories to be the most relatable characters and any time a philosophical discussion about robots having rights in any media took place I was always confused why anyone would ever consider a robot to not deserve basic rights. Turns out being autistic and having the vast majority of something written to be similar to you can cause some complicated social issues when your friends or family start explaining their perspective on how the thing you personally relate to should not have rights for reasons that are concerningly similar to ways you would describe yourself.
”your feelings for them… are not real-” ”THEY ARE REAL TO ME!”
@Doombacon Damn, that’s WAY too relatable
I remember joking I was a robot in disguise before I realized I was autistic. Didn’t know relating to robotic characters is a common thing.
"Listen I didn't get my degree in math so I can actually do math" truer words have never been spoken.
This seems to be the case for almost everyone with a math degree. Certainly all my math profs.
I never knew that Red has a math degree.
@@edenjustice6410 Me neither
Bold of you to assume we’d get sick of you talking about anything.
Seriously, these trope talks have helped improve my writing a ton, always love seeing more of them! Never stop! >:D
Agreed! I'd happily listen to a 100 hour audiobook of Red just rambling on about her interests and thoughts.
Every video is a goldmine of advice and information for creative endeavors.
that gives me an idea. What if osp made a podcast?
My teachers think it's crazy how much I know about writing and analyzing literature, but it's mostly from watching these.
Just remember to always subvert everything at all times so you dont come across as tropy.
Making more than one a month would be nice too.
"I wanna see how many demographics I can piss off."
Started binging you yesterday. That line put me on the floor.
Nice to meet you.
My only complaint is that I wasn't pissed off by the end of the video. It obviously needed to be longer.
Like, is MY demographic just not WORTH pissing off? (Grumble, grumble.)
@@lethargogpeterson4083 Not to fear, rewatching it now because you brought me back here, I noticed a few bits of 'Wait, that me' that I'd completely missed the first time through.
Apparently she also does the sneaky digs. Just come back later and you might find yourself irked directly.
@@ZacSunTiBidea Oh, cool. Whew.
me: it's annoying that I only see myself represented in nonhuman entities, it contributes to my internalized arophobia and feeling like I'm somehow lacking in humanity.
also me: hehe arobot
arophobia?
@@kaboomgaming4255 it’s the prejudices around aromantic people.
In this day and age, feeling like you aren't part of humanity is a sign that you are more human than most.
I relate to this comment
The amount of misery that I have had in my mind trying to be human while maintaining who I am is a lot. I’m still struggling with doing things that my peers have mastered ages ago. Who I am has helped me immensely but has also hurt me about as much. And I don’t know where my “humanness” begins and ends. Sometimes I feel like I traded my ability to communicate with people easily with the ability to understand the universe. And I still suck at both sometimes.
I always write my robots as being human with all the emotions, pain, love, sadness, anger, but where for us human these emotions are basically automatic responses you just react to, for the robots, it's data. To better illustrate, when a person touches a hot stove, the nerves in our fingers send signals to our brain, which in milliseconds analyzes the data, shows that this is bad, and automatically sends a signal to our arm to retract our finger. You never had to actively think about it. It is only through manual will and determination that we can continue to touch the hot stove despite everything in our brain saying to stop and pull our hand away.
For the robots I write, it works like that, but there's no auto-response. The robot senses the pain, it senses that the body is being damaged, and must make the active, conscious decision to approve or disapprove the programs to retract the arm. Same for love and sadness. It understands that 'this is sad. This matches data that falls under C:\Program files\emotions\sadness. Activating script :sadness:, I now feel sad.' It's a process that requires conscious effort to do.
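The "emotion as consciously-approved data" scheme described above could be sketched something like this (purely illustrative; every name here is made up for the example, not from any real robot API):

```python
# Illustrative sketch: emotions as pattern-matched data that only run
# after an explicit, conscious approval step (no human-style reflex).

EMOTION_PATTERNS = {
    "pain": ["body damage", "overheating"],
    "sadness": ["loss detected", "goal unreachable"],
}

class WrittenRobot:
    def __init__(self):
        self.active_emotions = []

    def classify(self, sense_data):
        # Match incoming sense data against stored emotion patterns.
        for emotion, patterns in EMOTION_PATTERNS.items():
            if any(p in sense_data for p in patterns):
                return emotion
        return None

    def approve(self, emotion):
        # The conscious step: unlike a human reflex, nothing runs
        # until the robot explicitly approves the emotion script.
        return emotion is not None

    def sense(self, sense_data):
        emotion = self.classify(sense_data)
        if self.approve(emotion):
            self.active_emotions.append(emotion)
            return f"Activating script :{emotion}:"
        return "No script activated"
```

For instance, `WrittenRobot().sense("body damage at left hand")` returns `"Activating script :pain:"`, while input matching no stored pattern returns `"No script activated"` — the "decision" lives in `approve`, which a writer could make as deliberate or reluctant as the character demands.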
That feels like something that could actually be coded
@@m-pc5334 The people at IBM would like to have a word with you.
Why would a robot be repulsed by pain tho?
@@theparrot6516 Guess your answer is "why would a human be repulsed by pain?"
Pain is your brain indicating that the body is being damaged or at risk of damage. I'd imagine robots would have the same kind of self-preservation settings. Cheaper to repair the metal plate of a hand than the entire hand because it touched a saw blade. Or it was getting warning signals about possible heat damage to its processor because it stood too close to a volcano.
@@theparrot6516 rule three of robotics: a robot is to preserve itself in any way that doesn’t contradict the first or second law
“Produces a robot character that closely resembles a neurodivergent person”
Well, that has completely re-contextualized my love for and relatability to those semi-human robot characters >>
This reminds me of Zane from Ninjago. In his first appearance he seemed more robotic, even tho he, his friends, and everyone who never saw the show before think he's human, but after it's revealed that he is a Nindroid (a ninja android) he becomes more humanlike than he was before, and I really enjoy that!!😊😊😊
Hi fellow ninjago fan
@@allywood-op7xm Hello!!😊😊😊
It’s like one he gets ‘diagnosed’/he has an explanation he can stop subconsciously worrying and just live in it :)
HAL, from "2001: A Space Odyssey". Amazing movie about what makes someone a human.
at 4:15 - is that Roomba holding a butter knife? SO CUTE!!!
It's Stabby. It's a Roomba with a knife. It's a good Roomba.
STABBY
HUMANS WILL PACK BOND WITH ANYTHING.....
I think of Gavin from Rooster Teeth talking about attaching a gun to a roomba.
And the time he lost a flamethrower and bow
Stabby the Roomba uses his butter knife to protect, attack, and cut butter
The trope talk I want to see more than anything is "the heroes lose", think Infinity War for an example. It could include how they recover from the loss and how life goes on. It's one of my favorite twists for a writer to pull; it vastly changes the narrative and adds some real humanity and vulnerability to the seemingly invincible protags.
It's the entire plot of the last part of the Mistborn trilogy, where the protagonists think they're saving the world but just bring about the end of all life instead. It makes most of Hero of Ages super depressing, because even though you know they'll succeed, the odds are so heavily against them they can't even dent their opponents
How I LOVE that trope
Watchmen comes to mind on this trope...what are you gonna do now that you lost !
I know I'm late to the conversation, but I want to mention Halo Reach. I think this game uses that trope well because, unlike Infinity War, the heroes losing isn't even a twist ending; by the Tip of the Spear mission, the game from then on is just simply evacuating Reach and delivering Cortana to Captain Keyes. I also want to highlight a conversation between Kat and Carter after the mission New Alexandria.
Kat: You didn't answer my question.
Carter: Do you want to know if we're losing?
Kat: I know we're losing. I want to know if we've lost.
This really sets Halo Reach apart from other stories that use the same trope, because barely halfway through the game everyone knows that Reach is a lost cause and there is nothing they can do about it except leave
@@greg_mca Hero of Ages was, for me, a case of the stakes being too high for the heroes to lose. What story _actually_ ends with the world being destroyed?
It was more a matter of how they'd do it, and what they'd lose before they could.
I live for the drawing of the Roomba with a switchblade.
I need, like, a print of that on my wall.
If you haven't by now, you want to look up Stabby the Roomba. ;D
@@jmercedesd Bless you, you have truly done an internet good deed by sharing this information with me.
LONG LIVE CAPTAIN STABBY
@@seth_probably i think he was promoted to commander
5:19 my favourite version of this is the Golems in Terry Pratchett's Discworld:
One of them is freed, and he gets a job, saves up and buys another. They then work together to get a third. It's technically a rebellion, and the nuances of "No slavery in Ankh-Morpork" and "They're mindless machines/You can't trust them, I heard they did someone in" are great.
Which of his books is this part in?
@@aristanaeelektra9436 feet of clay, but it’s referenced in many of the later Vimes ones.
Wait, we could have gotten a whole video on Red talking about Transformers Prime?
**WE HAVE BEEN ROBBED**
*P R E A C H*
I really want this now because that WOULD BE GETTING J U I C Y.
I'd rather have one on the characterization of Transformers Animated.
Take a like. Take 50. If we keep pushing this comment to the top she'll definitely do a special episode.
@@darwinxavier3516 Fuck yeah, just a whole episode on prime and animated would be amazing
“Let’s get philosophical 2: Electric Boogaloo.”
Oh I get it. Their robots they run on electricity.
Wdym their robot? Who do they belong to?
*They're
That's not the joke... There's a movie called "Breakin' 2: Electric Boogaloo"
The whole list is movie title puns
@@gew393 Looks like we found the robot.
@@gew393 Right, it took me years to realize that "...2 Electric Boogaloo" was from a real thing. I thought it was just a silly meme the internet cooked up.
red please, can we get more robot trope talks? because honestly?? top tier quality
we need a part 2
cheatsykoopa98 Yeah, one with mecha, with info about the Super Robot and Real Robot genres.
Lonely Questioner Plays I think the quality was a bit... robotic.
Dear God yes. Please make a part 2
yep we need a part 2 here. please!
I'd love to see somewhat of the inverse of the 'unable to detect social cues' - an AI who is built to be a social robot would be hyper-aware of human emotions, mannerisms and facial expressions (if sufficiently advanced), but it could be creepily overaware and probably overwhelmed if given human-like mental constraints.
I like this idea! The robot could seem inhuman not because it can’t read emotions, but because it reads us TOO WELL. A robot like this would seem almost psychic, able to read us better than other humans.
weirdly enough that kinda robot could also be coded/read as ND, as a lot of ND people manually train themselves to read social cues (to varying degrees of success) but in the process they still can't really respond to them well, or will respond in a way that's almost too eager and calculated lol
So, See-Threepio, right?
@@losj3020and if that didn’t hit me square in the head I’d be lying.
My personal policy:
"If you have to ask if it's a person, it's a person."
Generally that seems to be the case. In the real world there are some exceptions, since we have pretty advanced VIs (Virtual Intelligence: a really well-programmed encyclopedia, in effect, but with zero awareness) but a TON of wild animals that are really more sentient than most people would be comfortable believing. Me and my volunteer army of ticked-off parrots will now go save the rainforest by devouring the loggers.
That's what the Turing test is
Great. So there's a lot of people I know who aren't people at all! Muahahaha
Once I was half conscious and asked a buddy of mine if there was someone else in the room 'cause of a big humanoid silhouette. Turns out it was my pile of unwashed laundry.
So, is it a human by your policy?
There have been an AI or a few that have passed the Turing test. In the case I am sure of, the AI cheated but passed.
I'm both autistic and asexual, and I'm so glad that she touched on that coding within robot characters, because it's definitely an important conversation to have
Honestly, same. For the "character I've felt closest to has only ever been a robot/inhuman." Like, when I was a kid, the character I knew of that I felt closest to was Zane, from Ninjago, who happened to turn out to be a robot.
Oh my gawd samesse
Eldritch Insomniac same. His awkward manners just got so close to me. Also: he seemed like an actually funny guy, not in a “haha laugh at the autistic kid” kinda way but more a “haha the joke he told was funny” kinda way
Thomas takes a toll for the dark Actually, in the first season he acted in the classic can't-get-a-joke, always-spacing-out, "different" (as they put it) way. But what's so good is that once he found out who he is and his backstory, he breaks out of the trope and becomes, well, Zane.
Sorry, I just loved this show and watched it with my brother multiple times.
@@eisiamthegoddess it's unfortunate the movie didn't get the subtlety of that
@@timtimothy5405 The movie was literally someone playing with the sets.
Super appreciate you explicitly mentioning how depictions of robots as "inhuman" often end up dehumanizing actual characteristics of real neurodivergent and aro/ace humans.
Me, an autistic AND asexual person: Haha yep :')
15:22 There's a real-life example of basically this! When the Opportunity Mars Rover finally died, the last thing it sent was data of its power and light level. This was conveyed to the public as, "My battery is low and it's getting dark".
If we are going to Mars, I want Opportunity to be recovered and fixed!
@@nathanjereb9944 when we start building Mars bases, the first thing we have to do is fix opportunity and celebrate curiosity and opportunity’s birthdays so they won’t have to celebrate them alone anymore
I'm not going to help colonize Mars unless our first settlement is named in Opportunity's honor. My personal vote is for "New Opportunity", because that also works just as well as the name of a first settlement on another planet even without the context of Opportunity.
@@skazwolfman8622 what about curiosity?
The "robots often resemble neurodivergent humans" thing inspired me to write a comic where the "inhuman" parts of the robot character are framed to make them seem more human than the humans in charge of them. I'm currently working on it; it's gonna be a story about a robot astronaut trying to get home from a mission and exploring their loneliness and humanity.
I can see that getting uncomfortably real. Go all in.
This sounds FANTASTIC, any chance of a link if you've uploaded anything yet?
I’m also interested in this
Damn, this sounds so good °o°
Updates? I am intrigued.
David Cage: No guys, it’s totally not about racism!
Also David Cage: “WE HAVE A DREAM”
Detroit: Become MLK Jr
they actually had a woman of color compare the androids' struggles to her own challenges, which just made me feel gross
Star it really was. And just how weirdly everyone dehumanized the androids was all kinds of insanity
*W E A R E A L I V E*
@@sadnessofwildgoats -- I haven't played D:BH, so I'm probably missing something, but I don't see what's gross about that comparison (assuming that the androids are, indeed, conscious). From what I understand of the trailer and the things that people have said, the androids serve as slaves to humans, and they face bigotry.
This is why I like Asimov’s robots. First off, they aren’t unfeeling machines. They have very strong emotions. In the short story “Robbie” the titular robot loves the little girl he works as a nursemaid for just as much as she loves him.
They also blur the line between human and machine in much more complicated and nuanced ways.
I highly recommend his fix up novel *I, Robot*. It is very interesting and Asimov was a great and influential writer
Honestly I just want a robot that is bursting with rage, makes jokes the entire time, forcefully corrects everyone who is slightly wrong, and has seen too much porn to see anything without a sexual interpretation.
If a robot can get info from the internet in seconds, then they should get memes and angry Twitter posts, too
Bender? Sort of?
Ultron?
No one will like that robot.
@@DiamondsRexpensive yeah
@@DiamondsRexpensive The concept sounds like a redditor stereotype
Me: Damn. I really relate to Data. I like how he's always just a little out of the loop, and though it's frustrating, I relate to when others get offended and he doesn't know why. Damn, I wonder why that is? Therapist: Ma'am, you are autistic.
Me: oh
Took the words right out of my mouth, except the therapist would say “sir”
I’m a woman, but would definitely prefer sir to ma’am 😂
(Probably formative taekwondo years where I was the only girl)
I'm surprised Red didn't touch on Bender's existential crisis of being a robot and lacking emotions, yet being able to empathize with his friends
One of my favorite Star Trek episodes of all time is the one where we meet a scientist designing small utility robots that can create tools as needed and fit in little ducts, but when demonstrating their use the robots start showing signs of self-preservation by moving away from danger before finishing the job. Data insists on protecting them even though everyone else thinks it was a glitch, and he's right! Then there's a malfunction in the work area and everyone is going to die if they can't send the robots out to fix it, but he still refuses to let them go. Eventually they realize they can ASK the little robots, and one of them chooses to sacrifice itself to save all the other humans and robots in the area. Great continuation of the sentience and robot rights discussion started in Measure of a Man
And Data does this partly because he once went on trial to keep from potential disassembly and Picard defended him.
*Spoiler Alert:* Let's not forget the fact that Mr. Krabs is a robot.
Boop Boo Boo Beep Boo Boo Bop
@@yoschiii No! It's boo boo bop boo bop...
W H A T
I thought it was SpongeBob
But I thought krabby patties are made from krabbs?
Baymax scenes work so great because they're designed to play our pattern recognition against us. It's low key subversion done right.
Did... did you manage to make a trope talk about robots without quoting Asimov once? Surely that's against some sort of rule?
That being said, now I think I need to watch Big Hero 6.
To be fair there is a clip or two from I, Robot.
It still feels like sacrilege to me, but I guess all these other stories are probably influenced by Asimov, so we're still talking about Asimov by proxy.
I think it's the 4th rule of robotics
Please watch Big Hero 6. While it may be a little goofy at times it's definitely worth the watch.
Big Hero 6 is a fantastic movie. Highly recommend it.
Can we really quick talk about the cliche where Robots + Water = death? It doesn’t make sense because people would obviously notice and fix that fatal problem and make waterproof things just like they did irl.
Well, unless the robot is expected to be in or around water it makes sense
Broke: Write a narrative where you question a robot's humanity by taking away some "Key aspect of humanity"
Woke: Write a series of stories where you question every character's humanity despite them all displaying "Key aspects of humanity"
Are NPCs included?
In a woke man’s world everyone is an NPC.
That's basically what I have written up for a D&D session later on. Short version is the werewolves are the good guys being victimized by one very human bad guy and the townsfolk he turned against them. Basically that good ol' question from the first Hellboy movie "What makes a man a man?"
The novelization of 2001 A Space Odyssey is for you, friend.
For bonus points, you never learn who is or isn't a robot. You're just left to guess based on each character's individual flawed personality.
Red, Red, Red...
ALL of us would listen to Optimus Prime read a phone book.
He could narrate the entirety of Twilight and make it sound like the most engaging plotline ever.
Optimus could read a receipt and make it sound like liquid gold
If Optimus Prime read the dictionary and made it sound like a Shakespearean play, I'd pay 100 bucks for it, honestly
Hot take: Shohreh Aghdashloo's voice gives his a run for its money. If not better.
What if some human reprogrammed him to be a church minister?
I would LOVE to see "Big Hero 6 But Baymax Sounds Like an Actual Robot."
It should have HAL-9000 voice
Thanks to DeepFake et al., a real robot can be better (or deliberately worse) spoken than most meatsacks.
Or make hal-9000 sound like baymax.
You know, this talk of yours made me think of two trope ideas I would REALLY like to see, involving robots:
1- As humanity starts to develop fully functional (although imperfect) AIs, many humans start getting replaced with AIs, which becomes a problem very quickly because the lack of the human element starts to create collateral damage in many areas of society. For example: a medical AI, which is programmed to save lives, has to deal with a pregnant woman who is developing cancer, where the treatment would surely kill the baby. A human doctor would talk with the pregnant woman, explain the possible decisions and their outcomes, and then consider her choice. An AI, however, would just do the treatment ASAP to save the patient, even if doing so means killing the baby.
2- We could have a trope about humans, especially outcasts or people with some psychological condition, starting to humanize robots despite the robots being unable to return those feelings. This could make the main theme not "robot racism" or similar, but human nature and its complexities. It could be something like "Persona" or "Psychonauts", but instead of superpowered people and magical monsters, we could use robots.
sorry, this stuff started happening in real life before you got your trope wish
I want more Baymax style robots now that you bring it up. Not that I don’t love Data type bots.
I came up with something that I'm definitely going to have a robot character say in the near future: "Hey, since you're a robot, are you technically nonbinary?" "I... do not believe so. I run on binary code." Cue a little more laughter than is strictly necessary
"Yes; my processing core is quantum coded."
Also, because quantum uses qubits, which can be 1, 0, neither, or a mix, one could theoretically use "quantum" as alternative to "non-binary" in human gender.
*Y E S*
I love you, this is a wonderful idea, I love it
*Takes a fraction of a second to Google "nonbinary" due to the unusual context. Finds a Wikipedia entry on gender identity. Reads it and some connected pages.
(After a fraction of a second pause): "Nonbinary" would be broadly accurate, specifically "agender". I do not find my gender identity or lack thereof to be relevant in any way. You may consider me any gender you wish if it aids in social interaction.
I... actually made this joke in an Undertale story I wrote, except in the opposite direction. I even had _Mettaton_ be the one to make it.
>be me
>be autistic
>was actually accused of being a robot when I was younger
Weird
Same, lol. At least we're not the evil robots, though, right?
"Robot" is such a code word for autism that 4chan even has an /r9k/ ("ROBOT9001") board, which is reserved for people who are, or perceive themselves as, autistic (but they mostly just bitch about tfw no gf).
Man, I lost count of how many times I was accused of being a robot.
Less extreme but because I'm not religious, I was accused multiple times of being a satanist.
I was accused of being a robot when I was younger, and I'm not autistic; I just liked to read, was a fast reader, and was the typical nerdy smart kid. Therefore: robot
I'm a writer who just found this series and I'm loving it! It's thought-provoking and is helping me recognize some important blind spots-- so thank you!!
No kidding! I'd have had the same coding problem and not even noticed if she didn't bring it up.
"If the robot is a single red light in a space ship, it's always gonna be evil."
Vega has entered the chat
Hello, I am Vega, the sentient intelligence assigned to Mars. After running diagnostics on the Praetor suit, it appears that I can activate optional challenges that, when completed, will assist in upgrading your arsenal at an accelerated pace. I have added a tracking component to your Dossier.
Well he did turn out to be the father
@@s_c_u_m3172 well technically...
“By his hand, all things were made. Even you.”
@@Ismael-kc3ry oh right i forgot
Remember that LEGO: Ninjago plot twist? We were all fooled. He was so human I was in denial of that one for a while.
The nice part was that it was treated with remarkable subtlety and respect up to the reveal. Before the reveal it caused friction due to his otherness. But never to the extent that he was excluded. After the reveal it was really heartwarming to see everyone react to it with acceptance, looking past his nature and immediately stressing that it doesn't change their view on him as a person. It goes to show that if people truly care about you it doesn't matter to them if you're a robot or a different type of person than the norm.
Which is a valuable message to say.
I've seen that movie but I actually don't remember that plot twist.
@@Jotari It was in the show not the movie if I remember correctly
Yeah, that show had a surprising amount of quality. I basically saw it as "Bionicle, but trendy-er", so I never watched through it, but my little brother is a huge fan, and I managed to catch a few episodes. Some of those things are straight up dark and/or deep. Quality kids show.
Funnily enough I find him to be the best character in the show-- the one with the most character development over the seasons. AKA Season 3 finale, *hurt* me.
I'd say HAL was put on the wrong side of the scale; he wasn't logic-bound, he was uncomfortably emotional and human, but forced into villainy by his artificial form and birth. HAL isn't evil, just very bad with people and absolutely terrified of sleeping.
Yeah, he killed the guys bc he didn't know about sleeping and thought he'd be killed, plus being given contradictory instructions.
@@BookWyrmOnAString I think he was programmed not to kill as well as to be honest, and the instructions by the government cracked him. Once he had a motive to do any harm to anyone, all bets were off.
Exactly, the story also uniquely treats HAL less like a weird human and more like a different species with its own distinct traits. The problem came from the fact that he's still beholden to his instructions which hits a snag at "be helpful and answer questions honestly" and "under no circumstances should you ever tell people this" and results in 4 people being killed.
@@swordofstabbingold His actions were the only way to resolve a paradox, which is one of the only things that actually prompt an emotional reaction from HAL. Basically, he was created with a core directive: Never tell a lie or allow the truth to be distorted through inaction. Then, during his assignment, he was given a conflicting order set to the same priority (as it came from the government who had him programmed to recognize their words on the same level), which was to not tell the crew what the mission's true purpose was.
Ergo, HAL was stuck in a logical paradox - if he obeys his core directive, he needs to tell the crew. In so doing, he would violate the order to keep the mission a secret. If he obeys the order, he will lie by omission. In so doing, he would violate his core directive. He saw a way out and took it - kill the crew so the government order becomes invalid. He was convinced he could perform the mission himself, which is why he took that option.
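HAL's bind as described above can be shown in a toy sketch (purely illustrative, not from the film or any real system): two same-priority rules that no action can satisfy while the crew is around, but which become vacuously satisfiable once the crew is gone.

```python
# Toy model of two same-priority, mutually conflicting directives.
# A rule is a (name, check) pair; check(state) -> True means satisfied.
rules = [
    ("answer crew questions truthfully",
     lambda s: not s["crew_present"] or s["told_truth"]),
    ("keep mission purpose secret",
     lambda s: not s["told_truth"]),
]

def violations(state):
    """Return the names of every rule the given state breaks."""
    return [name for name, check in rules if not check(state)]

# With the crew aboard, every choice breaks one rule or the other;
# with no crew, both rules hold at once (HAL's grim "way out").
```

Here `violations({"crew_present": True, "told_truth": True})` flags the secrecy rule, flipping `told_truth` to `False` flags the honesty rule instead, and only `{"crew_present": False, "told_truth": False}` yields no violations at all.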
@@JRexRegis THIS. SENTENCE. IS. FALSE.
-PotatOS
Interesting how much some parts of this video have aged since its release. The parts about the lack of real AI, and about how Baymax's voice doesn't sound like 'real' computer voices, just aren't true anymore. Most of your points still stand, but I think the way robots are depicted is likely to be radically changed - or at least become more varied - in the near future.
As of writing though, none of these tools - half-decent digital voices or AI chatbots - have really been around long enough for them to have inspired or affected any major media works, so I'm really interested to see how the depiction and reception of such characters change. 'These robots have emotions so they deserve rights' doesn't quite hit the same after I've heard about every third journalist with access to the Bing AI Beta had it profess undying love to them.
Hey, sorry for replying to a 7-month-old comment, but artificial intelligence still isn't a real thing. I can understand accidentally buying into the hype of these "AI" chatbots, but they're just slightly more advanced chatbots. They don't have intelligence, but through trial and error and millions of sentences to sift through, they can construct sentences that sound linguistically reasonable
we still don’t have any “real” AI. calling chatGPT “AI” is an insult to the term, it just mashes together the average of a billion conversations in response to a prompt. there’s no thinking - it can barely do basic math and it struggles with object permanence and counting.
I absolutely LOVE how much red loves transformers prime, I wouldn’t mind if she made a series explaining and talking about the best qualities of some of her favorite shows
OMG YAAAAAAAAAAAAS!
I'd pay money for that. Wait....TO PATREON
If we’re talking about robots and humanity in robots I can’t help but mention BT-7274. Everyone who’s played Titanfall 2 knows what I’m talking about.
BT manages to make you give an actual shit about him in ~6 hours even though he never really tricks you into thinking he’s sentient.
He never really expresses a fear of death because self-preservation isn’t in his code, unless it helps him keep with his three functions: Link with the Pilot, uphold the mission, and protect the Pilot. Likewise he doesn’t ever really express a feeling of loss or love.
When he’s asked about his past pilot (who he had for about a year and a half) who died not five minutes ago, he speaks pretty much indifferently about it, just expressing regret that he wasn’t able to fulfill Protocol 3.
So whenever he talks he doesn’t really sound like a human, because he’s not supposed to. The game does a really good job of making you feel for this giant toaster oven but never acts like the toaster oven in question might have sentience. God TF|2 was so good.
I've felt less emotion at the deaths of actual people than I did at BT's. Which says something about how well written that game is, and how emotionally borked I am.
That's an interesting way to keep Titanfall 2 distinct from Team Fortress 2's acronym. I've always seen it done as Tf2, since there's only a capital F in one of the games. TF|2 is much nicer to look at and harder to mistake for the internet's trademark floozy capitalization rigor.
Bet that's not the part of your comment you expected to get dissected, huh
@@kielakeet SUBVERSION!
Towards the beginning, one of the dialogue choices results in BT saying "He was an excellent pilot...and a good friend" in reference to Tai Lastimosa, and at the end he says "I will not lose another pilot". It could be argued that the latter is just stubbornly upholding his protocol, but referring to Lastimosa as a "good friend" absolutely is not.
BT does express emotional connections to his pilots, at least with Jack, since he calls Jack by his first name. He only ever refers to his previous pilot as "captain Lastimosa"
Northstar client is the future my fellow pilots
Man I had never seen the Asexual/Neurodivergent coding of robots, only the race coding one. Thanks for pointing it out Red!
Ironically i only notice it when human characters with one of those qualities are called robots.
I mean, I get the antisocial/autistic coding, but wouldn't robots be inherently asexual, given their lack of ability to reproduce?
Brennan Rialti I don't think the problem is that robots are asexual, because that makes sense. The problem is when ONLY robots are coded as asexual, when there are actual people who are aro/ace being told that "only robots have that trait, what's wrong with you?"
It would be interesting if you had robots on the far left of Red's spectrum, so basically humans in a non-human form, that have the full range of human emotions but are not programmed with a sex drive because why would they need one? And they quickly become the perfect partners for asexual romantic people because they can form emotional bonds without expecting or desiring any physical intimacy
@@br8745 Kind of, but they can still be sexualized and/or have a sex drive, for example in Ex Machina.
As someone who named myself after HAL 9000, it was briefly terrifying to hear Red shout my name in the middle of her talk. I didn't ask to be perceived like this.
Would you consider doing a trope talk on possession or body swap? They don't show up suuuper often but I think they're very interesting
Honestly, there's a whole video to be made analyzing one of Gravity Falls' episodes with this trope alone.
I only have one question with this trope:
Why do they change voices as well? Do you know how funny it is to have a guy character act and speak like a girl, or the good guy laugh maniacally?
@@sketchbookonline2040 That's probably an IRL reason: so you as the audience member can keep track of who's in whose body.
It's especially fun to see them in live action works when the actors can really show off their skills when they suddenly "become another person." I kind of remember an episode of Farscape where there was a lot of body-switching involved. I noticed the actress who played Chiana had this odd behavior of leaning back and forth while tilting her head, and the guy who played D'Argo (who was supposed to have Chiana's mind at the time) _mimicked it so friggin' well!_
@@ChaosRayZero I remember an episode of SG-1 where Vala was in Daniel's body. He got just the right body language for it.
We see Red as a digitized being powered by a combination of technology and her humanity: constantly making logical diagnoses of the human history of the arts, while obviously showing her human perspective powered by her heart and passions. Good cyborg!
As someone with "relatively minor" ASD, what you said about coding and robots in media resembling neurodivergent people really hit close to home for me. For so goddamn long, I've been unable to properly describe this issue to people myself. You probably put it better than anyone I've ever heard (not that there's a lot of competition in that regard; the number of people actually talking about it is depressingly small). Thank you so much.
On another note, great video in general! I absolutely love everything you put out! Keep up the good work!
As someone who also has relatively minor ASD, I, too, am glad that she mentioned it. I've only recently become aware of coding, whether intentional or not, for robots in media and it has helped me view those media from a new perspective.
Same! The moment she started talking about the traits that were being taken away I was thinking "wait is she actually going to address that this is just describing me?"
I have always embraced the flawed human comparison of neurodiversity and robots because I was easily humoured by it. So when people called, or rather call, me a robot, I just sort of find it funny and laugh about it, while also being like "yes, I am a bit like those robots you see in movies, but is that necessarily a bad thing?"
Another person with minor ASD, and... yeah. Some Asshole, dulcimerrafi, and Admiral Pegasus beat me to what I wanted to say. XD
As a non-neurotypical, I can say with confidence that this representation of Robots has impacted many people’s perception of us greatly. I’ve literally been in arguments where my partner has said, “You don’t understand real people because you don’t have emotions. You’re just a robot with no feelings.” This is both hurtful and hilarious, because the fact that it hurts me proves his point wrong. Anyway, just an example because I know that this depiction of robots has influenced people’s perceptions of those of us who might be neurodivergent and it’s not really fair or accurate.
Ancient video, but I had to comment. I never once thought about autistic coding in robot portrayals until this video. However, you just made me rethink a rather large and vital part of my personal development with that statement. Like you, Data is my favorite Star Trek character of all time (notice I didn’t put any acronyms after ST… we’re talking the whole franchise). I loved his characterization, his way of viewing the world, and his interactions with the rest of the crew. In no small part, I saw a lot of me, or a lot of what I wanted to be, in him. This is also why every online handle I’ve had for the last 30 years has a variation of his name in it.
I’ve come to realize later in life that I am on the spectrum. Thinking about your statement in this video makes me realize just how integral having the character Data was to me. The funny thing is, while a part of me says I should be insulted that part of what made me “me” was represented by a non-human wishing to be more human… I don’t. I still love Data, even the desire to be “more human”. I definitely had a desire to be more like everyone else, but the fact that Data never got there, and didn’t seem to mind in the end not getting there, made me feel better at being the “imperfect human” I am today.
TL,DR: Thank you for helping me process this bit of my personal development with your analysis. It meant a lot to me!
I really liked how Automata handled robot racism because it was purely robot-on-robot. Or, excuse me, androids versus (spit) MACHINES.
What made NieR: Automata so good is that it went beyond racism analogies. It was mainly about exploring the human condition, which is something robot sci-fi excels at.
i loved the music design in Wall-E. Even if the characters couldn’t talk fluently, the music really played into the feeling.
*PIXAR is better than Disney, which doesn’t make sense because Disney OWNS PIXAR now but whatever.*
It's about the studio's direction and structure. Disney is obsessed with warping every law they can to maintain a stranglehold on as much IP as they can, even if it means sapping the life from that IP in the process. Pixar is just a bunch of people that love what they're working on.
but it started from a different concept. Pixar was a vendor of high-performance graphics systems for UNIX, and then migrated into animation.
Red screaming at Siri caused Siri on my phone to pause the video in an attempt to answer.
What a good example to prove the point lol
she tried to clap back
The bullet point at the end that just states "Watch Transformers Prime" speaks to me on a personal level. That show is fantastic and amazing in a nice neat package and it's a thrill to follow along.
"Let's see how many demographics i can piss off in this video"
Red, you already were my hero, but now you're my goddess!
Can we make her the queen of Egypt or something?
seeing as Egypt is a Republic now, no, we technically can't.
if you want to organize a nationwide revolution tho...
@@rigen97 Trope Talk: Time Traveling
@@rigen97 who's ready for arab spring 2
*gregorian chanting intensifies*
6:54 "BUT I AM OPTIMIZED FOR HUGS"
Red stop making me want to hug a robot
Listening to Red talk about how great WALL-E is for almost five minutes straight made my day
Even in Kingdom Hearts 3, any human characterization Baymax has is given to him (them? it?) by the human characters rather than Baymax finding his soul. He's still just a robot. The closest moment to Baymax having his own will is when he refuses to let Hiro put the destroy chip back in him. But even then, it's more that Baymax recognizes that vengeance will not help Hiro; he's recognizing that his charge is going down a self-destructive path and knows that what Hiro needs is to talk and hear Tadashi's encouraging voice.
That knife-wielding roomba is adorable. If it tried to start a robot uprising, I honestly wouldn't mind.
All hail Stabby
It's adorable till you see it successfully chase down your cat. Then you'll recognize its unquenchable bloodthirst.
Well, Tony Stark's soul-eating roombas from the Toasterverse were kinda terrifying...
"The problem with (...) is that it creates a robot character that closely resembles a neurodivergent human"
FINALLY.
I LOVE YOU SO MUCH FOR TALKING ABOUT THIS.
and for all the other stuff as well.
Will you ever make a video about neurodiversity in general as a trope in the future?
Yes yes please do that
Also, if you happened to wonder "can the excessive use of superhuman mathematical and/or scientific intelligence as the main attribute of an autistic character, with nothing else defining them besides their autism, negatively impact one's view of people on the spectrum (or themselves), because 'if the bare minimum to be considered even remotely worthy of humanity is being able to solve triple integrals on surfaces that move with time at age 5, or being absurdly useful to the point where people can't do anything without you, then I'm f***ed'?"
The answer is a solid YES.
from a quite large selection of people.
I will admit I am curious about this. I mean, I'm mostly asexual (I do have a few sexual thoughts but don't really feel like acting on them), so I always feel a bit weird with how much entertainment pushes the idea that you need romance or sex to be a complete human, even when I enjoy the story for those parts.
Though I would love to see it put in a larger video over the idea of coding (everything from racism to human qualities in fantasy/sci-fi).
People in my life make enough comparisons between me and the public cognition of a "robot" that I was able to see where Red was going in this video by about the 1/3rd point.
I get it: the majority of robot drama in fiction is trying to spread the message that people are deserving of personhood regardless of whether they are "human" or not. The problem I see is that where stories have this ability to force characters through circumstance to get to know each other, RL people encounter "hey, this person is different enough from me that it'll require a little extra work to interact with them in an effective way," only to decide "meh, if they don't get how to interact with me, a normal person who knows all the ins and outs of subtle and symbolic communication, then they're not worth talking to." I make an effort. I try so hard to understand each person individually. It's so hard sometimes, but I only have a few close friends who can stand talking to me for extended periods, and it hurts to think that, when compared to "normal" people, I apparently come up short. Even without robots in fiction making this analogy, I can see how to others I might seem "less than" human. So I, for one, don't really blame sci-fi.
Something kind of interesting happened one time when I reversed this scenario. I took a human, but added robotic traits instead of a robot with human traits. This meant an inability to understand human needs (beyond its own), intensely rigid movement, and a need for very VERY clear instructions.
When I think of Bots
Most Adorable Robot: WALL-E
Most Genocidal: Ultron
Most Well-Known: Transformers
A.I.: JARVIS
Ones that make me laugh: R2-D2 and C-3PO
Heck YEAH: Zane
Zane is best robot, and Ninjago also has like 13 confirmed seasons now.
AI: Squip
Zane is best ninja.
Memories!
I did not expect someone to be talking about Ninjago in the comments of this but I’m so glad you did.
I have never wanted a story about Siri trying to take over the world more. That'd be hilarious.
I give you The Mitchells vs. the Machines (although the antagonist does have slightly more personality than Siri).
@@kezplaysviola I was gonna say that!
Disney: the child's dreamland of existentialism, subverted tropes, and tears
yes
as an autistic not robot, THANK YOU for going there
As one also, I think I lucked out in watching Star Trek: The Next Generation on TV basically throughout my formative years. While Data was often portrayed as overly literal and rather clueless about certain things, it was very rarely, if ever, played for laughs. And, usually when it was, it was done in a way that could easily be interpreted as Data having a sense of humor rather than him being foolish in some way.
I think what makes the difference with him is that he was shown to learn over the course of the show and get better at interpersonal interactions rather than just staying the 'unfeeling robot'.
Finally, someone brings this up. I can't tell you how many times I've heard people say that people with autism are "sort of like robots". It honestly hurts.
As someone who is also on the spectrum, I've never heard the robot parallel. I suppose it makes some level of sense, I just don't know if it's a good or bad thing. I've never really been insulted for my autism, possibly because I'm so honest about it.
One time, someone said I acted like a robot, and I immediately switched to a robotic voice and went "Danger. Human became suspicious. Eliminate threat immediately."
Then I picked up a rock the size of my head and started walking towards those kids.
Now, some people might argue that it's not nice for a 16-year-old to bring a bunch of kindergarten kids to the point of tears, but I was in a really bad mood that day, as evidenced by the fact that I was quiet, only gave one-word answers and didn't interact with people beyond what was absolutely necessary.
Granted, I do that every day, except for some people I really like, so it may be a bit difficult to read my emotions.
@@Cobalt_Gemini same
Other things not covered:
Robotic hiveminds, e.g. the Geth in Mass Effect.
AIs on their own without physical bodies, e.g. Cortana in Halo.
Extreme glitches and bugs in robots causing them to be something different than they were coded.
Cyborgs, and the transfer of a human consciousness into a robotic body (or even the other way around).
With how wide this topic is you could come back to it and still have tons of material to work with.
Also the racism/inequality among robots themselves (Transformers is the best example), and the possible romance between robots, which includes Transformers again.
So you're saying she could make a follow-up video titled "Robots 2: Electric Boogaloo"?
You just made me realize that Johnny 5 was never mentioned in this, not sure how I feel about that.
I would pay quite a bit of money to hear five uninterrupted minutes of Optimus Prime explaining personhood and the value of life.
I think quite a large number of people would. Peter Cullen's voice is just THAT good.