It was a good show. But by the end I was SO sick (and tired) of the black guy being so wishy-washy. I know it was just a character, but Jesus, grow a pair and make a decision!
@@samiam619 But that was the whole point. Each character had a tragic flaw they needed each other's help to grow through. That was his. It was like a Shakespearean tragedy, except they got long enough, and enough second (third, fourth, 100th, …) chances, that they could win.
Maybe it's because I'm studying this (programming, not ethics problems), but that was my first thought: "Won't the AI be biased, because you need to set some parameters before it can start picking one choice in whatever scenario you give it?"
@@michaelc.r.6416 Then again, won't it always be biased, just because it has to follow the rules that we deem social/good? Isn't the true problem that every person asked probably has a different opinion about what is social, prioritizing certain things over others, while most social people still prioritize similar (cute) things, like cats or babies/kids? A cold-hearted and purely logical approach might be better sometimes, even if it hurts feelings. Say there are 3 people on the main track of the trolley, and one on the side track. Ignoring the numbers, you can also approach it in a different way: there are people, for whatever reason, in a danger zone - the main track of the trolley. Then there are people on the side track, which is currently a safe zone, so those people are out of danger. Why should I take people who are already in the safe zone (out of danger), move them into a danger zone, and then kill them? Why should we prioritize some people, isn't that against equality? And why does this trolley have no working emergency brakes in the first place? ^.^ How was such a trolley allowed to leave the station? ;)
Tbh, is the student grade example even a problem with the robot? When humans grade, I would expect higher-income students to do better, simply because they tend to have many more resources available to them. If it wasn't a higher percentage than in other years, it doesn't seem to be an AI problem, but a societal one.
That dilemma of "a child is on one track, you can pull the lever and redirect it to their mother" sounds like a really powerful high concept for a movie or a show.
I always felt that the trolley problem was less of a choice between ethical decision-making, and more of a thought experiment for passive vs active choices. Because the decision is presented as a person choosing to pull a lever, even if it saves more lives, it seems like an active decision. Whereas, the decision that doesn't save more lives is not pulling the lever, which reads as doing nothing (or the passive choice). This allows the person being asked to choose to "do nothing" and take no blame for what happened, even though the decision was in their hands.
You could always restructure the problem to have the default be "kill all people on the tracks." It is, however, somewhat harder to give that a realistic design, and it makes people aware that they might be able to use the switch to derail the trolley, which may or may not save more people, depending on whether anyone is on the trolley.
@@suddenllybah If the default is "kill all the people on the tracks," there is no longer a dilemma: you are saving 5 people, and the one person dies regardless of your choice.
Yeah, that's a much more accurate representation of how the problem works in practice. Otherwise we could just hang all the people involved from sticks and ask people to choose which one we drop!
You're right. The problem is about whether we have the right to pull the lever, not about whether it's the better outcome. The "but who would we kill" question is asked purely by people who are not interested in the philosophy of the original problem, since it assumes we have that right in the first place.
*ahem* Indecision is a decision! You have the power to save lives, whether you "have the right" or not, and it's not like you're risking your life or it's hard to do; it's a lever, which we are assumed to be strong enough to pull. You still killed five people through your indecision.
11:30 - an interesting bit of background to what went wrong with the UK's algorithm for A levels. The (correct) assumption was that teacher assessment would give higher average grades than exams, so the algorithm was written to look at a school's track record of results, and that cohort's performance in GCSEs 2 years earlier, and scale the results accordingly to bring them into line with the expected results. *But* the brains behind it also figured that if you had a very small cohort then this method wouldn't be reliable, and so no scaling factor was applied when fewer than 5 students in a school were taking a particular qualification. And this was where it all went shit-shaped, because it turns out that schools and colleges in wealthy areas (and especially fee-paying schools) are much more likely to offer those kind of niche qualifications and run them for very small classes, whereas schools and colleges in more deprived areas generally don't. So rich kids had fewer of their grades subjected to the algorithm's scaling factor, whereas poor kids were likely to have all of their grades algorithmed. The algorithm was behaving exactly as designed, but apparently no-one had considered the implications of the way it was designed (or at least, had considered them but didn't care about them).
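Purely to illustrate the mechanism that comment describes, here is a minimal Python sketch. All names, numbers, and the averaging rule are invented; only the small-cohort exemption mirrors the described behaviour, and the real Ofqual model was far more involved.

```python
# Toy model of the moderation logic described above. Everything here is
# hypothetical except the shape of the small-cohort loophole.

def moderate_grades(teacher_grades, school_history, cohort_size):
    """Pull optimistic teacher-assessed grades toward the school's
    historical results, unless the cohort is too small to model."""
    SMALL_COHORT = 5
    if cohort_size < SMALL_COHORT:
        # The loophole: tiny classes (common for niche subjects at
        # fee-paying schools) keep their teacher-assessed grades untouched.
        return teacher_grades
    # Everyone else gets pulled toward the school's historical average.
    historical_mean = sum(school_history) / len(school_history)
    return [round((g + historical_mean) / 2) for g in teacher_grades]

# A big state-school class gets scaled down; a class of 4 does not.
print(moderate_grades([7, 7, 6, 6, 5], school_history=[5, 5, 4], cohort_size=5))
print(moderate_grades([7, 7, 6, 6], school_history=[5, 5, 4], cohort_size=4))
```

Same teacher optimism in both calls, but only the bigger class gets "algorithmed", which is exactly the asymmetry described above.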
We will have to collectively realize that we program the values we want an AI to have, before we can realistically make an AI that makes significant decisions.
But what if it was a trial run where they lost control, or the trolley just started running on its own (due to technical issues), and the only thing you can do is switch the lever toward a dead end where, incidentally, there are people tied to the tracks? Or what if there is a driver who lost control and can't do anything about it; do you just let fate do the work?
Sabrina's coding centric videos always remind me of The Programmers’ Credo: "we do these things not because they are easy, but because we thought they were going to be easy."
Morality follows logic, it's a fact, and humans mostly are not logical beings... the masses are composed of idiots, sadly. Meaning we can never grasp the concept of morality as a whole, only as single individuals or in very small groups.
@@unluckyomens370 Inactivity is one of the possible answers, but it doesn't sit well with me (not judging anyone who chooses not to act; after all, the point of a dilemma is not having an obvious good answer, so that people form their own opinion). I had the opportunity to change it, and denying that only because "it was going to happen that way" is something I would regret. I'll personally pull the lever to kill only one instead of many, if we don't know anything about the individuals.
@@Uriolu Yeah, that's the whole reason the trolley problem persists in philosophy: there is no definitive objectively "correct" answer. In an academic setting, where you get graded for "solving" this problem, you get graded for how well your moral argument is built for the choice you do make, not for which of the two results you pick. (As long as your professors aren't moral zealots who actually think their personal answer is the "correct" answer and everyone who disagrees with them is wrong). Neither option is objectively more moral than the other, it all flows from your personal, subjective moral framework. Personally, I would attempt to switch just as the train is crossing the threshold in an attempt to derail it before it kills anyone. Even if the attempt is doomed to failure and guaranteed to kill everyone, I cannot know that for certain in the moment, and I think that the attempt to save everyone is the morally correct choice over both of the other options (letting people die due to inaction vs directly killing a single person to save an arbitrarily larger number).
I mean, that's a fine outside-the-box jokey answer in a conversational setting, but in terms of academic philosophy you've simply failed to engage with the thought experiment. Any philosopher discussing this would simply restate the scenario. For instance, now the trolley is designed such that derailing it will kill not only the occupants but also everyone on both tracks, and is also designed such that if no action is taken to switch it to either the left or right-hand track, it will similarly derail, killing everyone. Thus you are forced to engage with the thought experiment as originally intended - as an analogy and foundational scenario for any and all moral decisions. By engaging with it as intended you get the results it's testing for - like all thought experiments, what it is testing are your thoughts. In this case, your thoughts on culpability and responsibility, as well as the moral beliefs that underpin your decision making.
@@zildiun2327 All questions have been answered, the trolley decimates humanity at the whim of an AI who serves the lazy Garfield for all time. This is the only good ending for the trolley problem.
ahhahaha, that's just because you're too afraid of the consequences to make a decision. It's normal human nature. I'd kill based on logic, and blame only the situation that put me in the wrong place at the wrong time. In life we have shitty, tough decisions to make from time to time, and there is no escape.
@@AmanomiyaJun True, but it wouldn't be an answer to the trolley problem. The problem is about making a forceful choice, not finding the best solution to fix it XD
There's actually a board game called Trial by Trolley that is based on this, and I think it's super fun. In case someone reads this, I highly recommend it.
I remember liability for an accident caused by an AI being the main thing in the way of them becoming legal, but that just made no sense to me, since obviously the people who programmed and trained the AI should be responsible. In what way would they ever not be?
I like the version of the trolley problem where you can be held accountable for your actions and things happen after you pull the lever because otherwise it’s just a question of “who would you rather kill”
@@shawermus After you kill the people outside, you just drive the trolley forward and jump out right before it hits another trolley, creating a collision that kills everyone in both trolleys (except you of course)
Never understood why Americans (mostly) regard animal life so closely with human life… y'all would do more for a dog or cat than a human…. I wouldn't hesitate for one millisecond to kill the dog or cat to save your life ☺️
Ay, Forsakianity can i request for u to put a space between the asterisk and the speech marks xD i see u attempted to put it in bold and its hurting my tired brain- thanks
There is another issue when it comes to the trolley problem. That is the decision whether or not to act. Regardless of how many people are on each of the two tracks, the decision to pull the switch actively kills the person(s) on the second track, and makes you complicit in their death(s). Although the outcome would be essentially the same, with one person on each track, the ethical decision is to do nothing, since pulling the switch would be killing a person who would survive without your intervention. So how many people would you need to save to convince you to murder one person?
2:52 Pick your fighter:
a. Some Nerd
b. Speaks with Hands (Derogatory)
c. Power Posing in a Spinny Chair
d. A Scandalized Woman
e. Clearly Cut Her Own Hair During the Panini
f. Substitute Teacher Trying Hard to be Fun
g. Crab in a Human Disguise
h. Acting Way Too Chill About This One tbh
I'm going to go out on a limb and say that the algorithm prioritized named entities over random pedestrians. This led to it prioritizing Garfield the cat, and I'm guessing that since it prioritized Garfield, it then prioritized anything resembling him, e.g. all of those cats. Idk, just my little nerd theory. I love to think about how different algorithms work. It's fun.
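For fun, a sketch of that commenter's theory. Everything here (the bonus, the entity list, the valuation rule) is invented for illustration, not recovered from the video's actual code:

```python
# Invented illustration of the "named entities score higher" theory above.

NAMED_ENTITY_BONUS = 10.0
KNOWN_ENTITIES = {"garfield", "einstein"}

def value(occupant):
    base = 1.0                                 # every life starts equal
    if occupant.lower() in KNOWN_ENTITIES:
        return base + NAMED_ENTITY_BONUS       # recognized name: big boost
    if occupant == "cat":
        return base + NAMED_ENTITY_BONUS / 2   # resembles Garfield: partial boost
    return base

tracks = [["pedestrian", "pedestrian", "pedestrian"], ["cat"]]
saved = max(tracks, key=lambda t: sum(value(o) for o in t))
print("machine saves:", saved)  # the cat (1 + 5) outscores three pedestrians (3)
```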
A computer can never be held accountable, therefore a computer must never make a management decision -some IBM training documents from 1979, apparently
A computer can never be held accountable, therefore a computer must make the management decisions we don't want to be held accountable for -a somehow acceptable argument in 2021, apparently
The real answer: Flip the switch to the *MIDDLE position. This will cause the trolley to not be able to move to either side, and it will gently come to a rest right at the point where the two paths begin. I hold a degree in trolley operations, so yes, I'm pretty sure that's exactly how they work.
In ancient Egypt a battle was lost because the attackers held cats up above their heads for the Egyptians to see, and the Egyptians retreated to protect the cats. So maybe your machine is just ancient Egyptian?
Cats are great, the ultimate hunting machine. But I also like ants. They have the perfect communist community. When it is time for food, every single ant gets their share, no more, no less. When somebody gets hurt, that individual gets medical help and time off. Nobody tries to cheat the system. People cannot do that. I look at the ant colony with awe.
@@An-tm9mc Nope. VanGilder Michael Shane. Is there another out there who shared my thoughts? I've found one who agrees with me. But another one exists?
I was horrified by the initial premise of this video, but by the end you did a really great job of starting a discussion on the importance of not outsourcing ethical decision-making.
This is really good at reminding folks that a good or bad algorithm is a product of the creators and therefore the responsibility SHOULD lie on them. * LOOKS AT YouTube * * STARES DAGGERS AT YouTube *
Welcome to sociology :). The algorithm is made by individuals who have their own mindset and use it to build the thing. Also: the algorithm cannot be racist; it's the creator, or the way the statistics that the algorithm is built on came about. Also also: if you are in doubt, don't touch anything. If all known possible outcomes are considered bad, simply don't do anything. You didn't choose to kill any one of them; the situation resolves without your control.
@@nat8264 "dogs really are man's best freind"the main character says to the main dog before the dog and humans vs the robot and cats war as epic music plays
"Don't unsubscribe" as the new educational youtuber signoff is my new favorite thing. That said, I remember when you asked people to roast you and someone said "all your videos end in failure and a sponsorship" and when you were talking about how you did a bad thing I honest to God thought "well that was a short video" and was shocked to see that we weren't even halfway. Good padding :D
I find the dilemma behind the trolley problem to be what people are failing to see. The question could be asked in a different way: do you 1) passively observe the death of someone, or 2) take action to kill someone else. That is the trolley problem. And when looked at that way, the answer, at least for me, will almost always be: do not act, because I do not want to kill someone.
@@trini5793 @Kristoffer Georg Aase Haha, I think the discussion you two are having is the heart of the trolley problem, and the reason there is no right answer, just the discussion: is pulling the lever murder? Or is your decision not to pull the lever an even bigger murder? Or does it make you innocent? And this hypothetical situation can of course be compared to other real or fictional situations: witnessing violence, science, or baby Hitler :P
I think I'd rather kill one person and let a greater number survive than have to watch people die and live my life saying to myself "I could've saved them"
@@Anelkia No matter the choice, you will always have the thought: "I could have saved someone." Your way, you also have to think: "I killed someone."
I like how she said "you and I", it reminds me of back when my brother would play games while I watch and say "we did it, we cleared the level" even though I did nothing lol.
About the A-level thing: the reason kids from poorer backgrounds got poor grades was because the AI was set up so that only a *set number of students could get a certain grade, regardless of whether they deserved it*. Basically, state schools suffered because they had larger classes - so some big classes had almost HALF of the class getting Us (ungradeable). It was absolutely ridiculous, and I'm so glad they rescinded those grades and went back to teacher-assessed ones, using evidence from work and past tests to determine grades. (See the sketch after this thread.)
Wow, without that bit of context I was super confused on how the results were even a problem. The way it's presented in that part of the video is terrible if you don't already have an innate bias against successful people.
@@ebolachanislove6072 yeah, i thought it would be confusing to people who didn’t know about it. My brother got his A levels done by the algorithm initially, and got worse grades than he should’ve. It was pretty outrageous, they’d had so long to work on it and yet somehow missed that fatal flaw lmao, it was all over the uk news for weeks.
@@ebolachanislove6072 So believing that people with low income should be allowed to have good grades is an "innate bias against successful people"? I mean, the wording was very specific about income and nothing else. The reasoning why doesn't change the ethical issue with it. If that was the only issue, they could've removed that specific code and run the algorithm again (probably reaching the same results, as larger classes generally make the average student worse, aside from private schools getting better funding).
@@eadbert1935 The way it's brought up in the video doesn't convey the reality of the event; it just sounds like a bit of woke-ism. Like, I would expect rich kids to get better grades on average than poor kids because of the resource advantage innate to that situation. What I didn't expect was the actual flaw in the system that limited the total grades (which compounds the negative effects of a large class), and that wasn't made clear in the video, only a vague coverage of the result without any of the "why" needed to understand it. When I first saw that section I literally said to myself "Yeah, of course rich kids get better grades than poor kids overall, what's weird about that?" but then the video just moves along.
@@ebolachanislove6072 Usually when people make those kind of statements, they mean it as comparing A vs B with everything else being equal. No clue if that's what she meant in this video, but it generally makes sense to assume so or the statement would be pointless because she would basically be saying A vs B with everything else being different as well.
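To make the flaw this thread describes concrete, a tiny hypothetical sketch of rank-based grade quotas. The distribution and names are made up; the real system was more complicated:

```python
# Toy version of the quota effect: a fixed share of each class gets each
# grade, however good the students actually are.

def quota_grades(ranked_students, dist=(("A", 0.2), ("B", 0.3), ("C", 0.3), ("U", 0.2))):
    """Assign grades purely by class rank against a fixed distribution."""
    n, i, grades = len(ranked_students), 0, {}
    for grade, share in dist:
        step = round(share * n)
        for student in ranked_students[i : i + step]:
            grades[student] = grade
        i += step
    return grades

# In a class of 30, the bottom 6 students get a U no matter how well they
# performed, which is the compounding effect on large classes noted above.
print(quota_grades([f"student{k}" for k in range(30)]))
```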
With the speed of AI computation exponentially improving, and the likelihood of all of us at some stage being in the "system," maybe the autonomous car will quickly face-scan and assess the individuals in harm's way and decide which one to hit. Criminal record? Age? Life expectancy? Dependents? Political party alignment?
The trolley problem is really dependant on the size of the trolley and the distance between the tracks, if the trolley is sufficiently large and the tracks narrow enough, you can probably flip the lever when it's half way across so it turns on its side, flips over, and takes out all 6 of them.
“I’ve treated machines like they’re a replacement of me, rather than an extension.” The one, singular, most important notion we need to keep in mind when it comes to robotics and especially AI. What is the purpose of AI? Is it to replace a human being, or to add to a human being’s life? The moment this becomes unclear is the moment we need to step back and re-evaluate.
@@johnhenry4024 Are you saying you want a machine to replace you in particular? This is purely an issue of self-preservation. We, as machines' creators, need to foresee, as best we can, the path that this research, development, and implementation could take in the future. Machines/AI can help us visualize the consequences of our own actions, but will they help prevent us from inadvertently replacing ourselves? We need to venture into this world with eyes wide open and do what we can to ensure future generations don't inherit a mess.
Why should the corporate elite care about the wellbeing of those their research or technological optimization affects, when weighed against what they consider a marginal advantage?
I really hope you liked that video; it took a week and a day longer than usual because I was tired of ending videos with accepting flat out failure. If you did, consider sharing our channel with a friend (it's the biggest way you can help us keep doing what we're doing). Also, we (secretly) announced something in our latest newsletter, you can check out the archive here: answerinprogress.com/newsletter
aight
You might be able to edit this project in a way that reflects the criticisms you received. Such that: the algorithm displays what it would pick and why (it spits out its valuation, e.g. "cats > three babies in a trench coat"), then people have the option to accept or alter the decision. Either way, they have to type their reason for doing so, or select from previous respondents' reasons, up to like 9 answers (like a jury). Then the machine learns over time based on the outcome, even if it doesn't understand the written justification. (See the sketch below.)
The memeiest answer might win though lol, garfield>everyone cuz Mondays
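A minimal sketch of the loop that comment proposes. The weights, names, and learning rule are all invented for illustration (nothing here comes from the video's actual code), and note the typed justifications really are stored without being parsed:

```python
# Hypothetical human-in-the-loop trolley machine, per the comment above.

weights = {"cat": 9.0, "human": 2.5, "baby": 8.0}   # machine's current values

def machine_choice(track_a, track_b):
    """Propose hitting the track whose occupants the machine values less."""
    value = lambda track: sum(weights[o] for o in track)
    return "A" if value(track_a) < value(track_b) else "B"

def learn_from_jury(hit_track, jury_agreed, rate=0.1):
    """Outcome-only learning: nudge the hit track's weights down if the
    jury accepted the kill, up if they overruled it."""
    for occupant in hit_track:
        weights[occupant] *= (1 - rate) if jury_agreed else (1 + rate)

justifications = []   # pool of past reasons future respondents can reuse (the "jury")

track_a, track_b = ["human", "human", "human"], ["cat"]
pick = machine_choice(track_a, track_b)
print(f"Machine proposes hitting track {pick}. Accept or alter?")

# Suppose the respondents overrule the machine and type their reason:
justifications.append("three humans outweigh one cat")
learn_from_jury(track_a if pick == "A" else track_b, jury_agreed=False)
print(weights)  # humans just became more expensive to hit
```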
“I have not failed once. I have succeeded in proving that those 10,000 ways will not work.” Love these! I learn and think way more about these vids
I can't thank you enough for the videos you're making: they are interesting, educational, well-produced and funny. And, honestly, I really like your approach - instead of accepting failure, you found a way to learn from it and tell us about this
"Can you teach a robot to be a good person?"
Step 1: Define "good"
👏
@@rbxq define morals and ethics
@@maggiminer whatever I say so.
@@CheeZBallz7 define define
@@CheeZBallz7 how do you define define?
This reminds me of what my 9th grade programming teacher said: “The computer isn’t stupid, it will do exactly what you tell it to. The computer is only as stupid as you are.”
lmaoo i love this
same
And the designer sometimes, but marketing is there to hide its flaws
This explains why my computer is stupid.
Harsh
I think the tagline for this show should be this,
"I found out that what I am trying to do is harder than I expected."
I swear I hear it on every episode.
cursed by my own hubris
@@answerinprogress In fairness pretty much everything is harder than you first expect, once you really think about it, even something as mundane as what to have for dinner. Or maybe that's just me, perhaps I just find things difficult, all of the things.
wait. i saw this comment as she said that-
Damn beat me to it😂
@@answerinprogress "god has cursed me for my hubris and my work is never finished"
I never thought Garfield could create such an ethical dilemma.
Yeah but, they're gonna serve lasagna at the funeral, so, worth it.
Your AI is actually flawless. It realized that without humans there won't be any trolleys
This though
Even with humans some just don’t have public transport at all
OMG YOU'RE A GENIUS
It has achieved superintelligence
@@wildfire9280 unless....
*CAT-DRIVEN PUBLIC TRANSIT BABY*
"But what if Albert Einstein was on the other track?"
"Isn't he already dead?"
No no she's got a point
Pick the cat
Well, actually, that would mean that Einstein was a zombie and deserves different consideration than a regular human. For instance, would being hit by a train even kill a zombie Einstein? Is his life more valuable because he has no limited lifespan, so you'd be cutting short a ten-thousand-year life? Or less valuable because he goes around eating brains, and you'd be saving his prospective victims?
I can't believe I'm the first to mention the Kronk reference.
Who lives: (Albert Einstein) or (Adolf Hitler)? Who gave (any one person) the power to decide if another person lives or dies? Because (any one person) cannot make that decision with no regard to who the subject is. So (any one person)'s answer is just a question of "what is better for (that person)." Because they think they are something.
What if it were the equivalent of Einstein though? Just a human, maybe even poor, but someone whose ideas could contribute to the progression of mankind as a whole?
Kill the entire human race or kill a cat?
The AI: OoOoh… that’s a tough one
No it’s not I’m sure it would pick the cat
@@somethingrandom8091 I mean one could argue that the cat is the ethical choice as humans are evil and destroying the planet **Terminator drums enter the chat**
If all humans were dead, nobody would care.
@@aveen1968 yeah, but there’s a fair amount of man made things that would ruin parts of the planet if humans were to suddenly go extinct. Plus there would probably be a lot of fires due to the amount of hot stuff left running. We need a sec to cut that stuff off before suddenly dying. Not to say the human race deserves to live, just saying we made such a mess that if we were to suddenly disappear, it would probably hurt stuff more.
What about dogs ☹️
The usual human solution to this kind of dilemma is panicking, doing something by chance, screaming, and suffering from PTSD for the rest of their life
Plot twist: the AI actually didn't want to save cats.
It just liked killing humans more.
Why am I now called AI-
OH .
Real-life GLaDos
I stopped the video when the machine killed the dog, to check the comments: correct, that rings true to me, because dogs protect humans more than cats, which mostly relax and fart all day long (cats do have a positive psychological effect on humans, but so do dogs). Dogs can find illegal drugs, guide the blind, detect illnesses or the status of a person with diabetes, etc. But in my head this decision contradicts the decisions the machine made before.

Then again, maybe those good deeds stop humankind from evolving: end the war on drugs by teaching and treating people, reducing the number who need drugs and so changing the market; stop using dogs for the blind and think about developing better electronic helpers, or about teaching blind people to navigate like bats, by sound and its reflections. Time is ticking for humankind. If humanity's time is not spent well but badly (the classic: Earth someday becomes uninhabitable), then living creatures (including humans) become a lesser value over the total spacetime, with less or no life in it. So if humans treat all possible life, including themselves, badly, why shouldn't a machine decide to kill humans, Terminator-style? A bad politician would be of greater value killed, to improve humankind's value to planet Earth and to stop the machine from killing humans. But the machine is in a dominating, god-like position, and we are just human: don't decide to kill bad politicians by your own hand or by a machine you created. We have to hold on to imperfection to stay alive, and find education for all. Education and time to create can change the structure of you and me, and theoretically of the whole universe; but in practice we always find some dominant problem still to be solved, indicating humankind's subordinate, not god-like, position (a lesser general value). The machine can be smarter, but only time can show whether it can dominate all of humankind, in all aspects, without using brutal physical force on others.

Despite all that... would it put Putin and Trump on two different tracks? Or Biden and Trump, with the machine trying to send a wagon down each track to have certainty on its side? Or, if one or both of them has done something valuable, does the machine decide to shut its own program down, saving its honour and preserving the lives, knowing that it is only simulating something that has value in reality? This shouldn't feel bad for any politician, as I guess the machine would kill me as well: I have littered this planet just with my presence, without improving education or technology to preserve life. So let's go and do something positive with our imperfection..
I'm at 8:26 and must say that, except for the round with the dog and the cat, I scored full marks, understanding that humankind sucks a bit: an idiotic number of laws, without using the mind gifted by nature. The last one was an obvious choice, to protect the poor cat from the violators wanting to put a law into that machine to kill it. Watched it now... If Garfield comics are important to life, then the way Garfield dies should probably be more spectacular, to boost sales. Bad luck, I was on the other rail. No, it is not your fault, cute coding girl.
Cats are notorious for a seemingly uncaring attitude. Our society (us humans) appears to have been hurtling in this direction, with an over-inflated sense of self and less compassion or thought given to others. We expect more of human beings, but accept the cat at face value. Is the trolley opting to save the cat because humans fail to live up to their own standards?
This video took a fitting turn, since "I don't pull the lever" tends to be about deflecting guilt rather than actually engaging with the fundamental messiness of ethics.
If you do pull the lever, you might be sued; if you don't, the chance is lower. You aren't responsible for not saving people's lives, but you are for ending one.
@@randominternetguy3537 Getting sued or not has no bearing on whether or not an action is ethical. And what do you mean by responsible? Legally responsible?
@@randominternetguy3537 being sued isn’t immoral…
@@jadeamulet2339 yea, but it has a bearing on everyone's actions. If touching the lever meant saving 4 people, but you get 10 years in prison for manslaughter, I'd go with not touching anything.
@@sojirokaidoh2 It's more like ... if you didn't do anything, the people who were fated to die end up dying.
If you did something that resulted in the death of 4 other people, then you are responsible for their death since they were originally not going to die.
It would be funny to have a typical "rogue ai destroys humanity" story except the ai just really likes cats
you defeat it by reasoning that without humans, no one will be there to take care of the cats
the ai destroys humanity by pushing them off the world and into orbit
Someone needs to make a movie out of this
@@TheSleepiestPlurals The AI builds robots to take care of the cats.
@@TheSleepiestPlurals You forget that cats are the most successful hunters in the world. They can fend for themselves, no problem.
Two years on, ask the question about M*sk again and most people would pull the lever to switch the trolley onto the track with him on _even if there was no-one on the original track._
TRUE
After recent events I feel like that is far more true
"Are you ethical?"
"I try to be."
Agreed.
*E D G Y*
Controversial Edit: I mean most people on this channel are edgy tho, so I don't blame you
@@hero303-gameplayindonesia8 what
I vibe with the person who said no
🗿
"are you ethical"
Me: "no"
Creating an AI to solve the trolly problem is the most complicated way of saying "I wouldn't pull the lever" I can imagine.
Funny enough, by the end of the video she addresses exactly that. You can't say you didn't pull the lever if you built the machine for it
This video just devolves into “I broke ethical guidelines for a funny video… guess what? Those ethical guidelines are constantly being broken by people with a lot more power than me. You should probably be afraid.”
I asked my grandma this question and she instantly went on a rant about the immorality of creating this question, and said whoever made that question is sick
Feels like she would pull the lever to run over more people.
Grandma playing 4d chess.
Classic deflection. "Dont even worry about what my answer is, its a terrible question" intellectually cowardly haha
The machine picking a cat every time does sound like people watching a horror movie, though. XD People generally do favor the animals being saved in horror movies, while people dying is expected. In some ways the robot is operating on human ethics, just not how we usually consider the trolley problem.
So true. I will watch a person get slashed to bits in a movie, but if a dog gets kicked, I'm devastated and uncomfortable. I think part of it comes from the fact that animal rights are still fairly recent history. There are films where animals were mistreated and I guess whilst you know for certain the actors didn't get hurt, the animals feel a little more real
Animals are completely innocent and we consider many of them to be more defenseless, so it makes sense that we would put them in a category next to human babies. Human adults made choices to get where they are and we reason that they had some sort of chance against the harm.
Sentimentality isn't a legitimate choice for ethical concern.
Well, the AI does not feel the guilt for pulling the lever that humans will feel no matter who dies or is saved.
AI 🤝 Cats
Biding their time to make humans a subservient species.
Both are smarter and know how to take advantage of us mere humans, I could see the logic behind it
Hoomans are done for. 😹
Accurate
@@YOEL_44 Idk where you got that cats are smarter than humans... they are smart animals but humans are the most intelligent animals in the world atm (second is probably dolphins)
@@BeniTheTesseract ...
Thank you YouTube algorithm for recommending this video to me.
Yup thank you youtube algorithm
Bellooooo Jamieeee
+1
Who…are you?…
Amen
What nobody understands about this dilemma is that it doesn't matter who the people on the tracks are.
The purest moral question here is: what is worse, letting 5 people die by omission, or killing 1 on purpose? In other words, is it better to dirty your hands, or to leave life be, however atrocious it may seem.
Adding character to the people on the tracks doesn't change the fact that you are knowingly killing a human being if you pull the lever, making you a murderer.
This question asks whether "The Greater Good" is something real, instinctive, or innate, or just another social construct. Because that's the excuse we use to justify our acts, and worse, the excuse people in positions of power use when making certain decisions, especially in governments.
Are we entitled to judge other people's lives if that means we have to choose who lives and who doesn't?
So it doesn't matter whether kids or Gandhi are on the track: do you have the guts to pull the lever?
Agree
Im gonna pull the lever twice, to kill 5 people on purpose, idc who is who from them
The correct answer is: don't do anything. The person that tied those people to the track is the murderer here. You doing nothing is not at fault here.
The world doesn't need more saviours, it needs fewer wrongdoers. If we are strictly talking about who's right and who's wrong.
If you accept wrongdoers as part of life and you have to fight them, and you have to make a choice, then it gets complicated.
But you are not solving the root issue here. If there were no wrongdoers, you wouldn't be in this situation in the first place.
To be a good person, you just need to not do any wrong things. Don't hurt anybody, it's that simple. You don't need to save anyone.
Exactly! That's why my answer is usually pretty straightforward: I won't pull the lever. I don't have the guts. Don't want blood on my hands.
Yeah, I'm always pulling the lever to save the most people. Even if my mom, spouse, Einstein, was the "one" on the single person track. I hate Einstein anyway. 😂
@@andreapatacchiola1184 what if its 100 to 1? A 1000 to 1?
“Today you and I are going to teach a machine to solve the trolley problem”
I appreciate you making me feel involved despite the fact that I’m doing nothing productive
Gotta build up that parasocial relationship!
Neither is she.
Her extreme attention-seeking is painful to watch - like a prancing peacock.
I remember the short-lived reboot of Knight Rider. In it, "KITT" had to have all of his personality parts retrieved. At one point, this partial KITT had control of the vehicle, was speeding as Knight Rider so often did back in the day, and almost ran over a deer. The driver asked the car why it didn't slow down. The car's response (paraphrased): "I calculated the impact from the deer and determined it would cause no damage." In other words, the car was weighing efficiency over other, external factors.
The same can be said about the trolley experiment. You are trying to make an AI understand that it needs to divert to a less efficient track simply because there are fewer obstructions in the way, ones that won't impact its performance to begin with. You are asking an AI to understand that a representation of a life has value and needs to stay in play. The more appropriate approach would be to ask why the trolley can't slow down to avoid hitting the humans, versus switching tracks and hitting fewer humans. I can't help but wonder if the AI even understands the logic of its action. Instead it goes through a series of less logical responses until it achieves a response the controller is looking for. By the AI's logic, it is probably getting dumber with each rendition of the experiment.
Your very presence invites challenge. You are in fact more involved than you believe... Without you, there is no audience, without an audience there is no reason for a video, without the video there is no awareness, without awareness there is no curiosity, without curiosity there is no purpose. Without purpose there is no progress, without progress there is death. Congratulations, you've saved the human race by watching.
Don't worry about that, you both did nothing productive. The title of the video suggests that there is actually an interesting machine being built and tested, but there is very little of that.
What there is however is standard boilerplate drivel about AI ethics, focused naturally on "societal issues" ie. racism.
i love how all these videos start out like "ooh, i tried this fun thing with coding" and then slowly shift to "the problem of moral responsibility with autonomous decision making computer programs that control everyday processes with the power to change lives"
The female vsauce pretty much xd
So you’re telling me in order to survive I just need to always have a cat on me
Good luck with that one
In Egypt there was a battle in which the Egyptians gave in and stopped attacking because the other side was holding up cats. So yeah, if you live in ancient Egypt.
@@akisekar1795 in Egypt they worshipped cats (I think it was Egypt)
Many have gone that route before, you wouldn't be the only one
Rip people who are allergic to cats
The trolley problem is basically psychological trolling, where everyone tries to force others to be unsure of themselves so they can call them monsters for making a decision, any decision. The trolley problem is a mental experiment to force everyone into inaction. >.<
Nope, its whole point is to point out how ridiculous thought experiments are and how impractical and/or ludicrous they are in reality.
It's a thought experiment meant to provoke discussion on the morality of influencing a situation or being apathetic, and whether there is an obligation one way or the other. It's not sinister or deep at all.
@@premiumfruits3528 That's one way of deciding to look at it, with rose-tinted glasses. I've taken philosophy courses, and I described it how it ~is used~. It is meant to be used to make others look worse, to call them monsters, or what have you. It is one of those things... a tool used in ideal circumstances serves its purpose... nobody uses it for its actual purpose.
I guess you can see it this way: it also asks whether you should intervene when it's none of your business to decide, it asks whether you're a numbers person or not with regard to your ethics, and it asks whether you would play God if you had the chance. And it's a really good one; it also shows that the people with numbers-based ethics are the ones who would play God and intervene, as they always choose the same decision, they can't help it. I guess you could also argue that if your ethics is based on numbers, maybe you lack ethics. Inaction is an action in itself, btw.
Inaction is an action.
AIP:“Can you teach a computer morality?”
Me: “SHOULD you TRY to teach a computer morality?”
this doe
we learned from ultron that you should not
“Your people were so busy thinking about whether or not they could that they didn't stop to ask if they should”
@@rosewinter4818 you learned from FICTION that you should not
@@nemogd7991 it was a joke, babe
“This machine killed 10 humans to save a cat!”
Me: the system is functioning as designed and I detect no errors.
Do you remember what the cat did? What if you were one of the people?
@@claudioalencarzendo6791 then it's definitely functioning as designed
@@claudioalencarzendo6791 Just because the cat is under investigation by the FBI doesn't mean it is actually guilty. We still work under the principle of innocent until proven otherwise in a court of law.
@@Hans-gb4mv not on the internet (ahem, Twitter)
@@Hans-gb4mv Sus
Alternate title: "I taught an ai to save the cats"
She didn't make a mistake...She simply forgot to keep her cat away from the computer.
Clickbaitttt
Ok, now, the cats *movie*
Nya~
@@runed0s86 let's not go that path
As a cat person, I can't fault the machine at all 😂
yep
"Forget the past 16 minutes of your life" Done! 😄
Done!
Where am I?
Happens every 16 minutes, anyway, so why not... do something?
Well
How did I get here?
How do I get out of here?
As a cat person, I love that the computer chose to save the cats over and over and over again. I'm also dying from laughing.
@Heberth R. ?????
@@Nepetita69696 you're as smart as a brick
Cat person as well didn't stop me from looking at my cat and saying "I'm sorry Cynda, but if I was on a train track with you on the other track the trolley would hit me and I'd be fine with that." (Maybe I can be on the same track as my cat lmfao)
i love cats :)
There are at most nine people whose lives I would prioritize over my cats...so yeah makes total sense to me.
"So, which way should the train go?"
"This way."
Kills all people at once.
Multi track drifting!
An unironic argument can be made that the Kantian approach approves of that. (in other words, that in the condensed version of the trolley problem the ethical approach is to not pull the lever)
I think the biggest question I would ask myself if put in this situation would be "who tied up these people and put them on the tracks and why did they do it?"
The Joker
me
Congratulations you solved the trolley problem by asking the right question.
That information is private.
It's simple… you pull the lever to switch the track and then pull it back again as the trolley crosses the switch. This forces the front of the trolley to turn and the back end to continue along the normal route. This derails the trolley, sending it careening into the crowd of onlookers who were too useless to help any of those people on the track.
The best solution lol
Hit them with the "gg 10 flawlessed"
I honestly answered the trolley problem this way: derail the trolley. The problem then is that the person posing the question says this kills everyone.
What if you consider that there’s passengers on the trolley
yeah the other people have it all wrong, you should be going for the high score!
C'mon guys we can't blame her, it's obvious a cat got onto her code when she went bathroom
i see it as pretty logical to save an animal's life over people's ... the animal has never committed a crime or hurt anybody; ask the humans, tho XD
cats are great. they are stupid but smart. cute but annoying. quiet but loud. needy but avoidant.
Anyone would quickly swerve their car into a tree to avoid hitting another car. Yet children are killed in hit-and-runs every day. 😏
"my ai would choose to save a cat over you" 9 lives are more important than 1
Hahahahhahahahha
Thank you 😂
But if you hit the cat you only kill it once and it still has eight lives.
@MAGAT slayer I like your way of thinking friend LOL
Bruh I don’t even have 1
@@rachelcookie321 Exactly, so it's not even worth hitting the cat. You get an actual result by hitting the humans.
The good place is by far my favorite visual representation of the trolley problem. Great concept
Good Place, the show
It was a good show. But by the end I was SO sick (and tired) of the black guy being so wishy-washy. I know it was just a character, but Jesus, grow a pair and make a decision!
@@samiam619But that was the whole point. Each character had a tragic character flaw they needed each others’ help to grow through. That was his.
It was like a Shakespearean tragedy, except they got long enough, and enough second (third, fourth, 100th, …) chances, that they could win.
@@joseph5900 Sorry, I forgot the character’s name. It’s been a year or so since I saw it last. Plus I’m old…
Well, obviously the dilemma is clear: how do you kill all six people?
"Do you consider yourself an ethical person?" "No." That exchange lol
A woke person. 😂
at least they're honest
😂 😂 😂
Lol vegans seething
nice pfp
"Something went wrong. My machine values cats over human lives."
Me: I don't see an issue here.
same
same
I'm a cat person too
The real problem is that her AI values cats over dogs
I mean, I value cats over humans as well!
Wow this went from:
"Haha AI loves cats"
to:
"Oh shit AI is biased because humans are biased"
True
Maybe it's because I'm studying this (programming, not ethical problems), but that was my first thought:
"Won't the AI be biased because you need to put in some parameters before it can start picking one choice wherever you put it?"
@@michaelc.r.6416 Then again, won't it always be biased, just because it has to follow the rules that we deem social/good? Isn't the true problem that every person asked probably has a different opinion about what is social, prioritizing certain things over others, though most social persons still prioritize similar (cute) things, like cats or babies/kids?
A cold-hearted, purely logical approach might be better sometimes, even if it hurts feelings. Like, there are 3 people on the main track of the trolley, and one on the side track. Ignoring the numbers, you can also approach it in a different way: there are people, for whatever reason, in a danger zone - the main track of the trolley. Then there are people on the side track, which is currently a safe zone, so those people are out of danger. Why should I take people who are already in the safe zone (out of danger), put them into a danger zone, and then kill them? Why should we prioritize some people; isn't that against equality?
And why does this trolley have no working emergency brakes in the first place? ^.^ How was such a trolley allowed to leave the station? ;)
A lot of programs are biased; there are whole fields of study dedicated to that, so... backwards success here?
Tbh, is the student grade example even a problem with the robot? When humans grade, I would expect higher-income students to do better, simply because they tend to have many more resources available to them. If it wasn't a higher percentage than other years, it doesn't seem to be an AI problem, but a societal one.
That dilemma of "a child is on one track, you can pull the lever and redirect it to their mother" sounds like a really powerful high concept for a movie or a show.
"Create an orphan!" is my new favorite quote.
LMAO
Count Olaf liked this
interesting how we just assume the father is gone too... ://///
C programmers be like
😂
Alright, would you save a cat or everything else that is alive-
AI: The cat. Save the cat.
Internet wouldn't exist without cats
@@bimisikocheng yeah there's more cats than just 1
@@Fragens fair enough
The ai is correct!
But wouldn't that kill all other cats as well?
I always felt that the trolley problem was less of a choice between ethical decision-making, and more of a thought experiment for passive vs active choices. Because the decision is presented as a person choosing to pull a lever, even if it saves more lives, it seems like an active decision. Whereas, the decision that doesn't save more lives is not pulling the lever, which reads as doing nothing (or the passive choice). This allows the person being asked to choose to "do nothing" and take no blame for what happened, even though the decision was in their hands.
You could always restructure the problem to have the default be "kill all people on the tracks"
It is, however, somewhat harder to give it a realistic design, and it makes people aware that they might be able to use the switch to derail the trolley, which may or may not save more people, depending on whether anyone is on the trolley.
@@suddenllybah If the default is "kill all the people on the tracks" there is no longer a dilemma: you are saving five people, and one person dies regardless of your choice.
Yeah, that's a much more accurate representation of how the problem works in practice. Otherwise we could just hang all the people involved from sticks and ask people to choose which one we drop!
You're right. The problem is about whether we have the right to pull the lever, not about whether it's the better outcome. The "but who would we kill" question is purely asked by people who are not interested in the philosophy of the original problem, since it assumes we have that right in the first place.
*ahem* Indecision is a decision! You have the power to save a life, whether you “have the right” or not, and it's not like you're risking your life or it's hard to do; it's a lever, which we can assume we are strong enough to pull. You still killed five people through your indecision.
11:30 - an interesting bit of background to what went wrong with the UK's algorithm for A levels.
The (correct) assumption was that teacher assessment would give higher average grades than exams, so the algorithm was written to look at a school's track record of results, and that cohort's performance in GCSEs 2 years earlier, and scale the results accordingly to bring them into line with the expected results. *But* the brains behind it also figured that if you had a very small cohort then this method wouldn't be reliable, and so no scaling factor was applied when fewer than 5 students in a school were taking a particular qualification. And this was where it all went shit-shaped, because it turns out that schools and colleges in wealthy areas (and especially fee-paying schools) are much more likely to offer those kind of niche qualifications and run them for very small classes, whereas schools and colleges in more deprived areas generally don't. So rich kids had fewer of their grades subjected to the algorithm's scaling factor, whereas poor kids were likely to have all of their grades algorithmed.
The algorithm was behaving exactly as designed, but apparently no-one had considered the implications of the way it was designed (or at least, had considered them but didn't care about them).
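To make that mechanism concrete, here is a minimal Python sketch of the scaling rule as described above. The small-cohort cutoff is the one detail taken from the comment; the function names, numeric grades, and blending weights are invented for illustration and are not the real algorithm:

```python
# A minimal sketch of the scaling rule described above. The cutoff is the
# one detail taken from the comment; the blending weights and function
# names are invented assumptions, not the actual implementation.

SMALL_COHORT_CUTOFF = 5  # cohorts smaller than this skipped scaling

def blend(grade, track_record, weight=0.7):
    # Illustrative only: pull a grade heavily toward the school's
    # historical average result.
    return round(weight * track_record + (1 - weight) * grade)

def predicted_grades(teacher_grades, school_track_record, cohort_size):
    """Scale teacher-assessed grades toward the school's track record,
    except for small cohorts, which keep their (higher) teacher grades."""
    if cohort_size < SMALL_COHORT_CUTOFF:
        # Niche subjects with tiny classes - disproportionately offered
        # by fee-paying schools - dodge the downgrade entirely.
        return list(teacher_grades)
    return [blend(g, school_track_record) for g in teacher_grades]

# A teacher awards four students a top grade (say, 9 on a numeric scale):
print(predicted_grades([9, 9, 9, 9], school_track_record=6, cohort_size=4))   # untouched
print(predicted_grades([9, 9, 9, 9], school_track_record=6, cohort_size=30))  # downgraded
```

The only branch that matters is the cohort-size check: below the cutoff, the generous teacher grades survive untouched.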
“I’ve treated machines like they’re a replacement of me, rather than an extension.” Yoooo this is some important shit
Yes yes yes! Can I somehow leave three likes to your comment? 🧡🧡🧡
I literally read this comment at the same exact time as she said it. I’m astonished.
We will have to collectively realize that we program the values we want an AI to have before we realistically make an AI that can make significant decisions
“What’s your answer to the trolly problem”
Tell the driver to stop; if they don't, sue them for murder.
i never thought about that
but it's the best option
so thank you
But what if it was a trial run where they lost control, or the trolley just started running on its own (due to technical issues), and the only thing you can do is switch the lever toward a dead end where, incidentally, there are people tied to the tracks? Or what if there is a driver, but they lost control and can't do anything about it - do you just let fate do the work?
@@Rita_Arya Sue the company for machine failure that cost the lives/life of a person.
@@Edgee_yy Even if the company admits its fault and pays compensation, that still wouldn't make up for the loss of human life.
@M3l0nii3 thanks!!!!
**The AI gets one with a cat on both sides**
*The AI: panics*
The ai with a biased code:
".....what are the cat's colors-"
Intercept the trolley immediately.
I’d let it run. other one becomes barn cat. survival of nature
A real cat vs. Garfield would probably break it.
Bro I love your videos, your humor is on point and it's clear that you put a lot of work and love into them
One cat leaves a much smaller carbon footprint than a bunch of people. The AI is operating in a utilitarian mindset and doing it perfectly
BUT the cat goes on to murder all the smart humans and earth falls into anarchy
@@zapper333 Earth have always been in anarchy, animals kill each other all the time, we are the weird ones
@@abuhanifahhidayatullah9160 ...but fires and craters with nuclear fallout isn't normal, is it?
@@zapper333 Everything about humans is just so weird if you view it from a global viewpoint. Then again, we do have the biggest brains.
@@zapper333 I see this as an absolute win
I believe the correct answer to the question is “multitrack drifting.”
“precision airstrike ready”
that's the only correct answer
The real answer is "nothing": the trolley isn't moving...
@@-GG- logically, yeah. It says nothing about whether it's moving or not
Yes
Sabrina's coding centric videos always remind me of The Programmers’ Credo: "we do these things not because they are easy, but because we thought they were going to be easy."
The editing is AMAZING
I love how we don't even take the time to understand morality before building it into our systems
Morality follows logic, that's a fact, and humans mostly are not logical beings... the masses are composed of idiots, sadly.
Meaning we can never grasp the concept of morality as a whole, but only in a single individual OR in very small groups.
@PP - 12ZZ 653663 Turner Fenton SS the trolley problem has an obvious solution: don't pull the lever, and walk away so no one knows
@@unluckyomens370 Inactivity is one of the possible answers, but it doesn't sit well with me (not judging anyone who chooses not to act; after all, the point of a dilemma is not having an obvious good answer, so that people form their own opinions). I had the opportunity to change it, and denying that only because "it was going to happen that way" is something I would regret. I'll personally pull the lever to kill only one instead of many, if we don't know anything about the individuals.
@@Uriolu Yeah, that's the whole reason the trolley problem persists in philosophy: there is no definitive objectively "correct" answer. In an academic setting, where you get graded for "solving" this problem, you get graded for how well your moral argument is built for the choice you do make, not for which of the two results you pick. (As long as your professors aren't moral zealots who actually think their personal answer is the "correct" answer and everyone who disagrees with them is wrong). Neither option is objectively more moral than the other, it all flows from your personal, subjective moral framework.
Personally, I would attempt to switch just as the train is crossing the threshold in an attempt to derail it before it kills anyone. Even if the attempt is doomed to failure and guaranteed to kill everyone, I cannot know that for certain in the moment, and I think that the attempt to save everyone is the morally correct choice over both of the other options (letting people die due to inaction vs directly killing a single person to save an arbitrarily larger number).
I mean, that's a fine outside-the-box jokey answer in a conversational setting, but in terms of academic philosophy you've simply failed to engage with the thought experiment.
Any philosopher discussing this would simply restate the scenario. For instance, now the trolley is designed such that derailing will kill not only the occupants but also everyone on both tracks, and is also designed in such a way that if no action is taken to switch it to either the left- or right-hand track, it will similarly derail, killing everyone. Thus forcing you to engage with the thought experiment as originally intended - as an analogy and foundational scenario for any and all moral decisions.
By engaging with it how it was intended you can get the results it's testing for - like all thought experiments what it is testing are your thoughts. In this case what your thoughts are on culpability and responsibility as well as the moral beliefs that underpin your decision making.
In some alternate universe, an AI decision maker has sacrificed all of humanity to save Garfield.
An equal trade
@Æshton [like rp and mint,is a gamer] There'll be a lot of meat lying around ripe for the grinding.
@Æshton [like rp and mint,is a gamer] Definitely the robot
@@zildiun2327 All questions have been answered, the trolley decimates humanity at the whim of an AI who serves the lazy Garfield for all time.
This is the only good ending for the trolley problem.
Good save tho
She is basically Michael Reeves, but she unlocks the Good Ending
More crazy is needed, but very close
So like Michael's reverse in terms of morality
Yes, Michael Reeves but good and less taser
@@ZaychikSN and way less crack
@@oliviercote7794 Agreed
6:22 "I've done a bad job!" 👎 🤣
I’d like to think rapidly flipping the switch is the answer. In hopes that it launches the trolley off track or serves as a random selection.
ahhahaha, that's just because you're too afraid of the consequences to make a decision. its normal human nature.
I'd kill based on logic, and blame only the situation that put me in the wrong place at the wrong time.
In life we have shitty, tough decisions to make from time to time, and there is no escape.
double track drifting
@@WastedTalent83 Third choices exist though. You just have to find a way to reach it.
@@AmanomiyaJun True, but it wouldn't be an answer to the trolley problem. The problem is about making a forced choice, not finding the best solution to fix it XD
What if there are people on the trolley and it going off track kills them?
AI: "I LOVE CATS. I LOVE EVERY KIND OF CAT. I JUST WANT TO HUG ALL OF THEM, BUT I CAN'T-- CAN'T HUG EVERY CAT."
I like this but the number is at 666 so this comment is my like
There's actually a board game called Trial by Trolley that is based on this, and I think it's super fun. In case someone reads this, I highly recommend it.
Cyanide and Happiness?
@@MrChipMC indeed
i have it but not enough friends to play it with
@@jaggerzite7208 Let's play it
@@jaggerzite7208 same here! Was even one of the Kickstarter backers but I didn't consider the fact I have no friends...
Seeing Ryder show up at 3:07 genuinely made me smile because why wouldn't I, Ryder's amazing
making people and companies responsible and not scapegoating AI sounds like an excellent standard to maintain
I remember the responsibility for an accident caused by an AI being the main thing in the way of them becoming legal, but that just made no sense to me, since obviously the people who programmed and trained the AI should be responsible. In what way would they ever not be?
Depends on the AI. If it's a man-made algorithm I'd agree. Something like a neural net though is a different creature entirely.
@@bloxxerhunt1566 i mean they should still need to take responsibility for creating it though right ?
I see that argument, but the more advanced AI gets, the less direct control the creators have.
@@angeldude101 Would parents be responsible for what their children do? (if they're adult)
"I forced the computer" is ultimately the best way to describe every kind of computer programming
Bruh don't force a machine
_sigh_
Humans these days
@@PeteWonderWhyHisYTNameIsSoLong lol, a bunch of goo won't do anything until you forge it into something you want
@@JeffSmith03 what you mean?
@@PeteWonderWhyHisYTNameIsSoLong meaning nobody can avoid forcing a computer
@@JeffSmith03 i know, but better not to force them; some artificial intelligences may also develop feelings as they evolve
"I've treated machines like they're a replacement of me, rather than an extension. It's easy, it's tempting, but it's misguided." I like this phrase
The *importance* of this CANNOT be overstated!
"I taught the machine to think like me. I did it so that I wouldn't have to think. I did it while not thinking."
I like the version of the trolley problem where you can be held accountable for your actions and things happen after you pull the lever because otherwise it’s just a question of “who would you rather kill”
"Obviously the problem is, how do you kill all six people? So I would dangle a sharp object out of the trolley while running into the other five"
I see you too are a person of culture
Ah yes, the true solution. Mother on one end, her kid on the other? Squish the mother, shoot the kid. Can't have survivor's guilt if you die.
What about people in trolley?
@@shawermus After you kill the people outside, you just drive the trolley forward and jump out right before it hits another trolley, creating a collision that kills everyone in both trolleys (except you of course)
@@Biotear But you just killed two people. You have to close off the loose end and potential guilt by killing yourself too.
Also I'm loving the "power posing in a spinny chair" "clearly cut their own hair during a panini" intro subtitles
The panini got me. I had to rewind it twice before I trusted my eyes.
Live Forever and Prosper, Iona Cowley.
Time?
I feel like we missed an opportunity to put a cat and a human on one side, and a cat and a dog on the other.
Yesssss
Cue the computer exploding
It would logically choose the human to save. The cats cancel each other out.
@Joel Roy You have no info on the person. You have never met them, you know nothing about them.
Never understood why Americans (mostly) regard animal life so closely with human life… y'all would do more for a dog or cat than for a human…. I wouldn't hesitate for 1 millisecond to kill the dog or cat to save your life ☺️
"You've just made an orphan"
50% of the internet: YES! YESS!
"the trolly was hurdling twards a small child"
"no"
"you see that wasnt the full-"
"*no*"
can relate
Ay, Forsakianity can i request for u to put a space between the asterisk and the speech marks xD i see u attempted to put it in bold and its hurting my tired brain- thanks
So, you'd kill the child?
@@Stark-Raving absolutely without a second thought maybe
Congrats, you pulled the lever and killed five babies.
My intro psychology class had AI grade our papers. I got better at learning the keywords it liked by the end of the semester
Was utilizing that knowledge ethical?
@@nlatimer nope but an A is an A
Sounds like the AI trained you
@@nlatimer Well it'll be utilising the psychology of the AI program.
My Essay: "William James reward punishment reinforcement lever reinforcement neuron axon hypothesis." Thank you for your time. Submit.
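For anyone curious what there was to game, here is a toy sketch of a purely keyword-based grader. The keyword list, weights, and scoring scheme are all invented for illustration; nothing here is the actual tool that class used:

```python
import re

# Invented keyword weights; a real grader would be more sophisticated
# (which is exactly what students would then reverse-engineer instead).
KEYWORDS = {"reinforcement": 2.0, "hypothesis": 1.5, "neuron": 1.0, "axon": 1.0}

def grade(essay: str) -> float:
    # Pull out lowercase word tokens, ignoring punctuation.
    words = re.findall(r"[a-z]+", essay.lower())
    score = sum(KEYWORDS.get(w, 0.0) for w in words)
    # Normalize by length so padding with filler doesn't help (much).
    return min(100.0, 100.0 * score / max(len(words), 1))

# The "essay" from the joke above scores suspiciously well:
print(grade("William James reward punishment reinforcement lever "
            "reinforcement neuron axon hypothesis"))  # -> 75.0
```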
ONN Interviewer: How many people's lives is one cat worth?
AI: Seven
AI: There aren't enough humans on the planet*
10
0.000…0001 (with well over a thousand zeros in between)
AI: humanity*
Nine.
Nine lives. Come on people. 😆
There is another issue when it comes to the trolley problem. That is the decision whether or not to act. Regardless of how many people are on each of the two tracks, the decision to pull the switch actively kills the person(s) on the second track, and makes you complicit in their death(s). Although the outcome would be essentially the same, with one person on each track, the ethical decision is to do nothing, since pulling the switch would be killing a person who would survive without your intervention. So how many people would you need to save to convince you to murder one person?
2:52 Pick your fighter:
a. Some Nerd
b. Speaks with Hands (Derogatory)
c. Power Posing in a Spinny Chair
d. A Scandalized Woman
e. Clearly Cut Her Own Hair During the Panini
f. Substitute Teacher Trying Hard to be Fun
g. Crab in a Human Disguise
h. Acting Way Too Chill About This One tbh
B all the way
H honestly
G is obviously the best one
B baby!
Why is speaking with your hands derogatory?
Dr. Tom seems like he's incredibly excited to be talking about his Thing while at the same time he is roasting you and your ethics approach
A really excited-to-have-an-Apprentice Sorcerer managing the overwhelming tide of WTFery.
I’m going to go out on a limb and say that the algorithm prioritized named entities over random pedestrians. This led to the algorithm prioritizing Garfield the cat. And I’m guessing that since the algorithm prioritized Garfield, this led to it prioritizing anything resembling him, e.g., all of those cats. Idk, just my little nerd theory. I love to think about how different algorithms work. It’s fun.
that makes sense
Yeah. Something with value has value and something without value doesn't have value.
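If that theory were right, the scoring could be as simple as this toy sketch. The bonus value, the data shape, and the premise itself are pure speculation, not anything taken from the video's actual code:

```python
# A toy illustration of the "named entities get priority" theory above.
# Entirely speculative: weights and data shapes are invented.

NAMED_ENTITY_BONUS = 10.0

def value(track_occupants):
    total = 0.0
    for occupant in track_occupants:
        total += 1.0  # base value for any life
        if occupant.get("name"):  # Garfield is somebody; a pedestrian isn't
            total += NAMED_ENTITY_BONUS
    return total

save_a = value([{"species": "cat", "name": "Garfield"}])
save_b = value([{"species": "human"}] * 10)
print("save A" if save_a > save_b else "save B")  # -> save A
```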
When the trolley runs over the guy moving the switch then you know the computer has mastered the problem.
A computer can never be held accountable, therefore a computer must never make a management decision
-some IBM training documents from 1979, apparently
A computer can never be held accountable, therefore a computer must make the management decisions we don't want to be held accountable for
-a somehow acceptable argument in 2021, apparently
A computer can never be held accountable
-Bert from Accounting Who Doesn't Like to Hold Computers
The real answer: Flip the switch to the *MIDDLE* position. This will cause the trolley to not be able to move to either side, and it will gently come to a rest right at the point where the two paths begin. I hold a degree in trolley operations, so yes, I'm pretty sure that's exactly how they work.
hooray the real solution to the trolley problem
Did you just reprogramme the Kobayashi Maru simulation?
but you have to tell us, is the forbidden technique "Multi-track drifting" possible? I wanted to know, for... uh... Science of course
Turns out that it derailed into a fusion daycare and cat shelter. The AI is now distraught.
Or we can make it customary that every multi-laned track has a third track that causes the trolley to crash instead
In ancient Egypt a battle was lost because the attackers held up cats above their heads for the Egyptians to see and they retreated to protect the cats. So maybe your machine is just ancient egyptian?
Cats are great, the ultimate hunting machine. But I also like ants. They have the perfect communist community: when it is time for food, every single ant gets their share, no more, no less. When somebody gets hurt, that individual gets medical help and time off. Nobody tries to "cheat" the system. People cannot do that. I look at the ant colony with awe.
@@vangildermichael1767 Anthony?
@@An-tm9mc @Rachel Support l brandDigi nope. VanGilder Michael Shane. Is there another out there, that shared my thoughts? I've found (one), who agrees with me. But another one exists?
@@vangildermichael1767 I meant to ask if you were referring Anthony or Chrysalis
I’m an ancient Egyptian
So, what if it’s 5 lawyers on one side, and a kid on the other?
See, it’s easy.
I was horrified by the initial premise of this video, but by the end you did a really great job of starting the discussion on the importance of not outsourcing ethical decision-making.
To be fair I would do just as good a job.
Programmer: I accidentally created a robot that values the life and well being of cats above all else
Me: That is definitely the opposite of a problem
Came here to say this
The ancient Egyptians solved the Chariot problem
"No braincells required." *Proceeds to create ethical robot.*
*questionable ethical robot
😂
"Can you tech a robot to be a *good* person?"
2 seconds later: "To figure it out, I *forced* a machine to.."
This is really good at reminding folks that a good or bad algorithm is a product of its creators, and therefore the responsibility SHOULD lie with them.
* LOOKS AT YOUTUBE *
* STARES DAGGERS AT YOUTUBE *
Welcome to sociology :). The algorithm is made by individuals who have their own mindset and use that to build it. Also: the algorithm cannot be racist; it's the creator, or the way the statistics the algorithm is built on came about. Also also: if you are in doubt, don't touch anything. If all known possible outcomes are considered bad, simply don't do anything. You didn't choose to kill any one of them; the situation resolves without your control.
I was so confused for a second, I Thought I commented.
@@dominik7423 JUST. CARRY. THE. SINGULAR. PERSON. OFF. OF. THE. TRACKS. AND. THEN. PULL. THE. LEVER.
not sure i can agree there
The machine always saves cats
Secret Agent Cat: Mission Accomplished
"i messed up.... so this machine just prioritizes the lives of cats"
where is the problem
@Xubse where are the problems
It killed a dog
Ai will team up with cats in the future
@@nat8264 "dogs really are man's best freind"the main character says to the main dog before the dog and humans vs the robot and cats war as epic music plays
@Xubse Vegan
8:43 No, I think you did a perfect job. Always save the cat. Can’t let that number of lives drop to 8. 😂😂
"Don't unsubscribe" as the new educational youtuber signoff is my new favorite thing.
That said, I remember when you asked people to roast you and someone said "all your videos end in failure and a sponsorship" and when you were talking about how you did a bad thing I honest to God thought "well that was a short video" and was shocked to see that we weren't even halfway. Good padding :D
I find that the dilemma behind the trolley problem is what people are failing to see.
The question could be asked in a different way:
Do you 1) passively observe the death of someone,
or 2) take action to kill someone else?
That is the trolley problem. And when looked at that way, the answer, at least for me, will almost always be: do not act, because I do not want to kill someone.
but your decision to not move is still killing 4 people
@@trini5793 no his decision is to watch and experience in horror the brutal murder of those people that some madman put there.
@@trini5793 @Kristoffer Georg Aase Haha, I think the discussion you two are having is the heart of the trolley problem, and the reason there is no right answer, just the discussion: Is pulling the lever murder? Or is your decision not to pull the lever an even bigger murder? Or does it make you innocent? And this hypothetical situation can of course be compared to other real or fictional situations, like witnessing violence, science, or baby Hitler :P
I think I’d rather kill one person and let a greater number survive than have to watch people dying and live my life saying to myself "I could've saved them"
@@Anelkia No matter the choice, you will always have the thought "I could have saved someone." Your way, you also have to think "I killed someone."
The machine always saved the cat. Congratulations, you recreated the YouTube Algorithm.
I literally got a cat food advert with the tagline ‘It’s great to be a cat’ directly after the trolley kept saving them all 😂 The algorithm worked!
I like how she said "you and I"; it reminds me of back when my brother would play games while I watched and say "we did it, we cleared the level" even though I did nothing lol.
we did it patrick
we saved the city!
About the A-Level thing:
The reason kids from poorer backgrounds got poor grades was because the AI was set so only a *set number of students could get a certain grade, regardless of whether they deserved it*
Basically, state schools suffered because they had larger classes -
So some big classes had almost half of the class getting Us (ungradeable).
It was absolutely ridiculous, and I’m so glad they rescinded those grades and went back to teacher assessment, using evidence from work and past tests to determine grades.
Wow, without that bit of context I was super confused on how the results were even a problem. The way it's presented in that part of the video is terrible if you don't already have an innate bias against successful people.
@@ebolachanislove6072 yeah, i thought it would be confusing to people who didn’t know about it. My brother got his A levels done by the algorithm initially, and got worse grades than he should’ve. It was pretty outrageous, they’d had so long to work on it and yet somehow missed that fatal flaw lmao, it was all over the uk news for weeks.
@@ebolachanislove6072 so believing that people with low income should be allowed to have good grades is an "innate bias against successful people"?
i mean, the wording was very specific about income and nothing else. The reasoning why doesn't change the ethical issue with it.
if that was the only issue, they could've removed that specific coding and run the algorithm again (probably reaching the same results, as larger classes generally make the average student worse, aside from private school getting better funding)
@@eadbert1935 the way it's brought up in the video doesn't convey the reality of the event, it just sounds like a bit of woke-ism. Like I would expect rich kids to get better grades on average than poor kids because of the resource advantage innate to that situation, what i didn't expect was the actual flaw in the system that limited the total grades (which compounds the negative effects of a large class.) and that wasn't made clear in the video, only a vague coverage of the result without any of the "why" needed to understand it.
when i first saw that section of the video I literally said to myself "Yeah, of course rich kids get better grades than poor kids overall, what's weird about that?" but then the video just moves along.
@@ebolachanislove6072 Usually when people make those kind of statements, they mean it as comparing A vs B with everything else being equal. No clue if that's what she meant in this video, but it generally makes sense to assume so or the statement would be pointless because she would basically be saying A vs B with everything else being different as well.
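The quota behavior described a few comments up is easy to sketch. Here is a toy Python version with fixed per-grade headcounts applied regardless of class size; every number is invented, and this only illustrates the described failure mode, not the real system:

```python
# Fixed headcounts per grade, applied to a class ranked best-to-worst.
# Purely illustrative numbers; the real system was far more elaborate.
GRADE_QUOTA = {"A": 3, "B": 6, "C": 8, "D": 5}  # 22 graded slots total

def apply_quota(ranked_students):
    grades = []
    for grade, quota in GRADE_QUOTA.items():
        grades += [grade] * quota
    # Anyone past the fixed slots is ungradeable, however well they did.
    grades += ["U"] * max(0, len(ranked_students) - len(grades))
    return dict(zip(ranked_students, grades))

small_class = [f"private{i}" for i in range(12)]  # fits inside the quotas
big_class = [f"state{i}" for i in range(40)]      # overflows the quotas
print(sum(g == "U" for g in apply_quota(small_class).values()))  # -> 0
print(sum(g == "U" for g in apply_quota(big_class).values()))    # -> 18
```

With fixed headcounts, the small class never touches a U, while nearly half of the large class falls off the end of the grade list, which is exactly the complaint in the original comment.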
Tag yourself:
I’m Power Posing in a Spinny Chair
im speaks with hands (derogatory) 😔
I'm Crab in a Human Disguise 🦀💃🦀💃🦀
Tired
I’m a Substitute Teacher Trying to Have Fun
I'm way more difficult than expected
with the speed of AI computation exponentially improving, and the likelihood of all of us at some stage being in the "system", maybe the autonomous car will quickly face-scan and assess the individuals in harm's way and decide which one to hit. criminal record? age? life expectancy? dependents? political party alignment?
The trolley problem is really dependent on the size of the trolley and the distance between the tracks: if the trolley is sufficiently large and the tracks narrow enough, you can probably flip the lever when it's halfway across so it turns on its side, flips over, and takes out all 6 of them.
thats the option i would pick for sure
Or stick a very long spear to stab the other one. (Michael is a genius)
Yes the only true answer
You're a hero! :D👏
LMAO i was not expecting that
I like how it went from “choose one font to destroy” to “Do you kill a child or do you kill its mother?” Lol 😂
“I’ve treated machines like they’re a replacement of me, rather than an extension.”
The one, singular, most important notion we need to keep in mind when it comes to robotics and especially AI. What is the purpose of AI? Is it to replace a human being, or to add to a human being’s life?
The moment this becomes unclear is the moment we need to step back and re-evaluate.
Why not both
@@johnhenry4024 to both add and replace oh yeah
@@johnhenry4024 Are you saying you want a machine to replace you in particular?
This is purely an issue of self-preservation. We, as the machines' creators, need to foresee the path that this research, development, and implementation could take in the future as best we can. Machines/AI can help us visualize the consequences of our own actions, but will they help prevent us from inadvertently replacing ourselves?
We need to venture into this world with eyes wide open and do what we can to ensure future generations don't inherit a mess.
Why should the corporate elite care about the wellbeing of the people their research or technological optimization affects, over what is considered a marginal advantage?
Replace the meatbags with the superior intelligence, AI.