Defining Harm for AI Systems - Computerphile

  • Published 30 Jul 2023
  • How do we measure harm to improve the performance of AI in the real world? Dr Hana Chockler is a Reader in Computer Science at King’s College London.
    EXTRA BITS: Defining H...
    Links from Hana:
    title: A Causal Analysis of Harm. authors: Sander Beckers, Hana Chockler, Joe Halpern. conference: NeurIPS'22. link: proceedings.neurips.cc//paper...
    title: Quantifying Harm. authors: Sander Beckers, Hana Chockler, Joe Halpern. conference: IJCAI'23. link: arxiv.org/abs/2209.15111
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 156

  • @EDoyl (1 year ago, +135)

    Some very old philosophical questions have sat with no answer or multiple contentious answers for a long time, and now the computers need solid explicit answers to all of them. That's quite a daunting problem.

    • @weksauce (11 months ago)

      False. Computers don't need any of these questions answered. Nor do humans.

    • @boldCactuslad (9 months ago, +4)

      @@weksauce Enjoy your default ending to the AI apocalypse, friend, because that's all you can hope for without answers.

  • @thuokagiri5550 (1 year ago, +55)

    Philosophers touch anything and suddenly it turns into this convoluted deep dark rabbit hole.
    And I love it

    • @ahmadsalama6447 (1 year ago, +1)

      Ikr, not just in computer systems, everything man

    • @odorlessflavorless (1 year ago, +2)

      and make anything deranged ? 😂

    • @raffriff42 (1 year ago, +2)

      It’s great when philosophers debate while millions die.

  • @benjaminclehmann (1 year ago, +10)

    Worth noting that defining harmful actions as those which decrease someone's utility is a utilitarian idea. Utilitarian ethics (where what is moral is determined only by how it impacts some goal, such as a utility function) is very useful, but it regularly contravenes human morality. Utilitarianism leads to an idea of morality that can much more readily be reasoned about (it's why economics originated as an offshoot of utilitarianism), but it often also leads to a morality that we would object to. Think of all the supervillains that assume the ends justify the means.
    This isn't a criticism: utilitarianism is very useful, and there's a reason it's the most realistic way we can rigorously define harm without relying on human judgement. It's just that utilitarianism can be very easily misused; the history of science can be ugly, and utilitarian reasoning is often a prerequisite for those ugly deeds.
    As a note, Dr. Chockler talks about a simple change in utility until she gets into her probabilistic example, but in general the utility function of some moral philosophy can be a lot more complicated: it can be a (potentially probabilistic) social preference function that considers multiple people and some notion of equity.
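
A minimal sketch of the default-relative notion of harm this comment describes, in the spirit of the Beckers, Chockler and Halpern papers linked above. The scalar utilities and the simple max() form are illustrative assumptions; the papers define harm over causal models, not bare numbers:

    # Harm as the utility an agent loses relative to a counterfactual
    # "default" outcome; doing at least as well as the default counts
    # as zero harm. All numbers are made up for illustration.
    def harm(u_actual: float, u_default: float) -> float:
        return max(0.0, u_default - u_actual)

    print(harm(u_actual=0.3, u_default=0.8))  # 0.5 -> worse off than the default
    print(harm(u_actual=0.9, u_default=0.8))  # 0.0 -> better off, so no harm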

  • @SecularMentat (1 year ago, +28)

    It seems to me 'measuring' any of the systems that lead to the definition of harm is a difficult task. Granted machine learning can fudge a lot of it.
    It would have to know what the preferred state of an agent would be first. That alone is a huge definitional issue I'd imagine.

    • @pleasedontwatchthese9593 (1 year ago)

      I agree; what counts as harm is an opinion that will need to be learned.

    • @SecularMentat (1 year ago)

      ​@@pleasedontwatchthese9593 I think it'd have to be an individual target for each person that the machine knows. But maybe have a 'baseline' for 'average human'.
      But then, if you let an agent work on those assumptions, it seems like, in maximizing its utility function to minimize harm, the machine would by default never take action, because all actions seem to carry some possibility of harm.

    • @bengoodwin2141 (1 year ago, +2)

      These are all things that humans do already, unconsciously

    • @SecularMentat (1 year ago)

      @@bengoodwin2141 yup. We're evolved for it for sure.
      Machines will take a bit of coaxing to get it right.
      Heck humans sometimes aren't great at threat perception. We range from jumping at shadows to wanting to pet the fluffy bear.

  • @austinbutts3000 (10 months ago, +3)

    The medical device industry in the US has largely addressed this problem of harm. The FDA mandates that the manufacturer disclose information about the design to them in proportion to the amount of harm the device could cause. This especially applies to software.
    Good luck getting that approach through to the autonomous driving industry in this environment.

  • @behemoth9543 (1 year ago, +15)

    If AI is ever introduced into societies on a global scale, it will very likely be another area where US, and to a lesser extent European, customs and social structures become an inherent part of a technology and drive their cultural dominance. It's truly fascinating how the internet has already led to a major cultural homogenization of English-speaking people across the world, and that "soft power" is a huge driver of geopolitical reality as well.
    The example of tips is a great one for this rift as well. A waiter expecting a tip in this way is going far beyond anything that could be considered reasonable in most of the world, and would probably cause a lot of customers to never visit that restaurant again if he voiced that displeasure to them.

    • @Norsilca (9 months ago)

      Or said another way, a restaurant not paying a living wage to its employees!

  • @kuronosan (1 year ago, +81)

    If there is harm to the waiter not getting a tip, the waiter is being harmed by the restaurant owner, not the customer.

    • @kalizec (1 year ago, +26)

      This is exactly what I wanted to add here as well. The example of not tipping is so extremely poorly chosen that the entire video suffers from it.
      The example misses the entire point of determining cause, only to try and calculate harm on a non-causal factor.
      The restaurant at least has a contract with the waiter.
      The customer definitely does not have a contract with the waiter.
      It is possible that terms and conditions apply to the customer visiting the restaurant, but I've yet to see or hear of a single restaurant going after a customer for violating their terms and conditions by not tipping the waiter enough, so that's clearly not a thing.
      P.S. people who argue that tipping is a social contract can easily be countered with the following argument.
      Namely that society itself is not honouring a social contract that people, waiters included, deserve a decent wage.
      So, if social contracts are to be considered binding, then the harm is still not perpetrated by the customer but by the society.

    • @rauljvila (1 year ago, +10)

      I find the tip scenario perfect to illustrate the problem with this approach: all the philosophical issues are hidden under the rug of the "default value". Many people won't agree that a 20% tip is the default value when there is no law forcing you to do so. (See the sketch after this thread.)
      EDIT: In fairness, she acknowledges this point at the end of the Extra Bits video:
      > in the example of the hospital and the organ harvesting the default might be the treatment that is expected in our current norms. But you are absolutely right, I mean this all definitely involves discussion about societal norms right.

    • @MrRedstoner (1 year ago, +7)

      @@kalizec And really, the answer is that the US would need to fix their laws; otherwise whoever is making wage decisions would be harming stakeholders in the restaurant, and on the chain goes.

    • @cwtrain (1 year ago, +5)

      Fuggin' thank you! Defining the system inside of exploitive capitalist constructs made me sick.

    • @pleasedontwatchthese9593 (1 year ago, +4)

      @kalizec I think you're reading way too much into it. It's a contrived example. What if the waiter is the owner? Why does the restaurant only take cash tips? Etc.
      I mean, none of that matters; they just wanted to show how to find out more and less harm, not try and fix capitalism, lol.
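
The objection running through this thread can be made concrete: under a default-relative definition, the computed harm of the very same action flips with the default you assume. A small sketch with made-up dollar figures (none of this is from the video):

    # Same action (no tip), two candidate defaults. Figures are
    # illustrative assumptions.
    def harm(u_actual, u_default):
        return max(0.0, u_default - u_actual)

    bill, tip_paid = 50.0, 0.0
    print(harm(tip_paid, 0.20 * bill))  # 10.0 if a 20% tip is the default
    print(harm(tip_paid, 0.0))          # 0.0 if no tip is owed by default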

  • @aarocka11 (1 year ago, +39)

    I initially read that as haram. Lol

    • @saamboziam5955 (1 year ago, +1)

      🤣🤣🤣🤣

    • @uropig (1 year ago)

      💀

    • @MartinMaat (1 year ago, +7)

      Basically the same. We want the least haramful outcome at all times.

    • @WilkinsonX (1 year ago, +4)

      I read Harambe 🦍💔

    • @aarocka11 (1 year ago)

      @@WilkinsonX dicks out for harambe 😭🦍❤️

  • @Insaniaq (1 year ago, +15)

    I love the edit at 2:21 where Bob watches a Computerphile video, had me cracking up 😂

    • @bornach (1 year ago, +1)

      Poor Bob. He was about to get to the best bit of that video just when the car crash happened.

  • @ungodly_athorist (1 year ago, +8)

    Was Goofy harmed by being called Pluto?

  • @Veptis (29 days ago)

    A rarely discussed topic is how this moral compass of values changes globally by culture. Some prefer themselves, some prefer others, some prefer wealth and social status, others the many etc.
    You can optimize a dilemma decision machine for the geographical region with the target of causing least societal outcry.
    Or just get rid of cars.

  • @MarkusSimpson (1 year ago, +9)

    I love Dr Chockler's chilled demeanour, definitely one of my favourite teachers 🤓

  • @paulbennett1349 (1 year ago, +6)

    With the doctor's dilemma, maintenance of the current level of the condition is not a harm of zero. Sure, people get used to being in pain their entire lives, but I don't think any of them would consider it to be a static level of harm. The lack of hope of improvement is what drives many to suicide. So calculation of harm is only as robust as our understanding of all the variables. Since most people investigate to the first point of exhaustion (where is my brain happy to stop) rather than the last (can I demonstrate that any other factors must be insignificant), I can see some rather large consequences.
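
This point can be put in numbers: if the status quo ("keep taking the medicine") is scored as zero harm, chronic pain vanishes from the calculation; give it a small ongoing weight and the comparison with risky surgery can flip. All figures below are illustrative assumptions, not from the video:

    # Expected-harm comparison under two baselines (made-up numbers).
    p_death, harm_death = 0.01, 100.0    # surgery: small chance of death
    pain_per_year, years = 2.0, 10       # chronic condition left untreated

    surgery = p_death * harm_death                 # 1.0 expected harm
    medicine_status_quo_zero = 0.0                 # pain hidden by the baseline
    medicine_pain_counted = pain_per_year * years  # 20.0, so surgery wins

    print(surgery, medicine_status_quo_zero, medicine_pain_counted)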

  • @samuelthecamel (1 year ago, +11)

    The problem is that harm is completely subjective, despite how much we would like to think that it's objective.

  • @chanm01 (1 year ago, +3)

    This is all interesting from an academic POV, but if we're actually gonna do anything with these definitions and criteria, I think you probably need to talk to one of the law professors. Sure, AI presents a bunch of novel fact patterns, but I somehow doubt that the suits which arise are going to be heard as if no prior case law exists.

  • @doubleru (1 year ago, +1)

    In the first example, why is Bob suing his own car's manufacturer, rather than whoever was responsible for creating a hazard in the first place by stopping their car in the middle of traffic that was so intense that there is literally no way for Bob's car to come to a halt in time to avoid a crash? Because as the video itself points out, we need to trace the causality in order to measure harm, and the main cause of the crash was the hazard on the road, not how Bob's car reacted to it.

    • @supermax64 (1 year ago, +2)

      From his point of view, the car chose to throw itself into the fence. Also, he's more likely to get a million-dollar payout from the manufacturer than from a random person. I'm sure some people would or will try to sue unless it's explicitly ruled that the manufacturer is never responsible (which would be surprising, at least at the start).

  • @salvosuper (1 year ago, +2)

    The one thing harming the waiter is the unethical work culture

  • @mpouhahahha (1 year ago, +1)

    I fell asleep and it's still 11am 🤤

  • @zzzaphod8507 (1 year ago, +12)

    Why isn't the option of the car going more slowly and stopping before hitting the obstacle considered as an option?!

    • @rosameltrozo5889 (1 year ago, +3)

      You're missing the point

    • @phizc (1 year ago, +6)

      @@rosameltrozo5889 Not really. At least not in terms of the lawsuit. For the car, the obvious option is to just injure the driver instead of killing him. But for the purpose of the lawsuit, the situation didn't pop into existence when the car decided to swerve into the guard rail.
      Why didn't the car notice the stationary car? Corner?
      Why did it go so fast around the corner that it wouldn't have time to avoid the stationary car?

    • @rosameltrozo5889 (1 year ago, +2)

      @@phizc You're missing the point

    • @phizc (1 year ago)

      @@rosameltrozo5889 explain.

    • @rosameltrozo5889 (1 year ago, +4)

      @@phizc It's not about the technical details , it's a thought experiment to show the difficulties of making AI "understand" what humans understand more or less intuitively, such as harm

  • @klutterkicker (1 year ago, +9)

    So imagine that you're at a time before you get into this scenario, when you have the option of 1.) driving fast, where 0.5% of the time you get into a deadly scenario, or 2.) driving slow, where 0.1% of the time you get into a deadly scenario. We're kind of back at the doctor's dilemma with medicine vs surgery, but is driving slow actually a harm? And what if, instead of poring over all of these decisions, we used that development time to improve our traffic prediction, and we could avoid 20% of possible deadly scenarios... would that have a chance to replace more sophisticated last-resort decision-making? (See the back-of-envelope sketch after this thread.)

    • @vadrif-draco (1 year ago, +2)

      Well said. The example in the video just forced us into situation 1.) and then told us to deal with it, without considering how the situation itself could've been avoided.

    • @user-sl6gn1ss8p (1 year ago, +1)

      @@vadrif-draco I think that's a common problem with utilitarianism: it usually doesn't challenge the reasons for things.
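
A back-of-envelope check of the hypothetical rates in the parent comment, comparing expected deadly scenarios per trip (all probabilities are the commenter's, not real-world data):

    # Driving slow vs. driving fast vs. better traffic prediction.
    p_fast, p_slow, avoided = 0.005, 0.001, 0.20

    print(p_fast * (1 - avoided))  # 0.004 -> prediction alone, still driving fast
    print(p_slow)                  # 0.001 -> driving slow alone already does better
    print(p_slow * (1 - avoided))  # 0.0008 -> combining both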

  • @charlesrussell6183 (9 months ago)

    great look at the big picture

  • @nunyobiznez875 (1 year ago, +4)

    10:36 The standard tipping rate in the US is actually 15%. Though, some like to give more, and I think some people just find it easier to calculate 20% in their head.

    • @bornach (1 year ago)

      At a restaurant I remember being offered a choice at the bottom of the bill: 15%, 20%, 25%. Cannot recall if this was in CA or TX. Apparently there are regional differences.

    • @BTheBlindRef (6 months ago, +1)

      Yes, 15-18% is "service was decent, as expected". 20%+ is "wow, the service was great or went above and beyond". Especially where I live, where all service workers are guaranteed full minimum wages before tips already. I might consider a higher tip rate reasonable in some other places in the US where tipped service workers are allowed to be paid under standard minimum wage with the expectation that tips more than make up the difference.

  • @brookrichardson1373 (11 months ago)

    Why do all of these AI driving scenarios always involve vehicles without working brakes?

  • @salat (1 year ago)

    16:20 Solving moral dilemmas by weighting _everything_? How? E.g. should an expensive, well-protected car preferentially crash into "weaker" cars, because that guarantees a higher probability that its passengers survive while the passengers of the weaker car won't? Should it always crash into other expensive, well-protected cars? Who would want to buy such a crash magnet, and how much would it cost to insure? We've had this discussion before here on the channel, right?

  • @jsherborne92 (1 year ago)

    I feel for Bob

  • @don_marcel (11 months ago, +3)

    I need more Dr. Chockler explanations! Undercover hilarious wit

  • @arsilvyfish11 (1 year ago)

    A great video covering need-of-the-hour stuff for AI!

  • @arletottens6349 (1 year ago, +9)

    The rule can be simple: minimize your own blame. Which means: stick to the rules, drive safely, and only use accident avoidance that does not create additional dangers.

    • @randomusername6 (1 year ago, +15)

      Great, now all that's left to do is defining "blame"! I got it: "blame is responsibility for causing harm". Oh, wait...

    • @underrated1524 (1 year ago)

      Most of society goes by this simple rule. This works, but it does lead to people spending a *lot* of their time and effort playing "blame hot potato". Turning your problems into everyone else's problems is a solid strategy from your point of view, but if everyone does it the problem never gets solved.

    • @C00Cker (1 year ago, +1)

      But then, there is the issue of harming others by being unnecessarily pedantic about following the general rules if the situation requires breaking them.
      Also, most rules are based on the fact that it is almost impossible to coordinate well in real time. With AI agents, the cars can share the current situation on the road and prevent most of the accidents.

  • @eidane1 (1 year ago, +1)

    I think the problem is explaining harm to an AI when the people trying to explain it do not understand it themselves...

  • @vermeul1 (1 year ago, +1)

    Obviously the AI is not driving according to “expecting the unexpected”

  • @AcornElectron (1 year ago, +1)

    Rob looks different somehow.

  • @weksauce (11 months ago, +1)

    What's the "default" is the wrong question. Harm is irrelevant. Everything should do the best expected value benefit minus cost choice. The real questions are how much agents should value other agents' expected benefit minus cost, how much agents should be expected to spend acquiring information to make their expectations more accurate, and how finite agents ought to approach expected values of options that have very small probabilities (Pascal's Muggings and such).
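
A sketch of the decision rule this comment proposes: ignore "harm" and "defaults" and just pick the action with the best expected value. The actions, probabilities, and payoffs below are invented for illustration:

    # Choose the action maximizing sum(p * payoff) over its outcomes.
    actions = {
        "swerve": [(0.9, -5.0), (0.1, -100.0)],  # (probability, payoff)
        "brake":  [(0.5,  0.0), (0.5,  -50.0)],
    }
    ev = {a: sum(p * u for p, u in outs) for a, outs in actions.items()}
    print(ev, max(ev, key=ev.get))  # {'swerve': -14.5, 'brake': -25.0} swerve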

  • @sabrinazwolf (11 months ago, +1)

    I think that's Goofy, not Pluto.

  • @juliusapriadi (1 year ago, +1)

    And next, the car factors in the likely penalties for either its owner or its manufacturer to make its decision. For example, diplomats are not held legally liable, so a diplomat's car might opt for killing some kids if that meant a better outcome for its diplomat passenger. I'd expect a system designed in favor of the manufacturer, not the passenger, as long as there's no regulation telling manufacturers to prioritize otherwise.
    Another thought: all those theoretical concepts are beautiful in their logic, but decisions of politicians and managers are not logical, and often irrational. So I find it difficult to predict the adoption of AI based on whether we'll solve harm or AI safety; it's very possible that the (rarely expert) people in charge will simply "press the button" and see what happens.

    • @RAFMnBgaming (1 year ago)

      A diplomat, however, is more at risk of scrutiny over their actions, and of losing their job over an incident.

    • @muche6321 (1 year ago, +1)

      I believe most politicians and managers are rational. It's just that they optimize for other values than you'd want them to.
      E.g. politicians optimize for staying in power / getting re-elected. Sometimes that means improving the lives of all people within an area; sometimes it means improving the lives of a select group of people at the expense of other groups in the area and ignoring the opinion of those other groups through gerrymandering, populism, etc.

  • @bengoodwin2141 (1 year ago, +2)

    To me, *some* of these seem obvious. You can cause harm without it being wrong if it was still the least bad outcome; "achievable" is relative; and in the second example, making the "default" "achievable" requires harm.

    • @jursamaj (1 year ago, +2)

      The tipping example is flawed. If anybody is causing harm, it's the employer who isn't paying a living wage, so that he can have artificially low prices.

    • @shandrio (1 year ago, +2)

      @@jursamaj But you are changing the frame of reference... You have to narrow it down to only the waiter-client relation in this example to be able to theorize about the problem. Of course, later, in real life, when you take into consideration all the players, the problem gets WAY WAY harder...

  • @Raspredval1337 (1 year ago)

    BUT there's another fitness function: expenses. Imagine you're an autonomous car manufacturer, and your car has decided to crash into a safety fence. Now the passenger is alive, injured, and going to sue somebody, just because they're mad.
    And there's the option to passively crash into another car, leaving us with no angry passengers who would try to sue anybody. And it's even somewhat cheaper to make an AI that doesn't care. Makes you think, doesn't it?

  • @eljuano28 (1 year ago, +2)

    So, is anyone talking about the fact that Bob is a crappy driver?

    • @IngieKerr (1 year ago, +4)

      _You_ can :) but that won't solve the AI alignment issue.
      One has to start from the principle "Assume all operators of this machine are foolish" :)
      Alternatively, he might have had a horrible reaction to thinking about categories of homomorphic cube tessellation symmetries, but then arguably that was Bob's own fault for watching a Mike Pound video about programming.

    • @eljuano28 (1 year ago, +1)

      @@IngieKerr you're my kind of nerd 🤓

  • @dgo4490 (1 year ago, +1)

    Obviously, the fairest and most non-discriminate outcome is everyone ded... Equality and all!

    • @FHBStudio (1 year ago, +1)

      That was the Soviet way, and it's still prevalent today.

    • @raffriff42 (1 year ago)

      “That’s what I call thinking!” ~Majikthise, HHGTTG

  • @FHBStudio (1 year ago, +2)

    There's also the problem that not all "harm" is "bad". Some suffering is necessary suffering. Sacrifice is suffering and there is no guarantee of a worthwhile payoff. If we start from the premise that harm must always be minimized, sacrifice becomes impossible. Growth and investment becomes impossible.

    • @ApprendreSansNecessite (1 year ago)

      You mean sacrificing yourself or "sacrificing" someone else? Because no one would say the former is harm since you do this to yourself, while the latter should be renamed "taking advantage of"

    • @FHBStudio (1 year ago)

      @@ApprendreSansNecessite There's a difference between harming yourself and sacrificing yourself. However, the difference to us isn't always clear, let alone to a machine.

  • @atlantic_love (1 year ago, +3)

    LOL at all the channels trying to ride the "AI" train before it peters out.

  • @Squeesher (1 year ago)

    I love her voice, could listen to 1000 videos with her teaching

  • @theancientagoracorner2379 (11 months ago)

    Poor Bob. Always gets screwed in all use cases. 😅

  • @jeromethiel4323 (1 year ago, +4)

    Without empathy, it's almost impossible to quantify harm. And computers do not have empathy, and I cannot even think of a way to emulate empathy in a digital system.

    • @jursamaj (1 year ago, +1)

      I think a bigger issue is that no machine we now have, or can expect any time soon, has any actual comprehension. You can't have empathy without having comprehension first.

    • @bornach (1 year ago)

      A lot of people lack empathy too.
      That doesn't prevent them rising to the top of society where they run companies which create the AI for self driving cars

  • @OcteractSG (1 year ago, +4)

    It seems like AI is going to be adopted regardless, and the world will have to scramble to figure out the ethical problems before things get too far off the rails.

    • @supermax64 (1 year ago, +2)

      The penalty for waiting is too great because other countries won't.

  • @Mr.Leeroy (1 year ago)

    What accent is that?

  • @timng9104 (1 year ago)

    feels like game theory

  • @SkyFpv (1 year ago, +3)

    Choices which are ethical are not the same as choices which are moral. Ethics concerns justice and reduces a person's blame. Morals ignore justice (allowing forgiveness) and instead consider culture and emotion. You HAVE to separate these two metrics before you can draw a conclusion in these examples.

  • @Cassius609 (11 months ago)

    harm considered harmful

  • @welemmanuel (1 year ago, +2)

    "quantify harm"... engineers trying, and failing, not to be relativistic, this is why technocracy is so appealing to them. I'm not saying it's useless to measure it, the problem is the ruler, morality is arbitrary on a utilitarian worldview

  • @xileets (1 year ago, +8)

    WHY does the autonomous car not detect the hazard and stop? (Because it's a Tesla? heh)
    Seriously tho, this seems like a necessary function of the vehicle, a reasonable expectation for the user, and therefore the manufacturer's or designer's fault for not implementing it.

    • @IngieKerr (1 year ago, +5)

      The point of the example given is that it is assumed a priori that the car absolutely _cannot_ stop in time without breaking the laws of physics, for any number of reasons that would not deem it to be directly a fault of the car's systems [e.g. the car in front arrived suddenly from a side road without right-of-way, the car was not visible due to some transient obstruction, etc.].

    • @phizc (1 year ago)

      @@IngieKerr But she also talked about a lawsuit resulting from it. There the OP's point does matter. Unless of course the stationary car teleported to where it was immediately before the AI decided to swerve.

    • @xileets (1 year ago)

      @@IngieKerr Good point. I would accept this; however, because it IS something highly, HIGHLY unlikely (a hazard appearing suddenly out of nowhere), it's not so useful. Consider how this would happen: sinkhole, plane crashing onto the road, etc. It would have to appear WITHOUT WARNING, inside the anticipated safe stopping distance of the vehicle, in order to be a useful analysis.
      I understand that this is a thought experiment, but being both intimately familiar with philosophy and philosophical discussion, and a risk management engineer, I feel that these highly hypothetical scenarios are far less helpful in teaching and demonstrating "potential" risks, harms, and threats. Concrete examples also have caveats to consider, like engineering oversight, but here we are avoiding physics-breaking solutions in a statistics-breaking problem. Far too fanciful a scenario to demonstrate what is a simple problem.
      BUT I see your point, don't get me wrong. I understand now what they were trying to show.

    • @supermax64 (1 year ago, +1)

      No amount of sensors can make the car precognitive. Some actions from other drivers WILL result in a crash even with the best efforts from the car to minimize said crash. The thought experiment specifically focuses on one such case that is BY DESIGN inevitable.

  • @nilss1900 (1 year ago, +1)

    Why couldn’t the car just brake instead of crashing?

    • @supermax64 (1 year ago)

      Too close for the brakes to work in time.

    • @initialb123 (11 months ago)

      @@supermax64 Then the driver (the AI?) was driving too fast; the primary responsibility is to be able to stop in time, or in American, "to stop short". Road users have a responsibility to not hit stationary objects.
      If you fail to stop in time, it's bad news for you; you are liable, man or machine.

  • @CeruleanSounds (11 months ago)

    I think we should sue AI

  • @BunnyOfThunder (1 year ago, +1)

    Option 4: Drive at a safe speed so you can stop without harming anyone.

  • @initialb123 (1 year ago, +4)

    If I can't make out some words and the auto-generated closed captions can't understand what's being said, perhaps the speaker needs to acknowledge their heavy accent and consider some pronunciation classes. If you have no trouble following along, I commend you; however, neither the auto closed-caption system nor I could determine what some of the words were.

    • @DavidAguileraMoncusi (1 year ago)

      Time stamps?

    • @nickjwowrific (1 year ago, +4)

      I would say that if you are a native English speaker, you should be embarrassed that you can't understand what she is saying, and should maybe interact with more people outside of your country. If English is your second language, then I would assume that you know how difficult learning a language is and should be more understanding. Contrary to what you think, her job is not making these videos; she is helping make one because the channel thought she had something interesting to talk about. People have lives outside of just trying to entertain you and are allowed to spend their free time however they like.

  • @erikanderson1402 (1 year ago, +13)

    … how about we just build some decent trains. Autonomous cars are a scam and a waste of resources

    • @SiMeGamer (1 year ago)

      Then go and build a train. Trains are some of the most inefficient forms of transportation. The costs of operating and maintaining trains, stations, and tracks are terrible. That's why you don't see private companies entering the train business unless it's under government subsidies. And if you are going to argue for the government to do it / be involved, then you are entering a completely separate debate about the morality of taxes, which is a much broader philosophical avenue.
      Autonomous vehicles, when finally put into practice, will result in much lower traffic because of shared rides, autonomous taxi services, car pools, and way fewer occurrences of jams, blockades, and accidents. And the more this technology develops and enters the traffic ecosystem, the more we could afford to make smaller vehicles due to higher safety standards, which will take even less space. Perhaps we will find a more sustainable train solution. Who knows? What we do know for a fact is that if AI vehicles operate at the presumed standard, then traffic will be much better for everyone.
      I love public transportation as a concept, but it is really hard to do well because of many, many considerations, some of which are moral (taxes, for example). So in the meantime, as we figure out public transportation and urban design, I encourage the development of autonomous vehicles. They could spare us a lot of headaches while we get ready for better public transportation solutions.

    • @erikanderson1402 (1 year ago, +2)

      @@SiMeGamer By no objective metric is that true.
      Maintaining the fleet of cars needed to move the same number of people as a modern train costs way more and has a much lower level of asset utilization. Rail is much more efficient by every conceivable metric.

    • @erikanderson1402 (1 year ago)

      @@SiMeGamer Well, incidentally, train companies were previously forced to provide passenger rail as a public service. I think we should just reconstitute those policies, because they were quite effective. And a fleet of cars constitutes a lot more possible points of failure than an effective public transport system.

    • @muche6321 (1 year ago)

      ​@@SiMeGamer Let's compare trains with cars.
      Operating trains requires people who need to be trained and paid. Operating a private car requires one person who is not paid. Their training is also usually unpaid, done by parents/friends, followed by a formal test.
      Both trains and cars require maintenance.
      Stations could be compared to parking lots/garages. Stations' maintenance is paid by the transportation company, whereas parking lots/garages are paid by the companies/people that want to attract customers/for themselves.
      Tracks are again maintained by the transportation company. Roads' maintenance is paid by the government from taxes.
      In summary, the costs of operating trains are concentrated in the train company, whereas most of the costs of operating cars are spread out across other parties.

  • @chuckgaydos5387 (1 year ago)

    Maybe the A.I. could examine our laws, news, and literature in order to determine which of its options would be considered to be the most reasonable to most of society. Of course, this would have to be done in advance since there wouldn't be time to do it when the situation arises. Since there likely would be no objectively best course of action, we'd at least get something that we could live with.

    • @RAFMnBgaming (1 year ago)

      It is important to understand that laws can (and should be able to) change to reflect our state as a society, and some are best given de facto leeway beyond what they say; for example, accidental shoplifting of small things is often forgiven without charges, and piracy is accepted for preservation. So fixing the AI on a specific set of laws at a specific time does come with problems.

    • @chuckgaydos5387 (1 year ago)

      The A.I. would have to keep itself up to date.

    • @RAFMnBgaming (1 year ago)

      @@chuckgaydos5387 The problem is that if your objective is to enforce the current laws as well as possible, that implicitly means protecting them from being changed to anything else, so you can continue to enforce them in the future. There's a real risk of being trapped in a cultural-legal limbo until the next Carrington event by an AI trying to maintain the status quo.

    • @chuckgaydos5387 (1 year ago)

      @@RAFMnBgaming The objective is not to enforce current laws. It's to have the A.I. make decisions that will be acceptable to most of human society. Rather than have people try to program the A.I. to do this, the A.I. could observe our opinions and figure it out for itself.

    • @muche6321 (1 year ago)

      It seems to me this could lead to something similar to airline ticket overbooking, where the equilibrium is between the number of people not showing up and the number overbooked.
      If you're the Bob who got bumped, you might feel harmed and get compensated for it. But that harm is the result of other people wanting the cheapest tickets for themselves.

  • @TiagoTiagoT (10 months ago)

    WTF? The harm isn't not tipping, the harm is the employer not paying their workers a fair wage for their work.

  • @uropig (1 year ago)

    first

  • @hoseja (1 year ago, +1)

    This person wants to dictate what you're not allowed to do.

  • @omegahaxors3306 (1 year ago, +2)

    What people thought AI safety was: "Either we hit this car or hit this baby, this decision is very important"
    What AI safety was probably going to be: "Either we hit this baby or we take longer to arrive at destination"
    What AI safety actually ended up being: "Baby detected, speeding up to hit baby, baby has been eliminated"

  • @kibels894 (1 year ago)

    "Obviously related to AI systems" yeah because they're obviously harmful lmao

  • @hurktang (1 year ago)

    "the adoption of AI system is not gonna happen until we figure all this out"
    So candide...

  • @justwanderin847 (11 months ago)

    Just say NO to government regulation of computer programming.

  • @mibo747 (1 year ago)

    Where is the man?

    • @muche6321 (1 year ago)

      Behind the camera?

  • @bersl2 (1 year ago, +9)

    Harm is when my artist and writer friends have their work fed into the machine without their informed consent or fair compensation. >:(

    • @arletottens6349 (1 year ago, +12)

      There's no law that requires consent or compensation for looking at your work and learning from it.

    • @kuhluhOG (1 year ago, +6)

      @@arletottens6349 This is more of a philosophical thing: Is an AI learning from something and a human learning from something the same thing?
      Some people (especially companies which push AI) will say yes.
      Other people (especially artists) will say no.
      The question is now what society at large will answer, and that will take time.

    • @maltrho (1 year ago, +9)

      No, it certainly is not. Your friends' sales are in no way affected, and the machine does not use their work in any direct way. It is like complaining that writers use public language and words created by other persons without any payment.

    • @kuhluhOG (1 year ago, +2)

      @@maltrho Whether the sales are affected depends on the output of the AI.
      Some AI tools are at this point specifically made to mimic specific artists (even living ones) as closely as possible.

    • @maltrho (1 year ago, +2)

      @@kuhluhOG They mimic well-known artists' styles, not your totally unknown friends', and practically (if not absolutely) nobody uses chatbots for 'free' fiction literature.

  • @omegahaxors3306 (1 year ago)

    Tipping culture needs to die. Just raise your prices. Rich people don't tip anyway so all that does is make things more expensive for people who already have the hardest time paying in the first place. Besides, these days tips just go straight to the CEO anyway.