Ethics in Autonomous Cars | Josh Pachter | TEDxUniversityofRochester

  • Published Jun 5, 2024
  • University of Rochester Senior Josh Pachter discusses his groundbreaking artificial intelligence ethics research. Pulling from traditional moral problems, Josh surveys the latest in AI development and its application to the burgeoning world of self-driving cars. Josh Pachter was a senior at the University of Rochester studying Computer Science and Philosophy at the time of his talk. His key research interests center around robotics and machine ethics, with a focus on mitigating bias in morally consequential autonomous systems. In the past, Josh has worked at the Wyss Institute for Biologically Inspired Engineering at Harvard University, where he helped create intuitive control systems for a soft robotic glove to assist patients with an impaired ability to grip. He has also done research at the University of Rochester Human-Computer Interaction Lab, where he worked on testing and optimizing a virtual agent that aims to help improve the conversational skills of its users. Following graduation, Josh is working at Amazon as a Software Development Engineer in Seattle, WA before pursuing graduate school in a few years’ time. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx

Comments • 10

  • @SelfDrivingCarsNews • 4 years ago

    Good talk! Hope that ethical considerations will push us forward to a self-driving future!

  • @digitalkoh • 2 years ago +3

    Should the car learn my values or the values of some other standard?
    If I were driving, chances are good that I'd make a decision that favors my life in an emergency.
    There would be consequences, of course, but that's part of being human.
    I wouldn't pay to own a car that would make those types of decisions for me.

  • @maohuawang5471 • 3 years ago +3

    A good talk for the general public, but a bad talk for people who know the subject. The "solution" he proposed is vague, and how can reinforcement learning (RL) be used to solve the trolley problem?

    • @Origamibones • 3 years ago +3

      I don’t think it was a vague solution. What he proposed is that we use the model of reinforcement learning, accept that the cars will make mistakes, and assess them after the fact to determine whether the outcome was good or not. If the car encounters a trolley-problem-like scenario, we let it make a decision, and if we don't like the outcome, we see whether it has other, more preferable solutions. Seeing as the trolley problem has been around for a while and there hasn't been an agreed solution, it's kind of silly to suddenly ask people to make the call. It's also silly to assume that self-driving cars will never crash. It's about reducing the negative impacts on human life over time.
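
      A minimal sketch of the "decide, then assess after the fact" loop described in this reply, assuming a toy epsilon-greedy learner whose only reward is a post-incident human rating. The scenario, action names, and the human_feedback stub are hypothetical stand-ins for illustration, not anything specified in the talk:

      ```python
      import random

      ACTIONS = ["brake_hard", "swerve_left", "swerve_right"]
      q_values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's rated outcome
      counts = {a: 0 for a in ACTIONS}
      EPSILON = 0.1  # small chance of trying a non-preferred action

      def human_feedback(action: str) -> float:
          """Stand-in for an after-the-fact human review scoring the outcome in [-1, 1]."""
          return random.uniform(-1.0, 1.0)  # placeholder; real scores would come from people

      for episode in range(1000):
          # Usually take the best-rated action so far, occasionally explore an alternative.
          if random.random() < EPSILON:
              action = random.choice(ACTIONS)
          else:
              action = max(q_values, key=q_values.get)

          reward = human_feedback(action)  # the assessment happens after the decision
          counts[action] += 1
          # Incremental average: nudge the estimate toward the observed rating.
          q_values[action] += (reward - q_values[action]) / counts[action]

      print(q_values)
      ```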

    • @oliverwan1520 • 2 years ago +1

      @@Origamibones But if the feedback we are providing to facilitate reinforcement learning is the product of human opinion, it will again be subject to human bias and the inevitable variability in ethical values between individuals. Perhaps I'm not understanding his solution correctly, but it seems that providing all the "moral truths" *in advance* (based on the mountains of data provided by the Moral Machine experiment) would at least reduce the variability you'd experience by surveying a small portion of humans for their opinion *after* the crash has occurred. Would be interested in discussing it more, though!
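
      A minimal sketch of the "moral truths in advance" idea raised here, assuming pairwise dilemma votes are aggregated into a preference prior before deployment. The survey rows and option labels imitate Moral Machine-style questions and are invented for illustration:

      ```python
      from collections import Counter

      # Each row: (option the respondent chose, option they rejected).
      survey = [
          ("spare_more_lives", "spare_fewer_lives"),
          ("spare_more_lives", "spare_fewer_lives"),
          ("spare_pedestrians", "spare_passengers"),
          ("spare_passengers", "spare_pedestrians"),
          ("spare_pedestrians", "spare_passengers"),
      ]

      wins = Counter(chosen for chosen, _ in survey)
      appearances = Counter()
      for chosen, rejected in survey:
          appearances[chosen] += 1
          appearances[rejected] += 1

      # Aggregate prior: the fraction of dilemmas in which each option was preferred.
      # Averaging over many respondents dampens (but does not remove) individual bias.
      prior = {option: wins[option] / appearances[option] for option in appearances}
      print(prior)
      ```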

    • @Origamibones • 2 years ago +1

      @@oliverwan1520 I'm actually not advocating for making an ethical decision after the fact; I'm more trying to say that we should figure out what steps can be taken to prevent a car from entering the scenario in the first place. Something like increasing the following distance by 2 meters to prevent the car from needing to swerve, for example... if that makes sense. Sorry, it's been a while since I watched the video, so I'm not sure if I'm totally on point here.

    • @oliverwan1520 • 2 years ago

      @@Origamibones Yeah, that's fair, I see where you're coming from. The video was more about 'what if you were in a really bad situation where someone will die regardless of your decision, and you need to make a choice, based on certain ethical principles, about who you should crash into?' It's a very difficult question to answer, and definitely something worthy of discussion.

    • @Origamibones • 2 years ago

      @@oliverwan1520 Right! I guess my argument is mostly about avoiding the need to make ethical decisions, similar to how jurisdictions write laws that determine how vehicles operate but don't make ethical decisions about who should be saved. It's more about creating rules that balance risk. When big accidents happen, new speed limits or signage are often put up in that area. I guess that's the approach I would take, because we're never going to be able to determine whose life is more valuable. I'm trying to work around having to make those calls. Thanks for the civil discussion; it's rare to find!

  • @pierosanna5016 • 1 year ago +1

    This guy is all over the place. He started talking about the different types of machine learning (and which would be best for self-driving cars), then suddenly jumped to talking about the AlphaGo AI... like what? And now he's talking about the problems with those cars...

  • @leticiasuprovici2344 • 1 year ago

    At the point where you said you can teach robots 🤖 the way you teach kids: the output will never be the same. Come on - a human vs. a machine.