
How do self-driving cars decide who dies?


Photo by Jp Valery on Unsplash

I bet that title got your attention, but don’t worry, this isn’t clickbait. I was having a conversation about the future, how technology will take us to places we could never imagine, and the ethics behind each advance. Our focus shifted to autonomous vehicles, since they are something we could start seeing on the roads in the next few years.

As with any form of transport, it is inevitable that even the most highly trained machine learning models are going to have collisions. It’s the brains behind how these cars deal with collisions that I’m most interested in. Here’s a hard-hitting scenario for you: you’re in a self-driving car travelling down a busy street with crowds of people on either pavement. A five-year-old girl steps into the road so close to your car that it doesn’t have enough time to come to a stop. In the oncoming lane is a bus full of people. How should the car react?

So let’s break this situation down into its key points:

  • You’re alone in the passenger seat of this self-driving car with no way to intervene.
  • The five-year-old hasn’t seen the car, so she won’t have time to get out of the way. She is too close for the car to stop before hitting her if it continues on its current path.
  • The pavement is lined with shoppers on a busy Saturday, so mounting the pavement would cause many potential fatalities.
  • The bus is approaching quickly in the oncoming lane. If the car pulls into its path, a crash is inevitable, and at these speeds it’s likely to be fatal for you and some of the passengers on the bus.

In my opinion the outcome should put the fewest people in danger. In the scenario above I added extra detail about the girl, making it harder for you, as an empathetic human with emotions, to choose that outcome. The difference is that the car can’t deviate from its programming, which in this example means choosing whichever action injures the fewest people. It’s important to note that, at the time of writing, no car is fully autonomous, so they aren’t making judgements like this just yet.
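Stripped to its crudest form, a “least harm” rule is just a cost function: score each available manoeuvre by the harm it is expected to cause and pick the minimum. The sketch below is purely a toy illustration of that idea; the option names and casualty estimates are invented for this example and bear no relation to how a real autonomous-vehicle planner works.

```python
# Toy sketch of a "minimise expected harm" decision rule.
# The options and casualty estimates are made up for illustration only;
# no real AV planner is anywhere near this simple.

options = {
    "brake_in_lane":      {"expected_casualties": 1.0},  # hits the girl
    "swerve_to_pavement": {"expected_casualties": 4.0},  # crowd of shoppers
    "swerve_into_bus":    {"expected_casualties": 2.5},  # you + bus passengers
}

def least_harm(options):
    """Return the manoeuvre whose estimated casualty count is lowest."""
    return min(options, key=lambda name: options[name]["expected_casualties"])

print(least_harm(options))  # -> "brake_in_lane"
```

The uncomfortable part isn’t the code, it’s the numbers: someone has to decide how each outcome is scored, and that choice is where the ethics live.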

Right now these ideas are completely hypothetical, but in the coming years we could see car manufacturers getting approval for their autonomous cars to be used on public roads. When that time comes, it all comes down to the measures that developers and ethics boards have put in place. Let’s hope these people don’t have any sociopathic tendencies. However, in this situation are these companies murderers? They’re instructing machines that, under certain circumstances, killing is the correct action.

Is this how Skynet began? Of course I’m joking, but is giving machines the power to make life-and-death decisions ethical? That’s the million-pound question, and not one for me to answer, but I can’t wait to see the outcome.

AI is still a reasonably new technology, and models can often get predictions partially correct or even completely wrong. Manufacturers are going to have to invest a huge amount of time in testing to prevent incidents like the one in 2018, when an autonomous vehicle from Uber killed Elaine Herzberg.

For those of you who don’t know, Elaine Herzberg was killed by Uber’s self-driving car during a test drive. It was the first recorded case of a pedestrian being killed in a collision involving an autonomous vehicle. Many different factors contributed to the incident, but it ultimately could have been prevented. I hope, for the sake of everyone involved, that a lot has been learnt from the collision and that crashes like it are prevented in the future. If you’d like to read more, this Wikipedia article is very informative: https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

I can’t fathom how much data must be processed for these models to even attempt to predict human behaviour. Even humans find it hard to predict the behaviour of their peers.

A few years ago MIT launched a website called Moral Machine, built around the premise that a self-driving car suffers brake failure. It presents 13 questions covering a range of scenarios involving people from different demographics, as well as animals. Once all the questions have been answered, the site shows your results alongside the averages for each moral preference and demographic. It’s a very interesting way to see how your biases weigh up against other people’s. You can find the website here: http://moralmachine.mit.edu/

Iyad Rahwan, a co-author of the Moral Machine study, came across an ethical conundrum in 2016: in surveys, people said they wanted an autonomous vehicle to protect pedestrians even if it meant putting its passengers at risk, but also that they wouldn’t buy a self-driving vehicle programmed to act that way. Would you buy a car that’s been programmed to kill you?

As a lover of all things technology, I’d love to see fully autonomous vehicles on our roads in the next few years, but this can’t come at the expense of others’ safety. What are your thoughts on autonomous vehicles?

