Who would you sacrifice?

self-driving cars Dec 2, 2019

Have you seen this image before?

Many people have talked about it for years. Even today, I still get the question A LOT.

One of the most discussed topics of self-driving cars is ethics. In this image, you can see a self-driving car facing two choices…

  • Run into the barrier and kill the passengers
  • Avoid it, and kill the pedestrians, who happen to be thieves

If these choices were ever coded, they would be based on our own human intuitions, collected through an MIT survey… because the choice is too difficult to make any other way.

If you want my honest opinion, this scenario will never happen.

First, it’s not implemented in any self-driving car today. Second, self-driving car designs make it almost impossible to end up facing these choices. Here is why…

Is it technically feasible? Yes! Classifying thieves versus everyone else is possible… but so is classifying young versus old, rich versus poor, men versus women…

We can make mistakes in the classification… but with enough training, classifiers will converge towards something better than random.

Another question arises.

How can we be sure the car will inevitably run into someone? It might look straightforward in the image, but real-world scenarios aren’t like that. It is really unlikely that we end up with only two choices that both kill someone.

What is more likely is that the car sees an obstacle and brakes… without questioning what is going to happen afterward.

In trajectory generation, we generate multiple candidate trajectories and decide which one has the lowest possible cost.

In this image, multiple trajectories exist, but there is always one optimal choice.
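
To make that selection step concrete, here is a minimal Python sketch. The trajectory format (lists of (x, y) waypoints) and the cost function are placeholders I made up for illustration, not the API of any real planner.

```python
def pick_best_trajectory(candidates, total_cost):
    """Return the candidate trajectory with the lowest total cost."""
    return min(candidates, key=total_cost)


if __name__ == "__main__":
    # Two dummy trajectories as lists of (x, y) waypoints.
    straight = [(0, 0), (1, 0), (2, 0)]
    swerve = [(0, 0), (1, 1), (2, 2)]
    # Dummy cost: penalize lateral deviation from the lane center (y = 0).
    cost = lambda traj: sum(abs(y) for _, y in traj)
    print(pick_best_trajectory([straight, swerve], cost))  # -> the straight one
```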

Before trajectory generation, we build what is called a finite-state machine.

In our “kill the thieves” machine, we have two states (sketched just after this list):

  • Stay in the current lane (and kill the passengers)
  • Change lanes (and kill the thieves)
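
Here is what such a toy machine could look like in code. The state names and the allowed transitions are only an illustration of the idea, not code from an actual stack.

```python
from enum import Enum, auto

class ManeuverState(Enum):
    KEEP_LANE = auto()    # stay in the current lane
    CHANGE_LANE = auto()  # move to the adjacent lane

# For each state, the successor states the planner is allowed to consider.
SUCCESSORS = {
    ManeuverState.KEEP_LANE: [ManeuverState.KEEP_LANE, ManeuverState.CHANGE_LANE],
    ManeuverState.CHANGE_LANE: [ManeuverState.KEEP_LANE, ManeuverState.CHANGE_LANE],
}
```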

To decide between the two, we set up cost functions and pick the state with the lowest cost. We base costs on weighted criteria, like (from most heavily weighted to least)…

  • Feasibility
  • Security
  • Legality
  • Comfort
  • Speed

In other words, it’s more important to make a feasible, secure, and legal move than a fast one.

How do we define the cost for each criterion? For each criterion, we have metrics to feed in. Since it’s a sum of costs, higher metrics should correspond to worse scenarios (a rough sketch follows the list below).

  • Feasibility - What steering angle is required? Angles that are too large are not feasible.
  • Security - What is the distance to the closest obstacle? To the center of the lane? Subtract these numbers from the cost, or invert them: low distances, large costs.
  • Legality - If the move is not legal, simply remove the state without even considering its cost.
  • Comfort - What is the acceleration? Large magnitudes (even for braking) should imply large costs.
  • Speed - What is the maximum speed achievable in the lane, considering obstacles, the law, …? Subtract it from the cost, or invert it, so larger velocities imply lower costs.
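
Here is a rough Python sketch of how such costs could be combined. All the weights, thresholds, field names, and formulas are illustrative assumptions, not values from a real planner; only the ordering of the weights follows the list above.

```python
import math

# Illustrative weights, ordered from most to least important (made-up values).
WEIGHTS = {"feasibility": 10.0, "security": 5.0, "comfort": 1.0, "speed": 0.5}

def feasibility_cost(steering_angle_rad, max_angle_rad=0.6):
    # The larger the required steering angle, the higher the cost;
    # beyond the physical limit, the maneuver is simply not feasible.
    if abs(steering_angle_rad) > max_angle_rad:
        return math.inf
    return abs(steering_angle_rad) / max_angle_rad

def security_cost(dist_to_obstacle_m):
    # The closer the nearest obstacle, the larger the cost (inverse relation);
    # the distance to the lane center could be folded in the same way.
    return 1.0 / max(dist_to_obstacle_m, 0.1)

def comfort_cost(acceleration_ms2):
    # Hard accelerations and hard braking (negative values) both hurt comfort.
    return abs(acceleration_ms2)

def speed_cost(target_speed_ms, speed_limit_ms):
    # Higher achievable speed (up to the limit) means lower cost.
    return 1.0 - min(target_speed_ms, speed_limit_ms) / speed_limit_ms

def total_cost(candidate):
    # Legality is a hard constraint: an illegal move is removed outright.
    if not candidate["is_legal"]:
        return math.inf
    return (WEIGHTS["feasibility"] * feasibility_cost(candidate["steering_angle"])
            + WEIGHTS["security"] * security_cost(candidate["dist_obstacle"])
            + WEIGHTS["comfort"] * comfort_cost(candidate["acceleration"])
            + WEIGHTS["speed"] * speed_cost(candidate["speed"], candidate["speed_limit"]))
```

The `candidate` dictionary keys here are hypothetical; the point is only that each criterion turns a measurable quantity into a cost, and the weighted sum ranks the states.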

When designing all of this, we don’t include whether the obstacles are old or young, thieves or not… Adding those attributes brings nothing to the system.

We design it to always generate legal, feasible trajectories and take the best one. We never design it around the idea that it might run into something, because those options are always thresholded by the cost function.

If the cost gets over a threshold value, we simply remove the state from the machine.

In the moral problem, all costs are very large and therefore thresholded out. There is no “better” choice. The car brakes until a better choice becomes available.
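
A sketch of that thresholding step could look like this, with a made-up threshold value and a simple “brake” fallback when no state survives:

```python
COST_THRESHOLD = 100.0  # made-up value for the sketch

def choose_state(candidates, total_cost, threshold=COST_THRESHOLD):
    """Keep only states below the cost threshold; brake if none survive."""
    scored = [(total_cost(c), c) for c in candidates]
    viable = [(cost, c) for cost, c in scored if cost < threshold]
    if not viable:
        return "BRAKE"  # no acceptable state: slow down and re-plan later
    return min(viable, key=lambda pair: pair[0])[1]
```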

To me, the trolley problem is not a real one…

It is a way to create a buzz around self-driving cars.

📩 If you’re ready to talk about autonomous driving and cutting-edge engineering more seriously, I’m running a daily mailing list where I’ll teach you every day what’s used in the field, and how you can become an engineer working on the future.
It’s happening here!
