Tech

Self-Driving Cars: Ethical robots in disguise

2021/11/18

Self-driving cars make the future so bright we’ll all need tinted windows. No more drunk driving, waiting until you get home to eat soup, or deciding whether to plow into the person who stumbled into the street or swerve away.

The fate of crashes will rest on the morality of your car's AI, not on the driver. Yet the ethical guidelines coded into that AI have to come from somewhere. How can we resolve the 21st (or 22nd) century version of the trolley problem? We used a conjoint experiment to put Americans in the driver's seat and let them choose a self-driving car's trajectory: either flatten some pedestrians or swerve into a roadside object and risk the driver's own death.
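
For the curious, here is a rough sketch of what that kind of forced-choice conjoint setup can look like in code. The attribute names, levels, and effect sizes below are invented for illustration; they are not the study's actual design or estimates.

```python
# Minimal sketch of a forced-choice conjoint design for the self-driving-car
# dilemma. Attributes, levels, and simulated effects are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_scenarios = 5000  # hypothetical: respondents x tasks

# Randomly assign attribute levels to each hypothetical crash scenario.
df = pd.DataFrame({
    "pedestrian": rng.choice(["adult", "child"], n_scenarios),
    "n_in_street": rng.choice([1, 2, 5], n_scenarios),
    "p_driver_death": rng.choice([0.2, 0.5, 0.8, 1.0], n_scenarios),
})

# Simulated outcome: 1 = respondent says the car should swerve (risking the
# driver), 0 = stay the course. Effect sizes are made up for illustration.
logit = 1.0 + 1.2 * (df["pedestrian"] == "child") - 2.5 * (df["p_driver_death"] == 1.0)
df["swerve"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Linear probability model: coefficients approximate the average marginal
# component effect (AMCE) of each attribute on choosing "swerve".
model = smf.ols("swerve ~ C(pedestrian) + C(n_in_street) + C(p_driver_death)", data=df).fit()
print(model.params)
```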

Someone else’s death in a possible crash is a more important consideration than the driver’s own death. As long as there is greater than a 20% chance of the car killing someone who runs into the street, Americans would prefer their car to swerve away, even at the risk of their own death. Only if the probability of their own death is 100% do drivers prefer that a self-driving car continue its trajectory and hit those in the street.
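
If you want to see how that flip point falls out of this kind of data, here is a toy calculation. The marginal means below are placeholders chosen to illustrate the pattern, not the study's estimates.

```python
# Back-of-envelope look at the "flip point": at what probability of the
# driver's own death do respondents stop preferring a swerve?
# Keys: P(driver dies if car swerves); values: hypothetical share choosing "swerve".
share_prefer_swerve = {
    0.2: 0.71,
    0.4: 0.66,
    0.6: 0.60,
    0.8: 0.55,
    1.0: 0.42,
}

# The preference flips at the first level where fewer than half choose to swerve.
flip_point = next(p for p, share in sorted(share_prefer_swerve.items()) if share < 0.5)
print(f"Respondents stop preferring the swerve once P(own death) reaches {flip_point:.0%}")
```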

The number of people in harm’s way matters less than who is in front of the hypothetical, probably-super-futuristic headlights: Americans are more likely to want the car to swerve away from children in the street than from adults. In other words, the safe thing to do to protect yourself in case you stumble into the road while partying on the Vegas Strip is to bring your kids along.

Who would Americans trust to implement the automobile version of Asimov’s Three Laws of Robotics and determine appropriate self-driving etiquette? In short: ¯\_(ツ)_/¯. About one-quarter (26%) don’t know who should decide the ethics of self-driving cars, while another 25% believe it should be up to individual drivers. Now hear us out: to help drivers make those tough decisions, maybe we could give them, like, some sort of wheel they can steer.