If you’re looking for a headline to capture our technological times, this recent one might fit the bill: “The DIY kid-tracking drone.”
To help keep a close eye on his child, American parent Paul Wallich adapted a quadcopter to follow a GPS chip in his son's backpack to the bus stop every morning: an ingenious outsourcing of parental responsibility, and a formidable piece of hacking to boot. What, though, does this say about the increasing delegation to machines not only of daily tasks, but of potentially life-changing decisions?
Take a life-and-death dilemma that could soon be science fact rather than fiction. As psychology professor Gary Marcus pointed out in a recent piece for The New Yorker, driverless cars are now street-legal in three American states and could soon be cruising into a garage near you. But if your driverless car were hypothetically to encounter a bus full of schoolchildren on the wrong side of the road, how should it react: should it swerve, preventing a collision but putting your life at risk; or should it collide with the bus if that gives you a greater chance of surviving? "If the decision must be made in milliseconds," Marcus reasons, "the computer will have to make the call."
Cars are an interesting case study because they already represent one of the most hazardous technological environments most of us enter on a regular basis – and because the belief that technology should make this environment as safe as possible has been accepted, and been saving lives, for decades. As a famous adage of product design puts it, once you’ve built the car, your task is not simply to hope that you can prevent all accidents, but to design a better car crash – that is, to make those inevitable occasions on which things go wrong as unlikely as possible to cause fatal harm.
Minimising harm is a clear enough good. What happens, though, to ethical issues in design when the product you’re creating is itself going to be making decisions?
Marcus's driverless car scenario riffs on the famous "trolley problem": a thought experiment that asks subjects to decide between pushing one man off a bridge in order to save the lives of several others trapped in the path of a runaway railway trolley, or allowing that one man to survive while the others meet their doom. Most respondents are unwilling physically to push another human being to their death, even if doing so would save multiple lives – one of the factors the test is designed to measure. A machine's response, though, would depend entirely upon the decision-making model encoded by its creators. One robot might refuse to act; one might save the many; one might not even recognise that a choice exists. It's up to us to decide which program to write. But once we've done it and set things in motion, it's up to them.
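To make that point concrete, here is a purely illustrative sketch – in Python, with invented names and numbers, not any real vehicle's software – of how three such decision-making models might be encoded. The machine's "choice" is settled the moment its designers pick which function to ship.

```python
# Purely illustrative: three hypothetical policies a designer might encode.
# The scenario, names and numbers are invented for the sake of the example.
from dataclasses import dataclass

@dataclass
class Dilemma:
    lives_at_risk_if_no_action: int   # e.g. the children on the bus
    lives_at_risk_if_action: int      # e.g. the car's own occupant

def refuse_to_act(d: Dilemma) -> str:
    """Policy 1: never intervene – treats doing harm as worse than allowing it."""
    return "do nothing"

def save_the_many(d: Dilemma) -> str:
    """Policy 2: crude utilitarianism – minimise the number of lives put at risk."""
    if d.lives_at_risk_if_action < d.lives_at_risk_if_no_action:
        return "act"
    return "do nothing"

def no_choice_recognised(d: Dilemma) -> str:
    """Policy 3: the dilemma isn't modelled at all – just follow the normal rules."""
    return "continue on current course"

if __name__ == "__main__":
    dilemma = Dilemma(lives_at_risk_if_no_action=20, lives_at_risk_if_action=1)
    for policy in (refuse_to_act, save_the_many, no_choice_recognised):
        print(policy.__name__, "->", policy(dilemma))
```

Run against the same dilemma, the three programs give three different answers – and whichever one is installed will act without further consultation.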
Programming this into cars is one thing. Weapons are quite another – and represent perhaps the most urgent testing-ground for machine ethics today. Early this December, Iran boasted that it had captured a US ScanEagle drone over its airspace – one of the most basic models among the thousands of unmanned aerial vehicles that have become the standard stuff of warfare over the last decade.
Like Paul Wallich's child-minding quadcopter, most drones are more like sophisticated remote-controlled aircraft than autonomous decision-makers. Late November, though, saw some of the first tests of what some news outlets colourfully labelled a "killer robot": a US Navy stealth drone piloted by artificial intelligence. Snappily named the X-47B Unmanned Combat Air System, the 19m-wide (62ft) aircraft is designed to take split-second decisions on its own initiative – and even land itself on an aircraft carrier – while remaining under the overall control of human operators.