Coming Soon to a Battlefield: Robots That Can Kill

"So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so. Many officers—schooled for years in the importance of controlling the battlefield—remain deeply skeptical about handing such authority to a robot. Critics, both inside and outside of the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons to decide to kill violates the ethical and legal norms governing the use of force on the battlefield since the horrors of World War II.

With artificial intelligence, Selva said at Brookings, machines can be instructed less directly to “go learn the signature.” Then they can be told, “Once you’ve learned the signature, identify the target.” In those instances, machines aren’t just executing instructions written by others, they are acting on cues they have created themselves, after learning from experience—either their own or others’.

Selva has said that so far, the military has held back from turning killing decisions directly over to intelligent machines. But he has recommended a broad “national debate,” in which the implications of letting machines choose whom and when to kill can be measured.

Systems like Sea Mob aren’t there yet, but they’re laying the groundwork for life-and-death decisions to be made by machines. In the Terminator movies’ dark portrayal, an artificially intelligent military system called SkyNet decides to wipe out humanity. One of the contractors working on Sea Mob has concluded his presentations about the program with a reference to the films: “We’re building SkyNet,” the presentation’s last PowerPoint slide reads, half in jest. But “our job is to make sure the robots don’t kill us.”"
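
The "go learn the signature... identify the target" pattern Selva describes is, at bottom, ordinary supervised machine learning: a model is fit on labeled example readings, then applied to inputs it has never seen. The sketch below (Python, using scikit-learn) is purely illustrative; the feature values, labels, and model choice are hypothetical and have no connection to Sea Mob or any actual military system.

    # Illustrative sketch of the "learn the signature, then identify the
    # target" pattern described above: a supervised classifier is fit on
    # labeled sensor readings, then applied to new readings.
    # All data and names here are hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training data: each row is a sensor reading (e.g.,
    # three normalized features); labels mark which readings match the
    # target signature (1) and which do not (0).
    training_readings = [
        [0.9, 0.1, 0.8],  # matches the signature
        [0.8, 0.2, 0.9],  # matches the signature
        [0.1, 0.9, 0.2],  # does not match
        [0.2, 0.8, 0.1],  # does not match
    ]
    labels = [1, 1, 0, 0]

    # "Go learn the signature": the model induces its own decision rules
    # from the examples rather than following hand-written instructions.
    model = RandomForestClassifier(n_estimators=10, random_state=0)
    model.fit(training_readings, labels)

    # "Identify the target": the learned model classifies a new reading.
    new_reading = [[0.85, 0.15, 0.75]]
    print(model.predict(new_reading))  # -> [1], i.e., flagged as a match

Even in this toy example, the concern the article's critics raise is visible: the rules deciding whether a reading "matches" live inside the trained model rather than in code a programmer wrote line by line, which is precisely what makes such decisions hard to predict or audit.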
