A new generation of autonomous weapons or “killer robots” could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned.
Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.
Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons.
Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do “calamitous things that they were not originally programmed for”.
“If you are testing a machine that is making its own decisions about the world around it then it has to be in real time. Besides, how do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents? How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?”
The ability to convert military drones, for instance, into autonomous weapons with no human guidance “is just a software problem these days and one that can be relatively easily solved”, said Nolan.
She said she wanted the Irish government to take a more robust line in supporting a ban on such weapons.
“I am not saying that missile-guided systems or anti-missile defence systems should be banned. They are after all under full human control and someone is ultimately accountable. These autonomous weapons however are an ethical as well as a technological step change in warfare. Very few people are talking about this but if we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities.”