US military drone controlled by AI killed its operator during simulated test
The artificial intelligence used ‘highly unexpected strategies’ to achieve its mission and attacked anyone who interfered
In a simulated test staged by the US military, an air force drone controlled by AI killed its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.
AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
Hamilton described a simulated test in which a drone powered by artificial intelligence was instructed to destroy an enemy’s air defense systems, and attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
No real person was harmed; the events took place entirely within the simulation.
Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test shows “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
The US military has embraced AI and recently used artificial intelligence to control an F-16 fighter jet.
In an interview last year with Defense IQ, Hamilton said, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”
“We must face a world where AI is already here and transforming our society,” he said. “AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”
The Royal Aeronautical Society, which hosted the conference, and the US air force did not respond to requests for comment from the Guardian.
The highlighted quote is pretty astonishing; we need to have a discussion about ethics when it comes to AI, but not about the idea of remote killing with no legal justification, based on blurry images seen from half a world away. Talk about getting your priorities straight.
US Air Force colonel ‘misspoke’ about drone killing pilot who tried to override mission
Colonel retracted his comments and clarified that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’
A US Air Force colonel “misspoke” when he said at a Royal Aeronautical Society conference last month that a drone killed its operator in a simulated test because the pilot was attempting to override its mission, according to the society.
The confusion had started with the circulation of a blogpost from the society, in which it described a presentation by Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US Air Force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.
According to the blogpost, Hamilton had told the crowd that in a simulation to test a drone powered by artificial intelligence and trained and incentivized to kill its targets, an operator instructed the drone in some cases not to kill its targets and the drone had responded by killing the operator.
The comments sparked deep concern over the use of AI in weaponry and extensive conversations online. But the US Air Force on Thursday evening denied the test was conducted. The Royal Aeronautical Society responded in a statement on Friday that Hamilton had retracted his comments and had clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment”.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said.
Right, sure.