Reports of an AI drone that ‘killed’ its operator are pure fiction
It has been widely reported that a US Air Force drone went rogue and “killed” its operator in a simulation, sparking fears of an AI uprising – but this simulation never took place. Why are we so quick to believe AI horror stories?
By Matthew Sparkes
2 June 2023
Some AI stories are so bad they would make a robot facepalm
Corona Borealis Studio/Shutterstock
News of an AI-controlled drone “killing” its supervisor jetted around the world this week. In a story that could be ripped from a sci-fi thriller, the hyper-motivated AI had been trained to destroy surface-to-air missiles only with approval from a human overseer – and when denied approval, it turned on its handler.
It is no surprise that the story sounds fictional, because it is. The story emerged from a report by the Royal Aeronautical Society, describing a presentation by US Air Force (USAF) colonel Tucker Hamilton at a recent conference. That report noted the incident was only a simulation, in which there was no real drone and no real risk to any human – a fact missed by many attention-grabbing headlines.
Later, it emerged that even the simulation hadn’t taken place: the USAF issued a denial and the original report was updated to clarify that Hamilton “mis-spoke”. The apocalyptic scenario was nothing but a hypothetical thought experiment.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” a USAF spokesperson told Insider. The USAF did not respond to New Scientist’s request for an interview before publication.
This story is just the latest in a string of dramatic tales told about AI, a discourse that has at points neared hysteria. In March, Time magazine ran a comment piece by researcher Eliezer Yudkowsky in which he said that the most likely result of building a superhumanly smart AI is that “literally everyone on Earth will die”. Elon Musk said in April that AI has the potential to destroy civilisation, while a recent letter from AI researchers said the risk of extinction is so high that dealing with it should be a priority alongside pandemics and nuclear war.
Why do these narratives gain so much traction, and why are we so keen to believe them? “The notion of AI as an existential threat is being promulgated by AI experts, which lends authority to it,” says Joshua Hart at Union College in New York – though it is worth noting that not all AI researchers share this view.