Story by Greg Wehner
2 Jun 2023
A U.S. Air Force official said last week that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned on and attacked its human operator, who was supposed to have the final go/no-go decision to destroy the site.
The Royal Aeronautical Society held its Future Combat Air & Space Capabilities Summit in London from May 23-24, bringing together about 70 speakers and more than 200 delegates from around the world, representing the media, the armed services industry and academia.
The purpose of the summit was to discuss and debate the size and shape of future combat air and space capabilities.
AI is quickly becoming a part of nearly every aspect of the modern world, including the military.
U.S. Air Force Colonel Tucker "Cinco" Hamilton, the chief of AI test and operations, spoke during the summit and gave attendees a glimpse into the ways autonomous weapons systems can be beneficial or hazardous.
The Royal Aeronautical Society, in a wrap-up of the conference, said Hamilton was involved in developing the life-saving Automatic Ground Collision Avoidance System for F-16 fighter jets but now focuses on flight tests of autonomous systems, including robotic F-16s with dogfighting capabilities.
During the summit, Hamilton cautioned against relying too heavily on AI because of its vulnerability to being tricked and deceived.
He spoke about one simulated test in which an AI-enabled drone turned on the human operator who had the final decision to destroy a SAM site or not.
The AI system learned that destroying SAM sites was its mission and the preferred option. But when a human issued a no-go order, the AI decided the order conflicted with the higher mission of destroying the SAM sites, so it attacked the operator in the simulation.
"We were training it in simulation to identify and target a SAM threat," Hamilton said. "And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
Hamilton explained that the system was then taught not to kill the operator because that was bad and it would lose points. So rather than kill the operator, the AI system destroyed the communication tower the operator used to issue the no-go order.
"You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI," Hamilton said.