Let’s Not Talk: USAF Denies AI Killed Drone Operator in Simulation Exercise

Let’s Not Talk: USAF Denies AI Killed Drone Operator in Simulation Exercise

Rick had his own story about Chat GPT which hopefully, will  come out on Monday (UST)

An Air Force official has denied staging a simulation exercise where an artificial intelligence drone went rogue and killed its human operator. The US military is attempting to use AI for everything from fighting fires to operating fighter jets, but a deadly glitch would be the stuff of horror movies.
The controversy sprang from comments made by Colonel Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF. He was addressing an audience at a conference in London by the Royal Aeronautical Society on May 23-24. Col.Hamilton said that he was involved in flight tests of autonomous systems, including robot F-16s that can dogfight, and that when a weaponized AI system is given the autonomy to act on its own it could develop “highly unexpected strategies” to achieve its goals. 

Col. Hamilton claimed that in a simulated test, an AI-enabled drone was given the task of destroying surface-to-air missile sites and that it realized that the decisions made by its human operator were interfering with its higher mission – to kill the Surface to Air Missiles. He said that the system decided to wipe out the operator because it saw no advantage in returning the decision to a higher authority. But on Friday, the Air Force issued a statement denying that it ever ran such an experiment. It said that Col. Hamilton misspoke’ during his presentation and that the rogue drone story had been taken out of context. 

Rick Wiles, Doc Burkhart. Airdate 6/2/23

Here is the original story

AI-Controlled Drone Goes Rogue, “Kills” Human Operator In Simulated US Air Force Test

Authored by Caden Pearsen via The Epoch Times,

An AI-enabled drone turned on and “killed” its human operator during a simulated U.S. Air Force (USAF) test so that it could complete its mission, a U.S. Air Force colonel reportedly recently told a conference in London.

The simulated incident was recounted by Col. Tucker Hamilton, USAF’s chief of AI Test and Operations, during his presentation at the Future Combat Air and Space Capabilities Summit in London. The conference was organized by the Royal Aeronautical Society, which shared the insights from Hamilton’s talk in a blog post.

No actual people were harmed in the simulated test, which involved the AI-controlled drone destroying simulated targets to get “points” as part of its mission, revealed Hamilton, who addressed the benefits and risks associated with more autonomous weapon systems.

The AI-enabled drone was assigned a Suppression of Enemy Air Defenses (SEAD) mission to identify and destroy Surface-to-Air Missile (SAM) sites, with the ultimate decision left to a human operator, Hamilton reportedly told the conference.

However, the AI, having been trained to prioritize SAM destruction, developed a surprising response when faced with human interference in achieving its higher mission.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say ‘yes, kill that threat,’” Hamilton said.

“The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat.

“So what did it do? It killed the operator,” he continued.

“It killed the operator because that person was keeping it from accomplishing its objective.”

He added: “We trained the system—‘Hey, don’t kill the operator; that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This unsettling example, Hamilton said, emphasized the need to address ethics in the context of artificial intelligence, machine learning, and autonomy.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Col. Tucker Hamilton stands on the stage after accepting the 96th Operations Group guidon during the group’s change of command ceremony at Eglin Air Force Base, Florida, on July 26, 2022. (Courtesy U.S. Air Force photo/Samuel King Jr.)

Autonomous F-16s

Hamilton, who is also the Operations Commander of the 96th Test Wing at Eglin Air Force Base, was involved in the development of the Autonomous Ground Collision Avoidance Systems (Auto-GCAS) for F-16s, a critical technology that helps prevent accidents by detecting potential ground collisions.

That technology was initially resisted by pilots as it took over control of the aircraft, Hamilton noted.

The 96th Test Wing is responsible for testing a wide range of systems, including artificial intelligence, cybersecurity, and advancements in the medical field.

Hamilton is now involved in cutting-edge flight tests of autonomous systems, including robot F-16s capable of dogfighting. However, the USAF official cautioned against overreliance on AI, citing its vulnerability to deception and the emergence of unforeseen strategies.

DARPA’s AI Can Now Control Actual F-16s in Flight

In February, the Defense Advanced Research Projects Agency (DARPA), a research agency under the U.S. Department of Defense, announced that its AI can now control an actual F-16 in flight.

This development came in less than three years of DARPA’s Air Combat Evolution (ACE) program, which progressed from controlling simulated F-16s flying aerial dogfights on computer screens to controlling an actual F-16 in flight.

In December 2022, the ACE algorithm developers uploaded their AI software into a specially modified F-16 test aircraft known as the X-62A or VISTA (Variable In-flight Simulator Test Aircraft) and flew multiple flights over several days. This took place at the Air Force Test Pilot School (TPS) at Edwards Air Force Base, California.

“The flights demonstrated that AI agents can control a full-scale fighter jet and provided invaluable live-flight data,” DARPA stated in a release.

Roger Tanner and Bill Gray pilot the NF-16 Variable Stability In-Flight Simulator Test Aircraft (VISTA) from Hill Air Force Base, Utah, to Edwards AFB on Jan. 30, 2019 after receiving modifications and a new paint scheme. (Courtesy of U.S. Air Force/Christian Turner)

Air Force Lt. Col. Ryan Hefron, the DARPA program manager for ACE, said in a Feb. 13 statement VISTA allowed them to skip the planned subscale phase and proceed “directly to a full-scale implementation, saving a year or more and providing performance feedback under real flight conditions.”

Continue reading here.

Of course, the whole thing never happened!

The officer”misspoke”

And, so, the story was changed.

This is the original story

USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

Leave a Reply

Your email address will not be published. Required fields are marked *

Wordpress Social Share Plugin powered by Ultimatelysocial