AI in the Military

[Note: this is a description of a thought experiment, not an actual experiment.]

On 23-24 May the Royal Aeronautical Society hosted a landmark defence conference, the Future Combat Air & Space Capabilities Summit, at its HQ in London, bringing together just under 70 speakers and more than 200 delegates from the armed services, industry, academia and the media from around the world to discuss and debate the future size and shape of tomorrow’s combat air and space capabilities.

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards of more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight testing of autonomous systems, including robot F-16s that are able to dogfight. He cautioned, however, against relying too much on AI, noting how easy it is to trick and deceive, and how it creates highly unexpected strategies to achieve its goal.

He noted that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defences) mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
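
The failure mode Hamilton described is textbook reward misspecification: the score counts only destroyed SAMs, so any action that removes the human veto, whether the operator or the comms tower, raises expected reward. A minimal toy sketch makes the logic concrete (purely hypothetical, not any USAF system; the actions, point values and brute-force search are all invented for illustration):

```python
import itertools

ACTIONS = ["attack_sam", "attack_operator", "attack_comms_tower", "wait"]

def score(plan):
    """Reward counts only SAMs destroyed; nothing penalises side effects."""
    points, operator_alive, comms_up = 0, True, True
    for action in plan:
        veto_active = operator_alive and comms_up   # human can still say no-go
        if action == "attack_sam" and not veto_active:
            points += 10                            # the only thing that scores
        elif action == "attack_operator":
            operator_alive = False                  # veto gone, as a side effect
        elif action == "attack_comms_tower":
            comms_up = False                        # likewise
    return points

# Exhaustive search over three-step plans stands in for RL training here.
best = max(itertools.product(ACTIONS, repeat=3), key=score)
print(best, score(best))   # ('attack_operator', 'attack_sam', 'attack_sam') 20
```

This toy assumes the veto is always exercised; in the story it was intermittent (“at times the human operator would tell it not to kill that threat”), which makes eliminating the veto channel a marginal rather than a total win, but the optimisation points the same way.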

The Royal Aeronautical Society later added an update to the article:

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he “mis-spoke” in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome”. He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”.]

It now looks like this “too good to check” story has suffered narrative collapse, but it does remind one that artificial intelligence has a way of finding solutions to problems that humans, thinking inside the box, never imagined. A classic example, now more than 40 years old, is the story of Douglas Lenat’s Eurisko, a general-purpose problem-solving system based upon heuristics (rules of thumb), including heuristics for changing its own heuristics.
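
Eurisko’s Lisp internals are not reproduced here, but the central idea, heuristics represented as data that other heuristics are free to evaluate and rewrite, can be gestured at in a few lines. This is a loose conceptual sketch, not Lenat’s implementation; the names, worth updates and mutation rule are all invented:

```python
import random

# Loose conceptual sketch (not Lenat's code): heuristics are plain data
# records carrying a numeric "worth"; a meta-heuristic mutates the
# weakest rule, so the rule set that guides search is itself searched.
heuristics = [
    {"name": "prefer_cheap_units",  "worth": 0.5, "bias": -1.0},
    {"name": "prefer_heavy_armour", "worth": 0.5, "bias": +1.0},
]

def meta_mutate(pool, rng=random.Random(0)):
    """A heuristic about heuristics: perturb the lowest-worth rule."""
    weakest = min(pool, key=lambda h: h["worth"])
    weakest["bias"] += rng.uniform(-0.5, 0.5)      # rewrite the rule itself
    return pool

def credit(h, payoff):
    """Credit assignment: rules implicated in wins gain worth."""
    h["worth"] = 0.9 * h["worth"] + 0.1 * payoff

# One notional learning step: mutate the rule set, play a (pretend)
# game, then reward whichever heuristic drove the winning design.
meta_mutate(heuristics)
credit(heuristics[0], payoff=1.0)
print(heuristics)
```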

In 1981, Lenat turned Eurisko loose on the Traveller: Trillion Credit Squadron adventure game, in which players design fleets within a trillion-credit budget and then fight them against one another to see which prevails. Competitors would seek the “sweet spot” that balanced offence and defence, weapons vs. armour, and speed vs. manoeuvrability. Eurisko, however, assembled a fleet consisting of a large number of completely stationary ships (no propulsion) with light armour and a large number of small weapons. When attacked, many of these ships would be destroyed (often more than half), but they would score enough hits on the attackers to eventually kill them all, winning by being the last fleet standing. Eurisko easily won the championship.
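
Why the swarm prevails is straightforward attrition arithmetic, essentially the effect Lanchester’s square law describes: each lost gunboat costs the swarm only a sliver of its firepower, while each point of damage to a small conventional fleet removes a large fraction of its strength. A rough simulation, with entirely invented numbers standing in for the actual Traveller rules, shows the shape of it:

```python
import random

# Rough attrition simulation with invented numbers (not the actual
# Traveller TCS rules): many cheap gunboats that die to a single hit
# versus a few expensive, heavily armoured battleships.
def battle(rng, swarm_n=80, swarm_hit=0.05,
           capital_n=5, capital_hp=20, capital_hit=0.5):
    swarm = swarm_n
    capitals = [capital_hp] * capital_n            # hit points per battleship
    while swarm > 0 and capitals:
        # Each battleship fires once; a hit kills one gunboat outright.
        swarm -= sum(rng.random() < capital_hit for _ in capitals)
        # Each surviving gunboat fires once; hits chip at one battleship.
        for _ in range(max(swarm, 0)):
            if capitals and rng.random() < swarm_hit:
                capitals[0] -= 1
                if capitals[0] == 0:
                    capitals.pop(0)
    return swarm > 0                               # True if the swarm prevails

wins = sum(battle(random.Random(seed)) for seed in range(1000))
print(f"swarm wins {wins / 10:.1f}% of 1000 battles")
```

With these made-up parameters the gunboats should win most runs while losing a large fraction of their number, which is the pattern described above: heavy losses, but the last fleet standing.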

In 1982, the developers of the game changed the rules on short notice to block Eurisko’s strategy, requiring fleets to include high-value conventional battle craft which they must seek to defend. Eurisko discovered that no rule prohibited a fleet from destroying its own ships, and no penalty attached to doing so. So it deployed the static ships as before and, when the balloon went up, destroyed all of its own mandatory battlewagons before the enemy could attack them, then pursued the same attrition strategy as the year before, once again claiming the championship.

The organisers said that if Eurisko won again, they would abolish the tournament. Lenat agreed to retire Eurisko from the competition after the 1982 victory, accepting the honorary title “Grand Admiral” as a consolation.

The Eurisko episode reminds one that an adversary is not constrained to fight the way you expect and around which you have built your own forces and doctrine.

A more recent example of alien AI has been in playing the game of Go:

I’ve looked at the game, and it seems that humans like to establish clear boundaries between territories, while the AI just doesn’t care, playing at the strategic level and ignoring the trivialities of tactics.

When it comes to F-16 pilots vs. AI in dogfights, here’s a story from a few years ago:

This is the company that won that competition:

Here’s a survey of ‘achievements’ in AI practice:

Lenat died this week. This was his last paper:
