Experts roll eyes at Lords' AI in Weapon Systems Committee

Experts in technology law and software clashed with the UK House of Lords this week over whether it was technically possible to hand responsibility for battlefield decisions to AI-driven weapons.

During the AI in Weapon Systems Committee hearing on Thursday, the Lords struggled to draw the experts towards the idea that such delegation might eventually be possible, or might be introduced cautiously, apparently concerned about the UK losing ground in the introduction of AI in warfare.

Lord Houghton of Richmond said the committee wanted to recommend how to progress in some kind of legal framework.

The former Chief of the Defence Staff of the British Armed Forces asked for comment on whether the principles of distinction and proportionality could eventually be discharged autonomously.

Christian Enemark, professor of international relations at the University of Southampton, responded: “Only humans can do discrimination, only humans can do proportionality, and the autonomous discharging by a nonhuman entity is a philosophical nonsense, arguably.”

Lord Houghton replied: “Incrementally, it may be that advances in the technology will advance the envelope under which those sorts of delegations can be made.”

AI ethics expert Laura Nolan, principal software engineer with reliability tooling vendor Stanza Systems, argued that AI-driven weapons making battlefield decisions could not assess the proportionality of a course of action.

“You need to know the anticipated strategic military value of the action and there’s no way that a weapon can know that,” she said. “A weapon is in the field, looking at perhaps some images, some sort of machine learning and perception stuff. It doesn’t know anything. It’s just doing some calculations, which don’t really offer any relation to the military value.”

Nolan added: “Only the commander can know the military value because the military value of a particular attack is not purely based on that conflict, local context on the ground. It’s the broader strategic context. It’s absolutely impossible to ask a weapon on the ground and make that determination.”

Taniel Yusef, visiting researcher at Cambridge University’s Centre for the Study of Existential Risk, said that even the simple algorithms used to classify data points, such as those that might identify targets, could be shown to mistake a cat for a dog, for example.

“When this happens in the field, you will have people on the ground saying these civilians were killed and you’ll have a report by the weapon that feeds back [that] looks at the maths,” she said.

“The maths says it was a target… it was a military base because the math says so and we defer to maths a lot because maths is very specific and … the maths will be right.

“There’s a difference between correct and accurate. There’s a difference between precise and accurate. The maths will be right because it was coded right, but it won’t be right on the ground. And that terrifies me because without a legally binding instrument enshrining that kind of meaningful human control with oversight at the end that’s what we’ll be missing.”

“It’s not technically possible [to make judgements about proportionality] because you can’t know the outcome of a system [until] it has achieved the goal that you’ve coded, and you don’t know how it’s got there.”
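To illustrate the distinction Yusef was drawing, here is a minimal sketch (not from the hearing; the classifier, feature values and class names are illustrative assumptions): a toy nearest-centroid classifier whose arithmetic executes exactly as coded, yet which still labels a cat as a dog when the measured features happen to land closer to the “dog” centroid.

```python
# Toy sketch of Yusef's "correct vs accurate" point. All numbers, feature
# meanings and class names are illustrative assumptions, not real targeting code.
from math import dist

# Centroids "learned" from hypothetical training data: two classes described
# by two made-up features (say, size and ear-shape score).
CENTROIDS = {
    "cat": (0.3, 0.7),
    "dog": (0.8, 0.4),
}

def classify(features):
    """Return the nearest-centroid label and its distance.

    The calculation is deterministic and exactly what was coded -- "the maths
    will be right" -- but the label is only as good as the features and the
    training data behind it.
    """
    return min(
        ((name, dist(features, centre)) for name, centre in CENTROIDS.items()),
        key=lambda pair: pair[1],
    )

# A cat photographed at an odd angle yields dog-like features: the classifier
# confidently, and correctly by its own arithmetic, reports "dog".
ground_truth = "cat"
observed_features = (0.75, 0.45)  # hypothetical measurement
predicted, distance = classify(observed_features)

print(f"predicted={predicted!r} (distance {distance:.2f}), ground truth={ground_truth!r}")
# predicted='dog' (distance 0.07), ground truth='cat'
```

Every calculation above is faithful to the code, but the output’s fitness depends entirely on the features and training data, which is the gap Yusef argued a weapon in the field cannot interrogate for itself.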

Conservative peer Lord Sarfraz interjected: “The other day, I saw a dog which I thought was a cat.”

“I assume you didn’t shoot it,” Yusef replied. ®
