From remote-controlled sniper rifles to killer drones, weapons today allow soldiers to attack people thousands of kilometres away. Amid the cotton-wool platitudes that usually swaddle artificial intelligence (AI), this isn’t an application you hear about much.
Yet it’s becoming an increasingly sore point for human rights activists, who have dubbed it “assassination intelligence”.
The downside of this technology was brought into sharp relief in August 2021, when the US military was forced to admit that a drone attack in Kabul had gone awry. Rather than killing Islamic State fighters, the strike left 10 civilians dead — seven of them children.
But AI weaponry can also go terribly right.
In November 2020, Israel, presumably with a nod and a wink from the US, assassinated Mohsen Fakhrizadeh, Iran’s top nuclear scientist. Fakhrizadeh was killed by a remote-controlled FN machine gun hidden in an empty vehicle parked on a roadside near Tehran; the person who pulled the trigger did so from thousands of kilometres away via a satellite link.
Death by digital has taken hi-tech warfare into new realms.
In the Fakhrizadeh case, the method of assassination was so novel that intelligence sources could not find a precedent. Drone technology is far more commonplace; even North Korea has fielded drones.
The US began to deploy armed drones in the early 2000s and has used them in Iraq, Libya, Somalia, Pakistan, Yemen and Afghanistan. The Biden administration may be rethinking the value of drone strikes, but there’s no disputing that they have been enormously effective for the US.
According to US historian Heather Cox Richardson, who has tried to tabulate the number of drone strikes, president George W Bush ordered nine between 2004 and 2007. In 2008, his last full year in the White House, Bush ordered 34 such strikes, Richardson says, illustrating a growing reliance on the unmanned weapons.
Under Barack Obama, the trend took off. News website The Daily Beast compiled a list of 186 drone strikes in Obama’s first two years in office.
In all, the Associated Press and the UK’s Bureau of Investigative Journalism noted 1,878 drone strikes during the eight years of Obama’s presidency. Most of these were in Yemen, where the US and Saudi Arabia supported the government of President Abdrabbuh Mansur Hadi in its war against Houthi rebels supported by Iran. This is something of a proxy war between the two regional powerhouses.
During the first two years of Donald Trump’s presidency, the Bureau of Investigative Journalism counted 2,243 drone strikes. Most notably, Trump ordered the drone strike that killed Iranian general Qasem Soleimani at Baghdad airport in January 2020.
Soleimani’s death sparked an outcry. Agnès Callamard, now secretary-general of Amnesty International and at the time the UN special rapporteur on extrajudicial killings, said the attack violated international law because the US had not provided evidence that the general posed an imminent threat. The Trump administration brushed off the criticism.
The drone strike illustrated just how enticingly easy remote assassinations can be — with the added attraction of letting the aggressor nation avoid casualties of its own.
President Joe Biden is reviewing the rules of engagement for drone strikes, which were relaxed by the Trump administration. But the general assumption is that the use of AI to kill at a distance will become far more widespread.
AI assassination in a military context is one thing, but what legal framework applies in civvy street? Here, legal scholars are debating how to deal with death or damage caused by AI systems, according to the MIT Technology Review.
To take a practical example: what happens if a self-driving car, steered by AI, kills a pedestrian? In most jurisdictions, a human driver who did the same would face criminal charges such as culpable homicide or murder.
In the MIT Technology Review, British AI and cybersecurity expert John Kingston examines whether criminal responsibility can attach when AI is involved.
The key question, says Kingston, is whether the programmers of the self-driving car’s technology were aware of the possible consequences of its use. If they foresaw the possibility of an accident, or should have foreseen it, that could create legal liability.
But if criminal liability can apply in such a case, what legal defences are possible?
Kingston suggests at least two: it could be argued that a program malfunction was the equivalent of insanity in a human, or that a computer virus was the equivalent of coercion or intoxication.
And it gets more complex: in the case of a guilty verdict, what would be the punishment?
"Who or what would be punished for an offence for which an AI system was directly liable, and what form would it take?" Kingston asks. For the moment, he says, there are no answers to these questions.
For the Mossad killers and the drone pilots, there is always the anonymity of the bunker; distance helps foster a sense of dislocation from the deaths. But it won’t be so easy for the occupants of a self-driving car to escape the consequences.





