The drone attack on Saudi Arabia’s oil fields (which, despite Donald Trump’s tweets - “PLENTY OF OIL!!!” - has sent crude prices soaring) is a vivid demonstration of how radically warfare has been changed by autonomous technologies. Yet what it also makes clear is that we are only at the beginning of a process which will increasingly put self-guiding machines - and hence the data scientists behind their software - on the front line.
Look at each stage of the strike on Saudi Arabia. It relied on detailed reconnaissance and then analysis of that reconnaissance data, leading to armed action which was, in its own terms, risk-free. All of that is a world away from a few years ago.
When I used to run the obituaries desk on this paper, we would carry tales of people like Wing Commander Gordon Hughes, who “was one of the RAF’s outstanding photographic reconnaissance pilots, flying unarmed Spitfires and Mosquitos deep into enemy territory” during the Second World War.
Taking photos, getting film back, then processing it, delivering it to the right decision makers, and combining it with all the other intelligence coming in, was then a dangerous and lengthy process. Now all of it can be automated, at vastly greater scale, in seconds.
Reconnaissance of the kind Gordon Hughes risked his life to carry out is today performed by arrays of satellites, spy planes and drones, providing millions of hours of footage to the world’s militaries each year.
Of course no human or team of humans can scrutinise all that footage, but no problem: one of the three major leaps forward that the "deep learning" variant of Artificial Intelligence (AI) has made since 2011 is in object recognition (the others being speech recognition and machine translation). Machines can not only collect battlefield footage but also watch it, decide what is suspicious, and then… And then what? Well, they are then entirely capable of firing off weapons and becoming Lethal Autonomous Weapons Systems (Laws).
Take the Israeli Harop system, which sits up there in the sky for hours at a time with a set of criteria for stuff to kill and blow up. Those criteria might be “looks like a Hamas missile emplacement”. But they could be anything.
Indeed, whereas there was once a time gap of days or weeks between Gordon Hughes’ missions and military action based on them, the lag between surveillance and destruction has now been collapsed to the point where it is almost pointless to distinguish between the two.
Which creates another problem. The best people to develop algorithms controlling systems like object recognition are not always to be found in the military. They are often in academia or in the private sector. And these private sector or academic researchers often don't want their work being used in the business of killing people, enemies or not.
For example, in the last two years, Google workers have protested mightily about the company’s participation in Project Maven, which the Pentagon set up precisely to improve battlefield object recognition by drones. Eventually, last summer, Google management said they were pulling out of the project.
It is a hugely significant debate because, if Western researchers get cold feet about military AI, then that may give a strategic advantage to nations whose researchers don’t, or aren’t allowed to. As the former First Sea Lord George Zambellas succinctly put it to me recently: “We are massively disadvantaged [by liberal democracy].” Academics and private-sector AI researchers, he said, thought of themselves as “the conscience of part of our society” but, he said, by withdrawing their labour “many of them have not really thought through the totality of their responsibilities”. He also pointed out that Silicon Valley only exists because of Pentagon investment going back decades, which is true.
What a time, then, here in the UK, for the Alan Turing Institute - our national institute for data science and artificial intelligence - to launch its new defence and national security applied research centre [my emphasis] which will, the Institute says, “enable the UK’s security forces to draw on the very best of academia to achieve high impact solutions to the most pressing challenges in the field. The emphasis will be on delivering useable outputs.”
“High impact solutions”? “Useable outputs”? They sound like euphemisms for killing people and blowing things up. But apparently not. I’ve just interviewed Mark Briers, the Turing's Programme Director for Defence and Security, about the new centre.
“There are three schools of thought [among data scientists],” he says. “People who care passionately about defence; people who are passionately against working for the defence sector; and those people who don’t care either way.”
But, he said, “we don’t want academia to have moral and ethical problems”. So, rather than work on things like Project Maven, Turing's new centre will “focus on social good”.
That means it will use data science to counter modern slavery, and help with humanitarian and disaster relief. When it comes to the military, projects will aim to reveal mental health problems among soldiers, or help with personnel retention, or equipment failure.
All very worthy aims, and significant in their way. After all, government data released last month revealed that the strength of the British military fell for the ninth year in a row. The Army, with 74,400 regular troops, is 7,600 short of the target figure of 82,000, a deficit of over nine per cent. But hardly what one first thinks of in a military context as “high impact solutions” and “useable outputs”.
Briers, who is clearly thoughtful and robust about this issue, acknowledged that “some academics are against working in this domain and that’s fine. [But] we have a social responsibility to ensure that the country’s safe, and the international domain we work in is safe.”
However, he added that “we’ve not been asked to do anything like drone targeting, for as long as I’m in post we won’t be taking on any of those kind of activities. Nothing’s guaranteed in life but there’s no desire on our part and there’s been no ask from the MoD.”
This is vital. For such arguments over AI are only the first step on the treadmill that leads to full automation of command and control systems themselves. So while a drone strike is relatively “risk-free” in that no pilot must risk their life like Gordon Hughes, it is part of an evolution of military automation that could lead to AI command systems launching strike and counter-strike near instantaneously, without the time for generals to pause and make considered decisions, or for red telephones to ring in Oval Offices and de-escalation to occur. Not exactly risk-free then.
It is also worth pointing out that the Turing’s own internal debate is, in another way, slightly academic. Its new defence centre has a budget of £3.5m, compared to the tens of billions spent annually on AI by the world’s private sector tech companies. Indeed, private spending on AI now dwarfs anything Western state militaries can do, even the mighty Pentagon. Which is why governments are now desperate to, and inevitably must, draw on private sector research.
That is ultimately the point of the new Turing AI centre - to liaise between and convene the public, private and academic sectors. It is, in a way, the bleeding edge of military and ethical development. Indeed, the future of the world could depend on it. Oh, and if you fancy being a part of it, you can. The defence and security applied research centre (ARC) is currently recruiting data scientists.