Paul Scharre: Army of None. Autonomous Weapons and the Future of War. ISBN 978-0-393-35658-8 ⭐️⭐️⭐️⭐️⭐️ You don’t read books about military affairs? You should. Weapons research has reached a point where humanity faces moral, legal and technical choices similar to those at the beginning of the nuclear arms race. High-performance sensors, remote platforms like satellites, artificial intelligence, communication networks and real-time transmission of high volumes of data converge and make the fielding of autonomous weapons a realistic option – weapons with little or no human input. Does that correspond to the future we want? Think about it.
We all know about drones circling above the Gaza Strip, Ukraine, Syria or Afghanistan to locate and kill targets. So far there is still a human being in the OODA loop: Observation – Orientation – Decision – Action. An analyst decides whether a target is eligible for a kill within a set legal framework, and an operator pushes the button that fires the lethal missile.
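The division of labour in that loop – machine observes and proposes, human decides – can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the `Track` fields, the confidence threshold and the `analyst_approves` callback are invented for the sketch, not taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A candidate target produced by the Observe and Orient stages."""
    track_id: int
    classification: str        # e.g. "vehicle", "person", "unknown"
    hostile_confidence: float  # 0.0-1.0, from the sensor-fusion layer

def decide(track: Track, analyst_approves) -> str:
    """Decide stage: the machine proposes, the human disposes.

    `analyst_approves` stands in for the analyst who checks the
    proposed target against the legal rules of engagement.
    """
    if track.hostile_confidence < 0.9:
        return "monitor"          # not even proposed for engagement
    if analyst_approves(track):   # human judgment stays in the loop
        return "engage"           # only here is the Act stage reached
    return "abort"

# A cautious analyst who only clears unambiguous vehicle tracks:
cautious = lambda t: t.classification == "vehicle"
print(decide(Track(7, "person", 0.95), cautious))  # -> abort
```

The point of the sketch is structural: however confident the sensors are, the path to "engage" runs through a human callback.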
But what if computers took over these decisions? What if an algorithm decided who is to live and who is to die? After all, computers are unemotional, never tired, never stressed, never sadistic. Objective, in a way. What if a computer released the missile from a long-endurance Unmanned Aerial Vehicle or a seaborne platform? No more costly and dangerous deployments to war theatres far away. It could all be done from a terminal in the US or Europe, or from a ship far off a hostile coast.
Paul Scharre takes us on a journey into the world of software programmers and weapons engineers, of robotics and Artificial Intelligence, to show where we stand in terms of technological progress. From there he explores the moral choices we face and outlines the shape of tomorrow’s wars. This is relevant insofar as all major military powers push research in these fields, and in a world of global political and economic competition, war is always an option – open, offensive as well as undeclared, clandestine wars for some, purely defensive actions for others.
One of the key take-aways is summed up by Brad Tousley, a director at DARPA, the Pentagon’s R&D agency tasked with imagining tomorrow’s game-changing military technologies: “Until the machine processors are equal to or surpass humans at making abstract decisions, there’s always going to be mission command.” This means that human beings will remain in charge when it comes to evaluating options for action the machine may propose. An algorithm may identify a target as legitimate, e.g. a person carrying a rifle in enemy-controlled territory. But only a human will recognize that it is an adolescent guarding his sheep, and understand that pictures of killed children feed the enemy’s propaganda.
For the time being, algorithms seem unable to analyze multi-dimensional contexts the way the human brain does. This may explain why Google, Twitter, Facebook etc. have such trouble finding and removing extremist propaganda from their networks. Context is key, and context is complex when it comes to human behaviour. Our brain, the collective rules that govern our societies, our empathy, our experience – evolution has produced a sophisticated system over thousands of years that technology cannot easily emulate or surpass. However, technology is getting better and better. Drones take off and land by themselves on aircraft carriers. Unmanned ships have put to sea and navigate on their own, and may soon hunt submarines. Automated logistics systems and surveillance platforms are already operational.
Humans make mistakes, no doubt. Usually the consequence of one human error of judgment is limited. But machines make mistakes too, even those with Artificial Intelligence. And if one machine makes a specific mistake, all machines of that type will make the same mistake. And they will repeat the mistake until a human steps in. With autonomous weapons there would be no human to step in. A horrifying scenario!
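The correlated-error argument can be made concrete with a toy sketch. The classifier and its defect are entirely made up; the point is only that identical software fails identically across an entire fleet.

```python
# Toy illustration of correlated machine error: every fielded unit
# runs identical software, so one bug is every unit's bug.
def classify(radar_signature: str) -> str:
    # Hypothetical defect: anything "fast" reads as a missile.
    return "missile" if "fast" in radar_signature else "clear"

fleet = [classify] * 100                   # 100 identical batteries
friendly_jet = "fast low-altitude contact"
verdicts = {unit(friendly_jet) for unit in fleet}
print(verdicts)  # -> {'missile'}: all 100 units err the same way
```

A human crew might make the same misjudgment once; a hundred copies of the same program make it a hundred times, until someone ships a fix.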
The key question for developers is: can we build a piece of technology that fulfills mission requirements with a high level of reliability? Soldiers want weapons they can trust under many different circumstances. Their lives may depend on it. If a certain piece of hard- or software is mission-critical and its reliability is not proven beyond doubt, it is safer to keep a human operator or supervisor in the loop.
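That rule – no autonomous release when reliability is unproven, and a human gate even when it is – might look like the following sketch. The threshold value and function names are illustrative assumptions, not real doctrine or any system described in the book.

```python
def release_weapon(machine_confidence: float, human_authorizes) -> bool:
    """Human-in-the-loop release gate.

    The machine may recommend an engagement, but release always
    requires explicit human authorization; below the proposal
    threshold it does not even make a recommendation.
    """
    PROPOSE_THRESHOLD = 0.99    # illustrative value, not real doctrine
    if machine_confidence < PROPOSE_THRESHOLD:
        return False            # reliability unproven: no recommendation
    return human_authorizes()   # the human remains the final gate

# Usage: high confidence still yields no release if the human declines.
print(release_weapon(0.999, lambda: False))  # -> False
```

Note the asymmetry: the machine can block an engagement on its own, but it can never complete one on its own.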
Not that this will prevent fatal errors. When the US invaded Iraq in 2003, Patriot batteries shot down two allied fighter aircraft. The software did what it was designed to do: track incoming targets and destroy them when authorised by the operator. Man in the loop, working on tested equipment. The software however did not distinguish between Iraqi ballistic missiles and friendly planes. And the operators did not question the information the battery’s sensors fed back to them. Soldiers need to trust their weapons – but not blindly.
In his book Scharre goes to great lengths to point out what technology cannot do yet and what it may be able to do in the future. And he highlights the machines’ vulnerabilities and their inherent shortcomings. Each course of action in developing and fielding autonomous or semi-autonomous weapons needs an ethical evaluation and a consistent set of rules for its operation, itself embedded in a general strategy. This is the point where human input will always remain crucial: man sets the rules.
Technology will do what we let it do. We can decide not to pursue certain types of research – it has happened before with the neutron bomb. We can prohibit the use of certain technologies, as we prohibited the use of (not very smart) anti-personnel mines and biological weapons. But first of all, we, the taxpayers, must know what is possible. We may then ask our politicians to present us with options and cost-benefit analyses. And then we can make an informed political choice. This is why this book is so important. Stay informed not only about politics or climate change, but also about technology. All three factors will shape our future more than ever before.
This said, I recently enjoyed a ride in what Luxembourg calls its “first autonomous bus shuttle”. Point 1: its route is pre-defined by a human. The vehicle transports six passengers from a pedestrian escalator to a railway station and back. Point 2: it has an operator on board who decides when the bus moves and stops. The shuttle’s sensors identify obstacles on their own and force the bus to stop, but the operator gives the go to move on once the obstacle is gone. Point 3: it moves at a slow speed and comes to a stop abruptly. At best we may call it semi-autonomous. And as far as its capacities are concerned, walking from the escalator to the station is smoother and almost as fast. But of course riding the shuttle was a lot more fun!
Napoleon Bonaparte revolutionized military affairs in the fields of training, tactics and grand strategy. His intellectual genius and his daring mindset enabled him to subjugate the European continent, with the exception of Great Britain. He and his troops however failed to beat Russia in 1812. Napoleon’s lines of communication were overextended, and his once successful manoeuvring strategy failed when the enemy retreated further and further into the vast Russian plains. He occupied Moscow only to discover the Russians had set it on fire. Napoleon had to retreat without a decisive victory, and his army, weakened by a harsh winter and a lack of both food and ammunition, was annihilated in rear-guard engagements. Technology wasn’t an issue. Bad human judgment was the problem. Pyotr Tchaikovsky set the events of 1812 to music in an overture of the same name.