We have become comfortable in a world where artificial intelligence (AI) has the voice of a Ted Williams but not his life, where “art” can be generated from prompts, and where much of the music has less of the human in it than the machine. We have accepted that machines can kill just like a human, only more so, and without the repercussions of regret and moral injury. The perfect killer: no propaganda or drugs needed, no education or ideology, just software and engineering. The machine is never unmotivated and does not need to be fed, rested, or cared for. It needs only a power source and ammunition. It can be abandoned, never needs rescuing, and is a potential kamikaze.
In the Gulf War, the invading coalition’s ability to attack multiple targets at the same time was revolutionary. Within a twenty-four-hour window, more objectives were hit than the entire U.S. 8th Air Force managed in 1942–1943. With precision and a deadly tempo perfected by years of Cold War drilling, it was the peak of human warfare: a marriage of technology and personnel into a network of communications, command, control, logistics, and firepower. Eliminating the human factor from such areas removes the need for support and medical infrastructure, rescue teams, and the post-war obligations of veteran care. Machines are more disposable than soldiers, essentially bullets with “brains.” The future will bring a new revolution in warfare that will render past doctrines and investments obsolete.
War is apparently governed by rules, laws, and ethics that determine what is allowable in the conduct of maiming and killing human beings on a large scale. Despite those many rules and laws, atrocities still occur, whether as accidents or exceptions, and they occur often. They are committed by the very nations that use such rules to justify their wars. Setting aside the hypocrisy of military actions that intentionally cost countless innocent lives, human beings make mistakes, and when they do, those mistakes are relatively slow, cumbersome, and isolated. When machines acting in parallel waves, attacking numerous targets with speed and precision, “make mistakes,” the consequences for innocent life can be far greater.
The factors that make such machines attractive—instant communication, coordination, speed, precision, less lag, no hesitation—are the same realities that may exacerbate an atrocity. This also assumes that those wielding such machines adhere to any gentlemanly conduct of war at all. It was already unlikely that terrorists and nation states would concern themselves with minimizing civilian deaths in pursuit of their objectives. But machines will simply kill life, any and all of it, in a given region until they are told to stop.
It can be assumed that robot will fight robot on the battlefield, with automated drones hunting one another down. But there is little to be gained in killing only drones and hardware. The aim of warfare is not just to take and hold ground or control resources but to kill and subjugate populations. Humans have been, and remain, capable of killing recklessly and intentionally despite any claimed ethics or rules of war. Unleashing automated machines will simply make many operations easier.
Ret. Lt. Col. Dave Grossman, in his book On Killing, makes the case that “ten percent of those in combat do eighty percent of the killing.” Not all humans are willing, even in the heat of battle, to kill another person, even an enemy. Soldiers may fire their weapons above the target or merely give the impression of fighting. With full automation this is not a problem: not only will more war fighters be available for battle, but one hundred percent of them will be willing killers, free of any human psychological “fragility” or reluctance.
The human fragility caused by the physical limits of biology, which inspires the ergonomics and architecture of weapon systems, is no longer an issue. Weapons will be faster, smaller, and lighter, with less concern for operator safety or protection. Machines can be “suicidal” and need not return to base, so long as the price point is low enough or logistics allow for re-supply. Air combat will no longer be limited by the human body’s inability to endure G-forces, to say nothing of oxygen, ejection seats, or sophisticated pilot displays. The infantry soldier who suffers in mud, snow, or swamp will no longer need food, shelter, or rest. Automated machines do not sleep; they just kill, observe, and die.
Humans become killers when political and military masters declare entire regions “free fire zones,” meaning everyone inside may be killed. Area bombing or blockades that lead to mass starvation are by their very nature indiscriminate but political. Strategic and philosophical scholars have often rationalized such mass killing as expedient or justified by the “greater good.” The pretence of moral concern that machines might do such things already ignores current events and human history. With the complete outsourcing of war and no flag-draped boxes returning to the homeland, the only time a population and its political elites will be concerned is when it is they who are dying at the hands of such machines. Otherwise it will be a distant event happening to others, mattering only in digital glimpses or when uncensored radicals mention it.
War is already divorced from the conscience and responsibility of the societies that wage it, no matter how many victims or how heinous the conduct. Only when domestic blood and capital are shed in large enough quantities does it matter or become exceptional. The distant innocent, congealed into a blob of statistics, are relegated to “them” or “they.” Those safe at home revel in the indignation that their mighty military does what it must for the greater good, or empire, or revenge, or security, or human rights, or glory itself. With machines doing the killing, there will be no warrior heroes to venerate into martyrdom. The indignation and certainty of a war’s righteousness will remain, in a very voyeuristic and tribal sense.
The belligerents will make no agreement on how disputes shall be settled should one side destroy the other’s machines at scale, or even if two champion mega-robots fight it out à la Robot Jox. The machines will only ever be used to kill, observe, and coerce life. At home and in occupied zones they will be deployed to enforce obedience, censor, monitor, and control under the guise of security. Just as human-operated drones in the past decade have been used to assassinate, “kill lists” will be issued, formulated by algorithms using arbitrary metrics invented by war planners and policy makers. The machines will eventually create their own lists based on experience and information gained from operations. That it is better to kill the innocent in the hope of getting the guilty is a calculation that has been used and will continue to be.
The machines will use all the data made available to them. Human inputs fed into the system, combined with what the machines themselves observe, will form patterns and predictive models that determine whether a human may be a threat, based on assessments and mathematical assumptions. Already a combatant is defined by actions: fighting, holding a weapon, acting as a courier, rendering aid, and in some cases merely existing. With self-replication, the machines eventually won’t be hindered even by the humans who design and build them.
There will be no robot whistleblowers or conscientious objectors, just as there will be no need for military tribunals to prosecute any that disobey or act contrary to standing orders or the rules of war. A My Lai massacre or the Rape of Nanking will not occur; the sadistic rapes and torture of which only human beings are capable don’t happen with automated machines. The killing will be cold, the discrimination between targets itself indiscriminate, based on where and when rather than on armed or unarmed, soldier or civilian. The killing may even be objectively humane, done with maximum efficiency to eliminate suffering. This was also the objective of the Nazis in their death camps. The calculation of how many in a population can be killed before it becomes immoral is a statistic that only machines, and humans with government ideologies, can arrive at. For the machines, the morality is mathematical. For the human beings wielding them, it’s a combination of pragmatism, ethics of convenience, and self-interest.
A terrorist or soldier armed with a rifle is not going away overnight. There won’t be a sudden transition that makes people killing other people, or destroying hardware, obsolete. But the battlefield will change where expensive and traditional weapon systems become too “extravagant,” as the Libertarian Institute’s Bill Buppert would put it, and don’t offer the best “bang for bucks.” The romantic illusions of war will persist but may struggle in an age where drones hunt and kill. It’s unlikely that many human-operated, traditional weapon systems will last much longer except for display purposes. Those who insist they should remain usually come at the problem from financial interest or fondness rather than objectivity. The funding, and the money to be made from certain weapon platforms and programs, obscures their real value on the battlefield.
The Maxim gun often gave imperial powers an edge against the “natives” they fought, and so will full automation, early on. But once those natives had an abundance of AKs, that dominance was lost. Some of the more inventive operators of drones have been low-budget forces, such as the Houthis in Yemen. Electricity and battery power will become a key logistical focus, along with chips, rather than gasoline, food, and medicine. Conscription, or “press-ganging” people to fight, will be less of a concern once such kill-machines become widespread.
If the planners, enablers, cheerleaders, profiteers, and killers see human beings as a means to an end or merely a target, then the transition to automated warfare is a blessing for them, not for their victims. The seduction of killing from a distance has always appealed to those with the most resources; automation provides the perfect distance from blood and risk. Isaac Asimov’s laws of robotics are a fiction, invented out of faith in the morality of masters and engineers. Whether killing is right or wrong has rarely been a restraint on the individual or the elites of society. It won’t be a consideration for machines either. To kill and to die is very much a living reality, and soon non-living machines with “brains” will do both. If the definition of genocide and the calculation of allowable civilian dead have been made before, what will seem so strange when machines do it better, with no risk to the policy makers and the society that unleashes them? The misery of war will be perfect; strangely, that seems invariably human. The machines learn well from their creators.