The image that war oftentimes conjures up is a bloody one. It also is an image that is said to permanently change a person who has witnessed its horrors.
But the age of digital warfare has arrived, and the image of war is becoming increasingly impersonal as “drones” and “lethal autonomy” become the norm.
Drones are undoubtedly changing the face of war. They lessen the need for “boots on the ground.” They take war directly to the enemy. They reduce collateral damage. And they may be legal under international law because they arguably constitute a form of self-defense.
It sounds good…almost too good.
Almost silent and invisible, predators in the sky offer the promise of ridding the world of the lawless who would like to inject chaos into it. Intelligence officials in Langley, Va., can pinpoint an enemy, and armed services personnel located thousands of miles away from the battlefield can then direct joysticks and press buttons that obliterate the “target,” filming the sortie for later analysis.
The Washington Post has also reported on new robotic technologies that may well transform the image of war. For example, “autonomous robotics” may one day allow drones to search for human targets and make identifications based on facial-recognition or other software. Once a match is confirmed, a drone could launch a missile to kill the target. It’s called “lethal autonomy.”
Even if international law sanctions lethal autonomy, is its use moral and ethical?
Yes, lethal autonomy takes war directly to the enemy. Yes, it lessens the need for standing armies and assists in keeping troops out of harm’s way. Yes, it can be effective in ridding the world of heinous criminals.
According to the Washington Post article:
In the future, micro-drones will reconnoiter tunnels and buildings, robotic mules will haul equipment and mobile systems will retrieve the wounded while under fire. Technology will save lives.
However, the most troubling aspect of lethal autonomy is its potential to remove human beings and personal responsibility from the decision-making calculus. Even if the tools of lethal autonomy were directly linked to their human operators, these machines process so much more data than human beings can at any given moment that it may be nearly impossible for armed forces personnel to manage more than one drone or autonomous robot at a time. Then, too, as enemies become increasingly sophisticated about how to do battle with drones and autonomous robots, the amount of time available to make decisions will no doubt shrink, and the new technologies will have to be allowed to operate on their own.
The author of Governing Lethal Behavior in Autonomous Robots, Ronald C. Arkin, told the Washington Post that ethical military drones and robots—capable of using deadly force while programmed to adhere to international humanitarian law and the rules of engagement—can be built. Software would instruct the machines to return fire with proportionality, to minimize collateral damage, to recognize surrender, and, in cases of uncertainty, to maneuver to reassess or wait for humans to assess the situation. In other words, Arkin believes that the rules of warfare that humans understand can be converted into mathematical algorithms for machines to follow on the battlefield.
Who’s to know with certitude?
What is for sure is that making determinations about the legal, moral, and ethical implications of digital warfare, in general, and this technology, in particular, requires a careful and sober assessment.
To read the Washington Post article, click on the following link: