November 27, 2012

Pentagon: A Human Will Always Decide When a Robot Kills You


The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucratese, the Pentagon wants to make sure that there isn’t a circumstance in which one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automates the decision to harm a human being.

The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.

It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding who to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is dipping its toe into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human directions. That’s still a long way from deciding on its own to release its weapons. But this is how a very deadly slippery slope begins.


It’s that sort of thing that worries Human Rights Watch, for instance. Last week, the organization, among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: prohibit the “development, production, and use of fully autonomous weapons through an international legally binding instrument.”

Laudable impulse, wrong solution, writes Matthew Waxman. Waxman, a former Defense Department official for detainee policy, and his co-author Kenneth Anderson observe that technological advancements in robotic weapons autonomy are far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”

Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)

So much for SkyNet. But Carter’s directive blesses the forward march of autonomy in most everything military robots do that can’t kill you. It “[d]oes not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance,” Carter writes.

So, in other words, the Pentagon doesn’t have to build similar safeguards when developing, say, a worm released into the wild to disrupt the industrial controls on a centrifuge system. (SkyNet no; Stuxnet yes?) Nor when the Navy’s carrier drones one day decide to snoop on vast swaths of ocean, with cameras even more powerful than the existing ones that can spy on 36 square miles in the blink of an eye and record the equivalent of 80 years’ worth of video in a single day. Nor when a piece of code starts collecting the computer usage history of a Defense Department employee suspected of being the next Bradley Manning. While everyone’s worried about preventing the Rise of the Machines, the machines are getting a pass to spy on you, under their own power.

2 comments:

  1. "Pentagon: A Human Will Always Decide When a Robot Kills You"----Is that supposed to be comforting? Ask the dead in Iraq, Afghanistan, Libya, Yemen, Somalia Mali, and all the other places our dromes have "acted" with humans at thge switch.

  2. I'm sure we can trust these murdering psychopaths to keep their word.
