There is an important debate to be had about the use of lethal drones and other automated machines in policing, and in general on domestic soil. But that debate ought to be separate from the one currently ongoing about the killing of the suspect in the recent, tragic shooting of five Dallas police officers. The most important issue here isn’t whether Dallas police should have delivered their lethal force remotely, but whether they should have been using lethal force at all. The only real tech issue has to do with the ability of new technologies to fool people (both police officers and news anchors) into thinking that anything meaningful has changed. In reality, the old rules still apply perfectly well.
The superficial tech angle to this case should not obscure the real problem: due process. As Motherboard asks toward the end of this article, can a shooter holed up alone constitute an imminent threat to life? This almost rhetorical question relies on an assumption: that it matters whether the suspect is an imminent threat to life. But the consistent argument from police has been that the key factor was avoidance of threat to the officers who would otherwise have had to storm the room, almost certainly under fire.
If the police had had a sniper of their own positioned nearby, unknown to the suspect and with a clear shot at his entire body, there would not be this seeming uncertainty about the prudent way forward. Even with the suspect having killed several officers, with no hostage or vulnerable person in danger, the police sniper would be obliged, if at all possible, not to aim straight for the head. The priority would still be apprehension, even though a clean kill of a murderous suspect was possible, and even easy.
At the moment these officers decided what to strap to the robot, knowing they would soon be triggering it from behind a robot viewfinder, they were in much the same position as that sniper. Put differently, a well-concealed sniper rifle is no less a tool of “remote” killing in this context. Let’s also remember that a simple helicopter was used as early as 1985, in Philadelphia’s MOVE bombing, to drop explosives on suspects. The event became infamous because it sparked fires nearby and killed several children, but the basic police decision to drop bombs on Americans to end a gun battle was also deeply contentious in its own right. If the helicopter had been remotely controlled, it would have changed the situation little, if at all.
The obvious question is why the robot was not fitted with a non-lethal payload. On the one hand, there have been multiple uses of police robots to immobilize suspects, including with the use of flash-bangs and “chemical munitions” (tear gas, etc.). On the other, this was a hastily cobbled-together solution made from a robot that was not designed to carry an explosive, and an explosive that was not designed to be on a robot. It’s also worth noting that the conventional language being used right now, that the suspect was “holed up,” might not adequately cover the danger he represented to officers at the time he was killed. Police had tried to negotiate with the man, who eventually returned to exchanging fire with police.
There could very well be a reason that, in the moment, police had to proceed with an explosive and not a flash-bang or a chemical grenade. As The Atlantic noted, such non-lethal munitions are common in police departments, while lethal explosives are rare. As of this writing, there’s still no word on exactly what sort of explosive was used or how it interfaced with the robot, but it has been compared to the duct tape innovations of American soldiers in Iraq and Afghanistan. That makes sense, given the immediate need for a solution, but it’s also troubling to type out the statement that the latest police action involved the construction and deployment of a literal IED.
There are aspects of this story that have to do with the use of domestic drones, as well as the access of regular citizens to the sort of weaponry and armor these shooters had, perhaps requiring newly lethal police tactics in return. However, the core issue is not tech but law. Are the police allowed to kill a suspect in a case such as this, regardless of the means of delivery? If the answer is ‘No,’ perhaps we should undertake legal reform to make the answer ‘Yes,’ but even if that’s the case, we need to be clear about what the law is and how we want it to be applied.
It might seem absurd to use a term like “suspect” with respect to a man actively shooting a gun, as though he might be innocent. In recent years, the idea of due process (or at least public due process, also known as “due process”) has all but disappeared from the thinking of the US security apparatus when it comes to non-US citizens outside of the United States. The protections for US citizens abroad have changed as well, though not nearly as much. These changes were only ever applied outside of US borders, however, since they mostly affected parts of the government that had restricted powers on domestic soil.
Even as more and more police departments acquire drones and other military-like technologies, the different legal powers available to the cops, as well as their distinct mission statement, should keep those technologies from being used in the ways we currently see around the world: to eliminate dangerous suspects rather than neutralize them. So the question is pressing: Did this constitute a breach of due process? Were the cops using this drone the way a soldier might, without thinking about whether that was the appropriate thing to do?
Most likely, the real issue going forward will be reluctance to look for misconduct in a police department that is suffering a horrible tragedy and whose officers are being lauded as heroes. It might seem inhumane to open an investigation over a procedural misstep made while under fire, but the issues at stake aren’t frivolous. At the least, there needs to be a discussion about exactly how the police should be thinking about their use of lethal force, as their ability to apply that force becomes far more varied.
The law needs to be stated and understood in such a way that the next time a new police technology arrives, it’s immediately clear how the rules ought to apply to its use. We can’t afford to have a lethal learning period every time the world moves forward.