Robicide: The Inevitability of Robots Killing People

Recently, three individuals have been killed by robots in drastically different circumstances. While the loss of human life is always tragic, I don’t decry these moments with fearmongering about the dangers of robotics or the folly of man playing God. I look at all of these situations as news headlines we’d better start getting used to, because like it or not, this is only the beginning. Truth be told, we’ve been blowing people up with drones for over a decade, and there are countless incidents of fatalities involving automated factory equipment, so this isn’t even the beginning… it’s the awkward teenage years of robo-induced death.

The first occurred when Joshua Brown, a massive Tesla enthusiast, crashed into a white tractor trailer while operating a Tesla Model S in Autopilot mode. The system could not distinguish the trailer from the bright sky behind it, and the car struck it at near full speed, killing Brown. Tesla’s response was to offer its deepest condolences, then very quickly remind the world that Autopilot is not meant to be operated fully autonomously; the driver must always keep their hands on the wheel and be ready to take control. Still, the crash is an important reminder to stay aware of the limitations of robotics and AI and not become too complacent. Just because we can rely on a robot to do something for us doesn’t mean we should relinquish all control.

Second, Micah Johnson was killed in Dallas by a tactical robot carrying a pound of C4 explosives that police detonated remotely, following a violent gunfight. While this type of action is par for the course overseas, it was the first time it had happened on US soil. Dallas police claim the robot was used as a last resort after Johnson killed five and wounded seven. Lawyers and ethicists are having a field day with this one; technologists like me, however, see the Northrop Grumman tactical robot for what it is: a technological extension of human capability, no different from a sniper scope or a knife.

Finally, a technician in a German Volkswagen production facility was crushed to death by a robotic assembly arm. VW claims the incident was caused by human error, but of course, the internet is going crazy with memes and conspiracy theories about robots killing us all. Though this is a damned shame, the more appropriate word for it is an accident.

Now don’t get me wrong, I’m all for a good bout of robo-fearmongering. However, if we look at all three of these incidents, the important word to focus on is agency. Yes, a robot was the executioner in all three situations, but the roles of judge and jury were left to humans. Whether we chose to keep our hands off the wheel, push the little red button, or accidentally hit go on a control panel, all three situations can ultimately be tied to a human decision. If the robots in question had any agency to take those lives, it was because their human operators gave it to them.

Until a sentient robot stands up, shakes your hand, and crushes you where you stand for no good reason, we as humans need to take responsibility for our own actions and stop blaming a mechanized degree of separation. Whether these deaths came about through reckless use, poor systems design (you don’t come away smelling of roses, Elon), active choice, or operational negligence, blaming the robots themselves is quite possibly the most reckless action of all. And as we continue to give robots greater and greater agency, we will have only ourselves to blame every time we see another headline like this. Robotics, much like any other technological leap, is a tool we choose to use, and we must acknowledge its inherent challenges, shortcomings, and risks when we do.

Robots don’t kill people… people kill people, through the extension of poor design or poor use.

Shane Saunderson is the VP of IC/things at Idea Couture.