Automate with Caution: Learnings from a History of Dehumanization

This article was originally published in my role as a Fellow with the Human Futures Studio.

As a researcher in Human-Robot Interaction, a lecturer in digital transformation, and a historian of all things automated, I am constantly surprised by how consistently organizations fail when deploying automation. I speak broadly here of automation: technologies that execute multi-step processes with minimal human intervention. More colloquially, I’m talking about robots, AI, IoT, chatbots, computerization, assembly lines, and even Jacquard looms. What these things all have in common is that we keep implementing them in the same narrow-sighted and preventable way. We keep thinking that the point of automation is the technology itself, but like all things in this world, what should really matter is how it helps people. But don’t worry… you can avoid this pitfall without much difficulty, so long as you’re willing to do the work.

Automation fails in organizations when it dehumanizes us. It is that simple. However, the ways in which automation has the potential to dehumanize us can be much more complex.

The Luddites were skilled textile workers in the early 19th century whose livelihoods were threatened by the lower-cost, low-skilled labourers that factory owners were able to hire as looms became increasingly automated. This shift in operation devalued not only the pay of textile workers but also the skillset and culture many had spent a lifetime developing. They rioted throughout parts of England and destroyed automation equipment in an attempt to halt the evolution of textile mills and protest the erosion of social orders such as weavers’ guilds. However, their struggle was not with automation itself, but with the deceitful practices of factory owners and the harsh economic conditions following the Napoleonic Wars, which were only further exacerbated by employers seeking less skilled, low-wage labourers. In essence, the Luddites – a term now commonly associated with disdain for technology – were not opposed to technology per se, but to the undermining of their purpose, expertise, and livelihoods.

Though fascination with creating forms of artificial life dates back to the Zhou dynasty in China and the ancient Greeks, the emergence of cybernetics out of World War II rekindled enthusiasm around the idea of thinking machines, though it was met with an equal dose of incredulity. While early cyberneticists dreamed of futures where machines would be indistinguishable from humans, skeptics such as Nehemiah Jordan saw the challenge this created for people: “Man does not live by bread alone. [His] psychological environment is subsumed under one word: motivation… By designing man-machine systems for man to do least we also eliminate all challenge from the job.” Once again, we see an acknowledgement of people’s need for challenge and motivation.

The computerization movement of the 70s and 80s saw ethnographic researchers like Shoshana Zuboff investigating the impacts of changing office and work dynamics. Her findings from speaking to office clerks transitioning from pencil-and-paper to computer-based work were so shocking and abhorrent that she titled the chapter of her book referencing this research “Office Technology as Exile and Integration.” In this chapter, she describes countless stories of clerks feeling purposeless, alienated from their peers, and imprisoned by their new automation. Zuboff acknowledges the short-term efficiencies gained through the transition; however, she quickly points out the massive cost of attrition it created and the subsequent failures and retries of many computerization projects. Like the eras before, Zuboff observes the same lack of consideration and respect for employees’ sense of purpose and humanity.

Today isn’t much different. Each time we cringe at the sound of an automated phone system; grow bored of our narrow, mundane work tasks; become infuriated while setting up a new smart home device; mash our keypad angrily at a failed auto-correct; or attempt to decipher the foreign ramblings of a generative AI, our frustrations are the same. We begrudge having to discard our humanity and act like machines. And the shockingly simple solution to this obvious problem is to better understand people and design our technologies to serve them instead of the other way around.

As Kelley said of automating work, “In so doing, the tasks performed are, in fact, dehumanized. To take away an individual’s freedom to choose how he does his work makes of him an automaton.” Automation becomes a problem when we treat people themselves as cogs within a broader machine and consequently rob them of the sense of purpose they derive from their work. When we dehumanize people, we should not be surprised when they begin to function like machines. And when our machines begin to fail, we’ll be left with no humans to fix them.

Bibliography

Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.

Fitts, P. M. et al. (1951). Some Basic Questions in Designing an Air-Navigation and Traffic-Control System. In Human engineering for an effective air navigation and traffic control system. National Research Council.

Jordan, N. (1963). Allocation of functions between man and machines in automated systems. Journal of Applied Psychology, 47, 161–165.

Kelley, C. R. (1968). The role of man in automatic control processes. In Manual and Automatic Control (pp. 232–25). Wiley.

Sale, K. (1996). Rebels against the future: The Luddites and their war on the Industrial Revolution: lessons for the computer age. Basic Books.

Zuboff, S. (1988). Office Technology as Exile and Integration. In In the Age of the Smart Machine: The Future of Work and Power (pp. 124–173). Basic Books.