To think, or not to think, that is the question.
Technological advancements have nearly always come with the promise of making our lives easier, taking a burden off our shoulders, and freeing up more time and mindshare for us to use as we see fit. As the current wave of automation technologies (AI, robotics, IoT, chatbots, big data… pick your favourite) becomes increasingly reliable, robust, and rampant, it delivers on this promise in a big way. However, what if there is a limit to how much we should offload? Is there a level of agency we could relinquish to technology that leaves us vulnerable to a scourge of ignorance?
The typical promise of automation in this day and age is to leverage insight garnered from data big and small to make informed decisions or act on behalf of a consenting user, and in doing so make their life better or easier. You’ve seen this happen a thousand times, possibly without even noticing it, with recommendation engines, traffic algorithms, automated reordering, and personalized content. These services tend to be examples of good automation: we have taken small, often mundane aspects of our lives and handed agency over to technology to make tiny decisions for us and take small actions on our behalf.
The new breed of automation is stepping up its game from mundane tasks into low-level livelihoods and lifestyle alterations. When we were simply relinquishing the decision of which toilet paper to buy, no one really cared to think about Charmin versus Cottonelle or spending an extra 50 cents. However, as we surrender greater and greater agency to automation technologies, with each step we also make the choice to think about one less thing. On the surface, this sounds innocuous enough. Yet as we watch automation dig into more meaningful, personal, and essential aspects of our humanity, do we begin to give up more than simple skills and actually lose pieces of ourselves?
The implications I fear in this scenario stretch far beyond job loss and begin to abolish entire modes of thinking. When your money invests itself, you don’t need to understand personal fiscal management. When your products show up automatically at your door, you don’t need to think about the ethical or environmental aspects of what it took to get them there. When your expenses optimize themselves, you don’t need to think about where the extra dollars are shaved from. When your life simply works, it becomes possible for you to move absently through it, unaware of how it works.
There are three reasons why this has the potential to be so dangerous. First, by not thoroughly considering the increasingly complex issues in our lives that will become automated, we slowly become blind to a host of activities and decisions that impact our lives and those around us. While these decisions will theoretically be made through data-driven representations of our own logic and values, they may not represent us perfectly, may not evolve with our complex lives, and may be vulnerable to third-party influence or violation. Simple errors or shortcomings in learning algorithms could lead to decisions that misalign with our own set of context-sensitive, ever-shifting morals and beliefs. Far more sinister is the potential for organizations to manipulate or bypass your decision algorithms and sneak information and decisions through your personalized automation filters. There is simply no guarantee that your agency is being fulfilled by automation in a way that aligns with your intent.
Second, if and when these systems either fail or require human intervention, we will see fewer and fewer people with the ability to intervene. I don’t foresee a future where automation technologies enslave us for our own good, but I do foresee one where a simple oversight causes a catastrophic chain of events that no human has the requisite skill set to stop. I don’t believe in the malfeasance of machines, but I set my clock to the certainties of human ignorance, stupidity, and corruptibility. By voiding ourselves of certain modes of thinking and decision making, we allow aspects of our minds to atrophy and, in the event we are ever called upon, we may lack the knowledge, experience, or confidence to effectively take action.
Finally, there is a third, subtler, but potentially far darker risk in the continued expansion of automation into our lives. The individuals given priority access to these technologies and the freedom they enable will be those who can afford them. Those with the most clever and powerful technologies will see their productivity and wealth grow exponentially. Versions of these technologies will be available to the common person; however, in lieu of dollars, data will be the price to pay. This will give those in power the ability to know the downtrodden to a frightening degree and to successfully market and sell their lives back to them. It will widen the class divide even further than we are seeing today by automating so much of how our world thinks, how our economies proliferate, and how our people live.
So, what does ‘responsible automation’ look like? Is there such a thing? Being someone who is devoting their life’s work to these technologies, I must believe so.
To address the alignment issue, we must insist on transparency in automation (something the tech industry is currently horrendous at) so that users can understand how decisions are made for them and, when things misalign, make the appropriate changes to ensure their values are well represented. Developers need to build technologies that thoroughly understand your intent and desires, and that communicate why and how they make decisions, so that we can make informed choices about whether and how to use them.
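To make that concrete, here is a minimal sketch of what an auditable decision trail might look like. Everything in it is an illustrative assumption rather than an existing tool: the TransparentAgent and DecisionRecord names, the confidence threshold that pushes uncertain decisions back to the user, and the plain-language rationale field.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated decision, logged in terms its user can audit."""
    action: str        # what the system did on the user's behalf
    inputs: dict       # the data it acted on
    rationale: str     # a plain-language reason, not a raw model dump
    confidence: float  # how sure the system was, from 0.0 to 1.0
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TransparentAgent:
    """An agent that never acts without leaving an explainable trail."""

    def __init__(self, confidence_floor=0.8):
        self.confidence_floor = confidence_floor  # below this, defer to the user
        self.log = []

    def decide(self, action, inputs, rationale, confidence):
        """Record the decision; act only if confident, otherwise surface it."""
        self.log.append(DecisionRecord(action, inputs, rationale, confidence))
        return confidence >= self.confidence_floor  # True means act autonomously

    def explain(self):
        """Replay every decision in plain language so values can be checked."""
        for r in self.log:
            print(f"{r.timestamp:%Y-%m-%d} | {r.action}: {r.rationale} "
                  f"(confidence {r.confidence:.0%})")

# Example: an automated reorder the user can later audit and question.
agent = TransparentAgent()
agent.decide(
    action="reorder toilet paper",
    inputs={"stock": "low", "brand": "usual"},
    rationale="Stock ran low and you reordered this same brand the last three times.",
    confidence=0.92,
)
agent.explain()
```

The design choice worth noting is that explanation is not an afterthought: the agent cannot act without producing a record the user can later replay and correct.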
To address the loss of skills and modes of thinking, we can draw inspiration from human factors in automation. To keep operators in the loop and ready to handle emergencies if they arise, human factors designers typically build systems where even an individual in a supervisory control scenario is periodically given lower-level tasks to diagnose, analyze, and execute. This keeps them practiced in that mode of thinking while still offloading most of their work to the automation. Thus, if anything ever goes wrong, or they need to better understand how the system is working, they have the know-how to step in and take over, granted at a reduced capacity. We should give pause before relinquishing our agency wholesale and ensure that we periodically check in on our own automated decisions to keep ourselves aware and effective.
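As a sketch of that human factors pattern, assuming hypothetical automated_policy and human_policy callables and a made-up handback_rate parameter, the idea might look something like this:

```python
import random

class SupervisoryLoop:
    """Routes most tasks to automation, but periodically hands one back to
    the human operator so their judgment and skills don't atrophy."""

    def __init__(self, automated_policy, human_policy, handback_rate=0.05):
        self.automated_policy = automated_policy  # the system's usual decision-maker
        self.human_policy = human_policy          # asks the operator to decide
        self.handback_rate = handback_rate        # fraction of tasks handed back

    def decide(self, task):
        # A small random sample of tasks goes to the human, keeping them
        # practiced and aware of how the system actually behaves.
        if random.random() < self.handback_rate:
            return self.human_policy(task)
        return self.automated_policy(task)

# Example: on average, one task in twenty is decided by the person.
loop = SupervisoryLoop(
    automated_policy=lambda task: f"auto-handled: {task}",
    human_policy=lambda task: f"flagged for your review: {task}",
    handback_rate=0.05,
)
for task in ["rebalance savings", "renew insurance", "reorder groceries"]:
    print(loop.decide(task))
```

The exact rate matters far less than the principle: the human never fully leaves the loop, so the mode of thinking never fully atrophies.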
To address the class-divide issue, we may have more of an uphill battle. Open-source software, non-profit rights communities like the Electronic Frontier Foundation, and educational institutions are attempting to push forward technologies with socialist intents at their roots; they want to be fair, open, and available to everyone who wants them. However, these actors are a drop in the ocean compared to the development efforts being poured forth by massive corporations like Google, Amazon, Facebook, and Apple. The sad reality is that while we enjoy many of the digital services offered by these companies, they are not simply there for our benefit – these are for-profit companies, designing for-profit tools that gather your data to sell, sell things to you, or both. Until regulatory or social intervention ensures a level playing field, both in access to these automation technologies and in education on what they are and how to use them fairly and effectively, your ability to use a service and not get used by it comes down to knowing what you’re doing or paying someone who does.
Automation of the future could be beautiful. Done properly, we could build an equitable, safe, and prosperous tomorrow where we usher in a golden age of productivity and accomplishment that allows individuals to easily meet their basic needs and pour their passions into achieving so much more. However, automation also has the potential to be ugly. Done improperly, it could spell the digital enslavement of swaths of the human population into an algorithmically enforced cycle of failure, where people are doomed daily to have digital services give the perception of gently holding their hands while forcefully dragging them down a social and economic hole.
Either of these futures is possible, but only if you choose to think about it.
Shane Saunderson plays with robots, minds, and organizations.