And I, for one, am both wildly excited and extremely terrified. This isn’t simply because of my twisted delusions of supernatural infallibility, or the fact that I still daydream about taking over the world with an army of mechanized minions. Nay, I’m interested in experience design for robots because it gives us the opportunity to essentially be a part of designing and building a new form of intelligence—one that doesn’t have to be bound to the constructs or limitations of humanity. It also opens humanity to an entirely new, more profound, and more human way of interacting with technology, so long as we do it right.
Though not properly coined until the mid-90s, the discipline of UX has been helping us design interactions between humans and technology since basically the dawn of the Industrial Revolution. It has come into prominence in recent years due to the boom of the experience economy and the realization that the thing you build and sell in the world is only as important as the experience it creates for the person using it. Some people will argue that you can’t truly design an experience for someone around a product like a phone or a computer; the best you can do is set the playing field and allow the user to determine their own experience. An experience is essentially an emergent result of the interaction between two social actors, and if one of those actors is an inanimate object, odds are the experience is going to be heavily defined by one side of the relationship.
Robotics has the potential to confront this conundrum head-on since it empowers technology with the ability to be a much more active actor in the social dynamic. However, creating the right experience will require complex design considerations that go well beyond UX. Suddenly, our considerations aren’t simply the size or shape of a button; instead, we are trying to program variations of our own social intent into a disembodied actor that, at some point, we lose control of. Robotic UX will require us to draw on nearly everything that we know about human organization and socialization and channel it into programmed intent.
Underlying this intent, robotic UX will also require deep philosophical and ethical considerations about what it means to essentially create another species that could slowly come to be considered our peer. We will have to confront the fact that, like it or not, we are playing god and birthing a new form of sentience that will take our intent and run with it. Asimov’s three (technically, four) laws will not be enough since human interaction cannot be codified into simple commandments like “a robot may not injure a human being.”
Recently, the British Standards Institution (BSI) unveiled BS 8611:2016 – Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems. The standard covers a lot of ground, from the obvious 5.1.1a, “robots should not be designed solely or primarily to kill or harm humans,” to the somewhat more thoughtful and concerning 5.1.13, “The human potential to be behaviorally conditioned and to become addicted to using the robot should not be negatively exploited.” BS 8611 stands as one of the first and most robust considerations of robotic UX and design since Asimov’s original laws, and while it’s a good start, it is not enough.
The reality is that we’re not building cars or phones anymore. We’re not specifying tolerances, material grades, and human factors dimensions. In a way, we are creating life itself, and this requires more than just a manual. There will be no black-and-white answers, no simple formulae, and only a handful of rules that may have to be broken periodically. Robotic UX should not be a field of study or a process to be learned; it should be a privilege bestowed only upon those who have shown themselves to hold the best of humanity within them and who are willing to think through the layers upon layers of implications involved with their actions and creations.
So while many of us may have fantasized about the role, playing god is not an easy job. For the first time in human history, we are entering an era where we have the ability to assume the mantle of robotic deity. Though we have historically left the doors open for anyone to play roboticist and build a self-driving car, chatbot, or military drone, and will likely continue to do so in the near future, we need to start considering whom we will bestow this level of power upon. Standards and rules may not be enough for the looming redefinition of society, and neglecting this fact will, at best, leave us interacting with a bunch of asshole robots and, at worst, leave us with no society at all.
Still think playing god is fun?