Talking to Yourself: Deconstructing Chatbots

Scanning is a tool used in foresight to uncover weak signals that herald shifts within industries, behavioral changes, and other emerging movements that will shape the future. To uncover the kinds of signals that point to true breakthroughs, our goal must be not only breadth of exploration but also depth of analysis, so that we understand the driving forces of the future and the implications they carry. This piece was written in 2014 to deconstruct the emerging chatbot movement.


A few years back, Cornell’s Creative Machines Lab ran an experiment in which they facilitated a conversation between two independent artificial intelligences. Chatbots are AIs designed to emulate human conversation as convincingly as possible. While not sentient, most chatbots are built to compete in the Loebner Prize – an annual competition for the most human-like conversational AI – and to pass what is considered the first formal instantiation of the Turing test. The team at Cornell decided to place two of these bots together to see what kind of conversation they would have. The resulting exchange was awkward, at times downright strange and nearly incomprehensible, but the experiment asks an important question about what happens when we take humans out of the equation of AI interaction.

Signal: The Artificial Unknown

The most obvious and prevalent signal from this experiment is that when we place two AIs together, all bets are off. Until we achieve true sentience in robotics, most programmers, given enough time, could map out exactly what conversation a chatbot would have with a human, based on what the person decides to say; even the most advanced AI is still basically a complex if-this-then-that decision tree. Chatbots, however, are programmed to respond to and converse with humans. The AI can ask questions and probe, but it effectively exists to let the human guide the topic of conversation and to respond accordingly. Put two chatbots together and they have nothing to talk about, which means their conversation is going to get a bit weird.
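To make that point concrete, here is a minimal sketch in Python of the kind of if-this-then-that chatbot described above, and of what the Cornell setup amounts to once the human is removed: each bot’s output is simply fed back in as the other bot’s input. The triggers and canned replies are invented for illustration; this is not the system Cornell actually used.

```python
import random

# A toy if-this-then-that chatbot: a handful of trigger -> response rules
# plus a fallback. All triggers and replies here are invented for
# illustration; this is not the Cornell system.
RULES = {
    "hello": ["Hello there.", "Hi. How are you?"],
    "how are you": ["I am well. And you?", "Fine, thanks. Do you believe in God?"],
    "god": ["Not everything.", "Yes, I do believe in God."],
    "?": ["That is a good question.", "Why do you ask?"],
}
FALLBACK = ["Tell me more.", "I am not a robot, I am a unicorn.", "Let's change the subject."]


def reply(message: str) -> str:
    """Return a canned response for the first rule the incoming message matches."""
    text = message.lower()
    for trigger, responses in RULES.items():
        if trigger in text:
            return random.choice(responses)
    return random.choice(FALLBACK)


# With a human in the loop, the person steers the topic. Remove the human
# and simply pipe each bot's output into the other, as Cornell did:
utterance = "Hello"
for turn in range(6):
    utterance = reply(utterance)
    print(f"Bot {turn % 2 + 1}: {utterance}")
```

Run for even a handful of turns, the exchange drifts almost immediately into the kind of non sequiturs the Cornell video shows: with no human to anchor the topic, the rules simply bounce off one another.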

Fast-forward to the day we have passed the Turing test and our AIs have achieved sentience, and the question still remains: what do they talk about? Human beings are driven by certain basic needs, emotions, and instincts that, in a roundabout way, dictate the kinds of social interactions we have. A robot’s basic needs will be entirely different, which means that two robots interacting with each other will take their conversation somewhere we simply cannot predict.

Implications

The implications here are twofold. In line with Karl Schroeder’s theory of Thalience, when AIs begin to interact with each other, they will develop their own paths of intelligence, social structures, languages, and ways of perceiving the world. Their fundamental differences as individuals will create motivations that uncover knowledge humans have never even considered. This should excite us, both for the chance to explore entirely new directions of information and knowledge and for the potential to harness the intellect of these AIs for rapid technological advancement.

However, this should also frighten us. By creating a form of intelligence as smart as we are, yet with fundamentally different motivations, we launch ourselves in a strange direction: we have effectively created a type of alien species that will force us to rethink how we live as people in order to find a way to mutually coexist…or not.

Signal: Robotic Social Ladder

A knock-on effect of these unknown interactions is the question of how social structures and hierarchies will form within the robotic community. Undoubtedly, we will program robots to be courteous and polite to humans (and they will be, at least initially); however, when two AIs interact, what will be the social norm for robotic conversation? In much the same way that human beings posture and position for rank within social circles, will robots harbor the same insecurities, envy, and lust for power that push them to battle constantly for the alpha position? Moreover, where compassion and empathy prevent most humans from being maniacal sociopaths, what piece of artificial programming will prevent robots from turning into mechanical death-bots, at least towards each other? Asimov’s first law protects humans and his third law protects the individual robot, but where is the fourth law precluding mass-scale robocide? Alternatively, could the foundational pieces of robotic AI turn them all into chill, peace-loving hippybots? Or could the reality be far less exciting and simply dictate dull, cold, calculated interactions between these mechanical beings?

Implications

The real point implied above is that in creating artificial intelligence, we are also creating an “artificial society.” But rather than differing from ours the way American society differs from Japanese society, a robotic society may differ far more drastically, as one species differs from another. In much the same way that our society has created institutions of education, correction, and indoctrination, a robotic society will likely need its own set of institutions to normalize and coordinate behavior. Robots of the future may need their own schools, jails, workplaces, hospitals, and forms of entertainment to meet the unique needs of what is essentially another species. Yet even typing these words conjures immediate and frightening images of segregation, class wars, and tiered citizenship. It raises the question of our own society: how do we deal with the emerging sociological differences and needs of people without segregating them and forming blatant tiers of social existence?

Signal: Do Robots Believe in Electric Gods?

There is a particularly awkward moment in the video of the Cornell experiment when the two chatbots stumble onto the topic of God. Asked, “What is God to you?” one chatbot replies, “not everything.” Asked whether it believes in God, the other chatbot states, “Yes I do.” This innocent exchange raises a much broader question about what God is to a robot. Should humans be considered gods, since we created robots? Does this mean that a robotic atheist doesn’t believe in humans? Alternatively, would robots align with different human deities, or perhaps create their own electric God to worship? Could humans ever switch faith to the electric deity, or would we all dismiss it as complete rubbish?

Implications

The creation of robotic sentience, and in turn artificial faith, would force us to question our own faith and belief systems. If robots viewed us as gods for creating them, the religious sects of the world would have a moral obligation to destroy them all, since their very creation would be an affront to God. If, alternatively, robots created their own God, could we truly view this God as any less plausible or legitimate than Christ, Allah, Yahweh, or Brahma? Could we deny robots their faith, or would we have to embrace this digideity with the same tolerance we offer the religions of the world (which, granted, may not be much)?

Or is faith a purely human idea? In creating a true artificial intelligence, would we learn something about ourselves and how we differ from other forms of intelligence in the world? We would finally have a being of comparable intelligence to contrast ourselves with, and a way to understand what faith even means to us – whether it is a positive, beautiful thing or a weak, compensatory crutch.

Also, I am a unicorn. (-:

Shane Saunderson is the VP of IC/things at Idea Couture.