Parlor Games and Chinese Rooms: The AI Debate

Today I want to write a bit about the two sides of the artificial intelligence debate. The bulk of this piece – save some fun thoughts at the end – will be nothing new to those in the know, but there’s a lot of important prerequisite background that, since most of you aren’t huge robotics nerds, you likely won’t have. It is a tale of an epic battle of minds fought by two men, even though one of them was already dead by the time the other was just scratching the surface of his calling. It is a tale of tech-deity – Turing and his Imitation Game – versus philosopher king – Searle and the Chinese Room – fought out by the engineering and philosophical minions of each discipline… often in poorly worded online forums.

The first jab was taken by the father of computer science in 1950. Though Alan Turing wrote his “Computing Machinery and Intelligence” as a brilliant foray into the future of intelligence, he sparked a debate that, striking so close to the human heart and mind, would burn for decades to come. His basic premise was that if we accept that the mind is like a computer, we should eventually be able to create a computer that mimics a mind well enough to fool us into thinking it is a person. He proposed a common human-machine definition of intelligence through a thought experiment built on an old parlor game called the Imitation Game, now better known as the Turing test. In a nutshell, the parlor version has an interrogator ask two hidden players a series of questions to figure out which one is imitating the other. Turing swapped in a machine imitating a human, and his point was that if we cannot distinguish the machine from a human being through questioning alone, it is sufficiently intelligent to pass for human; as long as you can’t tell the difference, who really cares?
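For the programmatically inclined, the shape of the test fits in a few lines. The sketch below is purely illustrative – the interrogator and respondents are hypothetical callables of my own invention, not any real chatbot API – but it shows how little machinery the test itself requires: a text channel, a transcript, and a guess.

```python
import random
from typing import Callable

# A transcript entry: (question, answer on channel 0, answer on channel 1).
Transcript = list[tuple[str, str, str]]

def imitation_game(
    interrogate: Callable[[Transcript], str],    # produces the next question
    guess_machine: Callable[[Transcript], int],  # names channel 0 or 1 as the machine
    human: Callable[[str], str],
    machine: Callable[[str], str],
    rounds: int = 5,
) -> bool:
    """Run one session of the test: the interrogator questions two hidden
    respondents over a text channel, then guesses which one is the machine.
    Returns True if the machine passes, i.e. the interrogator guesses wrong."""
    channels = [human, machine]
    random.shuffle(channels)                     # hide who is behind each channel
    machine_channel = channels.index(machine)
    transcript: Transcript = []
    for _ in range(rounds):
        question = interrogate(transcript)
        transcript.append((question, channels[0](question), channels[1](question)))
    return guess_machine(transcript) != machine_channel
```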

John Searle came flying back with a counter to the notion of “Strong AI” three decades later, first in 1980 and again in his 1990 Scientific American article, “Is the Brain’s Mind a Computer Program?” Searle’s Chinese Room experiment imagines an English-speaking individual in a fully enclosed room who is passed Chinese characters, cross-references them against a manual, and passes back different Chinese characters in response. To an outsider, the room appears to know how to communicate, even though inside, all the individual is doing is following a book. Obviously, Searle likened the characters to communication with people, the individual to the processor, and the manual to the AI code. His conclusion was that a computer passing the Turing test could not be considered intelligent because it lacks intentionality – the property of mental states being directed at or about things in the world. Since the person in the room does not “understand” Chinese and will never be able to learn it while trapped in the room, we can also conclude that in the Turing test, a machine displays no true comprehension or thought: manipulating symbols is not the same as understanding them.
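The room itself is even easier to sketch than the test. Here is a minimal, hypothetical version – the manual is a plain lookup table with entries I made up, and the “operator” does nothing but consult it, which is exactly Searle’s point:

```python
# Searle's manual, reduced to a lookup table. The entries are illustrative;
# a real "manual" would be vastly larger, but no less mechanical.
MANUAL = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是你的朋友。",   # "Who are you?" -> "I am your friend."
}

def room_operator(characters: str) -> str:
    """Match the incoming characters against the manual and pass back the
    prescribed response. No understanding occurs at any step."""
    return MANUAL.get(characters, "对不起，我不明白。")  # "Sorry, I don't understand."

# From outside, the room appears to converse; inside, it's pure symbol shuffling.
print(room_operator("你好吗？"))
```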

We saw this battle come to applied reality in 2014, when Eugene Goostman, a chatbot simulating a 13-year-old Ukrainian boy, was declared the first program to pass the Turing test. Immediately, people felt cheated. They realized the limitation of the test: if we are simply content with being tricked, have we built true intelligence? Many in the AI community called the five-minute test a farce, including Hugh Loebner, creator of the Loebner Prize, whose own competition requires a 25-minute test. Yet at this point, we’re simply arguing over shades of grey – over how long we can design a computer to trick us for.

However, going forward, Searle’s arguments against Turing’s Strong AI afford us an interesting new experiment and a new approach to building AI. Most current AI designs are like the Chinese Room – a series of if-this-then-that responses to inputs. Confined to this, an AI will never learn intelligence, just as the person inside will never learn Chinese. Yet if we take the individual out of the room and let them begin to associate characters and responses with the objects they refer to – essentially showing and teaching them intentionality – that individual should start to learn Chinese. Similarly, the future of AI lies in taking our best chatbots, like Eugene Goostman, and using deep learning algorithms to teach them intentionality, as sketched below.
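What might “taking the individual out of the room” look like in code? Here is a toy sketch – the class name and the simple co-occurrence scheme are my own assumptions, and real deep learning systems ground symbols in far richer ways – of a learner that associates characters with the objects it observes alongside them, rather than consulting a fixed manual:

```python
from collections import defaultdict

class GroundedLearner:
    """Learns symbol meanings from paired observations of symbols and objects –
    a crude stand-in for grounding symbols in the world rather than in a manual."""

    def __init__(self) -> None:
        # counts[symbol][object] = how often they were observed together
        self.counts: dict = defaultdict(lambda: defaultdict(int))

    def observe(self, symbol: str, obj: str) -> None:
        """The 'teacher' presents a symbol alongside the thing it refers to."""
        self.counts[symbol][obj] += 1

    def meaning_of(self, symbol: str) -> str | None:
        """The symbol's working meaning: the object it most often co-occurs with."""
        objects = self.counts.get(symbol)
        return max(objects, key=objects.get) if objects else None

learner = GroundedLearner()
learner.observe("苹果", "apple")   # shown an apple while seeing the symbol
learner.observe("苹果", "apple")
learner.observe("苹果", "table")   # noisy observation
print(learner.meaning_of("苹果"))  # -> "apple": the symbol is now grounded
```

Unlike the room operator above, this learner’s responses change with experience; the association it builds is the seed of what the author calls teaching intentionality.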

The future of artificial intelligence is no longer in code or maths; it is in education. Yes, we must build AI capable of being dynamic and evolving their core structure. But much like IBM’s Watson, the next generation of artificial intelligence will be empowered and created not through brute-force coding, but by realizing that our AIs are infants, and that we must take on the role of parent, teacher, and mentor.

Shane Saunderson is the VP of IC/things at Idea Couture.