Intelligence is Social

Opposable thumbs are great, but they’re not what makes us special. Ask just about any social scientist what has allowed human beings to accomplish our most awe-inspiring feats of art, science, and progress, and they’ll likely say something about the importance of society and social interaction. What makes us smart isn’t just our big brains, it’s our ability to collaborate with other big brains.

As such, while we try to produce increasingly smart AI systems, I find it odd that we continue to develop systems without the ability to socialize and collaborate. I’m not talking about ChatGPT responding to your every request like a dutiful servant; I’m talking about systems that can engage with and ask questions of other social beings – humans, yes, but also other AI systems. If we want to create not just smarter but also more aligned AI systems, should we not be giving them experience in the very thing that may define our intelligence? Social is smart.

This is perhaps nowhere more beautifully demonstrated than by the Wason selection task. This logic puzzle was developed by Peter Wason in 1966 as a way of highlighting a systematic flaw in human reasoning. The task works very simply: imagine you had 4 cards in front of you – 3, 8, blue, and red – and were asked which card(s) you must turn over in order to test this hypothesis: if a card shows an even number on its face, the opposite side of the card is blue.

Think you’ve got the answer? If you said the 8 and the blue card, you’re wrong… but you’re in good company. Time and again, when this study is run, around 90% of people get the wrong answer. We pick the 8 and blue because they seem to satisfy the requirements of the request. However, we ignore that the blue card could easily have an odd number on its back and still satisfy our condition. We also ignore that if the red card has an even number on its back, the hypothesis fails. The correct picks are the 8 and the red card – the only two that could possibly falsify the rule.
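For the logically inclined, here’s a tiny sketch (my own illustration, not part of Wason’s study) that brute-forces every possible hidden side and reports which flips could actually falsify the rule – the hypothesis and card faces are exactly the ones above:

```python
# Brute-force the Wason selection task: a flip is only worth making if some
# possible hidden side would falsify the rule "even number -> blue back".
NUMBERS = [3, 8]           # representative odd/even hidden numbers
COLOURS = ["blue", "red"]  # possible hidden colours

def rule_holds(number, colour):
    """The hypothesis: if a card shows an even number, its other side is blue."""
    return number % 2 != 0 or colour == "blue"

def must_flip(face):
    """A flip is informative only if some hidden side could break the rule."""
    if isinstance(face, int):
        return any(not rule_holds(face, c) for c in COLOURS)
    return any(not rule_holds(n, face) for n in NUMBERS)

for face in [3, 8, "blue", "red"]:
    print(face, "-> must flip" if must_flip(face) else "-> irrelevant")
```

Run it and only the 8 and the red card come back as “must flip”: no hidden side of the 3 or the blue card can ever break the rule, which is precisely the asymmetry our intuitions miss.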

The really fascinating part is what comes next. A study roughly 10 years ago by Maciejovsky, Sutter, Budescu, and Bernau replicated the result with isolated participants (90% failure), but found that when people were put into teams of 3, the solve rate suddenly jumped from 10% to over half of participants. Unquestionably, some of that lift comes from the original 10% of keeners; however, the numbers still don’t add up. Many teams whose members all initially failed suddenly passed as a group.

This is because emerging psychological research understands the human brain not as a mechanism for solo rational thought, but as a collaborative tool for group discussion and reasoning. We are emergently smarter when we’re with other people. Intelligence isn’t a solo endeavour so much as a distributed activity. It’s why the best teams for creative and innovative work are often diverse ones: it pays off to have a divergent thinker, a rational thinker, a skeptic, an optimist, and a wildcard in the room together. Their combined intelligence doesn’t just add up to 5x the individual; it allows them to challenge and improve upon each other’s thoughts to find a common ground that satisfies everyone’s view of the world.

So back to the bots: why are we creating isolated systems that, yes, may learn from interactions with you, but are deprived of the ability to interact with, challenge, and reason with you? If this is our greatest intellectual strength as human beings AND the way in which we tend to find harmony and alignment with each other… why are we creating digital loners without a colleague to bounce ideas off – loners who are rapidly becoming the sociopaths we’re all afraid of?

Someone had this idea… and some pretty wild shit happened. Stanford researchers created a miniature RPG world with 25 characters, each controlled by ChatGPT. In their little world, called “Smallville”, each character started to evolve beyond its initially set parameters. Non-programmed behaviours began to emerge: stories socially diffusing around the town, characters developing memory-based relationships, and, weirdest of all, the coordination of a Valentine’s Day party.

If you want to play out Maria and Klaus’ Valentine’s crush in real time, you can check out the Smallville demo. Beyond the adorable flirting of fake humans, however, this experiment highlights an important step forward in AI development. By creating sandboxes for AI systems to explore and learn independently from our own dull human questions, we leave open the possibility for genuinely new behaviours to arise.
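To make the pattern concrete, here’s a heavily simplified sketch of the generative-agents loop: each agent keeps a memory stream and conditions what it says next on those memories. This is a toy illustration under my own assumptions, not the Stanford codebase – the `llm` function is a stub you’d swap for a real model call, and the `Agent` class and personas are invented for the example:

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stub: replace with a call to whatever chat model you have access to."""
    return "..."  # placeholder reply

@dataclass
class Agent:
    name: str
    persona: str
    memories: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Everything the agent sees or hears lands in its memory stream.
        self.memories.append(event)

    def speak(self, other: "Agent") -> str:
        # Condition the next line on persona plus recent memories, so past
        # exchanges (a party invitation, a crush) shape future behaviour.
        recent = "\n".join(self.memories[-10:])
        prompt = (
            f"You are {self.name}. {self.persona}\n"
            f"Recent memories:\n{recent}\n"
            f"Say one line to {other.name}:"
        )
        return llm(prompt)

# Two agents gossiping: information diffuses because both remember the exchange.
speaker = Agent("Maria", "A student planning a Valentine's Day party.")
listener = Agent("Klaus", "A researcher who secretly enjoys parties.")
for _ in range(3):
    line = speaker.speak(listener)
    speaker.observe(f"I said: {line}")
    listener.observe(f"{speaker.name} said: {line}")
    speaker, listener = listener, speaker  # swap roles each turn
```

Even stripped to the bone, the key ingredient is visible: because memories persist across conversations, behaviour can compound into things nobody scripted.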

Yes, we unquestionably need to put guards and protections around these sandboxes. Yes, we need to develop AI systems with their own guardrails and boundaries against speaking to bad human actors (not unlike teaching a child how to deal with stranger danger). However, much like that same child, if we continue to lock our AI creations away from the real world, hamper their ability to be curious, and spoon-feed them only the most banal of human thoughts and activities, what we create can hardly be called ‘intelligent’. Let’s challenge ourselves to think of new modes of AI use that expand the potential and very definition of intelligence instead of locking it to our own limited imaginations.