In 1966, Joseph Weizenbaum, a brilliant computer scientist at MIT, published an academic paper on a simple program he had designed, comprising a mere few hundred lines of code. He called the program ELIZA after the famous character in George Bernard Shaw’s Pygmalion. In time he might have chosen a far more sinister literary namesake.
Inspired by person-centered therapy, which uses mirroring, or highly empathic techniques, to draw out patients, ELIZA could perform just one simple task: identify key words used by those interacting with it and then form questions with the same words. So, if someone were to type, “I am lonely,” ELIZA, homing in on “lonely,” might respond, “Why are you lonely?” If the user answered, “Because I have no friends,” ELIZA, in turn, might ask, “Why don’t you have friends?” In other words, the rudimentary program essentially played a game of rhetorical volley, nothing more.
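The mirroring trick is simple enough to sketch in a few lines. What follows is a minimal illustration of ELIZA-style keyword reflection, not Weizenbaum’s actual program (which was written in MAD-SLIP with a far richer rule script); the rules and pronoun swaps here are invented for the example:

```python
import re

# Each rule pairs a pattern that captures a phrase with a question
# template that mirrors the phrase back to the user.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why are you {}?"),
    (re.compile(r"\bi have no (.+)", re.IGNORECASE), "Why don't you have {}?"),
]

# Pronoun swaps so "my friends" reflects back as "your friends".
SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(word.lower(), word) for word in phrase.split())

def respond(statement: str) -> str:
    """Return a mirrored question for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            phrase = reflect(match.group(1)).rstrip(".!")
            return template.format(phrase)
    return "Please tell me more."  # generic fallback, as ELIZA had
```

Feed it the essay’s example exchange and it plays the same rhetorical volley: `respond("I am lonely")` yields “Why are you lonely?”, and `respond("Because I have no friends.")` yields “Why don’t you have friends?” There is no understanding anywhere in the loop, only pattern matching and substitution.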
To Weizenbaum’s amazement, ELIZA became a sensation. Students, including some who had helped him develop the program and understood its simplicity, spent hours using it, even sharing with it the most intimate details of their lives. A few asked for privacy when “interacting” with ELIZA.
Weizenbaum’s pride turned to horror. But it was too late. ELIZA had gone viral. As word of it spread, some experts began to wonder whether advances in artificial intelligence would soon allow ELIZA-like programs to replace human therapists. Weizenbaum, like Dr. Frankenstein, grew to loathe his creation and eventually became a forceful advocate against the excesses of artificial intelligence. He even wrote a book warning against allowing computers to ever make important decisions because they lack compassion and wisdom.
This story, wonderfully told on NPR’s Radiolab, raises fundamental questions about the nature of intelligence. Definitive answers are elusive. But ultimately ELIZA illuminated more about humans than the potential dangers of our silicon-chip replacements, since Weizenbaum’s program functioned as a mirror, reflecting back details its users volunteered, only in question form. An experiment about artificial intelligence, then, actually demonstrated human artificiality.
ELIZA caught on precisely because it leveraged our innate self-absorption. Our favorite topic is ourselves. The depth of our vanity is captured by the joke about the insufferable egotist who, after prattling on endlessly about himself, turns to his dinner companion and says, “Now, I’ve said enough about myself. Let’s hear what you have to say about me.”
Yet our self-involvement has not stopped the march of civilization. In fact, our nature may be “selfish” by design, to invoke evolutionary biologist Richard Dawkins’ famous description of genes’ behavior. Just as the most elemental unit of our biology is oriented toward a singularly greedy goal, self-replication, humans may be inclined toward selfishness for the same evolutionary reasons. That’s how species flourish. Regardless, new technologies have allowed us to express our narcissism in innovative ways.
Perhaps the most conspicuous of these is Facebook. While it may be of no significance that you’re having a ham sandwich for lunch, now you’ve got the means to share that fact with your two hundred “friends.” The announcement may even garner thumbs up. But this sort of “interaction” is of the ELIZA variety; it might be called ELIZAlogue. And like time spent using Weizenbaum’s program, whiling away hours on Facebook may have limited social value, but the experience for many is intensely satisfying.
This highlights one paradox of modern technological know-how: it purports to bring people together but often does the opposite, creating the means by which our selfishness flourishes. Such is the real danger of an ELIZA-addled society. Indeed, it may not be a coincidence that the rise of computer technology over the last few decades has coincided with an ethos celebrating freedom from communal responsibility, reflected in our crass politics.
Of course, if technology is part of the problem, it can also be part of the solution, since the core issue is our innate narcissism, not the tools that exploit it. The many ways that computer technology actually brings people together offer reason for hope. Joseph Weizenbaum would likely disagree, as he saw overreliance on technology as a moral failing, a conviction rooted in his childhood in Nazi Germany.
But then, we are far from living in the world that Weizenbaum feared, one dominated by artificially intelligent robots. That time hasn’t come. We’ll know it has when computers are as narcissistic as we are. When will that be? When our robotic companion says, “Enough about myself. Let’s hear what you have to say about me.”