The three phases of personification

During my brief stint at Stanford I took a Communications class from Cliff Nass, who was working on what would become The Media Equation. The class was fantastic and Cliff was a great storyteller, but I remember one story in particular about how people react to new technology. We humans, Cliff explained, have always tried to differentiate ourselves from everything else in the world. For the longest time the main competition was animals, so we told ourselves all sorts of stories about how humans were different from animals. For example, humans were special because we could understand time — we had calendars and festival days, knew when to sow and when to harvest, etc. (The fact that animals migrate and show other seasonal behavior we could explain away as simply reacting to the environment — we humans are also good at explaining things away.) So then along comes a new technology that happens to intrude on our understanding of ourselves: the clock. And when that happens we go through three phases:

  1. We personify the technology. So a clock has hands and a face. God is a watchmaker.
  2. We depersonify ourselves. People are “just” clockworks.
  3. We move on and find some new reason that we’re different from both animals and technology.

So we decide what really makes us special is that we’re able to create great works. That holds until the industrial revolution, when we get factories. So first we talk about the “arm of industry”. Then we move on to people being a mere “cog in the machine”, and Henry Ford boasting about how many of his factory’s jobs could be done by workers with just one leg (2,637), with no legs (670), etc. Then we move on and decide that what really makes us special is that we’re tool users. But then we start to discover that animals use tools too, so instead it’s that we can solve complex problems, like chess. Then language. Then emotion. And finally we land on “well, they (computers and/or animals) may look like they have emotions but they don’t really,” which I guess at least has the advantage that it can’t ever be refuted.

I don’t know how historically accurate this account really is — it was just a side anecdote in a class filled with them — but when I look at the latest gushing about ChatGPT and similar large language models I can’t help but think we’re seeing the cycle repeat itself yet again. If so, we know what comes next.