Translated from Chinese. Originally published in Sanlian Life Weekly (2025, Issue 45, pp. 116–117).
When we speak of intelligence, we often blur the line between what is natural and what is artificial. Artificial intelligence comes in many forms; “intelligence develops along a continuum, from partial assistance to full automation.”
In our collective imagination, not all intelligent agents are treated equally.
“By automotive standards, a toaster may already have reached near-full automation. Why does an automatic transmission seem like a boring tool, while an algorithm for drug discovery is described by Henry Kissinger as capable of designing new strategies? Many intelligent objects are never given the chance to be described this way, while some far simpler devices easily capture the public imagination. Savvy marketers know that self-driving cars and smart homes are exciting topics—automatic transmissions and toasters are not.”
Yi Tenen suggests that we need not be astonished by the capabilities of artificial intelligence. At the same time, we must recognize that its intelligence is metaphorical rather than literal. The similarity between human and machine intelligence is analogical; their internal mechanisms are entirely different.
“Sometimes results matter more than process. I don’t care whether it’s a robot—if it looks like a duck and sounds like a duck, then it is a duck. But at other times, process matters more. Then we want the duck to taste like a duck as well.”
To explore how we should understand machines that can read and write—and what this means for human culture—Sanlian Life Weekly spoke with Dennis Yi Tenen.
slw: One of the key ideas in your book concerns metaphors of intelligence. Do you think writing itself can also be understood as a metaphor—an important way for humans and machines alike to understand and engage with the world?
dyt: The etymology of metaphor already implies movement—from one place to another. Writing moves ideas from the mind onto the page, and then from the page into another person’s mind.
We may all understand the word tree, but my understanding might include childhood memories of climbing trees and eating fruit, while your understanding may be tied to entirely different experiences. Still, we use the same word and manage to understand each other to some degree.
Something is always lost in transmission, yet understanding persists. This is a deeply human process: it is not only about finding the right words, but about the recognition and joy that arise when we discover shared experience.
Today, AI can interpret and even generate text with remarkable accuracy in our chat windows. That ability surprises us. But what may be even more surprising is Yi Tenen’s claim in How Machines Learned to Write: A Literary Theory of Artificial Intelligence that humans have been trying to build reading and writing machines for a very long time.
Large language models are “the culmination of a long and mysterious tradition.”
In the thirteenth century, the Majorcan monk Ramon Llull invented one of the earliest chatbots using rotating paper disks. In 1668, John Wilkins proposed a universal writing system. In seventeenth-century Germany, Baroque poets built cabinets of curiosities that could be mechanically rearranged to produce music and poetry.
Machines that read and write are older—and more widespread—than we tend to imagine.
Today, “all human activity passes through computational channels.” People in every profession use machines to read text. In healthcare systems, patient consultations are transcribed and encoded into digital files. These files are translated across systems, deleted, edited, compressed, processed, and mined for missing billable codes. Even sanitation workers use computers that turn waste into data.
From this perspective, we have been using artificial intelligence for centuries. It is increasingly difficult to say whether the intelligence at work in our daily activities is natural or artificial.
As Yi Tenen puts it:
“Machines think, speak, explain, understand, write, and feel—these are all analogies.”
And yet the effects of artificial intelligence are real. AI always operates at the intersection of personal meaning and shared meaning.
slw: You’ve emphasized that machines capable of writing are far older than we usually assume. Do ancient automata reflect humanity’s effort to summarize and simplify its worldview? And do they sacrifice something in the process?
dyt: Think of divination. Many cultures—ancient Greece and China among them—developed combinatorial forms of divination.
Drawing a slip from a basket was a way of extracting personal destiny from a shared reservoir of meaning. Tossing stones or wooden sticks can be seen as a simple algorithm: a way of recombining words to generate unexpected meanings.
Divination always stages a tension between free will—the openness of the future—and fate—the sense that the future is already determined. I see artificial intelligence as a continuation of this tradition. It, too, operates at the intersection of personal meaning and shared meaning.
Technologies that manipulate symbols do not diminish human experience. On the contrary, they are what make us human.
slw: You distinguish between Platonic and Aristotelian models of intelligence—one inward and private, the other outward and public. Does contemporary AI development favor the latter?
dyt: Yes. Intelligence can refer to an inner, mental property. A person in a medical coma may appear motionless from the outside, yet still be thinking intelligently within.
But intelligence can also be defined by external achievement—the ability to complete complex tasks. In that sense, a clever robot that solves difficult puzzles would make Aristotle proud. Yet it has no inner life that truly understands those puzzles. It cannot gather with fellow puzzle enthusiasts, drink wine, converse, and enjoy companionship. That would disappoint Plato.
slw: Many people worry that artificial intelligence will make humans less intelligent. Can better metaphors help reduce its potential harms?
dyt: Let me ask you: do dictionaries make humans smarter or dumber?
Some might say dumber, since without a dictionary we can no longer spell from memory. But that’s absurd. In the modern world, being intelligent requires knowing how to use a dictionary.
Similarly, sewing machines may reduce the skill of individual tailors, but they increase our collective ability to produce clothing. Technology embodies collective intelligence.
We should stop treating intelligence as private property and begin to understand it from a collectivist perspective.
slw: You are not especially worried that AI will cause mass unemployment. Has recent AI development changed your view?
dyt: Every wave of automation in history has disrupted labor markets. We can look to earlier episodes of industrialization to anticipate AI’s impact on intellectual labor.
Consider photography. Twenty years ago, people worried that digital cameras would destroy photography. Traditional photographers did lose jobs. But today, photographic practice has become extraordinarily complex: it involves software, platforms, data centers, analytics, and new social and cultural forms.
Photography did not shrink—it expanded dramatically, creating many new roles and professions.
Intelligence will follow a similar trajectory of growth, transformation, and innovation. It is becoming a more equal and shared enterprise. Our task is to ensure that it remains in the public domain.