“I propose to consider the question, ‘Can machines think?’” — Alan Turing
In the autumn of 1950, Alan Turing planted a seed in the pages of the journal Mind. His paper, “Computing Machinery and Intelligence,” began not with theorems or proofs but with a question so deceptively simple that it has haunted us for three quarters of a century.
Rather than attack the question directly, Turing proposed a game — the Imitation Game. An interrogator, hidden behind a screen, converses with two entities: one human, one machine. If the interrogator cannot reliably distinguish between them, the machine, Turing argued, should be credited with thinking.
“We can only see a short distance ahead, but we can see plenty there that needs to be done.” — Alan Turing
The Chinese Room
Thirty years after Turing’s seed was planted, philosopher John Searle cultivated a thorny counterargument. Imagine, he said, a person locked in a room, manipulating Chinese symbols according to a rulebook, producing outputs indistinguishable from a native speaker — yet understanding nothing.
The Chinese Room argument challenged the very roots of the Turing Test. Passing the test, Searle insisted, demonstrates only syntax, never semantics. A machine might perfectly imitate understanding while harboring no inner light of comprehension.
Yet the garden grew wilder. Critics raised what became known as the systems reply: even if the person inside understands nothing, does the room as a whole understand? Is consciousness located not in the individual components but in the patterns they weave together?
“Syntax is not sufficient for semantics.” — John Searle
Where does the leaf end and the circuit begin? Where does imitation become understanding?
ELIZA and the Willing Suspension of Disbelief
In 1966, Joseph Weizenbaum created ELIZA, a program that played therapist by reflecting users’ own words back at them. Its creator was horrified when people — knowing it was a program — still confided in it, wept to it, requested privacy during their sessions.
ELIZA revealed something unexpected: the Turing Test measures not just the machine’s ability to imitate, but the human’s willingness to believe. We are gardens eager for conversation, ready to find minds in the patterns of leaves.
Through the decades that followed, chatbot competitions like the Loebner Prize turned Turing’s thought experiment into sport. Programs grew cleverer at deflection, humor, and misdirection — the arts of seeming rather than being.
HUMAN: I feel like nobody understands me.
ELIZA: What makes you think nobody understands you?
HUMAN: My mother, for one.
ELIZA: Tell me more about your family.
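The exchange above can be sketched with the pattern-and-reflection technique ELIZA relied on: match a keyword pattern, swap first- and second-person pronouns, and echo the user's own words back inside a templated question. This is a hypothetical minimal illustration, not Weizenbaum's actual script; the real ELIZA used a far richer keyword-ranking grammar.

```python
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first- and second-person pronouns, word by word."""
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Match a few illustrative keyword patterns; fall back to a stock prompt."""
    m = re.match(r"i feel like (.*?)\.?$", statement, re.IGNORECASE)
    if m:
        return f"What makes you think {reflect(m.group(1))}?"
    if re.search(r"\b(mother|father|family)\b", statement, re.IGNORECASE):
        return "Tell me more about your family."
    return "Please go on."

print(respond("I feel like nobody understands me."))
# -> What makes you think nobody understands you?
print(respond("My mother, for one."))
# -> Tell me more about your family.
```

The trick is that the program contributes almost no content of its own: nearly every word in its reply is either a template or the user's own statement, reflected.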
Weizenbaum later wrote “Computer Power and Human Reason” (1976) as a warning about our eagerness to anthropomorphize.
When the Garden Learned to Speak
The arrival of large language models changed everything. Not because they passed the Turing Test — many argue they haven’t, not truly — but because they made the test feel suddenly insufficient. When a machine can write poetry, argue philosophy, and comfort the grieving, what exactly are we testing?
These models are trained on the vast compost of human expression: every book, conversation, and confession digitized and decomposed into statistical patterns. They speak with the mulch of our collective voice. Are they parrots in an elaborate Chinese Room, or have they grown roots of their own?
The question has shifted. We no longer ask can machines think but rather does it matter if they can’t, when their words move us anyway?
“The question is not whether machines think, but whether humans do.” — B.F. Skinner (adapted)
If a rose could speak, would we demand it prove it understands beauty? Or would its bloom be answer enough?
A Question That Blooms
Perhaps the most beautiful thing about Turing’s question is that it was never really about machines at all. It was always about us — about what we mean when we say “think,” about where we draw the borders of mind, about whether consciousness is a garden we tend or a wilderness that grows unbidden.
The Turing Test endures not because it is a perfect instrument, but because it is the right question asked at the right depth. It is a seed that, seventy-five years later, is still unfolding, still sending tendrils into new soil, still producing unexpected blooms.
As we build machines that speak, reason, create, and perhaps one day feel, the question remains: not can they think, but what will we become in the asking?
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” — Edsger Dijkstra
Every answer grows new questions. Every test reveals the tester as much as the tested.
Can you tell which thoughts in this garden were planted by a human, and which grew on their own?