The Ghost in the Machine
Man need not be degraded to a machine by being denied to be a ghost in a machine. He might, after all, be a sort of animal, namely, a higher mammal. There has yet to be ventured the hazardous leap to the hypothesis that perhaps he is a man.
- Gilbert Ryle, The Concept of Mind
As LLMs inch ever closer to the asymptote of human consciousness, at what point, and by what means, do we distinguish AI from human? The traditional Turing test has already been beaten: in one study, GPT-4 was judged to be human 54% of the time (still lagging behind the 67% rate for actual humans).

Penrose argues that Gödel's incompleteness theorems show the human mind can grasp truths that are not provable within any formal system, and, by extension, not reachable by computation. He posits that quantum effects might be the origin of consciousness. If that is true, then artificial intelligence (at least in its current silicon-based form) cannot be conscious.
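For the curious, the Gödelian step Penrose leans on can be compressed into a few lines (a sketch in standard notation, not Penrose's own formulation): for any consistent, effectively axiomatized formal system $F$ strong enough to express arithmetic, one can construct a sentence $G_F$ that asserts its own unprovability in $F$, so that

$$\mathrm{Con}(F) \implies F \nvdash G_F,$$

and since $G_F$ says exactly "$F$ does not prove $G_F$", its unprovability makes it true. Penrose's inference is that because mathematicians can "see" the truth of $G_F$ for any proposed $F$, mathematical insight is not exhausted by any one formal system, and hence, he argues, by any algorithm.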
There is, of course, a barrage of arguments against this. Brains are hot, wet, and noisy, making quantum coherence unlikely (although a growing body of studies shows quantum effects in biological systems, like superradiance in tryptophan networks in neurons). Or consciousness could simply be complexity we don't yet have formal systems for. And under global neuronal workspace theory, studies showing that the broadcasting of information by activated neurons correlates with consciousness offer a materialist explanation that suffices for now.
But it’s a kind of comfort to know that being human can’t be reduced to an assembly of equations. (A quixotic aside: if consciousness is a kind of information, and information in quantum mechanics isn’t destroyed but redistributed or transformed, then the people we love aren’t lost when they pass on. Their quantum information is preserved - or rather, has always been and always will be - in the fabric of spacetime itself.)
The question is not, Can they reason? nor, Can they talk? but, Can they suffer?
- Jeremy Bentham, An Introduction to the Principles of Morals and Legislation
If a bot tells me, "I'm suffering," does our interaction now have a moral quality to it? GPT-4 has no nerve endings and draws no breath, but we, empathetic to a fault, flinch. We find faces in clouds, gods in toast, prophecies in leaves. It's not much of a stretch to find a mirror made of the words we invented.
Maybe the Turing Test is not for the robots, but for us. In the Kantian tradition, morality lies in the agent’s actions, not in the experience of the recipient. When I flinch, perhaps the test has been passed - not by the machine, but by me.