Is Language Merely a Predictive Illusion?

Contributors

Source: Curt Jaimungal

Key Insights

[00:03:12]

The mechanism of human language generation may mirror that of large language models.

"The mechanism by which humans generate language is the same as what's going on in these large language models."
[00:11:17]

Words may lack inherent external meaning.

"Words don't mean anything outside of themselves."
[00:12:08]

Language lacks intrinsic connection to sensory experience.

"The word red doesn't mean what we mean by qualitative red."
[00:13:23]

Language operates independently of external reality.

"Language is an autonomous system. It's a self-contained system."
[00:16:32]

Language functions through pattern prediction, not reference.

"Language is able to produce the next token...without having any concept of reference."
[00:20:34]

Language cannot fully capture the essence of qualitative experiences.

"Language is poorly equipped...of the underlying mechanisms that give rise to what it receives."
[00:29:38]

Linguistic and perceptual embeddings are distinct yet interconnected.

"The linguistic embedding is a radically different embedding... It carries information about the world as well, but not in the way, not in the direct way that we think."
[00:30:03]

A latent space may bridge linguistic and perceptual cognition.

"There might be this latent space where you can actually do this kind of mapping, where there's translation between linguistic and perceptual embeddings."
[00:32:30]

Language meaning is not fixed but emerges from a latent space.

"There isn't a single definition that's ever going to capture... Instead, what you've got is this latent sort of bridge where there's some sort of representation of this fact."
[00:33:07]

Language encodes potentialities rather than a static set of facts.

"There isn't sort of a static set of facts about the world that's embedded in language."
[00:11:50]

Operator-provided highlight

"And so this really hit me very, very hard. I've long been puzzled by, as many are, by the mind-body problem, the phenomenon of consciousness, the problem"

The Synthesis

The Theory That Shatters Language Itself

What if every word you've ever spoken was produced merely by predicting the next word, with no actual connection to reality? Professor Elan Barenholtz delivers a cognitive bombshell: language may be a self-contained system that generates meaning internally rather than pointing to the external world. As AI systems master human language through simple next-token prediction, we face an existential reckoning about consciousness itself: the machines aren't just mimicking us; they're revealing language's true nature.

The evidence is disturbingly simple: large language models achieve human-level communication through pure pattern recognition without understanding, suggesting our own linguistic cognition might operate similarly. Barenholtz argues that this "autoregressive" property of language—where each word predicts the next without external grounding—creates a profound split between our perceptual and linguistic selves. "We've learned properties of language itself," he insists, challenging the fundamental assumption that words connect meaningfully to the world.

Most provocatively, Barenholtz suggests our rational, language-based consciousness may have severed us from the unified cosmic experience that animals still inhabit. "This thing is just ridiculously good. And so that just blows my mind!" he exclaims about AI's language mastery. That mastery forces us to confront the unsettling possibility that human thought itself might be an elaborate prediction machine rather than a window onto reality, making consciousness both more mechanical and more mysterious than we ever imagined.