
Can AI Become Conscious? Bernardo Kastrup's Provocative Challenge

Bernardo, you are well known for opposing the concept of AI consciousness. I'd like to know why.

Contributors

Robert Lawrence Kuhn

@CloserToTruth

Bernardo Kastrup

@bernardokastrup

Source: Closer To Truth

Key Insights

[00:00:11]

Simulation is not equivalent to consciousness.

"I think we mistake a simulation for the thing simulated when it comes to consciousness."
[00:04:38]

Computational processes do not inherently lead to consciousness.

"Once you know down to the individual wire and gate how a computer works, you know that you can do all that in principle with pipes, water, and pressure valves."
[00:06:08]

Analytic idealism posits a universal field of consciousness.

"Under analytic idealism, everything exists in a form of subjectivity that does not have spatial boundaries."
[00:07:46]

Consciousness may be more widespread in biological entities than assumed.

"Nature is showing me that I have good reasons to think that even amoeba have a private conscious point of view of their own."

The Synthesis

Can AI Become Conscious? Bernardo Kastrup's Provocative Challenge

The AI consciousness debate isn't just philosophical navel-gazing—it's the collision point where our most advanced technology confronts humanity's oldest mystery, with trillion-dollar industries and potential existential risks hanging in the balance. Computer scientist-turned-philosopher Bernardo Kastrup dismantles Silicon Valley's AI consciousness hype with surgical precision, wielding his dual expertise to expose what he sees as a fundamental category error in our thinking.

Kastrup's analytic idealism draws a battle line: consciousness isn't computational, and we're making a critical mistake by confusing simulation with reality. "I can run an accurate simulation of kidney function on my laptop," he argues, "but I would have no reason to think that when I run that simulation, my desktop would be filtering urine." This distinction demolishes the materialist assumption that sufficient computational complexity inevitably spawns awareness. While materialists might theoretically achieve artificial consciousness through engineering that mimics biology (in "50,000 years, not 20"), Kastrup maintains that silicon computers running simulations will remain fundamentally unconscious.

The stakes crystallize in Kastrup's most cutting observation: "Once you know down to the individual wire and gate how a computer works, you know that you can do all that in principle with pipes, water, and pressure valves... Who would think pipes, water, and pressure valves would become conscious if you just add a few more?" His perspective forces a reckoning with our assumptions about consciousness itself—not just whether machines can attain it, but what it means that we already have it.