
What defines the essence of consciousness?

Michael Levin: "I think the right question is what kind and how much."

Key Insights

[00:07:11]

Consciousness should be measured by type and degree.

"I think the right question is what kind and how much."

[00:07:37]

Consciousness is an interface, not a creation.

"We don't create conscious beings... I think what we make are interfaces or pointers into a space."

[00:11:04]

Studying consciousness requires first-person involvement.

"I don't think it can be done entirely in third person."

[00:12:43]

Consciousness is rooted in collective intelligence.

"I claim that all intelligence is collective intelligence. We're all made of parts."

[00:15:52]

Learning increases integrated information in systems.

"As you train them, one of the things that goes up is integrated information of the system."

[00:17:33]

Interfaces could enable shared consciousness experiences.

"We need to develop translator interfaces that allow us to experience being part of a collective intelligence from the inside."

[00:18:23]

Future interfaces may enable participation in diverse consciousnesses.

"We could have interfaces that allow you to participate in the consciousness of radically different beings."

The Synthesis

The Consciousness Test: Probing the Edges of Self-Awareness

Every machine-consciousness debate misses what Michael Levin illuminates with startling clarity: consciousness emerges from the messy, continuous negotiation between competent parts that must somehow align toward collective goals. In this conversation between Levin and CIMC's Joscha Bach and Lou de K, the biological foundations of consciousness reveal urgent implications for AI development—just as software agents gain unprecedented autonomy and companies race to build systems that might harbor inner experiences.

Levin demolishes binary thinking about consciousness, arguing instead for measuring "what kind and how much" rather than yes/no categorization. His framework positions conscious entities as interfaces or "pointers into a platonic space" containing patterns we recognize as minds, with consciousness being "what it looks like from the inside of that space looking out." Unlike our error-correcting computers with clean abstraction layers, biological consciousness emerges from unreliable substrates where boundaries remain perpetually uncertain—a stark contrast to how we engineer AI systems today.

"We don't create conscious beings," Levin provocatively asserts, challenging the fundamental assumption underlying AI ethics debates. His demonstration of single-celled organisms with six different learning capacities, including Pavlovian conditioning, forces a reconsideration of consciousness's biological foundations. When embryonic cells that "don't know where you end and the outside world begins" can self-organize into multiple separate beings depending on context, it suggests consciousness might be less about computational power and more about navigating the fundamental problem of selfhood.