Is consciousness 'smeared across time'?
I'm not saying that consciousness isn't a thing
Contributors
@MLStreetTalk
@MiTiBennett
Source: Machine Learning Street Talk
Key Insights
Intelligence is defined by adaptation efficiency, not data scale.
"Intelligence is about adaptation with limited resources."
Intelligent systems produce programs but are not defined by them.
"Programs are the output of an intelligence system, not the intelligence itself."
Abstraction choice impacts task complexity in AI (see the sketch after these insights).
"If I use a different set of abstractions to achieve the same ends, I can make something very difficult or very easy."
AI systems must be contextually aware.
"Programs have purpose. They have they are situated in a context, in a world, in an environment."
Intelligence involves multiscale bidirectional causality.
"It is definitely multiscale bidirectional because in the same way that uh so uh cells uh cells can network right each cell has its sort of own goal-directed behavior."
Adaptive learning of abstractions is essential for AI.
"Adaptively learning it is definitely the way to go because we we have learned the abstractions we have in order to uh because these things are useful to us."
The concept of 'immortal computation' is misleading.
"There's just mortal computation. Let's not complicate the matter by adding in an extra concept that doesn't apply."
Current AI lacks true human-like intelligence.
"If you want something that's actually intelligent like a human though, we do not have that."
Philosophical zombies are deemed impossible.
"I propose to solve it by showing that what's called a philosophical zombie is impossible in every conceivable world."
Optimal learning requires a self-representation.
"If we can define an optimal agent that learns optimally it must construct this sort of representation of itself sort of a causal identity for self."
The Synthesis
Intelligence Isn't What You Think: The Adaptation Game
Silicon Valley's obsession with scaling AI has blinded us to what intelligence actually is: not a brute-force data-crunching exercise but "adaptation with limited resources," as Dr. Michael Timothy Bennett argues in his provocative paper "What the F*** is Artificial Intelligence." At a moment when trillion-parameter models dominate headlines, Bennett's biological perspective offers a desperately needed reality check: biological systems accomplish extraordinary feats with a fraction of the energy and data that our most advanced AI systems require.
The clash between computational and biological frameworks creates the podcast's intellectual voltage. Bennett dismantles conventional AI wisdom by rejecting computational dualism, our tendency to separate software from hardware in ways nature never does. Instead of simply scaling parameters, he advocates studying self-organization in living systems, where intelligence emerges through efficient adaptation rather than sheer processing power. This positions him against Silicon Valley's "just scale it up" orthodoxy while offering a more nuanced path forward.
"We have just replaced the pineal gland with a Turing machine," Bennett quips, highlighting how modern AI thinking remains trapped in centuries-old Cartesian frameworks despite our technological advances. His most provocative claim cuts to the existential core of AI development: "Whatever that software does has to pass through an interpreter, and the interpreter decides what it does" – suggesting our machines remain fundamentally different from biological intelligence not because of scale, but because of how they're structured at their most basic level.