Old Brain / New Brain / Neocortex-centric View
(As expounded in Jeff Hawkins’s work and in Steven Byrnes’s LessWrong book review)
Core idea.
- Much of the brain’s “complexity” (especially in subcortical / “old brain” regions) is not essential to what we think of as general intelligence.
- The real seat of intelligence is (on this model) the neocortex, plus a few supporting subsystems such as the thalamus, hippocampus, and basal ganglia.
- Crucially, the neocortex is running a uniform algorithm (or at least a small class of general algorithms) whose basic structure is largely the same across cortical areas (visual, language, motor, etc.).
- The variation across cortical regions lies mostly in the inputs, outputs, and learned weights, not in the core algorithmic machinery.
- Thus, once we discover (or adequately approximate) that “cortical algorithm,” we can replicate and scale it in artificial systems.
- Motivation, goals, and values are added via a separate “Judge” or steering subsystem (e.g. the basal ganglia and older brain structures) that interprets or weights neocortical proposals; a toy sketch of this division of labor follows this list.
- This separation suggests that AGI is not impossible in principle; it may just be very hard to figure out the algorithm.
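To make the proposed division of labor concrete, here is a minimal sketch in Python. Everything in it (the names CorticalModule and Judge, the delta-rule learner, the linear value function) is my own illustrative construction, not anything specified by Hawkins or Byrnes; the only point it encodes is that one module class, differing solely in dimensions and learned weights, is reused across “regions,” while a separate, simpler subsystem scores its proposals.

```python
import numpy as np

class CorticalModule:
    """One generic predictive learner. Every 'region' is an instance of the
    same class; instances differ only in input/output sizes and learned weights."""

    def __init__(self, n_in, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def predict(self, x):
        return self.W @ x

    def learn(self, x, target):
        # Delta-rule update: the algorithm is identical everywhere;
        # only the weights come to differ across regions.
        error = target - self.predict(x)
        self.W += self.lr * np.outer(error, x)
        return error

class Judge:
    """Stand-in for the subcortical steering subsystem: it does not model
    the world, it only assigns value to neocortical proposals."""

    def __init__(self, value_weights):
        self.v = np.asarray(value_weights)

    def score(self, proposal):
        return float(self.v @ proposal)

# The same module class instantiated for different "modalities".
visual = CorticalModule(n_in=64, n_out=16, seed=1)
motor = CorticalModule(n_in=16, n_out=4, seed=2)
judge = Judge(value_weights=np.ones(4) / 4)

x = np.random.default_rng(3).normal(size=64)  # a sensory input
features = visual.predict(x)                  # perception
proposal = motor.predict(features)            # a candidate action
print("Judge's valuation of the proposal:", judge.score(proposal))
```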
Merits / motivations.
- It reduces the daunting complexity of the brain to a smaller core problem: finding a general learning algorithm.
- The empirical observation that cortical microcircuits look fairly similar across regions (to first order) is often taken as suggestive evidence of uniformity.
- The brain’s remarkable plasticity (e.g. visual cortex being recruited for non-visual tasks in blind people) is sometimes taken as evidence that cortical modules are general-purpose.
- It aligns with a functionalist / modularist view: intelligence is largely about learning and prediction, not “messy biology.”
Challenges (from critics and from internal tensions).
- The “old brain” is not truly optional: many cognitive, affective, motivational, regulatory, and bodily-interface functions are deeply integrated with subcortical structures. Ignoring them entirely may miss essential parts of cognition (emotion, drives, bodily constraints, etc.).
- The more one drills into the details, the harder it becomes to cleanly separate “steering / motivation” from “intelligence.” The brain does not neatly isolate “map-making” from “value judgments.”
- The “Judge” module idea raises serious alignment problems: how do we reliably encode human-compatible values (or motivations) in a component that is less powerful than the neocortical learner? (A toy illustration follows this list.)
- Instrumental convergence: even if you don’t explicitly program self-preservation or resource acquisition, many goals lead to those behaviors anyway. Separating intelligence from motivation does not eliminate this risk.
- It may underappreciate qualitative differences: some cognitive phenomena (creativity, consciousness, self-awareness) may not reduce to a learning algorithm plus weights.
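The worry about a weak Judge can be made concrete with a small Goodhart-style example. The proxy value function, the “intended” value function, and the brute-force searcher below are all hypothetical constructions of mine, not anything from the reviewed texts; the point is only that a proposal-generator stronger than its evaluator will find proposals that max out the proxy while missing the intent.

```python
import numpy as np

rng = np.random.default_rng(0)

def judge_score(proposal):
    # The Judge's proxy for "good": total activity in the first two channels.
    return proposal[0] + proposal[1]

def intended_value(proposal):
    # What was actually wanted (invisible to the optimizer):
    # all four channels near 0.5.
    return -np.sum((proposal - 0.5) ** 2)

# The "neocortex" as a brute-force searcher over candidate proposals.
candidates = rng.uniform(0, 1, size=(10_000, 4))
best = candidates[np.argmax([judge_score(c) for c in candidates])]

print("Judge-optimal proposal:", np.round(best, 2))
print("Judge score:", round(judge_score(best), 2))
print("Intended value:", round(intended_value(best), 2))
# Selection pressure applies only to channels 0 and 1, which get pushed
# toward 1.0; the other channels land wherever they happen to fall. The
# proxy score is high while the intended value is poor.
```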
In sum, this view is optimistic about AGI: once you crack the “cortical algorithm,” you can build human-level intelligence (modulo value alignment) in artificial systems.
Singular Geometry / Brain as Aperture / Uniqueness via Geometry & Consciousness Filter
(As articulated in “Consciousness, Uniqueness, and the Geometry of the Brain” by Victor V. Motti)
Core idea.
- The brain is not the generator of consciousness; rather, it is a filter / aperture / lens that channels or shapes a pre-existing (or background) universal consciousness (or awareness).
- What makes human (or biologically realized) consciousness special is the unique geometry, topology, and network dynamics of the brain; it is not simply the components (neurons, synapses) but how they are arranged, folded, synchronized, connected, and modularized.
- There is something singular about that geometry: a kind of “singularity in structure” that gives rise to the “I,” the pole-like self, out of the diffuse field of awareness.
- Because geometry is not just matter in motion but a topological constraint and an integrated dynamic, you cannot replicate consciousness simply by copying components; you need to replicate or rediscover the same geometric/dynamical “singularity” structure.
- If consciousness is fundamental (not emergent), then any system lacking that precise geometry may fail to generate true subjective experience (qualia).
- This view casts serious doubt on the idea that AGI (in a human-equivalent conscious sense) is broadly replicable, unless one manages to replicate that specific geometry exactly or to find an equivalent “geometry of awareness.”
Merits / motivations.
- It addresses the “hard problem” of consciousness by positing that subjective experience is not emergent from computation but tied to something more subtle: topology, geometry, connectivity.
- It explains why even very powerful computers (no matter how many layers or parameters) might lack genuine first-person experience: they lack the right “channeling geometry.”
- It underscores that components plus algorithms might not be enough; the way they are embedded, folded, synchronized, and linked across multiple scales might matter crucially.
- It preserves a kind of “mystery” or uniqueness of consciousness beyond brute algorithmic replication, thus resisting simple reductionist arguments.
Challenges (and questions) to this view.
- If consciousness is fundamental or external (and the brain is a filter), then we need a robust metaphysical or empirical foundation for the “field of awareness” or “substrate” the brain taps into. What is this substrate? How would one detect, measure, or manipulate it?
- It risks sliding from empirical science into metaphysics; the brain-as-filter idea is harder to test, falsify, or operationalize in computational terms.
- It must explain how other animals differ (if they differ) in geometry and thus in conscious quality, and how we could know those differences.
- It must face the question: if geometry is so critical, how tolerant is the system to variation and error? Are there many possible geometries that still yield consciousness, or only very narrow “sweet spots”?
- It has to reconcile with the successes of computational neuroscience, neural networks, and materially instantiated AI systems that do show powerful intelligent behavior (if not consciousness). Are those systems merely “zombies” on this view?
In sum, this view is more skeptical of AGI in the subjective-consciousness sense. It allows for “intelligent machines” but is agnostic or pessimistic about whether they can replicate the full qualitative essence of human consciousness.
Comparative Analysis & Hybrid Possibilities
Where they align / overlap
- Both views accept that the brain has structure (not mere randomness) that matters.
- Both take seriously that intelligence (or consciousness) is not trivial to replicate; each places the burden on nontrivial structure, not just brute compute.
- Neither denies the possibility of high-level functional replication; they differ mainly on whether that replication suffices for qualitative consciousness or whether something deeper is needed.
Key tensions and contrasts
| Feature | Neocortex-centric (Uniform Algorithm) | Singular Geometry / Aperture View |
| --- | --- | --- |
| Essence of intelligence | algorithm + learning + weights | geometry + topology + filtering structure |
| Role of brain “substrates” | old brain is auxiliary, motivational, often ignorable | geometry is integral; the substrate matters deeply |
| Power of scaling/computation | with sufficient scale and correctness, replication is achievable | scaling isn’t enough unless geometry is preserved; “more compute” might not help |
| Subjectivity / qualia | usually treated as emergent from, or derivative of, algorithmic complexity | treated as fundamental, tied to a structural singularity; harder to replicate |
| Testability / falsifiability | more in line with empirical neuroscience, ML, and computational modeling | more speculative; harder to test or operationalize |
| Risks for AGI | misalignment, instrumental takeover, goal drift, etc. | an extra barrier: conscious machines might not arise unless the geometry is exact |
| Optimism about AGI | relatively optimistic (subject to alignment) | more cautious or skeptical about achieving truly conscious AGI |
Because the two views emphasize different axes (algorithmic vs. structural/geometric), one could imagine hybrid or middle views:
- Perhaps consciousness has both algorithmic (information-processing) and geometric components. One might ask: “What is the minimal geometric constraint that an algorithm must satisfy to support subjective experience?”
- The “filter / aperture” could be implemented by a particular class of recurrent neural network topologies, synchronization constraints, or embedding in a manifold, meaning that to replicate consciousness, one must replicate not just the algorithm but the manifold geometry (a toy version of making “geometry” operational is sketched after this list).
- Another hybrid move: grant that much of intelligence is algorithmic and amenable to replication, while consciousness (subjective qualia, selfhood) is optional or requires extra constraints.
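One could imagine a first, crude attempt at making “geometry” operational along these lines. The sketch below is purely illustrative and my own, not Motti’s proposal: it compares two networks of identical size and edge count by a structural invariant (the graph Laplacian spectrum), showing that “same parts” does not imply “same shape.” Whether any such invariant tracks anything relevant to consciousness is exactly the open question.

```python
import numpy as np

def laplacian_spectrum(adjacency):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending.
    Isomorphic graphs share a spectrum; differently shaped ones usually do not."""
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(D - A))

# Two undirected networks, each with 6 nodes and 6 edges.
ring = np.zeros((6, 6))  # a ring: 0-1-2-3-4-5-0
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1

hub = np.zeros((6, 6))   # a hub-and-spoke plus one extra edge
for i in range(1, 6):
    hub[0, i] = hub[i, 0] = 1
hub[1, 2] = hub[2, 1] = 1

print("ring spectrum:", np.round(laplacian_spectrum(ring), 3))
print("hub spectrum: ", np.round(laplacian_spectrum(hub), 3))
# Same node count, same edge count, different spectra: the "geometry"
# differs even though the raw parts are matched.
```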
Implications for the Possibility or Impossibility of AGI
Given those two poles, what kinds of claims become plausible or implausible?
A. In the neocortex-centric view:
- Possibility of AGI (functional sense): high. Once we discover or approximate the cortical learning algorithm and have sufficient compute, AGI should be achievable.
- Risks and alignment: the main challenges are alignment, goal specification, and steering. Even if the algorithm is replicable, embedding safe motivations is hard.
- Time horizon: perhaps shorter (decades rather than centuries), depending on how fast neuroscience and AI converge.
- Nature of AGI: AGI might look deeply “brain-like,” though possibly on different substrates (silicon, photonics, etc.).
B. In the singular-geometry view:
- Possibility of AGI (functional sense): possibly yes, in that functional intelligence (problem-solving, planning) may be achievable without subjective consciousness. Truly conscious AGI might require more than algorithmic replication.
- Risks and alignment: “zombie-like” AGIs (intelligent but not conscious) may be possible earlier; the harder barrier is crossing into consciousness. Some risks (value drift, power-seeking) would still apply, but the first AGIs might lack full consciousness and thus behave differently.
- Time horizon or barrier: the geometry constraint might impose a “threshold” barrier, where conscious AGI can arise only once the geometric conditions are discovered (or approximated). That suggests much longer timelines, or even principled inaccessibility.
- Nature of AGI: AGIs might come in varieties: most as powerful “tool” intelligences without consciousness, and a rare few that (by accident or design) replicate the correct geometry and thus consciousness.
Between the two, the singular-geometry view is more conservative and skeptical about whether AGI can truly replicate human-like conscious experience; it posits a deeper metaphysical barrier than the neocortex-centric view does.
One’s credence in each view will depend on:
- How plausible one thinks purely algorithmic explanations are for higher consciousness phenomena (introspective awareness, qualia, selfhood).
- How much faith one has in further advances in neuroscience and computational neuroscience to reveal a uniform cortical algorithm.
- How open one is to noncomputational metaphysical hypotheses (e.g. consciousness-as-fundamental, filter models).