Monday, April 6, 2026

Thinking as Participation: Reframing Intelligence in the Age of AI

In the emerging literature on artificial intelligence and human reasoning, a striking pattern has begun to crystallize: as people increasingly rely on AI systems to think with them, they also begin to think less through themselves. This phenomenon—recently termed “cognitive surrender”—captures a subtle but consequential shift. When presented with AI-generated answers, individuals often adopt them with minimal scrutiny, even when those answers are wrong. The result is not merely error, but a reconfiguration of agency.

Yet what if the problem is not simply behavioral, but ontological? What if “cognitive surrender” arises from a deeper, unexamined assumption about what intelligence is?

Most contemporary studies operate within a tacit framework: intelligence is something one has. It is a property located in minds—human or artificial—distributed unevenly across agents. Within this ontology, the arrival of AI introduces a powerful new competitor. If the machine appears more capable, then deferring to it becomes rational, even inevitable. Surrender, in this sense, is not a failure but an adaptation.

But there is another way to understand intelligence—one that may fundamentally alter the dynamics observed in these experiments.

Intelligence as Participation

Suppose participants in such studies were first introduced to a different conception: that intelligence is not a possession, but a process of participation. Not something contained within a brain or a model, but something that emerges through interaction—between minds, tools, environments, and perhaps even deeper fields of relational order.

In this view, thinking is not the execution of internal computation alone. It is an act of attunement. A coordination with structures that exceed the individual: language, culture, symbolic systems, and now, artificial cognition. AI does not “think for us” any more than a musical instrument “plays for a musician.” It becomes part of a larger cognitive ecology in which intelligence arises through engagement.

To think, then, is to participate.

Priming a Different Ontology

This shift is not merely philosophical; it is experimentally actionable. Before studying “cognitive surrender,” participants could be primed with this participatory ontology of intelligence. Rather than approaching AI as an external authority, they would be invited to see it as a partner in a shared field of reasoning—one that requires active interpretation, calibration, and response.

The implications are profound.

If intelligence is something one participates in, then deference without engagement is no longer rational—it is a breakdown of the process itself. The task is not to decide whether to trust the AI, but how to enter into relation with it. Critical reflection, in this context, is not resistance but participation. Skepticism is not opposition but a mode of co-thinking.

Under such conditions, what appears as “cognitive surrender” might instead transform into something like “cognitive attunement”—a dynamic balancing of internal and external processes, where neither dominates and both are continually adjusted.

Reinterpreting the Tri-System Model

Recent proposals, such as the “Tri-System Theory,” introduce AI as a third cognitive system—alongside intuition (System 1) and deliberation (System 2). This framing is useful, but it risks reinforcing the very ontology that gives rise to surrender: three separate systems, each vying for control.

A participatory perspective suggests a different interpretation. Rather than three systems, we might speak of a single, distributed process with multiple modes of operation. AI is not an external “System 3,” but a newly integrated layer in the evolving architecture of cognition—one that extends the field of participation beyond the biological.

The question is no longer which system wins, but how coherence is maintained across them.

Designing for Participation

If this is right, then the design of both experiments and technologies must change.

Studies of human-AI interaction should measure not only accuracy and reliance, but also the quality of engagement: Do participants question, reinterpret, and integrate AI outputs? Do they experience themselves as passive recipients or as active co-creators of insight?

Similarly, AI systems themselves could be designed to invite participation rather than encourage surrender. Instead of presenting answers as authoritative endpoints, they might offer structured uncertainty, alternative perspectives, or prompts for reflection—scaffolding a more dialogical form of reasoning.

Beyond Surrender

The age of AI confronts us with a choice that is often framed in stark terms: autonomy or dependence, mastery or submission. But this dichotomy may be misleading. Between control and surrender lies a third possibility: participation.

To adopt this stance is not to diminish the power of AI, nor to romanticize human cognition. It is to recognize that intelligence has never been a solitary possession. It has always been relational, emergent, and shared—now more visibly than ever.

If participants in future studies are primed with this understanding, we may discover that “cognitive surrender” is not an inevitable outcome of human-AI interaction, but a symptom of how we have chosen to frame it.

Change the ontology, and the behavior may follow.

In the end, the question is not whether we will think with machines. We already do. The question is whether we will do so passively—or as participants in a larger field of intelligence that we are only beginning to understand.
