Glimmer - A Word I Didn't Know I Needed
Dewey in the Age of AI: Glimmers as a Practical Device for Experiential Learning
I heard the word glimmer today in a sense I didn’t recognize.
Not shimmer. Not hope. Something more precise and more clinical: a specific small cue — sensory, relational, contextual — that shifts the nervous system toward safety. The granular opposite of a trigger.
The term comes from Deb Dana’s work on polyvagal theory. Stephen Porges mapped the autonomic nervous system’s responses to perceived safety and threat. Dana, in The Polyvagal Theory in Therapy: Engaging the Rhythm of Regulation (2018) and her broader clinical development of Porges’ framework, introduced glimmers as the micro-moment counterpart to what everyone already understood about triggers. A trigger is a specific cue that moves the nervous system toward defense. A glimmer is the opposite: a small specific signal that moves it toward the ventral vagal state — the condition where genuine engagement, learning, and social connection become possible.
The clinical significance is in the scale. Glimmers are not big positive experiences. They are tiny specific ones. The quality of light through a particular window. A specific person’s laugh. The weight of a familiar mug. Small enough to overlook. Specific enough to be genuinely activating when noticed.
Dana’s therapeutic application was about training clients to accumulate glimmers — building what she called a glimmer practice — as a bottom-up regulation strategy. Not cognitive reframing from the top down. Sensory specificity as the mechanism. The body first. The mind follows.
Branding and design practitioners picked the word up because it named something they had been circling for years without adequate language. The detail that makes a brand feel alive rather than performed. The specific weight of a product in the hand. The exact corner of a page. Always specific. Never general.
When I heard the word, I recognized the mechanism immediately — not from Dana, but from a problem I’d been sitting with for years.
Practical Dewey
Dewey spent his career trying to name what makes an experience come alive rather than lie flat. The difference between the encounter that genuinely reorganizes how a person sees the world and the encounter that simply adds one more item to what they already know. He called it aesthetic experience. The specific sensory moment that activates genuine engagement before the conceptual apparatus has time to categorize and dismiss it.
The practical problem with Dewey — and every educator who takes him seriously eventually hits this wall — is that genuinely reconstructive experience requires real problems with real resistance and real consequence. The child cooking an actual meal. The student building something that has to work. The inquiry that fails in a way that costs something. These conditions are often impractical at scale, difficult to design, and nearly impossible to sustain across a full curriculum.
Glimmer offers a way through.
Not as a replacement for the real — nothing replaces the real. But as the entry point that makes the real accessible. Small enough to be achievable. Specific enough to be genuinely activating. Carrying enough of the actual structure of the problem that what follows is genuine inquiry, not a simulation of it.
The fracture Dewey identified in 1900 is the same fracture the AI age has made undeniable. What follows is an attempt to think through what a glimmer-based practice might look like — and why, right now, the instrument matters as much as the argument.
John Dewey spent his career arguing that the curriculum was wrong. Not wrong in its methods, but wrong in its foundations. Teaching children to retrieve facts, execute procedures, and perform correctly for assessment was never what education was for — even when humans were the best available instruments for doing those things.
The machines didn’t create that error. They exposed it.
This is the claim most AI-in-education discourse buries or avoids. Everyone is asking: how do we use AI to improve learning outcomes? Dewey’s prior question is harder and more important: what kind of people does education produce, and are they capable of living fully, thinking independently, and participating in democratic life?
The AI age makes that question urgent in a new way. The cognitive capacities that Tier 1 education optimized for — pattern retrieval, syntactic correctness, fact recall, arithmetic speed — are now performed superhumanly by machines that fit in a pocket. The student who spent twelve years developing these capacities has spent twelve years preparing to lose a competition they didn’t know they were entering.
But the deeper problem isn’t obsolescence. It’s that the capacities education didn’t develop — problem formulation, causal reasoning, plausibility auditing, collective intelligence, practical wisdom — are now the only remaining path to a fully human life. Not because AI can’t do them. Because these capacities are what it means to think, not just to retrieve.
Dewey saw this clearly in 1900. He just didn’t have the evidence that 2025 provides.
What Dewey Actually Argued
Dewey’s central claim wasn’t pedagogical. It was epistemological. Knowledge is not a commodity to be acquired and stored. It is a capacity developed through genuine encounter with real problems. The mind is not a container. It is an instrument of adaptation — biological, social, and democratic simultaneously.
This is what he meant by the reconstruction of experience. Not the accumulation of content. Not the performance of understanding. The genuine reorganization of how a person sees and acts in the world, produced by transaction with problems that have real resistance and real consequence.
Education is not preparation for life. It is life.
The implications for curriculum are radical. Subject-area divisions are administrative conveniences mistaken for epistemological truth. History, science, mathematics, and literature are not separate in the world — they are separate in the faculty lounge. A child cooking learns chemistry, mathematics, history, economics, and social cooperation simultaneously because reality doesn’t arrive pre-sorted by department.
The inquiry process that Dewey formalized — felt difficulty, hypothesis, testing, reflection, reconstruction — is not a teaching method. It is a description of how genuine thinking actually works. Every departure from it produces what he called mis-educative experience: activity that closes off future growth rather than opening it.
Three principles govern everything that follows:
Continuity — each experience must connect to what came before and open into what comes next. An experience disconnected from the learner’s existing understanding and not pointed toward future development is inert regardless of how well it is delivered.
Interaction — genuine learning requires transaction between the learner and an environment that pushes back. A simulated environment that doesn’t resist, a case study that has no consequence, a problem designed to be solvable — none of these produce reconstruction. They produce performance.
Democratic purpose — education is not primarily economic preparation. It is the development of citizens capable of self-governance. The epistemic capacities that allow a person to formulate problems, reason through evidence, revise beliefs, and participate in collective inquiry are not soft skills. They are the prerequisites for democratic life. A population that can retrieve information but cannot reason together is not a democracy. It is a collection of well-informed individuals with no shared epistemic infrastructure.
The Taxonomy of What Remains
Against this framework, the Irreducibly Human taxonomy of human intelligence tiers is not primarily a curriculum design tool. It is a map of what education has abandoned and what the AI age makes irreplaceable.
Tier 1 — Pattern and Association. The intelligences that standardized education optimized for: linguistic ability, logical-mathematical reasoning, pattern recognition, encyclopedic recall. These are also the intelligences where machines are now superhuman. Not faster-than-average. Superhuman, by orders of magnitude, without fatigue, without error. Teaching humans to compete directly at Tier 1 is, in Dewey’s terms, teaching students to lift with their backs after the forklift has arrived.
The forklift metaphor requires extension. The point of the forklift is not to free your back so you can do other physical tasks. The point is to free your mind so you can ask what needs moving, where, and why — questions the forklift cannot ask. AI doesn’t just change the labor. It changes what counts as the work.
Tier 2 — Embodied and Sensorimotor. The knowledge that lives in the body: a surgeon’s hands, a carpenter’s feel for grain, a nurse’s ability to read tension in a patient’s movement before the patient can name it. Dewey’s Laboratory School understood this. The child cooking wasn’t simulating cooking. The child building wasn’t practicing building. The hand and the mind develop together. You cannot separate them without impoverishing both.
Tier 3 — Social and Personal. Reading others, cultural navigation, emotional regulation, moral reasoning under genuine stakes. Machines simulate these. They do not live them. A language model produces text that reads as empathetic without experiencing anything. It generates ethical arguments without having skin in the game. The danger is not that the output is wrong. The danger is that the capacity atrophies in the person who stopped exercising it.
Tier 4 — Metacognitive and Supervisory. The intelligences that oversee the others. Plausibility auditing: knowing an answer is wrong before you can prove it. Problem formulation: deciding what is worth solving. Tool orchestration: knowing which instrument to use, when, and whether to trust it. Interpretive judgment: what does this result mean in this specific context. Executive integration: coordinating all of the above toward a unified goal.
Dewey would call Tier 4 reflective inquiry in its most concentrated form. Problem formulation is exactly what he meant by the felt difficulty — the entry point of genuine inquiry. Plausibility auditing is what happens when a person has internalized enough prior reconstructed experience to sense that something is wrong before they can prove it. These capacities cannot be taught directly. They can only be developed through repeated encounter with real problems where the cost of poor judgment is genuine.
Tier 5 — Causal and Counterfactual. The capacity to ask not just what the data shows but what would happen if we intervened — and what we gave up by not intervening differently. Judea Pearl’s three rungs of causation are Dewey’s inquiry cycle made formal. Observation is pattern recognition. Intervention is hypothesis testing. Counterfactual is reflection on what the reconstruction actually cost.
JC Penney had the correlations right. Customers who paid full price showed less price sensitivity than coupon users. What the data could not tell them was what would happen if they removed the coupon system entirely. That’s an intervention. That’s Rung 2. They ran the experiment on a live business instead of a causal model. The cost was not bad data or bad analysts. It was the wrong instrument for the question being asked.
Current AI systems are superhuman at Rung 1. They are weak to absent at Rungs 2 and 3. A population that can query AI for associations but cannot formulate interventions or reason about counterfactuals has access to extraordinary pattern recognition and no capacity to make the decisions that actually matter.
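The JC Penney structure fits in a toy simulation. The customer types and numbers below are invented for illustration; the point is only that Rung 1 and Rung 2 answer different questions:

```python
# Toy model of the JC Penney trap. Two invented customer types:
# "loyal" customers pay full price either way; "deal_seekers" only
# buy when a coupon exists. All numbers are made up for illustration.
customers = ["loyal"] * 300 + ["deal_seeker"] * 700

def spend(kind, coupons_available):
    if kind == "loyal":
        return 100          # buys at full price regardless
    if coupons_available:
        return 80           # buys only at a discount
    return 0                # no coupon, no sale

# Rung 1, observation: with coupons in place, full-price buyers average
# 100 per head and coupon users 80. The correlation is real.
revenue_with = sum(spend(c, True) for c in customers)

# Rung 2, intervention: remove the coupon system and re-run the world.
revenue_without = sum(spend(c, False) for c in customers)

print(revenue_with)     # 86000
print(revenue_without)  # 30000
```

The per-customer correlation survives the intervention; the business does not. No amount of Rung 1 querying produces the second number, because it describes a world that has to be made, not observed.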
Tier 6 — Collective and Distributed. The intelligence that is not a property of any individual but emerges from groups of people in genuine relationship. The thing that makes science work over centuries. The thing that makes democracy more than the sum of its voters. Language models may be a lossy compression of collective human intelligence — not alien intelligence but our own reflected back. What they cannot reflect is the thing that happened between us: the disagreement that refined an idea, the trust that made knowledge transmissible, the collaborative friction that no individual possessed and no training corpus can capture because it existed in the interaction, not in the record of the interaction.
Tier 7 — Existential and Wisdom. Phronesis: the practical wisdom that knows when and how to apply what you know, and when not to. This tier requires being alive, mortal, and situated in time. It requires stakes — the possibility of loss, of reputation, of a life poorly lived. You cannot teach it. You can only design the conditions that make it more or less likely to develop when a person encounters the real.
Dewey would call Tier 7 simply living. The series points toward it. The work of getting there happens elsewhere.
The Problem with Keeping Up
Here is where the practical problem announces itself.
Educators, practitioners, and intellectually serious people across every domain report the same experience: they cannot keep up. Not with tasks, not with workload — with frameworks. Causal inference. Network science. Polyvagal theory. Large language models. Transformers. Retrieval-augmented generation. Each genuinely interesting. None integrated. The accumulation produces anxiety, not capacity.
This is the most sophisticated version of the periodic table problem. It is Tier 1 about Tier 1. Pattern retrieval about frameworks for understanding patterns. The student memorizing the names of intelligences without developing any of them. The practitioner keeping up with theories of experiential learning without having a single experience that reconstructs how they see their work.
The theories are not the problem. The relationship to the theories is the problem.
An idea you’ve encountered is not a tool. An idea you’ve used on a real problem — that failed, that required revision, that changed how you see the problem — is a tool. Dewey was precise about this. Ideas are instruments assessed by their practical utility in resolving specific problems. An instrument you’ve never picked up isn’t part of your toolkit. It’s an item in a catalog of tools you’ve read about.
The person drowning in frameworks doesn’t need more frameworks described more clearly. They need one framework used on one real problem until it either works or breaks in an instructive way.
The parallel experiment described below is a response to this problem.
Glimmers: The Missing Instrument
The term glimmer comes from polyvagal theory — the small, specific, sensory moment that signals safety and genuine aliveness to the nervous system. Branding practitioners adopted it because it names something they had been trying to describe for years: the specific detail that makes something feel real rather than performed. Not the logo, not the tagline — the weight of a product in the hand, the exact sound of a notification, the corner of a page that’s slightly rough.
The mechanism is specificity. Glimmers are always specific.
Dewey spent his career trying to name what makes an experience come alive rather than lie flat. His closest term was aesthetic experience — the dramatic, compelling, unifying encounter in which the learner feels genuinely absorbed. Not decorative. Not a reward for completing the real work. The aesthetic dimension of an experience is what makes it reconstructive rather than merely informative.
Glimmer is the best single word for what Dewey was pointing at.
Consider the difference:
“JC Penney experienced significant revenue decline following their pricing strategy change.”
“Revenue dropped 25% in one year. The CEO was gone in 18 months.”
The first is information. The second is a glimmer. The nervous system registers something before the conceptual apparatus engages. The felt difficulty is activated before the lesson begins.
Or consider the Sherpa asking “What did you start to say?” rather than “What happened?” One is data collection. One is a glimmer — the specific small move that creates the conditions for genuine reconstruction.
Or the MVAL protocol’s Environment field, which forces the student to describe organizational power structure rather than the room. The moment a student realizes what they’ve been avoiding is a glimmer. Small. Specific. Changes everything that follows.
The design criteria for a glimmer:
Specificity — not a general principle but a particular detail. 25%, not “significant.” 18 months, not “quickly.” The exact weight of something real.
Aliveness — the nervous system registers genuine encounter before the mind categorizes it. Something is at stake even before the learner can articulate what.
Scale-independence — glimmers exist in everything from a sentence to a semester. The meal at the Laboratory School was a glimmer. The question “what did you start to say?” is a glimmer. A well-designed assignment brief can contain a glimmer or not. The difference is not length or complexity.
Fractal structure — a good glimmer contains the full structure of the problem it opens. JC Penney is not a simplified version of causal reasoning. It is the entire structure of Tier 5 at human scale. The student who genuinely reconstructs what went wrong at JC Penney has encountered the real problem — not a toy version of it.
The load criterion — a glimmer without effort is information snacking with better production values. This is the test that separates a genuine glimmer from aesthetic decoration.
Training science offers the precise concept: Rate of Perceived Exertion. RPE 7-8 is productive struggle — working at the edge of current capacity with enough reserve to maintain form and recover. This is where adaptation happens. RPE 2 is 5 pounds lifted 10,000 times — high volume, negligible load, zero reconstruction. You could do it forever and never get stronger. The completion certificate gets issued. Nothing changes.
The glimmer has to carry enough weight to demand genuine effort from the learner encountering it. Not crushing — that produces shutdown, not inquiry. Not comfortable — that produces maintenance, not growth. Working at the edge of current capacity with something real at stake.
Critically, the load varies. The 350-pound lift that was RPE 8 last month is RPE 6 this month. A well-designed glimmer is self-calibrating — it contains enough genuine resistance to demand real effort from someone at the right developmental stage and is completable enough that someone beyond that stage moves on naturally. The same specific real problem loads different capacities differently depending on where the learner is.
What doesn’t vary is the requirement for genuine effort. A glimmer that requires nothing of the learner is a micro-glimmer — a pleasant novelty hit that returns to baseline in 36 minutes. Reconstruction happens in the struggle that follows the entry point. Not in the entry point itself.
The glimmer earns its place by making the learner willing to pick up the weight. What happens after has to be real.
The Parallel Experiment: AI-Assisted Glimmers
Irreducibly Human maps what AI can and cannot do and develops the pedagogy for what remains irreducibly human. That is its purpose and it should not be diluted.
The parallel experiment is different in kind. It is the territory where the map gets tested.
The premise: AI tools have collapsed the barrier between “I wonder if” and “here is a thing that exists.” The friction between idea and working prototype has been reduced to almost nothing for a wide range of problems. This changes the curriculum bottleneck fundamentally. It used to be technical — can the student build the thing they imagine? Now it is a judgment problem — can the student identify a problem worth solving, recognize when the output is wrong, and make the call about whether the result is useful or merely impressive?
Those are Tier 4 and Tier 5 capacities. But they get developed through Tier 1 practice on small real things with low stakes. The instrument that develops judgment is not a course on judgment. It is the repeated experience of building something, encountering the moment it fails, and being required to decide why.
The parallel experiment proposes AI as a Sherpa for this process — not a teacher, not a coach, not a co-creator. A Sherpa carries the infrastructure that makes the climb possible. The climbing belongs to the builder.
The core assignment across every tier is the same:
Build one small real thing that didn’t exist yesterday and matters to someone today. Not a demonstration. Not an exercise. Not an impressive artifact. A useful thing that works, at human scale, that someone actually uses.
Small — completable this week. The Deweyan cycle requires completion. You must undergo the consequence to reconstruct from the doing. Incompletion produces learned helplessness, not inquiry. The massive project that never ships is the enemy of development.
Real — works in the world, not just in the assignment. The feedback is honest because the environment is honest. No rubric required. Did it do what you needed? Yes or no.
Useful — solves a problem someone actually has, including the builder. Useful is not the same as impressive. Many impressive things are useless. Many useful things are unimpressive. The criterion is genuine utility, not demonstration of mastery.
Potentially interesting — has an edge that might surprise. Might connect to something larger. Might matter more than expected. This criterion preserves the continuity that Dewey required: each experience opening into the next. The student who builds something interesting keeps pulling the thread past the assignment deadline.
The Glimmer as Entry Point Across Tiers
The parallel experiment is loosely mapped to the Irreducibly Human tiers not as curriculum but as orientation. The tier structure describes the territory. The glimmer is how you enter it.
Tier 1 — Tool mastery. Stakes are almost irrelevant here. Low consequence failure is fine and instructive. The glimmer assignment: find something you do repeatedly that wastes your time. Use AI to reduce that waste. Ship it. Not elegant. Not generalizable. Useful to you today.
This constraint does something important. It forces problem formulation before tool selection. You have to identify what actually wastes your time before you can build anything. That single move is already more Deweyian than most AI literacy courses.
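One hypothetical shape for that Tier 1 build: a throwaway script that renames screenshots by capture date so they can be found again. The folder layout and naming scheme are invented for illustration; the only criterion it has to meet is "useful to you today."

```python
import datetime
import pathlib

def rename_screenshots(folder):
    # Hypothetical build: "Screenshot (143).png" piles up and is unfindable.
    # Rename each one to its modification timestamp so search works.
    for path in sorted(pathlib.Path(folder).glob("Screenshot*.png")):
        taken = datetime.datetime.fromtimestamp(path.stat().st_mtime)
        target = path.with_name(taken.strftime("%Y-%m-%d_%H%M%S") + ".png")
        if not target.exists():   # never clobber an earlier rename
            path.rename(target)
```

Not elegant, not generalizable, shipped in an afternoon — which is the point of the exercise, not a limitation of it.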
Tier 4 — Metacognitive and Supervisory. The entry point shifts from personal to interpersonal. The glimmer assignment: build something useful for a decision someone else has to make. Now you must formulate their problem, not yours. The metacognitive demand appears immediately. You can’t outsource the judgment about what they actually need.
The moment the tool produces something confidently wrong — and it will — is the educative moment. Not the moment of correct output. The moment of plausible-sounding but incorrect output that the builder recognizes as wrong before they can prove it. That sensation is Tier 4 being born.
Tier 5 — Causal and Counterfactual. The glimmer assignment: find one decision someone in your organization made last month based on correlation they interpreted as causation. Build the causal model that shows what question they were actually asking. Show what the Rung 2 question would have been.
That’s a week’s work. It contains the full JC Penney structure. Nobody loses their job if the student gets it wrong. But the causal model has to be defensible to someone who knows the domain. That’s genuine resistance. That’s the environment pushing back.
Tier 6 — Collective and Distributed. The glimmer assignment: build something useful that requires other people to build it with you. The collective intelligence problem appears immediately. Division of labor is not collective intelligence. The thing that emerges from genuine collaborative synthesis — where the output exceeds what any individual possessed — only appears when the design requires it.
Tier 7 — Wisdom. No assignment. The horizon the other tiers point toward. The person who has built many small real things, encountered genuine failure, revised under real pressure, and carried the consequences across time — that person is developing phronesis. Not from the curriculum. From the accumulated weight of having been wrong in ways that mattered and continuing anyway.
The Theory You Need is the One You Use
The people who report they cannot keep up with new theories are not behind on the literature. They are ahead of their own application.
The gap is not between them and the frameworks. It is between the frameworks they have encountered and the real problems they have not yet used them on.
Pearl on causal inference: you don’t need to master the technical apparatus. You need to build one causal model for one real decision in your domain. Pearl becomes an instrument, not a theory to keep pace with.
Barabási on network science: you don’t need to understand scale-free networks in the abstract. You need to map one network that affects your work and notice where the hubs are. Network science becomes a lens, not a course to complete.
Dewey on experiential learning: you don’t need to read the secondary literature. You need to build one small real thing and notice what the experience taught you that reading couldn’t. Dewey becomes obvious, not academic.
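The Barabási exercise is smaller than it sounds. A sketch, with an invented who-asks-whom-for-help graph standing in for whatever network actually affects your work:

```python
from collections import Counter

# Invented edges: who asks whom for help on a small team.
# Replace with the one real network you actually live inside.
edges = [
    ("ana", "raj"), ("ben", "raj"), ("cho", "raj"), ("dee", "raj"),
    ("eli", "raj"), ("ana", "ben"), ("cho", "dee"), ("eli", "ana"),
]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The hub is where the work actually flows through, and the single
# point of failure the map makes visible.
print(degree.most_common(1))  # [('raj', 5)]
```

Degree count is the crudest possible centrality measure, but it is enough to find the hub. The noticing, not the metric, is the exercise.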
The parallel experiment reframes keeping up entirely. It is not a solution to information overload. It replaces information consumption with a building practice. The theory you use once on a real problem is worth more than fifty theories you have kept up with.
This is the instrument. Not the map. Not the taxonomy. The repeated practice of taking a framework, finding the smallest real problem it applies to, building something, and letting the environment respond.
Glimmers are the entry points that make this practice feel alive rather than obligatory. The specific detail that activates the nervous system. The 25% and 18 months. The question “what did you start to say?” The MVAL field that reveals what the student has been avoiding. The meal on the Laboratory School table.
The full Deweyan argument, stated plainly for the AI age:
You cannot understand these ideas from the outside. You have to be changed by using them. The AI tools are the most powerful instruments for building small real things that have ever existed. The barrier between inquiry and artifact has nearly disappeared. What remains is judgment — the irreducibly human capacity to decide what is worth building, recognize when the output is wrong, and make something that genuinely matters to someone.
That capacity is not developed by keeping up with theories about it.
It is developed by building things, encountering failure, revising under real conditions, and building again.
The glimmer is what keeps you building.
What Dewey Would Build
Dewey would not build a better AI tutor. He would be alarmed by AI tutors — not because of the technology but because they make intellectual outsourcing frictionless, which is precisely the opposite of what he thought education was for.
He would be in crisis mode about the democratic implications of systems that answer questions rather than deepen them, that optimize for engagement over reflection, that make the production of knowledge dependent on a few institutions whose reasoning is opaque.
What he would build is simpler and harder:
Tools that surface the right problem before offering any solution. Environments where group inquiry is the unit of learning, not individual instruction. Infrastructure that connects learners to real communities facing real problems where their work has genuine consequence. Systems that make the reasoning behind important decisions visible and contestable by citizens.
And the parallel experiment: a practice of building small real things with AI as Sherpa, mapped loosely to the tiers of irreducibly human capacity, entered through glimmers specific enough to activate genuine inquiry.
Not because it is ambitious. Because it is real.
The meal on the table. The question that reveals what you’ve been avoiding. The thing that didn’t exist yesterday and matters to someone today.
That is what education has always been for.
The machines have simply made it undeniable.


