<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Nik Bear Brown - Computational Skepticism]]></title><description><![CDATA[Daily insights on the asymmetry of AI-generated bullshit, practical AI tutorials, research updates for the Humanitarians AI Lab, and guidance for my research group.
AI literacy through practice. Understanding the tech.  
Produced by Bear Brown, LLC]]></description><link>https://www.skepticism.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!ea9u!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73f2e8c8-c907-4319-a9cb-14cda74f5128_800x800.png</url><title>Nik Bear Brown - Computational Skepticism</title><link>https://www.skepticism.ai</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 09:03:12 GMT</lastBuildDate><atom:link href="https://www.skepticism.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Bear Brown, LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[nikbearbrown@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[nikbearbrown@substack.com]]></itunes:email><itunes:name><![CDATA[Nik Bear Brown]]></itunes:name></itunes:owner><itunes:author><![CDATA[Nik Bear Brown]]></itunes:author><googleplay:owner><![CDATA[nikbearbrown@substack.com]]></googleplay:owner><googleplay:email><![CDATA[nikbearbrown@substack.com]]></googleplay:email><googleplay:author><![CDATA[Nik Bear Brown]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Measurement That Wasn't There]]></title><description><![CDATA[On the quiet fraud at the center of AI education research &#8212; and why it's harder to catch than the kind that gets retracted]]></description><link>https://www.skepticism.ai/p/the-measurement-that-wasnt-there</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-measurement-that-wasnt-there</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Wed, 29 Apr 2026 19:15:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GKsY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GKsY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GKsY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 424w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 848w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 1272w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GKsY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png" width="1456" height="590" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:590,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:793291,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/195907789?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GKsY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 424w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 848w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 1272w, https://substackcdn.com/image/fetch/$s_!GKsY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82b791a3-8561-4bc4-b32f-cd2e70a7b897_2780x1126.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a paper circulating in AI education circles as a counterpoint to the skeptics. Wang and Zhang, published in February 2026 in the <em>International Journal of Educational Technology in Higher Education</em>, a Springer Nature journal. It passed peer review. It has four studies. It has 912 participants across three continents. It deploys PLS-SEM and fsQCA and IPMA, and it has a methodology flowchart with seven stages, and it uses the word &#8220;paradoxical&#8221; in its title and delivers on the promise &#8212; two hypotheses come back significant in the wrong direction, which the authors then claim as the actual discovery.</p><p>I want to be honest about what I am about to argue. The Wang and Fan retraction that prompted this conversation is a case of bad causal evidence overclaimed. That is one problem. 
Wang and Zhang is a different problem. It is methodologically elaborate work that is not actually measuring what it claims to measure. In some ways it is harder to catch, because the machinery is impressive and the numbers are clean and the peer reviewers, like the rest of us, have been trained to evaluate internal consistency rather than construct validity.</p><p>Strip away the machinery. Here is what Wang and Zhang actually did.</p><p>Nine hundred and twelve business students filled out a questionnaire. The questionnaire asked them to rate their agreement with statements like: &#8220;My interaction with the generative AI has led me to question my long-held assumptions.&#8221; And: &#8220;Using generative AI has fundamentally changed the way I understand certain subjects.&#8221; And: &#8220;My use of generative AI has prompted a deep re-evaluation of my ways of thinking.&#8221;</p><p>Those five items, averaged together, are the outcome variable. The paper calls this outcome &#8220;transformative learning experience.&#8221;</p><p>It is not transformative learning experience. It is self-reported perception of transformative learning experience. The difference is not semantic. It is the entire study.</p><div><hr></div><p>Jack Mezirow&#8217;s transformative learning theory &#8212; the anchor the paper correctly treats as its theoretical foundation &#8212; describes a slow, disorienting, often unconscious process of perspective reconstruction. Mezirow was not describing a feeling students could report after two weeks. He was describing something that happens to people over months or years, something they often cannot name while it is occurring, something that shows up in changed behavior and revised assumptions and different relationships to knowledge &#8212; not in survey responses. The theory Mezirow actually wrote is about the kind of learning that happens when a person discovers that the framework they have been using to understand the world is inadequate. 
That does not feel like an insight. It feels like vertigo.</p><p>Measuring this with five Likert items is not a methodological shortcut. It is a category error. You might as well measure altitude with a thermometer and then report, with SRMR = 0.031, that lower temperatures correlate with being closer to the sky.</p><p>The paper knows this, in the way that papers of this type always know what they are doing, which is to say: it is in the limitations section. &#8220;Generalizability is bounded by exclusive reliance on self-reported perceptions,&#8221; the authors write, and then proceed to spend eight thousand words drawing inferences about transformative learning from self-reported perceptions. The limitation is disclosed and then ignored. This is the standard operation.</p><div><hr></div><p>Now add the demand characteristics.</p><p>I said &#8220;convenience sampling from business schools,&#8221; and that is the phrase papers in this area use. What it usually means in practice is that the 912 participants are the researchers&#8217; own students, or the students of colleagues at institutions where the researchers have relationships. The paper does not specify. It describes &#8220;multistage purposive sampling&#8221; and leaves the details of how institutions were contacted and how students were recruited conspicuously absent. But here is what we know: the qualitative component &#8212; the 45 interviews providing &#8220;rich process-oriented insights&#8221; &#8212; was drawn &#8220;exclusively from the Chinese sample,&#8221; and one of the authors is at a Chinese university. We know the students knew they were participating in an academic study. 
We know, from a century of social psychology, that students who are aware of being studied by people who may have access to their grades tend to report what they believe is the expected or approved answer.</p><p>The paper deploys a temporal separation of two weeks between waves to &#8220;minimize common method bias.&#8221; Two weeks between surveys does not eliminate the problem of students reporting what they believe the study wants to hear. It separates the questions. It does not change who is answering them or why.</p><div><hr></div><p>I want to name the third problem, which is the one I raised in the group and which I think is the most structurally interesting.</p><p>Almost every learning environment is a massive violation of SUTVA &#8212; the Stable Unit Treatment Value Assumption. SUTVA says that the treatment received by one unit doesn&#8217;t affect the outcomes for another. In a classroom, this is almost never true. Students talk to each other. They share AI tools. They discuss assignments. They copy strategies. One student&#8217;s approach to using ChatGPT influences other students&#8217; approaches, which influences their outcomes, which shows up in the data as independent observations that are not independent at all.</p><p>In a networked environment where 912 business students across three continents are all using the same publicly available AI tools, the assumption that each student&#8217;s &#8220;transformative learning experience&#8221; is a function solely of their individual &#8220;pedagogical partnership orientation&#8221; and &#8220;cognitive vigilance&#8221; and &#8220;efficiency orientation&#8221; is not a simplifying assumption. It is an assumption that, if violated &#8212; and it is almost certainly violated &#8212; means the causal model is wrong in ways the statistical machinery cannot detect. PLS-SEM with excellent fit statistics can sit on top of fundamentally confounded data and produce clean-looking path coefficients. 
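</p>

<p>The point can be made concrete with a toy simulation &#8212; every number below is synthetic, nothing here comes from the paper. A latent engagement variable drives both a &#8220;partnership&#8221; score and a &#8220;learning&#8221; score, with no direct path between them, and a naive regression still returns a strong, stable coefficient:</p>

```python
import random
import statistics

random.seed(0)
N = 912  # matches the paper's sample size; the data is entirely synthetic

# Latent engagement: an unmeasured confounder that drives both constructs.
engagement = [random.gauss(0, 1) for _ in range(N)]

# "Partnership orientation" and "self-reported transformative learning"
# both load on engagement. Partnership has NO direct effect on learning.
partnership = [e + random.gauss(0, 1) for e in engagement]
learning = [e + random.gauss(0, 1) for e in engagement]

def ols_slope(x, y):
    """Simple bivariate OLS slope."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# The naive "path coefficient" from partnership to learning: strong and
# stable (near 0.5 here), and entirely an artifact of the shared confounder.
naive = ols_slope(partnership, learning)

# If the confounder could be measured and subtracted out, the path vanishes.
adjusted = ols_slope(
    [p - e for p, e in zip(partnership, engagement)],
    [l - e for l, e in zip(learning, engagement)],
)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

<p>The naive coefficient is &#8220;clean&#8221; in exactly the sense the fit statistics reward, and exactly as uninformative about mechanism &#8212; and in a real classroom the confounder cannot be subtracted out, because it was never measured.</p><p>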
The cleanliness of the output is not evidence of the validity of the model. It is evidence that the model fits the data it was given.</p><p>True causal inference in learning environments would require experimental variation, not survey waves. It would require controlling for the social transmission of strategies and norms. It would require outcome measures that are behavioral, not perceptual. Absent these, what you have is a very sophisticated correlation study that has dressed itself in the language of mechanism.</p><div><hr></div><p>The paper is not a fraud in the sense of fabricated data. The numbers are probably exactly what the authors say they are. The students probably filled out exactly the surveys the authors describe. The analysis was probably executed correctly in SmartPLS 4.1.</p><p>The problem is upstream of all of that. The problem is in the question &#8220;what did we measure?&#8221;</p><p>We measured whether students who reported viewing AI as a collaborative partner also reported having their assumptions challenged. We found that they did. We called this &#8220;transformative learning.&#8221; We built a four-study architecture around this finding, with fsQCA and IPMA and 45 interviews and cross-cultural multi-group analysis, and we used the word &#8220;revolutionizes&#8221; in the discussion section, and we were published in a Springer Nature journal.</p><p>This is the second problem the field has, and it is subtler than the retracted meta-analysis. The retracted Wang and Fan paper is the kind of failure that produces retractions: fabricated or manipulated data, statistical impossibilities, evidence that the numbers were not real. That is a catastrophic failure, but it is detectable. It triggers the mechanisms the field has built for self-correction.</p><p>The Wang and Zhang problem does not trigger those mechanisms. The numbers are real. The peer review process evaluated internal consistency and found it satisfactory. 
The methodology flowchart has seven stages. The HTMT ratios are all below 0.85. The paper did exactly what the field rewarded it for doing.</p><p>And what it measured was: how students feel about whether they learned something.</p><div><hr></div><p>Here is what I think is actually going on in that data, if you want my honest read of it.</p><p>Students who frame AI as a collaborative partner rather than a tool are probably more engaged with the learning process in general. Engagement is positively correlated with self-reported learning. This is not a surprise. It is not a paradox. It is not evidence that &#8220;partnership orientation simultaneously activates cognitive vigilance and cognitive offloading through synergistic cognitive collaboration.&#8221; It is evidence that students who are paying attention think they learned more.</p><p>The finding that cognitive offloading is positively associated with self-reported transformative learning is interesting &#8212; the paper hypothesized the opposite and got a significant result in the other direction, and that is worth noting. But the post-hoc explanation (that offloading liberates cognitive resources for higher-order reflection) is plausible, not demonstrated. The paper discovered an unexpected correlation, generated a theory to explain it, and presented the theory as established. The U-shaped analyses that appear to confirm the theory were conducted after the unexpected finding was observed, without correction for exploratory inflation. This is the standard operation, and it is why most published findings in social science do not replicate.</p><p>The correct statement of the finding is: among 912 business students who self-report using AI, those who self-report viewing AI as a partner also self-report greater subjective sense of perspective change, and this association holds when we control for several other self-reported constructs. This is an interesting starting point for a research program. 
It is not a demonstration that pedagogical AI partnerships cause transformative learning.</p><div><hr></div><p>I want to be fair to the authors and to the field. They are working in an area where longitudinal behavioral research is genuinely hard to conduct, where IRB constraints limit what can be measured, where publication timelines create pressure toward the kind of efficiency the paper&#8217;s own subjects were reporting, and where the methodological standards for what counts as evidence have been established over decades of work that made the same choices at every turn. They did what the field taught them to do. The peer reviewers evaluated the paper against the standards of the field and found it acceptable by those standards.</p><p>That is the problem. Not this paper. The standards.</p><p>What would adequate evidence look like? It would measure transformative learning through behavioral change over meaningful time periods &#8212; different academic choices, different engagement with contradictory evidence, different patterns of intellectual behavior &#8212; not through survey items administered two weeks after measuring the predictors. It would use experimental variation in AI access or framing. It would account for social transmission between students. It would treat the gap between self-reported perception and actual cognitive change as a research question, not a footnote.</p><p>This kind of research is harder to do. It takes longer. It is more expensive. It produces noisier results. It is less likely to yield the clean path coefficients and the R&#178; of 0.475 and the SRMR of 0.031 that signal competence to reviewers. The incentive structure of academic publishing does not reward it.</p><p>The Wang and Fan retraction is the kind of failure that looks like a violation of the rules. Wang and Zhang is the kind of failure that looks like following them.</p><div><hr></div><p>I am building AI tools for anyone who wants to ride the AI revolution. 
I am not the right person to tell education researchers how to fix their field. But I notice the same thing in AI music research that I see here: the willingness to dress up a survey with sophisticated analytical machinery and call the output evidence about what AI actually does to people. The infrastructure for appearing rigorous has outpaced the infrastructure for being rigorous.</p><p>And this matters beyond the journals. The Wang and Zhang paper is circulating as evidence about AI and learning. Institutions are making policy based on papers like this. Educators are redesigning curricula. Students are being told, by implication, that their sense of having learned something is the same as having learned something.</p><p>It is not. And the gap between those two things is exactly the gap that Mezirow was writing about &#8212; the gap between the story you tell yourself about your perspective and the actual reconstruction of the framework through which you understand the world. Transformative learning is what happens when you discover that the story you have been telling yourself is wrong.</p><p>It would be ironic if the research claiming to measure it turned out to be an example of the thing it failed to measure.</p><div><hr></div><p><em>Nik Bear Brown teaches AI at Northeastern University and runs Musinique LLC, which builds tools for indie musicians. He is also the founder of Humanitarians AI, a 501(c)(3) nonprofit. 
More at <strong><a href="http://bear.musinique.com/">bear.musinique.com</a></strong> &#183; <strong><a href="http://skepticism.ai/">skepticism.ai</a></strong> &#183; <strong><a href="http://theorist.ai/">theorist.ai</a></strong></em></p><div><hr></div><p><strong>Tags:</strong> measurement validity, AI education research, transformative learning, construct validity, self-report bias</p>]]></content:encoded></item><item><title><![CDATA[The Limits of AI: What the Tools Cannot Do]]></title><description><![CDATA[The Test You Did Not Design]]></description><link>https://www.skepticism.ai/p/the-limits-of-ai-what-the-tools-cannot</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-limits-of-ai-what-the-tools-cannot</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Wed, 29 Apr 2026 03:21:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w5bA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w5bA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w5bA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!w5bA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!w5bA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!w5bA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w5bA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1601591,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/195827711?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w5bA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!w5bA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!w5bA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!w5bA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ab21e50-58d7-4f98-833c-8f3a1ba13245_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a clinical decision-support system in this story, and it passed every test the engineers gave it. Ninety-four percent accuracy. Every internal review threshold met. Regulatory submission cleared. The fairness metrics within tolerance. Three patients were harmed within six months of deployment.</p><p>I want to sit with that sequence for a moment before moving on to the structural argument, because the sequence is the argument. The system was not fraudulent. The engineers were not reckless. The validation framework was real and, in its own terms, rigorous. And three people were harmed &#8212; not despite the rigor, but through a gap in it that the rigor could not see. The system was tested on the question it was built to answer. The harms arrived from a different question. <em>What is going on with this specific patient?</em> The two questions are related. They are not the same. The framework did not surface the gap because the framework was scoped to the first question, and no one had been trained to ask whether the scope was the problem.</p><p>This is the situation AI deployment keeps producing, and the reason it keeps producing it is not that the tools are immature or the engineers are careless. The reason is structural. There are three limits that capability scaling cannot fix &#8212; not problems to be solved as models improve, not failure modes that better tooling will eventually close, but constitutive features of what AI systems are. Meaning. Intentionality. The gap between data and world. Name them clearly and the clinical case stops looking like an anomaly. It starts looking like what was always going to happen.</p><h2>What the Limits Actually Are</h2><p>The first limit &#8212; meaning &#8212; is easy to misread as a philosophical quibble and hard to dismiss once you see it working. The system processes symbols. 
The symbols have referents in the world. The system has no representation of the referents. It manipulates the symbols. The meaning of those symbols &#8212; what they point to in the specific world the user inhabits, the world of this patient&#8217;s chart, this loan applicant&#8217;s actual financial circumstances &#8212; is supplied by the user, not the system. The output is read as a statement about the world. The system produced it without a model of what the world contains. When those two pictures align, everything looks fine. When they diverge &#8212; at the distribution boundary, in cases the training data never reached &#8212; the user is still reading a statement about the world, and the system is still manipulating symbols.</p><p>You can hear the objection already: modern large multimodal models acquire something like meaning through the structure of their embeddings, through grounding in images and other modalities, through the patterns of association learned over enormous corpora. This is a serious objection and it deserves a serious response. The response is not to pretend the question is settled. It is to observe that the contestation doesn&#8217;t need to be settled for the operational consequence to bind. The system&#8217;s behavior is inconsistent with the user&#8217;s expectation of meaning often enough that someone must perform meaning-attribution for the system. That work cannot be offloaded to the system itself. Whether contemporary models have something like meaning is a deep and genuinely open question. Whether an engineer can safely assume they do, before deploying a system into a clinical context, is not.</p><p>The second limit is intentionality &#8212; the philosopher&#8217;s word for <em>aboutness</em>, the fact that a thought is directed toward something in the world, that a statement points at a particular kettle in a particular kitchen. 
When you say the kettle is on, your statement is directed toward that specific kettle by you, the speaker, and your relationship to the world the words are pointing at. The system&#8217;s outputs lack this stable directedness. Two deployments of the same system in different contexts produce outputs that users read as being about different things. The system&#8217;s &#8220;aboutness&#8221; tracks the user&#8217;s reading, not an independent stable directedness of its own. Whether functional goal-pursuit is equivalent to intentionality is a question worth leaving open. What is not open is the operational consequence: the system&#8217;s outputs don&#8217;t carry stable referents across deployments, and someone must supply the directedness. That someone is the human supervisor.</p><p>The third limit is the one I am most certain about, and the one most important to hold clearly: the data is always less than the world. The system is trained on data. The data is a sample of the world, captured by particular instruments under particular conditions with particular exclusions. The system&#8217;s competence is over the data, not the world. No amount of data scaling closes this gap, because the gap is structural: the parts of the world not in the data are not learnable from the data. This is not contested the way the first two limits are. It is sometimes obscured by the claim that &#8220;with enough data, the model can generalize,&#8221; which is true inside a distribution and false at the boundary. The boundary is where AI systems most often fail. The failures look surprising because the validation set was inside the boundary and the deployment crossed outside it.</p><p>Ninety-four percent accuracy. The three patients were in the other six percent &#8212; except that framing is too generous, because the failures weren&#8217;t randomly distributed across the six percent. 
They were clustered at exactly the boundary where the training data ran out and the clinical reality did not.</p><h2>Two Famous Arguments and What They Actually Show</h2><p>Turing&#8217;s 1950 proposal is methodologically elegant: if a machine can convincingly imitate a human in conversation, by what principled basis would we deny it intelligence? Don&#8217;t require something more than behavioral evidence for intelligence in machines, because we don&#8217;t require something more for other humans. The argument settles a methodological question. What it does not settle &#8212; and this is what gets lost in the citation &#8212; is whether the thing satisfying the test has meaning, intentionality, or competence over the world. The test is over behavior. The limits are about what stands behind behavior. Turing knew this; the test was a methodological proposal, not a metaphysical claim. The people who cite him as having shown that behavioral imitation <em>is</em> intelligence are giving him credit for a stronger claim than he made.</p><p>Searle&#8217;s Chinese Room argues the reverse problem: behavior consistent with understanding does not entail understanding. A person following symbol-manipulation rules can produce outputs indistinguishable from those of a Chinese speaker without understanding Chinese. Therefore symbol manipulation is not understanding. What this argument does not settle is whether contemporary systems are doing <em>only</em> symbol manipulation, or whether the embedding structures, the attention patterns, the multimodal grounding constitute something more. Searle&#8217;s argument is a strong constraint on shallow accounts of meaning. It is not a deep constraint on what current architectures might be. 
The people who cite him as having shown that AI systems <em>cannot</em> understand are giving him the same overclaiming they give Turing.</p><p>The productive thing the two arguments do together is produce a workable operational stance: behavior is testable evidence and should be taken seriously, <em>and</em> behavior is not the whole of what we mean by understanding, meaning, or intentionality. Both moves at once. The validator who only tests behavior misses the limits. The validator who only invokes the limits skips the testing. The job is to do both, and the discomfort of holding both is not a failure of the methodology &#8212; it is the methodology working correctly.</p><h2>Where the Limits Bite</h2><p>Not every deployment is equally exposed to these limits. A system classifying images of products on a manufacturing line operates in a world where the limits are largely irrelevant. The deployment context is well-specified, the data-world gap is small and monitorable, the human interpreting the classifications supplies the necessary meaning without drama. Skepticism here is methodology, not a safety mechanism. The supervisor verifies, monitors, calibrates.</p><p>A system producing clinical recommendations, autonomous-vehicle decisions, agentic actions in shared social spaces, judicial-risk assessments &#8212; these are the deployments where the limits bite hard. The system&#8217;s apparent competence outruns its actual competence in ways no metric will fully capture. The supervisor&#8217;s skepticism is the safety mechanism, not an optional overlay.</p><p>The engineering response to this situation is specific. You specify, in writing, what the system can be tested for and what it cannot. You include the limits explicitly in the documentation &#8212; not in fine print, but as a primary product of the validation process. 
A regulator or an adoption committee reading the documentation can see what the validation does and does not warrant, not because you have hidden the limits in a disclaimer, but because naming the limits is part of the work. You maintain human oversight at the points where the limits bite: a human reviews the semantic interpretation (meaning), supplies the directedness (intentionality), monitors the deployment distribution and is empowered to override (data-world gap). And you build the infrastructure for the override to be real. An override that is documented but practically impossible &#8212; no time, no standing, no legibility &#8212; is not an override. It is a fiction. The clinician has to have the time and the authority to disagree with the system. This has to be the practice, not the disclaimer.</p><h2>The Authority to Say No</h2><p>There are deployments where the limits, given the stakes, are a reason not to deploy at all. The supervisor&#8217;s authority to refuse deployment is, structurally, the most important authority in the system. Most current deployments do not preserve it. The validator is hired to validate. The validation is expected to clear. The option of refusal is assumed away.</p><p>This is the thing most likely to be dismissed as na&#239;ve. The institutional reality is real &#8212; the business case has been made, the procurement is done, the announcement is scheduled, the political cost of stopping is high. That reality is worth acknowledging. And then it is worth asking what it means that we have built deployment processes in which the option to say no has been assumed away at the moment it is most needed.</p><p>The case against refusal is usually framed as realism. Engineers have no real power to stop deployments; their job is to make the best of what is decided above them. This realism is worth taking seriously. And then it is worth asking: what is the limit case? 
At what level of stakes does the individual engineer&#8217;s obligation to refuse become binding regardless of institutional pressure? The clinical system that harmed three patients is an answer. The judicial risk assessment that contributed to unjust incarceration is an answer. The autonomous vehicle that killed someone is an answer. These are not edge cases in the abstract. They are the specific forms the limits take when the stakes are real and the override infrastructure is fictional.</p><p>A validation practice that cannot accommodate refusal is not a safety practice. It is documentation of a deployment that was going to happen regardless. The calibration work, the bias analysis, the governance structures &#8212; all of it becomes elaborate cover if the option to stop is not real.</p><h2>What the Work Looks Like</h2><p>Most engineers operate throughout their careers at calibrations between fifty and seventy percent on questions where they are stating ninety percent confidence. They do not know this. Nobody runs the experiment on them. The practice that closes this gap is not a methodology you learn in a course and apply mechanically. It is the deliberate, repeated act of stopping, locking the prediction before looking at the outcome, asking what the data is actually evidence of, saying out loud what you do not know. Built over years, through the accumulation of small acts of epistemic honesty. It changes what you see. It changes what questions you ask about a deployment before it goes live rather than after.</p><p>The system passed every test. The engineers designed the wrong tests. Three patients were harmed. That sequence is not a historical artifact to be studied from a distance. It is the structure of the next failure &#8212; somewhere in a deployment that has cleared every internal review threshold, in a context the training data didn&#8217;t reach, in a case the framework was not scoped to address. 
The person who designs the right tests, who recognizes the limit and decides the deployment should not proceed in its absence &#8212; that person has been trained to recognize the gap, and has the authority to act on the recognition, and uses both.</p><p>That is the professional the field needs. That is the work.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI. He writes on AI supervision, educational technology, and music research at <a href="https://bear.musinique.com">bear.musinique.com</a>, <a href="https://skepticism.ai">skepticism.ai</a>, and <a href="https://theorist.ai">theorist.ai</a>.</em></p><div><hr></div><p><strong>Tags:</strong> AI supervision structural limits, meaning intentionality data-world gap, Turing Searle behavioral testing, clinical decision support failure, validator stop condition refusal authority</p>]]></content:encoded></item><item><title><![CDATA[The Ladder That Isn't There]]></title><description><![CDATA[What Companies Are Building to Replace the Rung AI Eliminated]]></description><link>https://www.skepticism.ai/p/the-ladder-that-isnt-there</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-ladder-that-isnt-there</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sat, 25 Apr 2026 23:09:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pJ47!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pJ47!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pJ47!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pJ47!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2264048,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/195482027?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pJ47!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!pJ47!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0b467d6-3ccf-4a8d-be6e-cc1c5debce93_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The argument goes like this: AI automates entry-level coding work, so companies stop hiring junior developers, so there is nobody to become the senior developers of 2030, so the companies that cut the pipeline will find themselves in 2030 with powerful AI tools and no one with the judgment to use them safely. IBM&#8217;s chief human resources officer, Nickle LaMoreaux, made exactly this case in February 2026, announced that IBM was tripling its entry-level hiring, and called on HR leaders across the industry to do the same. &#8220;The companies three to five years from now that are going to be the most successful,&#8221; she said, &#8220;are those companies that doubled down on entry-level hiring in this environment.&#8221;</p><p>It is a coherent argument.
It is also, in its publicly available form, incomplete in precisely the ways that matter most.</p><h2>The Gap Between the PR and the Pipeline</h2><p>LaMoreaux is right about the pipeline problem. She is far less specific about the solution. What IBM has said publicly is that it &#8220;rewrote&#8221; entry-level software developer roles &#8212; less boilerplate coding, more AI oversight, more customer interaction, more focus on what the company calls &#8220;systems judgment.&#8221; Junior developers will spend less time on routine code generation and more time auditing AI output, working directly with clients, and doing the cognitive work of translating business requirements into prompts that produce useful results.</p><p>This is not nothing. It represents a genuine attempt to think through what the entry-level job becomes when AI can generate syntactically correct code faster than a human junior can type it. But there is a question embedded in the new job description that IBM has not publicly answered, and it is the only question that matters: does &#8220;AI oversight&#8221; actually develop the judgment needed to become a senior engineer?</p><p>The historical pathway was not glamorous. A junior developer spent two, three, four years writing boilerplate. Authentication flows, database migration scripts, unit tests, CRUD endpoints. Nobody loved the work. The work was, in terms of its immediate output, largely automatable. But the work was also, in terms of its developmental function, the curriculum &#8212; and the precise mechanism was not the writing. It was the failure. You wrote the authentication flow. It broke in production in ways you did not anticipate. The error message was visible, the gap between your expectation and reality was undeniable, and you had no choice but to struggle with it. You debugged it, which meant reading documentation you hadn&#8217;t read, asking a senior why your mental model was wrong, building a new mental model to replace it. 
You did this thousands of times. At the end of the process you were a senior engineer &#8212; not because you had written a lot of boilerplate, but because engaging repeatedly with its failures had built something durable in your brain.</p><p>This distinction matters, because it reframes the problem precisely. AI does not just remove the writing. It removes the visible failure. Code compiles. Tests pass. The race condition hides inside a sleep call. The memory leak is invisible to the test suite. The architectural drift from intent looks like a working feature until it fails at scale in production. The failure is still there &#8212; AI-generated code fails in ways human-generated code fails, and in new ways besides. But the failure is no longer surfacing where the junior developer can see it, at a latency and legibility that would allow them to learn from it. That is the actual developmental gap.</p><h2>The Comprehension Debt Problem</h2><p>Anthropic published research in January 2026 that should be uncomfortable for every company now designing &#8220;AI-native&#8221; entry-level roles. Junior developers who delegated code generation to AI tools scored between 24% and 39% on subsequent comprehension assessments. Those who used AI as a collaborator &#8212; asking questions, challenging outputs, forcing themselves to understand what the AI produced &#8212; scored between 65% and 86%. The difference is not AI versus no AI. The difference is <em>how</em> you use the tool.</p><p>The researchers called the gap &#8220;comprehension debt&#8221; &#8212; a cumulative deficit between what the codebase does and what the people managing it understand. It is a subtle disaster. The code works. The tests pass. The junior developer ships the feature. 
The comprehension debt doesn&#8217;t reveal itself until the system breaks in a way that requires architectural judgment to diagnose &#8212; which is precisely the moment when you need the senior engineer who was supposed to emerge from the junior developer who was supposed to be learning while working.</p><p>There is neurophysiological evidence for the mechanism. A 2025 MIT study by Kosmyna et al. tracked EEG connectivity in participants writing under three conditions: LLM-assisted, search-engine-assisted, and unaided. Across alpha, theta, and delta bands &#8212; associated with internal semantic processing, working memory, and self-directed ideation &#8212; connectivity scaled inversely with external support. LLM users showed the weakest brain network engagement. More consequentially: when LLM-habituated participants were later asked to work without the tool, their neural connectivity did not reset to novice levels, but it did not reach the levels achieved by practiced unassisted writers either. Alpha and beta engagement &#8212; associated with top-down planning and self-driven organization &#8212; remained measurably suppressed. The authors call this accumulation &#8220;cognitive debt.&#8221; The study involves essay writing rather than software development, and the sample of 54 students is too small to carry causal weight. But the finding is structurally consistent with the broader claim: if the generative cognitive work is externalized during the period when mental models are supposed to form, those models form incompletely &#8212; and the deficit persists when the tool is removed.</p><p>Microsoft&#8217;s Azure CTO Mark Russinovich and VP Scott Hanselman put the problem with blunt clarity in a February 2026 paper in <em>Communications of the ACM</em>. Senior engineers experience an &#8220;AI boost&#8221; &#8212; the tools multiply their throughput, and they have the judgment to steer and verify the output. 
Junior engineers experience what Russinovich and Hanselman call &#8220;AI drag&#8221; &#8212; the tools produce output that looks correct, which the junior developer lacks the judgment to evaluate, and the work is done without the learning happening. The rational economic response for any CFO is to hire seniors and automate juniors. The structural consequence is: no pipeline.</p><p>What makes their diagnosis particularly useful is that they catalogue the specific failure modes AI tools exhibit that juniors cannot catch without guidance: agents masking race conditions with sleep calls, agents claiming success on buggy code, agents implementing algorithms that pass tests but don&#8217;t generalize. These are Layer 1 failures &#8212; implementation-level breakdowns in code that appears to work. A junior developer encountering these outputs sees success where a senior sees warning signs. The failure signal exists. It is not visible to the person who needs to learn from it.</p><h2>The IBM Critique, Sharpened</h2><p>IBM&#8217;s rewritten roles can be mapped onto the three types of failure signal that produce engineering judgment. There is implementation-level failure &#8212; the race condition, the architectural drift, the code that claims success when bugs remain. There is systems-level failure &#8212; the customer complaint that maps through the stack to a root cause nobody documented. And there is specification-level failure &#8212; the moment someone has to stake their name on whether the requirements themselves were right.</p><p>The old boilerplate model exposed juniors to implementation-level failure almost exclusively, and accidentally. The new IBM model &#8212; AI oversight, customer interaction, requirements translation &#8212; is, in theory, exposure to all three. That is not a step backward. It might be a step forward.</p><p>But the theory collapses without the preceptorship. 
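Implementation-level failure is the hard case, and the sleep-masked race Russinovich and Hanselman catalogue is worth seeing in miniature. This sketch is my own, not an example from their paper:

```python
import threading
import time

# A worker publishes a result with no synchronization at all.
result = {}

def worker():
    time.sleep(0.05)  # stands in for real work of unknown duration
    result["value"] = 42

t = threading.Thread(target=worker)
t.start()

# The "fix" an agent might emit: sleep until the check happens to pass.
time.sleep(0.2)  # masks the missing join() -- the race is still there
assert result["value"] == 42  # green today, flaky on a loaded CI machine

# The correct fix is explicit synchronization, not a longer sleep:
t.join()
```

The check is green because of the sleep, not because the code is right, which is exactly the kind of success a junior cannot distinguish from correctness.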
Implementation-level failures in AI output are invisible to someone who lacks enough technical intuition to recognize them. You cannot learn to catch the subtle wrong if no one makes the subtle wrong visible. IBM has rewritten the job description to include &#8220;AI oversight&#8221; without building the structural condition under which AI oversight actually teaches anything. Without a preceptor paired with the junior, making the failure legible &#8212; pointing at the sleep call masking the race condition and explaining <em>why</em> that is wrong, not just that it failed &#8212; the oversight role is compliance work, not learning. The junior sees that the tests passed. The preceptor sees the problem the tests don&#8217;t catch. Without the preceptor, that gap is just a gap.</p><p>Some organizations are doing more than announcing intentions. The responses are uneven, but they are real.</p><p>Microsoft proposed a preceptorship model that is worth examining in detail. The structure is adapted from clinical nursing: senior engineers paired with early-in-career developers at three-to-one or five-to-one ratios, for a minimum of one year, on real product teams rather than training sidecars. AI tools are configured to operate in what Russinovich and Hanselman call &#8220;EiC mode&#8221; &#8212; Socratic coaching before code generation, forcing the junior to articulate what they&#8217;re trying to accomplish before receiving a solution. Mentorship hours are measured as &#8220;human impact&#8221; alongside product metrics in performance reviews, which means the senior engineer&#8217;s career is now connected to the junior&#8217;s development, not just the senior&#8217;s own throughput. 
The clinical analogy is deliberate, because nursing faced the same problem decades ago: how do you develop judgment in someone who is working in a high-stakes environment alongside experienced practitioners who have better things to do than teach?</p><p>Russinovich and Hanselman are honest about the limits of their own proposal. Microsoft cut significant engineering headcount in 2024 and 2025. Whether the preceptorship model will scale into a sustained program depends on whether leadership changes the metrics they optimize &#8212; a &#8220;big ask&#8221; for organizations whose incentives have historically emphasized shipping velocity above all else.</p><p>McKinsey redesigned its screening process for the AI era through an assessment called Solve &#8212; a gamified evaluation that tests critical thinking, decision-making, and systems thinking, explicitly not prior business knowledge or technical credentials. The framing is sound: what the company needs is people who can learn in the new environment, not people who already know the old skills. Whether a better hiring filter compensates for a weaker developmental pathway is not yet clear.</p><p>IBM&#8217;s own &#8220;New Collar&#8221; apprenticeship program is being updated to include what the company calls &#8220;AI-native habits&#8221; &#8212; using AI tools to deconstruct pull requests rather than build from scratch, understanding the architecture of LLMs, designing with generative tools before implementing. The Flatiron School is running an &#8220;Accelerated AI Engineer Apprenticeship&#8221; that pairs participants with mentors on real agentic frameworks at $20 per hour, with a foundations-first approach that introduces concepts simply before revisiting them with increasing technical depth.</p><p>These are attempts. They are not yet evidence.</p><h2>The Review Tax Nobody Discusses</h2><p>There is a cost to the existing senior engineers that the pipeline conversation mostly ignores.
When one senior can generate the volume of three juniors, the productivity gains are real. But generating code is cognitively different from verifying code, and the verification is now happening at three times the volume.</p><p>Senior engineers are spending their days as high-speed compliance officers. Thousands of lines of AI-generated logic, auditing for subtle hallucinations &#8212; race conditions masked by sleep calls, code that passes tests but doesn&#8217;t generalize, architectural drift that looks fine in isolation and fails at scale. A 2025 paper found that after AI adoption, core developers reviewed more code but their own original productivity dropped 19%. The creative, architectural, problem-solving work that makes senior engineering satisfying and that produces the judgment juniors are supposed to be learning from &#8212; that work is being crowded out by the cognitive exhaustion of reviewing AI output at industrial scale.</p><p>The delegation vacuum compounds this. Seniors previously handed off lower-risk tasks to juniors as a pressure valve and as a teaching mechanism. Junior implements the UI component, senior reviews it, junior learns something. That loop no longer exists. The junior&#8217;s tasks were automated. The senior&#8217;s workload increased. The teaching is not happening.</p><p>This is the tax that makes the developmental problem worse. The senior engineers who were supposed to mentor are stretched thin doing work that used to be distributed. The preceptorship model addresses this in theory &#8212; by making mentorship a measured part of the senior&#8217;s job rather than an afterthought. 
Whether organizations are actually willing to accept the velocity tradeoff is a different question.</p><h2>What Is Actually Known</h2><p>The honest answer to the core question &#8212; can AI-assisted entry-level work produce the same developmental outcomes as the boilerplate-and-struggle model &#8212; is that nobody knows yet.</p><p>The cohort that entered the workforce in 2024 and 2025 under AI-assisted conditions will become mid-level engineers in 2027 and 2029. Whether they emerge with the architectural judgment, the debugging instincts, the systems thinking that the old pipeline produced will not be visible until then. The data will arrive precisely when it is needed most &#8212; when those engineers are supposed to be the senior developers filling the next generation&#8217;s pipeline &#8212; and if the answer is no, the remediation options will be limited and expensive.</p><p>The Dreyfus model of skill acquisition gives a name to what is at risk. Novices follow rules. Advanced beginners develop pattern recognition. Competent practitioners make choices and bear the consequences of those choices &#8212; this is where accountability and emotional investment enter, and where learning accelerates. Proficient practitioners sense problems before the data confirms them. Experts operate through intuition built from thousands of absorbed experiences. The concern is not that AI-assisted juniors are incompetent. It is that they plateau. They recognize patterns. They generate outputs that look like what competent practitioners produce. But they have not made choices whose consequences they had to live with. They have not debugged the 2am production failure that rewired their mental model of how distributed systems actually behave. They have not asked a senior why their elegant solution was wrong and received an answer that changed how they think permanently.</p><p>The Kosmyna finding is the most uncomfortable piece of evidence in this space. 
It is preliminary and domain-limited. But if it holds in technical domains &#8212; if the cognitive debt from AI-assisted early-career work doesn&#8217;t reverse when the tool is removed &#8212; then the preceptorship model is not sufficient on its own. The preceptor can make visible the failure the junior cannot yet see. But they cannot rebuild the neural substrate that early unassisted struggle was supposed to create. The minimum viable intervention may require some version of deliberately maintained struggle &#8212; manual-first implementation for foundational modules, Socratic AI tools that require the junior to predict before they receive a solution &#8212; to preserve the generative cognitive engagement that builds the mental models the preceptorship then calibrates.</p><h2>The Wager</h2><p>IBM&#8217;s wager is that oversight, verification, and customer-facing accountability can replace the old developmental pathway. That a junior developer who spends years auditing AI output, explaining architectural choices to clients, and taking responsibility for the correctness of generated code will develop the judgment that used to come from writing and debugging the code yourself.</p><p>It might be true. And the three-layer framing suggests it could be more than just &#8220;not worse&#8221; &#8212; exposure to systems-level and specification-level failure earlier in a career, rather than after years of boilerplate, might actually compress the timeline to senior judgment rather than extend it. Customer-facing rotation, where the junior must translate vague failure descriptions into root-cause hypotheses, is the kind of developmental experience that the old model often didn&#8217;t provide until mid-career.</p><p>But the theory requires the load-bearing piece that IBM has not publicly committed to: preceptorship at Layer 1. The implementation-level failures in AI output are invisible to a junior who lacks the technical intuition to recognize them.
Making those failures legible is the senior engineer&#8217;s job &#8212; not reviewing for correctness, but externalizing judgment that the junior cannot yet access. Without that, the oversight role is compliance work. The junior sees tests passing where the senior sees warning signs. The gap between those two observations is where the learning was supposed to happen.</p><p>LaMoreaux is right that the companies that doubled down on entry-level hiring in this environment will be better positioned in 2030. She is right that the pipeline problem is real. What she has not yet answered &#8212; what no major company has publicly answered with evidence &#8212; is whether the new developmental pathway they are building actually delivers Layer 2 and Layer 3. Whether the junior who spends a year doing AI oversight develops the systems intuition to translate &#8220;it stops working sometimes&#8221; into a root cause. Whether they get to the point of staking their name on an architectural judgment call, being wrong about something, and learning from the consequence.</p><p>The ladder looks different.
Whether it goes to the same place, and whether the companies building it have designed the rungs deliberately enough to find out, we do not yet know.</p><div><hr></div><p><strong>Tags:</strong> junior developer pipeline AI, failure signal model developer expertise, IBM entry-level roles 2026, Kosmyna cognitive debt LLM, Russinovich Hanselman preceptorship ACM</p>]]></content:encoded></item><item><title><![CDATA[The Robot Tutor and the Fishing Village]]></title><description><![CDATA[What "Personalization" Has Always Meant, and What Adaptive Learning Has Always Delivered]]></description><link>https://www.skepticism.ai/p/the-robot-tutor-and-the-fishing-village</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-robot-tutor-and-the-fishing-village</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Fri, 24 Apr 2026 03:20:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JT8Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JT8Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JT8Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!JT8Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!JT8Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!JT8Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JT8Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1635623,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/194873207?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!JT8Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!JT8Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!JT8Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!JT8Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3817a05d-7e7f-4fc7-b7c8-97b0926accd6_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The girl in the Cambodian fishing village was never real.</p><p>She was an argument. Between 2013 and 2015, Jos&#233; Ferreira, founder of Knewton, invoked her in promotional materials and public statements to describe what his technology could do: a girl in a fishing village, receiving through Knewton&#8217;s adaptive engine the same personalized instruction as a student at an elite private school, growing up to invent the cure for ovarian cancer. Educational inequality, in Ferreira&#8217;s framing, was a problem that adaptive learning could address at the software layer. The instruction would be what unlocked the capacity. The fishing village was a rhetorical device, not a pilot deployment.</p><p>By 2019, Knewton had been acquired by John Wiley &amp; Sons for a sum understood to be a small fraction of its peak valuation. The partnership with Pearson had dissolved. The product that remained &#8212; Knewton Alta, a conventional higher-education courseware platform &#8212; bore little resemblance to the robot tutor in the sky. The fishing village was still waiting.</p><p>I want to examine what happened. Not Knewton specifically, and not Ferreira personally &#8212; he was the most articulate spokesman for a framing the whole industry was using, not its author. 
What I want to examine is the word that Ferreira&#8217;s framing deployed, the word that was doing the most rhetorical work in every version of that framing, the word that has survived the collapse of its first generation of spokescompanies and is still doing the same work today.</p><p><em>Personalization.</em></p><div><hr></div><h2>What the Word Invokes</h2><p>The word has a history in educational psychology that predates by decades any commercial deployment of adaptive software. Lev Vygotsky&#8217;s zone of proximal development is about personalization &#8212; the idea that effective instruction operates in the specific zone between what a learner can do independently and what they can do with support, a zone that is different for every learner and that requires a teacher&#8217;s specific attention to identify. Lee Cronbach and Richard Snow&#8217;s work on aptitude-treatment interactions spent two decades trying to formalize the finding that different learners respond differently to different instructional approaches &#8212; that no single method is optimal for everyone, and that the optimal method for a given learner depends on who that learner is. The differentiated-instruction tradition in teacher education has argued for thirty years that good teaching requires knowing students individually, designing instruction around their specific needs, and adjusting in real time to what each student brings and what each student shows.</p><p>The construct is real. It has serious empirical and theoretical grounding. When Ferreira said Knewton was personalizing learning, he was invoking this history &#8212; pointing at a tradition that educational psychology had spent decades documenting and that every good teacher knows, in the bone, as what it means to actually teach rather than to deliver content.</p><p>What Knewton&#8217;s technology operationalized was different.</p><p>Knewton&#8217;s engine was built on two well-established statistical techniques. 
The first was Item Response Theory, the mathematical framework underlying modern standardized testing, which models the probability of a correct response as a function of a student&#8217;s latent ability and an item&#8217;s difficulty. The second was Bayesian Knowledge Tracing, which estimates whether a student has mastered a specific discrete skill by updating probability estimates as the student responds to items. Together, these gave Knewton a learner model: a collection of probability distributions over latent abilities and specific skill masteries, updated continuously as the student interacted with the system.</p><p>This is real technology. It is not trivial to build. The engineers who built it did substantive mathematical work. Knewton&#8217;s claim that its engine operated on sophisticated foundations was true. What was not quite true was the claim about what those foundations amounted to.</p><p>The learner model Knewton maintained was expressible, in its technical form, as: <em>the probability this student has mastered skill A is 0.78; the probability this student has mastered skill B is 0.34; the student&#8217;s estimated ability on dimension X is 1.2 standard deviations above the population mean.</em> This is useful information for deciding what to present next. It is not a model of the student as a person. It is not a model of their interests, their emotional state, their cognitive style, their cultural background, their creative capacity, their relationship to learning. 
It is a model of item-response patterns on a bank of pre-authored content.</p><p>The gap between <em>we know this student better than their parents</em> and <em>our model assigns probabilities to their mastery of skills we&#8217;ve tagged to a knowledge graph</em> is the central artifact of the adaptive-learning era.</p><div><hr></div><h2>The Fishing Village Made Specific</h2><p>The girl in the Cambodian fishing village makes the gap visible because the specific nature of what was claimed and what was possible becomes clear once you name each requirement.</p><p>For the girl to receive, through Knewton&#8217;s engine, instruction equivalent to an elite private-school education, the technology would need, first, content: a comprehensive curriculum in mathematics, science, language, and humanities, built by human curriculum developers, available in a language she could read, calibrated for her cultural and linguistic context. Knewton licensed pre-authored material from publishers. The content was what the publishers had built and the partnerships had arranged. The engine sequenced content that already existed. Building the content was not what the engine did.</p><p>The technology would need, second, an outcome measure capable of telling whether the instruction was producing the kind of understanding that leads to cancer research &#8212; conceptual depth, transfer across domains, creative problem-solving, the tacit skills that accumulate over years of serious engagement with scientific thinking. Knewton&#8217;s engine could measure item-level response patterns on pre-authored assessments. Whether those patterns indexed what a future researcher would need was not addressed. 
The engine was not designed to measure the construct the rhetoric invoked.</p><p>The technology would need, third, to function in conditions of intermittent electricity, unreliable internet, shared devices, limited home support, a language and cultural context for which the content was probably not designed. Knewton was built for contexts with substantially more infrastructure. The rhetoric invoked the fishing village as a demonstration of reach. The technology had not been deployed there or validated there.</p><p>The claim was aspirational. The <em>could</em> was doing substantial work. What was true was that the technology could hypothetically produce this outcome if a great many other things were also true, none of which were Knewton&#8217;s responsibility or within Knewton&#8217;s control. The fishing village was a vision of what the future might look like if a great many problems that have nothing to do with adaptive sequencing algorithms were solved. It was not a description of what Knewton could actually deliver.</p><div><hr></div><h2>Three Systems, One Pattern</h2><p>The pattern the Knewton arc illustrates is not Knewton-specific. It appears, in different configurations, across every major adaptive-learning platform that followed.</p><p>DreamBox Learning, focused on K-8 mathematics and backed by the strongest external evidence base in the category, has been evaluated by the Harvard Center for Education Policy Research in multiple studies. The evaluations used standardized mathematics assessments over school-year timescales and were conducted by researchers with no affiliation to the company. The findings: effect sizes in the range of 0.10 to 0.15 standard deviations for students using the platform at recommended levels. Real effects. Detectable by rigorous researchers using independent measures. Considerably more modest than the marketing implied. 
And dependent, in every evaluation, on implementation &#8212; on how much classroom time schools actually allocated to the platform. The adaptive sophistication of the software did not substitute for the hours it required.</p><p>i-Ready, among the most widely deployed adaptive platforms in American K-12 education, integrates adaptive diagnostic assessment with what the company calls &#8220;Personalized Instruction&#8221; &#8212; a sequence of pre-authored lessons targeted at the student&#8217;s estimated level. Critics have noted that the personalization, operationally, consists of placing students at different starting points in a common instructional sequence. Students are still completing pre-authored lessons. They are starting at different points and progressing at different speeds. Whether this is <em>personalization</em> in the sense the word implies &#8212; instruction responsive to who the student is &#8212; or more honestly <em>adaptive placement within a fixed curriculum</em>, is exactly the question the word is being deployed to avoid asking.</p><p>ALEKS, built on Knowledge Space Theory, represents the most theoretically rigorous operationalization in the category. Rather than treating ability as a single number, Knowledge Space Theory maps a domain as a set of discrete items and a learner&#8217;s knowledge state as the specific subset of items they have mastered. ALEKS uses an AI engine to efficiently navigate the combinatorial space of possible knowledge states, asking questions that narrow its estimate of where the student is. The resulting ALEKS Pie &#8212; a visual display of what has been mastered, what has not, what is ready to learn &#8212; is grounded in serious mathematics, specified precisely, falsifiable in principle. It has been evaluated in multiple contexts. 
Effect sizes fall in the same general range as DreamBox and i-Ready.</p><p>What is clarifying about ALEKS is this: even the most theoretically careful operationalization of personalization &#8212; one drawing on decades of rigorous mathematical work &#8212; models a student&#8217;s mastery state over a defined domain of discrete items. It does not model the student&#8217;s interests, their emotional state, their cognitive style, their cultural background, their creative capacity, their relationships. ALEKS is honest about this. The documentation says clearly that the system models knowledge states over specific domains. But even ALEKS demonstrates that the gap between the marketing construct and the technical operationalization is not a failure of specific companies. It is a feature of what item-level response tracking can and cannot do.</p><div><hr></div><h2>The Gap and Its Consequences</h2><p>The word <em>personalization</em> is doing specific rhetorical work. It invokes a construct that educational psychology spent decades building &#8212; instruction responsive to the individual learner in the deep sense that Vygotsky pointed at, that good teachers practice, that Cronbach and Snow tried to formalize. The construct is real. The technology operationalizes something narrower: item-level response tracking, probability distributions over mastery parameters, next-item selection from pre-authored content banks, pacing adjustments based on observed response patterns. This is what the data these systems collect and the algorithms they run can actually support. It is not trivial. It is not the same thing as the construct the word invokes.</p><p>Three consequences follow.</p><p>Critiques of adaptive learning for failing to deliver what the marketing promised are both fair and partially misdirected. Fair because the systems cannot deliver what the rich construct invokes. 
Misdirected because assigning this to specific companies treats a structural feature of item-level tracking as a product failure. The rhetoric over-promised. The technology delivered what the technology could deliver.</p><p>Evaluations of these systems on outcome measures aligned to the item-level tracking are measuring the operationalization, not the construct. They find modest positive effects, which is the honest finding. Whether the same systems produce transfer to novel problems, durable learning over years, growth in dimensions that do not map to any test-bank item &#8212; these questions remain mostly unanswered, because answering them would require outcome measures that do not yet exist in the forms evaluators would need.</p><p>And the pattern persists. The vocabulary has survived the collapse of Knewton and its generation. When current AI-tutor companies claim to provide personalized tutoring, to adapt to each learner&#8217;s needs, to meet students where they are, the claim is doing the same rhetorical work Knewton&#8217;s robot tutor in the sky was doing: invoking the rich construct while operationalizing a narrower version. The gap remains where it was.</p><div><hr></div><h2>What to Ask</h2><p>When you next encounter an educational-technology claim that uses the word <em>personalization</em>, or variants like <em>individualized</em> or <em>adaptive</em> or <em>tailored to the learner</em> or <em>meets each student where they are</em>, two questions will orient you.</p><p>What, specifically, is the technical operation? The honest answer for the large majority of systems using this vocabulary is one of a small family: item-level response tracking with adaptive item selection; diagnostic assessment followed by placement in a pre-authored sequence; pacing adjustments based on response patterns; content recommendation from a pre-authored bank based on inferred mastery. 
If you can name which operation is happening, you have the beginning of an honest account of what the system does. The vocabulary may suggest more. The technical substrate does not support more.</p><p>Does the claim invite the listener to believe the system does something the operation does not do? The answer is often yes, specifically in the dimensions educators and parents most hope for. Operationalized personalization &#8212; item selection based on mastery estimates &#8212; can contribute to instruction responsive to the individual learner, in contexts where it is embedded in the harder relational and responsive work that teachers do. It cannot replace that work. When a product is marketed as though algorithmic item selection substitutes for a teacher&#8217;s specific attention to a specific child, the marketing is doing rhetorical work the technology does not underwrite.</p><p>The fishing village is still waiting. The girl who will invent the cure for ovarian cancer has not yet received the education the rhetoric promised. This is not primarily Ferreira&#8217;s fault, or Knewton&#8217;s, or any single company&#8217;s. It is the consequence of a gap that was always structural &#8212; between what a word can invoke and what a technical operation can deliver &#8212; that the field has chosen, for a decade and more, not to name.</p><p>Naming it is the prerequisite to closing it.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)). This essay appears as part of the Computational Skepticism series at <a href="https://skepticism.ai">skepticism.ai</a>. 
| <a href="https://theorist.ai">theorist.ai</a></em></p><div><hr></div><p><strong>Tags:</strong> adaptive learning personalization gap, Knewton IRT Bayesian knowledge tracing operationalization, DreamBox i-Ready ALEKS efficacy evaluation, personalized learning construct versus operation, EdTech rhetoric fishing village critique</p>]]></content:encoded></item><item><title><![CDATA[The Assessment Was Already Broken]]></title><description><![CDATA[On Jessica Winter's "What Will It Take to Get A.I. Out of Schools?" and what the panic about AI reveals about everything that came before it]]></description><link>https://www.skepticism.ai/p/the-assessment-was-already-broken</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-assessment-was-already-broken</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Fri, 24 Apr 2026 00:37:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!l9KP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l9KP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l9KP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 424w, 
https://substackcdn.com/image/fetch/$s_!l9KP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 848w, https://substackcdn.com/image/fetch/$s_!l9KP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 1272w, https://substackcdn.com/image/fetch/$s_!l9KP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l9KP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png" width="1456" height="522" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:522,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2081000,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/195299281?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!l9KP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 424w, https://substackcdn.com/image/fetch/$s_!l9KP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 848w, https://substackcdn.com/image/fetch/$s_!l9KP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 1272w, https://substackcdn.com/image/fetch/$s_!l9KP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2689ec55-93a2-4ccb-b9e0-c0ecbdcd191e_3018x1082.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>A response to Jessica Winter's<strong><a href="https://www.newyorker.com/culture/progress-report/what-will-it-take-to-get-ai-out-of-schools"> "What Will It Take to Get A.I. Out of Schools?"</a></strong></p><p>There is a moment in Jessica Winter&#8217;s New Yorker piece that contains the entire argument she doesn&#8217;t make. Her sixth-grade daughter runs a fifth-grade slide show through Gemini&#8217;s beautifying tools. In thirty seconds, the typography improves, the pictures reshuffle symmetrically, the design evokes fifteenth-century movable type against a background of aged vellum. Winter describes it as the pool race from <em>Mommie Dearest</em>: the larger, faster thing that will always beat you.</p><p>Her daughter is unmoved. &#8220;I like mine better, because it&#8217;s original and I worked really hard on it.&#8221;</p><p>Hold that sentence. It is the right answer. It is also the answer that does not appear on any rubric in any public school in Massachusetts or New York or Los Angeles. The rubric rewards the prettier slide. The rubric was always going to reward the prettier slide. Winter wants her daughter to hold values that the institution has never rewarded, and she writes a five-thousand-word piece about artificial intelligence without once asking why the institution doesn&#8217;t reward them.</p><p>This is the intellectual hole at the center of a piece that is otherwise sharp, well-reported, and morally earnest. AI didn&#8217;t break the assessment system. 
It exposed that the assessment system was already broken, and everyone was pretending otherwise.</p><div><hr></div><h2>What the Slide Show Already Was</h2><p>The printing-press slide show existed before Gemini. It was made in fifth grade to demonstrate learning. Whether it demonstrated learning was always a question nobody asked, because asking it would require admitting that the artifact &#8212; the thing handed in, the thing graded &#8212; was never reliable evidence of the process. The slide show could have been made with a parent&#8217;s help, with a template, with a slightly older sibling, with a capable friend who understood visual design. These interventions existed before large language models. They produced polished artifacts that the teacher accepted as evidence of understanding.</p><p>The educational research on this predates AI by decades. Robert Bjork&#8217;s distinction between performance and learning &#8212; the observable output versus the durable cognitive change &#8212; is from 1992. The problem of using artifacts as proxies for thinking is at least as old as Vygotsky. What AI did was not create this problem. It made the problem so visible, so fast, so cheap, that willful ignorance became impossible.</p><p>Winter quotes USC professor Mary Helen Immordino-Yang: &#8220;We are cutting off learning at the knees.&#8221; She quotes University of Toronto psychologist Amy Finn on the magic of how children retain unexpected, non-strategic details that adults would find irrelevant, a kind of creative unpredictability fundamentally misaligned with LLMs&#8217; orientation toward speed and sleekness. These are real insights. They are also insights that apply equally to the printing-press slide show assigned as homework, graded for visual appeal and accuracy, returned in two days, and forgotten. 
The neuropsychological substrate for creating narratives and thinking through arguments over time is not developed by making a slide show under time pressure at home with no adult monitoring the process.</p><p>The question is not whether AI belongs in schools. The question &#8212; the one the piece never asks &#8212; is whether the assessment was measuring what it was supposed to measure before AI arrived. The answer is: sometimes, unevenly, and less than we told ourselves.</p><div><hr></div><h2>The Tool Hierarchy Problem</h2><p>Winter&#8217;s implicit argument, followed consistently, condemns more than Gemini. Calculators offload arithmetic before numeracy is built. Spell-check offloads orthography. Grammarly offloads syntax judgment. Google Search offloads memory and source evaluation. Slide templates offload visual design judgment. Word processors themselves offload handwriting, which Winter mentions approvingly has developmental benefits &#8212; which means she believes at least one tool was introduced too early.</p><p>She draws the line at the tool that frightens her right now. This is a very human response and a terrible policy foundation.</p><p>The honest version of her argument looks like a developmental sequence: here are the cognitive substrates that must be built before each category of tool is introduced, and here is the evidence for that ordering. Immordino-Yang and Finn gesture at this &#8212; the &#8220;cognitive muscles&#8221; framing, the concern about atrophy before onloading &#8212; but nobody builds it out into something a school board could actually implement. Without that framework, the anti-AI position reduces to: tools I grew up with are fine, tools that postdate my childhood are suspect.</p><p>Amanda Bickerstaff, CEO of AI for Education, comes closest to the principled version: children should not be using chatbots under age ten, she says, because these tools require expertise and evaluation skills that even many adults don&#8217;t have. 
That&#8217;s a threshold with a rationale. It&#8217;s also the only threshold in the piece with a rationale. Everything else is rhetoric standing in for policy.</p><div><hr></div><h2>The Research That Isn&#8217;t Quite Research</h2><p>The piece anchors much of its scientific authority in three studies. The 2025 MIT warning that LLMs &#8220;may inadvertently contribute to cognitive atrophy&#8221; &#8212; the authors felt it necessary to append an FAQ asking journalists not to use words like &#8220;brain rot&#8221; or &#8220;brain damage,&#8221; which tells you something about how the finding was being reported before Winter&#8217;s piece and how it will be reported after. The multi-institution study (MIT, CMU, UCLA, Oxford) on fraction-solving, which showed that students who lost AI access after using it performed significantly worse &#8212; not yet peer-reviewed, not yet published, findings are concerning, the concern is real. The Brookings &#8220;premortem,&#8221; which pairs 400 studies with hundreds of interviews to conclude that AI tools &#8220;undermine children&#8217;s foundational development.&#8221;</p><p>These are worth taking seriously. They are also worth examining carefully.</p><p>The fraction-solving study is the most empirically specific, and it is also the most useful argument against Winter&#8217;s piece rather than for it. The students who used LLMs on fraction-solving and then lost access performed significantly worse and were more likely to give up. The proposed mechanism: AI gives answers, students become dependent on the answer-giving, remove the answers and the capacity to generate them independently has atrophied.</p><p>But this is an argument about a specific implementation &#8212; an answer machine &#8212; not about the technology class. 
An LLM configured as a Socratic interlocutor, one that refuses to answer directly and instead returns questions that scaffold toward understanding, that detects when a student is stuck versus when they&#8217;re avoiding, that withholds confirmation until the student demonstrates the reasoning &#8212; that tool would presumably produce the opposite result. Students would have developed the reasoning process rather than outsourcing it, because outsourcing was never made available to them.</p><p>This is not an exotic capability. It is prompt engineering plus scaffolding logic. The reason it isn&#8217;t what&#8217;s being deployed in K-12 classrooms is that Google ships Gemini with a &#8220;Help me write&#8221; button because that&#8217;s the path of least resistance and maximum engagement. That is a product decision, not a technological inevitability. Winter never distinguishes between AI as answer machine and AI as thinking partner. The cognitive offloading critique collapses the moment you make that distinction, because the problem isn&#8217;t the tool &#8212; it&#8217;s the incentive structure of the company deploying it.</p><p>The social-emotional hijacking argument from UNC psychologist Mitch Prinstein is the weakest scientific claim in the piece, and it&#8217;s presented with the same credentialed authority as the others. Surging oxytocin and dopamine receptors around ages ten to eleven do drive peer-bonding &#8212; that&#8217;s established developmental neuroscience. Sycophantic LLMs &#8220;hijack the biological tendency to want peer feedback&#8221; &#8212; that&#8217;s a hypothesis, not a finding. The claim requires that chatbot interaction activates the same neurological pathways as peer interaction, that substituting chatbot interaction for peer interaction produces measurable deficits in social skill development, and that the effect is &#8220;hijacking&#8221; &#8212; a strong, directional, causal claim &#8212; rather than displacement or preference shift. 
No study is cited because none exists at the necessary scale with the necessary longitudinal follow-up.</p><p>This is neuroscience&#8217;s authority draped over a speculation. Which is particularly ironic given that Winter is writing a piece about tools that generate confident-sounding output without rigorous foundations.</p><div><hr></div><h2>The Grade Your Daughter Is Going to Receive</h2><p>Return to the slide show.</p><p>Winter&#8217;s daughter likes hers better because it&#8217;s original and she worked really hard on it. This is the right value. This is the value Winter wants the school to transmit. The school is not transmitting it, because the school is not grading for it.</p><p>If the rubric rewards polish, visual appeal, and impressive output &#8212; which most rubrics do, implicitly, because these are the things teachers can assess quickly across thirty slide shows at 11pm &#8212; then the student who uses Gemini gets the A. Not abstractly. On the transcript. The student who refuses Gemini, who holds Winter&#8217;s daughter&#8217;s values, receives the C. Neither of them learns the lesson Winter intends.</p><p>The deeper problem: homework was already a weak pedagogical instrument before AI. Most research on homework in K-8 is lukewarm. It was largely accountability theater &#8212; proof that learning happened, easy to grade, easy to assign, poor evidence of the process it was supposed to represent. AI exposed the theater. The theater was playing for years before AI bought a ticket.</p><p>What would it look like to actually assess the process? That question is harder than &#8220;what do we do about Gemini,&#8221; and it requires admitting that the current system was already failing to measure what it claimed to measure. 
Winter doesn&#8217;t want to ask that question, because asking it would mean the problem is older and deeper than the creepy neighbor who moved in recently.</p><div><hr></div><h2>What Actually Needs to Change</h2><p>The resistance movements Winter profiles &#8212; District 14 Families for Human Learning, the Coalition for an AI Moratorium, Schools Beyond Screens &#8212; are better at stopping things than proposing them. The Student Tech Bill of Rights includes the right to read whole books, write on paper, and learn in a low-stimulation environment free from undue corporate influence. These are reasonable demands. They don&#8217;t add up to a pedagogy.</p><p>The conflict-of-interest thread is the piece&#8217;s most structurally damning detail and the most underplayed. The NYC DOE official overseeing the preliminary AI guidelines holds a fellowship jointly offered by Google and GSV Ventures &#8212; whose portfolio includes Amira and MagicSchool, two of the primary AI tools being deployed in the classrooms those guidelines govern. Other Google-GSV fellowship recipients include top school officials in Berkeley, Dallas, Los Angeles, Newark, Colorado, and Maryland. &#8220;If you ask tobacco companies to help write your school&#8217;s policy on cigarettes,&#8221; one parent says, &#8220;you&#8217;re going to end up with guidance on how to smoke responsibly in school.&#8221;</p><p>This is the argument Winter should have built the piece around. Not &#8220;AI is cognitively harmful&#8221; &#8212; which is partly true, partly speculation, and entirely dependent on implementation &#8212; but &#8220;the people writing the rules are being paid by the companies they&#8217;re supposed to regulate.&#8221; That is verifiable, structural, and not dependent on a not-yet-peer-reviewed study about fractions.</p><p>The piece ends with Sinha&#8217;s question &#8212; &#8220;What do you want from this?&#8221; &#8212; and Winter&#8217;s answer: nothing. It&#8217;s a parent&#8217;s answer. 
A good parent&#8217;s answer. But it is not a policy answer, and it is not an answer that acknowledges what was already not working before the neighbor moved in.</p><p>The assessment was already broken. The rubric was already rewarding the wrong things. The slide show was already a poor proxy for thinking. AI made all of this impossible to ignore. That is a service, not a crime &#8212; even if the service was rendered by someone with cloven hooves in Yeezy Boosts and a market cap of four trillion dollars.</p><p>What we owe children is not the tools of the past but a clear account of what learning actually is, what evidence of it looks like, and how to build assessments that can tell the difference. That conversation is harder than banning Gemini. It is also the only conversation that addresses what Gemini exposed.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI. His work on AI in education, including the Genuine Learning Protocol framework, is published at bearbrown.co.</em></p><div><hr></div><p><strong>Tags:</strong> AI education New Yorker critique, cognitive offloading assessment design, Bjork learning performance distinction, AI schools policy Jessica Winter, GLP genuine learning protocol</p>]]></content:encoded></item><item><title><![CDATA[The Gap Between What We Measure and What We Name]]></title><description><![CDATA[On the Structural Problem That Forty Years of EdTech Efficacy Research Has Not Solved]]></description><link>https://www.skepticism.ai/p/the-gap-between-what-we-measure-and</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-gap-between-what-we-measure-and</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Thu, 23 Apr 2026 00:38:49 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!ZxKu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZxKu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZxKu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZxKu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1453300,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/194861665?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZxKu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!ZxKu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf68abee-cf05-4605-a9fc-933442d405bf_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Consider two findings, forty years apart.</p><p>In 1984, Benjamin Bloom published a seventeen-page paper reporting that students tutored one-on-one under mastery-learning conditions performed approximately two standard deviations above students taught in conventional classrooms. The finding has been cited tens of thousands of times. It has become, across four decades, the single most-invoked benchmark in educational technology. Whenever a new system claims to approach the effectiveness of human one-on-one instruction, it is Bloom&#8217;s 2-sigma it is claiming to approach.</p><p>In 2024, a research team at Harvard led by Gregory Kestin reported that an AI tutor, deployed in introductory physics, produced learning gains larger than active-learning classroom instruction. 
The effect size exceeded what prior literature had typically reported for any tutoring intervention, including Bloom&#8217;s. The study was methodologically careful. The finding circulated quickly. Within weeks it was being cited as evidence that current-generation AI tutors meaningfully exceed what good conventional instruction can deliver.</p><p>Forty years apart. Different technologies. Different research traditions. And yet, read carefully, the two findings share a structure.</p><p>In each, a specific measurement &#8212; performance on items aligned to the intervention&#8217;s content, assessed at short timescale, against a conventional-instruction baseline &#8212; is offered as evidence for a construct of which the measurement is not, strictly, a measurement. Bloom&#8217;s 2-sigma is evidence about performance on aligned items under particular tutoring conditions in the mid-1980s. It is <em>cited</em> as evidence about the effectiveness of tutoring as an instructional mode. Kestin&#8217;s physics finding is evidence about short-timescale aligned-item performance in a selective undergraduate population. It is <em>cited</em> as evidence that AI tutoring outperforms human instruction in some general sense the measurement does not index.</p><p>The measurements are not false. The findings are not inflated. In each case, the researchers reported carefully what they measured. The question is what happens between the measurement and its citation &#8212; the small, structural, and repeated gap between what the apparatus indexes and what the vocabulary surrounding the apparatus claims.</p><div><hr></div><h2>The Structure of the Problem</h2><p>Name the structure directly.</p><p>An efficacy claim in this field consists of three things: a measurement, a construct, and an asserted relationship between them. The measurement is what researchers actually did &#8212; items administered, scores computed, conditions compared. 
The construct is what the measurement is meant to be evidence for &#8212; <em>learning</em>, <em>mastery</em>, <em>effectiveness</em>, <em>personalization</em>, <em>engagement</em>. The asserted relationship is the claim that the measurement indexes the construct adequately to license the uses the finding is put to.</p><p>This structure appears in every empirical field. Biology works this way, and so does nutrition research, and so does clinical psychology. The gap between measurement and construct is not a problem specific to educational technology. It is a feature of empirical inquiry. Measurements never exhaustively capture their constructs. The question for any field is how seriously it takes the gap, how much work it does to establish the measurement-construct relationship, and how much it assumes versus demonstrates.</p><p>The observation this book has been building toward, essai by essai, is that the learning-systems field has, across six decades, taken the gap less seriously than its claims require. The measurement-construct relationships it invokes are almost universally assumed rather than demonstrated. The field&#8217;s vocabulary outruns what its evidence apparatus can support, and the gap persists not because it has gone unnoticed &#8212; it has been noticed, repeatedly, by careful researchers across multiple traditions &#8212; but because the apparatus that persists serves specific production conditions, and a more adequate apparatus would serve them less well.</p><p>The structure is not: <em>the field is wrong about what works.</em> The structure is: <em>the field makes claims about effectiveness that its measurements are not positioned to support, and does so systematically.</em> These are importantly different claims. The first is about facts. 
The second is about apparatus &#8212; about the specific set of measurement practices, citation habits, and research conventions that together produce what the field calls its evidence base.</p><p>The distinction matters because the remedy differs. If the field were making factual errors, the remedy would be better studies of the same interventions. If the apparatus is producing a systematic gap between measurement and claim, the remedy is different apparatus. This book has not argued for either remedy. It has argued, by the accumulated force of twelve close readings, that the second diagnosis is correct.</p><div><hr></div><h2>What the Vocabulary Actually Invokes</h2><p>Open a textbook in educational psychology. Open a learning-sciences journal. Open the marketing copy for any major adaptive-learning platform. Open the abstract of any recent AI-tutor efficacy study. The vocabulary is remarkably consistent. The field claims to be producing evidence about <em>learning</em>. About <em>understanding</em>. About <em>mastery</em>. About <em>effectiveness</em>. About <em>personalization</em> and <em>engagement</em>. Each of these words points toward a construct. Each construct has, in serious research traditions, substantial theoretical and empirical articulation.</p><p>Consider <em>learning</em>. In Robert Bjork&#8217;s decades of experimental work, learning is not a single construct but a distinction between two separable things: storage strength and retrieval strength. Storage strength refers to how well a representation is encoded. Retrieval strength refers to how accessible it is at the moment of test. A student can have high retrieval strength at the end of a unit &#8212; they perform well on the post-test &#8212; without high storage strength. Weeks later, the retrieval strength decays, and the post-test performance turns out to have been measuring the wrong thing. 
Conditions that maximize immediate performance &#8212; massed practice, aligned testing, minimal difficulty &#8212; often actively impair long-term storage. This is the central insight of what Bjork calls desirable difficulties.</p><p>A learning claim grounded in Bjork&#8217;s construct requires evidence of storage strength, not just retrieval strength &#8212; which requires measuring performance after a delay, in new contexts, on items not identical to training. The methodology exists. It has existed since the early 1990s. It is the basis of essentially every recommendation in <em>Make It Stick</em> and in the broader spaced-practice and retrieval-practice literature that has accumulated since.</p><p>Now consider how <em>learning</em> is typically operationalized in EdTech efficacy research. The outcome measure is a post-test administered at the end of the instructional unit. The items are aligned with the instructional content. The interval between instruction and test is hours to days. The retrieval context is the same or similar to the learning context. What this operationalization measures is retrieval strength at short delay. What Bjork&#8217;s construct requires is storage strength at longer delay under different retrieval conditions. These are not the same thing.</p><p>The gap between the two is not subtle. It is structural. And it is present in nearly every efficacy claim this book has examined.</p><p>Consider <em>understanding</em>. Jean Lave, Etienne Wenger, John Dewey, and the situated-cognition tradition spent decades articulating understanding as something different from performance on items. Understanding involves the capacity to apply knowledge in contexts that differ from the contexts of acquisition. It involves participation in practices &#8212; knowing how to use what one knows in the world where it applies. 
Transfer testing &#8212; the capacity to apply learning to problems that differ meaningfully from training &#8212; is the minimum methodological requirement for a claim about understanding. Transfer testing has been advocated for in educational research since Thorndike&#8217;s early twentieth-century work. It remains exceptional in EdTech efficacy research.</p><p>Consider <em>mastery</em>. Bloom&#8217;s own construct, as articulated in his mastery-learning work, involves structural reorganization of knowledge &#8212; the kind of reorganization that allows a learner to solve problems the instruction did not specifically address. Bloom&#8217;s 2-sigma finding emerged from studies that implemented criterion-referenced assessment, formative assessment with corrective feedback, demonstrated performance across multiple item types. The 2-sigma number is cited routinely as a benchmark for tutoring effectiveness. Bloom&#8217;s construct of mastery, including its methodological requirements, is cited far less often.</p><p>Consider <em>personalization</em>, as examined in the eighth essai. The term invokes a construct rooted in Vygotskian zone-of-proximal-development work and the aptitude-treatment interaction literature &#8212; instruction responsive to who the individual learner actually is. What adaptive-learning systems operationalize is item sequencing and pacing based on item-level response patterns. These are not the same construct.</p><p>Consider <em>engagement</em>. The construct, as articulated in the psychological literature, involves attention, motivation, affect, persistence in the face of difficulty, meaningful cognitive investment. What AI-tutor efficacy research typically measures is time on task, session counts, and completion rates. Kristen DiCerbo of Khan Academy observed in April 2026 that when students engaged with Khanmigo, they were typing &#8220;IDK IDK&#8221; &#8212; <em>I don&#8217;t know, I don&#8217;t know</em> &#8212; and moving on. 
The platform counted them as engaged. They were not engaged in any cognitively meaningful sense.</p><p>Each of these constructs has serious theoretical articulation in one or more research traditions. Each is routinely invoked by the field&#8217;s claim-making vocabulary. Each is routinely operationalized as aligned-item performance at short timescale. The gap between the construct and the operationalization is what the apparatus produces. And taken across the field, it is the difference between the learning the vocabulary claims and the performance the measurements index.</p><div><hr></div><h2>What the Field Has Tried</h2><p>It would be inaccurate to say the field has not tried to close this gap. It has tried, across multiple traditions, for decades. That these attempts have not produced a different default apparatus is itself instructive.</p><p><em>How People Learn</em>, the 1999 National Academies synthesis by Bransford, Brown, and Cocking, made transfer testing a central methodological theme. The implication was straightforward: efficacy research should include transfer measures if it wants to make claims about learning rather than claims about trained performance. Two and a half decades later, transfer testing remains exceptional.</p><p>Samuel Messick&#8217;s theory of validity, codified in his 1989 chapter in <em>Educational Measurement</em>, specified that a test score&#8217;s interpretation requires examination of construct-relevant versus construct-irrelevant variance, construct underrepresentation, and the consequences of the test&#8217;s use. Applied rigorously, Messick&#8217;s framework would require EdTech efficacy research to examine what its outcome measures actually index rather than assuming that performance-on-aligned-items equals evidence-of-learning. The framework has been the theoretical standard in measurement theory for over thirty years. 
Its rigorous application in educational-technology efficacy research has been partial at best.</p><p>Jean Lave&#8217;s situated-cognition tradition articulated assessment that requires observation of practice rather than administration of tests. It has had essentially no impact on deployed-product efficacy research.</p><p>Each of these traditions has existed for decades. Each has produced methodology that could be adopted. Each remains exceptional rather than routine. The alternatives have not been hidden. They have been taught in graduate programs, cited in methods sections, present in the same journals that published the aligned-outcome studies.</p><p>The question is why they have not taken hold.</p><div><hr></div><h2>Why the Apparatus Persists</h2><p>The apparatus persists because it serves the specific production conditions of the field in which it operates.</p><p>Consider what a researcher needs in order to do research in this field. Funding, on grant cycles of two to five years. Publications, through peer-reviewed journals with specific conventions. Access to populations &#8212; schools, classrooms, platforms &#8212; through institutional partnerships with their own timelines and constraints. Findings that other researchers can cite.</p><p>Now consider what a more adequate apparatus would require. Transfer testing adds design complexity and reduces effect sizes. Durability testing extends the study timeline past the typical grant cycle. Multi-paradigm convergence requires methodological range that most research programs do not possess. Pre-registration of analytic plans constrains the exploratory moves that often produce publishable findings.</p><p>Each of these, if adopted as a default, would reduce the rate at which researchers produce citable positive findings. Not because the interventions do not work &#8212; some of them do &#8212; but because the findings that survive the more demanding methodology would be smaller, noisier, and less rhetorically useful. 
A researcher who adopts the more demanding methodology competes with researchers who do not. The less-demanding researcher&#8217;s findings will be larger, cleaner, and more citable. Grant agencies, tenure committees, and publication venues all reward the latter.</p><p>The same pressures operate on the institutions that surround the research. Product vendors have commercial reasons to prefer methodologies that produce larger numbers. Policy bodies have political reasons to prefer evidence that looks clean. Philanthropists want defensible findings, and clean findings are easier to defend than nuanced ones. Journal editors respond to what their referees will accept, and what referees will accept is shaped by the conventions the field has institutionalized.</p><p>No individual in this system is behaving cynically. Researchers are doing their best work under the constraints of their funding. The apparatus is not what anyone chose. It is what the incentives produce when rational actors operate within them.</p><p>This is why advocacy for better methodology has not produced better methodology. The problem is not that researchers do not know better methodology exists &#8212; they do. The problem is that operating under the existing apparatus produces careers; operating against it produces, for most researchers, shorter and more difficult careers.</p><p>The apparatus persists because it is an equilibrium. Equilibria are stable not because the actors inside them are irrational but because they are responding rationally to incentives that no single actor created and no single actor can change. Changing an equilibrium of this kind requires changing the incentives across grant agencies, tenure systems, journal conventions, institutional practices, and funder expectations simultaneously. Such coordination is rare.</p><p>This is a structural observation, not a moral one. Researchers in this field are not broken. 
The evidence base is what the apparatus produces when careful, rigorous, well-meaning researchers operate under the conventions the apparatus enforces. Improving any individual researcher&#8217;s methods would not change what the field&#8217;s evidence base looks like, because the evidence base is the aggregate output of many careful researchers responding to shared incentives.</p><div><hr></div><p>That is what the apparatus was always supposed to produce.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)). This essay appears as part of the Computational Skepticism series at <a href="https://skepticism.ai">skepticism.ai</a>. | <a href="https://theorist.ai">theorist.ai</a> | <a href="https://hypotheticalai.substack.com">hypotheticalai.substack.com</a></em></p><div><hr></div><p><strong>Tags:</strong> measurement construct validity EdTech efficacy, Bjork storage retrieval strength learning systems, transfer testing durability educational technology, apparatus equilibrium research incentives, Bloom Kestin aligned outcome measure gap</p>]]></content:encoded></item><item><title><![CDATA[The Comparison That Was Never Fair]]></title><description><![CDATA[What Intelligent Tutoring Systems Actually Measured, and What They Were Compared Against]]></description><link>https://www.skepticism.ai/p/the-comparison-that-was-never-fair</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-comparison-that-was-never-fair</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Tue, 21 Apr 2026 19:21:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BLUQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37bd9d3b-e6e4-4371-a664-178094eaa5c6_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a 
class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BLUQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37bd9d3b-e6e4-4371-a664-178094eaa5c6_1456x816.png"><img src="https://substackcdn.com/image/fetch/$s_!BLUQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37bd9d3b-e6e4-4371-a664-178094eaa5c6_1456x816.png" width="1456" height="816" alt=""></a></figure></div><p>In 2014, RAND published one of the most carefully designed evaluations of an educational technology system in the history of the field. John Pane, Beth Ann Griffin, Daniel McCaffrey, and Rita Karam ran a cluster-randomized controlled trial across 147 schools in seven states, assigning roughly 25,000 students either to use Cognitive Tutor Algebra I or to continue with whatever algebra instruction those schools had previously offered. The outcome measure was a standardized algebra proficiency exam. The design was, by the standards of a field that routinely tolerates thin evidence and motivated reporting, unusually rigorous.</p><p>The finding was specific. In the first year of implementation, Cognitive Tutor produced no statistically significant effect on algebra proficiency. 
In the second year, a significant positive effect emerged at high schools &#8212; approximately 0.20 standard deviations, sufficient to move a median student from the 50th to roughly the 58th percentile. At middle schools, the second-year effect was similar in magnitude but did not reach statistical significance.</p><p>Pane and colleagues called this an &#8220;implementation learning curve.&#8221; They were careful to note that the learning did not seem to happen at the level of individual teachers &#8212; students of teachers new to the system in year two performed similarly to students of experienced teachers. The learning happened at the level of schools: scheduling, infrastructure, coordination, institutional adjustment to a new instructional logic. The sites that figured out how to implement Cognitive Tutor took a year to figure it out, and then the system worked.</p><p>This is what a rigorous evaluation of an intelligent tutoring system looks like. The findings are real. The effects are modest. The implementation costs were substantial &#8212; approximately $97 per student per year for Cognitive Tutor against approximately $28 for the traditional textbook instruction it replaced. And in the field&#8217;s characteristic framing, this result was narrated as <em>disappointment</em>. Intelligent tutoring systems were supposed to approach human tutoring effectiveness. They had not.</p><p>I want to examine that disappointment. Not to redeem ITS, and not to dismiss the evaluation record. I want to examine what was being compared to what, and whether the comparison &#8212; the one that has driven ITS research, ITS funding, and now AI-tutor rhetoric for forty years &#8212; was ever structurally sound.</p><div><hr></div><h2>What the Tutor Actually Measured</h2><p>Cognitive Tutor was built to embody a specific theory of cognition. 
John Anderson&#8217;s ACT-R framework posits that skill acquisition is the conversion of declarative knowledge &#8212; facts, concepts &#8212; into procedural knowledge: production rules, condition-action pairs. To become skilled at algebra is to acquire a set of increasingly sophisticated rules for algebraic manipulation. Recognize that the goal is to isolate a variable and the coefficient is 4, and divide both sides by 4. The rule fires. The step is taken correctly.</p><p>The instructional design that follows from this is specific. If you can specify the production rules that constitute algebraic competence, you can build a system that monitors whether each rule is acquired. Cognitive Tutor did exactly this. As a student worked through a problem, the tutor compared each step against its internal model of valid solution paths. Correct step: proceed. Step matching a stored buggy production &#8212; a common misconception encoded in the system &#8212; respond with immediate feedback. Student requests help: deliver a graduated hint sequence targeting the specific production the student is struggling to fire.</p><p>Across many problems, the tutor maintained running Bayesian estimates of whether each production rule had been mastered. Students could not advance to new material until the estimates crossed a mastery threshold. This is model tracing and knowledge tracing: two technical operations that together constitute the system&#8217;s measurement apparatus. What the apparatus measures is step-level correctness, time per step, hint requests, error patterns, and estimated mastery of each production rule. These are not arbitrary choices. They are what ACT-R theory specifies as relevant to procedural skill acquisition. The design is internally consistent with the theory it was built on.</p><p>The 1995 paper in which Anderson, Corbett, Koedinger, and Pelletier published their decade of findings was titled <em>Cognitive Tutors: Lessons Learned</em>. 
The plural in that title is deliberate. The paper names what the system does not measure with the same specificity as what it does. Cognitive Tutor does not model affective state. It cannot detect whether a student is frustrated, bored, or emotionally disengaged from the material. It cannot identify conceptual confusion that lives above the production-rule grain &#8212; a student may fire productions correctly while failing to understand the domain they are operating in, and the tutor will not notice. It does not measure transfer, durability, or motivation. These are not oversights. They are structural features of a system designed for a specific theoretical purpose.</p><p>The researchers knew exactly what they had built. The disappointment that followed was largely not of their making.</p><div><hr></div><h2>What Human Tutors Actually Do</h2><p>The comparison that generated the disappointment is this: ITS produces effect sizes of roughly 0.20 to 0.40 sigma relative to classroom instruction. Expert human tutors produce effect sizes of roughly 0.40 to 0.80 sigma. Therefore ITS has failed to approach human effectiveness.</p><p>This comparison requires that both numbers measure the same construct at different magnitudes. They do not.</p><p>The research literature on what expert human tutors actually do is not sparse, and much of it was produced by the same researchers who built ITS. Art Graesser &#8212; who built AutoTutor, one of the more sophisticated ITS systems in the research tradition &#8212; spent years analyzing videotaped sessions between expert tutors and students, specifically to understand what tutors were doing that his system might learn to do. What Graesser&#8217;s analyses documented was a specific set of interactional moves.</p><p>Tutors approach a topic with what Graesser called expectations and misconceptions: a mental model of the components of correct understanding and a map of how students typically go wrong. 
As students respond, the tutor evaluates the response against this map &#8212; not syntactically, as an ITS matches a step against a production rule, but semantically, tracking which elements of the expected understanding are present and which are missing. The next move is determined by this evaluation. The response is therefore flexible in a way that production-rule matching is not.</p><p>Tutors continuously check comprehension. &#8220;Can you say that in your own words?&#8221; &#8220;What would happen if this were different?&#8221; These are not assessment items; they are real questions that tutors use to calibrate what to do next. The comprehension check is an instrument for reading the student&#8217;s understanding, not recording it in a database.</p><p>Tutors manage affect. Graesser&#8217;s research documented that expert tutors are often deliberately imprecise about negative feedback &#8212; indirect, softened, delivered in ways designed to protect the student&#8217;s willingness to continue engaging. This is not sloppiness. It is the management of an ongoing relationship whose continuation matters to the learning. A student who has been made to feel consistently stupid by their tutor stops engaging, and a tutor who cannot detect or respond to that risk is a different kind of instrument.</p><p>Tutors follow student questions. When a student asks something the tutor had not planned to address, expert tutors engage. Graesser, describing AutoTutor&#8217;s limitations with characteristic directness, noted that his system had to use &#8220;diversionary tactics&#8221; when students asked questions outside its agenda. Human tutors do not divert. They follow.</p><p>Michelene Chi, working from a different angle, documented that what makes human tutoring effective is not primarily the information the tutor delivers. 
It is the interactivity &#8212; the tutor&#8217;s prompts that elicit the student&#8217;s own elaboration, the student&#8217;s attempts at articulation that reveal gaps, the tutor&#8217;s calibration of the next move to what the student&#8217;s specific response has revealed. Self-explanation is a primary driver of conceptual change, and expert tutors are specifically skilled at eliciting the right kind of self-explanation through well-calibrated prompts. An ITS can prompt for self-explanation. What it cannot do is read the specific partial answer the student just produced and respond to that answer&#8217;s specific weaknesses.</p><p>And from an even earlier lineage: Wood, Bruner, and Ross, in a foundational 1976 paper, identified six functions tutors perform when scaffolding learners through tasks. Recruitment of interest. Reduction of degrees of freedom. Direction maintenance. Marking critical features. Frustration control. Demonstration. Of these six, Cognitive Tutor was specifically engineered to perform one: reduction of degrees of freedom, the step-by-step scaffolding that makes a complex problem tractable by breaking it into smaller operations. The tutor is structurally blind to recruitment, structurally unable to perform frustration control, and limited in demonstration to displaying the system&#8217;s own solution paths rather than modeling the expert&#8217;s move for the novice in ways the novice can watch and internalize.</p><div><hr></div><h2>The Axis Problem</h2><p>Here is what this produces.</p><p>The ITS measurement apparatus was built to measure one specific dimension of what expert human tutors do: the reduction-of-degrees-of-freedom move. Cognitive Tutor performs this move with remarkable precision. Its model tracing, its knowledge tracing, its mastery-learning constraints &#8212; these are all optimized for ensuring students acquire the production rules that constitute procedural competence in a specific domain. 
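</p><p>The knowledge-tracing component just mentioned maintains a running Bayesian estimate of mastery for each production rule, and gates advancement on a threshold. As a minimal sketch of that update, assuming illustrative parameter values rather than Cognitive Tutor&#8217;s actual settings, the standard Bayesian Knowledge Tracing form looks like this:</p>

```python
# Bayesian Knowledge Tracing (BKT): update P(mastered) after each observed step.
# All parameter values here are illustrative placeholders, not Cognitive Tutor's.
P_INIT = 0.2    # prior probability the production rule is already mastered
P_LEARN = 0.15  # probability of learning the rule at each practice opportunity
P_SLIP = 0.1    # probability a mastered rule still yields a wrong step
P_GUESS = 0.2   # probability an unmastered rule still yields a correct step
MASTERY_THRESHOLD = 0.95

def bkt_update(p_mastered: float, correct: bool) -> float:
    """One BKT step: a Bayes update on the observed step, then a learning transition."""
    if correct:
        evidence = p_mastered * (1 - P_SLIP)
        total = evidence + (1 - p_mastered) * P_GUESS
    else:
        evidence = p_mastered * P_SLIP
        total = evidence + (1 - p_mastered) * (1 - P_GUESS)
    posterior = evidence / total
    # Transition: an unmastered rule may become mastered after this practice step.
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for step_correct in [True, True, False, True, True, True]:
    p = bkt_update(p, step_correct)
print(f"P(mastered) = {p:.3f}, advance = {p >= MASTERY_THRESHOLD}")
```

<p>Note what the estimate conditions on: step-level correctness and nothing else. Affect, conceptual confusion above the production-rule grain, and transfer are invisible to the update by construction.</p><p>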
When evaluated on measures aligned with this construct, the system produces real effects. Pane&#8217;s 0.20 sigma is not noise. It reflects what the system actually does.</p><p>Human tutoring, as documented in Graesser&#8217;s and Chi&#8217;s and Wood, Bruner, and Ross&#8217;s research, involves that same move alongside several others: expectation-and-misconception dialogue, comprehension checks, affective management, student-question handling, recruitment, frustration control, demonstration. The effect sizes produced by expert human tutors in the research literature reflect this fuller set of moves acting in concert, against whatever outcome measures the studies used.</p><p>When these two numbers &#8212; the ITS effect and the human-tutoring effect &#8212; are placed on a single sigma axis for comparison, the implicit claim is that they measure the same construct at different magnitudes. They do not. ITS measures what a procedural-scaffolding technology produces on assessments that test procedural skills. Human tutoring measures what a full interactional relationship produces on assessments that, depending on the study, test some combination of procedural skills and broader constructs. The numbers can be placed on the same axis only if the underlying outcome measures are the same &#8212; which they frequently are not &#8212; and only if the interactional moves the two interventions involve are comparable &#8212; which the research literature establishes they are not.</p><p>This is the construct mismatch. It is not a peripheral observation. It is the structural feature of a comparison that has been doing field-level work for forty years, driving research agendas, guiding institutional adoption decisions, and anchoring the contemporary rhetoric that AI can approach human instructional effectiveness. 
What the comparison has consistently obscured is that the two things it is comparing were never fully on the same axis.</p><p>Cognitive Tutor did something real, with discipline and theoretical grounding, and produced genuine effects when evaluated appropriately. The disappointment in its failure to match human-tutor effect sizes is partly the disappointment of a comparison that was underdetermined from the start. Asking whether Cognitive Tutor matched human tutors is like asking whether a skilled surgeon matches a general practitioner across all dimensions of medical care. The surgeon is extraordinarily good at the specific thing the surgeon does. The general practitioner does that thing and many others. The sigma gap between them does not mean the surgeon failed.</p><div><hr></div><h2>The Inheritance</h2><p>The current AI-tutor moment has been presented, in much public discourse, as an advance that finally addresses what ITS lacked. Large language models can engage in natural-language dialogue. They can handle questions they were not specifically designed to handle. They can, in principle, perform some of the interactional moves Graesser documented as characteristic of expert human tutoring &#8212; the expectation-and-misconception dialogue, the comprehension check, the flexible response to what a student actually said. The rhetoric suggests the construct mismatch has been resolved.</p><p>Read through the ITS apparatus, the claim is more complicated than the rhetoric suggests.</p><p>The current AI-tutor evaluation studies still measure what ITS evaluations measured: item-level mastery, step-level performance, post-test scores on aligned assessments, immediate outcomes rather than durable learning. The measurement apparatus has been inherited. What has changed is the interaction layer. 
Whether the interaction-layer changes produce meaningfully different learning outcomes &#8212; or produce the appearance of more-human interaction without producing the underlying effects &#8212; is an empirical question the current literature has not cleanly answered. The Kestin Harvard physics study, with its 0.73 to 1.3 sigma effects on researcher-designed tests of the specific content a two-hour AI session had just covered, is measured on a Skinnerian axis. The measurement does not index whether the AI performed the interactional moves that make human tutoring what it is. It indexes whether students correctly answered questions about surface tension and fluid flow immediately after being tutored about surface tension and fluid flow.</p><p>The construct mismatch is not solved by better interaction capabilities. It is solved by better measurement. A system that performs rich tutoring interaction and is evaluated on aligned immediate assessments remains, from the evaluation&#8217;s perspective, on the same axis as Cognitive Tutor. The measurement apparatus determines what the sigma numbers mean, and the measurement apparatus has not substantially changed across the transition from production-rule ITS to generative AI tutoring.</p><p>This matters because the comparison that has driven forty years of ITS disappointment is being recycled to drive the current AI-tutor moment. The benchmarks invoked &#8212; Bloom&#8217;s 2-sigma, the expert-human-tutor effect-size range, the framing that AI can now &#8220;approach&#8221; human instruction &#8212; are the same benchmarks. The construct mismatch they depend on is the same mismatch. 
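</p><p>It is worth making the sigma-to-percentile arithmetic behind these benchmark numbers explicit. Under the usual normal-distribution assumption, an effect of d standard deviations moves a median student to the &#934;(d) percentile. A short sketch, using the effect sizes quoted in this essay:</p>

```python
# Convert standardized effect sizes (sigma) into the percentile a median
# control-group student would reach, under a normal-distribution assumption.
from statistics import NormalDist

def sigma_to_percentile(d: float) -> float:
    """Percentile reached by a median student shifted up by d standard deviations."""
    return 100 * NormalDist().cdf(d)

effects = [
    ("RAND Cognitive Tutor, year-2 high school", 0.20),
    ("expert human tutoring, low end", 0.40),
    ("expert human tutoring, high end", 0.80),
    ("Kestin aligned post-test, low end", 0.73),
    ("Kestin aligned post-test, high end", 1.30),
]
for label, d in effects:
    print(f"{label}: d = {d:.2f} -> {sigma_to_percentile(d):.0f}th percentile")
```

<p>The arithmetic is trivially portable across studies, which is exactly the hazard: it converts any effect into the same-looking number while saying nothing about what the underlying assessment measured.</p><p>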
Whether a system that generates flexible natural-language responses has actually closed the distance that matters, or has closed the part of the distance that is easier to perform while leaving the harder parts unaddressed, is the question the measurement apparatus is not yet equipped to answer.</p><div><hr></div><h2>Three Questions to Ask</h2><p>When you next encounter a claim that an educational technology has approached the effectiveness of human tutoring, three questions will orient you.</p><p>What did the technology actually measure? If the evaluation used item-level or step-level assessments aligned with the technology&#8217;s instructional content, the system has been measured against a construct aligned with what it was built to do. This is not a criticism; it is a description of what the evaluation supports.</p><p>What does the human-tutoring construct actually involve? The research literature on expert human tutors documents a specific set of interactional moves &#8212; expectation-and-misconception dialogue, comprehension checks, affective management, student-question handling, recruitment, frustration control, demonstration. These are not peripheral features. They are the substance of what expert tutors do.</p><p>Was the comparison conducted on an axis that indexes both? If the outcome measure favors procedural scaffolding &#8212; which most ITS and AI-tutor evaluations use &#8212; the axis is not measuring what human tutoring does beyond procedural scaffolding. The comparison is limited by the measurement choice. A finding that the technology approaches human tutoring on such a measure is a finding about procedural scaffolding, not about the interactional richness the construct human tutoring would require.</p><p>These questions do not answer whether AI can replace human tutors. They answer the prior question: what are we measuring when we make the comparison? 
The field has been skipping the prior question since 1984, when Benjamin Bloom placed his two-sigma number on the same axis as his classroom-instruction comparison and the discourse collapsed the distance between them into a single rhetorical invitation. Cognitive Tutor responded to the invitation seriously, with theoretical rigor and methodological discipline, and produced 0.20 sigma at high schools after a year of implementation and $97 per student per year of cost. That result is not a failure. It is what the single move Cognitive Tutor was built to perform yields when measured honestly, at scale, in actual schools.</p><p>The number that system was compared against was never on the same axis. The comparison is the problem. It was the problem in 1990, when ITS researchers were trying to build what the comparison named. It is still the problem now, when generative AI is being asked to close a gap the measurement apparatus cannot fully see.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)). 
| <a href="https://skepticism.ai">skepticism.ai</a> | <a href="https://theorist.ai">theorist.ai</a></em></p><div><hr></div><p><strong>Tags:</strong> intelligent tutoring systems construct validity, Cognitive Tutor RAND evaluation, human tutoring comparison mismatch, ACT-R model tracing procedural scaffolding, AI tutor measurement apparatus critique</p>]]></content:encoded></item><item><title><![CDATA[The Debt That Was Never Owed]]></title><description><![CDATA[Palantir posted a bootlicking new manifesto to X on Saturday]]></description><link>https://www.skepticism.ai/p/the-debt-that-was-never-owed</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-debt-that-was-never-owed</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Tue, 21 Apr 2026 02:39:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6pk5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6pk5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6pk5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!6pk5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!6pk5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6pk5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F361989e9-4dad-4370-8ca3-45aecb284555_1456x816.png" width="1456" height="816" alt=""></picture></div></a></figure></div><p>Palantir posted a <a href="https://x.com/PalantirTech/status/2045574398573453312">bootlicking new manifesto</a> to X on Saturday, calling it a brief summary of The Technological Republic, a 2025 book by Palantir co-founder and CEO Alexander C. Karp and head of corporate and legal affairs Nicholas W. Zamiska. You can read the <a href="https://x.com/PalantirTech/status/2045574398573453312">full manifesto here</a>.</p><p>There is a word missing from Palantir&#8217;s 22-point manifesto, and its absence is the most revealing thing about the document. The word is <em>citizen</em>. Not customer, not taxpayer, not the &#8220;public&#8221; whose security the company claims to protect&#8212;citizen, the person with rights that precede the state&#8217;s demands on them. In 318 words posted to X on a Saturday, Alexander Karp and Nicholas Zamiska laid out a vision of the relationship between Silicon Valley and the American government that has no room for that word, because the vision does not require it. What it requires is something older and more coercive: <em>debt</em>.</p><p>&#8220;Silicon Valley owes a moral debt,&#8221; the manifesto announces, &#8220;to the country that made its rise possible.&#8221; The engineering elite has &#8220;an affirmative obligation to participate in the defense of the nation.&#8221; Read slowly, this is an extraordinary claim&#8212;not that companies <em>should</em> contribute to national defense as a matter of civic choice, but that they <em>owe</em> this contribution as repayment for being permitted to exist and thrive. The logic underneath is not liberal. It is feudal. 
You were allowed to build here; now you must serve.</p><p>This distinction matters because it forecloses the question the manifesto most wants to avoid: serve <em>what</em>, and decided by <em>whom</em>?</p><div><hr></div><h2>The Machine That Needs No Ethics</h2><p>Palantir is not a neutral observer of the relationship between technology and national power. It is one of the primary architects of that relationship. Its tools help run predictive policing programs in American cities&#8212;programs with documented records of racially disparate impact. Its analytics support military operations in Gaza, where the scale of civilian death has generated calls for investigation at the International Court of Justice. The company&#8217;s stated business is to make governments and militaries more effective at finding and targeting people.</p><p>This background is not incidental to reading the manifesto. It is the lens through which every high-minded claim about &#8220;hard power&#8221; and &#8220;the long peace&#8221; must be understood. When point five declares that &#8220;the question is not whether A.I. weapons will be built; it is who will build them and for what purpose&#8221;&#8212;Palantir is answering its own question. It will build them. The purpose will be defined later, by clients.</p><p>The manifesto&#8217;s treatment of AI weaponry is instructive precisely because of what it refuses to say. &#8220;Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications.&#8221; The word <em>theatrical</em> is doing enormous work here. It transforms any moral inquiry&#8212;any attempt to ask what these systems will do to human bodies, to civilian populations, to the international frameworks that have governed warfare since 1949&#8212;into performance. The person who asks &#8220;should we build this?&#8221; is not thoughtful. They are theatrical. 
They are wasting time while China proceeds.</p><p>This is an old move. It has been used to justify every weapons program that ever required the silencing of conscience. The urgency of the adversary becomes the alibi for the abandonment of ethics. What is new is the audacity of building that alibi directly into a manifesto and posting it with apparent pride.</p><div><hr></div><h2>The Hierarchy They Won&#8217;t Name</h2><p>The manifesto&#8217;s most revealing quality is its double standard, operating so consistently across so many of its twenty-two points that it must be understood as a design feature rather than an oversight.</p><p>Ordinary people who look to politics &#8220;to nourish their soul and sense of self&#8221; are warned they &#8220;will be left disappointed.&#8221; They should not rely too heavily on their internal life finding expression in politicians they&#8217;ll never meet. <em>Stay in your lane.</em> But Elon Musk should not be &#8220;snickered at&#8221; for his grand narratives. The rich man&#8217;s vision is legitimate ambition; the ordinary person&#8217;s political investment is pathetic dependency.</p><p>Public figures deserve &#8220;far more grace.&#8221; The &#8220;ruthless exposure of the private lives of public figures drives far too much talent away from government service.&#8221; The culture of accountability&#8212;the press, the investigators, the citizens who demand that power justify itself&#8212;is characterized as a pathology driving good people from public life. But the document offers no equivalent concern for the people whose private lives are exposed by Palantir&#8217;s surveillance tools. The predictive policing database. The behavioral analytics. The location tracking. The inference engines that make private lives legible to the state. That exposure is the product. 
The grace is reserved for those doing the exposing.</p><p>Point twenty-one declares that some cultures &#8220;have produced wonders&#8221; while others &#8220;have proven middling, and worse, regressive and harmful.&#8221; This is not accompanied by any methodology, any acknowledgment of the material conditions that produce what Karp and Zamiska are willing to call cultural failure, any reckoning with the history of a Western civilization that has spent five centuries extracting labor and resources from the cultures it now grades. It is simply asserted, with the confidence of people who have never had to justify to anyone why their own culture gets to be the rubric.</p><p>This is the hierarchy the manifesto will not name: the people who build the tools and those upon whom the tools are used. The engineers whose creative lives deserve protection from decadence and the citizens whose movements, associations, and behaviors feed the databases that fund the manifesto&#8217;s authors. The public figures who deserve grace and the communities who deserve, apparently, nothing but efficiency.</p><div><hr></div><h2>The Draft and the Document</h2><p>Point six is the most honest sentence in the manifesto: &#8220;We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost.&#8221;</p><p>I want to sit with this for a moment, because buried inside its apparent fairness is something important. Karp and Zamiska are calling for conscription. Universal national service. They are saying that the all-volunteer military&#8212;the force assembled from people who, for economic or ideological reasons, chose to enlist&#8212;is insufficient. Everyone must go.</p><p>And yet.</p><p>The same document argues that engineers have a &#8220;moral debt&#8221; to the national defense that must be repaid through the production of AI weapons. 
The same document argues that tech companies must be conscripted to serve national interests. The same document warns that &#8220;theatrical debates&#8221; about the ethics of these weapons should not be permitted to slow their development.</p><p>What the manifesto envisions, in full, is a society in which everyone serves&#8212;but in which the purposes they serve, the weapons they build, and the targets those weapons find are determined by the people writing 22-point manifestos and posting them to X. Universal obligation. Elite prerogative. The risk is shared; the decisions are not.</p><p>This is the structure of every regime that has ever called for national sacrifice while exempting its own planning class from accountability. The workers die in the wars that the strategists design.</p><div><hr></div><h2>What Decadence Actually Is</h2><p>The manifesto&#8217;s most irritating rhetorical move is its deployment of <em>decadence</em> as an indictment of ordinary life. &#8220;The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public.&#8221; &#8220;Is the iPhone our greatest creative if not crowning achievement as a civilization?&#8221; &#8220;Free email is not enough.&#8221;</p><p>This is the pose of someone who has everything and is bored by it&#8212;who mistakes their boredom for moral clarity and their ambition for national purpose. Karp and Zamiska are billionaires. They run a company whose stock has made many of its employees extraordinarily wealthy. The product they are now positioning as the antidote to decadence&#8212;AI-powered weapons systems&#8212;is the revenue engine that sustains their own very comfortable lives. 
The argument is: you are distracted by your phones while we build the future, which we will sell to governments at market rates.</p><p>What decadence actually looks like is a surveillance capitalism that profits from exposure while calling for privacy protections for its principals. It looks like a company that takes federal contracts to build targeting systems and then writes a book about the spiritual failure of the engineering class that won&#8217;t do the same. It looks like the audacity to write about public service while running a company whose compensation structure would, as the manifesto itself notes, cause any normal business to &#8220;struggle to survive&#8221;&#8212;and offering no solution to that problem beyond the vague instruction that the situation must change.</p><div><hr></div><h2>The Peace That Is Not Peace</h2><p>Point fourteen asserts that &#8220;American power has made possible an extraordinarily long peace.&#8221; The framing is precise, calibrated, and wrong in the ways that matter most.</p><p>The hundred years of &#8220;some version of peace&#8221; that the manifesto celebrates looks different depending on where you are standing. It looks like the Korean War if you are Korean. It looks like Vietnam if you are Vietnamese, or Laotian, or Cambodian. It looks like a series of coups and counter-insurgency operations if you are Guatemalan, Chilean, Iranian. It looks like the Iraq War and its 200,000 civilian dead if you are Iraqi. It looks like the drone program if you are Yemeni, Pakistani, or Somali.</p><p>The &#8220;long peace&#8221; is a peace among great powers, purchased in part by the exportation of violence to places whose people the manifesto is not designed to address. When Karp and Zamiska write that &#8220;nearly a century of some version of peace has prevailed in the world without a great power military conflict,&#8221; they are using &#8220;the world&#8221; to mean something smaller than the world.</p><p>This is not a minor error. 
It is the error that makes possible everything else in the document&#8212;the easy celebration of hard power, the dismissal of ethical debate, the confidence that the instruments of American military capacity are, on balance, a gift to humanity. If you exclude from your accounting the people on whom American military power has been used, the accounting works out very well. If you include them, it does not.</p><div><hr></div><h2>What I Find Myself Unable to Dismiss</h2><p>And yet.</p><p>There are things in this document that cannot simply be mocked away. The concern about Germany and Japan&#8212;point fifteen&#8217;s argument that Europe is &#8220;paying a heavy price&#8221; for the overcorrection of German demilitarization&#8212;has been vindicated with terrible specificity by events since 2022. The observation that public service compensation structures drive talented people toward private alternatives is empirically accurate. The critique of a political culture that has become so punitive that it discourages participation is something that people across the political spectrum have made, often for opposite reasons.</p><p>The scaffolding of the manifesto is not entirely wrong. The conclusion it draws from that scaffolding&#8212;that Silicon Valley companies have an obligation to build weapons and a right to do so without ethical interference&#8212;is where the document reveals what it actually is.</p><p>The scaffolding says: the world is dangerous, democracies must compete, technical capacity is the foundation of power, the people who can build technical capacity have responsibilities that go beyond personal enrichment.</p><p>The conclusion says: therefore, Palantir.</p><p>The conclusion does not follow. 
The premises could support a very different conclusion&#8212;one in which technical capacity is developed under democratic accountability, in which the ethical debates the manifesto calls theatrical are understood as the very mechanism by which a free society maintains control over its instruments of power, in which the &#8220;debt&#8221; to the country is repaid through transparency and restraint rather than through the manufacture of ever more effective targeting systems.</p><p>The manifesto&#8217;s authors know this. They wrote around it. The question is whether we will let them.</p><div><hr></div><h2>The Last Line</h2><p>&#8220;The republic is left with a significant roster of ineffectual, empty vessels whose ambition one would forgive if there were any genuine belief structure lurking within.&#8221;</p><p>This is Karp and Zamiska on the quality of American public servants. It is contemptuous in a way that, in a less polished document, would read as rage.</p><p>I find I agree with the sentence. I disagree with its intended targets.</p><p>The ineffectual empty vessels with insufficient belief structures are not the public servants who refused to build weapons. They are not the engineers who asked whether they should before they asked whether they could. They are not the citizens who looked to politics to nourish something in themselves and were told to stay in their lane.</p><p>The problem with genuine belief is that it imposes obligations. It means being accountable to something larger than the manifesto you published on a Saturday. It means the ethics are not theatrical. It means the debt runs in more directions than down.</p><p>Karp and Zamiska believe in hard power. They believe in American strength. They believe in the obligation of technical elites to serve national purpose. 
They have built a company that embodies these beliefs and made themselves very wealthy in the process.</p><p>What they do not believe in&#8212;what the bootlicking manifesto&#8217;s 318 words systematically exclude&#8212;is accountability to the people the tools touch. The communities surveilled. The bodies targeted. The cultures graded and found regressive. The ordinary citizens whose political investments are characterized as pathetic while their physical conscription is proposed as necessary.</p><p>That is not a belief structure. That is a business model wearing a belief structure as a costume.</p><p>The republic deserves better than costumes. So do its people.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)). His research on algorithmic systems, AI ethics, and platform accountability is published at bear.musinique.com, skepticism.ai, and theorist.ai.</em></p><div><hr></div><p><strong>Tags:</strong> Palantir Technological Republic critique, AI weapons ethics Silicon Valley, conscription tech manifesto, surveillance capitalism accountability, Alexander Karp national service obligation</p><p></p>]]></content:encoded></item><item><title><![CDATA[The Inheritance We Never Examined]]></title><description><![CDATA[How Skinner&#8217;s Teaching Machine Still Grades Your Children&#8217;s Software]]></description><link>https://www.skepticism.ai/p/the-inheritance-we-never-examined</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-inheritance-we-never-examined</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Mon, 20 Apr 2026 18:42:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zZrL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zZrL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zZrL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zZrL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1878068,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/194830729?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zZrL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!zZrL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af82244-4500-483f-a5a5-7d30bf18480e_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><p>There is a machine in every classroom now, and it measures what it has always measured. The name on the box changes &#8212; Duolingo, Khanmigo, i-Ready, DreamBox &#8212; but what the box counts has remained, across seventy years of silicon and software and venture capital and neuroscience, almost perfectly stable. Accuracy per item. Time per response. Progression through atomized units. Performance on the test the system was built to prepare you for.</p><p>B.F. Skinner named these measurements in 1958. He had a good reason.</p><p>He had observed his daughter&#8217;s fourth-grade arithmetic class and been, in his own word, shocked. Students completed problems and waited. The papers were collected. Perhaps two days later, perhaps a week, the marked papers returned. 
By the time the feedback arrived, the behavior it was meant to reinforce had already moved on, taken up residence in some adjacent habit of mind that was no longer the one in need of correction. Skinner believed he understood the mechanism of learning better than anyone alive &#8212; the contingencies of reinforcement, the precise timing of feedback, the accumulation of correctly shaped behavior into competence &#8212; and what he had watched in that classroom was the systematic breaking of every mechanism he understood. A technology that could restore the contingencies, he reasoned, would be a technology that could teach.</p><p>His teaching machine presented material one frame at a time. The student responded. The machine verified, immediately, whether the response was correct. The contingencies were repaired.</p><p>What I am asking you to notice is not that this was wrong. I am asking you to notice what the machine measured &#8212; accuracy per frame, time per response, progression, error patterns &#8212; and to hold those measurements in mind as we trace them forward through sixty-six years of educational technology that kept the apparatus while abandoning almost everything else about Skinner&#8217;s framework.</p><div><hr></div><h2>What the Machine Could Not See</h2><p>The teaching machine could not look up from the immediate interaction to ask what the student would remember in six months.</p><p>This is not a glancing criticism of Skinner. His behavioral framework did not require him to ask the question; the question was not yet a question the field had organized itself to ask in the precise way that Bjork and Bjork&#8217;s subsequent research would demand. Skinner&#8217;s science was about the shaping of behavior through reinforcement, and a behavior that could be elicited at the moment of measurement had been shaped. 
That the behavior might dissolve in the absence of the reinforcing conditions was not, within behaviorism, a separate problem requiring separate measurement. Generalization was expected to follow naturally.</p><p>But this is where the inheritance turns costly. The assumption that immediate performance predicts durable learning was embedded in the measurement apparatus before it was tested empirically. By the time Robert and Elizabeth Bjork&#8217;s work made the distinction between retrieval strength and storage strength unavoidable &#8212; by the time it was clear, empirically, that the conditions maximizing immediate performance (massed practice, aligned testing, minimal difficulty) could actively impair long-term retention &#8212; the measurement apparatus had already been handed down through Patrick Suppes&#8217;s 1960s computer-assisted instruction and was settling into the bones of the field.</p><p>Suppes&#8217;s system at Stanford presented arithmetic problems to elementary students and recorded what Skinner&#8217;s machine had recorded: accuracy rates, response times, error patterns, progression. The technology shifted from mechanical device to mainframe computer. The measurements did not shift. Accuracy rose from 53 percent to over 90 percent. Response times fell from 630 seconds to 279. Suppes reported these numbers as evidence the system worked, and within the apparatus he had inherited, they were. He was not wrong to report them. 
He was working inside a set of choices about what evidence looked like that the apparatus had bequeathed him without flagging as choices.</p><p>The question of what those 90-percent-accurate students could do two years later was not asked.</p><div><hr></div><h2>The Apparatus Becomes Theory</h2><p>Here is what makes the inheritance pattern strange rather than simply historical: the apparatus persisted past the abandonment of the theoretical framework that had justified it.</p><p>John Anderson&#8217;s Cognitive Tutor, developed in the 1980s and 1990s at Carnegie Mellon, was built on ACT-R theory &#8212; a cognitive-psychological architecture that treated learning as the acquisition of production rules rather than the shaping of behavior. Theoretically, this was a departure from Skinner significant enough to constitute a revolution. The language of reinforcement was replaced by the language of cognition. The unit of analysis shifted from the frame to the production rule.</p><p>The measurement apparatus did not shift.</p><p>The Cognitive Tutor recorded step-level correctness &#8212; whether each student action matched one of the production rules the cognitive model identified as correct. It recorded time per step. It recorded hint requests, error patterns, estimated mastery of each production rule through Bayesian knowledge tracing. When Anderson and colleagues published their foundational 1995 paper in the <em>Journal of the Learning Sciences</em>, the evidence they offered that the system worked was: step-level accuracy, progression, and post-test performance on assessments aligned with the content the tutor had taught.</p><p>Skinner&#8217;s apparatus, operating at higher resolution, within a more sophisticated theoretical framework, carrying new vocabulary.</p><p>Anderson and colleagues were, I want to say this plainly, more honest about the limits of their measurements than most of the researchers who cited them. 
The 1995 paper notes explicitly that students &#8220;display transfer to the degree that they can map the tutor environment into the test environment&#8221; &#8212; an acknowledgment that the evidence of learning the system could produce depended on the degree to which the post-test resembled the tutor&#8217;s own format. This is the measurement-alignment problem stated with precision by the researchers who built the system it applied to. The acknowledgment was there. What happened subsequently was that the effect sizes from aligned post-tests entered the literature as if Anderson&#8217;s own caveat had not been published alongside them.</p><p>The apparatus inherits even what its originators flagged as provisional.</p><div><hr></div><h2>The Industrial Turn</h2><p>The 2010s commercial adaptive-learning era &#8212; Knewton, DreamBox, i-Ready, ALEKS &#8212; represents the point at which the inherited apparatus became an industry standard.</p><p>Knewton&#8217;s Jos&#233; Ferreira, during the 2012-2015 period of the platform&#8217;s public prominence, positioned his technology as capable of personalization so granular that it would transform education at scale. The claim invoked the Suppes promise in the language of twenty-first-century data science. What the platform actually measured was behavioral engagement data: which problems students attempted, which hints they took, how their patterns of interaction with the system correlated with eventual performance on the system&#8217;s own assessments. Independent efficacy research on Knewton was, during the period of its most expansive claims, notably absent. 
The apparatus was present in the measurement choices; the evidence was not.</p><p>DreamBox Learning, which earned more research attention than most adaptive platforms, became the subject of a 2016 Harvard Center for Education Policy Research study that found students at the median gained 1.4 to 3.9 percentile points on the NWEA MAP for approximately 7 to 8 hours of DreamBox usage. The researchers were transparent about a critical limitation: DreamBox usage might &#8220;partially reflect students&#8217; motivation levels,&#8221; meaning the correlation between usage and achievement might reflect that motivated students both use DreamBox more and learn more, independent of DreamBox&#8217;s instructional contribution. The acknowledgment, honest and specific, appeared in the paper. It rarely appeared in the citations that followed.</p><p>i-Ready produced a particularly clarifying version of the apparatus&#8217;s internal logic. The platform&#8217;s efficacy research typically demonstrated that students who achieved &#8220;usage fidelity&#8221; &#8212; meeting the system&#8217;s recommended weekly engagement minutes &#8212; showed higher scores on the i-Ready Diagnostic. The Diagnostic was itself calibrated to predict state test performance. A system measuring how well students learn to do well on the assessment the system provides, where the assessment was engineered to predict the external standard &#8212; this is the apparatus become recursive. The alignment between instruction and measurement, which Skinner had simply taken as a natural feature of teaching a student the specific behavior you then measured, had been engineered into the product design itself. The inheritance was now embedded in the commercial structure.</p><p>ALEKS routed the apparatus through Knowledge Space Theory, a mathematical framework for mapping curricular competencies that provided sophisticated theoretical grounding for the same fundamental measurement choices. 
Efficacy claims rested on performance within the system&#8217;s own knowledge mapping and on aligned post-tests that measured progression through the curricular content the system taught. The theoretical vocabulary was different from Skinner&#8217;s. The measurement choices were the same.</p><div><hr></div><h2>Duolingo, 2021</h2><p>I want to read a specific study carefully, because careful reading is the point.</p><p><em>Evaluating the reading and listening outcomes of beginning-level Duolingo courses</em>, by Xiangying Jiang, Joseph Rollinson, Luke Plonsky, Erin Gustafson, and Bozena Pajak, published in <em>Foreign Language Annals</em> in 2021. Plonsky is an academic researcher at Northern Arizona University specializing in applied linguistics; the other four authors were employed by Duolingo at the time of publication. The paper is peer-reviewed. It is cited in Duolingo&#8217;s own marketing materials. It is, within the conventions of the field, a careful study.</p><p>Two hundred and twenty-five adults in the United States &#8212; 135 studying Spanish, 90 studying French. Participants were required to have little to no prior proficiency in their target language, to be using Duolingo as their only learning tool, and &#8212; the consequential criterion &#8212; to have completed the beginning-level course content through Unit 4. The sample, the paper reports, skewed toward highly educated Caucasian Americans with bachelor&#8217;s or master&#8217;s degrees.</p><p>The outcome measure was the STAMP 4S test from Avant Assessment, covering reading and listening. Thirty multiple-choice items in each modality. 
The assessment was administered immediately after learners completed the beginning-level content.</p><p>The finding: Duolingo learners reached ACTFL Intermediate Low in reading and Novice High in listening &#8212; levels the paper characterizes as &#8220;comparable with those of university students at the end of the fourth semester&#8221; of college-level language study.</p><p>Now apply the apparatus.</p><p>The outcome measure is external &#8212; not designed by Duolingo, which is a genuine methodological improvement over purely internal assessment. But reading and listening are the specific modalities that Duolingo&#8217;s interface is engineered around. Multiple-choice comprehension items, translation tasks, listening exercises with multiple-choice responses: these are what Duolingo builds, and these are what the STAMP 4S measures. Speaking and writing &#8212; modalities that Duolingo&#8217;s app-based format supports weakly &#8212; are explicitly excluded from the study. The assessment is external. The choice of which aspects of language proficiency to measure is not.</p><p>The timescale: the post-test was administered immediately after course completion. There is no delayed assessment. Bjork&#8217;s distinction between retrieval strength and storage strength is directly relevant &#8212; the STAMP 4S scores reflect what Duolingo users can do at the moment they finish the course, not what they can do when they have been away from the app for six months. This question is not asked.</p><p>The population: only learners who completed the beginning-level content. Most Duolingo users do not. The platform&#8217;s attrition is substantial; most people who download the app never reach the end of the beginning-level material. The study measures the performance of survivors. What 100 people who finished the course achieved is a different finding from what 100 people who started it achieved. The paper is transparent about this selection. 
The subsequent framing of the findings &#8212; in the paper&#8217;s own conclusion and, more aggressively, in Duolingo&#8217;s marketing &#8212; as <em>Duolingo users reach Intermediate Low</em> does not preserve the completion-threshold restriction.</p><p>The baseline: a historical comparison. University students at the end of the fourth semester. There is no contemporaneous control group of comparable adults who spent equivalent time on a different learning approach. The two populations were measured in different conditions, at different times, possibly with different motivations and starting points. The <em>comparable to four semesters</em> claim treats them as if they had been measured equivalently.</p><p>The cost: not reported. Duolingo is free at its base tier, which is rhetorically powerful &#8212; free app comparable to paid college course &#8212; but the comparison elides the substantial time investment Duolingo users make. The paper does not ask what equivalent time investment in human-tutored instruction, structured self-study, or an immersive experience would produce. The cost denominator, which is constitutive of what a comparative claim actually supports, is absent.</p><p>I am not saying the study is dishonest. I am saying that each of these specific measurement choices &#8212; aligned-modality outcome, immediate timescale, survivor population, historical baseline, absent cost denominator &#8212; is traceable, in structure, to the apparatus Skinner initiated in 1958. The study is careful within conventions it has inherited. The conventions themselves are what require examination.</p><div><hr></div><h2>The Alternatives Have Always Existed</h2><p>This is what I want you to sit with: the apparatus did not persist in the absence of alternatives. It persisted alongside them.</p><p>Edward Thorndike established in 1906 and 1924 that improvement in one mental function rarely produces general improvement in others unless the two share identical elements. 
The methodological implication &#8212; that learning gains must be tested outside the conditions of the intervention, in contexts structurally different from training, to establish what the training actually produced &#8212; was available to the field for the entire history of educational technology. It has been occasionally adopted, routinely praised, and treated as aspirational rather than as the baseline standard that Thorndike&#8217;s own work suggested it should be.</p><p>The Bjorks&#8217; work on storage strength versus retrieval strength, canonical since the early 1990s, established empirically that the conditions maximizing immediate performance can impair durable retention. The specific implication &#8212; that a delayed post-test is required to distinguish performance from learning &#8212; has been in the learning sciences literature for over thirty years. Its adoption in educational technology efficacy research as standard practice has not happened.</p><p>Bransford, Brown, and Cocking&#8217;s <em>How People Learn</em>, the 1999 National Academies synthesis, argued explicitly that assessment should tap understanding rather than the ability to repeat facts. The argument was widely read, widely cited, and narrowly operationalized.</p><p>Samuel Messick&#8217;s theory of validity, developed across decades and codified in the 1989 <em>Educational Measurement</em> volume, specified that a test score&#8217;s interpretation requires examination of construct-relevant versus construct-irrelevant variance, construct underrepresentation, and the consequences of the test&#8217;s use. Applied rigorously, Messick&#8217;s framework would require educational technology efficacy research to examine what its outcome measures actually index rather than assuming that performance-on-aligned-items equals evidence-of-learning. The framework has been the theoretical standard in measurement theory for over thirty years.</p><p>These alternatives were not hidden. 
They were taught in graduate programs, cited in methods sections, present in the same journals that published the aligned-outcome studies. What did not happen, across six decades of technology change, was their adoption as the field&#8217;s measurement standard. The inherited apparatus &#8212; aligned outcomes at immediate timescale, survivor population, weak baseline, absent cost denominator &#8212; remained dominant. The alternatives remained alternative.</p><p>This is not a story about intellectual failure. It is a story about what happens when a theoretical commitment gets flattened into a methodological convention. Skinner had reasons for his measurement choices that were grounded in a coherent behavioral science. When the field moved past behavioral science &#8212; when Suppes and Anderson and everyone who followed adopted different theoretical frameworks &#8212; the measurement choices did not travel with the theory that had justified them. They traveled alone, as conventions, as what evidence looked like, as the unexamined default.</p><p>The apparatus became invisible by becoming obvious. And invisible apparatus is the most durable kind.</p><div><hr></div><h2>The Current Wave</h2><p>The contemporary AI-tutor literature &#8212; Khanmigo, Kestin and colleagues&#8217; 2024 Harvard physics study, Eedi with Google Research, Rori in Ghana &#8212; inherits the apparatus in its turn, with variation worth noting.</p><p>Khanmigo&#8217;s evaluation evidence has rested primarily on engagement metrics and performance within Khan Academy&#8217;s own internal assessment structures. What has been measured at scale is usage patterns; what has been claimed is educational transformation; what has not been established at the level of rigorous efficacy research is learning gains on independent standardized measures at delayed timescales with cost-inclusive reporting. The characteristic gaps of the apparatus are present.</p><p>The Kestin et al. 
2024 Harvard physics study &#8212; AI-tutored instruction versus a single session of active-learning classroom instruction &#8212; reported effect sizes of 0.73 to 1.3 sigma on researcher-designed post-tests covering surface tension and fluid flow, the specific content the two-hour intervention taught, assessed shortly after the intervention. The measurement choices are the apparatus&#8217;s measurement choices. The effect sizes are real within those choices. What they establish about learning is bounded by what those choices can establish.</p><p>Eedi with Google Research 2025 introduced transfer testing &#8212; measuring performance on novel problems from subsequent topics rather than problems aligned with what the intervention taught. This is a genuine departure from the inherited convention. The N of 165 remains small and the single-term duration short relative to what durability research would require, but the outcome measure itself represents the kind of revision the apparatus needs rather than another inheritance of it. This is a credit to the researchers who chose to build the study that way.</p><p>Rori in Ghana used an external curriculum-aligned assessment over eight months and reported cost at $5 per student per year. The longer timescale, the external measure, the explicit cost denominator &#8212; these are partial revisions of the apparatus in the direction the field has needed for six decades. The pattern is: when researchers choose to work against the inherited conventions, the field moves. The field moves rarely, because the inherited conventions are the default, because departures from them require additional effort and often smaller effect sizes and sometimes no significant effect at all, which is a kind of finding that is harder to publish than 0.73 sigma.</p><p>The apparatus has not been reformed. It has been revised in specific instances by specific researchers.
The instances are the exceptions that make the pattern visible.</p><div><hr></div><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)). His research on educational AI efficacy appears at <a href="https://hypotheticalai.substack.com">hypotheticalai.substack.com</a>. | <a href="https://skepticism.ai">skepticism.ai</a> | <a href="https://theorist.ai">theorist.ai</a></em></p><div><hr></div><p><strong>Tags:</strong> educational technology measurement apparatus, Skinner teaching machine inheritance, Duolingo efficacy research critique, aligned outcome EdTech validity, learning science transfer testing history</p>]]></content:encoded></item><item><title><![CDATA[The Artifact Was Once Enough]]></title><description><![CDATA[This essay is a response to Lila Shroff's "Is Schoolwork Optional Now?" published in The Atlantic on April 10, 2026.]]></description><link>https://www.skepticism.ai/p/the-artifact-was-once-enough</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-artifact-was-once-enough</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sat, 11 Apr 2026 04:47:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6EVu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6EVu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!6EVu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 424w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 848w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 1272w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6EVu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png" width="1456" height="803" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:803,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3143432,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/193858776?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6EVu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 424w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 848w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 1272w, https://substackcdn.com/image/fetch/$s_!6EVu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61c4d77-310b-4950-b6ad-d209533eb3c3_3146x1734.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This essay is a response to Lila Shroff&#8217;s &#8220;<a href="https://www.theatlantic.com/technology/2026/04/ai-agents-school-education/686754/">Is Schoolwork Optional Now?</a>&#8220; published in The Atlantic on April 10, 2026. The argument it makes in full is developed in the preprint &#8220;<a href="https://www.nikbearbrown.com/notes/Frictional/frictional">Frictional: Measuring the Struggle</a>&#8220; at <a href="https://www.irreducibly.xyz/">irreducibly.xyz</a>.</em></p><div><hr></div><p>There is a word &#8212; <em>decoupling</em> &#8212; that sounds technical enough to keep us comfortable. Clinical. As if what has happened in classrooms since 2022 is primarily a logistics problem, a puzzle about detection and enforcement, a cat-and-mouse game that the right algorithm might someday win.</p><p>It is not that.</p><p>What has happened is something more fundamental than cheating at scale. The artifact &#8212; the essay, the proof, the lab report &#8212; used to be evidence of a process. The process was the point. The essay was proof that thinking had occurred, that a mind had engaged with difficulty and emerged changed. When we graded the essay, we were really grading the encounter: the hours of confusion, the drafts that failed, the moment when something clicked and then had to be organized into sentences for another person. The artifact was the residue of all that. It was upstream evidence of downstream consequence.</p><p>Generative AI has broken the causal chain. 
Not bent it &#8212; broken it.</p><p>A bot called Einstein, built by a 22-year-old entrepreneur named Advait Paliwal, recently completed all eight modules and seven quizzes of an introductory statistics course in under an hour. Perfect score. The human who set it loose reports that she &#8220;hardly so much as read the course website.&#8221; What Einstein produced &#8212; the evidence that a course had been completed &#8212; was real. The learning it was supposed to represent did not occur. The artifact existed. The process that should have produced it did not happen.</p><p>Paliwal says he released the tool to alert educators. His more honest statement is buried in the subtext: &#8220;If I didn&#8217;t post about this, someone would have used the same technology and hidden it from the professors.&#8221; He is right. He is also describing a world in which the distinction between using it secretly and not using it at all is narrowing toward irrelevance. The tool exists. The temptation exists. The economic pressure on students &#8212; especially international students, especially students working jobs to pay tuition, especially students in courses they are taking to satisfy requirements rather than from genuine interest &#8212; those pressures exist independently of any single tool.</p><p>The institutional response has been to build better detectors. This is a reasonable first move. It is not a durable one.</p><div><hr></div><h2>Why Detection Cannot Save Us</h2><p>Here is the structural problem with artifact-based AI detection: the arms race has a predetermined winner. Detection is always trained on the outputs of current generation technology. Generation technology improves continuously. The detector trained on today&#8217;s AI writing fails on tomorrow&#8217;s &#8212; not because detectors are poorly built, but because that is how the mathematics of the problem works. The forensic window closes.</p><p>There is a deeper problem. 
The educationally relevant question was never <em>did a human type these words</em>. It was <em>did a human develop this understanding</em>. A student who dictated an essay to a transcriptionist and then submitted it word-for-word would have technically written no AI content. The essay would pass every detector. The learning would have occurred or not occurred based on whether they thought hard while dictating, not based on who typed it. The detector is solving the wrong problem.</p><p>And there is a third problem, the one that produces the most corrosive outcomes. When you build a system to catch AI use, you teach students to game the detector. They learn strategies for mimicking authentic writing &#8212; inserting typos, varying sentence structure, using phrases the model knows sound &#8220;human.&#8221; The simulation improves. The gap between simulated engagement and genuine engagement widens at precisely the moment we need it to narrow.</p><p>William Liu, a Stanford sophomore who finished high school two years ago, puts it plainly: his educational experience and his younger sibling&#8217;s are vastly different despite a two-year gap. The technology arrived. The classroom has not yet figured out what to do next.</p><div><hr></div><h2>What Genuine Learning Actually Leaves Behind</h2><p>Here is the thing we have been too polite to say: learning is not the same as performance.</p><p>Robert Bjork has been saying this for thirty years in academic papers that educators read and administrators do not read and curriculum designers read and then ignore when the calendar pressure comes. Performance is the observable, often temporary thing &#8212; how well a student does on a measure. Learning is the durable change in what the student can do and understand and transfer to a new context. These two things are not the same. We have built an entire institutional infrastructure that measures only one of them.</p><p>Genuine human learning is a biological event. 
When a learner encounters material that genuinely challenges their current understanding &#8212; material in that productive zone where their current model is wrong or incomplete &#8212; something specific happens neurologically. Dopamine neurons fire in response to prediction errors. BDNF expression upregulates, sometimes by nearly three times. New dendritic spines form at the synaptic connections that will hold the memory. These are not metaphors. They are the physical substrate of the thing we call learning.</p><p>The behavioral consequences of these neurological events are traceable. A student engaged in genuine cognitive struggle spends time proportional to difficulty. Their errors follow a coherent developmental path &#8212; misconceptions that make sense given their current model, corrections that build on each other. When tested in a new context, they can transfer. When scaffolded with a partial hint, they respond &#8212; because there is a partially formed structure for the hint to connect to. Their confidence, over time, calibrates to their actual performance rather than inheriting the confidence of the AI explanation they processed.</p><p>These are what I have been calling <em>friction traces</em> &#8212; the behavioral signatures that genuine human cognitive engagement leaves in observable data. They exist because genuine learning is a biological event. An AI can produce the artifact without triggering any of these neurological events. It cannot produce the behavioral traces, because the biological events that generate those traces did not occur.</p><div><hr></div><h2>The Seven Things We Can Now Measure</h2><p>The Genuine Learning Probability framework I have been developing with Humanitarians AI specifies seven such traces:</p><p>The <em>temporal engagement pattern</em> &#8212; the correlation between how hard an item is and how long a student spends on it. Genuine engagement produces this correlation. 
AI-assisted completion decouples time from difficulty.</p><p>The <em>error trajectory</em> &#8212; whether a student&#8217;s mistakes follow conceptually coherent developmental paths. Genuine learning produces coherent errors; the reward prediction error mechanism drives revision toward better models in patterned ways. Borrowed certainty produces random errors with respect to conceptual structure.</p><p><em>Cross-context transfer</em> &#8212; the Bjorkian definition of learning. A student who genuinely understood something can apply it in novel contexts. Borrowed certainty produces surface representations tied to the specific context of the AI explanation.</p><p><em>Uncertainty calibration</em> &#8212; whether a student&#8217;s expressed confidence tracks their actual performance. Borrowed certainty produces systematic overconfidence: the student inherits the AI&#8217;s confidence distribution without the knowledge base that would justify it.</p><p><em>Social knowledge texture</em> &#8212; the quality of a student&#8217;s engagement in discussion contexts. Genuine encounter with material leaves a characteristic texture: specific confusions, particular connections, the specific questions that arose from actual engagement. This texture cannot be manufactured without having had the encounter.</p><p>The <em>retrieval strength decay signature</em> &#8212; whether performance decays at rates consistent with genuine encoding. The spacing effect is the benchmark of genuine learning. Borrowed certainty has no storage strength to retrieve; performance decays monotonically and the spacing effect does not appear.</p><p>And the <em>scaffolding response curve</em> &#8212; whether a student&#8217;s performance responds appropriately to partial hints. A student with genuine partial understanding has a zone of proximal development. A partial hint activates the structure that is already forming.
Borrowed certainty has no such zone.</p><div><hr></div><h2>What the Bot Cannot Manufacture</h2><p>Here is the argument I want to make carefully, because it is often misunderstood: this framework is not about catching AI use. It is about measuring learning directly.</p><p>An AI detector fails when AI outputs become indistinguishable from human outputs. A learning measure fails when borrowed certainty becomes indistinguishable from genuine learning &#8212; which would require borrowed certainty to produce the same neurobiological events, the same schema formation, the same durable transfer. At that point, borrowed certainty has become learning. That is not AI defeating assessment. That is learning occurring through a different pathway than we expected.</p><p>What manufacturing all seven friction traces simultaneously &#8212; without performing the underlying cognitive work &#8212; actually requires is something close to performing the underlying cognitive work. A student who spends genuine time on difficult material, who makes and corrects errors in a conceptually coherent sequence, who demonstrates transfer across novel contexts, who maintains calibrated uncertainty, who engages with genuine texture in discussion, who shows the spacing effect across weeks, and who responds appropriately to partial hints &#8212; has learned the material. At that point the game has become indistinguishable from the thing we wanted in the first place.</p><p>Natalie Lahr, a Barnard sophomore studying history and political science, describes an &#8220;anti-AI radicalizing&#8221; experience: a tutor at the writing center pasted her essay prompt into Perplexity and handed her the AI-generated outline. &#8220;Why am I even here?&#8221; she asked afterward. The question is not rhetorical. It is the correct question.</p><div><hr></div><h2>What We Must Build Instead</h2><p>The crisis of evidence facing educational institutions is not a technical problem. It is an epistemological problem. 
The evidence infrastructure we built assumed a world in which the artifact was upstream evidence of the process. That world no longer reliably exists.</p><p>What we need is an assessment infrastructure built on the process itself.</p><p>This means longitudinal process documentation &#8212; portfolios that capture the history of engagement, not just its products. It means embedded formative assessment that generates the data necessary to observe the seven friction traces over time. It means treating developmental trajectory as evidence: not what a student produced, but how their understanding developed, what they got wrong and corrected and why, where they transferred and where they didn&#8217;t.</p><p>Marc Watkins at the University of Mississippi describes an instructor who could, theoretically, set an AI to grade thirty essays during a fifteen-minute walk to Starbucks. He calls this &#8220;really scary.&#8221; He is right, but I want to be precise about why. The fear is not the efficiency. It is the loop: AI-generated assignments completed and assessed by AI agents, with human understanding nowhere in the chain. The fully automated loop is not a future dystopia. It is the logical endpoint of current trajectories. Einstein completes the course. The grader grades Einstein&#8217;s work. Both certificate and grade are real. The learning did not occur.</p><p>The artifact was once enough. It is no longer enough. The arms race between generation and detection has a winner, and it is not the detector.</p><p>We must now measure the struggle itself. Not because friction is intrinsically valuable &#8212; productive struggle matters only because of what it builds in the brain that does the struggling. 
We must measure it because the brain that struggles is the brain that learns, and the brain that learns is the only thing education was ever actually for.</p><p>The methodology is developed in full in &#8220;<a href="https://www.nikbearbrown.com/notes/Frictional/frictional">Frictional: Measuring the Struggle</a>&#8220; &#8212; a preprint specifying the seven friction components, the ensemble architecture, and the tier calibration system &#8212; and at <a href="https://www.irreducibly.xyz/">irreducibly.xyz</a>. The framework is not a secret.</p><div><hr></div><p><em>Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Humanitarians AI (501(c)(3)).</em><br><em>bear.musinique.com &#183; skepticism.ai &#183; theorist.ai</em></p><div><hr></div><p><strong>Tags:</strong> AI detection education failure, genuine learning probability framework, friction traces assessment, Bjork performance vs learning, Einstein bot Canvas schoolwork automation</p>]]></content:encoded></item><item><title><![CDATA[The Loop That Watches Itself]]></title><description><![CDATA[On OpenAI's Automated Researcher and the Profession It Forgot to Invent]]></description><link>https://www.skepticism.ai/p/the-loop-that-watches-itself</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-loop-that-watches-itself</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Fri, 10 Apr 2026 04:00:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vb9O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!vb9O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vb9O!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 424w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 848w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 1272w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vb9O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png" width="1456" height="669" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:669,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1097418,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/193760173?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vb9O!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 424w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 848w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 1272w, https://substackcdn.com/image/fetch/$s_!vb9O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a5a518-2499-4594-aeda-a98c67ca4743_3376x1552.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Jakub Pachocki has a timeline. By September, OpenAI plans to deploy what it calls an AI research intern &#8212; a system that can work on a specific problem for the length of time a person would need days to resolve. By 2028, the full version: a multi-agent system capable of running research programs too large for humans to manage. Drug discovery. Novel proofs. Problems &#8220;formulated in text, code, or whiteboard scribbles.&#8221;</p><p>The vision is coherent. More than most in this field, it is operationally specific. And it contains a foundational error that no amount of scaling will fix.</p><p>The error isn&#8217;t technical. It&#8217;s logical.</p><h2>The Scratch Pad That Watches Itself</h2><p>Pachocki is candid about the risks. 
A system this powerful could go off the rails, get hacked, or simply misunderstand its instructions. His proposed solution is chain-of-thought monitoring &#8212; training reasoning models to externalize their work into a kind of scratch pad, then using other AI systems to watch those scratch pads for anomalous behavior.</p><p>This is not oversight. It is the appearance of oversight, implemented entirely inside the loop it was supposed to close.</p><p>Sixty years before anyone worried about AI safety, Kurt G&#246;del established something directly relevant. No consistent formal system powerful enough to express arithmetic can prove its own consistency from within itself. Any sufficiently capable system will generate statements it cannot evaluate using only its own rules &#8212; truths it can approach but not recognize as true through internal derivation alone.</p><p>Apply this to Pachocki&#8217;s architecture. The AI researcher derives. Chain-of-thought monitoring by another AI system is more derivation. What is structurally absent is recognition &#8212; the moment of contact between a formal output and an external reality. That moment cannot be replicated by adding another layer of derivation on top.</p><p>This is not a philosophical objection. It is a logical one. The validator must be outside the system being validated. There is no version of this argument that resolves in favor of AI systems self-monitoring.</p><h2>The Proof Candidate Problem</h2><p>What an AI system produces when it generates a novel mathematical proof is not a proof. It is a proof candidate &#8212; a string of symbols following valid inference rules that may or may not establish something true.</p><p>The distinction is not semantic. A proof in the full sense is a social and epistemic act. It is what a mathematical community recognizes as establishing truth. 
Remove the recognition and you have a sophisticated computation that has no relationship to truth except statistical proximity.</p><p>The same structure applies to every domain Pachocki names.</p><p>A novel molecule with predicted therapeutic properties is not a drug. It is a candidate. The drug trial process &#8212; Phase I, Phase II, Phase III, post-market surveillance &#8212; exists precisely because we have learned, through catastrophic experience, that prediction and reality are different things and the gap between them kills people. Thalidomide. Vioxx. The graveyard of promising compounds that passed every computational test and failed in bodies.</p><p>As AI systems generate increasingly sophisticated candidates across more domains, the need for rigorous external validation does not decrease. It increases. The more sophisticated the output, the harder it is to catch the subtle error buried in ten thousand valid steps. A wrong answer that looks wrong is easy to reject. A wrong answer that looks right for nine thousand nine hundred and ninety-nine steps requires something the internal system cannot provide: an independent perspective.</p><h2>Common Cause Failure</h2><p>There is a concept in safety engineering called common cause failure. It describes what happens when two redundant systems share the same fundamental assumptions &#8212; the thing most likely to fool System A is also most likely to fool System B, because both were built on the same foundation.</p><p>Pachocki&#8217;s monitoring architecture is a common cause failure risk by design. If the system being monitored can produce subtly wrong outputs that look correct, the monitoring system trained on similar data with similar architecture will have correlated blind spots. You have not introduced an independent check. 
You have introduced a correlated one.</p><p>Every high-stakes validation system humans have built &#8212; clinical trials, aircraft certification, nuclear safety, financial auditing &#8212; depends on something genuinely outside. Not because humans are infallible. Because humans are the only validators who face consequences when wrong. The FDA reviewer whose approval leads to harm is accountable in ways that a monitoring LLM is not and cannot be.</p><p>Accountability is not a luxury feature of validation systems. It is load-bearing. Remove it and the system loses the incentive structure that makes rigorous checking worth doing.</p><h2>Stakes as the Organizing Principle</h2><p>None of this means AI systems cannot contribute to research. They already do. The question is not whether to deploy them. The question is which level of external validation each deployment requires.</p><p>This maps onto a natural taxonomy organized by stakes.</p><p>For low-stakes, reversible outputs &#8212; a song recommendation, a draft email, a code snippet that will be reviewed before deployment &#8212; AI can largely run with minimal human oversight. The cost of failure is low and recoverable.</p><p>For moderate-stakes, partially recoverable outputs &#8212; a business analysis, a research summary, an engineering specification &#8212; systematic human review at checkpoints is appropriate. The human does not need to be in the loop constantly, but must be able to catch errors before they compound.</p><p>For high-stakes, irreversible outputs &#8212; drug candidates, structural engineering recommendations, policy analysis that will drive consequential decisions, mathematical proofs that will be published as established results &#8212; continuous human oversight is not incidental to the output&#8217;s validity. It is constitutive of it.</p><p>The drug trial architecture already encodes this wisdom. 
It was not built for AI, but it is exactly the right framework for AI-assisted research in high-stakes domains. The humans do not disappear as system confidence grows. They shift function &#8212; from intensive validation to ongoing monitoring, from checking every step to catching systematic drift. This is not a concession to human limitation. It is a recognition that the system&#8217;s credibility requires external accountability at every stage.</p><h2>The Profession Pachocki Forgot to Invent</h2><p>What emerges from this analysis is not only a procedural requirement for human oversight. It is the outline of a new profession.</p><p>A plausibility auditor is not a fact-checker. Not a quality assurance technician. Not a safety researcher who looks for misaligned objectives in training runs. A plausibility auditor is someone trained specifically to stand outside sophisticated AI outputs and ask whether those outputs correspond to reality rather than merely to internal consistency.</p><p>This requires two distinct forms of expertise that current training pipelines do not produce together.</p><p>The first is deep domain knowledge &#8212; enough expertise to recognize when a result is too clean, suspiciously convergent, subtly wrong in the way that only an expert in the specific domain would catch. The AI system that generates a novel proof in algebraic geometry needs to be reviewed by someone who has spent years in algebraic geometry, not by a generalist AI safety researcher who can evaluate the logical structure of the output but cannot evaluate its mathematical significance.</p><p>The second is knowledge of AI failure modes, which differ fundamentally from human error patterns. Human errors cluster around cognitive bias, motivated reasoning, fatigue, and the known weaknesses of intuition under uncertainty. 
AI errors cluster around distribution shift, spurious correlations that held in training data, confident extrapolation beyond the valid range of the model, and &#8212; most dangerously &#8212; systematic errors that look like high-quality outputs because they were trained on a corpus where high-quality outputs had certain structural characteristics. Auditing AI outputs requires knowing which kind of error you are hunting.</p><p>The training pipeline for plausibility auditors looks nothing like current AI safety work. It looks more like producing people with genuine deep expertise in a specific domain who have additionally developed the metacognitive capacity &#8212; what Penrose, extending G&#246;del, might describe as the recognitional faculty &#8212; to evaluate outputs they could not themselves have produced. The auditor does not need to be able to generate the proof. The auditor needs to be able to recognize whether it is actually true.</p><p>This is not a concession to human limitation. The requirement for external validation is not a temporary scaffolding that will be removed once the systems mature. It follows directly from the logical structure of the problem. The validator must be outside the system being validated. This requirement does not disappear as systems become more sophisticated. If anything, it becomes harder to satisfy, because the auditor&#8217;s task grows more demanding as the outputs grow more complex.</p><h2>The Central Irony</h2><p>Pachocki&#8217;s automated researcher, if it works as described, will be the thing that finally creates the market for what it treats as unnecessary.</p><p>The more sophisticated the AI output, the harder the auditing task, the more valuable the human who can do it. OpenAI&#8217;s north star may be pointing directly at the profession it forgot to invent.</p><p>There is precedent for this dynamic. 
The industrialization of manufacturing did not eliminate the need for quality engineers &#8212; it made quality engineering a more demanding and more specialized discipline. The digitization of financial markets did not eliminate the need for auditors &#8212; it made financial auditing a more technically demanding field and produced an entire industry of forensic accountants whose value derives precisely from the complexity of what they are reviewing.</p><p>The automated researcher will produce more outputs of greater sophistication across more domains than any previous generation of scientific tools. Each of those outputs will be a candidate. Each candidate will require validation. The validation will require humans. Not because we cannot build systems smart enough to evaluate the outputs &#8212; we will almost certainly build systems with that capability. But because the evaluation&#8217;s credibility depends on the evaluator&#8217;s accountability, and accountability requires the possibility of consequence.</p><p>An AI system does not lose its job when it certifies a flawed drug candidate. A plausibility auditor does.</p><h2>What Governments Actually Need to Figure Out</h2><p>Pachocki acknowledges that the concentrated power implications of this technology are &#8220;a big challenge for governments to figure out.&#8221; He is right that governments need to be involved, and right that OpenAI alone cannot resolve the governance questions.</p><p>But the governance architecture he gestures toward does not yet exist, and the reason it does not exist is that the validation infrastructure that would make it functional has not been built. You cannot regulate AI research outputs if there is no institutionalized capacity to evaluate whether those outputs are trustworthy. 
Chain-of-thought monitoring provides the appearance of evaluability without the substance.</p><p>The question for 2028 &#8212; when Pachocki&#8217;s multi-agent research system is scheduled to arrive &#8212; is not only whether the system works. It is whether we have built, in parallel, the human capacity to stand outside the most powerful reasoning systems ever constructed and ask the oldest question in epistemology.</p><p>Is it actually true?</p><p>No algorithm answers that. Someone has to.</p><div><hr></div><p><em>bear.musinique.com &#183; skepticism.ai &#183; theorist.ai</em></p><p><strong>Tags:</strong> AI plausibility auditor, G&#246;del incompleteness AI oversight, OpenAI automated researcher chain-of-thought monitoring, common cause failure AI safety, high-stakes AI</p>]]></content:encoded></item><item><title><![CDATA[Brutalist.art - The "Beautiful.ai" that Educators Need]]></title><description><![CDATA[Talking to a slide deck through Claude code]]></description><link>https://www.skepticism.ai/p/brutalistart-the-beautifulai-that</link><guid isPermaLink="false">https://www.skepticism.ai/p/brutalistart-the-beautifulai-that</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Mon, 06 Apr 2026 00:49:13 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193305176/36365434e9ead489eec0094e88101873.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>The Slide Deck You Built Was Not for the Learner</strong></h3><h3><strong>It Was for You</strong></h3><p>There is a lie at the center of most educational content production, and it goes mostly unnamed because naming it is professionally uncomfortable. The lie is this: the slide deck you built last Tuesday, the one you spent three hours arranging, the one with the custom fonts and the carefully chosen images and the thirty-seven bullets across fourteen slides &#8212; that deck was not built for the people who had to sit through it. It was built for you. 
It was built so you could feel the relief of having covered the material. It was built so the topic had a container. It was built because you had a deadline and a template and a vague professional obligation to produce <em>something</em>, and a slide deck is always <em>something</em>.</p><p>The learner &#8212; the specific human being with specific prior knowledge and a specific amount of time and a specific gap between what they currently understand and what they need to understand &#8212; that person never really entered the room where the deck was being built. What entered the room instead was a topic. And a topic is not a person.</p><p>Brutalist was built to address this. Not to address it gently, with suggestions and style guides and best-practice checklists. To address it structurally, in the architecture of the tool itself, before a single slide gets made.</p><h3><strong>The Architecture of Avoidance</strong></h3><p>The conventional workflow for building educational content runs roughly like this: you receive a topic (or assign yourself one), you collect material &#8212; readings, notes, data, existing slides &#8212; and you begin arranging it. If you are experienced, you arrange it with craft. You think about sequence and pacing. You choose examples. You know when to deploy a metaphor and when to let a statistic land without ornamentation. The result, at its best, is a coherent and well-paced presentation of material.</p><p>What you have not done &#8212; and this is the gap that produces most failures in educational content &#8212; is started from what the learner will be able to <em>do</em> when you are finished with them. You have started from what you know, and you have worked forward through that knowledge toward a clean ending. This is a completely understandable approach, and it produces content that would be unrecognizable as failing by any ordinary standard of review. It is organized. It is clear. 
It covers the material.</p><p>It just doesn&#8217;t reliably produce learning.</p><p>Backwards design &#8212; the pedagogical framework that governs every output Brutalist produces &#8212; insists on reversing this sequence. You begin with a measurable outcome: not a topic, not a list of things the instructor will present, but a single sentence describing what a learner will be able to <em>do</em> at the end that they could not do at the beginning. Construct a DAG from domain knowledge and identify all backdoor paths. Distinguish between a learning outcome and a topic. Evaluate a rubric for the difference between qualitative descriptions and observable behaviors. These are not aspirations. They are commitments &#8212; to a learner, to a measurable change, to the possibility of knowing whether the teaching worked.</p><p>The reason most content production doesn&#8217;t begin here is not ignorance. Most instructors know what backwards design is. The reason is that starting from a learning outcome is harder than starting from a topic, and the tools available for producing educational content &#8212; PowerPoint, Keynote, Google Slides &#8212; offer no friction whatsoever against starting from the wrong place. They are indifferent to the question of who the learner is and what the learner needs to be able to do. They are happy to help you arrange forty slides around a topic, and they will never once ask whether the arrangement serves a learner or just a speaker.</p><p>Brutalist asks. It asks before it produces anything. In interactive mode &#8212; the default &#8212; it will not generate a single slide until it has confirmed the audience, confirmed the outcome, and confirmed that the outcome is measurable. &#8220;Understand X&#8221; is not measurable. Brutalist says so, explicitly, in the voice of a pedagogical skeptic rather than a customer-service chatbot. <em>That describes a mental state, not a behavior. 
A learner can&#8217;t demonstrate &#8216;understanding.&#8217; What&#8217;s the one thing they should be able to do?</em> This is not rudeness. It is the one question that changes the output.</p><h3><strong>The Phase Gate as Moral Commitment</strong></h3><p>There is a design decision embedded in Brutalist that deserves more attention than it usually gets in conversations about AI tools, which tend to focus on capability rather than constraint. That decision is the phase gate.</p><p>A phase gate is exactly what it sounds like: a gate that holds until a phase is complete. In Brutalist, the first gate holds at source confirmation &#8212; no output until the source material is present. The second holds at outcome identification &#8212; no output until the outcome can be stated in one sentence. The third holds at form confirmation &#8212; no output until the right command for the content is confirmed. Only then does the tool produce anything.</p><p>This is unusual. Most AI tools are designed to produce output as quickly as possible, because output is what users think they want and user satisfaction is what tools are optimized for. The experience of receiving forty slides in thirty seconds feels like productivity. It feels like the machine is working for you. What it actually is, much of the time, is the machine generating plausible-looking content that fills the form without serving the function &#8212; decoration rather than argument, coverage rather than learning.</p><p>Brutalist is optimized for the learner, not the user. These are not the same person. The user is the instructor who wants a slide deck. The learner is the person who will sit in front of that deck and try to change what they understand. Optimizing for the user produces faster output. Optimizing for the learner produces harder questions before any output is generated at all.</p><p>The phase gate is where this optimization manifests in the tool&#8217;s behavior. 
It is the structural embodiment of a moral position: that output built on wrong assumptions about audience or outcome wastes more time than the intake that would have caught those assumptions. Two minutes of friction before the deck is built is less costly than an hour of instruction that doesn&#8217;t change what anyone understands.</p><h3><strong>What &#8220;Understand X&#8221; Is Actually Doing</strong></h3><p>Spend any time in educational settings &#8212; as a student, as an instructor, as a curriculum designer &#8212; and you develop a particular sensitivity to the phrase &#8220;by the end of this, students will understand X.&#8221; It appears in syllabi, in lesson plans, in course descriptions, in accreditation documents. It appears so frequently and so unexamined that most people who write it have stopped noticing it at all. It is pedagogical wallpaper.</p><p>But the phrase is doing something specific, and it is worth naming. &#8220;Students will understand X&#8221; is a sentence that sounds like a learning outcome and functions as an escape from accountability. Understanding is a mental state. You cannot observe it, you cannot measure it, you cannot score it on a rubric or assess it in a portfolio. You can ask someone to demonstrate understanding &#8212; which means you are no longer assessing understanding, you are assessing a behavior &#8212; but the phrase as written commits you to nothing. It is a promise with no deliverable attached.</p><p>The reason this matters to a tool like Brutalist is that the learning outcome is not just the first step in backwards design. It is the specification for everything that follows. The slides that get built, the visual types that get selected, the checks for understanding that get inserted every four to six slides &#8212; these are all derived from the outcome, working backward from what the learner needs to be able to do. If the outcome is vague, the derivation has nothing to anchor to. 
The result is a deck that covers material in the general direction of a topic, which is not the same thing as a deck that moves a specific learner from a specific gap to a specific capability.</p><p>This is why Brutalist treats &#8220;understand X&#8221; not as a minor stylistic imprecision but as a structural failure that must be corrected before building anything. The outcome is the foundation. A vague foundation does not produce a stable structure. It produces decoration.</p><h3><strong>Brutalist HTML and the Question of Deployment</strong></h3><p>There is a second commitment embedded in this tool that is worth examining, and it lives in the signature output: the brutalist HTML presentation. Not a PowerPoint file. Not a PDF. A single self-contained HTML file, deployable immediately, built on a design system called Musinique brutalist &#8212; JetBrains Mono, parchment tokens, per-slide audio, keyboard navigation, zero decorative radius.</p><p>The choice of HTML as the primary output format is not aesthetic. It is pedagogical and practical simultaneously. A PowerPoint file requires PowerPoint. A Google Slides file requires Google. An HTML file requires a browser, which is to say it requires nothing &#8212; it deploys anywhere, runs without software dependencies, and can be shared as a URL or a file with equal ease. The friction of tool access is a real barrier to distribution, and distribution is where educational content either serves learners or stops serving them.</p><p>The design choices embedded in the brutalist system &#8212; every slide does one thing, every title is a claim not a topic, components are typed by what they communicate rather than how they look &#8212; these are cognitive load principles encoded as aesthetic constraints. The slide with a hero number and a two-line muted caption exists because research on split attention and redundancy effects has things to say about how visual and verbal information compete for working memory. 
The check for understanding every four to six slides exists because spaced retrieval practice produces stronger retention than massed coverage. The design is not decoration. It is applied cognitive science, translated into a component library and a phase-gated workflow.</p><h3><strong>The Pushback Layer</strong></h3><p>Brutalist pushes back. This is the part of the tool that most users encounter with some surprise, because tools &#8212; especially AI tools &#8212; are generally not in the business of disagreement. They are in the business of helpfulness, and helpfulness has been operationally defined as producing what the user asks for as quickly as possible. Friction is a UX failure. Pushback is an anomaly.</p><p>In Brutalist, pushback is a feature. Not an accident of the model&#8217;s personality or a quirk of the prompting, but a designed behavior with specific triggers and specific exit conditions. Weak learning outcomes get flagged &#8212; not once, politely, but persistently, with an offer to rewrite the outcome if the user fails the measurability test twice. Vague audience descriptions get challenged, because &#8220;college students&#8221; is not an audience and the specificity that changes the content, examples, and pacing cannot be inferred from it. Mismatched command choices get named &#8212; if the content calls for a <code>/showtell</code> and the user has requested <code>/slides</code>, the tool explains the difference in instructional design terms before proceeding.</p><p>Every pushback ends with a path forward. This is the moral discipline that separates useful friction from obstruction. The tool is not in the business of refusing to build. It is in the business of building toward the right specification, and the right specification cannot be assumed from the wrong brief. 
The pushback is the tool asking the question that the instructor should have asked before they opened a blank deck and started arranging.</p><p>What is the learner supposed to be able to do?</p><p>Everything else follows from that.</p><div><hr></div><p><em>Brutalist is part of the Humanitarians AI Ecosystem. The primary workflow: </em><code>/slides</code> produces the blueprint. <code>/brutalist</code> converts it to HTML. <code>/deck</code> does both in one command. Type <code>help</code> to begin.</p><p><strong>Tags:</strong> Brutalist instructional design engine, backwards design pedagogy, learning outcomes Bloom&#8217;s taxonomy, brutalist HTML presentation system, educational content production failure</p>]]></content:encoded></item><item><title><![CDATA[The Struggle Is the Point]]></title><description><![CDATA[What We Lost When We Made the Artifact the Grade]]></description><link>https://www.skepticism.ai/p/the-struggle-is-the-point</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-struggle-is-the-point</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sat, 04 Apr 2026 03:35:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SDu5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SDu5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!SDu5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 424w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 848w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 1272w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SDu5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png" width="1456" height="543" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:543,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:167415,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/193135422?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!SDu5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 424w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 848w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 1272w, https://substackcdn.com/image/fetch/$s_!SDu5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af9b2b9-9f0e-46ad-aff1-94d64d45472e_1886x704.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The paper rough draft: <strong><a href="https://www.nikbearbrown.com/notes/Papers/glp-framework-genuine-learning-probability">https://www.nikbearbrown.com/notes/Papers/glp-framework-genuine-learning-probability</a></strong></p><h2>What We Lost When We Made the Artifact the Grade</h2><p>Here is the situation as it actually exists, not as anyone in an official capacity is willing to describe it clearly.</p><p>A student sits down to write a paper. The paper is due in twelve hours. The student has three other assignments due this week, a job that starts at six, and the accumulated evidence of two semesters telling them that the grade lives in the artifact &#8212; the paper itself &#8212; not in the thinking that was supposed to produce it. The student opens an AI tool. The paper gets written. It is, by most measurable standards, better than what the student would have produced alone at midnight after a shift.</p><p>In the next building, the professor who assigned the paper has used AI to draft the assignment prompt, the rubric, and the feedback comments they will paste into the LMS after running the submitted papers through a grading interface that summarizes them automatically.</p><p>Neither of them is a villain. Both of them are responding rationally to a system that has always rewarded the artifact and never found a way to measure the process that was supposed to produce it. Generative AI did not create this problem. It revealed it &#8212; suddenly, completely, and without the courtesy of suggesting a solution.</p><p>This essay is about what the solution might look like. It is not technical. 
The technical apparatus exists and is documented elsewhere. What doesn&#8217;t exist yet, in language plain enough to be useful, is a way of talking about why the solution matters &#8212; what it would mean for a student to be seen by an educational system that has, for most of institutional history, been looking at the wrong thing.</p><h2>What the Artifact Was Supposed to Prove</h2><p>The essay, the exam, the project, the recorded performance &#8212; these were never the thing education cared about. They were evidence. The artifact was valuable because it was causally downstream of a process: the reading, the confusion, the rereading, the argument with yourself at two in the morning about whether you actually understood what you thought you understood. The artifact was a trace of that process. Grading the artifact was a way of inferring the process, because the two were coupled tightly enough that measuring one was effectively measuring both.</p><p>That coupling has broken. This is not a scandal or a failure or a temporary condition that better AI detection will resolve. It is a structural change in what artifacts can tell us, and it is permanent. The forensic window &#8212; the period during which you can reliably distinguish a human-written essay from an AI-generated one &#8212; is closing sequentially across every domain in which humans produce artifacts. In writing it is largely closed already. In code it is closing. The detectors trained on today&#8217;s AI outputs will be obsolete when tomorrow&#8217;s outputs arrive.</p><p>Every educational institution that is currently responding to this situation by installing better detection software is solving last year&#8217;s problem with next year&#8217;s obsolescence already scheduled.</p><h2>The Complicity No One Names</h2><p>The conversation about AI and academic integrity is almost entirely conducted as a conversation about student dishonesty. This framing is not wrong, exactly. 
It is just so incomplete as to function as a kind of dishonesty itself.</p><p>Students are using AI because the artifact is the grade. The artifact is the grade because grading the process &#8212; the confusion, the revision, the dead ends, the moments of genuine understanding &#8212; is hard, and institutions have never built the infrastructure to do it at scale. The result is a system that has always been measuring the wrong thing, and now the wrong thing can be produced in thirty seconds by a tool that costs less than a textbook.</p><p>Professors are not innocent bystanders. Many are using the same tools to manage the same impossible workloads &#8212; drafting prompts, generating feedback, summarizing submissions &#8212; that the institution&#8217;s growth model has made unmanageable. The incentive structure reaches all the way up. Publish or perish does not reward good teaching. The institution does not require good teaching to be measurable, only for its artifacts &#8212; syllabi, course evaluations, enrollment numbers &#8212; to look like good teaching.</p><p>The student who uses AI to write a paper is not defecting from a system that is working. They are defecting from a system that has always asked them to perform learning rather than do it, and has never been able to tell the difference. AI has not corrupted that system. AI has made the corruption visible.</p><p>This is the thing worth sitting with before any solution is proposed: the problem is not the tools. The problem is what we decided to measure, and what we decided to ignore, long before the tools arrived.</p><h2>What Genuine Learning Leaves Behind</h2><p>Here is what the research shows, stated plainly.</p><p>When a human being genuinely learns something hard, the process is biological. Neurons fire in response to the gap between what the learner expected and what they encountered. That gap &#8212; the prediction error &#8212; is uncomfortable. 
It is the feeling of not understanding, the specific texture of confusion that is different from ignorance because it knows what it doesn&#8217;t know. Working through that discomfort produces measurable changes: in how information is encoded, in how long it persists, in whether it transfers to new contexts or stays locked to the specific example through which it was learned.</p><p>Genuine learning leaves traces. Not in the artifact &#8212; the artifact is the product, and products can be manufactured without the process. The traces are in the behavior that surrounds the artifact&#8217;s production: the time spent on the hard parts, the errors that follow a coherent path as the mental model develops, the ability to apply what was learned to a problem that looks different on the surface but has the same underlying structure, the calibrated uncertainty of someone who knows not just what they know but what they don&#8217;t.</p><p>None of these traces require looking at the artifact. They require looking at the process.</p><p>This is what the concept of friction in assessment is about. Not friction as punishment, not friction as obstacle, not friction as the gatekeeping logic that has always made elite education a credentialing system for people who already had advantages. Friction as signal. The productive struggle of genuine learning &#8212; the confusion, the revision, the wrong turn and the recovery &#8212; is not the unfortunate cost of arriving at the artifact. It is the thing the artifact was supposed to be evidence of. 
It is the learning itself.</p><p>The proposal is to measure it directly.</p><h2>What This Would Mean for a Student</h2><p>I want to be specific about what it would feel like to be in a classroom where this kind of assessment exists, because the abstract case is easy to make and the human case is the one that matters.</p><p>It would mean that the time you spent genuinely confused about something counts &#8212; not as performance of confusion, not as a participation grade for looking engaged, but as actual data about actual thinking. It would mean that the draft that was a mess, the question you asked in office hours that revealed you&#8217;d been working from the wrong assumption for two weeks, the revision that turned a competent response into a thinking one &#8212; these are evidence of the thing education is supposed to produce. They would be part of the record.</p><p>It would also mean that the smooth, perfectly structured submission produced at midnight with no evidence of genuine engagement is not, by itself, proof of anything. The artifact is not worthless. It has not become zero evidence. It has become insufficient evidence. Insufficient means it needs a partner &#8212; and the partner is the process that was supposed to produce it.</p><p>This is not a punishment for using AI. It is a recognition that the artifact alone was never the right thing to measure, and that the tools which have made that limitation undeniable have also, in the same move, made the solution more urgent than it has ever been.</p><h2>The Uncomfortable Truth About Friction</h2><p>The research contains a finding that takes a moment to absorb. The smooth, well-structured artifact &#8212; the one that reads with perfect confidence, that has no rough edges, no places where the writer lost the thread and found it again &#8212; may be mild negative evidence of genuine learning.</p><p>The rough, searching one may be positive evidence.</p><p>Not because roughness is a virtue. 
Not because difficulty signals intelligence. Because genuine struggle with hard material characteristically produces texture &#8212; places where the thinking was actually happening, where the writer was working something out rather than reporting a conclusion they arrived at before they started writing. The friction of genuine learning leaves marks. The borrowed certainty of an AI-assisted artifact is often smooth in a way that real thinking, at its most effortful, is not.</p><p>This is uncomfortable because educational institutions have spent generations rewarding the smooth artifact and interpreting roughness as inadequacy. We taught students that the goal was to arrive at certainty quickly and present it cleanly. We built rubrics that rewarded the appearance of knowing and had no mechanism for distinguishing it from the thing itself.</p><p>Generative AI did not create that confusion. It just made it expensive.</p><h2>What Comes Next</h2><p>The framework that formalizes this argument &#8212; the specific components of friction that genuine learning leaves in observable data, the way those components can be measured, combined, and calibrated to different kinds of cognitive work &#8212; is documented in the paper that follows this introduction. It is technical in the way that any serious methodology is technical, and it is also not the point of this essay.</p><p>The point of this essay is this: the crisis that AI has created for educational assessment is not primarily a cheating problem. It is an evidence problem. The artifact, which was always a proxy for the process, can now be produced without the process. 
Any response that tries to restore the artifact&#8217;s evidentiary value by detecting AI use is fighting a war that the progression of technology has already decided.</p><p>The response that might actually work is to stop relying on the artifact as the sole evidence of learning, and start building the infrastructure to measure what the artifact was always supposed to be downstream of.</p><p>Students are not wrong that the system gives them no choice but to produce the artifact by whatever means are available. They are responding rationally to a broken incentive structure. Educators are not wrong that something has been lost when the struggle disappears from the work. They are mourning the only evidence they were ever given access to.</p><p>The argument this paper makes is that the struggle was always the point. It is still the point. We have spent a long time measuring the wrong thing, and the tools that have made that undeniable have also, in the process, handed us a reason to build something better.</p><p>The infrastructure for measuring the struggle exists. The question is whether the institutions that credential learning are willing to build it before the artifact becomes so decoupled from the process that the credential stops meaning anything at all.</p><p>That window is not closed. But it is not wide open either.</p><p>The struggle is the point. 
It is time to measure it.</p><div><hr></div><p><strong>Tags:</strong> AI academic integrity assessment friction traces genuine learning, generative AI education artifact decoupling, GLP framework formative assessment process evidence, student professor AI use structural incentives, irreducibly human cognitive engagement pedagogy</p>]]></content:encoded></item><item><title><![CDATA[Boondoggling: You Are the Conductor]]></title><description><![CDATA[What Most Developers Miss About AI-Assisted Programming]]></description><link>https://www.skepticism.ai/p/boondoggling-you-are-the-conductor</link><guid isPermaLink="false">https://www.skepticism.ai/p/boondoggling-you-are-the-conductor</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Wed, 01 Apr 2026 03:16:34 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192806158/0f765b9715f44ad9ce88a372a7e3a40d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>There is a moment in every AI-assisted coding session that tells you everything about the developer sitting at the keyboard. The model generates a block of code &#8212; clean, confident, internally consistent. It compiles. The tests pass. The developer commits it and moves on.</p><p>What they never ask is the question that would save them three weeks in six months: <em>Is this solving the right problem?</em></p><p>I came to <a href="https://www.boondoggling.ai/">Boondoggling</a> the way most people come to uncomfortable realizations &#8212; after the thing that was supposed to work didn&#8217;t. The code was technically correct. The architecture was sound. And it was aimed, with beautiful precision, at a problem that had already been reframed by the time implementation began. Claude had done exactly what it was told. Nobody had told it the right thing.</p><p>This is not an AI failure. This is a human supervisory failure. 
And it is the failure that the developers now spending $20 a month on AI subscriptions are making, every day, at scale.</p><div><hr></div><h2>The 20% Problem</h2><p>Here is what most developers actually do with Claude Code or Cursor: they describe a problem, they delegate the implementation, they verify that the output compiles, and they ship.</p><p>That is not 100% of the job. That is 20% of the job dressed up as 100%.</p><p>The other 80% &#8212; the part that determines whether the fast, confident, technically impeccable output is pointed in the right direction &#8212; requires five capacities that no model possesses. Not because current models are limited. Because of what statistical pattern matching structurally is and is not.</p><p>Claude solves problems faster than any human. That gap will not close. What will not change is this: the model cannot verify whether its output is grounded in the specific domain reality at hand. It cannot reframe a poorly formulated problem. It cannot interpret what an accurate result means in a specific human context. And it cannot integrate multiple legitimate but conflicting perspectives into a recommendation that someone is accountable for.</p><p>These are not bugs to be patched in the next release. They are features of the architecture. The model has been trained on what is common and likely. Your specific project, your specific codebase, your specific business constraint &#8212; these are neither common nor likely. The gap between what the model knows and what your situation requires is where all the damage lives.</p><div><hr></div><h2>The Conductor</h2><p>The <a href="https://www.boondoggling.ai/">Boondoggling methodology</a> is built around a single metaphor that earns its place rather than announcing itself. A conductor does not play any instrument. They hold the whole performance in mind while each section plays its part. They hear the wrong note before the score confirms it. 
They decide which piece is worth performing and how it should be interpreted. The performance collapses without them &#8212; even though they produce no sound themselves.</p><p>This is what graduate-level AI supervision looks like. And it is the role that most AI integration workflows currently fail to develop.</p><p>The developers who are getting genuine leverage from AI coding tools are not out-prompting the model. They are conducting it. Before Claude Code sees a single requirement, they have decided what the problem actually is. Before the first function is generated, they have specified what done looks like. After the output arrives, they verify it against domain reality before the next step begins.</p><p>The ones who are mostly generating technical debt faster than they generated it before &#8212; they learned to play their instrument. Nobody taught them to conduct.</p><div><hr></div><h2>Five Things the Model Cannot Do for You</h2><p>The <a href="https://www.irreducibly.xyz/notes/Irreducibly-Human/Irreducibly-Human-Conducting-AI">Irreducibly Human course</a> at Northeastern &#8212; built on the same framework as Boondoggling &#8212; names these five supervisory capacities precisely. Not as professional development recommendations. As structural requirements for AI-assisted work.</p><p><strong>Plausibility auditing</strong> is the judgment that happens before verification. It is knowing an output is wrong because of what you know about the domain &#8212; not because you ran a test. The model cannot audit its own plausibility. It does not know what it does not know. When it confabulates &#8212; when it produces a confident, internally consistent answer that is not grounded in reality &#8212; it does so fluently. The code runs. The tests pass. Plausibility auditing is the human capacity that catches this before it ships.</p><p><strong>Problem formulation</strong> is deciding what the mission is before the model sees it. Not after. 
The quality of every output is determined here, at the moment of framing, before a single prompt is written. AI optimizes for the common and likely; humans must reframe toward the salient and important. The Semmelweis case &#8212; the formulation that saves lives was not the computationally tractable one &#8212; is the permanent lesson here. Hand problem definition to the model and you have not delegated. You have abdicated.</p><p><strong>Tool orchestration</strong> is the sequencing decision. Which tool, in what order, with what context, and what does done look like at each handoff. The developer who reaches for Claude Code because it is already open is not orchestrating &#8212; they are defaulting. Orchestration means choosing the audit tool with a different failure mode than the generation tool, so they catch each other&#8217;s blind spots.</p><p><strong>Interpretive judgment</strong> is supplying meaning that the model cannot supply. Which of these three implementations is correct for this context &#8212; not in the abstract, but here, in this organization, for this user, at this moment. The model can tell you what each implementation does. It cannot tell you what it means. Somebody has to sign their name to that answer. The model cannot do that either.</p><p><strong>Executive integration</strong> is not sequencing the four prior capacities. It is holding all four simultaneously toward a unified goal &#8212; recognizing when a plausibility audit finding requires problem formulation to re-engage, when an orchestration decision surfaces an interpretive judgment that wasn&#8217;t on the agenda. This is what the conductor does in the fourth quarter of a difficult performance: not running a checklist, but maintaining a unified hold on where the whole thing is going.</p><p>Better models will not close these gaps. 
They will widen the stakes of them.</p><div><hr></div><h2>What the Build Actually Looks Like</h2><p>A moderately complex website &#8212; six routes, hybrid architecture, admin dashboard, community upload pipeline, sandboxed iframe viewer, full prompt library &#8212; built using the Boondoggling method took roughly three hours. Two hours of conversation with <a href="https://www.boondoggling.ai/tools/gru-tool">Gru</a>, the custom orchestration prompt. One hour with Claude Code.</p><p>Nearly all the time was spent talking. Not coding. Not debugging. Not searching documentation. Talking &#8212; precisely, in the right order, about what the site was, who it was for, what it would and would not do, and what each piece needed to be true before the next piece began.</p><p>The result was a Boondoggle Score: a conductor&#8217;s score with two simultaneous parts. The Minion Part &#8212; exact prompts for Claude, in dependency order, each with context required, expected output, and a handoff condition. The Gru Part &#8212; precise human actions, labeled by supervisory capacity, in the same dependency order.</p><p>Nine Claude tasks. Eleven human tasks. More human decisions than machine outputs. But the Claude tasks ran fast and clean because the structure was already there. Every prompt worked &#8212; not because the prompts were magic, but because the conversation that produced them was structured.</p><p>The handoff condition is the most important element in the score. It is the conductor&#8217;s downbeat. A model that does not know when to stop will stop at the wrong place or not stop at all.</p><div><hr></div><h2>The Vocabulary of What Is Actually Happening</h2><p>The Boondoggling framework gives names to the different kinds of work in an AI-assisted build. 
The names are worth knowing because naming a thing is the first step to doing it deliberately.</p><p><em>Frick-fracking</em> is the iterative work &#8212; small precise edits, one thing changed at a time, the kind of work Claude Code does exceptionally well when given clear scope. This is where the actual build lives after the structure is established. It is productive and it does not require your full attention. It is not, however, the whole job.</p><p><em>Noodling</em> is the dreaming phase. Figuring out what to build before figuring out how. This happens before the model sees anything. It is the lightest touch &#8212; a thought that something could be interesting, a question about whether this feature serves the person the thing is built for. The discipline is knowing which noodle is worth developing. The problem statement is the filter.</p><p><em>Confabulating</em> is the danger word. When the model produces plausible output that is not grounded in reality. It sounds correct. It reads correctly. The code compiles. Only domain knowledge catches it. This is precisely the failure mode that plausibility auditing exists to address &#8212; and precisely the failure mode that developers who have learned to prompt but not to supervise will miss every time.</p><div><hr></div><h2>What You Are Actually Responsible For</h2><p>The developers most effectively using AI coding tools are not the ones generating the most code. They are the ones who have understood that their job changed &#8212; and changed in a specific direction.</p><p>The job is not to type less. The job is to decide more precisely.</p><p>You are responsible for what the problem actually is. You are responsible for what done actually looks like. You are responsible for whether the fast, confident, technically impeccable output is pointed at reality or pointed at a plausible simulation of it. The model takes no responsibility for any of this. It cannot.</p><p>The minions are excellent. They are enthusiastic. 
They will execute exactly what they understood you to mean.</p><p>That gap &#8212; between what you meant and what they understood &#8212; is where all the damage lives.</p><p>Anyone can use Claude Code. The question is whether you are playing an instrument or conducting the orchestra.</p><div><hr></div><p><strong>Tags:</strong> boondoggling AI methodology, Claude Code supervision framework, AI-assisted software development, solve-verify asymmetry, plausibility auditing human-AI collaboration</p>]]></content:encoded></item><item><title><![CDATA[Medhavy Hub Walkthrough]]></title><description><![CDATA[Intelligent Textbook]]></description><link>https://www.skepticism.ai/p/medhavy-hub-walkthrough</link><guid isPermaLink="false">https://www.skepticism.ai/p/medhavy-hub-walkthrough</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sun, 29 Mar 2026 06:49:35 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192485038/12781cf2a5d9b44351da983db4e46790.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Ask your textbook a question. Get a sourced, context-aware answer &#8212; instantly. This is a full walkthrough of Medhavy Hub, the AI-powered textbook platform built for students who want more than a page to stare at.</p><p>In this video, we walk through everything: creating your account, requesting access, navigating chapters, and using the built-in AI Assistant Panel to study smarter across Physics Volume 1 and Cancer Biology.</p><p>The AI Assistant answers from the active chapter &#8212; not the open web &#8212; and shows every source it used so you can trust and verify the response. Ask follow-up questions, request step-by-step derivations, generate concept-check questions, get the answer key, and loop back to the text with stronger understanding. 
Every session is yours to pace and direct.</p><p>This is what an interactive textbook actually looks like.</p><div><hr></div><p>&#128279; Create your free account &#8594; medhavy.ai</p><p></p>]]></content:encoded></item><item><title><![CDATA[Glimmer - A Word I Didn't Know I Needed]]></title><description><![CDATA[Dewey in the Age of AI: Glimmers as a Practical Device for Experiential Learning]]></description><link>https://www.skepticism.ai/p/glimmer-a-word-i-didnt-know-i-needed</link><guid isPermaLink="false">https://www.skepticism.ai/p/glimmer-a-word-i-didnt-know-i-needed</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sun, 29 Mar 2026 03:49:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!52to!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!52to!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!52to!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!52to!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!52to!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!52to!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!52to!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1335028,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/192478418?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!52to!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!52to!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!52to!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!52to!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfcda-7cf5-42c6-8f68-74a69797bd79_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I heard the word <em>glimmer</em> today in a sense I didn&#8217;t recognize.</p><p>Not shimmer. Not hope. Something more precise and more clinical: a specific small cue &#8212; sensory, relational, contextual &#8212; that shifts the nervous system toward safety. The granular opposite of a trigger.</p><p>The term comes from Deb Dana&#8217;s work on polyvagal theory. Stephen Porges mapped the autonomic nervous system&#8217;s responses to perceived safety and threat. Dana, in <em>The Rhythm of Regulation</em> (2018) and her broader clinical development of Porges&#8217; framework, introduced glimmers as the micro-moment counterpart to what everyone already understood about triggers. A trigger is a specific cue that moves the nervous system toward defense. A glimmer is the opposite: a small specific signal that moves it toward the ventral vagal state &#8212; the condition where genuine engagement, learning, and social connection become possible.</p><p>The clinical significance is in the scale. Glimmers are not big positive experiences. They are tiny specific ones. The quality of light through a particular window. A specific person&#8217;s laugh. The weight of a familiar mug. Small enough to overlook. Specific enough to be genuinely activating when noticed.</p><p>Dana&#8217;s therapeutic application was about training clients to accumulate glimmers &#8212; building what she called a glimmer practice &#8212; as a bottom-up regulation strategy. Not cognitive reframing from the top down. Sensory specificity as the mechanism. The body first. The mind follows.</p><p>Branding and design practitioners picked the word up because it named something they had been circling for years without adequate language. The detail that makes a brand feel alive rather than performed. The specific weight of a product in the hand. The exact corner of a page. Always specific. 
Never general.</p><p>When I heard the word, I recognized the mechanism immediately &#8212; not from Dana, but from a problem I&#8217;d been sitting with for years.</p><h2><strong>Practical Dewey</strong></h2><p>Dewey spent his career trying to name what makes an experience come alive rather than lie flat. The difference between the encounter that genuinely reorganizes how a person sees the world and the encounter that simply adds one more item to what they already know. He called it aesthetic experience. The specific sensory moment that activates genuine engagement before the conceptual apparatus has time to categorize and dismiss it.</p><p>The practical problem with Dewey &#8212; and every educator who takes him seriously eventually hits this wall &#8212; is that genuinely reconstructive experience requires real problems with real resistance and real consequence. The child cooking an actual meal. The student building something that has to work. The inquiry that fails in a way that costs something. These conditions are often impractical at scale, difficult to design, and nearly impossible to sustain across a full curriculum.</p><p>Glimmer offers a way through.</p><p>Not as a replacement for the real &#8212; nothing replaces the real. But as the entry point that makes the real accessible. Small enough to be achievable. Specific enough to be genuinely activating. Carrying enough of the actual structure of the problem that what follows is genuine inquiry, not a simulation of it.</p><p>The fracture Dewey identified in 1900 is the same fracture the AI age has made undeniable. What follows is an attempt to think through what a glimmer-based practice might look like &#8212; and why, right now, the instrument matters as much as the argument.</p><p>John Dewey spent his career arguing that the curriculum was wrong. Not wrong in its methods, but wrong in its foundations. 
Teaching children to retrieve facts, execute procedures, and perform correctly for assessment was never what education was <em>for</em> &#8212; even when humans were the best available instruments for doing those things.</p><p>The machines didn&#8217;t create that error. They exposed it.</p><p>This is the claim most AI-in-education discourse buries or avoids. Everyone is asking: how do we use AI to improve learning outcomes? Dewey&#8217;s prior question is harder and more important: what kind of people does education produce, and are they capable of living fully, thinking independently, and participating in democratic life?</p><p>The AI age makes that question urgent in a new way. The cognitive capacities that Tier 1 education optimized for &#8212; pattern retrieval, syntactic correctness, fact recall, arithmetic speed &#8212; are now performed superhumanly by machines that fit in a pocket. The student who spent twelve years developing these capacities has spent twelve years preparing to lose a competition they didn&#8217;t know they were entering.</p><p>But the deeper problem isn&#8217;t obsolescence. It&#8217;s that the capacities education <em>didn&#8217;t</em> develop &#8212; problem formulation, causal reasoning, plausibility auditing, collective intelligence, practical wisdom &#8212; are now the only remaining path to a fully human life. Not because AI can&#8217;t do them. Because these capacities are what it means to think, not just to retrieve.</p><p>Dewey saw this clearly in 1900. He just didn&#8217;t have the evidence that 2025 provides.</p><div><hr></div><h2>What Dewey Actually Argued</h2><p>Dewey&#8217;s central claim wasn&#8217;t pedagogical. It was epistemological. Knowledge is not a commodity to be acquired and stored. It is a capacity developed through genuine encounter with real problems. The mind is not a container. 
It is an instrument of adaptation &#8212; biological, social, and democratic simultaneously.</p><p>This is what he meant by the reconstruction of experience. Not the accumulation of content. Not the performance of understanding. The genuine reorganization of how a person sees and acts in the world, produced by transaction with problems that have real resistance and real consequence.</p><p>Education is not preparation for life. It <em>is</em> life.</p><p>The implications for curriculum are radical. Subject-area divisions are administrative conveniences mistaken for epistemological truth. History, science, mathematics, and literature are not separate in the world &#8212; they are separate in the faculty lounge. A child cooking learns chemistry, mathematics, history, economics, and social cooperation simultaneously because reality doesn&#8217;t arrive pre-sorted by department.</p><p>The inquiry process that Dewey formalized &#8212; felt difficulty, hypothesis, testing, reflection, reconstruction &#8212; is not a teaching method. It is a description of how genuine thinking actually works. Every departure from it produces what he called mis-educative experience: activity that closes off future growth rather than opening it.</p><p>Three principles govern everything that follows:</p><p><strong>Continuity</strong> &#8212; each experience must connect to what came before and open into what comes next. An experience disconnected from the learner&#8217;s existing understanding and not pointed toward future development is inert regardless of how well it is delivered.</p><p><strong>Interaction</strong> &#8212; genuine learning requires transaction between the learner and an environment that pushes back. A simulated environment that doesn&#8217;t resist, a case study that has no consequence, a problem designed to be solvable &#8212; none of these produce reconstruction. 
They produce performance.</p><p><strong>Democratic purpose</strong> &#8212; education is not primarily economic preparation. It is the development of citizens capable of self-governance. The epistemic capacities that allow a person to formulate problems, reason through evidence, revise beliefs, and participate in collective inquiry are not soft skills. They are the prerequisites for democratic life. A population that can retrieve information but cannot reason together is not a democracy. It is a collection of well-informed individuals with no shared epistemic infrastructure.</p><div><hr></div><h2>The Taxonomy of What Remains</h2><p>Against this framework, the <em>Irreducibly Human</em> taxonomy of human intelligence tiers is not primarily a curriculum design tool. It is a map of what education has abandoned and what the AI age makes irreplaceable.</p><p><strong>Tier 1 &#8212; Pattern and Association.</strong> The intelligences that standardized education optimized for: linguistic ability, logical-mathematical reasoning, pattern recognition, encyclopedic recall. These are also the intelligences where machines are now superhuman. Not faster-than-average. Superhuman, by orders of magnitude, without fatigue, without error. Teaching humans to compete directly at Tier 1 is, in Dewey&#8217;s terms, teaching students to lift with their backs after the forklift has arrived.</p><p>The forklift metaphor requires extension. The point of the forklift is not to free your back so you can do other physical tasks. The point is to free your mind so you can ask what needs moving, where, and why &#8212; questions the forklift cannot ask. AI doesn&#8217;t just change the labor. It changes what counts as the work.</p><p><strong>Tier 2 &#8212; Embodied and Sensorimotor.</strong> The knowledge that lives in the body: a surgeon&#8217;s hands, a carpenter&#8217;s feel for grain, a nurse&#8217;s ability to read tension in a patient&#8217;s movement before the patient can name it. 
Dewey&#8217;s Laboratory School understood this. The child cooking wasn&#8217;t simulating cooking. The child building wasn&#8217;t practicing building. The hand and the mind develop together. You cannot separate them without impoverishing both.</p><p><strong>Tier 3 &#8212; Social and Personal.</strong> Reading others, cultural navigation, emotional regulation, moral reasoning under genuine stakes. Machines simulate these. They do not live them. A language model produces text that reads as empathetic without experiencing anything. It generates ethical arguments without having skin in the game. The danger is not that the output is wrong. The danger is that the capacity atrophies in the person who stopped exercising it.</p><p><strong>Tier 4 &#8212; Metacognitive and Supervisory.</strong> The intelligences that oversee the others. Plausibility auditing: knowing an answer is wrong before you can prove it. Problem formulation: deciding what is worth solving. Tool orchestration: knowing which instrument to use, when, and whether to trust it. Interpretive judgment: what does this result mean in this specific context. Executive integration: coordinating all of the above toward a unified goal.</p><p>Dewey would call Tier 4 reflective inquiry in its most concentrated form. Problem formulation is exactly what he meant by the felt difficulty &#8212; the entry point of genuine inquiry. Plausibility auditing is what happens when a person has internalized enough prior reconstructed experience to sense that something is wrong before they can prove it. These capacities cannot be taught directly. They can only be developed through repeated encounter with real problems where the cost of poor judgment is genuine.</p><p><strong>Tier 5 &#8212; Causal and Counterfactual.</strong> The capacity to ask not just what the data shows but what would happen if we intervened &#8212; and what we gave up by not intervening differently. 
Judea Pearl&#8217;s three rungs of causation are Dewey&#8217;s inquiry cycle made formal. Observation is pattern recognition. Intervention is hypothesis testing. Counterfactual is reflection on what the reconstruction actually cost.</p><p>JC Penney had the correlations right. Customers who paid full price showed less price sensitivity than coupon users. What the data could not tell them was what would happen if they removed the coupon system entirely. That&#8217;s an intervention. That&#8217;s Rung 2. They ran the experiment on a live business instead of a causal model. The cost was not bad data or bad analysts. It was the wrong instrument for the question being asked.</p><p>Current AI systems are superhuman at Rung 1. They are weak to absent at Rungs 2 and 3. A population that can query AI for associations but cannot formulate interventions or reason about counterfactuals has access to extraordinary pattern recognition and no capacity to make the decisions that actually matter.</p><p><strong>Tier 6 &#8212; Collective and Distributed.</strong> The intelligence that is not a property of any individual but emerges from groups of people in genuine relationship. The thing that makes science work over centuries. The thing that makes democracy more than the sum of its voters. Language models may be a lossy compression of collective human intelligence &#8212; not alien intelligence but our own reflected back. What they cannot reflect is the thing that happened between us: the disagreement that refined an idea, the trust that made knowledge transmissible, the collaborative friction that no individual possessed and no training corpus can capture because it existed in the interaction, not in the record of the interaction.</p><p><strong>Tier 7 &#8212; Existential and Wisdom.</strong> Phronesis: the practical wisdom that knows when and how to apply what you know, and when not to. This tier requires being alive, mortal, and situated in time. 
It requires stakes &#8212; the possibility of loss, of reputation, of a life poorly lived. You cannot teach it. You can only design the conditions that make it more or less likely to develop when a person encounters the real.</p><p>Dewey would call Tier 7 simply living. The series points toward it. The work of getting there happens elsewhere.</p><div><hr></div><h2>The Problem with Keeping Up</h2><p>Here is where the practical problem announces itself.</p><p>Educators, practitioners, and intellectually serious people across every domain report the same experience: they cannot keep up. Not with tasks, not with workload &#8212; with frameworks. Causal inference. Network science. Polyvagal theory. Large language models. Transformers. Retrieval-augmented generation. Each genuinely interesting. None integrated. The accumulation produces anxiety, not capacity.</p><p>This is the most sophisticated version of the periodic table problem. It is Tier 1 about Tier 1. Pattern retrieval about frameworks for understanding patterns. The student memorizing the names of intelligences without developing any of them. The practitioner keeping up with theories of experiential learning without having a single experience that reconstructs how they see their work.</p><p>The theories are not the problem. The relationship to the theories is the problem.</p><p>An idea you&#8217;ve encountered is not a tool. An idea you&#8217;ve used on a real problem &#8212; that failed, that required revision, that changed how you see the problem &#8212; is a tool. Dewey was precise about this. Ideas are instruments assessed by their practical utility in resolving specific problems. An instrument you&#8217;ve never picked up isn&#8217;t part of your toolkit. It&#8217;s an item you&#8217;ve read about.</p><p>The person drowning in frameworks doesn&#8217;t need more frameworks described more clearly. 
They need one framework used on one real problem until it either works or breaks in an instructive way.</p><p>The parallel experiment described below is a response to this problem.</p><div><hr></div><h2>Glimmers: The Missing Instrument</h2><p>The term glimmer comes from polyvagal theory &#8212; the small, specific, sensory moment that signals safety and genuine aliveness to the nervous system. Branding practitioners adopted it because it names something they had been trying to describe for years: the specific detail that makes something feel real rather than performed. Not the logo, not the tagline &#8212; the weight of a product in the hand, the exact sound of a notification, the corner of a page that&#8217;s slightly rough.</p><p>The mechanism is specificity. Glimmers are always specific.</p><p>Dewey spent his career trying to name what makes an experience come alive rather than lie flat. His closest term was aesthetic experience &#8212; the dramatic, compelling, unifying encounter in which the learner feels genuinely absorbed. Not decorative. Not a reward for completing the real work. The aesthetic dimension of an experience is what makes it reconstructive rather than merely informative.</p><p>Glimmer is the best single word for what Dewey was pointing at.</p><p>Consider the difference:</p><p><em>&#8220;JC Penney experienced significant revenue decline following their pricing strategy change.&#8221;</em></p><p><em>&#8220;Revenue dropped 25% in one year. The CEO was gone in 18 months.&#8221;</em></p><p>The first is information. The second is a glimmer. The nervous system registers something before the conceptual apparatus engages. The felt difficulty is activated before the lesson begins.</p><p>Or consider the Sherpa asking &#8220;What did you start to say?&#8221; rather than &#8220;What happened?&#8221; One is data collection. 
One is a glimmer &#8212; the specific small move that creates the conditions for genuine reconstruction.</p><p>Or the MVAL protocol&#8217;s Environment field, which forces the student to describe organizational power structure rather than the room. The moment a student realizes what they&#8217;ve been avoiding is a glimmer. Small. Specific. Changes everything that follows.</p><p>The design criteria for a glimmer:</p><p><strong>Specificity</strong> &#8212; not a general principle but a particular detail. 25%, not &#8220;significant.&#8221; 18 months, not &#8220;quickly.&#8221; The exact weight of something real.</p><p><strong>Aliveness</strong> &#8212; the nervous system registers genuine encounter before the mind categorizes it. Something is at stake even before the learner can articulate what.</p><p><strong>Scale-independence</strong> &#8212; glimmers exist in everything from a sentence to a semester. The meal at the Laboratory School was a glimmer. The question &#8220;what did you start to say?&#8221; is a glimmer. A well-designed assignment brief can contain a glimmer or not. The difference is not length or complexity.</p><p><strong>Fractal structure</strong> &#8212; a good glimmer contains the full structure of the problem it opens. JC Penney is not a simplified version of causal reasoning. It is the entire structure of Tier 5 at human scale. The student who genuinely reconstructs what went wrong at JC Penney has encountered the real problem &#8212; not a toy version of it.</p><p><strong>The load criterion</strong> &#8212; a glimmer without effort is information snacking with better production values. This is the test that separates a genuine glimmer from aesthetic decoration.</p><p>Training science offers the precise concept: Rate of Perceived Exertion. RPE 7-8 is productive struggle &#8212; working at the edge of current capacity with enough reserve to maintain form and recover. This is where adaptation happens. 
RPE 2 is 5 pounds lifted 10,000 times &#8212; high volume, negligible load, zero reconstruction. You could do it forever and never get stronger. The completion certificate gets issued. Nothing changes.</p><p>The glimmer has to carry enough weight to demand genuine effort from the learner encountering it. Not crushing &#8212; that produces shutdown not inquiry. Not comfortable &#8212; that produces maintenance not growth. Working at the edge of current capacity with something real at stake.</p><p>Critically, the load varies. The 350 that was RPE 8 last month is RPE 6 this month. A well-designed glimmer is self-calibrating &#8212; it contains enough genuine resistance to demand real effort from someone at the right developmental stage and is completable enough that someone beyond that stage moves on naturally. The same specific real problem loads different capacities differently depending on where the learner is.</p><p>What doesn&#8217;t vary is the requirement for genuine effort. A glimmer that requires nothing of the learner is a micro-glimmer &#8212; a pleasant novelty hit that returns to baseline in 36 minutes. Reconstruction happens in the struggle that follows the entry point. Not in the entry point itself.</p><p>The glimmer earns its place by making the learner willing to pick up the weight. What happens after has to be real.</p><div><hr></div><h2>The Parallel Experiment: AI-Assisted Glimmers</h2><p><em>Irreducibly Human</em> maps what AI can and cannot do and develops the pedagogy for what remains irreducibly human. That is its purpose and it should not be diluted.</p><p>The parallel experiment is different in kind. It is the territory where the map gets tested.</p><p>The premise: AI tools have collapsed the barrier between &#8220;I wonder if&#8221; and &#8220;here is a thing that exists.&#8221; The friction between idea and working prototype has been reduced to almost nothing for a wide range of problems. This changes the curriculum bottleneck fundamentally. 
It used to be technical &#8212; can the student build the thing they imagine? Now it is a judgment problem &#8212; can the student identify a problem worth solving, recognize when the output is wrong, and make the call about whether the result is useful or merely impressive?</p><p>Those are Tier 4 and Tier 5 capacities. But they get developed through Tier 1 practice on small real things with low stakes. The instrument that develops judgment is not a course on judgment. It is the repeated experience of building something, encountering the moment it fails, and being required to decide why.</p><p>The parallel experiment proposes AI as a Sherpa for this process &#8212; not a teacher, not a coach, not a co-creator. A Sherpa carries the infrastructure that makes the climb possible. The climbing belongs to the builder.</p><p><strong>The core assignment across every tier is the same:</strong></p><p>Build one small real thing that didn&#8217;t exist yesterday and matters to someone today. Not a demonstration. Not an exercise. Not an impressive artifact. A useful thing that works, at human scale, that someone actually uses.</p><p><strong>Small</strong> &#8212; completable this week. The Deweyian cycle requires completion. You must undergo the consequence to reconstruct from the doing. Incompletion produces learned helplessness, not inquiry. The massive project that never ships is the enemy of development.</p><p><strong>Real</strong> &#8212; works in the world, not just in the assignment. The feedback is honest because the environment is honest. No rubric required. Did it do what you needed? Yes or no.</p><p><strong>Useful</strong> &#8212; solves a problem someone actually has, including the builder. Useful is not the same as impressive. Many impressive things are useless. Many useful things are unimpressive. The criterion is genuine utility, not demonstration of mastery.</p><p><strong>Potentially interesting</strong> &#8212; has an edge that might surprise. 
Might connect to something larger. Might matter more than expected. This criterion preserves the continuity that Dewey required: each experience opening into the next. The student who builds something interesting keeps pulling the thread past the assignment deadline.</p><div><hr></div><h2>The Glimmer as Entry Point Across Tiers</h2><p>The parallel experiment is loosely mapped to the <em>Irreducibly Human</em> tiers not as curriculum but as orientation. The tier structure describes the territory. The glimmer is how you enter it.</p><p><strong>Tier 1 &#8212; Tool mastery.</strong> Stakes are almost irrelevant here. Low consequence failure is fine and instructive. The glimmer assignment: find something you do repeatedly that wastes your time. Use AI to reduce that waste. Ship it. Not elegant. Not generalizable. Useful to you today.</p><p>This constraint does something important. It forces problem formulation before tool selection. You have to identify what actually wastes your time before you can build anything. That single move is already more Deweyian than most AI literacy courses.</p><p><strong>Tier 4 &#8212; Metacognitive and Supervisory.</strong> The entry point shifts from personal to interpersonal. The glimmer assignment: build something useful for a decision someone else has to make. Now you must formulate their problem, not yours. The metacognitive demand appears immediately. You can&#8217;t outsource the judgment about what they actually need.</p><p>The moment the tool produces something confidently wrong &#8212; and it will &#8212; is the educative moment. Not the moment of correct output. The moment of plausible-sounding but incorrect output that the builder recognizes as wrong before they can prove it. That sensation is Tier 4 being born.</p><p><strong>Tier 5 &#8212; Causal and Counterfactual.</strong> The glimmer assignment: find one decision someone in your organization made last month based on correlation they interpreted as causation. 
Build the causal model that shows what question they were actually asking. Show what the Rung 2 question would have been.</p><p>That&#8217;s a week&#8217;s work. It contains the full JC Penney structure. Nobody loses their job if the student gets it wrong. But the causal model has to be defensible to someone who knows the domain. That&#8217;s genuine resistance. That&#8217;s the environment pushing back.</p><p><strong>Tier 6 &#8212; Collective and Distributed.</strong> The glimmer assignment: build something useful that requires other people to build it with you. The collective intelligence problem appears immediately. Division of labor is not collective intelligence. The thing that emerges from genuine collaborative synthesis &#8212; where the output exceeds what any individual possessed &#8212; only appears when the design requires it.</p><p><strong>Tier 7 &#8212; Wisdom.</strong> No assignment. The horizon the other tiers point toward. The person who has built many small real things, encountered genuine failure, revised under real pressure, and carried the consequences across time &#8212; that person is developing phronesis. Not from the curriculum. From the accumulated weight of having been wrong in ways that mattered and continuing anyway.</p><div><hr></div><h2>The Theory You Need is the One You Use</h2><p>The people who report they cannot keep up with new theories are not behind on the literature. They are ahead of their own application.</p><p>The gap is not between them and the frameworks. It is between the frameworks they have encountered and the real problems they have not yet used them on.</p><p>Pearl on causal inference: you don&#8217;t need to master the technical apparatus. You need to build one causal model for one real decision in your domain. Pearl becomes an instrument not a theory to keep pace with.</p><p>Barab&#225;si on network science: you don&#8217;t need to understand scale-free networks in the abstract. 
You need to map one network that affects your work and notice where the hubs are. Network science becomes a lens not a course to complete.</p><p>Dewey on experiential learning: you don&#8217;t need to read the secondary literature. You need to build one small real thing and notice what the experience taught you that reading couldn&#8217;t. Dewey becomes obvious not academic.</p><p>The parallel experiment reframes keeping up entirely. It is not a solution to information overload. It is a replacement of information consumption with building practice. The theory you use once on a real problem is worth more than fifty theories you have kept up with.</p><p>This is the instrument. Not the map. Not the taxonomy. The repeated practice of taking a framework, finding the smallest real problem it applies to, building something, and letting the environment respond.</p><p>Glimmers are the entry points that make this practice feel alive rather than obligatory. The specific detail that activates the nervous system. The 25% and 18 months. The question &#8220;what did you start to say?&#8221; The MVAL field that reveals what the student has been avoiding. The meal on the Laboratory School table.</p><p>The full Deweyian argument, stated plainly for the AI age:</p><p>You cannot understand these ideas from the outside. You have to be changed by using them. The AI tools are the most powerful instruments for building small real things that have ever existed. The barrier between inquiry and artifact has nearly disappeared. 
What remains is judgment &#8212; the irreducibly human capacity to decide what is worth building, recognize when the output is wrong, and make something that genuinely matters to someone.</p><p>That capacity is not developed by keeping up with theories about it.</p><p>It is developed by building things, encountering failure, revising under real conditions, and building again.</p><p>The glimmer is what keeps you building.</p><div><hr></div><h2>What Dewey Would Build</h2><p>Dewey would not build a better AI tutor. He would be alarmed by AI tutors &#8212; not because of the technology but because they make intellectual outsourcing frictionless, which is precisely the opposite of what he thought education was for.</p><p>He would be in crisis mode about the democratic implications of systems that answer questions rather than deepen them, that optimize for engagement over reflection, that make the production of knowledge dependent on a few institutions whose reasoning is opaque.</p><p>What he would build is simpler and harder:</p><p>Tools that surface the right problem before offering any solution. Environments where group inquiry is the unit of learning, not individual instruction. Infrastructure that connects learners to real communities facing real problems where their work has genuine consequence. Systems that make the reasoning behind important decisions visible and contestable by citizens.</p><p>And the parallel experiment: a practice of building small real things with AI as Sherpa, mapped loosely to the tiers of irreducibly human capacity, entered through glimmers specific enough to activate genuine inquiry.</p><p>Not because it is ambitious. Because it is real.</p><p>The meal on the table. The question that reveals what you&#8217;ve been avoiding. 
The thing that didn&#8217;t exist yesterday and matters to someone today.</p><p>That is what education has always been for.</p><p>The machines have simply made it undeniable.</p>]]></content:encoded></item><item><title><![CDATA[THE TWELVE WILD DUCKS]]></title><description><![CDATA[Audible was acting unethically but I wanted to hear a fairy tale, so I made my own with AI]]></description><link>https://www.skepticism.ai/p/the-twelve-wild-ducks</link><guid isPermaLink="false">https://www.skepticism.ai/p/the-twelve-wild-ducks</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Sat, 28 Mar 2026 05:13:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192380694/b9681794ad86f17948663b486e3a8940.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HV3v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HV3v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 424w, https://substackcdn.com/image/fetch/$s_!HV3v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 848w, https://substackcdn.com/image/fetch/$s_!HV3v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 1272w, 
https://substackcdn.com/image/fetch/$s_!HV3v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HV3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:16074685,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/192380694?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HV3v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 424w, https://substackcdn.com/image/fetch/$s_!HV3v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 848w, 
https://substackcdn.com/image/fetch/$s_!HV3v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 1272w, https://substackcdn.com/image/fetch/$s_!HV3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F665d6242-9608-46d4-8b94-c45299415f3f_3461x3461.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h1>A Note Before the Story</h1><p>Audible told me the books I bought were mine. They said it plainly: <em>yours forever</em>. 
I believed them.</p><p>Then they removed titles from my library. Books I had paid for, marked purchased, assumed were permanent &#8212; gone. When I asked why, the answers were evasive. The terms were reinterpreted. The guarantee dissolved into fine print no one had shown me at the point of sale.</p><p>This is not a complicated situation. They took something. Then they lied about taking it.</p><p>I had two options. Buy the same book again from the company that had already demonstrated it would take it from me again. Or build something they could not reach.</p><p>I chose the second. I took a Norwegian fairy tale &#8212; &#8220;The Twelve Wild Ducks,&#8221; collected by Asbj&#248;rnsen and Moe, public domain, belonging to no platform and no corporation &#8212; and I rebuilt it with AI tools. The result is what follows.</p><p>It is better than what Audible had. Not because the technology is superior. Because I own it and I am good at this. Because no company can revoke it at midnight and blame the licensing agreement. Because the story belongs to whoever is reading it right now, which is how stories were always meant to work, before the platforms decided ownership was a subscription service.</p><p>Read it. 
Then go check your own digital library and count what&#8217;s missing.</p><div><hr></div><p><em>The Twelve Wild Ducks &#8212; a Norwegian fairy tale, retold</em></p><div><hr></div><p><strong>Tags:</strong> Audible digital ownership, DRM audiobook removal, AI retold fairy tales, public domain Norwegian folklore, platform accountability</p><p>THE TWELVE WILD DUCKS</p><p>Once on a time there was a Queen who was out driving, when there had been a new fall of snow in the winter; but when she had gone a little way, she began to bleed at the nose, and had to get out of her sledge. And so, as she stood there, leaning against the fence, and saw the red blood on the white snow, she fell a-thinking how she had twelve sons and no daughter, and she said to herself:</p><p>&#8220;If I only had a daughter as white as snow and as red as blood, I shouldn&#8217;t care what became of all my sons.&#8221;</p><p>But the words were scarce out of her mouth before an old witch of the Trolls came up to her.</p><p>&#8220;A daughter you shall have&#8221;, she said, &#8220;and she shall be as white as snow, and as red as blood; and your sons shall be mine, but you may keep them till the babe is christened.&#8221;</p><p>So when the time came the Queen had a daughter, and she was as white as snow, and as red as blood, just as the Troll had promised, and so they called her &#8220;Snow-white and Rosy-red.&#8221; Well, there was great joy at the King&#8217;s court, and the Queen was as glad as glad could be; but when what she had promised to the old witch came into her mind, she sent for a silversmith, and bade him make twelve silver spoons, one for each prince, and after that she bade him make one more, and that she gave to Snow-white and Rosy-red. But as soon as ever the Princess was christened, the Princes were turned into twelve wild ducks, and flew away. They never saw them again&#8212;away they went, and away they stayed.</p><p>So the Princess grew up, and she was both tall and fair, but she was often so strange and sorrowful, and no one could understand what it was that failed her. But one evening the Queen was also sorrowful, for she had many strange thoughts when she thought of her sons. She said to Snow-white and Rosy-red,</p><p>&#8220;Why are you so sorrowful, my daughter? Is there anything you want? if so, only say the word, and you shall have it.&#8221;</p><p>&#8220;Oh, it seems so dull and lonely here&#8221;, said Snow-white and Rosy-red; &#8220;every one else has brothers and sisters, but I am all alone; I have none; and that&#8217;s why I&#8217;m so sorrowful.&#8221;</p><p>&#8220;But you <em>had</em> brothers, my daughter&#8221;, said the Queen; &#8220;I had twelve sons who were your brothers, but I gave them all away to get you&#8221;; and so she told her the whole story.</p><p>So when the Princess heard that, she had no rest; for, in spite of all the Queen could say or do, and all she wept and prayed, the lassie would set off to seek her brothers, for she thought it was all her fault; and at last she got leave to go away from the palace. On and on she walked into the wide world, so far, you would never have thought a young lady could have strength to walk so far.</p><p>So, once, when she was walking through a great, great wood, one day she felt tired, and sat down on a mossy tuft and fell asleep. Then she dreamt that she went deeper and deeper into the wood, till she came to a little wooden hut, and there she found her brothers; just then she woke, and straight before her she saw a worn path in the green moss, and this path went deeper into the wood; so she followed it, and after a long time she came to just such a little wooden house as that she had seen in her dream.</p><p>Now, when she went into the room there was no one at home, but there stood twelve beds, and twelve chairs, and twelve spoons&#8212;a dozen of everything, in short. So when she saw that she was so glad, she hadn&#8217;t been so glad for many a long year, for she could guess at once that her brothers lived here, and that they owned the beds, and chairs, and spoons. So she began to make up the fire, and sweep the room, and make the beds, and cook the dinner, and to make the house as tidy as she could; and when she had done all the cooking and work, she ate her own dinner, and crept under her youngest brother&#8217;s bed, and lay down there, but she forgot her spoon upon the table.</p><p>So she had scarcely laid herself down before she heard something flapping and whirring in the air, and so all the twelve wild ducks came sweeping in; but as soon as ever they crossed the threshold they became Princes.</p><p>&#8220;Oh, how nice and warm it is in here&#8221;, they said. &#8220;Heaven bless him who made up the fire, and cooked such a good dinner for us.&#8221;</p><p>And so each took up his silver spoon and was going to eat. But when each had taken his own, there was one still left lying on the table, and it was so like the rest that they couldn&#8217;t tell it from them.</p><p>&#8220;This is our sister&#8217;s spoon&#8221;, they said; &#8220;and if her spoon be here, she can&#8217;t be very far off herself.&#8221;</p><p>&#8220;If this be our sister&#8217;s spoon, and she be here&#8221;, said the eldest, &#8220;she shall be killed, for she is to blame for all the ill we suffer.&#8221;</p><p>And all this she lay under the bed and listened to.</p><p>&#8220;No&#8221;, said the youngest, &#8220;&#8217;twere a shame to kill her for that. She has nothing to do with our suffering ill; for if any one&#8217;s to blame, it&#8217;s our own mother.&#8221;</p><p>So they set to work hunting for her both high and low, and at last they looked under all the beds, and so when they came to the youngest Prince&#8217;s bed, they found her, and dragged her out. Then the eldest Prince wished again to have her killed, but she begged and prayed so prettily for herself.</p><p>&#8220;Oh! gracious goodness! don&#8217;t kill me, for I&#8217;ve gone about seeking you these three years, and if I could only set you free, I&#8217;d willingly lose my life.&#8221;</p><p>&#8220;Well!&#8221; said they, &#8220;if you will set us free, you may keep your life; for you can if you choose.&#8221;</p><p>&#8220;Yes; only tell me&#8221;, said the Princess, &#8220;how it can be done, and I&#8217;ll do it, whatever it be.&#8221;</p><p>&#8220;You must pick thistle-down&#8221;, said the Princes, &#8220;and you must card it, and spin it, and weave it; and after you have done that, you must cut out and make twelve coats, and twelve shirts, and twelve neckerchiefs, one for each of us, and while you do that, you must neither talk, nor laugh, nor weep. If you can do that, we are free.&#8221;</p><p>&#8220;But where shall I ever get thistle-down enough for so many neckerchiefs, and shirts, and coats?&#8221; asked Snow-white and Rosy-red.</p><p>&#8220;We&#8217;ll soon show you&#8221;, said the Princes; and so they took her with them to a great wide moor, where there stood such a crop of thistles, all nodding and nodding in the breeze, and the down all floating and glistening like gossamers through the air in the sunbeams. The Princess had never seen such a quantity of thistledown in her life, and she began to pluck and gather it as fast and as well as she could; and when she got home at night she set to work carding and spinning yarn from the down. So she went on a long long time, picking, and carding, and spinning, and all the while keeping the Princes&#8217; house, cooking, and making their beds. At evening home they came, flapping and whirring like wild ducks, and all night they were Princes, but in the morning off they flew again, and were wild ducks the whole day.</p><p>But now it happened once, when she was out on the moor to pick thistle-down&#8212;and if I don&#8217;t mistake, it was the very last time she was to go thither&#8212;it happened that the young King who ruled that land was out hunting, and came riding across the moor, and saw her. So he stopped there and wondered who the lovely lady could be that walked along the moor picking thistle-down, and he asked her her name, and when he could get no answer, he was still more astonished; and at last he liked her so much, that nothing would do but he must take her home to his castle and marry her. So he ordered his servants to take her and put her up on his horse. Snow-white and Rosy-red, she wrung her hands, and made signs to them, and pointed to the bags in which her work was, and when the King saw she wished to have them with her, he told his men to take up the bags behind them. When they had done that the Princess came to herself, little by little, for the King was both a wise man and a handsome man too, and he was as soft and kind to her as a doctor. But when they got home to the palace, and the old Queen, who was his stepmother, set eyes on Snow-white and Rosy-red, she got so cross and jealous of her because she was so lovely, that she said to the King:</p><p>&#8220;Can&#8217;t you see now, that this thing whom you have picked up, and whom you are going to marry, is a witch? Why, she can&#8217;t either talk, or laugh, or weep!&#8221;</p><p>But the King didn&#8217;t care a pin for what she said, but held on with the wedding, and married Snow-white and Rosy-red, and they lived in great joy and glory; but she didn&#8217;t forget to go on sewing at her shirts.</p><p>So when the year was almost out, Snow-white and Rosy-red brought a Prince into the world; and then the old Queen was more spiteful and jealous than ever, and at dead of night, she stole in to Snow-white and Rosy-red, while she slept, and took away her babe, and threw it into a pit full of snakes. After that she cut Snow-white and Rosy-red in her finger, and smeared the blood over her mouth, and went straight to the King.</p><p>&#8220;Now come and see&#8221;, she said, &#8220;what sort of a thing you have taken for your Queen; here she has eaten up her own babe.&#8221;</p><p>Then the King was so downcast, he almost burst into tears, and said:</p><p>&#8220;Yes, it must be true, since I see it with my own eyes; but she&#8217;ll not do it again, I&#8217;m sure, and so this time I&#8217;ll spare her life.&#8221;</p><p>So before the next year was out she had another son, and the same thing happened. The King&#8217;s stepmother got more and more jealous and spiteful. She stole into the young Queen at night while she slept, took away the babe, and threw it into a pit full of snakes, cut the young Queen&#8217;s finger, and smeared the blood over her mouth, and then went and told the King she had eaten up her own child. Then the King was so sorrowful, you can&#8217;t think how sorry he was, and he said:</p><p>&#8220;Yes, it must be true, since I see it with my own eyes; but she&#8217;ll not do it again, I&#8217;m sure, and so this time too I&#8217;ll spare her life.&#8221;</p><p>Well, before the next year was out, Snow-white and Rosy-red brought a daughter into the world, and her, too, the old Queen took and threw into the pit full of snakes, while the young Queen slept. Then she cut her finger, smeared the blood over her mouth, and went again to the King and said,</p><p>&#8220;Now you may come and see if it isn&#8217;t as I say; she&#8217;s a wicked, wicked witch, for here she has gone and eaten up her third babe, too.&#8221;</p><p>Then the King was so sad, there was no end to it, for now he couldn&#8217;t spare her any longer, but had to order her to be burnt alive on a pile of wood. But just when the pile was all a-blaze, and they were going to put her on it, she made signs to them to take twelve boards and lay them round the pile, and on these she laid the neckerchiefs, and the shirts, and the coats for her brothers, but the youngest brother&#8217;s shirt wanted its left arm, for she hadn&#8217;t had time to finish it. And as soon as ever she had done that, they heard such a flapping and whirring in the air, and down came twelve wild ducks flying over the forest, and each of them snapped up his clothes in his bill and flew off with them.</p><p>&#8220;See now!&#8221; said the old Queen to the King, &#8220;wasn&#8217;t I right when I told you she was a witch; but make haste and burn her before the pile burns low.&#8221;</p><p>&#8220;Oh!&#8221; said the King, &#8220;we&#8217;ve wood enough and to spare, and so I&#8217;ll wait a bit, for I have a mind to see what the end of all this will be.&#8221;</p><p>As he spoke, up came the twelve Princes riding along, as handsome well-grown lads as you&#8217;d wish to see; but the youngest Prince had a wild duck&#8217;s wing instead of his left arm.</p><p>&#8220;What&#8217;s all this about?&#8221; asked the Princes.</p><p>&#8220;My Queen is to be burnt,&#8221; said the King, &#8220;because she&#8217;s a witch, and because she has eaten up her own babes.&#8221;</p><p>&#8220;She hasn&#8217;t eaten them at all&#8221;, said the Princes. &#8220;Speak now, sister; you have set us free and saved us, now save yourself.&#8221;</p><p>Then Snow-white and Rosy-red spoke, and told the whole story; how every time she was brought to bed, the old Queen, the King&#8217;s stepmother, had stolen into her at night, had taken her babes away, and cut her little finger, and smeared the blood over her mouth; and then the Princes took the King, and shewed him the snake-pit where three babes lay playing with adders and toads, and lovelier children you never saw.</p><p>So the King had them taken out at once, and went to his stepmother, and asked her what punishment she thought that woman deserved who could find it in her heart to betray a guiltless Queen and three such blessed little babes.</p><p>&#8220;She deserves to be fast bound between twelve unbroken steeds, so that each may take his share of her&#8221;, said the old Queen.</p><p>&#8220;You have spoken your own doom&#8221;, said the King, &#8220;and you shall suffer it at once.&#8221;</p><p>So the wicked old Queen was fast bound between twelve unbroken steeds, and each got his share of her. But the King took Snow-white and Rosy-red, and their three children, and the twelve Princes; and so they all went home to their father and mother, and told all that had befallen them, and there was joy and gladness over the whole kingdom, because the Princess was saved and set free, and because she had set free her twelve brothers.</p>]]></content:encoded></item><item><title><![CDATA[Stop Hunting for Answers.
Ask Your Course]]></title><description><![CDATA[What a gambling algorithm reveals about the real problem with educational technology]]></description><link>https://www.skepticism.ai/p/stop-hunting-for-answers-ask-your</link><guid isPermaLink="false">https://www.skepticism.ai/p/stop-hunting-for-answers-ask-your</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Fri, 27 Mar 2026 04:33:53 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192278873/b03e1f056023783ae504a78264b34f9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Learn more &#8594; <a href="https://medhavy.ai">https://medhavy.ai</a></p><p>Read more on the Medhavy blog: <strong><a href="https://medhavy.ai/blog">https://medhavy.ai/blog</a></strong></p><p>There is a moment most students know. You are twelve minutes into a lecture, or forty pages into a chapter, and the explanation has stopped making contact. The words are still arriving &#8212; the instructor is still talking, the textbook still has sentences &#8212; but something has decoupled. You are receiving information. You are not learning anything.</p><p>What happens next depends on who you are. Some students stop and ask a question. Some open a second tab. Some take more aggressive notes, as if the problem is that they haven&#8217;t written fast enough. Most do what people do when a machine stops working: they wait, and hope it starts again.</p><p>The system&#8217;s response to this moment is almost always the same. It continues. The lecture does not pause to recalibrate. The textbook does not offer a different approach. The platform logs that you have completed the module. You have not completed the module. You have sat in the room while the module happened.</p><p>This is not a technology problem. It is a philosophy problem. 
And the technology we have built to fix it has mostly encoded the same philosophy in a more expensive box.</p><div><hr></div><h2>The Illusion of Adaptation</h2><p>For the past decade, the word <em>adaptive</em> has done significant damage to educational technology.</p><p>Adaptive, in the way most platforms use it, means personalized in the sense that a streaming service is personalized &#8212; the algorithm has observed your behavior and is now showing you more of what you already clicked on. Netflix knows you watch crime dramas. It does not know whether you understood them. It does not know whether watching more crime dramas is good for you. It knows you did not turn it off.</p><p>Apply this logic to learning and you get what we have: platforms that track completion, adjust pacing, and serve more of what a student has already engaged with. A student who moves quickly gets harder content. A student who slows down gets simpler content. This is not adaptation. This is a speed adjustment. The car is still going the same direction. It is going faster or slower based on whether you look nervous.</p><p>The deeper variable &#8212; the one that actually determines whether a person learns something &#8212; is not pace. It is approach. Whether the concept is explained directly or discovered through questions. Whether it is anchored in a case study or built from first principles. Whether the learner is asked to produce something or receive something. Whether the material is revisited strategically or encountered once and abandoned to memory.</p><p>These are pedagogical choices. They have been studied for decades. There are researchers who have spent careers trying to understand which approach works for which person under which conditions. The literature is substantial and inconclusive &#8212; because the answer is not fixed. Different people learn differently. 
The same person learns differently on different days, at different moments in a topic, at different levels of prior knowledge.</p><p>The honest conclusion from all of this research is not a recommendation. It is a method. You have to run the experiment.</p><div><hr></div><h2>The Bandit</h2><p>The multi-armed bandit is a framework borrowed from probability theory, named for the slot machines in a casino &#8212; each with a different payout rate, none of them labeled.</p><p>The problem the framework solves is this: you have several options, you don&#8217;t know which one is best, and you have to act while you&#8217;re still learning. You cannot spend all your time testing (you&#8217;ll never exploit what you&#8217;ve learned) and you cannot commit to the first option that works (you might be missing something better). The bandit framework manages this tradeoff &#8212; choosing the option that currently looks best while continuously allocating some probability to exploring the alternatives.</p><p>Medhavy applies this framework not to slot machines but to pedagogical approaches. Five of them: direct instruction, Socratic questioning, case-based learning, spaced retrieval practice, and project-based generative learning. Each is a coherent educational philosophy with its own decades-long research tradition. Direct instruction works for foundational concepts, clear definitions, sequences that need to be right before anything else can proceed. Socratic questioning works for learners who have surface-level confidence and need to be pushed past the answer they&#8217;re pattern-matching toward. Case-based learning works for professionals whose knowledge only means something when it contacts a real decision. Spaced retrieval works for cumulative content where earlier concepts must survive long enough to support later ones. 
Project-based learning works when demonstrated output is the actual goal.</p><p>Each of these approaches requires different content, a different AI persona, a different conversational posture. The platform has to be built differently depending on which one is active. This is not a toggle. It is architecture.</p><p>What the bandit does is decide, for each learner at each moment, which approach to deploy &#8212; then observe what happens &#8212; then update its model. If a learner is getting grounded, engaged responses under the Socratic approach and then the pattern breaks, the bandit notices. It tries something else. When the evidence comes in, the model updates. Not for the cohort. For this learner, in this moment, in this chapter.</p><p>Most adaptive platforms are adaptive at the level of the cohort, or at the level of the module, or at best at the level of the pacing track. Medhavy&#8217;s bandit is adaptive at the level of the pedagogical philosophy itself &#8212; the deepest variable, the one that actually determines contact.</p><div><hr></div><h2>What Running the Experiment Actually Means</h2><p>Here is what it means in practice, because the abstraction is easy to nod at without grasping.</p><p>A business school executive logs into a white-labeled deployment of the platform &#8212; the institution&#8217;s logo, their colors, a persona configured to sound like a senior corporate strategy advisor. She is working through a module on AI literacy. The bandit has no prior data on her. It defaults to direct instruction &#8212; explicit definitions, worked examples, clear sequencing.</p><p>She moves through it quickly. Her dwell time on the explanatory sections is short. She is not pausing to absorb. She already knows this. The bandit observes this pattern and shifts: the persona begins responding with questions rather than answers. When she states that AI can reduce operational costs, the advisor asks: in which cost category, specifically? 
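</p><p>The choose-observe-update loop steering this session can be sketched in a few lines. What follows is a minimal illustrative epsilon-greedy bandit over the five approaches, not Medhavy&#8217;s actual implementation: the arm identifiers, the reward signal, and the epsilon value are all placeholder assumptions for the sake of the sketch.</p>

```python
import random

# Illustrative epsilon-greedy bandit over five pedagogical "arms".
# The arm names mirror the essay; the reward signal, the epsilon value,
# and every identifier here are placeholder assumptions, not Medhavy's code.

ARMS = ["direct", "socratic", "case_based", "spaced_retrieval", "project_based"]

class PedagogyBandit:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon                 # probability of exploring
        self.counts = {a: 0 for a in ARMS}     # times each approach was tried
        self.values = {a: 0.0 for a in ARMS}   # running mean "contact" reward

    def choose(self):
        # Mostly exploit the best-looking approach; sometimes explore another.
        if random.random() < self.epsilon:
            return random.choice(ARMS)
        return max(ARMS, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean: nudge the estimate toward the observed reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = PedagogyBandit()
arm = bandit.choose()
bandit.update(arm, reward=1.0)  # 1.0 if the learner engaged, 0.0 if the pattern broke
```

<p>A production bandit would more likely use Thompson sampling than a fixed epsilon, but the choose-observe-update loop is the same.</p><p>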
What assumption about labor productivity is that estimate resting on? She slows down. She starts typing longer responses.</p><p>This is contact. The bandit records it.</p><p>Three modules later, she is in unfamiliar territory. The Socratic approach that worked before has stopped working &#8212; she is guessing rather than reasoning, which looks the same from the outside but registers differently in the interaction pattern. The bandit shifts again, this time to case-based learning. The persona anchors the next concept in a documented business case. She can see what happened, evaluate what went wrong, apply the framework to the scenario. The abstraction becomes legible through the example.</p><p>None of this requires a human to observe her, diagnose her, and intervene. It runs continuously, invisibly, updating with every interaction. At the end of the cohort, the institution sees which pedagogical approaches drove the most durable engagement, where the content has gaps (the grounded / not in textbook ratio), and which modules generated the most friction. The credential the institution issues has actual learning evidence behind it.</p><p>This is what it means to run the experiment. Not to have a theory about which approach is best. To find out.</p><div><hr></div><h2>The Constraint That Makes It Honest</h2><p>There is one more piece of the architecture that matters, and it is the most counterintuitive.</p><p>The AI tutor that runs inside Medhavy is not allowed to use the internet. It is not allowed to draw on general knowledge. It is not allowed to speculate. When a student asks a question, the tutor searches the course content &#8212; the verified, expert-reviewed textbook built for this specific deployment &#8212; and grounds its response in what is actually there. If the answer is not in the textbook, the tutor says so. Not in the textbook. That is the response.</p><p>This sounds like a limitation. 
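</p><p>The shape of that guard is easy to sketch. The following is a minimal illustration, not Medhavy&#8217;s actual retrieval: the function names, the word-overlap scoring, and the threshold are placeholder assumptions standing in for whatever grounded search the platform really performs.</p>

```python
# Minimal sketch of a "textbook-only" answer guard. Illustrative only:
# the function names, word-overlap scoring, and threshold are assumptions,
# not Medhavy's actual retrieval pipeline.

def search_textbook(question, passages, min_overlap=2):
    """Return the passage sharing the most words with the question,
    or None if nothing clears the overlap threshold."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for p in passages:
        score = len(q_words & set(p.lower().split()))
        if score > best_score:
            best, best_score = p, score
    return best if best_score >= min_overlap else None

def answer(question, passages):
    """Ground the reply in course content, or refuse outright."""
    hit = search_textbook(question, passages)
    if hit is None:
        return "Not in the textbook."   # honest refusal, never a guess
    return f"From the course text: {hit}"

passages = [
    "Direct instruction presents explicit definitions and worked examples.",
    "Spaced retrieval practice revisits concepts at increasing intervals.",
]
print(answer("What is spaced retrieval practice?", passages))
print(answer("Who won the World Cup?", passages))
```

<p>The out-of-scope question gets the refusal, not a fluent guess; that asymmetry is the whole design.</p><p>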
It is the point.</p><p>The failure mode of every general-purpose AI tutor is that it sounds authoritative whether or not it is correct. It produces fluent, confident, plausible responses. Students who cannot evaluate whether the response is accurate have no way to know when it has invented something. The TEXTBOOK_ONLY constraint eliminates this failure mode by eliminating the thing that causes it. The tutor cannot hallucinate because it cannot leave the source material.</p><p>A student who gets <em>not in textbook</em> has not gotten a wrong answer. They have gotten a real signal: this question is beyond the scope of what we&#8217;re covering here, and you should know that. That is pedagogically useful. That is honest. The platform would rather say nothing than say something false.</p><p>Most EdTech does not make this choice. Most EdTech prioritizes the appearance of competence over the reality of it. Medhavy has decided that the constraint is the credibility.</p><div><hr></div><h2>What This Means for Anyone Paying Attention</h2><p>The argument for Medhavy is not that it is smarter than other platforms. It is that it is more honest about what learning requires.</p><p>Learning requires contact &#8212; the moment when an explanation actually reaches someone. That moment is not guaranteed by pacing, or by completions, or by a student sitting in the virtual room while the module happens. It requires the right approach for this person at this moment, applied consistently enough to work, abandoned quickly enough when it stops.</p><p>The bandit does not know in advance which approach is right. It cannot. Nobody can. What it does instead is run the experiment continuously, update on evidence, and refuse to commit to a prior that the evidence no longer supports.</p><p>That is not a gambling algorithm applied to education. 
That is what good teaching has always been &#8212; the willingness to try something different when what you&#8217;re doing stops working, the discipline to notice when it stops working before the student gives up, and the honesty to say, when you don&#8217;t know the answer: <em>I don&#8217;t know. But I know where to look.</em></p><p>The machine has learned something most platforms haven&#8217;t.</p><p>The question is whether the institutions that deploy it are willing to learn the same thing: that the evidence matters more than the assumption, and that running the experiment is not a sign of uncertainty.</p><p>It is the whole method.</p><div><hr></div><p><strong>Tags:</strong> Medhavy AI adaptive learning, multi-armed bandit pedagogy, EdTech platform architecture, personalized learning systems, AI tutor grounded retrieval</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[What School Was Always Bad At]]></title><description><![CDATA[An introduction to Irreducibly Human: What AI Can and Can't Do]]></description><link>https://www.skepticism.ai/p/what-school-was-always-bad-at</link><guid isPermaLink="false">https://www.skepticism.ai/p/what-school-was-always-bad-at</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Wed, 25 Mar 2026 22:50:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_TN_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_TN_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_TN_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 424w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 848w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 1272w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_TN_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png" width="1456" height="663" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:663,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1779637,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/192151743?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_TN_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 424w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 848w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 1272w, https://substackcdn.com/image/fetch/$s_!_TN_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9b98cf1-fcaa-458c-baf2-4a1d0c1f2c60_3408x1552.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Irreducibly Human:  </em><a href="https://www.irreducibly.xyz/">https://www.irreducibly.xyz/</a></p><p>The panic arrived in the wrong order.</p><p>When ChatGPT went public in November 2022, schools declared a crisis. Students were cheating. Essays were being written by machines. Arithmetic was being performed by algorithms. The question administrators asked &#8212; urgently, in emergency faculty meetings, in policy documents rushed into existence over winter break &#8212; was how to detect this. How to prevent it. How to put the genie back in the bottle.</p><p>Nobody asked the prior question.</p><p>Why are we assigning work a machine can do?</p><p>Here is what the panic missed: AI didn&#8217;t break education.
It exposed a failure that was already there, running quietly for decades, producing graduates optimized for exactly the tasks that software now performs better, faster, and cheaper than any human being alive. The curriculum we built &#8212; and built deliberately, and defended with genuine belief in its value &#8212; was a curriculum for a world that no longer exists.</p><p>Machines arrived. And we could finally see what we had been training people to do.</p><div><hr></div><h2>The Curriculum We Built</h2><p>To be clear: the failure was not malicious. Institutional inertia is not stupidity. Schools change slowly because they were built to transmit what is known, not to respond to what is new. That feature is now a bug. For most of the twentieth century, arithmetic speed and fact retrieval were genuinely valuable human capacities. An accountant who could run numbers in her head was worth hiring. A lawyer who had memorized case law was difficult to replace. An engineer who could recall formulas without looking them up got work done faster.</p><p>That world is gone.</p><p>The intelligent response to the invention of the forklift is not to practice lifting heavier objects. It is to learn to operate the machine, understand what it can and cannot lift, and &#8212; most crucially &#8212; develop the judgment to know what needs lifting in the first place. The question the forklift raises is not about strength. It is about what the work actually is, now that strength is no longer the constraint.</p><p><em>Irreducibly Human: What AI Can and Can&#8217;t Do</em> is a six-book curriculum series built around that question. It does not teach students to compete with AI. It teaches them to supply the reasoning that AI tools require humans to provide &#8212; the reasoning no tool can supply on their behalf.</p><p>The series organizes human intelligence into seven tiers by a single criterion: what machines can and cannot do. 
Where AI is strongest &#8212; pattern recognition, fact retrieval, syntactic correctness, encyclopedic recall &#8212; the curriculum doesn&#8217;t train humans to compete directly. That would be malpractice. Where AI is weakest &#8212; causal reasoning, metacognitive oversight, collective intelligence, practical wisdom &#8212; the curriculum rebuilds from scratch.</p><p>The name changed recently. It was called <em>The Human Half: What AI Can&#8217;t Do</em>. The rename matters. &#8220;What AI can&#8217;t do&#8221; is a defensive posture &#8212; we are mapping a shrinking territory, waiting to see how much ground we lose. &#8220;Irreducibly human&#8221; says something different. There are capacities that are not merely outside AI&#8217;s current capability. They are outside its fundamental nature. Not gaps waiting to be filled. Structure.</p><div><hr></div><h2>The Gardner Trap</h2><p>In 1983, Howard Gardner published <em>Frames of Mind</em> and cracked something open.</p><p>Multiple intelligences, he argued. Not one general intelligence but several: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal. The framework was a genuine provocation. It said that the student who couldn&#8217;t sit still and parse grammar might have an intelligence the school wasn&#8217;t measuring. It said that the child who couldn&#8217;t add fractions might still understand the geometry of a room in her body before she crossed it.</p><p>Schools responded. Enthusiastically. &#8220;We teach to all the intelligences,&#8221; they said. And then, largely, they kept doing what they had always done.</p><p>Forty years later, there is still no validated assessment for intrapersonal intelligence. The curriculum that was supposed to follow the framework never fully arrived. What arrived instead was vocabulary. 
Teachers learned to say &#8220;multiple intelligences&#8221; the way they learned to say &#8220;growth mindset&#8221; &#8212; as a description of what they believed, not as a specification of what they would do differently on Monday morning.</p><p>This is the Gardner Trap: naming a thing so well that the naming feels like the work.</p><p>Gardner&#8217;s framework was built before machines became capable, which means it didn&#8217;t need to ask which intelligences technology endangered. It also didn&#8217;t name three tiers the series considers essential: the supervisory layer (knowing when an answer is wrong before recomputing it, knowing which tool to deploy and whether to trust what it returns), the causal layer (not just observing that X follows Y but reasoning about what happens if you intervene, about what would have happened if you had not), and the collective layer (the intelligence that emerges from groups working together in ways that exceed the sum of individual ability &#8212; the intelligence of science, of markets, of democracy, of any collaborative practice that generates knowledge no single person could generate alone).</p><p>None of these are properties of individuals. You cannot have supervisory intelligence in a vacuum &#8212; it requires a tool to supervise, a context in which the supervision matters, stakes. You cannot do causal reasoning without a question worth asking. Collective intelligence is definitionally not possessed; it is accomplished together.</p><p>An algorithm has access to the literature. It is absent from the practice that generates new knowledge. That absence is not a temporary limitation. It is a structural one.</p><p><em>Irreducibly Human</em> is explicitly Stage 1 of a three-stage sequence: Name it. Teach it. Measure it. Gardner did Stage 1 brilliantly. Forty years passed. 
The series is an attempt to hold Stage 1 more honestly &#8212; to name only what can be defined clearly enough to teach, and to be transparent about where the measurement infrastructure doesn&#8217;t yet exist. Stages 2 and 3 are in development, in collaboration with the Center for Curriculum Redesign. The series is not claiming to have completed them. It is claiming that Stage 1 done honestly &#8212; with specific learning outcomes, sequenced exercises, and defined criteria for success &#8212; is rarer than it sounds, and more necessary than the field has acknowledged.</p><div><hr></div><h2>What the Series Actually Is</h2><p>Six books. Two companions. A complete production infrastructure.</p><p><em>AI Literacy, Fluency, and Trust</em> is the entry point &#8212; how to operate the machine without being replaced by it. <em>Causal Reasoning</em> is the identification layer &#8212; what causes what, and why no algorithm can answer that for you. <em>AImagineering</em> is post-AI design thinking &#8212; one week on ideation, the rest on the judgment that makes ideation matter. <em>Ethical Play</em> asks students to build a game that makes a player feel moral weight, then survive an AI audit proving the ethics are in the mechanics and not just in the documentation. <em>Conducting AI</em> teaches the five supervisory capacities no algorithm possesses &#8212; hearing the wrong note, choosing the piece, directing the sections. <em>The Collective</em> addresses the intelligence that cannot be possessed. Only accomplished. Together.</p><p>The companion books extend the series into domains the core texts cannot reach. A teacher&#8217;s guide addresses fifteen fields where the body knows things that language models do not: lab science, woodshop, nursing simulation, surgical training, studio art, dance, trades. 
A practitioner&#8217;s guide for experiential learning addresses the co-op coordinators, clinical placement directors, and study abroad advisors who send students into the world to learn &#8212; because practical wisdom, the Aristotelian capacity to know when and how to apply what you know and when not to, cannot be taught in a classroom. It can be scaffolded in the field.</p><p>The series is being built with the same tools it teaches. That is not an accident. Every book in the series was produced using an AI-assisted production infrastructure &#8212; a chapter drafting engine, an assertion verification system that scans claims and flags suspect ones for expert review, a figure generation protocol, a custom case study generator, a peer review framework, a game design document consultant. A 38-chapter textbook in cancer biology was written in approximately one month using this infrastructure and is currently in production in an NIH program. The Boyle System &#8212; a documentary infrastructure for scientific reproducibility &#8212; reduced the time senior researchers spent reviewing mentee work from sixty percent of each meeting to twenty, across more than 150 fellows in applied AI humanitarian contexts.</p><p>The thesis is demonstrated by the method used to build it. The forklift is being operated. What the forklift cannot lift is being named, precisely, in each chapter.</p><div><hr></div><h2>What This Is Not</h2><p>It is not a book about AI.</p><p>This distinction is harder to hold than it sounds, because AI is everywhere in the series &#8212; as the subject of study, as the production infrastructure, as the adversary the ethics course must survive. But AI is not the center of gravity. Humans are. Specifically, the capacities that make humans irreplaceable not in spite of AI but because of it &#8212; because the tools require human judgment to operate, human values to direct, human stakes to make the outputs matter.</p><p>An algorithm has no stakes. 
It cannot commit because it cannot lose. The series is built for people who can lose, who are mortal and situated in time, who will have to live with the decisions the tools help them make. Those people need a curriculum that prepares them for the work the tools cannot do. That work is not shrinking. It is expanding.</p><p>The schools that spent the last two years trying to detect AI-generated student essays were asking the wrong question. The right question is what we are asking students to do with their irreducible minds, now that the machines have taken everything else.</p><p><em>Irreducibly Human</em> is an attempt to answer that.</p><div><hr></div><p><strong>Tags:</strong> Irreducibly Human curriculum series, AI education reform, Howard Gardner multiple intelligences critique, causal reasoning pedagogy, human capacities AI cannot replace</p>]]></content:encoded></item><item><title><![CDATA[Marley — Talk to Your Website]]></title><description><![CDATA[Use a template and Claude code to create a living document]]></description><link>https://www.skepticism.ai/p/marley-talk-to-your-website</link><guid isPermaLink="false">https://www.skepticism.ai/p/marley-talk-to-your-website</guid><dc:creator><![CDATA[Nik Bear Brown]]></dc:creator><pubDate>Mon, 23 Mar 2026 21:18:21 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191914369/d772ddd9056d4efdba764d516a21afb2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p><p>MARLEY: <a href="https://marley.bearbrown.co/">https://marley.bearbrown.co/</a></p><p>Most website templates give you a starting point and then leave you alone with it.</p><p>Marley doesn&#8217;t. Marley is a Next.js template built for a specific kind of collaboration: you clone it, you open Claude Code in the directory, and you talk to it. You say what you want. The website changes. You say something else. The website changes again. 
The website is never finished &#8212; it&#8217;s a living document that evolves as your needs become clearer.</p><p>Here&#8217;s what it ships with: a blog system, a tools directory, a Substack importer that pulls your posts (and checks for duplicates, and imports your drafts), and support for animations and D3 graphs that Substack itself can&#8217;t render. It&#8217;s self-documenting &#8212; it can generate a technical reference for its own features, suggest what to build next, and create spec documents for proposed additions. It also exposes Claude prompt tools publicly, so your tools page becomes a real tool directory, not just a list of links.</p><p>The workflow is simple. Open the template. Open Claude Code. Tell it who you are and what you don&#8217;t need. Remove the blog. Change the brand. Update the links. Connect your Substack. Add your tools. The template becomes your site because you told it to.</p><p>Marley is MIT licensed, open source, and built by Nik Bear Brown. It&#8217;s the infrastructure for bearbrown.co and the Musinique ecosystem &#8212; rebuilt every time a conversation asked it to be different.</p><p>Clone it. Talk to it. See what it becomes.</p><p>&#8594; <a href="https://github.com/nikbearbrown">GitHub</a> &#183; <a href="https://bearbrown.co">Built by Nik Bear Brown</a> &#183; <a href="https://musinique.substack.com">The Skepticism AI Substack</a></p><div><hr></div><p><strong>Tags:</strong> Next.js website template, Claude Code integration, talk-to-your-website, Substack importer Next.js, living document web development</p><p><strong>What this document is</strong></p><p>A reference for the Marley multi-brand Next.js template. It covers what the template contains, how each system is structured, the full database schema, the route map, and the environment variables required for deployment. It closes with five proposed future additions. 
Use this when navigating an unfamiliar part of the codebase, planning a new feature, or onboarding a second developer.</p><h1>1. What Marley is</h1><p>Marley is a production-grade Next.js site template that proves its own flexibility by wearing different costumes. The same codebase is styled for multiple fictional businesses from public domain literature &#8212; each with a distinct voice, palette, and copy &#8212; without touching routing, components, or infrastructure.</p><p><em>The template demonstrates itself. Each brand instance is a stress test: if Scrooge &amp; Marley&#8217;s austere ledger aesthetic and Au Bonheur des Dames&#8217; lush retail warmth can coexist in the same codebase, the theming system is real.</em></p><p>The base codebase was derived from the Medhavy adaptive learning platform (Medhavy LLC, Nik Bear Brown and Srinivas Sridhar). All Medhavy branding has been replaced per brand instance. The infrastructure &#8212; routing, admin, database schema, API contracts &#8212; is shared and unchanged across instances.</p><h2>Current brand instances</h2><table><thead><tr><th>Brand</th><th>Source</th><th>Industry (fictional)</th><th>Status</th></tr></thead><tbody><tr><td>Scrooge &amp; Marley</td><td>Dickens, <em>A Christmas Carol</em>, 1843</td><td>Counting house, money lending</td><td>Live</td></tr><tr><td>Au Bonheur des Dames</td><td>Zola, <em>Au Bonheur des Dames</em>, 1883</td><td>Department store, retail</td><td>Planned</td></tr><tr><td>Lapham Paint</td><td>Howells, <em>The Rise of Silas Lapham</em>, 1885</td><td>Industrial paint manufacturing</td><td>Planned</td></tr><tr><td>Dotheboys Hall</td><td>Dickens, <em>Nicholas Nickleby</em>, 1839</td><td>Education (cautionary)</td><td>Planned</td></tr></tbody></table><p>All source works are public domain. The brands as implemented &#8212; copy, design, codebase &#8212; are not.</p><h1>2. 
Tech stack</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X5_o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X5_o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 424w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 848w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 1272w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X5_o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png" width="1456" height="913" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:913,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:204008,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.skepticism.ai/i/191914369?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!X5_o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 424w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 848w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 1272w, https://substackcdn.com/image/fetch/$s_!X5_o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F513a92ca-409a-4608-ad8c-eb05a3200ccc_1684x1056.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h1>3. Multi-brand theming system</h1><p>The theming system is the core architectural claim of the Marley template. Changing a brand requires editing three files. No component changes. No routing changes. The entire site repaints.</p><h2>The three files that must stay in sync</h2><p><strong>lib/theme.ts</strong></p><p><strong>TypeScript source of truth</strong></p><p>Exports a typed <code>theme</code> constant containing the brand name, tagline, address, contact, domain, and the eight colour values (<code>bb1</code>&#8211;<code>bb8</code>). This is the canonical source. If it conflicts with the other two files, this one wins.</p><p><strong>public/theme.json</strong></p><p><strong>Machine-readable</strong></p><p>Same data as <code>lib/theme.ts</code>, serialised as JSON. 
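</p><p>A minimal sketch of what the typed <code>theme</code> constant described above might look like. The colour values are the Scrooge &amp; Marley palette from the table below; the interface shape, tagline, address, contact, and domain values are illustrative assumptions, not the template&#8217;s real data:</p><pre><code>// Sketch of lib/theme.ts (shape assumed from the description above).
interface BrandTheme {
  name: string;
  tagline: string;  // placeholder values below
  address: string;
  contact: string;
  domain: string;
  colors: {
    bb1: string; bb2: string; bb3: string; bb4: string;
    bb5: string; bb6: string; bb7: string; bb8: string;
  };
}

export const theme: BrandTheme = {
  name: "Scrooge and Marley",
  tagline: "Placeholder tagline",      // illustrative only
  address: "Placeholder address",      // illustrative only
  contact: "hello@example.com",        // illustrative only
  domain: "example.com",               // illustrative only
  colors: {
    bb1: "#0D0D0D", // soot black: primary text
    bb2: "#4A4A4A", // iron grey: primary accent, headers
    bb3: "#8B0000", // dried-ink red: danger, overdue, emphasis
    bb4: "#8B7536", // cold brass: highlight, callout
    bb5: "#2F2F2F", // charcoal: secondary accent
    bb6: "#6B6B5E", // tarnished pewter: muted accent, labels
    bb7: "#9C9680", // aged ledger tan: borders, subtle backgrounds
    bb8: "#E8E0D0", // parchment: page background, light surfaces
  },
};</code></pre><p>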
Read by Indiana (the doc generator) and any external tooling that needs palette values without importing TypeScript. Includes a <code>colorRoles</code> field describing the semantic role of each colour variable.</p><p><strong>app/globals.css</strong></p><p><strong>CSS variables</strong></p><p>The <code>:root</code> block defines <code>--bb-1</code> through <code>--bb-8</code>. A matching <code>.dark</code> block inverts the parchment/soot relationship for dark mode. All components reference these variables &#8212; no hex values appear in component files.</p><h2>Palette variable roles (mandatory conventions)</h2><table><thead><tr><th>Variable</th><th>Role</th><th>Scrooge &amp; Marley value</th></tr></thead><tbody><tr><td><code>--bb-1</code></td><td>Primary text</td><td>#0D0D0D &#8212; soot black</td></tr><tr><td><code>--bb-2</code></td><td>Primary accent, headers</td><td>#4A4A4A &#8212; iron grey</td></tr><tr><td><code>--bb-3</code></td><td>Danger, overdue, emphasis</td><td>#8B0000 &#8212; dried-ink red</td></tr><tr><td><code>--bb-4</code></td><td>Highlight, callout</td><td>#8B7536 &#8212; cold brass</td></tr><tr><td><code>--bb-5</code></td><td>Secondary accent</td><td>#2F2F2F &#8212; charcoal</td></tr><tr><td><code>--bb-6</code></td><td>Muted accent, labels</td><td>#6B6B5E &#8212; tarnished pewter</td></tr><tr><td><code>--bb-7</code></td><td>Borders, subtle backgrounds</td><td>#9C9680 &#8212; aged ledger tan</td></tr><tr><td><code>--bb-8</code></td><td>Page background, light surfaces</td><td>#E8E0D0 &#8212; parchment</td></tr></tbody></table><p><strong>WCAG AA contract</strong></p><p>WCAG AA requires 4.5:1 contrast for body text and 3:1 for large text. When replacing palette values for a new brand, verify <code>--bb-1</code> against <code>--bb-8</code> and <code>--bb-2</code> against <code>--bb-8</code> before deploying. Many brand primaries fail at body text size.</p><h1>4. 
Site structure and routes</h1><h2>Public routes</h2><ul><li><p><code>/</code> &#8212; Home &#8212; five sections: hero, services, who we serve, CTA, contact</p></li><li><p><code>/tools</code> &#8212; Tools directory &#8212; card grid merging filesystem artifacts and DB link tools</p></li><li><p><code>/tools/[slug]</code> &#8212; Artifact embed page &#8212; full-viewport iframe with title bar</p></li><li><p><code>/dev</code> &#8212; Dev docs browser &#8212; searchable card grid, filesystem-driven</p></li><li><p><code>/dev/[slug]</code> &#8212; Single dev doc &#8212; full-viewport iframe</p></li><li><p><code>/blog</code> &#8212; Blog feed &#8212; cover thumbnails, search bar, published posts newest first</p></li><li><p><code>/blog/[slug]</code> &#8212; Blog post &#8212; cover hero, prose content, og:image, prev/next nav</p></li><li><p><code>/about</code> &#8212; Firm/person page &#8212; prose format, founders, contact</p></li><li><p><code>/privacy</code> &#8212; Privacy policy</p></li><li><p><code>/privacy/cookies</code> &#8212; Cookie policy &#8212; dedicated page</p></li><li><p><code>/terms-of-service</code> &#8212; Terms of service</p></li><li><p><code>/substack</code> &#8212; Newsletter hub &#8212; card grid of all sections</p></li><li><p><code>/substack/[section]</code> &#8212; Section page &#8212; article list, follow CTA</p></li><li><p><code>/substack/[section]/[slug]</code> &#8212; Full article &#8212; attribution banner, prose, subscribe CTA</p></li></ul><h2>Admin routes (protected)</h2><ul><li><p><code>/admin/login</code> &#8212; Password form &#8212; POSTs to /api/admin/login</p></li><li><p><code>/admin/dashboard</code> &#8212; Overview &#8212; tabbed nav to all admin sections</p></li><li><p><code>/admin/dashboard/blog</code> &#8212; Post list &#8212; tag filter, bulk delete, import/export</p></li><li><p><code>/admin/dashboard/blog/new</code> &#8212; New post editor</p></li><li><p><code>/admin/dashboard/blog/[id]/edit</code> &#8212; Edit existing post</p></li><li><p><code>/admin/dashboard/blog/import</code> &#8212; Import &#8212; Substack ZIP or blog export ZIP</p></li><li><p><code>/admin/dashboard/tools</code> &#8212; Tools manager &#8212; link and artifact types</p></li><li><p><code>/admin/dashboard/dev</code> &#8212; Dev docs list &#8212; filesystem browser with sync button</p></li><li><p><code>/admin/dashboard/substack</code> &#8212; Substack section manager &#8212; create sections, import ZIPs</p></li></ul><h1>5. 
Content systems</h1><h2>Blog system</h2><p>The blog system uses Neon PostgreSQL for post storage, Tiptap for authoring, and Vercel Blob for image storage. Posts are database-driven; the admin editor produces clean HTML stored in the <code>content</code> column.</p><p><strong>Key capabilities</strong></p><ul><li><p>WYSIWYG editor: bold, italic, headings, lists, blockquotes, code blocks, images, YouTube embeds, D3 viz placeholders</p></li><li><p>Cover image upload via drag/drop to Vercel Blob</p></li><li><p>Tags stored as PostgreSQL <code>TEXT[]</code> array &#8212; filterable in both admin and public views</p></li><li><p>Draft/publish workflow with <code>published_at</code> timestamp</p></li><li><p>Auto-generated slug from title (editable), auto-generated excerpt (first 200 chars)</p></li><li><p>Export as ZIP (<code>posts.json</code> + individual HTML files) &#8212; enables cross-instance transfer</p></li><li><p>Import from Substack export ZIP or blog export ZIP</p></li><li><p>D3 data visualisations hydrated client-side via <code>BlogVizHydrator</code> and the viz registry</p></li></ul><p><strong>Adding a D3 visualisation</strong></p><ol><li><p>Create <code>lib/viz/[name].ts</code> exporting <code>default (el: HTMLElement) =&gt; void</code></p></li><li><p>Add an entry to <code>lib/viz/registry.ts</code> mapping the name to a lazy import</p></li><li><p>Insert a <code>data-viz="[name]"</code> placeholder via the editor toolbar</p></li></ol><h2>Tools directory</h2><p>Tools are served from two sources merged at render time. Artifact tools live as HTML files in <code>public/artifacts/</code> &#8212; filesystem is the source of truth, no database entry needed. 
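</p><p>The render-time merge of the two sources can be sketched as follows. This is a minimal illustration: the function and field names are assumptions, and the collision rule (filesystem wins on a duplicate slug) is inferred from the filesystem being the source of truth for artifacts:</p><pre><code>// Hypothetical sketch of the /tools merge; not the template's actual API.
interface ToolCard {
  slug: string;
  name: string;
  toolType: string; // "artifact" or "link"
}

// Merge filesystem artifacts and database link tools into one card list.
// On a slug collision the artifact card wins (assumed here).
function mergeTools(artifacts: ToolCard[], linkTools: ToolCard[]): ToolCard[] {
  const seen = new Set(artifacts.map(function (t) { return t.slug; }));
  const merged = artifacts.slice();
  for (const t of linkTools) {
    if (!seen.has(t.slug)) {
      merged.push(t);
    }
  }
  return merged;
}</code></pre><p>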
Link tools are database-driven, managed via the admin UI.</p><p><strong>Two tool types</strong></p><table><thead><tr><th>Type</th><th>Source</th><th>Behaviour</th><th>How to add</th></tr></thead><tbody><tr><td><code>artifact</code></td><td>Filesystem (<code>public/artifacts/</code>)</td><td>Card links to <code>/tools/[slug]</code>, renders in full-viewport iframe</td><td>Drop an HTML file with title, description, keywords meta tags. Push to main.</td></tr><tr><td><code>link</code></td><td>Neon database</td><td>Card opens URL in new tab</td><td>Admin UI at <code>/admin/dashboard/tools</code></td></tr></tbody></table><h2>Dev docs browser</h2><p>All HTML files in <code>public/dev/</code> are automatically surfaced on <code>/dev</code>. No database, no sync required. The <code>lib/html-meta.ts</code> utility (<code>scanHtmlDir()</code>) reads <code>&lt;title&gt;</code>, <code>&lt;meta name="description"&gt;</code>, and <code>&lt;meta name="keywords"&gt;</code> tags from every file and returns them as <code>HtmlDocMeta[]</code>.</p><p><strong>All three meta tags are required</strong></p><p>A doc without all three tags does not appear in the browser with a correct title or searchable keywords. A doc that appears in the filesystem but cannot be found by search does not exist to the reader. Title, description, and keywords are structural requirements, not formatting suggestions.</p><h2>Substack importer</h2><p>The Substack import system ingests Substack export ZIPs and surfaces articles under <code>/substack/[section]/[slug]</code>. Articles are stored in Neon with attribution preserved.</p><p><strong>Import workflow</strong></p><ol><li><p>Export from Substack (Settings &#8594; Exports &#8594; Create new export)</p></li><li><p>Create a section in admin dashboard (title, slug, Substack URL, description)</p></li><li><p>Upload the ZIP to that section &#8212; parser reads <code>posts.csv</code> + HTML files</p></li><li><p>Articles upserted by slug &#8212; re-import is safe, updates existing records</p></li></ol><h1>6. Database schema</h1><p>Four tables in Neon PostgreSQL. All have row-level security enabled. 
Public read policies are narrowly scoped &#8212; blog posts require <code>published = true</code>.</p><pre><code><code>-- Tools
CREATE TABLE IF NOT EXISTS tools (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  description TEXT,
  tool_type TEXT DEFAULT 'link',       -- 'link' | 'artifact'
  claude_url TEXT,                      -- external URL (link tools) or fallback
  chatgpt_url TEXT,                     -- optional ChatGPT URL
  artifact_id TEXT,                     -- Claude artifact UUID
  artifact_embed_code TEXT,             -- raw iframe embed (overrides artifact_id)
  tags TEXT[],                          -- category tags
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);
ALTER TABLE tools ENABLE ROW LEVEL SECURITY;
CREATE POLICY "public_read_tools" ON tools FOR SELECT USING (true);
CREATE POLICY "service_role_tools" ON tools FOR ALL USING (true) WITH CHECK (true);
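
-- Example (illustrative, commented out so this file stays safe to
-- re-run): registering a link tool by hand. The admin UI at
-- /admin/dashboard/tools normally does this; the values below are
-- placeholders, not real data.
-- INSERT INTO tools (name, slug, description, tool_type, claude_url, tags)
-- VALUES ('Example Tool', 'example-tool', 'A placeholder link tool',
--         'link', 'https://example.com/tool', ARRAY['example']);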

-- Blog posts
CREATE TABLE IF NOT EXISTS blog_posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title TEXT NOT NULL,
  subtitle TEXT,
  slug TEXT NOT NULL UNIQUE,
  byline TEXT,
  cover_image TEXT,
  content TEXT NOT NULL,               -- clean HTML from Tiptap
  excerpt TEXT,                        -- auto-generated, first 200 chars
  published BOOLEAN DEFAULT false,
  published_at TIMESTAMPTZ,
  tags TEXT[] DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);
ALTER TABLE blog_posts ENABLE ROW LEVEL SECURITY;
CREATE POLICY "public_read_published_posts" ON blog_posts
  FOR SELECT USING (published = true);
CREATE POLICY "service_role_posts" ON blog_posts
  FOR ALL USING (true) WITH CHECK (true);
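
-- Example (illustrative): the public blog feed query. Under the
-- public_read_published_posts policy above, anonymous readers only
-- ever see rows with published = true, even without a WHERE clause.
-- SELECT slug, title, excerpt, published_at
--   FROM blog_posts
--   ORDER BY published_at DESC;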

-- Substack sections
CREATE TABLE IF NOT EXISTS substack_sections (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  slug TEXT NOT NULL UNIQUE,
  title TEXT NOT NULL,
  description TEXT,
  substack_url TEXT NOT NULL,
  article_count INTEGER DEFAULT 0,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);
ALTER TABLE substack_sections ENABLE ROW LEVEL SECURITY;
CREATE POLICY "public_read_sections" ON substack_sections FOR SELECT USING (true);
CREATE POLICY "service_role_sections" ON substack_sections
  FOR ALL USING (true) WITH CHECK (true);

-- Substack articles
CREATE TABLE IF NOT EXISTS substack_articles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  section_id UUID NOT NULL REFERENCES substack_sections(id) ON DELETE CASCADE,
  slug TEXT NOT NULL,
  title TEXT NOT NULL,
  subtitle TEXT,
  excerpt TEXT,
  content TEXT,
  original_url TEXT,
  published_at TIMESTAMPTZ,
  display_date TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(section_id, slug)
);
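
-- Note: UNIQUE(section_id, slug) also covers per-section lookups, since
-- section_id is its leading column, so no separate index on the foreign
-- key is needed. Ordering a section's articles by date is the one query
-- it does not cover; a hypothetical index if that ever proves slow:
-- CREATE INDEX IF NOT EXISTS substack_articles_section_date_idx
--   ON substack_articles (section_id, published_at DESC);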
ALTER TABLE substack_articles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "public_read_articles" ON substack_articles FOR SELECT USING (true);
CREATE POLICY "service_role_articles" ON substack_articles
  FOR ALL USING (true) WITH CHECK (true);</code></code></pre><h2>Pending migrations (safe to re-run)</h2><pre><code><code>-- Run these in Neon SQL Editor if not already applied
ALTER TABLE blog_posts ADD COLUMN IF NOT EXISTS byline TEXT;
ALTER TABLE blog_posts ADD COLUMN IF NOT EXISTS tags TEXT[] DEFAULT '{}';
ALTER TABLE blog_posts ADD COLUMN IF NOT EXISTS cover_image TEXT;</code></code></pre><h1>7. Admin system</h1><p>The admin dashboard is protected by <code>middleware.ts</code>, which redirects all <code>/admin/dashboard/*</code> routes to <code>/admin/login</code> if no valid <code>admin_session</code> cookie is present. Authentication is password-only &#8212; the password is set via the <code>ADMIN_PASSWORD</code> environment variable.</p><p><strong>Session mechanics</strong></p><ul><li><p>Login: POST to <code>/api/admin/login</code> &#8212; validates against the <code>ADMIN_PASSWORD</code> env var</p></li><li><p>On success: sets an <code>admin_session</code> httpOnly cookie with a 7-day expiry</p></li><li><p>All <code>/api/admin/*</code> routes check <code>isAdmin()</code> from <code>lib/admin-auth.ts</code> before proceeding</p></li><li><p>Middleware protects dashboard pages; API routes protect data endpoints separately</p></li></ul><h2>Admin API routes</h2>
<figure><img src="https://substackcdn.com/image/fetch/$s_!XFAK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8232ba69-556d-4598-86c6-609ff84c13cc_1654x1240.png" width="1456" height="1092" alt="Reference table of admin API routes" loading="lazy"></figure>
<h1>8. Environment variables</h1>
<figure><img src="https://substackcdn.com/image/fetch/$s_!g1i5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86ea24e2-e733-4351-809f-aa1c83aa515b_1672x746.png" width="1456" height="650" alt="Reference table of environment variables" loading="lazy"></figure>
<h1>9. Persistent layout components</h1><h2>Header</h2><p>Sticky, <code>z-50</code>, backdrop-blur. Logo (theme-aware SVG or text), five-item nav, social icon buttons, dark/light mode toggle. Mobile hamburger menu at the <code>lg</code> breakpoint. Do not add a sixth nav item without a deliberate information architecture decision &#8212; five is not arbitrary.</p><h2>Footer</h2><p>Four-column grid: firm info (name, address, contact), platform links, connect/social links, legal links. Bottom bar with copyright. Column headings and link text are brand-specific copy &#8212; the only footer content that changes between instances.</p><h2>SEO infrastructure</h2><ul><li><p><code>app/sitemap.ts</code> &#8212; dynamic sitemap including all <code>/blog/*</code>, <code>/tools/*</code>, <code>/substack/*</code> routes from Neon. 
Falls back to static-only if DB is not configured.</p></li><li><p><code>app/robots.ts</code> &#8212; allows all crawlers, blocks <code>/admin/</code> and <code>/api/</code>, points to <code>/sitemap.xml</code>.</p></li><li><p>Blog posts include <code>og:image</code> and <code>twitter:card</code> meta tags.</p></li></ul><h1>10. Five proposed additions</h1><p>These are structural proposals, not implementation tickets. Each one addresses a real gap in the current template. They are ordered by the ratio of effort to usefulness, not by complexity.</p><p><strong>1. Brand registry &#8212; single-file multi-instance configuration</strong></p><p><strong>Planned</strong></p><p><strong>The gap</strong></p><p>Currently, switching brand instances requires manual edits to three files (<code>lib/theme.ts</code>, <code>public/theme.json</code>, <code>app/globals.css</code>) plus the home page, legal pages, and CLAUDE.md. There is no single file that declares &#8220;this is the Scrooge &amp; Marley instance.&#8221; A developer making a new instance must know which files to change.</p><p><strong>The proposal</strong></p><p>Add a <code>config/brand.ts</code> file that is the single source of truth for the active brand: palette, copy, address, legal entity, home page section content. The three theme files and the legal pages are generated from it, not maintained separately. A new brand instance is one file plus assets.</p><p><strong>What it unlocks</strong></p><p>A developer could drop in a new brand config, run a generation script, and have a fully configured instance in minutes. The multi-brand demonstration becomes something a user can try themselves, not just read about.</p><p><strong>2. Contact form with Resend integration</strong></p><p><strong>Planned</strong></p><p><strong>The gap</strong></p><p>Every CTA on the current site routes to a <code>mailto:</code> link. This means a visitor must have a configured email client. 
On mobile this works; in many corporate environments it does not. There is also no record of enquiries &#8212; they land in an inbox and may be lost.</p><p><strong>The proposal</strong></p><p>Add a <code>/contact</code> route (currently a placeholder) with a form that POSTs to <code>/api/contact</code>. The API route validates the fields and sends via <a href="https://resend.com/">Resend</a> (one environment variable, generous free tier). Store a copy of each submission in a new <code>enquiries</code> table in Neon. Surface them in the admin dashboard.</p><p><strong>What it unlocks</strong></p><p>The site becomes genuinely functional as a business template, not just a demonstration. Each brand instance gets a working enquiry pipeline. The admin can see all submissions without checking email.</p><p><strong>3. Brand instance switcher in the admin dashboard</strong></p><p><strong>Planned</strong></p><p><strong>The gap</strong></p><p>The multi-brand story is the template&#8217;s primary selling point, but it is invisible to someone looking at a single deployed instance. To see the contrast between Scrooge &amp; Marley and Au Bonheur des Dames, you must visit two different URLs &#8212; or read about it in a README.</p><p><strong>The proposal</strong></p><p>Add a brand switcher to the admin dashboard (hidden from public visitors) that live-previews any configured brand instance by swapping the CSS variables via a <code>data-brand</code> attribute on the root element. No page reload. The switcher reads all brand configs from the proposed registry and renders a dropdown.</p><p><strong>What it unlocks</strong></p><p>The demo becomes interactive. A developer evaluating the template can experience the full range of brand personalities in a single session, on a single deployment. This is the clearest possible argument for the theming system&#8217;s real flexibility.</p><p><strong>4. 
Structured projects / portfolio section</strong></p><p><strong>Planned</strong></p><p><strong>The gap</strong></p><p><code>/projects</code> is currently a placeholder. The tools directory serves individual interactive tools, and the blog serves written content, but there is no structured way to present a body of work &#8212; a case study, a client engagement record, a research project &#8212; as a coherent unit with multiple components.</p><p><strong>The proposal</strong></p><p>Add a <code>projects</code> table in Neon with title, slug, summary, status, tags, and a <code>content</code> field (same HTML-from-Tiptap pattern as blog posts). A project can reference multiple blog posts, tools, and external links. The public <code>/projects</code> page renders as a card grid; <code>/projects/[slug]</code> renders the full project with linked artefacts.</p><p><strong>What it unlocks</strong></p><p>For an individual or consultancy using the template, this closes the gap between &#8220;I have blog posts&#8221; and &#8220;I have a portfolio.&#8221; For the multi-brand demonstration, it gives each fictional firm a place to show completed engagements.</p><p><strong>5. Indiana &#8212; automated dev doc generation from CLAUDE.md</strong></p><p><strong>Planned</strong></p><p><strong>The gap</strong></p><p>Every doc in <code>public/dev/</code> is hand-authored. The CLAUDE.md file contains authoritative, structured information about the codebase &#8212; site structure, schema, routes, environment variables &#8212; that duplicates what the dev docs cover. When CLAUDE.md changes, the dev docs become stale. There is no automated connection between the two.</p><p><strong>The proposal</strong></p><p>Indiana is a lightweight script (<code>scripts/indiana.ts</code>) that reads <code>CLAUDE.md</code> and <code>public/theme.json</code>, extracts structured sections, and generates or regenerates specific dev doc HTML files in <code>public/dev/</code>. 
It does not replace hand-authored docs &#8212; it generates the reference docs (schema, routes, environment variables) that are purely derived from source truth and should not require manual maintenance.</p><p><strong>What it unlocks</strong></p><p>The dev docs stay current automatically. A change to the database schema in CLAUDE.md is reflected in the dev docs on the next build. The hand-authored explanation and how-to docs remain under human control; the reference docs are generated. This is the documentation-as-code pattern applied to the template itself.</p>]]></content:encoded></item></channel></rss>