The Thing That Cannot Be Banned
How K-12 AI bans don't protect vulnerable students — they abandon them.
The school, which exists precisely to interrupt the intergenerational transmission of disadvantage, has instead become its instrument.
That sentence is what this debate is actually about. Not ChatGPT, not academic integrity, not the executive order. The debate is about whether a public institution, confronted with a technology that lives in a child’s pocket, will choose equitable engagement or comfortable inaction — and which children pay the price of the choice.
You cannot ban a technology that lives in a child’s pocket. You can refuse to teach the child how to use it, but the refusal is not symmetric. The student with a phone, reliable home internet, and a parent who can explain what a language model is will learn to use the technology with or without the school’s assistance. The student whose only device is a school-issued Chromebook, whose home network is the library two blocks away, will be left behind — not by the technology, but by the decision to pretend the technology can be uninvented.
Everything else is a consequence of institutions failing to confront this fact honestly, or confronting it and choosing, for reasons of politics or caution, to act as if it weren’t true.
The Ban That Taught the Lesson
In January 2023, New York City blocked ChatGPT on every school-issued device and district network. Four months later, the ban was lifted. The official explanation was framed in terms of equity and innovation. The real explanation was simpler: the ban had not worked.
Students with personal devices and home connections were unaffected. The students who were actually restricted were the ones who relied on school-issued hardware — disproportionately, the students the system was designed to serve. The district’s network filter had functioned, in practice, as a restriction on the children who had no other options, while leaving the technology fully available to the children who did.
The 2026 global data confirms this is not a New York story. It is the story everywhere: 92% of students are using AI tools globally, while only 39% of institutions have acceptable use policies.
The gap between those two numbers is not a measurement of noncompliance.
It is a measurement of abandonment.
Nearly a quarter of students from families earning under $36,000 have access to only one device, which must often be shared among multiple family members. For these students, the school-issued Chromebook is not an auxiliary resource. It is the only connection to the modern digital economy they have. When a district blocks AI on that device, it has decided — perhaps without meaning to, but decided nonetheless — that AI fluency is a private benefit: available to families who can provide it, unavailable to families who cannot.
The digital capital — the years of prompting and iterating and learning to verify, the comfort with algorithmic logic that converts directly into labor market advantage — accumulates elsewhere, in houses the student has never been inside.
What History Keeps Saying
Here is the pattern, consistent enough to be a law: institutional resistance to a disruptive technology delays the development of the skills the technology requires, concentrates those skills among students whose families provide them privately, and ends — always — with integration.
The calculator arrived in classrooms in the mid-1970s, and the fear was immediate and specific. Children would lose the ability to compute. They would become dependent on machines. A decade later, Connecticut became the first state to require calculators on state exams, because the tool was permanent and the question of whether to use it had been settled by the world outside the school’s walls. Subsequent research found that students who used calculators on the SAT outperformed those who didn’t — not because computation had become less important, but because computation freed attention for the reasoning the test was actually measuring.
Wikipedia went through the same cycle. “Wicked-pedia.” Irresponsible scholarship. Proposed federal bans in public schools. Then the realization that the ban was impossible, followed by the recognition that the real skill was not avoidance but evaluation — how to read a source critically, where it came from, what it was likely to get wrong. The tool accused of undermining critical thinking became the occasion for teaching it.
The question for AI is not whether it will be integrated. It is whether the school will be present for that integration, or whether it will have stepped aside and left the student alone with the machine.
The Novice Problem and Its Limits
The strongest case for restriction rests on a real cognitive science insight — and it is not enough.
Dr. Jared Cooney Horvath, whose 2025 book The Digital Delusion has become the primary text of the analog-first movement, argues the novice-expert distinction with force: a skilled professional uses AI to streamline workflow because she has the internal knowledge structure against which the machine’s output can be tested. A child learning to reason does not yet have that structure. The AI does not supplement her thinking. It replaces it. The thinking the assignment was designed to produce never occurs.
This is a real problem. It is not the universal problem that the ban impulse treats it as.
The research literature draws a sharp distinction between answer-delivery AI — the chatbot that produces a finished essay — and step-based intelligent tutoring systems that interact with a student at the level of the individual calculation, flagging the specific step where the error occurred and providing feedback calibrated to where she actually is. A 2025 meta-analysis found effect sizes between 0.27 and 0.76 for these systems, with the largest effects in elementary mathematics. An effect size of 0.76 means the average tutored student scores about three-quarters of a standard deviation above the average untutored one, roughly the distance from the 50th to the 78th percentile. If replicable at scale, that would represent one of the most consequential interventions in the history of American public education.
The populations for whom these effects are most consistent are the ones whom blanket bans harm most: students with dyslexia and neurodevelopmental disorders, for whom AI-assisted feedback provides what researchers call “clinical adjunct” support; English language learners, for whom real-time translation and adjusted texts allow engagement with grade-level material that would otherwise be inaccessible; students in rural districts so short of specialist teachers that an AI tutor is not a supplement — it is the only specialist available.
To call the technology that serves these children categorically dangerous is to make the same error as the Valley’s most credulous evangelists, only in the other direction.
The Governance Vacuum
Consider the district administrator right now. Her state has privacy mandates governing how student data can be used by AI vendors. The federal government has told her that enforcing those mandates may cost her district its broadband funding — the infrastructure through which any AI tool would be delivered in the first place. She is being asked to choose between protecting her students and connecting them.
That is not a policy. That is a trap.
The December 2025 executive order framed state-level AI regulation as a threat to national competitiveness, and directed the Justice Department to challenge laws deemed inconsistent with federal priorities. The Consortium for School Networking called this accurately: it displaces state oversight without providing anything to replace it. The states that passed anti-discrimination requirements for algorithmic systems — Colorado, California — did so because the question is real: when an AI platform influences a special education placement, or produces racially disparate discipline outcomes, who answers for it? The federal answer, right now, is nobody. The characterization of that accountability work as ideological overreach is not an argument. It is an instruction to stop asking.
The coercion is real. What it manufactures — a vacuum where governance should be — is more dangerous than the technology it purports to govern.
What the Pipeline Actually Shows
Against that vacuum, states are moving. But precision matters here, and the “watershed year” framing requires a correction before it becomes mythology.
The 2026 legislative pipeline is real. It is also two distinct things, and conflating them produces overconfidence about where we actually are.
The first layer is governance mandates that have already been enacted. Ohio signed HB 96 into law in June 2025, requiring every public district to adopt a formal AI use policy by July 1, 2026 — the Ohio Department of Education and Workforce released a model policy in December 2025 that districts can adopt or customize. Illinois enacted PA 104-0399, requiring the State Board of Education to develop statewide AI guidance for K-12 districts covering nine areas — machine learning, ethics, academic integrity, and more — by the same deadline. These are not aspirational. They are law. The shift from voluntary guidance documents to legal obligation has already occurred in at least two states, and the policy infrastructure those laws are building is the ground on which the second layer stands.
The second layer is graduation mandates moving through active 2026 legislative sessions. Iowa’s SF 2094 would require one semester of CS and AI coursework — foundational concepts, ethics, societal impact — for the class of 2030-31 and beyond. Illinois HB 4411 would mandate a full year of CS and AI starting for ninth graders in 2028-29. Ohio HB 594 would require one unit of CS with explicit AI content for students entering ninth grade on or after July 1, 2029. Hawaii SB 2212 would mandate a six-week AI literacy course for all juniors and seniors starting in 2027-28, backed by a $5 million teacher training grant program. Alabama is already past this stage — the graduation requirement for CS including AI is enacted law, and HB 329 is expanding the definition of CS to explicitly include AI.
These bills have not yet passed. The distinction matters — not to diminish what is happening, but because the story of how they pass, or stall, or get amended, is where the accountability lives. The FutureEd Legislative Tracker is currently monitoring 49 bills across 23 states this session. Forty-nine bills. That number is not a sign that AI literacy has been solved. It is a sign that the question is now unavoidable, and that the answers being proposed vary widely enough to warrant scrutiny.
What the two-layer pipeline shows, taken together, is that state legislatures have concluded something: the era of voluntary guidance documents is over. The toolkits have been issued. The resources have been posted to department websites. The districts that were going to act on them have acted, and the districts that weren’t have not. The question is no longer whether AI literacy matters. The question is whether the state is willing to make it a condition of graduation.
The School as the Only Equalizer
For the student with the personal device and the technologically literate parent, the school’s absence from this transition is an inconvenience. The learning happens anyway, at home, in the extracurricular coding program the family can afford. For the student whose only connection to the modern digital economy is the Chromebook the district issued and the Wi-Fi at the public library, the school’s absence is the end of the road.
The hardest cases are not in the aggregate. They are in the 24% of Ohio’s public high schools that, in FY 2024, did not offer a single computer science course. They are in Hawaii’s rural districts where the six-week AI literacy mandate would require certified instructors who do not yet exist. They are in Iowa’s small districts where “blended learning” is not a pedagogical choice but a concession to geography — the only way to reach a student who has no specialist teacher within thirty miles.
The bills being proposed in 2026 are responding to this specific geography of absence. Ohio’s CS Promise guarantees that if a student’s home district cannot provide the required computer science course in-person, the state will facilitate and cover the cost of instruction through a community college or partnering district. Iowa’s mandate includes financial support through the Computer Science Professional Development Fund, allowing tuition reimbursement for teachers seeking AI endorsements. Hawaii’s $5 million grant is not supplementary — for many districts, it is the precondition of compliance.
The implementation challenges are real: the teacher shortage, the infrastructure gaps, the question of whether a one-week professional development program produces an instructor capable of teaching algorithmic bias rather than just naming it. These are not reasons to wait. They are reasons to watch the bills carefully as they move, to scrutinize the quality standards, to ask which students the implementation model will actually reach.
The public school exists because a democratic society decided, at some point, that the accident of birth should not determine the ceiling of a life. That decision has to be renewed in every generation, against every new form of advantage that private resources can manufacture and institutional silence can guarantee.
AI fluency is the current form. The pipeline of 2026 legislation is the current renewal. The governance mandates already enacted in Ohio and Illinois are the floor. The graduation mandates moving through Iowa, Illinois, Ohio, and Hawaii are the ceiling being built in real time.
The machine cannot be uninvented. The school has to decide whether to be in the room.
Forty-nine bills in twenty-three states say the decision is being made right now.
Tags: AI in Education, Digital Equity, K-12 Policy, EdTech, Education Reform