When Science Journalism Becomes the Thing It's Criticizing
A Fortune piece makes a genuinely important argument about screen time and learning. Its framing undermines it.
Science journalism commits a specific kind of harm when it mistakes a compelling argument for a proven one, and that harm rarely announces itself. It arrives in verb choices, in headline framings, in the invisible architecture of a story that leads you through its logic so smoothly you forget to ask whether the logic has been tested. Sasha Rogelberg’s March 2026 Fortune piece, “American schools weren’t broken until Silicon Valley used a lie to convince them they were,” is a case in point: clean prose, a genuinely important story underneath, and a framing that works against it.
The argument is Jared Cooney Horvath’s, drawn from his 2025 book The Digital Delusion and Senate testimony that preceded it. Test scores are declining. Screen adoption expanded over the same period. Correlation exists across PISA datasets covering fifteen-year-olds in dozens of countries. The transfer problem — the documented failure mode in which students master the tool rather than the subject — is real, historically recurring, and now arriving again in the form of AI. These claims have varying degrees of evidential support, ranging from robust to contested. The article treats all of them as if they occupy the same register.
That is not a minor editorial choice. It is the piece’s central structural failure.
The Lie That Wasn’t Quite a Lie
Fortune’s headline says Silicon Valley “used a lie” to convince schools they were broken. That’s a claim about deliberate deception. The evidence in the article doesn’t establish intent. That gap is the story — and it matters for every administrator reading this.
“Used a lie” requires documentary evidence of coordinated deception — not mistaken enthusiasm, not motivated reasoning, not the well-documented human tendency to believe in the solutions you are selling. The Fortune article provides none of this. What Horvath demonstrates — persuasively — is that tech companies promoted a narrative about broken American education without sufficient empirical justification, and that this narrative created a market for devices that didn’t work as advertised. This is a damaging finding. It is not proof of fraud.
The distinction matters because the policy implications differ. If the narrative was manufactured, the remedy is regulatory. If it was the product of genuine but mistaken enthusiasm compounded by financial incentive, the remedy is evidential — building the research base that makes future decisions harder to make on faith.
The article collapses this distinction in its first words and never recovers it.
The body is more careful than the headline. Horvath “claimed” that Google sold Chromebooks to schools to recoup costs on a shaky product launch; Google did not respond to requests for comment. The non-response is noted and then, in the surrounding framing, treated as partial confirmation. But a non-response proves nothing. What it means depends entirely on what you already believe about the source, and the article lets silence do the work of evidence.
What the Data Actually Show
The PISA correlation is the article’s most solid ground, and it deserves to be treated seriously rather than promoted beyond what it can support. PISA data on fifteen-year-olds across dozens of countries show that students using computers six or more hours daily score measurably lower than those who use them less. The Utah NAEP data show an inflection point coinciding with statewide digital infrastructure mandates in 2014. These patterns are real, consistent across reporting periods, and not easily explained away.
They are also correlational — which is the entire problem.
The article presents them as more. “Technology was put in schools in a bid to help them learn. Instead, Horvath said, computers had an adverse impact on learning.” The phrasing moves from correlation to causation in the span of a conjunction. The mechanism connecting computer-adaptive testing mandates to cognitive decline in Utah is not specified. The counterfactual — what happened to comparable states without the 2014 infrastructure change — is not examined. The contemporaneous confounders are not disentangled: Common Core implementation, changes in teacher certification, education funding as a share of state budgets, the specific disruptions of the pandemic period.
The same years that saw edtech expansion saw enormous demographic and economic changes in American public education. Attributing the observed decline primarily to screens is a hypothesis. It may be the correct hypothesis. The article treats it as confirmed.
The Transfer Problem and the Reach of Historical Analogy
The historical section — Pressey in 1924, Skinner in the 1950s, the letter in which Pressey conceded that students mastered the machine rather than the subject — is the article’s most intellectually honest passage. The transfer problem is documented, theoretically grounded, and not seriously contested in educational psychology. It applies to calculators, spell-checkers, GPS navigation, and now AI. Horvath is correct that this mechanism is real and has recurred across technological generations.
The argument then moves from the historical pattern to contemporary tablets and laptops. The leap is plausible. It is asserted rather than demonstrated. Teaching machines in 1955 and Chromebooks in 2014 share a structural failure mode; they do not share context, design, curriculum integration, teacher training, or the specific conditions of deployment. The article uses the historical analogy as evidence when it is more precisely an invitation to investigate.
There is a difference between saying “this mechanism has appeared before and may be appearing again” and saying “this mechanism explains the observed declines.” The first is intellectually honest and useful. The second requires the kind of controlled evidence the article never provides.
The Curriculum/Pedagogy Distinction: The Insight That Should Lead
Here is the thing the article almost does, and reaches only in its final paragraphs, too late.
Horvath draws a distinction that is both precise and genuinely useful: curriculum (what is taught) is different from pedagogy (how it is taught). Putting computers in the curriculum — teaching students about technology, its mechanics, its limitations, how to evaluate its outputs — is categorically different from using computers as the medium through which all other subjects are taught. The first builds the meta-cognitive capacity to use tools productively. The second generates the dependency Horvath identifies.
This distinction has immediate, actionable implications for school districts currently rolling out “AI literacy” courses, a label that now covers wildly different practices. It gives administrators a principled basis for distinguishing between programs that are likely to harm and programs that are likely to help. It is the article’s most substantive intellectual contribution, and it arrives in the last five paragraphs with two sentences of development.
That is the cost of overselling. When a story commits to certainty early, the genuinely useful nuance at the end registers as retreat rather than precision. The curriculum/pedagogy distinction deserved to be the piece’s spine — the framework that made the evidence readable. Instead it appears as a coda after the verdict has already been delivered.
The Problem of the Single Voice
The deepest structural problem in the piece is not any individual claim. It is the architecture. Horvath is the only expert quoted. No independent researcher who works in edtech efficacy appears to validate, challenge, or complicate his analysis. No study showing mixed or context-dependent outcomes for educational technology is cited. No example of a digital intervention that worked — and the literature contains them, at meaningful effect sizes, particularly for intelligent tutoring systems and assistive technology for learning disorders — is included.
This is not balance for its own sake. It’s the floor — what you owe readers when your evidence has policy consequences.
When you write for Fortune’s readership — administrators, school board members, policy advocates, researchers, parents making decisions about their children’s classrooms — the absence of a contrary voice is not neutrality. It is a thumb on the scale.
The Pew Research citation, the article’s sole independent data point, measures AI usage frequency among teenagers. It says nothing about learning outcomes. The Brookings citation reports teacher observations of problematic AI use — a sample selected precisely because it captures failure, not a representative cross-section of all AI use in schools. Both are used to support a causal narrative about cognitive harm for which neither provides direct evidence.
What the Story Deserves
The story underneath this article is important. There is real evidence that current edtech deployment is underperforming its cost. There is real evidence that smartphones cause measurable harm to student wellbeing and attention. There is a genuine and underexamined problem with how AI is being introduced to students who lack the domain expertise to evaluate its outputs. The transfer problem is real and recurring.
These findings deserve coverage that distinguishes correlation from causation, that includes researchers who find mixed rather than uniformly negative results, that acknowledges the populations — students with learning disorders, rural students without access to human teachers, English language learners — for whom some technology interventions produce outcomes that matter. They deserve a headline that matches the evidence: not “used a lie” but “pushed an unverified narrative.”
The argument Horvath is making is strong enough to take seriously without inflating it. That’s the work science journalism is supposed to do — and this article didn’t do it.
The reason this matters is not pedantry about evidentiary standards. Bad epistemology produces bad policy. If administrators read this article and conclude that all educational technology has been proven harmful, they will defund the intelligent tutoring systems that produce measurable gains. They will eliminate the assistive technology that enables literacy for students who lack it otherwise. They will take the single-source certainty of one expert’s book tour and apply it as if it were consensus science.
The schools were probably not broken before Silicon Valley arrived. But the claim that Silicon Valley lied them into breaking requires more than one neuroscientist and a correlation. In the gap between the gesture and the proof, the consequences will be real, and they will fall on students who had no vote in the headline.
The article under review: Sasha Rogelberg, “American schools weren’t broken until Silicon Valley used a lie to convince them they were—now reading and math scores are plummeting”, Fortune, March 1, 2026.
Tags: edtech journalism epistemic standards, Horvath Digital Delusion Fortune critique, correlation causation educational technology, science journalism single-source analysis, PISA screen time learning outcomes essay


