Book Review - Calling Bullshit: The Art of Skepticism in a Data-Driven World
The Art of Not Being Fooled
Bullshit Everywhere
The book opens not with human folly but with mantis shrimp—crustaceans that perform threat displays even when molting, their devastating claws temporarily useless. This evolutionary prologue establishes bullshit as older than language, deeper than politics, woven into communication itself. Bergstrom and West trace deception from bluffing crustaceans to ravens that fake-cache food when watched through peepholes, arriving finally at humans with our rich language and theory of mind. The chapter introduces their central concern: “new school bullshit” that uses mathematics and statistics to create an impression of rigor. Where old school bullshit deployed flowery language (“leveraging under-utilized human resource portfolio opportunities”), new school bullshit deploys numbers (“our top performing global fund beat the market in seven of the past nine years”). The authors invoke Brandolini’s principle—the amount of energy needed to refute bullshit is an order of magnitude larger than that needed to produce it—and Swift’s observation that falsehood flies while truth comes limping after. By chapter’s end, Andrew Wakefield’s fraudulent vaccine-autism paper has become their exemplar: thoroughly discredited yet stubbornly persistent, requiring millions of dollars and countless research hours to debunk while the original deception took minimal effort to produce.
Medium, Message, and Misinformation
If smartphones were supposed to eliminate bullshit by making fact-checking instantaneous, they instead became its primary distribution channel. The chapter traces how information technology revolutions—from Gutenberg’s press to today’s social media—consistently produce more content while degrading quality. The pattern repeats: costs of production collapse, gatekeepers vanish, fluff proliferates. Filippo de Strata’s 1474 complaint that the printing press would lead readers to “the brothel” of cheap entertainment finds its echo in today’s “7 cats that look like Disney princesses” clickbait. But the modern problem is structural. Where subscription-based media rewarded quality that kept readers, click-driven media rewards only immediate engagement. Headlines no longer convey information (“Kennedy Killed by Sniper”) but promise emotional experiences (“This Will Make You Cry”). Algorithms optimize not for truth but for keeping users on-platform, leading YouTube to recommend flat-earth videos alongside International Space Station footage. The chapter documents how partisan and hyperpartisan news flourishes because it performs social signaling—sharing a story about contrails as endocrine disruptors says less about atmospheric chemistry than about tribal affiliation. The authors note that algorithms are themselves bullshitters: “they don’t care about the messages they carry, they just want our attention.”
The Nature of Bullshit
Here the authors provide philosophical grounding, drawing on Harry Frankfurt’s distinction between lies (designed to lead away from truth) and bullshit (produced with indifference to truth). The key insight: bullshit trades on the appearance of authority while being fundamentally unconcerned with accuracy. The chapter introduces the concept of “black boxes”—statistical tests or algorithms that conceal bullshit behind technical complexity. But Bergstrom and West argue you rarely need to open the black box. Most bullshit can be spotted by examining what goes in (biased data) or what comes out (implausible results). They demonstrate this with a Stanford study claiming AI could detect sexual orientation from facial photos. Rather than delving into neural network architecture, they question the training data: dating site photos where gay men weren’t smiling, straight men were. The algorithm learned to detect smiles, not sexuality. The chapter emphasizes that numbers themselves can be bullshit vehicles precisely because they seem objective. A healthcare quality equation—Q = (A × O + S) / W—looks rigorous but is arbitrary mathiness, chosen to confer authority rather than capture actual relationships. The authors establish that effective bullshit detection requires not statistical expertise but clear thinking about whether data are appropriate and results plausible.
Causality
The chapter opens with self-esteem and kissing: teenagers with higher self-esteem are more likely to have been kissed. Does confidence lead to romantic success, or does romantic success boost confidence? Possibly both, possibly neither—maybe parental wealth causes both. This introduces causality’s fundamental problem: association doesn’t imply causation, though causation does imply association. The authors walk through causal diagrams, showing how the same correlation can support multiple causal stories. They demolish the “hot guys are jerks” phenomenon through Berkson’s paradox: by selecting for both attractiveness and niceness, we create artificial negative correlation between them in our dating pool. The chapter dissects news stories that conflate correlation with prescription. “Exercise can lower risk of some cancers by 20%” reports a correlation as if it were causal advice, when perhaps healthy people exercise more rather than exercise making people healthy. Particularly damaging are cases where causality arrows get reversed. The Marshmallow Test supposedly proved that delaying gratification causes later success, spawning entire industries of willpower training. But replication studies suggest parental wealth causes both patience and success—the correlation was confounded all along. The chapter ends with a crucial methodological point: randomized experiments can establish causation, but observational studies merely suggest it, no matter how compelling the correlation appears.
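Berkson’s paradox is easy to see in a few lines of simulation. The sketch below is my own illustration, not the authors’ code, and every number in it is an arbitrary assumption: two traits are drawn independently, then only candidates whose combined appeal clears a bar enter the “dating pool.” A strong negative correlation appears in the pool even though none exists in the population.

```python
import random
random.seed(0)

# Attractiveness and niceness: independent in the population (assumed scales).
n = 50_000
attract = [random.gauss(0, 1) for _ in range(n)]
nice = [random.gauss(0, 1) for _ in range(n)]

def corr(x, y):
    """Pearson correlation, hand-rolled to keep the sketch stdlib-only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Selection step: only people who are "good enough overall" make the pool.
pool = [(a, b) for a, b in zip(attract, nice) if a + b > 1.5]
pa = [a for a, _ in pool]
pn = [b for _, b in pool]

assert abs(corr(attract, nice)) < 0.05  # independent overall
assert corr(pa, pn) < -0.2              # negatively correlated in the pool
```

The mechanism is conditioning on a collider: once you know someone cleared the combined bar, learning they are very attractive makes it statistically less likely they are also very nice.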
Numbers and Nonsense
An accountant applies for a job. Asked what 2+2 equals, he leans forward and whispers: “What do you want it to be?” The chapter explores how numbers—seemingly objective—become vehicles for manipulation through selective presentation. Percentages prove particularly treacherous. A hot cocoa packet boasts “99.9% caffeine free,” which sounds impressive until you realize strong coffee is also 99.9% caffeine free. The problem isn’t the number’s accuracy but its meaninglessness. Insurance companies claim that customers who switch save an average of $500, which is true but misleading: only people who would save substantially bother switching, creating selection bias in who becomes a customer. The chapter introduces Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. University rankings led schools to game metrics—capping class sizes at 19 instead of 20 to qualify as “small,” recruiting applications from students who’ll be rejected to lower acceptance rates. Faculty wellness programs claimed success by measuring participation rather than health outcomes. Throughout, the authors demonstrate how the same number can tell opposite stories depending on presentation. A pharmaceutical company notes their drug showed “clinically important effect size” despite being “short of statistical significance”—translation: it didn’t work, but we’re pretending it did by shifting between different standards of evidence.
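The insurer’s “switchers save $500” claim can be made concrete with a toy simulation. All figures below are illustrative assumptions, not the company’s data: suppose quotes are centered on zero savings, and only drivers who stand to save more than the hassle is worth actually switch.

```python
import random
random.seed(1)

# Every driver gets a quote; "savings" vs. their current premium is centered
# on zero, so on average switching is worth nothing (assumed distribution).
n = 100_000
savings = [random.gauss(0, 300) for _ in range(n)]

# Only people whose quote beats their premium by more than the hassle cost
# (assumed $200) bother to switch.
switchers = [s for s in savings if s > 200]

population_avg = sum(savings) / n                 # near $0
switcher_avg = sum(switchers) / len(switchers)    # hundreds of dollars
assert abs(population_avg) < 10
assert switcher_avg > 300
```

The advertised average is true for switchers and useless for you: the selection step, not the product, generates the number.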
Selection Bias
The chapter begins at Solitude ski resort, where everyone Carl talked to praised it as the best mountain in the world. His father’s response: “Why do you think they’re skiing at Solitude?” People who preferred other resorts were elsewhere. This introduces selection bias: when your sample systematically differs from your population, conclusions mislead. The Friendship Paradox follows: most people have fewer friends than their friends do, because popular people appear in many friend groups while unpopular people appear in few. The mathematics is straightforward but the implications profound. Studies showing that customers who switch to Geico save $500 aren’t measuring average savings but rather sampling only people who had sufficient reason to switch. The chapter dissects how ranked city lists—”America’s Most Dangerous Cities”—fail to account for arbitrary city boundaries. St. Louis appears dangerous because its borders tightly circumscribe the urban core, while other cities incorporate suburbs where crime is lower. A scatter plot confirms: the smaller a city is relative to its metro area, the higher its reported crime rate, not because of actual danger but because of geographic definitions. Perhaps most striking is the right-censoring problem: a study comparing musician death rates by genre found rap/hip-hop musicians dying at 30, jazz musicians at 60. But rap is only 40 years old—only musicians who died young appear in the data set, while jazz has existed long enough for natural lifespans to occur.
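The Friendship Paradox can be checked directly. This sketch (mine, with arbitrary graph parameters) builds a random friendship network in which some people are more sociable than others, then compares the average person’s friend count with the friend count of the person at the end of a randomly chosen friendship link.

```python
import random
random.seed(1)

# Random network: each person gets a "sociability" weight (assumed), and
# edges are drawn between weight-biased endpoints.
n, m = 500, 3000
weights = [random.uniform(0.1, 1.0) for _ in range(n)]
friends = {i: set() for i in range(n)}
while sum(len(s) for s in friends.values()) < 2 * m:
    a, b = random.choices(range(n), weights=weights, k=2)
    if a != b:
        friends[a].add(b)
        friends[b].add(a)

degrees = [len(friends[i]) for i in range(n)]
mean_degree = sum(degrees) / n
# Degree of the person at the end of a randomly chosen friendship link:
# popular people sit at the end of many links, so they are oversampled.
mean_friend_degree = sum(d * d for d in degrees) / sum(degrees)

assert mean_friend_degree > mean_degree
```

The inequality is strict whenever friend counts vary at all, which is why “your friends have more friends than you do” holds in essentially every real social network.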
Data Visualization
A graph appears showing Florida firearm murders declining precipitously after the state passed its Stand Your Ground law in 2005. Only the graph’s vertical axis is inverted—murders actually spiked. The chapter catalogs how accurate data can mislead through presentation choices. “Ducks” prioritize aesthetics over information: bar charts shaped like forks to illustrate restaurant spending, pie charts twisted into ram’s horns. “Glass slippers” force data into inappropriate visualization forms: periodic tables of content marketing, subway maps of moral philosophy, Venn diagrams that aren’t actually showing set relationships. More insidiously, axis manipulation distorts perception. A graph showing trust levels in Quebec used a truncated y-axis, making a small difference appear massive. Line graphs can legitimately exclude zero, but bar charts cannot—the length of the bar is the data. Changing bin sizes reshapes distributions: a Wall Street Journal graph appearing to show most taxable income coming from the middle class actually used $100,000 bins for wealthy brackets, $10,000 bins for middle class, creating visual distortion. The principle of proportional ink emerges: when shaded regions represent values, their areas must be proportional to those values. Violations abound in “donut charts” where outer rings use more ink than inner ones despite representing smaller values, and 3D bar charts where depth creates illusory volume.
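The truncated-axis problem reduces to simple arithmetic. With hypothetical numbers of my own choosing, here is how a bar chart whose axis starts near the data turns a small difference into a large visual one:

```python
# Hypothetical bar chart: two groups measure 50 and 54, but the y-axis
# starts at 48 instead of 0 (all numbers are illustrative).
axis_start, a, b = 48, 50, 54
ink_a, ink_b = a - axis_start, b - axis_start  # bar lengths actually drawn

value_ratio = b / a       # 1.08 -> the real difference is 8%
ink_ratio = ink_b / ink_a # 3.0  -> one bar is drawn three times as long

assert value_ratio == 1.08
assert ink_ratio == 3.0
```

Proportional ink is violated: the eye reads a threefold difference where the data contain eight percent. A line chart may zoom in like this, because position, not length, carries its meaning; a bar chart may not.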
Calling Bullshit on Big Data
In 1958, the New York Times announced that the Navy’s perceptron computer would “walk, talk, see, write, reproduce itself and be conscious of its existence.” Fifty-five years later, the Times published essentially the same article about neural networks. The chapter argues that machine learning lives and dies by its training data—algorithms haven’t fundamentally changed, but data availability has exploded. This creates new failure modes. Google Flu Trends claimed to predict outbreaks from search queries but failed catastrophically because search behavior changed when Google introduced autocomplete, shifting which terms people used. The algorithm had overfit historical data rather than capturing actual disease dynamics. Amazon’s hiring algorithm discriminated against women because it was trained on Amazon’s existing resumes, which were disproportionately male. The system learned that attending women’s colleges or participating in women’s professional organizations predicted lower “success” at Amazon—not because of actual performance differences but because Amazon had historically hired fewer women. A criminal sentencing algorithm flagged Black defendants as future criminals at twice the rate of white defendants, perpetuating rather than eliminating bias. The chapter introduces the distinction between what we want to know (how likely is this hypothesis to be true, given the data?) and what statistical tests actually tell us (how likely are data like these, assuming there is no real effect?). Confusing the two is the prosecutor’s fallacy, and it pervades both courtrooms and science.
The Susceptibility of Science
Science works, the authors insist, even as they catalog its vulnerabilities. The problem isn’t malicious fraud—that’s rare—but rather structural incentives that reward novelty over replication, positive results over negative ones. P-hacking emerges as the central pathology: researchers testing multiple hypotheses and reporting only those achieving statistical significance. The authors demonstrate this by proving that listening to “When I’m 64” makes people younger, achieving significance through selective data analysis. Publication bias compounds the problem: journals print positive results, file-drawering negative ones, so the literature overrepresents successful experiments. The chapter walks through the prosecutor’s fallacy in scientific contexts: a p-value of 0.05 means 5% chance of the data given no effect, not 5% chance of no effect given the data. When testing hypotheses unlikely to be true, most positive findings are false positives—like testing for rare diseases, where most positive tests are wrong because the disease is uncommon. The FDA’s required trial registration revealed that antidepressant effectiveness was overstated: 38 of 74 trials showed positive results, but published papers reported success in 94% of cases. Negative results were either not published or “spun” as positive findings. The chapter concludes that education offers the best defense against scientific bullshit, but acknowledges that scientists themselves are epistemically sullied—motivated by status, funding, and career advancement, not pure truth-seeking.
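The rare-disease analogy is just Bayes’ rule, and a short calculation makes the point. The prior and power below are assumptions of mine for illustration; the book’s argument only requires that many tested hypotheses be false.

```python
# How often is a "significant" finding actually true?
prior = 0.10  # fraction of tested hypotheses that are really true (assumed)
power = 0.80  # chance a real effect reaches p < 0.05 (assumed)
alpha = 0.05  # false-positive rate under no effect

true_pos = prior * power          # real effects that test significant
false_pos = (1 - prior) * alpha   # null effects that test significant anyway
ppv = true_pos / (true_pos + false_pos)

assert abs(ppv - 0.64) < 1e-6  # ~36% of significant results are false
```

Even with decent statistical power, more than a third of the “discoveries” in this scenario are false positives, and publication bias then ensures those are the results we read about.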
Spotting Bullshit
A photograph circulated showing Seattle Seahawks player Michael Bennett burning an American flag in the team locker room. The image was fake—a poor Photoshop job—but it spread widely because it aligned with existing narratives about NFL players protesting during the national anthem. The chapter provides six practical rules for spotting bullshit. First: question the source. Who’s telling you this, how do they know it, what are they selling? Second: beware unfair comparisons. “Airport security trays have more germs than toilets” measured only respiratory viruses, the kind on trays but not on toilets. Third: if it seems too good or bad to be true, it probably is. NBC tweeted that international student applications fell 40%, but the actual study showed applications declined at 39% of schools and increased at 35%—statistical noise, not Trump effect. Fourth: think in orders of magnitude. Representative Mo Brooks suggested rising sea levels come from rocks falling into the ocean. Fermi estimation shows the white cliffs of Dover eroding annually would raise sea levels by three angstroms—the height of a water molecule. Fifth: avoid confirmation bias. Be especially skeptical of claims that align with your worldview. Sixth: consider multiple hypotheses. When Disney stock fell 2.5% the day Roseanne was cancelled, headlines blamed the cancellation—but the drop occurred before the announcement, during a market-wide slide.
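The cliffs-of-Dover rebuttal is a worked Fermi estimate, and it fits in a few lines. Every input below is a rough assumed figure of mine, deliberately generous to the rocks-into-the-ocean hypothesis; the conclusion survives any plausible choice.

```python
# Fermi estimate: sea-level rise from the white cliffs of Dover eroding.
# All inputs are order-of-magnitude assumptions, not measurements.
cliff_length = 13_000   # m of coastline (assumed)
cliff_height = 80       # m (assumed)
retreat_rate = 0.1      # m of horizontal erosion per year (assumed, generous)
ocean_area = 3.6e14     # m^2, approximate area of the world's oceans

rock_volume = cliff_length * cliff_height * retreat_rate  # m^3 per year
rise_m = rock_volume / ocean_area
rise_angstrom = rise_m * 1e10

assert rise_angstrom < 10  # a few angstroms: roughly one water molecule
```

Measured sea-level rise is on the order of millimeters per year, some ten million times larger, so falling rocks cannot be the explanation. Thinking in orders of magnitude settles the question without any precise data.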
Refuting Bullshit
Calling bullshit is a performative utterance—not reporting skepticism but publicly declaring it. The chapter opens with reductio ad absurdum: researchers claimed women would outsprint men by 2156 based on linear trend lines. Ken Rice replied that by 2636, sprinters would complete the race in negative time. The absurdity discredits the model. Counter-examples prove equally powerful: when a physicist claimed long-lived organisms must have adaptive immune systems with specific features, an immunologist asked simply, “What about trees?” Analogies recontextualize: Seattle’s $74 million Mercer Street project was derided for saving only two seconds of drive time, but this ignored the 30,000 additional cars now moving through without delay—the authors compared it to criticizing pitcher Felix Hernandez’s contract because team batting average declined. Redrawing figures exposes manipulation: Apple’s cumulative iPhone sales graph showed relentless growth, but quarterly sales had been declining. The chapter emphasizes being correct when calling bullshit—hypocrites are despised. Be charitable: attack arguments, not people; don’t assume malice when incompetence suffices. Be pertinent: avoid being a “well-actually guy” who interrupts with irrelevant technicalities. The goal isn’t demonstrating cleverness but advancing understanding. The authors close with Neil Postman’s warning: “at any given time, the chief source of bullshit with which you have to contend is yourself.”
The chapters trace a path from evolutionary origins through modern manifestations to practical defense, building not just a taxonomy of bullshit but a philosophy of information skepticism. What emerges isn’t simple cynicism but rather a call for rigorous humility—to question ourselves as vigorously as we question others, to recognize that the tools of deception are also the tools of understanding, and that the price of truth in an information-saturated age is constant vigilance combined with intellectual charity.
The Salmon in the Machine
There’s a peculiar optimism embedded in the phrase “data-driven,” as if data arrives from some realm beyond human error, bias, and strategic obfuscation—as if numbers speak with nature’s authority rather than through human mouths. Carl T. Bergstrom and Jevin D. West’s Calling Bullshit: The Art of Skepticism in a Data-Driven World exists to disabuse us of this notion, though not to make us cynics. Their project, which began as a course at the University of Washington, concerns itself less with outright lies than with the subtler category that philosopher Harry Frankfurt termed bullshit: claims produced with blatant disregard for truth, designed to persuade or overwhelm rather than inform.
The book’s early example of the dead Atlantic salmon proves instructive. Neuroscientists placed the deceased fish in an fMRI machine, showed it photographs of people in various emotional states, and asked it to identify their feelings. Several regions of the salmon’s brainstem showed activity. Either they had discovered post-mortem cognition in fish, the researchers noted, or something had gone wrong with their statistical methods. This reductio ad absurdum—deliberately absurd experiment exposing real methodological problems—captures the book’s approach. Bergstrom and West rarely attack the people producing bullshit. Instead, they demonstrate how systems, incentives, and cognitive biases lead intelligent, well-intentioned people to generate, amplify, and defend nonsense.
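The salmon result is a multiple-comparisons problem, and its logic can be reproduced in miniature. The sketch below is mine, not the original study’s analysis: under pure noise, a 5% significance threshold “finds” effects in roughly 5% of tests, so scanning thousands of brain voxels without correction guarantees spurious hits.

```python
import random
random.seed(0)

# Pure noise: each "voxel" yields a standard-normal test statistic under
# the null hypothesis of no brain activity (voxel count is illustrative).
n_voxels = 10_000
z = [random.gauss(0, 1) for _ in range(n_voxels)]

# Count voxels that clear the conventional two-sided p < 0.05 threshold.
hits = sum(abs(v) > 1.96 for v in z)

assert 350 < hits < 650  # hundreds of "significant" voxels in a dead fish
```

With ten thousand comparisons, about five hundred pass the test by chance alone, which is exactly why the salmon’s brainstem “responded” to photographs. Corrections like Bonferroni exist to prevent precisely this.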
The authors divide their attention between spotting bullshit (chapters one through nine) and refuting it (chapters ten and eleven), but this structure understates how the book actually moves. What accumulates across chapters isn’t merely a toolkit of detection techniques but rather a theory of how quantitative claims function rhetorically in a society increasingly convinced that numbers represent objective truth. Where previous eras deployed flowery language to obscure meaning—“leveraging under-utilized human resource portfolio opportunities” meaning “we’re a temp agency”—contemporary bullshit launders dubious claims through statistical terminology. A pharmaceutical company reports their drug shows “clinically important effect size” despite being “short of statistical significance,” hoping readers won’t notice the sleight of hand: clinical importance describes how large an effect would be, while statistical significance describes whether the data give any reason to believe the effect exists at all. Touting the first while conceding the second amounts to admitting the evidence cannot support the claim.
The genius of new school bullshit, as they term it, lies in exploiting two asymmetries. First, the knowledge asymmetry: most people lack statistical training to evaluate quantitative claims. Second, what they call Brandolini’s principle: the energy required to refute bullshit exceeds by orders of magnitude the energy required to produce it. Andrew Wakefield’s fraudulent 1998 paper linking vaccines to autism required only a small, poorly designed study to publish. Debunking it demanded millions of dollars, countless research hours, studies involving hundreds of thousands of children, and formal retractions from the Lancet and Britain’s General Medical Council. Yet twenty years later, measles—nearly eliminated in the developed world—makes comebacks wherever vaccination rates decline.
What interests Bergstrom and West isn’t primarily the mechanics of individual deceptions but rather the ecosystems that allow bullshit to flourish. They trace how social media transforms information economics. Where subscription-based journalism rewarded quality that retained readers, click-driven media rewards only immediate engagement. Headlines no longer convey information (“Kennedy Killed by Sniper”) but promise emotional experiences (“This Will Make You Cry”). Algorithms optimize for keeping users on-platform rather than informing them, leading YouTube to recommend flat-earth videos alongside International Space Station footage. The structure creates what they call a “bullshit pandemic”—not because people are more dishonest than previously, but because systems now exist to amplify, distribute, and monetize bullshit at scale.
The chapter on causality reveals how easily correlation becomes prescription. News outlets report that “exercise can lower cancer risk by 20%” based on studies showing people who exercise have lower cancer rates. But perhaps healthy people exercise more rather than exercise making people healthier—the causal arrow might point backward. Even randomized controlled trials, the gold standard for establishing causation, can mislead in aggregate once publication filters which results we see. Studies showing antidepressants work appeared in 94% of published papers, but FDA trial registrations revealed only 38 of 74 trials produced positive results. The others were either unpublished or “spun” to appear successful. The published literature thus overrepresents effectiveness—not through fraud but through publication bias, the tendency to file-drawer negative results.
But the book’s deepest insights concern not how bullshit deceives but why it persists. Bergstrom and West describe a physics talk at the Santa Fe Institute where a speaker claimed mathematical models proved long-lived multicellular organisms must have adaptive immune systems with specific features. The speaker presented complicated equations, sophisticated analysis. Then an immunologist raised his hand: “What about trees?” Trees are long-lived multicellular organisms. They lack the predicted immune features. The entire edifice collapsed. This counter-example’s power derived not from technical sophistication but from observational simplicity—anyone with basic biological knowledge could understand it.
Yet counter-examples often fail to dislodge beliefs. The Marshmallow Test supposedly demonstrated that children’s ability to delay gratification at age four predicted success in adolescence. This spawned an industry of willpower training, articles about building grit, programs teaching impulse control. Replication studies eventually revealed the correlation was confounded: wealthy parents’ children were both better at waiting (having learned through experience that good things come to those who wait) and more likely to succeed academically (having better schools, tutors, stability). Delaying gratification didn’t cause success; parental wealth caused both. But the original interpretation persists because it aligns with meritocratic narratives—success rewards character rather than circumstances—making it stickier than the correction.
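The marshmallow confound is simple to simulate. In this sketch of mine (with made-up effect sizes), parental wealth drives both patience and later success while patience has no direct effect on success at all; the two still correlate strongly until the confounder is accounted for.

```python
import random
random.seed(2)

# Wealth causes both traits; patience -> success has zero direct effect.
n = 5000
wealth = [random.gauss(0, 1) for _ in range(n)]
patience = [w + random.gauss(0, 1) for w in wealth]
success = [w + random.gauss(0, 1) for w in wealth]

def corr(x, y):
    """Pearson correlation via stdlib only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

raw = corr(patience, success)  # a convincing "marshmallow effect"
# Adjust for the confounder: correlate what wealth doesn't explain.
adj = corr([p - w for p, w in zip(patience, wealth)],
           [s - w for s, w in zip(success, wealth)])

assert raw > 0.35      # strong spurious correlation
assert abs(adj) < 0.1  # vanishes once wealth is controlled for
```

This is the general shape of confounding: the observed correlation is real, but the causal story attached to it is wrong, and no amount of additional observational data of the same kind can tell the two apart.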
This points toward what the authors identify as perhaps the deepest source of bullshit susceptibility: confirmation bias. We notice, believe, and share information consistent with pre-existing beliefs. A study analyzing recommendation letters found that writers used different language when describing male versus female candidates—words like “exceptional” and “talented” for men, “hardworking” and “collaborative” for women. The finding aligned with known gender biases in academia. It was also wrong. The image showing this pattern illustrated the study’s hypothesis, not its results. The researchers actually found minimal gender differences. But the claim spread because it confirmed what people already believed about academic sexism.
The book reserves particular attention for machine learning and big data, areas where bullshit hides most effectively behind technical complexity. Google Flu Trends claimed to predict outbreaks from search terms, outperforming CDC tracking. It failed catastrophically when Google introduced autocomplete, shifting which terms people used. Amazon’s hiring algorithm discriminated against women because it was trained on Amazon’s existing resumes, which were disproportionately male. Criminal sentencing algorithms flag Black defendants as high-risk at twice the rate of white defendants, not because of explicit racism in the code but because they’re trained on historical data reflecting systemic bias.
The authors’ insight here: garbage in, garbage out applies not just to data quality but to the training process itself. Machine learning doesn’t discover truth; it learns to recognize patterns in training data. If training data reflect human biases—and all human-generated data do—algorithms perpetuate those biases while claiming mathematical objectivity. No amount of technical sophistication can compensate for biased data. The algorithm becomes what Bergstrom and West call a “bullshit laundering” device, transforming human prejudice into seemingly objective prediction.
What elevates the book beyond mere debunking is its attention to institutional and structural factors. Scientists aren’t producing fraudulent research because they’re dishonest but because incentive structures reward novelty over replication, positive findings over negative ones, publishable results over exploratory work. Universities game rankings not from mendacity but from competitive pressures. Social media platforms amplify extreme content not because Silicon Valley hates truth but because engagement metrics reward outrage and tribal signaling over accuracy. Individual virtue won’t solve these problems; changing systems might.
The final chapters on calling bullshit introduce ethical dimensions often absent from debunking literature. The authors distinguish between calling bullshit (necessary, productive) and being a “well-actually guy” (annoying, counterproductive). The difference lies partly in relevance—does your correction actually address the substantive claim?—and partly in intent. Calling bullshit aims to protect audiences from deception. Being a well-actually guy aims to demonstrate superior knowledge. At lunch, a friend suggests mammals don’t trick other species into raising their offspring because mammals don’t lay eggs. Responding “well actually, monotremes lay eggs” is technically correct but fundamentally irrelevant—echidnas incubate their single egg in a pouch, and platypuses seal themselves in tunnels before laying, so neither leaves any opening for egg-swapping. The friend’s insight about mammalian reproduction stands despite the exception.
They emphasize charitable interpretation: don’t attribute to malice what incompetence explains; don’t attribute to incompetence what honest mistakes explain. Most bullshit arises not from deception but from confusion, haste, or incentive structures that reward attention over accuracy. The goal isn’t finding villains but improving discourse. When calling bullshit, be correct—getting it wrong while correcting others destroys credibility. Be clear—confused refutations convince nobody. Be pertinent—address substantial points, not tangential technicalities. And remember that “at any given time, the chief source of bullshit with which you have to contend is yourself.” Confirmation bias affects bullshit callers as much as bullshit producers.
What the book doesn’t quite resolve—and perhaps can’t—is the asymmetry problem it identifies early: producing bullshit takes minimal effort, refuting it demands substantial work, and falsehood spreads faster than correction. They advocate for better statistical literacy, clearer thinking, institutional reforms. But these solutions operate on different timescales than the problem. By the time careful analysis debunks a viral false claim, the claim has already spread, been internalized, and moved on to new audiences. Truth, as they note, comes limping after falsehood with its pants around its ankles, struggling down the hallway in hopeless pursuit.
The prose occasionally strains for accessibility—extended analogies, repeated examples—in ways that undergraduate teaching requires but book-length treatment doesn’t need. Chapters sometimes feel like assembled lecture notes, complete with pedagogical scaffolding unnecessary for readers who’ve chosen to engage with a 350-page book on statistical reasoning. But this oversharing of examples has a purpose: the book aims to train intuition, not just convey information. Seeing how the same error (say, confusing correlation with causation) manifests across domains—vaccines, hiring, climate, criminal justice—builds pattern recognition that checking individual claims never could.
What lingers after finishing isn’t the specific debunkings, memorable as some are, but rather a shifted relationship to quantitative claims. Numbers don’t arrive from some realm of pure truth. They’re produced by people with interests, shaped by systems with biases, and interpreted through frames that determine what we notice. The question isn’t whether to trust data but rather what questions to ask about any claimed finding: Where did this data come from? Who collected it and why? What’s being measured and what’s being ignored? Are comparisons fair? Do results pass basic plausibility checks? Could selection effects explain the pattern?
The book’s title plays on a double meaning: calling bullshit as speech act (publicly declaring something false) and calling bullshit as moral imperative (duty to speak up when deception threatens communal understanding). Bergstrom and West ultimately argue for the latter while providing tools for the former. In an information environment optimized for engagement over accuracy, where algorithms amplify outrage, where publication bias distorts scientific literature, where confirmation bias makes us credulous about convenient claims, calling bullshit becomes civic duty.
But civic duty, they acknowledge, is exhausting. Brandolini’s principle—refutation requires orders of magnitude more energy than production—means we can’t possibly fact-check everything. We must choose our battles, focus attention, let most bullshit pass unremarked. This creates a kind of triage ethics: which deceptions warrant response, which audiences merit persuasion, which battles have stakes worth fighting over? The book provides few answers here. It can teach us to spot bullshit and construct effective refutations. It cannot tell us which fights to pick or how to sustain motivation in what feels increasingly like a Sisyphean struggle.
What emerges most forcefully is recognition that the problem operates at multiple scales simultaneously. Individual errors compound into systemic failures. Publication bias in journals creates misleading medical literature. Social media algorithms create filter bubbles. University rankings incentivize gaming metrics. Machine learning trained on biased data perpetuates discrimination. None of these problems admits simple solutions because they’re not problems in the sense of errors that could be corrected. They’re features of systems designed to accomplish goals other than accuracy—sell subscriptions, maximize engagement, compete for students, make hiring decisions cheaply. Accuracy competes with these goals rather than complementing them.
The book’s contribution lies less in revealing individual deceptions—though it does this admirably—than in making visible the infrastructure that generates, distributes, and protects bullshit. Once seen, this infrastructure becomes difficult to unsee. Reading news stories, you notice the small distortions: percentage increases reported without baseline rates, causal language used for correlational findings, studies with tiny sample sizes presented as decisive evidence. Encountering data visualizations, you check the axes, the bins, whether comparisons are fair. Hearing about new research, you wonder about selection effects, publication bias, statistical significance thresholds.
This vigilance has costs. It makes consuming information more laborious, social media more exhausting, casual conversation more fraught. The book acknowledges but doesn’t quite grapple with these costs—the way that skepticism, taken too far, curdles into cynicism; the way that demanding rigor in every domain makes normal discourse impossible; the way that always questioning becomes itself a kind of paralysis.
Still, Bergstrom and West make a convincing case that these costs pale beside the alternative: a society where bullshit proliferates unchallenged, where quantitative claims acquire unearned authority, where numbers obscure rather than illuminate. The skills they teach—thinking in orders of magnitude, checking for selection bias, distinguishing correlation from causation—aren’t just defensive measures. They’re tools for better thinking generally, ways of approaching claims with appropriate humility about what data can and cannot show.
The dead salmon in the fMRI machine became a famous image in neuroscience, a cautionary tale about statistical methods and multiple comparisons. It was funny—neuroscientists asking a dead fish to identify human emotions—but the humor disguised serious critique. Many published fMRI studies used the same flawed methods that found activity in the salmon’s brainstem. The joke exposed real problems that technical papers struggled to make visible. This exemplifies the book’s method: use clear examples, sometimes absurd ones, to make complex issues comprehensible. Not every reader needs to understand Bonferroni corrections or multiple hypothesis testing. Everyone can understand that if your method finds brain activity in a dead fish, something has gone wrong.
What the book ultimately argues for is a kind of democratic epistemology—not in the sense that truth is voted on, but rather that citizens in a democracy require tools to evaluate the quantitative claims increasingly central to policy debates, medical decisions, and civic life. You don’t need a statistics PhD to think clearly about numbers. You need basic principles, healthy skepticism, and willingness to dig into sources. These aren’t specialized skills reserved for experts. They’re civic competencies, as essential to democratic participation as literacy itself once was.
The final pages return to Walter Lippmann’s warning: “There can be no liberty for a community which lacks the means by which to detect lies.” Bergstrom and West update this for an era when lies are less dangerous than bullshit, when the problem isn’t malicious deception but rather systemic indifference to truth. Their book provides means by which to detect bullshit, but detection alone won’t suffice. What’s needed is collective commitment to calling it out, even when exhausting, even when the energy required exceeds the impact achieved, even when truth keeps limping after falsehood with no hope of catching up. The alternative—passive acceptance of a bullshit-saturated information environment—amounts to abandoning the project of shared reality that democratic discourse requires.


