Discussion about this post

Shravya Ushake:

The detail that stopped me cold was the housing data incident—researchers finding that Section 8 vouchers actually worked, and city officials demanding the slide be removed. That single moment says more about WMDs than any technical breakdown could. The problem was never that we lacked good data; it's that good data is politically inconvenient. You can build the most transparent, well-audited model in the world, and it still gets buried the moment it tells power something it doesn't want to hear.

This reframes the whole conversation for me. We spend a lot of energy debating algorithmic bias, fairness metrics, and auditability—all important—but O'Neil is pointing at something upstream of all that. The corruption starts at the objective function, before a single line of code is written. Who decides what the model optimizes for, and who's absent from that room when the decision gets made? The people scored by the LSI-R don't get a seat at that table. Neither did Wysocki. Neither do the Starbucks baristas whose sleep schedules are someone else's efficiency gain.

What also stood out is how these systems quietly dismantle the possibility of proving them wrong. Baseball models self-correct because the outcome is visible the next day. But when a teacher gets fired or a prisoner gets a longer sentence, there's no follow-up, no mechanism for the model to learn it was wrong. A system that can never be falsified isn't science—it's doctrine. And we've handed that doctrine the power to decide who eats and who doesn't.

Navya Ravuri:

This goes beyond summary into a clear, sustained argument about power. What’s especially strong is your through-line: these systems don’t malfunction—they optimize for the wrong objectives. By returning to feedback loops, objective functions, and the confusion of correlation with causation, you show that O’Neil’s examples form a coherent architecture of harm rather than isolated failures. The progression from individual cases to systemic design feels deliberate and persuasive.

The second half stands out for grounding abstraction in lived experience. The idea of a “mathematics of modern humiliation” makes algorithmic opacity feel personal, not just technical. Your point about how these systems fracture solidarity is particularly sharp, and the conclusion reframes the issue effectively: the real question isn’t whether we can build better models, but whether we will demand they serve justice instead of efficiency.

