Essay - Algorithms to Live By: The Computer Science of Human Decisions
The Arithmetic of Living
Optimal Stopping: When to Stop Looking
The chapter opens with the secretary problem: you’re interviewing candidates in sequence, must decide on each immediately, and can’t return to those you’ve passed over. The mathematics yield an elegant solution—spend 37% of your time gathering information, then commit to the first candidate better than all you’ve seen. What seems like a puzzle about hiring becomes a lens for apartment hunting, dating, parking. The authors trace the problem’s strange history through mid-century mathematics, noting that even Abraham Lincoln faced a version (circuit planning) and Johannes Kepler documented his struggle applying it to remarriage. The real revelation isn’t the 37% rule itself but what it teaches about regret: even optimal strategies fail most of the time. You can follow perfect process and still end up with the second-best apartment, the third-best relationship. The chapter suggests a kind of algorithmic stoicism—control process, not outcome. What lingers is the tension between the rule’s mathematical certainty and its practical failure rate, between knowing the right answer and living with wrong results.
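The 37% rule is easy to check empirically. Below is a minimal simulation (my sketch, not code from the book): skip the first 37% of candidates, then commit to the first one who beats everything seen so far, and measure how often that lands the single best candidate.

```python
import random

def secretary_strategy(n, cutoff_frac=0.37):
    """Simulate one round of the secretary problem with n candidates.

    Candidates arrive in random order; we skip the first cutoff_frac of
    them (the look phase), then take the first one better than everything
    seen. Returns True if we end up with the single best candidate.
    """
    ranks = list(range(n))                      # 0 is the best candidate
    random.shuffle(ranks)
    cutoff = int(n * cutoff_frac)
    best_seen = min(ranks[:cutoff], default=n)  # best of the look phase
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0                       # committed; got the best?
    return ranks[-1] == 0                       # forced to take the last one

def success_rate(n=100, trials=20000):
    random.seed(0)
    return sum(secretary_strategy(n) for _ in range(trials)) / trials
```

With 100 candidates, the measured success rate hovers near the theoretical 37%: optimal play, and still failure most of the time.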
Explore-Exploit: The Latest Versus the Greatest
Christian and Griffiths frame one of life’s persistent tensions through a deceptively simple question: try the new restaurant or return to your favorite? Computer science formalizes this as the multi-armed bandit problem—imagine slot machines with unknown odds; how do you maximize winnings? The chapter walks through various solutions: the Gittins index (mathematically optimal but complex), upper confidence bound algorithms (nearly as good, far simpler), and the crucial insight that your time horizon changes everything. With infinite time, explore aggressively. With limited time, exploit what you know. This explains why young people try many careers while older people stick to proven favorites—not rigidity but rationality. The authors interview music journalists exhausted by mandatory exploration and land on something poignant: the narrowing of social circles in old age isn’t decline but optimal strategy. When Laura Carstensen’s research shows elderly people are happier despite having fewer social connections, we’re seeing exploration/exploitation math in human form. Life should get better over time, the chapter argues, because you’re finally exploiting decades of gathered knowledge.
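The upper confidence bound idea can be sketched in a few lines. This is a generic UCB1 implementation (my sketch, not code from the book): treat each arm as though it were as good as its optimistic error bar allows, so under-explored arms keep getting tried while proven winners get exploited.

```python
import math, random

def ucb1(pull, n_arms, horizon):
    """UCB1 bandit: play each arm once, then always pull the arm with the
    highest optimistic estimate mean + sqrt(2 ln t / n_pulls)."""
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                      # initialise: try everything once
        else:
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        totals[arm] += pull(arm)
    return counts

random.seed(1)
probs = [0.3, 0.5, 0.7]                      # hidden payout rates
counts = ucb1(lambda a: random.random() < probs[a], 3, 20000)
```

Over a long horizon the best arm absorbs the vast majority of pulls, while the others are sampled just often enough to stay ruled out.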
Sorting: Making Order
Danny Hillis watches his college roommate pull socks randomly from a hamper, tossing back non-matches until he finds a pair. The inefficiency appalls him. This is the book’s entry into sorting theory, but it quickly complicates: the roommate’s system, while terrible, at least gets him dressed. Sometimes unsorted is better than the labor of sorting. The chapter moves through sorting algorithms—bubble sort’s intuitive inefficiency, merge sort’s elegant power, the revealing fact that most scheduling problems are actually sorting problems (sports leagues, NASA’s scheduling algorithms for Mars rovers). There’s a curious section on how different sorting methods reflect different ways of establishing dominance: round-robin tournaments produce quadratic time complexity, while race-based rankings offer near-instant sorting because numbers don’t require pairwise comparisons. The most provocative claim arrives near the end: your messy desk isn’t failure, it’s optimal. The “pile” on your desk sorts itself by recency of use, making it remarkably efficient. Attempting to alphabetize your bookshelves, the authors suggest, will take more time than scanning unsorted shelves ever will. Sometimes the cost of making order exceeds the benefit.
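The gap between bubble sort’s quadratic cost and merge sort’s n log n is easy to make concrete. The instrumented versions below (my sketch, not from the book) count comparisons on the same 200-item list.

```python
import random

def bubble_sort(xs):
    """O(n^2): each pass bubbles the largest remaining item to the end."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def merge_sort(xs):
    """O(n log n): sort each half, then merge the halves in one pass."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, comparisons

random.seed(0)
data = random.sample(range(1000), 200)
_, cb = bubble_sort(data)   # 19,900 comparisons for 200 items
_, cm = merge_sort(data)    # roughly an order of magnitude fewer
```

At 200 items the difference is about a factor of fifteen; at a million items it is the difference between seconds and days, which is why the cost of making order matters.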
Caching: Forget About It
The chapter begins with Hermann Ebbinghaus’s 1879 experiments memorizing nonsense syllables, mapping how memory fades over time—the famous forgetting curve. For over a century this was treated as a flaw, evidence of human limitation. But when John Anderson in 1987 started looking at real-world data patterns—New York Times headlines, parent-child conversations, email inboxes—he found something remarkable: the world itself forgets in exactly the pattern that human memory does. Words that appeared recently are likely to appear again soon; words absent for months will likely stay absent. Our forgetting isn’t a bug, it’s optimal tuning to the environment’s actual statistics. The insight extends beyond individual memory to organizational systems. The chapter describes visiting Berkeley’s library, where books are shelved by Library of Congress number but should arguably be sorted by recency of use—the least recently used books relegated to remote storage, recently returned books displayed prominently in the lobby. That we don’t do this reveals something about the gap between theoretical optimization and practical implementation. And there’s a darker observation embedded: if forgetting is optimal, what does it mean that as we age, our memory “failures” increase? Perhaps we’re not declining—we’re managing an ever-larger database, paying the computational cost of longer experience.
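A least-recently-used cache, the policy behind the recently-returned-books-in-the-lobby idea, is a few lines in Python. This sketch (mine, with hypothetical book titles) evicts whatever has gone untouched the longest.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evict the item untouched the longest,
    on the bet that the recent past predicts the near future."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # touching refreshes recency
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used

cache = LRUCache(3)
for book in ["Dickens", "Woolf", "Joyce"]:
    cache.put(book, True)
cache.get("Dickens")        # Dickens is now most recently used
cache.put("Morrison", True) # evicts Woolf, the least recently touched
```

The messy desk runs exactly this algorithm for free: whatever you just used lands on top, and whatever sinks to the bottom was the right thing to lose track of.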
Scheduling: First Things First
The chapter opens with an inventory of time management advice—Getting Things Done, Eat That Frog, The Now Habit—all offering contradictory guidance. Then it points out something obvious: computer scientists have been thinking about scheduling for decades, and their answers are not only more rigorous but dramatically simpler. If you care about minimizing maximum lateness, do tasks in order of deadline (earliest due date). If you want your list of outstanding tasks to shrink as fast as possible (minimizing the sum of completion times), do the shortest tasks first. If tasks have different importance, work in order of importance-per-unit-time. The rules are clear, provable, and largely ignored. More interesting is what happens when scheduling becomes hard: precedence constraints (you can’t do B until A is finished) and context switching costs. The chapter describes the Mars Pathfinder spacecraft freezing because of priority inversion—a low-priority task held a resource that a high-priority task needed, while medium-priority tasks ran instead of either. The fix required priority inheritance, essentially making the low-priority task temporarily important. There’s a human parallel: sometimes the most urgent thing is making sure the seemingly unimportant thing gets done. And then there’s thrashing—when a system spends all its time switching between tasks rather than completing any of them. The chapter suggests that when we feel paralyzed by our to-do list, we’re thrashing. The solution isn’t working harder; it’s doing less.
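The scheduling rules the chapter names are each a one-line sort. A sketch with hypothetical tasks (mine, not the book’s):

```python
def earliest_due_date(tasks):
    """Minimize maximum lateness: run jobs in deadline order."""
    return sorted(tasks, key=lambda t: t["due"])

def shortest_processing_time(tasks):
    """Shrink the outstanding list fastest: shortest job first; with
    weights, sort by importance per unit time (weighted SPT)."""
    return sorted(tasks, key=lambda t: t["hours"] / t.get("weight", 1))

tasks = [
    {"name": "report",  "hours": 4, "due": 10, "weight": 3},
    {"name": "email",   "hours": 1, "due": 2,  "weight": 1},
    {"name": "invoice", "hours": 2, "due": 5,  "weight": 4},
]
```

By deadline the order is email, invoice, report; by importance-per-unit-time the heavily weighted invoice jumps to the front. Different objectives, different provably best orders.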
Bayes’s Rule: Predicting the Future
J. Richard Gott III stood at the Berlin Wall in 1969 and wondered how long it would last. With no other information, he applied the Copernican Principle: assume you’re observing at a random point in the phenomenon’s lifetime. If the wall was eight years old, it would likely last another eight years. (It lasted twenty.) This is Bayesian reasoning—updating beliefs based on evidence—and the chapter uses it to reframe prediction. Thomas Bayes, an 18th-century minister, proved that we can reason backward from effects to probable causes. Pierre-Simon Laplace extended this to create the surprisingly simple rule: if you’ve seen W wins in N attempts, estimate the probability as (W+1)/(N+2). The “+1” and “+2” encode something profound: even one success suggests future success, but our confidence should scale with the number of observations. The chapter applies this everywhere—movie grosses, poem lengths, waiting times—and shows that people’s intuitions closely match optimal Bayesian predictions. We absorb probability distributions from the world and use them unconsciously. But there’s a warning embedded: our priors come from our experience, so marginalized groups form different—and for their lives, more accurate—priors than privileged ones. The marshmallow test example stings: children who eat the marshmallow quickly aren’t failing at willpower; they’re succeeding at Bayesian inference about adult reliability.
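Laplace’s rule is small enough to state as code. A sketch using exact fractions:

```python
from fractions import Fraction

def laplace(wins, attempts):
    """Laplace's rule of succession: estimate P(success) as (W+1)/(N+2).
    One success in one attempt gives 2/3, not certainty; with no data at
    all the estimate is an even 1/2."""
    return Fraction(wins + 1, attempts + 2)
```

So `laplace(1, 1)` yields 2/3 and `laplace(9, 10)` yields 5/6: the estimate tracks the observed rate but stays appropriately humble when observations are few.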
Overfitting: When to Think Less
Charles Darwin made a pro/con list to decide whether to marry his cousin Emma Wedgwood. The list was exhaustive, meticulous, rational. And probably counterproductive. The chapter introduces overfitting through machine learning: a model that perfectly fits existing data will make terrible predictions about future data because it’s modeling noise as signal. This is the paradox at the book’s heart—more thinking doesn’t always help. Darwin should have stopped after his first few considerations; the rest was overthinking. The authors apply this to multiple domains: taste is overfitted to ancestral nutrition needs and now optimizes for junk food, standardized tests can be overfitted through teaching to the test, law enforcement can overfit training scenarios (cops found dead with brass in their hands, having instinctively collected spent casings mid-gunfight). Cross-validation emerges as the solution: hold back some data to test whether your model generalizes. Applied to life, this means testing whether your optimization actually serves your goals or just serves the metric you chose. The chapter ends with Harry Markowitz, who won the Nobel Prize for portfolio optimization theory, revealing he split his retirement savings 50-50 between stocks and bonds rather than using his own complex formula. Why? Because given the uncertainty in his estimates, the simple solution was likely more robust. Sometimes the best way to be smart is to deliberately be simple.
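Overfitting and its remedy, holding data back, can be shown in miniature. In this sketch (my construction, not the book’s), a model that memorizes noisy training points scores perfectly on data it has seen and badly on data it hasn’t, while a one-parameter model generalizes:

```python
import random

random.seed(0)
# Noisy observations of a simple underlying rule: y = 2x + noise.
data = [(x, 2 * x + random.gauss(0, 1)) for x in range(40)]
train, test = data[::2], data[1::2]          # hold half the data back

def overfit_model(train):
    """Memorizes every training point, fitting the noise perfectly."""
    table = dict(train)
    fallback = sum(y for _, y in train) / len(train)
    return lambda x: table.get(x, fallback)

def simple_model(train):
    """One-parameter model y = slope * x, slope fit by least squares."""
    slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x: slope * x

def mse(model, pts):
    """Mean squared error of a model over a set of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)
```

The memorizer’s training error is exactly zero, and its held-out error is dozens of times worse than the simple line’s. Cross-validation is just this comparison done systematically.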
Relaxation: Let It Slide
Meghan Bellows was planning her wedding and wrestling with table assignments—107 guests, 11 tables, infinite social complexities. She realized the problem was identical to her PhD research in chemical engineering: placing amino acids in protein chains to maximize binding energy. Both were discrete optimization problems, and both were intractable—no efficient solution exists. The chapter explains that many real-world problems (traveling salesman, wedding seating, fire station placement) are provably hard. But computer science has developed strategies for getting close-enough answers in reasonable time. Constraint relaxation removes some rules to make the problem easier—in the traveling salesman problem, you might allow the salesman to visit cities twice or retrace steps. Continuous relaxation turns discrete choices (the fire truck is here or not here) into fractional ones (put 0.3 of a fire truck here), then rounds back to reality. Lagrangian relaxation turns impossibilities into penalties, making everything possible but some things very expensive. The insight extends beyond algorithms: sometimes the best approach to an impossible problem is imagining an easier version, solving that, and then adapting the solution back to reality. The chapter closes with a sports scheduling story—NCAA basketball’s complex constraints can only be satisfied by turning some “never do this” rules into “avoid this if possible” guidelines. Perfection blocks progress; relaxation enables it.
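Continuous relaxation can be illustrated with the knapsack problem (my stand-in example, not one the chapter uses): allow fractional items and the problem becomes easy, and the relaxed answer brackets the true discrete optimum that rounding back to whole items approximates.

```python
def fractional_knapsack(items, capacity):
    """Continuous relaxation of 0/1 knapsack: fractions of items allowed.
    Greedy by value density is optimal for the relaxed problem, and its
    value upper-bounds the true discrete optimum."""
    best = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        take = min(weight, capacity)
        best += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return best

def rounded_solution(items, capacity):
    """Round the relaxation back to reality: whole items only, greedily."""
    total = used = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if used + weight <= capacity:
            total += value
            used += weight
    return total

items = [(60, 10), (100, 20), (120, 30)]     # (value, weight) pairs
upper = fractional_knapsack(items, 50)       # relaxed optimum: 240.0
lower = rounded_solution(items, 50)          # feasible whole-item answer: 160
```

The true discrete optimum (here, 220) is guaranteed to sit between the rounded feasible answer and the relaxed bound, which is exactly the service relaxation provides: a certificate of how close “good enough” is.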
Randomness: When to Leave It to Chance
Stanislaw Ulam, recovering from brain surgery in 1946, played solitaire and wondered: what’s the probability a shuffled deck yields a winnable game? The combinatorial math was impossible—52 factorial possible arrangements. So he took a different approach: play many games and count the wins. This is the Monte Carlo method, named for the casino, and it revolutionized computational science. The chapter argues that randomness isn’t giving up on a problem—it’s often the only tractable approach. Michael Rabin’s primality test uses randomness to determine whether huge numbers are prime (essential for encryption) with arbitrary accuracy in minimal time. No deterministic algorithm can match it. The chapter traces randomness through multiple domains: simulated annealing (solving optimization problems by treating them like metallurgical cooling), jitter and random restarts (escaping local maxima), William James’s theory that creativity requires randomness (“new conceptions, emotions, and active tendencies which evolve are originally produced in the shape of random images, fancies, accidental outbursts”). There’s a lovely section on Salvador Luria watching someone hit a slot machine jackpot and realizing bacterial mutations work the same way—not responses to environmental pressure but random variations, most useless, occasionally spectacular. The conclusion complicates optimization culture: sometimes the best strategy is deliberately introducing chance, accepting that rationality includes knowing when to roll the dice.
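Ulam’s move, sampling what you cannot enumerate, fits in a dozen lines. Scoring a winnable solitaire game is complicated, so this sketch (mine) estimates a simpler stand-in: the probability that no card in a shuffled deck lands back in its original position, which converges to 1/e, about 0.368.

```python
import random

def derangement_rate(trials=20000):
    """Ulam-style Monte Carlo: the 52! orderings of a deck defeat
    enumeration, but sampling shuffles and counting is trivial."""
    random.seed(0)
    deck = list(range(52))
    hits = 0
    for _ in range(trials):
        random.shuffle(deck)
        # A "hit" is a derangement: no card sits in its original slot.
        if all(card != pos for pos, card in enumerate(deck)):
            hits += 1
    return hits / trials
```

Twenty thousand shuffles pin the answer to within a percent or so of 1/e, with no combinatorics required: play many games and count.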
Networking: How We Connect
The first message sent between computers on October 29, 1969, was supposed to be “LOGIN” but the system crashed after “LO.” Fitting, the authors suggest—networking has been partial from the start. The chapter explores TCP/IP, the protocol underlying the internet, which broke from telephone networks’ circuit switching (dedicated channels) to packet switching (atomized messages merged into communal flow). The design decision solved two problems: efficiency (computers mostly stay silent, then burst) and robustness (packets can route around damage). But it created new ones. How do you know your messages arrived? The two generals problem proves that perfect confirmation is impossible—any confirmation itself requires confirmation. So TCP uses a “three-way handshake” and subsequent acknowledgment packets. When packets drop, exponential backoff prevents network collapse: after each failure, wait twice as long before retrying. The authors extend this to human life—invitations to flaky friends should follow exponential backoff, giving up slowly but never completely. More surprising is the application to justice: Hawaii’s HOPE program for probationers uses escalating jail sentences (one day, then two, then four) rather than warnings followed by years-long sentences. Recidivism dropped dramatically. The chapter ends with buffer bloat—modern modems have so much memory that packets queue endlessly rather than dropping, preventing the congestion signals the system needs. The solution is counterintuitive: sometimes systems work better when they reject requests, forcing explicit choices about priority.
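Exponential backoff is simple enough to write out. A sketch (mine): after each failure the wait doubles, optionally jittered the way real network stacks do, up to a cap.

```python
import random

def backoff_delays(failures, base=1.0, cap=64.0, jitter=False):
    """Exponential backoff: after each failure, double the wait before
    retrying (optionally randomized to avoid synchronized retries),
    never exceeding a cap."""
    delays = []
    for attempt in range(failures):
        delay = min(cap, base * 2 ** attempt)
        if jitter:
            delay = random.uniform(0, delay)  # "full jitter" variant
        delays.append(delay)
    return delays
```

Five failures yield waits of 1, 2, 4, 8, 16 seconds; the retries never stop entirely, they just become vanishingly rare, which is precisely the shape of the flaky-friend policy.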
Game Theory: The Minds of Others
The prisoner’s dilemma: two criminals, separate cells, each can betray the other or stay silent. Betrayal is the dominant strategy—better regardless of what the other does—yet mutual betrayal leaves both worse off than mutual silence. The chapter uses this to explore Nash equilibrium (stable strategies where no one wants to unilaterally change) and reveals something disturbing: the equilibrium isn’t necessarily good. In fact, it’s often terrible. The tragedy of the commons extends this to multiple players—everyone overgrazing the shared lawn until it’s destroyed. Game theory traditionally assumes rational players find equilibrium, but algorithmic game theory asks different questions: Can players compute the equilibrium? (Often no—it’s intractable.) Can we design games where equilibrium is good? (Sometimes.) The Vickrey auction achieves something remarkable: the winner pays the second-highest bid, making truth-telling the dominant strategy. No recursion, no strategy, just honesty. The revelation principle proves any game can be redesigned to make honesty optimal. But most games aren’t designed at all—they emerge from individual choices. Information cascades explain bubbles: early investors bid up a stock, later investors interpret this as valuable information and bid more, creating runaway feedback unmoored from reality. The chapter’s conclusion circles back to computational kindness: we pose computational problems to each other through our choices. Asking “what do you want to do tonight?” forces the other person to simulate your preferences. Better to state yours clearly, shouldering the cognitive load yourself.
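The Vickrey mechanism is a few lines once the bids are in. This sketch (hypothetical bidders, mine) awards the item to the highest bidder at the second-highest price:

```python
def vickrey(bids):
    """Sealed-bid second-price auction: the highest bidder wins but pays
    the second-highest bid, which makes bidding your true value the
    dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

winner, price = vickrey({"alice": 120, "bob": 100, "carol": 80})
```

Alice wins but pays Bob’s bid of 100, not her own 120. Shading her bid below 120 could only cost her the item, never lower her price, so honesty dominates: the no-recursion property the chapter celebrates.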
Conclusion: Computational Kindness
The book ends where it began, with the computer as comrade rather than tool. We face computational problems because we exist in constrained space and time; so do computers. Three lessons emerge: First, sometimes computer science offers transferable solutions—the 37% rule, least recently used caching, upper confidence bounds for exploration. Second, knowing you used an optimal algorithm should provide relief even when outcomes disappoint. The 37% rule fails 63% of the time; regret is mathematically inevitable. We should “hope to be fortunate but strive to be wise.” Third, we choose not only the problems we face but also the problems we pose to each other. This creates computational kindness as an ethical principle—frame questions to minimize others’ cognitive burden. Don’t say “I’m flexible” when making dinner plans; that passes the computational buck. Offer specific options or state preferences clearly. The authors note that restaurants could be computationally kinder too: take a name and text when tables are ready, rather than forcing customers to hover in uncertainty. Cities could be kinder: single-helix parking garages eliminate all search strategy; live bus arrival displays let passengers decide once rather than continuously. The deepest point arrives quietly: sometimes good enough really is good enough. Computational kindness isn’t just about helping others think less—it’s about accepting that perfect optimization is often the enemy of human flourishing. The book that began with mathematics ends with ethics, suggesting that the greatest algorithmic achievement might be designing systems that let us stop computing altogether.
Bridge
What emerges from these chapters isn’t a simple story about applying math to life. The book keeps doubling back on its own premise—optimal algorithms often involve thinking less, accepting good enough, introducing randomness. The 37% rule fails most of the time. Messy desks outperform organized ones. The best scheduling strategy is sometimes dropping balls deliberately. By the end, you’re left wondering whether the real insight is about computation or about the limits of computational thinking itself. What follows is an attempt to sit with that tension—to think about what it means when the optimal solution to life’s problems is often to stop optimizing.
The Computational Self
There’s a particular kind of exhaustion endemic to modern professional life that rarely gets named directly but that almost everyone recognizes. It’s the fatigue of treating yourself as a system to be optimized. Track your sleep cycles. Gamify your fitness. Quantify your relationships, your reading habits, your coffee intake, your creative output. The contemporary self arrives pre-loaded with dashboards and metrics, key performance indicators for existence itself. We’ve been told that data is liberation, that measurement leads to improvement, that what gets tracked gets managed. And yet.
“Algorithms to Live By,” Brian Christian and Tom Griffiths’s 2016 exploration of computer science’s applications to human decision-making, starts from a premise that seems to embrace this optimization culture. The book promises to import the rigorous solutions of computer science—algorithms tested across billions of operations, refined through decades of research—into the messier domain of human choice. How to organize your closet? Ask caching theory. When to commit to a relationship? Consult optimal stopping math. How to balance novelty and habit? The explore-exploit trade-off has your answer. It’s a seductive pitch: your life’s persistent dilemmas have already been solved by machines.
But something peculiar happens as you move through the book’s eleven chapters. The authors keep arriving at conclusions that complicate their own premise. Yes, there’s an optimal algorithm for finding a spouse (the 37% rule—spend 37% of your dating life exploring, then commit to the first person better than all you’ve seen). But it fails 63% of the time. Yes, there are efficient sorting algorithms. But attempting to alphabetize your bookshelf will waste more time than scanning unsorted shelves ever will. Yes, there are strategies for minimizing regret. But regret is mathematically inevitable—even perfect play guarantees disappointment. The book that promises computational solutions keeps revealing computational impossibilities.
Consider the secretary problem, the book’s opening gambit. You’re hiring for a position, interviewing candidates sequentially. You must decide on each immediately—accept or reject—with no returns to previous candidates. Given N total applicants, the math proves you should reject the first 37% outright (gathering information), then hire the first subsequent candidate who beats all previous ones. This is optimal. It maximizes your chance of selecting the single best candidate from the pool. And your chance of success? 37%.
Most guides to decision-making would treat this as an unfortunate limitation. Christian and Griffiths take a different tack. They suggest we’ve been asking the wrong question. The problem isn’t that the 37% rule fails most of the time. The problem is our expectation that optimal strategy should guarantee good outcomes. In a universe where time moves forward and information arrives sequentially, regret isn’t a personal failing—it’s a structural feature of reality. The proper response isn’t to optimize harder. It’s to distinguish between process and outcome, to hope for fortune while striving for wisdom.
This becomes the book’s recurring move: taking us to the edge of computational thinking, then revealing its limits. There’s a chapter on overfitting—the machine learning problem where models that perfectly fit existing data make terrible predictions about future data. Darwin’s elaborate pro-con list for whether to marry Emma Wedgwood? Overfitting. The instinct to consider every possible factor before deciding? Overfitting. The belief that more analysis produces better choices? Also overfitting. The chapter suggests something almost heretical: the optimal strategy is often to think less, not more. Regularization—the mathematical technique for preventing overfitting—essentially involves penalizing complexity. The simpler model frequently beats the sophisticated one.
This insight extends across domains in ways the authors pursue with an almost mischievous pleasure. Police officers found dead with spent brass in their hands—they’d been overfitted to training scenarios where you collect your casings. Taste buds optimized for ancestral scarcity now overfit to junk food. Investment strategies perfectly tuned to past market conditions fail when conditions change. The solution in each case isn’t better optimization but strategic simplification, deliberately stopping the refinement process before it runs away from reality. Harry Markowitz won the Nobel Prize for portfolio optimization theory, developing complex mathematical frameworks for balancing risk and return. When it came time to invest his own retirement savings, he split it 50-50 between stocks and bonds. Why? Because given the uncertainty in his underlying assumptions, the simple answer was likely more robust than the sophisticated one.
The book’s most provocative chapters deal with intractability—problems where no efficient solution exists, where brute force computation would outlast the heat death of the universe. Wedding seating arrangements with social complexities, traveling salesman problems above a certain size, many scheduling scenarios with precedence constraints. These aren’t hard because we haven’t found the right algorithm yet. They’re hard because mathematical proof establishes they cannot be efficiently solved. And yet—the book’s key observation—we solve them anyway. Not optimally, but adequately. Through relaxation (removing constraints), through approximation (accepting near-solutions), through strategic use of randomness (escaping local maxima). The lesson isn’t that computation fails. It’s that computation succeeds by giving up on perfection.
There’s something almost spiritual in this reframing, though it arrives via mathematics rather than meditation. The recognition that uncertainty is irreducible, that regret is inevitable, that optimal strategies fail most of the time—these aren’t reasons for despair but for acceptance. The universe has hard problems built into its structure. Time flows forward. Information costs effort to gather. Memory is finite. Computation takes time. These aren’t personal failings. They’re facts about reality. What we control is how we respond.
But I find myself wondering whether the book’s most important contribution isn’t its positive recommendations but its negative ones—the things it suggests we stop doing. Stop trying to arrange every option in priority order; pick from the first few that look good enough. Stop gathering complete information before deciding; 37% is sufficient. Stop holding out for perfect partners, jobs, apartments; regression to the mean ensures disappointment. Stop organizing everything that could be organized; messiness is often optimal. The computational perspective reveals that many forms of striving are worse than futile—they’re counterproductive.
This connects to the book’s closing concept, computational kindness, which emerges as perhaps its deepest insight. We don’t only solve computational problems ourselves; we pose them to others through our choices. When you say “I’m flexible” about dinner plans, you force everyone else to simulate your preferences recursively. When you design a parking lot with multiple lanes requiring complex search strategies, you tax every driver’s cognitive resources. When you implement “unlimited vacation” policies, you create a race to the bottom where everyone competes to take slightly less time off. Computational kindness means structuring choices to minimize others’ cognitive burden—offering specific options rather than open-ended questions, designing systems that make the right path obvious, stating preferences clearly rather than forcing inference.
This principle has implications beyond politeness. Take the Hawaii HOPE program for criminal probationers, which the authors describe in their networking chapter. Traditional probation involved warnings after violations, then at some discretionary point, judges imposed years-long sentences. HOPE replaced this with immediate, escalating consequences: one day in jail for the first violation, then two days, then four, following the exponential backoff principle from network protocols. Recidivism dropped by half. The change wasn’t more punishment or less—it was predictable computation. Probationers could calculate costs rather than guess at arbitrary thresholds. The system became comprehensible, and comprehensibility enabled choice.
Or consider the book’s observation about restaurant seating policies. Some restaurants make you hover by the host stand until a table opens (“spinning,” in computer science terms, where a processor continuously checks for resources). Others take your name and text when tables are ready (“blocking,” where the system handles resource management). The first maximizes table turnover; the second minimizes customer cognitive load. Both work, but they optimize different things. Recognizing this as a computational choice rather than a hospitality instinct changes how we might design such systems.
The book was published in 2016, before the current AI boom made “algorithmic thinking” a cultural flashpoint. Reading it now, in 2025, you notice what it doesn’t anticipate. It treats computer science as a solved discipline offering solutions to human problems, not as an active project making those problems worse. There’s no consideration of how recommendation algorithms might hijack explore-exploit trade-offs for profit, no worry about how computational thinking might infiltrate domains where it shouldn’t, no reckoning with the ways that optimization culture itself might be the problem rather than the solution.
And yet the book’s actual arguments resist the most toxic aspects of that culture. When it suggests we should stop optimizing and accept good-enough solutions, it’s working against the grain of contemporary tech thinking, not with it. When it reveals that many problems are provably intractable, it’s setting limits on computational ambition. When it demonstrates that randomness and simplification often beat careful analysis, it’s undermining rather than reinforcing the ideology of total control through data.
There’s a telling moment late in the book where the authors discuss information cascades—scenarios where rational individuals observe each other’s behavior and create runaway feedback loops entirely divorced from underlying reality. Everyone bids up the stock because everyone else is bidding it up. Everyone works longer hours because everyone else works longer hours. Everyone optimizes their life because everyone else is optimizing theirs. No one’s being irrational; the system itself generates the pathology. Game theory shows that sometimes the problem isn’t the players but the game.
This might be the book’s most subversive implication: that the optimization culture it seems to endorse is itself an information cascade, a mass delusion where we’ve all agreed to treat human life as a computational problem requiring computational solutions, when in fact the optimal solution to many such problems is to stop computing altogether. The secretary problem tells you when to stop looking. But maybe the deeper lesson is about when to stop optimizing the looking process itself.
I keep returning to the chapter on caching, which contains one of the book’s most quietly radical claims. Our forgetting isn’t failure—it’s optimal. Hermann Ebbinghaus’s forgetting curve, long treated as evidence of human limitation, turns out to perfectly match the actual statistics of information recurrence in real environments. Words that appeared recently will likely appear again soon; words absent for months will likely stay absent. Evolution tuned human memory not to remember everything but to remember the right things at the right rates. And here’s the kicker: as we age and our memory seems to decline, we’re not failing—we’re managing an ever-larger database. The “senior moment” is the computational cost of a richer life.
This reframes cognitive decline from personal tragedy to mathematical necessity. It also suggests that many of our supposed inefficiencies might be optimizations we don’t recognize. The messy desk isn’t disorganization; it’s a least-recently-used cache, automatically sorting items by recency of access. The tendency to go with your gut rather than deliberate extensively isn’t impulsivity; it’s early stopping to prevent overfitting. The apparently irrational decision to stick with a known mediocre option rather than explore better alternatives isn’t laziness; it’s optimal exploitation given your time horizon.
But this cuts both ways. If our instincts are already optimized, what’s the point of learning about algorithms? The authors’ answer seems to be that consciousness of the computational structure helps in two ways. First, it lets us distinguish between situations where our instincts are well-calibrated (human memory in natural environments) and situations where they’re not (unlimited vacation policies creating destructive equilibria). Second, it helps us design systems and social arrangements that work with rather than against our computational constraints.
The book’s greatest service might be its vocabulary. It gives us language for recognizing computational problems in social situations, for seeing structure where we saw only mess. That anxious paralysis when facing your to-do list? Thrashing—the system is spending all its energy on meta-work rather than actual work. That tendency for everyone to work themselves to exhaustion? Tragedy of the commons—individually rational choices leading to collectively terrible outcomes. That frustrating dinner where no one will state their preferences? Recursive inference creating exponential computational costs.
Having names for these patterns doesn’t always offer solutions. Game theory shows that many terrible equilibria are nonetheless stable—no individual player can improve their situation by changing strategy alone. Information cascades can trap entire populations in irrational behavior despite everyone acting rationally. Some problems are intractable not because we haven’t found the algorithm but because no efficient algorithm exists. But recognition matters. It shifts the frame from personal failing to structural challenge, from “why can’t I get this right?” to “this problem is genuinely hard.”
And sometimes recognition enables intervention. If unlimited vacation policies create races to the bottom, mandate minimums instead. If restaurant hovering stresses customers, implement notification systems. If parking lots pose impossible optimization problems, design them as single helixes where the answer is always “take the first spot.” These are mechanism design questions—changing the game rather than changing the players—and they require seeing social arrangements as computational problems that can be re-architected.
Yet I’m left uncertain about how far this frame extends. The book mostly addresses middle-class professional problems—apartment hunting, career choice, dinner reservations, parking. For someone facing eviction, food insecurity, or discrimination, the computational perspective feels almost obscenely abstract. The 37% rule assumes you can afford to reject the first third of options. Explore-exploit trade-offs assume you have resources to explore. Computational kindness assumes you have the social capital to impose structure on others’ choices. The algorithmic solutions presume a certain baseline of stability and agency.
Christian and Griffiths don’t ignore this entirely. Their chapter on Bayes’s rule includes a devastating analysis of the marshmallow test, showing that children who immediately eat the marshmallow aren’t failing at self-control—they’re succeeding at Bayesian inference about adult reliability. If your experience suggests adults don’t keep promises, waiting for a second marshmallow is irrational. This flips the narrative from individual virtue to environmental assessment, revealing how algorithmic thinking can expose structural inequity rather than obscure it.
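The inference behind that reading can be restated as a small calculation. This is a sketch under simplifying assumptions, not the book’s model: suppose a child estimates adult reliability with a uniform Beta(1, 1) prior updated on promises they have seen kept or broken, and waits only if the expected payoff of waiting (two marshmallows times the chance the adult returns) beats the sure single marshmallow:

```python
def prob_adult_delivers(kept, broken, prior_kept=1, prior_broken=1):
    """Posterior mean of a Beta-Bernoulli model of adult reliability.

    Starts from a uniform Beta(1, 1) prior and updates on how many
    promises this child has seen kept versus broken.
    """
    return (prior_kept + kept) / (prior_kept + kept + prior_broken + broken)

def should_wait(kept, broken):
    """Wait only if the expected payoff of waiting (2 marshmallows times
    the chance the adult returns) beats eating the sure single one."""
    return 2 * prob_adult_delivers(kept, broken) > 1

# A child who has mostly seen promises kept rationally waits...
print(should_wait(kept=8, broken=2))   # True: posterior p = 9/12 = 0.75
# ...while a child who has mostly seen promises broken rationally eats now.
print(should_wait(kept=1, broken=7))   # False: posterior p = 2/10 = 0.2
```

Same decision rule, opposite answers—the difference is entirely in the evidence each child brings to it.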
But the book doesn’t pursue these implications systematically. It gestures at how computational kindness could reshape public policy (Hawaii’s HOPE program, for instance), but it doesn’t reckon with how optimization culture itself might be a mechanism of control, how “efficiency” often means extracting maximum labor for minimum compensation, how the same algorithmic thinking that illuminates personal choices enables surveillance capitalism and algorithmic management. The algorithms are neutral; the systems implementing them are not.
Still, there’s something valuable in the book’s core maneuver—taking computer science seriously as a way to think about human flourishing, then showing how that thinking undermines itself. The optimal algorithm often involves stopping early, thinking less, accepting good-enough answers. Perfect computation is impossible; attempted perfect computation is counterproductive. The goal isn’t to think like a computer but to think like a computer scientist: aware of computational constraints, strategic about where to invest cognitive effort, comfortable with approximation and uncertainty, skilled at designing systems that work with human limitations rather than against them.
The book ends where it began, in that awkward restaurant standoff where everyone claims to be flexible and no one will state preferences. The computational perspective reveals this as a game with no good equilibrium—each person’s politeness imposes exponential costs on everyone else. The solution isn’t more sophisticated strategy. It’s someone saying “I’m inclined toward Thai, what do you think?” and shouldering the computational load. Moving from recursion to assertion. From infinite simulation to finite choice.
Perhaps that’s the real algorithm to live by: Compute when computation helps. Stop computing when it doesn’t. Recognize the difference. And above all, design your life and institutions to minimize the amount of computation required. The examined life might not be worth living if the examination never ends. Better to think carefully, then act. To optimize until it’s time to satisfice. To understand the mathematics of choice well enough to know when to close the equation and pick something, anything, and move on.
The restaurant might be mediocre. The relationship might not last. The apartment might have hidden flaws. We followed the algorithm and things didn’t work out, or we didn’t follow it and somehow things did. Life is like that. But at least we moved. At least we chose. At least we recognized that the choice itself, not its outcome, was the thing within our power. That’s the arithmetic of living—not a guarantee of happiness, but a defense against paralysis. A framework for action in a world that will never reveal all its secrets at once.