Effective Altruism Is a Short Circuit
For all its pretensions to mathematical objectivity, the moral paradigm of crypto fraudster Sam Bankman-Fried cannot tell you how to respond to “the terrible temptation to do good.”
IN BERTOLT BRECHT’S PLAY The Caucasian Chalk Circle, an otherwise sensible girl saves a small child when she doesn’t have to. The estate she works on has been overcome by an opposing army, and the wife of the slain governor has run off with a heap of her dresses, leaving a child behind. The girl’s fellow servants tell her to walk away, pointing out that the army will be all too interested in the governor’s heir, but she doesn’t. She spends an entire night staring at the child, caught in the throes of what Brecht calls “the terrible temptation to do good.”
The desire to do good when you don’t have to has a strangely powerful attraction. We Americans seem ever more obsessed with our desire to “be a good person,” or at any rate, to be declared “not the asshole.” But it is no longer enough to simply be a Presbyterian. We don’t have a united sense of how being good is supposed to look or play out anymore, unless you count being moralistic about making sure your kids get to eat Halloween candy. To me, it seems that our sheer rudderlessness in the presence of this basic desire—the desire to do good—is likely to bring about more chaos, not less, and certainly not less anytime soon. While it feels right to throttle the motor, we don’t know how to pilot the ship.
When the existentialist Jean-Paul Sartre was asked for life advice by an earnest young man in 1946, his response was characteristically rude and purposefully unhelpful: You are free, he said, so choose. That’s it, he implies: That’s all you get. But our simple allotment of radical freedom is also an unbearable weight, and so we look for advice, rubrics, rules, anything to lift the burden of that choice just a little bit.
It is to this problem at the aching heart of modern life that the “effective altruism community” has sought to address itself—through blogs, conventions, Substacks, online forums, grant-awarding organizations, several billion dollars, and the advocacy of one or two university professors—for going on thirteen years. The big draw of the movement is that its adherents purport to have resolved the dilemma, and in terms that are seductively easy to understand. Want to do good? The best way, the EAs claim, is to donate money to charities that can be objectively proven to work—and, with televangelistic flair, they add that the more money sent in by viewers at home, the more goodness is brought into the world. Fin.
That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. As a simple response to the stipulation of a dreadful but equally simple freedom, it seems almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects. Given the past year’s clown-car of a scandal involving the collapse of a major cryptocurrency exchange overseen by prominent EA Sam Bankman-Fried, who just this month had his bail revoked for witness tampering in advance of his trial in October for a raft of fraud and conspiracy charges, it’s almost too easy to feel a certain schadenfreude at the possibility that effective altruism—and its parent philosophy, classical utilitarianism—will really, finally get the pie in the face they deserve.
But while it’s tempting to write effective altruism off as “philanthropy + intellectual pretensions,” it is valuable to acknowledge the hold that effective altruism has on our ability to imagine goodness, and to try to understand why.
At the most basic level, it’s not too distant of a dream: Effective altruism speaks to the second moment of the fundamental desire to be good, which is the hope that our goodness would be—well, effective, or at any rate, that we would have some substantial confirmation of its interface with reality. As Liam Kofi Bright argued last fall, much academic and academic-leftist discourse seems to go on with no consequence whatsoever, and often it’s having the right words themselves, without regard for consequences, that is taken to be more important than whatever happens when those words are used.
That’s stupid, and we know it. An anxiety for goodness of this sort—an anxiety for rules, for shibboleths—is different from the painstaking work of figuring out what on earth is best each time life forces us to make use of our terrible freedom. That work requires us to step into a more ambiguous world where goodness is harder to achieve and sometimes even harder to discern, but also a world where, one hopes, the language we use to position and understand goodness would not be so easily co-opted by frauds like Bankman-Fried. The FTX founder had built his public profile on a commitment to effective altruism, but as he explained to Vox shortly after the discovery of the fraud he allegedly perpetrated, he always secretly took the ethos of the philosophy to be “dumb shit.”
SO WHAT IS ALL THIS SHIT ABOUT? If you had jumped sight unseen into effective altruism discourse last year, before it became obvious that the billions of dollars the movement had collected were going astray in ever more preposterous ways, you probably would have found plenty of things to be confused about. The most prominent advocate of EA is its cofounder, William MacAskill, an Oxford associate professor whose 2022 book, What We Owe the Future, argues the following:
Our money best increases goodness if we spend it to secure the good of far-distant future humans, since, after all, there might be more of them than there are of us now. (Maybe.) But then it turns out that the most pressing threat to humanity running up its numbers in futurity is the possibility of the future world being taken over by rogue AI, and so our dollars and prudence ought to be focused on avoiding this possibility—which, frankly, is a virtually unintelligible revision of our “terrible temptation” to do good amid our misshapen lives.
MacAskill’s body of ideas, known as “longtermism,” thrills our desire for practicality with the excitement of the unintuitive conclusion—one presented as an emergency obligation, at that. As essayist Phil Christman has remarked, while a concern for the effects of our actions on the future is laudable, “there’s a difference between taking responsibility for our actions and treating the future as our problem to solve.” Christman is understandably swayed by the promise of practical benefit, but he notes that with longtermism, we all too quickly find ourselves with “duties towards phantoms.” The fun, of course, of worrying about a distant future is that it gets us off the hook for worrying about the most pressing problems of AI: namely, why are we teaching children to write in ways that a cheap toy can imitate, and why are we pretending we don’t know about the already-present, far more consequential purpose of AI—to enable nation-states to wage ever more complicated war?
But the sense that longtermism plunges you in medias res into some larger and harder-to-trace story is accurate. At its heart, longtermism is only the dorky science-fiction version of the nineteenth-century classical utilitarianism that English philosophers John Stuart Mill (lifetime employee of the East India Company) and his dad’s best friend, Jeremy Bentham, propounded, and by means of which they have managed to hijack public discourse on goodness for two hundred years and counting, at least amongst the self-professed elite. Utilitarianism argues that happiness is a matter of pleasure and material comfort, in the main, but also intellectual satisfaction, for those who can manage it. What distinguishes it from simple hedonism are three things: 1) that the amount of well-being, conceived as comfort in a given human life, can be measured; 2) that this comfort should be maximized, which is to say, aggressively increased; and 3) that comfort should be aggressively increased not just for one human life, but for as many human lives as possible.
That at first glance this all sounds quite reasonable is a measure of our cultural dependency on Mill, the aforesaid employee of the global corporation known for “maximizing” the common good of Britain and India by exploiting India’s labor and people for the ostensibly shared good of colonialism. But it continues to sound reasonable to us long after Mill’s time because it appeals to our desire for solid ethical action in a simple and direct way: through insisting it can offer us mathematical proof that we have done good—and because it exploits our native wish to be generous by offering us the possibility of rationalizing our own desire for comfort and extending its mathematical possibility to limitless, nameless others. While some EAs have argued that it’s the empiricism that is the real “powerhouse” of effective altruistic thinking, the real problem is that this empirical orientation remains part of a larger ethos, with a lot of mathematical strings attached.
The peculiar genius of EA’s contribution to this school of thought can be found in the unseemly but solid inference it draws from utilitarianism: If the good amounts to material comfort achieved by minimizing physical pain, it plainly can be purchased. Early effective altruists set out to do just that with all the dollars they could pull together out of their own lives and, later, from others’ lives. As Amia Srinivasan has pointed out, while EA begins as a response to Peter Singer’s 1972 moralistic call on behalf of classical utilitarianism to waste no penny on the self when it could be spent helping others, something about the purchasability of ethical goodness gives a fundamental ease to all future moral relations.
So how on earth do we measure the good, if indeed it can be measured? Following its parent philosophy’s example, early effective altruism made the claim that Yes, good can be measured. Its novel contribution was to specify a unit of measurement: “quality-adjusted life-years,” a notion borrowed from the world of health policy that stipulates “a year of life lived in perfect health is worth 1 QALY (1 Year of Life × 1 Utility = 1 QALY) and that a year of life lived in a state of less than this perfect health is worth less than 1.” The classic example of how this is supposed to work is that your five dollars will produce a higher QALY figure if you use it to purchase a mosquito net for people in danger of malaria in Africa rather than, say, giving it to a child for him to spend it on ice cream—or, to evoke a common scenario for upper-middle-class EA-curious people, purchasing the mosquito net will produce more goodness than handing a homeless man five dollars during your walk to work, especially if you suspect he might spend it on a beer.
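The arithmetic behind this comparison can be made concrete in a few lines. A minimal sketch in Python, with every figure invented for illustration—the utility weights, costs, and years gained below are hypothetical, not real cost-effectiveness estimates:

```python
# Hypothetical illustration of the QALY arithmetic described above.
# All numbers (utility weights, costs, years of life gained) are
# invented for the example; they are not real estimates.

def qalys(years: float, utility: float) -> float:
    """QALYs = years of life x utility weight (1.0 = perfect health)."""
    return years * utility

def qalys_per_dollar(qalys_gained: float, cost: float) -> float:
    """The ratio an EA-style comparison runs: goodness bought per dollar."""
    return qalys_gained / cost

# A $5 mosquito net that (hypothetically) averts a case of malaria,
# adding 2 years of life at a utility weight of 0.9:
net = qalys_per_dollar(qalys(2.0, 0.9), 5.00)

# The same $5 spent on ice cream: a pleasant afternoon, but
# (hypothetically) zero measurable change in life-years:
ice_cream = qalys_per_dollar(qalys(0.0, 1.0), 5.00)

assert net > ice_cream  # by construction, the ledger prefers the net
print(f"net: {net:.2f} QALYs/$, ice cream: {ice_cream:.2f} QALYs/$")
```

The assertion is the point: once goodness has been defined as this ratio, the net wins automatically, and the man on the sidewalk scores a zero.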
The QALY measure promises not only precision but the beautiful tidiness of goodness at a remove. The pathos of distance allows the good deed on another continent to take on a brilliant purity of simple cause and simple effect—you have no connection to the recipients of the hypothetical net beyond their receipt of your gift—and so you, back at home, can walk right past the homeless guy without having to look at him, or for that matter, smell him. You have purchased the carbon offset credits of the heart.
But it did not take the effective altruists very long to shift their focus from even this salutary logic of better living through spending and for them instead to become consumed with longtermism. MacAskill’s first book, Doing Good Better, a linchpin of the early EA rationale, came out only seven years before his statement of longtermist principles, What We Owe the Future. Longtermism managed to distract a fair number of well-meaning people while diverting a fair amount of their dollars: No small portion of the funds donated to EA clearinghouses went to funding AI labs, funds now entangled in the FTX scam. In fact, in recent years, so much money has been channeled into EA projects that there’s even been a sense of not knowing what to do with it all, a wilderness of money going every which way, and to ever-zanier causes.
But even after the community’s shift toward longtermism, the fundamental claim of its parent paradigm, effective altruism—that they have run the numbers and know best how to multiply goodness out of whatever you entrust to them—continued to attract not just casual givers but those with the kind of billions that, if effectively divested, might also effectively divest them of the guilt of obtaining those billions in the first place. Well, up until the scam. But as the most committed EAs continue adjusting to last fall’s news of SBF and company’s quite self-consciously gleeful wire fraud, there is still a sense—within the community, at least, if not without—that the basic principles are good, even if one allows that some tweaks should be made to the mathematics of whom to trust. As John Stuart Mill notes, the point of utilitarianism isn’t to become a hero but to avoid the necessity of ever being one; and so, whatever we might think of EA’s money-laundering potential, that potential is less important than the core of its emotional appeal: that it can launder the angst of those with cash in hand.
TO ME, IT SEEMS THAT EFFECTIVE ALTRUISM has a math problem in the same way that classical utilitarianism does. The problem is: Math is cool. It is learnable and knowable, and from fairly early stages in human development. Therefore, if someone tells us that the solution to our longing to be good is simply to run the numbers well enough, in order to maximize the good (fun calculus metaphor!), then it looks like there is an easy and even sexy solution to the hole in your heart—sexy, that is, if you happen to be particularly good at math. (I’m haunted by the knowledge that a “common social overture” among EAs is “to tell someone that her numbers [are] wrong.”)
MacAskill founded an entire nonprofit on the idea that you will better maximize the amount of good you do with your life by becoming a finance bro instead of a doctor, because the larger amount you stand to earn in the former career can be used to fund good works that far outweigh the number of people you would help merely by performing good works with your hands in the latter role. (Of course, this conveniently forgets that the habits and mindset of the finance bro probably do not furnish the most trustworthy persona for knowing what is good for people.)
But here’s the problem: If your image of enacted goodness is entrusting resources to the person who is best at “running the numbers,” you’re assuming that person knows for sure what good to pursue with those resources. But it is impossible to simply “run the numbers” without being aware—or perhaps suppressing one’s awareness—that there are unanswered questions left hanging, somewhere. As a contributor to the official EA forum described this exact problem shortly before FTX imploded, “EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA.”
In What We Owe the Future, MacAskill imagines small communities devoted first to discerning the good in order to produce scripts for artificially intelligent number-crunching; in January, ten anonymous EAs put together a 20,000-word document laying out a devastating critique of the movement’s epistemic narrowness—that is, its tight strictures on what counts as a good argument, goal, or piece of evidence—as well as criticisms of the billionaire-friendly, consolidated power structure of its community. As with Socrates’ philosopher-kings, who were perforce all Platonists to a man, one gets the sense that insight might escape even the most devoted calculators; the authors of the critique chose to remain anonymous because otherwise they might lose grant money, something that has befallen non-anonymous critics within EA.
The thing is, a moral claim like “forecasted changes in quality-adjusted life-years provide an objective measure of right action” sounds good because we are used to giving our trust to math and what is measurable by it; “maximization” sounds good because if one number is good then two are better, ad infinitum. This is why the QALY sounds less fictional to us than more amorphous claims to goodness. To be sure, it offers more plausibly imagined numbers than people toss out when speculating about how many humans there will be in a million years. But the addition of mathematics to a problem does not make it more practically addressable. Rather, an invented or stipulated unit like the QALY is a conceptual apparatus that makes something theoretical appear practical, where our sense of what is “effective” is simply a better and shinier theory; and this is how the basic deception is practiced.
Unfortunately, there are fates worse than death, and goods that sit beyond merely staying alive longer. And almost every human good there is beyond the mere accumulation of healthy days or years—for instance, the goods of justice, love, truth, and compassion—is not amenable to numbers, let alone predictable by dint of them. For a previous generation, the appeal of utilitarian comfort rested in part upon the conviction that human life was real while things like justice or dignity, not visible to the naked eye, were not. But this is not a supposition that we can afford these days. And the question of what happens to the life of the human you’ve saved—before and after they don’t die from malaria, if indeed they don’t happen to die from anything else—is not something we can just leave alone. If you send a mosquito net to someone whose primary threats to healthy living include political oppression and genocide, something more is going on there than has been dreamt of in your philosophy.
Worst of all, it turns out that even the small but seemingly measurable benefit of something like the mosquito net is neither measurable nor predictable: In many cases of EA-directed philanthropy, the donated nets were used for fishing and not malaria prevention, which led to overfishing and the risk of entire communities being starved. The problem is not just that goodness is basically unmeasurable; it’s that the “consequences” of our actions, in all their fractal, ever-changing ramifications, aren’t measurable either, to say nothing of whether they can be foretold. Of course, at this point in the argument, an EA might respond with “Well, maybe; but what if consequences are measurable enough that enough goodness still becomes practically manifested?” But this is missing the point. Human rationality itself is unpredictable; and this is no less true of what turns out to be good, to say nothing of goodness itself.
Finally, rough-and-ready charts and graphs cannot displace bad intentions; often they abet them. The more we try to put numbers at the center of moral life, the more we will ignore the character of the human doing the math. And while the language of virtue ethics is not fundamentally less co-optable by charlatans than that of utilitarianism, there is a peculiar frustration at letting yourself get scammed by a moral philosophy that insists that consequences are more worthwhile, as subjects of intellectual and moral exploration, than the sheer variety of human intentions.
But what makes me, personally, the most angry at effective altruism is not just that it diverts our desire for effectiveness and transforms it via ever-zanier mathematics into something laughably impractical. It’s that its narrow intellectual vision has remained standing on the scene, ignoring its own past scams and turmoil, and enjoys our generalized regard still as the best example of what is morally practical, such that serious people still feel an obligation to take it seriously. This is because they don’t seem to know where else to turn for something better—or, for that matter, to whom.
SCATTERED THROUGHOUT THE VARIOUS PROFILES of Bankman-Fried and MacAskill over the last few years are paeans to their personal asceticism, another of the movement’s features that evaporated in the turn away from early EA’s global-poverty-alleviation goals. There’s a pathos in our hope for the arrival of a person on the scene whose combination of personal material-purity and amorphous goodwill for humanity signals that they are the hero of our age, someone to trust, someone to admire. It strikes me as no coincidence that EA, which takes so many of its cues from Pythagoreanism, reconstructs that philosophy’s ancient hope that a faith in numbers can be balanced enough by asceticism that no major evils will find their way past the door.
Authentic elective poverty, like that of Dorothy Day, is impressive, even stirring. But any asceticism you can temporarily suspend does not grant you its moral substance; give away all the money you want, but if you can still drop everything and go skiing precisely whenever, what you’re practicing is not asceticism, not really. It’s an insult to the poor with whom you are affecting solidarity, as they can afford neither food nor skis. And while Sam Bankman-Fried’s affectations of poverty aesthetics were always more camp than substance (he was known for sleeping on a glassed-in bean bag, with some visitors allowed to watch him awaken), a common move in the aftermath of his betrayal of the community is to return to Will MacAskill, and to see him as a similarly misguided person, yes—but one who was, at his core, well, altruistic, and so still an exemplar.
I don’t know MacAskill personally, so I can’t pronounce real judgment on his character. But the fanaticism that Gideon Lewis-Kraus, MacAskill’s remarkably conscientious profiler for the New Yorker, sees poking through in MacAskill’s habits and moral temptations, combined with his position of moral and financial influence over an emotionally close-knit community of do-gooders who now require someone to love more than ever before, does worry me. Latent fanaticism is present in any kind of morality that prioritizes “more” good over the courageous challenge of doing the good present at hand, that denigrates the beautifully good in its pursuit of the unearthliness of more. And any morality that prioritizes the distant, whether the distant poor or the distant future, is a theoretical-fanaticism, one that cares more about the coherence of its own ultimate intellectual triumph—and not getting its hands dirty—than about the fate of human beings: muddy, hairy, smelly, messy people who, unlike figures in a ledger or participants in a seminar, will not thank you for handing them a dollar, and indeed, might be likelier to cuss you out for it, for the perfectly reasonable sake of their own dignity.
Being fair and just to MacAskill and the still-grant-dispersing EA community doesn’t mean we have to search out a yet-uncut thread of quixotic moral exemplariness in them. The assumption that there must remain something praiseworthy in EA flips us into a bizarro-Kantianism wherein we long for a holy and foolish person who fails in everything consequential yet whose goodwill, as Kant put it, shines like a jewel. In fact, the desire to admire EA despite its flaws indulges a quixotic longing to admire an ineffective altruist. Do not be deceived. Instead, let us face the comedy and tragedy of what goes wrong when your philosophical principles are even just a little bit off, but you let fly on the trajectory anyway. We’ve seen the results. Let us, too, feel not admiration but pity, and fear.
As Simone de Beauvoir puts it, we need to be on our guard any time a philosophical-moral stance signals its willingness to count the human lives who stare us in the face as nothing. Chillingly, it’s the inhumanity of depending on the consequence at all costs—backed by a wrongheaded faith in the goodness of one’s chosen project and so placing project and principle above real human lives—that appears “serious” to us, Beauvoir observes, and therefore good and worthy. This, she argues, is the essence of fanaticism, and it is anything, anything but good.
Terrible, says Brecht, is the temptation to do good. We just can’t quit it, and that’s not a bad thing. It’s just also that we’re easily seduced by the rhetoric of “cold logic,” and we struggle to reason past it to the messily human. Often the people who look like trustworthy guides to this terrain simply are not. But we can, at the least, make up our minds not to confuse numbers with judgment, or the desire for implementable rules—our longing for the lightness of moral procedural generation—with the necessity of making a moral call on a case-by-case basis. For that matter, we should test the assumption that a wire transfer will lead to more goodness against what happens when you accidentally touch someone’s hand when you give them a dollar. Like Brecht’s heroine in The Caucasian Chalk Circle, at some point in our lives, we’ll sit staring at someone else’s child for hours, destroyed by the weight of how much it will personally cost us to pick it up. The cost won’t be in dollars, but in the substance and direction of our lives.
You have to pick up the child yourself. It’s this and this only that will satisfy the terror in your heart.