The good delusion: has effective altruism broken bad?

A group of young idealists wanted to live the most ethical lives possible. Now some wonder whether the movement they joined has lost its moral compass

By Linda Kinstler

In June 2017, Stern, a liberal German magazine, published an article, “Why your banker can save more lives than your doctor”, introducing readers to a social movement called effective altruism. The piece was about a 22-year-old called Carla Zoe Cremer, who had grown up in a left-wing family on a farm near Marburg in the west of Germany, where she had taken care of sick horses.

The story told of an “old Zoe” and a new one. The old Zoe sold fair-trade coffee and donated the profit to charity. She ran an anti-drug programme at school and believed that small donations and acts of generosity could change lives. The new Zoe was directing her efforts to activities that were, in her view, more effective ways of helping.

Cremer discovered effective altruism through a friend who was at Oxford University. He told her about a community of practical ethicists who claimed to combine “empathy with evidence” in order to “build a better world”. Using mathematics, these effective altruists, or EAs as many called themselves, sought to reduce complex ethical choices to a series of cost-benefit equations. Cremer found this philosophy compelling. “It really suited my character at the time to try to think about effectiveness and rigour in everyday life,” she told me. She began attending EA get-togethers in Munich and eventually became a public face of the movement in Germany.

Following the guidance of Peter Singer, a philosopher who has inspired many effective altruists, Cremer pledged to donate 10% of her annual income to good causes for the rest of her life – a commitment that would make a greater difference than selling coffee beans. As she considered her next job, she was directed towards the movement’s careers arm, 80,000 Hours – a reference to the amount of time that the average person spends at work during their life.

Around the same time, a maths and computer-science undergraduate at the University of British Columbia called Ben Chugg was also discovering effective altruism. He enjoyed volunteering and was passionate about alleviating global poverty. When he graduated in 2018 he wanted to pursue a career that would be both ethically and intellectually fulfilling. He came across 80,000 Hours and appreciated that effective altruism offered clear principles for evaluating the impact of his volunteering and for deciding what to do with his life. He began reading the works of Singer and William MacAskill, a young Oxford philosopher credited with co-founding effective altruism. Chugg decided to apply to Oxford – the movement’s “epicentre”, as he called it – to do a master’s degree in maths.

Oxford provides an academic home for the people who built the intellectual scaffolding of the effective-altruism movement. Chugg joined the university’s effective-altruism club, attending workshops to tease out the movement’s philosophy. He met young, ambitious and empathetic people who wanted to combat factory farming, climate change and infectious disease. Unlike participants in other student societies, the effective altruists saw their shared interests as obligations rather than hobbies. To be a true EA meant going vegan, or at least vegetarian; it meant promising to give away money, if not immediately then in the future; it meant immersing yourself in long podcasts on esoteric questions of moral philosophy.

Cremer, too, soon found herself at Oxford. In 2018 she was invited to interview for a trading job with Alameda Research, a new cryptocurrency firm run by a young EA called Sam Bankman-Fried. She was flown to Oxford and spent a day trading with fellow interviewees and wondering why no one asked her any questions about herself.

Instead of joining the firm she returned to her studies, and eventually became a research scholar at the university’s Future of Humanity Institute, which shares office space with two other research centres affiliated with effective altruism. She began working on two topics of great interest to the movement: artificial intelligence and existential risk.

The Oxford branch of effective altruism sits at the heart of an intricate, lavishly funded network of institutions that have attracted some of Silicon Valley’s richest individuals. The movement’s circle of sympathisers has included tech billionaires such as Elon Musk, Peter Thiel and Dustin Moskovitz, one of the founders of Facebook, and public intellectuals like the psychologist Steven Pinker and Singer, one of the world’s most prominent moral philosophers. Billionaires like Moskovitz fund the academics and their institutes, and the academics advise governments, security agencies and blue-chip companies on how to be good. The 80,000 Hours recruitment site, which features jobs at Google, Microsoft, Britain’s Cabinet Office, the European Union and the United Nations, encourages effective altruists to seek influential roles near the seats of power.

Last week FTX, a cryptocurrency exchange founded by Bankman-Fried, filed for bankruptcy. It emerged that FTX had lent client funds worth billions of dollars to Alameda Research, the firm which invited Cremer for an interview, and that some of FTX’s clients were unable to retrieve that money. Assets worth up to $2bn were missing, according to Reuters.

The collapse of FTX has been disastrous for crypto’s reputation. But it is also a massive blow to effective altruism. Bankman-Fried had pledged to give away most of his wealth, which at one point Forbes estimated at more than $26bn. (Bankman-Fried did not respond to interview requests for this article.) He had funnelled over $130m into the movement in 2022 alone via the FTX Future Fund, a charity that provides grants to projects aiming to secure humanity’s long-term future. Some of effective altruism’s leading figures, including MacAskill, sat on the fund’s team – they resigned en masse after FTX imploded, saying they were “shocked and immensely saddened”. “To the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behaviour in the strongest possible terms,” they wrote. But the fall of Bankman-Fried raises the question of whether his belief that he was doing the right thing came to justify reckless professional conduct.

Well before the recent fiasco, a number of effective altruists had begun to cast doubt on the direction of the movement and its tightly knit core of benefactors and leaders. Many, like Chugg and Cremer, felt drawn to the effective-altruism community at Oxford. Chugg told me that EAs were “the kindest, nerdiest people you’ll ever meet”. But over time both noticed that the movement’s focus was starting to shift. On the list of research areas that 80,000 Hours ranks by impact, ending factory farming and combating climate change had been downgraded, as had improving health care in poor countries.

The community now encouraged students to pursue work in fields deemed to be “highest priority”. Two of these were speculative: “positively shaping the development of artificial intelligence” and “reducing global catastrophic biological risks”. Another two, “building effective altruism” and “global priorities research”, seemed self-serving. An emerging philosophy called “longtermism” – the idea that the far future should be given at least as much weight as the present in moral and political decision-making – stood behind the change.

To Chugg, effective altruism’s new priorities felt morally iffy, and far from the issues that had first attracted him. Cremer thought the community was becoming increasingly undemocratic and secretive. Chugg began to look into the mathematical justifications for longtermism; Cremer started untangling effective altruism’s claims to predict the risks from advanced artificial intelligence. Both worried that the movement had taken a wrong turn and took it upon themselves to understand what had happened.

The seeds of effective altruism were first planted in St Edmund Hall, one of the colleges that make up Oxford University. The gardens were once a burial ground and a handful of gravestones still protrude from the lawn, though the inscriptions wore away long ago. In 2009 William MacAskill, then a philosophy undergraduate, asked Toby Ord, a junior research fellow, to meet him there. When MacAskill recounts this foundational moment he always mentions that it occurred in a graveyard. The setting foreshadowed a central tenet of their shared mission: cultivating what Ord calls a “strong cosmopolitanism” not just between peoples and countries, but between the dead, the living and the yet-to-be-born.

A central principle of effective altruism is that all individuals are ascribed equal value, regardless of space or time. A human life in Britain is worth exactly the same as a human life in Yemen; a life now is worth just as much as a life in the past or future. After reading Singer, MacAskill had become “terribly concerned by the problem of extreme poverty”. Though Singer is an atheist, he was inspired by the practice of tithing advocated by many religions and suggested that we should all have a “minimum ethical standard of giving”. (In “The Life You Can Save”, published in 2009, he proposes a sliding scale, ranging from 1% of gross income for people making $40,000-81,000 a year, minus certain deductions such as student-loan repayments and pension contributions, to 50% for people earning more than $53m a year.)

MacAskill wondered how such a tithe would work in practice. “I’d read on Toby’s website that he was giving ‘the required amount’, but I was very sceptical about whether he really was,” he later wrote. Yet Ord was no phoney: he gives away at least 10% of his annual earnings, and as of 2020 his donations had amounted to more than £125,000 ($147,000). (His largest donations have gone to deworming and anti-malaria foundations.)

The two men talked in the graveyard for hours, ranging over unusual terrain for moral philosophers: how to apply their theoretical ideas to the real world. Later that year, they teamed up to launch a non-profit organisation, Giving What We Can, encouraging people to donate at least 10% of their earnings to “whichever organisations can most effectively use it to improve the lives of others”. They settled on the name “effective altruism” to describe their project because it seemed to capture their stated goal. Two years after their initial meeting, MacAskill and Ord founded the Centre for Effective Altruism, an umbrella organisation for the community; Giving What We Can was soon folded into its portfolio.

MacAskill is now 35, with dark-rimmed glasses, tousled boy-band hair and a thick Scottish accent. He became an associate professor at Oxford at the age of 28 and taught an introductory lecture course on utilitarianism, the ethical theory that underwrites effective altruism. According to utilitarian thinking, the consequences of our actions are the sole measure by which good and bad are determined, so we are morally required to pursue goals that promote the most good in the world.

Over the past decade MacAskill has explained how to go about this to wealthy individuals, elite undergraduates, large companies and government officials around the world. He has offered a series of answers: do give to charities, but only the most effective ones; by all means look after friends and neighbours, but know that you aren’t making effective use of your time, because you could be helping others in greater need; don’t waste precious hours reading the news because, as he said in 2018, “Every day, newspapers lie to you by telling you, ‘This is what is the most important of what is going on right now.’” If he were to publish his own paper, he would call it the Reality Times, he said. The headlines would always be the same: 5,000 children died of malaria, 10,000 nuclear warheads are poised to fire, 100m animals were needlessly killed and tortured. Who would waste time reading about politics while standing amid such carnage?

Goodness can be quantified, argued MacAskill in “Doing Good Better: How Effective Altruism Can Help You Make a Difference”, a book published in 2015. He demonstrated how utilitarianism can help people reach decisions and adapted a measure economists normally use to calculate the benefits of health treatments such as alleviating back pain or life-saving surgery: the “quality-adjusted life year”, or QALY. One QALY is equivalent to a year lived in perfect health; fractional QALYs are ascribed to people living in pain and ill health. The greater the suffering, the lower the value of the QALY.

MacAskill applies a similar measure to the emotional consequences of our experiences, a metric he calls a WALY, or well-being-adjusted life year. (MacAskill doesn’t seriously question whether attempting to quantify an experience might miss something essential about its nature. How many WALYs for a broken heart?) The Centre for Effective Altruism used these kinds of calculations to measure how useful charitable causes might be.

Effective altruists want to achieve more than simply doing good: to waste time doing anything less than the maximal good is implicitly to cause suffering, they say, because you’ve alleviated less pain than you could have done. As MacAskill writes, you should constantly ask yourself, “Of all the ways in which we could make the world a better place, which will do the most good?” On Facebook, EAs often run decisions by each other. In my local group in Washington, DC, one woman wanted to help resettle Afghan refugees but worried that this might not be the most effective use of her time.

The commitment to do the most good can lead effective altruists to pursue goals that feel counterintuitive. In “Doing Good Better”, MacAskill laments his time working as a care assistant in a nursing home in his youth. He believes that someone else would have needed the money more and would have probably done a better job. When I asked about this over email, he wrote: “I certainly don’t regret working there; it was one of the more formative experiences of my life…My mind often returns there when I think about the suffering in the world.” But, according to the core values of effective altruism, improving your own moral sensibility can be a misallocation of resources, no matter how personally enriching this can be.

Since an individual’s capacity for doing good in the world is severely limited, adherents of effective altruism have often been advised to earn as much as possible to support the good deeds of others. A doctor working at a hospital in Africa might contribute 300 QALYs each year, according to MacAskill. But if he established a well-paid private practice in Britain, he would be able to “earn to give”: he’d save “considerably more” lives and still be comfortable.

This reasoning inspired hundreds of benevolent people, including Bankman-Fried, to pursue high-paying careers. The promise of absolution makes effective altruism particularly attractive. You can be blessed as one of the elect by making a great fortune, so long as you keep giving – just like the pre-Reformation Catholic church, which accepted indulgences from its adherents in return for the forgiveness of their sins.

Effective altruism is not a cult. As one EA advised his peers in a forum post about how to talk to journalists: “Don’t ever say ‘People sometimes think EA is a cult, but it’s not.’ If you say something like that, the journalist will likely think this is a catchy line and print it in the article. This will give readers the impression that EA is not quite a cult, but perhaps almost.”

Though it may not be a cult, effective altruism is a kind of church – one that has become increasingly centralised and controlled over time. Several scholars and practitioners have suggested as much, including contributors to a recent book on effective altruism and religion. One of them asks if “EA in some sense [may] be seen as a quasi-religious movement itself, considering how comprehensively life-orienting it is”.

Over the past two years, I’ve heard many stories of young, ambitious people who came to effective altruism wanting to change the world but grew disenchanted. Many people I spoke to didn’t want to be identified, concerned that the community might retaliate by reducing funding or offering fewer professional opportunities. A spokesman for the Centre for Effective Altruism denied this and said that “collaboration and constructive dialogue are essential to free-thinking, a core value of effective altruism”.

The disillusionment stemmed partly from how much effort the community expends raising money for the proliferating institutes and think-tanks that host its most prominent thinkers. Open Philanthropy, a foundation that Dustin Moskovitz helped to create, funds 80,000 Hours (over $10m since 2017), the Future of Humanity Institute ($7.6m since 2017), the Centre for Effective Altruism (over $35m since 2017), the Effective Altruism Foundation ($1.4m since 2019) and the Global Priorities Institute ($12m since 2018). Effective altruists are themselves encouraged to give directly to the Centre for Effective Altruism, 80,000 Hours and related institutes. The FTX Foundation, the philanthropic arm of Bankman-Fried’s cryptocurrency exchange, lists the Centre for Effective Altruism as one of its grantees and partners.

The circularity of the effective-altruism funding network has accelerated the homogenisation of the community’s culture. Many effective altruists are highly educated white men with degrees from some combination of Oxford, Cambridge, Harvard, Stanford and Yale. The movement supports “campus specialists” who spread the gospel among undergraduates. A survey of over 2,500 EAs in 2019 found that most were aged 25 to 34; over 70% of them were male and more than 85% white. The majority were left-leaning and identified as agnostic, atheist or non-religious. Almost all had or were in the process of acquiring a degree.

As the community has expanded, it has also become more exclusive. Conferences, seminars and even picnics held by the Centre for Effective Altruism are application-only. Simon Jenkins was an early member of the community and founded an effective-altruism group in Birmingham in Britain. He has since drifted somewhat away from the movement, after years of failing to get a job at its related institutions. It has become both more “rigorously controlled”, he said, and more explicitly elitist. During an event at a Birmingham pub he once heard someone announce that “any Oxbridge grad can get involved”. “I was like, hold on a sec, is that the standard?”

The logic of maximisation leads adherents to reckon that the “best” universities provide the best education and therefore produce the most effective thinkers. The community of fearless philosophers is not blind to its own insularity – but it does seem to be largely unconcerned by it.

One idea has taken particular hold among effective altruists: longtermism. In 2005 Nick Bostrom, a Swedish philosopher, took to the stage at a TED conference in a rumpled, loose-fitting beige suit. In a loud staccato voice he told his audience that death was an “economically enormously wasteful” phenomenon. According to four studies, including one of his own, there was a “substantial risk” that humankind wouldn’t survive the next century, he said. He claimed that reducing the probability of an existential risk occurring within a generation by even 1% would be equivalent to saving 60m lives – in expected-value terms, 1% of the roughly 6bn people then alive.

The knock-on effects were magnified if you took a rather longer view of 100m years, Bostrom continued. Provided we are capable of colonising the rest of our galaxy and nearby ones, a 1% reduction in the risk of extinction on such a time horizon would be the equivalent of saving 10³² lives. From such a perspective, nothing else matters.

That year, Bostrom founded the Future of Humanity Institute at Oxford, dedicated to mitigating risk. He elaborated on his theory of longtermism in a book in 2008, urging people to be “good ancestors” by practising “altruism toward our descendants”. He didn’t mean composting or stopping driving, but mitigating existential risks to humanity – “x-risks” in community parlance – from hazards, such as advanced artificial intelligence and bio-hacking, that could potentially wipe out or severely deplete humankind.

Over the past two decades, existential risk has emerged as a new academic field: Oxford, Cambridge, Berkeley and Stanford all host institutes devoted to its study. This research assumes that the continued existence of humanity has never been so uncertain and that we have created conditions that could easily cause our own demise. The biggest risks, Bostrom argued in 2012, come from technological advances that allow us to manipulate ourselves and our environment with unforeseen consequences: the probability of such disasters occurring is unquantifiable, the potential consequences devastating.

Many longtermists reckon that the antidote to the threat from technology is more technology; mastering artificial intelligence will forestall a malevolent AI from enslaving us. They also believe that technological acceleration is morally good in its own right, as it enables the universe to sustain more people. Bostrom laments the lives lost every second that technological advances are delayed. In a paper from 2003 he invites readers to imagine all the “unused energy…being flushed down black holes”, and all the suns beyond our own that are “illuminating and heating empty rooms”, because we lack the means to populate the planets orbiting them. (Bostrom calculates that 10²⁹ potential lives are lost for every second that we fail to colonise the supercluster of galaxies containing the Milky Way.)

Critics of longtermism say that the outlook almost exclusively concerns what Karin Kuhlemann, a lawyer and population ethicist at University College, London, labels “sexy global catastrophic risks”, such as asteroids, nuclear disaster and malicious AIs. Effective altruists are less bothered by “unsexy” risks like climate change, topsoil degradation and erosion, loss of biodiversity, overfishing, freshwater scarcity, mass un- or under-employment and economic instability. These problems have no obvious culprit and require collective action. (Effective altruists claim that they care about these issues, but that long-term risks are insufficiently studied, given how devastating they are likely to be.)

Disillusioned effective altruists are dismayed by the increasing predominance of “strong longtermism”. Strong longtermists argue that since the potential population of the future dwarfs that of the present, our moral obligations to the current generation are insignificant compared with all those yet to come. By this logic, the most important thing any of us can do is to stop world-shattering events from occurring.

According to Benjamin Todd, a founder of 80,000 Hours, longtermism “might well turn out to be one of the most important discoveries of effective altruism so far”. Yet critics say this conclusion is callous, because it openly proclaims that the neediest people on the planet matter vastly less than people who have not yet been born.

Projects devoted to global health and poverty still garner the most funding, but their share of the total allocation is shrinking as long-term-risk research attracts more cash. EA Funds, the philanthropic wing of the movement, launched a Long-Term Future Fund in 2017 to support research on existential risks and has distributed over $10m to date. (One person told me that it isn’t hard to get funding if you “just talk about what can go wrong, and reference the right people”.) The FTX Future Fund is explicitly dedicated to advancing longtermist causes.

In 2019 Bostrom once again took to the TED stage to explain “how civilisation could destroy itself” by creating unharnessed machine super-intelligence, uncontrolled nuclear weapons and genetically modified pathogens. To mitigate these risks and “stabilise the world”, “preventive policing” might be deployed to thwart malign individuals before they could act. “This would require ubiquitous surveillance. Everyone would be monitored all of the time,” Bostrom said. Chris Anderson, head of TED, cut in: “You know that mass surveillance is not a very popular term right now?” The crowd laughed, but Bostrom didn’t look like he was joking.

Such single-minded reasoning is evident throughout the effective-altruism world. The 80,000 Hours institute now advises students to pursue careers that may benefit the long-term future, such as AI safety and biomedical research. Some influential EAs have begun to participate in politics. Earlier this year, Bankman-Fried donated $10m to the congressional campaign of Carrick Flynn, an EA and former research associate at the Future of Humanity Institute, who was recently trounced in Oregon’s Democratic primary contest. One devotee reflected on the loss online: “Could that $10m wasted on Flynn have been better used in just trying to get EA or longtermist bureaucrats in the Centers for Disease Control and Prevention or other important decision-making institutions?”

Nick Beckstead, the chief executive of the FTX Foundation, wrote in a PhD dissertation completed in 2013 that “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Why? The former has the potential to create more long-term value and therefore save more lives. (Beckstead did not respond to an interview request. He resigned from the FTX Foundation last week.) On his personal blog, Holden Karnofsky, chief executive of Open Philanthropy, has compared effective-altruist reasoning to avant-garde jazz – appreciated by the cognoscenti but a cacophony to untrained ears.

Not everyone agrees. Emile Torres, an outspoken critic of effective altruism, regards longtermism as “one of the most dangerous secular ideologies in the world today”. Torres, who studies existential risk and uses the pronoun “they”, joined “the community” in around 2015. “I was very enamoured with effective altruism at first. Who doesn’t want to do the most good?” they told me.

But Torres grew increasingly concerned by the narrow interpretation of longtermism, though they understood the appeal of its “sexiness”. In a recent article, Torres wrote that if longtermism “sounds appalling, it’s because it is appalling”. When they announced plans on Facebook to participate in a documentary on existential risk, the Centre for Effective Altruism immediately sent them a set of talking points.

This is far from the only attempt by leaders of the movement to act as though they were running a public-relations campaign rather than conducting philosophical inquiry. Jenkins told me about a post he wrote on the community’s Facebook forum, questioning whether bringing people out of poverty might inadvertently increase animal suffering because those on higher incomes could afford to eat more meat. He soon received a phone call from someone at the Centre for Effective Altruism informing him that his post had been deleted. Other people I spoke to reported that they’d been asked by individuals affiliated to the Centre for Effective Altruism not to publish articles or posts that might reflect negatively on the community. A spokesman for the Centre said that it “respects diversity of thought and encourages debate and criticism”.

Effective altruism treats public engagement as yet another dire risk. Bostrom has written about “information hazards” when talking about instructions for assembling lethal weaponry, but some effective altruists now use such parlance to connote bad press. EAs speak of avoiding “reputational risks” to their movement and of making sure their “optics” are good. In its annual report in 2020, the Centre for Effective Altruism logged all 137 “PR cases” it handled that year: “We learned about 78% of interviews before they took place. The earlier we learn of an interview, the more proactive help we can give on mitigating risks.” It also noted the PR team’s progress in monitoring “risky actors”: not people whose activities might increase the existential risks to humanity, but those who might harm the movement’s standing.

When Cremer was doing research at the Future of Humanity Institute in 2019, she started to worry that the movement’s insularity was crippling its ability to help people. She talked to friends in the movement, many of whom turned out to share her concern that secrecy and deference to hierarchy within the community would lead to groupthink. Yet most were unwilling to say so publicly. So Cremer tried to pose the question to the community in the gentlest way possible: a post on its online forum.

Cremer hoped her article, published in July 2020, would lead EAs to reflect. “I wanted to give them a chance,” she said. Her post quickly became one of the most-read pieces on the forum that year. But nothing changed. By then, Cremer had stopped calling herself an EA.

Chugg, for his part, also had his confidence in effective altruism fatally shaken in the aftermath of a working paper on strong longtermism, published by Hilary Greaves and MacAskill in 2019. In 2021 an updated version of the essay revised down their estimate of the future human population by several orders of magnitude. To Chugg, this underscored the fact that their estimates had always been arbitrary. “Just as the astrologer promises us that ‘struggle is in our future’ and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counter argument mute,” he wrote in a post on the Effective Altruism forum. This matters, Chugg told me, because “You’re starting to pull numbers out of hats, and comparing them to saving living kids from malaria.”

For help investigating the maths deployed by longtermists, Chugg turned to Vaden Masrani, a friend studying machine-learning at the University of British Columbia. Masrani, who is not an effective altruist, concluded that the calculations had little basis. He noted the extent to which devotees on the Effective Altruism forum had already adopted longtermism: “They are taught to trust equations over their moral intuitions. It’s sociopathic.” Philosophers of longtermism were using mathematical equations as a rhetorical ploy, he said, to leave readers “stunned, confused, and bewildered”.

Masrani’s calculations helped convince Chugg that effective altruism had drifted far from the values that attracted him five years earlier: it seemed to ignore reason, not deploy it. “As long as you give me a bigger number than I gave someone yesterday,” he said, “you can convince me that an alien invasion is the biggest thing we should worry about, and tomorrow it is AI, and the day after that it’ll be the depletion of some natural resource.”

Effective altruists believe that they will save humanity. In a poem published on his personal website, Bostrom imagines himself and his colleagues as superheroes, preventing future disasters: “Daytime a tweedy don/ at dark a superhero/ flying off into the night/ cape a-fluttering/ to intercept villains and stop catastrophes.”

MacAskill has likened effective altruists to trauma surgeons, triaging the claims of people in need. Yet these philosophers and their followers sometimes seem to have more in common with the forensic investigators or insurance assessors who turn up at the site of a plane crash to place a price on the dead and injured. The difference, of course, is that, rather than assess actual disasters, the philosophers conjure imaginary crises and pre-emptively calculate whom we should mourn, whom we might yet save and what sacrifices we ought to make.

Effective altruists are trying to embed their ideas in policymaking and defence circles on both sides of the Atlantic. Bostrom has acted as a consultant to the CIA, the European Commission and the President’s Council on Bioethics in America. Toby Ord has advised the British prime minister and the World Health Organisation. He recently worked with the United Nations on global catastrophic risks and future generations.

Longtermism is here to stay, though its parameters are shifting. Ord, MacAskill and Greaves told me frankly that they were still working out its implications. New ideas are being developed all the time. In a recent paper, MacAskill suggested creating “permanent citizens’ assemblies with an explicit mandate to represent the interests of future generations”. His latest book, “What We Owe the Future”, aims to introduce longtermism to a general audience. In it, he argues that individuals can help secure the far future by making “particularly high-impact” decisions, like donating to “effective” non-profits, being politically active, “spreading good ideas” and “having children”. To Amy Berg, a professor of philosophy at Oberlin College in Ohio who has studied effective altruism and its affiliate movements, MacAskill’s vision seemed “morally inert”: it showed how far the movement had shifted from its original ethos of doing the most good towards championing highly speculative and costly research. “What I can do as a person has really changed to ‘Here’s how people with a lot of money can get involved,’” Berg said.

Yet there are signs that such ideas are being questioned at a high level. In an email, MacAskill told me he believes that “Longtermism is the view that positively impacting the long-run future should be one of the priorities of our time.” He continues to be “much less sure about ‘strong’ longtermism”, and noted that people often misunderstood its implications. “For instance, some have argued that strong longtermism justifies committing harm, which is simply not true.”

Greaves, for her part, told me in 2021 that she wasn’t sure whether longtermism really applied to people’s everyday lives. Despite researching how we can best affect the future, she herself gives money to global-health charities dedicated to ameliorating the present. “I’m not really capable of ignoring all charities that have contact with my life,” she said. “If you’re asking, do I inhabit that middle ground in practical terms, the answer is yes. If you’re asking, do I think it’s the correct thing to do as opposed to just some kind of incoherent mess that I’ve ended up in…then, I’m not sure.”

Most of us experience life as an incoherent mess. Effective altruists have tried to clarify our obligations, and in doing so have entertained a series of increasingly radical positions. The changes in direction by the movement’s most prominent figures are not always absorbed by their followers. “It is fascinating to me that members of effective altruism seem more committed to certain claims about morality than their leaders are, or want to say they are publicly,” Berg said.

Chugg has disengaged from the movement since discussing his misgivings on the EA forum, and has become suspicious of joining any kind of organisation, because “Once you identify as part of a group, you’re much less likely to see its flaws.” But he hasn’t entirely left effective altruism’s orbit: he still donates to and volunteers with some branches. He believes that effective altruists were trying to do good by thinking about longtermism, even if their conclusions have had deleterious effects.

Cremer, too, continues to ponder the questions posed by effective altruism, though she no longer thinks the movement will provide the right answers. She is now pursuing her doctorate at Oxford, with funding from the Future of Humanity Institute. In December 2021 she published a paper with Luke Kemp, a catastrophic-risk researcher at Cambridge University, proposing that effective altruists modify their approach to existential risk to help make the field more democratic, transparent and less self-referential. They pointed out that the study of existential risk is not politically neutral, and challenged the determinism of effective altruism’s “techno-utopian approach”, which, they wrote, presents “the stark choice between one of only two destinies – technological maturity or existential catastrophe – as a fait accompli”.

The paper went through 28 revisions and passed through the hands of over 20 readers before it finally came out. Cremer and Kemp were told that they and their institutions might lose funding because of it, and were advised not to publish at all. They were dismayed by how much scrutiny was applied to even their simplest arguments – such as the case for democratic debate – and to their concerns about the movement’s funding. “Having a handful of wealthy donors and their advisers dictate the evolution of an entire field is bad epistemics at best and corruption at worst,” they wrote.

Initially, they appeared to achieve their goal: MacAskill offered to talk to Cremer. She presented him with structural reforms they could make to the community. Among other things, Cremer wanted whistleblowers to have more protection and for there to be more transparency around funding and decisions about whom to invite to conferences. MacAskill responded that he wanted to support more “critical work”. Subsequently, the movement established a criticism contest. Yet when it came to specifics such as the mechanisms for raising and distributing money, he seemed to think the current process was sufficiently rigorous. MacAskill disputes this characterisation and told me he was in favour of “increasing donor diversity”.

Cremer had hoped MacAskill would take her suggestions seriously, but was left feeling that she’d had little effect. “I think he was a bit of my last hope,” she messaged me. The paper she wrote with Kemp had been her “parting gift” to effective altruism.

Last week Cremer watched as Alameda Research, the trading firm she might have worked for, imploded on the world stage. Bankman-Fried lost almost all his wealth, leaving many of the effective-altruism funds he supported unable to fulfil their grants.

Critics were quick to link Bankman-Fried’s championing of effective altruism to his gross financial miscalculation. “Effective altruism also encompasses an emphasis on ‘long-termism’, which can read like another excuse for mercenary corner-cutting today, so long as you commit your loot to improving tomorrow,” David Morris wrote on CoinDesk, a website that reports on digital currencies. Kemp said he felt some relief that FTX had imploded now and not later, when it would have done far more financial and political damage. “I hope this is a critical juncture that forces effective altruism to reform itself,” he said. “It should stop playing with fire, and pursuing vast amounts of money and power.”

Bankman-Fried once said that he’d got into cryptocurrency only to make money as quickly as possible and help finance the main goals of the effective-altruism movement. He was a proponent of the longtermist ethos, in which prediction and speculation are often indistinguishable and obligations to a probabilistic future outweigh those to the material present. Effective altruism is ultimately a gamble. Bankman-Fried placed his bet. For now, he has lost.

Linda Kinstler is a contributing writer for 1843 magazine and the author of “Come to this Court and Cry: How the Holocaust Ends”. She has previously written for 1843 magazine about the Aspen Institute and Belarusian exiles

Illustrations: Angelica Paez
