
Introduction

A decade ago, when effective altruism (EA) was focused primarily on global poverty and preventable diseases, resource allocation decisions were extremely difficult. Methods like randomized controlled trials (RCTs) were popularized to compare interventions, but they were far from perfect. Even with a singular standard—save the most lives, or quality-adjusted life years (QALYs), per dollar—EAs hotly debated which charities were most deserving of cash.

Today, many causes beyond global poverty are considered viable EA interventions. These interventions differ dramatically from one another. Some are near-term; some are long-term; some are still measured in QALYs; some are based on the probability of human extinction. This means that already difficult resource allocation decisions have become much, much harder. In fact, I think that in today’s EA landscape, making these resource allocation decisions requires abandoning some of the philosophical underpinnings of EA.

Revising the ethical backbone of a movement so deeply rooted in philosophy is scary and challenging. Most of you know all too well that Sam Bankman-Fried’s recent actions have prompted the beginnings of a reckoning within the EA community. Although the news about FTX is timely, this post has little to do with Bankman-Fried—in fact, I wrote it before FTX crashed. Instead of specifically reacting to that news, I seek to make the broader claim that long before crypto fortunes were bankrolling billion-dollar EA foundations, there was reason to change EA’s dogma.

In this post, I claim that the maximalist “do the most good” credo is no longer viable to sustain EA as a socio-political movement. Instead, I argue that “do a lot of good” is a more logical and more effective rallying cry for modern EAs.

 

Part 1: “Do a lot of good” is a logical rallying cry: The new quantification problem

The first part of this post is dedicated to showing that “do a lot of good” is logical and in line with EA’s overall mission. This line of reasoning is not new. Holden Karnofsky recently made the argument that the EA community is better off jettisoning the maximizing principle and embracing more moderation and pluralism.

Let’s dive in with a very old argument against maximization:

  1. Ought implies can.[1] Or, in other words, if an agent is morally required to perform some action, that action must be possible for the agent to perform.
  2. So, in order to “do the most good”, we must know what action contributes to the most good getting done.
  3. It is impossible to know what action contributes to the most good.
  4. Therefore, effective altruists should not always try to perform actions that do the most good, but should instead perform actions that they can be reasonably sure do a lot of good.

Premises 1-3 of this argument form the commonly articulated epistemic problem with consequentialism. The conclusion (4) is what I seek to establish in this section. However, before I get there, I’d like to rehash and expand on some evidence for premise 3.

Early critiques of EA’s global poverty strategy prompted skepticism that EA methodology could accurately determine which causes generate the most welfare per dollar. These critiques included problems with RCTs; the fact that EA ignored smaller, grassroots organizations and longitudinal data; and the argument that it is impossible to quantify human well-being. EAs, in large part at GiveWell, debated the legitimacy of these concerns. I do not need to rehash these debates to prove my point that deciding which cause(s) to focus on (tropical diseases or cash transfers or deworming or funding PSAs to go to the doctor) was extremely difficult, even with one, near-term, shared goal and decently trustworthy data.

Today, based on the content of this forum alone, it’s clear that much of the EA community has expanded its goals beyond ending extreme poverty and preventable death from tropical diseases. Increasingly, AI alignment, existential and/or extinction risk, U.S. foreign policy, poverty relief in the U.S., political campaigns, and many other domains have garnered serious attention from EA.

Discussion around these topics is markedly different from early EA debate, chiefly because it is less focused on measuring aggregate welfare increases per dollar. Faced with the decision to support, for example, work on extinction threats or animal welfare, you cannot perform a simple comparison of QALYs saved. There is no easy way to compare the aggregate welfare increase brought about by a marginal improvement in our asteroid deflection technology with that of saving 100,000,000 male chicks from being ground up at a factory farm. I came up with three main reasons why quantification is more difficult in today’s EA cause landscape:

  1. Disparate causes do not necessarily share common metrics.
  2. Longtermist outcomes are inherently difficult to measure.
  3. Institutional change involves ripple effects that elude both prediction and causal determination. 

These reasons are not mutually exclusive. Comparing disparate causes (reason 1) is often difficult because it involves comparing a long-term possibility to a near-term certainty (reason 2) or comparing direct relief to institutional change (reason 3). Regardless, I’d like to expand on each reason in turn.

Lack of common metrics:

Perhaps the best example of the lack of common metrics is one I have already mentioned: comparing animal welfare to human welfare. Our current understanding of psychology hardly allows us to measure what makes human lives go well. Comparing animal lives to human lives involves a level of understanding of psychology far beyond the limits of modern science. To get around this, EAs have proposed new frameworks to compare causes. For example, Holden Karnofsky suggests that we use three metrics—importance, tractability, and neglectedness—to prioritize issues. This framework is undeniably insightful and should be consulted as we make resource allocation decisions. However, the importance and tractability inputs are often impossible to compare across today’s EA priorities. Is pandemic preparedness more important than AI alignment? Similarly, is it more tractable to prepare ourselves for a lab-grown superbug or to ensure AI doesn’t turn against us? I’ve yet to learn of a framework that allows for comparison of efficacy across new EA causes.

Longtermism:

All of this gets even more complicated when one considers “longtermist” interventions. Consider again Karnofsky’s importance metric. Is 8°C of global warming more important than inequitable values being “locked in” for centuries? I don’t know. In What We Owe The Future, William MacAskill proposes we consider significance, persistence, and contingency as the three main factors for longtermist issues.[2] Although many EA thought leaders exude some epistemic confidence in determining the significance and contingency of longtermist issues, I’m less sure. Toby Ord sets the probability of an extinction-level engineered pathogen this century at 3%.[3] Even if this prediction were made with full omniscience of the present and past, I would hesitate to give it much credence. Today, we’ve identified 98% of extinction-threatening asteroids. If you had asked the world’s best astrophysicists in 1920 the chances that humanity would accomplish this feat within a century, I’d guess many of them might have said 0%.

Institutional reform:

The global poverty debate generally steered clear of institutional reforms (i.e. reforms that address the so-called “root causes” of poverty). This facet of EA has changed dramatically in the last few years. Today, most of EA’s preferred interventions are deeply institutional (e.g. changing the United States’ international aid priorities). This is a sensible shift in strategy, especially because much early criticism of EA was that it ignored the power of institutions.[4] However, embracing institutional change comes at a cost. Distributing bed nets, deworming children, and initiating cash transfers are fundamentally simple interventions. Success can be quantified in basic terms (# of nets, # of children treated, $ transferred). Success at lobbying the US government to change foreign aid policies cannot be quantified in the same way. Sure, changes in the amount and allocation of aid are plain to see, but it's next to impossible to determine the causal structure of those changes. Did my advocacy change the minds of the C-suite of Tyson Foods, or was it that NYT article last week? This inability to determine causation means that institutional interventions often elude quantification. In other words, in today’s EA landscape, many actors don’t know how much good they are doing or what alternative work could do more good.

There are two ways we might go about solving this newly-complicated quantification problem:

The weak (or thin) conclusion:

EAs ought not compare the effectiveness of interventions that do not share common goals (i.e. saving farm animals versus preventing human extinction) and instead allocate resources to the proven “best” intervention within a set of goals determined to be extremely important. 

The strong (or thick) conclusion:

EAs should focus much less on impact measurement (even to compare strategies that share a common goal) and instead allocate resources more widely to many interventions they believe (given their epistemic position and personal circumstances) do a lot of good. 

I will not endorse either of these conclusions in this piece, but they both merit serious consideration. Having already given what I think is a provocative case for either conclusion in this section, I’ll turn to some counterarguments. 


Possible counterarguments:

#1: EA is, at its core, about impact measurement. Even if it’s extremely difficult in the new landscape, we still must quantify the efficacy of various possible interventions to allocate resources. 

I agree that impact quantification is indeed the primary differentiator of EA versus other philanthropic movements. But I also think that asking people to “do a lot of good” aligns with this guiding principle. Peter Singer’s early arguments for reform in the philanthropic space were effective in showing many folks that donations to large university endowments or well-established arts programs did not, in fact, do very much good at all. Although I argue that the good done by researching AI alignment strategies cannot be compared to the good done by lobbying against inhumane farming practices, I am quite sure they both have the potential to do a lot of good. Moreover, if, during the course of her efforts in either of these spaces, a person determines that her work is not doing much good, she ought to stop and try something else. In essence, I think the “do a lot of good” approach allows for evidence-centered work with proven impact without the need to constantly justify why one is working in her chosen space and not for some other cause.


#2: If we accept the weak conclusion, what tools can we use to make resource allocations across disparate causes?

I will leave this as an open question. While it’s true that billionaire EA philanthropists like Cari Tuna and Dustin Moskovitz have to grapple with this objection to the weak conclusion, most of us don’t. We each have a certain set of opportunities to do good (based on our material and epistemic circumstances), and given those, we can choose our own altruistic journey. Most of us do not have copious amounts of extra money or time to give to EA causes, so we can use our narrow slice of the pie to chip away at an important, neglected problem and let the large research teams at organizations like Open Philanthropy decide how the rest of the pie is divided.
 

#3: If we accept the strong conclusion, resource allocation will be impossible without comparing efficacy. We would be returning to the dark ages of philanthropy when money was given seemingly indiscriminately and unworthy organizations were granted billions. 

I do not think that this follows from the strong conclusion. A wider and wider range of causes and intervention methods is brought under the EA umbrella every year. (Most EAs used to give only to GiveWell’s 9-or-so recommended charities—that's just not true anymore).[5] If you are cynical about impact quantification, you probably consider this intellectual and moral progress. I believe that we, as a community, can allocate resources to a large number of impact-centered organizations without doling out billions to unproven, inefficient organizations with reckless abandon. In short, we no longer need the extremely demanding “always do the most good” criterion to prevent unworthy interventions from creeping into the EA space.

 

Part 2: “Do a lot of good” is an effective rallying cry: How to grow a social movement for good

In my opinion, “do the most good” is less effective rhetoric than “do a lot of good”. This claim is based more on my intuitions than science, but here is my basic reasoning. 

One interesting feature of consequentialism is that, in an effort to actualize the best outcome (fulfill the most preferences, generate the most utils, etc.), actors ought not always preach what they practice. Imagine your friend is choosing between options A, B, and C. In your consequentialist analysis, A is the most moral action, C is the least, and B falls somewhere in between. If you were in your friend’s shoes, you would choose A. However, your friend assures you that she will not choose A. In this case, all else being equal, you should try to convince her to choose B. This is a strange conclusion. Under a deontological moral theory, you would most likely try to convince your friend to perform action A regardless of whether she’ll actually do it. A is, after all, the right thing to do, and most deontological theories advocate for preaching moral rightness no matter what. Consequentialists, though, in an effort to actualize the best possible outcome, base their preaching on the probabilities that certain outcomes will occur given their actions.

The conclusion that consequentialists sometimes ought to preach less than perfectly moral actions is important in EA. Consider two other movements: Giving What We Can and Meatless Monday. William MacAskill, following Peter Singer et al., surely thinks that the extremely rich should give much more than 10% of their income to be perfectly moral. Under his moral calculus, they should probably give almost everything away. However, telling people to give away 99% of their income is not likely to catch on. Instead, MacAskill and others at Giving What We Can chose to advocate for giving 10% of one’s income. This position both sounds reasonable to a large audience and would assuredly change the world if broadly accepted.

Similarly, many vegetarians and vegans tell their friends and coworkers to try “Meatless Mondays”. Most of these animal rights advocates wish that everybody would avoid meat altogether, but asking their friends and coworkers to go cold turkey (literally) would have less impact than first asking for a more modest reduction in meat consumption. 

The EA movement was founded on radical ideas that came out of consequentialist reasoning. Although many current members of the movement do not consider themselves hardcore consequentialists, outcome-oriented analysis is still a key tenet of EA. That outcome-oriented thinking should not be limited to deciding which actions we ourselves take. We must also consider the consequences of our private and public rhetoric.

Effective altruists often say things like, “giving to community theater is less effective than giving to global poverty relief, so giving to community theater is wrong”. I suspect that this rhetoric generates little change in the general populace compared with “here are 10 reasons why you should give more to global poverty relief”. Most people, especially people who don’t study philosophy, don’t like being told that their actions are wrong. Telling somebody that an act of kindness (like making frequent donations to a community theater) is actually immoral is likely to end a conversation before any change can be made.

In my analysis, our community has generally adopted this vein of thinking. Most EAs realize that if we want to gain wider reach, we cannot tell people they should give so much to the Against Malaria Foundation that they live in poverty. However, as new causes enter the EA space, we’ve been slow to apply this reasoning to our new resource allocation debates. It is not very useful to tell somebody who is passionate about AI alignment that worrying about animal welfare is more effective, so researching AI alignment is immoral. I know that most EAs aren’t actually saying this aloud, but it is frequently implied.

We should continue to debate how resources should be allocated. We should continue to persuade people to donate their time and money to effective interventions. But we should also continue to seek to grow the EA movement. Doing so requires that we accept varying levels of commitment to EA and a wider range of activities that we consider candidates for doing good effectively. Asking folks to “do a lot of good” accomplishes these goals without losing sight of EA’s core mission.



Conclusion

My argument does not imply that everybody in the EA community is an “ineffective altruist” because nobody can prove her chosen intervention is best. In fact, I believe it does the opposite. If we accept that we can’t know for certain how to do the most good, an effective altruist is somebody who—given her epistemic position and individual circumstances—frequently chooses actions (big and small) she reasonably believes do a lot of good on aggregate.

I suspect that some of you are frustrated by this conclusion. If you read this forum, you likely have an EA cause that you think ought to be prioritized. And I do agree that EA is still extremely funding constrained. Therefore, you might be tempted to argue: “well even if I can’t know that my cause does the most good, it does more good than cause X”. This is an unfortunate response for two reasons: 1. it involves a level of epistemic certainty about the future that I think is unfounded[6]; and 2. it implies that other EAs are dedicating their lives to unworthy causes.

I’ll close by saying that I think the EA community is unique and fantastic in many ways. However, just like all other organizations, we ought to work towards a culture of inclusivity. The increasingly broad focus of the movement, coupled with scarce resources, has the potential to pit us against ourselves. But a broader focus also gives us the opportunity to grow our community and advance our shared mission of improving the lives of people and animals. EAs now appreciate that the world faces a wide array of big problems. We can only solve them with a diverse coalition of change-makers, each bringing her own unique passions and perspectives.


I'm a recent philosophy B.A. grad interested in pursuing a career in EA, and this is my first post on the forum. Please reach out if you have any comments/critiques/questions; I would love to meet as many folks as possible and engage in any conversations around the future of EA!
 

  1. ^

    Kant, Immanuel. Critique of Pure Reason. A548/B576. p. 473.

  2. ^

    MacAskill, William. What We Owe the Future. 2022. pp. 254-5.

  3. ^

    Ord, Toby. The Precipice. 2020. p. 71. 

  4. ^

    See Clough 2015: “Effective Altruism’s Political Blindspot”; or Herzog 2015: “(One of) Effective Altruism’s blind spot(s), or: why moral theory needs institutional theory”; or Srinivasan 2015: “Stop the Robot Apocalypse”. 

  5. ^

    I’m well aware that the vast majority of EA funding still goes to global health and development, but it’s patently clear that a diversity of interventions are getting more and more attention. 

  6. ^

    If you are advocating for a “neartermist” cause, you are taking the long term survival of the human species as too probable, and if you are advocating for a longtermist cause, you may be too confident about your chosen intervention’s effectiveness.

Comments

Great post, well explained!

I like the idea of having a more relatable message than "do the most good", but I am not sure how much more relatable "do a lot of good" is. To me it seems that there might not be that much of a difference between the two, at least in how they are used in day-to-day discussion (that is, applying a filter of "practicality" to the maximization problem).

For example, I thought it was common EA knowledge that there are "top recommended cause areas" (on 80K), where some are higher on the list but with a big * of uncertainty. There are also enough people to work on all of them, so there's no need for a final judgement of a top 3, let alone the 1 most important cause. In a way this could be a "macro" EA perspective - asking not what is the most good an individual could do, but what is the most good a group/society can do, with appropriate allocation between cause areas of high ITN.

I think EA can come across as a bit elitist to others, especially to people volunteering in non-EA charities or trying to do good in "traditional" ways (doctors, med-tech, activism, etc). Perhaps "do a lot of good" can help with that - but I still think it would come to similar conclusions in some cases. I have a friend who has been volunteering with "Make a Wish" for the past 10 years, and I felt a little uneasy telling him about EA without offending him - though I was able to, and while he was intrigued I don't think he was convinced.

I had a thought a while ago that perhaps the world would be much better if there were a lot of people committed to doing "at least a little good", rather than a (relatively) small group of highly ambitious people doing "the most good". However, perhaps there is room for that as a separate movement from EA. Plus, someone for sure needs to work on the "big things" too, which seems like a good niche for EA.

Thank you for reading the post and for the helpful comment! I totally agree that "do a lot of good" isn't a particularly unique or sexy message. As I mentioned in the post, I believe that part of EA's early appeal was that maximization was so radical and elusive. I do think there is a fairly big difference between the two messages though, especially when it comes to elite donors. Academics like Anand Giridharadas (in Winners Take All) and Rob Reich (in Just Giving) argue that elite philanthropy is often a well-disguised charade to boost the donor's own power and status. I'm sympathetic to the argument that, sometimes, these elites are unaware of the ways in which their giving (e.g. to private schools, religious institutions, the arts, etc.) increases existing inequalities. However, it's also easy to argue that some of these wealthy individuals are not doing "a lot of good". I don't think "do a lot of good" is some magic bullet that will fix billionaire philanthropy, but I do think it might be easier to convince elites that they need to change their philanthropy (and other actions...) with that message than with the rhetoric of maximization.

Great point about 80,000 Hours. It's certainly interesting to take a macro-level view of what a large group or whole society could do with respect to cause areas. It reminds me a little bit of trying to reason like a Rawlsian about ideal theory. Our decisions as individuals become incredibly contingent on what other members of the group decide, especially other members with wealth and power. In this ideal world, the "Neglectedness" factor arguably becomes the most important, which seems to take away from the power of EA (because, as you say, our niche is taking on the big things, not just the things that nobody is doing). I think we're in a time when there are tons of Important and Tractable causes which aren't being altogether neglected but still need more resources. All this is just a roundabout way of saying I'm skeptical of both the usefulness and feasibility of trying to take the macro perspective and maximize a group's impact.

I'm particularly sympathetic to your point about elitism, and part of my motivation for this post was to try to temper that problem within EA. In my conversations with friends about EA, it's never the idea of maximization that changes their worldview. Instead, they're usually more interested in the argument that there are big ways to make an impact which elite philanthropy largely ignores. If you're talking with somebody who dedicates her free time to volunteering for an org like Make a Wish, maximization is a non-starter, but sowing the seed that there might be additional avenues for impact will at least allow for some discourse.

On your last point, I think that the "everybody just does a little good" world is already the world we live in! I agree that there is a serious need for groups of people to tackle the big things, but in an ideal world, this is what governments do. Just like many nonprofits say, EA's main goal should be to not need to exist (because institutions are tackling high-ITN goals efficiently).


Great post!

I might be wrong, but I don't think many EAs actually believe that, say, donating to GiveWell is the single most good they can do for the world. In the actual situation, given epistemic uncertainty, it happens to be a clear example of what you mentioned - "actions that they can be reasonably sure do a lot of good". So there is an implicit belief, revealed in actions, that merely doing a lot of good is not only an acceptable but a recommended behaviour.

However, I'm not sure it logically follows from this that seeking to do "the most" good should be abandoned as a goal. This is particularly the case if effective altruism is not defined as an imperative of any kind but as an overall approach that says "given that I've already decided on my own to be more altruistic, how can my time/money make the biggest difference?"

Despite being an unattainable ideal if you take it literally, the "most" framing is still fruitful - it gives altruistic, open-minded, but resource-constrained people (which describes a lot more people than we might've thought) a scope-sensitive framework to prioritize resource allocations.

To see why, let's take an example. It could be argued that giving to the community theatre does not just a little good, but a lot of good. If you are a billionaire giving millions to community theatres all over the world, there is a reasonable chance that you are doing a lot of good. (And such altruism should be praised, compared to spending those same millions, say, lobbying for big tobacco.)

What effective altruism then brings to the table is to say "look, if you have a sentimental attachment to giving to the community theatre, that's fine. But if you're indifferent towards particular means and your goal is simply to be a good person and help the world, the same money could carry you much further towards your goal if you did X."

Of course, you can then say sure, X sounds good, but what about Y? What about Z? And so on, ad infinitum. At some point, though, you have to make a decision. That decision will be far from perfect, since you lack perfect information. However, by using a scope-sensitive optimization framework, you will have been able to achieve a lot more good than you would have otherwise.

So while optimization has its flaws, I would characterize it on the whole as one of those "wrong, but useful" models.

Neat stuff here - thank you for the thoughtful comment! 

I agree that few people believe that their choice of intervention is actually the most useful and that we often lavish praise on people who do just a lot of good. For example, many people consider figures like Warren Buffett and Bill Gates very praiseworthy because, even though they have private jets, they still do a lot of good.

I also agree that maximization ought not be revered as an imperative. An imperative to maximize, like thick consequentialism as a moral theory generally, is too demanding. Following this, I struggle to see why we need it at all. Folks who truly want to do a lot of good will still perform optimization calculations even if they aren't explicitly trying to maximize. This makes maximization neither a normative nor a descriptive part of anything we do.

In your example about "the same money could carry you much further towards your goal if you did X", there is no maximization rhetoric present. If you were using maximization as a "wrong but useful" model, you would likely say something like, "I deduced that the same money could carry you farthest if you did X, so don't give to community theater and don't do Y or Z either unless you show me why they're more effective than X".

As an analogy, you don't have to try to be the best philosopher of all time in order to produce great thinking. 
