
tl;dr: Compared to the early GiveWell days, EA’s moral circle has grown faster than its resources. This implies that despite the highly publicized growth in longtermist funding, money is more useful in absolute terms now than it was in the past. Waste, complacency, and apathy are more important to avoid than ever before. This says nothing on its own about whether the EA community is also talent constrained, or whether one should choose earning to give as a career, but it reinforces the importance of our present-day financial decisions.

 

Popular recent EA Forum posts have focused on the optics of EA spending. Below I set optics aside entirely and argue a stronger case: even ignoring optics, spending present-day dollars well (and having more of them) is more valuable to EA than it has ever been.

 

Argument

My argument is simple:

  1. Take as given whatever position EA was in before longtermism grew in popularity. The main cause areas are global poverty and animal welfare, and you are an individual saving part of your paycheck to donate and save individual lives. Efficiency is the name of the game, and there are plenty of problems to solve with your limited dollars
  2. Add longtermist cause areas and funding
  3. Assume no change in non-longtermist cause effectiveness from past to present[1]

 

If you believe the above 3 points, and you also believe that all current longtermist funding is insufficient to solve all longtermist problems (money could be dumped into scaling climate change solutions if nothing else, and there still would not be enough to solve that problem), then you are left with the conclusion that EA’s obligations have grown faster than its pool of resources. As a result, money is more important (in absolute terms) than ever. Donations could go to the older causes just as before, or to longtermism, which purports to offer higher expected value. And if you are unsure, you could always save the money and invest it, letting it compound for patient longtermism.
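
To make the patient-longtermism option concrete (the return and horizon below are hypothetical illustrations, not recommendations), money saved and invested compounds: a pot $PV$ invested at a real annual return $r$ for $T$ years grows to

$$FV = PV\,(1+r)^T.$$

For instance, $10,000 at a 5% real return over 50 years grows to about 10,000 × 1.05^50 ≈ $114,700, which could then be donated to whatever looks best at that time.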
 

Imagine that cost-effective cause areas are particularly efficient engines for converting dollars into well-being, and that funding is just one type of fuel for those engines. Longtermism is a massive engine that needs a lot of fuel. Its growth has added more engines than fuel.
 

It’s important to note that the above argument says nothing for or against talent constraints (or constraints of any other kind). There are multiple kinds of fuel needed to run the engines of different cause areas. My current (uninformed) view: EA’s cause areas have grown the fastest, funding second fastest, talent third fastest, and orgs the slowest. All are necessary.

 

Implications

What changes from this argument? If you are a moral saint who is already maximizing impact with zero waste, nothing changes. But if you are a normal human who has come to believe that money is less valuable in EA than it used to be, and who trades that belief off in personal, non-moral decisions, I think this framing is important. However lax about spending on unproductive ends one would have been in the early days of EA, it seems we should be equally or less lax now, even if that conflicts with our intuitions.

 

The current view I hold is: “Wow, we need to help current people, current animals, future people, and future animals, all with a subset of present-day resources. What a tremendous task.” I think maintaining this framing is important for keeping EA from becoming wasteful and complacent with the community’s growing resources, and for maintaining personal integrity. If you agree with my line of thinking, the value of all resources in our community, even funding, seems higher than it has ever been.
 

  1. ^

    This is untrue, but in a way that does not seem to dramatically change the line of thinking in my thought experiment. For example, in 2016 the Against Malaria Foundation was judged by GiveWell to cost $3,461 to save a life, and by 2020 that estimate had risen to $4,500.
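
    To put a rough number on that drift, using the two GiveWell estimates just cited:

    $$4{,}500 / 3{,}461 \approx 1.30,$$

    i.e. about a 30% increase in the cost to save a life over those four years, which seems small relative to the claims the argument above relies on.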

Comments (5)

I like this framing! 

In general, I think the fact that funding is often not a bottleneck for the most impactful longtermist projects gets conflated with the idea that marginal donations aren't valuable (which they are! Many/most of those previous opportunities in non-longtermist causes that got many of us excited to be part of effective altruism still exist).

 

“Wow, we need to help current people, current animals, and future people and future animals, all with a subset of present-day resources. What a tremendous task.”

Surely you mean we have a tremendous OPPORTUNITY! 😋

 

In all seriousness, this is a great post. In recent months, with everybody talking about how wealthy EA is as a movement, we risk alienating individual donors. But it's still important for folks to donate: there is SOO much good we can do as individuals.

Yeah, and one thing that often gets lost in the 'EA now has loads of money' claim is that it only has a relatively large amount of money compared to a few years ago.

Compared to total global resources, this new money going to EA causes is really rather tiny. There is huge scope to grow and to improve the allocation of resources.

We should be encouraging projects that could bring even more money into the influence of EA thinking.

Congrats on your first forum post!! Now, in EA Forum style, I’m going to disagree with you... but really, I enjoyed reading this and I’m glad you shared your perspective on this matter. I’m sharing my views not to tell you you’re wrong but to add to the conversation and maybe find a point of synthesis or agreement. I’m actually very glad you posted this.

I don’t think I have an obligation to help all people. I think I have an obligation to do as much good as possible with the resources available to me. This means I should specialize my altruistic work in the areas with the highest EV or marginal return. This is not directly related to the number of morally valuable beings I care about. I don’t think that now valuing future humans means I have additional obligations. What changes is the bar for what’s most effective.

Say I haven’t learned about longtermism, I think GiveWell is awesome, and I am a person who feels obligated to do good. Maybe I can save lives at ~$50,000 per life by donating to GiveDirectly. Then I keep reading and find that AMF saves lives at ~$5,000 per life. I want to do the most good, so I give to AMF, maximizing the positive impact of my donations.

Then I hear about longtermism and I get confused by the big numbers. But after thinking for a while, I decide that there are some cost-effective things I can fund in the longtermism or x-risk reduction space. I pull some numbers out of thin air and decide that a $500 donation to the LTFF will save one life in expectation.

At this point, I think I should do the most good possible per resource, which means donating to the LTFF[1].

My obligation, I think, is to do the most good on the margin where I can. What longtermism changes for me is the cost-effectiveness bar that needs to be cleared. Prior to longtermism, it’s about $5,000 per life saved, via AMF. Now it’s about $500, with some caveats. Importantly, increasing the pool of money is still good, because it is still good to prevent kids dying of malaria; it’s just no longer the best use of my money.

Importantly, efficiency still matters. If LTFF saves lives for $500 and NTI saves lives for $400 (number also pulled out of thin air), I should give to NTI, all else equal.
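
Putting these illustrative figures side by side (all of them made up, as noted above), the implied lives saved per $1,000 donated would be:

$$\begin{aligned} \text{GiveDirectly:}\ & 1{,}000 / 50{,}000 = 0.02 \text{ lives} \\ \text{AMF:}\ & 1{,}000 / 5{,}000 = 0.2 \text{ lives} \\ \text{LTFF:}\ & 1{,}000 / 500 = 2 \text{ lives} \\ \text{NTI:}\ & 1{,}000 / 400 = 2.5 \text{ lives} \end{aligned}$$

On these numbers, the bar sits at NTI’s $400 per life, and everything else is dominated, all else equal.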

I somewhat agree with you about

“Wow, we need to help current people, current animals, and future people and future animals, all with a subset of present-day resources. What a tremendous task”

However, I think it’s better to act according to “do the most good I can with my given resources, targeting the highest EV or marginal return areas”. Doing good well requires making sacrifices, and the second framing better captures this requirement.

Maybe a way I would try to synthesize my view and your conclusion is as follows: we have enormous opportunities to do good, more than ever before. If saving lives is cheaper now than ever before, then the opportunity cost of everything else is relatively higher. That is, wasting $500 used to forgo only 0.1 lives, and now it forgoes a whole life. This makes wasting our resources even worse than it used to be.
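
Spelling out that arithmetic with the same illustrative figures:

$$\text{before: } 500 / 5{,}000 = 0.1 \text{ lives per \$500 wasted}; \qquad \text{now: } 500 / 500 = 1 \text{ life}.$$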

Edit: Also, thank you for writing your post, because it gave me an opportunity to reflect on my own beliefs about this. :)

  1. ^

    Although realistically I would diversify, because of moral uncertainty, the psychological benefits of doing good with p~1, empirical uncertainty about how good the LTFF is, the social benefits of giving to near-term causes, wanting to remain connected to current suffering, because it intuitively seems good, etc.

I think both the total view (my argument) and the marginal view (your argument, as I understand it) converge when you think about the second-order effects of your donations on only the most effective causes. You're right that I argue in this post from the total view of the community, and am effectively saying that going from $50b to $100b is more valuable now than it would have been at any time in the past. But I think this logic also applies to individuals if you believe that your donations will displace other donations to the second-best option, as I think we must believe (from $50b to $50.00001b, for example).
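
To make the scale of that parenthetical explicit:

$$\$50.00001\text{b} - \$50\text{b} = \$10{,}000,$$

i.e. an individual-sized donation measured against the community’s total pool.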

 

This is why I think it's important to step back and make these arguments in total + absolute terms, rather than how they're typically made for simplicity: in marginal + relative terms (an individual picking earning to give vs. direct work). It's ultimately the total + absolute view that matters, even though the marginal + relative view allows for the most simplified decision-making.

 

Plus, responding within your framework: it also just so happens that, if you believe longtermism, its growth has added not just more second-best options but probably new first-best options, increasing the first-order efficiency like you say. So I think there are multiple ways to arrive at this conclusion :)
