
Lots has been written about this, so I wrote a poem instead, along with my thoughts and related links at the bottom. I lead the team at Giving What We Can; views are my own.

Poem

Years ago we were struck by big problems: they were so extremely funding constrained.

One-by-one we saw a big impact: by making them less extremely funding constrained.

We didn't wait for permission, we gave from our own pockets first. It became our mission to put others first.

Our thrifty community dug into the data. We made money go further, we made things go better.

Each dollar paid dividends, each DALY gained a good end. Progress felt slow, but was needed, we know.

Constraints were consistent, opportunity cost felt: "Should I pledge further? Should I become a researcher?"

A driven community with compassion so big: we found more neglected problems, solvable, and big.

We said "more research needed", traded money for time: found researchers, founders and then funders aligned.

Some problems found funders more quickly than founders, yet others found moneypits so desperate to fill.

Give trillions in cash or keep coal in the ground? What about the backlash if our decisions aren't sound?

As we made progress we hit the mainstream. Among the first questions: "Why's my cause unseen?"

We're resource constrained, I wish it weren't such: "Yes, we want to help everyone, but we only have so much!"

Our work's still so small in the scheme of the world. Still, let's be more ambitious: let's build a dreamworld.

We need many folks to be stoked by our mission. We need many funders, founders, and passion.

Experimentation is something we now know we can try: don't let fear of funding be why you don't apply.

But for the foreseeable future your dollars still count: for every life that you help we mustn't discount.

Our mission ain't over, we're at the start of our road. We need your help: let's make some inroads.

So give what you can and get others involved. Let’s keep working together to get these problems solved.


Postscript

It can be quite difficult to ‘feel’ the fact that all of these things are true at the same time:

  1. We have increased available funding by an order of magnitude over the past decade and increased the rate at which that funding is being deployed;
  2. We don't want lack of funds to be the reason that people don't do important and ambitious things; and yet
  3. In most cases we are still extremely funding constrained.

I find it painful (and counter-productive) to see these messages floating around:

Whereas I think the better (more truthful and constructive) narratives are:

  • We have a much better shot at having a significant impact
    • We have more resources, which helps us:
      • Double down on things we have good evidence for
      • Justifiably spend more on research and experimentation
      • Become more diverse (e.g. participating no longer requires someone to have enough personal resources to take big risks; we can fund people to attend a conference/retreat they couldn’t otherwise afford, etc.) and therefore find more excellent people to participate in this grand project.
  • The situation is nuanced:
    • The funding situation varies significantly by cause (e.g. a top AI safety lab can likely pay above-market salaries to a decent junior researcher, while many jobs in global health will be lower paying and still very competitive)
    • Different funders have different priorities and approaches to funding (e.g. it can be much harder to get funding for a more speculative global health project than an equally speculative longtermist project)
    • A lot of the money is concentrated in a small number of donors/evaluators/grantmakers (another reason I think that more diversification here is good)
    • The wrong messages about money could be incredibly damaging
      • Just like we need to be careful around other topics like careers (e.g. many articles have been written about people hearing that we’re talent constrained and then how it feels to not get “an EA job”).
  • We still need much more funding:
    • As I said in my comment on this post, it pains me deeply that AMF (and many other super robust high-impact charities) still have a funding gap
    • We've already identified far more opportunities than we can possibly fund with our current resources (e.g. megaprojects are all currently out of reach, GiveWell can't fully fund its top charities, GiveDirectly is still an incredible opportunity etc).
    • We’re uncertain about the pipeline of funding coming in the future
      • We don’t want to spend it all too quickly
      • We still want to be very careful with how we spend the money we do have
  • Finally:
    • Giving is still one of the most accessible ways that almost anyone can immediately start having a meaningful impact on important causes
    • Fundraisers and other “big tent” activities help increase our reach and have a nice flow-on effect for things like career changes

So please don’t let fear of not getting funded be your reason for not doing something: be ambitious. But also bear in mind that a lot of good projects aren’t getting funded, and people aren’t getting hired, who otherwise would be if we weren’t still so (extremely) funding constrained. And if you can help provide more funding then please do: it can still be incredibly impactful!


These posts make most of these arguments better than I do:

Comments



I always appreciate reading your thoughts on the EA community; you are genuinely one of my favorite writers on meta-EA!

Thanks Miranda, that is very kind of you to say!

Evie

This poem really made me smile; thanks for writing it Luke :)

Thanks for letting me know 😀

Hey, I think an important question is "is it better for [the person asking this question] to donate or work directly?", and I think it's not healthy to try to answer it by working out whether EA as a whole is more talent constrained or money constrained (approaching this as a complicated research question), but rather by asking specific orgs what they prefer. What do you think?

Yep, absolutely!

A lot will come down to comparative advantage and context, and asking orgs isn't a bad way of finding out. That being said, the organisation might not know the person well enough in many edge cases to give a clear answer, and it could fall prey to social desirability bias etc. However, not hiring someone is sometimes a signal that the money is more valuable than the labour they’d provide.

Anyways, we are constrained on many fronts but also growing on many fronts. Sometimes there are cases where there are fewer constraints (eg lots of money earmarked but not enough talent), but those are narrow cases and don’t apply to “EA” as a whole.

My main concern is how often I’ve seen people publicly write, and say in person, things along the lines that EA is overfunded (eg the examples I gave in my postscript). It baffles me, concerns me, and I think does a lot of damage.

Why can't someone donate and work directly? 

I think Yonatan may have been talking about additionally donating the difference in salary from a higher paying job, but otherwise, yes – of course – a lot of people do both (ourselves included!).

+1 to Luke's answer

 

I mainly want to push EA away from "is EtG (earning to give) cool or is working directly cool [for everyone]" to "have each person consider what's better for their specific case (probably by talking to orgs they could work for)"

[Mainly to Jack,] I also do have a prior that if you're both working directly and also donating, then the vast majority of your impact is probably coming from one of them.

I think so because:

  1. It would be an unlikely coincidence, I think, if they'd both have a similar amount of impact
  2. Priors from Purchase Fuzzies and Utilons Separately (I found this very convincing)
  3. Priors from EA analyses of charities that improve the world in many ways, where we usually try to pick only one of those "ways" to quantify the impact of the entire charity (which seems correct to me)

Got thoughts about this?

(This is kind of off topic to the rest of my post but I think it's interesting)

Thanks for clarifying, both.

Yes Yonatan, I think that's correct. It just surprises me how often people seem to say 'most of my impact comes from my direct work, so I shouldn't donate at all'.

In almost all cases, direct work + donating > direct work only

While that is probably mostly true (afaik), note that (in my opinion) many direct workers aren't being paid enough (in the current situation; maybe it will change), so I'd hesitate to "push" them to donate some of what they're getting

So I guess this hinges on what we mean by "enough". If your position is "most people in direct work are paid below their potential market value" - yes, absolutely. But I don't really see that as relevant to "are they paid enough to donate a %". If we considered those things to be the same, we could end up endorsing some strange ideas, e.g. "I'm a consultant at Accenture paid $200k/year but I could be a consultant at McKinsey paid $400k/year, so I shouldn't have to donate."

If we consider those questions separately, then "enough" looks different. Clearly most people in direct work are paid enough to survive in a high cost of living city in a high income country; many are paid enough to be comfortable; some are paid enough to be considered rich by any reasonable standard (top few % in their own country, let alone globally).

One bit of signal here is that so many people in direct work do seem to be donating and don't seem to be making large sacrifices to do that.

It sort of comes back to one of the original EA arguments - what is that extra worth to you versus someone else?

I love this!

I agree we are still funding constrained. Minor point:

Our spendthrift community dug into the data. We made money go further, we made things go better.

Spendthrift: a person who spends money in an extravagant, irresponsible way. But do you mean the opposite? I've always thought the definition was confusing.

Ahah, TIL! That is confusing. Have changed to "thrifty".

Thanks for pointing that out 😀 

It seems worth it to me for EA to direct at least a fraction of its current reserves toward accomplishing smallish, tangible goals, to help create an image of legitimacy to the general public and increase the chances of continuing to attract funding in the future.
