- For $5,000, you can save one child from malaria.
- For $200, you can have a one-trillionth chance of preventing an existential catastrophe from pandemics.
- For $5,000–$10,000, you can fund events, books, food, t-shirts, etc. for a small EA group for a year.
- For $10,000, you can fund the marginal AI safety researcher for a month.
- For $1, you can invest it to double its expected influence over the world in less than 20 years, assuming nothing crazy happens before then.
Let's say money counts as "EA funding" if the person or system directing it is roughly aiming to do as much good as possible and is considering options like these. Marginal EA funding then goes to interventions that its director believes are at least as good as these. These interventions are really good. Therefore marginal EA funding is prima facie really good.
As long as there exist cost-effective interventions to throw money at, EA is funding constrained.
"It feels like there are three pieces per week on EA Forum with the thesis that an increase in EA funding could be counterintuitively bad and nobody ever [writes] a post with the boring but more correct-sounding thesis that it’s good. I guess my slightly spicy EA take is that there's too much complacency about not being funding constrained, and it would actually be really useful to raise dramatically more money."
Actually, I can't find GiveWell's marginal cost-effectiveness estimates, but my sense is that they've found interventions with average cost-effectiveness better than $4,500 per child saved and that scale without much cost increase. [Update: see comments.]
See Open Philanthropy's last dollar project.
Assuming you can get at least 8% expected real returns per year, and the wealth of the rest of the world grows at at most 4%, and influence is proportional to your share of the world's wealth.
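The doubling time implied by these assumptions can be checked directly: if influence is proportional to your share of the world's wealth, that share grows by a factor of 1.08/1.04 per year, and the doubling time is log(2) divided by the log of that ratio. A minimal sketch, using the footnote's stated numbers:

```python
import math

# Assumptions from the footnote: 8% expected real return on investments,
# at most 4% growth in the rest of the world's wealth.
investment_return = 0.08
world_growth = 0.04

# Your share of world wealth (and hence your influence, by assumption)
# grows by this factor each year.
relative_growth = (1 + investment_return) / (1 + world_growth)

# Years until your share of world wealth doubles.
doubling_time = math.log(2) / math.log(relative_growth)
print(round(doubling_time, 1))  # ~18.4 years, i.e. under 20 as claimed
```

With these numbers the doubling time comes out to roughly 18.4 years, so the "less than 20 years" claim holds; at only 7% returns versus 4% growth it would stretch past 24 years.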