
This is an update on the Effective Altruism Foundation’s fundraising activities on behalf of other high-impact charities, mainly Raising for Effective Giving and our tax-deductible regranting service.

Summary

  • In 2018, we raised $9,968,506 for high-impact charities. Conservatively, we estimate that $5,545,762 (56%) would not have been donated otherwise (Raising for Effective Giving: $5,160,173; regranting: $385,609).
  • We estimate the combined expenses for these two projects in 2018 to have been about $171,263. That means for every $1 spent on these projects we raised another $32. Taking into account opportunity costs of about $375,000, the net impact of these two projects was $4,999,499 last year.
  • The donations we raised were split between different cause areas as follows: global health & development (40%), long-term future (32%), animal welfare (23%), and other causes (6%).
  • Last year’s exploratory fundraising efforts in the blockchain community and among German-speaking philanthropists did not result in any major donations, so we have discontinued them.
  • Our fundraising projects have had continual growth over the past few years, with money raised increasing even faster than the associated expenses. From 2017 to 2018 donations that would not have been made otherwise increased by $977,824 (21.4%). Expenses even decreased from $289,618 to $171,263 because we invested considerably less effort into Raising for Effective Giving.
  • Going forward, we intend to maintain our philanthropic activities at the current level.

Review of 2018

Overview

We pursue two projects that raise money for other high-impact charities: Raising for Effective Giving (REG) and a regranting service. REG is a fundraising project aimed at the professional poker community, high-stakes players in particular. With our regranting service, we enable donors to take advantage of tax deductions. Because we are a charitable organization in Germany, Switzerland, and the Netherlands, donors in these countries can make tax-deductible contributions to high-impact charities that do not have charitable status in their home countries. They make a tax-deductible donation to us, the Effective Altruism Foundation, and we, in turn, regrant their donations to highly effective charities.[1] As part of this service, we processed 5,867 individual donations last year. These donations are separate from any donations we processed as part of Raising for Effective Giving.

In 2018, we registered a total of $9,968,506 in donations for other high-impact charities due to these two projects. Conservatively, we estimate that $5,545,762 (56%) of that would not have been donated otherwise. Our estimate only accounts for some effects and might be biased in various ways; this figure represents our best guess. Rethink Priorities has produced a more detailed model for the similar work of RC Forward, and we might adopt parts of their model for future estimates. We estimate the combined expenses for these two projects to be about $171,263. That means for every $1 spent on these projects we raised another $32. The net impact was $5,374,499 in donations (= $5,545,762 – $171,263). Factoring in estimated staff opportunity costs of $375,000 (assuming a well-paid earning-to-give opportunity), the net impact goes down to about $4,999,499 (= $5,545,762 – $171,263 – $375,000).

Most of these funds were raised through our work with Raising for Effective Giving.

Last year, we also invested about 100 hours each into testing various fundraising efforts in the blockchain community and in the community of German (would-be) philanthropists. We ultimately decided against pursuing this further. The main reasons were:

  • As an organization, we concluded that research into potential worst-case risks from AI is comparatively more neglected and that we have a comparative advantage at pursuing this type of research.
  • S-risks from AI as a cause seem particularly ill-suited for fundraising outside of the EA community.
  • Funding has become less of a bottleneck for the EA community as a whole, and for EAF in particular.

I want to thank Alfredo Parra, Kim Korte, and Thomas Moispointner for their help over the past year with all of these projects. Without their efforts, this would not have been possible.

Counterfactual estimates

Raising for Effective Giving

With Raising for Effective Giving, we accounted for counterfactual giving in the following way: We counted donations made through our main website as influenced by us. We sometimes also counted donations made directly to our recommended charities, but only if the donor confirmed to us that our influence was essential. This metric leaves out any donations to our recommended charities from people who never get in touch with us (also see our FAQ on this question). Conversely, counting website donations in full assumes that the donors would not have learned about effective altruism or effective giving otherwise; some likely would have. We have not attempted to account for either of these effects, but given the dominant role REG has played in the charitable giving of the poker community, both strike us as fairly minor.

In 2018, most donations came from the Double Up Drive we helped run in December. We registered total donations of $5,437,174 from this campaign. In the end, however, we counted only $3,251,377 (60%) as having been influenced by us. As a first step, we counted all the matching funds that had been provided ($2,718,587), because the matchers learned about effective altruism through Raising for Effective Giving. However, the Double Up Drive had also been advertised by the charities themselves, so it’s very likely that some people contributed to the matching challenge who would have made a donation in any case. For all donors giving ≥$10,000, we individually estimated the share they would have donated anyway by looking at their donation history and by inquiring with the donor in question and the charity they gave to. Where we had no information whatsoever, we conservatively assumed that only 10% was due to the matching challenge. The resulting average share due to the challenge was 15%. We extrapolated this average to all the remaining donors (<$10,000) on whom we had no prior information.
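The counting procedure above can be sketched as follows. The donor records here are hypothetical; only the method and the matching-fund total follow the description in this post (matching funds count fully, large donors get individual estimates, small donors get the large-donor average):

```python
# Sketch of the Double Up Drive counterfactual estimate.
# Donor records below are hypothetical illustrations, not real data.

MATCHING_FUNDS = 2_718_587  # counted in full, per the post

# (amount, share_due_to_match) for hypothetical donors giving >= $10,000;
# 0.10 is the conservative default used when nothing was known.
large_donors = [(50_000, 0.10), (25_000, 0.30), (100_000, 0.05)]

small_donor_total = 40_000  # hypothetical total from donors < $10,000

# Average share due to the matching challenge among large donors,
# extrapolated to small donors with no prior information.
avg_share = sum(s for _, s in large_donors) / len(large_donors)

counted = (
    MATCHING_FUNDS
    + sum(a * s for a, s in large_donors)
    + small_donor_total * avg_share
)
print(f"average large-donor share: {avg_share:.0%}")  # 15% in this example
print(f"counted as influenced: ${counted:,.0f}")
```

With these made-up records, the large-donor average happens to match the 15% reported above; the real figure was computed from individual donor histories.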

Regranting

Since these donations are not the result of a dedicated fundraising effort, it’s likely that a significant portion of the total would have been donated in any case, even if we had not provided this service. We used a simple model to account for this counterfactual giving: While donors would not have been able to deduct the donations from their taxable income (or take advantage of similar tax schemes), it’s unlikely that this would have caused them to donate nothing whatsoever. However, they would likely donate less, since without the deduction their taxable income, and hence their tax bill, no longer falls when they donate. This effectively represents a price increase for the donation. The increase is determined by the effective marginal tax rate, which in turn depends on the income of the donor and the size of the donation. The relationship between a price increase and the corresponding change in quantity is governed by price elasticity (PE). For simplicity, we assume a price elasticity of –1, i.e., unitary elasticity.[2] We use the formula for arc elasticity, which takes the price elasticity at the midpoint between the two relevant values on the demand curve. We are not very confident that this is the right way to approach this effect but didn’t want to invest additional time into checking alternative methods in detail. If others in the community have a stronger grasp of these issues, we’d appreciate any feedback. The basic formula is as follows:

PE = ((Q2 – Q1) / ((Q1 + Q2) / 2)) / ((P2 – P1) / ((P1 + P2) / 2))

In our case, Q2 is the donation volume as we observe it right now. Q1 is the donation volume we would have observed if donations were not tax-deductible, i.e., the counterfactual donation volume. P1 is equal to Q1, as the cost of these donations is equal to their volume: making a $100 donation simply costs $100 when donations are not tax-deductible. P2 is the cost of the donations as we observe them right now, i.e., when they’re tax-deductible. This is equal to the cost of making the donations minus any tax savings from deducting them from the taxable income. The tax savings are equal to the donation volume times the relevant marginal tax rate: making a $100 donation reduces the taxable income by $100, which in turn saves taxes equal to $100 times the relevant marginal tax rate. So we arrive at:

–1 = ((A – C) / ((C + A) / 2)) / (((A – A * MTR) – C ) / ((C + (A – A * MTR) ) / 2)),

where C equals the counterfactual donation volume (= Q1 = P1), A equals the actual donation volume (= Q2), and MTR equals the marginal tax rate.

Solving for C, we arrive at:

C = A * sqrt(1 – MTR)

Our impact (I) then is equal to the actual donation volume (A) minus the counterfactual donation volume (C):

I = A – A * sqrt(1 – MTR) = A * (1 – sqrt(1 – MTR))
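As a quick sanity check on this algebra, substituting C = A * sqrt(1 – MTR) back into the arc-elasticity formula should return an elasticity of exactly –1. A minimal sketch (the volume and tax rate here are arbitrary example values):

```python
from math import sqrt

def arc_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) price elasticity between two (price, quantity) points."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

A, MTR = 100_000.0, 0.33       # arbitrary donation volume and marginal tax rate
C = A * sqrt(1 - MTR)          # claimed counterfactual volume

# Q1 = P1 = C (no deduction); Q2 = A and P2 = A * (1 - MTR) (with deduction)
pe = arc_elasticity(q1=C, q2=A, p1=C, p2=A * (1 - MTR))
print(round(pe, 10))  # → -1.0
```

The check works for any A and MTR in (0, 1), which is what makes the closed form C = A * sqrt(1 – MTR) valid under the unitary-elasticity assumption.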

The effective marginal tax rate differs from country to country, and from state to state in some cases. In Germany, the median income is about €34,000 (~$38,000) per year which would put the marginal tax rate at 33% (single household, no children). In Switzerland, the median income is about CHF 78,000 (~$78,000) per year. Unfortunately, income taxes are determined locally to a significant extent. A rough estimate gives a marginal tax rate of 17%. In the Netherlands, the median income is about €36,000 (~$40,000) per year which would put the marginal tax rate at 38.1% (single household, no children). We rounded down to 38%. We’ll assume 0% for any other countries since donations to EAF are not tax-deductible there.
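Under these assumptions, the model reduces to a one-line impact share per country, 1 – sqrt(1 – MTR). A minimal sketch using the rates above (the aggregate 15.5% figure additionally depends on per-country donation volumes, which are not broken out here):

```python
from math import sqrt

def impact_share(mtr):
    """Share of donations attributable to tax deductibility: 1 - sqrt(1 - MTR)."""
    return 1 - sqrt(1 - mtr)

# Effective marginal tax rates from the estimates above.
rates = {"Germany": 0.33, "Switzerland": 0.17, "Netherlands": 0.38, "other": 0.0}

for country, mtr in rates.items():
    print(f"{country}: {impact_share(mtr):.1%}")
# → Germany: 18.1%, Switzerland: 8.9%, Netherlands: 21.3%, other: 0.0%
```

Weighting these per-country shares by each country’s regranted volume is what yields the overall counterfactual estimate reported below.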

Using these numbers, we arrive at ~$385,609 (15.5%) as having been donated because we provide this service. This likely underestimates the impact: From what we can tell, studies on the price elasticity of donations were mainly done on changes to the tax code, not on changes to the charitable status of individual charities. In the latter case, we’d expect significant substitution effects, because donors decide to support other charities instead. They might then donate to considerably less effective charities which are, however, tax-deductible in their respective countries. There are also some other effects that Rethink Priorities has looked into for their study on the impact of RC Forward.

Expense and opportunity cost estimates

We estimate the 2018 expenses for both projects to be $171,263.[3] We can only estimate this number because the staff members involved in these projects also have other responsibilities within the Effective Altruism Foundation. We also stopped accounting for expenses at the project level in 2018 since the Effective Altruism Foundation has become a lot more integrated, so expenses cannot be neatly attributed to one project or another. Our estimate is based on figures from previous years and on the relevant staff members’ estimates of how much time they spent on the projects in question.

Dividing these expenses between REG and our regranting service is also a challenge. We did so by using a weighted formula that takes into account the respective donation count and donation volume. This yielded a 40/60 split between REG and our regranting service: $68,505 and $102,758 respectively.

To account for opportunity costs, we took the estimated 1.5 full-time equivalents dedicated to these two projects in 2018 and multiplied them by $250,000, a crude estimate for a very well-paid earning-to-give opportunity. This results in an opportunity cost estimate of $375,000. Assuming the same split between projects (40/60), we get $150,000 for REG and $225,000 for our regranting service.
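The headline arithmetic from this section and the summary can be put in one place. A quick recomputation using the figures from this post (the 40/60 split is applied to both expenses and opportunity costs):

```python
raised = 5_545_762           # counterfactually adjusted donations, 2018
expenses = 171_263           # combined expenses for REG and regranting
opportunity_cost = 375_000   # 1.5 FTE * $250,000

ratio = raised / expenses              # dollars raised per dollar spent
net_after_expenses = raised - expenses
net_impact = net_after_expenses - opportunity_cost

reg_share = 0.40  # REG's share under the weighted 40/60 split
split = {
    "REG": (expenses * reg_share, opportunity_cost * reg_share),
    "regranting": (expenses * (1 - reg_share), opportunity_cost * (1 - reg_share)),
}

print(f"${ratio:.0f} raised per $1 spent")  # → $32 raised per $1 spent
print(f"net impact: ${net_impact:,}")       # → net impact: $4,999,499
```

This reproduces the $68,505/$102,758 expense split and the $150,000/$225,000 opportunity-cost split reported above.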

Donation distribution

By cause area

By charity

Development over time

Our fundraising projects have had continual growth over the past few years, with money raised increasing even faster than the associated expenses. Last year we even saw a decrease in expenses because we invested considerably less staff time into Raising for Effective Giving. We didn’t try to find many new donors and focused on maintaining the most important existing relationships instead.

Some clarifying endnotes on this table.[4]

Future plans

Going forward, we intend to maintain our philanthropic activities at the current level. We will continue to provide a donation infrastructure for German, Swiss, and Dutch donors. We will maintain Raising for Effective Giving, focusing on the most important relationships and opportunities.

As we wrote on this forum in December, we will focus much more on research compared to previous years. For instance, we hosted a workshop on s-risks in Berlin in March and another one in the Bay Area in May.


  1. Since we are not a fiduciary, we cannot legally commit to forwarding any donations to the desired charities. However, we very strongly take into account the wishes of the donors when regranting donations, and have always done so in the past. ↩︎

  2. After briefly reviewing the evidence on this, we think this is about right. The only meta-analysis we could find gives a mean elasticity of –1.44 (standard deviation of 1.21). When the meta-analysis excluded outliers, it found an elasticity of –1.11. However, this report seems to suggest that such a high elasticity likely overstates the real effects. ↩︎

  3. This includes time spent exploring fundraising efforts in the blockchain community and in the community of German (would-be) philanthropists. ↩︎

  4. The numbers for REG differ slightly from the ones in the respective annual reports for 2014 and 2015. That’s because we decided to retroactively exclude donations made to either REG or EAF during those years. We also decided not to include estimates for opportunity costs since, for previous years, we don’t have reliable estimates for the full-time equivalents dedicated to these projects. We estimated the counterfactuals for regranting donations by using the same marginal tax rate estimates as for 2018 and adjusting based on the donation volume per country for that year. ↩︎

