
First the Open Philanthropy Project, now FTX, among many others: there is more funding in EA than there has ever been. How should that update our thinking on earning-to-give careers? Are smaller donations and giving pledges now unnecessary, and should they no longer be suggested?


6 Answers

Quick attempt to summarise:

1. Earning to give is still impactful – probably more impactful than 98%+ of jobs. The current funding bar in e.g. global health, set by GiveWell, is about 10x GiveDirectly, so marginal donations still have about that level of impact. In longtermism, the equivalent bar is harder to quantify, but you can look at recent examples of what's been funded by the EA Infrastructure and Long-Term Future Funds (the equivalent of GiveDirectly is something like green energy R&D or scaling up a big disease-monitoring program). Small donors can probably achieve a level of effectiveness similar to or better than GiveWell's. Going forward, the crucial question is how quickly more opportunities at that level can be found. It might be possible to keep the bar at 10x; otherwise it's likely to drop a bit so that more funds can be deployed, e.g. to ~5x GiveDirectly. If that happens, the value of earning to give in absolute terms will go down 2x, but it will still be very high (a worked example follows this list).

2. Roles that help deploy large amounts of funding above the current funding bar are more impactful than before (e.g. grantmaking, research, organisation building, entrepreneurship, movement building, plus supporting and feeder roles for these). This means their value has gone up relative to earning to give. (This is what we should expect, because funding and labour are partially complementary: as the amount of funding increases, the value of labour increases.) So if you can find a good opportunity that's a good fit within one of these paths, it's seriously worth considering switching, and the case for switching is stronger than before.

3. If you're earning to give and not going to switch, you could consider trying to add extra value by doing more active grantmaking (e.g. exploring new causes) rather than just topping up large pots. However, you still need to be able to do this better than e.g. the EA Funds, and it might still be more efficient just to earn extra money and delegate your grantmaking. Entering a donor lottery is another good option. It might also be better to focus on community building, or on gaining career capital that might make you happy to switch to direct work in the future (e.g. by saving money).
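To make the numbers in point 1 concrete, here's a minimal sketch of the arithmetic, with a hypothetical $10k/year donor (the multipliers are the illustrative ones from the summary above):

```python
# Illustrative arithmetic for point 1. Multipliers are expressed relative
# to GiveDirectly (direct cash transfers); the $10k donation is a
# hypothetical, not a figure from the post.

donation = 10_000        # hypothetical annual donation, in dollars

current_bar = 10         # today's GiveWell bar: ~10x GiveDirectly
possible_future_bar = 5  # the bar if more funds need to be deployed

value_now = donation * current_bar            # 100,000 "GiveDirectly-dollars"
value_later = donation * possible_future_bar  #  50,000 "GiveDirectly-dollars"

print(value_now / value_later)  # 2.0 -> absolute value halves, but the
                                # donation is still ~5x direct cash transfers
```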

     

I was parsing your comment here as saying that the marginal impact of a GiveWell donation was pretty close to GiveDirectly. Here it seems like you don't endorse that interpretation?

I was wrong about that. The next step for GiveWell would be to drop the bar a little bit (e.g. to 3-7x GiveDirectly), rather than drop all the way to GiveDirectly.

https://twitter.com/moskov/status/1455210000855359490

[anonymous]

I'm curious why you and many EAs who focus on longtermism don't suggest donating to longtermist cause areas (examples often focus on GiveWell or ACE charities). It seems like if orgs I respect, like Open Phil and the Long-Term Future Fund, are giving to longtermist areas, then they think that's among the most important things to fund, which confuses me when I then hear longtermists acting like funding is useless on the margin, or that we might as well give to GiveWell charities. It gives me a sense that perhaps there's either some contradiction going on, or...

Benjamin_Todd
I don't mean to imply that, and I agree it probably doesn't make sense to think longtermist causes are top and then not donate to them. I was just using 10x GiveDirectly as an example of where the bar is within neartermism. For longtermists, the equivalent is donating to the EA Long-Term Future or Infrastructure Funds. Personally, I'd donate to those over GiveWell-recommended charities. I've edited the post to clarify.

I am also curious to understand why you think that earning to give is more impactful than 98%+ of jobs. Also, did you mean 98% of EA-aligned jobs or all jobs?

Benjamin_Todd
It's super rough, but I was thinking about jobs that college graduates take in general. One line of thinking is based on a direct estimate (see the sketch below):

* Average college-grad income is ~$80k, so 20% donations = $16k per year.
* Mean global income is ~$18k vs. GiveDirectly recipients at ~$500.
* So $1 to GiveDirectly creates value equivalent to increasing global income by ~$30.
* So that's ~$500k per year equivalent.
* My impression is that very few jobs add this much to world income (e.g. here's one piece of reading about this). Maybe just people who are both highly paid and do something with a lot of positive externalities, like useful R&D.

Another line of thinking is that earning to give for GiveDirectly is a career path that has already been heavily selected for impact: it contributes to global development, which is one of the most pressing global problems; it supports an intervention and org that's probably more effective than average within that cause; and it involves a strategy with some leverage (i.e. earning to give). So we shouldn't expect it to be easy to find something a lot better.
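A minimal reconstruction of that estimate in code, using the comment's own round numbers. The log-utility framing of the multiplier is my gloss on why ~$30 of world-income equivalent per dollar roughly equals the income ratio; all figures are approximations, not data.

```python
# Rough reconstruction of the direct estimate above.

grad_income = 80_000  # average college-grad income, $/year
donation_rate = 0.20  # donating 20% of income
annual_donation = grad_income * donation_rate  # = $16,000/year

mean_global_income = 18_000  # approximate mean global income, $/year
recipient_income = 500       # approximate GiveDirectly recipient income, $/year

# Under roughly logarithmic utility of income, $1 given to a GiveDirectly
# recipient is worth about (mean / recipient) dollars of income at the
# global mean: ~36x here, which the comment rounds to ~30x.
multiplier = mean_global_income / recipient_income  # 36.0

# Equivalent increase in world income generated by the donations:
equivalent_value = annual_donation * multiplier  # 576,000 -> "~$500k/year"

print(f"${annual_donation:,.0f}/year donated ≈ "
      f"${equivalent_value:,.0f}/year of world-income equivalent")
```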

Thanks for the answer.

Just to make sure I understand #1.

You're saying that if I donated 1000€ to GiveWell right now, my donation would be expected to have 10 times as much impact as a donation to GiveDirectly? However, in the coming years that might change to 5x or 2x?

Benjamin_Todd
I think that's roughly right - though some of the questions around timing donations get pretty complicated.

It seems to me like the primary benefit of typical EA donors giving effectively (say, most GWWC members, or anyone giving less than $100,000/year, i.e. the vast majority of us) comes from the signaling effects of this behavior, which help promote a culture of effective giving and effective altruism.

It still seems very worthwhile for typical EA donors like me to donate, since the direct value of my donations is still substantial and there's potentially this even greater signaling benefit on top of that.

That said, as Ben Todd summarizes in his answer, most EAs (i.e. everyone not in the reference class of people with a nontrivial chance of becoming very wealthy EA donors) can probably do even more good through the various kinds of work that help deploy the large amount of existing EA funding better and faster than they can through their modest donations.

Given that, I wouldn't want to encourage a small donor to donate a modest amount at the expense of them putting less time/effort/attention into shifting into a very valuable direct work career that helps deploy existing EA funds faster/better. But, if donating some percentage of a person's typical income helps keep them engaged with EA and thinking about important questions related to how we can all do the most good, then it definitely seems worth doing to me.

If anyone thinks I'm wrong about this, please let me know!

I agree there's a substantial signalling benefit.

People earning to give might well have a bigger impact via spreading EA than through their donations, but one of the best ways to spread EA is to lead by example. Making donations makes it clear you're serious about what you say.

That's a good point. I hadn't considered signalling benefits.

It is not the full answer, but in my experience active grantmaking is easier for a committed small donor than for a fund.

At least two projects I am aware of, Effective Altruism London and the All-Party Parliamentary Group for Future Generations, were made significantly more likely to happen by active grantmaking from small donors. In both cases the projects were being run by their founders as volunteers. Donors who knew the founders, and could form a view on the value of their work, reached out and said: "this project is good, deserves full-time staff, and I could offer funding if needed". Both projects hired staff, grew, and in later years went on to receive funding from various official EA funds. I think that in this way well-connected EtG'ers can make a huge impact that grantmakers cannot.

I guess that, for me, donating is still morally better than things like buying wine. Plus, no headache.

Also, with the Patient Philanthropy Fund, I guess it's unlikely that we can have too much funding.

I wrote a suggestion here about donating via campaign contributions, which are capped in the US, so many small donors are better than one large donor. https://forum.effectivealtruism.org/posts/FffuQRBYjvm5hiaFw/there-s-a-role-for-small-ea-donors-in-campaign-finance

I'm optimistic we will unlock new opportunities that need funding (Rethink Priorities is working a ton on this), so we should expect the current funding overhang to be temporary. That makes it important to still have future donors ready, and to have large amounts of money saved up and ready to deploy.

Additionally, I think many non-profits would benefit from increased donor diversity on top of the value of a marginal donation, since this improves organizations' stability by reducing idiosyncratic risk. And at least in the US it also helps with possible issues arising from the public support test.

Lastly, I think it would be valuable to have more active grantmaking / exploring / "giving to learn" from more donors. I got pretty involved in EA from doing this.

Can you say a bit more about your reasons for believing that the current funding overhang will be temporary?

E.g. Sam Bankman-Fried's net worth is growing very rapidly (Forbes now puts it at $26.5bn), so there seems to be some reason to believe that the funding overhang is growing at present.

Also, even if we do find new large opportunities to spend money, I think it's important to pay attention to the size of small donors' contributions relative to those of larger donors. Even if the funding overhang shrinks, small donors are going to be relatively less important if the total amount of EA funds is substantially larger than it previously was.

2 Comments

One effect of the new funding, IMO, should be to push individual donors towards giving to up-and-coming causes that are currently adjacent to EA, rather than pumping money into the core areas of the movement. Putting lots of money into the already proven-out causes (like global health & development, AI safety research, etc) is what the big institutional funds will be best suited for, so individuals should seek to complement that by funding more experimental causes and interventions. (Of course big institutional funds also try to make small development grants to a range of up-and-coming charities. But I think it's maybe not their comparative advantage vs individuals.)

This is potentially helpful in two ways:

  • As core areas with the highest expected value become more crowded with funding, the value of donating a marginal dollar to them goes down. Moving your donations to peripheral areas with lower expected value but less crowding could thus be the impact-maximizing thing to do on an individual level (see the toy sketch after this list).
  • But more importantly, funding currently-peripheral causes is an experiment with positive externalities. Early funding can help a charity scale up to a point where it can better demonstrate benefits and room-for-more-funding. This in turn can help move it closer to the core of the movement and unlock another stream of big institutional dollars.
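As a toy illustration of the first bullet (the model is an assumption for illustration, not anything from the comment): if returns to funding within a cause are roughly logarithmic, the marginal value of a dollar falls in proportion to how much funding the cause already absorbs, so a lower-expected-value but far less crowded cause can win on the margin.

```python
# Toy model (assumed, for illustration): total value of a cause is
# base_effectiveness * log(total_funding), so the marginal value of
# the next dollar is base_effectiveness / total_funding.

def marginal_value(total_funding: float, base_effectiveness: float) -> float:
    """Marginal value of the next dollar under logarithmic returns."""
    return base_effectiveness / total_funding

# A crowded core cause: high base effectiveness, ~$1bn already committed.
core = marginal_value(total_funding=1e9, base_effectiveness=10.0)

# A peripheral cause: 20x lower base effectiveness, but only ~$1m funded.
periphery = marginal_value(total_funding=1e6, base_effectiveness=0.5)

print(periphery / core)  # 50.0 -> the neglected cause is ~50x better
                         # per marginal dollar in this toy model
```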

I explain my reasoning more and list a few examples here, but by definition there are a lot of adjacent / non-core cause areas... here is a giant list of cause candidates with lots of EA-adjacent ideas which mostly aren't yet receiving billion-dollar donor commitments.

See this post by Ben Todd for an overview of where current spending is going, by cause area.

I do think that it's a reason to emphasise donations less for people who earn modest amounts of money.

As has been discussed elsewhere, the presence of large amounts of money means that people who aim to earn to give may want to go for projects which have some chance of generating very large returns, even if the chance of success is relatively low.
