I made a mistake in counting the number of committed community members.
I thought the Rethink estimate of ~7,000 'active' members was for people who answered 4 or 5 out of 5 on the engagement scale in the EA survey, but actually it was for people who answered 3, 4 or 5.
The number of people who answered 4 or 5 is only ~2,300.
I've now added both figures to the post.
Hi Aidan, the short answer is that global poverty seems the most funding constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta e.g. at the top of the 'implications section' I said:
The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.
That said, I still think global poverty is 'talent constrained' in the sense that:
I agree that figure is really uncertain. Another issue is that the mean is driven by the tails.
For that reason, I mostly prefer to look at funding and the percentage of people separately, rather than the combined figure - though I thought I should provide the combined figure as well.
On the specifics:
I'd guess >20 people pursuing direct work could make >$10 million per year if they tried earning to give
That seems plausible, though just to be clear, the relevant reference class is the 7,000 most engaged EAs rather than the people currently doing (or about to start doing) direct work. I think that group might in expectation donate severalfold less than the narrower reference class.
2) I agree you should consider your future income; the percentage should be calculated as a percentage of current assets + NPV of future income.
1) I agree the approach of "work out if the community is above or below the optimum level of investing vs. saving, and then either donate everything, or save everything" makes a lot of sense for small donors. I'd feel pretty happy if someone wanted to try to do that. (Another factor is that it could be a good division of labour for some to specialise in giving soon and others to specialise in investing.)
But I f... (read more)
The estimates are aiming to take account of the counterfactual, i.e. when I say "that person generates value equivalent to extra donations of $1m per year to the movement", the $1m is accounting for the fact that the movement has the option to hire someone else.
In practice, most orgs are practicing threshold hiring, where if someone is clearly above the bar, they'll create a new role for them (which is what we should expect if there's a funding overhang).
The advice below is only about where to donate it, but that's only one of the key questions.
It's also worth thinking hard about how much you want to give, and how to time your giving.
Even if you decide you want to use the entire amount for good, Phil Trammell's model of giving now vs. giving later suggests that you should donate x% of the capital per year, where x is mainly given by your discount rate. In general people think the community as a whole should donate 1-10% per year, so I'd suggest, as a starting point, you could pick a percentage in that r... (read more)
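To make the "donate x% per year" idea concrete, here's a minimal back-of-the-envelope simulation. It assumes constant investment returns and a fixed donation fraction; it's an illustration of the mechanics, not Trammell's actual model, and all the numbers are made up:

```python
# Illustrative only: constant return r and fixed donation fraction x,
# not the full giving-now-vs-later model.
def giving_stream(capital, r, x, years):
    donated = 0.0
    for _ in range(years):
        gift = x * capital                    # donate x% of current capital
        donated += gift
        capital = (capital - gift) * (1 + r)  # remainder compounds at r
    return donated, capital

# E.g. $1m of capital, 5% returns, donating 5% per year for 20 years.
donated, remaining = giving_stream(1_000_000, 0.05, 0.05, 20)
```

With these (hypothetical) numbers you end up having donated roughly as much as you retain after 20 years, which is why the choice of x matters so much.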
(I agree with many points in this answer. But for communication succinctness and because this answer is highly upvoted, I will only point out my disagreements)
In general people think the community as a whole should donate 1-10% per year, so I'd suggest, as a starting point, you could pick a percentage in that range to donate each year.
There are lots of complications[...]. But I think a good prior is to donate something close to the optimal percentage for the community as a whole
I don't share this intuition at all fwiw, because of two considerations: 1) From ... (read more)
I like that you suggest that people should give what they are comfortable giving. I think that's advice I'd want someone to give my friend and I think it's wiser in the long term.
Seems reasonable. Salaries are also lower than in AI. You could make a similar argument about animal welfare, though, I think.
Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work?
It depends on what you mean by 'entry level' & relative fit in each path, but the short answer is yes.
If someone was earning $1m per year and didn't think that might grow a lot further from there, I'd encourage them to seriously consider switching to direct work.
I.e. I think it would be worth doing a round of speaking to people at the key orgs, making applications and exploring options for several months (esp insofar as that can be done w... (read more)
I think I disagree with those fermis for engagement time.
My prior is that in general, people are happier to watch videos than to read online articles, and they're happier to read online articles than to read books. The total time per year spent reading books is pretty tiny. (E.g. I think all time spent reading DGB is about 100k hours, which is only ~1yr of the 80k podcast or GiveWell's site.)
I expect that if you sign someone up to a newsletter and give them a book at the same time, they're much more likely to read a bunch of links from the newsletter t... (read more)
Seems like they've changed the form to be the 2017 career guide.
It might also be worth noting that some of the books on the list have a track record of getting great people into EA, while most of them don't. I expect the EV of getting someone to read DGB or the Precipice is over 10x the EV of many of the other books on the list.
It's great you're helping make this easier.
One quick thought: while handing out a book at a talk is probably net positive, I expect eg getting someone onto your mailing list will be significantly better, because then you can then tell them about future events.
Getting someone into EA usually takes several years, so my guess is that we should use free books to get people to fill out feedback forms, sign up to fellowships, join mailing lists, make referrals, and things like that - more than just handing them out.
Asking for something ... (read more)
To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.
As you said, I think books can be combined with mailing lists. (If there was a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5% probability (most people don't read newsletters?) of getting someone to engage for ~40h via a mailing list. And while I'd... (read more)
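The rough comparison above cashes out as expected engagement hours. All four figures are the comment's guesses, not data:

```python
# Guessed figures from the comment above - illustrative only.
p_book, hours_book = 0.20, 10   # ~20% chance of ~10h engagement via a book
p_list, hours_list = 0.05, 40   # ~5% chance of ~40h engagement via a mailing list

ev_book = p_book * hours_book   # expected engagement hours via the book
ev_list = p_list * hours_list   # expected engagement hours via the mailing list
# Both come out to ~2 expected hours, hence "similarly good".
```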
We put this question to Alexander Berger in our recent podcast. Best to engage with his response directly, but the very short version is that they do expect to be able to find some opportunities within policy that are even more leveraged than AMF (AMF is 20x leveraged on cash transfers, but maybe 100x or more is possible), and that's why they're currently hiring someone to work on each of SE Asian air quality advocacy and advocacy for more effective international aid, and also people to look for other areas like this. Though, my impres... (read more)
On b), for exactly that reason, our donors at least usually focus more on the opportunity costs of the labour input to 80k rather than our financial costs - looking mainly at 'labour out' (in terms of plan changes) vs. 'labour in'. I think our financial costs are a minority of our total costs.
On a), yes, you'd need to hope for a better return than the 'doubling leads to +10% labour' estimate I made.
If we suppose a 20% increase is sufficient for +10% labour, then the new situation would be:
Total costs: $1.32m
So, the excess value has increased from $... (read more)
That's great to hear!
I should have clarified my points weren't meant as disagreements - I think we're basically on the same page.
I do think aggressive 80/20ing often makes sense
Yes, I agree. One way to reconcile the two comments is that you need to focus on the 20% of most valuable activities within each aspect (marketing, ops, follow up), but you can't drop any aspect. I also agree that it's likely that 'really focusing on what drives impact' is more important than 'really caring', though I think simply caring and trying can go a fairly long way.
On living... (read more)
Thinking out loud / random comments:
Great points, thanks for commenting Ben! Responding to each of the points:
In my experience, running local group events was like an O-ring process. If you're running a talk, you need to get the marketing right, the operations right, and the follow up right. If you miss any of these, you lose most of the value. This means that having an organiser who is really careful about each stage can dramatically increase the impact of the group. So, I'd highlight 'really caring' as one of the key traits to have.
I think I mostly agree with this (and strongly... (read more)
It seems like the impact of running a local group well is often underappreciated, but I think it's one of the highest-impact things you can do as a student or recent graduate, and also one of the highest-impact volunteer opportunities.
It's great to have this write up making a more detailed case. I recently released this stub profile on running a local group, and have added a link to this post.
Not sure I follow the maths.
If there are now 10 staff, each paid $100k, and each generating $1m of value p.a., then we're paying $1m to generate $10m of value, for a net gain of $10m - $1m = $9m. The CBR is 10:1.
If we double salaries and get one extra staff member, we're now paying $2.2m to generate $11m of value. The excess is $8.8m. The average CBR has dropped to 5:1, and the CBR of the marginal $1.2m was actually below 1.
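A quick sanity check of that arithmetic, using the comment's hypothetical numbers:

```python
# Hypothetical numbers from the comment above - not real org data.
def scenario(staff, salary, value_per_staff):
    cost = staff * salary
    value = staff * value_per_staff
    return cost, value

# Baseline: 10 staff at $100k, each generating $1m of value p.a.
cost0, value0 = scenario(10, 100_000, 1_000_000)
# Doubled salaries buy one extra hire: 11 staff at $200k.
cost1, value1 = scenario(11, 200_000, 1_000_000)

avg_cbr_before = value0 / cost0                      # 10.0
avg_cbr_after = value1 / cost1                       # 5.0
marginal_cbr = (value1 - value0) / (cost1 - cost0)   # ~0.83, i.e. below 1
```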
I'm just saying that when we think offering more salary will help us secure someone, we generally do it. This means that further salary raises seem to offer low benefit:cost. This seems consistent with econ 101.
Likewise, it's possible to have a lot of capital, but for the cost-benefit of raising salaries to be below the community bar (which is something like invest the money for 20yr and spend on OP's last dollar - which is a pretty high bar). Having more capital increases the willingness to pay for labour now to some extent, but tops out after a poi... (read more)
I definitely agree EAs are motivated somewhat by money in this range.
My thought is more about how it compares to other factors.
My impression of hiring at 80k is that salary rarely seems like a key factor in choosing us vs. other orgs (probably under 20% of cases). If we doubled salaries, I expect existing staff would save more, donate more, and consume a bit more; but I don't think we'd see large increases in productivity or happiness.
My impression is that this is similar at other orgs who pay similarly to us. Some EA orgs still pay a lot less, and I... (read more)
Agree that we shouldn't expect large productivity/wellbeing changes. Perhaps a ~0.1 SD improvement in wellbeing, and a single-digit percentage improvement in productivity - small relative to effects on recruitment and retention.
I agree that it's been good overall for EA to appear extremely charitable. It's also had costs though: it sometimes encouraged self-neglect, portrayed EA as 'holier than thou', EA orgs as less productive, and EA roles as worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding... (read more)
Readers might be interested in this Twitter thread on megaprojects, and the forum discussion of ideas.
This is a big topic, and there are lots of factors.
One is that paying very high salaries would be a huge PR risk.
That aside, the salaries at many orgs are already good, while the most aligned people are not especially motivated by money. My sense is that e.g. doubling the salaries from here would only lead to a small increase in the talent pool (like maybe +10%).
Doubling costs to get +10% labour doesn't seem like a great deal - that marginal spending would be about a tenth as cost-effective as our current average. (And that's ignoring the PR and cultural costs.)
Some orgs are probably underpaying, though, and I'd encourage them to raise salaries.
This kind of ambivalent view of salary increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.
If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you'd expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but far fewer do. You'd expect EAs to be nearly indifferent toward pay in job choice, but they're not. You'd expect that when you increase EAs' salaries, they'd just donate a large portion on to great tax-deductible charities, ... (read more)
I was thinking of donating 10% vs. some part time work / side projects.
I agree that someone with the altruism willing to donate say 50% of their income but who isn't able to get a top direct work job could donate more like $10k - $100k per year (depending on their earning potential, which might be high if they're willing to do something like real estate, sales or management in a non-glamorous business).
Though I still feel like there's a good chance there's someone that dedicated and able could find something that produces more impact than that, given the f... (read more)
Thanks! I probably should have just used the 2020 figure rather than the 2017-2019 average.
My estimate was an $80m allocation by Open Phil to global health, but this would suggest $100m.
That makes sense, thanks for the comment.
I think you're right looking at ex post doesn't tell us that much.
If I try to make ex ante estimates, then I'd put someone pledging 10% at a couple of thousand dollars per year to the EA Funds or equivalent.
But I'd probably also put similar (or higher) figures on the value of the other ways of contributing above.
Very quick comment: I think I feel this intuition, but when I step back, I'm not sure why potential to contribute via donations should reduce more slowly with 'ability' than potential to contribute in other ways.
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).
I wonder if people who can't do the top direct work jobs wouldn't be able to have more impact by worki... (read more)
Yes, sorry I was using 'global health' as a shorthand to include 'and development'.
For other near term, that category was taken from the EA survey, and I'm also unsure exactly what's in there. As David says, it seems like it's mostly mental health and climate change though.
Yes, I agree. Different worldviews will want to spend a different fraction of their capital each year. So the ideal allocation of capital could be pretty different from the ideal allocation of spending. This is happening to some degree where GiveWell's neartermist team are spending a larger fraction than the longtermist one.
If lots of the people working on 'other GCRs' are working on great power conflict, then the resources on broad longtermism could be higher than the 1% I suggest, but I'd expect it's still under 3%.
I should probably have just said that OP seems very interested in the last dollar problem (and that's ~60% of grantmaking capacity). Agree with your comments on meta.
With cause prioritisation research, I'd be trying to think about how much more effectively it lets us spend the portfolio, e.g. a 1% improvement to $420 million per year of spending is worth about $4.2m per year.
Though, to be clear, I think this is only a moderate reason (among many other factors) in favour of donating to global health vs. say biosecurity.
Overall, my guess is that if someone is interested in donating to biosecurity but worried about the smaller existing workforce, then it would be better to:
Sure, though I still think it's misleading to say that the survey respondents think "EA should focus entirely on longtermism".
Seems more accurate to say something like "everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies."
To be clear, my primary reason for why EA shouldn't entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.
To some degree my response to this situation is "let's create a separate longtermist community, so that I can indeed invest in that in a way that doesn't get diluted with all the other things that seem relatively unimportant to me". If we ha... (read more)
I agree it's not entailed by that, but both Will and Toby were also in the Leaders Forum Survey I linked to. From knowing them, I'm also confident that they wouldn't agree with "EA should focus entirely on longtermism".
It would indeed be ironic - the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment.
I was talking about the EA Leaders Forum results, where people were asked to compare dollars to the different EA Funds, and most were unwilling to say that one fund was even 100x higher-impact than another; maybe 1000x at the high end. That's rather a long way from 10^23 times more impactful.
Cool. Yeah, EA funds != cause areas. Because people may think that work done by EA funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other, which is the sort of hard-line view that one might want to mention.
I'd be happy to see more going to meta at the margin, though I'd want to caution against inferring much from how much the EA Infrastructure Fund has available right now.
The key question is something like "can they identify above-the-bar projects that are not getting funded otherwise?"
I believe the Infrastructure team has said they could fund a couple of million dollars worth of extra projects, and if so, I hope that gets funded.
Though even that also doesn't tell us much about the overall situation. Even in a world with a big funding overhang, we should expect there to be some gaps.
Good point, I agree that's a factor.
We should want funding to go into areas where there is more existing infrastructure / it's easier to measure results / there are people who already care about the issue.
Then aligned people should focus on areas that don't have those features.
It's good to see this seems to be happening to some degree!
My hope is that someone with more time to do it carefully will be able to do this in the future.
Having on-going metaculus forecasts sounds great too.
No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.
the "strong longtermism" camp, typified by Toby Ord and Will MacAskill, who seem to imply that Effective Altruism should focus entirely on longtermism.
They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what's best at the margin; the... (read more)
Just to second this because it seems to be a really common mistake: Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.
Also, I think the author would be able to avoid what they see as a "non-rigorous" decision to weight the short-term and long-term the same by reconceptualising the uneasiness around longtermism dominating their actions as an u... (read more)
I do think it is important to distinguish these moral uncertainty reasons from moral trade and cooperation and strategic considerations for hedging. My argument for putting some focus on near-termist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me.
As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.
It's tricky because the paper on strong longt... (read more)
No-one says longtermist causes are astronomically more impactful.
Not that it undermines your main point - which I agree with, but a fair minority of longtermists certainly say and believe this.
Are there two different proposals?
I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).
I have an intuition these lead different places – would be interested in thoughts.
Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).
Yes - part of the reason the funding overhang dynamic is happening in the first place is that it's really hard to think of a project that has a clearly net positive return from a longtermist perspective, and even harder to put it into practice.
Yes, I wouldn't say CSET is a megaproject, though more CSET-like things would also be amazing.
Yes, basically - if you're starting a new project, then all else equal, go for the one with highest potential total impact.
Instead, people often focus on setting up the most cost-effective project, which is a pretty different thing.
This isn't a complete model by any means, though :) Agree with what Lukas is saying below.
I agree this is a big issue, and my impression is many grantmakers agree.
In longtermism, I think the relevant benchmark is indeed something like OP's last dollar in the longtermism worldview bucket. Ideally, you'd also include the investment returns you'll earn between now and when that's spent. This is extremely uncertain.
Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's ... (read more)
Agree with this. I just want to be super clear that I think entrepreneurs should optimise for something like cost-effectiveness x scale.
I think research & advocacy orgs can often be 10x more cost-effective than big physical projects, so a $10m research org might be as impactful as a $100m physical org, so it's sometimes going to be the right call.
But I think the EA mindset probably focuses a bit too much on cost-effectiveness rather than scale (since we approach it from the marginal donor perspective rather than the entrepreneur one). If we're also lea... (read more)
The reason most EA founders (and aspiring founders) act as if money is scarce is that the lived experience of most EA founders is that money is hard to get. As far as I know, this is true in all cause areas, including longtermism.
Epistemic status: Moderate opinion, held weakly.
I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards". In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly at >$10B/year scales"), but it's not clear to me whether people should bother creating scalable charities that are sl... (read more)
cost-effectiveness x scale
So just total impact?
Glad you're thinking about this!
I've never had much luck myself trying to fundraise just by posting to the forum. Just in case you're not already, I'd suggest trying to approach some potential purchasers in the $1-$10m range directly via email.
A lot of this rings true to me.
I agree there are lots of forms of useful research that could feed into this, and in general better ideas feel like a key bottleneck for EA. I'm excited to see more 'foundational' work and disentanglement as well. Though I do feel like at least right now there's an especially big bottleneck for ideas for specific shovel-ready projects that could absorb a lot of funding.
Ah good point. I only found the metaculus questions recently and haven't thought about them as much.
One extra thought is that there was a longtermist incubator project for a while, but they decided to close it down. I think one reason was they thought there weren't enough potential entrepreneurs in the first place, so the bigger bottleneck was movement growth rather than mentoring. I think another bottleneck was having an entrepreneur who could run the incubator itself, and also a lack of ideas that can be easily taken forward without a lot more thinking. (Though I could be mis-remembering.)
An extra thought is that this seems like a positive update on the cost-effectiveness of past meta work.
Here's a rough and probably overoptimistic back of the envelope to illustrate the idea:
I'd guess that maybe $50m was spent on formal movement building efforts in 2020. This is intended to include things like OP & GiveWell's spending on staff, most of FHI and MIRI, plus all of the explicit movement building orgs like CEA and 80k. If that started at 0 in 2010, then it might add up to $250m over the decade (assuming straight line growth).
If the ave