The unstated claim is that the charities EAs are donating to now are significantly more effective than where people would have donated otherwise (assuming they would have donated at all).
If the gain in cost-effectiveness is (say) 10-fold, then the value of where the money would have been donated otherwise is only 10% of the value now generated. That would reduce the cost-effectiveness multiple from 10x to 9x.
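To spell out the arithmetic (a minimal sketch – the 10-fold figure is just the illustrative assumption above):

```latex
% Let v be the value the displaced donation would have produced.
% If the EA donation produces 10v, the counterfactual-adjusted gain is
\frac{10v - v}{v} = 9
% i.e. a 10x gross multiple becomes a 9x net multiple.
```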
A 10x average gain seems pretty plausible to me – though it's a big question!
Some of the reasoning is here, though this post is about careers rather than donations: https://80000hours.org/articles/careers-differ-in-impact/
Thank you for writing this – I think it's a useful framing.
Excited altruism can sound like it's making light of the world's problems, while the obligation framing sounds too sacrificial, negative, and internally conflicted. This is a nice middle ground – capturing an appropriate level of seriousness, while also being a route to an aligned & fulfilling life.
(I'm also not sure it's a good description of my motivation system, though I've had periods when building 80k felt like my central purpose, and I think it's really valuable to have a vision like this on the table – in fact, part of me is a bit envious of people who feel like this.)
Super helpful, thank you!
Just zooming in on the two biggest ones. One was CSET, where I think I understand why it's Phase 1.
The other is this one: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/massachusetts-institute-of-technology-ai-trends-and-impacts-research-2022
Is this Phase 1 because it's essentially an input to future AI strategy?
Just wanted to add that at 80k we see a lot of people who could benefit from these three things, even people who are pretty interested in EA. In fact, I'd say these three things are a pretty good summary of the main value-adds and aims of 80k's one-on-one team.
Just wanted to add that I did a rough cost-effectiveness estimate of the average of all past movement-building efforts, using the EA growth figures here. I found an average return of 60:1 for funding and 30:1 for labour. At equilibrium, anything above 1:1 is worth doing, so I expect that even if we 10x'd the level of investment, it would still be positive on average.
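To illustrate why (a toy model, not the actual estimate – the power-law form and the exponent are assumptions I've picked purely for illustration):

```latex
% Suppose the average return at the current investment level I_0 is 60:1,
% and returns diminish as a power law in the scale of investment I:
R(I) = 60 \left( \tfrac{I}{I_0} \right)^{-\alpha}
% Even with steeply diminishing returns (\alpha = 1), a 10x scale-up
% gives R(10 I_0) = 60 / 10 = 6, still well above the 1:1 bar.
```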
I think there's a lot more thinking to be done about how to balance altruism and other personal goals in career choice, where – unlike donations – you have to pursue both types of goal at the same time. So I was happy to see this post!
I'd find it really useful to see a list of recent Open Phil grants, categorised as Phase 1 vs. Phase 2.
This would help me better understand the distinction, and would also make it more convincing that most effort is going into Phase 1 rather than Phase 2.
Random, but in the early days of YC they said they used to have a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
I'm not sure that's an entirely bad thing, because frugality seems like a mixed virtue – e.g. it can lead to:
However, we need new hard-to-fake signals of seriousness to replace frugality. I'm not sure what these should be, but here are some alternative things we could try to signal, which seem closer to what we most care about:
The difficulty is thinking of hard-to-fake and easy-to-explain ways to show we're into these.
2) Another way to see the problem is that in the past we've used the following idea to get people into EA: "you can save a life for a few thousand dollars and should maximise your donations to that cause". But this idea is obviously in tension with the activities that many see as the top priorities these days (e.g. wanting to convince top computer scientists to work on the AI alignment problem).
My view is that we should try to move past this way of introducing effective altruism, and instead focus more on ideas like:
Thank you again for all your work on this - it's super useful, and maybe a significant update for me. (I wish we'd done more surveying work like this years ago!)
A) I agree the attitude-behaviour gap seems like perhaps the biggest issue in interpreting the results (maybe the people who are most proactive and able to act are the ones who have already heard of EA, so we've already reached more of the audience than these results suggest).
One way to get at that would be to define EA interest using the behavioural measures, and then check which fraction had already heard of EA.
E.g. you mention that ~2% of the sample clicked on a link to sign up to a newsletter about EA. Of those, what fraction had already heard of it?
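For concreteness, here's a minimal sketch of the check I have in mind – the file and column names are hypothetical, since I don't know how the survey data is actually structured:

```python
import pandas as pd

# Hypothetical file/column names; the real survey data will differ.
df = pd.read_csv("survey_responses.csv")

# Behavioural measure of interest: respondents who clicked the
# newsletter sign-up link (~2% of the sample, per the post).
engaged = df[df["clicked_newsletter"]]

# Of those, what fraction had already heard of EA?
frac_already_reached = engaged["heard_of_ea"].mean()
print(f"{frac_already_reached:.1%} of behaviourally interested "
      f"respondents had already heard of EA")
```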
B) Some notes that illustrate the importance of the attitude-behaviour gap:
C) One minor thing – in future versions, it would be useful to ask about climate change as a cause to work on. I expect it would be the most popular of the options, and would therefore be better for 'meeting people where they are' than extreme poverty.