Topic Contributions


Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

Thank you again for all your work on this - it's super useful, and maybe a significant update for me. (I wish we'd done more surveying work like this years ago!)


A) I agree the attitude-behaviour gap seems like perhaps the biggest issue in interpreting the results (maybe the most proactive people, and those most able to act, are the ones who have already heard of EA, so we've already reached more of the audience than these results suggest).

One way to get at that would be to define EA interest using the behavioural measures, and then check which fraction had already heard of EA.

E.g. you mention that ~2% of the sample clicked on a link to sign up to a newsletter about EA. Of those, what fraction had already heard of it?


B) Some notes that illustrate the importance of the attitude-behaviour gap:

  • ~8.8% were highly sympathetic.
  • Of those, ~85% didn’t know what EA was, suggesting that we’ve only reached about 1/6 of the people who are sympathetic to EA.
  • If (guessing) ~5% of the highly sympathetic people are willing and able to take major action, that would suggest that ~0.4% of students at top 40 English-speaking unis could become highly engaged EAs in the future. That’s ~8,000 people under 30, or 400 graduates per year. (This ignores the possibility of convincing people who are currently less sympathetic.)
  • There’s maybe ~1,000 highly engaged English-speaking EAs under 30 now who are graduates of those unis, suggesting we’ve reached 1/8 of the potential.
  • If it’s 10% who would take major action, we’ve only reached 1/16; while if it’s under 1% then we’ve already reached most of them.
  • So a key parameter is how much sympathy on this scale can be translated into action. If high, then we have a long way to go; if low, then we might have reached most already. 
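To make the arithmetic behind the bullets explicit, here's a rough sketch in Python. All inputs are the guesses stated above; the student-population figure is backed out from the "~0.4% → ~8,000 people" numbers, not an independent estimate.

```python
# Rough sketch of the Fermi estimate above. All inputs are the guesses
# from the bullet points; students_under_30 is backed out from the
# "~0.4% -> ~8,000 people" figures, not an independent estimate.
students_under_30 = 1_800_000   # assumed students/recent grads at top-40 English-speaking unis
sympathetic = 0.088             # highly sympathetic fraction (from the survey)
action_rate = 0.05              # guess: fraction willing and able to take major action

potential_fraction = sympathetic * action_rate              # ~0.44%
potential_people = students_under_30 * potential_fraction   # ~7,900

engaged_now = 1_000             # rough count of highly engaged EAs under 30 today
reached = engaged_now / potential_people                    # ~1/8

print(f"potential: {potential_fraction:.2%} of students (~{potential_people:,.0f} people)")
print(f"reached so far: ~{reached:.0%} of potential")
```

Changing `action_rate` to 0.10 or 0.01 reproduces the 1/16 and "already reached most" cases in the last two bullets.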


C) One minor thing – in future versions, it would be useful to ask about climate change as a cause to work on. I expect it would be the most popular of the options, and is therefore better for 'meeting people where they are' than extreme poverty.



EA and the current funding situation

The unstated claim is that the charities EAs are donating to now are significantly more effective than where people would have donated otherwise (assuming they would have donated at all).

If the gain in cost-effectiveness is (say) 10-fold, then the value of where the money would have been donated otherwise is only 10% of the value now generated. That would reduce the cost-effectiveness multiple from 10x to 9x.
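Written out, the adjustment is just a subtraction (the 10x figure is the illustrative assumption from above):

```python
# Counterfactual adjustment: if the money would otherwise have gone
# somewhere 1/10th as effective, the forgone value offsets 1x of the gain.
gross_multiple = 10.0                          # assumed cost-effectiveness gain
counterfactual_value = gross_multiple * 0.10   # value of where it would have gone otherwise
net_multiple = gross_multiple - counterfactual_value  # 9.0
```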

I think a 10x average gain seems pretty plausible to me – though it's a big question!

Some of the reasoning is here, though this post is about careers rather than donations:

Altruism as a central purpose

Thank you for writing this – I think it's a useful framing.

Excited altruism can sound like it's making light of the world's problems, while the obligation framing sounds too sacrificial / negative / prone to internal conflict. This is a nice middle ground – capturing an appropriate level of seriousness, while also being a route to an aligned & fulfilling life.

(I'm also not sure it's a good description of my own motivation system, though I've had periods when building 80k felt like my central purpose, and I think it's really valuable to have a vision like this on the table – in fact part of me is a bit envious of people who feel like this.)

Longtermist EA needs more Phase 2 work

Super helpful, thank you!

Just zooming in on the two biggest ones. One was CSET, where I think I understand why it's Phase 1.

The other is this one:

Is this Phase 1 because it's essentially an input to future AI strategy?

Three Reflections from 101 EA Global Conversations

Just wanted to add that at 80k we notice a lot of people who could benefit from these three things, even people who are pretty interested in EA. In fact, I'd say these three things are a pretty good summary of the main value-adds and aims of 80k's one-on-one team.

FTX/CEA - show us your numbers!

Just wanted to add that I did a rough cost-effectiveness estimate of the average of all past movement building efforts using the EA growth figures here. I found an average of 60:1 return for funding and 30:1 for labour. At equilibrium, anything above 1 is worth doing, so I expect that even if we 10x the level of investment, it would still be positive on average.
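To illustrate why a 10x scale-up could still clear the bar, here's a toy check under a deliberately pessimistic assumption (that returns fall in direct proportion to total investment); the 60:1 and 30:1 figures are the averages from the estimate above, and the 1/x model is my illustrative assumption, not part of the original estimate.

```python
# Toy diminishing-returns check. Assumes (pessimistically) that returns
# at 10x scale are a tenth of today's average return (a 1/x model).
avg_return_funding = 60   # estimated average return per unit of funding
avg_return_labour = 30    # estimated average return per unit of labour
scale_up = 10             # hypothetical 10x increase in investment

marginal_funding = avg_return_funding / scale_up  # 6.0
marginal_labour = avg_return_labour / scale_up    # 3.0

# Anything above 1:1 is worth doing at equilibrium.
assert marginal_funding > 1 and marginal_labour > 1
```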

My bargain with the EA machine

I think there's a lot more thinking to be done about how to balance altruism and other personal goals in career choice, where – unlike donations – you have to pursue both types of goal at the same time. So I was happy to see this post!

Longtermist EA needs more Phase 2 work

I'd find it really useful to see a list of recent Open Phil grants, categorised by phase 2 vs. 1.

This would help me to better understand the distinction, and also make it more convincing that most effort is going into phase 1 rather than phase 2.

Free-spending EA might be a big problem for optics and epistemics

Random, but in the early days of YC they said they used to have a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.

Free-spending EA might be a big problem for optics and epistemics

1)  One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.

I'm not sure that's an entirely bad thing, because frugality seems mixed as a virtue e.g. it can lead to:

  • Not spending money on clearly worthwhile things (e.g. not paying for a larger table at a student fair even when it would result in more sign-ups; not getting a cleaner when you earn over $50/hour), which can also make us seem not serious about maximising impact (e.g. this comment).
  • Even worse, getting distracted from the top priority by worrying about efforts to save relatively small amounts of money. Or not considering high-upside projects that require a lot of resources but have a good chance of failure, due to a fear of not being able to justify the spending.
  • Feelings of guilt around spending and not being perfectly altruistic, which can lead to burnout.
  • Filtering out people who want a normal middle-class lifestyle & family but could have had a big impact (and who go work at FAANG instead). Filtering out people from low-income backgrounds or with dependents.

However, we need new hard-to-fake signals of seriousness to replace frugality. I'm not sure what these should be, but here are some alternative things we could try to signal, which seem closer to what we most care about:

  • That we nerd out hard about doing good.
  • Intense focus on the top priority.
  • Doing high-upside things even if they seem unconventional and there's a good chance they won't work out.
  • Giving 10% (or more), which is compatible with non-frugality.

The difficulty is to think of hard-to-fake and easy-to-explain ways to show we're into these.


2) Another way to see the problem is that in the past we've used the following idea to get people into EA: "you can save a life for a few thousand dollars and should maximise your donations to that cause". But this idea is obviously in tension with the activities that many see as the top priorities these days (e.g. wanting to convince top computer scientists to work on the AI alignment problem).

My view is that we should try to move past this way of introducing effective altruism, and instead focus more on ideas like:

  • Let's do the most we can to tackle big, neglected global problems. (I'd probably start by introducing climate change and/or pandemics rather than global health.)
  • Find high-upside projects that help tackle the biggest bottlenecks in those problems.
  • If you want to do good, do it effectively, and focus on the highest-leverage ways you can help (but ~no-one is perfectly altruistic and it's fine to have a nice life too).