
For CEA's Q3 update, we're sharing multiple posts on different aspects of our work.

Over the past year, we’ve doubled down on the strategy we set out last year. The key metrics we were targeting increased significantly (often more than doubling), and we made many strong hires to nearly double our headcount.

So, unless you’ve been paying a lot of attention, CEA is probably somewhat different from what you think.[1]

Our strategy

We think that humanity will have a better chance of surviving this century, and sharply reducing present suffering, if there are many more highly-engaged EAs (“HEAs”). By this, we mean people who are motivated in part by an impartial care for others[2], who are thinking very carefully about how they can best help others, and who are taking some significant actions to help (most likely through their careers).[3]

In the recent past, people we’d consider “highly engaged” have done a lot to improve human lives, reduce the suffering of animals, develop our understanding of risks from emerging technologies, and build up the effective altruism community.

To increase the number of HEAs working on important problems, we are nurturing discussion spaces: places where people can come together to discuss how to effectively help others, and where they can motivate, support, and coordinate with each other.

In particular, we do this via university groups and conferences, both of which have a strong track record of getting people deeply interested in EA ideas, and then helping them find ways to pursue impactful work (as evidenced, for instance, by OpenPhil’s recent survey).

Recent progress

Some highlights:

  • For front-facing programs, the key metrics we focus on have (more than) doubled in the last 12-24 months. For instance:
    • The events team is on track to facilitate roughly twice as many new connections as they did in 2019. We hope this means that many more people are getting mentorship, advice, and career opportunities.
    • We had as many calls with group leaders in the last three months as we did in the whole of last year.
    • More generally, it seems that the activities of EA groups are growing rapidly (maybe as much as 400% in some areas), and our support (via retreats, funding, etc.) is contributing to this.
    • The number of hours people spent logged in to the EA Forum doubled in the last year. Many more people are regularly engaging with some of the community’s best new ideas and resources.
  • We introduced and grew some new products and programs to complement our previous products:
    • Virtual Programs, which have helped over 1000 people learn more about EA ideas in the past year. This has helped to seed new groups, get people more involved in EA, and ensure that high-fidelity versions of EA ideas are shared worldwide.
      • The latest EA Handbook, a set of readings based on the curriculum for our introductory virtual program, which has helped hundreds of additional readers work through similar content at their own pace.
    • A new groups/events platform, which we hope will make it much easier for people to transition from online engagement to in-person engagement.

We think this type of progress is critical, because it means that more people are being exposed to and then engaging deeply with the ideas of effective altruism. We are in the process of assessing how well this progress has translated into more people taking action to help others in the last year, but given previous data, we expect to see a strong connection between these figures and the number of people who proceed to work on important problems.

As for CEA’s internal progress:

  • Our team size nearly doubled, and we were especially pleased with the hires we made this year.
  • Less concretely, when we held our team retreat in September, I felt that:
    • There was a lot more clarity about what we’re doing, and how everyone’s work fits together (in the past, CEA sometimes struggled with a lack of strategic clarity).
    • I’m more excited about the current team than any previous CEA team I’ve been a part of, due to a combination of the people and the culture. (Though I’m also excited to see if we can make further improvements here.)

Mistakes and reflections

I think that the key specific mistakes we made during this period were:

  • The Meta Coordination Forum (a retreat for leaders in the EA meta space) was less valuable than it could have been due to a variety of mistakes we made. We plan to make major changes to address these issues. (More details in the events post.)
  • For at least one hiring round (but not all rounds), I think we should have communicated more promptly with applicants and given them more detailed feedback. Assistants are now supporting hiring managers with emails, and we have updated towards giving more substantive feedback to applicants who make it far in our process.

I also plan to spend part of the next few months reflecting on questions like:

  • Should we be more ambitious, and aim to move more quickly than we currently are?
  • What can we do to make sure we’re not displacing even better community building efforts? And if others do begin to offer similar services, what are the best ways to collaborate effectively with them?

If you are interested in helping us, let me know: finding the right people to hire will help us move forward on many of these improvements, and we’re always keen to diversify our funding base.


  1. This probably applies to most organizations you’re not tracking closely, but I think the scale of change is maybe greater with CEA. ↩︎

  2. Without regard to factors like someone’s nationality, birthdate or species, except insofar as those things might actually be morally relevant. ↩︎

  3. For each of these attributes, we set quite a high bar. And when we evaluate whether we’d think of someone as “highly engaged”, we either interview them or look for other strong evidence (such as their having been hired by an organization with high standards and a strong connection to the EA movement). ↩︎

Comments

Congratulations on this growth, really exciting!

Have you thought about including randomisation to facilitate evaluation?

E.g. you could include some randomisation in who is invited to events (of those who applied), which universities/cities get organisers (of those on the shortlist), etc. This could also be done with 80k coaching calls; I don't know if it has been tried.

You then track who did and didn't get the treatment, to see what effect it had. This doesn't have to involve denying 'treatment' to people/places: presumably there are more applicants than there are places, so you introduce randomisation at the cutoff.

This would allow some causal inference (RCT/randomista-style: does x cause y?) as to what effect these treatments are having, versus a control and the null hypothesis of no effect. This could help justify impact to the community and funders. I'm sure people at e.g. JPAL, Rethink, etc. could help with research design.

I support this idea and have mentioned it previously (e.g. here and here).

This doesn't have to involve denying 'treatment' to people/places: presumably there are more applicants than there are places, so you introduce randomisation at the cutoff.

I'm not sure I understand your proposal correctly. To take a concrete example, say 80k gets 500 coaching requests per year and they only have the capacity to coach 250 people. Presumably they select the 250 people they think are most promising, whereas a randomized study would select 250 people randomly and use the remaining 250 as a control. In a sense, this does not involve denying treatment to anyone, since the same number of people (though not the same people) receive coaching, but it does involve a cost in expected impact, which is what matters in this case (and presumably in most other relevant cases—it would be surprising if EA orgs were not prioritizing when they are unable to allocate a resource or service to everyone who requests it). I think the cost is almost certainly justified, given that no randomized studies have been conducted so far and the existing methods of evaluation are often highly speculative, but this doesn't mean that there are no costs. But as noted, I may be misunderstanding you.

If one is still concerned about the costs, or if randomization is infeasible for other reasons, an alternative is to use a quasi-experimental approach such as a regression discontinuity design. Another alternative is to have a series of Metaculus questions on what the results of the experiment would be if it was conducted, which can be informative even if no experiment is ever conducted.
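To illustrate what the regression discontinuity approach could look like, here is a minimal sketch in Python with simulated, hypothetical data (a real analysis would also need bandwidth selection, robustness checks, and so on):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, hypothetical data: each applicant has a reviewer score, and
# everyone at or above the cutoff received the treatment (e.g. coaching).
n, cutoff = 500, 0.0
score = rng.normal(size=n)
treated = (score >= cutoff).astype(float)
# The true treatment effect at the cutoff is 0.3 by construction.
outcome = 0.2 + 0.3 * treated + 0.1 * score + rng.normal(scale=0.5, size=n)

# Local linear regression within a bandwidth around the cutoff, allowing
# different slopes on each side; the coefficient on `treated` estimates
# the jump (treatment effect) at the cutoff.
bandwidth = 1.0
m = np.abs(score - cutoff) <= bandwidth
centered = score[m] - cutoff
X = np.column_stack([np.ones(m.sum()), treated[m], centered, treated[m] * centered])
coef, *_ = np.linalg.lstsq(X, outcome[m], rcond=None)
print(f"Estimated effect at the cutoff: {coef[1]:.2f}")
```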

I just want to add, on top of Haydn's reply to your comment, that:

  1. You don't need the treatment and the control group to be of the same size, so you could, for instance, randomize among the top 300 candidates.

  2. In my experience, when there isn't a clear metric for ordering, it is extremely hard to make clear judgements. So I think that in practice, it is very likely that, say, places 100-200 in the ranking will seem very similar.

I think that these two factors, combined with Haydn's suggestion to take the top candidates and exclude them from the study, make such a study very reasonable and very low-cost.

Very cool that you've previously mentioned it - nice that we've both been thinking about it!

One proposal is a slight modification. To use your example, you could (a) randomise all 250 places among the 500 applicants, or (b) rank the 500, give the 'treatment' to the top 150, say, then randomise the remaining 100 'treatments' among the 200 applicants around the cutoff (100 above and 100 below). I think either proposal, or an RDD, would be good - but I would defer to advice from actual EA experts on RCTs.
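To make proposal (b) concrete, here is a minimal sketch in Python; the applicant names and the 150/100/200 split are just the hypothetical numbers from the example above:

```python
import random

random.seed(0)  # so the assignment is reproducible and auditable

# Hypothetical setup: 500 ranked applicants, 250 places.
ranked = [f"applicant_{i}" for i in range(500)]  # index 0 = highest-ranked
CAPACITY, MARGIN = 250, 100

# The top 150 are admitted outright; the 200 applicants straddling the
# cutoff (100 above, 100 below) are randomised over the remaining 100 places.
clear_admits = ranked[: CAPACITY - MARGIN]                 # top 150
marginal = ranked[CAPACITY - MARGIN : CAPACITY + MARGIN]   # next 200

treated_marginal = random.sample(marginal, MARGIN)         # 100 get places
control_marginal = [a for a in marginal if a not in treated_marginal]

treatment_group = clear_admits + treated_marginal  # 250 receive the service
# Only the marginal applicants were randomised, so outcomes are compared
# between treated_marginal and control_marginal.
```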

What's the argument against CEA being 10x its current size? I.e. why is this the right size to stick at?

Is there research on what the value of HEAs is, and on why the current amount of money is the right amount to spend finding them?

I think you're assuming that we're planning to stick at this size! I think we'll continue to grow at least somewhat beyond this scale, but I'm not yet confident that 10x would still be cost-effective (in terms of aligned labour).

There is some research on the value of HEAs, but unfortunately it's not mine, so I can't share it. Right now, I'm not particularly concerned that the financial costs of CEA aren't repaid via the number of HEAs we help find. I think that the main thing stopping us from creating more HEAs is probably not funding: it's talent and the ability to coordinate that talent without things breaking as we grow. (Additional funding is still helpful, though: it lets us diversify our funding base and be more stable.)

It feels like CEA was previously avoiding growth and has now started growing. Am I wrong about that? If not, what changed?

You're right that growth was flatter in previous years (though a lot of metrics, e.g. Forum metrics, grew a lot in 2020 too).

On an organizational level, we consolidated in 2019, figured out our strategy and narrowed our scope in 2020. At the beginning of 2021 we had a clear strategy and we got more data on our impact from OP's survey. That made me confident that we should switch into expansion mode (in terms of headcount).

More strategically, I think the community is now better set up to accommodate growth - e.g. many more of the core ideas are written up and shared widely, and there are more orgs doing a lot of hiring. So I think we can grow the number of people in the community somewhat quicker at a given quality level than we could in 2018. I don't think the community should grow too quickly, but I think we should grow more quickly than we did in the last couple of years.

So I think the thing I don't understand is why you think we shouldn't grow the community too quickly. Why is this the right level?

And thanks for being so generous with your time here.

Ah, maybe I was confused because "level" sounded like "total size" to me, whereas I think you mean "why is this rate of growth right?". Is that right?

My current best guess is that we should be targeting roughly 40% growth, which is quite a bit faster than Ben Todd's estimates for previous years. (This is growth of highly-engaged EAs: I think we could grow top of funnel or effective-giving-style brands more quickly.)

The main reason that I think we shouldn't grow too much quicker than this is that I think there are some important things (ways of thinking, norms, some of the fuzzier and cutting-edge research areas) that are best transferred via apprenticeships of some sort (e.g. taking on a junior role at an org, getting mentorship, doing a series of internships). If you think it takes a couple of years of apprenticeship before people are ready to train others, then this puts a bit of an upper limit on growth. And if we grow too much faster than that, I worry that some important norms or ways of thinking (e.g. really questioning your beliefs, reasoning transparency, collaborative discussion norms) don't get passed on, which significantly reduces the value of the community's work.

The main reason that I think, despite that, we should grow at about 40% (which is pretty quick compared to the past) is that if we grow too much slower than this, I just don't see us reaching the sort of scale that we might need to address the problems we're facing (some of which have deadlines, maybe in a decade or two).
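To make the compounding arithmetic behind this concrete (my illustration, with round numbers not taken from the thread): 40% annual growth compounds to roughly 29x over a decade, versus roughly 6x at 20%, which is why the choice of growth rate matters so much on a 10-20 year horizon.

```python
# Illustrative compounding arithmetic (round numbers, not from the post):
for rate in (0.20, 0.40):
    print(f"{rate:.0%}/year -> {(1 + rate) ** 10:.1f}x over a decade")
# 20%/year -> 6.2x over a decade
# 40%/year -> 28.9x over a decade
```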

I'm quite happy to see the progress here. Kudos to everyone at CEA for having been able to scale it without major problems yet (that we know of). I think I've been pretty impressed by the growth of the community; intuitively I haven't noticed a big drop in average quality, which is obviously the thing to worry about with substantial community growth.

As I previously discussed in some related comment threads, CEA (and other EA organizations in general) scaling seems quite positive to me. I prefer this to trying to get tons of tiny orgs, in large part because I think the latter seems much more difficult to do well. That said, I'm not sure how much CEA should try to scale over the next few years; 2x/year is a whole lot to sustain, and over-growth can of course be a serious issue. Maybe 30-60%/year feels safe, especially if many members are siloed into distinct units (as seems to be happening).

Some random things I'm interested in, in the future:

  • With so many people, is there a strong management culture? Are managers improving, in part to handle future growth?
  • What sorts of pockets of people would make great future hires for CEA, but not so much for other orgs? If there are distinct clusters, I could imagine trying to make projects basically around them. We seem pretty limited for "senior EA" talent now, so some of the growth strategy is about identifying other exciting people and figuring out how to best use them.
  • With the proliferation of new community groups, how do we do quality control to make sure none turn into cults or have big scandals, like sexual assault? Sadly, poor behavior is quite endemic in many groups, so we might have to be really extra rigorous to reach targets we'd find acceptable. The recent Leverage issues come to mind; personally, I would imagine CEA would be in a good position to investigate that in more detail to make sure that the bad parts of it don't happen again.

Also, while there's much to like here, I'd flag that the "Mistakes" seem pretty minor? I appreciate the inclusion of the section, but for a team with so many people and so many projects, I would have expected more to go wrong. I'm sure you're excluding a lot of things, but am not sure how much is being left out. I could imagine that maybe something like a rating would be more useful, like, "we rated our project quality 7/10, and an external committee broadly agreed". Or, "3 of our main projects were particularly poor, so we're going to work on improving them next time, but it will take a while."

I've heard before a criticism that "mistakes" pages can make things less transparent (because they give the illusion of transparency), not more, and that argument comes to mind.

I don't mean this as anything particularly negative, just something to consider for next time.

Thanks! Some comments:

  • Yeah, I agree 2x is quite a lot! We grew more this year because I think we were catching up with demand for our projects. I expect more like 50% in the future.
  • Is there a strong management culture? I think there is: I've managed this set of managers for a long while, and we regularly meet to discuss management conundrums, so I think there's a shared culture. We also have shared values, and team retreats to sync up together. But each manager also has their own take, and I think that is leading to different approaches to e.g. project management or goal setting on each team (but not yet to conflict).
  • Are managers improving? Broadly, I think they still are! For each of them, there's generally some particular area they're focused on improving via feedback or mentorship. But I also think that we're all just getting extra years of management under our belt, and that helps a lot. I think we're still interested in also bringing in people with management experience or aptitude, to help us keep scaling.
  • People who are a good fit for CEA: One thing that I think people haven't fully realized is that we're a remote-first org. So if you can't find EA jobs nearby, we might be a good fit. I'm particularly interested in hiring ambitious, agile, user-focused people right now. You can read a lot more on our careers page.
  • I have recently been talking to some people who are interested in setting up new projects that are adjacent to or complementary to our current work, and we're exploring whether some of those could be a part of CEA. So I'm open to that, but the current things are in their early stages. If you are interested in setting up a new thing, and you think it might be better as part of CEA, feel free to get in touch and we can explore that. I think the key reason it might be better at CEA is if it fits in really closely with our current projects, or if there are synergies (e.g. you want to build off Forum tech or do something in the groups space).
  • Re cults/scandals at local groups: I agree that this is a risk. We hope that with more group calls we might catch some of this, but ultimately it's hard to vet all local groups. I'd encourage anyone who has concerns about a group or individual to consider reaching out to Julia Wise.
  • Re mistakes: Those do feel like the biggest ones that directly harmed our outside work. Then I think there were a lot of cases where we could have moved a bit more quickly, or taken on an extra thing that really mattered, or made a slightly better decision. Those really matter too - maybe more than the things that look more like "mistakes" - but it's often a bit hard to write them up cleanly. I guess I think that this post overall gives an accurate summary of the balance of successes vs. harm-causing mistakes, but it's not comprehensive about either. And then it might under-weight all of the missed opportunities. (Our mistakes page has that disclaimer ("not comprehensive") at the top, but I expect people still sometimes see it as comprehensive.)

If you are looking to donate to CEA, the Every.org donation matching program still has $60K in matching funds available (for a 1:1 match up to $100 [USD]). 

No time like the present to convert 100 USD for CEA into 200 USD! The link to CEA's giving page on Every.org is here.
