This post is co-authored by Aleš Flídr and James. We thank Harri Besceli for helpful comments. Our friend Tobias also wrote an excellent post with a lot of overlap that we highly recommend checking out.

In our experience leading Harvard EA and Oxford EA, we've made a lot of mistakes and therefore have a fair number of tips we would give to our past selves.

Note that the heuristics described below were built up in the context of English-speaking universities. Some points will not generalize to regional/national groups or other cultures. We expect readers to be able to judge that for themselves.

Neither of us has any major disagreements with the current CEA strategy or CEA's models of community building. We think that the heuristics below can serve as a good complement to these high-level strategic thoughts.

Some underlying beliefs:

  • The majority of value will come from a few individuals. As with most other groups, a typical student group will draw a disproportionate amount of value from a relatively small number of deeply engaged members. Most of the counterfactual value will come from the deeply immersed and engaged people in your community.
  • Long-termism. Most of the value of our actions will be determined by their impact on the long-run trajectory of humanity (and non-human sentient beings). In the context of community building, this implies a relatively stronger focus on social and epistemic norms and people who can preserve/improve them.
  • The long-term impact of ideas. The long-term impact of EA will be largely determined by the quality of our ideas. We should therefore focus on high-fidelity methods of communication.

What follows is a set of heuristics that we have built up over years. We think that these align relatively well with the underlying assumptions, CEA’s current big-picture strategy and our on-the-ground experience.

Most of these heuristics come from the accumulation of a lot of anecdotal evidence, rather than systematic data-driven analysis. With these caveats in mind, here are the heuristics:

  1. Focus on your understanding of EA. There is no substitute for having detailed models and broad knowledge of everything relevant to EA. People will base their understanding of effective altruism on you, so make sure that you are as well versed in the literature as possible and that you can “cite your sources”.
  2. Default to 1:1s. In hindsight, it is somewhat surprising that 1:1 conversations are not the default student group activity. They have a number of benefits: you get to know people on a personal level, you can present information in a nuanced way, you can tailor recommended resources to individual interests, etc. Proactively reach out to members in your community and offer to grab a coffee with them or go for a walk. 1:1s also give you a good yardstick for evaluating how valuable longer projects have to be to be worth executing: e.g. a 7-hour project would have to be at least as valuable as seven 1:1s, other things equal. Caveat: we definitely don’t mean to imply that you should cut all group or larger-scale activities. We will share some ideas for such activities in a follow-up post.
  3. Avoid naive EA outreach.
    1. Outreach is an offer, not persuasion. It can be tempting to try to persuade as many people as possible about EA and to run events that tweak the EA message in an attempt to appeal to certain people. From our experience, this is generally a dangerous approach, as it leads to low-fidelity, diluted or garbled messages. Instead, think about outreach efforts as an ‘offer’ of EA where people can get a taste of what it’s about and take it or leave it. It’s OK if someone’s not interested. A useful heuristic James used for testing whether to run an outreach event is to ask “to what extent would an audience member now know whether effective altruism is an idea they would be interested in?”. It turned out that many speaker events that Oxford were running didn’t fit this test, and neither did the fundraising campaign.
    2. Don't "introduce EA". It's fine if people don't come across EA ideas in a particular sequence. First, find entry points that capture a person's interest. If someone finds EA interesting and likes the community, they will absorb the basics pretty soon.
  4. Don't teach, signpost. Avoid the temptation to teach EA to people. There’s a lot of great online content, and you won’t be able to explain the same ideas as well or in as much nuance as longform written content, well-prepared talks or podcast episodes. Instead of viewing yourself as a teacher of EA, think of yourself as a signpost. Be able to point people to interesting and relevant material on all areas of EA, and remove friction for people learning more by proactively recommending them content. For example, after a 1:1 meeting, message over 3 links that are relevant to their current bottleneck/area of interest.
  5. Engagement is more important than wide reach. Engagement with the community and its resources has typically been a much better predictor of the value an individual brings to the community than the impressiveness of their CV.
    1. Focus on careers, rather than direct impact or fundraising. Getting people in your student community to make progress towards a high-value career path seems like the highest-value thing your group can produce. Students don’t have a lot of money or skills, so you won’t be able to do much good with direct work or fundraising in a student group. 80k’s survey revealed that a median junior hire would be worth $250k to a typical EA org.
    2. Plan changes dominate. The ‘results’ your local group can deliver vary widely in expected impact: a GWWC pledge is much more impactful than a donation to a fundraiser, a career-plan-change for a priority path is significantly more impactful than a GWWC pledge (by 80,000 Hours’ IASPC metric). Given this, if people already in your group aren’t making career plans it’s more important to work out how you can encourage them to do so, rather than trying to get more people into your group.
    3. Optimize content for the most engaged members. A good heuristic for finding useful things to do is to just ask the most engaged members of the community what they would find most valuable. Send Facebook messages to 10 people asking “what things would you find valuable for us to run for you?”
    4. Try to make the community fun and attractive. Having a fun social atmosphere in your community encourages people to keep on exploring EA and motivates people to take action. Be the one to suggest social activities and introduce people to each other.
    5. Beware excessive formalism. Formal team structures tend to just replicate what’s been done the previous year. A better model for a team is a tight-knit group of ‘conspirators’. Also beware of getting bogged down in meaningless admin as a substitute for learning more about EA.
    6. Develop a toolkit of questions. You want to help people get as engaged as they want and help them skill up as much as you can, but we often do this by lecturing at people and pushing ideas. A more fruitful strategy is to be able to ask the right questions that encourage people to explore and engage further. For more information on how to get people to reach novel insights or change their mind, see David Rock's excellent books Your Brain at Work and Quiet Leadership (yes, we know that this article goes against this advice; this approach is harder in writing). Also consider attending a CFAR workshop (Hamming questions are particularly useful).

In a future post, we will share a couple of projects compatible with these heuristics that worked particularly well for our groups.



Interesting stuff, thanks guys. I wanted to discuss one point:

  1. From conversations with James, I believe Cambridge has a pretty different model of how they run it- in particular, a much more hands-on approach, which calls for formal commitment from more people e.g. giving everyone specific roles, which is the "excessive formalism" approach. Are there reasons you guys have access to which favour your model of outreach over theirs? Or alternate frame; what's the best argument in favour of the Cambridge model of giving everyone an explicit role, and why does that not succeed (if it doesn't)?

For example, is it possible that Cambridge get a significantly higher number of people involved, which then cancels out the effects of immediately high-fidelity models in due course (e.g. suppose lots of people are low fidelity while at Cam, but then a section become more high-fidelity later, and it ends up not making that much difference in the long run)? Or does the Cambridge model use roles as an effective commitment device? Or does one model ensure less movement drift, or less lost value from movement drift? There's a comment from David Moss suggesting there's an "open question" about the value of focussing on more engaged individuals, given the risks of attrition in large movements (assuming the value of the piece, which is subject to lots of methodological caveats).

The questions above might be contradictory; I'm not advocating any of the above, but instead clarifying whether there's anything missed by your suggestions.

To jump in as the ex-co-president of EA: Cambridge from last year:

I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.

There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don't try to force people from the former into the latter (although we hope for the reverse!).

I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this post. A formal committee in my mind significantly improves the ability to get stuff done vs the 'conspirators' approach.

The getting stuff done can then translate to things such as an increased campus presence, and generally a lot more chances to get people into the first stage of the 'funnel'. Last year we ran around 8 events a week, with several of them aimed at engaging and on-boarding new interested people (those being hosting 1 or 2 speakers a week, running outreach-focused socials, introductory discussion groups and careers workshops). This large organisational capacity also let us run ~4 community-focused events a week.

I think it is mostly these mechanisms that make the large committee helpful, as opposed to most of the committee members becoming 'core EAs' (I think conversion ratio is perhaps 1/5 or 1/10). There is also some sense in which the above allow us to form a campus presence that helps people hear about us, and I think perhaps makes us more attractive to high-achieving people, although I am pretty uncertain about this.

I think EA: Cam is a significant outlier in terms of EA student groups, and if a group is starting out it probably makes more sense to stick to the kind of advice given in this article. However, I think in the long term Community + Big formal committee is probably better than just a community with an informal committee.


Thanks for the comment JoshP!

I've spoken a lot with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:

1) I think their committee model selects more for willingness to do menial tasks for the prestige of being in the committee, rather than actual enthusiasm for effective altruism. So something like what you described happens where "a section become more high-fidelity later, and it ends up not making that much difference", as people who aren't actually interested drop out. But it comes at the cost of more engaged people spending time on management.

2) From my understanding, Cambridge viewed the 1-year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suppose roles only play a limited part in committing people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours learning more about EA.

It's worth noting that I think Cambridge have recently been thinking hard about this, and also I expect their models for how their committee provides value to be much more nuanced than I present. Nevertheless, I think (1) and (2) capture useful points of disagreement I've had with them in the past.

as people who aren't actually interested drop out.

This depends on what you mean by 'drop out'. Only around 10% (~5) of our committee dropped out during last year, although maybe 1/3rd chose not to rejoin the committee this year (and about another 1/3rd are graduating)

2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events.

This does not ring especially true to me, see my reply to Josh.


Hey! Thanks for the comment.

I think it captures a few different notions. I'll try and spell out a few salient ones

1) Pushes back against the idea that an outreach talk needs to cover all aspects of EA. e.g. I think some 45-minute intro EA talks end up being really unsatisfactory as they only have time to skim lightly across loads of different concepts and cause areas. Instead I think it could be OK and even better to do outreach talks that don't introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. e.g. I could imagine a talk on differential vs absolute technological progress as being a way to attract new people.

2) Pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I've found that often you can just lend people books or send them articles and they'll be able to pick up the same stuff without it taking up your time.

3) Reframes particular community niches, such as a technical AI safety paper reading group, as also a potential entry-point into the broader community. e.g. People find out about the AI group since they study computer science and find it interesting and then get introduced to EA.

I'm still confused: Intuitively, I would understand "Don't introduce EA" as "Don't do introductory EA talks". The "don't teach" bit also confuses me.

My personal best guess is that EA groups should do regular EA intro talks (maybe 1-2 per year), and should make people curious by touching on some of the core concepts to motivate the audience to read up on these EA ideas on their own. In particular, presenting arguments where relatively uncontroversial assumptions lead to surprising and interesting conclusions ("showing how deep the rabbit hole goes") often seems to spark such curiosity. My current best guess is that we should aim to "teach" such ideas in "introductory" EA talks, so I'd be interested whether you disagree with this.


I think that makes sense and I agree with you. We also have run the sort of things you describe in Oxford.

Maybe don't teach can be understood as 'prefer using resources as a way of conveying ideas, rather than you teaching'.

I agree that we should aim to 'outreach', in '(on-topic) introductory' EA talks, and don't disagree here.


Nice and useful post. I'm trying to find its sequel, on 'projects compatible with these heuristics'. Is it ready? Where do I find it?

Comment mostly copied from Facebook:

I think most will agree that it's not advisable to simply try to persuade as many people as possible. That said, given the widespread recognition that poor or inept messaging can put people off EA ideas, the question of persuasion doesn't seem to be one that we can entirely put aside.

A couple of questions (among others) will be relevant to how far we should merely offer and not try to persuade: how many people we think will be initially (genuinely) interested in EA and how many people we think would be potentially (genuinely) interested in EA were it suitably presented.

A very pessimistic view across these questions is that very few people are inclined to be interested in EA initially and very few would be interested after persuasion (e.g. because EA is a weird idea compelling only to a minority who are weird on a number of dimensions, and most people are highly averse to its core demands). On this view, offering and not trying to persuade, seems appealing, because few will be interested, persuasion won't help, and all you can do is hope some of the well inclined minority will hear your message.

If you think very few will be initially inclined but (relatively) many more would be inclined with suitable persuasion (e.g. because EA ideas are very counter-intuitive, inclined to sound very off-putting, but can be appealing if framed adroitly), then the opposite conclusion follows: it seems like persuasion is high value (indeed a necessity).

Conversely, if you are more optimistic (many people intuitively like EA: it's just "doing the most good you can do + good evidence!") then persuasion looks less important (unless you also think that persuasion can bring many additional gains even above the high baseline of EA-acceptance already).


Another big distinction which I assume is, perhaps, motivating the "offer, don't persuade" prescription, is whether people think that persuasion tends to influence the quality of those counterfactual recruits negatively, neutrally or positively. The negative view might be motivated by thinking that persuading people (especially via dubious representations of EA) who wouldn't otherwise have liked EA's offer will disproportionately bring in people who don't really accept EA. The neutral view might be motivated by positing that many people are turned off (or attracted to) EA by considerations orthogonal to actual EA content (e.g. nuances of framing, or whether they instinctively non-rationally like/dislike things EA happens to be associated with, such as sci-fi). The positive view might be motivated by thinking that certain groups are turned off, disproportionately, by unpersuasive messages (e.g. women and minorities do not find EA attractive, but would do with more carefully crafted, symbolically not off-putting outreach), and thinking that getting more of these groups would be epistemically salutary for some reason.


Another major consideration would simply be how many EAs we presently have relative to desired numbers. If we think we have plenty (or even, more than we can train/onboard), then working to persuade/attract more people seems unappealing and conversely if we highly value having more people, then the converse. I think it's very reasonable that we switch our priorities from trying to attract more people to not, depending on present needs. I'm somewhat concerned that perceived present needs get reflected in a kind of 'folk EA wisdom' i.e. when we lack(ed) people, the general idea that movement building is many times more effective than most direct work, was popularised, whereas now we have more people (for certain needs), the general idea of 'quality trumps quantity' gets popularised. But I'm worried the very general memes aren't especially sensitive to actual supply/demand/needs and would be hard/slow to update, if needs were different. This also becomes very tricky when different groups have different needs/shortages.

Very helpful post. As someone running a German EA group, I didn't really find anything that doesn't apply to us in the same way it did for you.

One interesting thing is your focus on 1:1 conversations: we have never attempted something like this, mostly because we thought it would be at least a bit weird for both parties involved. Did you have the same fear and were proven wrong, or is this a problem you ran into with some people?

We have never attempted something like this, mostly because we thought it would be at least a bit weird for both parties involved.

If that's helpful: EA Berlin has been using 1:1s for a while now, so there doesn't seem to be a cultural context that would make a difference. That said, I usually distinguish between 1:1s with people interested in joining the group and 1:1s with existing group members. We've done the former and are only starting to do the latter (partly because it seemed like a really good idea after talking to James). Introducing that wasn't weird at all; when messaging people saying "we're trying this new thing that might be good for a bunch of different reasons", they seemed quite happy about it, perhaps only a bit confused about what was supposed to happen during the 1:1.

I'd also emphasise the active element of reaching out to people that seem particularly interested instead of just having 1:1s with anyone who approaches you. I like Tobias's suggestion to approach people based on answers they write in a feedback form, but I'm not sure how much effort it'd take to implement that.



I think there are easy ways to make it not weird. Some tips:

1) Emailing from an official email account, rather than a personal one, if you've never met the person before.

2) Mention explicitly that this is 'something you do' and that, for newcomers, you'd like to welcome them into the community. This makes it less strange that you're reaching out to them personally.

3) Mention explicitly that you'll be talking about EA, and not other stuff.

4) It's useful to meet people in real life at an event first and say hello and introduce yourself there.

5) Don't feel like you have an agenda or anything; keep it informal. Treat it as if you were getting to know a friend better and have an enjoyable time.

6) Absolutely don't pressure people; just reach out and offer to meet up if they'd find it useful.

I’m such a big fan of “outreach is an offer, not persuasion”.

In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)

If your quest involves sales as a side-effect (e.g. you’re running an EAGx) then that’s okay, as long as the core of what you’re doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, on making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just “get more EAs”.

I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone’s psyche. Nonetheless l think this is what’s necessary.

Been thinking about EA ideas for the past year now but am new to the forum. This is one of the first posts I've read closely, and I just wanted to say I really appreciate these ideas. I will definitely change the way I communicate EA ideas in the future because of this (and will act more as a signpost than a teacher).