It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.[1]
Cause-first
Will MacAskill's proposed Definition of Effective Altruism is composed of[2]:
- An overarching effort to figure out what are the best opportunities to do good.
- A community of people that work to bring more resources to these opportunities, or work on these directly.
This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.
CEA's definition and strategy seem to be mostly along these lines:
Effective altruism is a project that aims to find the best ways to help others, and put them into practice.
It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.
and
Our mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.
Member-first
Let's try out a different definition for the EA community, taken from CEA's guiding principles[3]:
What is the effective altruism community?
The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.
This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal[4][5].
What's the difference?
I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.
I'll list some examples and caricatures of the distinction between the two, in no particular order, to give a more intuitive grasp of how these strategies differ:
| Leaning cause-first | Leaning member-first |
|---|---|
| Keep EA Small and Weird | Big Tent EA |
| Current EA Handbook (focus on introducing major causes) | 2015's EA Handbook (focus on core EA principles) |
| 80,000 Hours | Probably Good |
| Wants more people doing high-quality AI Safety work, regardless of their acceptance of EA principles | Wants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to |
| Targeted outreach to students at high-ranking universities | Broad outreach with diverse messaging |
| Encourages people to change occupations to focus on the world's most pressing problems | Encourages people to use the tools and principles of EA to do more good in their current trajectory |
| Risk of people not finding useful ways to contribute to top causes | Risk of too few people wanting to contribute to the world's top causes |
| The community as a whole leads by example, taking in-depth prioritization research with the proper seriousness | Each individual focuses more on how to implement EA principles in their own life, taking their personal worldview and situation into account |
| Community members delegate to high-quality research and think less for themselves, but more people end up working on higher-impact causes | Community members think for themselves, which improves their ability to do more good, but they make more mistakes |
| The case of the missing cause prioritization research, Nobody's on the ball on AGI alignment, and many amazing object-level posts making progress on particular causes | The case against "EA cause areas", EA is three radical ideas I want to protect, "Big tent" effective altruism is very important (particularly right now), and many posts where people share their own decisions and dilemmas |
| ... | ... |
Personal takeaways
I think the EA community is leaning toward "cause-first" as the main overarching strategy. That could be the correct call. For example, I guess that a lot of the success of EA in promoting highly neglected causes[6] was due to community-builders and community-focused organizations having a large focus on spreading the relevant ideas to many promising people and helping them to work on these areas.
However, there are important downsides to the "cause-first" approach, such as a possible lock-in of main causes and less diversification in the community. Many problems with the EA community are possibly explained by this decision.
It is a decision. For example, EA Israel, particularly as led by @GidiKadosh, has focused more on the "member-first" approach. This also has downsides: only in the past year or so did we really start building a network of people working in AI Safety, and we are still very weak on the animal welfare front.
I'm not sure which approach is best, and very likely we can have the best of both worlds most of the time. However, I am pretty sure that being more mindful of this particular dimension in community building is important, and I hope this post is a helpful small step toward understanding how to do community building better.
Thanks to the many people I've met at EAG and discussed this topic with! I think that crystalizing this idea was one of the key outcomes of the conference for me.
- ^
I try to make the two main examples a bit extreme, to make the distinction clearer, but most opinions are somehow a blend of the two.
- ^
I've taken some liberty with paraphrasing the original definition to make my claims clearer. This example doesn't mean that Will MacAskill is a proponent of such a "cause-first" strategy.
- ^
These haven't been updated much since 2018, so I'm not sure how representative they are. In any case, I'm again using this definition only to articulate a possible strategy.
- ^
By this, I mean a future where principles very close to the current main "tenets" of EA are widespread and commonsense.
- ^
Maybe the focus on "making the principles of EA more universal" is more important than the focus on the community, and this section should be called something like "ideas-first". I now think these two notions should be distinguished, as they represent different goals and strategies, but I'll leave it to others (maybe future Edo) to articulate this clearly if this post proves useful.
- ^
Say, x-risks, wild-animal suffering, and empirically-supported GH&D interventions.
I think this is a legitimate concern, and I’m glad you point to it. An alternative framing is lock-out of potentially very impactful causes. Dynamics of lock-out, as I see it, include:
A recent shortform by Caleb Parikh, discussing the specific case of digital sentience work, feels related. In Caleb’s words:
Personal anecdote: Part of the reason, if I'm to be honest with myself, for my move from nuclear weapons risk research to AI strategy/governance is that it became increasingly socially difficult to be an EA working on nuclear risk. (In my sphere, at least.) Many of my conversations with other EAs, even in non-work situations and even with me trying to avoid this conversation area, turned into me having to defend my not focusing on AI risk, on pain of being seen as "not getting it".
I think this is relatively underdiscussed / important. I previously wrote about the availability bias in EA jobhunting and have anecdotally seen many examples of this, both in terms of social pressures and norms, but also just the difficulty of forging your own path vs sticking to the "defaults". It's simply easier to go for EA opportunities where you have existing networks, and there are additionally several monetary, status, & social rewards for pursuing these careers.
I think it's sometimes hard for people to decouple these when making career decisions (e.g. did you take the job because it's your best option, or because it's a stable job which people think is high status?).
Caveats before I begin:
Here are some concrete examples of the presence of upskilling opportunities & incentives in the EA (more specifically, x-risk and AIS) space in the last 12-24 months, with comparisons of some other options and how they stack up:
(Written quickly off the top of my head; I expect some specific examples may be wrong in details or exact scope. If you can think of counter-examples, please let me know!)
Thanks for writing this. After reading this, I want EA to be even more "cause first". One of the things that I worry about for EA is that it becomes a fairly diffuse "member-first" movement, not unlike a religious group that comes together and supports each other and believes in some common doctrines but, at the end of the day, doesn't accomplish much.
I look at EA now and am nothing short of stunned at how much it is accomplishing. Not in dollars spent, but in stuff done. The EA community was at the forefront of pushing AI safety into the mainstream. It starts several new charities every year. It's responsible for a lot of wins for animals. It's responsible for saving hundreds of thousands of lives. It's about the only place out there that measures charities, and does so with a lot of rigor. It's produced countless reports that actually change what gets worked on. It adapts what it works on pretty well.
I think caring about its members is instrumental to caring about causes. The members do, after all, work on the causes. EA does recognize this, though, and with notable exceptions, I think it does a very good job of it.
A "cause first" movement has similar risks in vesting too much authority into a small elite, not much unlike a cult that comes together and supports each other and believes in some common goal and makes major strides to get closer to said goal, but ultimately burns out as cults often do due to treating their members too instrumentally as objects for the good of the cause. Fast and furious without the staying power of a religion.
That said, I'm also partial to the cause-first approach, but man, stuff we have learnt, like Oli Habryka's podcast here, made me strongly update towards a member-first mindset, which I think would have pushed more firmly against such revelations as being antithetical to caring for one's members. Less deference and more thinking for yourself, like Oli did, seems like a better strategy for any community's long-term flourishing. EA's recent wins don't counteract this intuition of mine strongly enough when you think decades or even generations into the future.
That said, if AI timelines really are short, maybe we just need a fast and furious approach for now.
To emphasize Cornelis's point:
I've noticed that most of the tension in a "cause-first" model comes from it being "cause" in the singular, not "causes" (i.e. people who join EA because of GHWB and Animal Welfare but then discover that at EAG everyone is only talking about AI). Marcus claims that EA's success is based on cause-first, and brings examples:
"The EA community was at the forefront of pushing AI safety to the mainstream. It has started several new charities. It's responsible for a lot of wins for animals. It's responsible for saving hundreds of thousands of lives. It's about the only place out there that measures charities, and does so with a lot of rigor. "
But I think that in practice, when someone today is calling for "cause-first EA", they're calling for "longtermist / AI safety focused EA". The diversity of the examples above seems to support a "members-first EA" (at least as outlined in this post).
I'd like to note that it is totally possible for someone to sincerely be talking about "cause-first EA" and simultaneously believe longtermism and AI safety should be the cause EA should prioritize.
As a community organizer I've lost track of how many times people I've introduced to EA initially get excited, but then disappointed that all we seem to talk about are effective charities and animals instead of... mental health or political action or climate change or world war 3 or <insert favourite cause here>.
And when this happens I try to take a member-first approach and ensure they understand what led to these priorities, so that the new member is armed to either change their own mind, argue with us, or apply EA principles in their own work wherever it makes sense to do so.
A member-first approach wouldn't ensure we have diversity of causes. We could in theory have a very members-first movement that only prioritizes AI Alignment. This is totally possible. The difference is that a members-first AI alignment focused movement would focus on ensuring its members properly understand cause agnostic EA principles - something they can derive value from regardless of their ability to contribute to AI Alignment - and based on that understand why AI Alignment just happens to be the thing the community mostly talks about at this point in time.
Our current cause-first approach is less concerned with teaching EA principles that are cause-agnostic and more concerned with just getting skilled people of any kind, whether they care about EA principles or not, to work on AI Alignment or other important things. Teaching EA principles is mostly instrumental to that end goal.
I believe this is more the cause of the tension you describe in the "cause-first" model. It has less to do with only one cause being focused on. It has more to do with the fact that humans are tribalistic.
If you're not going to put effort into making sure someone new is part of the tribe (in this case giving them the cause-agnostic EA principle groundwork they can take home and feel good about) then they're not going to feel like they're part of your cause-first movement if they don't feel like they can contribute to said cause.
I think if we were more members-first we would see far more people who have nothing to offer to AI Safety research still nonetheless feel like "EA is my tribe." Ergo, less tension.
Thanks Edo, I really like this distinction. In particular, your table helped me understand a bunch of seemingly-unrelated disagreements I tend to have with "mainstream EA" - I tend to lean member-first (for reasons that maybe I'll write about one day).
Thank you for writing this post. Even without agreeing with the exact distinction as it's made in the table, I think this is a good framing for an important problem. Specifically, I think the movement underestimates the cost of a mismatch between how it presents itself and its exact focus.
The way I think about it is:
(1) An individual encounters the movement and understands that the value they're going to gain from it is X → (2) they decide to get involved because they want X → (3) it takes quite a while (months to years, depending on their involvement) to understand that the movement actually does Y, OR they see they don't get the value X they expected → (4) there's a considerable chance they're not as interested in Y and don't get as involved as they originally thought they would.
It means that the movement: (1) Missed many people who would've been interested in Y, (2) invested its resources sub-optimally on people who seek X instead of people who seek Y.
I experienced this on a weekly basis in EA Israel before we focused our strategy and branding on something that sounds like a member-focused approach. Even after doing that, I have dozens of stories of members being disappointed that the movement doesn't offer them concrete tools for their own social action (as much as it offers tools on how to choose a cause area), or disappointed that the conferences are mostly about AI safety and biosecurity.
Even with a strong member-first approach, the movement could still invest considerable resources into organizing AI safety conferences and biosecurity conferences - which would also attract professionals from outside the movement. And the movement could still be constructed in a way that gets people from the EA movement to these other conferences and communities.
I'm a bit time limited at the moment, but would be happy to discuss this with people working on this topic. I wrote before about this mismatch as a branding problem, tried to address this through better ways to explain what EA is, and got the chance to present EA Israel's member-first approach at conferences (linked above) since CEA was interested in some different community-building results that came out of EA Israel. If you're working on this topic and think I might be helpful, feel free to get in touch!
One last thought - I think that @Will Aldred's framing in the comments is correct in describing how this approach could shape the structure of the movement. Moreover, I think this goes even beyond incentive structures - for instance, the mismatch described above between X and Y could be a good explanation for why community-building efforts lean toward "multi-session programs where people are expected to attend most sessions". This is because the current branding requires us to gradually move people from wanting X to understanding that Y is actually more important. This is kind of the opposite of product-market fit.
I'm not saying that either of the approaches is incorrect, but I think this mismatch is harmful. I hope this is resolved either way.
I'm glad that I tricked you into sharing more of your thoughts :)
I think you give good reasons for the harms of an incoherent community-building strategy.
Thanks for writing this post - I've been thinking about this framing recently, though more because I felt member-first when I started community building, and now I am much more cause-first when thinking about how to have the most impact.
I don't agree with some of the categorisations in the table, and think there are quite a few that don't fall on the cause/member axis. For example, you could have member-first outreach that is highly deferential (GiveWell suggestions) and cause-first outreach that brings together very different people who disagree with EA.
Also, when you say the downsides of cause-first are that it led to lock-in or lack of diversification, I feel like those are more likely due to an earlier member-first focus in EA.
(I generally don't feel that happy with my proposed definitions and the categorization in the table, and I hope other people could make better distinctions and framing for thinking about EA community strategy. )
I don't quite share your intuition on the couple of examples you suggest, and I wonder whether that's because our definitions differ or because the categorization really is off/misleading/inaccurate.
For me, your first example shows that the relation to deference doesn't necessarily result from a choice of the overall strategy, but I still expect it to usually be correlated (unless strong and direct effort is taken to change focus on deference).
And for the second example, I think I view a kind of "member first" strategy as (gradually) pushing for more cause-neutrality, whereas the cause-first is okay with stopping once a person is focused on a high-impact cause.
Do you mean, "the most impact as a community builder"?
I guess the overlap is quite high for myself between 'impact' and 'impact as a community builder'.
Thanks, that makes sense. Can you say a bit about what has changed, and in what way you now focus more on impact?
When I started community building, I would see the 20 people who turned up most regularly, or whom I had regular conversations with, and I would focus on how I could help them improve their impact, often in relatively small ways.
Over time I realised that some of the people that were potentially having the biggest impact weren't turning up to events regularly, maybe we just had one conversation in four years, but they were able to shift into more impactful careers. Partially because there were many more people who I had 1 chat with than there were people I had 5 chats with, but also the people who are more experienced/busy with work have less time to keep on turning up to EA social events, and they often already had social communities they were a part of.
It also would be surprising/suspicious if the actions that make members the happiest also happened to be the best solution for allocating talent to problems.
I like your attempt to draw a distinction between two different ways to view community building; however, some parts of the table appear strange.
When people say that they want EA to stay weird, they mean that they want people exploring all kinds of crazy cause areas instead of just sticking to the main ones (in tension with your definition of cause-first).
Also: one of the central arguments for leaning more towards EA being small and weird is that you end up with a community more driven by principle, because a) slower growth makes it easier for new members to absorb knowledge from more experienced ones vs. from people who don't really understand the philosophy very well themselves yet, and b) lower expectations for growth make it easier to focus on people with whom the philosophy really resonates vs. marginally influencing people who aren't that keen on it.
Another point: there are two different ways to build a member-first community:
These two different definitions will lead to two different types of community.
To build the first, you'd want to engage in broad outreach with diverse messaging. With the second, it would be more about finding the kinds of people who most resonate with your principles. With the first, you try to meet people where they are; with the second, you're more interested in people who will deeply adopt your principles. With the first, you want engagement with as many people as possible; with the second, you want engagement to be as deep as possible.
I think this is an important point, and I may be doing a motte and bailey here which I don't fully understand. Under what I imagine as a "cause-first" movement strategy, you'd definitely want more people engaging in the cause-prioritization effort. However, I think I characterize it as more top-down than it needs to be.
This feels true to me.
I guess a lot of the strange causes people explored weren't chosen in a top-down manner. Rather, someone just decided to start a project and seek funding for it.
This is probably changing now that Rethink is incubating new orgs and Charity Entrepreneurship is thinking further afield, but regardless I expect most people who want EA to be weird want people doing this kind of exploration.
Really grateful for the focus on construction instead of destruction. It might not be as dramatic or exciting, but it's still kind of messed up that damaging large parts of EA counts as a costly signal of credibility, even though people other than the poster are the ones who carry the entire burden of the costs.
I think another dimension of interest for cause-first vs member-first is how much faith you have in the people who make up the causes. If you think everyone is dropping the ball then you focus on the cause, whereas you focus on the people if you trust their expertise and skill enough to defer to them.
Note: I'm writing this comment in my capacity as an individual, not as a representative of CEA, although I do work there. I wouldn’t be surprised if others at CEA disagree with the characterization I’m making in this comment.
I want to provide one counterexample to the conception that most of mainstream EA is leaning “cause-first” in the status quo. CEA is a large organization (by EA standards) and we definitely invest substantial resources in “member-first” style ways.[1]
To be specific, here is a sampling of major programs we run:
Some important caveats: there are other things we do, we think seriously about trying to capture the heavy tail and directing people towards specific cause areas (including encouraging groups we support to do the same), and we definitely shifted some content (like the handbook) to be more cause-area oriented. CEA is also only one piece of the ecosystem.
Overall though, I do think much of CEA's work currently represents investment that intuitively seems more "member-first" (whether or not this is the correct strategy), and we're a reasonably large part of the CB ecosystem.
Also, although I think the member/cause distinction is useful, it's sufficiently vague and "vibes-y" that many programs and organizations, like CEA, could probably be construed as focusing on either one.
Thanks for your perspective Conor! Looking into these activities in more detail, I have some notes:
Nice post, Edo!
One seemingly important factor to decide whether to lean cause-first or member-first is whether impact varies more across causes or interventions. 80,000 Hours thinks the variation across causes is larger, so it leans cause-first. This recent analysis from Ben Todd suggests variations across causes are not as large as previously thought.
I'd be interested in more examples and better linking with existing written opinions on these topics. So I invite whoever is reading this to suggest some more ideas, or better - contact me to get editing permissions on the post (and co-authorship if you wish).
Written quickly, prioritizing sharing information over polish. Feel free to ask clarifying qs!
Have been considering this framing for some time, and have quite a lot of thoughts. Will try to comment more soon.
Very rough thoughts are that I don't /quite/ agree with all the examples in your table, and this changes how I define the difference between the two approaches. So e.g. I don't quite think the difference you are describing is people vs cause; it's more principles vs cause.
Then there is a different distinction that I don't think your post really covers (or maybe it does, but not directly?), which is the difference between seeing your (a community builder's) obligation as towards improving the existing community vs finding more talented / top people.
Arjun and I wrote something on this: https://forum.effectivealtruism.org/posts/PbtXD76m7axMd6QST/the-funnel-or-the-individual-two-approaches-to-understanding
Funnel model = treat people in accordance with how much they contribute (kind of cause first)
Individual model = treat people wrt how they are interacting with the principles and what stage they are at in their own journey (kind of people-first)
Yea, I think I mostly agree with you. I think the main decision I had in mind is pretty much what you make in The funnel or the individual: Two approaches to understanding EA engagement which does make very similar points!
I feel like this post introduces a helpful contrast.
I am personally partial to the member-first approach. A cause-first approach seems to place a lot of trust in the epistemics of the leaders and decision-makers who identify the correct cause. I take this to be an unhealthy strategy generally - I believe a vibrant community of smart, empirically-minded individuals can be trusted to make their own calls, and I think this may often challenge the opinion of leadership or the community at large in a healthy way. Even if many individual calls end up leading to suboptimal individual behaviour, I'd expect the epistemic benefits of a diversity of opinions and thought to outweigh this downside in the long run, even for the centrally boosted causes, which benefit from having their positions challenged and questioned by people who do not share their views, and from having the likelihood of groupthink significantly reduced.
On a more abstract level, I think EA is pretty unique as a community because of its open epistemics, where a variety of views can be pitched and will receive a fair hearing, often leading to positive interventions and initiatives. I worry that a cause-first approach will endanger this and turn EA into "just another" cause-specific organization, even if the selection of the cause is well-motivated at the initial point of choice.
I really liked the axis that you presented and the comparison between a version of the community that is more cause-oriented vs member-oriented.
The only caveat that I have is that I don't think we can define a neutral point in between them that allows you to classify communities as one type or the other.
Luckily, I think that is unnecessary, because even though the objective of EA is to have the best impact on the world and not the greatest number of members, I think we all agree the best decision is to have a good balance between cause-oriented and member-oriented. So the question we should ask is: should EA be MORE big tent, or weirder, or do we have a good balance right now?
And to achieve that balance we can be more big tent in some aspects, moments and orgs and weirder in others.
I'd love to see the results of a good experiment in the member-first approach.
I'm leaning more towards the cause-first approach, but possibly for the wrong reasons: it's easier to measure, its impact is easier to communicate and understand, the funnel feels shorter and more straightforward, and the activities and tools to achieve impact are already there for me to use - I don't need to invent anything from scratch. This all might be a streetlight fallacy.
The strongest argument for the member-first approach, for me, would be:
I'm surprised by this point - surely a core element of the 'cause-first' approach is cause prioritization & cause neutrality? How would that lead to a lock-in?
That might be true in theory, but not in practice. People become biased towards the causes they like or understand better.
Sure, but that's not a difference between the two approaches.
But it'll be intensified if the community mainly consists of people who like the same causes, because the filter for membership is cause-centered rather than member-centered.
Thanks for the post, it was an interesting read!
Responding to one specific point: you compare
to
I think there is actually just one correct solution here, namely thinking through everything yourself and trusting community consensus only insofar as you think it can be trusted (which is just thinking through things yourself on the meta-level).
This is the straightforwardly correct thing to do for your personal epistemics, and IMO it's also the move that maximizes overall impact. It would be kind of strange if the right move was for people to not form beliefs as best they can, or to act on other people's beliefs rather than their own?
(A sub-point here is that we haven't figured out all the right approaches yet so we need people to add to the epistemic commons.)
Note that if you place a high degree of trust, then the correct approach to maximize direct impact would generally be to delegate a lot more (and, say, focus on the particularities of your specific actions). I think that it makes a lot of sense to mostly trust the cause-prioritization enterprise as a whole, but maybe this comes at the expense of people doing less independent thinking, which should address your other comment.