These are my own views, and not those of my employer, EVOps, or of CEA, who I have contracted for in the past and am contracting for at the time of writing. This was meant to be a strategy fortnight contribution, but it's now a super-delayed, unofficial, and under-written strategy fortnight contribution. [1]
Before you read this:
This is pretty emotionally raw, so please 1) don’t update too much on it if you think I’m just being dramatic, and 2) note that I might come back and endorse or delete this at some point. I’ve put off writing this for a long time, because I know that some of the conclusions or implications might be hurtful, or might cause me to become even more unpopular than I already feel I am. As a result, I’ve kept it really brief, but I’m willing to make it more thorough if I get the sense that people think it’d be valuable.
This post is not meant as a disparagement of any of my fellow African or Asian or Latin-American EAs. This is less about you, and more about how much the world sucks, and how hard the state of the world makes it for us to fully participate in, and contribute to, EA the way we’d like to. I think I’m hoping to read a bunch of comments proving me wrong or at least making me reconsider how I feel about this. That being said, I don’t like letting feelings get in the way of truth seeking and doing what’s right. So here it goes.
Summary:
I think community builders and those funding/steering community building efforts should be more explicit and open about what their theory of change for global community building is (especially in light of the reduced amount of funding available), as there could be significant tradeoffs in impact between different strategies.
Introduction
I think there are two broad conceptualisations of how EA functions in the world, each with a corresponding community building strategy and theory of change. From my experience, all community building initiatives fall into one of these two strategies/worldviews. If you think there are more than two, or that these are wrong or could be improved, please let me know. These are:
Global EA
EA can be for anybody in the world - The goal of EA community building is to spread the ideas of EA as far and wide as possible. By showing people that regardless of your context, you can make a difference which is possibly hundreds of times better than you would have done otherwise, we’ll be increasing the chances of motivated and talented people getting involved in high-impact work, and generally increasing the counterfactual positive impact of humanity on the wellbeing of living and future beings. I have a sense that following this strategy currently leads to having a more transparent/non-secretive/less insidious optic for the movement.
Efforts which fall into this bucket would be things like:
- funding city and national groups in countries outside the major power-centers of the US, UK, EU, or China
- funding university groups which aren’t in the top 100–200 in the world for subjects with a track record of being well-represented amongst global decision-makers
- allocating community resources to increasing racial or geographic diversity and inclusion in the community indiscriminately (rather than representation of specific viewpoints or underrepresented moral beliefs, etc.)
Narrow EA [2]
Power and influence follow a heavy-tailed distribution, and we need power and influence to make important changes. If there is a small group of people who are extremely influential or high-potential, then the goal of community building should be to seek them out and try to convince them to use their resources to have an outsized positive influence on the wellbeing of current and future beings. I have a sense that the way this strategy is pursued often leads to an optic of secrecy and icky elite-control, but it doesn’t necessarily have to be this way.
Different people in the EA community hold each of these views to differing degrees - you can think either or both are right.
e.g.
- Kameel thinks it is true that everyone can be an EA (and that’s great!), but the best thing for an EA community builder to do with their limited time and mental resources is to find the most privileged/high-leverage/high-potential/influential people and get them to use their outsized influence on improving the lives of people/animals/future generations.
- Meelak thinks it’s true that it’d be very effective to find the most influential and high-potential people and get them to make better decisions in order to help others, but he thinks the best thing for EA community builders to do is to spread the message that anyone can maximize their positive impact on the world regardless of their circumstances, and that we likely won’t ever find all the most impactful or talented people to work on the most pressing issues if we don’t cast our net far and wide, including into communities which might otherwise find it very hard to break into the typical EA space.
Reasons I think this is important and should be addressed:
- I used to believe strongly in EA, and the role of EA community building, as being that of “Global EA”, but I’ve become more convinced than ever that “Narrow EA” is probably the thing we should pursue. There is a finite amount of funding available for EA community building, and it is necessary to figure out whether one of these approaches has a higher expected impact than the other, and prioritize that approach.
- I expect that this is a decision that has already been made, but it's not like it's written anywhere or stated explicitly, and so we have to just wonder and suspect that this is the case.
- As more attention and resources are allocated to longtermist causes, especially AI and biosecurity, it becomes less likely - as a feature of historical disparities in development and opportunity - that aspiring EAs from around the world can contribute cost-effectively to the EA community’s work, either directly or in a field-building capacity.
- Supporting a global EA community is expensive - e.g. flying people to conferences in the US and UK from places like South Africa and India often costs ~4X as much as local attendees’ travel, and we have to sponsor travel and work visas. It’s not clear that EAs from far-away or less-well-represented places are any more skilled or high-leverage than people living near existing EA hubs, and this is happening whilst we’re nowhere near exhausting the pool of potential EA ‘recruits’ in places like the US and UK.
- This is mostly thinking about *myself*: I’ve started feeling super guilty and sad about how much I, and the EA community, have spent on supporting my participation in various community building and research endeavours - I’m not really any more capable or competent at doing the things I’ve done than a local American graduate would have been - there was no real justification for me to spend so much of my own money moving to the US and staying here to work on these things whilst someone from Boston could have worked on them instead.
- The lack of clarity and transparency from organizations like OpenPhil and CEA with regards to how they think about these models, or which of them they are pursuing, leads to a lot of emotional strife and wasted time (experienced both by community builders and by people aspiring to be EAs).
Reasons not to address this:
- CEA/OP/”EA” doesn’t want to be seen as outright endorsing the idea that the majority of people, especially those outside of highly-privileged circles in a handful of countries and cities, don’t have a part to play in the most important decisions about global wellbeing and the trajectory of human history. This point could basically be seen as “we don’t want to make the appearance of power-seeking worse”, but I think it speaks to deeper worries about being perceived as racist, classist, or intolerably dismissive of people who ‘aren’t important enough’.
Reasons I might be wrong:
- I might be significantly underestimating the value of diversity and inclusion (in the most common use of the phrase)
- There is little empirical evidence for what I’m claiming (as far as I’m aware)
- I’m assuming we are significantly resource-constrained and will remain so for some time.
- This might be based on an entirely unrealistic false dichotomy.
Other notes:
On diversity: I think oftentimes discussions about diversity in EA seem to point to the idea that we are failing when it comes to community building, because Global EA would result in a broad range of people being in EA. However, it's obviously possible to believe that Narrow EA is true, and that diversity is really important in doing Narrow EA well. To demonstrate what I’m pointing out here:
A: “We should focus more on diversity and inclusion in EA”.
B: “That doesn’t make sense. We’re working on problems which could cause an extinction within our lifetimes, we can’t expend resources on something which is largely just a nod to political correctness or a lost sense of global justice”.
A: “I think we’re losing out on some of the most talented people in the world who could be working on these issues”.
Etc.
About me:
I grew up in South Africa, and moved to the US for university in 2016. I have lived in Boston for ~7 years, and have worked on community building almost entirely in the context of US universities and local US groups, except for helping some non-US university groups as a UGAP mentor. I have thought a lot about community building in general, including in the context of Muslims for EA.
Thanks:
- To those who encouraged me to write this, and those who reviewed it. [3]
Important Disclaimer: Again, these are my own views as a member of the EA community, and not the views of my employer, EvOps, or of the Effective Ventures Foundation USA or UK. I have previously worked as a contractor for CEA on the groups and events teams.
[1] I’ve experienced an unacceptable amount of sadness when I’ve had to explain that “EA strategy fortnight” is a collective feedback contribution drive, and not the first-person-shooter / collaborative-prioritization-game crossover episode between Effective Altruism and Electronic Arts that we’ve always wanted.
[2] I also think we make a big mistake by not framing this publicly as some type of global-justice/distributive-justice project - I think we’d avoid lots of power-seeking/privileged-elite critiques if the public thought of the EA community as people trying to do the best they can for everyone else with the privilege and wealth they’re randomly fortunate to have.
[3] This is a joke - nobody reviewed this.
The paragraphs below are partly responding to your framing of the issue here. If you frame it as "we can either have 4x attendees and not be inclusive by sponsoring flights and visas, or 3x attendees and be inclusive", that's persuasive. If you're saying "we can cut the costs of this conference by a large amount by not sponsoring any flights or visas, which means more malaria nets or more AI grants, and I think that's worth it", that's potentially persuasive. But when you frame it as being about the project of inclusion in general, then I do feel like you're making a mistake of unevenly placed skepticism here.
I do think meta orgs could be clearer about their theory of change, but to get there via questioning the value of diversity seems like an odd reasoning path - the lack of clarity is so much deeper than that! I feel like there is some selective skepticism going on here. If you apply this skepticism to the bigger picture, then I don't see why one ought to zero in on diversity initiatives in particular as the problem.
Firstly, I think it would be illustrative if you said what you think the point of community building is, in your view. Community building is inherently pretty vague and diffuse as a path to impact, and why you do it changes what you do.
For instance, suppose you think the point of community building is to recruit new staff. Then I'd say maybe you ought to focus on targeted headhunting specifically, rather than doing community building - or, failing that, on training people for roles. As far as non-technical roles go, it doesn't seem like there's a huge shortage of 95th+ percentile, generally high-talent people who want an EA job but don't have one, but there's lots of work to be done in vetting or training them. As far as technical roles go, you can try to figure out who the leaders of the relevant technical fields are and recruit them directly. If I wanted to just maximize staff hires, I wouldn't do community building; I'd do headhunting, training, vetting, recruitment, matchmaking, etc., in tight conjunction with the high-impact orgs I was trying to serve.
Or, if you think the point of community building is to have meetings between key players, then why not just invite the existing staff members within your specific cause area to a small room? From a networking perspective, community building is too diffuse - there aren't many real professional reasons why the AI safety people and the animal rights people need to meet. You don't need a huge conference or local groups for that.
I think when someone focuses on community building - when someone thinks that's the best way to make change - then (assuming they are thinking from an impact-maximizing perspective at all; I suspect at least some of the resources people direct towards meta have more in common with the psychology of donating to your university or volunteering with your church than with cold utilitarian calculus, and I think that's okay) they're probably thinking of effects which are quite indirect and hard to quantify, like the value of having people from very different segments of the community, who would ordinarily have no reason to meet, encounter each other, or the value of providing some local way to connect to EA for everyone who is part of it. For these purposes, being geographically inclusive makes sense. Questions like whether people could sponsor their own flights depend on how valuable you think that type of community building is; I agree that there's a difference between thinking it's valuable and thinking it's valuable enough to fly everyone in even if they don't have a clear intent to work on something that requires flying in, like you did. If community building is intended to capture soft, not-easily-quantified effects that don't have an obvious reason behind them, then I don't see why those effects shouldn't include global outreach. Fostering connections between community members even if they work in different areas or are across the globe from each other, taking advantage of word-of-mouth spread in local contexts, or the benefits of having soft ties on each continent - such as a friendly base for an EA travelling for work to crash at, or a friend of a friend who works in the right government agency for your policy proposal - all seem like valid examples of "soft and hard to quantify" effects.
Like right now, you can throw a dart at the map and probably find an EA in that country to stay with, and if you throw a few more darts you can probably find an EA in a government, and so on. In a handwavy sense, most people would say that this is a generally beneficial effect of doing inclusive global outreach for any policy or NGO goal.
Whereas if you don't have much faith in that soft, hard-to-quantify narrative - if you're pursuing hard, quantified impact-maximizing - then why do community outreach at all? Why not instead work on something more direct, like headhunting, or fund some more direct work?
I'm sympathetic to "this theory of change isn't clear enough"; it just seems weird to me that, if you've accepted all the other unclear things about the community building theory of change, you would worry about inclusion efforts specifically. If you were sending out malaria nets, I would understand if you made the choice that gave out the most nets even if it was less inclusive, because in that scenario you would at least have some chance of accurately predicting when inclusion reduced your nets. But in community building, that doesn't make as much sense: if inclusion is hurting your bottom line, how would you even know it? I feel like you have to have a harder model of what your theory of change is before you can go around saying "regrettably, inclusion efforts funge against our bottom line in our theory of change", because it seems to me that on soft, fuzzy, not-very-quantified models of impact, inclusion efforts and global reach make as much sense as any other community building impact model - and when one is in that scenario, why not do the common-sensically positive thing of being inclusive, at least when it's not very expensive to do so?