
This is the second and final post in a two-post series on EA movement strategy. The first post categorized ways in which EA could fail.

Summary

  • Following an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently.
  • A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios). 
  • Yet, discussions or critiques of EA's trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!), proposals to make EA more like X often don't rigorously engage with the downsides of being more like X, or with the opportunity cost of not being like Y. Such proposals also often leave me with only a vague understanding of what X looks like and how we get from here to X.
  • I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community's trajectory more productive (and clarify my own thinking on the matter).
  • This post analyzes the different domains within which EA could change its trajectory, as well as key considerations that inform those trajectory changes where reasonable people might disagree. 
  • I also share next steps to build on this post. 

Preface

I was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.

But as I outlined my critique, I had a scary realization: if EA adopted my critique, I'm not confident the community would be more impactful. Maybe, contrary to my critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism[1], and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible.

And then I thought about other EA strategy takes. Doesn't a proposal like "make EA enormous" also rest on lots of often-implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which "Effective Altruism" as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA's uniquely truth-seeking norms. I'm not saying we shouldn't make EA enormous; I'm saying it seems hard to know whether to make EA enormous[2] – or, for that matter, to hold any strong strategy opinion.

Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.[3]

  • I don't think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level – with EAG(x) events, online discussion spaces, and goals for growth levels – but ask them and they will tell you CEA is not in charge of all of EA.
  • EA thought leaders also don’t claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope. 
  • I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by interests and vibes, certain people feeling more comfortable in some spaces than others, etc.). 
  • It seems suspicious that this unintentional "business as usual" trajectory would also happen to be the best one for the EA movement.

I'm not saying that any particular person or institution should be steering the EA movement; there are strong reasons why centralization could be very counterproductive. But, if we agree that "business as usual" isn't best, I think we should be aiming to have more productive, action-guiding conversations about EA movement strategy to get us on the right track. And that involves concretely stating which aspects of the community you think should change and why you think they should change. Hopefully, the frameworks outlined here help these discussions.

Domains in which EA could course-correct  

The best possible EA trajectory is the one that helps the most sentient beings live lives free of suffering and full of flourishing, over the long term.[4] Or something like that. Reasonable people disagree on most parts of that sentence, like how much to morally care about consequences vs. rules, how much to morally weigh different types of sentience, how much to morally weigh suffering vs. flourishing, and how much to discount the value of future lives. But I’ll use this optimum as a first approximation for the ideal movement and leave solving ethics for another write-up.

So, is EA currently on this best possible trajectory? As argued above, I doubt it. I think a fair bit of how EA has grown and identified itself is pretty unplanned (i.e., it evolved from social dynamics and people kind of just doing things that way, rather than from deliberate, debated decisions), and I don't expect this to land the EA movement on the ideal trajectory.[5]

If we grant that EA is likely not on the ideal trajectory, in what ways could we nudge it closer? That's the purpose of this section: below, I identify different "domains" in which EA could course-correct.

Prefacing the domains in which EA could course correct:

  • There are different approaches to identifying EA course corrections. I choose to identify the domains in which strategy updates could play out, in hopes that giving an overview of the options on the table will allow people to make critiques and proposals more specific. But I'm not sure if this is the best approach. One could also 1) list out and collate many different overarching strategies, like "EA should be big tent", "EA as a Schelling point", and more; or 2) list the properties of the ideal EA and work backward from there to identify which aspects of EA need to change to look more like that ideal.
  • Some of these different domains share a great deal of overlap. For example, I can imagine "extent to which EA is a whole identity" and "demanded hardcoreness" being closely related. I found carving out the domains hard and changed my mind a lot. I doubt this is the ideal taxonomy. 
  • Just because I list a type of course correction or example here doesn't mean I am advocating for it. There are quite a few example updates I would personally argue against. My goal here is to lay out a framework within which people can have such debates.
  • “My quick take of where EA is at in this domain” could be wrong. I wrote these pretty off the cuff due to time constraints. 
For each domain below, I give my quick take of where EA is currently at, followed by example updates.

Domain: Extent to which the EA community splits along philosophical or cause-area-specific brands[6]

My quick take: I think "EA" is still the overwhelming brand (e.g., EA Global, EA Forum), but there are some nested groups that form their own brands, including:

- Longtermist or explicitly x-risk circles (e.g., Forethought; Global Challenges Project)
- AI safety circles (e.g., Lightcone)
- Biosecurity circles (e.g., Boston)
- Rationality circles
- Global health and development circles (e.g., GiveWell)
- Academic philosophy circles (e.g., GPI)
- Suffering-focused ethics
- Animal welfare community (e.g., Animal Advocacy)

These groups vary in the degree to which they associate with EA and how much they coordinate with one another.

Example update: People who get into EA don't really stay in "EA." Rather, they enter a smaller sub-pocket that has its own professional network, conferences, and vibe, while still acknowledging that they are inspired by EA.

Domain: Extent to which "EA" is a whole social and intellectual identity[7]

My quick take: Many engaged people in EA, but certainly not all, identify strongly as "effective altruists" or with effective altruism. I think excited newcomers especially identify as EAs, and this maybe gets weaker over time (or for older community members it's more like, duh). Also, I think community members in politics or in established professional networks adjacent to EA sometimes avoid the EA label.

Example update: EA becomes less of something you identify as, either because it becomes more of a traditional intellectual belief (e.g., human rights) or because it is replaced by more specific identification, say with a cause-area community.

Domain: Extent to which EA outreach focuses on promoting EA principles vs. cause areas vs. philosophies vs. specific ideas[8]

My quick take: EA outreach has traditionally presented the "effectiveness mindset" in the context of charity and global health and development, and then gradually introduced x-risk and longtermism. But I think there's been a recent uptick in groups that just draw attention to important cause areas, like AI safety groups and animal advocacy groups. I think the jury is still out on the relative impact of these groups.

Example updates: (1) Greater emphasis on cause-specific groups/"on-ramps" like Harvard-MIT X-risk that effectively promote EA work but don't filter for being convinced by typical EA drowning-child-type arguments.[9] (2) EA comms tries to spread basic ideas of scope sensitivity and impartial altruism far and wide.

Domain: Extent to which EA mixes social and professional communities

My quick take: Quite a bit, at least in EA hubs. From Julia Wise, quoting HR staff at an EA org: "Boy, is it complicated and strange."

Example update: EA becomes a more professional space and there are fewer EA-branded parties or hangout spaces.

Domain: Demanded hardcoreness

My quick take: EA does seem quite totalizing, but there's sizable variance here depending on what spaces you occupy. Some smart people are REALLY into it, and it seems easy to feel off if you're not one of those people.

Example update: The EA community becomes more welcoming to people who know they could be maximizing altruistic expected value harder but choose not to because they have other preferences.

Domain: Where EA does outreach

My quick take: A lot of outreach at universities, skewed toward top universities. Also professional groups, local and national groups, standalone online courses, and more recently mass media publicity, especially around What We Owe the Future.

Example updates: (1) EA does relatively less outreach to university groups and focuses more on existing professional communities. (2) EA interfaces more with non-Western countries.

Domain: Growth rate

My quick take: Maybe around 20-30%, but low confidence here.

Example update: EA slows its growth rate to maintain a high degree of coordination and to build infrastructure that can handle a later influx of people.

Domain: Amount of interaction with outside professional networks

My quick take: Increasingly more interaction through initiatives like the Future Fund's AI Worldview Prize and the MIT Media Lab Future Fellowship, but other parts of EA direct work seem to still exclusively hire and collaborate with other EAs. Low confidence here and likely high variance across causes.

Example updates: (1) EAs interested in biosecurity do more to make inroads with existing biosecurity researchers and policymakers as well as for-profit biotech orgs (e.g., organizing conferences). (2) EAs in AI safety try harder to understand the broader AI community and make well-reasoned, considerate appeals for alignment.

Domain: Centralization of EA funding and strategy decisions

My quick take: Funding is pretty centralized, with a majority of funding controlled by a handful of funders. But no one really "owns" EA movement strategy, and there's a strong norm of not just following a few leaders. Probably some leader coordination events.

Example update: Funding becomes more decentralized, perhaps becoming more specific to different cause areas.

Domain: Diversity of worldviews and backgrounds

My quick take: EA is predominantly endorsed by people who are white, male, upper-middle class, highly analytical, and from a Western background, though of course not exclusively. Philosophical views are predominantly consequentialist.

Example update: CEA actively tries to do more outreach to low- and middle-income countries.

Domain: Amount of association with other communities

My quick take: EA seems most heavily intertwined with the rationalist community, to a degree that some people who come from a non-rationalist background find at least mildly off-putting. Other adjacent communities include progress studies, global health and development networks, longevity (?), transhumanism (?), and parts of Silicon Valley.

Example update: EA splits into different communities, each of which associates with other EA-adjacent communities.


I'm worried that some of those domains are too abstract. Social movements are complicated, and something like "extent to which people identify with the EA brand vs. more philosophical or cause-specific brands" is a pretty fuzzy concept.
 

So here’s a map of some more concrete things that could lead to, or follow from, a trajectory change:

  • Types, size, and target audience of events (e.g., cause-specific EAGs, bigger EAGs)
  • What's on effectivealtruism.org 
  • Funding distribution across cause areas or worldviews
  • Target audience and main message of EA communications projects
  • Online discussion spaces (e.g., cause-specific forums)
  • Outreach to university, local, or professional audiences (and the relative emphasis across these)
  • Branding of organizations, co-working spaces, social gatherings, and professional networks
  • Location of EA Hubs (e.g., Boston, Berkeley, Oxford, NYC, Berlin) and how they distinguish themselves
  • Use of jargon/in-group language
  • “Onboarding” content (e.g., EA Intro Program, handbook, curated classics)

When you make an EA strategy proposal or critique, please tell the reader which domains you think need changing (including domains I missed here or that you would refactor), and consider pointing to concrete things in reality.

Where one might disagree about ideal course corrections

EA movement strategy is a complicated business. Put differently, there are lots of reasons people could disagree about the ideal trajectory change and a lot of room for debatable assumptions to creep in.

The purpose of this section is to identify some of the reasons why different people disagree – or could disagree – about the ideal course corrections. I doubt this is an exhaustive list, but I hope it illustrates the types of questions I think strategists should be asking themselves.

For each key consideration below, I give an example take and, all else equal, how I'd expect that take to influence your opinion.

Key consideration: The type of people EA needs to attract

(1) If you think the type of people who can make the biggest difference on the most pressing problem are technical and conceptual wizards, I expect you'd be more excited about outreach to top universities and talent clusters (e.g., math olympiads).

(2) If you think EA needs to meaningfully influence global politics, I expect you'd be cautious of anything that could sour EA's public reputation, or more excited about creating spin-off brands that don't associate heavily with EA.

(3) If you think it's really hard to know what type of people EA needs to attract, I expect you think traditional EA intros like the drowning child + PlayPumps aren't a bad place to start.

Key consideration: Confidence-adjusted impact estimates for different cause areas[10]

If you're confident that AI alignment is the most important cause under your ethical worldview, even after having engaged with the best counter-arguments, I expect you'd be more excited about AI-alignment-specific outreach that isn't mediated by the EA brand.

Key consideration: Worldview diversification

If you think worldview diversification is super important, I expect you'd be more excited about EA resources not just pooling behind one best-guess cause area.

Key consideration: Importance of mass cultural change for your theory of change

If you think EA is going to have its biggest impact (e.g., prevent existential catastrophe, end global poverty) by invoking mass cultural change, I expect you'd be more excited about keeping EA's growth rate ambitious and avoiding a super hardcore/totalizing impression.

Key consideration: Costs vs. benefits of insularity

If you think maintaining some degree of insularity yields big returns on trust and coordination, I expect you're more cautious of fast growth for the EA movement.

Key consideration: Length of transformative AI timelines (and difficulty of alignment)

If you think transformative AI timelines are short, I expect you'd be more excited about effectively trading reputation for impact by doing things like talent search and associating with weird vibes.

Key consideration: Importance of diversity of opinion and backgrounds[11]

If you think that different opinions and worldviews will improve EA's ethics and effectiveness, I expect you'd be more excited about outreach in different parts of the world and about making spaces more welcoming to non-prototypical EAs.

Key consideration: Importance of collaboration with non-EA institutions

If influencing institutions like the US government, the UN, or the EU is vital for your theory of how EA has the greatest impact, I expect you'd be more excited about interacting more with outside professional networks and less excited about associating with ostensibly weird communities.

Key consideration: Cost incurred if parts of the community become disenchanted

If you think that disenchanting EAs who are primarily invested in global health and well-being by orienting more towards a longtermist worldview is not that costly, I expect you'd be more inclined to associate EA with a specifically longtermist cause prioritization.[12]

Key consideration: Relative costs vs. benefits of cause-area or philosophical silos

(1) If you think the benefits of grouping more along cause prioritization, like increased coordination, targeted outreach, and tailored branding, outweigh costs like the possibility of becoming too locked into existing prioritization schemes, I expect you'd be more excited about splitting EA into different professional networks.

(2) If you think longtermism is plausibly true and that the idea benefits from being associated with a community that donates prolifically to global health and animal welfare, I expect you'd be more cautious of making longtermism its own community.

Key consideration: Where you think the EA community's comparative advantage lies

If you think that much of EA's value comes from being a Schelling point, for example, I expect you're more excited about wide-scale outreach that makes EA approachable (i.e., not too demanding).

Key consideration: Likelihood the EA movement (or something like it) could recover from collapse

If you think that the EA movement or something like it is likely to recover or rebuild in the long term after a reputation failure (and you lean longtermist), I expect you could be more inclined to gamble EA's reputation to prevent existential catastrophe in the near term.

Key consideration: The value of hardcoreness (vs. casual EAs)[13]

If you think that one EA who is sincerely maximizing does more good than 20+ casual EAs, I expect you'd be more inclined to keep outreach more targeted.

Key consideration: Value of transparency

If you think transparency is really important, then I expect you think an EA movement that appears open to all cause areas, while many decision-makers are reasonably confident that AI is the most important cause, looks shady.[14]

Key consideration: What types of communities keep up good epistemic hygiene

If you think that even communities that grow large quickly can keep up epistemic norms like truth-seeking, wise deference, and constructive criticism, then I expect you think that a higher growth rate isn't problematic.

Key consideration: How costly it is to be associated with weird ideas[15]

If you think that EA incurs a severe cost by being associated with paperclip-maximizer AI-risk arguments and acausal trade, I expect you're more inclined to separate things out from the EA brand and warier of associations with some adjacent communities.

Key consideration: Degree to which an approachable "big tent" movement trades off with core EA principles[16]

If you think that there are few costs to lower-fidelity outreach that encourages activism and donations even if they aren't maximally effective, then I expect you're more inclined to try to increase EA's growth rate and mass appeal.

Key consideration: Need for diverse talent

If you think EA really needs diverse talent (e.g., operations people, political people, entrepreneurial people) or are unsure what type of talent EA needs, I expect you think EA should try to be less homogenous and tailor outreach to different communities.

Key consideration: Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands

If you think that the costs of emphasizing EA-adjacent brands (e.g., x-risk-specific brands, GWWC, animal advocacy), like cause-area silos and less focus on EA principles, outweigh benefits like more tailored outreach, I expect you're more inclined to try to keep one core EA movement that identifies itself with core EA principles.

Key consideration: Relative costs vs. benefits of having EA become an identity[17]

If you think that the dangers associated with EA becoming an identity (e.g., making it difficult to disentangle yourself later and adding "group belief baggage") outweigh benefits like possible increases in inspiration, then I expect you're more wary of people identifying as EAs and of sometimes seamlessly mixing EA events and social events.

Key consideration: Relative costs vs. benefits of EA having overlapping social and professional spheres

If you think that the benefits of EAs mixing social and professional life, like increased motivation and casual networking, outweigh costs like the heightened cost of rejection, I expect you're fine with EA staying not just a professional network.

Key consideration: Value of a good reputation

(1) If you think how EA is perceived is crucial to its future trajectory, then I expect you're more inclined to make sure EA has lots of good front-facing comms and diligently avoids projects that have large downside risks.

(2) If you think, for example, that technical alignment is the biggest problem and you don't need a great brand to solve it, I expect you're more inclined to do potentially costly things like talent-search outreach.

Key consideration: Possibility of making a community that's about a "question"

If you think that it's feasible to create a movement around a question like "How can we do the most good?" without becoming inseparably associated with your best-guess answers, I expect you're more excited about maintaining a strong EA umbrella and doing outreach with the EA brand.

Key consideration: Incompatibility of different cause-area vibes

If you think that the space colonization aesthetic is just very difficult to reconcile in the same movement as a Life You Can Save-style aesthetic, I expect you're more inclined for EA outreach and/or professional networks to branch out and not try too hard to be under the same umbrella.

Meta considerations 

Alongside the above object-level considerations, I think other more meta-considerations should influence how we weigh different trajectory updates:

  • Quality of status quo: If you think that the business-as-usual trajectory looks bad for business (e.g., Bad Omens in Current Community Building), I expect you’d be more bullish on making at least some trajectory changes. 
  • Switching cost: Even if we're confident a new trajectory looks better than business as usual, it may not be worth making costly trajectory updates if the gains are only marginal and business as usual already has a lot of inertia.[18] Example switching costs include the cost of planning new events, the cost of hiring a new program manager, the cost of changing comms strategy, etc. I think switching costs are real, but I think we should be wary of local incentives not to make updates that could pay compounding returns. You know, long-term thinking. 
  • Option value: The best course corrections leave room to course correct away from the course correction. For example, we wouldn't want to course-correct to a strategy that permanently splits EA in some way, only to have the strategic landscape change and realize five years later that it's best for the movement to merge.
  • Info value: Ideally, we learn a lot when we try some trajectory update. I’m in favor of taking an experimental approach in trying to answer “what’s the ideal trajectory update?”
  • Tractability: [basically a different framing of switching cost] Some trajectory changes may be much less feasible than others (e.g., I’d expect changing vibes in certain communities to be harder than experimenting with cause-specific EAGs).  
  • Predictability: The consequences of some trajectory updates may be quite easy to predict, while others may be much more uncertain.

Next steps

  • Think through EA success cases. My previous post identified ways in which EA could fail – but what are the ways in which we could win? 
  • Analyze the tractability of changes in different domains. For example, I’d expect changing associations with different sub-cultures to be much less tractable than changing where EA does outreach.
  • Improve collective understanding of where EA stands in different domains. I think my current quick thoughts are pretty weak and could benefit from more data. 
  • Argue for or against concrete course corrections or some stance on a key consideration. Like a lot of what bubbled up in the criticism and red-teaming contest, but perhaps making it more concrete and transparent by identifying which domains should change and where you stand on key considerations. 
  • Identify which organizations/people are best positioned to make certain changes. Who could "own" things? How much of this is on CEA? Who else needs to make moves?
  • Surveys or user interviews with different cross-sections of EA – or people who bounced off EA – to understand their opinions on these different considerations. 
  • Run more small-scale experiments to test hypotheses on considerations. Do you think it's possible that EA would benefit from different “on-ramps” that don’t just flow through the EA brand? Test this by running an animal advocacy student org, a longtermist student org, and an EA org at the same university.[19] 
  • Do (retrospective) data collection on key considerations. What can we learn about these key considerations from the EA movement's past? When EA movement building has been at similar crossroads before, what road was taken, and how does that decision look in hindsight? Example: doing user interviews with people leading cause-specific uni groups and learning more about the pros and cons of such groups. 
  • Analyze historical case studies for analogous movements. How did they fail, succeed, or land anywhere in between and why? See a previous forum question on reading suggestions. 
  • Forecast EA community health. I previously looked for some measurable proxies.

Closing remarks

I write this post – and I expect others write their good-faith EA critiques – out of a genuine love for the principles of effective altruism. These ideas are so special, and so worth protecting. Yes, we're likely messing up important things. Yes, vibes are weird sometimes. But we're trying something big here. We're trying to help and safeguard sentient life in a way no community has before – of course, we'll make mistakes. Regardless of how we disagree with one another, let's acknowledge our shared goal to do. good. better. Now go fiercely debate ideas in the name of this goal :)

Acknowledgments

Thank you to Ben Hayum, Nina Stevens, Fin Moorehouse, Lara Thurnherr, Rob Bensinger, Eli Rose, Oscar Howie, and others for comments or discussions that helped me think about EA movement failures, course corrections, and key considerations. To the hopefully limited extent that I express views in this post, they’re my own. 

  1. ^

    EA exceptionalism: that oft-seen sentiment that “EA can do it better"; that we have nothing to learn from the outside world. I think I first heard this from Eli Rose. 

  2. ^

    Note that I chose “make EA enormous” rather randomly as a recent strategy proposal. I’m using it to illustrate a larger pattern I see in strategy proposals.

  3. ^

    Reasonable people might disagree here. I discuss this more under “meta considerations” in section III. 

  4. ^

     Note what I’m saying the best possible EA trajectory is not: It’s not the one that makes EAs feel most validated; it’s not the one that upsets the fewest people; it’s not necessarily the one that feels inviting to you or me. 

  5. ^

     And even if all the decisions were deliberate, what are the odds that decision-makers over the EA movement’s lifetime have made all the correct decisions? But people could still disagree with me on how likely random social pressures and local decisions – within guardrails placed by EA decision-makers – are to lead to the best possible trajectory. 

  6. ^

     See EA Culture and Causes: Less is More for example arguments for EA splitting at some level. 

  7. ^
  8. ^

     In this list of domains, I make the distinction between different EA brands in outreach and a more overarching community split. I do this because you could imagine an outreach approach that has many different “on-ramps” that effectively funnel into the same community or, vice versa, a small set of “on-ramps” that then branch off into many different communities.

  9. ^

    Other possible “on-ramps” include: 80,000 Hours promotion, Giving What We Can chapters, AI safety groups, Biosecurity groups, animal welfare groups, One For the World chapters. 

  10. ^

    See Benjamin Todd’s comment:

    If we think that some causes have ~100x the impact of others, there seem like big costs to not making that very obvious (to instead focus on how you can do more good within your existing cause).


    I'm particularly bullish on any impact estimate being confidence-adjusted because of reasoning similar to Why we can’t take expected value estimates literally (even when they’re unbiased)

  11. ^

     For people who don’t place a lot of weight on the diversity of opinions, beware of the broccoli effect! This might manifest as something like: 
    “But I don’t want EA to become less purely consequentialist, because if I find out that non-consequentialist lines of reasoning that I haven’t really looked into make sense I’ll end up being less consequentialist, and I don’t want to be less consequentialist.”

  12. ^

    Note that I don't want to sharpen a somewhat false dichotomy between "longtermist work" and "neartermist work." See Will MacAskill's Twitter thread.

  13. ^

    Bad Omens in Current Community Building makes a good point that the value of hardcoreness likely varies across cause areas:

     I think the model of prioritizing HEAs does broadly make sense for something like AI safety: one person actually working on AI safety is worth more than a hundred ML researchers who think AI safety sounds pretty important but not important enough to merit a career change. But elsewhere it’s less clear. Is one EA in government policy worth more than a hundred civil servants who, though not card-carrying EAs, have seriously considered the ideas and are in touch with engaged EAs who can call them up if need be? What about great managers and entrepreneurs?

  14. ^

     Unclear to me if this is the case, so I don't want to project that it is. 

  15. ^
  16. ^

    See Thomas Kwa and Luke Freeman debate on this.
    Note that this likely varies a great deal by which specific ideas are spread. Some ideas, like considering the moral value of our grandkids’ grandkids and scope sensitivity, seem far less controversial than others, like AI alignment arguments designed for mass appeal. 

  17. ^
  18. ^

    Switching costs are a particularly important consideration if you think the “business as usual” trajectory isn’t that bad. 

  19. ^

     Yup, this is me shamelessly hyping up my successor exec team at the University of Wisconsin–Madison. s/o Max, Eeshaan, Declan, Cian, and Meera :) 

Comments

Good followup. Like the blunt but good-faith vibes. Establishing shared vocabulary is good. 

I also think EA is a fantastic, inspiring project but not on the optimal trajectory (which is a very high bar). Course correction makes sense as a response to this. Another option is to start a competitor movement.  In either case you would want to think through all of the different domains that you listed and where their optimum lies. Below are some pros/cons, though completely not exhaustive.

Pros: 

  • Probably easier to create a new movement with the exact norms you want.
  • Even if it ends up not being successful in the long term, it could "Bernie" the mainstream EA movement, forcing it to shift in certain ways because of the popularity of some aspects of the competitor. So perhaps it would be a more efficient form of course correction.
  • Maybe it's actually good to have multiple EA-like movements even if EA itself isn't on a bad path

Cons:

  • EA may be a natural monopoly, and you will reduce efficiency by creating a competitor even if it has better norms/institutions
  • EA course correction has higher scale
  • Could cause infighting/internal rift and bog us down with drama. 

Thanks for this. It is interesting to me how many of the key considerations mention 'outreach' (12/24 by my count). I suppose this makes sense that choosing how and how much to grow is one of the foremost strategy decisions. It also shows how hard making these decisions could be, given all the different considerations to weigh up. The issue of who should do this steering and strategising does seem tricky. I share your concern about getting CEA to take on a more authoritative role, and am generally pretty happy with the somewhat anarchic norms (anyone can post more or less anything on the forum and have a chance of influencing much of the community). But then, it is just harder to make and action important trajectory-change decisions without more structured decision-making.

Good point. It's worth noting that 'outreach' is often mentioned in the examples, not in the key consideration itself. I think the key considerations that mention outreach in the example often influence more than outreach. For example, "Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands" mentions outreach, but I think this is closely connected to how professional networks identify themselves and how events are branded.

I have a background in university community building, so I wouldn’t be surprised if that biased me to often make the examples about outreach.

Thank you for this post, I find it very helpful for clarifying my thoughts when working on community building strategy.
