
The EA community has expanded to encompass a broad spectrum of interests, making its identity and definition a hotly debated topic. In my view, the community's current diversity could easily support multiple distinct communities, and if we were building a movement from scratch, it would likely look different from the current EA movement.
 

Defining sub-communities within the EA movement can be approached in numerous ways. One proposed division that I believe captures much of what people appreciate about the EA community is as follows:

  • Question-based communities
    • An Effective Giving Community
    • An Impactful Career Community
  • Answer-based communities
    • An AI X-Risk Community
    • An Effective Animal Advocacy Community
       

Question-based communities

An Effective Giving Community

The concept of effective giving is where EA originated, and it remains a significant component of the community. Notable organizations such as GWWC, Effektiv Spenden, One for the World, Founders Pledge, and others share a common mission and similar practical outcomes. The primary metric for this community is directing funds towards highly impactful areas. GiveWell, for instance, is perhaps the first and most widely recognized organization in this effective giving community, even outside the EA movement. This community benefits from diversity and plurality, as many people could, for example, take the 10% pledge, and an even larger number could enhance their giving effectiveness using EA principles. Key questions for this community could include determining the best charities to donate to, identifying the most effective charity evaluators, and deciding how much one should donate. This, in many ways, echoes the fundamentals of the EA 1.0 community.
 

An Impactful Career Community 

In addition to funding, individuals can contribute to the world through their careers. Much like the effective giving community, there's the question of how to maximize the impact of one's career across multiple cause areas. Organizations such as Probably Good, High Impact Professionals, or Charity Entrepreneurship focus on this area (I intentionally exclude career-focused organizations with a narrow cause area focus, like 80,000 Hours or Animal Advocacy Careers). The objective of this community would be helping people change careers and improving understanding of the most impactful career paths. Although this is a broadly inclusive community benefiting from cause plurality, it's likely less extensive than the effective giving community, as a smaller percentage of the population will prioritize impact when considering a career switch. Relevant questions for this community could include identifying high-absorbency, impactful careers, assessing the most impactful paths for individuals with specific values or skill sets, and determining underrated careers.
 

Answer-based communities, e.g., AI X-Risk Community 

The second category, somewhat different from the others, is answer-based communities. I think there are two fairly distinctive answer-based communities in EA: AI and animals. I think AI X-risk is the better example, as it is more often mixed in with the two communities above and has grown significantly as a unique area within EA. This community consists of meta-organizations like Longview, Effective Giving, and 80,000 Hours, as well as the organizations working directly on the problem. It has begun to hold separate forums, conferences, and events. Its shared goal is to mitigate existential risks from AI, a specific objective that doesn't necessarily require members to embrace effective giving or prioritize impact in their careers. However, it does require specific values and epistemic assumptions, leading to this cause being prioritized over others that are also sensible within the EA framework. Much like the adjacent animal welfare community, it is razor-focused on a specific problem and, although it grew out of the EA community, it now occupies a distinct space from EA 1.0 or the EA-as-a-question community.
 

The Benefits of Distinct Communities 

I believe there are considerable benefits to establishing these separate sub-communities. An effective giving community may prefer not to be associated with the practices of the existential risk community, and vice versa. The X-risk community, for instance, could benefit from tapping more into excitement or personal motivation rather than moral obligation, especially given recent updates on timelines. Ultimately, we want EA to be clear about its identity, ensuring that people don't feel like they're joining a community for one reason and are then misled into another. A more explicit division could also lead to greater focus within each community on the goals it genuinely cares about. For example, if an EA chapter is funded, it would be clear whether it is funded by an effective giving community (in which case it would run fundraisers and have people sign up to GWWC), an impactful careers community (in which case it would provide career coaching and help members get jobs), or an x-risk community (in which case it would help people donate to x-risk work or move into specific x-risk-focused career paths). I think this sort of division would let us lean into prioritization without losing plurality, as well as help with issues related to maintaining a transparent scope. In some ways, this is almost like having a transparent scope, but applied to a whole movement.


Comments (10)

I'm a bit unclear on why you characterise 80,000 Hours as having a "narrower" cause focus than (e.g.) Charity Entrepreneurship. CE's page cites the following cause areas:

  1. Animal Welfare
  2. Health and Development Policy
  3. Mental Health and Happiness
  4. Family Planning
  5. Capacity Building (EA Meta)

Meanwhile, 80k provide a list of the world's "most pressing problems":

  1. Risks from AI
  2. Catastrophic Pandemics
  3. Nuclear War
  4. Great Power Conflict
  5. Climate Change

These areas feel comparably "broad" to me? Likewise, Longview, which you list as part of the "AI x-risk community", states six distinct focus areas for its grantmaking — only one of which is AI. Unless I've missed a recent pivot from these orgs, both Longview & 80k feel more similar to CE in terms of breadth than to Animal Advocacy Careers.

I agree that you need "specific values and epistemic assumptions" to agree with the areas these orgs have highlighted as most important, but I think you need specific values and epistemic assumptions to agree with more standard near-termist recommendations for impactful careers and donations, too. So I'm a bit confused about what the difference between "question" and "answer" communities is meant to denote aside from the split between near/longtermism.[1] Is the idea that (for example) CE is more skeptically focused on exploring the relative priorities of distinct cause areas, whereas organizations like Longview and 80k are more focused on funnelling people+money into areas which have already been decided as the most important? Or something else?

I do think it's correct to note that the more 'longtermist' side of the community works with different values and epistemics to the more 'neartermist' side, and I think it would be beneficial to emphasise this more. But given that you note there are already distinct communities in some sense (e.g., there are x-risk-specific conferences), what other concrete steps would you like to see implemented in order to establish distinct communities?

  1. ^

    I'm aware that many people justify focus on areas like biorisk and AI in virtue of the risks posed to the present generation, and might not subscribe to longtermism as a philosophical thesis. I still think that the ‘longtermist’ moniker is useful as a sociological label — used to denote the community of people who work on cause areas that longtermists are likely to rate as among the highest priorities.

Hey, I think this is a pretty tricky thing to assess, partly because organizations are not as transparent about their scope as would be ideal. However, I will try to describe why I view this as a pretty large difference, keeping 80k as the example.

1) Tier-based vs. prioritized order

So indeed, although both organizations list a number of cause areas, I think CE presents its list more in tiers; e.g., there is no suggested ranking that would encourage someone to lean towards health and development policy over family planning. My understanding of 80k's list, on the other hand, is that they would have a strong preference for someone to go into AI over climate change. This means that although both might list five areas, the net spread of people going into each of them might be very different. Overall, I think I/we should care more about outcomes than about what is written on a website: e.g., CE says it works in these five areas, but if in practice 80% of our charities were animal-focused, I would consider us an animal organization.

2) Relative size of a given worldview

I think it's easy to forget how niche some of these cause areas are compared to others, and I believe that makes a significant difference. An area like mental health or global health represents a worldview orders of magnitude more common than something like animal welfare or AI. Consider how many moral and epistemic views would count something like reducing lead paint exposure as a charitable action versus working at an AI safety lab: the two require markedly different levels of specificity in one's views. The only area on 80k's list that I would suggest is a major area outside of EA is climate change, the one listed last.

3) Likelihood of adding additional cause areas that are competitive with number 1

My understanding is that AI has been 80k's top priority since close to its founding (2011), and that right now it is not internally seen as highly likely that something will supersede it. CE, on the other hand, started with animals and GiveWell-style global development and has since added the cause areas listed above. Additionally, it has a continued goal of exploring new ones (e.g., we are incubating biorisk charities this year, and I expect we will tackle another area we have never worked on before within the next 12 months). This is fundamentally because the CE team expects that there are other great cause areas out there, comparable to our top ones, that we/EA have not yet identified.

I think a lot of this could be made clearer with more transparency. If, say, 50%+ of 80k's funding, or of its staff's views on what the top area is, were not AI, I would be happy to revise the list and put them back into the exploratory camp. But given my current understanding, I would be pretty surprised if this were the case.

4) Funneling vs. exploring

I think the factor you describe is also relevant. If an organization directs most of its focus towards funneling people into a certain cause area, I would definitely categorize it more as an answer-based community. E.g., one could look at the ratio of an organization's budget spent on outreach compared to exploration; I would not be surprised if that correlated well with a question- vs. answer-based approach.

Ultimately, I do think it's a spectrum, and every organization is a bit answer-based and a bit question-based. However, I do think there is a significant and worthwhile difference between being 25% answer/75% question-oriented, and the reverse.

I feel similarly confused by this somewhat arbitrary categorisation, which also seems heavily flawed.

CE is by its nature a narrow career focus: it focuses just on entrepreneurs in the neartermist space and is highly biased towards thinking this is the most impactful career someone can pursue, whilst for many people starting a new charity would not be. It seems a large stretch to put CE in this category, and it also doesn't seem to be where CE focuses its time and energy. HIP likewise focuses just on mid-career professionals, but it's hard to know what they are doing, as they seem to change their activities and target audience relatively often.

80,000 Hours, Probably Good, and Animal Advocacy Careers seem broader in their target audience and seem like the most natural fit for the impactful career community. They also advise people on how they can do the most effective thing, although obviously they all have their own biases based on their cause prioritisation.

Hey Anon, indeed, the categorisation is not aimed at the target audience; it's aimed more at the number of specific ethical and epistemic assumptions required. I think another way to dive into things would be to consider how broad vs. narrow a given suggested career trajectory is, as something like CE or Effective Altruism might be broad cause area-wise but narrow in terms of career category.

However, even in this sort of case, I think there is a way to frame things in a more answer- vs. question-based framework. For example, one might ask: "How highly does CE rank the career path of CE relative to five unrelated career paths that others see as promising?" The more unusual this rating is compared to what, for instance, an EA survey would suggest, the more I would place CE in the answer-based community. I also think how much time an organisation spends on funnelling vs. exploring, as mentioned above, could be another relevant characteristic when considering how question- vs. answer-based it is.

What concrete actions might this suggest?

I think the most salient two are connected to the other two posts I made: I think people should have a transparent scope (especially organizations whose current focus might surprise people), and they should not use polarizing techniques. I think there are tons of further steps that could be taken; a conference for EA global health and development seems like a pretty obvious example of something that is missing in EA.

Thanks for writing this up, it's an interesting frame.

Is "question versus answer based" just the same as "does cause prioritization or not"? It seems to me like AI X-Risk and animal welfare has a bunch of questions, and effective giving has a bunch of answers; the major difference I feel like you are pointing to is just that the former is (definitionally) not prioritizing between causes and the latter is. (Whereas conversely the former is e.g. prioritizing between paths to impact whereas the latter isn't.)

Feels like a "do the most effective thing" community can encompass both effective giving and impactful careers.

Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.

But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).

So e.g. we prioritise the cause areas we work most on by our take on their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our readers.

(Our problem profiles page emphasises that we're not confident we have the right ranking, both in the FAQ at https://80000hours.org/problem-profiles/#problems-faq and at the top of the page, and also by ranking meta problems like global priorities research fairly highly.)

I think all orgs interested in having as much positive impact as they can need to have a stance on how to do that -- otherwise they cannot act. They might be unsure (as we are), open to changing their minds (as we try to be), and often asking themselves "is this really the way to do the most good?" (as we try to do periodically). I think that's part of what characterises EA. But in the meantime we all operate with provisional answers, even if that provisional answer is "the way to do the most good is to not have a publicly stated opinion on things like which causes are more pressing than others."

I vaguely agree with the framing of questions vs. answers, but I feel worried that "answer-based communities" are quite divergent from the epistemic culture of EA. Like, religions are answer-based communities, but a lot of EAs would dispute that EA is a religion or that it is prescriptive in that way.

Not sure how exactly this fits into what you wrote, but figured I should register it.

"I feel worried that 'answer-based communities' are quite divergent from the epistemic culture of EA"

I don't feel worried about that. I feel worried that this post frames neartermist-leaning orgs (like the OP's) as question-based i.e. as having an EA epistemic culture, while longtermist-leaning orgs are framed as answer-based i.e. as having an un-EA epistemic culture without good reason.
