
The EA community has expanded to encompass a broad spectrum of interests, making its identity and definition a hotly debated topic. In my view, the community's current diversity could easily support multiple distinct communities, and if we were building a movement from scratch, it would likely look different from the current EA movement.
 

Defining sub-communities within the EA movement can be approached in numerous ways. One proposed division that I believe captures much of what people appreciate about the EA community is as follows:

  • Question-based communities
    • An Effective Giving Community
    • An Impactful Career Community
  • Answer-based communities
    • An AI X-Risk Community
    • An Effective Animal Advocacy Community
       

Question-based communities

An Effective Giving Community

The concept of effective giving is where EA originated, and it remains a significant component of the community. Notable organizations such as GWWC, Effektiv Spenden, One for the World, Founders Pledge, and others share a common mission and practical outcomes. The primary metric for this community is directing funds towards highly impactful areas. GiveWell, for instance, is perhaps the first organization in this effective giving community and the most recognized outside the EA movement. This community benefits from its diversity and plurality, as many people could, for example, take the 10% pledge, and an even larger number could enhance their giving effectiveness using EA principles. Key questions for this community could include determining the best charities to donate to, identifying the most effective charity evaluators, and deciding how much one should donate. This, in many ways, echoes the fundamentals of the EA 1.0 community.
 

An Impactful Career Community 

In addition to funding, individuals can contribute to the world through their careers. Much like the effective giving community, there's the question of how to maximize the impact of one's career across multiple cause areas. Organizations such as Probably Good, High Impact Professionals, or Charity Entrepreneurship focus on this area (I intentionally exclude career-focused organizations with a narrow cause area focus, like 80,000 Hours or Animal Advocacy Careers). The objective of this community would be related to career changes and enhancing understanding of the most impactful career paths. Although this is a broadly inclusive community benefiting from cause plurality, it's likely less extensive than the effective giving community, as a smaller percentage of the population will prioritize impact when considering a career switch. Relevant topics for this community could include identifying high-absorbency, impactful careers, assessing the most impactful paths for individuals with specific value or skill sets, and determining underrated careers.
 

Answer-based communities, e.g., AI X-Risk Community 

The second category, which is a bit different from these others, is answer-based communities. I think there are two somewhat distinctive answer-based communities in EA: AI and animals. I think AI X-risk is the better example, as it's more often mixed with the two question-based communities above and has significantly grown as a unique area within EA. This community consists of meta-organizations like Longview, Effective Giving, and 80,000 Hours, as well as the organizations working directly on the problem. It has begun to hold separate forums, conferences, and events. Its shared goal is to mitigate existential risks from AI, a specific objective that doesn't necessarily require members to embrace effective giving or prioritize impact in their careers. However, it does require specific values and epistemic assumptions, leading to this cause being prioritized over others that are also sensible within the EA framework. Much like the adjacent animal welfare community, it is razor-focused on a specific problem and, although it grew out of the EA community, it now occupies a distinct space from EA 1.0 or the EA-as-a-question community.
 

The Benefits of Distinct Communities 

I believe there are considerable benefits to establishing these separate sub-communities. An effective giving community may prefer not to be associated with the practices of the existential risk community, and vice versa. The X-risk community, for instance, could benefit from tapping more into excitement or personal motivations rather than moral obligation, especially given recent updates on timelines. Ultimately, we want EA to be clear about its identity, ensuring that people don't feel like they're joining a community for one reason and are then being misled into another. A more explicit division could also lead to greater focus within each community on the goals it genuinely cares about. For example, if an EA chapter is funded, it would be clear whether it's funded by an effective giving community (in which case it would run fundraisers and have people sign up to GWWC), an impactful careers community (so it would provide career coaching and help members get jobs), or an x-risk community (which would help people donate to or join specific x-risk-focused career paths). I think this sort of division would let us lean into prioritization without losing plurality, as well as help with issues related to maintaining a transparent scope. In some ways, this is almost like having a transparent scope, but applied to a whole movement.


Comments



I'm a bit unclear on why you characterise 80,000 Hours as having a "narrower" cause focus than (e.g.) Charity Entrepreneurship. CE's page cites the following cause areas:

  1. Animal Welfare
  2. Health and Development Policy
  3. Mental Health and Happiness
  4. Family Planning
  5. Capacity Building (EA Meta)

Meanwhile, 80k provide a list of the world's "most pressing problems":

  1. Risks from AI
  2. Catastrophic Pandemics
  3. Nuclear War
  4. Great Power Conflict
  5. Climate Change

These areas feel comparably "broad" to me? Likewise, Longview, who you list as part of the "AI x-risk community", states six distinct focus areas for their grantmaking — only one of which is AI. Unless I've missed a recent pivot from these orgs, both Longview & 80k feel more similar to CE in terms of breadth than to Animal Advocacy Careers.

I agree that you need "specific values and epistemic assumptions" to agree with the areas these orgs have highlighted as most important, but I think you need specific values and epistemic assumptions to agree with more standard near-termist recommendations for impactful careers and donations, too. So I'm a bit confused about what the difference between "question" and "answer" communities is meant to denote aside from the split between near/longtermism.[1] Is the idea that (for example) CE is more skeptically focused on exploring the relative priorities of distinct cause areas, whereas organizations like Longview and 80k are more focused on funnelling people+money into areas which have already been decided as the most important? Or something else?

I do think it's correct to note that the more 'longtermist' side of the community works with different values and epistemics to the more 'neartermist' side of the community, and I think it would be beneficial to emphasise this more. But given that you note there are already distinct communities in some sense (e.g., there are x-risk specific conferences), what other concrete steps would you like to see implemented in order to establish distinct communities?

  1. ^

    I'm aware that many people justify focus on areas like biorisk and AI in virtue of the risks posed to the present generation, and might not subscribe to longtermism as a philosophical thesis. I still think that the ‘longtermist’ moniker is useful as a sociological label — used to denote the community of people who work on cause areas that longtermists are likely to rate as among the highest priorities.

Hey, I think this is a pretty tricky thing to contemplate, partly due to organizations not being as transparent about their scope as would be ideal. However, I will try to describe why I view this as a pretty large difference. I will keep 80k as the example.

1) Tier-based vs. prioritized order

So indeed, although both organizations list a number of cause areas, I think the way CE does it is more in tiers, e.g., there is not a suggested ranking that would encourage someone to lean towards health and development policy over family planning. On the other hand, my understanding of 80k’s list is that they would have a strong preference for someone to go into AI vs. climate change. This means that although five areas might be listed by both, the net spread of people going into each of them might be very different. I think overall, I/we should care more about the outcomes than what is written on a website: e.g., if CE said it worked in these five areas but in practice 80% of our charities were animal-focused, I would consider us an animal organization.

2) Relative size of a given worldview

I think it's easy to forget how niche some of these cause areas are compared to others, and I believe that makes a significant difference. Areas like mental health or global health are orders of magnitude more common as worldviews than something like animal welfare or AI. If you consider how many moral and epistemic views would count something like reducing lead paint as a charitable action vs. working at an AI safety lab, these require markedly different levels of specificity in views. The only area on 80k’s list that I would suggest is a major area outside of EA is climate change, the one listed last.

3) Likelihood of adding additional cause areas that are competitive with number 1

My understanding is that AI has been 80k's top priority since close to its founding (2011), and that right now, it's internally not seen as highly likely that something will supersede it. CE, on the other hand, started with animals and GW-style global development and has now added the cause areas listed above. Additionally, it has a continued goal to explore new ones (e.g., we are incubating bio-risk charities this year, and I expect we will tackle another area we have never worked on before in the next 12 months). This is fundamentally because the CE team expects that there are other great cause areas out there that are comparable to our top ones, ones that we/EA have not yet identified.

I think a lot of this could be made clear with more transparency. If, say, 50%+ of 80k's funding, or of their staff's views on what the top area is, were not AI, I would be happy to revise the list and put them back into the exploratory camp. But I would be pretty surprised if this were the case, given my current understanding.

4) Funneling vs. exploring

I think the factor you describe is also relevant. If an organization sees most of its focus in the funneling direction towards a certain cause area, I would definitely categorize it more as an answer-based community. E.g., one could look at the ratio of an organization's budget spent on outreach compared to exploration. I would not be surprised if that correlated well with a question vs. answer-based approach.

Ultimately, I do think it's a spectrum, and every organization is a bit answer-based and a bit question-based. However, I do think there is a significant and worthwhile difference between being 25% answer/75% question-oriented, and the reverse.

I feel similarly confused by this somewhat arbitrary categorisation, which also seems heavily flawed.

CE is by its nature narrow in career focus: it concentrates just on entrepreneurs in the neartermist space and is highly biased towards thinking this is the most impactful career someone can pursue, whilst for many people starting a new charity would not be. It seems a large stretch to put CE in this category, and it also doesn't seem to be where CE focuses its time and energy. HIP also focuses just on mid-career professionals, but it's hard to know what they are doing, as they seem to change their approach and target audience relatively often.

80,000 Hours, Probably Good, and Animal Advocacy Careers seem broader in their target audience and seem like the most natural fit for an impactful career community. They also advise people on how they can do the most effective thing, although obviously they all have their own biases based on their cause prioritisation.

Hey Anon, indeed, the categorisation is not aimed at the target audience. It's aimed more at the number of specific ethical and epistemic assumptions required. I think another way to dive into things would be to consider how broad vs. narrow a given suggested career trajectory is, as something like CE or Effective Altruism might be broad cause area-wise but narrow in terms of career category.

However, even in this sort of case, I think there is a way to frame things in a more answer vs. question-based framework. For example, one might ask something like: "How highly does CE rank the career path of CE relative to five unrelated but, in others' eyes, promising career paths?" I think the more unusual this rating is compared to what, for instance, an EA survey would suggest, the more I would place CE in the answer-based community. I also think the factor mentioned above, how much time an organisation spends on funnelling vs. exploring, could be another relevant characteristic when considering how question vs. answer-based an organisation is.

What concrete actions might this suggest?

I think the most salient two are connected to the other two posts I made. I think people should have a transparent scope, especially organizations where people might be surprised about their current focus, and they should not use polarizing techniques. I think there are tons of further steps that could be taken; a conference for EA global health and development seems like a pretty obvious example of something that is missing in EA.

Thanks for writing this up, it's an interesting frame.

Is "question versus answer based" just the same as "does cause prioritization or not"? It seems to me like AI X-Risk and animal welfare has a bunch of questions, and effective giving has a bunch of answers; the major difference I feel like you are pointing to is just that the former is (definitionally) not prioritizing between causes and the latter is. (Whereas conversely the former is e.g. prioritizing between paths to impact whereas the latter isn't.)

Feels like a "do the most effective thing" community can encompass both effective giving and impactful career

Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.

But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).

So e.g. we prioritise cause areas we work most on by our take at their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our readers.

(Our problem profiles page emphasises that we're not confident we have the right rankings, both here https://80000hours.org/problem-profiles/#problems-faq and at the top of the page, and also by ranking meta problems like global priorities research fairly highly.)

I think all orgs interested in having as much positive impact as they can need to have a stance on how to do that -- otherwise they cannot act. They might be unsure (as we are), open to changing their minds (as we try to be), and often asking themselves the question "is this really the way to do the most good?" (as we try to do periodically). I think that's part of what characterises EA. But in the meantime we all operate with provisional answers, even if that provisional answer is "the way to do the most good is to not have a publicly stated opinion on things like which causes are more pressing than others."

I vaguely agree with the framing of questions vs. answers, but I feel worried that "answer-based communities" are quite divergent from the epistemic culture of EA. Like, religions are answer-based communities, but a lot of EAs would dispute that EA is a religion or that it is prescriptive in that way.

Not sure how exactly this fits into what you wrote, but figured I should register it.

[anonymous]

I feel worried that "answer-based communities" are quite divergent from the epistemic culture of EA

I don't feel worried about that. I feel worried that this post frames neartermist-leaning orgs (like the OP's) as question-based i.e. as having an EA epistemic culture, while longtermist-leaning orgs are framed as answer-based i.e. as having an un-EA epistemic culture without good reason.
