
Is EA a question, or a community based around ideology?

After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.

In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset, and most recently in a lot of the top comments on the post on suggestions for community changes.

As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.

I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community. 

But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone! 

A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision-making. Some specific ways in which I differ from the average highly-engaged EA (although my, or their, priors could change) are that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future. 

My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what their ideology is, because going into discussions I then know where we’ll likely differ and where we’ll potentially find common ground. 

If EA aims to be a community based around ideology, I don’t think much has to change; the only request I’d have is that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly emphasizes the importance of moral uncertainty, and then to have your ideas dismissed when you’re not a total utilitarian or strong longtermist. 

That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent critical posts about what the community is like for women, but this post is not about the community itself. 

Why it's important for “EA as a question” that the EA community engage with outside people and ideas

If EA truly aims to be a question about how to “do the most good”, I think the thing the EA community needs to do most is learn from and work with the myriad people who care about doing good, and doing it effectively, but for whom EA will never be a primary community. Lots of the people EAs can learn from are probably less “EA” than I am – some examples (although of course people with these identities may also identify as EA) are people living outside of EA city or country hubs; people based in the Global South; people who are older; people with children; and people who have years of career experience completely outside the EA ecosystem.

None of the following means I think that the EA community should cease to exist: it’s a unique place in which I think a lot of people have found great community. 

But there’s a difference between “EA the community” and (lowercase) “effective altruism the project”. The main tension I have observed, although this is based on anecdotes and conversations and I could be mistaken here, is that the EA community’s insularity – in which cause prioritization, hiring, social life, funding, and more are all interconnected (as discussed in this post) – is hindering lowercase effective altruism, because it means that a lot of people who would add valuable ideas but aren’t engaged with EA the community aren’t at the professional table, either.

Some of the key groups I’ve thought of that are less involved in the EA community but would likely provide valuable perspective are policymakers on local and national levels (including policymakers from the Global South), people with years of expertise in the fields EA works in, and people who are most affected by EA-backed programs. But there are also ways of doing good with which I'm less familiar. I'm still EA-adjacent enough that there's much I would be missing, which is one reason it’s important to have more diverse networks! 

Interacting more with people from these backgrounds would bring perspectives on doing good that people who identify as EAs – largely white, male, and young, with a heavy focus on STEM and utilitarian philosophy – are missing. CEA recognizes the importance of diversity and inclusion for attracting talented people and not missing out on important perspectives. Beyond this, some concrete problems driven by homogeneity that have been recently brought up on the forum are a lack of good organizational governance and limited ability “to design and deliver effective, innovative solutions to the world’s most pressing problems”. 

How EA can engage with people outside of the community

Here are some of my concrete suggestions for how different groups within EA can engage with people outside of EA without the aim of bringing them into the community. This is not comprehensive, and I’ve seen many of them discussed elsewhere (I like this comment from another EA outsider); I also note specific positive examples I’ve seen. I’ve put these suggestions into three broad and imperfect categories: events, ideas, and professional actions. 

Events

  • EA Global accept and actively recruit people, especially from the Global South, who are experts in different fields that are aligned with lowercase effective altruist goals – for example, evidence-based decision-making and ensuring cost-effectiveness of interventions. I found EA Global DC to be making good headway with this with respect to people in the US policy space, but this could expand, which leads to my next point. 
    • Most people in the world, now and in the future, don’t live in the US and Europe. I have seen some good efforts on Global South recruitment of students and early-career professionals for the EA community (such as in India and the Philippines), but I would recommend going beyond even that – bringing in people who won’t become highly-engaged EAs, but who could have things to teach and learn from the community. One group I had a discussion about was IAS (Indian Administrative Service) officers – they and EA could both benefit from discussions on bringing evidence into policymaking. 
    • This sort of engagement would almost certainly involve more EA-related discussion in languages other than English, and it’s been exciting to see traction in the community on this recently.
  • EA groups cohost more joint community meetups. I’ve seen this happen with Bay Area YIMBY, and I’d love to see more of these with other communities with overlapping aims. This might also help fulfill the goal of increasing diversity within the EA community if some attendees want to become highly-involved EAs.
  • EA organizations engage with the ideas and preferences of people impacted by EA programs, such as GiveWell and IDinsight’s collaboration on measuring people's preferences. Given EA’s (and my) elite background this might be harder than engaging, for example, officers in the Indian Administrative Service, but I would love to see efforts to include the majority of the world in decision-making about issues that will affect the whole world. It would be great if EA orgs could incorporate these perspectives into both program decisions and cause prioritization. 

Ideas

  • I think it’s important that EAs within the core community discuss ideas with, and listen to, people from outside the EA community. EAs in conversation with non-EAs often employ the scout mindset, and I think in general EAs are curious and open to learning and coming to synthesis. But I’ve sometimes found conversations around areas in which I disagree with “EA orthodoxy” frustrating; in some cases, my ideas have seemingly been dismissed out of hand and the conversation has ended in appeals to authority, which is both alienating and bad epistemics. 
  • EAs engage with ideas and critiques outside of the EA ecosystem. This could be through interpersonal interactions; this could be through – especially for new EAs for whom there can be an overwhelming amount of EA-specific content – continuing to engage with philosophy, social science, and other ideas outside of EA; this could be through inviting non-EAs to speak at EA Global or on EA podcasts. Lowercase effective altruism can only be made stronger through reading and engaging with other ideas, even if (and probably especially when) they challenge EA orthodoxy.

Professional actions

  • EA (orgs, and the core community at large) de-emphasize EA orthodoxy and the search for the singular best way to do good, instead bringing things like evidence-based decision-making to all fields. This is maybe a general ideological difference I have with EA that would merit its own post, but I think cause neutrality taken to its extreme (especially given we all have imperfect information when trying to prioritize causes) can be alienating to people who want to do good effectively, but whose idea of doing good isn’t on the 80,000 Hours list. Some organizations like Open Phil and Founders Pledge are great at looking across fields. But general community emphases that make it seem like AI, for example, is central to EA mean that non-AI people might think that lowercase effective altruism is not for them either – when it might be!
  • EA orgs fund and collaborate with non-EA orgs that want to improve the world in a cost-effective way. Grantmakers should explicitly seek, if possible, organizations and cause areas outside the EA ecosystem. I’ve been excited to see Open Phil’s request for new cause area suggestions, such as the move into South Asian air quality.  
  • EA orgs take concrete steps to hire non-EAs. (I think there was a post on this recently but unfortunately I can’t find it.) People with decades of experience in topics as diverse as management, policymaking, and scientific research could add a lot to EA organizations without having to be involved in the community at all. A concrete step EA orgs could take is removing EA ideology words from job descriptions and instead defining for themselves the core principles of EA that are important for the jobs they want to hire for, to ensure the descriptions resonate beyond just people who identify as EAs. 
    • From a personal perspective, I want to say that EA and EA-adjacent orgs have been very open to me working for them, despite (or because of) my being open about my perspectives, and I want to thank them for this. That said, I only started getting recruited once I worked for Future Perfect and started to go to EA events, and I think a lot of people better than me are being missed because they don’t know what EA is, EA doesn’t know who they are, or they think EA jobs are not for them. I know recruiting from outside of networks is difficult and that essentially every sector has this problem, but there is a lot of potential for increased impact from hiring outside the community. 

If EA aims to be a question, I think there’s a way forward in which EA continues to be a unique community, but one that learns from and engages with the non-EA community a lot more; and we can work to do good better together. 

Many thanks to Ishita Batra, Obasi Shaw, and others for their comments on this draft; and many thanks to the many people both within and without the EA community I’ve discussed these ideas with over the course of the last year. 

Comments

I think this post is very accurate, but I worry that people will agree with it in a vacuous way of "yes, there is a problem, we should do something about it, learning from others is good". So I want to make a more pointed claim: I think that the single biggest barrier to interfacing between EAs and non-EAs is the current structure of community building. Community-building is largely structured around creating highly-engaged EAs, usually through recruiting college students or even high-school students. These students are not necessarily in the best position to interface between EA and other ways of doing good, precisely because they are so early into their careers and don't necessarily have other competencies or viewpoints. So EA ends up as their primary lens for the world, and in my view that explains a sizable part of EA's quasi-isolationist thinking on doing good.

This doesn't mean all EAs who joined as college students (like me) end up as totally insular - life puts you into environments where you can learn from non-EAs. But that isn't the default, and especially outside of global health and development, it is very easy for a young highly-engaged EA to avoid learning about doing good from non-EAs.

After a cursory read of the post, my summary would have been some vacuous agreement like you described. +1 for making a more specific claim. If the post mentions some specific claims like this (I didn't read very carefully), I'd greatly appreciate a TL;DR / executive summary of these at the top.

What are some particular changes to EA community building that you'd like to see?

Thanks for this thought! I'd considered putting something similar in the original post simply based on anecdotes, but not being a community builder or someone who joined in college I wasn't sure enough to include it. I'd be interested to know your or others' thoughts on what community-building in particular could do to catalyze more interaction between EA and other ways of doing good? 

I'm not a community builder, but I'd like to share some observations as someone who has been involved with EA since around 2016 and has only gotten heavily involved with the "EA community" over the past year. I'm not sure how helpful they'll be but I hope it's useful to you and others.

  • I strongly agree with Karthik's comment about the focus on highly-engaged EAs as the desired result of community building being counterproductive to learning. I think part of this definitely comes down to the relative inexperience of both members and group leaders, particularly in university groups. There seems to be a lot of focus on convincing people to get involved in EA rather than facilitating their engagement with ideas, and this seems to lead to a selection effect where only members who wholly buy into EA as a complete guideline for doing good stick with the community, creating an intellectual echo chamber of sorts where people don't feel very motivated to meaningfully engage with non-EA perspectives.
  • One reflection of this unwillingness to engage that I've come across recently is EAs online asking how to best defend point X or Y, or how to best respond to a certain criticism. The framing of these questions as "how do I convince someone that X is right/wrong", "which arguments work best on people who believe Y", or "how do I respond to criticism Z" makes it apparent to me that they are not interested in understanding the other person's perspective so much as "defeating" it, and that they are trying to defend ideas or points that they are not convinced of themselves (as demonstrated by the fact that they are not able to respond to the criticisms themselves but feel the need to defend the point), presumably because it's an EA talking point. 
  • Another issue I've seen in similar online spaces is a sneer-y and morally superior attitude towards "fuzzies" and non-utilitarian approaches to doing good. This is hostile to non-EAs, making them less willing to engage, and it also demonstrates an unwillingness on the EAs' side to engage. I'm not sure how prevalent this kind of thing is or how it can be counteracted, but it may be worth thinking about. 
    • While not as severe, I think it may be worth looking into discussion norms in this context as well. EAs as a community tend to value relatively highly polished arguments that are backed with evidence and their preferred modes of analysis (Bayesian analysis, utilitarian calculus, expected value, etc.) and presented in a very "neutral", "unemotional" tone. There have been posts on this forum over the past few weeks both pointing this out and exemplifying it in the responses. While I do agree with criticisms of discussion norms, I think that it's fairly easy to see that this presents an obstacle to learning regardless of how one feels about it. If our intention is to learn from others, EAs need to be able to meaningfully engage with perspectives that are not presented in their preferred style, and to engage with content over style and presentation, particularly where criticisms or fundamental differences of opinion are concerned. 
  • I've spoken to multiple community builders, both for university groups and local groups, who expressed frustration or disappointment in not being able to get members to "engage" with EA because members weren't making career changes into direct work on EA causes. I think this is not only a bad approach to community building for reasons stated above, but that it also creates a dynamic where people who could be doing good work and learning elsewhere are implicitly told that this kind of work is not valuable, thus both alienating people who are not able to find direct work and further implying that non-EA work is valueless. This is probably something that can be addressed both in community-building best practices and by tweaking any existing incentive structures for community building to emphasize highly-engaged EAs less as a desirable end result. 

We’ve been getting flak for being over-reliant on quantitative analysis for some time. However, critics of EA insider insularity are also taking aim at times when EA has invested money in interventions, like Wytham Abbey, based on qualitative judgments of insider EAs. I think there’s also concern that our quantitative analysis may simply be done poorly, or even be just a quantitative veneer for what is essentially a qualitative judgment.

I think it’s time for us to go past the “qualitative vs quantitative” debate, and try to identify what an appropriate context and high-quality work looks like using both reasoning styles.

One change I’d like to see is a set of standards for legibility for spends above a certain size. If we’re going to spend $15 million on a conference center based on intuitions about the benefit, we should still publish the rationale, the maintenance costs, and an analysis of how much time will be saved on logistics, in a prominent, accessible location so that people can see what we’re up to. That doesn’t mean we need to have some sort of public comment or democratic decision making on all this stuff - we don’t need to bog ourselves down with regulation. But a little more effort to maintain legibility around qualitative decisions might go a long way.

When you buy a conference center you get an asset worth around the price that you paid for it. Please, can people stop saying that "we spent $15 million on a conference center"? If we wanted to sell it today, my best guess is we could probably do that for $13-14 million, so the total cost here is around $1-2 million, which is really not much compared to all the other spending in the ecosystem. 

There is a huge difference between buying an asset you can utilize and spending money on services, rent, etc. If you compare them directly you will make crazily wrong decisions. The primary thing to pay attention to is depreciation, interest and counterfactual returns, all of which suggest numbers an order of magnitude lower (and indeed move it out of the space where anyone should really worry much about this). 
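
To make this accounting concrete, here is a minimal back-of-envelope sketch; the resale value, depreciation rate, and counterfactual-return rate below are illustrative assumptions, not the actual figures for the purchase.

```python
# Back-of-envelope cost of owning a venue vs. the headline purchase price.
# All inputs are illustrative assumptions, not actual figures.
purchase_price = 15_000_000   # headline purchase price ($)
resale_value = 13_500_000     # assumed resale value today ($)
depreciation_rate = 0.01      # assumed annual upkeep/depreciation, as a fraction of price
opportunity_rate = 0.05       # assumed counterfactual annual return on the capital

one_off_loss = purchase_price - resale_value                        # loss if sold today
annual_holding_cost = purchase_price * (depreciation_rate + opportunity_rate)

print(f"One-off loss on resale: ${one_off_loss:,.0f}")
print(f"Annual holding cost:    ${annual_holding_cost:,.0f}")
# With these assumed rates, the recurring cost is roughly $0.9M/year,
# about an order of magnitude below the $15M headline number.
```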

I’m aware that the conference center can be sold. The point is that there wasn’t an accessible, legible explanation available. To accept that it was a wise purchase, you either have to do all the thinking for yourself, or defer to the person who made the decision to buy it.

That’s a paradigm EA tried to get away from in the past, and what made it popular, I think, was the emphasis on legibility. That’s partly why 80,000 hours is popular - while in theory, anyone could come to the same conclusions about careers by doing their own research, or just blindly accept recommendations to pursue a career in X, it’s very helpful to have a legible, clearly argued explanation.

The EA brand doesn’t have to be about quantification, but I think it is about legibility, and we see the consequences when we don’t achieve that: people can’t make sense of our decisions, they perceive it as insular intuitive decision making, they get mad, they exaggerate downsides and ignore mitigating factors, and they pan us. Because we made an implicit promise, that with EA, you would get good, clear reasons you can understand for why we wanted to spend your donations on X and not Y. We were going to give you access to our thought process and let you participate in it. And clearly, a lot of people don’t feel that EA is consistently following through on that.

EA may be suffering from expert syndrome. It’s actually not obvious to casual observers that buying an old plush-looking country house might be a sensible choice for hosting conferences rather than a status symbol, or that we can always sell it and get most of our money back. If we don’t overcome this and explain our spending in a way where an interested outsider can read it and say “yes, this makes sense and I trust that this summary reflects smart thinking about the details I’m not inspecting,” then I think we’ll continue to generate heated confusion in our ever-growing cohort of casual onlookers.

If we want to be a large movement, then managing this communication gap seems key.

Assuming that it costs around £6,000 to save a life, these 1-2 million come down to around 200-300 lives saved. EAs claim to have a very high standard in evaluating the money spent by charities; this shouldn't stop at the 'discretionary spending' of the evaluators.

I'm not sure what part of my comment this comment is in response to. I initially thought it was posted under my response to Berke's comment below and am responding with that in mind, so I'm not 100% sure I'm reading your response correctly, and apologies if this is off the mark. 

We’ve been getting flak for being over-reliant on quantitative analysis for some time. However, critics of EA insider insularity are also taking aim at times when EA has invested money in interventions, like Wytham Abbey, based on qualitative judgments of insider EAs.

I think the issue around qualitative vs. quantitative judgement in this context is mainly on two axes:

  • When it comes to cause prioritization, the causality behind some factors and interventions can be harder to measure definitively in clear, quantitative terms. For example, it's relatively easy to figure out how many lives something like a vaccine or bed net distribution can save with RCTs, but it's much harder to figure out what the actual effect of, say, 3 extra years of education is for the average person - you can get some estimates, but it's not easy to clearly delineate what the actual cause of the observed results is (is it the diploma, the space for intellectual exploration, the peer engagement, the structured environment, the actual content of education, the opportunities for maturing in a relatively low-stakes environment... ). This is because there are a lot of confounding and intertwined factors and it's not easy to isolate the cause - I had a professor who loved to point to single-parent households as an example of difficulty in establishing causality: is the absence of one parent the problem, or is it the reasons that the parent is absent? These kinds of questions are better answered with qualitative research, but they don't quantify easily and you can't run something like an RCT on them. This makes them a bit less measurable in a clear-cut way. I'm personally a huge fan of qualitative research for impact assessment, but such studies have smaller sample sizes and don't tend to "generalize" the same way RCTs etc. do (and how well other types of study generalize is a whole other question, but it seems to be taken more or less as given here and I don't think the way it's treated is problematic on a practical scale).
  • That being said, there is a big difference between a qualitative research study and the "qualitative judgments of insider EAs" - I think that the qualitative reasoning presented in comments in the thread about the Abbey (personal experiences with conferences etc.) is valuable, but doesn't rise to the level of rigor of actual qualitative research - it's anecdote. 

I think it’s time for us to go past the “qualitative vs quantitative” debate, and try to identify what an appropriate context and high-quality work looks like using both reasoning styles.

I absolutely agree with this and am a strong proponent of methodological flexibility and mixed methods approaches, but I think it's important to keep the difference between qualitative reasoning based on personal experiences and qualitative reasoning based on research studies and data in mind while doing so. "Quantitative reasoning" tends to implicitly include (presumably) rigorously collected data while "qualitative reasoning" as used in your comment (which I think does reflect colloquial uses, unfortunately) does not.  
 

I like all of your suggested actions. Two thoughts:


1) EA is both a set of strong claims about causes + an intellectual framework which can be applied to any cause. One explanation for what's happening is that we grew a lot recently, and new people find the precooked causes easier to engage with (and the all-important status gradient of the community points firmly towards them). It takes a lot of experience and boldness to investigate and intervene on a new cause.

I suspect you won't agree with this framing but: one way of viewing the play between these two things is a classic explore/exploit tradeoff.[1] On this view, exploration (new causes, new different people) is for discovering new causes.[2] Once you find something huge, you stop searching until it is fixed.

IMO our search actually did find something so important, neglected, and maybe tractable (AI) that it's right to somewhat de-emphasise cause exploration until that situation begins to look better. We found a combination gold mine / natural fission reactor. This cause is even pluralistic, since you can't e.g. admire art if there's no world.
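
As an aside, here is a minimal epsilon-greedy sketch of the explore/exploit framing above; the "causes", payoff numbers, and decay schedule are all made-up assumptions for illustration, not a claim about how cause prioritisation actually works.

```python
import random

# Illustrative epsilon-greedy bandit: "arms" stand in for causes,
# and the payoff numbers are made up for the sketch.
true_payoffs = {"cause_A": 0.2, "cause_B": 0.5, "cause_C": 0.9}
estimates = {arm: 0.0 for arm in true_payoffs}
pulls = {arm: 0 for arm in true_payoffs}
epsilon = 0.3  # probability of exploring a random cause

for step in range(1000):
    if random.random() < epsilon:
        arm = random.choice(list(true_payoffs))   # explore a new cause
    else:
        arm = max(estimates, key=estimates.get)   # exploit the best estimate so far
    reward = random.gauss(true_payoffs[arm], 0.1)  # noisy observed impact
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean
    epsilon = max(0.05, epsilon * 0.995)  # explore less once a strong cause is found

print(estimates, pulls)  # effort concentrates on the highest-payoff "cause"
```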

 

2) But anyway I agree that we have narrowed too much. See this post which explains the significance of cause diversity on a maximising view, or my series of obituaries about people who did great things outside the community.

  1. ^

    I suspect this because you say that we shouldn't have a "singular best way to do good", and the bandit framing usually assumes one objective.

  2. ^

    Or new perspectives on causes / new ideas for causes / hidden costs of interventions / etc

Thanks for the comment - this and the other comments around cause neutrality have given me a lot to think about! My thoughts on cause neutrality (especially around where the pressure points are for me in theory vs. practice) are not fully formed; it's something I'm planning to focus a lot on in the next few weeks, in which time I might have a better response. 

I strongly agree. 

I miss discussions about “how we can make EA mainstream” or “bring EA to academia”. 

While I find the EA community to be a great source of personal and social value, we still face the challenge of significantly scaling everything we do. Taking informed steps toward doing good better shouldn't be a side consideration for governments, NGOs, or people; it should be the default. Working to systematically address existential risk shouldn't be the work of a few nonprofits; it should be the work of national and international institutions.

If we over-emphasize community building while at the same time de-prioritizing engaging with the outside world, we risk that vision. There are significant advantages to working inside a community (and we should leverage those!), but to be truly successful, we first have to learn how to communicate with the outside world.

For group organizers, one step we could take is to prioritize working with existing institutions and experts in different fields. Instead of only inviting EAs to events, we could invite more experts working in related areas. Instead of only networking with EA institutions, we could work more closely with traditional institutions from different fields. [1]

This probably means navigating difficult tradeoffs – in handling outreach and press, in compromising on ideas, and in producing fewer “highly-engaged EAs” – but I think this is a discussion worth having.

  1. ^

    This is already happening in the policy space (out of necessity). There's also plenty of precedent from other EA groups (especially if they're specialized), but I don't think it's anywhere near there yet.

De-emphasizing cause neutrality could (my guess is probably would) reduce the long-term impact of the movement substantially. Trying to answer the question "How to do the most good" without attempting to be neutral between causes we are passionate about and causes we don't (intuitively) care that much about would bias us towards causes and paths that are interesting to us rather than particularly impactful causes. Personal fit and being passionate about what you do is absolutely important, but when we're trying to compare causes and comparing actions/careers in terms of impact (or ITN), our answer shouldn't be dependent on our personal interests and passions, but when we're taking action based on those answers then we should think about personal fit and passions, as these prevent us from being miserable while we're pursuing impact. And also, cause neutrality should nudge people against associating EA with a singular cause like AI Safety or global development or even 80k careers; I think extreme cause neutrality is a solution to the problem you describe, rather than being the root of the problem.
De-emphasizing cause neutrality would increase the likelihood of EA becoming mainstream and popular, but it would also undermine our focus and emphasis on impartiality and good epistemics, which were/are vital factors in why EA was able to identify so many high-impact problems and take action to tackle those problems effectively, imho.

I think this is actually a good example of the dynamic the author is pointing at. 

  • While some people simply care about doing the most good, others will care about doing the most good in X area, and barring the second kind of person from EA is, in my opinion, both not optimal and not conducive to learning. More importantly, the assumption of cause neutrality in this fashion is precisely one of the differences between EA as a question and EA as an ideology. 
  • Cause selection being strictly guided by neutral calculation will likely cause a lot of lost potential, for reasons you've pointed to (I have some difficulty parsing this paragraph and am not sure where you think it's appropriate or inappropriate to factor in personal fit and passion): 

Personal fit and being passionate about what you do is absolutely important, but when we're trying to compare causes and comparing actions/careers in terms of impact (or ITN), our answer shouldn't be dependent on our personal interests and passions, but when we're taking action based on those answers then we should think about personal fit and passions, as these prevent us from being miserable while we're pursuing impact.

  • More importantly, the impact of many causes is a lot more difficult to measure quantifiably and definitively, let alone in a meaningful way. These causes are de facto left out of EA discussion or will "lose" to causes that allow for cleaner and easier quantitative analysis, which does not seem ideal as it leads to a lot of lost potential. 

If you'll forgive the marketing terminology, I think cause neutrality is EA's Unique Selling Point. It's the main thing EA brings to the table, its value add, the thing that's so hard to find anywhere else. It's great that people committed to particular causes want to be as effective as possible within them - better than not caring much for effectiveness at all - but there are other places they can find company and support. EA can't be for literally everyone otherwise it doesn't mean anything, so it has to draw a line somewhere and I think that the most natural place is around the idea/behaviour/value that makes EA most distinctive (and, I would argue, most impactful).

To your second bullet point, I can't think of an area where it's more difficult to measure impact quantitatively and definitively than longtermism.

I agree that EA can't be for everyone and I don't think it should try to be, but I personally don't think that cause neutrality is EA's unique selling point or the main thing it brings to the table, although I do understand that there are different approaches to EA.

To your second bullet point, I can't think of an area where it's more difficult to measure impact quantitatively and definitively than longtermism.

I agree that longtermist impact isn't really measurable, but this makes it hard for me to reconcile cause neutrality with longtermism, rather than making me feel that rigid cause neutrality would not have the effect I stated. 
 

I'm on the side of value alignment being much more important than people often think, as it's hard to get anywhere if people want to go five different ways, and it's easy for organisational culture to be diluted in the absence of an explicit effort to maintain it.

That said, outside of community-building roles, particular frames are more important than whether a person identifies as EA (someone can have these frames without identifying as EA, or lack them while identifying as an EA). These include attempting to do the most good that you can do[1], respect for evidence and reason, and a willingness to step outside of the social reality. You can find people like this outside of the EA community, but it's much rarer outside of people who are at least EA adjacent. 

I'd be much more open to bringing in experienced non-EAs who don't necessarily have these attributes in advisory capacities.

  1. ^

    This does not imply being a naive maximiser.

Post summary (feel free to suggest edits!):
The author asks whether EA aims to be a question about doing good effectively, or a community based around ideology. In their experience, it has mainly been the latter, but many EAs have expressed they’d prefer it be the former.

They argue the best concrete step toward EA as a question would be to collaborate more with people outside the EA community, without attempting to bring them into the community. This includes policymakers on local and national levels, people with years of expertise in the fields EA works in, and people who are most affected by EA-backed programs.

Specific ideas include EAG actively recruiting these people, EA groups co-hosting more joint community meetups, EA orgs measuring preferences of those impacted by their programs, applying evidence-based decision-making to all fields (not just top cause areas), engaging with people and critiques outside the EA ecosystem, funding and collaborating with non-EA orgs (e.g. via grants), and EA orgs hiring non-EAs.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

This is a great summary, thank you so much!

Nice one Zoe, love these a lot.

Great post. Thanks for sharing.
