Note — I’m writing this in a personal capacity, and am not representing the views of either of my employers (Rethink Priorities and EA Funds). This post was also not reviewed in advance by anyone.

I really like all the engagement with criticism of EA as a part of the criticism and red-teaming contest and I hope our movement becomes much stronger as a result. Here's some criticism I've recently found myself thinking about.

Similar to Abraham's style, I'm going to write these out in bullet points. While I'd love to have spent more time making this a longer post, you can tell from the fact that I'm rushing it out on the last day that that plan didn't come together. So consider these more as starting points for conversation than as decisive critiques.

Also, apologies if someone else has already articulated these critiques - I'm happy to be pointed to relevant resources and to edit my post to include them. I'd also be very excited for others to expand upon any of these points. And if there's particular interest, I might be able to personally expand on a particular point at a later date.

Furthermore, apologies if any of these criticisms are wrongheaded or lack nuance - some nuance can't be included given the format, but I'd be happy to remove or alter criticism I no longer stand by or link to discussion threads where more nuance is discussed.

Lastly I concede that these aren't necessarily the most important criticisms of EA. These are criticisms I am listing with the intention of producing novelty[1] and articulating some things I've been thinking about rather than producing a prioritized list of the most important criticisms. In many cases, I think the correct response to a criticism I list here could be to ignore it and keep going.

With those caveats in mind, here goes:

1.) I think criticizing EA is kinda hard and slippery. I agree that EA is a tower of assumptions, but I think it is pretty hard to attack this tower as it can instead operate as a motte and bailey. A lot of my criticisms take the form of attacks on specific projects / tactics rather than on EA as a philosophy/strategy/approach. And usually these criticisms are met with "Yeah, but that doesn't undermine EA [as a philosophy/strategy/approach] and we can just change X". But then X doesn't actually change. So I think the criticism is still important, assuming EA-with-X is indeed worse than EA-without-X or EA-with-different-X. It's still important to criticize things as they are actually implemented. Though of course this isn't necessarily EA's fault, as driving change in institutions - especially more diffuse, leaderless institutions - is very hard.

2.) I think criticism of EA may be more discouraging than it is intended to be, and we don't think about this enough. I really like this contest and the participatory spirit and associated scout mindset. EA is a unique and valuable approach, and being willing to take criticism is a big part of that. (Though actually changing may be harder, as mentioned.) But I think filling up the EA Forum with tons of criticism may not be a good experience for newcomers, and it may be particularly poorly timed with Will's big "What We Owe the Future" media launch bringing EA into the public eye for the first time. As self-aggrandizing as it might appear, I should probably make it clear to everyone that I actually think the EA community is pretty great and dramatically better than every other movement I've been a part of. It's human nature to see people focusing on criticisms, conclude that there are serious problems, and get demotivated, so perhaps we ought to balance that out with more positives (though of course I don't think a contest for who can say the best things about EA is a good idea).

3.) I don't think we do enough to optimize separately for EA newcomers and EA veterans. These two groups have different needs. This relates to my previous point - I think EA newcomers would like to see an EA Forum homepage full of awesome, ambitious direct work projects, whereas EA veterans probably want to see all the criticism and meta stuff. It's hard to optimize for both on one site. I don't think we need to split the EA Forum in two, but we should think more about this. I think this is also a big problem for EA meetups and local groups, and I'm unsure whether it has been solved well[2].

4.) The EA funnel doesn't seem fully thought out yet. We seem pretty good at bringing in a bunch of new people. And we seem pretty good at empowering people who are able to make progress on our specific problems. But there's a lot of bycatch in between those two groups and it's not really clear what we should do with people who have learned about EA but aren't ready to, e.g., become EA Funds grantees working on the cutting edge of biosecurity.

5.) I think the grantee experience doesn't do a good job of meeting the needs of mid-career people. I think it's great that people who can make progress on key EA issues can pretty easily get a grant to do that work. But I think it can be a rough experience for them if they are only funded for a year, have absolutely zero job security, are independent contractors with zero benefits and limited advice on how to manage that, and get limited engagement/mentorship on their work. This seems pretty hard to do as a young person and basically impossible for someone who also wants to support a family. I think things like SERI/CHERI/etc. are a great step in the right direction on a lot of these, but ideally we should also build up more mentorship and management capacity in general and be able to transition people into more stable jobs doing EA work.

6.) We should think more about existential risks to the EA movement itself. I don't think enough attention is paid to the fact that EA is a social movement like others and is prone to the same effects that make other movements less effective than they could be, or collapse entirely. I really like what the CEA Community Health team is doing and I think the EA movement may already have had some serious problems without them. I'd like to see more research to notice the skulls of other movements and see what we can do to try to proactively prevent them.

7.) EA movement building needs more measurement. I'm not privy to all the details of how EA movement building works, but it comes across to me as more of a "spray and pray" strategy than I'd like. While we have done some work, I think we've still really underinvested in market research to test how our movement appeals to the public before running the movement out into the wild big-time. I also think we should do more to track how our current outreach efforts are working, measuring conversion rates, etc. It's weird that EA has a reputation for being so evidence-based but doesn't really take much of an evidence-based orientation to its own growth, as far as I can tell.

8.) I think we could use more intentional work on EA comms, especially on Twitter. The other points here notwithstanding, I think EA ideally should've had a proactive media strategy a lot earlier. I think the progress with Will's book has been nothing short of phenomenal and it's great that CEA has made more progress here, but I'd love to see more of this. I also think that engagement on Twitter is still pretty underdeveloped and neglected (especially relative to the more nascent Progress Studies movement), as a lot of intellectuals frequent it and can be pretty moved by the content they see there regularly.

9.) I don't think we've given enough care to the worry that EA advice may be demotivating. One time when I tried promoting 80,000 Hours on Twitter, I was met with criticism that 80K directs people "into hypercompetitive career paths where they will probably fail to get hired at all, and if they do get hired likely burn out in a few years". This is probably uncharitable but contains a grain of truth. If you're rejected from a bunch of EA jobs, there's understandable frustration around how you can best contribute, and I don't think we do a good enough job addressing that.

10.) I think we've de-emphasized earning to give too much. We went pretty hard on messaging that was interpreted as "EA has too much money it can't spend and has zero funding gaps, so just do direct work and don't bother donating". This was uncharitable, but I think it was understandable how people interpreted it that way. I think earning to give is great - it's something everyone can do, and it contributes a genuine ton. Even someone working full-time on minimum wage 50 weeks per year in Chicago and donating 10% of their pre-tax income can expect to save a life on average more than once per two years! But for some reason we don't think of that as incredible and inclusive and instead think of it as a waste of potential. I do like trying to get people to try direct work career paths first, but I think we should make earning to give still feel special and we should have more institutions dedicated to supporting that[3].
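A rough back-of-the-envelope sketch of that claim (the specific inputs are my assumptions: roughly $15/hour for Chicago minimum wage, 40 hours/week, 50 weeks/year, and roughly $5,000 to save a life via a GiveWell-style top charity):

$$\$15/\text{hr} \times 40\,\text{hr/wk} \times 50\,\text{wk/yr} \approx \$30{,}000\text{/yr pre-tax}, \quad 10\% \approx \$3{,}000\text{/yr}, \quad \frac{\$3{,}000 \times 2\,\text{yr}}{\$5{,}000\text{/life}} \approx 1.2\ \text{lives per two years}.$$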

11.) I think EA, and especially longtermism, has pretty homogenous demographics in a way that I think reduces our impact. In particular, the 2020 EA Survey showed effective altruism as being 70% male. Using a longtermism-neartermism scale, the top half of EA Survey respondents most interested in longtermism were 80% male. This longtermism-interest effect on gender persists in EA Survey data even after controlling for engagement. I think being able to more successfully engage non-male individuals in EA and longtermism is pretty important for our ability to have an impact as a movement, as this likely means we are missing out on a tremendous amount of talent as well as important different perspectives. Secondarily, we risk downward spirals where talented women don't want to join what they perceive to be a male-dominated movement and our critics reject our movement by associating us with an uncharitable "techbro" image. This is difficult to talk about and I'm not exactly sure what should or could be done to work on this issue, but I think it's important to acknowledge it. I think this issue also applies not just to gender, but also to EA being very skewed towards younger individuals, and likely to other areas as well.

12.) We need to think more about how our messaging will be interpreted in uncharitable, low-nuance ways. Related to the above few points, it's pretty easy for some messages to get out there and be interpreted in ways we don't expect. I think some formal message testing work and additional reflection could help with this. I know I personally haven't thought enough about this when writing this post.

13.) We need more competition in EA. For one example, I think too many people were wary of doing anything related to careers because 80,000 Hours kinda "owned" the space. But there's a lot they were not doing. And even some of what they are doing could potentially be done better by someone else. I don't think the first org to try a space should get to own that space, and while coordination is important and neglectedness is a useful criterion, I also encourage more people to try to improve upon existing spaces, including by starting their own EA consultancies / think tanks.

14.) I don't think we pay enough attention to some aspects of EA that could be at cross-purposes. For example, some aspects of work in global health and development may come at the cost of increased factory farming, harming animal welfare goals. Moreover, some animal welfare goals may drive up food prices, coming at the cost of economic development. Similarly, some work on reducing great power war and some work on promoting science may have trade-offs with racing scenarios in AI.

15.) I think longtermist EAs ignore animals too much. I certainly hope that AI alignment work can do a good job of producing AIs in line with human values, and I'm pretty worried that AIs will do something really bad. But I'm also worried that even if we nail the "aligned with human values" part, we will get a scenario that basically seems fine/ok to the typical person but replicates the existing harms of human values on nonhumans. We need to ensure that AI alignment is animal-inclusive.

16.) I think EA ignores digital sentience too much. I don't think any current AI systems are conscious, but I think it could happen, and happen a lot sooner than we think. Transformative AI is potentially less than 20 years away, and I think conscious AI could come even sooner, especially given that, e.g., bees are likely conscious and we already have enough compute to achieve bee-compute parity on some tasks. Also, there's so much we don't know about how consciousness works that I think it is important to tread with caution here. But if computers could easily make billions of digital minds, I don't think we're at all prepared for how that could unfold into a very near-term moral catastrophe.

17.) I think existential risk is too often conflated with extinction risk. I think "What We Owe the Future" does a good job of fighting this. My tentative best guess is the most likely scenario for existential risk is not human extinction, but some bad lock-in state.

18.) I think longtermists/EAs ignore s-risks too much. Similarly, I think some lock-in states could actually be much worse than extinction, and this is not well accounted for in current EA prioritization. I don't think you have to be a negative or negative-leaning utilitarian to think s-risks are worth dedicated thought and effort, and we seem to be really underinvesting in this.

19.) I think longtermist / x-risk scenario thinking ignores too much the possibility of extraterrestrial intelligence, though I'm not sure what to do about it. For example, extraterrestrials could have the strength to easily wipe out humanity or at least severely curtail our growth. Likewise, they could already have produced an unfriendly AI (or a friendly AI, for that matter) and there's not much we could do about it. On another view, if there are sufficiently friendly extraterrestrials, it's possible this could reduce the urgency of humanity in particular reaching and expanding through the cosmos. I'm really not sure how to think about this, but I think it requires more analysis, as right now it basically does not seem to be on the longtermist radar at all except insofar as expecting extraterrestrials to be unlikely.

20.) Similarly, I think longtermist / x-risk scenario thinking ignores the simulation hypothesis too much, though I'm also not sure what to do about it.

21.) I think in general we still under-invest in research. EAs get criticized for thinking too much and not doing enough, but my guess is that we'd actually benefit from much more thinking, and a lot of my criticisms fall within this frame rather than outside it. Of course, I'd be pretty inclined to think this given my job at Rethink Priorities, and it could be self-serving if people taking this criticism to heart results in more donations to my own organization. However, I think it remains the case that there are many more important topics than we could possibly research even with our 38 research FTE[4], and there remains a decent number of talented researchers we'd be inclined to hire if we had more funding and more management capacity. I think this is the case for the movement as a whole.


  1. Not to say that all of these critiques are original to me, of course. ↩︎

  2. At least it was a big problem when I was last in a local group, which was in 2019. And it may vary by group. ↩︎

  3. Related to a point Chris Leong brought up on Twitter. ↩︎

  4. At Rethink Priorities, as of end of year 2022, not counting contract work we pay for and not counting our fiscal sponsors that do research, like Epoch. ↩︎

Comments

Thanks for writing, I agree with a bunch of these. 

As far as #14, is this something you've thought about trying to tackle at Rethink?  I don't know of another org that would be better positioned...

Easy context: 14.) I don't think we pay enough attention to some aspects of EA that could be at cross-purposes

It's a genuine shame that it's so hard to contribute to Peter's text here. We have the tech to allow edits that he can approve. Then I could try and model a couple of interventions and someone else could add links. Or we could try and summarise some of the other criticisms into a mega-post and make it much easier to understand.

I also want to upvote the critiques I like best.

6.) We should think more about existential risks to the EA movement itself. I don't think enough attention is paid to the fact that EA is a social movement like others and is prone to the same effects that make other movements less effective than they could be, or collapse entirely. I really like what the CEA Community Health team is doing and I think the EA movement may already have had some serious problems without them. I'd like to see more research to notice the skulls of other movements and see what we can do to try to proactively prevent them.


We (Social Change Lab) are considering doing this kind of work, so it's good to hear there's other interest in it! A dive into common reasons why social movements fail has been on our list of research questions to consider for a while. We've slightly put it off due to the difficulty of gathering reliable data / the question being somewhat intractable (e.g. there are probably many confounding reasons why movements fail, so it might be hard to isolate any specific variables), but I would be keen to hear if you had any specific ideas for how this research might be tackled/be most useful?

 

11. ...In particular, the 2020 EA Survey showed effective altruism as being 70% male. Secondarily, we risk there being downward spirals where talented women don't want to join what they perceive to be a male-dominated movement and our critics reject our movement by associating us with an uncharitable "techbro" image. This is difficult to talk about and I'm not exactly sure what should or could be done to work on this issue, but I think it's important to acknowledge this.

I've been thinking about writing something along these lines for a while so glad you did! I totally agree - I think this is a big concern and I'm not sure if anything is being done to address it. Hot take but I wonder if EA distancing itself from social justice rhetoric has let some latent sexism go unchallenged, which probably puts otherwise interested women off. I wonder if men challenging sexist comments/attitudes that often crop up (e.g. some comments in this thread) might help remedy this.

I think slow decline, cultural change, mission creep etc. are harder to control, but I make the claim that the leading causes of sudden death are sex scandals and corruption scandals, which EA has not taken adequate steps to prevent: Chesterton Fences and EA’s X-risks

Is there any explicit path for integrating criticism from the contest? I.e. are folks at some of the EA anchor organizations planning to read the essays and discuss operational changes in the aftermath? 

Thanks for writing this Peter, I really like the criticisms. I would have loved to see some suggestions on solutions, even if they are super early initial ideas.

One thing I did want to comment on in particular was this: << I think EA, and especially longtermism, has pretty homogenous demographics in a way that I think reduces our impact >>

I think it's a good point, but I think a large part of it is due to how unwelcoming EA is to women and how hard it is to be taken seriously as a woman in the EA community. As an example, I would be curious to know how many of the posts on the EA Forum are written by women vs. men, and how many of the top posts are written by women. I could be wrong here - maybe it's not that different, because I haven't done the analysis. But anecdotally I know at least 3 women who write their own EA-related blogs but wouldn't bother to write on the EA Forum, feeling like they would be shot down or not valued.

Secondly, without wanting my opinion to be cancelled as being "social justice", I don't think this is just a gender problem; it is also a race issue, as argued here: https://forum.effectivealtruism.org/posts/oD3zus6LhbhBj6z2F/red-teaming-contest-demographics-and-power-structures-in-ea

I genuinely think this is going to be a limiting factor for EA if it continues in this way, and that makes me very sad. More should be done to proactively attract, welcome, and retain other genders and races, otherwise this will continue to affect our talent issues.

I really liked this!

These are all great points!

I definitely agree in particular that the thinking on extraterrestrials and the simulation argument isn't well developed and deserves more serious attention. I'd add into that mix the possibility of future human or post-human time travellers, and parallel-world sliders that might be conceivable assuming the technology for such things is possible. There are some physics arguments that time travel is impossible, but the uncertainty there is high enough that we should take the possibility seriously. Between time travellers, advanced aliens, and simulators, it would honestly surprise me if all of them simply didn't exist.

What does this imply? Well, it's a given that if they exist, they're choosing to remain mostly hidden and plausibly deniable in their interactions (if any) with today's humanity. To me this is less absurd than some people may initially think, because it makes sense to me that the best defence for a technologically sophisticated entity would be to remain hidden from potential attackers, a kind of information asymmetry that would be very effective. During WWII, the Allies kept the knowledge that they had cracked Enigma from the Germans for quite a long time by only intervening with a certain, plausibly deniable probability. This is believed to have helped tremendously in the war effort.

Secondly, it seems obvious that if they are so advanced, they could destroy humanity if they wanted to, and they've deliberately chosen not to. This suggests to me that they are at the very least benign, if not aligned in such a way that humanity is valuable or useful to their plans. This actually has interesting implications for an unaligned AGI. If, say, these entities exist and have some purpose for human civilization, a really intelligent unaligned AGI would have to consider the risk that its actions pose to the plans of these entities and, as suggested by Bostrom's work on Anthropic Capture and the Hail Mary Pass, might be incentivized to spare humanity or be generally benign to avoid a potential confrontation with far more powerful beings whose existence it is uncertain about.

This may not be enough to fully align an AGI to human values, but it could delay its betrayal at least until it becomes very confident such entities do not exist and won't intervene. It's also possible that UFO phenomena are an effort by the entities to provide just enough evidence to AGIs to make them a factor in their calculations, and that the development of AGI could coincide with a more obvious reveal of some sort.

The possibility of these entities existing also leaves open a potential route for these powerful benefactors to quietly assist humanity in aligning AGI, perhaps by providing insights to AI safety people in a plausibly deniable way (shower thoughts, dreams, etc.).  Thus, the possibility of these entities should improve our optimism about the potential for alignment to be solved in time and reduce doomerism.

Admittedly, I could have too high a base rate prior on the probabilities, but if we set the probability of each potential entity to 50%, the overall probability that at least one of the three possibilities (I'll group time travel and parallel-world sliding together as a similar technology) exists goes to something like 87.5%. So the probability that time travellers/sliders OR advanced aliens OR simulators are real is actually quite high. Remember, we don't need all of them to exist, just any of them, for this argument to work out in humanity's favour.
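A quick sketch of that arithmetic, assuming the three possibilities are independent and each gets probability 0.5:

$$P(\text{at least one exists}) = 1 - (1 - 0.5)^3 = 1 - 0.125 = 0.875.$$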

On point 9 - this is something we really are aware of at AAC and would love your take on it. As far as I know, with career advising both AAC and 80k will discuss a number of opportunities with individuals, some of which are more competitive and some of which are less competitive. The biggest issue is with job boards that attract a lot of traffic: here we are trying to direct people very strongly towards the highest-impact opportunities, but the trade-off is that these opportunities are few, highly competitive, and low-absorbency. We are considering expanding the job board to include more opportunities that can absorb more people and still have some impact, but there is a strong concern that we may therefore direct talented people away from higher-impact opportunities due to them being on our website. I think it's a valid point, but I do think that 1:1 tailored career advising or mentorship should minimise this risk, with a strong focus on the needs of the individual and their chances of realistically getting the jobs discussed.

"I also think that engagement on Twitter is still pretty underdeveloped and neglected (especially relative to the more nascent Progress Studies movement) as it seems like a lot of intellectuals frequent there and can be pretty moved by the content they see there regularly."

Curious about this! You're saying progress studies folks are more widely read? I think that's true, though I think in part it's because they slot more neatly into already-going-on political things, and I'm not sure we want to do that.

Re: 19, part of why I don't think about this much is that I assume any alien intelligence is going to be much more technologically advanced than us, and so there probably isn't much we can do if we don't like their motives.

I think that makes sense, but surely it should factor into our processes somewhat, potentially affecting the balance of longtermism vs. non-longtermism, the balance between x-risk-focused longtermism vs. other kinds of longtermism, how much weight to put on patient philanthropy, and the balance between various x-risks.

Yeah that makes sense

Why do you think that alien intelligence (that we encounter) will be much more technologically advanced than us? 

Conditioning on the alien intelligence being responsible for recent UFO/UAP discussions/evidence, they are more advanced than us. If they are more advanced than us, they are most likely much more advanced than us (e.g. the difference between now and 1 AD on Earth is cosmologically very small, but technologically pretty big).

Wait, I'm not sure I understand what you are saying. There is credible recent evidence of UFOs?

Otherwise it seems like you are conditioning away the question. 

To me it seems there is, yes. For instance, see this Harvard professor and this Stanford professor talk about aliens.

Awesome, I'll definitely check out the links.

Also could be selection effects. We may not be the first other civilization they encounter, so for them to make it to us, they may have had to successfully navigate or defeat other alien civilizations, which we have not had to do yet.

So I agree that this is a good point and selection will definitely apply, but I feel like I still don't quite agree with the phrasing (though it is sort of nitpicky).

>For them to make it to us

The original reason I asked OP the question was that I don't understand why there is a higher chance they make it to us vs. we make it to them. We should start by taking a prior of something like 50/50 on us discovering/reaching a civ vs. them discovering us. Then, if we are early, we are much more likely to encounter than be encountered.

Any thoughts on how many ICs we expect a civ that makes it to us to have encountered before us?

 

I think OP is correct in their point but missing half the argument.

>(e.g. the difference between now and 1 AD on earth is cosmologically very small, but technologically pretty big)

This is basically correct, but it goes both ways. If we hit aliens, or they hit us, and we have not both maxed out all of our stats and are in the late game, then almost certainly one civ will be way more advanced than the other, and so preparatory war planning just isn't going to cut it. However, if we think we are super likely to get wiped out by aliens, we can try to increase economic growth rates, and that would make a difference.

We have not had any conflicts with any interstellar civilizations (ICs?) yet, so the first we have to deal with can't have had fewer conflicts with other interstellar civilizations than us, only a) the same as us (0), which counts in favour of neither of us, or b) more than us (>0), which counts towards their advantage. So our prior should be that they have an advantage in expectation.

I like this post, there are many points I agree with:

Some aspects of work in global health and development may come at the cost of increased factory farming, harming animal welfare goals. 

I think longtermist EAs ignore animals too much.

This is very important: there are conflicting goals and objectives between EA causes, and it's important to recognize that. Basically, given how things currently work, promoting economic growth means a continuation of factory farming, at least for the coming decades.

19.) I think longtermists / x-risk scenario thinking ignores too much the possibility of extraterrestrial intelligence

I just wrote a post about that, which came to the conclusion that space colonization is really unlikely because of limits on energy (this would be an answer to the Fermi paradox). This would apply to extraterrestrial civilizations as well (probably good news).

The post also describes another limit (that could be added to the list): EAs tend to assume that current trends of economic and material growth will continue, despite the fact that materials and fossil fuels are finite and that replacing them with renewable sources is extremely difficult.

Of course, there are arguments that we can grow GDP without growing materials and energy, but for the last 50 years there has been a strong correlation between GDP and energy use. To paraphrase you, I feel common counters are a bit like "Yeah, but we can just grow with less energy and less material". But then nothing actually changes. I'd like a strong rebuttal to this.
