I totally agree. In order for an impact-oriented individual to contribute significantly in an area, there has to be some degree of openness to good ideas in that area, and if it is likely that no one will listen to evidence and reason, then I'd tend to advise EAs to stay away. I think there are such areas where EAs could contribute and be heard. And I think the more mainstream the EA mindset becomes, the more such places will exist. That's one of the reasons why we really should want EA to become more mainstream, and why we shouldn't hide ourselves from the rest of the world by operating in such a narrow set of domains.
Thank you for bringing this post to my attention, I really like it! We appear to make similar arguments, but frame them quite differently, so I think our two posts are very complementary.
I really like your framing of domain-specific vs. cause-neutral EA. I think you also do a better job than me in presenting the case for why helping people become more effective in what they already do might be more impactful than trying to convince them to change cause area.
Thank you Aaron for taking the time to write this detailed and thoughtful comment to my post!
I'll start by saying that I pretty much agree with everything you say, especially your final remarks: that we should be really receptive to what people actually want and advise them accordingly, and maybe try to gently nudge them toward a more open-minded, general-impact-oriented approach (but not try to force it on them if they don't want it).
I also totally agree that most EA orgs are doing a fantastic job at exploring diverse causes and ways to improve the world, and that the EA movement is very open-minded to accepting new causes in the presence of good evidence.
To be clear, I don't criticize specific EA orgs. The thing I do criticize is pretty subtle, and refers more to the EA community itself - sometimes to individuals in the community, but mostly to our collective attitude and the atmospheres we create as groups.
When I say "I think we need to be more open to diverse causes", it seems that your main answer is "present me with good evidence that a new cause is promising and I'll support it", which is totally fair. I think this is the right attitude for an EA to have, but it doesn't exactly address what I'm alluding to. I'm not asking EAs to start contributing to new, unproven causes themselves, but rather asking that they be open to others contributing to them.
I agree with you that most EAs would not confront a cancer researcher and accuse her of doing something un-EA-like (and I presume many would even be kind and approach her with curiosity about the motives for her choice). But in the end, I think it is still very likely she would nonetheless feel somewhat judged. Even if every person she meets at EA Global nudges her only very gently ("Oh, that's interesting! So why did you decide to work on cancer? Have you considered pandemic preparedness? Do you think cancer is more impactful?"), those repeated comments can accumulate into a strong feeling of unease. To be clear, I'm not blaming any of the imaginary people who met the imaginary cancer researcher at the imaginary EAG conference for having done anything wrong, because each one of them tried to be kind and welcoming. It's only their collective effect that made her feel off.
I think the EA community should be more welcoming to people who want to operate in areas we don't consider particularly promising, even if they don't present convincing arguments for their decisions.
I totally agree with you that many charities and causes can be a trap for young EAs and jeopardize their long-term careers. In some cases I think this is also true of classic EA cause areas, if people end up doing work that doesn't really fit their skill set or doesn't develop their career capital. I think this is pretty well acknowledged and discussed in EA circles, so I'm not too worried about it (with the exception, maybe, that one of the possible traps is to lock someone into career capital that only fits EA-like work, thereby blocking them from working outside of EA).
As to your question, if new cause areas were substantively explored by EAs, that would mitigate some of my concerns, but not all of them. In particular, besides having community members theoretically exploring diverse causes and writing posts on the forum summarizing their thinking process (which is beneficial), I'd also like to see some EAs actively trying to work in more diverse areas (what I called the bottom-up approach), and I'd like the greater EA community to be supportive of that.
Thank you for sharing your thoughts!
About your second point, I totally agree with the spirit of what you say, specifically that:
1. Contrary to what might be implied by my post, EAs are clearly not the only ones who think that impact, measurement and evidence are important, and these concepts are also gaining popularity outside of EA.
2. Even in an area where most current actors lack the motivation or skills to act in an impact-oriented way, more conditions have to be met before I would deem it high-impact to work in that area. In particular, there need to be some indications that the other people acting in the area would be open to changing their priorities once presented with evidence.
My experience working with non-EA charities is similar to yours: while they also talk about evidence and impact, it seems that in most cases they don't really think rigorously about these topics. I've found that in most cases it's not very helpful to have this conversation with them, because, in the end, they are not really open to changing their behavior based on evidence (I think it's mostly lip service when charities say they want to do impact evaluation, because it's becoming cool and popular these days). But in some cases (probably a minority of non-EA charities), there is genuine interest in learning how to be more impactful through impact evaluation. In these cases I think that having EAs around might be helpful.
I agree that when you first present EA to someone, there is a clear limit on how much nuance you can squeeze in. For the sake of being concrete and down to earth, I don't see harm in giving examples from classic EA cause areas (distributing bed nets to prevent malaria, for instance, is a very cost-effective intervention, and a great example for getting people to start appreciating EA's attitude).
The problem I see is more in later stages of engagement with EA, when people already have a sense of what EA is but still get the impression (often unconsciously) that "if you really want to be part of EA then you need to work on one of the very specific EA cause areas".
Thank you for your great feedback and suggestions! (and sorry for not responding sooner)
I guess that what counts as a “major” or “moderate” limitation is, in the end, contingent on one’s aspirations. If we had the standards of an organization like GiveWell, this would most certainly be a very big limitation. But quite early on we understood that we did not have the data to support conclusions about cost-effectiveness as strong as GiveWell’s recommendations. Rather, our approach was: let’s do the best we can with the data we have at hand, and simply make sure that we are very clear and transparent about the limitations of our analysis.

The biggest limitation of this analysis is the lack of experimental data (with only observational data available), so we wanted to make sure it got the most eye-catching label. In the end, we believe that what’s important is that readers of the report (or just of the executive summary) get a good sense of which conclusions are justified given our analysis and which aren’t, and that they understand what the important limitations of the analysis are.

We totally agree with your argument that past cost-effectiveness is by no means proof of future cost-effectiveness given more funding (though we do think there are reasons for cautious optimism in the case of Animals Now).
Also, thank you for the interesting suggestion for an RCT study design. This is something we have been considering in general, but we hadn’t thought of your exact idea. However, to approach anything like that, we would first need the charity to be strongly motivated to take on that adventure.
I agree it would be nicer to report actual animals spared, rather than generic “portions of meat”. We thought of using data about the average meat diet in the relevant countries to translate portions of meat into animal lives. But we eventually decided against it, because it would introduce even more assumptions and uncertainties into an analysis that already had many. Given the amount of uncertainty we already have (with over an order of magnitude between our lower and upper bounds), we felt that too detailed a breakdown might be inappropriate. In the end we decided to keep it simple and use the metric we had data on, hoping that “1 to 12 portions of meat per 1 ILS” would give readers a rough sense of the program’s potential to spare animal lives.
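For illustration, the translation we considered (and decided against) might look something like the sketch below. Every number in it — the species shares of the average diet and the portions yielded per animal — is a made-up placeholder for the example, not a figure from our report; only the “1 to 12 portions per ILS” range comes from our analysis.

```python
# Hypothetical sketch: translating "portions of meat spared per ILS"
# into expected animal lives spared. All numbers below are
# illustrative placeholders, NOT figures from our report.

# Assumed share of each species in the average local meat diet
# (by portions), and how many portions one animal yields.
diet_share = {"chicken": 0.70, "fish": 0.20, "beef": 0.10}
portions_per_animal = {"chicken": 8, "fish": 2, "beef": 400}

def lives_per_portion(diet_share, portions_per_animal):
    """Expected animal lives represented by one average portion."""
    return sum(share / portions_per_animal[species]
               for species, share in diet_share.items())

# Our report's range: 1 to 12 portions spared per 1 ILS.
low, high = 1, 12
lpp = lives_per_portion(diet_share, portions_per_animal)
print(f"~{low * lpp:.2f} to {high * lpp:.2f} animal lives per ILS")
```

The sketch also shows why we held back: the result is dominated by the assumed species shares and portions-per-animal figures, so each extra input multiplies the uncertainty we already had in the portions estimate.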