Quantify the overall suffering from different conditions, and determine whether there's misallocation of resources in biomedical research.
I suspect there's a big gap between how resources are distributed across the study of different diseases and what people actually suffer from the most. Among the factors that lead to suboptimal allocation, I'd guess that life-threatening diseases are overstudied, whereas conditions that seriously harm people's well-being but are not deadly are understudied. For example, I'd guess that chronic pain is understudied relative to how much suffering it inflicts on society.

It would be valuable to quantify the overall human suffering from different conditions and spot misallocations in biomedical research (and in other societal efforts to treat these conditions). For example, a random cohort of individuals could be surveyed about which conditions they would most want to get rid of, and how many life years they would be willing to sacrifice for it. One could either ask for the maximum number of years they would give up for an operation guaranteed to reduce their life expectancy by that amount while curing the condition they suffer from, or ask for the maximum probability of dying in that operation that they would be willing to accept.

Given the survey's results, it should be possible to quantify the overall suffering from different conditions, and then detect mismatches between these estimates and estimates of the resources (money and talent) allocated to addressing them. It would also be interesting to address other potential reasons for mismatches between the social importance of conditions (in terms of overall well-being/suffering) and allocated resources, primarily tractability. For example, maybe condition A causes more suffering than condition B, but it's easier to make progress on B, so B should be prioritized more.
This could be figured out by interviewing experts and asking them to estimate how many resources it would take to make a given amount of progress (such as cutting the prevalence of a disease in half). I imagine that similar studies could be carried out in other settings where we'd want to know whether society's allocation of resources really reflects what people care about most.
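The survey answers described above can be turned into numbers in a standard way: a respondent willing to give up y of their remaining L expected life years assigns the condition a disutility of y/L (the "time trade-off" measure; the maximum-acceptable-death-probability question yields an analogous "standard gamble" score). A minimal sketch, assuming entirely hypothetical condition names, survey responses, and funding shares:

```python
# A minimal sketch (with made-up condition names, survey responses, and
# funding shares) of turning time-trade-off answers into per-condition
# disutility scores and comparing them to research-funding allocation.

def time_tradeoff_disutility(years_sacrificed, remaining_life_expectancy):
    # Fraction of remaining life a respondent would give up to cure the condition.
    return years_sacrificed / remaining_life_expectancy

def mean(values):
    return sum(values) / len(values)

# Each tuple: (life years the respondent would sacrifice,
#              that respondent's remaining life expectancy).
responses = {
    "chronic pain": [(10, 40), (5, 50), (12, 30)],
    "condition B": [(2, 40), (1, 50), (3, 30)],
}

disutility = {
    condition: mean([time_tradeoff_disutility(y, le) for y, le in answers])
    for condition, answers in responses.items()
}

# Hypothetical share of biomedical-research funding each condition receives.
funding_share = {"chronic pain": 0.05, "condition B": 0.30}

total_burden = sum(disutility.values())
for condition, score in disutility.items():
    burden_share = score / total_burden
    # A ratio well below 1 suggests the condition may be underfunded
    # relative to the suffering it causes (ignoring tractability).
    ratio = funding_share[condition] / burden_share
    print(f"{condition}: burden share {burden_share:.2f}, "
          f"funding/burden ratio {ratio:.2f}")
```

A real study would of course need a representative sample, prevalence weighting, and some handling of the tractability issue raised above; this only illustrates the basic aggregation step.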
Thank you for writing down these good counterarguments.
About your first and second points, that it’s wasteful to have someone’s career dedicated to a less promising cause area, I generally agree, but with a few caveats (which, for the most part, just reiterate and rephrase points already made in my post).
About your last comment, I really appreciate 80k’s directness about the scope of their activity (and their being nonterritorial and encouraging of other orgs targeting populations that 80k doesn’t see as its main audience). As an entire community (one that transcends the scopes of specific orgs), I think we absolutely should be in the business of giving career advice to a wider public.
I like your suggestions for questions one could ask a stranger at an EA event!
About "social EA" vs. "business EA", I think I'd make a slightly different distinction. If you ask for someone else's (or some org's) time or money, then of course you need to come up with good explanations for why the thing you are offering (whether it is your employment or some project) is worthwhile. It's not even a unique feature of EA. But, if you are just doing your own thing and not asking for anyone's time or money, and just want to enjoy the company of other EAs, then this is the case where I think the EA community should be more welcoming and be happy to just let you be.
I totally agree. In order for an impact-oriented individual to contribute significantly in an area, there has to be some degree of openness to good ideas in that area, and if it is likely that no one will listen to evidence and reason then I'd tend to advise EAs to stay away from there. I think there are such areas where EAs could contribute and be heard. And I think the more mainstream the EA mindset will be, the more such places will exist. That's one of the reasons why we really should want EA to become more mainstream, and why we shouldn't hide ourselves from the rest of the world by operating in such a narrow set of domains.
Thank you for bringing this post to my attention, I really like it! We appear to make similar arguments, but frame them quite differently, so I think our two posts are very complementary.
I really like your framing of domain-specific vs. cause-neutral EA. I think you also do a better job than me in presenting the case for why helping people become more effective in what they already do might be more impactful than trying to convince them to change cause area.
Thank you Aaron for taking the time to write this detailed and thoughtful comment to my post!
I'll start by saying that I pretty much agree with everything you say, especially your final remarks - that we should be really receptive to what people actually want and advise them accordingly, and maybe try to gently nudge them toward a more open-minded, general-impact-oriented approach (but not force it on them if they don't want it).
I also totally agree that most EA orgs are doing a fantastic job at exploring diverse causes and ways to improve the world, and that the EA movement is very open-minded to accepting new causes in the presence of good evidence.
To be clear, I don't criticize specific EA orgs. The thing I do criticize is pretty subtle, and refers more to the EA community itself - sometimes to individuals in the community, but mostly to our collective attitude and the atmospheres we create as groups.
When I say "I think we need to be more open to diverse causes", it seems that your main answer is "present me with good evidence that a new cause is promising and I'll support it", which is totally fair. I think this is the right attitude for an EA to have, but it doesn't exactly address what I allude to. I don't ask EAs to start contributing to new unproven causes themselves, but rather that they be open to others contributing to them.
I agree with you that most EAs would not confront a cancer researcher and accuse her of doing something un-EA-like (and I presume many would even be kind and approach her with curiosity about the motives for her choice). But in the end, I think it is still very likely she would nonetheless feel somewhat judged. Even if every person she meets at EA Global nudges her only very gently ("Oh, that's interesting! So why did you decide to work on cancer? Have you considered pandemic preparedness? Do you think cancer is more impactful?"), those repeated comments can accumulate into a strong feeling of unease. To be clear, I'm not blaming any of the imaginary people who met the imaginary cancer researcher at the imaginary EAG conference for having done anything wrong, because each one of them tried to be kind and welcoming. It's only their collective action that made her feel off.
I think the EA community should be more welcoming to people who want to operate in areas we don't consider particularly promising, even if they don't present convincing arguments for their decisions.
I totally agree with you that many charities and causes can be a trap for young EAs and put their long-term careers in danger. In some cases I think this is also true of classic EA cause areas, if people end up doing work that doesn't fit their skill set or doesn't develop their career capital. I think this is pretty well acknowledged and discussed in EA circles, so I'm not too worried about it (with the exception, maybe, that one of the possible traps is to leave someone with career capital that only fits EA-like work, thereby blocking them from working outside of EA).
As to your question, if new cause areas were substantively explored by EAs, that would mitigate some of my concerns, but not all of them. In particular, besides having community members theoretically exploring diverse causes and writing posts on the forum summarizing their thinking process (which is beneficial), I'd also like to see some EAs actively trying to work in more diverse areas (what I called the bottom-up approach), and I'd like the greater EA community to be supportive of that.
Thank you for sharing your thoughts!
About your second point, I totally agree with the spirit of what you say, specifically that:
1. Contrary to what might be implied from my post, EAs are clearly not the only ones who think that impact, measurement and evidence are important, and these concepts are also gaining popularity outside of EA.
2. Even in an area where most current actors lack the motivation or skills to act in an impact-oriented way, more conditions have to be met before I would deem it high-impact to work there. In particular, there need to be some indications that the other people acting in this area could be persuaded to change their priorities once presented with evidence.
My experience working with non-EA charities is similar to yours: while they also talk about evidence and impact, in most cases they don't really think rigorously about these topics. I've found that it's usually not very helpful to have this conversation with them because, in the end, they are not really open to changing their behavior based on evidence (I think when charities say they want to do impact evaluation it's often more lip service, because impact evaluation is becoming cool and popular these days). But in some cases (probably a minority of non-EA charities), there is genuine interest in learning how to be more impactful through impact evaluation. In those cases I think that having EAs around might be helpful.
I agree that when you first present EA to someone, there is a clear limit on how much nuance you can squeeze in. For the sake of being concrete and down-to-earth, I don't see harm in giving examples from classic EA cause areas (distributing bed nets to prevent malaria as a very cost-effective intervention is a great way to get people to start appreciating EA's attitude).
The problem I see is more in later stages of engagement with EA, when people already have a sense of what EA is but still get the impression (often unconsciously) that "if you really want to be part of EA then you need to work on one of the very specific EA cause areas".
Thank you for your great feedback and suggestions! (and sorry for not responding sooner)
I guess that what counts as a “major” or “moderate” limitation is, in the end, contingent on one's aspirations. If we had the standards of an organization like GiveWell, this would most certainly be a very big limitation. But quite early on we understood that we did not have the data to support conclusions about cost-effectiveness as strong as GiveWell’s recommendations. Rather, our approach was: let’s do the best we can with the data at hand, and make sure we are very clear and transparent about the limitations of our analysis.

The biggest limitation of this analysis is the lack of experimental data (with only observational data available), so we wanted to make sure it got the most eye-catching label. In the end, we believe what’s important is that readers of the report (or just of the executive summary) get a good sense of which conclusions are justified by our analysis and which aren’t, and that they understand its important limitations. We totally agree with your arguments and with the fact that past cost-effectiveness is by no means proof of future cost-effectiveness given more funding (though we do think there are reasons for cautious optimism in the case of Animals Now).
Also, thank you for the interesting suggestion of an RCT study design. This is something we have been considering in general, but we hadn’t thought of your exact idea. However, to attempt anything like that, we would first need the charity to be strongly motivated to take on such a project.