You can be effectively altruistic at doing lots of things
[I drafted this several months ago so the ‘topical’ references are out of date, but the broad point still stands. I guess this is a strong opinion, weakly held; tell me in the comments why I’m wrong, or tell me about people who are already doing what I suggest].
A few days after Roe v Wade was overturned, a Forum user submitted a question asking ‘What actions [are] most effective if you care about reproductive rights in America?’ Before tumbling unmourned into the oblivion of low-karma Forum posts, this question received some supportive responses, as well as some pushback: in particular, Larks said ‘I don't think we should encourage people to post about their personal cause with indifference to whether it could be highly effective’, and expressed the opinion that abortion access ‘seem[s] unlikely to be a top priority by cause-neutral EA lights’.
I agree that increasing access to abortion in rich countries is unlikely to compete with animal suffering, neglected diseases, or existential risk on any reasonable ITN framework. But I disagree that people shouldn’t post about more ‘personal’ causes on the Forum. In this post, I argue that it’s both valid and useful for (some) EAs to research cost-effective interventions for problems outside of the canonical EA cause areas, even if they acknowledge that those problems are not nearly as important, neglected or tractable as the traditional EA causes. Because having a noun is grammatically useful and EA clearly doesn’t have enough jargony acronyms, I’m going to call this sort of content ‘less-important cause-area analysis’, or LICA.
I’m not arguing that EAs who are currently working on central cause areas should drop everything to produce LICA. What I am arguing is:
- it’s valid to post on the Forum ‘what are the most effective interventions for [my pet cause area which is not a top cause area]?’, as the user above did
- if an EA has a lot of passion about or experience with a particular cause, I think it would be a great public service for them to use EA tools and attitudes to analyze interventions for that cause, or assess charities who are working on that problem.
If you’re into feminist activism as well as EA, do an EA-flavoured analysis of organisations that tackle domestic violence or systemic bias. If you worked for a homelessness charity before getting into EA, write your thoughts about the most effective ways to combat homelessness in rich cities. If you’re bothered by a brutal armed conflict, think about what the most effective ways to help might be.
I don’t think this opinion is unheard-of. I’m aware that some EA individuals and organizations are doing exactly what I’m recommending, for example:
- This user asked ‘Ukraine: how a regular person can effectively help their country in war’
- When Covid cases spiked in India last year, some people researched the most effective Covid charities working in India
- Founders Pledge produces reports on all sorts of areas
However, since I’ve seen some pushback to this idea, and in practice LICAs seem to be rare and under-promoted relative to work in the central, canonical cause areas, I think it’s worth laying out the case for research into less-central problems.
What sort of problem are you talking about?
Anything that any EA thinks is a serious problem in the world (even if it’s not among the most pressing). For example:
- racism, sexism, homophobia, transphobia, other bigotry
- (relative) poverty in rich countries
- ‘systemic’ problems in politics or the economy
I was inspired to write this by the Roe v Wade overturning (yes, this post has indeed languished in my drafts for a long time :p), and I often think about this when there is some big crisis in the news that all of my non-EA Facebook friends are posting about. I think ‘it would be cool if EAs worked out the best way to help with this, so I could recommend something actually-useful to my friends who care about this issue!’ But I wouldn’t limit this to problems that make the news: it could be anything that any EA thinks is a significant and somewhat-tractable problem.
Why should EAs research less-important problems?
Straightforwardly, I think we leave impact on the table when we don’t produce LICAs.
Imagine that every time there was a big crisis in the news, some EAs produced well-researched, sensible lists of the most plausibly-effective ways for people to help with that crisis. The lists would be produced voluntarily by EAs who were passionate about or informed about the cause, and shared widely by other EAs.
- when abortion is outlawed, produce lists of ways to fight against the decision
- if there is a high-profile war or genocide, produce lists of the best ways to help the victimized people
- if there is a racist attack, produce lists of the best ways to concretely support people in the attacked group
This would be a great way to harness an immense amount of altruistic energy which currently gets (largely) frittered away by people donating to ineffective organisations or taking ineffective actions.
For a fictional example: imagine Jo, a middle-aged liberal woman from the US. She was a radical feminist in the 70s and has attended abortion protests since her teens; she had an abortion herself in her twenties. She remembers dancing jubilantly when Roe v Wade was first decided, and she’s horrified that we’re going backwards. She’s desperate to do what she can to help with this terrible crisis for women. She’s heard of Effective Altruism a bit, but she’s not very interested in it. EAs seem to have their hearts in the right place, but it all seems a bit abstract and blokey and philosophical to her.
You’re unlikely to persuade Jo to stop putting her time, energy and money into reproductive rights and to support one of the central EA causes instead. But you’re more likely to be able to persuade her to support more-effective interventions or charities within US reproductive rights. Jo cares deeply about this: she really does want to help more people have access to safe abortions, and she’s seen first-hand how some organizations look impressive but don’t actually help pregnant people in need.
I think that if EAs assessed charities working on reproductive rights and promoted the top ones, that could direct many people’s time and donations towards more-effective interventions and away from less-effective ones. And, as in other cause areas, it’s possible that impact in reproductive rights is heavy-tailed, with some interventions orders of magnitude more effective than others.
I’ve used reproductive rights in the US as a topical (ha) and vivid example, but I think that this is true for any number of social problems that are serious, but that are not traditionally prioritized by EAs because they don’t score highly on the ITN framework. In short, producing LICAs could increase the amount of impactful work that is done to address serious problems - which are still worth solving even if they’re not the most pressing.
It could help EA reach more people
As well as directing donations and time towards effective organizations, producing LICAs might be an effective way to introduce the EA mindset and worldview to people who might be receptive to it. Most people do care about impact and effectiveness: if you ask pretty much anyone ‘do you want to do good effectively, or ineffectively?’ they’ll be like, ‘uh…effectively, obviously?’
On the other hand, most people aren’t super-receptive to ‘hey, you know this serious social issue that you care about deeply and that you’ve worked on for years? It’s probably not the optimal thing for you to be working on’. If we produce EA recommendations in non-EA cause areas, this is a way to introduce EA ways of thinking to people in a way that validates their existing intuitions and history.
Some people who like the recommendations might become excited about EA generally, join the movement, and even switch to working on more pressing central EA causes over time. Many EAs have changed their altruistic priorities through their interaction with the community. However, even if they don’t do this, they’ll still be doing more good than they otherwise would have done, which is a win. Even if some of the people reading these recommendations are never going to become ‘full EAs’, it still might encourage them to be more reflective about their altruism and give them some tools for how to think about this.
It could improve EA’s reputation
This is related to the point above. If EAs are seen to be coming up with helpful solutions in crises, that might build EA’s reputation as a movement of people who earnestly and impartially want to do good across a variety of domains. There have been some high-profile critiques of EA recently. Many EAs have suggested that rather than responding to critics directly, EAs should just produce more positive content. This is all very well, but for that to work, people need to actually see the positive content. I think it would enhance EA’s reputation if EAs were seen to care about the types of absolutely serious, if relatively small, problems that average people care about.
I think there is a risk here of being manipulative. Ozy Brennan has rightly criticized EAs for strategically introducing EA with less-weird causes so as not to scare people off (a strategy known as ‘milk before meat’). I think it would be bad if EAs produced LICAs for the sole purpose of laundering EA’s reputation or seeming less weird. Instead, my suggestion is that people who already care about certain problems, and/or people who are in a good position to assess interventions in a certain area, produce LICAs if they want…and that they primarily produce them in order to generate more impact. However, a positive consequence of this is that EA would (accurately!) acquire a reputation of being less insular, less dogmatic, and more in touch with what the average person cares about.
You don’t need to establish that something beats existing causes before you start working on it
If it became the norm to produce and support LICA, this might actually incentivize work on ‘Cause Xs’ that could rival central cause areas. Here’s an imaginary example:
A is a therapist, and is (emotionally) very invested in promoting certain therapeutic techniques and communication strategies. Their background makes them well-placed to work on this sort of thing. Also, they kinda-sorta suspect that techniques promoting better mental health and happiness could rival the top EA cause areas, at least for people who share certain crucial values or beliefs.
But A worries that this suspicion is just motivated reasoning because this cause is important to them. Everyone around them seems really into existential risk and like…they guess that this really is the most important cause? It just doesn’t really click with them.
I can imagine that if A is especially conscientious and has some free time, they might start to build a case that increasing access to certain therapies really is comparable to the top causes, because if they can make that case, then they are ‘allowed’ to work on this. In the worst case, they can’t really make the argument, and are stuck in a limbo: emotionally motivated to work on a cause that is important but not the most important, while feeling epistemically bad about this and unsupported by the community. In the best case, they do write a convincing case for the new cause that maybe generates some discussion, but they’ve spent a lot of time doing that when they could have just started looking into interventions for the problem directly.
To be clear: it’s good to compare cause areas and to make the case that new cause areas are comparable to existing ones in terms of potential impact. It would greatly diminish EA if EAs became relativistic and anything-goes. But at the same time, I think it’s counter-productive to say ‘you can’t start working on this thing that you’d be good at working on until you’ve established that it’s comparable to AI risk/malaria nets/factory farming’.
Objections and responses
This might be impactful, but it’s not as impactful as other things EAs could be doing
Objection: This might be useful and impactful, but it’s not the most useful or the most impactful thing an EA could do. Cost-effectiveness analysis takes a lot of time! If you’re able to produce high-quality cost-effectiveness analysis of reproductive rights interventions, maybe you could do it for more pressing cause areas instead.
Response: though there are more EA jobs and funding than before, there are still many EAs who want to contribute to the community but can’t find, or haven’t yet found, an impactful career that’s a good fit. As I’ve said, I don’t think people in highly-impactful roles should spend time doing this. But I think lots of EAs could produce LICAs without this trading off against more impactful activity, for example:
- students with some free time
- people who are unemployed or who work part-time
- people who are just willing to volunteer some free time
A big uncertainty I have here is that I don’t know how many hours it would take to produce analyses that are actually useful. I’ve never done cost-effectiveness analysis myself, and I’m uncomfortably aware that this might be coming across as ‘EA should do [extremely challenging and complicated thing that I have no intention of doing].’ From a quick google, Charity Entrepreneurship’s process for assessing interventions seems to take many hundreds of hours.
Then again, maybe the bar for CE is higher, because they’re trying to found charities in areas that are extremely pressing. Since less-important causes are less important, it might be justifiable to do a much quicker and less rigorous job - this might still be substantially better than nothing.
For example, in May 2021 when Covid cases rose rapidly in India, several EAs produced analyses of Covid charities in India. Since this was done in response to the crisis, I assume they can’t have spent many hundreds of hours on this; but these lists were very useful to me, and I donated to some of those organizations and posted the recommendations on my Facebook, where, I hoped, they might influence the donations of non-EA Facebook friends who were distressed by the crisis.
EAs will sometimes disagree on whether something is an important problem at all
On the post about abortion access, Larks pointed out that some EAs might not even be convinced that improving abortion access is good, let alone a pressing issue. Similarly, in polarizing international conflicts, EAs might disagree about which side is the aggressor and which is the victim. Generally, many newsworthy, emotive societal problems are highly politicized, and EAs tend to be wary of politics.
My response is: so what? EAs disagree with each other about which causes are the most important all the time! That’s kind of our whole deal. If you’re a pro-life EA reading a list of interventions to improve abortion access produced by a pro-choice EA, this doesn’t seem fundamentally different to me than if you’re an EA committed to improving farmed animal welfare reading a post about AI risk.
Admittedly, this isn’t exactly analogous: pro-lifers and pro-choicers tend to feel extremely adversarial towards each other and struggle to engage with each other charitably, in a way that’s not true for, e.g., EAs who prioritize existential risk vs EAs who prioritize mental health. But I think the difference is one of degree rather than kind. If you don’t believe in the act/omission distinction - which is true for many EAs - you arguably should see EAs who disagree with you as doing something very harmful. I’m not arguing that EAs who disagree should be more vitriolic towards each other; rather, I trust EAs to be able to engage with their characteristic charity and goodwill even on emotive and politicized issues.
We shouldn’t be driven by our emotions
I speculate that many EAs choose not to respond to emotive newsworthy crises or popular hot-button issues, because they think that this sort of knee-jerk responsiveness to the news cycle and popular sentiment is a fundamentally flawed way to approach altruism and caring. The central EA cause areas are not acute newsworthy crises [unless you have extremely pessimistic AI timelines], but ongoing atrocities - threats of extinction that are ever-present and intensifying, the injustice of poverty and preventable disease, the horror of animal suffering. It is greatly to EA’s credit that we are willing to care about these crucial, terrible, unsexy problems.
But we should be realistic about the fact that people, in general, are emotionally driven. I certainly am. It would be better, perhaps, if everyone decided what they care about in the (hypothetical) way that the (ideal) EA does: by thinking long and hard about different cause areas, forming an opinion on major philosophical issues, and painstakingly working out what problems they think are most pressing. But that’s not the world we live in. In other areas, EAs are pretty pragmatic about working with existing conditions even if those conditions are suboptimal. This is another area where, in my opinion, EAs should be more willing to meet the world where it’s at.
I’m running out of time so I’ll forego a snappy conclusion, but I’m interested in hearing all of your thoughts! Also, a bit of shameless self-promotion - I’m currently working as a writer and copy-editor for EAs, so if you like how this post was written, feel free to message me on the Forum, or fill out this (somewhat out-of-date) Expression of Interest form.