Choosing a preferred cause area is arguably one of the most important decisions an EA will make. Not only are there plausibly astronomical differences in value between non-EA and EA cause areas, but this is also the case between different EA cause areas. It therefore seems important to make it easy for EAs to reach a fully-informed decision on their preferred cause area.
In this post I claim that, to make the best choice of preferred cause area, EAs should have at least a high-level understanding of various ‘Important Between-Cause Considerations’ (IBCs). An IBC is an idea that a significant proportion of the EA community takes seriously, and that is important to understand, at least at a high level, in order to help prioritise between the potentially highest-value cause areas, which I classify as: extinction risk, non-extinction-risk longtermist, near-term animal-focused, near-term human-focused, global priorities research, and movement building. I provide illustrations of the concept of an IBC, as well as a list of potential IBCs.
Furthermore, I think that the EA community needs to do more to ensure that EAs can easily become acquainted with IBCs, by producing a greater quantity of educational content that could appeal to a wider range of people. This could include short(ish) videos, online courses, or simplified write-ups. An EA movement where most EAs have at least a high-level understanding of all known IBCs should be a movement where people are more aligned to the highest value cause areas (whatever these might be), and ultimately a movement that does more good.
Note: I am fairly confident in the claim that it would be good for the EA community to do more to enable EAs to better understand important ideas, and that a greater variety of educational content would help with this. My stronger claims are more speculative, but I hold them to be true until convinced otherwise.
Acknowledgement: Many thanks to Michael Aird for some helpful comments on a first draft of this post.
Illustrations of the idea
Here are two fictional stories:
Arjun is a university student studying Economics and wants to improve health in the low-income world. He has been convinced by Peter Singer’s shallow pond thought experiment and is struck by how one can drastically improve the lives of those in different parts of the world at little personal cost. On the other hand, he has never been convinced of the importance of longtermist cause areas. In short, Arjun holds a person-affecting view of population ethics which makes him relatively unconcerned about the prospect of human extinction. One day, Arjun comes across a blog post on the EA Forum which summarises the core arguments of a paper called “The Case for Strong Longtermism” by Greaves and MacAskill. He’s heard of the paper but, not being an academic, has never quite felt up to reading it. A blog post, however, seems far more accessible to him. On reading the post, Arjun is struck by the claim that longtermism is broader than just reducing extinction risk. He is surprised to learn that there may be tractable ways to improve average future well-being, conditional on humanity not going prematurely extinct, for example by improving institutions. Whilst Arjun doesn’t feel the need to ensure people exist in the future, he thinks it an admirable goal to improve the wellbeing of those who will live anyway. Over the next month, Arjun reads everything on longtermism he can get his hands on and, whilst this doesn’t convince him of the validity of longtermism, it convinces him that it is at least plausible. Ultimately, because the stakes seem so high, Arjun decides to switch from working on global health to researching potentially tractable longtermist interventions that may be desirable even under a person-affecting view, with a focus on institutional and political economics.
Maryam wants to spend her career finding the best ways to improve human mental health. She has suffered from depression before and knows how bad it can be. There also seem to be some tractable ways to make progress on the problem. One day, Maryam’s friend Lisa tells her that she just has to read Animal Liberation by Peter Singer, as it changed Lisa’s life. Maryam googles the book and reads about how influential it has been, so on Sunday morning Maryam buys a copy for her Kindle and gets reading. Five hours later Maryam has devoured the book and is feeling weird. She can’t find the fault in Singer’s philosophical argument. Discriminating on the basis of species seems no different to discriminating on the basis of sex or race. What ultimately matters is that animals can feel pleasure and pain. Maryam asks herself why she is so concerned when a human is killed, but not when a pig or a cow is. Over the next month Maryam proceeds to read everything she can on the topic and learns of the horrors of factory farming. Ultimately Maryam decides she wants to change her focus. Mental health in humans is really important, but ending factory farming, a more neglected cause, seems to her to be even more so.
NOTE: I don’t necessarily hold the views of Maryam or Arjun; these stories are simply illustrative.
Important Between-Cause Considerations
Maryam and Arjun have something in common: they both encountered one or more important ideas that led them to change their view of the most important cause area. I call these ideas ‘Important Between-Cause Considerations’ (IBCs). More formally, an IBC is an idea that a significant proportion of the EA community takes seriously, and that is important to understand, at least at a high level, in order to help prioritise between the potentially highest-value cause areas, which I classify as: extinction risk, non-extinction-risk longtermist, near-term animal-focused, near-term human-focused, global priorities research, and movement building.
In Maryam’s case the IBC was the concept of speciesism and later an awareness of factory farming, which changed her focus from near-term human-focused to near-term animal-focused. In Arjun’s case it was the realisation of the potential robustness of longtermism to differences in population axiology, which ultimately changed his focus from near-term human-focused to global priorities research.
My core claim is that we want to make it far easier for EAs to develop at least a high-level understanding of all known IBCs. This is because a movement in which people are more aware of these key ideas, and are therefore able to make fully-informed decisions on preferred cause areas, should be a movement in which people are more aligned to the highest impact causes, whatever these might be. I’m not saying that Arjun and Maryam definitely reoriented in an objectively better way when they came across the information they did, but I think that, on average, this should be the case. When people are exposed to more information and credible arguments, they should, on average, make better decisions.
Because certain cause areas are plausibly far better than others, a movement in which EAs understand IBCs and potentially reorient their focus on this basis may do far more good than it would have otherwise. Indeed, I chose to classify cause areas in the way I have because this classification allows for potentially astronomical differences in value between the cause areas. Differences in value within these cause areas (e.g. between different ways to improve near-term human welfare) are probably not as astronomical. As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out).
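To make the intuition above concrete, here is a toy value-of-information calculation. All the numbers (the relative values of the cause areas and the probability that an IBC changes someone’s mind) are made-up illustrative assumptions, not estimates:

```python
# Toy model: expected value of engaging with an IBC, with made-up numbers.
# Suppose an EA currently works on cause A, but cause B is in fact
# 100x more valuable (an "astronomical" between-cause gap, assumed).

value_current_cause = 1.0    # arbitrary units (assumption)
value_better_cause = 100.0   # assumed 100x gap between cause areas
p_switch = 0.1               # assumed chance the IBC changes their mind

# Expected value if they never engage with the IBC: they stay on cause A.
ev_without = value_current_cause

# Expected value if they engage: with probability p_switch they reorient
# to the better cause; otherwise they stay where they are.
ev_with = p_switch * value_better_cause + (1 - p_switch) * value_current_cause

expected_gain = ev_with - ev_without
print(expected_gain)  # 9.9: nearly 10x their current impact, in expectation
```

On these assumptions, even a modest 10% chance of reorienting is worth almost ten times the EA’s current impact in expectation, which is why a large between-cause gap can justify spending real time on IBCs. The same arithmetic shows why within-cause differences, being much smaller, justify much less of this kind of exploration.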
Here’s a question for you to ponder: how do you know you aren’t a Maryam or an Arjun? In other words, is it possible that there’s some idea that you haven’t come across or fully understood but that, if you did, would cause you to want to change your preferred cause area? Unless you’ve spent quite a bit of time investigating all the EA literature, you probably can’t be sure there isn’t such an idea out there. I’m certainly not saying this will apply to all EAs, but I think it will apply to a significant number, and I actually think it applies to myself, which is why I currently don’t have a strongly-held preferred cause area.
Potential list of IBCs
In my opinion, most EAs should have at least a high-level understanding of the following IBCs (which are listed below in no particular order). The idea is that, for each of these, I could tell a story like Maryam’s or Arjun’s, which involves someone becoming aware of the idea, and then changing their preferred cause area.
In reality, there are likely to be valid reasons for certain people not to engage with some of the IBCs. For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (e.g. without needing to understand the overall probability of extinction this century). Ideally there would be some sort of flowchart people could use to avoid engaging with ideas that have no chance of swaying their preferred cause area.
I am sure that there are many IBCs that I haven’t included that should be here, and some that are included that shouldn’t be; I would appreciate any comments. I also include some links to relevant texts (mostly off the top of my head; a later exercise could do this more thoroughly).
- The different population axiologies (total utilitarianism, person-affecting etc.) (Greaves 2017)
- The key objections that can be levelled to each axiology (repugnant conclusion, non-identity problem etc.) (Greaves 2017)
- The general thrust of the impossibility theorems (Greaves 2017)
- The implications of choice of population axiology for preferred cause area (various)
- The concept of speciesism (Singer 1975)
- The arguments for and against the sentience of non-human animals (Muehlhauser 2017)
- The scale of suffering on factory farms and potentially promising interventions to reduce that suffering or end factory farming (various)
- Wild animal suffering and the leading ideas on how to reduce it
- The concept of moral circle expansion to anything sentient, and the possibility of artificial sentience
- The distinction between simple cluelessness, complex cluelessness, and not being clueless (Greaves 2016)
- The possible implications of cluelessness for cause prioritisation
- The leading suggestions for how to act under complex cluelessness (various)
Arguments for and against longtermism
(Some of these are quite dense)
- The plausibility argument for longtermism (Greaves, MacAskill 2019)
- The objection that influencing the long-run future may be intractable (Greaves, MacAskill 2019)
- The leading longtermist intervention types that could plausibly avoid the intractability objection (Greaves, MacAskill 2019)
- The different varieties of longtermism and the implications of them for what we should do
- The expected value of reducing extinction risk
- The probability of x-risks occurring this century (Ord 2020)
- Arguments why the future may not be vast in expectation (e.g. the Doomsday argument)
- Arguments about the extent to which longtermism relies on Pascalian fanaticism, and whether that’s a bad thing (various)
- Discounting the future and arguments for and against a zero rate of pure time preference (Greaves 2017)
- The arguments for the robustness of longtermism or specific longtermist interventions against different views (e.g. different population axiologies, different decision theories) (Greaves, MacAskill 2019)
Are we living at the most influential period in history?
- The argument as presented by Will MacAskill that we probably aren’t living at the most influential time (MacAskill 2020)
- The counterarguments against MacAskill’s position
- The implications of the answer to this question for what we should do (MacAskill 2020)
Investing for the future
- The argument for investing money for the future
- Other types of investment for the future (non-financial) (various)
Are we living in a simulation?
- The simulation argument (Bostrom 2003)
- The implications of living in a simulation for what we should do (e.g. Tomasik 2018)
Global health and development
- The extent of global inequality and the concept of the diminishing marginal utility of resources, motivating giving to the global poor (various)
- The shallow pond / drowning child thought experiment (Singer 1972)
- The leading randomista intervention types and GiveWell-type charity evaluation
- Arguments for and against prioritising randomista interventions over boosting economic growth (Hillebrandt & Halstead, 2020)
- Possible unintended side effects of working on global health and development (meat-eater problem, climate change) (various)
- The Tyler Cowen-type argument for why maximising economic growth might be the most important thing to do
- The possible drawbacks of boosting economic growth (animal welfare, climate change etc.) (various)
Suffering-focused ethics
- The arguments for a suffering-focused ethics (Gloor, 2019)
- The implications of such a view, including focusing on s-risks
Finally, organisations such as the Global Priorities Institute are carrying out important research that could be full of IBCs. I think it’s important that this research is made easily digestible for those who may not want to read the original research.
Some potential objections
Below are some possible objections to my argument that I have thought up. I leave out some possible objections that I feel I have already (tried to) tackle in the text above. I certainly haven’t covered all possible objections in this post, so I look forward to people’s comments.
Objection #1: It’s not really the best use of time for people
“Think of the opportunity cost of people reading up on all of this. Is it really worth the time?”
I have a few things to say here. First of all, I don’t want becoming acquainted with the IBCs to be a very time-consuming endeavour, and I don’t think it has to be. The way to make it easy for people is to produce more educational content that is easy to digest. Not everyone wants to read academic papers. I would love to see the EA community produce a wider variety of content such as videos, simplified write-ups, or online courses, and I’d actually be quite interested in playing a part in this myself. I plan to make a video on one of my proposed IBCs as a personal project (to assuage concerns about doing harm, I don’t plan to refer to EA in it).
Secondly, even if it is a non-negligible time commitment, I think it’s probably worth it for the reasons I outlined earlier. Choice of cause area is arguably the most important decision an EA will make, and the differences in value between cause areas are potentially astronomical. It makes sense to me to spend a decent amount of time becoming acquainted with any ideas that could prove pivotal in deciding on a cause area. Even if one doesn’t want to change career, becoming convinced of the importance of a certain cause area can change where one donates and how one discusses ideas with other EAs, so I think it’s worth it for almost anyone.
Objection #2: People know these things
“Most people do consider important considerations before deciding on their preferred cause area and career and already know about these topics.”
I am fairly confident that many IBCs are not well-understood by a large number of EAs (myself included). I recently carried out a poll on the EA Polls Facebook group asking about awareness of the concept of ‘complex cluelessness’ and what people think are the implications of this for work in global health and development. The most common response was “I’m not aware of the concept”.
That’s just one example of course, but my general impression from interacting with others in the EA community is that EAs have not engaged with all the IBCs enough. Further polls could shed more light on what IBCs people are well aware of, and what IBCs people aren’t so well aware of.
Objection #3: Greater knowledge of these things won’t change minds
“Even if people don’t know about some of these things, I doubt greater knowledge of them would actually change minds. These ideas probably won’t have the effect you’re claiming they will.”
Maybe there aren’t many Arjuns or Maryams out there who would change their mind when confronted with these IBCs, perhaps because they are already aligned to the best cause area (given their underlying views), or because I’m just off the mark about the potential power of many of these ideas.
This is possible, but I’m more optimistic, in part due to my personal experience. On a few occasions I have come across ideas that have caused me to seriously rethink my preferred cause area. Since learning about EA I have gone from global health, to ending factory farming, to being quite unsure as I started to take longtermism more seriously. EAs are generally open-minded and rational, so I’m hopeful that a significant number would, when presented with the relevant ideas, change their preferred cause area.
However, even if greater knowledge doesn’t change minds, I still think there is a strong case for a greater focus on educating people on these topics. I think this could improve the quality of discussion in the community and aid the search for the ultimate truth.
Objection #4: Some people need to know all this, but not everyone
“We probably do need some people to be aware of all of this so that they can make a fully-informed decision of which cause area is most important. These people should be those who are quite influential in the EA movement, and they probably know all this anyway. As for the rest of us average Joes, can’t we just defer to these people?”
In my opinion, it’s not as simple as that. It isn’t really clear to me how one can defer on the question of which cause area to prioritise. I guess if one were to try to defer in this way, one would probably go long-termist, as this is what most of the most prominent EAs seem to align to. In practice however, I don’t think people want to defer on cause area and, if they’re not going to defer, then we should ensure that they are well-informed when making their own decision.
Objection #5: This process never ends and we have to make a decision
“Well, IBCs are going to keep popping up as foundational research continues. I could end up wanting to change my cause area an arbitrarily large number of times and I think I should just make a decision on the cause area”.
Fair enough: at some point many of us should make a decision on which cause area to focus on, accepting that there may be IBCs yet to be uncovered that could change our views; we can’t just wait around forever. However, this is no excuse not to engage with the IBCs we are currently aware of. That seems to be the least we should do.
After engaging with the IBCs we are currently aware of, there are two broad decisions one can make. First, one could have a preferred cause area and feel fairly confident that further IBCs won’t change one’s mind; in this case one can simply pursue that cause area. Second, one could feel it is quite possible that further IBCs will come along that change one’s preferred cause area; in that case one may want to remain cause-neutral by pursuing paths such as global priorities research, earning-to-give/save, or EA movement building (I realise I’ve actually defined two of these as cause areas themselves, but at the same time they seem so robustly good that it is fairly safe to pursue them even when quite uncertain about how best to do good in the world). Either way, I think it is important for EAs to engage with all the IBCs we are currently aware of.
If what I have said is true, then there is a central body of knowledge that most EAs should be aware of and understand, at least to a certain degree, and currently many EAs don’t have a good understanding of much of it.
In light of this these are my proposed next steps:
1. Please comment on this post and either:
   - Tear all of this to shreds, in which case steps 2-5 can be ignored
   - Shower me with praise and suggest some additions to/removals from my list of IBCs, in which case I would proceed with step 2
2. Try to gauge to what extent the EA community is aware of the IBCs, perhaps through a survey asking people about their awareness of specific concepts, maybe even including some questions to test knowledge
3. Do a stocktake of all the resources that are currently available to learn about the IBCs
4. Identify where further content might be useful to inform a wider range of people about the IBCs, and determine what type of content this should be
5. Potentially collaborate with others to produce this content and disseminate it to the EA community (I am very aware of the danger of doing harm at this stage; I would mitigate this risk, or may not engage in this stage myself, if necessary)