Many thanks to Ozzie Gooen for suggesting this project, to Marta Krzeminska for editing help and to Michael Aird and others for various comments.

In the last few years, there have been many dozens of posts about potential new EA cause areas, causes and interventions. Searching for new causes seems like a worthy endeavour, but on their own, the submissions can be quite scattered and chaotic. Collecting and categorizing these cause candidates seemed like a clear next step.

We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we feel like this list itself can be useful already. 

Further, as I kept adding more and more cause candidates, I realized that aiming for completeness was a fool's errand, or at least too big a task for an individual working alone.

Below is my current list with a simple categorization, as well as an occasional short summary which paraphrases or quotes key points from the posts linked. See the last appendix for some notes on nomenclature. If there are any entries I missed (and there will be), please say so in the comments and I'll add them. I also created the "Cause Candidates" tag on the EA Forum and tagged all of the listed posts there. They are also available in a Google Sheet.

Animal Welfare and Suffering

Pointer: This cause area has several related EA Forum tags (farmed animal welfare, wild animal welfare, meat alternatives), where more cause candidates can be found. Brian Tomasik et al.'s Essays on Reducing Suffering is also a gift that keeps on giving for this and other cause areas.

1. Wild Animal Suffering Caused by Fires

Related categories: Politics: System change, targeted change, policy reform.

An Animal Ethics grantee designed a protocol aimed at helping animals during and after fires. The protocol contains specific suggestions, but the path to turning these into policy is unclear.

2. Invertebrate Welfare

"In this post, we apply the standard importance-neglectedness-tractability framework to invertebrate welfare to determine, as best we can, whether this is a cause area that is worth prioritizing. We conclude that it is."

Note: See also Brian Tomasik's Do Bugs Feel Pain.

3. Humane Pesticides

The post argues that insects experience consciousness, and that there are a lot of them, so we should give them significant moral weight (comments contain a discussion on this point). The post goes on to recommend subsidization of less-painful pesticides, an idea initially suggested by Brian Tomasik, who "estimates this intervention to cost one dollar per 250,000 less-painful deaths." The second post goes into much more depth.

4. Diet Change

The first post is a stub. The second post looks at a reasonably high-powered study on individual outreach. It concludes that, based on reasonable assumptions, the particular intervention used (showing videos of the daily life of factory-farmed pigs) isn't competitive with other interventions on humans:

“(...) we now think there is sufficient evidence to establish that individual outreach may work to produce positive change for nonhuman animals. However, evidence in this study points to an estimate of $310 per pig year saved (90% interval: $46 to $1100), which is worse than human-focused interventions even from a species neutral perspective. More analysis would be needed to see how individual outreach compares to other interventions in animal advocacy or in other cause areas.

Given that a person can be reached for ~$2 and that they spare ~1 pig week, that works out to $150 per pig saved (90% interval: $23 to $560) and, again assuming that each pig has a ~6 month lifespan, that works out to $310 per pig year saved (90% interval: $47 to $1100). To put this in context, Against Malaria Foundation can avert a year of human suffering from malaria for $39, this does not look very cost-effective.”

Comments point out that the postulated retention rates may be too high (making the intervention even worse). Lastly, the second post was written in 2018, and more work might have been done in the meantime. 

The third post is somewhat more recent (Nov 2020), but it reports results in terms of "portions of meat not consumed" rather than “animal-years spared”. This makes comparison with previous research less straightforward: different animals endure suffering of different intensity and duration per kilogram of meat produced, and the post does not report how big these portions are or which animals they come from.

The fourth post explores "current and developing alternatives to self-reporting of dietary data."

5. Vegan/Vegetarian Recidivism

"But there's a big problem with vegan/vegetarian advocacy: most people who switch to vegan/vegetarian diets later switch back."

The post suggests paying more attention to the growth rate of the vegan/vegetarian movement. It also suggests some specific measures, like producing resources which make it easier for vegetarians/vegans to get all the nutrients they need in the absence of animal products.

6. Plant-Based Seafood

This Charity Entrepreneurship report ultimately concludes that: "...while fish product creation in Asia is the most promising intervention within food technology in terms of impact on animals, it is not the most promising intervention for Charity Entrepreneurship to focus on."

Note: Charity Entrepreneurship has produced many more reports. But, as they are not tagged on the EA Forum, they were difficult to incorporate in this analysis, given the search method I was using (see Appendix: Method). They are, however, available on their webpage.

7. Moral Circle Expansion

"This blog post makes the case for focusing on quality risks over population risks. More specifically, though also more tentatively, it makes the case for focusing on reducing quality risk through moral circle expansion (MCE), the strategy of impacting the far future through increasing humanity's concern for sentient beings who currently receive little consideration (i.e. widening our moral circle so it includes them.)"

In particular, the post makes this point by comparing moral circle expansion to AI alignment as a cause area.

8. Analgesics for Farm Animals

Related categories: Politics: System change, targeted change, policy reform.

"There is only one FDA approved drug for farm animal pain in the U.S. (and that drug is not approved for any of the painful body modifications that farm animals are subjected to), FDA approval might meaningfully increase the frequency with which these drugs actually used, and addressing this might be a tractable and effective way to improve farm animal welfare [...] Farm animals in the U.S. almost never get pain medication for acutely painful procedures such as castration, tail docking, beak trimming, fin cutting, abdominal surgery, and dehorning. What I was not aware of until this morning is that there is only one FDA approved medication for ANY farm animal analgesic, and that medication is specifically approved only for foot rot in cattle [...] In contrast, the EU, UK, and Canada have much higher standards for food residues in other domains (hormones, antibiotics, etc.) but have nevertheless approved several pain medication for several procedures in species of farm animals. As a result, these drugs are much more commonly used there."

9. Welfare of Specific Animals

Rethink Priorities has done research on the welfare of specific animals, and on possible interventions to improve it. They have produced a number of profiles, some of which I include here for illustration purposes, but without any claim to comprehensiveness. Thanks to Saulius for drawing my attention to this point.

10. Cell-Based Meat R&D

Based on a Fermi estimate, the author concludes that "cell-based meat research and development is roughly 10 times more cost-effective than top recommended effective altruist animal charities."

11. Antibiotic Resistance in Farmed Animals

“Reducing antibiotic use in farms is very likely to be net positive for humans. However, it is not clear whether it would be net positive for animals. If farmers stop using antibiotics, animals might suffer from more disease and worse welfare. This effect might be mitigated by the fact that (i) farmers can replace antibiotics with substitutes such as probiotics, prebiotics, and essential oils, which also prevent disease, and (ii) farmers might be motivated to make adaptations to farming practices which prevent disease and also benefit animal welfare, such as lowering stocking density, reducing stress, and monitoring disease more closely. It is not obvious how likely it is that farmers will take these disease-mitigating measures, but since high disease rates increase mortality, decrease carcass profitability, and could cause reputational damage, it is plausible that they will be motivated to do so. Alternatively, animal advocates could take the 'holistic strategy' of promoting welfare measures which also tend to cause reduced antibiotic use. Tentatively, I take the view that eliminating antibiotic use on a farm would not lead to worse lives for those animals.

Eliminating antibiotics might also be expensive for producers, and because of this, it could increase the price of animal products in the short term, which would be good for animals. The literature weakly supports the view that meat prices will increase following an antibiotic ban. However, there is also some support for the view that price will increase differentially for smaller and larger animals, which lands us with the small animal replacement problem. This problem could be avoided by the approach taken to the intervention, e.g. a corporate campaign targeting only small animals.”

12. Helping Wild Animals Through Vaccination

"We will first see some cases of successful vaccination programs in the past, including vaccination against rabies, anthrax, rinderpest, brucellosis, and sylvatic plague, in addition to the proposal to vaccinate great apes against Ebola. Next, we will see how zoonotic epidemics have been the object of growing attention.We will then see some responses to them that are misguided and harmful to animals. We will then see the prospects for eventual wild animal vaccination programs against coronaviruses like SARS-CoV-2. We will see the three main limitations of such hypothetical programs. These are the lack of an effective vaccine, the lack of funding to implement the vaccination program, and the lack of an effective system to administer the vaccine. We’ll consider the extent to which these limitations could be overcome and what clues previous examples of vaccination can provide. As we will see, such programs remain to date merely speculative. They could be feasible at some point as other wild animal vaccination programs show. However, it remains uncertain whether there will be human interest in implementing them, despite the benefits for animals themselves.

Finally, we will see the reasons why, if implemented, programs of this kind could substantially help not just the vaccinated animals, but many others as well. Not only would this prevent zoonotic disease transmission to other animals, but such measures could also help inform other efforts to vaccinate animals living in the wild. Moreover, each successful vaccination program helps to illustrate that helping animals in the wild is not impractical, but realistic. This helps to raise concern for these animals and to inspire action on their behalf.”

Community Building

1. Effective Animal Advocacy Movement Building

Related categories: Animal Welfare and Suffering

The post argues that EAA-specific movement-building might be particularly neglected within EA.

2. Non-Western EA

The post asks about expanding EA beyond the USA and Europe. It gets some pushback in the comments, particularly because of the difficulty of transmitting ideas with high fidelity.

3. Understanding and/or Reducing Value Drift

Pointer: This cause has its own EA Forum tag.

4. Values Spreading

Values spreading refers to improving other people's values. The idea has met with some skepticism, but perhaps variants of it, like highly targeted or high-leverage values spreading, could still be promising.

Transhumanism

Related categories: Global Health and Development, States of Consciousness

1. Cryonics

Cryonics will probably get cheaper if more people sign up. It might also divert money from wealthy people who would otherwise spend it on more selfish things. Further, cryonics might help people take long-term risks more seriously. 

"One advantage of life extension is that it might prompt people to think in a more long-term-focused way, which might be nice for solving coordination problems and x-risks." 

One could also argue "that cryonics doesn't create many additional QALYs because by revival time we've probably hit Malthusian limits. So any revived cryonics patients would be traded off against other future lives."

2. Ageing

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

3. Genetic Enhancement

The post makes the argument from both a short-term and a long-term perspective. I was particularly intrigued by the suggestion to select for empathy; the comments also suggest selecting against malevolent traits.

4. Finding Extraterrestrial Life

Politics

Politics: Ideological Politics

1. Local Political Causes

2. Fighting Harmful Ideologies

Note: Post is a stub.

Politics: Global Politics

Pointer: See also the EA Forum tag for Global Governance.

1. Democracy Promotion

The author estimates the benefits of democracy. They then suggest concrete actions to take: "a review essay on the efficacy of tools of external democracy promotion finds that non-coercive tools like foreign aid that is conditioned on democratic reforms and election monitoring are effective, while coercive tools like sanctions and military intervention are ineffective... One tool EA organizations can fund is election monitoring. Research suggests that election monitoring can play a causal role in decreasing fraud and manipulation." Some forum comments suggest that the area is too costly, and not that neglected.

2. Human Rights in North Korea

The scale of suffering seems vast, and marginal interventions (e.g., smuggling North Koreans out of China) might be cost-effective. The post also suggests capacity building in this area might be a promising intervention.

3. Improving Local Governance in Fragile States

Politics: System Change, Targeted Change, and Policy Reform

Note: These categories are grouped together because, in practice, the distinction between broad system change from outside a political system and targeted change or policy reform within a system is often unclear.

Pointer: This cause candidate has a related EA Forum tag: Policy Change.

1. Better Political Systems and Policy-Making

Pointer: The related Institutional Decision-Making has its own EA Forum tag; more cause candidates can be found there.

2. Getting Money Out of Politics and Into Charity

Donors from two opposing parties could be matched to send their money to their favourite charities rather than to zero-sum political contests.

3. Vote Pairing

The post makes the case that vote pairing —where one or more voters for a mainstream candidate in a safe US state vote for a third-party candidate in exchange for a vote from a third-party supporter in a contested state— is much more effective than other traditional interventions.

4. Electoral Reform

Pointer: This cause has its own EA Forum tag. I’m adding one post for illustration purposes:

Note: Included here for completeness. This isn't, strictly speaking, a new cause area because the Center for Election Science is already working on it.

5. Tax Justice

The post gives an overview of current efforts to make tax evasion or tax flight harder, and why this should be thought of as positive. A commenter, Larks, makes the opposite case.

6. Effective Informational Lobbying

The first post starts with a literature review and concludes by proposing "something along the lines of ‘effective lobbying’: a rigorous approach to institutional-level change, starting with the legislature, that would take a portfolio approach to policy advocacy," and outlines how that would broadly look.

The second post is a "call to all interested in lobbying as both a career and an EA methods topic." Having a discussion group on this topic seems like a great idea, so I gave the post a strong upvote. However, it seems like it didn't get picked up when it was posted in mid-December 2020.

7. Ballot Initiatives

"The goal of this post is to bring ballot initiatives to the collective attention of the EA community to help promote future research into the effectiveness of ballot initiative campaigns for EA-aligned policies and movement-building." The post gives examples of what might be accomplished with ballot initiatives and covers their advantages and disadvantages.

8. Increasing Development Aid

Related categories: Global Health and Development.

9. Institutions for Future Generations 

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

Politics: Armed Conflict

Pointer: This cause has two related EA Forum tags, Armed Conflict and Nuclear Weapons, which may contain more cause candidates.

1. Preventing or Reducing The Severity of Nuclear War

Note: Luisa Rodríguez has more content on this cause.

Global Health and Development

Pointer: This cause candidate has its own EA Forum tag.

1. Reducing the Efficiency of Genocides

Related categories: Politics

The post makes the case that at least some genocides (the Rwandan, Myanmar, and possibly the Somalian genocides) could have been stopped with better oversight and targeted use of resources.

2. Malnutrition

The author asks about the impact of malnutrition, i.e., "eating the wrong things as a voluntary choice despite having alternatives." This would mostly be a problem for middle and high-income countries.

3. Raising IQ

Related categories: Transhumanism

"Interventions to raise IQ could do a lot of good because of potentially significant flow-through effects of intelligence. IQ also has the benefit of being easily quantifiable, which would make it simpler to compare interventions."

Note: In practice, the raising-IQ framing is unpalatable for some people, as are some charities in an adjacent space, like Project Prevention. However, because one of the most effective ways of raising IQ is reducing malnourishment or undernourishment, and in particular, iodine deficiency, one could focus on these causes instead. Note that mal/undernourishment in kids leads to lower wages in adulthood. Although one might suspect IQ is the mediating factor, it's not necessary to emphasize the connection.

4. Physical Goods

"Seven out of eight of the Givewell top charities deal with physical goods—anti-malaria nets, deworming medication, and vitamins. But otherwise, there's not much discussion/active work in EA on how to improve/spin up the physical manufacture and distribution of physical goods beyond donating money to existing organisations."

5. Fighting Diarrhoea

Diarrhoea seems like a large problem, judging by the number of people who die of it each year (as of 2015 data). The remedy is apparently "oral rehydration therapy: a large pinch of salt and a fistful of sugar dissolved in a jug of clean water."

Note: GiveWell has moved slowly and cautiously on this topic, but Evidence Action's Dispensers for Safe Water program is now a GiveWell Standout charity.

6. International Supply Chain Accountability

Related categories: Politics: System change, targeted change, policy reform.

Workers' organizations can lobby international companies to adopt better labour conditions across their supply chains, and get the original companies to pay for these efforts. A particularly promising strategy is to apply pressure in the countries these companies originate from (Spain, Germany, the US), rather than in the countries where the products are made. This seems to be working in the case of Inditex (Zara, and various other textile brands). It is unclear whether, and how, EA might get organizations working in this area to accept external funds, but they could in principle absorb a lot of them.

Note: I'm the author of this post.

7. Chloramphenicol for Heart Attacks

The article linked suggests approving Chloramphenicol as a coronary treatment, which is claimed to be a fixed cost of "$25 million spent once to save 400,000 lives per year in the U.S. alone." Comments point out that the estimate "seems to be based on one study of 21 pigs."

8. COVID-19

Pointer: This cause candidate has its own EA Forum tag, which contains more cause candidates. Here are some examples included for illustration purposes:

9. Clean Cookstoves

This is a very quick, rough model of the cost-effectiveness of promoting clean cookstoves in the developing world. It suggests that:

"If a clean cookstove intervention is successful, it may have roughly the same ballpark of cost-effectiveness as a GiveWell-recommended charity.

Circa 90% of the impact comes from directly saving lives, based on a model which estimated both the number of lives saved and the impact on climate change."

10. Agricultural R&D

“In combination, the difficulties with estimating the effects of R&D and the potential barriers to adoption suggest that the estimated benefit-cost ratios reported earlier are likely to be upwardly biased. The benefit-cost ratios estimated are also lower than those associated with Giving What We Can’s currently recommended charities. For instance, the $304 per QALY estimate based on the Copenhagen Consensus benefit-cost ratio, which appears to be at the higher end of the literature, compares unfavourably to GiveWell’s baseline estimate of $45 to $115 per DALY for insecticide treated bednets (GiveWell, 2013). The benefit-cost ratios also appear to be lower than those associated with micronutrient supplements, as discussed earlier. While there are significant benefits that remain unquantified within agricultural R&D, the same is also true for interventions based on bednet distribution, deworming and micronutrient supplements. As a result, while this area could yield individual high impact opportunities, the literature as it stands does not seem to support the claim that agricultural R&D is likely to be more effective than the best other interventions.”

11. Air Purifiers Against Pollution

"The goal for this post is to give an introduction into the human health effects of air pollution, encourage further discussion, and evaluate an intervention: The use of air purifiers in homes. These air purifiers are inexpensive, standalone devices not requiring any special installation procedure. A first analysis suggests that the cost-effectiveness of this intervention is two orders of magnitude worse than the best EA interventions. However, it is still good enough to qualify as an ’effective’ or even ‘highly effective’ health intervention according to WHO criteria."

Global Health and Development: Mental Health

Related categories: States of consciousness.

Pointer: This cause candidate has two related EA Forum tags: Mental Health (Cause Area) and Subjective Well-Being. For illustration purposes:

"Not only does mental illness seem to cause as much, if not more, total worldwide unhappiness than global poverty, it also seems far more neglected. Effective mental health interventions exist currently. These have been improving over time and we can expect further improvements. I estimate the cost-effectiveness of a particular mental health organisation, StrongMinds, and claim it is (at least)four times more effective per dollar than GiveDirectly, a GiveWell recommended top charity. This assumes we understand cost-effectiveness in terms of happiness, as measured by self-reported life satisfaction [...] Even if mental health is a large-scale, neglected problem, we shouldn't consider it a possible moral priority if there aren't effective treatments. Fortunately, there are."

The project, which seems to be ongoing, tries to systematically assess a long list of mental health interventions.

Initially, various EAs proposed varied experimental mental health interventions. There are a number of posts asking if "mental health issue X" should fall within Effective Altruism's purview. Of these, mental health apps are probably the most well-argued intervention and stand in a class of their own: in particular, they are scalable.

States of Consciousness

1. Psychedelics

Related categories: Global Health and Development: Mental Health.

The post makes the case from an EA perspective and offers a cash prize for counter-arguments.

2. Fundamental Consciousness Research

"...if your goal is to reduce suffering, it's important to know what suffering is."

3. Increasing Access to Pain Relief (Opioids) in Developing Countries

Related categories: Global Health and Development. Politics: System change, targeted change, policy reform.

Access to opioids is unduly restricted, such that the pain of some deaths can amount to "torture by omission". The author suggests, as a tentative donation target, the Pain and Policy Studies Group of the University of Wisconsin-Madison which "runs 'International Pain Policy Fellowships', which train national champions of the cause to identify and overcome barriers to the use of opioids in their countries. The programme has had numerous in-country successes." However, the program seems to now be defunct. One organization that I personally perceive as promising, which is working in this space, is The Organisation for the Prevention of Intense Suffering.

4. Cluster Headaches

"Cluster headaches are considered one of the most excruciating conditions known to medicine..."; "there is [...] evidence that psilocybin mushrooms can prevent and abort entire episodes. Such evidence has been published as survey data and is also widely reported by patients in cluster headache groups. TwoPhase I RCTs are ongoing and should add to the existing evidence for efficacy. Lack of access to psilocybin mushrooms and widespread information about using them are key barriers to effective treatment for many patients."

5. Drug Policy Reform

Related categories: Politics: System change, targeted change, policy reform.

"In the last 4 months, I've come to believe drug policy reform, changing the laws on currently illegal psychoactive substances, may offer a substantial, if not the most substantial, opportunity to increase the happiness of humans alive today."

6. Love

"Making it possible for people to deliberately fall in love seems like a high priority, competitive with good short- and medium-term causes such as malaria prevention and anti-aging. However, there is little serious work on it."

7. Universal Euphoria

Compressing as much happiness as possible into a unit of matter is a goal that can be pursued at all levels of technological development. With current technology, we could have animal farms dedicated to making rats, about which we know a fair bit, maximally happy. With future technology, we could have computer simulations of maximal bliss.

The idea is sometimes thought to be morally repugnant or philosophically misguided, and a quick Fermi estimate suggests that current happy animal farms would not be cost effective compared to interventions in the developing world. 

Space

Pointer: This cause candidate has its own EA Forum tag.

Related categories: Existential risk, Transhumanism, Politics: System change, targeted change, policy reform.

1. Space Colonization

If we had a backup planet, existential risk would be reduced. Further, we'd be able to have more people. However, even with a backup planet, existential risk in both planets would be correlated, and the protection from extinction that the second planet provides would be inversely proportional to the degree of correlation. One might expect this correlation to be particularly high for hostile AI. See here for some discussion on these points.

User @kbog looked at this issue in more depth, and concluded that:

"In this post I take a serious and critical look at the value of space travel. Overall, I find that the value of space exploration is dubious at this time, and it is not worth supporting as a cause area, though Effective Altruists may still want to pay attention to the issue. I also produce specific recommendations for how space organizations can rebalance their operations to have a better impact."

2. Space Governance

"I argue that space governance has been overlooked as a potentially promising cause area for longtermist effective altruists. While many uncertainties remain, there is a reasonably strong case that such work is important, time-sensitive, tractable and neglected, and should therefore be part of the longtermist EA portfolio [...] The work I have in mind aims to replace the current state of ambiguity with a coherent framework of (long-term) space governance that ensures good outcomes if and when large-scale space colonisation becomes feasible."

Education

Related categories: Global Health and Development.

1. Global Basic Education

The post could use some work, but I can imagine both of its points being true: education has intrinsic value (all things being equal, we want to have more education), and extrinsic value (it is somewhat correlated with health outcomes, and economic productivity).

2. Philosophy in Schools

"In this post I consider the possibility that the Effective Altruism (EA) movement has overlooked the potential of using pre-university education as a tool to promote positive values and grow the EA movement. Specifically, I focus on evaluating the potential of promoting the teaching of philosophy in schools."

Climate Change

Related categories: Politics: System change, targeted change, policy reform. Politics: Culture war.

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

1. General

Most notably, climate change has a long tail of bad outcomes, and it impacts more than just GDP, which is all that previous models tended to capture.

Note: The disagreement about whether EA should give more attention to climate change is probably older than any of these posts.

2. Public R&D to Deal With Climate Change

3. Leveraging the Climate Change Movement

"This willingness to act seems to be mostly tied to climate change and cannot be easily directed towards more effective causes. Therefore, I think EAs could influence existing concerns and willingness to act on climate change to direct funds/donations towards cost-effective organizations (i.e., CfRN, CATF)with relatively low investment of time."

4. Extinguishing or Preventing Coal Seam Fires

"Much greenhouse gas emissions comes from uncontrolled underground coal fires. I can't find any detailed source on its share of global CO2 emissions; I see estimates for both 0.3% and 3% quoted for coal seam fires just in China, which is perhaps the world's worst offender. Another rudimentary calculation said 2-3% of global CO2 emissions comes from coal fires. They also seem to have pretty bad local health and economic effects, even compared to coal burning in a power plant (it's totally unfiltered, though it's usually diffuse in rural areas). There are some methods available now and on the horizon to try and put the fires out, and some have been practiced - see the Wikipedia article. However, the continued presence of so many of these fires indicates a major problem to be solved with new techniques and/or funding for the use of existing techniques."

5. Paris-Compliant Offsets

"We should be rapidly exploring higher quality and more durable offsets. If adopted, these principles could be a scalable and high-leverage way of moving organisations towards net-zero."

Existential Risks

Pointer: This cause has its own EA Forum tag. More cause candidates may be found there, or in the related AI Alignment, AI Governance and Civilizational Collapse & Recovery tags.

1. Corporate Global Catastrophic Risks

"It might be useful to think of corporations as dangerous optimization demons which will cause GCRs if left unchecked by altruism and philanthropy."

Comments present a different perspective.

2. Aligning Recommender Systems

Pointer: See the related Near-Term AI Ethics tag.

"In this post we argue that improving the alignment of recommender systems with user values is one of the best cause areas available to effective altruists, particularly those with computer science or product design skills."

3. Keeping Calories in the Ocean for a Possible Catastrophe

In particular, the post suggests cultivating bacteria. ALLFED's director answers in the comments.

Note: Included here for completeness. This isn't, strictly speaking, a new cause area since ALLFED is now working on it.

4. Resilience of Industry and the Electric Grid

5. Foods for Global Catastrophes (ALLFED)

Note: Included here for completeness. This isn't, strictly speaking, a new cause area since ALLFED is now working on it.

6. Preventing Ideological Engineering and Social Control

Related categories: Politics

Ideological engineering and social control: A neglected topic in AI safety research? (@geoffreymiller)

"Will enhanced government control of populations' behaviors and ideologies become one of AI's biggest medium-term safety risks?"

7. Reducing Long-Term Risks from Malevolent Actors

The authors make the case that situations in which malevolent actors rise to power have many negative externalities. They propose countermeasures, such as advancing the science of malevolence. This would involve developing better constructs and measures of malevolence, and hard-to-beat detection measures, such as neuroimaging techniques. Comments suggest further concrete measures, such as holding elections for parties rather than for leaders (which gives less power to individuals).

8. Autonomous Weapons

Pointer: This cause candidate has its own EA Forum tag

9. AI Governance

Pointer: This cause candidate has its own EA Forum tag, and is already being worked on at FHI's Centre for the Governance of AI, among other places. For illustration purposes:

10. Improving Disaster Shelters to Increase the Chances of Recovery From a Global Catastrophe

Pointer: This cause candidate has its own EA Forum tag.

“What is the problem? Civilization might not recover from some possible global catastrophes. Conceivably, people with access to disaster shelters or other refuges may be more likely to survive and help civilization recover. However, existing disaster shelters (sometimes built to ensure continuity of government operations and sometimes built to protect individuals), people working on submarines, largely uncontacted peoples, and people living in very remote locations may serve this function to some extent.

What are the possible interventions? Other interventions may also increase the chances that humanity would recover from a global catastrophe, but this review focuses on disaster shelters. Proposed methods of improving disaster shelter networks include stocking shelters with appropriately trained people and resources that would enable them to rebuild civilization in case of a near-extinction event, keeping some shelters constantly full of people, increasing food reserves, and building more shelters. A philanthropist could pay to improve existing shelter networks in the above ways, or they could advocate for private shelter builders or governments to make some of the improvements listed above.”

11. Discovering Previously Unknown Existential Risks

The most dangerous existential risks appear to be the ones that we only became aware of recently. As technology advances, new existential risks appear. Extrapolating this trend, there might exist even worse risks that we haven't discovered yet.

Rationality and Epistemics

Pointer: This cause candidate has its own EA Forum tag. It has seen more work on LessWrong.

1. Developing the Rationality Community

2. Progress Studies

3. Epistemic Progress

  • Epistemic Progress has also been suggested as a cause area, but this topic has seen more activity outside the EA Forum.

Donation Timing

1. Counter-Cyclical Donation Timing

2. Patient Philanthropy

Pointer: This cause has its own EA Forum tag. For illustration purposes:

3. Improving our Estimate of the Philanthropic Discount Rate

How we should spend our philanthropic resources over time depends on how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend less now and more later.

According to a simple model, improving our estimate of the discount rate might be the top effective altruist priority.
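As a toy illustration of the tradeoff: suppose a dollar invested for a year grows by a factor of (1+r), while impact realized a year from now is discounted by a factor of 1/(1+d). Giving later then beats giving now exactly when (1+r)/(1+d) > 1, i.e., when r > d. Because a small change in our estimate of d can flip which side of that inequality we are on, and with it the entire spending schedule, improving the estimate has unusually high leverage.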

Other

Trivia: See Wastebasket Taxon.

1. Eliminating Email

Civilization could have better workflows around email.

2. Software Development in EA

Note: Post is a stub.

3. Tweaking the Algorithms which Feed People Information

The post is structured in a confusing way, but a core suggestion is to tweak various current AI systems, particularly the YouTube and Facebook algorithms, to better fit EA values. However, the post doesn't give specific suggestions of the sort a YouTube engineer could implement.

4. Positively Shaping the Development of Crypto-Assets

The article, written in 2018, tries to analyze the promisingness of influencing the development of crypto-assets from an ITN (importance, tractability, neglectedness) perspective. Three of its most notable points are:

  • Effective Altruists should shape the implementation of any high-impact new technology,
  • crypto-assets constitute a new organizational technology which could solve a bunch of coordination problems, and
  • the use of crypto assets could result in beneficial resource redistribution.

5. Increasing Economic Growth

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

6. For-Profit Companies Serving Emerging Markets

7. Land Use Reform 

Pointer: This cause candidate has its own EA Forum tag, but no full-fledged EA Forum posts.

The Land Use Reform tag covers posts that discuss changes to regulations around the use of land (e.g. for housing or business development). These changes could lead to increases in economic growth and welfare in locations around the world.

8. Markets for Altruism 

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

9. Meta-Science 

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:

10. Scientific Progress 

Pointer: This cause candidate has its own EA Forum tag. However, it is mostly a stub, as far as EA Forum posts go. For illustration purposes:

11. EA Art & Fiction 

Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:


Appendix I: Method

I queried all forum posts using the following query against the EA Forum's GraphQL API:

{
  posts(input: {
    terms: {
      meta: null  # this seems to get both meta and non-meta posts
      after: "10-1-2000"
      before: "10-11-2020"  # or some date in the future
    }
  }) {
    results {
      title
      url
      pageUrl
      postedAt
    }
  }
}

Then, I copied them over to a document called last5000posts.txt.

The EA Forum API returns a maximum of 5000 entries, but this is not a problem, because the forum currently only has 4077 posts.
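For reproducibility, here is a minimal command-line sketch of running the same query with curl. It assumes the API is exposed at https://forum.effectivealtruism.org/graphql (treat the endpoint as an assumption), and it saves raw JSON rather than the formatted output I copied, which is equally greppable:

curl -s 'https://forum.effectivealtruism.org/graphql' \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ posts(input: {terms: {meta: null, after: \"10-1-2000\", before: \"10-11-2020\"}}) { results { title url pageUrl postedAt } } }"}' \
  > last5000posts.txt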

I then searched for the keywords "cause x", "cause y", "new cause", "cause", "area", "neglected", "promising", "proposal", "intervention", "effectiveness", "cost-effective", using grep, a Unix/Linux tool, taking care to use the case-insensitive option (this is necessary because titles appear in mixed case: although links contain the title in lowercase, links don't always contain the full title). An example of using grep to do this is:

grep -i "cause x" last5000posts.txt >> searchoutputs.txt 

which appends the results to the searchoutputs.txt file if the file exists, and otherwise creates that file. 
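To run all of the keywords in one pass, a loop along the following lines would work (a sketch; note that the bare keyword "cause" already matches anything "cause x" or "new cause" matches, so it is worth deduplicating the combined output):

for kw in "cause x" "cause y" "new cause" "cause" "area" "neglected" \
          "promising" "proposal" "intervention" "effectiveness" "cost-effective"; do
    grep -i "$kw" last5000posts.txt >> searchoutputs.txt  # append matches for this keyword
done
sort -u searchoutputs.txt > searchoutputs-deduped.txt  # drop duplicate hits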

I then looked through the posts with the “Cause-Prioritization” tag and through the most upvoted posts, to see if I had missed anything. Next, I went through all EA Forum tags which had some relation to cause candidates and read through the relevant posts.

When I started tagging the posts I'd gathered, I found out about the "Less-discussed Causes" tag. I didn't like its categorization scheme, which also included things other than cause candidates, so I continued creating my own tags. The "Less-discussed Causes" tag had about 5 posts I wouldn't otherwise have found; I also found many more posts which were not in the tag.

I imagine a similar method could be used to efficiently populate other tags.

Appendix II: A Note on Nomenclature

Trivia: See Soviet Nomenklatura.

Thanks to Michael Aird for pushing for clarification of the terms I'm using, and for asking exactly what this list was about.

Terms:

  • Cause Area: A broad category of causes. For example, "animal welfare and suffering" would be a cause area, "factory-farmed animals" and "wild animal welfare" would be slightly less-broad cause areas.
  • Cause: Something more specific than a cause area. For example, "analgesics for farm animals" would be a cause within the "factory-farmed animals" cause area.
  • Intervention, charity idea, etc.: Something more specific than a cause. For example, a ballot initiative to provide more space for factory-farmed animals, like 2018 California Proposition 12, would be an intervention. "Working to bring approval voting to Saint Louis" would be an intervention within the cause "better voting methods", itself within the cause area "better political systems".
  • Meta-intervention: An intervention that can be applied to different causes. For example, ballot initiatives.

Question: On what level of specificity am I working in this post?

In practice, it's often hard to establish whether something is a cause area, a cause, or an intervention. For example, I'd say that "climate change" is a cause area and that "extinguishing or preventing coal seam fires" is a cause, but the original post refers to it as a cause area.

Column C ("Level of specificity") of this Google Sheet contains information about the categorization chosen for each cause candidate (from intervention to cause area).



At the risk of being overly self-promotional, I have written a few posts on cause candidates that I don't see listed here.

Another potential cause area that's not listed: reducing value drift (e.g., this post).

Thanks, I added the first two, as well as reducing value drift. 

With regards to your four weird ideas. 

Added value spreading

Added universal euphoria

Interesting list! One important cause area that I think may have been missed is preventing/avoiding stable long-term totalitarianism.

Toby Ord and Bryan Caplan have both written on this - see "The Precipice" for Ord's discussion and "The Totalitarian Threat" in Bostrom's "Global Catastrophic Risks" for Caplan's.

It may be worth adding these to the list as it seems that totalitarianism is fairly widely accepted as a cause candidate. Thanks for the post as well, lots of interesting ideas and links in here!

For readers who may find the following useful:

I very strongly upvoted this because I think it's highly likely to produce efficiencies in conversation on the Forum, to serve as a valuable reference for newcomers to EA, and to act as a catalyst for ongoing conversation.

I would be keen to see this list take on life outside the forum as a standalone website or heavily moderated wiki, or as a page under CEA or somesuch, or at QURI.

I feel it should be pointed out that there already is a similar standalone wiki, causeprioritization.org, and until recently there was another similar website, PriorityWiki, but I think that neither of them has received much traffic.

Thanks! 

Ozzie has been proposing something like that. Initially, an Airtable could be nice for visualization.

Thanks for putting this together, this is great!

We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research.

Can you expand a little bit on what you mean by this and how it might work? I'm not sure what you mean by 'forecasting' in this context. 

On the first day, alexrjl went to Carl Shulman and said: "I have looked at 100 cause candidates, and here are the five I predict have the highest probability of being evaluated favorably by you"

And Carl Shulman looked alexrjl in the eye, and said: "these are all shit, kiddo"

On the seventh day, alexrjl came back and said: "I have read through 1000 cause candidates in the EA Forum, LessWrong, the old Felicifia forum and all of Brian Tomasik's writings. And here are the three I predict have the highest probability of being evaluated favorably by you"

And Carl Shulman looked alexrjl in the eye and said: "David Pearce already came up with your #1 twenty years ago, but on further inspection it was revealed to not be promising. Ideas #2 and #3 are not worth much because of such and such"

On the seventh day of the seventh week, alexrjl came back and said: "I have scraped Wikipedia, Reddit, all books ever written, and otherwise the good half of the internet for keywords related to new cause areas, and came up with 1,000,000 candidates. Here is my top proposal"

And Carl Shulman answered "Mmh, I guess this could be competitive with OpenPhil's last dollar"

At this point, alexrjl attained nirvana. 

Sure. So one straightforward thing one can do is forecast the potential of each idea/evaluate its promisingness, and then just implement the best ideas, or try to convince other people to do so. 

Normally, this would run into incentive problems: if forecasting accuracy isn't evaluated, the incentive is just to make the forecast that would otherwise benefit the forecaster. But if you have a bunch of aligned EAs, that isn't that much of a problem.

Still, one might run into the problem that maybe the forecasters are in fact subtly bad; maybe you suspect that they're missing a bunch of gears about how politics and organizations work. In that case, we can still try to amplify some research process we do trust, like that of a funder or incubator who does their own evaluation. For example, we could get a bunch of forecasters to predict whether, after much more rigorous research, a more rigorous, senior, and expensive evaluator also finds a cause candidate exciting, and then only carry out the expensive evaluation for the ideas forecasted to be the most promising.

Simultaneously, I'm interested in altruistic uses for scalable forecasting, and cause candidates seem like a rich field to experiment on. But, right now, these are just ideas, without concrete plans to follow up on them.

Thanks. I hadn't seen those amplification posts before, seems very interesting!

You could add this post of mine to space colonization: An Informal Review of Space Exploration - EA Forum (effectivealtruism.org).

I think the 'existential risks' category is too broad and some of the things included are dubious. Recommender systems as existential risk? Autonomous weapons? Ideological engineering? 

Finally, I think the categorization of political issues should be heavily reworked, for various reasons. This kind of categorization is much more interpretable and sensible:

  • Electoral politics
  • Domestic policy
    • Housing liberalization
    • Expanding immigration
    • Capitalism
    • ...
  • Political systems
    • Electoral reform
    • Statehood for Puerto Rico
    • ...
  • Foreign policy and international relations
    • Great power competition
    • Nuclear arms control
    • Small wars
    • Democracy promotion
    • Self-determination
    • ...

I wouldn't use the term 'culture war' here, it means something different than 'electoral politics'.

I agree that the categorization scheme for politics isn't that great. But I also think that there is an important difference between "pulling one side of the rope harder" (currently under "culture war", say, putting more resources into the US Senate races in Georgia) and "pulling the rope sideways", say Getting Money Out of Politics and Into Charity [^1].

Note that a categorization scheme which distinguishes between the two doesn't have to take a position on their value. But I do want the categorization scheme to distinguish between the two clusters because I later want to be able to argue that one of them is ~worthless, or at least very unpromising. 

Simultaneously, I think that other political endeavors have been tainted by association with more "pulling the rope harder" kinds of political proposals, and making the distinction explicit makes it more apparent that other kinds of political interventions might be very promising.

Your proposed categorization seems to me to have the potential to obfuscate the difference between topics which are heavily politicized along US partisan lines, and those which are not. For example, I don't like putting electoral reform (i.e., using more approval voting, which would benefit candidates near the center with broad appeal) and statehood for Puerto Rico (which would favor Democrats) in the same category.

I'll think a little bit about how and whether to distinguish between raw categorization schemes (which should presumably be "neutral") and judgment values or discussions (which should presumably be separate). One option would be to have, say, a neutral third party (e.g. Aaron Gertler) choose the categorization scheme. 

Lastly, I wanted to say that although it seems we have strong differences of opinion on this particular topic, I appreciate some of your high quality past work, like Extinguishing or preventing coal seam fires is a potential cause area, Love seems like a high priority, the review of space exploration which you linked, your overview of autonomous weapons, and your various posts on the meat eater problem. 

[^1]: Vote pairing would be in the middle, because it could be used both to trade Democrat <=> third party candidates and Republican <=> third party candidates, with third-party candidates being the ones that benefit the most (which sounds plausibly good). In practice, I have the impression that exchanges have mostly been set up for Democrat <=> third party trades, but if they gain more prominence I'd imagine that Republicans would invest more in their own setups.

Thanks for the comments. Let me clarify about the terminology. What I mean is that there are two kinds of "pulling the rope harder". As I argue here:

The appropriate mindset for political engagement is described in the book Politics Is for Power, which is summarized in this podcast. We need to move past political hobbyism and make real change. Don’t spend so much time reading and sharing things online, following the news and fomenting outrage as a pastime. Prioritize the acquisition of power over clever dunking and purity politics. See yourself as an insider and an agent of change, not an outsider. Instead of simply blaming other people and systems for problems, think first about your own ability to make productive changes in your local environment. Get to know people and build effective political organizations. Implement a long-term political vision.

A key aspect of this is that we cannot be fixated on culture wars. Complaining about the media or SJWs or video game streamers may be emotionally gratifying in the short run but it does nothing to fix the problems with our political system (and it usually doesn't fix the problems with media and SJWs and video game streamers either). It can also drain your time and emotional energy, and it can stir up needless friction with people who agree with you on political policy but disagree on subtle cultural issues. Instead, focus on political power.

To illustrate the point, the person who came up with the idea of 'pulling the rope sideways', Robin Hanson, does indeed refrain from commenting on election choices and most areas of significant public policy, but has nonetheless been quite willing to state opinions on culture war topics like political correctness in academia, sexual inequality, race reparations, and so on.

I think that most people who hear 'culture wars' think of the purity politics and dunking and controversies, but not stuff like voting or showing up to neighborhood zoning meetings.

So even if you keep the same categorization, just change the terminology so it doesn't conflate those who are focused on serious (albeit controversial) questions of policy and power with those who are culture warring. 

Fair enough; I've changed this to "Ideological politics" pending further changes.

  1. Added the Space Exploration Review. Great post, btw, of the kind I'd like to see more of for other speculative or early stage cause candidates.
  2. I agree that the existential risks category is too broad, and that I was probably conflating it with dangers from technological development. Will disambiguate.

Great list! It reminds me of Peter McClusky's "Future of Earning to Give" post showing that there is plenty of room for more funding of high impact projects.

I recommend changing the "climate change" header to something a bit broader (e.g. "environmentalism" or "protecting the natural environment", etc.). It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination. While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.

A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/posts/YEkyuTvachFyE2mqh/trying-to-help-coral-reefs-survive-climate-change-seems

 

With regards to coral reefs, your post is pretty short. In my experience, people are more likely to pay attention to it if you flesh it out a little bit more.

Yeah...  it's not at all my main focus, so I'm hoping to inspire someone else to do that! :) 

Yeah, this makes sense, thanks.

In that context, this seems maybe like just a pathway for reducing long-term-risks from malevolent actors? Or, are you thinking more of Age of Em or something else which Hanson wrote?

Sorry, you're right; the link I provided earlier isn't very relevant (that was the only EA Forum article on WBE I could find). I was thinking something along the lines of what Hanson wrote. Especially the economic and legal issues (this and the last 3 paragraphs in this; there are other issues raised in the same Wiki article as well). Also Bostrom raised significant concerns in Superintelligence, Ch. 2 that if WBE was the path to the first AGI invented, there is significant risk that unfriendly AGI will be created (see the last set of bullet points in this).

Ok, cheers, will add.

One other cause-enabler I'd love to see more research on is donating to (presumably early-stage) for-profits. For all that they have better incentives, it's still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high-value.

It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won't invest in: perhaps because of a surprising lack of traction, perhaps because of predatory pricing by companies with worse products/ethics, or perhaps because of some other unmerited headwind.

Cool! Like the list, like the forecasting project idea.

Tiny suggestion, I think "Air Purifiers Against Pollution" shouldn't go into the Climate Change basket, and instead probably to Global Health & Development.

Thanks! I like the kind comments you leave under my posts; they brighten my day.

Aw, really glad to hear that!

Changed the "Air Purifiers Against Pollution" categorization.

To do:

I would like to see more about 'minor' GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1; we can see this attitude in this very post.

I don't think this is remotely justified. The arguments I've seen are generally of the form 'we'll still be able to salvage enough resources to theoretically recreate any given technology', which doesn't mean we can get anywhere near the economies of scale needed to create global industry on today's scale, let alone that we actually will, given realistic political development. And the industry would need to reach the point where we're a reliably spacefaring civilisation, well beyond today's technology, in order to avoid the usual definition of an existential catastrophe (drastic curtailment of life's potential).

If the chance of recovery from any given backslide is 99%, then there are only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn't that big a deal (Toby Ord's The Precipice puts an order of magnitude or two between the probabilities of many of the existential risks we're typically concerned with).
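To spell out the arithmetic behind that comparison, a minimal sketch using the illustrative 99% figure:

```latex
% Expected badness of a backslide, counting only the non-recovery branch:
\mathbb{E}[\text{badness of backslide}]
  \gtrsim (1 - 0.99) \cdot \text{badness}(\text{extinction})
  = 10^{-2} \cdot \text{badness}(\text{extinction})
```

That is, under these assumptions a 99%-recoverable backslide is about 1% as bad as outright extinction in expectation, before counting any other negative effects.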

Things I would like to see more discussion of in this area:

  • General principles for assessing the probability of reaching interstellar travel given specific backslide parameters and then, with reference to this:
  • Kessler syndrome
  • Solar storm disruption
  • CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed; in his book, Toby details some extreme scenarios in which a lot of CO2 could be released without necessarily causing human extinction via global warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with this scenario; see the sketch after this list)
  • CO2 emissions from fossil fuels and other climate change substantially reducing IQs
  • Various 'normal' concerns: antibiotic resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
  • Other concerns that I don't know of, or that no-one has yet thought of, that might otherwise be dismissed by zealous X-riskers as 'not a big deal'
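On the unbreathable-atmosphere point, here is a minimal back-of-the-envelope sketch (my own illustrative numbers, not the figures from The Precipice). It assumes the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2, and rough physiological thresholds of about 1% CO2 for serious effects and about 10% for rapid lethality:

```python
# Back-of-the-envelope: how much carbon would need to end up in the
# atmosphere before CO2 itself, rather than the warming it causes,
# becomes a direct threat to breathing?
# Assumptions (illustrative, not from The Precipice):
#   - ~2.13 GtC raises atmospheric CO2 by ~1 ppm (standard conversion)
#   - sustained ~1% CO2 (10,000 ppm) causes serious physiological effects,
#     and ~10% (100,000 ppm) is rapidly lethal
GTC_PER_PPM = 2.13
CURRENT_PPM = 420

for label, threshold_ppm in [("seriously harmful (~1%)", 10_000),
                             ("rapidly lethal (~10%)", 100_000)]:
    extra_carbon_gtc = (threshold_ppm - CURRENT_PPM) * GTC_PER_PPM
    print(f"{label}: ~{extra_carbon_gtc:,.0f} GtC added to the atmosphere")

# Prints roughly 20,000 and 210,000 GtC. Conventional fossil fuel reserves
# are on the order of 1,000-3,000 GtC, so only extreme scenarios (burning a
# large share of total resources, e.g. including methane clathrates) come
# within an order of magnitude of the lower threshold.
```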

I agree that I'd like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.

From where I'm standing, the impact of direct, "normal" X-risk work is relatively easy to understand; a 0.01% reduction in the chance of an X-risk is a pretty simple thing to reason about. When you get into more detailed models it can be more difficult to estimate the total importance or impact, even though more detailed models are often better overall. I think there's a decent chance that 10-30 years from now the space will look quite different (in ways similar to those you mention), given more understanding (and propagation of that understanding) of more detailed models.

One issue regarding a Big List is figuring out what specifically should be proposed. I'd encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)

Write a post on which aspect? You mean basically fleshing out the whole comment?

Yes, fleshing out the whole comment, basically.

Can you give a bit more of an explanation about the scoring in the google sheet? E.g. time horizon, readiness, promisingness etc.

I was slightly disappointed to see such low scores for my idea of philosophy in schools (but I guess I should have realised by now that it's not cause X!). I'm not sure I agree with the 'time horizon' being 'very short', though, given that some of the main channels through which I hope the intervention would be good are values spreading (which you rate as 'medium') and moral circle expansion (which you rate as 'long'). The whole point of my post was to argue for this intervention from a longtermist angle, and it was partly in response to 80,000 Hours listing 'broadly promoting positive values' as a potential highest priority. So saying the time horizon is 'very short' is a sign that you didn't engage with the post at all, or (quite possibly!) that I've misunderstood something quite important. If you do have some specific feedback on the idea I'd appreciate it!

Can you give a bit more of an explanation about the scoring in the google sheet?

A post about this is incoming.

With respect to philosophy in schools in particular:

Why I'm not excited about it as a cause area:

  • Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids as part of its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)
  • I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea.
  • I believe that there aren't enough excellent philosophy teachers for it to be implemented at scale.
  • I don't give much credence to the papers you cite replicating at scale.
  • On the above two points, see Khorton's comments in your post.
  • To elaborate a bit on that: there are some things in the class of "philosophy in schools" that scale really well, like, say, CBT. But I expect that "philosophy in schools" would scale like, say, Buddhist meditation (i.e., badly without good teachers).
  • Philosophy seems like a terrible field. It has low epistemic standards. It can't come to conclusions. It has Hegel. There is simply a lot of crap to wade through.
  • Philosophy in schools meshes badly with religion and it's easy for the curriculum to become political.
  • I imagine that teaching utilitarianism at scale in schools is not very feasible.
  • I'd expect EA to lose a political battle about teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly.

Why I categorized it as "very-short":

  • If I think about how philosophy in schools would be implemented (and you can see this in Spain), I imagine it coming about as a result of a campaign promise, and lasting for a term or two (4 or 8 years) until the next political party comes in with its own priorities. In Spain we have had a problem with politicians changing education laws too often.
  • You in fact propose getting into party politics as a way to implement "philosophy in schools".
  • When I try to come up with a 100- or 1,000-year research program to study philosophy in schools, the idea doesn't strike me as much superior to the 10-year version: review the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas for which, e.g., a 100-1,000+ year observatory for global priorities research or unknown existential risks does strike me as more meaningful.
  • One of your arguments was: "One reason why it might be highly impactful for philosophy graduates to teach philosophy is that they may, in many cases, not have a very high-impact alternative." This doesn't strike me as a consideration that will last for generations (though you never know with philosophy graduates).

That said, I can also see why classifying it as longer term would make sense.

OK, thanks for this reply! I think some of this is fair and, as I say, I'm not clinging to this idea as being hugely promising. Some of your comments seem quite personal and possibly contentious, but then again I don't know what the context of the scoring is, so maybe that's sort of the idea at this stage.

A few specific thoughts.

Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids as part of its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)

OK, this seems fairly personal and anecdotal (as I said, maybe this is fine at this stage, but I hope this sort of analysis wouldn't play a huge role in scoring at a later stage).

I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea.

Not sure what point you're making here (I also know this EA by the way).

I believe that there aren't enough excellent philosophy teachers for it to be implemented at scale.

I don't give much credence to the papers you cite replicating at scale.

Perhaps fair! We could always train more teachers though.

Philosophy seems like a terrible field. It has low epistemic standards. It can't come to conclusions. It has Hegel. There is simply a lot of crap to wade through.

Hmm. Well, I at least feel fairly confident that a lot of people will disagree with you here. And any good curriculum designer should leave out the crap. My experience with philosophy has led me to go vegan, engage with EA and give effectively (think Peter Singer-type arguments). I've found it quite important in shaping my views, and I'm quite excited by the field of global priorities research, which is essentially econ plus philosophy.

I imagine that teaching utilitarianism at scale in schools is not very feasible.

If you teach philosophy, you will probably spend at least a little bit of time teaching utilitarianism within that. Not really sure what you're saying here.

I'd expect EA to lose a political battle about teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly.

It's teaching philosophy, not teaching values. In the post I don't suggest we include EA explicitly in the curriculum. In any case, EA is the natural conclusion of a utilitarian philosophy, and I would expect any reasonable philosophy curriculum to include utilitarianism.

If I think about how philosophy in schools would be implemented (and you can see this in Spain), I imagine it coming about as a result of a campaign promise, and lasting for a term or two (4 or 8 years) until the next political party comes in with its own priorities. In Spain we have had a problem with politicians changing education laws too often.

OK, interesting. I didn't really consider that its inclusion might just be overturned by another party. From my personal experience you don't get subjects being dropped very often, so I was hopeful for staying power, but maybe this is a fair criticism.

When I try to come up with a 100- or 1,000-year research program to study philosophy in schools, the idea doesn't strike me as much superior to the 10-year version: review the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas for which, e.g., a 100-1,000+ year observatory for global priorities research or unknown existential risks does strike me as more meaningful.

OK, fine, this (and your later comments) was probably me just not knowing what you meant by 'time horizon'.

OK, this seems fairly personal and anecdotal

Yeah, this is fair. Ideally I'd ask a bunch of people what their subjective promisingness was, and then aggregate over that. I'd have to somehow adjust for the fact that people from EA backgrounds might have gone to excellent universities and schools, and thus their estimate of teacher quality might be much, much higher than average, though.

I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach.

I would go by the available empirical evidence, whilst noting any likely weaknesses in the studies. The weaknesses brought up by Khorton (and which you referenced in your comment) were actually noted in the original empirical review paper, which said the following regarding the P4C process:

  • “Many of the studies could be criticized on grounds of methodological rigour, but the quality and quantity of evidence nevertheless bears favourable comparison with that on many other methods in education.”
  • “It is not possible to assert that any use of the P4C process will always lead to positive outcomes, since implementation integrity may be highly variable. However, a wide range of evidence has been reported suggesting that, given certain conditions, children can gain significantly in measurable terms both academically and socially through this type of interactive process.”
  • “further investigation is needed of wider generalization within and beyond school, and of longer term maintenance of gains”

My overall feeling on scale was therefore that it was 'promising' but still unclear. To be honest, I'm not impressed with just giving a scale rating of 1 based on personal feeling/experience. Your tractability points possibly seem more objective and justified.

I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach

From where I'm sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don't really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements. 

I'm quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members in the community. Scientific studies are often very narrow in scope, don't cover the thing we're really interested in, and often they don't even replicate. 

My guess is that if we were to show your previous blog post, as is, to several senior/respected EAs at OpenPhil/FHI and similar organisations, they'd be similarly skeptical to Nuño here.

All that said, I think there are more easily arguable proposals around yours (or, arguably, modifications of yours). It seems obviously useful to make sure that Effective Altruists have good epistemics and that there are initiatives in place to help teach these. This includes work in philosophy; many EA researchers spend quite a while learning about philosophy.

I think people are already bought into the idea of basically teaching important people how to think better. If larger versions of this could be fleshed out, they seem like they could be substantial cause candidates with potential buy-in.

For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or YouTubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target the people who matter most, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one's epistemic abilities and measuring educational interventions against such tests.

Hey, fair enough. I think overall you and Nuno are right. I did write in my original post that it was all pretty speculative anyway. I regret if I was too defensive.

I think those proposals sound good, though they aim at something different to what I was going for. I was mostly going for a "broadly promote positive values" angle at a societal level, which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I think the latter approach could also be high value.

I can imagine reconsidering, but I don't in principle have anything against using my S1 (System 1, i.e., intuition). Because:

  • It is fast, and I am rating 100+ causes
  • From past experience with forecasting, I basically trust it.
  • It does in fact have useful information. See here for some discussion I basically agree with.

OK, I mean, you can obviously do what you want, and I appreciate that you've got a lot of causes to get through.

I don't place that much stock in S1 when evaluating things as complex as how to do the most good in the world, especially when your S1 leads to comments such as:

  • Philosophy seems like a terrible field - I'd imagine you're in a firm minority here, and when that is the case I'd imagine it's reasonable to question your S1 and investigate further. Perhaps you should write a critique of philosophy on the forum (I'd certainly be interested to read it). There are people who have argued that philosophy does make progress, and that this may not be obvious because philosophical progress tends to spawn other disciplines that then don't call themselves philosophy. See here for a write-up of philosophical success stories. In any case, what I really care about in a philosophical education is teaching people how to think (e.g. Socratic questioning, Bayesian updating, etc.), not getting people to become philosophers.
  • I also studied philosophy at university and overall came away with a mostly negative impression - I mean, what about all the people who don't come away with a negative impression? They seem fairly abundant in EA.
  • I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea - I still don't get this comment to be honest. In my opinion the EA you speak of isn't doing something similar to what I propose, and even if they were, why would the fact that they don't see philosophy as central to what they're doing mean that teaching philosophy would obviously fail?

Anyway, I won't labour the point much more. 43 karma on my philosophy in schools post is a sign it isn't going to be revolutionary in EA, and I've accepted that. So it's not that I want you to rate it highly; it's just that I'm sceptical of the process by which you rated it.

Let me try to translate my thoughts into something more legible, written in a more formal tone.

  • From my experience observing this in Spain, the philosophy curriculum taught in schools is a political compromise, in which religion plays an important role. Further, if utilitarianism is even taught (it wasn't in my high school philosophy class), it can be taught badly by proponents of some other competing theory. I expect this to happen, because most people (and, in expectation, most teachers) aren't utilitarian.
  • Philosophy doesn't have high epistemic standards, as evidenced by the fact that it can't come to a conclusion about "who is right". Some salient examples of philosophers who continue to be taught and given significant attention despite having few redeeming qualities are Plotinus, Anaximenes, or Hegel. Although it can be argued that they do have redeeming qualities (Anaximenes was an early proponent of proto-scientific thinking, and Hegel has some interesting insights about history and has shaped further thought), paying too much attention to these philosophers would be the equivalent of coming to deeply understand phlogiston or aether theory when studying physics. I understand that grading the healthiness of a field can be counterintuitive or weird, but to the extent that a field can be sick, I think that philosophy ranks near the bottom (in contrast, development economics of the sort where you do an RCT to find out if you're right would be near the top).
  • Relatedly, when teaching philosophy, too much attention is usually given to the history of philosophy. I agree that an ideal philosophy course which promoted "critical thinking" would be beneficial, but I don't think it would be feasible to implement, because: a) it would have to be the result of a tricky political compromise and be very careful about criticising whoever is in power, and b) I don't think there are enough good teachers who could pull it off.
  • Note that I'm not saying that philosophy can't produce success stories, or great philosophers, like Parfit, David Pearce, Peter Singer, or arguably Bostrom (though note that all examples except Singer are pretty mathematical). I'm saying that, most of the time, the average philosophy class is pretty mediocre.
  • On this note, I believe that my own (negative) experience with philosophy in schools is more representative than yours. Google brings up that you went to Cambridge and UCL, so I posit that you (and many other EAs who have gone to top universities) have an inflated sense of how good teachers are (because you have been exposed to smart and at least somewhat capable teachers, who had the pleasure of teaching top students). In contrast, I was exposed to average teachers who sometimes tried to do the best they could, but who often didn't really have great teaching skills.

tl;dr/Notes:

I have some models of the world which lead me to think that the idea was unpromising. Some of them clearly have a subjective component. Still, I'm using the same "muscles" as when forecasting, and I trust that those muscles will usually produce sensible conclusions.

It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question "will a charity be incubated to work on philosophy in schools?" (surprise reveal: this is similar to what I was doing all along), I imagine I'd give it a very low probability, but that my teammates would give it a slightly higher one. After discussion, we'd both probably move towards the center, and thus be more accurate.

Note that if we model my subjective promisingness as true promisingness + an error term, then if we pick the candidate idea at the very bottom of my list (in this case philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a "very unpromising" rating), we'd expect it both to be unpromising (per your own view) and to have a large negative error term (I clearly don't view philosophy very favorably).
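To illustrate that selection effect, here is a minimal simulation (a sketch with made-up standard-normal draws, not my actual ratings or scales):

```python
import random

random.seed(0)

# Model (as above): subjective promisingness = true promisingness + error.
# Both drawn from standard normals here; purely illustrative.
candidates = []
for _ in range(100):  # roughly the number of cause candidates in the list
    true_p = random.gauss(0, 1)
    error = random.gauss(0, 1)
    candidates.append((true_p + error, true_p, error))

# The idea at the very bottom of the subjective ranking
# (tuples compare on their first element, the subjective score):
subjective, true_p, error = min(candidates)
print(f"bottom-rated: subjective={subjective:.2f}, "
      f"true={true_p:.2f}, error={error:.2f}")
```

Run repeatedly, the bottom-ranked candidate's error term is noticeably negative on average: the worst-looking idea tends to be both genuinely unpromising and rated somewhat worse than it really is.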

Thanks for the clarifications in your previous two comments. Helpful to get more of an insight into your thought process.

Just a few comments:

  • I strongly doubt that a charity to work on philosophy in schools would be helpful, and I don't like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, and more research into values-based education (to name a few).
  • This is a whole separate conversation that I'm not sure we have to get into too deeply right now (I think I'd rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example, see Eva Vivalt's research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.

I'm hoping we're nearing a good enough understanding of each other's views that we don't need to keep discussing for much longer, but I'm happy to continue a bit if helpful.

I am writing a post about "better/healthier diets", simply due to their effect on human health. I hope it will be out in the next few weeks; I am waiting for some feedback from experts on this topic.

I wish we could finally strike cryonics off the list. The most popular answers in the linked 'Is there a hedonistic utilitarian case for Cryonics? (Discuss)' essay seem to be essentially 'no'.

The claim that 'it might also divert money from wealthy people who would otherwise spend it on more selfish things' gives no reason to suppose that spending money on yourself in this context is somehow unselfish. 

As for 'Further, cryonics might help people take long-term risks more seriously': sure. So might giving people better health, or, say, funding long-term risk outreach. At least as plausibly, to me, constantly telling people that they don't fear death enough and should sign up for cryonics seems likely to make people fear death more, which seems like a pretty miserable thing to inflict on them.

I just don't see any positive case for this to be on the list. It seems to be a vestige of a cultural habit among Less Wrongers that has no place in the EA world.

The goal of this list was to be comprehensive, not opinionated. We're thinking about ways of doing ranking/evaluation (particularly with forecasting) going forward. I'd also encourage others to give it their own go; it's a tricky problem.

One reason to lean towards comprehensiveness is to make it more evident which causes are quite bad. I'm sure, given the number, that many of these causes are quite poor. Hopefully systematic analysis would both help identify these and make a strong case for their placement.

Then I would suggest being clearer about what it's comprehensive of, i.e., by having clear criteria for inclusion.

The Cause Candidates tag has these criteria. You'll note that Cryonics qualifies, as would, e.g., each of kbog's political proposals, even though I vehemently disagree with them. I think the case for this is similar to the case in Rule Thinkers In, Not Out.

Can you spell both of these points out for me? Maybe I'm looking in the wrong place, but I don't see anything in that tag description that recommends criteria for cause candidates.

As for Scott's post, I don't see anything more than a superficial analogy. His argument is something like 'the weight by which we improve our estimation of someone for having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea'. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?

Right, the criteria in the tag are almost maximally inclusive ("posts which specifically suggest, consider or present a cause area, cause, or intervention. This is independent of the quality of the suggestion, the community consensus about it, or the level of specificity"). This is because I want to distinguish between the gathering step and the evaluation step. I happen to agree that cryonics doesn't feel that promising right now, but I'd still include it because some evaluation processes might judge it to be valuable after all. Incidentally, this has happened to me before: seeing an idea which struck me as really weird and then later coming to appreciate it (fish welfare).

Per Scott Alexander's post, considering the N least promising cause candidates in my list would be like a box which has a low chance of producing a really good idea. It will fail most of the time, but will produce good ideas otherwise.

Also, cryonics has been discussed in the context of EA; one just has to follow the links in the post: