[ Question ]

Are there lists of causes (that seemed promising but are) known to be ineffective?

by Maxime Perrigault · 1 min read · 8th Jul 2020 · 3 comments



Within the EA community, the question "what are the most important, most effective causes?" is a crucial topic. There are even lists of the most important ones, so people can focus their efforts on them.

I assume that, to build these lists, a lot of causes have been studied and compared. In the end, only the most neglected, important, and tractable ones have been chosen; the others have been discarded.

In my opinion, knowing which causes are the most effective is a must, but knowing which causes have been discarded is also important. Here are a couple of reasons why:

  • If someone decides to use their time to search for new promising areas, it would be a waste of time to explore areas that have already been discarded.
  • For someone discovering EA (such as myself), some of the most important problems are often counter-intuitive, so having explanations of these problems helps build a better understanding of the EA way of thinking. I believe that understanding which problems are ineffective, and why, would be an equally interesting route into that way of thinking.

So here is my question: Are there lists of causes (that seemed promising but are) known to be ineffective?


2 Answers

This seems to me like a good question/a good idea.

Some quick thoughts:

  • I can't think of such a list (at least, off the top of my head).
  • There was a very related comment thread on a recent post from 80,000 Hours. I'd recommend checking that out. (It doesn't provide the sort of list you're after, but touches on some reasons for and against making such a list.)
    • I've now also commented a link to this question from that thread, to tie these conversations together.
  • I'd suggest avoiding saying "known to be ineffective" (or "known to be low-priority", or whatever). I think we'd at best create a list of causes we have reason to be fairly confident are probably low-priority. More likely, we'd just have a list of causes we have some reason to believe are low-priority, but not much confidence, because once they started to seem low-priority we (understandably) stopped looking into them.
    • To compress that into something more catchy, we could maybe say "a list of causes that were looked into, but that seem to be low-priority". Or even just "a list of causes that seem to be low-priority".
  • This sort of list could be generated not only for causes, but also for interventions, charities, and/or career paths.
    • E.g., I imagine looking through some of the "shallow reviews" from GiveWell and Charity Entrepreneurship could help one create lists of charities and interventions that were de-prioritised for specific reasons, and that thus may not be worth looking into in future.

In an old post, Michael Dickens writes:

The closest thing we can make to a hedonium shockwave with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats, because rats are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and we have a reasonable idea of how to make them happy.
[...]
Thus creating 1 rat QALY costs $120 per year, which is $240 per human QALY per year.
[...]
This is just a rough back-of-the-envelope calculation so it should not be taken literally, but I’m still surprised by how cost-inefficient this looks. I expected rat farms to be highly cost-effective based on the fact that most people don’t care about rats, and generally the less people care about some group, the easier it is to help that group. (It’s easier to help developing-world humans than developed-world humans, and easier still to help factory-farmed animals.) Again, I could be completely wrong about these calculations, but rat farms look less promising than I had expected.

I think this is a good example of something seeming like a plausible idea for making the world better, but which turned out to seem pretty ineffective.
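The arithmetic in the quoted excerpt can be sketched out explicitly. Note that the $120-per-rat-year figure comes from the quote, but the rat-to-human QALY conversion factor of 0.5 is inferred from the quoted result ($120 per rat QALY vs. $240 per human QALY), not stated directly, so treat it as an assumption:

```python
# Back-of-envelope sketch of the rat-farm calculation quoted above.
# Quoted figure: ~$120 per rat per year of well-cared-for life.
# Assumption: one rat-year of happy life = 1 rat QALY, and a rat QALY
# counts for half a human QALY (inferred from the quote's $120 -> $240 step).

cost_per_rat_year = 120.0       # USD per rat per year (from the quote)
rat_qalys_per_rat_year = 1.0    # assumed: one rat QALY per happy rat-year
human_qalys_per_rat_qaly = 0.5  # assumed conversion, inferred from the quote

cost_per_rat_qaly = cost_per_rat_year / rat_qalys_per_rat_year
cost_per_human_qaly = cost_per_rat_qaly / human_qalys_per_rat_qaly

print(cost_per_rat_qaly)    # 120.0
print(cost_per_human_qaly)  # 240.0
```

For comparison, this is far above the roughly $100 per human-QALY-equivalent range often cited for top global health charities, which is why the quote concludes that rat farms look less promising than expected.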