A common criticism of EA/rationalist discussion is that we reinvent the wheel: concepts that become part of the community's vocabulary often have close analogues that have been studied more thoroughly in the academic literature. Or, in some cases, that we fixate on one particular academically sourced notion to the exclusion of many similar or competing theories.
I think we can simultaneously test and address this purported problem by crowdsourcing an open database mapping EA concepts to related academic concepts, and in particular citing papers that investigate the latter. In this thread I propose the following format:
- 'Answers' name an EA or rat concept that you either suspect might have, or know to have, mappings to a broader body of academic literature.
- Replies to answers cite at least one academic work (or a good Wikipedia article) describing a related phenomenon or concept. In some cases, an EA/rat concept might be an amalgam of multiple other concepts, so please give as many replies to answers as seem appropriate.
- Feel free, but not obliged, to add context to replies (as long as they still link a good source).
- Feel free to reply to your own answer.
I'll add any responses this thread gets to a commentable Google sheet (which I can keep updating), and share that sheet afterwards. Hopefully this will be a valuable resource both for fans of effective altruism wanting to learn more about their areas of interest, and for critics who assert that EA/rat reinvents the wheel, letting them prove instances of their case (where an answer gets convincing replies) or see those instances refuted (where an answer gets no, or only loosely related, replies).
I'll seed the discussion with a handful of answers of my own, for most of which I have at best tentative mappings.
[ETA: I would ask people not to downvote answers to this thread. If the system I proposed is functioning as intended, then every answer is a service to the community, whether it ends up being mapped (and therefore validated as an instance of people reinventing the wheel) or not mapped (and therefore refuted). If you think this is a bad system, then please downvote the top-level post rather than disincentivising the people who are trying to make it work.]
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we're almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn't be very high. Under those assumptions, the work done is indeed very different in what it accomplishes.
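To make the discounting point concrete, here's a toy model of my own (not something from the original discussion; the population, recovery time, horizon, and discount rates are all arbitrary illustrative assumptions). It compares the discounted welfare lost when a catastrophe kills 99% of people and civilisation recovers after roughly a century, against the loss when everyone dies, under a high and a near-zero annual discount rate.

```python
# Toy sketch: how the relative badness of near-extinction vs extinction
# depends on the pure time discount rate. All parameters are assumptions
# chosen purely for illustration.

def discounted_sum(annual_value, rate, start_year, end_year):
    """Sum annual_value / (1 + rate)**t for t in [start_year, end_year)."""
    return sum(annual_value / (1 + rate) ** t for t in range(start_year, end_year))

ANNUAL_WELFARE = 1.0     # welfare of the world per year, arbitrary units
RECOVERY_YEARS = 100     # assumed time for survivors to rebuild
HORIZON_YEARS = 10_000   # assumed length of the future we count

for rate in (0.05, 0.001):
    # Near-extinction: lose 99% of welfare during the recovery period, then back to normal.
    near_loss = discounted_sum(0.99 * ANNUAL_WELFARE, rate, 0, RECOVERY_YEARS)
    # Extinction: lose all welfare from now until the end of the horizon.
    ext_loss = discounted_sum(ANNUAL_WELFARE, rate, 0, HORIZON_YEARS)
    print(f"rate={rate}: near-extinction loss={near_loss:.1f}, "
          f"extinction loss={ext_loss:.1f}, ratio={ext_loss / near_loss:.1f}x")
```

With the 5% rate the two losses come out nearly identical, because the far future is discounted to almost nothing; at 0.1% extinction is roughly an order of magnitude worse; and with no discounting at all the gap grows without bound as the horizon lengthens.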
I'm skeptical that the insurance industry isn't bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don't agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I'm not aware of, but I doubt it.)