A common criticism of EA/rationalist discussion is that we reinvent the wheel - specifically, that concepts which become part of the community have close analogies that have been better studied in academic literature. Or in some cases, that we fixate on some particular academically sourced notion to the exclusion of many similar or competing theories.
I think we can simultaneously test and address this purported problem by crowdsourcing an open database mapping EA concepts to related academic concepts, and in particular citing papers that investigate the latter. In this thread I propose the following format:
- 'Answers' name an EA or rat concept that you suspect might have, or know has, mappings to a broader set of academic literature.
- Replies to answers cite at least one academic work (or a good Wikipedia article) describing a related phenomenon or concept. In some cases, an EA/rat concept might be an amalgam of multiple other concepts, so please give as many replies to answers as seem appropriate.
- Feel free but not obliged to add context to replies (as long as they link a good source).
- Feel free to reply to your own answer.
I'll add any responses this thread gets to a commentable Google sheet (which I can keep updating), and share that sheet afterwards. Hopefully this will be a valuable resource both for fans of effective altruism to learn more about their areas of interest, and for critics who assert that EA/rat reinvents wheels, letting them prove instances of their case (where an answer gets convincing replies) or see them refuted (where an answer gets no, or only loosely related, replies).
I'll seed the discussion with a handful of answers of my own, for most of which I have at best tentative mappings.
[ETA: I would ask people not to downvote answers to this thread. If the system I proposed is functioning accurately, then every answer is a service to the community, whether it ends up being mapped (and therefore validated as an instance of wheel-reinvention) or not mapped (and therefore refuted). If you think this is a bad system, then please downvote the top-level post, rather than disincentivising the people who are trying to make it work.]
I happen to strongly agree that the moral discount rate should be 0, but a) it's still worth acknowledging that as an assumption, and b) I think it's easy for both sides to conflate it with risk-based discounting. It seems like you're de facto doing so when you say 'Under that assumption, the work done is indeed very different in what it accomplishes' - this is only true if risk-based discounting is also very low. See e.g. Thorstad's Existential Risk Pessimism and the Time of Perils and Mistakes in the Moral Mathematics of Existential Risk for formalisms of why it might not be - I don't agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work is based on little more than Pascalian handwaving.
I'm confused by your paragraph about insurance. To clarify:
Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that's going to be a speculative argument about which people might reasonably differ. To my knowledge, no one has made the positive case in depth, and the few people who've looked seriously into our post-catastrophe prospects seem to be substantially more pessimistic than those who haven't. See e.g.: