By “longtermism” I mean the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future.
I want to clarify my thoughts around longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for the idea.
I'm doing a literature search, but because this is primarily an EA concept that I know from within EA, I'm mostly familiar with work by advocates of the position (e.g. Nick Beckstead). I'd also like to understand the leading challenges and critiques of this position, if any. I know of some from within the EA community (Kaufmann), but not of what the position looks like in academic work or outside the EA community.
Thanks!
“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.
“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking, though it has a narrower focus.
Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution, but a bit tangential to your question.
My essay on consequentialist cluelessness, “What consequences?”, is also about this.