By longtermism I mean the view that "the most important determinant of the value of our actions today is how those actions affect the very long-run future."
I want to clarify my thoughts on longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for it.
I'm doing a literature search, but because this is primarily an EA concept that I know from within EA, I'm mostly familiar with the work of advocates of the position (e.g. Nick Beckstead). I'd also like to understand the leading challenges and critiques of this position, if any. I know of some from within the EA community (e.g. Kaufmann), but not of where the position stands in academic work or outside the EA community.
Thanks!
Strongly agree - I think it's really important to disentangle longtermism, existential risk, and AI safety from one another. I might suggest writing separate posts.
I'd also be keen to see more focus on which arguments seem best, rather than having such a long list (including many that have a strong counter, or are no longer supported by the people who first suggested them), though I appreciate that might take longer to write. A quick fix would be to link to counterarguments where they exist.