By longtermism I mean the view that "the most important determinant of the value of our actions today is how those actions affect the very long-run future."
I want to clarify my thoughts on longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for it.
I'm doing a literature search, but because this is primarily an EA concept that I know from within EA, I'm mostly familiar with the work of advocates of the position (e.g. Nick Beckstead). I'd like to understand the leading challenges and critiques of this position as well (if any). I know of some from within the EA community (Kaufmann), but not what the state of the discussion is in academic work or outside the EA community.
Thanks!
Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:
"The term "longtermism" is misleading because in practice "longtermism" means "concern over short AI timelines", and in fact many "longtermists" are concerned with events on a much shorter time scale than the rest of EA."
I thought that was a surprising and interesting argument, though I don't recall who initially made it. Does anyone remember?
This sounds like a misunderstanding to me. Longtermists who are concerned with short AI timelines are concerned with them because of AI's long-lasting influence on the far future.