By longtermism I mean the view that "the most important determinant of the value of our actions today is how those actions affect the very long-run future."
I want to clarify my thinking about longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for it.
I'm doing a literature search, but because this is primarily an EA concept that I'm familiar with from within EA, I mostly know the work of advocates of this position (e.g. Nick Beckstead). I'd also like to understand the leading challenges and critiques of this position (if any). I know of some from within the EA community (Kaufmann), but not of the state of the debate in academic work or outside the EA community.
Thanks!
[This is not primarily a criticism of your comment; I think you probably agree with a lot of what I say here.]
Yes, but in addition your view in normative ethics needs to have suitable features, such as:
(The examples are meant to illustrate the point, not to suggest that they are plausible views.)
I'm concerned that some presentations of "non-consequentialist" reasons for longtermism sweep under the rug an important distinction: between the actual longtermist claim that improving the long-term future is of particular concern relative to other goals, and the weaker claim that improving or preserving the long-term future is merely one ethical consideration among many, with how these considerations trade off against each other left underdetermined.
For example: sure, if we don't prevent extinction, we act uncooperatively toward previous generations because we frustrate their 'grand project of humanity'. That might be a good, non-consequentialist reason to prevent extinction. But without specifying the full normative view, it is unclear how much weight this reason should get relative to our other responsibilities.
Note that I actually do think that something like longtermist practical priorities follows from many plausible normative views, including non-consequentialist ones, especially if one believes there is a significant risk of human extinction this century. But the space of such views is vast, and which views are and aren't plausible is contentious. So I think it's important not to present longtermism as an obvious slam dunk, or to consider only (arguably implausible) objections that completely deny the ethical relevance of the long-term future.