Longtermism's most controversial premise seems to be the assumption that we can predict (better than chance), and are not clueless about, the overall long-term impact of our actions. Many have defended this epistemic premise of longtermism by arguing that, at the very least, we can and should help humanity reach its potential through X-risk reduction and trajectory changes toward "bigger futures".[1] Call this optimistic longtermism.[2] I suggest that whether optimistic longtermism escapes cluelessness depends on whether we should trust the decisive judgment calls underlying our "best guesses" on the question. I then point to the possibility that the judgment calls (i.e., intuitions) backing optimistic longtermism may be better explained by evolutionary pressures toward pro-natalist beliefs than by any process that would give us good reason to believe these judgment calls are truth-tracking. This uncovers an evolutionary debunking argument against optimistic longtermism, which I comment on.
Note: This post is very short and gathers only preliminary thoughts. I'm considering drafting an academic paper on the topic and would be curious to see people's reactions before deciding what key points such a paper should address.
1. Optimistic longtermism cannot be endorsed without judgment calls
Longtermists often consider X-risk reduction to be unusually robust and to circumvent cluelessness worries.[1] But while X-risk reduction certainly has a massive, lasting impact on the far future, judging this impact to be positive requires precisely estimating numerous parameters[3] (including considerations related to aliens, acausal reasoning, and crucial considerations we may be missing) and weighing them against one another.
Once we consider all the arguments pointing in favor of X-risk reduction and all those pointing against it, given all these relevant factors, it is not obvious that all rational agents should converge on a determinate >50% credence in the proposition "reducing X-risks is good in the long run". Two very smart experts on longtermism and its epistemic challenge, both of whom have considered all the arguments, could very well end up disagreeing on the long-term value of X-risk reduction based on nothing but different judgment calls (i.e., intuitions that cannot themselves be supported by arguments) made when weighing considerations against one another. There is no evidently correct way to weigh these.[4]
So how do we know whose judgment calls are correct? In fact, how do we know whether anyone's judgment calls track the truth better than chance? I will not try to answer this here. However, I will consider the judgment calls leading to optimistic longtermism specifically, hopefully shedding some (dim) light on whether we should trust them.
2. Are the judgment calls backing optimistic longtermism suspicious?
To know whether our judgment calls about the long-term value of X-risk reduction are informative to any degree, we ought to think about where they come from. In particular, we must ask whether they come from:
- A) a source that makes them reliable (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly); or
- B) a source that makes them unreliable (such as an evolutionary pressure toward pro-natalist beliefs).
So do the judgment calls that back optimistic longtermism come primarily from A or from B? I hope to bring up considerations that help answer this question in a future essay, but see An Evolutionary Argument undermining Longtermist thinking? for some of my thoughts relevant to the topic. My main goal here is simply to point out that optimistic longtermists must defend A over B for their position to be tenable.
3. Concluding thoughts
- Assuming B is on the table, there is a valid, although not necessarily strong (depending on how plausible B is relative to A), evolutionary debunking argument to be made against optimistic longtermism. It does not commit a genetic fallacy as long as one agrees that decisive judgment calls are necessarily involved in forming optimistic-longtermist beliefs (as I tersely argue in section 1).[5] If these opaque judgment calls are what dictate whether one accepts or rejects optimistic longtermism at the end of the day, assessing their reliability seems in fact much more relevant than discussing object-level arguments for and against optimistic longtermism, which have all been inconclusive so far (in the sense that none of them is a slam-dunk argument that eliminates the need to weigh reasons for and against optimistic longtermism with judgment calls).[6]
- It is worth noting that such an evolutionary debunking argument does not, on its own, support "pessimistic longtermism". It may very well be that we should remain agnostic on whether X-risk reduction and trajectory changes toward "bigger futures" are good. A reason to think that believing X is unwarranted is not a reason to believe anti-X. I decided to assail optimistic longtermism specifically because it is the more popular position.
- While I find the evolutionary debunking argument this post presents plausibly very relevant and important in theory, I'm pessimistic about its potential to generate good philosophical discussion in practice. If one wants to convince an optimistic longtermist that the judgment calls that led them to their position may be unreliable, there may be more promising approaches (e.g., presenting them with crucial considerations they haven't thought of, which make them strongly update and question their ability to make good judgment calls given all the crucial considerations they might still be missing).
References
Chappell, Richard Yetter. 2017. “Knowing What Matters.” In Does Anything Really Matter?: Essays on Parfit on Objectivity, edited by Peter Singer, 149–67. Oxford University Press.
Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.
Greaves, Hilary, and William MacAskill. 2021. “The Case for Strong Longtermism.” https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/.
Greaves, Hilary, and Christian Tarsney. 2023. “Minimal and Expansive Longtermism.” https://globalprioritiesinstitute.org/minimal-and-expansive-longtermism-hilary-greaves-and-christian-tarsney/.
Kahane, Guy. 2011. “Evolutionary Debunking Arguments.” Noûs 45 (1): 103–25. https://doi.org/10.1111/j.1468-0068.2010.00770.x.
MacAskill, William. 2022. What We Owe the Future. New York: Basic Books.
Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.
Rulli, Tina. 2024. “Effective Altruists Need Not Be Pronatalist Longtermists.” Public Affairs Quarterly 38 (1): 22–44. https://doi.org/10.5406/21520542.38.1.03.
Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.
Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.
Thorstad, David, and Andreas Mogensen. 2020. “Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making.” https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.
Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.
Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.
- ^
See, e.g., Thorstad & Mogensen 2020; Greaves & MacAskill 2021, §4, §7; MacAskill 2022, Chapters 1, 2, 9; Tarsney 2023; Greaves & Tarsney 2023.
- ^
Tina Rulli (2024) talks about "pro-natalist longtermism", referring to something equivalent. (I wonder whether "optimistic longtermism" actually is a better term -- see this comment).
- ^
The same parameters apply to evaluating trajectory changes toward "bigger futures". Also, I'm assuming that parameters we might not be able to estimate do not "cancel out".
- ^
See Greaves 2016, §V; Yim 2019; Thorstad & Mogensen 2020; Mogensen 2021; Williamson 2022, Chapter 1; Tarsney et al. 2024, §3 for analogous points applied to causes other than X-risk reduction.
- ^
This means the argument the present post discusses is immune to critiques of evolutionary debunking arguments in other contexts, such as Chappell's (2017).
- ^
Commenting on an analogous issue, Guy Kahane (2011) writes: "It is notoriously hard to resolve disagreements about the supposed intrinsic value or moral significance of certain considerations -- to resolve differences in intuition. And we saw that a belief's aetiology makes most difference for justification precisely in such cases, when reasons have run out. Debunking arguments thus offer one powerful way of moving such disagreements forward".