Longtermism's most controversial premise seems to be the assumption that we can predict (better than chance) -- and are not clueless about -- the overall impact of our actions on the long-term future. Many have defended this epistemic premise of longtermism by arguing that we, at the very least, can and should help humanity reach its potential through X-risk reduction and trajectory changes toward "bigger futures".[1] Call this optimistic longtermism.[2] I suggest that whether optimistic longtermism escapes cluelessness depends on whether we should trust all the decisive judgment calls underlying our "best guesses" on the question. I then point to the possibility that the judgment calls (i.e., intuitions) backing optimistic longtermism may be better explained by evolutionary pressures toward pro-natalist beliefs than by a process that gives us good reason to believe these judgment calls are truth-tracking. This uncovers an evolutionary debunking argument against optimistic longtermism, which I comment on.
Note: This post is very short and gathers only preliminary thoughts. I'm considering drafting an academic paper on the topic and would be curious to see people's reactions to these preliminary thoughts before deciding what key points such a paper should address.
1. Optimistic longtermism cannot be endorsed without judgment calls
Longtermists often consider X-risk reduction to be unusually robust and to circumvent cluelessness worries.[1] But while X-risk reduction certainly has a massive, lasting impact on the far future, judging this impact to be positive requires precisely estimating numerous parameters[3] (including considerations related to aliens, acausal reasoning, and crucial considerations we may be missing) and weighing them against one another.
Once we consider all the arguments pointing in favor of X-risk reduction and those pointing against it, given all these relevant factors, it is not obvious that all rational agents should converge on a determinate >50% credence in the proposition "reducing X-risks is good in the long run". Two very smart experts on longtermism and its epistemic challenge, who have both considered all the arguments, could very well end up disagreeing on the long-term value of X-risk reduction based on nothing but different judgment calls (i.e., intuitions that cannot themselves be supported with arguments) when weighing considerations against one another. There is no evidently correct way to weigh these.[4]
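To illustrate (with a deliberately toy sketch of my own; the two-parameter form and the numbers below are purely hypothetical, not a model anyone in the cited literature defends), suppose we summarize the long-run value of reducing extinction risk as

```latex
V \;=\; p \cdot B \;-\; (1 - p) \cdot C
```

where p is one's credence that a surviving humanity realizes a broadly good future, B is the value of that future, and C is the disvalue of the alternative. An expert whose judgment calls yield p = 0.7 and C = 0.5B gets V = 0.55B > 0; another whose judgment calls yield p = 0.6 and C = 2B gets V = -0.2B < 0. Neither set of values is forced by argument, and the real disagreement involves far more parameters (aliens, acausal reasoning, crucial considerations we may be missing) than this sketch includes.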
So how do we know whose judgment calls are correct? In fact, how do we know if anyone's judgment calls track the truth better than random? I will not try to answer this. However, I will consider judgment calls leading to optimistic longtermism, specifically, and hopefully start shedding some (dim) light on whether we should trust these.
2. Are the judgment calls backing optimistic longtermism suspicious?
To know whether our judgment calls about the long-term value of X-risk reduction are informative to any degree, we ought to think about where they come from. In particular, we must ask whether they come from:
- A) a source that makes them reliable (such as an evolutionary pressure selecting for correct longtermist beliefs, whether directly or indirectly); or
- B) a source that makes them unreliable (such as an evolutionary pressure toward pro-natalist beliefs).
So do the judgment calls that back optimistic longtermism come primarily from A or from B? I hope to bring up considerations that will help answer this question in a future essay, but see An Evolutionary Argument undermining Longtermist thinking? for some relevant thoughts of mine. My main goal here is simply to point out that optimistic longtermists must defend A over B for their position to be tenable.
Concluding thoughts
- Assuming B is on the table, there is a valid -- although not necessarily strong, depending on how plausible B is relative to A -- evolutionary debunking argument to be made against optimistic longtermism. It would not be a genetic fallacy as long as one agrees that there necessarily are decisive judgment calls involved in forming optimistic-longtermist beliefs (as I tersely argue in section 1).[5] If it is these opaque judgment calls that ultimately dictate whether one accepts or rejects optimistic longtermism, assessing their reliability seems in fact much more relevant than discussing object-level arguments for and against optimistic longtermism, which have all been inconclusive so far (in the sense that none of them is a slam-dunk argument that eliminates the need to weigh reasons for and against optimistic longtermism with judgment calls).[6]
- It is worth noting that such an evolutionary debunking argument does not back "pessimistic longtermism", at least not on its own. It may very well be that we should remain agnostic on whether X-risk reduction and trajectory changes toward "bigger futures" are good. A reason to think that believing X is unwarranted is not a reason to believe anti-X. I decided to assail optimistic longtermism specifically because it is more popular.
- While I find the evolutionary debunking argument this post presents potentially very relevant and important in theory, I'm pessimistic about its potential to generate good philosophical discussions in practice. If one wants to convince an optimistic longtermist that the judgment calls that led them to this position may be unreliable, there may be more promising ways to do so (e.g., presenting them with crucial considerations they haven't thought of, which make them strongly update and question their ability to make good judgment calls given all the crucial considerations they might still be missing).
References
Chappell, Richard Yetter. 2017. “Knowing What Matters.” In Does Anything Really Matter?: Essays on Parfit on Objectivity, edited by Peter Singer, 149–67. Oxford University Press.
Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.
Greaves, Hilary, and William MacAskill. 2021. “The Case for Strong Longtermism.” https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/.
Greaves, Hilary, and Christian Tarsney. 2023. “Minimal and Expansive Longtermism.” https://globalprioritiesinstitute.org/minimal-and-expansive-longtermism-hilary-greaves-and-christian-tarsney/.
Kahane, Guy. 2011. “Evolutionary Debunking Arguments.” Noûs 45 (1): 103–25. https://doi.org/10.1111/j.1468-0068.2010.00770.x.
MacAskill, William. 2022. What We Owe the Future. New York: Basic Books.
Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.
Rulli, Tina. 2024. “Effective Altruists Need Not Be Pronatalist Longtermists.” Public Affairs Quarterly 38 (1): 22–44. https://doi.org/10.5406/21520542.38.1.03.
Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.
Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.
Thorstad, David, and Andreas Mogensen. 2020. “Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making.” https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.
Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.
Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.
- ^
See, e.g., Thorstad & Mogensen 2020; Greaves & MacAskill 2021, §4, §7; MacAskill 2022, Chapters 1, 2, 9; Tarsney 2023; Greaves & Tarsney 2023.
- ^
Tina Rulli (2024) talks about "pro-natalist longtermism", referring to something equivalent. (I wonder whether "optimistic longtermism" actually is a better term -- see this comment).
- ^
The same parameters apply to evaluating trajectory changes toward "bigger futures". Also, I'm assuming that parameters we might not be able to estimate do not "cancel out".
- ^
See Greaves 2016, §V; Yim 2019; Thorstad & Mogensen 2020; Mogensen 2021; Williamson 2022, Chapter 1; Tarsney et al. 2024, §3 for analogous points applied to causes other than X-risk reduction.
- ^
This means the argument discussed in the present post is immune to criticisms of evolutionary debunking arguments in other contexts, such as Chappell's (2017).
- ^
Commenting on an analogous issue, Guy Kahane (2011) writes: "It is notoriously hard to resolve disagreements about the supposed intrinsic value or moral significance of certain considerations -- to resolve differences in intuition. And we saw that a belief's aetiology makes most difference for justification precisely in such cases, when reasons have run out. Debunking arguments thus offer one powerful way of moving such disagreements forward".