These are some initial reflections, written quickly and mostly to elicit discussion (before I potentially dive deeper into the topic).
Abstract/Introduction
Longtermists aim to have an overall positive impact on the long-term future, which requires them to form beliefs about what actions would do more good than harm in this regard. One could question the reliability of such beliefs by defending the following evolutionary argument for cluelessness:
- P1: One’s beliefs about what actions positively influence the long-term future (in expectation) decisively rely on judgment calls (by which I mean intuitions that are neither clearly supported nor clearly undermined by the available evidence).
- P2: There has been no evolutionary pressure (not even indirect) against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering their far future effects.
- Conclusion: Having non-agnostic beliefs about what actions longtermism recommends is unwarranted.
Assuming no misunderstanding of what I mean, I expect everyone to agree that the conclusion logically follows from these two premises.[1] However, I expect skepticism towards P1 and especially P2, and I am interested in compelling objections to them in the comments. My aim in the rest of the post is to help commenters do this by first addressing objections that I think are bad, or unconvincing unless accompanied by further justification that I have not (yet) managed to come up with.
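For readers who want the skeleton of the argument spelled out, here is one minimal propositional reconstruction. The bridge premise B is my own hedged formulation of the sort of implicit (and, I take it, uncontroversial) premise footnote 1 alludes to; nothing hangs on this exact wording:

```latex
% One possible reconstruction; B is an assumed implicit bridge premise.
\begin{align*}
\text{P1:}\quad & J && \text{(longtermist beliefs decisively rely on judgment calls)}\\
\text{P2:}\quad & N && \text{(no selection pressure for correct such judgment calls)}\\
\text{B:}\quad  & (J \land N) \rightarrow \lnot W && \text{(beliefs so formed are unwarranted)}\\
\text{C:}\quad  & \lnot W && \text{(from P1, P2, and B by modus ponens)}
\end{align*}
```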
Objections to P1
Recall P1: One’s beliefs about what actions positively influence the long-term future (in expectation) decisively rely on judgment calls (by which I mean intuitions that are neither clearly supported nor clearly undermined by the available evidence).
“Surely there’s at least the longtermist argument for X-risk reduction that is robustly justified without judgment call.”
Here’s a list of crucial factors you have to consider when assessing whether X-risk reduction is good from a (full)[2] longtermist perspective:
- what is (dis)valuable and what kind of value ethically compensates for what kind of disvalue
- what animals are moral subjects and to what extent
- whether, and if so when, humanity will stop farming (some of) the animals whose miserable lives we would not wish to live
- whether we will spread wildlife to other celestial bodies and how much
- the overall welfare of animals living in the wild
- whether digital sentience is possible and practically feasible and how its development goes if it is
- whether humanity colonizes outer space, how much, and the actors involved
- how multipolar the development of transformative artificial intelligence will be and the values of the expected stakeholders
- the likelihood of a conflict or malevolent actors causing astronomical amounts of suffering
- the probability of another civilization ever taking over humanity if it goes extinct or gets disempowered and when this would happen
- the overall impact of potential inter-civilizational conflict
- the gains from potential peaceful inter-civilizational trade
- how all the above factors interact with one another
- what to make of unknown unknowns
- how to weigh all these factors against one another to reach a final verdict.
I believe it is uncontroversial that you cannot estimate how these crucial factors should influence your overall credence without making judgment calls. You can very well argue that we should trust our judgment calls on these questions, or those of whatever longtermist(s) you think make the correct calls. But you can’t say they’re not judgment calls. You can’t say that any rational agent with intellectual capacities similar to yours, exposed to the same evidence as you, should end up with the same verdict on whether X-risk reduction does more good than harm in the long run. Unless you make the move I address next?
“What about the option value argument for X-risk reduction? It seems more robust.”
Here are the assumptions you need to make for the argument to pan out:
- Whoever the stakeholders in the future turn out to be, they will be more confident than we are in their beliefs regarding whether X-risk reduction in their time is overall good OR overall bad.
- Their overall verdict will be the correct one.
- If these same descendants correctly find that X-risk reduction is overall good, we would have had (by reducing X-risks in our time) a positive impact greater than the negative impact we would have had in the scenario where they correctly find that X-risk reduction is overall bad.[3]
I think it’s clear there are at least a few judgment calls needed to back these three assumptions. Unless you make the following assumption?
“The good and bad consequences of an action we can’t estimate without judgment calls cancel each other out, such that judgment calls are unnecessary”
This sort of assumption is so common that it is known as “the cancellation postulate” in the cluelessness literature. However, it has been rejected by every academic paper assessing it that I’m aware of (see the references cited in the rest of this section). Authors have tried to justify it in two different ways.
The first is a straightforward appeal to a principle of indifference (POI from here on), which “states that in the absence of any relevant evidence, a rational agent will distribute their credence (or ‘degrees of belief’) equally amongst all the possible outcomes under consideration” (Eva 2019). In the context of the cancellation postulate, such an appeal has been made and endorsed implicitly by Kagan (1998, p. 64), Dorsey (2012), and Cowen (2006), as well as more explicitly by Burch-Brown (2014) and Greaves (2016, §III), although only in situations Greaves estimates to be cases of what she calls “simple cluelessness”. However, at least in cases of what Greaves calls “complex cluelessness”, POI seems to lose its potential appeal entirely (Burch-Brown 2014; Greaves 2016, §V; Yim 2019; Thorstad & Mogensen 2020; Mogensen 2021; Williamson 2022, Chapter 1; Tarsney et al. 2024, §3).

For instance, when wondering whether the civilization that might take over ours would affect overall welfare more or less positively than humanity, we are not in the total absence of relevant evidence. There are systematic reasons to believe such a new civilization would do better[4] and systematic reasons to believe it would do worse[5]. And it seems exceedingly likely that some of these systematic reasons have not even occurred to us, compounding our cluelessness problem with that of unawareness (Roussos 2021; Tarsney et al. 2024, §3; Bostrom 2007, pp. 23-24). The problem is not that reasons pointing one way or the other do not exist; it is our incapacity to sensibly weigh these reasons against one another.[6]

Applying POI here would require us to have a determinate 50% credence in the proposition that the civilization that might take over ours would affect overall welfare more positively than humanity would, the same way we hold a 50% credence in a fair coin landing heads after being tossed. Such an application of POI seems misguided.[7] In the fair-coin-toss case, if we bet money, we can genuinely and rationally expect to win half of the time. In the counterfactual-civilization case, we have no clue whether we would win in expected value. The expectation we genuinely have (or should have) is not 50%. Rather, it is indeterminate.
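To make the contrast concrete in expected-value terms, here is a small worked illustration; the credence interval below is a number I made up purely for illustration, not an estimate anyone defends:

```latex
% Fair coin: a determinate credence of 0.5 in heads yields a determinate
% expectation for a bet that wins +1 on heads and loses 1 on tails.
\[
  \mathbb{E}[\text{bet}] = 0.5\,(+1) + 0.5\,(-1) = 0.
\]
% Counterfactual civilization: suppose (purely illustrative) our evidence
% only constrains the credence that it does better to the interval [0.3, 0.8].
% The expectation is then interval-valued, and its sign is indeterminate:
\[
  \mathbb{E}[\text{bet}] \in \bigl[\,0.3 - 0.7,\ 0.8 - 0.2\,\bigr] = [-0.4,\ 0.6].
\]
```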
The second way some have tried to justify the cancellation postulate is by appealing to an extrapolation from the parameters that are estimable without judgment calls. While these authors might concede that POI cannot straightforwardly justify the postulate, they claim that the parameters we can easily estimate are meaningfully representative of those we can’t, such that neglecting the latter is harmless in practice. For example, Brian Tomasik (2015) writes:
That we can anticipate something about UUs [(unknown unknowns)] despite not knowing what they will be can be seen more clearly in a case where the current UUs are more lopsided. For example, suppose the action under consideration is "start fights with random people on the street". While this probably has a few considerations in its favor, almost all of the crucial considerations that one could think of argue against the idea, suggesting that most new UUs will point against it as well.
In Tomasik’s example, the problem lies in the step from (a) “almost all of the crucial considerations that one could think of argue against the idea” to (b) the claim that this “suggest[s] that most new UUs will point against it as well.” How does (a) suggest (b)? Why should we assume that the considerations that happen to have occurred to us, and that we happen to be able to estimate without judgment calls, are representative of those we can’t? That would be an incredibly fortunate coincidence. Surely there are systematic biases in which crucial considerations we uncover and can estimate. There probably is something unusual about the parameters we can easily estimate compared to those we can’t. And while we do not know which direction the bias goes, applying POI still seems unwarranted here. There surely are systematic reasons why the bias would go one way and systematic reasons why it would go the other.[8] We have no idea how to weigh such reasons against one another, just as earlier when we were wondering whether to straightforwardly apply POI to our beliefs regarding the civilization that might take over humanity if it goes extinct or gets disempowered.

This does not mean that one must be fine with “starting fights with random people on the street”. It means that, if doing so is not fine, this cannot be because its good and bad inestimable indirect consequences “cancel out”. The commonsensical intuition that starting such fights would be bad, in particular, has nothing to do with estimates of the overall consequences this would have from now until the end of time. Hence, such common sense cannot be directly used as evidence that not starting fights with random people on the street must be good from a purely impartial consequentialist perspective.
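The representativeness worry can be put in bare sampling terms. In the schematic notation below (mine, not Tomasik’s), V_i is the signed contribution of consideration i to the overall verdict, and S_i is the event that consideration i both occurs to us and is estimable without judgment calls:

```latex
% Extrapolating from the considerations we uncover to those we miss
% requires (something like) the uncovered sample being unbiased:
\[
  \mathbb{E}[V_i \mid S_i] \approx \mathbb{E}[V_i],
\]
% which holds only if salience/estimability (S_i) is roughly independent
% of a consideration's sign and magnitude (V_i). If the two are
% systematically correlated, in a direction we cannot determine,
% the uncovered considerations tell us little about the ones we miss.
```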
Objections to P2
Recall P2: There has been no evolutionary pressure (not even indirect) against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering their far future effects.
“The correct judgment calls on how to predictably influence the future have an evolutionary advantage since they lead to lasting influence.”
Sure, but P2 doesn’t imply that we cannot predictably influence the far future. Rather, it implies that we cannot predictably influence the far future in a way that makes its trajectory morally more desirable. Of course, those who know how to influence the far future will influence it more. But this doesn’t mean they will influence it positively.
“Correct longtermist judgment calls are a systematic by-product of evolutionarily advantageous traits that were directly selected for.”[9]
For this objection to hold, we need to suppose that there is a systematic correlation between the traits that guarantee survival and reproductive success and the quality of one’s judgment calls on how to positively influence the long-term future. This appears dubious to me. What a lucky coincidence that would be.
One may say: “Well, our intuitions regarding which longtermist judgment calls are correct must come from somewhere. Why would we have them if they don’t track the truth? Why wouldn’t our intuitions be silent on tricky questions relevant to how to positively influence the long-term future?” I have two responses:
- I don’t need to know where longtermists’ judgment calls come from to tentatively suppose that they don’t come from a selection pressure towards correct longtermist judgment calls. The reasons are similar to why I don’t need to know why people have historically pictured aliens as small green creatures in order to tentatively suppose that correct intuitions about what aliens look like are not a trait that has been selected for in humans.
- There are many alternative explanations for why we have intuitions about which longtermist judgment calls are correct that don’t require assuming evolution selected for the correct ones. Examples include:
- Misgeneralization due to a distribution shift in our environment.
- Some random glitch of evolution.
- An evolutionary pressure towards longtermist judgment calls that simply made people more likely to want to survive and reproduce in the present and near future (e.g., optimism about the long-term future of humanity).[10]
“Assessing P2 itself unavoidably requires questionable judgment calls, making P2 self-defeating”
Assessing P2 requires the ability to understand some basic logical implications of the theory of evolution. To the extent that there are “judgment calls” involved in one’s evaluation of P2, they are the judgment calls behind the very laws of logic. It makes perfect sense that evolution favored (whether directly or not) the ability to discover this kind of thing (FitzPatrick 2021, §4.1). Historically, you die if you’re not able to reason logically at a basic level. You don’t die if you lack correct beliefs about whether your actions do more good than harm considering all their overall effects on the far future.
But well, what do you think?
References
Bostrom, Nick. 2007. “Technological Revolutions: Ethics and Policy in the Dark.” In Nanoscale, 129–52. John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470165874.ch10.
Bradley, Richard. 2017. Decision Theory with a Human Face. Cambridge: Cambridge University Press. https://doi.org/10.1017/9780511760105.
Burch-Brown, Joanna M. 2014. “Clues for Consequentialists.” Utilitas 26 (1): 105–19. https://doi.org/10.1017/S0953820813000289.
Cowen, Tyler. 2006. “The Epistemic Problem Does Not Refute Consequentialism.” Utilitas 18 (4): 383–99. https://doi.org/10.1017/S0953820806002172.
Dorsey, Dale. 2012. “Consequentialism, Metaphysical Realism and the Argument From Cluelessness.” Philosophical Quarterly 62 (246): 48–70. https://doi.org/10.1111/j.1467-9213.2011.713.x.
Eva, Benjamin. 2019. “Principles of Indifference.” Journal of Philosophy 116 (7): 390–411. https://doi.org/10.5840/jphil2019116724.
FitzPatrick, William. 2021. “Morality and Evolutionary Biology.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2021/entries/morality-biology/.
Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.
Kagan, Shelly. 1998. Normative Ethics. Routledge.
Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.
Roussos, Joe. 2021. “Unawareness for Longtermists.” https://joeroussos.org/wp-content/uploads/2021/11/210624-Roussos-GPI-Unawareness-and-longtermism.pdf.
Schwitzgebel, Eric. 2024. “The Washout Argument Against Longtermism.” Accessed March 3, 2025. https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm.
Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.
Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.
Thorstad, David. Forthcoming. “The Scope of Longtermism.” Australasian Journal of Philosophy.
Thorstad, David, and Andreas Mogensen. 2020. “Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making.” https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.
Tomasik, Brian. 2015. “Charity Cost-Effectiveness in an Uncertain World.” Center on Long-Term Risk (blog). August 29, 2015. https://longtermrisk.org/charity-cost-effectiveness-in-an-uncertain-world/.
Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.
Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.
- ^
[Edit a few hours after posting] This implicitly assumes a few other (uncontroversial) premises, some of which are fleshed out by Michael St. Jules in this comment. Thanks to him for that.
- ^
Most of them apply even if you endorse a bounded version of longtermism where you ignore what happens beyond a certain number of years (see, e.g., Tarsney 2023; Schwitzgebel 2024).
- ^
This requires, among other things, taking into account their capacity to try to “make up for our mistake” by increasing X-risks.
- ^
E.g., the alien civilization that might eventually take over ours is surely more advanced and “wiser” in many respects, which is, all else equal, one reason to believe it would be a better civilization in terms of its impact on overall welfare, under some additional assumptions.
- ^
E.g., the alien civilization that might eventually take over ours is likely to hold values unusually conducive to effective space colonization, for obvious selection-pressure reasons (see my post The Grabby Values Selection Thesis). Concern for the suffering one might incidentally and/or indirectly cause when colonizing is not evolutionarily adaptive in such a context, all else equal (see my post Why we may expect our successors not to care about suffering), which, under some additional assumptions, is one reason to believe such an alien civilization would be worse than ours, welfare-wise.
- ^
Interestingly, Thorstad & Mogensen (2020) write: “The problem is not that we can say nothing about the potential future effects of our actions. Quite the opposite. There is often simply too much that we can say. Even the simplest among us can list a great number of potential future effects of our actions and produce some considerations bearing on their likelihoods[.]”. Relatedly, David Thorstad (Forthcoming) suggests that “[in long-range forecasting territory,] we are often in a situation of evidential paucity: although we have some new evidence bearing on long-term values, often our evidence is quite weak and undiagnostic.”
- ^
Whether it would be similarly misguided to have a 50% credence that an unfair coin toss with unknown bias will land on heads is controversial (see, e.g., Bradley 2017, §13). Such a case, however, seems to match Greaves’ descriptions of examples of “simple cluelessness” and is irrelevant to the subject at hand. Greaves (2016, §V) writes: “There is an obvious and natural symmetry between the thoughts that (i) it’s possible that moving my hand to the left might disturb air molecules in a way that sets off a chain reaction leading to an additional hurricane in Bangladesh, which in turn renders many people homeless, which in turn sparks a political uprising, which in turn leads to widespread and beneficial democratic reforms, ..., and (ii) it’s possible that refraining from moving my hand to the left has all those effects. But there is no such natural symmetry between, for instance, the arguments for the claim that the world is overpopulated and those for the claim that it’s underpopulated [...] And, in contrast to the above relatively optimistic verdict on the Principle of Indifference, clearly there is no remotely plausible epistemic principle mandating equal credences in p and not-p whenever arguments for and against p are inconclusive.” Johanna Burch-Brown (2014) also makes an appealing case endorsing a similar distinction.
- ^
For example, our commonsensical intuitions precluding starting fights for no reason may make us more likely to discover reasons why this would be bad from an impartial consequentialist perspective than reasons why this would be good. On the other hand, this commonsensical intuition may be due to our becoming aware of these reasons rather than the other way around, and it may be that we are actually biased towards uncovering reasons to believe starting fights for no reason would have good overall consequences, due to tribal instincts. Also, here again, it would be unwise to assume we are aware of all the reasons we must weigh up against each other (Roussos 2021; Tarsney et al. 2024, §3; Bostrom 2007, pp. 23-24).
- ^
This resembles some of the most popular objections to the general evolutionary debunking argument in metaethics (see, e.g., FitzPatrick 2021, §4.1).
- ^
Indeed, maybe there is a selective evolutionary debunking argument (see FitzPatrick 2021, §4.2) to be made against some common beliefs among longtermists? I might propose such an argument, and assess whether it pans out and how strong or weak it is if it does, in a future essay.