
Longtermism's most controversial premise seems to be the assumption that we can predict (better than chance) -- and are not clueless about -- the overall impact of our actions on the long-term future. Many have defended this epistemic premise of longtermism by arguing that we, at the very least, can and should help humanity reach its potential with X-risk reduction and trajectory changes toward "bigger futures".[1] Call this optimistic longtermism.[2] I suggest that whether optimistic longtermism escapes cluelessness depends on whether we should trust all the decisive judgment calls underlying our "best guesses" on the question. Then, I point to the possibility that the judgment calls (/intuitions) backing optimistic longtermism may be better explained by evolutionary pressures towards pro-natalist beliefs than by a process that gives us good reasons to believe these judgment calls are truth-tracking. This uncovers an evolutionary debunking argument against optimistic longtermism, which I comment on.

Note: This post is very short and gathers only preliminary thoughts. I'm considering drafting an academic paper on the topic and would be curious to see people's reactions to these preliminary thoughts before deciding what key points such a paper should address.

1. Optimistic longtermism cannot be endorsed without judgment calls

Longtermists often consider X-risk reduction to be unusually robust and to circumvent cluelessness worries.[1] But while X-risk reduction certainly has a massive lasting impact on the far future,  considering this impact to be positive requires precisely estimating numerous parameters[3] (including considerations related to aliens, acausal reasoning, and crucial considerations we may be missing) and weighing them against one another. 

Once we consider all the arguments pointing in favor of X-risk reduction and those pointing against it, given all these relevant factors, it is not obvious that all rational agents should converge on forming a determinate >50% credence in the proposition "reducing X-risks is good in the long run". Two very smart experts on longtermism and its epistemic challenge, who have both considered all the arguments, could very well end up disagreeing on the long-term value of X-risk reduction based on nothing but different judgment calls (i.e., intuitions that cannot themselves be supported with arguments) when weighing considerations against one another. There is no evidently correct way to weigh these.[4]

So how do we know whose judgment calls are correct? In fact, how do we know if anyone's judgment calls track the truth better than random? I will not try to answer this. However, I will consider judgment calls leading to optimistic longtermism, specifically, and hopefully start shedding some (dim) light on whether we should trust these.

2. Are the judgment calls backing optimistic longtermism suspicious?

To know whether our judgment calls about the long-term value of X-risk reduction are informative to any degree, we ought to think about where they come from. In particular, we must wonder whether they come from

  • A) a source that makes them reliable (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly); or
  • B) a source that makes them unreliable (such as an evolutionary pressure toward pro-natalist beliefs).

So do the judgment calls that back optimistic longtermism come primarily from A or from B? I hope to bring up considerations that will help answer this question in a future essay, but see An Evolutionary Argument undermining Longtermist thinking? for some thoughts of mine relevant to the topic. My main goal here is simply to point out that optimistic longtermists must defend A over B for their position to be tenable.

Concluding thoughts

  • Assuming B is on the table, there is a valid -- although not necessarily strong, depending on how plausible B is vs A -- evolutionary debunking argument to be made against optimistic longtermism. It would not be a genetic fallacy as long as one agrees that there necessarily are decisive judgment calls involved in forming optimistic-longtermist beliefs (as I tersely argue in section 1).[5] If it is these opaque judgment calls that dictate whether one accepts or rejects optimistic longtermism at the end of the day, assessing the reliability of these judgment calls seems in fact much more relevant than discussing object-level arguments for and against optimistic longtermism which have all been inconclusive so far (in the sense that none of them are slam-dunk arguments that eliminate the need for weighing reasons for and against optimistic longtermism with judgment calls).[6]
  • It is worth noting that such an evolutionary debunking argument does not back "pessimistic longtermism", at least not on its own. It may very well be that we should remain agnostic on whether X-risk reduction and trajectory changes toward "bigger futures" are good. A reason to think that believing X is unwarranted is not a reason to believe anti-X. I decided to assail optimistic longtermism specifically because it is more popular.
  • While I find the evolutionary debunking argument my post presents to be plausibly extremely relevant and important in theory, I'm pessimistic about its potential to generate good philosophical discussions in practice. If one wants to convince an optimistic longtermist that the judgment calls that led them to this position may be unreliable, there may be more promising ways (e.g., give them crucial considerations they haven't thought of that make them strongly update and question their ability to make good judgment calls given all the crucial considerations they might still be missing).

References

Chappell, Richard Yetter. 2017. “Knowing What Matters.” In Does Anything Really Matter?: Essays on Parfit on Objectivity, edited by Peter Singer, 149–67. Oxford University Press.

Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.

Greaves, Hilary, and William MacAskill. 2021. “The Case for Strong Longtermism.” https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/.

Greaves, Hilary, and Christian Tarsney. 2023. “Minimal and Expansive Longtermism.” https://globalprioritiesinstitute.org/minimal-and-expansive-longtermism-hilary-greaves-and-christian-tarsney/.

Kahane, Guy. 2011. “Evolutionary Debunking Arguments.” Noûs 45 (1): 103–25. https://doi.org/10.1111/j.1468-0068.2010.00770.x.

MacAskill, William. 2022. What We Owe the Future. New York: Basic Books.

Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.

Rulli, Tina. 2024. “Effective Altruists Need Not Be Pronatalist Longtermists.” Public Affairs Quarterly 38 (1): 22–44. https://doi.org/10.5406/21520542.38.1.03.

Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.

Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.

Thorstad, David, and Andreas Mogensen. 2020. “Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making.” https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.

Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.

Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.

  1. ^

    See, e.g., Thorstad & Mogensen 2020; Greaves & MacAskill 2021, §4, §7; MacAskill 2022, Chapters 1, 2, 9; Tarsney 2023; Greaves & Tarsney 2023.

  2. ^

    Tina Rulli (2024) talks about "pro-natalist longtermism", referring to something equivalent. (I wonder whether "optimistic longtermism" actually is a better term -- see this comment).

  3. ^

    The same parameters apply to evaluating trajectory changes toward "bigger futures". Also, I'm assuming that parameters we might not be able to estimate do not "cancel out".

  4. ^

    See Greaves 2016, §V; Yim 2019; Thorstad & Mogensen 2020; Mogensen 2021; Williamson 2022, Chapter 1; Tarsney et al. 2024, §3 for analogous points applied to causes other than X-risk reduction.

  5. ^

    This means the argument the present post discusses is immune to critiques of evolutionary debunking arguments in other contexts, such as Chappell's (2017).

  6. ^

    Commenting on an analogous issue, Guy Kahane (2011) writes: "It is notoriously hard to resolve disagreements about the supposed intrinsic value or moral significance of certain considerations -- to resolve differences in intuition. And we saw that a belief's aetiology makes most difference for justification precisely in such cases, when reasons have run out. Debunking arguments thus offer one powerful way of moving such disagreements forward".


Comments

I don't think it's plausible that optimistic longtermism is vulnerable to evolutionary debunking, because:

  • I've seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven't seen any reason to believe that the pro-natalists' reasoning in particular is succumbing to evolutionary pressure.
  • You can tell a story about pro-natalist beliefs having evolutionary advantages, of course, but that's not actually establishing a fact of evolutionary psychology. There are many such stories that sound plausible, and they can often be contradictory.
  • Person-affecting beliefs, and neutrality about creating positive lives, often reflect deeply held intuitions shared by many people that are hard to square with the idea that there's strong evolutionary pressure toward intuitive pro-natalism. Indeed, my experience in philosophy is that these views are treated as the intuitive ones that need to be defended from the unintuitive arguments for longtermism.
  • I think in fact it's more plausible that evolution selected for people who tend to have sex (that happens to be procreative) and want to care for children, than that it selected for intuitions that people rely on when they reason impartially about population ethics.

I think if you were to turn this into an academic paper, I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.

Ah nice, thanks for these points, Cody.

I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.

I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true. The real question is how strong it is relative to, e.g., a potential indirect selection toward truth-tracking longtermist beliefs. I.e., the EDA argument against optimistic longtermism seems trivially valid. The question is how strong it is relative to other arguments. (And I'd really like for my potential paper to make progress on this, yeah!)

(Hopefully, the above also addresses your second bullet point.)

Now, you give potential reasons to believe the EDA is weak (thanks for that!): 

I've seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven't seen any reason to believe that the pro-natalists' reasoning in particular is succumbing to evolutionary pressure.

You can't reason yourself into or out of something like optimistic longtermism just using math. You need to make so many subjective judgment calls. And the fact that you can reason yourself out of a belief does not mean that there weren't evolutionary pressures toward this belief. It means, fair enough, that the evolutionary pressure was at least not overwhelmingly strong. But I don't think anyone was contesting that. You can say this about absolutely all evolutionary pressures on normative and empirical beliefs. I don't think there is any that is so strong that we can't reason ourselves out of it. But this doesn't mean they can't have suspicious origins.

On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is "to the extent that we have intuitions about what is best for the long-term (and care about this), where do these intuitions come from?". Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.

I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true.

I agree that you can construct hypothetical scenarios in which a given trait is selected for (though even then you have to postulate that it's heritable, which you didn't specify here). But your claim is not trivially true, and it does not establish that optimism regarding the long-term future of humanity has in fact been selected for in human evolutionary history. Other beliefs that are more plausibly susceptible to evolutionary debunking include the idea that we have special obligations to our family members, since these are likely connected to kinship ties that have been widely studied across many species.

So I think a key crux between us is on the question: what does it take for a belief to be vulnerable to evolutionary debunking? My view is that it should actually be established in the field of evolutionary psychology that the belief is best explained as the direct[1] product of our evolutionary history. (Even then, as I think you agree, that doesn't falsify the belief, but it gives us reason to be suspicious of it.)

I asked ChatGPT how evolutionary psychologists typically try to show that a psychological trait was selected for. Here was its answer:

Evolutionary psychologists aim to show that a psychological trait is a product of selection by demonstrating that it likely solved adaptive problems in our ancestral environment. They look for traits that are universal across cultures, appear reliably during development, and show efficiency and specificity in addressing evolutionary challenges. Evidence from comparative studies with other species, heritability data, and cost-benefit analyses related to reproductive success also support such claims. Altogether, these approaches help build a case that the trait was shaped by natural or sexual selection rather than by learning or cultural influence alone.

I think you might say that you don't have to show that a belief is best explained by evolutionary pressure, just that there's some selection for it. In fact, I don't think you've done that (because e.g. you have to show that it's heritable). But I think that's not nearly enough, because "some evolutionary pressure toward belief X" is a claim we can likely make about any belief at all. (E.g., pessimism about the future can be very valuable, because it can make you aware of potential dangers that optimists would miss.)

Also, in response to  this:

On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is "to the extent that we have intuitions about what is best for the long-term (and care about this), where do these intuitions come from?". Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.

I'm not sure why you think non-longtermist beliefs are irrelevant. Your claim is that optimistic longtermist beliefs are vulnerable to evolutionary debunking. But that would only be true if they were plausibly a product of evolutionary pressures which should apply to populations that have been subject to evolutionary selection; otherwise they're not a product of our evolutionary history. And so evidence of what humans generally are prone to believe seems highly relevant. The fact that many people, perhaps most, are pre-theoretically disposed toward views that push away from optimistic longtermism and pro-natalism casts further doubt on the claim that the intuitions that push people toward optimistic longtermism and pro-natalism have been selected for.

  1. ^

    I used "direct" here because, in some sense, all of our beliefs are the product of our evolutionary history.

I'm not sure why you think non-longtermist beliefs are irrelevant.

Nice. That's what makes us misunderstand each other, I think. (This is crucial to my point.)

Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don't care anyway). But some people have beliefs about this, most of whom believe X-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them instead of holding that X-risk reduction is bad in the very long run, or being agnostic on this particular question?[1] Is it because X-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question) or because of something else?

And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for "something else". 

So I'm not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I'm saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I'm not even really saying that, but I do think it, and this is why I chose to discuss an EDA against optimistic longtermism, specifically.

So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of X-risk reduction track the truth than by something else.

Does that make sense?

  1. ^

    So I'm interested in optimistic longtermism vs not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of X-risk reduction). Beliefs that the long-term future doesn't matter or something are irrelevant, here.

To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?

I think this is a really good example of what I was talking about in my post, It's Not Wise to be Clueless.

If you think that, in general, justified belief is incompatible with "judgment calls", then radical skepticism immediately follows. You can't even establish, to this standard, that the external world exists. I take that to show that there's a problem with the epistemic standards you're assuming.

It's OK - indeed, essential - to make judgment calls, and we should simply try to exercise better rather than worse judgment. There are, of course, tricky questions about how best to do that. But if there's anything that we've learned from philosophy since Descartes, it's that skeptical calls to abjure disputable judgments altogether are... not feasible.

Thanks for engaging with this, Richard!

To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?

I think I am making a much weaker claim than this. While I suggest that the EDA argument I raise is valid, I do not argue that it is strong to the point where optimistic longtermism is unwarranted. Also, the argument itself does not say what people should believe if they do not endorse optimistic longtermism (an alternative to cluelessness is pessimistic longtermism -- I do not say anything about which one is the most appropriate alternative to optimistic longtermism if the EDA argument is strong enough). Sorry if my writing was unclear.

whether it would be good or bad for everyone to die

Maybe a nitpick, but I find this choice of words quite unfair as it implicitly appeals to commonsensical intuitions that seem to have nothing to do with longtermism (to implicitly back your opinion that we know X-risk reduction is good from a longtermist perspective). You do something very similar multiple times in It's Not Wise to be Clueless.

If you think that, in general, justified belief is incompatible with "judgment calls"

I didn't say that. I said that we ought to wonder whether these judgment calls are reliable, a claim you seem to agree with when you write:

It's OK - indeed, essential - to make judgment calls, and we should simply try to exercise better rather than worse judgment.

Now, you seem much more convinced than me that our judgment calls with regard to the long-term value of X-risk reduction come from a reliable source (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly) rather than from evolutionary pressures towards pro-natalist beliefs. In It's Not Wise to be Clueless, the justification you provide for something in this vicinity[1] is that we ought to start with the prior that something like X-risk reduction is good, for reasons similar to those why we should start with the prior that the sun will rise tomorrow. But I think Jesse quite accurately pointed out the disanalogy and the problem with your argument in his comment. Do you have another argument and/or an objection to Jesse's reply that you are happy to share?

  1. ^

    EDIT: actually, not sure this is related. You don't seem to argue that our judgment calls are truth-tracking. You argue that there is a rational requirement to start with a certain prior (i.e., you implicitly suggest that all rational agents should agree with you on X-risk reduction without having to make judgment calls, in fact).

I just posted the following reply to Jesse:

I don't think penalizing complexity is enough to escape radical skepticism in general. Consider the "universe popped into existence (fully-formed) 5 minutes ago" hypothesis. It's not obvious that this is more complex than the alternative hypothesis that includes the past five minutes PLUS billions of years before that. One could try to argue for this claim, but I don't think that our confidence in history should be *contingent* on that extremely contentious philosophical project working out successfully!

But to clarify: I don't think I say anything much in that post about "the reasons why we should start with" various anti-skeptical priors, and I'm certainly not committed to saying that there are "similar reasons" in every anti-skeptical case. The similarity I point to is simply that we clearly should have anti-skeptical priors. "Why" is a separate question (if it has an answer at all, the answer may vary from case to case).

On whether we agree: When I talk about exercising better rather than worse judgment, I take success here to be determined by the contents of our judgments. Some claims warrant higher credence than others, and we should try to have our credences match as close as possible to the objectively warranted level.

But that's quite different from focusing on whether our judgments stem from a "reliable source". I think there's very little chance that you could show that almost any of your philosophical beliefs (including this very epistemic demand) stem from a source that we can independently demonstrate to be reliable. I think the kind of higher-order inquiry you're proposing is a dead end: you can't really judge which philosophical dispositions are reliable until you've determined which philosophical beliefs are true.

To illustrate with a couple of concrete examples:

(1) You claim that "an evolutionary pressure toward pro-natalist beliefs" is an "unreliable" source. But that isn't unreliable if pro-natalism is (broadly) correct.

(2) Compare evolutionary pressures to judge that pain is bad. A skeptic might claim this source is "unreliable", but we needn't accept that claim. Since pain is bad, when evolution disposes us to believe this, it is disposing us towards a true belief. (To simply assert this obviously won't suffice to convince a skeptic, but the lesson of post-Cartesian epistemology is that trying to convince skeptics is a fool's game.)

Re (1): I mean, say we know the reason why Alice is a pro-natalist is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? This wouldn't mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to luckily be "right for the wrong reasons". Do you at least agree with this in this particular contrived example or do you think that evolutionary pressures cannot ever be a reason to question our beliefs?

(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)

Philosophical truths are causally inefficacious, so we already know that there is a causal explanation for any philosophical belief you have that (one could characterize as) having "nothing to do with" the reasons why it is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.

Put another way: we should already be "questioning our beliefs"; spinning out a causal debunking story offers nothing new. It's just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.

Compare my response to Parfit:

We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance.  The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified.  Parfit rejects this (p.287):

Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.

I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins -- the hypnotist makes no difference).  We need to expose it to critical reflection in light of all else that we believe.  Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds).  Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief.  Parfit commits the genetic fallacy when he asserts that the causal origins "would cast grave doubt on the justifiability of these beliefs." (288)

Note that "philosophical reasoning" governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by "reasoning" (what would you be reasoning from, if you don't already accept any premises?) So your assumed contrast between "good philosophical reasoning" and "suspicious causal forces that undermine belief" would actually undermine all beliefs, once you trace them back to foundational premises.

The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal "debunking" gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them whatever their original causal basis may have been.

Overall, the solution is just to assess the first-order issues on their merits. "Debunking" arguments are a sideshow. They should never convince anyone who shouldn't already have been equally convinced on independent (first-order) grounds.

Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth? 

(This, on its own, doesn't prove anything about whether EDAs can ever help us; I'm just trying to pin down which assumption I'm making that you don't or vice versa).

Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what's in dispute. I don't think there's any neutral way to establish whose starting points are more intrinsically credible.)

Oh interesting.

> I don't think there's any neutral way to establish whose starting points are more intrinsically credible.

So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?

It depends what constraints you put on what can qualify as a "good reason". If you think that a good reason has to be "neutrally recognizable" as such, then there'll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren't always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this -- and since it isn't independently verifiable which, there would seem an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)

I discuss this epistemic picture in a bit more detail in 'Knowing What Matters'.

I don't think this response engages with the argument that judgment calls about our impact on net welfare over the whole cosmos are extraordinary claims, so they should be held to a high epistemic standard. What do you think of my points on this here and in this thread?

I think it's conceptually confused to use the term "high epistemic standards" to favor imprecise credence or suspended judgment over using one's best judgment. I don't think the former two are automatically more epistemically responsible.

Suspended judgment may be better than forming a bad precise judgment, but worse than forming a good precise judgment. Nothing in the concept of "high standards" should necessarily lead us to prioritize avoiding the risk of bad judgment over the risk of failing to form a good judgment when we could and should have.

I've written about this more (with practical examples from pandemic policy disputes) in 'Agency and Epistemic Cheems Mindset'

I don't see how this engages with the arguments I cited, or the cited post more generally. Why do you think it's plausible to form a (non-arbitrary) determinate judgment about these matters? Why think these determinate judgments are our "best" judgment, when we could instead have imprecise credences that don't narrow things down beyond what we have reason to?

We disagree about "what we have reason to" think about the value of humanity's continued existence -- that's precisely the question in dispute. I might as well ask why you limit yourself to (widely) imprecise credences that don't narrow things down nearly enough (or as much as we have reason to).

The topics under dispute here (e.g. whether we should think that human extinction is worse in expectation than humanity's continued existence) involve ineradicable judgment calls. The OP wants to call pro-humanity judgment calls "suspicious". I've pointed out that I think their reasons for suspicion are insufficient to overturn such a datum of good judgment as "it would be bad if everyone died." (I'm not saying it's impossible to overturn this verdict, but it should take a lot more than mere debunking arguments.)

Incidentally, I think the tendency of some in the community to be swayed to "crazy town" conclusions on the basis of such flimsy arguments is a big part of why many outsiders think EAs are unhinged. It's a genuine failure mode that's worth being aware of; the only way to avoid it, I suspect, is to have robustly sensible priors that are not so easily swayed without a much stronger basis.

Anyway, that was my response to the OP. You then complained that my response to the OP didn't engage with your posts. But I don't see why it would need to. Your post treats broad imprecision as a privileged default; my previous reply explained why I disagree with that starting point. Your own post links to further explanations I've given, here, about how sufficiently imprecise credences lead to crazy verdicts. Your response (in your linked post) dismisses this as "motivated reasoning," which I don't find convincing.

To mandate broadly imprecise credences on the topic at hand would be to defer overly much to a formal apparatus which, in virtue of forcing (with insufficient reason) a kind of practical neutrality about whether it would be bad for everyone to die, is manifestly unfit to guide high-stakes decision-making. That's my view. You're free to disagree with it, of course.

[...] whether it would be good or bad for everyone to die

I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussion and I find it a bit confusing. Depending on what animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not the badness of death for the individuals whose lives will be somewhat shortened -- because it would not seem compelling in that case, especially when aiming to take into consideration the welfare / interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems to be very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.

(To be clear, I'm not saying "animals die so X-risk is good", my point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad, and (though I'm much less sure about that) to my understanding, that is not what initially motivated EAs to care about X-risks (as opposed to the possibility of creating a flourishing future, or other considerations I know less well)).

  1. ^

    Note that I supposed that "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.

A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected. Just that it would be bad for everyone to die so.)

Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this - even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).

I hope I didn't come across as excessively nitpicky. I was motivated to write by my impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell, I really appreciate your blog!

No worries at all (and best wishes to you too!).

One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.

This seems great, but it does something I keep seeing that is kind of indefensible: it assumes that longtermism requires consequentialism.

Executive summary: Optimistic longtermism relies on decisive but potentially unreliable judgment calls, and these may be better explained by evolutionary biases—such as pressures toward pro-natalism—than by truth-tracking reasoning, which opens it up to an evolutionary debunking argument.

Key points:

  1. Optimistic longtermism depends on high-stakes, subjective judgment calls about whether reducing existential risk improves the long-term future, despite pervasive epistemic uncertainty.
  2. These judgment calls cannot be fully justified by argument and may differ even among rational, informed experts, making their reliability questionable.
  3. The post introduces the idea that such intuitions may stem from evolutionary pressures—particularly pro-natalist ones—rather than from reliable truth-tracking processes.
  4. This constitutes an evolutionary debunking argument: if our intuitions are shaped by fitness-maximizing pressures rather than truth-seeking ones, their epistemic authority is undermined.
  5. The author emphasizes this critique does not support pessimistic longtermism but may justify agnosticism about the long-term value of X-risk reduction.
  6. While the argument is theoretically significant, the author doubts its practical effectiveness and suggests more fruitful strategies may involve presenting new crucial considerations to longtermists.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

What do you think of the term "pro-natalist longtermism" instead of "optimistic longtermism"? I find the latter (EDIT: former) kinda... pejorative? It feels like an uncharitable framing for some reason, even though it's fairly accurate when you think about it. The reason why longtermists want humanity to remain and the future to be "bigger" is so more people/beings (which they expect to be happy in expectation) could exist.

Meanwhile, "optimistic longtermism" feels too charitable as the word "optimistic" puts a positive spin on it.
