There are definitely some people who are fanatical strong longtermists, but many of the people made out to be such treat it as an important consideration, not one held with certainty or given overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about the implications of a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.
I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!) This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.

The language-game of 'writing a philosophy article' is very different from that of 'stating your exact views on a topic' (the former is more about making a clear and forceful argument for a particular view, or for a particular implication of a view someone might have, and much less about conveying every nuance, uncertainty, or in-practice constraint), and once philosophy articles get read more widely, that can cause confusion. Hilary and I didn't expect our paper to get read so widely - it's really targeted at academic philosophers. Hilary is on holiday, but I've suggested we make some revisions to the language in the paper so that it's a bit clearer to people what's going on. This would mainly be changing phrases like 'defend strong longtermism' to 'explore the case for strong longtermism', which I think more accurately represents what's actually going on in the paper.
I'm also not defending or promoting strong longtermism in my next book. I defend (non-strong) longtermism, and the definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism. (The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)
I agree that Gordon deserves great praise and recognition! One clarification: my discussion of Zhdanov was based on Gordon's work; he volunteered for GWWC in the early days and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.) So that discussion shouldn't be seen as independent convergence.
Thanks, Greg - I asked, and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in the direction you'd like, though I'm sure they won't be sufficient to satisfy you. I'm going to have to get back to other work at this point, but I think your arguments are important, though the 'bait and switch' charge doesn't seem totally fair - e.g. the update towards living in a simulation only works once you appreciate the improbability of living on a single planet.
Thanks for this, Greg.

"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."

I'm surprised this wasn't clear to you, which makes me think I've done a bad job of expressing myself. It's the former, and for the reason given in your explanation (2): us being early, being on a single planet, and being at such a high rate of economic growth should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including that we're very early on, and say: "My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH. Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable."
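To make the arithmetic concrete, here is a rough sketch of how a tiny uniform prior plus an enormous update can land in that sort of interval. The prior size and the Bayes factors below are purely illustrative assumptions of mine, not numbers from the post:

```python
# Rough Bayes-update sketch: all specific numbers here are illustrative
# assumptions, chosen only to show how a very small uniform prior combined
# with a large update from the "outside-view" evidence (earliness, single
# planet, rapid growth) can land in the 0.1%-1% range discussed above.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from a prior probability and a likelihood ratio,
    via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1 / 1_000_000                    # assumed uniform prior over ~1 million centuries
for bayes_factor in (1_000, 10_000):     # assumed aggregate strength of the outside-view update
    print(bayes_factor, round(update(prior, bayes_factor), 5))
# 1,000  -> ~0.001 (0.1%)
# 10,000 -> ~0.01  (1%)
```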
I'm going to think more about your claim that in the article I'm 'hiding the ball'. I say in the introduction that "there are some strong arguments for thinking that this century might be unusually influential", discuss the arguments that I think really should massively update us in section 5 of the article, and in that context I say "We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years." Then in the conclusion I say: "There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too." That seemed clear to me, but I should judge clarity by how readers interpret what I've written.
Actually, rereading my post, I realize I had already made an edit similar to the one you suggest (though not linking to the article, which hadn't been finished at the time) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:
The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.
The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.
It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in 'Self-location and objective chance' (ms)): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population - the question of whether we're at the most influential time does not require us to get into debates over anthropics.]"
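For concreteness, here is a minimal sketch of the prior construction described in that edit. The distribution over civilization lengths is an assumption of mine purely for illustration; the only point is that the unconditional prior is a mixture of 1/n terms:

```python
# Minimal sketch of the prior described above. The assumed distribution over
# how many centuries Earth-originating civilization lasts is illustrative only.

lengths = {10_000: 0.5, 1_000_000: 0.3, 100_000_000: 0.2}  # centuries -> assumed prior probability

# Conditional on civilization lasting n centuries,
# P(this is the most influential century) = 1/n,
# so the unconditional prior is a mixture of 1/n terms.
p_most_influential = sum(p / n for n, p in lengths.items())
print(p_most_influential)

# The conjunctive claim discussed above: this is the most influential century AND
# we have an enormous future, here reading "enormous" as the largest assumed n.
n_enormous = 100_000_000
print(lengths[n_enormous] / n_enormous)
```

This is only meant to show the structure of the claim; the Value Lock-In and Time of Perils views then argue for assigning that conjunction a much higher probability than the prior alone gives it.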
Thanks, Greg. I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.

"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."

I agree with this (though see the discussion with Lukas for some clarification about what we're talking about when we say 'priors', i.e. whether we're building the fact that we're early into our priors or not).
Richard’s response is about right. My prior with respect to influentialness is such that either:
- x-risk is almost surely zero, or
- we are almost surely not going to have a long future, or
- x-risk is higher now than it will be in the future, but harder to prevent than it will be in the future, or
- in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value, or
- the idea that most of the value is in the future is false.
I do think we should update away from those priors, and I think that update is sufficient to make the case for longtermism. I agree that the location in time we find ourselves in (what I call ‘outside-view arguments’ in my original post) is sufficient for a very large update.

Practically speaking, thinking through the surprisingness of being at such an influential time made me think:
It also made me take more seriously the thoughts that in the future there might be non-extinction-risk mechanisms for producing comparably enormous amounts of (expected) value, and that maybe there’s some crucial consideration(s) that we’re currently missing such that our actions today are low-expected-value compared to actions in the future.
"Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential."
I strongly agree with this. The fact that, under a mix of distributions, it becomes not super unlikely that early people are the most influential is really important, and was somewhat buried in the original comments-discussion. And then we're also very distinctive in other ways: being on one planet, being at such a high-growth period, etc.
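As a toy illustration of that point (all of the specific numbers, and the shape of the front-loaded component, are assumptions of mine purely to show the structure of the argument):

```python
# Toy mixture-of-priors illustration: even a modest weight on a component that
# front-loads influence toward early people means "an early century is the most
# influential" is no longer astronomically unlikely. All numbers are assumed.

N = 1_000_000          # assumed number of centuries civilization might last
early_window = 100     # assumed cutoff for counting a century as "early"

# Component 1: uniform prior over centuries
p_early_uniform = early_window / N               # 0.0001

# Component 2: a front-loaded prior that, by assumption, puts 50% of its mass
# on the early window (e.g. because early periods are unusually pivotal)
p_early_frontloaded = 0.5

w = 0.1                # assumed mixture weight on the front-loaded component
p_early_mix = (1 - w) * p_early_uniform + w * p_early_frontloaded
print(p_early_uniform, p_early_mix)              # 0.0001 vs ~0.05
```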
Thanks, I agree that this is key. My thoughts:
So I guess the answer to your question is 'no': our earliness is an enormous update, but not as big as Toby would suggest.