Summary[1]
- I introduce an analogy between present-day NIMBYs opposing local development, fearing it will make their neighbourhood worse, and ‘cosmic NIMBYs’ opposing making the future large, fearing it will lower the welfare of existing people.
- On ~totalist population axiologies, this could be very bad, forfeiting much of the value of the future.
- I suggest working to reduce extinction risks will become less neglected as it is clearly in the interests of existing people, but working to make the future as large and good as possible may remain more neglected.
- This should update us (slightly) towards preferring explicitly longtermist community building and work over narrower extinction risk outreach.
Main Text
In traditional NIMBYism, residents of a particular locale advocate against the construction of more housing or other infrastructure near them. They believe[2] that as more people come to live near them, the parks will become crowded, the streets noisy, the views obstructed, the culture ruined and generally the neighbourhood will be less nice to live in. Most NIMBYs are trying to (implicitly, imperfectly) optimise for their family’s welfare, or perhaps the broader welfare of their existing community, not for what is impartially best for the city or the world. In most cases, from an impartial perspective, having more construction and higher density will be net good for the world, as even if the welfare of the original inhabitants declines slightly, this will be amply compensated for by more people getting to live in the nice location.[3]
I propose a variant of this relevant to longtermists:
Cosmic NIMBYs are people who attempt to bring about a smaller future with fewer people because they think a less crowded universe will have higher welfare for existing people, including themselves.[4]
The analogy is clear: in each case there is a relatively small group of incumbents (current residents; existing people) who are far more politically powerful and coordinated than a larger group (possible migrants; possible future people) of disempowered people who stand to benefit greatly.[5] If cosmic NIMBYs prove as successful as traditional NIMBYs have been, this could greatly limit the value of the future, at least according to population axiologies that are scope-sensitive to the size of the future and not suffering-focused, such as total utilitarianism. Such a future dominated by cosmic NIMBYs could be an example of a world where there is no existential catastrophe - indeed it may be a utopia - but where a large fraction of possible value is not realised.[6] A related possibility is that humanity chooses not to ‘go digital’ because of the large upfront investments required to create mind-uploading technology, and the risks to the early adopters from ironing out bugs in this process. This would entail a huge reduction in the size of the future.
The repugnant conclusion: do cosmic NIMBYs have a point?
One relevant question is whether in expectation there is a tradeoff between a rapidly expanding, large future and a future with high average welfare. This is related to the classic worry about a repugnant conclusion, where a very large world with low average welfare may have more total welfare than a far smaller world with very high average welfare.[7] I think there is some theoretical plausibility to the existence of this tradeoff: as we introduce more people into a world, there are more interests that warrant moral and political consideration, and sometimes these interests will clash and not all be satisfiable. As a toy example, if too large a fraction of people have a strong desire to have a private planet not shared with others, achieving a larger future will necessarily require countering these welfare interests of existing people.
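The totalist arithmetic behind this worry can be made concrete with a toy calculation (all populations and welfare levels here are made-up illustrative numbers, not estimates):

```python
# Toy illustration of the repugnant-conclusion arithmetic: under a simple
# totalist axiology, a small world with very high average welfare can have
# less *total* welfare than a much larger world with low average welfare.
# All numbers are invented purely for illustration.

def total_welfare(population: int, average_welfare: float) -> float:
    """Total welfare = population size x average welfare per person."""
    return population * average_welfare

# A small utopia: a million people, each with very high welfare.
small_utopia = total_welfare(population=1_000_000, average_welfare=100)

# A vast but mediocre world: ten billion people with barely-positive lives.
large_mediocre = total_welfare(population=10_000_000_000, average_welfare=1)

# Totalism ranks the large, mediocre world higher.
print(large_mediocre > small_utopia)  # True
```

This is only the static comparison; the question in the text is the further empirical one of whether expanding the population actually drives average welfare down, and if so, how steeply.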
However, there are also reasons to think people will benefit from being in larger worlds: there may be more and better art produced, faster scientific and technological progress, and a greater diversity of social/cultural niches people can choose from.[8] Moreover, if there is a transhumanist future with digital minds and significant influence over what our future selves or descendants will value,[9] it seems quite likely people will choose to not have hard-to-satisfy tastes, to avoid being dissatisfied.[10] Instead, we may choose to prefer the simple pleasures of e.g. doing abstract maths and writing poetry with our digital friends. These ‘simple tastes’ could be very cheap to satisfy in terms of energy and materials, which would mean the tradeoff between average welfare and population size could be very weak.
The framing of a ‘repugnant conclusion’ feels repugnant to many people for precisely cosmic NIMBY reasons - in the thought experiment we generally identify more with the people in the small utopia and don’t want to imagine ourselves losing that in favour of the large mediocre world. So the fact that many people share the repugnant conclusion intuition is perhaps some evidence in favour of cosmic NIMBYism being the default future, as most of us don’t intuitively feel the (putative) moral importance of the future being large as strongly as we feel the importance of average welfare being high. It may also be hard to separate out the often co-occurring views that ‘I personally don’t want to live in a large mediocre world, for self-interested reasons’ and ‘I think having a high average welfare is more morally important than having a high total welfare’. We may seek to rationalise the former as the more noble-seeming latter.
Implications for longtermists
Mostly I just wanted to share this new framing, but I have also tried to think a bit about what it would imply. All of these conclusions are low-robustness:
- Support traditional YIMBYism?
- It is already basically the case that longtermists tend to be YIMBYs.
- The spillover from there being more traditional YIMBYs now to cosmic YIMBYism winning in the longer term would probably be positive, but quite close to zero, as there will likely be lots of cultural change between now and the generation(s) that determine how big the future will be.
- Support pro-expansion space exploration policies and laws.
- The current international maritime law system has arguably been significantly influenced by writings and laws from centuries ago in early European colonial times. So perhaps early influence on new areas of law can be impactful, and shaping space law before space expansion properly gets underway would be very valuable.
- Preventing catastrophe/extinction/dystopia will likely have a far bigger constituency and more vociferous support than advocating for the future to be large.[11] In the NIMBY analogy, there will be huge local opposition to the neighbourhood being bulldozed, but not an equivalent reaction to the neighbourhood failing to double in size.[12]
- This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.
- This could make EA and longtermist community building relatively more valuable than direct ‘Holy Shit, X-risk’ pitches.[13]
- This consideration might still be outweighed by reducing extinction risks plausibly being (relatively) more tractable. Also, extinction risks are bad on a wide range of moral views (which is why it may be easier to convince people to work on this!), whereas making the future very large is especially valuable in a narrower range of ~totalist population axiologies.
- However, this issue may naturally resolve: if even some small subset of the population has consistently higher birth rates or more expansionist dispositions, then in the far future they will come to represent a large fraction of the population and overall growth will be strong.
- There could be an interesting tension between advocating for a larger future and faster expansion for reasons I have described here, versus taking this too far leading to a Molochian future where the most efficient (even ruthless) expansionists win out, to the exclusion of beauty and value.
- This is related to Guive Assadi’s idea of ‘evolutionary futures’, where no group of humans decides what the future should be like and causes it to come about, but rather competitive pressures ensure the fittest factions dominate, even to the detriment of individuals within those factions. A possible solution to this is to coordinate expansion between all actors (e.g. via a singleton) so that there is not a race to the bottom towards maximally efficient valueless expansion.
- I don’t have a solution here; I think this is a real tension where there are failure modes both with expansion being too slow and the future too small (but with high average welfare) and with expansion being too fast and the future too large (but with ~zero average welfare). More thinking is needed, I suppose!
[1] Thanks to Fin Moorhouse, Hanna Pálya, Catherine Brewer, Nathan Barnard and Oliver Guest for helpful comments on a draft. My work on this was inspired by some of Michael Aird’s writings on related topics. I am unsure if he would endorse what I have written here, and all views/mistakes are my own. This also in no way represents the views of any organisations I did or do work for.
[2] Often incorrectly! People may personally benefit from more housing construction near them.
[3] This seems very intuitive to me, and I remember reading things a bit like this, but I can’t find a great source. This is related, but not exactly what I want.
[4] Robin Hanson has written about some related ideas.
There may also be other reasons to expect a much-smaller-than-possible future, e.g. many environmentalists and wilderness advocates want to leave much of the earth relatively free from human influence, and perhaps similar attitudes will be common regarding space. A meme I have heard is that we shouldn’t go to other planets until and unless we steward this one sustainably.
[5] In my usage, following Richard Chappell’s work, people being created and leading happy lives still ‘benefit’ from being created even though they wouldn’t exist otherwise.
[6] On some definitions, this would in fact constitute an existential catastrophe, as a large fraction of our future potential would fail to be realised. However, the more natural, and possibly more common, interpretation of an ‘existential catastrophe’ involves a relatively sudden or decisive event, whereas in a cosmic NIMBY future there is still always the possibility of a moral awakening or cultural transformation that leads to cosmic YIMBYs winning out and the future becoming a lot larger. So our cosmic endowment would be whittled away a generation at a time while we are not expanding rapidly, rather than lost all in one go at an extinction event or totalitarian lock-in.
[7] When Parfit coined the term, and when most people use it, it is in the context of questioning ~totalist population axiologies. I, however, assume totalism, and am thinking about what the psychology of people finding the repugnant conclusion repugnant might imply about the size and shape of the future.
[8] There are some related ideas here, but I can’t remember the other source where I came across this idea.
[9] Autonomous digital people with access to their own code will be radically empowered to change their mental architectures, including reshaping their first-order preferences based on their second-order preferences. For biological humans, at least with current neurosurgery and neuroscience, this is impossible directly.
[10] There are some similarities with ~Buddhist ideas of life involving lots of preference dissatisfaction, and that letting go of our desires is a key route to the good life. Mine is a more modest suggestion that people will prefer to have easier-to-satisfy desires over harder-to-satisfy desires, all else equal. Moreover, current people often have relativistic aspects to their welfare, where they are sad if their neighbour has a fancier car than they do, even if their car is amply good objectively. But digital minds will presumably choose not to have these sorts of zero-sum preferences. There is an important worry about bad adaptive preferences though - perhaps rather than autonomous digital people tweaking their own preferences, an authoritarian regime will force all its digital subjects to have preferences favouring the continuation of the regime. I won’t explore this here.
[11] Paul Christiano wrote about this back in 2013: “One significant issue is population growth. Self-interest may lead people to create a world which is good for themselves, but it is unlikely to inspire people to create as many new people as they could, or use resources efficiently to support future generations. But it seems to me that the existence of large populations is a huge source of value. A barren universe is not a happy universe.”
[12] At least in local NIMBYism there are out-of-town would-be buyers and property developers who have some political sway. The cosmic case is even worse, as neither the suppliers nor the demanders of cosmic expansion exist yet.
[13]
First two points sound reasonable (and helpfully clarifying) to me!
I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I'm not sure I can pinpoint exactly how these notions play into our intuitions and views on the topic - maybe it's something about me more readily dismissing the [(super-high payoff of larger future) * (super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion?
That said, I fully agree that "something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best' ". To figure out which option is best, we'd need to somehow compare their respective scores on importance, neglectedness, and tractability... I'm not sure actually figuring that out is possible in practice, but I think it's fair to challenge the claim that "action X is best because it is very important and neglected and moderately tractable" regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).
I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I'm resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should go off thinking about making best-case future scenarios as good as they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for the eventuality.
(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don't know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit - something for myself to think about more, since this may or may not be a concerning bias.)
(This is not to say that it might not make sense for one or a few individuals to think about the company's mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it's not sustainable to preserve the company in a day-to-day survival strategy forever; but I think that's different from asking these individuals to paint a best-case future to be prepared to make a good outcome even better.)