
Summary[1]

  • I introduce an analogy between present-day NIMBYs opposing local development, fearing it will make their neighbourhood worse, and ‘cosmic NIMBYs’ opposing making the future large, fearing it will lower the welfare of existing people.
  • On ~totalist population axiologies, this could be very bad, losing much of the value of the future.
  • I suggest that working to reduce extinction risks will become less neglected, as it is clearly in the interests of existing people, but that working to make the future as large and good as possible may remain more neglected.
  • This should update us (slightly) towards preferring explicitly longtermist community building and work over narrower extinction risk outreach.

Main Text

In traditional NIMBYism, residents of a particular locale advocate against the construction of more housing or other infrastructure near them. They believe[2] that as more people come to live near them, the parks will become crowded, the streets noisy, the views obstructed, the culture ruined and generally the neighbourhood will be less nice to live in. Most NIMBYs are trying to (implicitly, imperfectly) optimise for their family’s welfare, or perhaps the broader welfare of their existing community, not for what is impartially best for the city or the world. In most cases, from an impartial perspective, having more construction and higher density will be net good for the world, as even if the welfare of the original inhabitants declines slightly, this will be amply compensated for by more people getting to live in the nice location.[3]

I propose a variant of this relevant to longtermists:

Cosmic NIMBYs are people who attempt to bring about a smaller future with fewer people because they think a less crowded universe will have higher welfare for existing people, including themselves.[4]

The analogy is clear: in each case there is a relatively small group of incumbents (current residents; existing people) who are far more politically powerful and coordinated than a larger group (possible migrants; possible future people) of disempowered people who stand to benefit greatly.[5] If cosmic NIMBYs prove as successful as traditional NIMBYs have been, this could greatly limit the value of the future, at least according to population axiologies that are scope-sensitive to the size of the future and not suffering-focused, such as total utilitarianism. Such a future dominated by cosmic NIMBYs could be an example of a world where there is no existential catastrophe - indeed it may be a utopia - but where a large fraction of possible value is not realised.[6] A related possibility is that humanity chooses not to ‘go digital’ because of the large upfront investments required to create mind-uploading technology, and the risks to the early adopters from ironing out bugs in this process. This would entail a huge reduction in the size of the future.

The repugnant conclusion: do cosmic NIMBYs have a point?

One relevant question is whether in expectation there is a tradeoff between a rapidly expanding, large future and a future with high average welfare. This is related to the classic worry about a repugnant conclusion where a very large world with low average welfare may have more total welfare than a far smaller world with very high average welfare.[7] I think there is some theoretical plausibility to the existence of this tradeoff: as we introduce more people into a world, there are more interests that warrant moral and political consideration, and sometimes these interests will clash and not all be satisfiable. As a toy example, if too large a fraction of people have a strong desire to have a private planet not shared with others, achieving a larger future will necessarily require countering these welfare interests of existing people.
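
To make the arithmetic behind this worry concrete (a minimal illustration with numbers I have chosen arbitrarily, not taken from Parfit or any other source): treating total welfare as population size times average welfare,

$$W_{\text{total}} = N \times \bar{w},$$

a small world A with $N = 10^{10}$ people at average welfare $\bar{w} = 100$ has total welfare $10^{12}$, while a vast world Z with $N = 10^{16}$ people at average welfare $\bar{w} = 1$ has total welfare $10^{16}$. On a totalist axiology Z comes out ten thousand times better than A, even though every life in Z is only barely worth living.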

However, there are also reasons to think people will benefit from being in larger worlds: there may be more and better art produced, faster scientific and technological progress, and a greater diversity of social/cultural niches people can choose from.[8] Moreover, if there is a transhumanist future with digital minds and significant influence over what our future selves or descendants will value,[9] it seems quite likely people will choose to not have hard-to-satisfy tastes, to avoid being dissatisfied.[10]  Instead, we may choose to prefer the simple pleasures of e.g. doing abstract maths and writing poetry with our digital friends. These ‘simple tastes’ could be very cheap to satisfy in terms of energy and materials, which would mean the tradeoff between average welfare and population size could be very weak.

The framing of a ‘repugnant conclusion’ feels repugnant to many people for precisely cosmic NIMBY reasons - in the thought experiment we generally identify more with the people in the small utopia and don’t want to imagine ourselves losing that in favour of the large mediocre world. So the fact that many people share the repugnant conclusion intuition is perhaps some evidence in favour of cosmic NIMBYism being the default future, as most of us don’t intuitively feel the (putative) moral importance of the future being large as strongly as we feel the importance of average welfare being high. It may also be hard to separate out the often co-occurring views that ‘I personally don’t want to live in a large mediocre world, for self-interested reasons’ and ‘I think having a high average welfare is more morally important than having a high total welfare’. We may seek to rationalise the former as the more noble-seeming latter.

Implications for longtermists

Mostly I just wanted to share this new framing I thought of, but I also tried to think a bit about what this would mean. All of these conclusions are held with low robustness:

  • Support traditional YIMBYism?
    • It is already basically the case that longtermists tend to be YIMBYs.
    • The spillover from there being more traditional YIMBYs now to cosmic YIMBYism winning in the longer term would probably be positive, but quite close to zero, as there will likely be lots of cultural change between now and the generation(s) that determine how big the future will be.
  • Support pro-expansion space exploration policies and laws.
    • The current international maritime law system has arguably been significantly influenced by writings and laws from centuries ago in early European colonial times. So perhaps early influence on new areas of law can be impactful, and shaping space law before space expansion properly gets underway would be very valuable.
  • Preventing catastrophe/extinction/dystopia will likely have a far bigger constituency and more vociferous support than advocating for the future to be large.[11] In the NIMBY analogy, there will be huge local opposition to the neighbourhood being bulldozed, but not an equivalent reaction to the neighbourhood failing to double in size.[12]
    • This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.
      • This could make EA and longtermist community building relatively more valuable than direct ‘Holy Shit, X-risk’ pitches.[13]
      • This consideration might still be outweighed by reducing extinction risks plausibly being (relatively) more tractable. Also, extinction risks are bad on a wide range of moral views (which is why it may be easier to convince people to work on this!), whereas making the future very large is especially valuable in a narrower range of ~totalist population axiologies.
    • However, this issue may naturally resolve itself: if even some small subset of the population has consistently higher birth rates or more expansionist dispositions, then in the far future they will come to represent a large fraction of the population and overall growth will be strong.
  • There could be an interesting tension between advocating for a larger future and faster expansion for reasons I have described here, versus taking this too far leading to a Molochian future where the most efficient (even ruthless) expansionists win out, to the exclusion of beauty and value.
    • This is related to Guive Assadi’s idea of ‘evolutionary futures’ where no group of humans decides what the future should be like and causes it to come about, but rather competitive pressures ensure the fittest factions dominate, even to the detriment of individuals in this faction. A possible solution to this is to coordinate expansion between all actors (e.g. via a singleton) so that there is not a race to the bottom towards maximally efficient valueless expansion.
    • I don’t have a solution here, I think this is a real tension where there are failure modes both with expansion being too slow and the future too small (but with high average welfare) and with expansion being too fast and the future too large (but with ~zero average welfare). More thinking is needed, I suppose!My
  1. ^

     Thanks to Fin Moorhouse, Hanna Pálya, Catherine Brewer, Nathan Barnard and Oliver Guest for helpful comments on a draft. My work on this was inspired by some of Michael Aird’s writings on related topics. I am unsure if he would endorse what I have written here, and all views/mistakes are my own. This also in no way represents the views of any organisations I did or do work for.

  2. ^

     Often incorrectly! People may personally benefit from more housing construction near them.

  3. ^

     This seems very intuitive to me, and I remember reading things a bit like this, but I can’t find a great source. This is related, but not exactly what I want.

  4. ^

     Robin Hanson has written about some related ideas.

    There may also be other reasons to expect a much-smaller-than-possible future, e.g. many environmentalists and wilderness advocates want to leave much of the earth relatively free from human influence, and perhaps similar attitudes will be common regarding space. A meme I have heard is that we shouldn’t go to other planets until and unless we steward this one sustainably.

  5. ^

     In my usage, following Richard Chappell’s work, people being created and leading happy lives still ‘benefit’ from being created even though they wouldn’t exist otherwise.        

  6. ^

     On some definitions, this would in fact constitute an existential catastrophe, as a large fraction of our future potential would fail to be realised. However, the more natural, and possibly more common, interpretation of an ‘existential catastrophe’ involves a relatively sudden or decisive event, whereas in a cosmic NIMBY future there is still always the possibility of a moral awakening or cultural transformation that leads to cosmic YIMBYs winning out and the future becoming a lot larger. So our cosmic endowment would be whittled away a generation at a time while we are not expanding rapidly, rather than lost all in one go at an extinction event or totalitarian lock-in.

  7. ^

     When Parfit coined the term, and when most people use it, it is in the context of questioning ~totalist population axiologies. I, however, assume totalism, and am thinking about what the psychology of people finding the repugnant conclusion repugnant might imply about the size and shape of the future.

  8. ^

     There are some related ideas here, but I can’t remember the other source where I came across this idea.

  9. ^

     Autonomous digital people with access to their own code will be radically empowered to change their mental architectures, including reshaping their first-order preferences based on their second-order preferences. For biological humans, at least with current neurosurgery and neuroscience, this is impossible directly.

  10. ^

     There are some similarities with ~Buddhist ideas of life involving lots of preference dissatisfaction, and that letting go of our desires is a key route to the good life. Mine is a more modest suggestion that people will prefer to have easier-to-satisfy desires over harder-to-satisfy desires, all else equal. Moreover, current people often have relativistic aspects to their welfare, where they are sad if their neighbour has a fancier car than they do, even if their car is amply good objectively. But digital minds will presumably choose not to have these sorts of zero-sum preferences. There is an important worry about bad adaptive preferences though - perhaps rather than autonomous digital people tweaking their own preferences, an authoritarian regime will force all its digital subjects to have preferences favouring the continuation of the regime. I won’t explore this here.

  11. ^

     Paul Christiano wrote about this back in 2013: “One significant issue is population growth. Self-interest may lead people to create a world which is good for themselves, but it is unlikely to inspire people to create as many new people as they could, or use resources efficiently to support future generations. But it seems to me that the existence of large populations is a huge source of value. A barren universe is not a happy universe.”

  12. ^

     At least in local NIMBYism there are out-of-town would-be buyers and property developers who have some political sway. The cosmic case is even worse, as neither the suppliers nor the demanders of cosmic expansion exist yet.

  13. ^

     There are some related ideas here and here.

Comments

I find it plausible that future humans will choose to create far fewer minds than they could. But I don't think that "selfishly desiring high material welfare" will require this. Just the Milky Way has enough stars for each currently alive human to get an entire solar system. Simultaneously, intergalactic colonization is probably possible (see here) and I think the stars in our own galaxy are less than 1-in-a-billion of all reachable stars. (Most of which are also very far away, which further contributes to them not being very interesting to use for selfish purposes.)

When we're talking about levels of consumption that are greater than a solar system, and that will only take place millions of years in the future, it seems like the relevant kind of human preferences to be looking at is something like "aesthetic" preference. And so I think the relevant analogies are less that of present humans optimizing for their material welfare, but perhaps more something like "people preferring the aesthetics of a clean and untouched universe (or something else: like the aesthetics of a universe used for mostly non-sentient art) over the aesthetics of a universe which is packed with joy".

I think your point "We may seek to rationalise the former [I personally don’t want to live in a large mediocre world, for self-interested reasons] as the more noble-seeming latter [desire for high average welfare]" is the kind of thing that might influence this aesthetic choice. Where "I personally don’t want to live in a large mediocre world, for self-interested reasons" would split into (i) "it feels bad to create a very unequal world where I have lots more resources than everyone else", and (ii) "it feels bad to massively reduce the amount of resources that I personally have, to that of the average resident in a universe packed full with life".

Another relevant consideration along these lines is that people who selfishly desire high wealth might mostly care about positional goods which are similar to current positional goods. Usage of these positional goods won't burn much of any compute (resources for potential minds) even if these positional goods become insanely valuable in terms of compute. E.g., land values of interesting places on Earth might be insanely high and people might trade vast amounts of computation for this land, but ultimately, the computation will be spent on something else.

Hmm true, I think I agree that this means the dynamics I describe matter less in expectation (because the positional goods-oriented people will be quite marginal in terms of using the resources of the universe).

Good point re aesthetics perhaps mattering more, and about people dis-valuing inequality and therefore not wanting to create a lot of moderately good lives lest they feel bad about having amazing lives and controlling vast amounts of resources.

Re "But I don't think ..." in your first paragraph, I am not sure what if anything we actually disagree about. I think what you are saying is that there are plenty of resources in our galaxy, and far more beyond, for all present people to have fairly arbitrarily large levels of wealth. I agree, and I am also saying that people may want to keep it roughly that way, rather than creating heaps of people and crowding up the universe.

There might not be any real disagreement. I'm just saying that there's no direct conflict between "present people having material wealth beyond what they could possibly spend on themselves" and "virtually all resources are used in the way that totalist axiologies would recommend".

One additional point is that, to the extent that EA AI advocacy acts to restrict, delay, and obstruct AI -- even if this is not their intention -- the effect may be to entrench an early form of cosmic NIMBYism in which AI population growth is impeded. If continued, this advocacy could have a lasting effect on whether our civilization grows to be as large as possible, by establishing norms against growth and innovation.

When viewed from this perspective, the consequence of much of EA AI advocacy may be to reduce the expected size of our vibrant, rich post-human future, rather than increase it. I currently think this consequence is fairly plausible too; as a result, I'm not convinced that obstructing AI is a good policy on total utilitarian views.

Thanks, interesting idea, I think I mostly disagree and would like to see AI progress specifically slowed/halted while continuing to have advances in space exploration, biology, nuclear power, etc., and that if we later get safe TAI we won't have become too anti-technology/anti-growth to expand a lot. But I hadn't thought about this before and there probably is something to this, I just think it is most likely swamped by the risks from AI. It is a good reason to be careful in pause AI type pitches to be narrowly focused on frontier AI models rather than tech and science in general.

I suppose when I think about pro-expansion things I would like to see, they are only really ones that do not (IMO) increase x-risks - better institutions, more pro-natalism, space exploration, maybe cognitive enhancement.

Thanks, interesting idea, I think I mostly disagree and would like to see AI progress specifically slowed/halted while continuing to have advances in space exploration, biology, nuclear power, etc

Of those technologies, AI seems to be the only one that could be transformative, in the sense of sustaining dramatic economic growth and bringing about a giant, vibrant cosmic future. In other words, it seems you're saying we should slow down the most promising technology -- the only technology that could actually take us to the future you're advocating for -- but make sure not to slow down the less promising ones. The fact that people want to slow down (and halt!) precisely the technology that is most promising is basically the whole reason I'm worried here -- I think my argument would be much less strong if we were talking about slowing down something like nuclear power.

I hadn't thought about this before and there probably is something to this, I just think it is most likely swamped by the risks from AI.

It's important to be clear about what we mean when we talk about the risks from AI. Do you mean:

  1. The risk that AI could disempower humanity in particular?
  2. The risk that AI could derail a large, vibrant cosmic civilization?

I think AI does pose a large risk in the sense of (1), but (2) is more important from a total utilitarian perspective, and it doesn't seem particularly likely to me that AIs pose a large risk in the sense of (2) (as the AIs themselves, after disempowering humanity, would presumably go on to create a big, vibrant civilization). 

If you care about humanity as a species in particular, I understand the motive behind slowing down AI. On the other hand, if you're a total utilitarian (or you're concerned about the present generation of humans who might otherwise miss out on the benefits of AI), then I'm not convinced, as you seem to be, that the risks from AI outweigh the considerations that I mentioned.

It is a good reason to be careful in pause AI type pitches to be narrowly focused on frontier AI models rather than tech and science in general.

Again, the frontier AI models, in my view, are precisely what is most promising from a pro-growth perspective. So if you are worried about EAs choking off economic growth and spurring cosmic NIMBYism by establishing norms against growth, it seems from my perspective that you should be most concerned about attempts to obstruct frontier AI research.

What's the argument for why an AI future will create lots of value by total utilitarian lights?

At least for hedonistic total utilitarianism, I expect that a large majority of expected-hedonistic-value (from our current epistemic state) will be created by people who are at least partially sympathetic to hedonistic utilitarianism or other value systems that value a similar type of happiness in a scope-sensitive fashion. And I'd guess that humans are more likely to have such values than AI systems. (At least conditional on my thinking that such values are a good idea, on reflection.)

Objective-list theories of welfare seem even less likely to be endorsed by AIs. (Since they seem pretty niche to human values.)

There's certainly some values you could have that would mainly be concerned that we got any old world with a large civilization. Or that would think it morally appropriate to be happy that someone got to use the universe for what they wanted, and morally inappropriate to be too opinionated about who that should be. But I don't think that looks like utilitarianism.

We can similarly ask, "Why would an em future create lots of value by total utilitarian lights?" The answer I'd give is: it would happen for essentially the same reasons biological humans might do such a thing. For example, some biological humans are utilitarians. But some ems might be utilitarians too. Therefore, both could create lots of value by total utilitarian lights.

In order to claim that ems have a significantly lower chance of creating lots of value by total utilitarian lights than biological humans, you'd need to posit a distinction between ems and biological humans that makes this possibility plausible. Some candidate distinctions, such as the idea that ems would not be conscious because they're on a computer, seem implausible in any way that could imply the conclusion. So, at least as far as I can tell, I cannot identify any such distinction; and thus, ems seem similarly likely to create lots of value by total utilitarian lights, compared to biological humans.

The exact same analysis can likewise be carried over to the case for AIs. Some biological humans are utilitarians, but some AIs might be utilitarians too. Therefore, both could create lots of value by total utilitarian lights.

In order to claim that AIs have a significantly lower chance of creating lots of value by total utilitarian lights than biological humans, you'd need to posit a distinction between AIs and biological humans that makes this possibility plausible. A number of candidate distinctions have been given to me in the past. These include:

  1. The idea that AIs will not be conscious
  2. The idea that AIs will care less about optimizing for extreme states of moral value
  3. The idea that AIs will care more about optimizing imperfectly specified utility functions, which won't produce much utilitarian moral value

In each case I generally find that the candidate distinction is either poorly supported, or it does not provide strong support for the conclusion. So, just as with ems, I find the idea that AIs will have a significantly lower chance of creating lots of value by total utilitarian lights than biological humans to be weak. I do not claim that there is definitely no such distinction that would convince me of this premise. But I have yet to hear one that has compelled me so far.

Here's one line of argument:

  • Positive argument in favor of humans: It seems pretty likely that whatever I'd value on-reflection will be represented in a human future, since I'm a human. (And accordingly, I'm similar to many other humans along many dimensions.)
    • If AI values were sampled ~randomly (whatever that means), I think that the above argument would be basically enough to carry the day in favor of humans.
  • But here's a salient positive argument in favor of why AIs' values will be similar to mine: People will be training AIs to be nice and helpful, which will surely push them towards better values.
    • However, I also expect people to be training AIs for obedience and, in particular, training them to not disempower humanity. So if we condition on a future where AIs disempower humanity, we evidently didn't have that much control over their values. This significantly weakens the strength of the argument "they'll be nice because we'll train them to be nice".
      • In addition: human disempowerment is more likely to succeed if AIs are willing to egregiously violate norms, such as by lying, stealing, and killing. So conditioning on human disempowerment also updates me somewhat towards egregiously norm-violating AI. That makes me feel less good about their values.
    • Another argument is that, in the near term, we'll train AIs to act nicely on short-horizon tasks, but we won't particularly train them to deliberate and reflect on their values well. So even if "AIs' best-guess stated values" are similar to "my best-guess stated values", there's less reason to believe that "AIs' on-reflection values" are similar to "my on-reflection values". (Whereas the basic argument of my being similar to humans still works OK: "my on-reflection values" vs. "other humans' on-reflection values".)

Edit: Oops, I accidentally switched to talking about "my on-reflection values" rather than "total utilitarian values". The former is ultimately what I care more about, though, so it is what I'm more interested in. But sorry for the switch.

The ones I would say are something like (approximately in priority order):

  • AIs' values could result mostly from playing the training game or other relatively specific optimizations they performed in training, which might result in extremely bizarre values from our perspective.
    • More generally AI values might be highly alien in a way where caring about experience seems very strange to them.
  • AIs by default will be optimized for very specific commercial purposes with narrow specializations and a variety of hyperspecific heuristics and the resulting values and  generalizations of these will be problematic
  • I care ultimately about what I would think is good upon (vast amounts of) reflection and there are good a priori reasons to think this is similar to what other humans (who care about using vast amounts of compute) will end up thinking is good.
    • As a sub argument, I might care specifically about things which are much more specific than "lots of good diverse experience". And, divergences from what I care about (even conditioning on something roughly utilitarian) might result in massive discounts from my perspective.
    • I care less about my values and preferences in worlds where they seem relatively contingent, e.g. they aren't broadly shared on reflection by reasonable fractions of humanity.
  • AIs don't have a genetic bottleneck and thus can learn much more specific drives that perform well while evolution had to make values more discoverable and adaptable.
    • E.g. various things about empathy.
  • AIs might have extremely low levels of cognitive diversity in their training environments as far as co-workers go which might result in very different attitudes as far as caring about diverse experience. 

Some of these can be defeated relatively easily if we train AIs specifically to be good successors, but the default AIs which end up with power over the future will not have this property.

Also, I should note that this isn't a very strong list, though in aggregate it's sufficient to make me think that human control is perhaps 4x better than AI control. (For reference, I'd say that me personally being in control is maybe 3x better than human control.) I disagree with a MIRI-style view about the disvalue of AI and the extent of fragility of value that seems implicit.

AIs' values could result mostly from playing the training game or other relatively specific optimizations they performed in training

Don't humans also play the training game when being instructed to be nice/good/moral? (Humans don't do it all the time, and maybe some humans don't do it at all; but then again, I don't think every AI would play the training game all the time either.)

AIs by default will be optimized for very specific commercial purposes with narrow specializations and a variety of hyperspecific heuristics and the resulting values and  generalizations of these will be problematic

You should compare against human nature, which was optimized for something quite different from utilitarianism. If I add up the pros and cons of the thing humans were optimized for and compare it against the thing AIs will be optimized for, I don't see why it comes out with humans on top, from a utilitarian perspective. Can you elaborate on your reasoning here?

I care ultimately about what I would think is good upon (vast amounts of) reflection and there are good a priori reasons to think this is similar to what other humans (who care about using vast amounts of compute) will end up thinking is good.

What are these a priori reasons and why don't they similarly apply to AI?

AIs don't have a genetic bottleneck and thus can learn much more specific drives that perform well while evolution had to make values more discoverable and adaptable.

I haven't thought about this one much, but it seems like an interesting consideration.

AIs might have extremely low levels of cognitive diversity in their training environments as far as co-workers go which might result in very different attitudes as far as caring about diverse experience. 

This consideration feels quite weak to me, although you also listed it last, so I guess you might agree with my assessment.

What are these a priori reasons and why don't they similarly apply to AI?

I am a human. Other humans might end up in a similar spot on reflection.

(Also I care less about values of mine which are highly contingent wrt humans.)

I am a human.

"Human" is just one category you belong to. You're also a member of the category "intelligent beings", which you will share with AGIs. Another category you share with near-future AGIs is "beings who were trained on 21st century cultural data". I guess 12th century humans aren't in that category, so maybe we don't share their values?

Perhaps the category that matters is your nationality. Or maybe it's "beings in the Milky Way", and you wouldn't trust people from Andromeda? (To be clear, this is rhetorical, not an actual suggestion)

My point here is that I think your argument could benefit from some rigor by specifying exactly what about being human makes someone share your values in the sense you are describing. As it stands, this reasoning seems quite shallow to me.

Currently, humans seem much closer to me on a values level than GPT-4 base. I think this is also likely to be true of future AIs, though I understand why you might not find this convincing.

I think the architecture (learning algorithm, etc.) and training environment between me and other humans seems vastly more similar than between me and likely AIs.

I don't think I'm going to flesh this argument out to an extent to which you'd find it sufficiently rigorous or convincing, sorry.

I don't think I'm going to flesh this argument out to an extent to which you'd find it sufficiently rigorous or convincing, sorry.

Getting a bit meta for a bit, I'm curious (if you'd like to answer) whether you feel that you won't explain your views rigorously in a convincing way here mainly because (1) you are uncertain about these specific views, (2) you think your views are inherently difficult or costly to explain despite nonetheless being very compelling, (3) you think I can't understand your views easily because I'm lacking some bedrock intuitions that are too costly to convey, or (4) some other option.

My views are reasonably messy, complicated, hard to articulate, and based on a relatively diffuse set of intuitions. I think we also reason in a pretty different way about the situation than you seem to (3). I think it wouldn't be impossible to try to write up a post on my views, but I would need to consolidate and think about how exactly to express where I'm at. (Maybe 2-5 person days of work.) I haven't really consolidated my views or reached something close to reflective equilibrium.

I also just think that arguing about pure philosophy very rarely gets anywhere and is very hard to make convincing in general.

I'm somewhat uncertain on the "inside view/mechanistic" level. (But my all-considered view partially defers to some people, which makes me overall less worried that I should immediately reconsider my life choices.)

I think my views are compelling, but I'm not sure if I'd say "very compelling"

You should compare against human nature, which was optimized for something quite different from utilitarianism. If I add up the pros and cons of the thing humans were optimized for and compare it against the thing AIs will be optimized for, I don't see why it comes out with humans on top, from a utilitarian perspective. Can you elaborate on your reasoning here?

I can't quickly elaborate in a clear way, but some messy combination of:

  • I can currently observe humans which screens off a bunch of the comparison and lets me do direct analysis.
  • I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)
  • AIs will be primarily trained in things which look extremely different from "cooperatively achieving high genetic fitness".
  • Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.
  • Humans seem optimized for something which isn't that far off from utilitarianism from some perspective? Make yourself survive, make your kin group survive, make your tribe survive, etc? I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.

(Again, note that I said in my comment above: "Some of these can be defeated relatively easily if we train AIs specifically to be good successors, but the default AIs which end up with power over the future will not have this property." I edited this in to my prior comment, so you might have missed it, sorry.)

  •  I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.

Why is this different between AIs and humans? Do you expect AIs to care less about experience than humans, maybe because humans get reward during lifetime learning but AIs don't get reward during in-context learning?

  • I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)

Can you say more about how slack (or genetic bottleneck) would affect whether AIs have values that are good by human lights?

  • AIs will be primarily trained in things which look extremely different from "cooperatively achieving high genetic fitness".

They might well be trained to cooperate with other copies on tasks, if this is the way they'll be deployed in practice?

  • Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.

In what sense do AIs use their reasoning power in this way? How does that affect whether they will have values that humans like?

I can currently observe humans which screens off a bunch of the comparison and lets me do direct analysis.

I'm in agreement that this consideration makes it hard to do a direct comparison. But I think this consideration should mostly make us more uncertain, rather than making us think that humans are better than the alternative. Analogy: if you rolled a die, and I didn't see the result, the expected value is not low just because I am uncertain about what happened. What matters here is the expected value, not necessarily the variance.

I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)

I guess I am having trouble understanding this point.

AIs will be primarily trained in things which look extremely different from "cooperatively achieving high genetic fitness".

Sure, but the question is why being different makes it worse along the relevant axes that we were discussing. The question is not just "will AIs be different than humans?" to which the answer would be "Obviously, yes". We're talking about why the differences between humans and AIs make AIs better or worse in expectation, not merely different.

Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.

I am having a hard time parsing this claim. What do you mean by "final applications"? And why won't this be true for future AGIs that are at human-level intelligence or above? And why does this make a difference to the ultimate claim that you're trying to support? 

Humans seem optimized for something which isn't that far off from utilitarianism from some perspective? Make yourself survive, make your kin group survive, make your tribe survive, etc? I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.

This consideration seems very weak to me. Early AGIs will presumably be directly optimized for something like consumer value, which looks a lot closer to "utilitarianism" to me than the implicit values in gene-centered evolution. When I talk to GPT-4, I find that it's way more altruistic and interested in making others happy than most humans are. This seems kind of a little bit like utilitarianism to me -- at least more than your description of what human evolution was optimizing for. But maybe I'm just not understanding the picture you're painting well enough though. Or maybe my model of AI is wrong.

I'm in agreement that this consideration makes it hard to do a direct comparison. But I think this consideration should mostly make us more uncertain, rather than making us think that humans are better than the alternative.

Actually, I was just trying to say "I can see what humans are like, and it seems pretty good relative to my current guesses about AIs in ways that don't just update me up about AIs" - sorry about the confusion.

My guess now of where we most disagree is regarding the value of a world where AIs disempower humanity and go on to have a vast technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be ~0 value since we don't understand consciousness at all really, and my guess is that AIs aren't yet conscious and if we relatively quickly get to TAI in the current paradigm they probably still won't be moral patients. As a sentientist I don't really care whether there is a huge future if humans (or something sufficiently related to humans e.g. we carefully study consciousness for a millennium and create digital people we are very confident have morally important experiences to be our successors) aren't in it.

So yes I agree frontier AI models are where the most transformative potential lies, but I would prefer to get there far later once we understand alignment and consciousness far better (while other less important tech progress continues in the meantime).

My guess now of where we most disagree is regarding the value of a world where AIs disempower humanity and go on to have a vast technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be ~0 value since we don't understand consciousness at all really, and my guess is that AIs aren't yet conscious and if we relatively quickly get to TAI in the current paradigm they probably still won't be moral patients.

Thanks. I disagree with this for the following reasons:

  1. AIs will get more complex over time, even in our current paradigm. Eventually I expect AIs will have highly sophisticated cognition that I'd feel comfortable calling conscious, on our current path of development (I'm an illusionist about phenomenal consciousness so I don't think there's a fact of the matter anyway).

  2. If we slowed down AI, I don't think that would necessarily translate into a higher likelihood that future AIs will be conscious. Why would it?

  3. In the absence of a strong argument that slowing down AI makes future AIs more likely to be conscious, I still think the considerations I mentioned are stronger than the counter-considerations you've mentioned here, and I think they should push us towards trying to avoid entrenching norms that could hamper future growth and innovation.

Thanks for writing this up, Oscar! I largely disagree with the (admittedly tentative) conclusions, and am not sure how apt I find the NIMBY analogy. But even so, I found the ideas in the post helpfully thought-provoking, especially given that I would probably fall into the cosmic NIMBY category as you describe it. 

First, on the implications you list. I think I would be quite concerned if some of your implications were adopted by many longtermists (who would otherwise try to do good differently):

Support pro-expansion space exploration policies and laws

Even accepting the moral case for cosmic YIMBYism (that aiming for a large future is morally warranted), it seems far from clear to me that support for pro-expansion space exploration policies would actually improve expected wellbeing for the current and future world. Such policies & laws could share many of the downsides colonialism and expansionism have had previously: 

  • Exploitation of humans & the environment for the sake of funding and otherwise enabling these explorations; 
  • Planning problems: Colonial-esque megaprojects like massive space exploration likely constitute a bigger task than human planners can reasonably take on, leading to large chances of catastrophic errors in planning & execution (as evidenced by past experiences with colonialism and similarly grand but elite-driven endeavours)
  • Power dynamics: Colonial-esque megaprojects like massive space exploration seem prone to reinforcing the prestige, status, and power for those people who are capable of and willing to support these grand endeavours, who - when looking at historical colonial-esque megaprojects - do not have a strong track record of being the type of people well-suited to moral leadership and welfare-enhancing actions (you do acknowledge this when you talk about ruthless expansionists and Molochian futures, but I think it warrants more concern and worry than you grant);
  • (Exploitation of alien species (if there happened to be any, which maybe is unlikely? I have zero knowledge about debates on this)).

This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.

It seems misguided and, to me, dangerous to go from "extinction risk is not the most neglected thing" to "we can assume there will be no extinction and should take actions conditional on humans not going extinct". My views on this are to some extent dependent on empirical beliefs which you might disagree with (curious to hear your response there!): I think humanity's chances to avert global catastrophe in the next few decades are far from comfortably high, and I think the path from global catastrophe to existential peril is largely unpredictable but it doesn't seem completely inconceivable that such a path will be taken. I think there are far too few earnest, well-considered, and persistent efforts to reduce global catastrophic risks at present. Given all that, I'd be quite distraught to hear that a substantial fraction (or even a few members) of those people concerned about the future would decide to switch from reducing x-risk (or global catastrophic risk) to speculatively working on "increasing the size of the possible future", on the assumption that there will be no extinction-level event to preempt that future in the first place.

--- 

On the analogy itself: I think it doesn't resonate super strongly (though it does resonate a bit) with me because my definition of and frustration with local NIMBYs is different from what you describe in the post. 

In my reading, NIMBYism is objectionable primarily because it is a short-sighted and unconstructive attitude that obstructs efforts to combat problems that affect all of us; the thing that bugs me most about NIMBYs is not their lack of selflessness but their failure to understand that everyone, including themselves, would benefit from the actions they are trying to block. For example, NIMBYs objecting to high-rise apartment buildings seem to me to be mistaken in their belief that such buildings would decrease their welfare: the lack of these apartment buildings will make it harder for many people to find housing, which exacerbates problems of homelessness and local poverty, which decreases living standards for almost everyone living in that area (incl. those who have the comfort of a spacious family house, unless they are amongst the minority who enjoy or don't mind living in the midst of preventable poverty and, possibly, heightened crime). It is a stubborn blindness to arguments of that kind and an unwillingness to consider common, longer-term needs over short-term, narrowly construed self-interests that form the core characteristic of local NIMBYs in my mind. 

The situation seems to be different for the cosmic NIMBYs you describe. I might well be working with an unrepresentative sample, but most of the people I know/have read who consciously reject cosmic YIMBYism do so not primarily on grounds of narrow self-interest but for moral reasons (population ethics, non-consequentialist ethics, etc) or empirical reasons (incredibly low tractability of today's efforts to influence the specifics about far-future worlds; fixing present/near-future concerns as the best means to increase wellbeing overall, including in the far future). I would be surprised if local NIMBYs were motivated by similar concerns, and I might actually shift my assessment of local NIMBYism if it turned out that they are. 

Thanks for this really thoughtful engagement! I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought, this is useful to hear. Perhaps I failed to realise how controversial and provocative these ideas would be after playing with them myself and with a few relatively similar people. Onto the substance:

  • That makes sense to me that the analogy is a bit weak, I think I mostly agree. I think the strongest part of the analogy to me is less the NIMBYs themselves and more who is politically empowered (a smaller group that is better coordinated - and actually existing - than the larger group of possible beneficiaries). Maybe I should have foregrounded this more actually.
  • Re space expansion/colonisation, yeah I don't have much idea about how all this would work, so it is intuition-based. It is interesting, I think, how people have such different intuitive reactions to space expansion: roughly, pro-market, pro-"progress", technologist, capitalist types (partially including me) pattern match space exploration to other things they like and so intuitively like it. Whereas environmentalists, localists, post-colonialists, social justice-oriented people, degrowthers etc (also partially including me, but to a lesser extent probably) are intuitively pretty opposed. But I think it is reasonable to at least be worried about the socio-political consequences of a space focus - not at all sure how it would play out and I am probably somewhat more optimistic than you, but yes your worries seem plausible.
  • I completely agree there are far too few people working on x-risks, and that there should be far more, and collapse is dangerous and scary, and that we are very much not out of the woods and things could go terribly. I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best' (granted re your previous post that this may not make sense). I'm not sure if this is what you had in mind, but I think there is some significance to risk-averse decision-making principles, where maybe avoiding extinction is especially important even compared to building (an even huger) utopia. So I think I have less clear views on what practically is best for people like me to be doing (for now I will continue to focus on catastrophic and existential risks). But I still think in principle it could be reasonable to focus on making a great future even larger and greater, even if that is unlikely. Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.

As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic, I think otherwise it would be a lot easier to dismiss prematurely ideas I disagree with or find uncomfortable. So in other words I'm really glad I know you :)

First two points sound reasonable (and helpfully clarifying) to me!

I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best'

I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I'm not sure I'm able to exactly point at how these notions play into our intuitions and views on the topic - maybe it's something about me ignoring the [(super-high payoff of larger future)*(super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion more readily? 

That said, I fully agree that "something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best' ". To figure out which option is best, we'd need to somehow compare their respective scores on importance, neglectedness, and tractability... I'm not sure actually figuring that out is possible in practice, but I think it's fair to challenge the claim that "action X is best because it is very important and neglected and moderately tractable" regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).

Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.

I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I'm resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should go thinking about making best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for the eventuality.

(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don't know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit - something for myself to think about more, since this may or may not be a concerning bias.)

I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought, this is useful to hear. [...] As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic, I think otherwise it would be a lot easier to dismiss prematurely ideas I disagree with or find uncomfortable.

+100 :)

  1. ^

    (This is not to say that it might not make sense for one or a few individuals to think about the company's mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it's not sustainable to preserve the company in a day-to-day survival strategy forever; but I think that's different from asking these individuals to paint a best-case future to be prepared to make a good outcome even better.)

That makes sense, yes perhaps there are some fanaticism worries re my make-the-future-large approach even more so than x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said I think not all work like this need be fanatical - e.g. improving international cooperation and treaties for space exploration could be good in more frames (and bad in some frames you brought up, granted).

I don't know lots about it, but I wonder if you prefer more of a satisficing decision theory where we want to focus on getting a decent outcome rather than necessarily the best (e.g. Bostrom's 'Maxipok' rule). So I think not wholeheartedly going for maximum expected value isn't a sign of irrationality, and could reflect different, sound, decision approaches.

Executive summary: Cosmic NIMBYs, who prefer a smaller future with higher average welfare, could greatly limit the value of the future according to total utilitarianism, but there are reasons to believe a larger future may not necessarily reduce average welfare.

Key points:

  1. Cosmic NIMBYs are people who prefer a smaller future with fewer people to maintain higher welfare for existing people, analogous to traditional NIMBYs opposing local development.
  2. If cosmic NIMBYs are successful, it could greatly reduce the value of the future according to total utilitarianism and scope-sensitive axiologies.
  3. The repugnant conclusion suggests a possible tradeoff between a large future and high average welfare, but there are also reasons a larger future may not reduce average welfare.
  4. Longtermist implications include supporting pro-expansion space policies, focusing on making the future large in addition to reducing existential risk, and navigating the tension between advocating for expansion versus enabling a Molochian future.
  5. More thinking is needed to avoid the failure modes of a future that is either too small with high average welfare or too large with near-zero average welfare.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
