A short and arguably unfinished blog post that I'm sharing as part of EA strategy fortnight. There's probably a lot more to say about this, but I've sat on this draft for a few months and don't expect to have time to develop the argument much further. 

-

I understand longtermism to be the claim that positively shaping the long-term future is a moral priority. The argument for longtermism goes:

  1. The future could be extremely large;
  2. The beings who will inhabit that future matter, morally;
  3. There are things we can do to improve the lives of those beings (one of which is reducing existential risk);
  4. Therefore, positively shaping the long-term future should be a moral priority.

However, I have one core worry about longtermism, and it’s this: people (reasonably) see its adherents as power-seeking. I think this worry extends somewhat to broad existential risk reduction work, though to a much lesser degree.
 

[Image: cool planet]

Arguments for longtermism tell us something important and surprising: that there is an extremely large thing that people aren’t paying attention to. That thing is the long-term future. In some ways, it’s odd that we have to draw attention to this extremely large thing. Everyone believes the future will exist, and most people don’t expect the world to end that soon.[1]

Perhaps what longtermism actually introduces to most people is premises 2 and 3 (above) — that we might have some moral reason to take the future seriously, and that we can shape it.

In any case, longtermism seems to point to something that people already vaguely know about, or even agree with, and then to say that we have reason to try to influence that thing.

This would all be fine if everyone felt like they were on the same team; that is, if, when longtermists say “we should try and influence the long-term future”, everyone listening saw themselves as part of that “we”.

This doesn’t seem to be what’s happening. For whatever reason, when people hear longtermists say “we should try and influence the long-term future”, they hear the “we” as just the longtermists.[2]

This is worrying to them. It sounds like this small group of people making this clever argument will take control of this extremely big thing that no one thought you could (or should) control.

The only thing that could make this worse is if this small group of people were seen as somehow undeserving of more power and influence, such as relatively wealthy[3], well-educated white men. Unfortunately, many people making this argument are relatively wealthy, well-educated white men (including me).

To be clear, I think longtermists do not view accruing power as a core goal or as an implication of longtermism.[4] Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.[5]

Ironically, it seems that, because no one else is paying attention to the extremely big thing, longtermists are going to have to be the first ones to pay attention to it.

I don’t have much in the way of a solution here. I mostly wanted to point to this worry and spell it out more clearly so that those of us making the case for longtermism can at least be aware of this potential, unfortunate misreading of the idea.

  1. ^

    58% of US adults do not think we are living in “the end times”. Not super reassuring.

  2. ^

    See Torres and Crary. A Google search will also do the trick.

  3. ^

    As much as they try and make themselves less wealthy by donating a large portion of their income to charity.

  4. ^

    I think you could make the case that this is often an indirect goal, such as getting the ear of important policymakers.

  5. ^

    Except, perhaps, dictators and other ne'er-do-wells.

Comments

I'm not sure it is a full misreading, sadly. I don't think it's a fair characterization of Ord, Greaves and MacAskill (though I am kind of biased because of my pride in having been an Oxford philosophy DPhil). It would be easy to give a radical deliberative democracy spin on Will and Toby's "long reflection" ideas in particular. But all the "pivotal act" stuff coming out of certain people in the Bay sure sounds like an attempt to temporarily seize control of the future without worrying too much about actual consent. Of course, the idea (or at least Yudkowsky's original vision for "coherent extrapolated volition") is that eventually the governing AIs will just implement what we all collectively want. And that could happen! But remember that Lenin thought the state would eventually "wither away", as Marx predicted, once the dictatorship of the proletariat had taken care of building industrial socialism...

Not to mention there are, shall we say, longtermism adjacent rich people like Musk and Thiel who seem pretty plausibly power-seeking, even if they are not really proper longtermists (or at least, they are not EAs). 

(Despite all this, I should say that I think the in-principle philosophical case for longtermism is very strong. Alas, ideas can be both correct and dangerous.) 

Not to mention there are, shall we say, longtermism adjacent rich people like Musk and Thiel who seem pretty plausibly power-seeking, even if they are not really proper longtermists (or at least, they are not EAs). 

These people both seem like clear longtermists to me - they have orientated their lives around trying to positively influence the long-term future of humanity. I struggle to see any reasonable criterion by which they do not count as longtermists that doesn't also exclude almost everyone else we would normally think of as a longtermist. Even under a super parochial definition like 'people Will supports', it seems like Elon would still count!

In practice I think people's exclusionist instincts here are more tribal / political than philosophically grounded.

[anonymous]:

When has Will supported Elon?

Will attempted to support Elon's purchase of Twitter.

[anonymous]:

Meaning he tried to put him in touch with someone else who was interested in buying Twitter in case they wanted to buy it together?

(If that's what you're referring to, I think we understand "people Will supports" differently. And I can't see how it's relevant to whether or not Elon is a longtermist.)

I agree it's not relevant - I think the real test is whether someone cares a lot about future people and tries to help them, which Elon satisfies. 

These seem like reasonable points.

But all the "pivotal act" stuff coming out of certain people in the Bay sure sounds like an attempt to temporarily seize control of the future without worrying too much about actual consent.

I'm not familiar with this stuff and I'm unsure how it relates to longtermism as an idea (if at all), but yes, that would certainly be an example of power-seeking behaviour.

Here's the first hit on google for 'Yudkowsky pivotal act': https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious

And Yudkowsky has also tried to work out what looks like a template for how an AI could govern the whole world (though he gave up on the idea later): https://arbital.com/p/cev/


I also have the impression that Bostrom, in particular, is sympathetic to the idea that a single government should one day exist that takes control of all the really important stuff to ensure it is perfectly optimized: https://nickbostrom.com/fut/singleton

I'm not saying this stuff is unambiguously bad by the way: any political theorizing involves an interest in power, and it's hard to tell whether benevolent AI governance in particular would be more or less dangerous than human governments (which have done lots of bad things! even the liberal democracies!). I'm just saying you can see why it would set off alarm bells. I get the impression Bostrom and Yudkowsky basically think that it's okay to act in a fairly unilateralist way so long as the system you set up takes everyone's interests into account, which has obvious dangers as a line of thought.  

I also have the impression that Bostrom, in particular, is sympathetic to the idea that a single government should one day exist that takes control of all the really important stuff to ensure it is perfectly optimized: https://nickbostrom.com/fut/singleton

For what it's worth, my impression is that Bostrom's sympathies here are less about perfect optimization (e.g., CEV realization or hedonium tessellation) and more about existential security. (A world government singleton in theory ensures existential security because it is able to suppress bad actors, coordination disasters and collective action failures, i.e., suppress type-1, 2a and 2b threats in Bostrom's "Vulnerable World Hypothesis".)

Yeah, that's probably fair actually. This might make the view more sympathetic but not necessarily less dangerous. Maybe more dangerous, because most people will laugh you out of the room if you say we need extreme measures to make sure we fill the galaxy with hedonium, but they will take 'extreme measures are needed or we might all die' rather more seriously.

To add in some 'empirical' evidence: Over the past few months, I've read 153 answers to the question "What is your strongest objection to the argument(s) and claim(s) in the video?" in response to "Can we make the future a million years from now go better?" by Rational Animations, and 181 in response to MacAskill's TED talk, “What are the most important moral problems of our time?”.

I don't remember the concern that you highlight coming up very much, if at all. I did note "Please focus on the core argument of the video — either 'We can make future lives go better', or the framework for prioritising pressing problems (from ~2mins onwards in either video)", but I still would have expected this objection to come up a bunch if it was a particularly prevalent concern. For example, I got quite a lot of answers commenting that people didn't believe it was fair/good/right/effective/etc to prioritise issues that affect the future when there are people alive suffering today, even though this isn't a particularly relevant critique of the core argument of either of the videos.

If someone wanted to read through the dataset and categorise responses or some such, I'd be happy to provide the anonymised responses. I did that with my answers from last year, which were just on the MacAskill video and didn't have the additional prompt about focusing on the core argument, but probably won't do it this year.

(This was as part of the application process to Leaf's Changemakers Fellowship, so the answers were all from smart UK-based teenagers.)

Thanks! That question seems like it might exclude the worry I outlined, but this is still something of an update.

This doesn’t seem to be what’s happening. For whatever reason, when people hear longtermists say “we should try and influence the long-term future”, they hear the “we” as just the longtermists.[2]

 

Hmm. I want to distinguish between two potential concerns folks who disagree with longtermism could have:

  1. Longtermists are using this weird ideology to gain a bunch of power and influence in the present day, this is bad because we have ideological problems with longtermism (e.g. strong person affecting intuitions) or because we think this is distracting from more important problems.
  2. Longtermists are correct to point out that the far future is big and has untapped power. The problem is that we have value differences from longtermists / think they are an insular group of elites, and 'ceding control' over the future to this group is bad.
    1. FWIW I am very sympathetic to this, as are probably a bunch of people with longtermist intuitions?

I think (1) is definitely a vein of concern that I've heard expressed a lot by critics of longtermists.

But it sounds like you are claiming that (2) is also a strong undercurrent of concern? If so, can you point to evidence of this?

I'm claiming they're intertwined. I think the problem that people have with longtermism that makes them feel concern 1 is that longtermists seem to be an insular group of elites with views they disagree with (maybe consequentialism). I'm not sure there are two veins of concern here, so I'm using the same evidence, and maybe just picking up on some other vibes (i.e. critics aren't mad because they have person-affecting views, they're mad because they think no small, insular group of people should have a lot of influence).

Hmm, interesting take! Thanks for clarifying your thesis :)

Hi Ollie, thanks for sharing your thoughts here. A lot has already been covered in the comments, so perhaps some unexplored points here:

  1. Most moral and political ideologies at some point imply power-seeking? Highly revolutionary leftist ideologies imply, well... revolution. Conservative ideologies at some level imply gaining enough power to conserve the parts of society that are valued. After some reflection, I agree that I don't think that longtermism necessarily implies power-seeking; at least, I'm not sure it's in a different class here from other political theories.
  2. So I think what does seem to be causing this is not the philosophy but the practical, on-the-ground situation of a) longtermism growing rapidly in prominence in EA and gaining funding,[1] as well as b) EA increasing in visibility to the wider world and gaining wider influence (e.g. Toby Ord talking to the UN, EA being associated with the recent wave of concern about AI x-risk policy, the significant amount of promotion for WWOTF). From the outside, making a naive extrapolation,[2] it would appear that longtermism is on the way to becoming a lot more influential in the near future.
  3. The best examples of where this might actually be true come from some highly speculative/ambitious/dangerous ideas present in Bay Area Rationalist orgs, though I think these happened before 'longtermism' was actually a thing:[3]
    1. There are suggestions that Leverage Research had some half-baked grand plan to convert their hoped-for breakthroughs about human behaviour/psychology to lead them to be able to take over the US Government (source1, source2 - ctrl+F "take over")
    2. There are also hints at a plan from MIRI involving a "pivotal act", a plan that seems to cash out as getting researchers to develop a somewhat aligned proto-AGI and using it to take over the world and prevent any unaligned AGIs from being built (source1, source2 - not as sure what the truth is here, but I think there is some smoke)
  4. Finally, I think a big thing that contributes to this idea of EA and longtermism as inherently and/or dangerously power-seeking is framing by its ideological enemies. For instance, I don't think Crary or Torres[4] are coming at this from a perspective of 'mistake theory' - it's 'conflict theory' all the way for them. I don't think they're misreading longtermist works by accident; I think they view longtermism as inherently dangerous because of the ideological differences, and they're shouting loudly about it. This is a lot of people's first impression of longtermism/EA - and unfortunately I think it often sticks. Critically, prominent longtermists seem to have ceded this ground to their critics and don't prominently push back against it, which I think is a mistaken strategy.

Would be really interested to hear what you (or others) think about this.

  1. ^

    though again, to the best of my knowledge GH&D is still the number 1 cause area by funding

  2. ^

    especially pre-FTX collapse

  3. ^

    IMPORTANT NOTE: I don't claim to know fully what the truth behind these claims is, but they did stick in my mind while thinking about the post. I'm happy to amend/retract if provided clearer evidence by those in the know. I don't think it was likely that any of these plans had any chance of succeeding, but it still points to a concerning background trend if true.

  4. ^

    Especially Torres - who seems to be fighting a personal holy war against longtermism. I remain perplexed about what happened here, since he seems to have been an active EA concerned about x-risk for some time.

Yes, this matches what I've seen. Also, they aren't fully wrong. We are making choices and we are exercising discretion.

OllieBase - interesting points, and a useful caution.

Insofar as EA longtermism is starting to be viewed as 'power-seeking' by the general public, I think it's important for us to distinguish 'power' from 'influence'.

'Power' implies coercion, dominance, and the ability to do things that violate other people's preferences and values.  

Whereas 'influence' implies persuasion, discussion, and consensual decision-making that doesn't violate other people's interests.

Maybe we need to frame our discussion of longtermism in more 'influence' terms, e.g. 'Here's what we EAs are worried about, and what we hope for; we're painfully aware of the many unknowns that the future may bring, and we invite everybody to join the discussion of what our human future should become; this is something we all have a stake in.'

The antidote to looking like arrogant, preachy power-seekers is to act like humble, open-minded influencers.

(By contrast, the pro-AGI accelerationists are actually power-seeking, in the sense of imposing radically powerful new technologies on the rest of humanity without anybody else's understanding, input, consent, or support.)

Geoffrey, I noticed that you used the words "humanity" and "human future" when referring to what longtermism is about. Well... I noticed it because I specifically searched for the terms on the page, and yours was the only comment that used them in this way. I honestly expected there to be more uses of these descriptors.

I do find the speciesist bias in longtermism to be one thing that has always bothered me. It seems like animals are always left out of the discussion when it comes to the long-term future. Some examples I can call to mind are the name of The Future of Humanity Institute, or an OP-sponsored Kurzgesagt video inadvertently promoting wild animal s-risk on other planets.

The optics of Wytham Abbey and The Rose Garden Inn don't help with the image of longtermists being seen as power-seeking. Both got significant funding from Open Philanthropy at some point, and both primarily seem to serve to house longtermists and host longtermism events. In addition, they both seem quite aesthetically lavish, although the folks leading these projects would argue that the high-end aesthetics are for optimizing the impact of the people who work and live in these spaces.

Very eloquent. I do think the perception is justified, e.g. SBF's attempt to elect candidates to the US Congress/Senate.

This is a very interesting and provocative idea! Thank you for sharing. 

One thought: is it possible that the concern relates to innumeracy / anti-science-thinking rather than (or in addition) to doubts about any specific group (e.g. white men)? 

As in: could (part of) the concern be: "Here is a group of (very nerdy?) people trying to force us all to take critical decisions that sometimes seem bizarre, based on complex, abstract, quantitative arguments that only they understand. I'm not sure I trust them or their motives." ?

IMHO we underestimate just how abstract some of the arguments in favour of longtermism can seem to 99% of the population who have not studied it, and when this is combined with recommendations that seem quite dramatic, it isn't hard to see why people would express doubt and question the motives. 

Remember, we live in a society in which many people think climate scientists are just inventing climate change so they can get more power, and in which medical experts trying to use data to recommend strategies to fight covid frequently had their motives questioned.

Is there a chance that, despite all the healthy disagreement within the EA community, to the external world we seem like an echo-chamber, living in an (imaginary?) world in which math and science and logic can justify any position? 

I don't think most people feel that way. People learn to be suspicious when people throw lots of numbers and new ideas at them and then recommend something that doesn't seem to make sense. They think of suave car salesmen. 

If you say "give me $100 and I can save three children by buying them mosquito nets," that is tangible and simple. If you say "we should devote X% of our GDP to preventing very low-risk scenarios which could cost trillions of lives," you've lost most people. If you then tell them that you want some of their money or resources to be impacted by this, they will question your motives. The specific details of the longtermism argument may not even be relevant to their suspicion. 

This is an extremely difficult and compelling problem. 

On the one hand, I've long been rather obsessed with cosmopolitanism/pluralism/lacking fingerprints. I think lock-in is sufficiently bad that I've frequently tried to convince people to just not have positive goals at all and only have negative goals. 

On the other hand, fear of the responsibility of power doesn't necessarily hold up to scrutiny; you can Copenhagen-ethics your way out of taking literally any action ever (except for maybe, like, voting in a uniform-weighted democracy).

I think you need to have a lot of trust for complaints of this type to matter at all, i.e. trust that the complainer is deeply committed to social choice ethics and isn't making a thinly veiled bid for power themselves that just happens to not be going very well at this point in time. We shouldn't expect solutions to the whole "losers or minorities are incentivized to make appeals to sanctity of process/structure/institutional integrity, but you never know if they'll care just as much about that stuff if they ever pivot to winning or being a majority" problem to magically appear because of machine learning progress, right? 


Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.

Someone privately gave me the feedback that it should probably be "on behalf of everyone" and not "everyone should try and influence the long-term future" and I think I agree. This would also mean I wouldn't need the footnote.

I'm not super compelled by this shift in framing. If you have a social choice oracle that steamrolls over barriers to sufficient or satisfying aggregation, and you have excessively good UX that provides really high resolution elicitation, you can still expect to screw over voiceless moral patients. Maybe you can correct for this by just giving St Francis (or some MCE extremist) a heavier vote than he deserves under classical distributions of voice, but this causes a hundred massive problems (e.g. ordinary voters resenting the imposition of veganism, questioning the legitimacy of St Francis, how was St Francis even appointed anyway). 

I.e. I think once you try to define "behalf" or "everyone" you find that distinguishing "we should try to influence the future on behalf of everyone" from "everyone should try and influence the future" is not helpful. 

Except, perhaps, dictators and other ne'er-do-wells.

I would guess that a significant number of power-seeking people in history and the present are power-seeking precisely because they think that those they are vying for power with are some form of "ne'er-do-wells." So the original statement:

Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.

with the footnote doesn't seem to mean very much. "Everyone, except those viewed as irresponsible," historically, at least, has certainly not meant everyone, and to some people means very few people.

Yeah, as I comment below:

Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.

Someone privately gave me the feedback that it should probably be "on behalf of everyone" and not "everyone should try and influence the long-term future" and I think I agree. This would also mean I wouldn't need the footnote.

Empirically, longtermists seem pretty power-seeking in that they attempt to: spread their point of view ("community building"); increase funding towards their priorities and projects; and gain influence over laws and regulations in areas that interest them.

This might be too loose a criterion for 'power-seeking', or at least for the version of power-seeking that has the negative connotations this post alludes to. By this criterion, a movement like Students for Sensible Drug Policy would be power-seeking.

  1. They try to seed student groups and provide support for them.
  2. They have multiple buttons to donate on their webpage.
  3. They have things like a US policy council and explicitly mention policy change in their name.

Maybe it's just being successful at these things that makes the difference between generic power-seeking  and power-seeking that is perceived as alarming? 

But if I had to guess (off the top of my head), the negative associations with longtermism's alleged power-seeking come more from 1) longtermism being an ideology that makes pretty sweeping, unintuitive moral claims and 2) longtermism reaching for power among people society labels as 'elites' (e.g., ivory tower academics, politicians, and tech industry people).

I agree a more nuanced definition is probably required, or at least one that distinguishes acceptable from (possibly) unacceptable power-seeking.

I think longtermism stands out for the amount of power it has and seeks relative to the number of members of the movement, and for the fact that there isn't much consensus (across wider society) around its aims. I've not fully thought this through, but I'd frame it around democratic legitimacy.
