tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios which lock us into a world at least as bad as now that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.
I thought it might be interesting to other EAs why I do not feel very on board with longtermism, as longtermism is important to a lot of people in the community.
This post is about the worldview called longtermism. It does not describe a position on cause prioritisation. It is very possible for causes commonly associated with longtermism to be relevant under non-longtermist considerations.
I structured this post by crux and highlighted what kind of evidence or arguments would convince me that I am wrong, though I am keen to hear about others which I might have missed! I usually did not investigate my cruxes thoroughly. Hence, only ‘probably’ not a longtermist.
The quality of the long-term future
1. I find many aspects of utilitarianism uncompelling.
You do not need to be a utilitarian to be a longtermist. But I think depending on how and where you differ from total utilitarianism, you will probably not go ‘all the way’ to longtermism.
I very much care about handing the world off in a good state to future generations. I also care about people’s wellbeing regardless of when it happens. What I value less than a total utilitarian is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks (but I very much care about its short-term impact), although that depends on how good and long I expect the future to be (see below).
What would convince me otherwise:
I not only care about pursuing my own values, but I would like to ensure that other people’s reflected values are implemented. For example, if it turned out that most people in the world really care about increasing the human population in the long term, I would prioritise it much more. However, I am a bit less interested in the sum of individual preferences and more in the preferences of a wide variety of groups. This gives more weight to rarer worldviews and avoids rewarding one group for outbreeding others or spreading its values in an imperialist fashion.
I also want to give the values of people who are suffering the most more weight. If they think the long-term future is worth prioritising over their current pain, I would take this very seriously.
Alternatively, convincing me of moral realism and the correctness of utilitarianism within that framework would also work. So far I have not seen a plain language explanation of why moral realism makes any sense, but it would probably be a good start.
If the world suddenly drastically improved and everyone had as good a quality of life as my current self, I would be happy to focus on making the future big and long instead of improving people’s lives.
2. I do not think humanity is inherently super awesome.
A recurring theme in a lot of longtermist worldviews seems to be that humanity is wonderful and should therefore exist for a long time. I do not consider myself a misanthrope, I expect my views to be average for Europeans. Humanity has many great aspects which I like to see thrive.
But I find the overt enthusiasm for humanity most longtermists seem to have confusing. Even now, humanity is committing genocides, letting millions of people die of hunger, enslaving and torturing people as well as billions of factory-farmed animals. I find this hard to reconcile with a “humanity is awesome” worldview.
A common counterargument to this seems to be that these are problems, but we have just not gotten around to fixing them yet. That humans are lazy, not evil. This does not compel me. I not only care about people living good lives, I also care about them being good people. Laziness is no excuse.
Right now, we have the capacity to do more. Mostly, we do not. Few people who hear about GiveWell-recommended charities decide to donate a significant amount of their income. People take intercontinental tourist flights despite knowing about climate change. Many eat meat despite having heard of conditions on factory farms. Global aid is a tiny proportion of most developed countries’ budgets. These examples are fairly cosmopolitan, but I do not consider this critical.
Taken one at a time, you can quibble with these examples. Sometimes people actually lack the information. They can have empirical disagreements or different moral views (e.g. not considering animals to be sentient). Sometimes they triage and prioritise other ways of doing good. I am okay with all of these reasons.
But in the end, it seems to me that many people have plenty of resources to do better and yet there are still enormous problems left. It is certainly great if we set up better systems in the future to reduce misery and have the right carrots and sticks in place to get people to behave better. But I am unenthusiastic about a humanity which requires these to behave well.
This also makes me reluctant to put much weight on the idea that helping people is equally valuable regardless of when it happens. That is only true if people in the future are as morally deserving as people are today.
Or putting this differently: if humans really were so great, we would not need to worry about all these risks to the future. They would solve themselves.
What would convince me otherwise:
I would be absolutely thrilled to be wrong about how moral people are where I live! Admittedly, I find it hard to think of plausible evidence as it seems to be in direct contradiction to the world I observe. Maybe it is genuinely a lack of information that stops people from acting better, as e.g. Max Roser from Our World in Data seems to believe. Information campaigns having large effects would be persuasive.
I am unfamiliar with how seriously people take their moral obligations in other places and times. Maybe the lack of investment I see is a local aberration.
Even though this should not have an impact on my worldview, I would probably also feel more comfortable with the longtermist idea if I saw a stronger focus on social or medical engineering to produce (morally) better people within the longtermist community.
3. I am unsure whether the future will be better than today.
In many ways, the world has gotten a lot better. Extreme poverty is down and life expectancy is up. Fewer people are enslaved. I am optimistic about these positive trends continuing.
What I feel more skeptical of is how much of the story these trends tell. While probably most people agree that having fewer people starve and die young is good, there are plenty of trends which get lauded by longtermists which others might feel differently about, for example the decline in religiosity. Or they can put weight on different aspects. Someone who values animals in factory farms highly might not think the world has improved.
I am concerned that seeing the world as improving is dependent on a worldview with pretty uncommon values. Using the lens of Haidt’s moral foundations theory it seems that most of the improvements are in the Care/harm foundation, while the world may not have improved according to other moral foundations like Loyalty/betrayal or Sanctity/degradation.
Also, I expect many world improvements to peter out before they become negative. But I am worried that some will not. For example, I think increased hedonism and individualism have both been good forces, but if overdone I would consider them to make the world worse, and it seems to me we are either almost or already there.
I am generally concerned about trends to overshoot their original good aim by narrowly optimising too much. Optimising for profit is the clearest example. I wrote a bit more about this here.
If the world is not better than it was in the past, extrapolating towards expecting an even better future does not work. For me this is another argument for focusing on making the future good instead of long or big.
On a related note, while this is not an argument which deters me from longtermism, the fact that some longtermists look forward to futures which I consider to be worthless (e.g. the hedonium shockwave) puts me off. Culturally, many longtermists seem to favour more hedonism, individualism and techno-utopianism than I would like.
What would convince me otherwise:
I am well aware lots of people are pessimistic about the future because they get simple facts about how the world has been changing wrong. Yet I am interested in learning more about how different worldviews lead to perceiving the world as improving or not.
The length of the long-term future
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
Or looking at it differently, people working on existential risks spent some years convincing me that existential risks are pretty big. Switching from that argument for working on existential risks to longtermism, which requires reaching existential security, gives me a sense of whiplash.
See also this shortform post on the topic. One argument brought up there is the Lindy rule, pointing out that self-propagating systems have existed for billions of years so we can expect this length again. But I do not see why self-propagating systems should be the baseline, I am only interested in applying the Lindy rule to a morally worthwhile human civilisation which has been rather short in comparison.
I am also not keen to base decisions on rough expected value calculations in which the assessment of the small probability is uncertain and the expected value is the primary argument (as opposed to a more ‘cluster thinking’ based approach). I am not in principle opposed to such decisions, but my own track record with them is very poor: the predicted expected value from back-of-the-envelope calculations does not materialise.
I also have traditional Pascal’s mugging type concerns for prioritizing the potentially small probability of a very large civilisation.
What would convince me otherwise:
I would appreciate solid arguments on how humanity could reach existential security.
The ability to influence the long-term future
I am unconvinced that people can reliably have a positive impact which persists further into the future than 100 years, maybe within a factor of 3. But there is one important exception: if we have the ability to prevent or shape a “lock-in” scenario within this timeframe. By lock-in I mean anything which humanity can never escape from. Extinction risks are an obvious example; another is permanent civilisational collapse.
I am aware that Bostrom’s canonical definition of existential risks includes both of these lock-in scenarios, but it also includes scenarios which I consider to be irrelevant (failing to reach a transhumanist future), which is why I am not using the term in this section.
Thinking we cannot reliably impact the world for more than several decades, I do not find working on cause areas like ‘improving institutional decision-making’ compelling except for their ability to shape or prevent a lock-in in that timeframe.
I am also only interested in lock-in scenarios which would be as bad or worse than the current world, or maybe not much better. I am not interested in preventing a future in which humans just watch Netflix all day - it would be pretty disappointing, but at least better than a world in which people routinely starve to death.
At the moment, I do not know enough about the probabilities of a range of bad lock-in scenarios to judge whether focusing on them is warranted under my worldview. If this turns out to be the case on further investigation, I could imagine describing my worldview as longtermist when pushed, but I expect I would still feel a cultural disconnect with other longtermists.
If there are no options to avoid or shape bad lock-in scenarios within the next few decades, I expect improving the world with “traditional neartermist” approaches is best. My views here are very similar to Alexander Berger’s which he laid out in this 80,000 Hours podcast.
What would convince me otherwise:
If there have been any intentional impacts for more than a few hundred years out, I would be keen to know about them. I am familiar with Carl’s blogposts on the topic.
I expect to spend some time investigating this crux soon: if there are bad lock-in scenarios on the horizon which we can avoid or shape, that would likely change my feelings on longtermism.
Given that this is an important crux, one might well consider it premature for me to draw conclusions about my worldview already. But my other views seem sufficiently different to most of the longtermist views I hear that they were hopefully worth laying out regardless.
If anyone has any resources they want to point me to which might change my mind, I am keen to hear about them.
Thanks to AGB and Linch Zhang for providing comments on a draft of this post.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Thanks a lot for sharing this, Denise. Here are some thoughts on your points.
Hey, great post, I pretty much agree with all of this.
My caveat is: One aspect of longtermism is that the future should be big and long, because that's how we'll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that's where the most moral value will be, even in expectation.
The more strongly you believe that humanity is not inherently super awesome, the more important that latter view seems to be. It's not "moral value" in the sense of positive utility, it's "moral value" in the sense of lives that can potentially be affected.
For example, you write:
And I agree! But where you seem to be implying "the future will only be stable under totalitarianism, so it's not really worth fighting for", I would argue "the future will be stable under totalitarianism, so it's really important to fight totalitarianism in particular!" An overly simplistic way of thinking about this is that longtermism is (at least in public popular writing) mostly concerned with x-risk, but under your worldview, we ought to be much more concerned about s-risk. I completely agree with this conclusion, I just don't think it goes against longtermism, but that might come down to semantics.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It's pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It's a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber. So (again, totally guessing), many people have decided to just talk about x-risk, but use it as a way to advocate for getting talent and funding into AI Safety, which was the real goal anyway.
On a final note, if we take flavors of your view with varying degrees of extremity, we get, in order of strength of claim:
Some of these strike me as way too strong and unsubstantiated, but regardless of what we think object-level, it's not hard to think of reasons these views might be under-discussed. So I think what you're really getting at is something like, "does EA have the ability to productively discuss info-hazards". And the answer is that we probably wouldn't know if it did.
I’m pretty sure that risks of scenarios a lot broader and less specific than extended intergalactic torture chambers count as s-risks. S-risks are defined as merely “risks of astronomical suffering.” So the risk of having, for example, a sufficiently extremely large future with a small but nonzero density of suffering would count as an s-risk. See this post from Tobias Baumann for examples.
To be clear, by "x-risk" here, you mean extinction risks specifically, and not existential risks generally (which is what "x-risk" was coined to refer to, from my understanding)? There are existential risks that don't involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.
Ah, yes, extinction risk, thanks for clarifying.
Thanks for this! Quick thoughts:
On your second bullet point what I would add to Carl's and Ben's posts you link to is that suffering is not the only type of disvalue or at least "nonvalue" (e.g. meaninglessness comes to mind). Framing this in Haidt's moral foundations theory, suffering is only addressing the care/harm foundation.
Also, I absolutely value positive experiences! More so for making existing people happy, but also somewhat for creating happy people. I think I just prioritise it a bit less than the longtermists around me compared to avoiding misery.
I will try to respond to the s-risk point elsewhere.
Thanks! I'm not very familiar with Haidt's work, so this could very easily be misinformed, but I imagine that other moral foundations / forms of value could also give us some reasons to be quite concerned about the long term, e.g.:
(This stuff might not be enough to justify strong longtermism, but maybe it's enough to justify weak longtermism--seeing the long term as a major concern.)
Oh, interesting! Then (with the additions you mentioned) you might find the arguments compelling?
Yeah this is a major motivation for me to be a longtermist. As far as I can see a Haidt/conservative concern for a wider range of moral values, which seem like they might be lost 'by default' if we don't do anything, is a pretty longtermist concern. I wonder if I should write something long up on this.
I would be interested to read this!
My recent post on Scheffler discusses some of these themes:
I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others, and you should also consider the other side: in an empty future, everyone has full freedom/autonomy and gets everything they want, no one faces injustice, no one suffers, etc..
I think most people think of the badness of extinction as primarily the deaths, not the prevented future lives, though, so averting extinction wouldn't get astronomical weight. From this article (this paper):
Curious why you think this first part? Seems plausible but not obvious to me.
I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.)
Yup, agreed that empty futures are better than some alternatives under many value systems. My claim is just that many value systems leave substantial room for the world to be better than empty.
Yeah, agreed that something probably won't get astronomical weight if we're doing (non-fanatical forms of) moral pluralism. The paper you cite seems to suggest that, although people initially see the badness of extinction as primarily the deaths, that's less true when they reflect:
I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). It would not surprise me if it were fairly common for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only mattering conditionally on existence, not as a reason to bring them into existence (compared to non-existence, not necessarily compared to another person being born, if we give up the independence of irrelevant alternatives or transitivity).
I also think a view of preference satisfaction that assigns positive value to the creation and satisfaction of new preferences is perverse in a way, since it allows you to ignore a person's existing preferences if you can create and satisfy a sufficiently strong preference in them, even against their wishes to do so.
Sorry, I should have been more explicit. You wrote "In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled", but we can also have values that would go frustrated for a very long time too if we don't go extinct, and including even in a future that looks mostly utopian. I also think it's likely the future will contain misery.
That's fair. From the paper:
It is worth noting that this still doesn't tell us how much greater the difference between total extinction and a utopian future is compared to an 80% loss of life in a utopian future. Furthermore, people are being asked to assume the future will be utopian ("a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier than they are today."), which we may have reason to doubt.
When they were just asked to consider the very long-term consequences in the salience condition, only about 50% in the UK sample thought extinction was uniquely bad and <40% did in the US sample. This is the salience condition:
They were also not asked their views on futures that could be worse than now for the average person (or moral patient, generally).
Fair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.)
(Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have autonomy. But I also imagine that many people would see extinction as a bad affront to the autonomy that future people otherwise would have had, since extinction would be choosing for them that their lives aren't worthwhile.)
This seems like more than enough to support the claim that a wide variety of groups disvalue extinction, on (some) reflection.
I think you're generally right that a significant fraction of non-utilitarian views wouldn't be extremely concerned by extinction, especially under pessimistic empirical assumptions about the future. (I'd be more hesitant to say that many would see it as an actively good thing, at least since many common views seem like they'd strongly disapprove of the harm that would be involved in many plausible extinction scenarios.) So I'd weaken my original claim to something like: a significant fraction of non-utilitarian views would see extinction as very bad, especially under somewhat optimistic assumptions about the future (much weaker assumptions than e.g. "humanity is inherently super awesome").
Re: the dependence on future existence concerning the values of "freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.," I think that most of these (freedom/autonomy, relationships, truth/discovery) are considered valuable primarily because of their role in "the good life," i.e. their contribution to individual wellbeing (as per "objective list" theories of wellbeing), so the contingency seems pretty clear here. Much less so for the others, unless we are convinced that people only value these instrumentally.
Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?
(In other words, "objective list" theories of well-being (if they hold some lives to be better than neutral) + transitivity seem to imply that creating good lives is possible and valuable, which implies (*) is false. People with these theories of well-being could avoid that conclusion by (a) rejecting that some lives are better than neutral, or (b) by rejecting transitivity. Do they?)
I mostly meant to say that someone who otherwise rejects totalism would agree to (*), so as to emphasize that these diverse values are really tied to our position on the value of good lives (whether good = virtuous or pleasurable or whatever).
Similarly, I think the transitivity issue has less to do with our theory of wellbeing (what counts as a good life) and more to do with our theory of population ethics. As to how we can resolve this apparent issue, there are several things we could say. We could (as I think Larry Temkin and others have done) agree with (b), maintaining that 'better than' or 'more valuable than' is not a transitive relation. Alternatively, we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P." In that case, we might deny that "a meh life is just as valuable as [or more/less valuable than] nonexistence " is meaningful, since there's no one for whom it is more valuable (assuming we reject comparativism, the view that things can be better or worse for merely possible persons). Michael St. Jules is probably aware of better ways this could be resolved. In general, I think that a lot of this stuff is tricky and our inability to find a solution right now to theoretical puzzles is not always a good reason to abandon a view.
Hm, I can't wrap my head around rejecting transitivity.
Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood.
Yeah I’m not totally sure what it implies. For consequentialists, we could say that bringing the life into existence is itself morally neutral; but once the life exists, we have reason to end it (since the life is bad for that person, although we’d have to make further sense of that claim). Deontologists could just say that there is a constraint against bringing into existence tortured lives, but this isn’t because of the life’s contribution to some “total goodness” of the world. Presumably we’d want some further explanation for why this constraint should exist. Maybe such an action involves an impermissible attitude of callous disregard for life or something like that. It seems like there are many parameters we could vary but that might seem too ad hoc.
Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views, i.e. I think in the language you used: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value, but the disvalue of pain is not contingent in this way. I think you should be able to directly apply that to the other objective list theories you discuss, instead of just hedonistic (pleasure-pain) ones.
An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This gives you the unfortunate situation that you can't straightforwardly compare different worlds with different population sizes. I don't know enough about the literature to say how people deal with this. I think there's some long work in the works that's trying to make this version work and that also tries to make "creating new suffering people is bad" work at the same time.
I think some people probably do think that they are comparable but reject that some lives are better than neutral. I expect that that's rarer though?
Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options.
If you accept transitivity and the independence of irrelevant alternatives, instead of having the flourishing life better than none, you could have a principled antinatalism:
meh life < good life < flourishing life ≤ none,
although this doesn't follow.
Thanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)
I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.
I think the word "totalitarianism" is pulling too much weight here. I'm sympathetic to something like "existential security requires a great combination of preventative capabilities and civilizational resilience." I don't see why that must involve anything as nasty as totalitarianism. As one alternative, advances in automation might allow for decentralized, narrow, and transparent forms of surveillance--preventing harmful actions without leaving room for misuse of data (which I'd guess is our usual main concern about mass surveillance).
(Calling something "soft totalitarianism" also feels a bit odd, like calling something "mild extremism." Totalitarianism has historically been horrible in large part because it's been so far from being soft/moderate, so sticking the connotations of totalitarianism onto soft/moderate futures may mislead us into underestimating their value.)
I don't see how traditional Pascal's mugging type concerns are applicable here. As I understand them, those apply to using expected value reasoning with very low (subjective) probabilities. But surely "humanity will last with at least our current population for as long as the average mammalian species" (which implies our future is vast) is a far more plausible claim than "I'm a magical mugger from the seventh dimension"?
Thanks for this post I am always interested to hear why people are sceptical of longtermism.
If I were to try to summarise your view briefly (which is helpful for my response) I would say:
I’m going to accept 1, 2 as your personal values and I won’t try to shift you on them. I don’t massively disagree on point 3.
I’m not sure I completely agree on point 4 but I can perhaps accept it as a reasonable view, with a caveat. Even if the future isn’t very long in expectation, surely it is kind of long in expectation? Like probably more than a few hundred years? If this is the case, might it be better to be some sort of “medium-termist” as opposed to a “traditional neartermist”. For example, might it be better to tackle climate change than to give out malarial bednets? I’m not sure if the answer is yes, but it’s something to think about.
Also, as has been mentioned, if we can only have long futures under totalitarianism, which would be terrible, might we want to reduce risks of totalitarianism?
Moving onto point 5 and lock-in scenarios. Firstly, I do realise that the constellation of your views means that the only type of x-risk you are likely to care about is s-risks, so I will focus on lock-in events that involve vast amounts of suffering. With that in mind, why aren’t you interested in something like AI alignment? Misaligned AI could lock in vast amounts of suffering. We could also create loads of digital sentience that suffers vastly. And all this could happen this century. We can’t be sure of course, but it does seem reasonable to worry about this given how high the stakes are and the uncertainty over timelines. Do you not agree? There may also be other s-risks with potential lock-ins in the nearish future, but I’d have to read more.
My final question, still on point 5, is why don’t you think we can affect probabilities of lock-in events that may happen beyond the next few decades? What about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values? These are all things that many EAs think can be credible longtermist interventions and could reasonably affect chances of lock-in (including of the s-risk kind) beyond the next few decades as they essentially increase the number of thoughtful/good people in the future or the amount of resources such people have at their disposal. Do you disagree?
Thanks for trying to summarise my views! This is helpful for me to see where I got the communication right and where I did not. I'll edit your summary accordingly where you are off:
Thanks for that. To be honest, I would say the inaccuracies are down to sloppiness on my part rather than any lack of clarity in your communication. Having said that, none of your corrections change my view on anything else I said in my original comment.
"If there have been any intentional impacts for more than a few hundred years out"
There have been a number of stabilizing religious institutions which were built for exactly this purpose, both Jewish and Christian. They intended to maintain the faiths of members and peace between them, and have been somewhere between very and incredibly successful in doing so, albeit imperfectly. Similarly, Temple-era Judaism seems to have managed a fairly stable system for several hundred years, including rebuilding the Temple after its destruction. We also have the example of Chinese dynasties and at least several European monarchies which intended to plan for centuries, and were successful in doing so.
But given the timeline of "more than a few hundred years out," I'm not sure there are many other things which could possibly qualify. On a slightly shorter timescale, there are many, many more examples. The US government seems like one example - an intentionally built system which lasted for centuries and spawned imitators which were also largely successful. But on larger and smaller scales, we've seen 200+ year planning be useful in many, many cases where it occurred.
The question of what portion of such plans worked out is a different one, and a harder one to answer, but it's obviously a minority. I'm also unsure whether there are meaningful differentiators between cases where it did and didn't work, but it's a really good question, and one that I'd love to see work on.
Thank you everyone for the many responses! I will address one point which came up in multiple comments here as a top-level comment, and otherwise respond to comments.
Regarding the length of the long-term future: My main concern here is that it seems really hard to reach existential security (i.e. extinction risks falling to smaller and smaller levels), especially given that extinction risks have been rising in recent decades. If we do not reach existential security, the expected future population is accordingly much smaller and gets less weight in my considerations. I take concerns around extinction risks seriously - but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritize them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.
The line about totalitarianism is not central for me. Oops. Clearly should not have opened the section with a reference to it.
I think even with totalitarianism reaching existential security is really hard - the world would need to be permanently locked into a totalitarian state.
I recommend reading this shortform discussion on reaching existential security.
Something that stood out to me in that discussion (in a comment by Paul Christiano: "Stepping back, I think the key object-level questions are something like 'Is there any way to build a civilization that is very stable?' and 'Will people try?' It seems to me you should have a fairly high probability on 'yes' to both questions."), as well as in Toby's EAG Reconnect AMA, is how much of the belief that we can reach existential security might be based on a higher level of baseline optimism about humanity than I have.
This is just a note that I still intend to respond to a lot of comments, but I will be slow! (I went into labour as I was writing my previous batch of responses and am busy baby cuddling now.)
I think you mean to say 'existential risk' rather than 'extinction risk' in this comment?
Something I didn't say in my other comment is that I do think the future could be very, very long under a misaligned AI scenario. Such an AI would have some goals, and it would probably be useful to have a very long time to achieve those goals. This wouldn't really matter if there was no sentient life around for the AI to exploit, but we can't be sure that this would be the case as the AI may find it useful to use sentient life.
Overall I am interested to hear your view on the importance of AI alignment as, from what I've heard, it sounds like it could still be important even taking your various views into account.
I don’t understand. It seems that you could see the value of the long-term future as unrelated to the probability of x-risk. Then, the more you value the long-term future, the more you value reducing x-risk.
I think a sketch of the story might go: let’s say your value for reaching the best final state of the long term future is "V".
If there's a 5%, 50%, or 99.99% risk of extinction, that doesn’t affect V (but might make us sadder that we might not reach it).
Generally (assuming that x-risk can be practically reduced), the higher your value of V, the more likely you are to work on x-risk.
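To put this as a minimal expected-value sketch (my notation, not the commenter's): write Δp for the reduction in extinction probability an intervention achieves, so the expected value of working on x-risk is roughly

```latex
\mathrm{EV}(\text{work on x-risk}) \approx \Delta p \cdot V
```

For any fixed achievable Δp, this scales with V but does not depend on the baseline level of risk itself, which only matters insofar as it affects how large a Δp is attainable.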
It seems like this explains why the views are correlated, “extinction risks are rising so much, we must prioritize them!” and “there is lots of value in the long-term future”. So these views aren't a contradiction.
Am I slipping in some assumption or have I failed to capture what you envisioned?
I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have 2 disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:
I'm not Denise, but I agree that we can and will all affect the long-term future. The children we have or don't have, the work we do, the lives we save, will all affect future generations.
What I'm more skeptical about is the claim that we can decide /how/ we want to affect future generations. The Bible has certainly had a massive influence on world history, but it hasn't been exclusively good, and the apostle Paul would have never guessed how his writing would influence people even a couple hundred years after his death.
If by “decide” you mean control the outcome in any meaningful way, I agree that we cannot. However, I think it is possible to make a best-effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few).
For a great exploration of this topic I refer to this talk by Nick Bostrom: http://www.stafforini.com/blog/bostrom. The tl;dr is that we can come up with evaluation functions for states of the world that, while not yet being our desired outcome, are indications that we are probably moving in the right direction. We can then figure out how we get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we can measure them with our evaluation function.
To me that doesn't sound very different from "I want a future with less suffering, so I'm going to evaluate my impact based on how far humanity gets towards eradicating malaria and other painful diseases". Which I guess is consistent with my views but doesn't sound like most long-termists I've met.
Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function of course, but not every parameter works. If you read the analogy to chess in the link I posted it will become much clearer how Bostrom is thinking about this.
(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)
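For readers who find the chess analogy abstract, here is a toy sketch of the evaluation-function idea. All names, weights and states below are hypothetical illustrations of mine, not taken from Bostrom's talk:

```python
# Score candidate world-states by indicators thought to track a better
# future, then steer greedily toward the best reachable next state
# (like a chess engine applying its evaluation one move at a time).
# The indicator names and weights are purely illustrative.

WEIGHTS = {"technology": 0.3, "collaboration": 0.4, "wisdom": 0.3}

def score(state):
    """Weighted sum of indicator levels, each assumed to lie in [0, 1]."""
    return sum(w * state.get(k, 0.0) for k, w in WEIGHTS.items())

def pick_next(candidate_states):
    """Greedy steering: pick the highest-scoring reachable state."""
    return max(candidate_states, key=score)
```

The paper-clip example above corresponds to scoring on a parameter that simply does not appear in the function: a state full of paper clips but low on the chosen indicators would score poorly.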
For similar moral views (asymmetric, but not negative utilitarian), this paper might be of interest:
Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (also on the EA Forum). See especially section 6 (maybe after watching the talk, instead of reading the paper, since the paper gets pretty technical).
This is a link collection for content relevant to my post published since, for ease of reference.
Focusing on the empirical arguments to prioritise x-risks instead of philosophical ones (which I could not be more supportive of):
Carl Shulman’s 80,000 Hours podcast on the common sense case for existential risk
Scott Alexander writing about the terms long-termism and existential risks
On the definition of existential risk (as I find Bostrom’s definition dubious):
Linch asking how existential risk should be defined
Based on this comment thread in a different question by Linch
Zoe’s paper which also has other stuff I have not yet read in full
How GCBRs could remain a solved problem, thereby getting us closer to existential security:
Thanks for this clear write-up in an important discussion :)
I'm not sure where exactly my own views lie, but let me engage with some of your points with the hope of clarifying my own views (and hopefully also help you or other readers).
You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes.
What do you think about the preferences of future people? You seem to take the "rather make people happy than to make happy people" point of view on population ethics, but future preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important.
That said, you clearly do care about the shape of the future of humanity. Whether people have freedom, whether people suffer, whether they are morally righteous, etc. In fact, you seem to be pretty pessimistic about humanity's future in those aspects. Also, it seems like you aren't interested in transhumanist futures - at least, not how they are usually depicted.
Some thoughts on that. But first, please let me know if (where) I was off in any of the above. Sorry if I've misinterpreted your views.
What do you mean by this?
OP said, "I also care about people’s wellbeing regardless of when it happens." Are you interpreting this concern about future people's wellbeing as not including concern about their preferences? I think the bit about a Netflix world is consistent with caring about future people's preferences contingent on future people existing. If we accept this kind of view in population ethics, we don't have welfare-related reasons to ensure a future for humanity. But still, we might have quasi-aesthetic desires to create the sort of future that we find appealing. I think OP might just be saying that they lack such quasi-aesthetic desires.
(As an aside, I suspect that quasi-aesthetic desires motivate at least some of the focus on x-risks. We would expect that people who find futurology interesting would want the world to continue, even if they were indifferent to welfare-related reasons. I think this is basically what motivates a lot of environmentalism. People have a quasi-aesthetic desire for nature, purity, etc., so they care about the environment even if they never ground this in the effects of the environment on conscious beings.)
Perhaps you are referring to the value of creating and satisfying these future people's preferences? If this is what you meant, a standard line for preference utilitarians is that preferences only matter once they are created. So the preferences of future people only matter contingent on the existence of these people (and their preferences).
There are several ways to motivate this, one of which is the following: would it be a good thing for me to create in you entirely new preferences just so I can satisfy them? We might think not.
This idea is captured in Singer's Practical Ethics (from back when he espoused preference utilitarianism):
Good points, thanks :) I agree with everything here.
One view on how we impact the future is to ask how we would want to construct it assuming we had direct control over it. I think that this view lends more support to the points you make, and it is where population ethics feels much murkier to me.
However, there are some things that we might be able to put some credence on that we'd expect future people to value. For example, I think that it's more likely than not that future people would value their own welfare. So while it's not an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions, and it definitely points at where (a potentially enormous amount of) value lies. For instance, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising interventions there actually are).
Regarding quasi-aesthetic desires, I agree and think that this is very important to understand further. Personally, I'm confused as to whether I should value these kinds of desires (even at the expense of something based on welfarism), or whether I should think of these as a bias to overcome. As you say, I also guess that this might be behind some of the reasons for differing stances on cause prioritization.
(My views are suffering-focused and I'm not committed to longtermism, although I'm exploring s-risks slowly, mostly passively.)
Do you mean you expect all of our descendants to be wiped out, with none left? What range would you give for your probability of extinction (or unrecoverable collapse) each year?
If we colonize space and continue to expand (which doesn't seem extraordinarily unlikely), the probabilities of extinction in distant colonies become less and less correlated, and the probability of all colonies being wiped out with none left to continue expanding would decrease over time. Maybe this doesn't happen fast enough, in your view?
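The independence intuition can be sketched numerically. The per-colony numbers here are hypothetical, and full independence is an idealisation (risks like misaligned AI could remain correlated across colonies):

```python
def p_all_extinct(p_each: float, n: int) -> float:
    """Chance that all n colonies are wiped out in the same period,
    assuming each independently faces extinction probability p_each."""
    return p_each ** n

# Hypothetical numbers: even a 10% per-colony risk per period becomes
# negligible for the civilisation as a whole once colonies multiply.
for n in (1, 3, 10):
    print(n, p_all_extinct(0.1, n))
```

So under this (idealised) model, the question reduces to whether uncorrelated colonies multiply faster than correlated risks can strike.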
Thanks for the post. Here are some comments (I am confident there is considerable overlap with the other comments, but I have not read them):
Which of these represents your view more closely?
1: "I care about the future of humanity. I don't care about the long-term future of humanity as strongly as the near-term future. Therefore I wish to work on impacting the near-term future."
2: "I care about the future of humanity. I care about the long-term future of humanity as well as the near-term future, with neither form of care strictly dominating the other. However, I do not feel I can meaningfully impact the long-term future. I still care about both. Therefore I wish to work on impacting the near-term future."
3: "I care about the future of humanity. I try to care about the long-term future of humanity as well as the near-term future, with neither form of care strictly dominating the other. However, I do not feel I can meaningfully impact the long-term future. Because I feel I cannot meaningfully impact the long-term future, I also find it hard to care as much about the long-term future. Therefore I wish to work on impacting the near-term future."
Basically I'm just trying to understand how you reason between "what you care about" and "how to work towards what you care about". Utilitarians usually assume a strict distinction between action and (terminal) goals, with goals being axiomatic and action being justified by goals. Your post however seems to (in my mind) spend time justifying terminal (not instrumental) goals based on how easy or difficult it is to act in favour of them.