
A key (new-ish) proposition in EA discussions is "Strong Longtermism": the claim that the vast majority of the value in the universe lies in the far future, and that we therefore need to focus on it. This far future is often understood to be so valuable that almost any amount of preference for the long term is justifiable.

In this brief post, I want to argue that this strong claim is unnecessary, that it creates new problems which a weaker claim easily avoids, and that it should be replaced by that weaker claim. (I am far from the first to propose this.)

The 'regular longtermism' claim, as I present it, is that we should assign approximately similar value to the long-term future as we do to the short term. This is a philosophically difficult position which, I argue, is nonetheless superior to either the status quo or strong longtermism.

Philosophical grounding

The typical presentation of longtermism is that if we do not discount future lives exponentially, almost any weight placed on the future, whose value can almost certainly be massively larger than the present's, will overwhelm the value of the present. This is hard to justify intuitively - it implies that we should ignore near-term costs, and (taken to the extreme) could justify almost any atrocity in pursuit of a minuscule reduction of long-term risks.

The typical alternative is naïve economic discounting, which assumes that we should exponentially discount the far future at some finite rate. This leads to claims that a candy bar today is worth more than the entire future of humanity starting in, say, 10,000 years. This is also hard to justify intuitively.
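To see how extreme this gets, here is a rough illustration (the 5% annual discount rate and the 10^30 figure for the undiscounted value of humanity's future are assumptions chosen purely for illustration, not numbers from the post):

```python
# Rough illustration of strict exponential discounting applied to the far future.
# Both numbers below are illustrative assumptions, not figures from the post.

discount_rate = 0.05   # assumed annual discount rate
years = 10_000         # how far in the future the value accrues

discount_factor = (1 + discount_rate) ** (-years)
print(f"Discount factor after {years} years: {discount_factor:.2e}")
# ~1e-212: anything that far out is multiplied by a vanishingly small number.

future_value = 1e30    # hypothetical (undiscounted) value of humanity's entire future
print(f"Discounted value today: {future_value * discount_factor:.2e}")
# ~1e-182 -- far less than a candy bar today, which is exactly the problem.
```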

A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future. This preserves both the value of the long-term future of humanity, if positive, and the preference for the present. Lacking any strong justification for setting the balance, I will very tentatively claim they should be weighted approximately equally, but this is not critical - almost any non-trivial weight on the far future would be a large shift from the status quo towards longer-term thinking. This may be non-rigorous, but it has many attractive features.

The key question, it seems, is whether the new view is different, and/or whether the exact weights for the near and long term will matter in practice.

Does 'regular longtermism' say anything?

Do the different positions lead to different conclusions in the short term? If they do not, there is clearly no reason to prefer strong longtermism. If they do, it seems that almost all of these differences are intuitively worrying. Strong longtermism implies we should engage in much larger near-term sacrifices, and justifies ignoring near-term problems like global poverty, unless they have large impacts on the far future. Strong neartermism, AKA strict exponential discounting, implies that we should do approximately nothing about the long-term future.

So, does regular longtermism suggest less focus on reducing existential risks, compared to the status quo? Clearly not. In fact, it suggests overwhelmingly more effort should be spent on avoiding existential risk than is currently devoted to the task. It may suggest less effort than strong longtermism would, but only to the extent that we have very strong epistemic reasons for thinking that very large short-term sacrifices are effective.

What now?

I am unsure that there is anything new in this post. At the same time, it seems that the debate has crystallized into two camps, both of which I strongly disagree with: the "anti-longtermist" camp, typified by Phil Torres, who is horrified by the potentially abusive view of longtermism, and Vaden Masrani, who wrote a criticism of the idea, versus the "strong longtermism" camp, typified by Toby Ord (Edit: see Toby's comment) and Will MacAskill (Edit: see Will's comment), who seem to imply that Effective Altruism should focus entirely on longtermism. (Edit: I should now say that it turns out that this is a weak-man argument, but also note that several commenters explicitly say they embrace this viewpoint.)

Given the putative dispute, I would be very grateful if we could start to figure out as a community whether the strong form of longtermism is a tentative attempt to work out a coherent position that doesn't have potentially worrying implications, or whether it is intended as a philosophical shibboleth. I will note that my (possibly typical-mind-fallacy) view is that both sides actually endorse, or at least only slightly disagree with, my mid-point view, but I may be completely wrong.

 

  1. Note that Will has called this "very strong longtermism", but it seems unclear how a line is drawn between the very strong and strong forms, especially because the definition-based version he proposes, that human lives in the far future are equally valuable and should not be discounted, seems to lead directly to this very strong longtermist conclusion.
  2. (Edited to add:) In contrast, any split of value between near-term and long-term value completely changes the burden of proof for longtermist interventions. As noted here, given strong longtermism, we would have a clear case for any positive-expectation risk reduction measure, and the only way to refute it would be to claim that the expected risk reduction is negative. With a weaker form, we can perform cost-benefit analysis to decide whether the near-term loss is worthwhile, as sketched below.
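Schematically (a minimal formalization of this footnote, not from the original post; ΔN and ΔL are an intervention's expected near-term and long-term effects, and the 0.5 weights are the tentative ones suggested above):

```latex
\text{Strong longtermism: fund iff } \mathbb{E}[\Delta L] > 0
\qquad \text{vs.} \qquad
\text{Even split: fund iff } 0.5\,\mathbb{E}[\Delta N] + 0.5\,\mathbb{E}[\Delta L] > 0
```

Under the first rule, a near-term loss (negative ΔN) is irrelevant; under the second, it must be outweighed by the expected long-term gain, which is exactly the cost-benefit comparison described above.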
Comments (73)

The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.

The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.

(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, eg, Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much much larger issue of Nazis getting nuclear weapons.)

This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just li... (read more)

6
Davidmanheim
3y
Agreed, and that's a very good response to a position that one of the sides I critiqued has presented. But despite this and other reasons to reject their positions, I don't think the reverse theoretical claim that we should focus resources exclusively on longtermism is a reasonable one to hold, even while accepting the deontological taboo and dismissing those overwrought supposed fears.

There is nothing special about longtermism compared to any other big desideratum in this regard.

 

I'm not sure this is the case. E.g. Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because this all-important end goal justifies anything in the medium term. I haven't rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is based on something like utopia.

One reason why you might intuitively think there would be a relationship is that shorter-term impacts are typically somewhat more bounded; e.g. if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn't justify torturing hundreds of thousands of people. With the strong longtermist claims it's much less clear that there's any sort of upper bound, so to draw a firm line against atrocities you end up looking to somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn't completely absolute but yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside). 

There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!

The final conclusion here strikes me as just the sort of conclusion that you might arrive at as your real bottom line, if in fact you had arrived at an inner equilibrium between some inner parts of you that enjoy doing something other than longtermism, and your longtermist parts. This inner equilibrium, in my opinion, is fine; and in fact, it is so fine that we ought not to need to search desperately for a utilitarian defense of it. It is wildly unlikely that our utilitarian parts ought to arrive at the conclusion that the present weighs about 50% as much as our long-term future, or 25% or 75%; it is, on the other hand, entirely reasonable that the balance of what our inner parts vote on will end up that way. I am broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative. But you're just not going to end up with a utilitarian defense of that bottom line; if the future can matter at all, to the parts of us that care abstractly and according to numbers, it's going to end up mattering much more than th... (read more)

Are there two different proposals?

  1. Construct a value function = 0.5*(near-term value) + 0.5*(far-future value), and do what seems best according to that function.
  2. Spend 50% of your energy on the best longtermist thing and 50% on the best neartermist thing. (Or as a community, half of people do each.)
     

I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).

I have an intuition these lead different places – would be interested in thoughts.

Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).

9
Davidmanheim
3y
Ahh - thanks. Yes, if that is what Eliezer is proposing, my above response misunderstood him - but either I misunderstood something, or it would be inconsistent with how I understood his viewpoint elsewhere about why we want to be coherent decision makers.
7
EJT
3y
I remember Toby Ord gave a talk at GPI where he pointed out the following: Let L be long-term value per unit of resources and N be near-term value per unit of resources. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C. But the best thing to do on a 0.5*(near-term value)+0.5*(long-term value) value function is to devote 100% of resources to B. [Diagram not reproduced.]
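A minimal numeric sketch of this point (the payoff numbers for interventions A, B, and C are invented for illustration; A is the best purely near-term option, C the best purely long-term option, and B a middling option on both):

```python
# Illustrative payoffs per unit of resources: 'near' = N, 'long' = L.
interventions = {
    "A": {"near": 10.0, "long": 0.0},   # best near-term intervention
    "B": {"near": 6.0,  "long": 6.0},   # decent on both dimensions
    "C": {"near": 0.0,  "long": 10.0},  # best long-term intervention
}

def mixed_value(allocation, w_near=0.5, w_long=0.5):
    """Value of a resource allocation under the 0.5*N + 0.5*L value function."""
    near = sum(share * interventions[name]["near"] for name, share in allocation.items())
    long_ = sum(share * interventions[name]["long"] for name, share in allocation.items())
    return w_near * near + w_long * long_

split_resources = {"A": 0.5, "C": 0.5}  # proposal (2): 50% of resources to each camp
all_on_b = {"B": 1.0}                   # proposal (1): maximize the mixed value function

print(mixed_value(split_resources))  # 5.0
print(mixed_value(all_on_b))         # 6.0 -- the mixed value function prefers 100% on B
```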
9
Davidmanheim
3y
That's exactly why it's important to clarify this. The position is that the entire value of the future has no more than a 50% weight in your utility function, not that each unit of future value is worth 50% as much.

This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

Have you read Is the potential astronomical waste in our universe too small to care about? which asks the question, should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.

If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don't, that seems intuitively wrong also, analogous to a group of people who don't take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)

It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.

2
WilliamKiely
3y
I just commented on your linked astronomical waste post:
3
WilliamKiely
3y
Adding to this what's relevant to this thread, re Eliezer's model: The way I think about the 'we can't suppress and beat down our desire for ice cream' is that it's part of our nature to want ice cream, meaning that we literally can't just stop having ice cream, at least not without it harming our ability to pursue longtermist goals. (This is what I was referring to when I said above that the longtermist part of you would not be able to fulfill its end of the bargain in the world in which it turns out that the universe can support 3^^^3 ops.) And we should not deny this fact about ourselves. Rather, we should accept it and go about eating ice cream, caring for ourselves, and working on short-termist goals that are important to us (e.g. reducing global poverty even in cases when it makes no difference to the long term future, to use David's example from the OP). To do otherwise is to try to suppress and beat something out of you that cannot be taken out of you without harming your ability to productively pursue longtermist goals. (What I'm saying is similar to Julia's Cheerfully post.) I don't think this is a rationalization in general, though it can be in some cases. Rather, in general, I think it is the correct attitude to take (given a "strong longtermist" view) in response to certain facts about our human nature. The easiest way to see this is just to look at other people in the world who have done a lot of good or who are doing a lot of good currently. They have not beaten the part of themselves that likes ice cream out of themselves. As such, it is not a rationalization for you to make peace with the fact that you like ice cream and fulfill those wants of yours. Rather, that is the smart thing to do to allow you to have more cheer and motivation to productively work on longtermist goals. So I don't have any problem with the conclusion that the overwhelming majority of expected value lies in the long term future. I don't feel any need to reject this conc
2
Davidmanheim
3y
This isn't really relevant to the point I was making, but the idea that longtermism has objective long-term value while ice cream now is a moral failing seems to presuppose moral objectivism. And that seems to be your claim - the only reason to value ice cream now is to make us better at improving the long term in practice. And I'm wondering why "humans are intrinsically unable to get rid of value X" is a criticism / shortcoming, rather than a statement about our values that should be considered in maximization. (To some extent, the argument for why to change our values is about coherency / stable time preferences, but that doesn't seem to be the claim here.)
3
WilliamKiely
3y
I'm not sure I know what you mean by "moral objectivism" here. To try to clarify my view, I'm a moral anti-realist (though I don't think that's relevant to my point) and I'm fairly confident that the following is true about my values: the intrinsic value of my enjoyment of ice cream is no greater than the intrinsic value of other individuals' enjoyment of ice cream (assuming their minds are like mine and can enjoy it in the same way), including future individuals. I think we live at a time in history where our expected effect on the number of individuals that ultimately come into existence and enjoy ice cream is enormous. As such, the instrumental value of my actions (such as my action to eat or not eat ice cream) generally dwarfs the intrinsic value of my conscious experience that results from my actions. So it's not that there's zero intrinsic value to my enjoyment of ice cream, it's just that that intrinsic value is quite trivial in comparison to the net difference in value of the future conscious experiences that come into existence as a result of my decision to eat ice cream. The fact that I have to spend some resources on making myself happy in order to do the best job at maximizing value overall (which mostly looks like productively contributing to longtermist goals in my view) is just a fact about my nature. I don't see it as a criticism or shortcoming of my or human nature, just a thing that is true. So our preferences do matter also; it just happens that when trying to do the most good we find that it's much easier to do good for future generations in expectation than it is to do good for ourselves. So the best thing to do ends up being to help ourselves to the degree that helps us help future generations the most (such that helping ourselves any more or less causes us to do less for longtermism). I think human nature is such that that optimal balance looks like us making ourselves happy, as opposed to us making great sacrifices and living lives of misery
5
Davidmanheim
3y
I think I can restate your view: there is no objective moral truth, but individual future lives are equally valuable to individual present lives (I assume we will ignore the epistemic and economic arguments for now), and your life in particular has no larger claim on your values than anyone else's. That certainly isn't incoherent, but I think it's a view that few are willing to embrace - at least in part because even though you do admit that personal happiness, or caring for those close to you, is instrumentally useful, you also claim that it's entirely contingent, and that if new evidence were to emerge, you would endorse requiring personal pain to pursue greater future or global benefits.
3
WilliamKiely
3y
I think that's an accurate restatement of my view, with the caveat that I do have some moral uncertainty, i.e. give some weight to the possibility that my true moral values may be different. Additionally, I wouldn't necessarily endorse that people be morally required to endure personal pain; personal pain would just be necessary to do greater amounts of good. I think the important takeaway is that doing good for future generations via reducing existential risk is probably incredibly important, i.e. much more than half of expected future value exists in the long-term future (beyond a few centuries or millennia from now).
2
Davidmanheim
3y
I had not seen this, and it definitely seems relevant - but it's still much closer to strong longtermism than what I'm (tentatively) suggesting.
5
Davidmanheim
3y
Agreed - upon reflection, this was what wrote my bottom line, and yes, this seemed like essentially the only viable way of approaching longtermism, according to my intuitions. This also seems to match the moral intuitions of many people I have spoken with, given the various issues with the alternatives. And I didn't try to claim that 50% specifically was justified by anything - as you pointed out, almost any balance of short-termism and longtermism could be an outcome of what many humans actually embrace, but as I argued, if we are roughly utilitarian in each context with those weights, the different options lead to very similar conclusions in most contexts. And if we are willing to be utilitarian by weighting across these two preferences, I believe that any one such weighting will lead to a coherent preference ordering - which is valuable if we don't want to be Dutch booked, among other things. But I don't think that it's in some way more correct to start with "time-impartial utilitarianism is the correct objective morality," and ignore actual human intuitions about what we care about, which you seem to imply is the single coherent longtermist position, while my approach is only justified by preventing analysis paralysis - but perhaps I misunderstood.

No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.

You say:

the "strong longtermism" camp, typified by Toby Ord and Will MacAskill, who seem to imply that Effective Altruism should focus entirely on longtermism. 

They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what's best at the margin; they say in footnote 14 they're unsure whether strong longtermism would be justified if more resources were already spent on longtermism.)

In The Precipice – which is more intended that way - Toby is clear that he thinks existential risk should be seen as "a" key global priority, rather than "the only" priority. 

He also suggests the rough target of spending 0.1% of GDP on reducing existential risk, which is quite a bit less than 100%.

And he's clearly supported other issues with his life.

Will  is taking a similar approach in his new book about longtermism.

Even the most longtermist members of effective altruism typically think... (read more)

No-one says longtermist causes are astronomically more impactful.

Not that it undermines your main point, which I agree with - but a fair minority of longtermists certainly do say and believe this.

15
DM
3y

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness" (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/)

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community, I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> 'more economic growth' --> 'more innovation')

For example non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed] incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community's skills (eg. Charity Entrepreneurship incubatees probably are highly skilled 2-5 years after the program. Though I'm not sure that... (read more)

11
DM
3y

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. 

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems highly implausible that something like such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.

It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.

It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?

6
DM
3y
No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to (be justified to) achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional  degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical). Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes: Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:
9
[anonymous]
3y
Phil Trammell's point in Which World Gets Saved is also relevant:
4
Jack Malde
3y
For the record I'm not really sure about 10^30 times, but I'm open to 1000s of times. Pretty much every action has an expected impact on the future in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea on the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea of what the expected long-run impacts are and if this would even be positive or negative - I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks. If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined so it does get a bit weird...). Does that make any sense?
8
Habryka
3y
I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).
9
DM
3y
Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this  model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion). 

Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.

3
Davidmanheim
3y
I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.) But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).
7
Benjamin_Todd
3y
I was talking about the EA Leaders Forum results, where people were asked to compare dollars to the different EA Funds, and most were unwilling to say that one fund was even 100x higher-impact than another; maybe 1000x at the high end. That's rather a long way from 10^23 times more impactful.

Cool. Yeah, EA funds != cause areas. Because people may think that work done by EA funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other, which is the sort of hard-line view that one might want to mention.

Indeed, I took that survey one year, and the reason why I wouldn't put the difference at 10^23 or something extremely large like that is because there are flowthrough effects of other cause areas that still help with longtermist stuff (like, GiveWell has been pretty helpful for also getting more work to happen on longtermist stuff).

I do think that as a cause area from a utilitarian perspective, interventions that affect the longterm future are astronomically more effective than things that help the short term future but are very unlikely to have any effect on the long term, or even slightly harm the longterm.

5
Benjamin_Todd
3y
Sure, though I still think it is misleading to say that the survey respondents think "EA should focus entirely on longtermism". It seems more accurate to say something like "everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies."

To be clear, my primary reason why EA shouldn't focus entirely on longtermism is that doing so would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.

To some degree my response to this situation is "let's create a separate longtermist community, so that I can indeed invest in that in a way that doesn't get diluted with all the other things that seem relatively unimportant to me". If we had a large and thriving longtermist community, it would definitely seem bad to me to suddenly start investing into all of these other things that EA does that don't really seem to check out (to me) from a utilitarian perspective, and I would be sad to see almost any marginal resources moved towards the other causes.

I'm strongly opposed to this, and think we need to be clear: EA is a movement of people with different but compatible values, dedicated to understanding  and it's fine for you to discuss why you think longtermism is valuable, but it's not as though anyone gets to tell the community what values the community should have. 

The idea that there is a single "good" which we can objectively find and then maximize is a bit confusing to me, given that we know values differ. (And this has implications for AI alignment, obviously.) Instead, EA is a collaborative endeavor of people with compatible interests - if strong-longtermists' interests really are incompatible with most of EA, as yours seem to be, that's a huge problem - especially because many of the people who seem to embrace this viewpoint are in leadership positions. I didn't think it was the case that there was such a split, but perhaps I am wrong.

I think we don't disagree?

I agree, EA is a movement of different but compatible values, and given its existence, I don't want to force anything on it, or force anyone to change their values. It's a great collaboration of a number of people with different perspectives, and I am glad it exists. Indeed the interests of different people in the community are pretty compatible, as evidenced by the many meta interventions that seem to help many causes at the same time.

I don't think my interests are incompatible with most of EA, and am not sure why you think that? I've clearly invested a huge amount of my resources into making the broader EA community better in a wide variety of domains, and generally care a lot about seeing EA broadly get more successful and grow and attract resources, etc.

But I think it's important to be clear which of these benefits are gains from trade, vs. things I "intrinsically care about" (speaking a bit imprecisely here). If I could somehow get all of these resources and benefits without having to trade things away, and instead just build something that was more directly aligned with my values of similar scale and level of success, that seems better to me. I think historically this wasn't really possible, but with longtermist stuff finding more traction, I am now more optimistic about it. But also, I still expect EA to provide value for the broad range of perspectives under its tent, and expect that investing in it in some capacity or another will continue to be valuable.

7
Davidmanheim
3y
Sorry, this was unclear, and I'm both not sure that we disagree, and want to apologize if  it seemed like I was implying that you haven't done a tremendous amount for the community, and didn't hope for its success, etc. I do worry that there is a perspective (which you seem to agree with) that if we magically removed all the various epistemic issues with knowing about the long term impacts of decisions, longtermists would no longer be aligned with others in the EA community.  I also think that longtermism is plausibly far better as a philosophical position than as a community, as mentioned in a different comment, but that point is even farther afield, and needs a different post and a far more in-depth discussion.
8
RyanCarey
3y
Agree it's more accurate. How I see it:

> Longtermists overwhelmingly place some moral weight on non-longtermist views and support the EA community carrying out some non-longtermist projects. Most of them, but not all, diversify their own time and other resources across longtermist and non-longtermist projects. Some would prefer to partake in a new movement that focused purely on longtermism, rather than EA.
7
Davidmanheim
3y
Worth noting the ongoing discussions about how longtermism is better thought of / presented as a philosophical position rather than a social movement.  The argument is something like: just like effective altruists can be negative utilitarians or deontologists or average utilitarians, and just like they can have differing positions about the value of animals, the environment, and wild animal suffering, they can have different views about longtermism. And just like policymakers take different viewpoints into account without needing to commit to anything, longtermism as a position can exist without being a movement you need to join.
4
Davidmanheim
3y
Good points, but if I understand what you're saying, that survey was asking about specific interventions funded by those funds, given our epistemic uncertainties, not the balance of actual value in the near term versus the long term, or what the ideal focus should be if we found the optimal investments for each.
21
[anonymous]
3y

I do think it is important to distinguish these moral uncertainty reasons from moral trade and cooperation and strategic considerations for hedging. My argument for putting some focus on near-termist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me. 

As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.

It's tricky because the paper on strong longtermism makes the theory sound like it does want to completely ignore other causes - eg 'short-term effects can be ignored'. I think it would be useful to have a source to point to that states 'the case for longtermism' without giving the impression that no other causes matter.

Just to second this because it seems to be a really common mistake - Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.

Also- I think the author would be able to avoid what they see as a "non-rigorous" decision to weight the short-term and long-term the same by reconceptualising the uneasiness around longtermism dominating their actions as an uneasiness with their totally impartial budget taking up more space in their life. I think everyone I have talked to about this feels a pull to support present day people and problems alongside the future, so it might help to just bracket off the present day section of your commitments away from the totally impartial side, especially if the argument against the longtermist conclusion is that it precludes other things you care about.  No one can live an entirely impartial life and we should recognise that, but this doesn't necessarily mean that the arguments for the rightness of doing so are wrong. 

4
Davidmanheim
3y
Thanks, that is valuable, but there are a couple of pieces here I want to clarify. I agree that there is space for people to have a budget for non-impartial altruistic donations. I am arguing that within the impartial altruistic budget, we should have a place for a balance between discounted values that emphasize the short term and impartial welfarist longtermism. Perhaps this is what you mean by "bracket off the present day section of your commitments away from the totally impartial side."  For example, I give at least 10% of my budget to altruistic causes, but I reserve some of the money for Givewell, Against Malaria Foundation, and similar charities, rather than focusing entirely on longtermist causes. This is in part moral uncertainty, at least on my part, since even putting aside the predictability argument, the argument for prioritizing possible future lives rests on a few assumptions that are debatable. But I'm very unhappy with the claim that "No one can live an entirely impartial life and we should recognise that," which is largely what led to the post. This type of position implies, among other things, that morality is objective and independent of instantiated human values, and that we're saying everyone is morally compromised. If what we are claiming as impartial welfare maximization requires that philosophical position, and we also agree it's not something people can do in practice, I'd argue we are doing something fundamentally wrong both in practice, condemning everyone for being immoral while saying they should do better, and in theory, saying that longtermist EA only works given an objective utilitarian position on morality. Thankfully, I disagree, and I think these problems are both at least mostly fixable, hence my (still-insufficient, partially worked out) argument in the post. But I wasn't trying to solve morality ab initio based on my intuitions. And perhaps I need to extend it to the more general position of how to allocate money and effort a
9
tobytrem
3y
Thanks for the post and the response David, that helpfully clarifies where you are coming from. What I was trying to get at is that if you want to say that strong longtermism isn't the correct conclusion for an impartial altruist who wants to know what to do with their resources, then that would call for more argument as to where the strong longtermist's mistake lies or where the uncertainty should be. On the other hand, it would be perfectly possible to say that the impartial altruist should end up endorsing strong longtermism, while recognising that you yourself are not entirely impartial (and have done with the issue). Personally I also think that strong longtermism relies on very debatable grounds, and I would also put some uncertainty on the claim "the impartial altruist should be a strong longtermist"- the tricky and interesting thing is working out where we disagree with the longtermist.  (also I recognise as you said that this post is not supposed to be a final word on all these problems, I'm just pointing to where the inquiry could go next).  On the second part of your response, I think that depends on what motivates you and what your general worldview is. I don't believe in objective moral facts, but I also generally see the world as a place where each and all could do better. For some that helps motivate action, for some it causes angst- I don't think there is a correct view there.  Separately I do actually worry that strong longtermism only works for consequentialists (though you don't have to believe in objective morality). The recent paper attempts to make the foundations more robust but the work there is still in its infancy. I guess we will see where it goes. 
3
Davidmanheim
3y
Thanks for the response - I think we mostly agree, at least to the extent that these questions have answers at all.
1
tobytrem
3y
Definitely, cheers!
8
Jack R
3y
I don’t think your point about Toby’s GDP recommendation is inconsistent with David’s claim that Toby/Will seem to imply “Effective Altruism should focus entirely on longtermism” since EA is not in control of all of the world’s GDP. It’s consistent to recommend EA focus entirely on longtermism and that the world spend .1% of GDP on x-risk (or longtermism).
3
Benjamin_Todd
3y
I agree it's not entailed by that, but both Will and Toby were also in the Leaders Forum Survey I linked to. From knowing them, I'm also confident that they wouldn't agree with "EA should focus entirely on longtermism".
3
Davidmanheim
3y
That's a very good point - and if that is the entire claim, I would strongly endorse it. But, from what I have read, that is not what strong longtermism actually claims, according to proponents.
1
DM
3y
I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because it seems to have a substantial chance of happening in my lifetime, with enormous stakes, and to be extremely neglected. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming even more on metrics like cost-benefit in $.

Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not... (read more)

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames  and considerations. In my experience one cause of this is that if you write about implications within a particular worldview people assume you place 100% weight on it, when the correlation is a lot less than 1. 

 

I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!) This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.

The language-game of 'writing a philosophy article' is very different than 'stating your exact views on a topic' (the former is more about making a clear and forceful argument for a particular view, or particular implication of a view someone might have, and much less about conveying eve... (read more)

32
[anonymous]
3y

I agree that it would be good to have a name for a less contentious form of longtermism similar to the one you propose, which says something like: the long term deserves a seat at the top table with other commonly accepted near-term priorities.

I suspect one common response might be that due to normative uncertainty, we don't put all of our weight on longtermism but instead hedge across different plausible views. I haven't yet seen a defence of that view that I find compelling, so I think it would be valuable to have a less contentious version that we would be willing to stand behind in public.

5
Davidmanheim
3y
Newberry and Ord's paper on moral parliamentarianism, originally proposed by Bostrom, seems like a reasonable way to arrive there. (Which seems almost ironic, given that they are key proponents of strong longtermism.)

I don't think I'm a proponent of strong longtermism at all — at least not on the definition given in the earlier draft of Will and Hilary's paper on the topic that got a lot of attention here a while back and which is what most people will associate with the name. I am happy to call myself a longtermist, though that also doesn't have an agreed definition at the moment.

Here is how I put it in The Precipice:

Considerations like these suggest an ethic we might call longtermism, which is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity’s potential is one avenue for such a lasting impact and there may be others too.

My preferred use of the term is akin to being an environmentalist: it doesn't mean that the only thing that matters is the environment, just that it is a core part of what you care about and informs a lot of your thinking.

I'm also not defending or promoting strong longtermism in my next book.  I defend (non-strong) longtermism, and the  definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.

(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)

4
Davidmanheim
3y
Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify. I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.
6
Davidmanheim
3y
That's super helpful to see clarified, and I will edit the post to reflect that - thanks!

It would indeed be ironic - the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment.

5
Jack Malde
3y
I don't think it's necessarily clear that incorporating moral uncertainty means you have to support hedging across different plausible views. If one maximises expected choiceworthiness (MEC), for example, one can be fanatically driven by a single view that posits an extreme payoff (e.g. strong longtermism!). Indeed, MacAskill and Greaves have argued that strong longtermism seems robust to variations in population axiology and decision theory, whilst Ord has argued reducing x-risk is robust to normative variations (deontology, virtue ethics, consequentialism). If an action is robust to axiological variations, this can also help it dominate other actions, even under moral uncertainty.
3
Jack Malde
3y
I think Ord's favoured approach to moral uncertainty is maximising expected choice-worthiness (MEC) which he argues for with Will MacAskill. Reading the abstract of the moral parliamentarianism paper, it isn't clear to me that he is actually a proponent of that approach, just that he has a view on the best specific approach within moral parliamentarianism. As I say in my comment to Ben, I think an MEC approach to moral uncertainty can lead to being quite fanatical in favour of longtermism.

Thank you for this post, David. I'd like to add two points that emphasize how important this discussion is, and that its implications go beyond the moral stances of individuals:

1. I believe that when looking at this distinction as a movement, we should also take into account how people are put off by strong longtermism - whether we view regular longtermism as a good entry point for EA ideas, or if we endorse it as a legitimate 'camp'. I think that the core idea of regular longtermism is very appealing when discussing the next few generations, while strong longtermism does imply disregarding current generations and thinking of "all future generations" (which obviously requires most people to think far beyond their current moral circle).

2. In practice, I think that an EA community that has a welcoming space for this mid-point view would have more emphasis on interventions that occupy a mid-point position in the tradeoff between tractability (they're more likely to make a change) and importance (they're not as rewarding as preventing human extinction). We would see more emphasis than we currently have on improving institutions, interventions for improving developing economies, meta-science, and others.

A third perspective roughly justifies the current position; we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long term future.

 

I feel that EA shouldn't spend all or nearly all of its resources on the far future, but I'm uncomfortable with incorporating a moral discount rate for future humans as part of "regular longtermism" since it's very intuitive to me that future lives should matter the same amount as present ones.

I prefer objections from the epistemic c... (read more)

3
BrownHairedEevee
3y
Yeah. I have this idea that the EA movement should start with short-term interventions and work our way to interventions that operate over longer and longer timescales, as we get more comfortable understanding their long-term effects.

I wonder if a heavy dose of skepticism about longtermist-oriented interventions wouldn't result in a somewhat similar mix of near termist and longtermist prioritization in practice. Specifically, someone might reasonably start with a prior that most interventions aimed at affecting the far future (especially those that don't do so by tangibly changing something in the near term so that there could be strong feedbacks) come out as roughly a wash. This might then put a high burden of evidence on these interventions so that only a few very well founded ones w... (read more)

Should "reduction" in the quote below (my emphasis) read "increase?" 

"This is  hard to justify intuitively - it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value."

2
Davidmanheim
3y
Yeah, it should read "long-term *risk*" - fixing now, thanks!

Me, reading through the post: “I think I might have a minor comment to add, and for once I’m here the day of posting…”

Also me, seeing that there are already 31 comments: “Oh, well then.”

IMO, the best argument against strong longtermism ATM is moral cluelessness.  
