"Most expected value is in the far future." Because there are so many potential future lives, the value of the far future dominates the value of any near-term considerations.

Why this needs to be retired: just because a cause has high importance doesn't mean it also has high tractability and low crowdedness. It could (and hopefully will soon) be the case that the best interventions for improving the far future are fully funded and the next-best intervention is highly intractable. Moreover, for allocating the EA budget optimally, we care about the expected value of the marginal action, not the average expected value.
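
To make the marginal-vs-average distinction concrete, here is a minimal sketch using an assumed value function V(b) for the budget b allocated to a cause (the notation is illustrative, not anything defined above):

```latex
% Sketch only; V(b) is an assumed value-of-budget function, not defined in the post.
\[
\underbrace{\frac{V(b)}{b}}_{\text{average EV per dollar}}
\qquad \text{vs.} \qquad
\underbrace{V'(b)}_{\text{marginal EV of the next dollar}}
\]
% With diminishing returns, V'(b) can sit far below V(b)/b: the far future can hold
% almost all of the total (average) value while the next dollar spent on it buys
% relatively little, which is the point about marginal allocation.
```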

"What matters most about our actions is their very long term effects."

Why this needs to be retired: there are only a small number of actions for which we have any hope of reasonably estimating long-term effects, namely actions affecting lock-in events such as extinction or misaligned AGI spreading throughout the universe. For all other actions, estimating long-term effects is nearly impossible. Hence, this is not a practical rule to follow.

Comments
  • I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken. But regardless, the first quote is just about value, not about what we ought to do.
  • I think the second principle is basically true. Since the long-term future is 10^(big number) times bigger than the short-term future, our effects on the short-term future mostly matter insofar as they affect the long-term future, unless we have reason to believe that long-term effects somehow cancel out exactly. You're right that humans are not psychologically capable of always following it directly, but we can pursue proxies and instrumental goals that we think improve the long-term future. (But also, this principle is about describing our actions, not telling us what to do, so what's relevant isn't our capability to estimate long-term effects, but rather what we would think about our actions if we were omniscient.)

But regardless, the first quote is just about value, not about what we ought to do.

How do you understand the claim about expected value? What is the expectation being taken over?

You're right that humans are not psychologically capable of always following it directly, but we can pursue proxies and instrumental goals that we think improve the long-term future.

What are some examples of such proxies?

this principle is about describing our actions, not telling us what to do, so what's relevant isn't our capability to estimate long-term effects, but rather what we would think about our actions if we were omniscient.

Why would we care about a hypothetical scenario where we're omniscient? Shouldn't we focus on the actual decision problem being faced?

How do you understand the claim about expected value? What is the expectation being taken over?

Over my probability distribution for the future. In my expected/average future, almost all lives/experiences/utility/etc. are in the long-term future. Moreover, the variance of any such quantity across possible futures is almost entirely due to differences in their long-term, rather than short-term, value.
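
Read literally, the claim can be sketched like this (the notation is assumed for illustration, not the commenter's):

```latex
% Sketch in assumed notation: U = U_near + U_far is the total value realised in a
% possible future, and the expectation is over a probability distribution P over futures.
\[
\mathbb{E}_P[U] = \mathbb{E}_P[U_{\text{near}}] + \mathbb{E}_P[U_{\text{far}}],
\qquad
\mathbb{E}_P[U_{\text{far}}] \gg \mathbb{E}_P[U_{\text{near}}]
\]
% The further claim is that the variance of U across possible futures is also driven
% almost entirely by U_far: futures differ mainly in their long-term value.
```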

What are some examples of such proxies?

  • General instrumentally convergent goods like power, money, influence, skills, and knowledge
  • Success in projects that we choose for longtermist reasons but then pursue without constantly thinking about the effect on the long-term future. For me these include doing well in college and organizing an EA group; for those with directly valuable careers it would mostly be achieving their day-to-day career goals.

Why would we care about a hypothetical scenario where we're omniscient? Shouldn't we focus on the actual decision problem being faced?

Sure, for the sake of making decisions. But abstract propositions about "what matters most" aren't necessarily constrained by what we know.

In my expected/average future, almost all lives/experiences/utility/etc are in the long-term future.

Okay, so you're thinking about what an outside observer would expect to happen. (Another approach is to focus on a single action A, and think about how A affects the long-run future in expectation.)
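
To spell out the two readings being contrasted here (a rough formalisation with assumed notation, not anything stated explicitly in the thread):

```latex
% Outside-observer reading: over the default distribution of futures,
%   E[U_far] >> E[U_near].
% Action-focused reading: for a candidate action A against some baseline,
\[
\Delta(A) \;=\; \mathbb{E}[U \mid A] \;-\; \mathbb{E}[U \mid \text{baseline}],
\]
% and the question becomes whether Delta(A) is dominated by its far-future
% component, which depends on the particular A, not just on how large the future is.
```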

But regardless, the first quote is just about value, not about what we ought to do.

Coming back to this, in my experience the quote is used to express what we should do; it's saying we should focus on affecting the far future, because that's where the value is. It's not merely pointing out where the value is, with no reference to being actionable. 

To give a contrived example: suppose there's a civilization in a galaxy far away that's immeasurably larger than our total potential future, and we can give them ~infinite utility by sending them one photon. But they're receding from us faster than the speed of light, so there's nothing we can do about it. Here, all of the expected value is in this civilization, but it has no bearing on how the EA community should allocate our budget.

For the sake of abstract propositions about "what matters most," it's not necessarily constrained by what we know.

I just don't think MacAskill/Greaves/others intended this to be interpreted as a perfect-information scenario with no practical relevance.

I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.

What do you think about MacAskill's claim that "there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear."?

I mostly agree that obviously great stuff gets funding, but I think the "marginal stuff" is still orders of magnitude better in expectation than almost any neartermist interventions.

Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?

Not actively. I buy that doing a few projects with sharper focus and tighter feedback loops can be good for community health & epistemics. I would disagree if it took a significant fraction of funding away from interventions with a more clear path to doing an astronomical amount of good. (I almost added that it doesn't really feel like lead elimination is competing with more longtermist interventions for FTX funding, but there probably is a tradeoff in reality.)

I was just about to make all three of these points (with the first bullet containing two), so thank you for saving me the time!

I'm unsure if I agree or not. I think this could benefit from a bit of clarification on the "why this needs to be retired" parts.

For the first slogan, it seems like you're saying that this is not a complete argument for longtermism - just because the future is big doesn't mean it's tractable, or neglected, or valuable at the margin. I agree that it's not a complete argument, and if I saw someone framing it that way I would object. But I don't think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It's not complete, but it's a quick way to summarize a big part of the argument.

For the second one, it sounds like you're saying this is misleading - it doesn't accurately represent the work being done, which is mostly on lock-in events, not affecting the long-term future. This is true, but it takes only one extra sentence to say "but this is hard so in practice we focus on lock-in". It's a quick way to summarize the philosophical motivations, but does seem pretty detached from practice.

I think my takeaway from thinking through this comment is this:

  • Longtermism is a complicated argument with a lot of separate pieces
  • We have slogans that summarize some of those pieces and leave out others
  • Those slogans are useful in a high-context environment, but can be misleading for those who don't already know all the context they implicitly rely on

But I don't think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It's not complete, but it's a quick way to summarize a big part of the argument.

I do often see it used as an argument for longtermism, without reference to tractability.

This is true, but it takes only one extra sentence to say "but this is hard so in practice we focus on lock-in".

So: "What matters most about our actions is their very long term effects, but this is hard so in practice we focus on lock-in". 
But why bother making the claim about our actions in general? It seems like an attempt to make a grand theory where it's not warranted.

I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won’t hit diminishing returns anytime soon. I think of it as the “GiveDirectly of longtermism”.

I'd be interested to see the details. What's the expected value of a rainy day fund, and what factors does it depend on?
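
One simple way to frame that question (a back-of-the-envelope sketch; every symbol here is an assumption of this framing, not taken from any particular report):

```latex
% Illustrative sketch only. Invest x now and spend it after T years, versus spending x today.
\[
\text{EV}(\text{invest}) \approx x\,(1+r)^{T}\, p_{\text{intact}}\, c_{T}
\qquad \text{vs.} \qquad
\text{EV}(\text{spend now}) \approx x\, c_{0}
\]
% r: real rate of return; p_intact: probability the fund survives and is still spent
% well after T years (expropriation, value drift, catastrophe); c_0, c_T:
% cost-effectiveness of the best available opportunity today versus at time T.
% The answer hinges on r, p_intact, and whether c_T is higher or lower than c_0.
```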

Founders Pledge's Investing to Give report is an accessible resource on this.

I wrote a short overview here.

Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?

Well I’d say that funding lead elimination isn’t longtermist, all other things equal. It sounds as if FTX’s motivation for funding it was community health / PR reasons, in which case it may have longtermist benefits through those channels.

Whether longtermists should be patient or not is a tricky, nuanced question which I am unsure about, but I would say I’m more open to patience than most.


You might be interested in checking out a GPI paper which argues the same thing as your second point: The Scope of Longtermism 

Here's the full conclusion:

This paper assessed the fate of ex ante swamping ASL: the claim that the ex ante best thing we can do is often a swamping longtermist option that is near-best for the long-term future. I gave a two-part argument that swamping ASL holds in the special case of present-day cause-neutral philanthropy: the argument from strong swamping that a strong swamping option would witness the truth of ASL, and the argument from single-track dominance for the existence of a strong swamping option.

However, I also argued for the rarity thesis that swamping longtermist options are rare. I gave two arguments for the rarity thesis: the argument from rapid diminution that probabilities of large far-future benefits often diminish faster than those benefits increase; and the argument from washing out that probabilities of far-future benefits are often significantly cancelled by probabilities of far-future harms.

I argued that the rarity thesis does not challenge the case for swamping ASL in present day, cause-neutral philanthropy, but showed how the rarity thesis generates two challenges to the scope of swamping ASL beyond this case. First, there is the area challenge that swamping ASL often fails when we restrict our attention to specific cause areas. Second, there is the challenge from option unawareness that swamping ASL often fails when we modify decision problems to incorporate agents’ unawareness of relevant options. 

In some ways, this may be familiar and comforting news. For example, Greaves (2016) considers the cluelessness problem that we are often significantly clueless about the ex ante values of our actions because we are clueless about their long-term effects. Greaves suggests that although cluelessness may be correct as a description of some complex decisionmaking problems, we should not exaggerate the extent of mundane cluelessness in everyday decisionmaking. A natural way of explaining this result would be to argue for a strengthened form of the rarity thesis on which in most mundane decisionmaking, the expected long-term effects of our actions are swamped by their expected short-term effects. So in a sense, the rarity thesis is an expected and comforting result. 

In addition, this discussion leaves room for swamping ASL to be true and important in the case of present-day, cause-neutral philanthropy as well as in a limited number of other contexts. It also does not directly pronounce on the fate of ex-post versions of ASL, or on the fate of non-swamping, convergent ASL. However, it does suggest that swamping versions of ASL may have a more limited scope than otherwise supposed. 

"What matters most about our actions is their very long term effects."

I think my takeaway from this slogan is: given limited evaluation capacity and some actions under consideration, a substantial proportion of this capacity should be devoted to thinking about long-term effects.

It could be false: maybe it's easy to conclude that nothing important can be known about the long term effects. However, I don't think this has been demonstrated yet.

I would flip it around: we should seek out actions that have predictable long-term effects. So, instead of starting from the set of all possible actions and estimating the long-term effects for each one (an impossible task), we would start by restricting the action space to those with predictable long-term effects.

How about this:
 A) Take the top N interventions ranked by putting all evaluation effort into estimating far-future effects
 B) Take the top N interventions ranked by putting more evaluation effort into near-term than far-future effects

(You can use whatever method you like to prioritise which interventions you investigate.) Then for most measures of value, group (A) will have much higher expected value than group (B). Hence "most of the expected value is in the far future".
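
A minimal sketch of that comparison in code; the class, function names, and the 0.8/0.2 weighting are placeholder assumptions, not anything specified above:

```python
# Illustrative sketch only: intervention values and ranking weights are placeholders.
from typing import Callable, List, NamedTuple


class Intervention(NamedTuple):
    name: str
    ev_near: float  # estimated near-term expected value
    ev_far: float   # estimated far-future expected value


def top_n(interventions: List[Intervention],
          key: Callable[[Intervention], float], n: int) -> List[Intervention]:
    """Rank interventions by the given scoring rule and keep the top n."""
    return sorted(interventions, key=key, reverse=True)[:n]


def total_ev(group: List[Intervention]) -> float:
    """Total expected value of a group, counting near-term and far-future value."""
    return sum(i.ev_near + i.ev_far for i in group)


def compare(interventions: List[Intervention], n: int = 5) -> tuple:
    # Group A: ranked purely on far-future effects.
    group_a = top_n(interventions, key=lambda i: i.ev_far, n=n)
    # Group B: ranked with more weight on near-term than far-future effects
    # (the 0.8 / 0.2 split is an arbitrary placeholder).
    group_b = top_n(interventions, key=lambda i: 0.8 * i.ev_near + 0.2 * i.ev_far, n=n)
    return total_ev(group_a), total_ev(group_b)
```

The claim is that, for most ways of scoring total value, the first number comes out much larger than the second.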

Your initial comment was about slogan 2 ("What matters most about our actions is their very long term effects"). I continue to think that this is not a useful framing. Some of our actions have big and predictable long-term effects, and we should focus on those. But most of our actions don't have predictable long-term effects, so we shouldn't be making generic statements about the long-term effects of an arbitrary action.

Re slogan 1 ("Most expected value is in the far future"), it sounds like you're interpreting it as a claim about the marginal EV of an action. I agree that it's possible for the top long-term-focused interventions to currently have a higher marginal EV than near-term-focused interventions. But as these interventions are funded, I expect their marginal EV to decline (i.e. diminishing returns), possibly to a value lower than the marginal EV of near-term-focused interventions.
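
To sketch the diminishing-returns point with an illustrative (assumed) functional form:

```latex
% If value from cumulative funding x followed, say, V(x) = a*log(1 + x),
% then the marginal EV of the next dollar is
\[
V'(x) = \frac{a}{1+x},
\]
% which keeps falling as the intervention absorbs funding. Even if a is enormous
% (a very valuable far future), V'(x) can eventually drop below the marginal EV
% of the best near-term interventions, which is the scenario described above.
```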
