Research Associate at the Global Priorities Institute.
Slightly less ignorant about economic theory than about everything else
Sorry, what’s REI work?
I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.
In my own work on the timing of giving, I’ve been thinking about value drift as a simple increase to the discount rate: each year, philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If the learning that will take place over the next year (and the other reasons to wait, e.g. a positive interest rate) outweighs this (d+x)% (plus the other reasons why resources will be less valuable next year), it’s better to wait. But here we see that, if values definitely change a little each year, it might be best to spend much more quickly than if (as I’ve been assuming) they probably don’t change at all but might change a lot. In the former case, holding onto resources allows for a kind of slippery slope in which, each year, you change your judgments about whether or not to defer to the next year. So I’m really glad this was written, and I look forward to thinking about it more.
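The give-now-vs.-wait comparison I have in mind can be sketched as follows (a toy illustration with made-up rates, not anything from the thesis):

```python
# Toy sketch of value drift as a simple addition to the discount rate:
# waiting another year is worthwhile only if the returns to waiting
# (interest plus the value of learning) exceed the effective discount
# rate (d + x). All numbers below are hypothetical.

def better_to_wait(interest_rate, learning_benefit, discount_rate, drift_risk):
    """All arguments are annual rates (e.g. 0.05 for 5%).

    interest_rate    -- financial return on resources held for a year
    learning_benefit -- value of the information gained by waiting
    discount_rate    -- discount rate d% absent any value drift risk
    drift_risk       -- annual chance x% of value drift
    """
    returns_to_waiting = interest_rate + learning_benefit
    effective_discount = discount_rate + drift_risk  # (d + x)%
    return returns_to_waiting > effective_discount

# With 5% interest and learning worth 2%/year, a 3% discount rate plus
# a 1% drift risk still favors waiting:
print(better_to_wait(0.05, 0.02, 0.03, 0.01))  # True
# ...but raising the drift risk to 5% tips the balance toward giving now:
print(better_to_wait(0.05, 0.02, 0.03, 0.05))  # False
```

This framing treats drift as a one-shot, all-or-nothing event each year, which is exactly the assumption the thesis shows matters: gradual drift behaves quite differently.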
One comment on the thesis itself: I think it’s a bit confusing at the beginning, where it says that decision-makers face a tradeoff between “what is objectively known about the world and what they personally believe is true.” The tradeoff they face is between acquiring information and maintaining fidelity to their current preferences, not to their current beliefs. The rest of the thesis is consistent with framing the problem as an information-vs.-preference-fidelity tradeoff, so I think this wording is just a holdover from a previous version of the thesis which framed things differently. But (Max) let me know if I’m missing something.
Sorry, no, that's clear! I should have noted that you say that too.
The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist. That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.
Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.
On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.
But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with "urgent longtermism" / "hinge of history"-type views as they're usually defined.
Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.
That's not a very firm belief on my part; I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.
I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts: it is an intergenerational public good (most of the beneficiaries are future generations, who can't pay for their own protection), and it is a global public good (even the largest nation-states are only a small fraction of the world, so each captures only a fraction of the benefits of the mitigation it funds).
So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.
But my intuition is that the first point is substantially more important than the second, and so the model assumes away much, though far from all, of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.
The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.
In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )
Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.