trammell

Research Associate at the Global Priorities Institute.

Slightly less ignorant about economic theory than about everything else.

Comments

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

Nice to see this coming along! How many visitors has utilitarianism.net been getting?

A Model of Value Drift

I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.

In my own work on the timing of giving, I've been thinking about value drift as a simple increase to the discount rate: each year, philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If the learning that will take place over the next year, together with the other reasons to wait (e.g. a positive interest rate), outweighs this (d+x)% (plus the other reasons why resources will be less valuable next year), it's better to wait. But here we see that, if values definitely change a little each year, it might be best to spend much more quickly than if (as I've been assuming) they probably don't change at all but might change a lot. In the former case, holding onto resources allows for a kind of slippery slope, in which each year you change your judgments about whether or not to defer to the next year. So I'm really glad this was written, and I look forward to thinking about it more.
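
To make that approximation explicit (a back-of-the-envelope sketch under my assumptions above, writing u_t for the value of spending in year t; this is my framing, not the thesis's):

```latex
% Expected present value of spending u_t in year t, when resources survive
% value drift each year with probability (1 - x) and the drift-free
% discount rate is d:
\[
  V_t \,=\, \frac{(1-x)^t}{(1+d)^t}\, u_t
      \,=\, \left(\frac{1-x}{1+d}\right)^{\!t} u_t
      \,\approx\, (1+d+x)^{-t}\, u_t
  \quad\text{for small } d,\, x,
\]
% so, to first order, a constant value-drift hazard just raises the
% discount rate from d to d + x.
```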

One comment on the thesis itself: I think it's a bit confusing at the beginning, where it says that decision-makers face a tradeoff between "what is objectively known about the world and what they personally believe is true." The tradeoff they face is between acquiring information and maintaining fidelity to their current preferences, not to their current beliefs. The rest of the thesis is consistent with framing the problem as an information-vs.-preference-fidelity tradeoff, so I think this wording is just a holdover from a previous version of the thesis, which framed things differently. But (Max) let me know if I'm missing something.

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Sorry, no, that's clear! I should have noted that you say that too.

The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.

That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.

On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.

But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with "urgent longtermism" / "hinge of history"-type views as they're usually defined.
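
For a rough sense of why that can be attractive (illustrative numbers of my own, not from either post): a dollar invested now at a constant real return r grows to (1+r)^T after T years, e.g.

```latex
\[
  (1+r)^{T} \;=\; (1.05)^{30} \;\approx\; 4.3
  \quad\text{at } r = 5\%,\; T = 30,
\]
```

so an urgent longtermist expecting critical junctures a few decades out could have roughly four times as much to spend at those junctures by investing now rather than giving immediately.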

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.

That's not a very firm belief on my part; I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter accounted for essentially none of the problem.

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.

But my intuition is that the first point is substantially more important than the second, and so the model assumes away much of the problem, but far from all of it. If the US cared about the rest of the world equally, that would multiply its willingness to pay (WTP) for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
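
To make that intuition a bit more concrete (purely illustrative numbers of my own, treating per-generation welfare u as constant; this is not the paper's model): caring about the whole world rather than just the US scales the relevant welfare by something like world output over US output, i.e. roughly 4-10x. Removing pure time preference instead scales it by the ratio of an undiscounted to a discounted sum over future generations:

```latex
% Ratio of WTP with zero pure time preference (over T+1 generations)
% to WTP with pure time preference delta > 0 (over an infinite horizon):
\[
  \frac{\sum_{t=0}^{T} u}{\sum_{t=0}^{\infty} (1+\delta)^{-t}\, u}
  \;=\; \frac{(T+1)\,\delta}{1+\delta},
\]
```

which at δ = 2% per period is already about 200 for T = 10^4 and grows without bound in T; hence "many orders of magnitude" for a long enough future.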

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.
