trammell

Research Associate at the Global Priorities Institute.

Slightly less ignorant about economic theory than about everything else

Comments

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Sorry, no, that's clear! I should have noted that you say that too.

The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.

That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Thanks! I've been meaning to write an EA Forum post also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.

On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.

But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with "urgent longtermism" / "hinge of history"-type views as they're usually defined.

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.

That's not a very firm belief on my part--I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.

But my intuition is that the first point is substantially more important than the second, so the model assumes away much of the problem, but not close to all of it. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference, while still caring only about what happens within its own borders, that would seem to multiply the WTP by many orders of magnitude.
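To put rough numbers on that intuition (a back-of-the-envelope sketch; the horizon, discount rate, and output figures here are my own illustrative assumptions, not anything from the model): with pure time preference $\delta$, the present value of a constant benefit flow $\bar{u}$ from an increment of mitigation over horizon $T$ is

$$V(\delta) = \int_0^T e^{-\delta t}\,\bar{u}\,dt = \bar{u}\,\frac{1 - e^{-\delta T}}{\delta}.$$

At $\delta = 2\%$ per year, $V \approx 50\bar{u}$ no matter how large $T$ is; at $\delta = 0$, $V = \bar{u}T$. So dropping a 2% rate with a horizon of even a million years multiplies WTP by roughly $\delta T = 20{,}000$, i.e. four-plus orders of magnitude, whereas extending concern from US output to gross world product multiplies it by something more like 4 or 5.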

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming along.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that's quite different from the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we're still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.

In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )

The case of the missing cause prioritisation research

Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point that would benefit from relevant empirical work. I don't think this is what either of our two new economists will engage in anytime soon, unfortunately. But I don't think it would be outside a GPI economist's remit, especially once we've grown.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?

The case of the missing cause prioritisation research

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don't know what you're looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of the little prioritization research that is being done.

In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.

I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers you're considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government's approach to prioritizing between, say, increasing this year's GDP and decreasing this year's carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the emissions-lowering side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean

  • less consumption this year,
  • less climate damage next year,
  • less accumulated capital next year with which to mitigate climate damage,
  • more of an incentive for people next year to allow more emissions,
  • more predictable weather and therefore easier production next year,
  • …but this might mean more (or less) emissions next year,
  • …and so on.

It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
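To make the contrast concrete, here is a minimal sketch of the kind of exercise I have in mind, with made-up functional forms and parameters (nothing resembling a real IAM like DICE): write down a tiny model of the world, then solve for the policy that maximizes the objective function.

```python
# A deliberately tiny stand-in for an integrated assessment model (a toy
# example with made-up functional forms and parameters, not the structure
# of any published IAM). Two periods, log utility, and one policy lever:
# the share of period-1 output spent on emissions abatement.

import math

def total_utility(abatement_share):
    output1 = 100.0      # period-1 output (arbitrary units)
    growth = 1.03        # baseline output growth between periods
    base_damage = 0.15   # fraction of period-2 output lost with no abatement
    spend = abatement_share * output1
    consumption1 = output1 - spend
    # Abatement today reduces proportional damages tomorrow, with
    # diminishing returns (the square root is an ad hoc assumption).
    damages = max(0.0, base_damage - 0.01 * math.sqrt(spend))
    consumption2 = output1 * growth * (1.0 - damages)
    return math.log(consumption1) + math.log(consumption2)

# "Prioritization" is then just solving for the optimal lever setting,
# here by brute-force grid search over abatement shares in (0, 1).
shares = [s / 1000 for s in range(1, 1000)]
best = max(shares, key=total_utility)
print(f"optimal abatement share: {best:.3f}")
```

The point isn't the output, which is only as good as the assumptions; it's that the tangled web of interactions above falls out of the model's structure, rather than having to be listed and estimated item by item.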

If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.

…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?

Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)
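(For reference, the "really basic idea" is just the Ramsey-rule observation. With $r$ the interest rate, $\delta$ beneficiaries' pure rate of time preference, $\eta$ the elasticity of marginal utility, and $g$ consumption growth, the standard condition is

$$r = \delta + \eta g.$$

A philanthropist with zero pure time preference discounts the marginal value of beneficiaries' consumption only at rate $\eta g$, so whenever $\delta > 0$ is priced into market rates, $r > \eta g$: invested funds compound faster than the value of giving declines, and waiting beats giving now.)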
