trammell

Research Associate at the Global Priorities Institute.

Slightly less ignorant about economic theory than about everything else

trammell's Comments

Should I claim COVID-benefits I don't need to give to charity?

I don't know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.

Whatever you should in fact do here, you probably wouldn't find anyone publicly recommending dishonesty. On purely consequentialist grounds, after accounting for the value of the EA community's reputation and so on, which community guidelines (and which EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty, or those that sound more consequentialist?

Existential Risk and Economic Growth

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

Existential Risk and Economic Growth

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good

Glad you liked it, and thanks for the good questions!

#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that an extra 1% annual chance of an x-risk should mean an interest rate roughly 1 percentage point higher. So if you’re going to be

  • spending on something other than x-risk reduction; or
  • spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will still be similar risk to mitigate in the next period conditional on survival,

then you should be roughly compensated for the risk. That is, under those circumstances, if investing seemed preferable to spending in the absence of the heightened risk, it should still seem that way given the heightened risk. This does all hold despite the fact that the heightened risk would give humanity such a short life expectancy.
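To spell out the compensation claim, here's a minimal sketch with made-up numbers. It assumes the interest rate rises one-for-one with the annual x-risk, that next period's philanthropic opportunities are as good as this period's conditional on survival, and that your spending doesn't change the risk itself:

```python
# Toy comparison: spend a dollar now vs. invest it for one year and spend it then.
# Assumptions (mine, for illustration): (i) the interest rate is a baseline r plus
# the annual x-risk delta, (ii) a dollar spent next year, conditional on survival,
# does as much good as a dollar spent today, (iii) spending doesn't change delta.

def invest_vs_spend_ratio(r, delta):
    """Expected impact of investing one year and then spending,
    relative to spending the same dollar today."""
    interest_rate = r + delta       # assumption (i): rate rises 1-for-1 with risk
    survival_prob = 1 - delta
    return survival_prob * (1 + interest_rate)

for delta in [0.001, 0.01, 0.02]:
    print(delta, round(invest_vs_spend_ratio(r=0.03, delta=delta), 4))

# All three ratios come out at about 1.03: to first order, the heightened risk is
# offset by the higher interest rate, so it doesn't by itself favour spending sooner.
```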

But I totally grant that these assumptions may not hold, and that if they don’t, the heightened risk can be a reason to spend more! I just wanted to point out that there is this force pushing the other way that turns out to render the question at least ambiguous.

#2: No, there’s no reductio here. Once you get big enough, i.e. are no longer a marginal contributor to the public goods you’re looking to fund, the diminishing returns to spending make it less worthwhile to grow even bigger. (E.g., in the human consumption case, you’ll eventually be rich enough that spending the first half of your fund would make people richer to the point that spending the second half would do substantially less for them.) Once the gains from further investing have fallen to the point that they just balance the (extinction / expropriation / etc.) risks, you should start spending, and continue to split between spending and investment so as to stay permanently on the path where you’re indifferent between the two.

If you're looking to fund some narrow thing only one other person's interested in funding, and you're perfectly patient but the other person is about as impatient as people tend to be, and if you start out with funds of the same size, I think you'll be big enough that it's worth starting to spend after about fifty years. If you're looking to spend on increasing human consumption in general, you'll have to hold out till you're a big fraction of global wealth, maybe on the order of a thousand years. (Note that this means that you'd probably never make it, even though this is still the expected-welfare-maximizing policy.)

#3: Yes. If ethics turns out to contain pure time preference after all, or we have sufficiently weak duties to future generations for some other reason, then patient philanthropy is a bad idea. :(

On Waiting to Invest

Glad you liked it!

In the model I'm working on, to try to weigh the main considerations, the goal is to maximize expected philanthropic impact, not to maximize expected returns. I do recommend spending more quickly than I would in a world where the goal were just to maximize expected returns. My tentative conclusion that long-term investing is a good idea already takes into account that it will most likely just involve losing a lot of money.

That is, I argue that we're in a world where the highest-expected-impact strategy (not just the highest-expected-return strategy) is one with a low probability of having a lot of impact and a high probability of having very little impact.
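For a toy illustration of that claim (my own stylized numbers, not the model itself): suppose the fund earns a constant real return, faces a constant annual probability of being lost entirely, and its eventual impact is roughly linear in what it disburses.

```python
# Toy illustration: expected impact can keep rising with the investment horizon
# even while the probability of ever spending anything falls toward zero.
# Assumed numbers (mine): 5% real return, 1%/yr chance of losing everything
# (expropriation, value drift, catastrophe), impact linear in disbursements.

r, p = 0.05, 0.01

for years in [0, 100, 500, 1000]:
    survival_prob = (1 - p) ** years
    expected_impact = survival_prob * (1 + r) ** years   # relative to spending now
    print(f"{years:>4} yrs: P(survive) = {survival_prob:.2e}, "
          f"expected impact = {expected_impact:.2e}")

# Because (1 - p) * (1 + r) > 1 here, the long-horizon strategy maximizes expected
# impact despite having only a tiny chance of ever paying off.
```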

If you value future people, why do you consider near term effects?

At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.

Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?

Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.

I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.

On Waiting to Invest

Yup, no disagreement here. You're looking at what happens when we introduce uncertainty holding the absolute expected return constant, and I was discussing what happens when we introduce uncertainty holding the expected annual rate of return constant.

If you value future people, why do you consider near term effects?

"If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B."

What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."

Is this an example of CC?

Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.

On Waiting to Invest

Hey, I know that episode : )

Thanks for these numbers. Yes: holding expected returns equal, our propensity to invest should be decreasing in volatility.

But symmetric uncertainty about the long-run average rate of return—or to a lesser extent, as in your example, time-independent symmetric uncertainty about short-run returns at every period—increases expected returns. (I think this is the point I made that you’re referring to.) This is just the converse of your observation that, to keep expected returns equal upon introducing volatility, we have to lower the long-run rate from r to q = r – s^2/2.
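Here's a quick numerical check of that converse, with illustrative parameters of my own choosing (lognormal returns, so volatility enters exactly through the s^2/2 term):

```python
# Sketch: with lognormal annual returns (log-return drift mu, volatility s),
# holding the long-run (geometric) rate fixed at r while adding volatility raises
# the expected (arithmetic) return; to hold expected returns fixed you must lower
# the drift to q = r - s**2 / 2.

import numpy as np

rng = np.random.default_rng(0)
r, s, years, paths = 0.05, 0.20, 50, 200_000

def mean_growth(drift):
    """Annualized mean gross growth factor across simulated paths."""
    log_returns = rng.normal(drift, s, size=(paths, years)).sum(axis=1)
    return np.exp(log_returns).mean() ** (1 / years)

print(mean_growth(r))             # ~exp(r + s**2/2) ≈ 1.072 > exp(r) ≈ 1.051
print(mean_growth(r - s**2 / 2))  # ~exp(r) ≈ 1.051: expected returns held equal
```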

Whether these increased expected returns mean that patient philanthropists should invest more or less than they would under certainty is in principle sensitive to (a) the shape of the function from resources to philanthropic impact and (b) the behavior of other funders of the things we care about; but on balance, on the current margin, I’d argue it implies that patient philanthropists should invest more. I’ll try writing more on this at some point, and apologies if you would have liked a deeper discussion about this on the podcast.
