trammell

1192 · Joined Sep 2018

Bio

Econ PhD student at Oxford and research associate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (107)

By the way, someone wrote this Google doc in 2019 on "Stock Market prediction of transformative technology". I haven't taken a look at it in years, and neither has the author, so understandably enough, they're asking to remain nameless to avoid possible embarrassment. But hopefully it's at least somewhat relevant, in case anyone's interested.

Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios--along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.

That said, there is one broad limitation to this analysis that hasn’t gotten quite as much attention so far as I think it deserves. (Basil: yes, this is the thing we discussed last summer….) This is that low real, risk-free interest rates are compatible with the belief
1) that there will be no AI-driven growth explosion, 
as you discuss--but also with some AI-growth-explosion-compatible beliefs investors might have, including
2) that future growth could well be very fast or very slow, and
3) that growth will be fast but marginal utility in consumption will nevertheless stay high, because AI will give us such mindblowing new things to spend on (my “new products” hobby-horse).
So it seems impossible to put any upper bound (below 100%) on the probability people are assigning to near-term explosive growth purely by looking at real, risk-free interest rates.
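
To make the point concrete, here is the textbook consumption-based formula (a standard CRRA/Euler-equation sketch of my own; the notation ρ, σ, g is mine, not the post's). With pure time preference ρ, relative risk aversion σ, and log consumption growth g, the risk-free rate satisfies

$$ r \;=\; \rho \;-\; \ln \mathbb{E}\!\left[e^{-\sigma g}\right]. $$

Since $e^{-\sigma g}$ is convex in $g$, spreading out beliefs about $g$ (as under belief (2)) raises $\mathbb{E}[e^{-\sigma g}]$ and so lowers $r$, even holding $\mathbb{E}[g]$ fixed. A low observed $r$ can therefore coexist with a high subjective probability of explosive growth.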

To infer that investors believe (1), one of course has to think hard about all the alternatives (including but not limited to (2) and (3)) and rule them out. But (if I’m not mistaken) all you do along these lines is to partly rule out (2), by exploring the implications of putting a yearly probability on the economy permanently stagnating. I found that helpful. As you observe, merely (though I understand that you don't see it as “merely”!) introducing a 20% chance of stagnation by 2053 is enough to mostly offset the interest rate increases produced by an 80% chance of Cotra AI timelines. You don’t currently incorporate any negative-growth scenarios, but even a small chance of negative growth seems like it should be enough to fully offset said interest rate increase. This is because of the asymmetry produced by diminishing marginal utility: the marginal utility of an extra dollar saved can only fall to zero, if you turn out to be very rich in the future, whereas it can rise arbitrarily high if you turn out to be very poor. (You note this when you say “the real interest rate reflects the expected future economic growth rate, where importantly the expectation is taken over the risk-neutral measure”, but I think the departure from caring about what we would normally call the expected growth rate is important and kind of obscured here.)
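
A tiny numerical illustration of that asymmetry (my own sketch; the parameters and probabilities below are made up for illustration, not taken from the post). It just evaluates the standard CRRA Euler-equation formula for the risk-free rate, r = ρ − ln E[exp(−σg)], under a few growth scenarios:

```python
import math

def riskfree_rate(scenarios, rho=0.01, sigma=2.0):
    """Risk-free rate implied by the CRRA Euler equation,
    r = rho - ln E[exp(-sigma * g)],
    where scenarios is a list of (probability, log-growth) pairs."""
    m = sum(p * math.exp(-sigma * g) for p, g in scenarios)
    return rho - math.log(m)

# Certain 2% growth: r = 1% + 2 * 2% = 5%.
base = riskfree_rate([(1.0, 0.02)])

# Add a 5% chance of a big upside (+50% log growth) to the 2% baseline...
r_up = riskfree_rate([(0.95, 0.02), (0.05, 0.50)])

# ...versus a 5% chance of an equally big downside (-50% log growth).
r_down = riskfree_rate([(0.95, 0.02), (0.05, -0.50)])

# The downside scenario moves r much further (down) than the upside
# scenario moves it (up): marginal utility can only fall to zero if you
# end up rich, but can rise without bound if you end up poor.
print(base, r_up, r_down)
```

With these made-up numbers, r goes from 5% to roughly 8% when the upside risk is added, but to roughly −4% when the symmetric downside risk is added instead: the small disaster probability does far more to the rate than the equally sized boom probability.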

This seems especially relevant given that what investors should be expected to care about is the expected growth rate of their own future consumption, rather than of GDP. Even if they’re certain that AI is coming and bound to accelerate GDP growth, they could worry that it stands some chance of making a small handful of people rich and themselves poor. You write that “truly transformative AI leading to 30%+ economy-wide growth… would not be possible without having economy-wide benefits”, but this is not so clear to me. You might think that’s crazy, but given that I don’t, presumably some other investors don’t.

Anyway: this is all to say that I’m skeptical of inferring much from risk-free interest rates alone. This doesn’t mean we can’t draw inferences from market data, though! For one thing, on the hypothesis that investors believe (2), we would probably expect to see the “insurance value” of bonds, and thus the equity premium, rising over time (as we do, albeit weakly). For another thing, one can presumably test how the market reacts to AI news. I’m certainly interested to see any further work people do in this direction.

Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.

This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.

Perhaps just a technicality, but: to satisfy the transversality condition, an infinitely lived agent has to have a time preference rate of at least (1 − σ)r. So if σ > 1 (i.e. if the utility function is more concave than log), the time preference rate can be at least a bit negative.
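
Spelling out the algebra behind that (standard Ramsey-model reasoning, nothing beyond the textbook): with CRRA curvature σ, the Euler equation gives consumption growth g = (r − ρ)/σ, and the transversality condition requires r > g, i.e.

$$ r > \frac{r - \rho}{\sigma} \;\iff\; \sigma r > r - \rho \;\iff\; \rho > (1 - \sigma)\, r, $$

so for σ > 1 the bound (1 − σ)r is negative and ρ can fall somewhat below zero.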

Hey, really glad you liked it so much! And thank you for emphasizing that people should consider applying even if they worry they might not fit in--I think this content should be interesting and useful to lots of people outside the small bubbles we're currently drawing from.

Thanks Bruce! Definitely agreed that it was an amazing crowd : )

Thanks James, really glad to hear you feel you got a lot out of it (including after a few months' reflection)!
