Research manager @ International Center for Future Generations | Ex-SecureBio, ex-McKinsey Global Institute
592 karma · Joined Jul 2018 · Working (6-15 years) · Oslo, Norway



Research manager at ICFG.eu, board member at Langsikt.no, doing policy research to mitigate risks from biotechnology and AI. Ex-SecureBio manager, ex-McKinsey Global Institute fellow and founder of the McKinsey Effective Altruism community. Follow me on Twitter at @jgraabak


Thanks Harrison! Indeed, the "holding the bag" problem is what removes the incentive to "short the world", compared to other short positions you may wish to take in the market. Those positions also have a timing problem (the market can stay irrational even if you're right), but there is at least a market mechanism creating incentives for the market to self-correct. The "holding the bag" problem removes this self-correction incentive: the only way to beat the market is to consume more, so a few investors won't unilaterally change the market price.

See my response to Carl further up. This follows from accepting the assumptions of the former post. I wanted to show that even with said assumptions, their conclusions don’t follow. But I don’t think the assumptions are realistic either.

Yes, in isolation I see how that seems to clash with what Carl is saying. But that's after I've granted the limited definition of TAI (x-risk, or explosive, shared growth) from the former post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that's a more accurate description of the real world). I could've been clearer that this post was a case of "even if blindly accepting the (somewhat unrealistic) assumptions of another post, their conclusions don't follow", and not an attempt at describing reality as accurately as possible.

I agree that the marginal value of money won't be literally zero after TAI (in the growth scenario; if we're all dead, then it is exactly equal to zero). But (if we still assume those two TAI scenarios are the only possible ones), on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders: in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI. And importantly, if you disagree with this and believe the marginal utility of money won't change radically, then that further undermines the point made in the original post, since their entire argument relies on the change in marginal utility - you can't have it both ways! (Why would you posit that consumers change their savings rate if there are still benefits from being richer?)
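To make the incentive-skew concrete, here is a toy calculation (all numbers are hypothetical, chosen only for illustration) of the utility-weighted expected value of a trade that pays off only after TAI, when post-TAI dollars carry far less marginal utility than today's:

```python
def expected_value(p_tai: float, payoff_after_tai: float,
                   marginal_utility_after_tai: float,
                   cost_today: float) -> float:
    """Utility-weighted expected value of a trade that only pays off post-TAI.

    marginal_utility_after_tai is the per-dollar utility of money after TAI
    relative to today (1.0 = unchanged, 0.0 = money is worthless).
    """
    return p_tai * payoff_after_tai * marginal_utility_after_tai - cost_today

# Even a 10x payoff at 50% odds is a losing trade in utility terms
# if post-TAI dollars are worth only 1% of today's dollars:
ev = expected_value(p_tai=0.5, payoff_after_tai=10.0,
                    marginal_utility_after_tai=0.01, cost_today=1.0)
# 0.5 * 10 * 0.01 - 1 = -0.95
```

This is why traders would need overwhelming evidence before making such trades: the marginal-utility discount dominates even large nominal payoffs.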

Still, I see your point that even in such a world, there's a difference between being a trillionaire and a quadrillionaire. If there are quadrillion-dollar profits to be made, then yes, you will get those chains of backwards induction up and working again. But I find that scenario very implausible, so in practice I don't think this is an important consideration.

I don't think this. Where do you think I say that?

These are the scenarios defined in the former post. I just run with the assumptions of the argument they present, and show that their conclusion doesn't follow from those assumptions. That doesn't mean I think all the assumptions are accurate reflections of reality. The fact that TAI can play out in many ways, and investors may have very differing beliefs about what it means for their optimal saving rate today, is just another argument for why we shouldn't use interest rates as a measure of AI timelines, which is what I argue in this post.

Carl, I agree with everything you're saying, so I'm a bit confused about why you think you disagree with this post.

This post is a response to the very specific case made in an earlier forum post, where they use a limited scenario to define transformative AI, and then argue that we should see interest rates rising if traders believe that scenario to be near.

I argue that we can't use interest rates to judge whether said specific scenario is near or not. That doesn't mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are trading at high multiples, and valuations of companies like NVIDIA, OpenAI, and DeepMind are growing, that's evidence for the claim that "traders expect these technologies to become more powerful in the near-ish future". Talking to investors provides further evidence in the same direction - I just left McKinsey, so up until recently I've had plenty of those conversations myself.

So this post should not be read as an argument about what the market believes, nor is it an argument for short or long timelines. It is only an argument that interest rates aren't strong evidence either way.

I think I'll try to type up my objections in a post rather than a comment - it seems to me that this post is so close to being right that it takes effort to pinpoint the exact place where I disagree, and so I want to take the time to formalize it a bit more.


But in short, I think it's possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (Note: I chose the 5+ year threshold because once you get really close to AGI, say less than a year out with lots of weird stuff going on, you'd at least see some turbulence in the markets as folks get confused about how to trade in such a strange situation. So I do think the markets provide some evidence against extremely short timelines.)

I see that I wasn't being super clear above. Others in the comments have pointed to what I was trying to say here:

 - The window between when "enough" traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you'll only increase your wealth for a very short time by making this bet

 - It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation

 - In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you're poor for a much longer time)


Therefore, traders may choose not to short interest rates, even if they believe AI is imminent
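The asymmetry in the bullets above can be sketched with a toy payoff calculation (all numbers are hypothetical and chosen only to illustrate the shape of the bet, not to model any real bond position):

```python
def short_bonds_payoff(agi_arrives: bool, years_to_agi: float,
                       annual_carry_cost: float, gain_if_right: float,
                       years_if_wrong: float) -> float:
    """Net payoff of shorting long-dated bonds on short AI timelines.

    The short pays a carry cost every year it is held; the gain is collected
    only if rates actually spike before the position is unwound.
    """
    if agi_arrives:
        return gain_if_right - annual_carry_cost * years_to_agi
    return -annual_carry_cost * years_if_wrong

# Right, and rates spike 3 years out: a modest gain, briefly enjoyed.
win = short_bonds_payoff(True, 3, annual_carry_cost=0.04,
                         gain_if_right=0.30, years_if_wrong=0)

# Wrong: carry costs accumulate for a decade before the position is abandoned.
lose = short_bonds_payoff(False, 0, annual_carry_cost=0.04,
                          gain_if_right=0, years_if_wrong=10)

# Even at 50/50 odds of being right, the expected value is negative.
p_right = 0.5
expected = p_right * win + (1 - p_right) * lose
```

With these illustrative numbers the capped, short-lived upside is outweighed by the long-lived downside, which is the sense in which a trader with short timelines can still rationally decline the trade.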
