
This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin, @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not for the transformative AI described in that post.

The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people become worried enough about AI to change their savings rates.

 

Replaying the argument from AGI and the EMH

My understanding of the argument in the original post goes like this:

  1. AI is defined as transformative if it either poses an existential risk or causes explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)
  2. If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near future
  3. If people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high. 
  4. Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition equates the expected marginal utility of consumption today with that of consumption in the future, reflecting the full distribution of outcomes (see the sketch after this list).
  5. Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.
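For concreteness, steps 2-4 follow the textbook Ramsey-style condition for the equilibrium real interest rate (a standard sketch of the logic, not a formula quoted verbatim from the original post):

$$r \approx \rho + \gamma g + \delta$$

where $r$ is the real interest rate, $\rho$ is pure time preference, $\gamma$ measures how quickly marginal utility falls as consumption rises, $g$ is expected consumption growth, and $\delta$ is the annual probability that savings never pay off (e.g., extinction). Both explosive growth (high $g$) and existential risk (high $\delta$) push $r$ up - which is the mechanism the original post relies on.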

 

My high-level response

I will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor. EDIT: I don't think this is a realistic assumption, but I chose to run with it here to show that even when I accept the assumptions of the original post, the conclusions don't follow. In reality, I think many investors with short timelines believe in scenarios other than these, and in some of those scenarios they still have some marginal utility of additional savings. If that is true, it supports my conclusion and is one further nail in the coffin for the argument in the original post.

Second, I consider the dynamics of how market prices change over time. The original post glosses over this a bit, which is forgivable, since the main social value of markets is that they can work like a black-box information-aggregation mechanism where you don't need to think too carefully about the gears. In this case, however, the price-formation mechanism is a crucial reason why their argument fails.

Let's consider two possible ways the price of an asset can change. Either some information becomes available to all players in the market, who uniformly update their assessment of the asset's value and adjust their positions accordingly; or some investor gains private information indicating that the asset is mispriced, and takes a big directional bet based on it, unilaterally moving the price. These two situations are extremes on a spectrum, and in most cases price changes will reflect a situation somewhere in between.

My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world. Let's assume that there are two ways to move the prices of capital:

  1. Many people decide to reduce their retirement savings and rather consume more in the present, instead of investing the money.
  2. Savvy investors spot a mispricing of capital, and make directional bets (i.e., investments) that capital should be valued more highly (i.e., that interest rates should go up). While it is possible to debate whether such bets are available in the market at scale, I will simply assume that such an asset exists.

Presumably, it's the second of these options that is of interest in a discussion about AI timelines. Admittedly, the first would happen if the typical consumer believed that extinction or explosive growth was near, so that mechanism is a plausible link between interest rates and AI timelines - but it is not an interesting one, since it requires very many people to believe in near-term TAI with high confidence. Global balance sheets are valued at upwards of $500tn, and annual savings are between a quarter and a third of global GDP, so even the wealthiest investors cannot increase their consumption enough to make a dent in global capital supply (a rough calculation follows below). Interest rates would therefore not be affected by this mechanism unless something like tens of millions of people adjusted their savings rates.
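To make the orders of magnitude concrete, here is a minimal back-of-the-envelope sketch. The $500tn capital stock and the ~1/4 savings share are the figures above; the ~$100tn global GDP and the $100bn fortune are illustrative assumptions of mine:

```python
# Back-of-the-envelope: can one enormous consumer dent global capital supply?
# $500tn capital stock and ~1/4 savings share are from the text above;
# ~$100tn global GDP and the $100bn fortune are assumed for illustration.

global_capital_stock = 500e12          # USD, lower bound cited above
annual_global_savings = 0.25 * 100e12  # ~1/4 of an assumed ~$100tn global GDP
single_investor_splurge = 100e9        # a $100bn fortune, consumed outright

print(f"share of capital stock: {single_investor_splurge / global_capital_stock:.4%}")   # 0.0200%
print(f"share of annual savings: {single_investor_splurge / annual_global_savings:.2%}")  # 0.40%
```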

Therefore, the question is: "Would a savvy investor, informed about impending TAI, make a directional bet on interest rates, and if so, would that be sufficient to move them?" I believe the answer is no, because the incentives of the savvy investor preclude them from taking these directional bets. The short reason is this:

  • The reason for an investor to make a bet is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyway)
  • Therefore, there is no way for them to win by betting on near-term TAI

The second part of the question is therefore of little relevance.

 

Why savvy investors won't bet on near-term TAI, even if they believe in it

In response to my short argument above about investor incentives, you might respond as follows:

  • Investors aren't betting on imminent TAI directly. They are betting on rising interest rates
  • Interest rates can rise before we get TAI. Therefore, there is a period where the savvy investor can enjoy the profits of their bet, before the money is made worthless
  • Therefore, an investor who is sufficiently certain of imminent TAI should still take the bet

I believe that this is almost correct. My objection is to the second bullet point, "interest rates can rise before we get TAI". This is possible, but we no longer have a reason to believe that it will happen - unless very many people decide to reduce their savings rates. By then, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short long enough before TAI that you have time to enjoy your profits.

The slightly more technical explanation relies on backward induction. I will start with the case of an idealized investment - any investment, not specifically linked to transformative AI. Let's assume that it works like this:

  • An investment is essentially just a money machine where you put a dollar in the machine, and then it spits out some other amount at some point in the future
  • A good investment will, in expectation, spit out some amount larger than a dollar - and furthermore compensate you if you have to wait a long time for it. (Let's assume that this additional compensation corresponds to some measure of the opportunity cost of money, i.e., the appropriate discount rate for an investor.)
  • For simplicity, let's assume each machine works only once

Now consider a case where we ignore some uncertainty: you have a machine where you put in $1, and it is perfectly known that at some later point it will spit out $100, plus compensation for however long the delay is. What is the present value of this machine? In a world where patient investors exist - investors who don't care about the exact timing of the payout, only that it beats their other alternatives - this machine is worth $99 already today, since that is the net present value a patient investor knows they can extract from it. The combination of 1) knowledge of the future, 2) competitive markets and 3) sufficiently patient investors brings the value forward.
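As a minimal sketch of this logic (the parameter values are illustrative):

```python
# The idealized money machine: the payout is grossed up at the investor's
# discount rate, so the waiting time drops out of the valuation entirely.

def machine_value_today(payout=100.0, cost=1.0, rate=0.05, years=10):
    compensated_payout = payout * (1 + rate) ** years          # $100 plus waiting compensation
    present_value = compensated_payout / (1 + rate) ** years   # discount back to today
    return present_value - cost                                # net of the $1 you put in

for years in (1, 10, 30):
    print(years, round(machine_value_today(years=years), 2))   # 99.0 at every horizon
```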

Now let's consider a specific case of the scenario above - everything is the same, except that the payout will happen 1 day after the world experiences transformative AI. It is still not exactly known when that is, but whenever it happens, that's when the machine will spit out $100. What is the value of this machine?

Whoever owns the machine on the final day derives ~no value from it - since money is worthless by then, for one reason or another. Therefore, whoever owns the machine before then should not expect to be able to sell it for any price above $0. Any investor willing to pay for this asset would need to play a Greater Fool strategy.
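A minimal sketch of this backward induction, under the post's assumption that money is ~worthless to whoever finally receives the payout:

```python
# Backward induction on the TAI-linked machine. The terminal holder values
# the $100 at ~nothing (by assumption), and no earlier buyer will pay more
# than the discounted value of what the claim is worth to them tomorrow.

def claim_price(days_until_tai, value_to_final_holder=0.0, daily_rate=0.0001):
    price = value_to_final_holder
    for _ in range(days_until_tai):
        price /= 1 + daily_rate  # each earlier owner discounts one more day
    return price

print(claim_price(365))  # 0.0 at every horizon: the value never travels back
```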

 

Conclusion

A bet that interest rates will rise is not a bet on short AI timelines. Rather, it is a bet that:

  1. Most consumers will correctly perceive that AI timelines are short, and
  2. Most consumers will realize this long enough before TAI that there is enough time to benefit from profitable bets made now, and
  3. Most consumers will believe that transformative AI will significantly reduce the marginal utility they get from their savings - and not, say, increase the marginal value of saving (because, for example, they could lose their jobs without taking part in the newfound prosperity from AI)

For this reason, savvy investors will not bet on the end of the world, or on the end of capital markets as we know them - except perhaps by increasing their own consumption a bit, or going on vacation more, which would be far from sufficient to move capital markets. The only way interest rates could provide information about AI timelines is if a very broad group of people decided to reduce their savings rates and increase their consumption. When it comes to AI timelines, interest rates should therefore be read as something like a poll of upper-middle-class consumers in the US and EU, rather than a poll of the most informed investors.

 

Addendum: appreciation for the authors of the original post

I have exchanged emails with the authors of the original post, and they have graciously taken the time to respond. However, I believe they have not refuted the arguments I presented to them, so I chose to post those arguments here for broader scrutiny.

I want to underscore that I deeply appreciate the effort put into the former forum post, and I believe that some of the points in the post are true and good (e.g., that catastrophic risk researchers can use low-interest rate environments to fund their research), while other points don't quite hold (most importantly, that we can use low interest rates as evidence for long AGI timelines). In the past I have spent some time exploring how financial markets can be used to elicit information about catastrophic risk and increase funding for risk mitigation, and I still believe this to be a valuable endeavor, even if it seems hard to scale it to the very largest risks.

 

Side note: what about the empirical argument?

The original post also presents some empirical evidence on the link between a) interest rates and growth, in section V, and b) interest rates and risk, in section VI. The evidence on b) is scarce, so I'll focus only on a) here. In short, this link can be equally well explained by a couple of alternative mechanisms:

  1. Serial correlation in the data set: interest rates used to be higher a few decades ago, and growth also used to be higher, so what looks like many independent observations may in fact be little more than a single downward trend in each series (a toy simulation after this list illustrates the point).
  2. Variation in capital demand, not capital supply: if the opportunity for profitable investment varies over time (e.g., because technological progress creates new investment opportunities, but innovation is stochastic and varies over time), it is not surprising that interest rates are higher ahead of periods of high growth. This could just mean that there were many good investment opportunities at the time, and then those investments created growth! It is possible to test how much of the link between interest rates and growth is driven by variation in capital demand by analyzing historical data of capital formation and growth, but I have not done that, simply because I haven't had the time.
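To illustrate the first point, here is a minimal toy simulation (my own illustration, not an analysis of the actual historical data): two series that share nothing but a downward trend still come out strongly correlated.

```python
# Two independent series with a common downward trend look "linked" even
# though neither causes the other - the serial-correlation worry above.
import numpy as np

rng = np.random.default_rng(0)
n_years = 40
trend = np.linspace(1.0, 0.0, n_years)  # secular decline over four decades

interest_rates = 8.0 * trend + rng.normal(0.0, 0.8, n_years)  # % per year
gdp_growth = 4.0 * trend + rng.normal(0.0, 0.8, n_years)      # % per year

# Strongly positive (roughly 0.8) despite zero causal link by construction.
print(np.corrcoef(interest_rates, gdp_growth)[0, 1])
```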

Personally, I believe that there is some merit to the empirical arguments in the original post, but they are focused on variability within the historical sample, and transformative AI would bring us far from that situation, so I'm not confident that it has a lot of predictive value for TAI in particular.

Comments

I disagree with the idea that short AI timelines are not investable (although I agree interest rates are a bad and lagging indicator vs AI stocks). People foreseeing increased expectations of AI sales as a result of scaling laws, shortish AI timelines, and the eventual magnitude of success have already made a lot of money investing in Nvidia, DeepMind and OpenAI. Incremental progress increases those expectations, and they can increase even in worlds where AGI winds up killing or expropriating all investors so long as there is some expectation of enough investors thinking ownership will continue to matter. In practice I know lots of investors expecting near term TAI who are betting on it (in AI stocks, not interest rates, because the returns are better). They also are more attracted to cheap 30 year mortgages and similar sources of mild cheap leverage. They put weight on worlds where society is not completely overturned and property rights matter after AGI, as well as during an AGI transition (e.g. consider that a coalition of governments wanting to build AGI is more likely to succeed earlier and more safely with more compute and talent available to it, so has reason to make credible promises of those who provide such resources actually being compensated for doing so post-AGI, or the philanthropic value of being able to donate such resources).

And at the object level, from reading statements from investors and talking to them, investors weighted by trading in AI stocks (and overwhelmingly so for the far larger bond market setting interest rates) largely don't have short AI timelines (confident enough to be willing to invest on) or expect explosive growth in AI capabilities. There are investors like Cathie Wood who do, with tens or hundreds of billions of dollars of capital, but they are few enough relative to the investment opportunities available that they are not setting e.g. the prices for the semiconductor industry. I don't see the point of indirect arguments from interest rates for the possibility that investors or the market as a whole could believe in AGI soon but only versions where owning the AI chips or AI developers won't pay off, when at the object level that possibility is known to be false.

Carl, I agree with everything you're saying, so I'm a bit confused about why you think you disagree with this post.

This post is a response to the very specific case made in an earlier forum post, where they use a limited scenario to define transformative AI, and then argue that we should see interest rates rising if traders believe that scenario to be near.

I argue that we can't use interest rates to judge whether that specific scenario is near or not. That doesn't mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are trading at high multiples, and valuations of companies like NVIDIA/OpenAI/DeepMind are growing, that's evidence for the claim that "traders expect these technologies to become more powerful in the near-ish future". Talking to investors provides further evidence in the same direction - I just left McKinsey, so up until recently I've had plenty of those conversations myself.

So this post should not be read as an argument about what the market believes, nor is it an argument for short or long timelines. It is only an argument that interest rates aren't strong evidence either way.

It seems to me like you disagree with Carl because you write:

  • The reason for an investor to make a bet is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyway)
  • Therefore, there is no way for them to win by betting on near-term TAI

So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.

As Tom says, sorry if I wasn't clear.

Yes, in isolation I see how that seems to clash with what Carl is saying. But that's after I've granted the limited definition of TAI (x-risk, or explosive and shared growth) from the original post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that's a more accurate description of the real world). I see that I could have been clearer that this post was a case of "even if blindly accepting the (somewhat unrealistic) assumptions of another post, their conclusions don't follow", and not an attempt at describing reality as accurately as possible.

I have now updated the post to reflect this

  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyway)

My future profits aren't very relevant if I'm dead, but I might still care about them even if I'm super rich. Sure, my marginal utility will be very low, but on the other hand the profit from my investments will be very large. Even if everyone is stupendously rich by today's standards, there might be a tangible difference between having a trillion dollars in your bank account and having a quadrillion dollars in your bank account. Maybe I want my own galaxy in which I alone have the rights to build Dyson spheres, and that is out of the price range of your average joe with a trillion-dollar net worth. Maybe (and this might be more salient to your typical investor who isn't actively thinking about far-out sci-fi scenarios) I want the prestige, political control, etc., that come with being wealthy compared to everyone else.

 

A bet that interest rates will rise is not a bet on short AI timelines. Rather, it is a bet that:

  1. Most consumers will correctly perceive that AI timelines are short, and
  2. Most consumers will realize this long enough before TAI that there is enough time to benefit from profitable bets made now, and
  3. Most consumers will believe that transformative AI will significantly reduce the marginal utility they get from their savings - and not, say, increase the marginal value of saving (because, for example, they could lose their jobs without taking part in the newfound prosperity from AI)

I believe that this is almost correct. My objection is to the second bullet point, "interest rates can rise before we get TAI". This is possible, but we no longer have a reason to believe that it will happen - unless very many people decide to reduce their savings rates. By then, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short long enough before TAI that you have time to enjoy your profits.

If future benefits exist for being even richer after TAI, interest rates could rise due to inductive reasoning even before consumers begin adjusting their savings rates in response to TAI. If I know that consumers will adjust their savings rate one day before TAI (assuming a deterministic timeline where TAI occurs in one discontinuous jump and very unrealistic timescales for consumers changing their savings rate for simplicity's sake), then I should place a bet on the interest rate rising (e.g. shorting government bonds) two days before TAI. If enough investors take this action, then interest rates will rise two days before TAI. Knowing this, I should short government bonds three days before TAI, etc... Similar to how if the government promises to print a lot of money in one month, then inflation will begin to rise immediately.
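For concreteness, this unraveling can be sketched by flipping one assumption in the backward-induction example from the post (the nonzero post-TAI value is exactly the assumption under debate here, not a given):

```python
# Same backward induction as in the post, but with a nonzero terminal value:
# if money still matters post-TAI, anticipation prices the claim in today.

def claim_price(days_until_tai, value_to_final_holder, daily_rate=0.0001):
    price = value_to_final_holder
    for _ in range(days_until_tai):
        price /= 1 + daily_rate
    return price

print(claim_price(365, 0.0))    # 0.0   -- the post's assumption: no unraveling
print(claim_price(365, 100.0))  # ~96.4 -- value propagates back to today
```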

Even if everyone is stupendously rich by today's standards, there might be a tangible difference between having a trillion dollars in your bank account and having a quadrillion dollars in your bank account. Maybe I want my own galaxy in which I alone have the rights to build Dyson spheres, and that is out of the price range of your average joe with a trillion-dollar net worth.

Are you assuming this holds true even in some scenario where a single company or government has total, decisive control over the future of civilization? Will the entity in power really still prioritize such exchanges if they could plausibly just take it directly from you (since they are not accountable to a higher power)?

Or are you assuming that such a Singleton is unlikely to exist? (Or, is the focus on the possibility that such a Singleton does not exist)

I agree that the marginal value of money won't be literally zero after TAI (in the growth scenario; if we're all dead, then it is exactly zero). But (if we still assume those two TAI scenarios are the only possible ones) on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders - in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI. And importantly, if you disagree with this and believe the marginal utility of money won't change radically, then that further undermines the point made in the original post, since their entire argument relies on the change in marginal utility - you can't have it both ways! (Why would you posit that consumers change their savings rates when there are still benefits from being richer?)

Still, I see your point that even in such a world, there's a difference between being a trillionaire, or a quadrillionaire. If there are quadrillion-dollar profits to be made, then yes, you will get those chains of backwards induction up and working again. But I find that scenario very implausible, so in reality I don't think this is an important consideration.

By then, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short long enough before TAI that you have time to enjoy your profits.

I get your point, but it just seems a bit 4D-chess. 

If I believe that TAI is coming, it seems obvious to me that I should expect people beyond my peers to understand that TAI is coming. I could even encourage this by shouting it from the rooftops after making the bet. (The strategy might not be effective given that these views are already not well-kept secrets, but this seems to strengthen the possibility that others will understand without me shouting.) At which point we'd be in the new equilibrium.

Also, @Joel Becker, at this point you have called my thinking "pretty tortured" twice (in comments to the original post) and "4D-chess" here. Especially the first phrase seems - at least to me - more like soldier mindset than scout mindset, in that I don't see how you'd make a discussion more truth-seeking, or enlighten anyone, when using words like that.

I try to ask both "what does Joel know that I don't?" and "what do I know that Joel doesn't, and how can I help him understand it?". This post is my attempt at engaging in that way. In contrast, I don't see your comments offering much new evidence (e.g., in the comments to the original post you write things such as "Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb" - which you should realize I am well aware of; I am making my argument without that assumption, so you are only arguing against a straw man). So I will try to offer my explanation one more time, in the hopes that it could lead to a productive debate.

Let's use a physical analogy for financial markets - say, a horse race track. People take their money there, store it for some time, and take out a different amount of money when they leave, depending on the quality of their bets. If interest rates are ruled by capital supply, then making a bet on interest rates is akin to betting on how much people will bet tomorrow.

So if you believe that the horse race track is going to burn down tomorrow, you can of course go to the track and place the bet "trading volumes in 2 days are going to be really low" - and if you're right about the fire, you're likely also right about the volumes. But in the meantime, the track burned down, and no one is left to pay out your winnings. Now of course, you can find someone who's willing to buy you out of the bet before things burn down, if you convince them that it is a safe way to profit. You can tell everyone about the forest fire you observed nearby, and how in 24 hours it is going to reach the track and burn it to the ground. And people can believe your evidence. But that's not going to get anyone to buy you out of the bet you made, since they realize that they will be left holding the burned bag - unless they can find an even bigger fool to sell to. So the only way you can profit from your knowledge of the impending fire is to pull all of your bets, so you don't have cash inside the building when it burns down. And that's going to decrease the volumes on the market a little bit, but it is a tiny fraction of the total, since there are many bettors at the track.

Now this analogy isn't perfect, but my point stands - the equilibrium you're hypothesizing doesn't exist. If you're hypothesizing a capital supply-side response to short AI timelines, that can only happen if a large fraction of consumers decide to decrease their savings rates, and that would likely require such overwhelming evidence for near-term AI that it would no longer be a leading indicator. (As stated in the earlier comment, I think the capital demand-side argument has more merit, however.)

Okay, I have attempted to clarify my thinking on multiple occasions now. In contrast, my experience is that you seem reluctant to engage with my actual arguments, offer few new pieces of evidence, and describe my thinking in quite disparaging terms, which adds up to a poor basis for further discussion. I don't think this is your intention, so please take this for what it is - an attempt at well-meaning feedback, and encouragement to revisit how you engage on this topic. Until I see this good-faith effort I will consider this argument closed for now.

Jakob, I sincerely apologize for my unhelpful (or at the very least unenlightening) phrases that have come across as soldier mindset/rude.

I was commenting as I would on the unshared google doc of a friend asking for feedback. But perhaps this way of going about things is too curt for a public forum. Again, I'm sorry.

(I will probably reply on the substance later; currently too busy. I think there's a decent chance that I will agree with you that, in addition to being rude and craply communicated and coming across as soldier mindset, my previous comments reflected sloppy thinking.)

Thank you Joel! I appreciate it

It seems to me you don’t get the point. The point of the post is that the equilibrium you’re hypothesizing doesn’t really exist. Individuals can only amp up their own consumption by so much, so you need a ton of people partying like it’s the end of the world to move capital markets. And that’s what you’d be betting on - not if the end is near but if everyone will believe it to the degree that they materially shift their saving behavior.

At least, if you only consider the capital supply-side argument in the original post, this is why it would fail. IIRC they don't consider the capital demand side (i.e., what companies are willing to pay for capital). If a lot of companies are suddenly willing to pay more for capital - say, because they see a bunch of capital-intensive projects suddenly being in-the-money, either because new technology made new projects feasible, or because demand for their products is skyrocketing - then you could still see interest rates rise. I didn't discuss this factor here, since it wasn't the focus of the original post, but Carl Shulman has made it elsewhere - on The Lunar Society podcast, I think. Now if near-term TAI were to create those dynamics, then interest rates could indeed predict TAI, and the conclusion of the first post would happen to hold, though it would be for entirely different reasons than they state, and it would be contingent on the capital demand-side link actually holding.

I would really like to see a graphical representation of the logic involved in the original AGI and EMH post as well as how this post responds to that logic. As I pointed out in a comment on the original post, I think a key causal mechanism of the "EMH" outcome is feedback loop(s): 

  1. People who are "in the know" can normally profit by being "in the know", which enables them to use more profits to further correct the market towards efficiency.
  2. People who were not originally "in the know" can see that someone is systematically profiting (or competing beliefs/strategies are systematically failing), and then educate themselves to join in the profit-making, which helps correct the market.

But my point is that when there is little opportunity for such feedback loops, this does not reliably hold. The "big loop" in this case is "AGI happens and then... we're all dead or super rich"; it's not actually a loop. The smaller loops are about changes in people's beliefs about AGI timelines, but it's unclear what exactly these supposed loops look like, including which actually make sense from a game-theoretic perspective. For example, some narratives I've seen/imagined appear to rely on someone at the end of the chain "holding the bag" of [treasury bonds/etc.] right before the economy goes crazy in anticipation of AGI, at which point you have both counterparty risk (i.e., they may not get paid) and value risk (i.e., getting paid does not do you much good). 

Ultimately, economics is complex. I think it might help people better understand if the disagreeing parties in this discussion used communication methods that were easier to interpret/dissect. I think one such method would be diagrams (although I'm not fully confident that is optimal).

Thanks Harrison! Indeed, the "holding the bag" problem is what removes the incentive to "short the world", compared to any other short positions you may wish to take in the market (which also have a timing problem - the market can stay irrational even if you're right - but where there is at least a market mechanism creating incentives for the market to self-correct). The "holding the bag" problem removes this self-correction incentive, so the only way to beat the market is to consume more, and a few investors won't unilaterally change the market price.


Why do you think the fraction of investors who believe that the potential outcomes of TAI include situations besides extinction and future profits being worthless is negligible? 

I don't think this. Where do you think I say that?

These are the scenarios defined in the original post. I simply run with the assumptions of the argument they present, and show that their conclusion doesn't follow from those assumptions. That doesn't mean I think all the assumptions are accurate reflections of reality. The fact that TAI can play out in many ways, and that investors may have very differing beliefs about what it means for their optimal savings rate today, is just another argument for why we shouldn't use interest rates as a measure of AI timelines, which is what I argue in this post.


The wording you used in the post was about "savvy" investors, but my naive understanding of markets is that savviness doesn't particularly matter here.

savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption.

 

The short reason is this:

  • The reason for an investor to make a bet is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyway)
  • Therefore, there is no way for them to win by betting on near-term TAI

If there are non-negligible portions of investors who believe in near-term TAI and also value future profits, doesn't that put a hole through the argument? 

See my response to Carl further up. This follows from accepting the assumptions of the original post. I wanted to show that even with those assumptions, their conclusions don't follow. But I don't think the assumptions are realistic either.

I have updated the post to reflect this