50 randomly chosen stocks are much better diversified than 50 stocks that are specifically selected for having a high correlation to a particular outcome (e.g., AI development).
This paper provides some more in-depth explanation of what I was talking about with the math. It's fairly technical, but it doesn't use any math beyond high school algebra/statistics.
The key point I was making is that, if markets are efficient, then you shouldn't expect a 5% (or even 4.7%) geometric mean return from the AI portfolio. Instead, you should expect more like 1.3%. I might have messed up some of the details, but I'm confident that the geometric return for an undiversified portfolio in an efficient market is meaningfully lower than the global market return. This is not to say that mission hedging is a bad idea, just that this is an important fact to take into account.
Thanks for making this model extension!
I believe the most important downside to a mission hedging portfolio is that it's poorly diversified, and thus experiences much more volatility than the global market portfolio. More volatility reduces the geometric return due to volatility drag.
Example case:
In geometric Brownian motion, arithmetic return = geometric return + stdev^2 / 2, so the market's arithmetic return is 5% + 15%^2/2 ≈ 6.1%. If the AI portfolio has the same arithmetic return but 30% volatility, its geometric mean return is 5% + 15%^2/2 - 30%^2/2 = 1.6%. If we still assume a 20% return to AI stocks in the short-timelines scenario, that gives a 1.3% return in the long-timelines scenario. And the annual return thanks to mission hedging is -1.1%.
(I'm only about 60% confident that I set up those calculations correctly. When to use arithmetic vs. geometric returns can be confusing.)
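As a sanity check, the volatility-drag arithmetic can be reproduced in a few lines (this assumes the AI portfolio has the same arithmetic return as the market, which is what the efficient-markets argument implies):

```python
# Volatility drag under geometric Brownian motion:
# arithmetic return = geometric return + stdev**2 / 2
market_geo = 0.05   # market geometric return
market_vol = 0.15   # market volatility
ai_vol = 0.30       # AI portfolio volatility

market_arith = market_geo + market_vol**2 / 2  # 6.125% arithmetic return
ai_geo = market_arith - ai_vol**2 / 2          # same arithmetic return, more drag
print(f"{ai_geo:.1%}")  # -> 1.6%
```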
Of course, you could also tweak the model to make mission hedging look better. For instance, it's plausible that in the short-timelines world, money is 100x more valuable instead of 10x, in which case mission hedging is equivalent to a 24% higher return even with my more pessimistic assumption for the AI portfolio's return.
It seems to me that for mission hedging to work, there needs to be a strong positive relationship between production and stock price. That is, when (say) a fossil fuel company produces more oil, its stock price goes up. That might happen, but it might not. Several things need to happen:

1. Increased production increases the company's revenue (rather than pushing down prices enough to offset the extra volume).
2. The increased revenue translates into increased profits.
3. The increased profits translate into a higher stock price.
Step 3 seems very likely to happen in the long run, but steps 1 and 2 seem more uncertain to me, and I don't have a great understanding of the relevant economics. Do we have good reason to expect increased production to translate into stock returns? Or do we at least understand the circumstances under which it will or will not translate?
(Alternatively, we could look at the relationship between, say, oil production and the price of oil futures. This is a simpler relationship, but I'd guess the two numbers are basically uncorrelated. They will move together if demand changes, and will move oppositely if supply changes.)
It was an accident. I should have made a post, not a question.
I mistakenly submitted this as a question instead of as a post. Is there any way to convert it to a post?
The question is intended to look at tail risk associated with stock markets shutting down. Transformative AI may or may not constitute such a risk; for example, the AI might shut down the stock market because it's going to do something far better with people's money, or it might shut down the market because everyone is turned into paperclips. So I think it should be unconditional.
In a number of cases, this reduction in hospital admissions and emergency room visits resulted in a cost savings in excess of $10,130, the cost of the average wish. In other words, Make-A-Wish helped, and helped in a cost-effective way.
This doesn't follow. The $10,130 cost savings went into hospital budgets, not into buying bednets, so it doesn't particularly matter that this money was saved.
Also, it seems implausible that Make-A-Wish could meaningfully reduce hospital admissions, so I'm inclined to disbelieve this study.
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?
Yes, the intention is to predict the maximum length of time that foundations and DAFs created now (or before now) can continue to exist.
It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future [...]
Agreed.
I have a doc on my computer with some notes on Metaculus questions that I want to see, but either haven't gotten around to writing up yet, or am not sure how to operationalize. Feel free to take any of them.
"By 2040, there will be a broadly accepted answer on how to construct a rank ordering of possible worlds where some of the worlds have a nonzero probability of containing infinite utility."
"In 2121, it will be broadly agreed that, all things considered, donations to GiveDirectly were net positive."
"By 2040, there will be a broadly accepted answer on what prior to use for the lifespan of humanity." see https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
"By 3020, a macroscopic object will be observed traveling faster than the speed of light."
To be clear, my model is exactly the same as your model, I just changed one of the parameters—I changed the AI portfolio's overall expected return from 4.7% to 1.3%.
It's not intuitively obvious to me whether, given the 1.3%-return assumption, the optimal portfolio contains more AI than the global market portfolio. I know how I'd write a program to find the answer, but it's complicated enough that I don't want to do it right now.
(The way you'd do it: model the correlation between the AI portfolio and the rest of the market, and set your assumptions such that the optimal value-neutral portfolio, given the two investments "AI stocks" and "all other stocks", equals the global market portfolio. Then write a utility function that assigns more utility to money in the short-timelines world, and maximize that function with the percent allocation to each investment as the independent variable. You can do this with Python's scipy.optimize, or any other similar library.)
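Here's a minimal sketch of that optimization. The 30%/15% volatilities and the 20% short-timelines AI bonus come from the numbers above; the correlation, the probability of short timelines, and the equal arithmetic returns are placeholder assumptions, not calibrated values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative placeholder assumptions:
mu = np.array([0.061, 0.061])   # arithmetic mean returns: [AI stocks, other stocks]
sigma = np.array([0.30, 0.15])  # volatilities
rho = 0.5                       # assumed correlation between the two investments
cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])

p_short = 0.5      # assumed probability of the short-timelines world
value_mult = 10.0  # money is 10x more valuable in that world
ai_bonus = 0.20    # extra AI-portfolio return in the short-timelines world

def neg_utility(w_ai):
    """Negative probability-weighted utility of allocating w_ai to AI stocks."""
    w = np.array([w_ai, 1.0 - w_ai])
    # geometric return ~= arithmetic return - variance / 2 (volatility drag)
    geo = w @ mu - (w @ cov @ w) / 2.0
    u_long = geo
    u_short = geo + w_ai * ai_bonus
    return -(p_short * value_mult * u_short + (1.0 - p_short) * u_long)

res = minimize_scalar(neg_utility, bounds=(0.0, 1.0), method="bounded")
print(f"optimal AI allocation: {res.x:.0%}")
```

With a value multiplier this large, the hedging term swamps the volatility drag and the optimizer pushes heavily toward AI; shrinking value_mult or raising rho pulls the answer back toward the market portfolio.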