There's a longstanding debate in EA about whether to emphasize giving now or giving later – see Holden in 2007 (a), Robin Hanson in 2011 (a), Holden in 2011 (updated 2016) (a), Paul Christiano in 2013 (a), Robin Hanson in 2013 (a), Julia Wise in 2013 (a), Michael Dickens in 2019 (a).

I think answers to the "give now vs. give later" question rest on deep worldview assumptions, which makes it fairly insoluble (though Michael Dickens' recent post (a) is a nice example of someone changing their mind about the issue). So here, I'm not trying to answer the question once and for all. Instead, I just want to make an argument that seems fairly obvious but that I haven't seen laid out anywhere.

Here's a sketch of the argument –

Premise 1: If AGI happens, it will happen via a slow takeoff.

Premise 2: The frontier of AI capability research will be pushed forward by research labs at publicly-traded companies that can be invested in.

  • e.g. Google Brain, Google DeepMind, Facebook AI, Amazon AI, Microsoft AI, Baidu AI, IBM Watson
  • OpenAI is a complication here – it's unclear who will control the benefits realized by the OpenAI capabilities research team.
    • From the OpenAI charter (a): "Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research."
  • Chinese companies that are closed to foreign investment are another complication – I don't know much about that space yet.

Premise 3: A large share of the returns unlocked by advances in AI will accrue to shareholders of the companies that invent & deploy the new capabilities.

Premise 4: Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI.

  • It'd be difficult to identify the particular company that will achieve a particular advance in AI capabilities, but relatively simple to hold a basket of the companies most likely to achieve an advance (similar to an index fund).
  • If you're skeptical of being able to select a basket of AI companies that will track AI progress, investing in a broader index fund (e.g. VTSAX) could be about as good. During a slow takeoff the returns to AI may well ripple through the whole economy.
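As a toy illustration of the basket idea above, here's a minimal Python sketch. All ticker symbols and return figures are hypothetical placeholders, not estimates or recommendations:

```python
# Equal-weight buy-and-hold basket: split capital evenly across the companies
# judged most likely to achieve an AI advance, and let each slice compound.
# All tickers and annual returns below are hypothetical placeholders.

def basket_value(initial_capital, annual_return_by_ticker, years):
    """Value of an equal-weighted basket after compounding each slice."""
    slice_value = initial_capital / len(annual_return_by_ticker)
    return sum(slice_value * (1 + r) ** years
               for r in annual_return_by_ticker.values())

basket = {"GOOG": 0.12, "MSFT": 0.10, "AMZN": 0.11, "FB": 0.09, "BIDU": 0.08}
print(f"$10,000 after 10 years: ${basket_value(10_000, basket, 10):,.0f}")
```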

Conclusion: If you're interested in maximizing your altruistic impact, and think slow-takeoff AGI is somewhat likely (and more likely than fast-takeoff AGI), then investing your current capital is better than donating it now, because you may achieve (very) outsized returns that can later be deployed to greater altruistic effect as AI research progresses.

  • Note that this conclusion holds for both person-affecting and longtermist views. All you need to believe for it to hold is that a slow takeoff is somewhat likely, and more likely than a fast takeoff.
  • If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks' 2018 review (a) is a good starting point here).

Cross-posted to my blog. I'm not an investment advisor, and the above isn't investment advice.

Comments

"returns that can later be deployed to greater altruistic effect as AI research progresses"

This is hiding an important premise, which is that you'll actually be able to deploy those increased resources well enough to make up for the opportunities you forgo now. E.g. Paul thinks that (as an operationalisation of slow takeoff) the economy will double in 4 years before the first 1-year doubling period starts. So after that 4-year period you might end up with twice as much money but only 1 or 2 years to spend it on AI safety.
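To put rough numbers on that (a back-of-the-envelope sketch; the growth rates are just what those doubling times imply, not figures Paul states directly):

```python
# Annual growth rates implied by the two doubling periods in Paul's
# slow-takeoff operationalisation: a 4-year doubling, then a 1-year doubling.
four_year = 2 ** (1 / 4) - 1  # ~0.19, i.e. ~19% per year
one_year = 2 ** 1 - 1         # 1.0, i.e. 100% per year
print(f"4-year doubling implies ~{four_year:.0%}/yr growth")
print(f"1-year doubling implies {one_year:.0%}/yr growth")
```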

Good point – thank you for drawing out that premise.

I find myself getting confused as I think about the year-to-year operationalization of a slow takeoff (the distinction between slow and fast takeoff starts to blur).

It seems like the thing we really care about is AI systems falling out of alignment with our intentions as they grow more capable, and it's not clear where "falling out of alignment" starts in the GDP-doubling framework.

I'll think about this more & update here once/if it crystallizes.

February 2021 update: I thought about it some more; I now feel confident that I'll be able to deploy the gains well enough to make up for the opportunity cost.

I like the general idea that AI timelines matter for all altruists, but I really don't think it's a good idea to try to "beat the market" like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief – but it should. It assumes that you're capable of making better predictions than the aggregate stock market. Don't bet on beating markets.

The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

That sounds like a nice world, but unfortunately I don't think that the market is quite that efficient. (Like the parent, I'm not going to offer any evidence, just express my view.)

You could reply, "then why ain'cha rich?" but it doesn't really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask "then why ain'cha several times richer than you otherwise would be?" but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It's not anything mind-blowing or even obvious to an external observer.
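To gesture at the magnitudes here (my own illustrative numbers, nothing more):

```python
# A modest, persistent edge over the market compounds into "several times
# richer" over a lifetime without ever looking mind-blowing year to year.
# The 7% baseline and 3% edge are illustrative assumptions.
market, edge, years = 0.07, 0.03, 30
ratio = ((1 + market + edge) / (1 + market)) ** years
print(f"A {edge:.0%}/yr edge sustained for {years} years -> {ratio:.1f}x richer")
```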

"Don't try to beat the market" still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and "hyper-informed" in particular is really overstating it. (As is "incredibly high prior" elsewhere.)

(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)

I don't think my argument here is analogous to trying to beat the market. (i.e. I'm not arguing that AI research companies are currently undervalued.)

I'm saying that in slow-takeoff scenarios, AI research companies would have a ton of growth potential.

See growth vs. value investing.

Edit: clarified my view in this comment.

I don't think my argument here is analogous to trying to beat the market. (i.e. I'm not arguing that AI research companies are currently undervalued.)

I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven't considered some factor – the growth potential of AI companies – and that's why they are such a good purchase relative to other stocks and shares.

My interpretation of Premise 4 ("Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI") is that Milan is asserting that a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities. This does not seem like a controversial claim, since it is analogous to the stocks of biotech companies skyrocketing after good news is announced, like regulatory approval to launch a drug. The market may have priced in the probability of a company leading a slow AI takeoff in the next X years, but having that actually happen is an entirely different story.

The concept of investing in something that generates a lot of capital if something "bad" happens – e.g. investing in AI stocks with the goal of having a lot more capital to deploy if a suboptimal/dangerous AI takeoff occurs – is known as "mission hedging." EAs have covered this topic, for example Hauke's 2018 article A generalized strategy of 'mission hedging': investing in 'evil' to do more good. Mission hedging is currently a recommended research topic on Effective Thesis.
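A toy expected-value version of the hedging logic might look like this (the scenario probabilities, return multipliers, and impact values are all made up for illustration):

```python
# Mission hedging in miniature: the hedged portfolio pays off most in exactly
# the worlds where a marginal dollar for AI safety is most valuable.
# All numbers below are hypothetical.
scenarios = [
    # (probability, hedged multiplier, index multiplier, impact per dollar)
    (0.3, 5.0, 1.5, 10.0),  # slow takeoff: AI stocks soar, money matters most
    (0.7, 1.5, 1.5, 1.0),   # no takeoff: ordinary returns, ordinary impact
]
hedged = sum(p * m_h * v for p, m_h, _, v in scenarios)
index = sum(p * m_i * v for p, _, m_i, v in scenarios)
print(f"Expected impact per dollar: hedged {hedged:.2f} vs index {index:.2f}")
```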

I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes. EAs will have different answers to this question depending on their model of how they can deploy funds now and in the future to impact the world.

The title of the article ("If slow-takeoff AGI is somewhat likely, don't give now" at the time of writing) implies giving now is bad because mission hedging all of that money for the purpose of donating later will lead to better outcomes. I believe the article should be modified to indicate that EAs should evaluate employing mission hedging for part or all of their intended donations rather than suggest that putting all intended donations towards mission hedging and ceasing to donate now is an obviously better option. After all, a hedge is commonly known as a protective measure against certain outcomes, not the sole strategy at work.

I agree this makes more sense in terms of mission hedging.

I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.

Agree this is important. As I've thought about it some more, it appears quite complicated. Also seems important to have a view based on more than rough intuition, as it bears on the donation behavior of a lot of EAs.

I'd probably benefit from having a formal model here, so I might make one.
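A very rough skeleton of such a model (every parameter below is a placeholder intuition, not an estimate):

```python
# Give-now vs invest-then-give: donating now buys impact at today's
# cost-effectiveness; investing compounds capital, but cost-effectiveness
# may decay as low-hanging fruit is picked and the window for useful
# AI-safety spending shrinks. All parameters are placeholders.

def donate_now(capital, cost_effectiveness=1.0):
    return capital * cost_effectiveness

def invest_then_donate(capital, annual_return=0.15, years=10, ce_decay=0.08):
    future_capital = capital * (1 + annual_return) ** years
    future_cost_effectiveness = (1 - ce_decay) ** years
    return future_capital * future_cost_effectiveness

print(f"Donate now:       {donate_now(10_000):>9,.0f} impact units")
print(f"Invest 10y, then: {invest_then_donate(10_000):>9,.0f} impact units")
```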

Thanks for tying this to mission hedging – definitely seems related.

Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities.

Perhaps that, but even if they don't, the returns from a market-tracking index fund could be very high in the case of transformative AI.

I'm imagining two scenarios:

1. AI research progresses & AI companies start to have higher-than-average returns

2. AI research progresses & the returns from this trickle through the whole market (but AI companies don't have higher-than-average returns)

A version of the argument applies to either scenario.

Clarified my view somewhat in this reply to Aidan.

The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.

I totally might be misunderstanding your point, but here's the contradiction as I see it. If you believe (A) the S&P500 doesn't give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.

I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you're hoping to predict the future of AI better than the market, I'd say the expected value of AI is already reflected in tech stock prices.

To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.

(A) the S&P500 doesn't give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued..., then you cannot believe that (C) AI stock is a better investment opportunity than any other.

I'm engaging the question of whether to make substantial donations now or whether to save for later. I don't have a strong view on what investments are the best savings vehicle, though I do have an intuition that the market is undervaluing the growth potential of AI-intensive companies.

So I suppose I disagree with both (A) and (B). I think the S&P 500 probably will generate high enough returns to justify investing instead of donations, and I think AI companies are somewhat undervalued.


To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued

We may be using different definitions of undervalued (see this comment). In the sense that I think AI companies are worth investing in because I think their stock price will be higher in future, I agree they're "undervalued."

But I don't think they're undervalued in the sense that the market is mis-valuing their current assets, etc. If their stock price is higher in the future, I'd expect this to be because they've made real productivity gains.

Also probably worth clarifying that the "slow" in slow takeoff is still incredibly fast compared to historical economic growth. (See the graph in Paul's takeoff post.)

It seems plausible that in the slow-takeoff scenario, almost all returns to GDP growth are accruing to those who own capital, and in particular those who own the companies driving the growth.

(This is all highly speculative and is making assumptions in the background, e.g. property rights still being meaningful in a slow-takeoff scenario.)

I think the background assumptions are probably doing a lot of work here. You'd have to go really far into the weeds of AI forecasting to get a good sense of what factors push which directions, but I can come up with a million possible considerations.

Maybe slow takeoff is shortly followed by the end of material need, making any money earned in a slow takeoff scenario far less valuable. Maybe the government nationalizes valuable AI companies. Maybe slow takeoff doesn't really begin for another 50 years. Maybe the profits of AI will genuinely be broadly distributed. Maybe current companies won't be the ones to develop transformative AI. Maybe investing in AI research increases AI x-risks, by speeding up individual companies or causing a profit-driven race dynamic.

It's hard to predict when AI will happen; it's worlds harder to translate that into present-day stock-picking advice. If you've got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can outpredict the market. But beating the market is the only way to generate higher than average returns in the long run.

But beating the market is the only way to generate higher than average returns in the long run.

I'm not claiming that investing in AI companies will generate higher-than-average returns in the long run.

I'm claiming that an altruist's marginal dollar is better put towards investment (in AI companies or in the S&P 500) than towards present-day donations.

Fantastic, I completely agree, so I don't think we have any substantive disagreement.

I guess my only remaining question would then be: should your AI predictions ever influence your investing vs donating behavior? I'd say absolutely not, because you should have incredibly high priors on not beating the market. If your AI predictions imply that the market is wrong, that's just a mark against your AI predictions.

You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn't expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don't expect AI companies to have higher-than-average future returns.

Would you agree with that?

You agree that we shouldn't expect AI companies to generate higher-than-average returns in the long run.

I feel somewhat confused about whether to expect that AI companies will beat the broader market.

On one hand, I have an intuition that the current market price hasn't fully baked in the implications of future AI development. (Especially when I see things like most US executives thinking that AI will have less of an impact than the internet did.)

On the other, I agree with your point about it being very hard to "beat the market" and generally have a high prior on markets being efficient.

Inadequate Equilibria seems relevant here.


Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don't expect AI companies to have higher-than-average future returns.

I do think that your AI predictions should bear on your decision to invest or donate now: even if AI companies won't have higher-than-average returns, the average return of future firms could be extremely high (given productivity gains unlocked by AI), and it would be a shame to miss out on that return because you donated the money you otherwise would have invested (in a basket of AI companies or a broader index fund like VTSAX, wherever).

Also, I was being somewhat sloppy in the post on this point – thanks for pushing on it!

I've edited the post to better reflect my view.

[This comment is no longer endorsed by its author]

If AI research companies aren't currently undervalued, then your Premise 4 (being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI) is incorrect, because the market will have anticipated those outsized returns and priced them in to the current share price.

Hm, I guess so, but wouldn't all investing be value investing under this framing? (i.e. it'll always be the case that when I make an investment, I'm expecting that the investment is a good deal / will increase in value / the current price is "too low" given what I think the future will be like.)

We might be getting tripped up on semantics here.

(edited) I just saw your link above about growth vs value investing. I don't think that's a helpful distinction in this case, and when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value. (Maybe that's less true for startups, but we're talking about already-listed companies here).

I do think the core claim of "if AGI will be as big a deal as we think it'll be, then the markets are systematically undervaluing AI companies" is a reasonable one, but the arguments you've given here aren't precise enough to justify confidence, especially given the aforementioned need for caution. For example, premise 4 doesn't actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments. I think you can shore that link up, but not without contradicting your other point:

I'm not claiming that investing in AI companies will generate higher-than-average returns in the long run.

Which means (under the definition I've been using) that you're not claiming that they're undervalued.

...when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value.

I think it's a spectrum:

  • Value stocks are where most of the case for investment comes from the market mis-pricing the firm's current operations
  • Growth stocks are where most of the case for investment comes from the future (expected) growth of the firm

For example, premise 4 doesn't actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments.

Agreed; I clarified my position after Aidan pointed this out: (1, 2)

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal. If there is a huge AI boom, a prescient investment in AI companies might leave you with a larger share of the world economy – but you'll probably still be a much smaller share of total dollars directed at influencing AI.

That said, I do think this is a reasonable default thing to do with dollars if you are interested in the long term but unimpressed with the current menu of long-termist philanthropy (or expect to be better-informed in the future).

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal.

Makes sense.

I realize I was writing from the perspective of a small-scale donor (whose donations trade off meaningfully against their saving & consumption goals).

From the perspective of a fully altruistic donor (who's not thinking about such trade-offs), doing current AI philanthropy seems really good (if the donor thinks current opportunities are sensible bets).

There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding. The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.

The first of these premises is probably true. The second is probably false. The third is definitely false.

The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding.

Michael Dickens engages with something similar in this post.

In the case of transformative, slow-takeoff AI driven by for-profit companies, it seems reasonable to assume that the economy is going to grow faster than the marginal cost of doing good rises, because gains from AI seem unlikely to be evenly distributed.

The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.

I'm unsure whether AI company growth is adequately priced in or not.

If it is, I think the argument still holds. The returns from an index fund could be very high in the case of transformative AI, so holding index funds would probably be better than donating now in that case.

See also the discussion here & here.

When planning how to donate, it seems very important to consider the impact of market returns increasing due to progress in AI. But I think more considerations should be taken into account before drawing the conclusion in the OP.

For each specific cause, we should estimate the curve over time of EV-per-additional-dollar-invested-in-2019-and-used-now (given an estimate of market returns over time). As Richard pointed out, for reducing AI x-risk, it is not obvious we will have time to effectively use the money we invest today if we wait for too long (so "the curve" for AI safety might be sharply decreasing).

Here is another consideration I find relevant for AI x-risk: in slow takeoff worlds more people are likely to become worried about x-risk from AI (e.g. after they see that the economy has doubled in the past 4 years and that lots of weird things are happening). In such worlds, it might be the case that a very small fraction of the money that will be allocated for reducing AI x-risk would be donated by people who are currently worried about AI x-risk. This consideration might make us increase the weight of fast takeoff worlds.

On the other hand, maybe in slow takeoff worlds there is generally a lot more that could be done for reducing x-risk from AI (especially if slow takeoff correlates with longer timelines), which suggests we increase the weight of slow takeoff worlds.

If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks' 2018 review (a) is a good starting point here).

I just wanted to note that some of the research directions for reducing AI x-risk, including ones that seem relevant in fast takeoff worlds, are outside of the technical AI alignment field (for example, governance/policy/strategy research).

February 2021 update: I now think holding these ETFs, Tesla, and stock in the companies discussed in the post is a good way to quickly approximate the optimal AGI-bull-case portfolio.

(This isn't financial advice.)