
Edit: after doing some basic estimates, I've found that the value of doing this is very low, generally less than 100 basis points (http://i.imgur.com/9of14il.png). This strategy would therefore only make sense as a tiebreaker between otherwise very similar investments.

-

All of my investments are split between US stocks, US bonds, and EU stocks, and here I will explain why. The ideas below apply to any effective altruist individual or organization investing funds with the intention of later using them for altruistic projects.

Background

Financial advice for the common person often suggests diverting some investments to international markets in order to mitigate domestic market risk. Emerging markets in particular (like China, Brazil, and South Africa) are considered valuable because their risks are not well correlated with Western markets. Frontier markets (like Sri Lanka, Kenya, and Pakistan) are similar in this regard. The least-developed countries, like Uganda and Malawi, are likely similar as well, although they are not common investment targets for several reasons. International stock indices generally hold a mix of developed and emerging market stocks.

The limited covariance of international, emerging, and frontier stocks with the US/EU/Japan markets means that a typical risk-averse investor can benefit, reducing their risk while preserving expected returns, by taking a portfolio of US stocks and reallocating some of it to foreign markets. It can even be worth sacrificing some expected return: even if an investor believes that US stocks will outperform emerging market stocks, it may still be prudent to hold some emerging market stocks, because doing so reduces the severity of their exposure to the domestic market.
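To make the diversification arithmetic concrete, here is a minimal sketch of two-asset portfolio math; the volatilities, correlation, and expected returns are illustrative assumptions, not market estimates:

```python
import math

# Illustrative assumptions, not market estimates.
vol_us, vol_em = 0.16, 0.22   # annual volatility: US and emerging-market stocks
corr = 0.6                    # assumed correlation between the two markets
ret_us = ret_em = 0.07        # assume equal expected returns for simplicity

for w in (1.0, 0.8, 0.6):     # portfolio weight in US stocks
    variance = ((w * vol_us) ** 2 + ((1 - w) * vol_em) ** 2
                + 2 * w * (1 - w) * corr * vol_us * vol_em)
    expected = w * ret_us + (1 - w) * ret_em
    print(f"US weight {w:.0%}: return {expected:.1%}, "
          f"volatility {math.sqrt(variance):.1%}")
```

With these made-up numbers, shifting 20% into the foreign market keeps the expected return at 7% while lowering portfolio volatility from 16% to about 15.8% - the risk-averse investor's free lunch.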

However, it is commonly accepted by now that altruists should generally be less financially risk averse than other people. This implies that we shouldn't worry much about diversification, only about expected value. So if an altruistic investor thinks that US markets will beat emerging markets, they should invest only in the US, and vice versa. If they perceive no significant difference, it doesn't matter much.

The basic idea

If you are investing money and plan to use it in the future to support a charitable cause, there is an additional consideration relevant to your choice of investments: the value of each dollar of your donations, conditional on your investments being successful. If you expect that the value of altruistic projects will be correlated with a particular set of financial instruments, then you should prefer investing in those instruments: the potential profits will be more valuable to you, whereas the potential losses are more likely to occur in scenarios where your money doesn't have much use anyway. This means that, rather than maximizing expected financial returns, we should seek investments whose returns covary with the value of our donations.

The basic idea, if you're not convinced, is explained in greater detail here.
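As a toy illustration of the argument (with made-up numbers and just two equally likely world-states), compare an investment correlated with the cause against an uncorrelated one with the same expected financial return:

```python
# Two equally likely world-states; in the "good for the cause" state each
# donated dollar is assumed to be worth more. All numbers are made up.
p_good = 0.5
value = {"good": 1.5, "bad": 0.5}          # marginal value of a donated dollar

# Investment A pays off precisely when donations are most valuable;
# investment B returns the same 5% in both states.
returns_a = {"good": 0.20, "bad": -0.10}   # expected return: 5%
returns_b = {"good": 0.05, "bad": 0.05}    # expected return: 5%

def expected_impact(returns):
    """Expected (wealth at donation time) x (value per dollar donated)."""
    return sum(p * (1 + returns[s]) * value[s]
               for s, p in [("good", p_good), ("bad", 1 - p_good)])

print(expected_impact(returns_a))  # 1.125
print(expected_impact(returns_b))  # 1.05
```

Both investments have the same 5% expected return, but the correlated one delivers about 7% more expected impact here, because its gains land in the state where dollars matter most.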

Poverty donations

If you plan to make donations to reduce poverty, then you should seek to correlate investment returns with scenarios in which aid to the developing world is more effective. The reason that aid to the developing world is so much more effective than aid to the US or other rich countries is that people there have much less income and therefore more unmet basic needs. So the ideal financial instrument for a poverty-focused altruist who is saving money would be one that hedges against a rise in the wealth of the least developed countries. The most obvious choice would be to short-sell the stocks of companies based in those nations. Investing in the least developed countries is already a very niche area of finance, however, so I don't know what the prospects are for shorting those stocks. There may be other financial instruments which are sufficiently inversely covariant with the economies of the least developed countries to be worth pursuing, but I can't say for sure what they are.

This also relies on the assumption that a growing economy will help the poorest people in the country. While this is not always strictly true, it's a general trend of capitalist economies ("a rising tide lifts all boats"). If you are considering making these investments, it may be worth investigating the degree and speed of this effect. Even if the economic gains don't reach the poorest quickly, though, a fiscally stronger government in a growing economy will be better able to fund domestic health and welfare provision, with a similar result.

Animal rights and welfare

Meat consumption is strongly correlated with incomes in developing countries, and so are "self-expression" values, which imply a wider role for progressive ethics in public life and decision making. If you think that much of the value of animal advocacy comes from preventing future meat consumption in currently rising economies, whether emerging, frontier, or least-developed, then you'll want to be positively exposed to those markets: if these economies become strong, then people will gain both the money to buy lots of meat and the socioeconomic security to expand their moral circle, making activism more highly leveraged. This means those countries should get much or even all of your money.

However, if you think that the value of animal advocacy lies mainly in shifting Western practices, then it's not really clear to me what to do. There is evidence that meat consumption eventually begins to decline with advanced economic development, which would naively imply that you should hold short positions on U.S. markets. But if changing moral values caused by economic development make advocacy more effective in the US (or if wealth makes people more open to buying meat replacements or humane meat), then this may be the wrong position to take. So whether to be short, long, or indifferent is not clear to me, but analyzing the issue may provide better answers.

An alternative approach, instead of investing on the basis of whole markets, is to make investments oriented around companies which specialize in meat production, animal feed, or meat replacements.

Movement building

If you view EA as something that will be valuable for setting global priorities and will grow much larger than it is now, it makes sense to be positively correlated with U.S., western European, and Australian/New Zealand markets.

One reason for this is that local economic success will increase the amount that new EAs are able to donate, and may improve other factors such as their education and productivity. The great majority of our new members are from the West, and I believe this is likely to continue indefinitely due to language barriers, geographic distance, and cultural values. This means that the average incomes of new members will be higher when Western economies are strong, so recruiting new people will be more valuable. A good objection to this idea is that the wealthier EAs are and the larger the community grows, the better funded our projects will be, so marginal funding will be less important. However, the more people there are in the movement, the more new projects we will have across disparate cause areas, so the value of marginal dollars might not decline significantly.

A second reason for correlating movement-building funds with Western markets is that it is important for EA to be situated in the culture which has global dominance. That means EAs will have access to the most powerful governments, the most influential individuals, and the most elite institutions in the world. Western countries' practices and moral values have spread to other countries through a variety of mechanisms owing to our cultural, economic, and military dominance of the global system (the cultural and military elements being due at least in part to the economic one). If the West retreats from its position here, then EA will be less influential over the general course of global civilization. But if the West remains dominant or becomes even stronger relative to other countries, then EA will be more valuable.

A final factor is that increased wealth in the West is likely to make people more receptive to ideas of altruism; see the World Values Survey findings mentioned above. This means that efforts in EA marketing will see a higher rate of return.

Artificial intelligence

I think that AI safety donors should be positively correlated with U.S. AI companies.

One consideration here parallels the second reason described under movement building. AI efforts in the U.S. are situated amid research and advocacy efforts where we have some measure of influence and leverage. If competent AI is developed in China or Japan, it will be less influenced by our safety and ethics advocacy than if it were developed in the U.S. AI ethics and safety organizations may arise in those countries in the future, but we will be less able to communicate with them and evaluate the value of funding them due to language and distance barriers, and they are unlikely to match EA values well, given the scarcity of EAs there along with cultural and moral differences. This is all the more true if powerful AI is developed with some level of institutional secrecy, as it is likely to be if developers or governments perceive strategic stakes.

The reason this matters here is that progress in general AI will lead to potentially enormous economic returns, and those returns are likely to be concentrated in the companies which actually develop AI.

A second argument is that the US/West is currently the world leader in AI development, and game-theoretic modeling implies that having a clear world leader in AI improves the degree to which safety considerations will be adhered to. So efforts at advocacy and research in safety and ethics will make more headway if the US remains in the lead than if other nations catch up, as China is beginning to do.

A third argument is that, even holding constant the prospect of general AI being developed first in the US/West, more rapid AI development increases the value of our research, since it shrinks the window of time available for safety work and advocacy.

Does economic growth really matter?

It may seem odd to claim that market fluctuations matter much for these long-run considerations about the trajectory of the global system. However, fluctuations over these timeframes aren't just a reflection of past growth in economic output; they also signal investors' rational expectations about the value to be had in various markets.

A note of caution: efficient markets don't imply equal expected returns

Even if you accept the Efficient Markets Hypothesis, you shouldn't assume that a financial instrument whose variance you have a personal comparative advantage in preferring is one you ought to buy without some measure of due diligence. Many investments, such as frontier market stocks and anything used for hedging, are selected for reasons which don't fit neatly onto risk/return curves - namely, their limited or negative covariance with mainstream markets. This means that an efficient market may bid them up to the point that their expected returns are lower than those of other financial instruments with equal variance. So you shouldn't assume that you should automatically make a certain investment just because your altruism lines up with it; ordinary investors often have similar reasons for making those investments.
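A CAPM-style sketch shows the mechanism; the parameters below are assumptions for illustration. Assets that hedge the broad market carry a negative beta, and an efficient market prices them so that their expected returns can fall below even the risk-free rate:

```python
# CAPM: E[r_i] = r_f + beta_i * (E[r_m] - r_f).  Parameters are assumptions.
r_f = 0.02              # risk-free rate
market_premium = 0.05   # E[r_m] - r_f

assets = [("broad index", 1.0), ("uncorrelated asset", 0.0), ("market hedge", -0.5)]
for name, beta in assets:
    expected = r_f + beta * market_premium
    print(f"{name:18s} beta {beta:+.1f} -> expected return {expected:+.1%}")
```

The hedge's expected return comes out at -0.5% despite its volatility, because other investors are willing to pay for its insurance value; an altruist buying it for covariance reasons is competing with them.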

Conclusion

For most cause areas, there are arguments that EAs who plan to donate their investments should allocate funds on the basis of the expected value of donations conditional on particular investments succeeding or failing. The arguments may be strong enough to justify investing somewhere with a lower expected financial return, or to justify investing where haste considerations might otherwise have favored donating immediately.

Comments



Thanks for this. Hauke Hillebrandt has been thinking about this concept of what he calls 'mission hedging' for a while; hopefully he'll weigh in.

In my view, the most potentially compelling example of this is shorting Facebook stock. From publicly available information, it seems that the large majority of Dustin Moskovitz and Cari Tuna's wealth is still in Facebook. If Facebook were to go under (unlikely, but possible), then the large majority of explicitly EA money would disappear. Given strongly diminishing returns, if you're interested in funding the areas that Open Phil funds that have a small gap (like AI or EA community growth), you'd therefore have a much bigger impact in the world in which Facebook decreases in value.

I'm glad you're thinking about this. Investing is an important issue and I believe there's room for more discussion of the topic.

[I]t is commonly accepted by now that altruists should generally be less financially risk averse than other people. This implies that we shouldn't worry too much about diversification, but only about expected value.

By diversifying, you can reduce your risk at any given level of return, which also means you can increase your return at any given level of risk. (These are dual optimization problems.) You should also be concerned about correlation with other altruistic investors, and most investors put way too much money in their home country (so, mostly the US and UK).

I don't know that you are claiming this, but you sort of imply it, so to be clear: you should not believe that US stocks have higher expected returns than those of other countries. If anything, you should believe that the US market will perform worse than most other countries' because it's substantially more expensive. Right now the US has a CAPE ratio of 26, versus 21 for non-US developed markets and 14 for emerging markets. CAPE ratio strongly predicts 10-year future market returns.
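[For readers unfamiliar with the metric: CAPE is the current price divided by the trailing ten-year average of inflation-adjusted earnings. A rough sketch, with an invented earnings series and index level:

```python
# CAPE = price / 10-year average of real (inflation-adjusted) earnings.
# The earnings series and index level below are made up for illustration.
real_earnings = [95, 102, 88, 110, 115, 120, 108, 125, 130, 135]
index_level = 2900

cape = index_level / (sum(real_earnings) / len(real_earnings))
print(f"CAPE: {cape:.1f}")  # ~25.7 with these made-up inputs
```
]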

On the covariance-with-charities issue: I'm doubtful that this consideration matters enough to substantially change how you should invest. If your investments can perform 2 percentage points better by investing in emerging markets rather than developed markets (which they probably can), I would expect this to outweigh any benefits from increased covariance. I would need to see some sort of quantitative analysis to be convinced otherwise.

I'm also not convinced that we should actually want to increase covariance rather than decreasing it. By increasing covariance you increase expected value by expanding the tails, but I don't believe we should be risk-neutral at a global scale because marginal money put into helping the world has diminishing utility.

Similar concerns apply to investing in companies that are correlated with AI development. AI companies tend to be growth stocks, which underperform the market in the long run compared to value stocks.

By diversifying, you can reduce your risk at any given level of return, which also means you can increase your return at any given level of risk.

No, because the risk/return frontier is bounded. You can't arbitrarily get more expected returns by taking on more risk, except possibly with complicated financial instruments and trading. For most people the best you can reasonably do is pick an aggressive index; getting further returns is beyond reach. But if you're investing for the purpose of donating, then an aggressive stock index is generally well below your risk threshold.

On the covariance-with-charities issue: I'm doubtful that this consideration matters enough to substantially change how you should invest. If your investments can perform 2 percentage points better by investing in emerging markets rather than developed markets (which they probably can), I would expect this to outweigh any benefits from increased covariance.

You are neglecting more aggressive markets within the US, where there are more options than the emerging markets offer. Small caps probably beat the US market as a whole as well, since they, like emerging markets, are riskier. 2% better annual returns for the same level of risk is ridiculously optimistic.

I'm also not convinced that we should actually want to increase covariance rather than decreasing it. By increasing covariance you increase expected value by expanding the tails, but I don't believe we should be risk-neutral at a global scale because marginal money put into helping the world has diminishing utility.

This is addressed above where it is relevant to each cause area. At the global level, however, EA money is a drop in the bucket.

Similar concerns apply to investing in companies that are correlated with AI development. AI companies tend to be growth stocks, which underperform the market in the long run compared to value stocks.

That is not a settled debate.

You can't arbitrarily get more expected returns for taking on more risk, except possibly with complicated financial instruments and trading.

Paul Christiano suggests leveraged ETFs. There's also buying stocks on margin, which is not terribly hard to set up.
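[A caveat worth flagging on leveraged ETFs: because they rebalance daily, their long-run return is not simply the leverage multiple times the index return; volatility drag eats into it. A minimal simulation under assumed parameters, ignoring fees and borrowing costs:

```python
import random

random.seed(0)
leverage, days = 2.0, 252
mu, sigma = 0.07 / days, 0.16 / days ** 0.5   # assumed daily mean and volatility

index = etf = 1.0
for _ in range(days):
    r = random.gauss(mu, sigma)
    index *= 1 + r
    etf *= 1 + leverage * r   # exposure reset to 2x every day

print(f"index {index - 1:+.1%}, 2x ETF {etf - 1:+.1%}, "
      f"naive 2x {2 * (index - 1):+.1%}")
```
]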

Just a quick note to anyone considering doing this: the relationship between country economic growth and equity returns is really weak.

So I doubt doing something like buying Chinese equities to hedge against increased meat consumption would really work. You'd need to find more exposed bets, like Chinese meat companies, though that will be more costly in terms of lost diversification. The top US AI tech company example seems better.

http://www.economist.com/blogs/buttonwood/2014/02/growth-and-markets

Meat price futures might be the way to go. However, at this point you're arguably getting into the realm of unethical investing.

I think that AI safety donors, and all those who seek to spread values with the intention of influencing the values guiding a singleton or technological transformation, should probably be positively correlated with U.S. markets.

If you want to correlate with near-term AI development, you would buy GOOG (which is ~1% DeepMind).

That's a good point, I'm editing the post to fix it. I had thought of it but during the writing process it slipped my mind.

Similar considerations would apply for animal advocacy. There are a number of companies and agricultural derivatives which could be correlated with meat production in various ways.

Oh yeah, why haven't I bought any Google/Alphabet stock yet? That's about to be fixed.

Which agriculture-related companies are you thinking of?

Actually it seems like the value of being covariant is pretty low under some basic assumptions, if I did my math right (see the edit at the top of the post), so it may not be worth buying into tech if you would otherwise have found a better investment.

No idea. Probably depends on exactly what kind of covariance you want. I'd suggest looking through agricultural stock indices.
