# Video abstract

## Disclaimer

This is not to be construed as financial advice.

## Acknowledgments

Thanks to Ben Todd, Kit Harris, Alexander Gordon Brown, and James Snowden for helpful discussion on this manuscript. Any mistakes are my own.

## Introduction to “Mission Hedging”

How should a foundation whose only mission is to prevent dangerous climate change invest its endowment? Surprisingly, in order to maximize expected utility, it might use 'mission hedging' investment principles and invest in fossil fuel stocks. That way it has more money to give to organisations that combat climate change precisely when more fossil fuels are burned, fossil fuel stocks rise, and climate change threatens to get particularly bad. When fewer fossil fuels are burned and fossil fuel stocks fall, the foundation has less money, but it also does not need the money as much. Under certain conditions, the mission hedging investment strategy maximizes expected utility.

So, more generally, if you want to do more good, should you invest in 'evil' corporations with negative externalities - corporations that cause harm, such as those that sell arms, tobacco, factory-farmed meat, or fossil fuels, or that advance potentially dangerous new technology? Here I argue that, perhaps counterintuitively, this might be the optimal investment strategy.

In this note, I extend a special case of an investment strategy for foundation endowments called 'mission hedging', originally introduced by Brigitte Roth Tran. The generalized strategy proposed here suggests that, under certain conditions, agents should invest resources in entities that cause the very activity they want to prevent. I focus only on the conceptual extension of mission hedging here; more technical detail, caveats, and mathematical formalism can be found in Roth Tran's original paper, all of which are also relevant to the more generalized theory.

Roth Tran [2] summarizes the basic mechanics of mission hedging for foundations as follows:

“'[M]ission hedging' [is] a new strategy in which the endowment 'doubles down,' skewing investments toward firms it opposes. If increased objectionable activities coincide with both higher firm returns and greater foundation revenue needs (with which to counteract the objectionable activities), then the foundation can align funding availability with need by increasing exposure to objectionable firms beyond that of a typical portfolio. Increasing investment in objectionable firms creates a hedge around the foundation's mission, maximizing expected utility.”

In other words, the basic idea is that, surprisingly, it might be optimal for an altruist whose mission is to combat global poverty, factory farming, mass unemployment, or existential risks from artificial intelligence, to invest in stocks of corporations that might make the problem worse and then give the profits to organisations that will counteract the problem.

For example, it might be a good strategy for donors, or even other entities such as governmental organisations concerned with global health, to invest in tobacco corporations and then give the profits to tobacco control lobbying efforts. Another example: animal welfare advocates might invest in companies engaged in factory farming, such as those in the meat-packing industry, and then use the profits to fund organisations that work to create lab-grown meat. A final example: it might be optimal for donors who think that emerging risks from artificial intelligence are a pressing cause to invest in the technology companies that might speed up such dangerous technologies, and then donate the profits to organisations that work on guarding against those risks.

The basic mechanics of the mission hedging investment strategy are as follows: when a bad industry does well, you as an investor can use your increased profits or dividends to counteract the trend. In other words, the more harm the industry does (e.g. the more it increases CO2 emissions, sells meat, or moves toward creating technology that destroys jobs or is otherwise dangerous), the more you profit, and you can then use these profits to prevent the industry from doing bad things (potentially through donations or grants). If the bad industry is not doing well, you will not profit as much, but you don't need to donate as much anyway, because there is less bad activity. Either way, the amount you can donate is better matched to the level of bad activity.

Here is a simple toy model that illustrates the climate change case.

First, consider the following figure (taken from [3]), which shows that there is uncertainty about the world's emission pathway and how high warming will be above pre-industrial levels:

Now consider how the oil price will develop correspondingly:

Now consider the following toy model (The spreadsheet can be found here):
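Since the spreadsheet itself is not reproduced here, a minimal two-state sketch can capture the mechanics. All numbers below are hypothetical illustrations, not the spreadsheet's values; the key feature is that each donated dollar is worth more in the high-emissions world, where fossil fuel stocks also do well:

```python
# Two equally likely emission pathways; all numbers are hypothetical.
# In the high-emissions world, fossil fuel stocks rally and each donated
# dollar buys more utility because the problem is worse.
endowment = 100.0

scenarios = {
    # name: (probability, fossil return, market return, utility per dollar donated)
    "high_emissions": (0.5, 0.25, 0.07, 2.0),
    "low_emissions":  (0.5, -0.15, 0.07, 1.0),
}

def expected_utility(fossil_weight):
    """Expected utility of donating the whole endowment after one period."""
    total = 0.0
    for p, r_fossil, r_market, u_per_dollar in scenarios.values():
        r = fossil_weight * r_fossil + (1 - fossil_weight) * r_market
        total += p * endowment * (1 + r) * u_per_dollar
    return total

# The fossil portfolio's expected return (5%) is below the market's (7%),
# yet it can still yield higher expected utility, because money arrives
# in the state of the world where it is needed most.
print(expected_utility(0.0), expected_utility(1.0))
```

With these toy numbers, the fully hedged portfolio gives higher expected utility than the market portfolio despite its lower expected financial return, which is the core of the argument.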

## Limitations

Interestingly, mission hedging decidedly skews the portfolio away from diversification and thus does not maximize risk-adjusted financial returns. Most endowments try to maximize financial returns, but with mission hedging one moves away from an optimally diversified (passively invested) global market portfolio [29], and so will likely sacrifice financial returns. For a 'cause-neutral' agent, who doesn't have a mission, favourite cause, or mandate, it might be best not to sacrifice the flexibility of switching causes, and instead to invest purely so as to maximize financial returns. This is similar to the strategy of building up career capital in the face of uncertainty about which cause is most important.

Also, because mission hedging is somewhat counterintuitive, and people might be repulsed by an investor being seen to profit from 'evil' corporations, there may be reputational risks that need to be factored in. If this is a decisive factor, it might be better to buy stocks that are merely correlated with the bad activity - such as buying stock in the pharmaceutical industry rather than the tobacco industry. One could also buy derivatives that merely track the stock price, which would be de facto stocks for all intents and purposes, though the investor would not 'own' part of the company.

However, there might be a way of capturing some upsides of mission hedging without investing in what might be seen as morally reprehensible companies. Say your foundation has two focus areas rather than one (which in itself might be suboptimal [30]): area 1 is animal welfare and area 2 is global poverty. Your prior intuition is that these areas are equally important, so you assign a 50-50 split of your annual disbursements to the two cause areas. Now, you could invest your entire endowment in stocks in the developing countries where your foundation supports, say, a cash-transfer program. We will assume that corporations in those countries doing well causes poverty reduction, and that they are not seen as morally reprehensible. Depending on how the developing-world stock portfolio performs, you then direct excess profits (over the standard stock market return) to the other area (here, animal welfare). If the developing-world stocks do well, you might not need to spend as much on poverty reduction, and you have more funds for the more neglected animal welfare area. If the stocks don't do as well and there are no excess profits, you stick closer to the original 50-50 split.
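The excess-profit rule for the two-area foundation can be sketched as a simple function. The specific shift rule and the 7% benchmark below are hypothetical choices for illustration, not prescriptions from the original:

```python
def disbursement_split(portfolio_return, benchmark_return=0.07, base_split=(0.5, 0.5)):
    """
    Split the annual grant budget between poverty (area 1) and animal
    welfare (area 2). Excess returns of the developing-world portfolio
    over the benchmark shift budget toward animal welfare; otherwise
    the original 50-50 split is kept. (Illustrative rule only.)
    """
    poverty, animals = base_split
    excess = portfolio_return - benchmark_return
    if excess > 0:
        # Hypothetical rule: shift budget share one-for-one with excess return,
        # capped so the poverty share never goes negative.
        shift = min(excess, poverty)
        poverty -= shift
        animals += shift
    return poverty, animals

print(disbursement_split(0.15))  # strong returns: more to animal welfare
print(disbursement_split(0.03))  # weak returns: stick to 50-50
```

The point of the sketch is that disbursements react mechanically to portfolio performance, mirroring the hedging logic without holding any 'evil' assets.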

## Other quick thoughts

• The process of hedging might also help to clarify one's mission itself. If there were a lot of resistance to the IMF skewing its portfolio towards technology companies, even though it states that its mission is to keep unemployment low and the hedge is accepted to work, then maybe the IMF is not really pursuing its mission or mandate. Mission hedging can clarify the mission of an organisation or agent, because one 'puts one's money where one's mouth is'.
• Catastrophe (CAT) bonds are now a $29 billion market, providing coverage against hurricanes, earthquakes, and pandemics [31]. There are new developments in this area of disaster insurance [32], such as the creation of over-the-counter catastrophe swaps [33]. There is also ongoing research on cyber insurance and catastrophes in cyberspace [34], and on liability for future robotics technology [35].
• “Betterment Investing just added a no-cost automatic donation feature. Using their existing tax-optimized system, they allow you to donate your most appreciated shares directly to any of their many connected charities. This gives you the maximum tax deduction right now, while reducing your taxes further when you withdraw from your account later in life” [36]
• There are some artificial intelligence ETFs [37] that one could look into to hedge against risks from emerging technologies.

## References

[1] email: h@ea.do.
[2] "Divest, Disregard, or Double Down? by Brigitte Roth Tran :: SSRN." 13 Apr. 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2952257. Accessed 2 Jun. 2017.
[3] "Betting on negative emissions | Nature Climate Change." 21 Sep. 2014, https://www.nature.com/articles/nclimate2392. Accessed 20 Feb. 2018.
[4] "What is the average annual return for the S&P 500? | Investopedia." https://www.investopedia.com/ask/answers/042415/what-average-annual-return-sp-500.asp. Accessed 20 Feb. 2018.
[5] "Should the Open Philanthropy Project be Recommending More/Larger ..." 2015, http://www.openphilanthropy.org/blog/should-open-philanthropy-project-be-recommending-morelarger-grants. Accessed 19 Sep. 2016.
[6] "What Do We Know about AI Timelines? | Open Philanthropy Project." 2015, http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines. Accessed 19 Sep. 2016.
[7] "How Investors Can (and Can't) Create Social Value | Stanford Social ...." 8 Dec. 2016, https://ssir.org/up_for_debate/article/how_investors_can_and_cant_create_social_value. Accessed 2 Jun. 2017.
[8] "Investment managers back greater transparency of clinical trials | The ...." 23 Jul. 2015, http://www.bmj.com/content/351/bmj.h4002. Accessed 5 Jun. 2017.
[9] "Selecting investments based on covariance with the value of charities ...." 4 Feb. 2017, http://effective-altruism.com/ea/16u/selecting_investments_based_on_covariance_with/. Accessed 17 Feb. 2018.
[11] "Taxes on Meat Could Join Carbon and Sugar to Help Limit Emissions ...." 11 Dec. 2017, http://www.fairr.org/news-item/taxes-meat-join-carbon-sugar-help-limit-emissions/. Accessed 17 Feb. 2018.
[12] "Strategic considerations about different speeds of AI takeoff - Future of ...." 12 Aug. 2014, https://www.fhi.ox.ac.uk/strategic-considerations-about-different-speeds-of-ai-takeoff/. Accessed 8 Oct. 2017.
[13] "Can You Short the Apocalypse? - Marginal REVOLUTION." 12 Aug. 2017, http://marginalrevolution.com/marginalrevolution/2017/08/can-short-apocalypse.html. Accessed 8 Oct. 2017.
[14] "IJFS | Free Full-Text | A Study of Perfect Hedges - MDPI." 14 Nov. 2017, http://www.mdpi.com/2227-7072/5/4/28. Accessed 17 Feb. 2018.
[15] "Ending factory farming as soon as possible - 80,000 Hours." 27 Sep. 2017, https://80000hours.org/2017/09/lewis-bollard-end-factory-farming/. Accessed 17 Feb. 2018.
[16] "Investment Funds Worth Trillions Are Dropping Fossil Fuel Stocks ...." 12 Dec. 2016, https://www.nytimes.com/2016/12/12/science/investment-funds-worth-trillions-are-dropping-fossil-fuel-stocks.html. Accessed 5 Jun. 2017.
[17] "Update on Cause Prioritization at Open Philanthropy | Open ...." 26 Jan. 2018, https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy. Accessed 17 Feb. 2018.
[19] "International Monetary Fund - Wikipedia." https://en.wikipedia.org/wiki/International_Monetary_Fund. Accessed 8 Oct. 2017.
[20] "The IMF at a Glance." http://www.imf.org/en/About/Factsheets/IMF-at-a-Glance. Accessed 8 Oct. 2017.
[21] "Clean Technology Fund | Climate Investment Funds." https://www.climateinvestmentfunds.org/fund/clean-technology-fund. Accessed 8 Oct. 2017.
[22] "Just how bad is being a CEO in big tobacco? - 80,000 Hours." 21 Jan. 2016, https://80000hours.org/2016/01/just-how-bad-is-being-a-ceo-in-big-tobacco/. Accessed 17 Feb. 2018.
[23] "The difference between true and tangible impact - 80,000 Hours." https://80000hours.org/articles/true-vs-tangible-impact/. Accessed 5 Jun. 2017.
[24] "Why the long-term future of humanity matters more than anything else ...." https://80000hours.org/articles/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/. Accessed 8 Oct. 2017.
[25] "Peter Thiel Has Been Hedging His Bet On Donald Trump - BuzzFeed." 7 Aug. 2017, https://www.buzzfeed.com/ryanmac/peter-thiel-and-donald-trump. Accessed 20 Aug. 2017.
[26] "Do Investors Put Too Much Stock in the U.S.? | Michael Dickens." 26 Mar. 2017, http://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/. Accessed 8 Oct. 2017.
[27] "A two-step hybrid investment strategy for pension funds - ScienceDirect." https://www.sciencedirect.com/science/article/pii/S1062940816301887. Accessed 18 Feb. 2018.
[28] "Macro- and micro-dimensions of supervision of large pension ... - IOPS." http://www.iopsweb.org/WP-30-Macro-Micro-Dimensions-Supervision-LPFs.pdf. Accessed 17 Feb. 2018.
[29] "Meet the Global Market Portfolio -- The 'Optimal Portfolio For ... - Forbes." 30 Jul. 2014, https://www.forbes.com/sites/phildemuth/2014/07/30/meet-the-global-market-portfolio-the-optimal-portfolio-for-the-average-investor/. Accessed 17 Feb. 2018.
[30] James Snowden, "Does risk aversion give us a good reason to diversify our charitable portfolio?" ceppa.wp.st-andrews.ac.uk/files/2016/04/snowden_eac.ppt
[31] "Pandemic bonds, a new idea - Fighting disease with finance." 27 Jul. 2017, https://www.economist.com/news/finance-and-economics/21725589-world-bank-creates-new-form-finance-pandemic-bonds-new-idea. Accessed 17 Feb. 2018.
[32] "Economic instruments - IIASA PURE." http://pure.iiasa.ac.at/13904/1/Chapter4-ENHANCE.pdf. Accessed 18 Feb. 2018.
[33] "Loading Pricing of Catastrophe Bonds and Other Long-Dated ...." 31 Oct. 2016, https://arxiv.org/abs/1610.09875. Accessed 18 Feb. 2018.
[34] "Cyber Insurance - Springer Link." 27 Jun. 2017, https://link.springer.com/10.1007/978-3-319-06091-0_25-1. Accessed 18 Feb. 2018.
[36] "How to Give Money (and Get Happiness) More Easily." 4 Dec. 2017, http://www.mrmoneymustache.com/2017/12/04/how-to-give-money-and-get-happiness-more-easily/comment-page-3/. Accessed 17 Feb. 2018.
[37] "Top 15 Artificial Intelligence ETFs - ETF ...." 18 Feb. 2018, http://etfdb.com/themes/artificial-intelligence-etfs/. Accessed 18 Feb. 2018.

## Comments

I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here. The model assumes:

• AI companies return 20% annually over the next 10 years in a short-timelines world, but less than the global market portfolio in a long-timelines world,
• AI companies have equal or lower expected returns than the global market portfolio (otherwise we're just making a bet on AI),
• money is 10x more useful in a short-timelines world than in a long-timelines world,
• logarithmic utility.

In the model, the extra utility from the AI portfolio is equivalent to an extra 2% annual return. My guess is that this is less than the extra returns one might expect if one believes the market doesn't price in short AI timelines sufficiently, but it makes the case for investing in an AI portfolio more robust. Caveat: I did this quickly. I haven't thought very carefully about the choice of parameters, haven't done sensitivity analyses, etc.
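A rough, self-contained sketch of this kind of calculation follows. The parameters are illustrative stand-ins, not the spreadsheet's actual values; the AI portfolio here has the same expected return as the market (8% in short-timelines worlds, 2% otherwise, vs. a flat 5%), money is worth 10x more in the short-timelines world, and utility is logarithmic:

```python
import math

# Hypothetical parameters in the spirit of the comment (not its exact values):
p_short, mult, years = 0.5, 10.0, 10
r_mkt = 0.05                          # global market portfolio, either world
r_ai = {"short": 0.08, "long": 0.02}  # same expected return as the market

def expected_utility(r_short, r_long):
    # Log utility over terminal wealth, weighted 10x in the short-timelines world.
    return (p_short * mult * years * math.log(1 + r_short)
            + (1 - p_short) * years * math.log(1 + r_long))

eu_ai = expected_utility(r_ai["short"], r_ai["long"])
eu_mkt = expected_utility(r_mkt, r_mkt)

# Annual return premium on the market portfolio that would match
# the AI portfolio's expected utility.
avg_weight = p_short * mult + (1 - p_short)
extra = math.exp(eu_ai / (avg_weight * years)) - 1 - r_mkt
print(f"utility-equivalent extra annual return: {extra:.1%}")
```

With these toy numbers, the utility-equivalent premium comes out in the low single digits per year, the same ballpark as the ~2% the commenter reports, though the exact figure depends entirely on the assumed parameters.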
As an extension to this model, I wrote a solver that finds the optimal allocation between the AI portfolio and the global market portfolio. I don't think Google Sheets has a solver, so I wrote it in LibreOffice. (Link to download.) I don't know if the spreadsheet will work in Excel, but if you don't have LibreOffice, it's free to download. I don't see any way to save the solver parameters that I set, so you have to re-create the solver manually. Here's how to do it in LibreOffice:

1. Go to "Tools" -> "Solver..."
2. Click "Options" and change Solver Engine to "LibreOffice Swarm Non-Linear Solver"
3. Set "Target cell" to D32 (the green-colored cell)
4. Set "By changing cells" to E7 (the blue-colored cell)
5. Set two limiting conditions: E7 >= 0 and E7 <= 1
6. Click "Solve"

Given the parameters I set, the optimal allocation is 91.8% to the global market portfolio and 8.2% to the AI portfolio. The parameters were fairly arbitrary, and it's easy to get allocations higher or lower than this.

As of yesterday, my position on mission hedging was that it was probably crowded out by other investments with better characteristics[1], and therefore not worth doing. But I didn't have any good justification for this; it was just my intuition. After messing around with the spreadsheet in the parent comment, I am inclined to believe that the optimal altruistic portfolio contains at least a little bit of mission hedging. Some credences off the top of my head:

• 70% chance that the optimal portfolio contains some mission hedging
• 50% chance that the optimal portfolio allocates at least 10% to mission hedging
• 20% chance that the optimal portfolio allocates 100% to mission hedging

[1] See here for more on what investments I think have good characteristics.
More precisely, my intuition was that the global market portfolio (GMP) + mission hedging was probably a better investment than pure GMP, but that a more sophisticated portfolio including GMP plus long/short value and momentum had good enough expected return/risk to outweigh the benefits of mission hedging.

EDIT: I should add that I think it's less likely that AI mission hedging is worth it on the margin, given that (at least in my anecdotal experience) EAs already tend to overweight AI-related companies. But the overweight is mostly incidental: my impression is that EAs tend to overweight tech companies in general, not just AI companies. So a strategic mission hedger might want to focus on companies that are likely to benefit from AI but that don't look like traditional tech companies. As a basic example, I'd probably favor Nvidia over Google or Tesla. Nvidia is still a tech company, so maybe it's not an ideal example, but it's not as popular as Google/Tesla.

Very cool - thanks for doing this. I agree that EA-related resources are skewed towards the US tech sector (see Ben Todd's recent post) and that should definitely be taken into account.

Thanks for making this model extension! I believe the most important downside to a mission hedging portfolio is that it's poorly diversified, and thus experiences much more volatility than the global market portfolio. More volatility reduces the geometric return due to volatility drag. Example case:

• Stocks follow geometric Brownian motion.
• The AI portfolio has the same arithmetic mean return as the global market portfolio.
• Market standard deviation is 15%; AI portfolio standard deviation is 30%.
• Market geometric mean return is 5%.

In geometric Brownian motion, arithmetic return = geometric return + stdev^2 / 2. Therefore, the geometric mean return of the AI portfolio is 5% + 15%^2/2 - 30%^2/2 = 1.6%.
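The volatility-drag arithmetic in that example case can be checked in a few lines, using the same figures (15% and 30% standard deviations, 5% market geometric return):

```python
# Volatility drag under geometric Brownian motion:
# arithmetic mean ≈ geometric mean + stdev^2 / 2
sigma_mkt, sigma_ai = 0.15, 0.30
geo_mkt = 0.05

# Both portfolios share the same arithmetic mean return,
# so the higher-volatility portfolio loses more to drag.
arith = geo_mkt + sigma_mkt**2 / 2
geo_ai = arith - sigma_ai**2 / 2

print(geo_ai)  # ≈ 0.016, i.e. the 1.6% figure from the comment
```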
If we still assume a 20% return to AI stocks in the short-timelines scenario, that gives a 1.3% return in the long-timelines scenario, and the annual return attributable to mission hedging is -1.1%. (I'm only about 60% confident that I set up those calculations correctly. When to use arithmetic vs. geometric returns can be confusing.)

Of course, you could also tweak the model to make mission hedging look better. For instance, it's plausible that in the short-timelines world money is 100x more valuable instead of 10x, in which case mission hedging is equivalent to a 24% higher return even with my more pessimistic assumption for the AI portfolio's return.

Yeah, in my model I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.

I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)

To be clear, my model is exactly the same as your model; I just changed one of the parameters, the AI portfolio's overall expected return, from 4.7% to 1.3%. It's not intuitively obvious to me whether, given the 1.3%-return assumption, the optimal portfolio contains more AI than the global market portfolio. I know how I'd write a program to find the answer, but it's complicated enough that I don't want to do it right now.

(The way you'd do it is to model the correlation between the AI portfolio and the market, and set your assumptions such that the optimal value-neutral portfolio (given the two investments of "AI stocks" and "all other stocks") equals the global market portfolio.
Then write a utility function that assigns more utility to money in the short-timelines world, and maximize that function where the independent variable is the % allocation to each portfolio. You can do this with Python's scipy.optimize, or any other similar library.)

EDIT: I wrote a spreadsheet to do this, see this comment.

I can't follow this either, but a study cited in Radical Markets suggests that a randomly chosen portfolio of as few as fifty stocks achieves 90% of the diversification benefits available from full diversification across the entire market. Given that FAANG's market cap alone is already $3 trillion and accounts for almost 10% of the U.S. stock market's total market capitalization of $31 trillion, and you could diversify further than this, wouldn't you get quite a lot of the diversification benefits?

50 randomly chosen stocks are much better diversified than 50 stocks that are specifically selected for having a high correlation to a particular outcome (e.g., AI development). This paper provides some more in-depth explanation of what I was talking about with the math. It's fairly technical, but it doesn't use any math beyond high-school algebra/statistics.

The key point I was making is that, if markets are efficient, then you shouldn't expect a 5% (or even 4.7%) geometric mean return from the AI portfolio. Instead, you should expect more like 1.3%. I might have messed up some of the details, but I'm confident that the geometric return for an undiversified portfolio in an efficient market is meaningfully lower than the global market return. This is not to say that mission hedging is a bad idea, just that this is an important fact to take into account.

Very interesting - thanks for elaborating!
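The optimization described in this thread can be sketched directly with scipy.optimize. All parameters below are hypothetical: the AI portfolio earns a modest arithmetic premium only in the short-timelines world (and the market's arithmetic return otherwise), volatility drag follows the geometric-Brownian-motion approximation used above, and money carries a 10x utility weight in the short-timelines world:

```python
import math
from scipy.optimize import minimize_scalar

# Hypothetical parameters, not the commenter's spreadsheet values.
p_short, mult = 0.5, 10.0        # P(short timelines); utility weight on that world
years = 10
sigma_ai, sigma_mkt, rho = 0.30, 0.15, 0.7
arith_mkt = 0.05 + sigma_mkt**2 / 2             # market arithmetic return (5% geometric)
arith_ai = {"short": 0.09, "long": arith_mkt}   # AI premium only if timelines are short

def geometric_return(w, world):
    """Geometric return of a blend of w AI / (1-w) market, with volatility drag."""
    var = ((w * sigma_ai) ** 2 + ((1 - w) * sigma_mkt) ** 2
           + 2 * w * (1 - w) * rho * sigma_ai * sigma_mkt)
    return w * arith_ai[world] + (1 - w) * arith_mkt - var / 2

def neg_expected_utility(w):
    eu = 0.0
    for world, p in (("short", p_short), ("long", 1 - p_short)):
        weight = mult if world == "short" else 1.0
        # Log utility over terminal wealth after compounding for `years`.
        eu += p * weight * years * math.log(1 + geometric_return(w, world))
    return -eu

res = minimize_scalar(neg_expected_utility, bounds=(0, 1), method="bounded")
print(f"optimal AI allocation: {res.x:.1%}")
```

With these toy parameters the optimum is an interior allocation of roughly a third to the AI portfolio; as the commenters note, the answer is highly sensitive to the assumed premium, volatilities, and utility weights.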
@Jonas: I think your model is interesting, but if we define transformative AI like Open Phil does ("AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution"), and you invest for mission hedging in a diversified portfolio of AI companies (and perhaps other inputs such as hardware), then it seems conceivable to me to have much higher returns - perhaps 100x, as with crypto? This is the basic idea of mission hedging for AI, and in line with my prior, and I think this difference in returns might be why I find the result of your model, that mission hedging wouldn't have a bigger effect, surprising.

This piece provides an IMO pretty strong defense of divestment: https://sideways-view.com/2019/05/25/analyzing-divestment/ Do you agree, and if so, to what extent does it change the conclusions of this article?

I think I might not really understand Paul's argument completely, but I really value his opinion generally, so I think more people should look into this (he also said he meant to write a new version soon). Having said that, I still think divestment is not worth it for EAs, and I still believe mission hedging is a better strategy, for four reasons:

1. I don't think divestment changes the stock price substantially, as I argue in my Impact Investing report with John Halstead. Peter Gerdes makes very similar points in the comments of Paul's blog.

2. Even if share prices did not return to equilibrium, the impact on the level of production is likely to be low. Public equity is generally traded in secondary markets, meaning that changes in stock price affect shareholders rather than the usable capital the company has at hand. From the impact investing report: "Movements in the share price do not always affect the capital available to companies. In industries that generate a lot of cash and so do not need to raise capital, changes in share prices will not have much impact. In addition, public equity is generally traded in secondary markets, meaning that changes in stock price affect shareholders, rather than the usable capital the company has at hand. [38]" Also see the Modigliani–Miller theorem.

3. The post is too focused on marginal cost-effectiveness/benefit-cost ratios, but we should look at cost-effectiveness at scale, and at total benefit minus cost. Paul highlights that the first unit of divestment is free, and I agree with this: the first unit of divestment theoretically has infinite marginal cost-effectiveness (an infinite benefit-cost ratio). But after the first dollar of divestment the costs increase, and beyond the individual investor the cost increases further. He also acknowledges that "It's hard to implement", "Deciding how to divest is quite challenging", and "These funds would only be appealing to an unusual audience". Thus, even if Paul is right and the marginal cost-effectiveness is high, the overall benefit will likely be small, because this doesn't seem to scale and has substantial associated set-up costs.

4. Even if one could scale at a favourable benefit-cost ratio, I don't think it would be particularly effective. Consider that Paul argues one can "sacrifice <$1 to reduce EvilCo's output by >$5". I have an intuition that this is unrealistically high on the face of it, because it would allow all kinds of uncompetitive practices, like shorting competitors, to dominate the market; but, for simplicity, let's go with it and say it's 10:1. Global fossil fuel industry revenue is ~$10trn. To reduce output to zero with divestment, you'd need to spend ~$1trn. The fossil fuel industry emits ~35 Gt CO2e per year. To reduce its output by 1 ton, its revenue needs to be reduced by ~$285 ($10trn / 35bn). So that's ~$285 of revenue reduction per tCO2e averted. Even with 10:1 leverage, this would be quite expensive (but I am not 100% confident in this calculation).
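The back-of-the-envelope numbers in point 4 can be checked directly, using the figures stated there (~$10trn revenue, ~35 Gt CO2e/year, an assumed 10:1 leverage ratio):

```python
revenue = 10e12     # global fossil fuel industry revenue, ~$10trn/year
emissions = 35e9    # ~35 Gt CO2e emitted per year
leverage = 10       # assumed $10 of revenue reduction per $1 divested

revenue_per_tonne = revenue / emissions          # ~$286 of revenue per tCO2e
cost_per_tonne = revenue_per_tonne / leverage    # ~$29 divested per tCO2e averted
print(round(revenue_per_tonne), round(cost_per_tonne))
```

So on the post's own numbers, even the leveraged figure is roughly $29 per tonne of CO2e averted, which supports the "quite expensive" conclusion relative to other climate interventions.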
As you say, Jonas: "Any thoughts on whether divestment is generally worth the opportunity cost if the returns had been donated to the most effective charities? (E.g., reductions in carbon emissions from divestment vs. donations to clean energy R&D.)" I think there are more effective ways to solve climate change with $1 trillion (e.g. through clean energy R&D). I don't think trying to raise corporations' costs of capital is an effective way to reduce their externalities.

Generally, multi-objective optimization is harder than single-objective optimization. Divestment tries to optimize for both social impact and financial impact. However, I think it's easier to optimize for financial impact (which is relatively straightforward), and then use the profits to optimize for social impact through donations (which we also have a relatively good grasp on). With mission hedging you still have the option to donate, and it can also be combined with using leverage (for instance, this 3x leveraged AI FAANG+ ETF for people wanting to hedge against long-term risks from AI - this is of course not financial advice).

Cool, thanks for the reply! Strong-upvoted. Regarding #1 and #2, so far I found Paul's line of argument more convincing, but I have only followed the discussion superficially. Points #3 and #4, though, seem pretty strong and convincing to me, so I'm inclined to conclude that mission hedging is indeed the stronger consideration here.

For AI risk, #3 might not apply, because there's no divestment movement for AI risk and the tech giants are large compared to our philanthropic investments. For #4, using the same 10:1 ratio, we'd be faced with the choice between sacrificing around $10 billion to reduce the largest tech giants' output by 1%, or doing something else with the money. We can probably do better than reducing output by 1%, especially because it's pretty unclear whether that reduction would even be net positive or negative.

> Even with 10:1 leverage, this would be quite expensive

My understanding is that 10x leverage would also mean ~10x cost (from forgone diversification).

The point I would most like to emphasise is that it's often unclear what will happen to an asset when cost-effectiveness goes up. If you're confident it'll go up at that time, you buy/overweight it. If you're confident it'll go down at that time, you sell/underweight it. If it could go either way, this approach is weaker. Most discussion I have seen on this topic assumes that the 'evil' asset can be expected to move in the same direction as cost-effectiveness. Finding something with reliable covariance in either direction seems like it might be most of the challenge.
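One rough way to operationalize this point: estimate the covariance between an asset's returns and some proxy for how much philanthropic money is needed, and let its sign decide the direction of the tilt. The series below are invented purely for illustration; in practice, finding a series with a *reliable* covariance is exactly the hard part:

```python
import numpy as np

# Hypothetical annual series: asset returns and a proxy for how much
# philanthropic money is needed that year (higher = money more valuable).
asset_returns = np.array([0.12, -0.05, 0.20, 0.03, -0.10, 0.15])
need_index    = np.array([1.3,   0.9,  1.5,  1.0,   0.8,  1.2])

cov = np.cov(asset_returns, need_index)[0, 1]
if cov > 0:
    print("overweight: asset pays off when money is most needed")
elif cov < 0:
    print("underweight: asset pays off when money is least needed")
else:
    print("no hedging value either way")
```

A covariance estimated from a short or unstable history can easily flip sign out of sample, which is the failure mode the comment is warning about.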

For more detail on that, here are some notes on the most valuable insights and most significant errors of the original Federal Reserve paper.

My guess is that the best suggestions from this post appear in 'Applications outside of investment'. These do not fall prey to the abovementioned issues since the mechanisms are different to the investment case, directly exploiting the extra power one gains from being on the inside of an organisation rather than correlation/covariance.

(I might as well note that this comment represents my views on the matter, and no-one else's, while the main post represents the views of others, and not necessarily mine.)

I thought this was super interesting, thanks Hauke. The question that sprang to mind: in what circumstances would it do more good to engage in mission hedging vs trying to maximise expected returns?

Great question!

In theory, mission hedging can always beat maximizing expected returns in terms of maximizing expected utility.

In practice, I think the main considerations here are a) whether you can find a suitable hedge in practice and b) whether you are sufficiently certain that a cause is important, because you give up the flexibility of being cause neutral and tie yourself financially to a particular cause. You can remain cause neutral by trying to maximize expected financial returns.

To me, the two most promising applications are, first, AI safety, where people are often quite certain that it is one of the most pressing causes (as per maxipok or preventing s-risks), and where investing in AI companies seems plausible to me (but note Kit Harris's objections in the comment section here). Second, mission hedging with one's career might be good, by joining the military, the secret service, or an AI company for the reasons outlined above; historically, people in the military have sometimes had outsized impact.

Okay, but can you explain why it would beat maximising expected returns?

Here's the thought: maximising expected returns gives me more money than mission hedging. That extra money is a pro tanto reason to think the former is better.

However, mission hedging seems to have advantages, such as in shareholder activism: if evil company X makes money, I will have more cash to undermine it, and other shareholders will know this, thus suppressing X's value. This is a pro tanto reason to favour mission hedging.

How should I think about weighing these pro tanto reasons against one another to establish the best strategy? Apologies if I've missed something here, thinking this way is new to me.

Thanks for asking for clarification - I'm sorry, I think I've been unclear about the mechanism. It's not really about shareholder activism; that's just an added benefit.

I've now added a few graphs and a spreadsheet as a toy model of why mission hedging beats a strategy that maximizes financial returns in the introduction. Can you take a look and see whether it's more clear now? Or maybe I'm missing your question.
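To make the mechanism concrete, here is a minimal sketch with made-up numbers: two equally likely states of the world, two portfolios with identical expected financial returns, and donations that are worth more per dollar when the problem gets worse. All the numbers are hypothetical, chosen only to show the direction of the effect:

```python
# Two equally likely states of the world.
states = {"good": 0.5, "bad": 0.5}

# The key assumption: donations go further when the problem gets worse.
value_per_dollar = {"good": 1.0, "bad": 2.0}

# Both portfolios have the same expected financial return (110),
# but the hedge pays off more in exactly the state where money matters more.
max_return_portfolio = {"good": 110.0, "bad": 110.0}  # uncorrelated with the problem
hedged_portfolio = {"good": 100.0, "bad": 120.0}      # tracks the bad activity

def expected_utility(portfolio):
    return sum(p * portfolio[s] * value_per_dollar[s] for s, p in states.items())

eu_max_return = expected_utility(max_return_portfolio)  # 0.5*110*1 + 0.5*110*2 = 165.0
eu_hedged = expected_utility(hedged_portfolio)          # 0.5*100*1 + 0.5*120*2 = 170.0
```

Despite equal expected returns, the hedged portfolio yields higher expected utility, because it concentrates money in the state where each dollar does more good.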

It seems to me that for mission hedging to work, there needs to be a strong positive relationship between production and stock price. That is, when (say) a fossil fuel company produces more oil, its stock price goes up. That might happen, but it might not. Several things need to happen:

1. The increased quantity is not offset by a decrease in price
2. The increased revenue translates into higher profit (this might not happen if, e.g., increased revenue induces more competition, or induces increased costs for the oil company)
3. Higher profit translates into a higher stock price

Step 3 seems very likely to happen in the long run, but steps 1 and 2 seem more uncertain to me, and I don't have a great understanding of the relevant economics. Do we have good reason to expect increased production to translate into stock returns? Or do we at least understand the circumstances under which it will or will not translate?

(Alternatively, we could look at the relationship between, say, oil production and the price of oil futures. This is a simpler relationship, but I'd guess the two numbers are basically uncorrelated. They will move together if demand changes, and will move oppositely if supply changes.)

Even though medical device sales and the stock prices of the corporations selling them should often covary with the state of global health, they are merely correlated with it. One can imagine cases where medical devices sell poorly and yet global health is poor, or cases where medical devices sell very well but global health is good. This is why it's better to invest in corporations that directly cause the bad activity - in this case, tobacco.

I'm not sure about this section. You just say that the covariance isn't perfect, therefore we must invest directly in the relevant industry. Sure, the imperfect covariance is a reason to expect it to be better to invest in the relevant industry, but that doesn't mean that hedging in covariant industries is no good at all. You're talking about investments in the relevant industry as if they were a necessary condition for hedging to make sense, when in reality you've just given a presumption that it's better than doing it in other ways. There is usually a chance that your investments will fail when the rest of the industry does well anyway, even if you invest directly in the target sector. And investing in a separate, covariant industry has a major benefit: not only is it not a reputational risk, it also isn't a directly harmful activity if the EMH is false.

Also, there is another necessary condition which is that the marginal value of donations must increase when the problem gets worse. Companies hedge because they have a greater need for money when their stocks fail. They don't really maximize expected profits, they are somewhat risk averse. Now do our donations go further when the problems in the world get worse? I'm inclined to say "yes", but I think it's a very small effect. I wrote about this and tested some estimate numbers with a very rudimentary calculation, and it seemed to me that the benefit was arguably too small to worry about, and it doesn't seem sufficient to outweigh the risk of robustly improving the performance of bad industries in the strategy you outline here.

http://effective-altruism.com/ea/16u/selecting_investments_based_on_covariance_with/

https://imgur.com/9of14il

Also, I think that generalizing to selecting investments based on covariance with charity value is the right framework to use here, instead of just looking at this sort of hedging.

These are excellent comments, thank you!

Regarding your first point on investing in industries that covary with vs. are causally related to the problem: you're right that mission hedging can also work when there is just covariance. The main benefit of investing in companies that cause the bad activity is that the covariance will be tighter than for companies that merely correlate with it, and we can know this ex ante. I take your point that investing in companies that cause the bad activity is potentially more of a reputational risk (for some cases, for some people), but I don't think the reputational-risk argument applies much to small investors or to some investments, such as investing in technology companies to hedge against AI risks. Your last point I find most interesting: if the efficient market hypothesis (EMH) doesn't hold, then it's better to invest in things that merely have a high covariance. I have a strong intuition that the EMH holds for publicly traded stocks, especially for small investors who don't make a big fuss about their investing. Overall, I feel drawn to selecting investments that cause the bad activity, due to higher certainty about high future covariance.

Now do our donations go further when the problems in the world get worse? I'm inclined to say "yes", but I think it's a very small effect.

Yes, this crucially depends on whether there are increasing returns to scale to charitable interventions, which is another assumption. However, for me the assumption has intuitive appeal. I can imagine the effect size being substantial in some cases (I now give a toy model at the beginning of the text). Think about public-good-type interventions, where cost-effectiveness scales pretty linearly with the size of the problem (how many beings are affected).

I took a look at your calculation and I'm sorry to say that I don't quite understand it. However, based on the numbers I see, I think plugging different parameters into the model would also not be entirely unreasonable. But yes, I agree it might be interesting to have more empirical validation of this.

I think our disagreement might boil down to different intuitions about whether the EMH holds on the stock market and whether there are increasing returns to scale, i.e. whether a charity becomes more effective as the problem gets bigger. I think this is somewhat likely in some cases (but I'm not completely confident in it). So I'm still pretty convinced, to the point where I would advise people to seriously, though carefully, consider using mission hedging over your covariance approach.

Also, I think that generalizing to selecting investments based on covariance with charity value is the right framework to use here, instead of just looking at this sort of hedging.

I think investing in corporations that cause the bad activity is theoretically equivalent to this, and in fact is based on finding a (distal) cause of charity effectiveness. However, as mentioned above, it assumes increasing returns to scale.

But I just thought about finding a more proximal cause of charity effectiveness that can still be directly implemented on the stock market, and maybe this is shorting the endowment of your favorite charity. Will MacAskill made a similar comment on your post, saying it might be worth considering shorting FB if OpenPhil is still heavily reliant on it. Maybe your favourite charity has an endowment and doesn't itself hedge against risks (because its portfolio is not optimally diversified).

Okay, let me explain the spreadsheet better. I was comparing investments in an irrelevant market to investments in a relevant market. Each investment has a 1/3 chance of growing 0%, a 1/3 chance of growing 5%, and a 1/3 chance of growing 10%. The top spreadsheet shows the value of your money if you invest it in an irrelevant market; the bottom spreadsheet shows the value of your money if you invest it in a relevant market. For instance, if you invest in a relevant market and the relevant market doesn't change, then you get 0% on your investments and a 0% change in donation value, so your donations are worth 100% of what they were worth before. If you invest in an irrelevant market and both markets go up by 5%, then your donations would be worth 1 × 1.05 × 1.05 = 110.25% if the covariance were 100%, but here the covariance is 40%, so the calculation is 1 × 1.05 × 1.02 = 107.10%. Both numbers on the right are the average of the nine grid squares to the left, so they are the expected value of your investment after one year.

It's really simplistic math, but I just tried to get a sense of the scale of the effect; it turned out to be small.
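For anyone who would rather read code than a spreadsheet, the same calculation can be written in a few lines of Python, using the 1/3–1/3–1/3 outcomes and the 40% covariance from the description above:

```python
from itertools import product

returns = [0.00, 0.05, 0.10]  # each outcome has probability 1/3
covariance = 0.4              # how strongly donation value tracks the relevant market

# Irrelevant market: your return r and the relevant market's move m are
# independent, so we average donation value over all nine (r, m) combinations.
irrelevant = [(1 + r) * (1 + covariance * m) for r, m in product(returns, returns)]
ev_irrelevant = sum(irrelevant) / len(irrelevant)

# Relevant market: your return and the donation-value multiplier move together,
# so only the three diagonal outcomes occur.
relevant = [(1 + m) * (1 + covariance * m) for m in returns]
ev_relevant = sum(relevant) / len(relevant)
```

The relevant-market strategy comes out ahead, but only by a fraction of a percentage point of expected donation value per year, which is the "small effect" referred to above.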

This is really fascinating. I think this is largely right and an interesting intellectual puzzle on top of it. Two comments:

1) I would think mission hedging is not as well suited to AI safety as it is to climate or animal activism, because AI safety is not directly focused on opposing an industry. As has been noted elsewhere on this forum, AI safety advocates are not focused on slowing down AI development and in many cases tend to think it might be helpful, in which case mission hedging is counterproductive. I could also imagine a scenario in which AI problems weigh down a company's stock. Maybe a big scandal occurs around AI that foreshadows future problems with AGI and also embarrasses AI developers.

2) As kbog notes, it doesn't seem clear that the growth in an industry one opposes means the marginal dollar is more effective. Even though an industry's growth increases the scale of a problem, it might lower its tractability or neglectedness by a greater amount.

AI safety advocates are not focused on slowing down AI development and in many cases tend to think it might be helpful, in which case mission hedging is counterproductive.

I know people working on AI safety who would want to slow down progress in AI if it were tractable. I actually think it might be possible to slow down AI by reducing taxes on labor and increasing migration - see https://www.cgdev.org/blog/why-are-geniuses-destroying-jobs-uganda - which I think is a better idea than robot taxes: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/ . Somebody should write about this.

But this is not really about speed: mission hedging might work in this case because the stock price of an AI company reflects the probability that the company will develop better artificial intelligence earlier than the competition, not when that will happen.

I could also imagine a scenario in which AI problems weigh down a company's stock. Maybe a big scandal occurs around AI that foreshadows future problems with AGI and also embarrasses AI developers.

Note that it is important to diversify within mission hedging, so one company's stock being weighed down doesn't matter much. I feel that any scandal not really related to the AI industry's actual ability to produce better AI faster will likely have a very limited effect on stock prices. I'm reminded here of fatalities with self-driving cars, which have not rocked investors' confidence in them. But even if a scandal does drag prices down, then that just means self-driving cars are not as great as we thought they would be (presumably some fatalities are already 'priced in').

But yes, your point is valid in that 'you can't short the apocalypse', as I mention above. Overall, I actually think, all things considered, mission hedging might work best for AI risk scenarios.

Other than that, I love the article. Thanks for the giant disclaimer ;)

This seems like a really powerful tool to have in one's cognitive toolbox when considering allocating EA resources. I have two questions on evaluating concrete opportunities.

First, if I can state what I take to be the idea (if I have this wrong, then probably both of my questions are based on a misunderstanding): we can move resources from lower-need situations (i.e. the problem continues as default or improves) to higher-need situations (i.e. the problem gets worse) by investing in instruments that will do well if the problem is getting worse (which, because of efficient markets, is balanced by the expectation that they will do poorly if the problem is improving).

You mention the possibility that for some causes, the dynamics of the cause's progression might mean hedging fails (like fast-takeoff AI). Is another possible issue that some problems might unlock more funding as they get worse? For example, dramatic results of climate change might increase funding to fight it sufficiently early. While the possibility of this happening could be taken to undermine the seriousness of the cause ("we will sort it out when it gets bad enough"), if different worsenings unlock different amounts of funding for the same badness, the cause could still be important. So should we focus on instruments that get more valuable when the problem gets worse AND the funding doesn't get better?

My other question was on retirement saving. When pursuing earning-to-give, doesn't it make more sense just to pursue straight expected value? If you think situations in which you don't have a job will be particularly bad, you should just be hedging those situations anyway. Couldn't you just try and make the most expected money, possibly storing some for later high-value interventions that become available?

Thank you for sharing this research! I will consider it when making investment decisions.

I replied about this before to one of your posts. Maybe I did not explain it well. In short, two guys wrote a paper about how combinations of heat and humidity above certain levels could kill everyone who lacks access to air conditioning in large regions of the world, or at least force them to evacuate their countries. Do you have any opinion on the priority level of understanding this compared with other climate causes?

Sorry, I missed your previous comment. I'm not an expert on climate change, and this is not necessarily the best place to discuss why this is neglected within effective altruism - I would recommend posting your question to the Effective Altruism Hangout Facebook group and asking for an answer there. The reason you get downvoted is that you post on many different threads even when it's not really related to the discussion. I would recommend reading this before posting, though: https://80000hours.org/2016/05/how-can-we-buy-more-insurance-against-extreme-climate-change/

However, here are my two cents: