I recently released two working papers that seek to integrate EA principles into financial economics. Both papers are academic versions of ideas we've been working on in much more practical contexts at the Total Portfolio Project. I hope to share more of our work with the wider community in the future. Right now I would most appreciate feedback either from other economists or community polymaths/brave souls who are curious enough to open these papers.

The first takes a cutting edge but otherwise standard financial model for the economy, adds in altruistic preferences, and then examines the optimal investment policy for different types of altruistic investors (e.g. patient philanthropists, urgent philanthropists). The second sets up the framework I use in the first paper. This includes highlighting the importance of the counterfactual and making the case for probabilistic reasoning about impact. 

Some reasons you might actually get something out of reading these papers:

  • The model naturally leads to a version of the SSN cause prioritization framework that can be applied at both micro and macro levels. It includes different definitions of 'neglectedness' depending on the context.
  • Mission-correlated premia, a generalization of the idea of 'mission hedging',  arise in both the model and the framework.
  • I also discuss model uncertainty, moral uncertainty and how these considerations might be integrated into investment models.

While I don't think asking for feedback on academic working papers is the norm on the forum, I wanted to do this because both papers present EA ideas and I cite several EA authors. So I'd be particularly interested in feedback that helps me improve how I represent the community and its ideas. 

Comments

Hi, 

I've only skimmed your theoretical model a little, so apologies if you already addressed this. But I think any good theoretical model of altruistic investment that's trying to advise altruistic decision-making has to account for other altruistic actors of comparable or larger size (assuming your altruistic preferences aren't extremely different from theirs).

MichaelDickens has talked about this a bunch. I don't know if he has written a handy primer, but this might be best.

The basic idea is that under most reasonable utility functions, you want to reduce the correlation of your assets with those of other altruistic actors. This is because there's likely diminishing marginal utility to the total amount of funds that altruists control, so you want to be able to donate during times other altruists cannot. (Sanity check: the first million dollars that goes to a GiveWell-like thing has more marginal impact than the next million dollars, since that first million is what allows us to set up GiveWell in the first place.)

This is not a problem for selfish actors: while it's true that public goods are selfishly beneficial as well, the effect of your neighbors getting rich on your personal utility isn't very high (and might well be negative).

The toy model I usually run with (note this is a mental model; I neither study academic finance nor do I spend basically any time modeling my own investments) is to assume that my altruistic investments are aiming to optimize E(log(EA wealth)). Notably, this means having approximately linear preferences over the altruistic portion of my own money, but it suggests much more (relative) conservatism in investment for Good Ventures and FTX, or other would-be decabillionaires in EA. In addition, as previously noted, it would be good if I invested in things that aren't heavily correlated with FB stock or crypto, assuming that I don't have strong EMH-beating beliefs. (A toy numerical sketch of this objective follows after the examples below.)

If you have very unusual moral preferences or empirical beliefs about the world, the specific parameters I chose are less applicable, but the general approach still holds. Some examples:

  1. If you believe global health is the most important (and arguably the only important) cause area, then you would want to reduce your correlation not only with Open Phil but also with the Gates Foundation and other reasonably effective global health foundations.
    1. For all but the very largest donors, I expect you want to maximize your expected returns.
  2. If you care a lot about SFE-based (suffering-focused ethics) worldviews, then you want to reduce your correlation with other SFE-based donors.
    1. Since I believe there are far fewer donors with SFE-based views, even moderately rich (by philanthropic standards, say several million in assets) donors may wish to have some risk aversion in their investments, in a way that isn't true for the above two examples.
  3. If you are the only sizable donor in a cause area and you're pessimistic about convincing other donors to join in within the next <10 years, you don't need to coordinate with other donors. I suspect this should mean pretty heavy risk aversion in your investments in practice (roughly on par with selfish investors), if you believe there are substantially diminishing returns to money in your cause area (which seems likely to me).
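Here's a minimal numerical sketch of the E(log(EA wealth)) toy model above. Everything in it (the asset names, correlation, wealth figures, and the assumption that the large donor is fully exposed to one asset) is made up purely for illustration, not drawn from the papers under discussion:

```python
# Illustrative sketch only: a small donor splits their money between two risky
# assets to maximize E[log(total EA wealth)], while a much larger donor is
# assumed to be fully exposed to asset A. All numbers are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Two hypothetical assets with identical means and volatilities, imperfectly correlated.
mean = [1.07, 1.07]
cov = [[0.04, 0.01],
       [0.01, 0.04]]
gross_returns = rng.multivariate_normal(mean, cov, size=n_draws)  # columns: asset A, asset B

big_donor_wealth = 10_000.0  # assumed fully invested in asset A
my_wealth = 10.0

def expected_log_ea_wealth(weight_a: float) -> float:
    """E[log(total altruistic wealth)] if I put weight_a of my money in asset A."""
    my_return = weight_a * gross_returns[:, 0] + (1 - weight_a) * gross_returns[:, 1]
    total = big_donor_wealth * gross_returns[:, 0] + my_wealth * my_return
    return float(np.log(total).mean())

weights = np.linspace(0.0, 1.0, 21)
best_weight = max(weights, key=expected_log_ea_wealth)
print(f"Small donor's best weight on asset A: {best_weight:.2f}")
# With these assumptions the small donor tilts entirely away from the large donor's
# asset and is close to risk-neutral with respect to their own (small) wealth:
# what matters is covariance with the much larger pooled altruistic portfolio.
```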
jh

Great points. You've inspired me to look at ways to put more emphasis on these ideas in the discussion section that I haven't yet added to the model paper.

One of the underlying goals of these papers is to develop a stream of the finance literature that examines and builds on ideas from the EA community. I believe these ideas are valid and interesting enough to attract top research talent, and that there is plenty of additional work needed to flesh them out, so having more researchers working on these topics would be valuable.

In this context I see these papers as setting out a framework for further work. I could see a paper following from specifying E(log(EA wealth)) as the utility function and then examining the implications, exactly as you've outlined above. It would surely need something more to make it worth a whole academic paper (e.g. examining alternative utility functions, examining relevant empirical data, estimating the size of the altruistic benefits gained by optimizing for this utility versus following a naive/selfish portfolio strategy). I would be excited to see papers like this get written, and excited to collaborate on making that happen.

Directly on the points in your comment, I'm curious to what extent you've seen these ideas being action-guiding in practice. E.g. are you aware of smaller donors setting up DAFs and taking much more risk than they otherwise would? (Tax considerations, by the way, are another important thing I've abstracted away in my current papers.) Are you aware of people specifically taking steps to reduce their correlations with other donors?

As in my papers, I'd split the implications you discussed above into buckets of risk aversion and mission correlation. If a smaller donor's utility depends on log(EA wealth), then of course it makes sense for them to have very little risk aversion with regard to their own wealth. But they should then be averse to correlation with major donors (the mission-correlation effect). It seems reasonable to me to think of the major-donor portfolio as approximately a globally diversified portfolio, i.e. the market (perhaps with some overweights on FB, MSFT, BRK). Just intuitively, I'd say this means their aversion to market risk should be about equal to what it would be if they were selfish, which means we're back to square one of just defaulting to a normal portfolio. That is, the (mission-correlated) risk the altruist sees in most investments will be about equal to the (selfish) market risk most investors see, so their optimal portfolios will be about the same.
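To make the "back to square one" step a bit more explicit, here is a rough back-of-the-envelope expansion. It is only a sketch, and it leans on assumptions I'm adding here: the pooled major-donor wealth B holds the market with return r_m, the small donor's wealth m is marginal relative to it, their portfolio return is r_p, and the shared objective is E[log(total altruistic wealth)]:

```latex
% Sketch only: B = pooled major-donor wealth (assumed to hold the market, return r_m),
% m = small donor's wealth (m << B), r_p = small donor's portfolio return.
E\left[\log\big(B(1+r_m) + m(1+r_p)\big)\right]
  \;\approx\; E\left[\log\big(B(1+r_m)\big)\right] \;+\; \frac{m}{B}\, E\!\left[\frac{1+r_p}{1+r_m}\right]

E\!\left[\frac{1+r_p}{1+r_m}\right]
  \;\approx\; \text{const} \;+\; E[r_p]\,(1 - E[r_m]) \;-\; \operatorname{Cov}(r_p,\, r_m)
```

To this order of approximation, the only portfolio-dependent risk term for the marginal donor is -Cov(r_p, r_m): the same covariance-with-the-market penalty a selfish mean-variance investor faces, which is why the two optimal portfolios end up looking so similar.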

Of course, mission-correlated risk aversion could have different implications from normal risk aversion if it is easier to change the covariance of your portfolio with major donors than it is to change the variance of your portfolio. But that's my point in the above paragraph: the driver of both the covariance and the variance is going to be your market risk exposure. And quickly reviewing Michael's post, I'd say all the ideas he mentions are also plausibly good ideas for mainstream investors looking to optimize their portfolios. If this is the case, then we need something more to imply altruists should deviate from following standard, even if advanced, financial advice (e.g. Hauke's example of crypto could be such a special case, as could other investments that are correlated with government policy shifts or with technological shifts that change the altruistic opportunities that are available).

Interested to hear your thoughts on this. I would be particularly excited to see more EA research on a) the expected trajectories of effectiveness over time in different cause areas, and b) the amount of diminishing returns to money in each area. On a), I'd note Founders Pledge has done some good, recent work on this with their Investing to Give and Climate research. Would be great to see more. On b), I think there is tons of thinking out there on this and I feel like it would be great if someone organized this collective wisdom to establish what the current consensus views are (e.g. like 'global health has low diminishing returns', 'AI safety research has relatively high diminishing returns right now').

Great comment. Related: part of me is glad that EA is so exposed to crypto, because governments are the biggest altruistic actors, and if crypto's valuation is largely due to its potential to reduce taxation, it might be a good mission hedge.

@Kevin Kuruc at the University of Oklahoma might have something to add :-) 

Sidenote: I'm sure an engineering undergrad isn't your target audience, but all the big words (pecuniary, idiosyncrasy, premia, etc.) are a bit hard to parse :O 

jh

Thanks Madhav. I'm a big fan of using simple language most of the time. In this case all of those words are pretty normal for my target audience.

Thanks for flagging :) I am going to take a look!

In the section on robustness in the second paper, does the constant parameter for the degree of bias, ψ, have a natural interpretation, and is there a good way to set its value?

jh

Great question and thanks for looking into this section. I've now added a bit on this to the next version of the paper I'll release.

Watson and Holmes investigate this issue :)

They propose several heuristic methods that use simple rules or visualization to rule out values where the robust distribution becomes 'degenerate' (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.
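To make that concrete, here is a hedged sketch of one such degeneracy check. I'm assuming a particular (and possibly oversimplified) form for the robust distribution, namely exponential tilting of Monte Carlo scenarios by exp(-utility/ψ), which may not match the exact construction in Watson and Holmes or in the paper; the point is just the diagnostic, which flags ψ values where the tilted weights collapse onto a handful of worst-case draws:

```python
# Hedged sketch: assume the robust (least favourable) distribution re-weights
# scenarios in proportion to exp(-utility / psi), so smaller psi means a more
# pessimistic tilt. Watch the effective sample size of the tilted weights and
# rule out psi values where it collapses.
import numpy as np

rng = np.random.default_rng(1)
utility = rng.normal(0.0, 1.0, size=10_000)  # placeholder utilities of sampled scenarios

def effective_sample_size(psi: float) -> float:
    shifted = -(utility - utility.min()) / psi   # subtract the min for numerical stability
    weights = np.exp(shifted)
    weights /= weights.sum()
    return 1.0 / np.sum(weights ** 2)

for psi in [10.0, 3.0, 1.0, 0.3, 0.1]:
    print(f"psi = {psi:>4}: effective sample size ≈ {effective_sample_size(psi):8.0f} of 10,000")
# A simple heuristic is to restrict attention to psi values where the effective
# sample size stays above some reasonable fraction of the draws, i.e. where the
# 'robust' distribution has not degenerated into a few extreme scenarios.
```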

It seems to me that what look like different techniques, such as cross-validation, are ultimately trying to solve the same problem. If so, I wonder if the machine learning community has already found better techniques for setting ψ?

I'm thinking in practice, it might just be better to explicitly consider different distributions, and do a sensitivity analysis for the expected value. You could maximize the minimum expected value over the alternative distributions (although maybe there are better alternatives?). This is especially helpful if there are specific parameters you are very concerned about and you can be honest with yourself about what you think a reasonable person could believe about their values, e.g. you can justify ranges for them.
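As a concrete (and entirely invented, purely illustrative) example of what this max-min sensitivity analysis could look like:

```python
# Illustrative sketch only: write down a few distributions a reasonable person
# could believe, compute each action's expected value under each of them, and
# pick the action with the best worst case. All distributions, actions, and
# payoffs below are invented assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_draws = 50_000

# Hypothetical candidate distributions for some uncertain quantity (e.g. per-dollar impact).
candidate_distributions = {
    "baseline":    rng.lognormal(mean=0.0,  sigma=0.5, size=n_draws),
    "pessimistic": rng.lognormal(mean=-0.5, sigma=0.5, size=n_draws),
    "heavy_tail":  rng.lognormal(mean=-0.3, sigma=1.5, size=n_draws),
}

# Hypothetical actions whose payoff depends on the uncertain quantity.
actions = {
    "safe_option":  lambda x: np.minimum(x, 1.5),  # capped upside
    "risky_option": lambda x: 0.5 * x ** 1.3,      # convex payoff, rewards big draws
}

worst_case_ev = {}
for action_name, payoff in actions.items():
    evs = {name: float(payoff(draws).mean()) for name, draws in candidate_distributions.items()}
    worst_case_ev[action_name] = min(evs.values())
    print(action_name, {name: round(ev, 2) for name, ev in evs.items()})

print("max-min choice:", max(worst_case_ev, key=worst_case_ev.get))
# The ranking can flip depending on which distributions you include, which is the
# point of the exercise: it forces you to be explicit about the range of views
# you think a reasonable person could hold.
```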

Maybe it's good to do both, though, since considering other specific distributions could capture most of your known potential biases in cases where you suspect it could be large (and you don't think your risk of bias is as high in other ways than the ones covered), while the approach you describe can capture further unknown potential biases.

Cross-validation could help set ψ when your data follows relatively predictable trends (and is close to random otherwise), but it could be a problem for issues where there's little precedent, like transformative AI/AGI.

jh

Yes, Watson and Holmes definitely discuss other approaches which are more like explicitly considering alternative distributions. And I agree that the approach I've described has the benefit that it can uncover potentially unknown biases and work for quite complicated models/simulations, which is why I've found it useful to apply to my portfolio optimization with altruism paper (and to some practical work), along with common-sense exploration of alternative models/distributions.
