I recently released two working papers that seek to integrate EA principles into financial economics. Both papers are academic versions of ideas we've been working on in much more practical contexts at the Total Portfolio Project. I hope to share more of our work with the wider community in the future. Right now I would most appreciate feedback, either from other economists or from community polymaths and brave souls curious enough to open these papers.

The first takes a cutting-edge but otherwise standard financial model of the economy, adds altruistic preferences, and then examines the optimal investment policy for different types of altruistic investors (e.g. patient philanthropists, urgent philanthropists). The second sets up the framework I use in the first paper, including highlighting the importance of the counterfactual and making the case for probabilistic reasoning about impact.

Some reasons you might actually get something out of reading these papers:

  • The model naturally leads to a version of the SSN cause prioritization framework that can be applied at both micro and macro levels. It includes different definitions of 'neglectedness' depending on the context.
  • Mission-correlated premia, a generalization of the idea of 'mission hedging', arise in both the model and the framework.
  • I also discuss model uncertainty, moral uncertainty and how these considerations might be integrated into investment models.

While I don't think asking for feedback on academic working papers is the norm on the forum, I wanted to do this because both papers present EA ideas and I cite several EA authors. So I'd be particularly interested in feedback that helps me improve how I represent the community and its ideas. 

Comments (14)



Hi, 

I've only skimmed your theoretical model a little, so apologies if you already addressed this. But I think any good theoretical model of altruistic investment that's trying to advise altruistic decision-making has to account for other altruistic actors of comparable or larger size (assuming your altruistic preferences aren't extremely different from theirs).

MichaelDickens has talked about this a bunch. I don't know if he has written a handy primer, but this might be the best place to start.

The basic idea is that under most reasonable utility functions, you want to reduce the correlation of your assets with those of other altruistic actors. This is because there's likely diminishing marginal utility to the total amount of funds that altruists control, so you want to be able to donate at times when other altruists cannot. (Sanity check: the first million dollars that goes to a GiveWell-like thing has more marginal impact than the next million dollars, since it allows us to set up GiveWell in the first place.)

This is not a problem for selfish actors: while it is true that public goods are selfishly beneficial as well, the effect of your neighbors getting richer on your personal utility isn't very large (and might well be negative).

The toy model I usually run with (note this is a mental model; I neither study academic finance nor spend basically any time modeling my own investments) is to assume that my altruistic investments aim to optimize E(log(EA wealth)). Notably, this means having approximately linear preferences over the altruistic portion of my own money, but it suggests much more (relative) conservatism in investment for Good Ventures and FTX, or other would-be decabillionaires in EA. In addition, as previously noted, it would be good if I invested in things that aren't heavily correlated with FB stock or crypto, assuming I don't have strong EMH-beating beliefs.
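For concreteness, here's a minimal numerical sketch of that intuition (all figures, including the size of the pooled altruistic wealth and the payoff structure of the bet, are made-up illustrations rather than estimates): a risky bet with positive arithmetic but negative log expectation raises E(log(EA wealth)) when a small donor takes it, but lowers it when a would-be decabillionaire stakes their whole portfolio on it.

```python
# Minimal sketch of the E[log(EA wealth)] toy model with made-up numbers.
import numpy as np

TOTAL = 30e9          # assumed total altruistic wealth (illustrative)
SMALL = 1e6           # a small donor's assets (illustrative)
outcomes = np.array([2.0, 0.4])   # gross returns: +100% or -60%, equal odds
probs = np.array([0.5, 0.5])      # positive arithmetic mean, negative log mean

def expected_log_total(bettor_wealth):
    """E[log(total altruistic wealth)] if `bettor_wealth` is staked on the bet
    and the rest of the pool is held safely."""
    rest = TOTAL - bettor_wealth
    return np.sum(probs * np.log(rest + bettor_wealth * outcomes))

baseline = np.log(TOTAL)
print("small donor takes the bet:", expected_log_total(SMALL) - baseline)  # > 0
print("mega donor takes the bet: ", expected_log_total(TOTAL) - baseline)  # < 0
```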

(If you have very unusual moral preferences or empirical beliefs about the world, the specific parameters I chose are less applicable, but the general logic still holds. Some examples:

  1. If you believe global health is the most important (and arguably the only important) cause area, then you would want to reduce your correlation not only with Open Phil but also with the Gates Foundation and other reasonably effective global health foundations.
    1. For all but the very largest donors, I expect you want to maximize your expected returns.
  2. If you care a lot about suffering-focused ethics (SFE) based worldviews, then you want to reduce your correlation with other SFE-based donors.
    1. As I believe there are far fewer donors with SFE views, even moderately rich (by philanthropic standards, say several million dollars in assets) donors may wish to have some risk aversion in their investments, in a way that isn't true for the above two examples.
  3. If you are the only sizable donor in a cause area and you're pessimistic about convincing other donors to join in the next <10 years, you don't need to coordinate with other donors. I suspect this should mean pretty heavy risk aversion in your investments in practice (roughly on par with selfish investors), if you believe there are substantially diminishing returns to money in your cause area (which seems likely to me).
jh

Great points. You've inspired me to look at ways to put more emphasis on these ideas in the discussion section that I haven't yet added to the model paper.

One of my underlying goals with these papers is to develop a stream of the finance literature that further develops and examines ideas from the EA community. I believe these ideas are valid and interesting enough to attract top research talent, and that there is plenty of additional work to do to flesh them out, so having more researchers working on these topics would be valuable.

In this context I see these papers as setting out a framework for further work. I could see a paper following from specifying E(log(EA wealth)) as the utility function and then examining the implications, exactly as you've outlined above. It would surely need something more to make it worth a whole academic paper (e.g. examining alternative utility functions, examining relevant empirical data, estimating the size of the altruistic benefits gained by optimizing for this utility versus following a naive/selfish portfolio strategy). I would be excited to see papers like this get written, and excited to collaborate on making that happen.

Directly on the points in your comment, I'm curious to what extent you've seen these ideas being action-guiding in practice. For example, are you aware of smaller donors setting up DAFs and taking much more risk than they otherwise would? (Tax considerations, by the way, are another important thing I've abstracted away in my current papers.) Are you aware of people specifically taking steps to reduce their correlations with other donors?

As in my papers, I'd split the implications you discussed above into buckets of risk aversion and mission correlation. If a smaller donor's utility depends on log(EA wealth), then of course it makes sense for them to have very little risk aversion with regard to their own wealth. But they should then have the mission-correlation effect of being averse to correlations with major donors. It seems reasonable to me to think of the major donor portfolio as approximately a globally diversified portfolio, i.e. the market (perhaps with some overweights on FB, MSFT, BRK). Just intuitively, I'd say this means their aversion to market risk should be about equal to what it would be if they were selfish, which means we're back to square one of just defaulting to a normal portfolio. That is, the (mission-correlated) risk the altruist sees in most investments will be about equal to the (selfish) market risk most investors see, so their optimal portfolios will be about the same.
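To make that 'back to square one' intuition concrete, here is a hedged mean-variance sketch (not taken from either paper; the covariance matrix, weights, risk-aversion parameters, and the CAPM-style return assumption are all illustrative). When the major donor portfolio is approximately the market and expected returns are proportional to market betas, penalizing covariance with the major donors only rescales the portfolio; its composition matches the selfish mean-variance portfolio.

```python
# Sketch: altruist with a mission-correlation penalty vs. a selfish investor.
import numpy as np

# Illustrative covariance matrix and market (= assumed major donor) weights.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_market = np.array([0.5, 0.3, 0.2])

delta = 2.5                      # assumed market price of risk
mu = delta * Sigma @ w_market    # CAPM-style expected excess returns

gamma, lam = 3.0, 1.0            # risk aversion, mission-correlation aversion

# Selfish mean-variance:  max_w  w'mu - (gamma/2) w'Sigma w
w_selfish = np.linalg.solve(gamma * Sigma, mu)

# Altruist with a penalty on covariance with the major donor portfolio:
#   max_w  w'mu - (gamma/2) w'Sigma w - lam * w'Sigma w_market
w_altruist = np.linalg.solve(gamma * Sigma, mu - lam * Sigma @ w_market)

# Same relative composition, different overall scale.
print(w_selfish / w_selfish.sum())
print(w_altruist / w_altruist.sum())
```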

Of course, mission-correlated risk aversion could have different implications from normal risk aversion if it is easier to change the covariance of your portfolio with major donors than it is to change the variance of your portfolio. But that's my point in the above paragraph: the driver of both these variances is going to be your market risk exposure. And quickly reviewing Michael's post, I'd say all the ideas he mentions are also plausibly good ideas for mainstream investors looking to optimize their portfolios. If this is the case, then we need something more to imply altruists should deviate from following standard, even if advanced, financial advice (e.g. Hauke's example of crypto could be such a special case, as could other investments that are correlated with government policy shifts, or with technological shifts that change the altruistic opportunities available).

Interested to hear your thoughts on this. I would be particularly excited to see more EA research on a) the expected trajectories of effectiveness over time in different cause areas, and b) the degree of diminishing returns to money in each area. On a), I'd note Founders Pledge has done some good, recent work on this with their Investing to Give and Climate research; it would be great to see more. On b), I think there is tons of thinking out there on this, and it would be great if someone organized this collective wisdom to establish the current consensus views (e.g. 'global health has low diminishing returns', 'AI safety research has relatively high diminishing returns right now').

Great comment. Related: part of me is glad that EA is so exposed to crypto, because governments are the biggest altruistic actors, and if crypto's valuation is largely due to its potential to reduce taxation, it might be a good mission hedge.

@Kevin Kuruc at the University of Oklahoma might have something to add :-) 

Sidenote: I'm sure an engineering undergrad isn't your target audience, but all the big words (pecuniary, idiosyncrasy, premia, etc.) are a bit hard to parse :O 

jh

Thanks Madhav. I'm a big fan of using simple language most of the time. In this case all of those words are pretty normal for my target audience.

Thanks for flagging :) I am going to take a look!

In the section on robustness in the second paper, does the constant parameter for the degree of bias, ψ, have a natural interpretation, and is there a good way to set its value?

jh

Great question and thanks for looking into this section. I've now added a bit on this to the next version of the paper I'll release.

Watson and Holmes investigate this issue :)

They propose several heuristic methods that use simple rules or visualization to rule out values where the robust distribution becomes 'degenerate' (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.
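To make the 'degenerate' case concrete, here is a rough sketch of one such check. It uses a KL-neighborhood-style exponential tilting of a baseline distribution by the utility, which is my reading of the kind of construction Watson and Holmes work with; the tilting parameter below plays a role analogous to ψ but may not match the paper's exact parameterization, and all inputs are simulated. The effective sample size of the tilted weights is one simple diagnostic: when it collapses, the robust distribution is piling its weight on a handful of scenarios.

```python
# Sketch: detect 'degenerate' robust distributions via effective sample size.
import numpy as np

rng = np.random.default_rng(0)
utility = rng.normal(size=1000)            # illustrative utility per scenario
baseline = np.full(1000, 1.0 / 1000)       # baseline scenario probabilities

def tilted_weights(psi):
    """Worst-case-style reweighting: tilt probability toward low-utility scenarios."""
    logw = np.log(baseline) - utility / psi
    logw -= logw.max()                     # numerical stabilization
    w = np.exp(logw)
    return w / w.sum()

def effective_sample_size(w):
    return 1.0 / np.sum(w ** 2)

for psi in [10.0, 1.0, 0.1, 0.01]:
    print(psi, effective_sample_size(tilted_weights(psi)))
# As psi shrinks, the ESS collapses toward 1: a candidate red flag for degeneracy.
```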

It seems to me that techniques which look quite different, like cross-validation, are ultimately trying to solve the same problem. If so, I wonder if the machine learning community has already found better techniques for setting ψ?

I'm thinking that in practice it might just be better to explicitly consider different distributions and do a sensitivity analysis for the expected value. You could maximize the minimum expected value over the alternative distributions (although maybe there are better alternatives?). This is especially helpful if there are specific parameters you are very concerned about and you can be honest with yourself about what a reasonable person could believe about their values, e.g. you can justify ranges for them.
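As a rough illustration of that maximin idea (with purely made-up impact numbers and candidate distributions), the sketch below evaluates each option's expected value under several distributions a reasonable person might hold and picks the option with the best worst case:

```python
# Sketch: maximin expected value over alternative scenario distributions.
import numpy as np

# Impact per dollar of two hypothetical interventions across three scenarios.
impact = np.array([[10.0, 4.0, 1.0],    # intervention A
                   [ 6.0, 5.0, 3.0]])   # intervention B

# Alternative scenario distributions you consider defensible.
candidate_dists = np.array([[0.5, 0.3, 0.2],
                            [0.2, 0.5, 0.3],
                            [0.1, 0.3, 0.6]])

evs = impact @ candidate_dists.T     # EV of each intervention under each distribution
worst_case = evs.min(axis=1)         # each intervention's minimum EV
print(evs)
print("maximin choice:", ["A", "B"][int(np.argmax(worst_case))])
```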

Maybe it's good to do both, though, since considering other specific distributions could capture most of your known potential biases in cases where you suspect they could be large (and you don't think your risk of bias is as high in ways other than the ones covered), while the approach you describe can capture further unknown potential biases.

Cross-validation could help set ψ when your data follows relatively predictable trends (and is close to random otherwise), but it could be a problem for issues where there's little precedent, like transformative AI/AGI.

jh

Yes, Watson and Holmes definitely discuss other approaches which are more like explicitly considering alternative distributions. And I agree that the approach I've described has the benefit that it can uncover potentially unknown biases and works for quite complicated models/simulations. That's why I've found it useful to apply to my portfolio optimization with altruism paper (and actually to some practical work), along with common-sense exploration of alternative models/distributions.
