Jonathan Harris, PhD | jonathan@total-portfolio.org

Total Portfolio Project's goal is to help altruistic investors prioritize the most impactful funding opportunities, whether that means grants, investing to give, or impact investments. Projects we've completed range from theoretical research (like in this post), to advising on high-impact investment deals, to strategic research on balancing give now versus later for new major donors.

Comments

Estimating the Philanthropic Discount Rate

This is a nice post that touches on many important topics. One little note for future reference: I think the logic in the section 'Extended Ramsey model with estimated discount rate' isn't quite right. To start, it looks like the inequality is missing a factor of 'b' on the left-hand side.

More importantly, the result here depends crucially on the context. The one used is log utility with initial wealth equal to 1. This leads to the large, negative values for small delta. It also makes cost-effectiveness become infinitely good as delta becomes small. All of this makes it much more difficult to think intuitively about the results. I think the more appropriate context is one with large initial wealth. The larger the initial wealth (and the larger the consumption each year), the less important delta becomes, relatively. For large initial wealth, it is probably correct to focus on improving 'b' (i.e. what the community does currently) over delta.

My point here is not to argue either way, but simply that the details of the model matter - it's not clear that delta has to be super important.
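To make the dependence on initial wealth concrete, here is a minimal sketch. It is not the model from the post, just the discounted log utility of a constant consumption stream under illustrative parameter values of my own:

```python
# Minimal sketch (not the model from the post): the discounted log utility of a
# constant consumption stream, to show what small values of delta do to the
# result depending on initial wealth. All parameter values are illustrative.
import numpy as np

def discounted_value(consumption, delta, years=1000):
    """Sum over t of exp(-delta * t) * log(consumption), for t = 0..years-1."""
    t = np.arange(years)
    return np.sum(np.exp(-delta * t)) * np.log(consumption)

for w0 in (1.0, 1e6):            # initial wealth; consumption assumed to be 2% of it
    c = 0.02 * w0
    for delta in (0.05, 0.005):
        print(f"w0={w0:>11,.0f}  delta={delta:<5}  value={discounted_value(c, delta):10.1f}")

# With w0 = 1, log(c) < 0, so the value is large and negative and diverges as
# delta -> 0, which is part of what makes the results hard to interpret intuitively.
# With large w0, log(c) > 0 and the same change in delta reads very differently.
```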

EA-Aligned Impact Investing: Mind Ease Case Study

I'm still not sure I understand your point(s). The customers' payments were accounted for as a negligible (negative) contribution to the net impact per customer.

To put it another way: think of the highly anxious customers. Each will get $100 in benefits from the App, plus 0.02 DALYs averted (for themselves) on top of this, with the additional DALYs discounted for the potential that they could use another App.

Say the App fee is $100. This means that to unlock the additional DALYs, the users as a group will pay $400 million over 8 years.

The investor puts in their $1 million to increase the chances that the customers have the option to spend the $400m. In return they expect a percentage of the $400m (after operating costs, other investors' shares, and the founders' shares). But they are also having a counterfactual effect on the chance that the customers have/use this option.
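For concreteness, here is the arithmetic implied by the example numbers above (they are illustrative figures from this comment, not outputs of the report):

```python
# Restating the hypothetical figures above (illustrative numbers from this
# comment, not figures from the underlying report), to make the scale explicit.
fee = 100                  # assumed fee paid per customer, USD
total_paid = 400e6         # what customers collectively pay over the 8 years, USD
dalys_per_customer = 0.02  # additional DALYs averted per highly anxious customer
investment = 1e6           # the investor's stake, USD

payments = total_paid / fee                    # 4,000,000 customer payments implied over 8 years
extra_dalys = payments * dalys_per_customer    # 80,000 DALYs, if every payer were a highly anxious user
print(f"{payments:,.0f} payments, {extra_dalys:,.0f} additional DALYs, "
      f"investor stake is {investment / total_paid:.2%} of customer spending")
```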

This is basically a scaled-up version of a simple story where the investor gives a girl called Alice a loan so she can get some therapy. The investor would still hope Alice repays them with interest. But they also believe that without their help to get started she would have been less likely to get help for herself. Should they have just paid for her therapy? Well, if she is a well-off, western iPhone user who comfortably buys lattes every day, then that's surely ineffective altruism. Unless she happens to be the investor's daughter or something, in which case it makes sense for other reasons.

I think the message of this post isn't that compatible with general claims like "investing is doing good, but donating is doing more good". The message of the post is that specific impact investments can pass a high effectiveness bar (i.e. $50/DALY). If the investor thinks most of their donation opportunities are around $50/DALY, then they should see Mind Ease as a nice way to add to their impact.

If their bar is $5/DALY (i.e. they see much more effective donation opportunities), then Mind Ease will be less attractive. It might not justify the cost of evaluating and monitoring it. But for EAs who are investment experts, the costs will be lower. So this is all less an exhortation for non-investor EAs to learn about investing, and more a way for investor EAs to add to their impact.

Overall, the point of the post is a meta-level argument that we can compare donation and investment funding opportunities in this way. But the results will vary from case to case.

EA-Aligned Impact Investing: Mind Ease Case Study

Thanks for this comment and question, Paul.

It's absolutely true that the customers' wallets are potentially worth considering. An early reviewer of our analysis made a similar point. In the end we are fairly confident this turns out not to be a key consideration. The key reason is that mental health is generally found to be a service for which people's willingness to pay is far below its actual value (to them). Especially for the likely paying customer markets, e.g. high-income-country iPhone users, the subscription costs were judged to be trivial compared to the changes in their mental health. This is why, if I remember correctly, this consideration didn't feature more prominently in Hauke's report (on the potential impacts on the customers). Since it didn't survive there, it also didn't make it into the investment report.

I'm not quite sure I understand the point about the customer donating to the BACO instead. That could definitely be a good thing. But it would mean an average customer with anxiety choosing to donate to a highly effective charity (presumably instead of buying the App). This seems unlikely. More importantly, it doesn't seem like the investor can influence it?...

In short, since the expected customers are reasonably well-off non-EAs, concerns about customers' wallets or donations didn't come into play.

Important ideas for prioritizing ambitious funding opportunities

Thanks Alex.

On Angel Investing, in case you haven't seen it, there is this case study. But there is much more to discuss.

On Technology Deployment, are there any links you can share as examples of what you have in mind?

EA-Aligned Impact Investing: Mind Ease Case Study

Hi Derek, hope you are doing well. Thank you for sharing your views on this analysis that you completed while you were at Rethink Priorities.

The difference between your estimates and Hauke's certainly made our work more interesting.

A few points that may be of general interest:

  • For both analysts we used 3 estimates: an 'optimistic guess', a 'best guess', and a 'pessimistic guess'.
  • For users from middle-income countries we doubled the impact estimates. Without reviewing our report/notes in detail, I don't recall the rationale for the specific value of this multiplier. The basic idea is that high-income countries are better-served, more competitive markets, so apps are more likely to find users with worse counterfactuals in middle-income countries.
  • The estimates were meant to be conditional on Mind Ease achieving some degree of success. We simply assumed the impact of failure scenarios is 0. Hauke's analysis seems to have made clearer use of this aspect: not only is Hauke's reading of the literature more optimistic, but he is also more optimistic about how much more effective a successful Mind Ease will be relative to the competition.
  • Indeed the values we used for Derek's analysis, for high income countries, were all less than 0.01. We simplified the 3 estimates, doing a weighted average across the two types of countries, into the single value of 0.01 for Derek's analysis after rounding up (I think the true number may be more like 0.006). The calculations in the post use rounded values so it is easier for a reader to follow. Nevertheless, the results are in line with our more detailed calculations in the original report.
  • Similar to this point of rounding, we simplified the explanation of the robustness tilt we applied. It wasn't just about Derek vs Hauke. It was also along the dimensions of the business analysis (e.g. success probabilities). We simplified the framing of the robustness tilt both here and in a 'Fermi Estimate' section of the original report because we believed that it is conceptually clearer to only talk about the one dimension.
  • What would I suggest to someone who would like to penalize the estimate more or less for all the uncertainty? Adjust the impact return.
  • How can you adjust the impact return in a consistent way? Of course, to make analyses like this useful you would want to do them in a consistent fashion. There isn't a gold standard for how to control the strength of the robustness tilts we used. But you can think of the tilt we applied (in the original report) as equivalent to being told a coin is fair (50/50) and then assuming it is biased to 80% heads (if heads is the side you don't want). This expresses how different our tilted probability distribution was from the distribution in the base model (the effect on the impact estimate was more severe: 1 - 0.02/(0.25/2 + 0.01/2) ≈ 85%). There is a way of assessing this "degree of coin-equivalent tilt" for any tilt of any model (see the sketch after this list). So if you felt another startup had the same level of uncertainty as Mind Ease, you could tilt your model of it until you get the same degree of tilt. This would give you some consistency and keep the tilts from being based purely on analyst intuition (though of course there is basically no way to avoid some bias). If a much better way to consistently manage these tilts were developed, we would happily use it.
  • Overall, this analysis is just one example of how one might deal with all the things that make such assessments difficult, including impact uncertainty, business uncertainty, and analyst disagreement. The key point is really the need to summarize all the uncertainty in a way that is useful to busy, non-technical decision makers who aren't going to look at the underlying distributions. We look forward to seeing how techniques in this regard evolve as more and more impact assessments are done and shared publicly.
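One natural way to make the "degree of coin-equivalent tilt" concrete, purely as an illustration and not necessarily the measure used in the original report, is to score a tilt by its relative entropy (KL divergence) from the base distribution and then find the biased coin whose divergence from a fair coin matches it. A minimal sketch, with made-up numbers:

```python
# Hypothetical illustration: measure a robustness tilt by its KL divergence from
# the base distribution, expressed as an equivalent bias applied to a fair coin.
# This is one possible formalization, not necessarily the one used in the report.
import numpy as np
from scipy.optimize import brentq

def kl_bernoulli(q, p):
    """KL divergence (in nats) from Bernoulli(p) to Bernoulli(q)."""
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def kl_discrete(q, p):
    """KL divergence between two discrete distributions given as arrays."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def coin_equivalent_tilt(base, tilted):
    """Find the heads-probability h > 0.5 such that tilting a fair coin to h
    has the same KL divergence as tilting `base` to `tilted`."""
    target = kl_discrete(tilted, base)
    return brentq(lambda h: kl_bernoulli(h, 0.5) - target, 0.5 + 1e-9, 1 - 1e-9)

# Reference point: assuming a fair coin is biased to 80% heads is a KL of ~0.19 nats.
print(round(kl_bernoulli(0.8, 0.5), 3))

# Example with made-up numbers: a base model over three scenarios vs. a pessimistic tilt.
base   = [0.25, 0.50, 0.25]
tilted = [0.10, 0.45, 0.45]
print(round(coin_equivalent_tilt(base, tilted), 3))  # ~0.745: like tilting a fair coin to about 75% heads
```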

EA-Aligned Impact Investing: Mind Ease Case Study

Just to add that in the analysis we only assumed Mind Ease has impact on 'subscribers'. This means paying users in high-income countries (and active/committed users in low/middle-income countries). We came across this pricing analysis while preparing our report. It has very little to do with impact, but it does a) highlight Brendon's point that Headspace/Calm are seen as meditation apps, and b) show that anxiety reduction looks to be among the highest willingness-to-pay / highest value-to-the-customer segments into which Headspace/Calm could expand (e.g. by relabeling their meditations as useful for anxiety). The pricing analysis doesn't even mention depression (which Mind Ease now addresses following the acquisition of Uplift), perhaps because they realize it is a more severe mental health condition.

EA-Aligned Impact Investing: Mind Ease Case Study

Just to add, for the record, that we released most of Hauke's work because it was a meta-analysis that we hope contributes to the public good. We haven't released either Hauke or Derek's analyses of Mind Ease's proprietary data. Though, of course, their estimates and conclusions based on their analyses are discussed at a high level in the case study.

EA-Aligned Impact Investing: Mind Ease Case Study

To add two points to Brendon's comment.

The 1,000,000 active users figure is cumulative over the 8 years. So, just for example, it would be sufficient for Mind Ease to attract 125,000 users a year. Still very non-trivial, but not quite as high a bar as 1,000,000 MAU.

We were happy with the 25% chance of success primarily because of the base rates Brendon mentioned. In addition, this can include the possibility that Mind Ease isn't commercially viable for reasons unconnected to its efficacy, in which case the IP could be spun out into a non-profit. We didn't put much weight on this, but it does seem like a possibility. I'm mentioning it mostly because it's an interesting consideration with impact investing that could be even more important in some cases.

X-Risk, Anthropics, & Peter Thiel's Investment Thesis

Thought provoking post, thanks Jackson.

You humbly note that creating an 'EA investment synthesis' is above your pay grade. I would add that synthesizing EA investment ideas into a coherent framework is a collective effort that is above any single person's pay grade. Also, I would love to see more people from higher pay grades, both in EA and outside the community, making serious contributions to this set of issues - for example, top finance or economics researchers or related professionals. Finally, I'd say that any EA with an altruistic strategy that relates to money (i.e. one that isn't purely about direct work) has a stake in these issues and could benefit from further research on some of the topics you highlighted. So there are a lot of things to discuss and a lot of reasons to keep the discussion going.

Seeking feedback on new EA-aligned economics paper

Yes, Watson and Holmes definitely discuss other approaches, which are more like explicitly considering alternative distributions. And I agree that the approach I've described does have the benefit that it can uncover potentially unknown biases and work for quite complicated models/simulations. Hence why I've found it useful to apply it to my portfolio optimization with altruism paper (and actually to some practical work), along with using common-sense exploration of alternative models/distributions.
