I wouldn't put the key point here down to 'units'. I would say the aggregate units GiveWell tends to use ('units of value' and lives saved) and those GIF uses (person-years of income-equivalent, "PYI") are very similar. I think any differences in terms of these units are going to be more about subjective differences in 'moral weights'. Other than moral weight differences, I'd expect the same analysis using GiveWell vs GIF units to deliver essentially the same results.
The point you're bringing up, and that Ken discusses as 'Apples vs oranges', is that the analysis...
Thank you for engaging with this discussion, Ken!
It's great to have these clarifications in your own words. As you highlight there are many important and tricky issues to grapple with here. I think we're all excited about the innovative work you're doing and excited to learn more as you're able to publish more information.
Actually, they are more of a grant fund than an impact investment fund. I've updated the post to clarify this. Thanks for bringing it up.
One might call them an 'investing for impact' fund - making whatever investments they think will generate the biggest long-term impact.
The reported projections aren't adjusted for counterfactuals (or additionality, contribution, funging, etc.). I wonder if the fact we're mostly talking about GIF grants vs GiveWell grants changes your worry at all?
For my part, I'd be excited to see more grant analyses (in addition to impac...
I'm torn on this post: while I agree with the overall spirit (that EAs can do better at cooperation and counterfactuals, and be more prosocial), I think the post makes some strong claims/assumptions that I disagree with. I find it problematic that these assumptions are stated as if they were facts.
First, EA may be better at "internal" cooperation than other groups, but cooperation is hard and internal EA cooperation is far from perfect.
Second, take the idea that correctly assessed counterfactual impact is hyperopic. No: hyperopic assessments are just a sign of...
Interesting thesis! Though, it's his doctoral thesis, not from one of his bachelor's degrees, right?
Yes, and is there a proof of this that someone has put together? Or at least a more formal justification?
A comment and then a question. One problem I've encountered in trying to explain ideas like this to a non-technical audience is that the standard rationales for 'why softmax' are either a) technical or b) unconvincing, or even condescending about its value as a decision-making approach. Indeed, the 'Agents as probabilistic programs' page you linked to introduces softmax as "People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice." The 'Softmax demy...
One justification might be that in an online setting where you have to learn which options are best from past observations, the naive "follow the leader" approach -- exactly maximizing your action based on whatever seems best so far -- is easily exploited by an adversary.
This problem resolves itself if you make actions more likely the better they've performed, but regularize a little to smooth things out. The most common regularizer is entropy, and then, as described on the "Softmax demystified" page, you basically end up recovering softmax (this is the well-known "multiplicative weight updates" algorithm).
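To make that concrete, here is a minimal sketch of the softmax/multiplicative-weights choice rule (the reward numbers are hypothetical, and `eta` is the usual learning-rate parameter):

```python
import math

def softmax_weights(cumulative_rewards, eta=1.0):
    """Play action i with probability proportional to
    exp(eta * cumulative reward of action i)."""
    m = max(cumulative_rewards)  # subtract the max for numerical stability
    exps = [math.exp(eta * (r - m)) for r in cumulative_rewards]
    total = sum(exps)
    return [e / total for e in exps]

# "Follow the leader" would put all its weight on action 0 here;
# softmax hedges across the actions, so an adversary can't fully exploit it.
rewards = [3.0, 2.5, 0.0]  # hypothetical cumulative rewards per action
probs = softmax_weights(rewards)
print([round(p, 3) for p in probs])
```

Note that as `eta` grows the rule approaches "follow the leader", and as it shrinks the rule approaches uniform random choice, which is exactly the smoothing trade-off described above.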
Good to see more and more examples of using Squiggle. Do you think you can use these or future examples to really show how this leads to "ultimately better decisions"?
Thanks for putting this idea out there, Michael!
I have several questions, all in the spirit of helping you sharpen up the idea:
Thanks for checking and sharing that update, Pablo!
By the way, I expect to see 'mission hedging' continue to be the most 'commonly' used term in this area because this is arguably the right way to describe the AI portfolio Open Philanthropy has publicly mentioned considering. That is, if we label short AI timelines as a bad thing, then this is 'hedging'. Still, I do like to put it in the overall 'mission-correlated' bucket so we remember that the key bet with this portfolio is that short timelines lead to higher cost-effectiveness (i.e. we're betting timelines and cost-effectiveness are correlated).
Obviously you and Pablo have a better sense of what is desired on the Forum/Wiki in general. I am just going on intuition.
If this is important it would be helpful to know in more detail what place original research is supposed to have on Forum/Wiki. The same with summaries of existing research. Is a series of 'original research' EA Forum posts on mission-correlated investing acceptable? Then as the 'mission-correlated investing' Wiki tag summarizes these posts it is a summary of existing research.
That's an interesting point you make. I think you might have mistaken 'mission-correlated investing' as a replacement/equivalent for 'mission hedging'? Rather, the latter is a subset of the former.
For the record, some other relevant points:
i. The orders of magnitude of hits for 'mission hedging' need to be taken with a pinch of salt. It doesn't look to me like thousands of people are talking about mission hedging; rather it's thousands of crossposts and similar listings, as well as false hits.
ii. When I created this tag (as 'mission hedging') there was n...
Thanks Stefan! The definition before was hard to parse. I've updated it and hope it's better now.
I'm not sure I agree about mission hedging being more intuitive. Perhaps, especially if 'investing in evil to do more good' is intuitive or memorable. But how many people who have read early articles about mission hedging would be able to point out it both increases the expected value of good done and decreases the variance?
If what is intuitive is 'investing to have more money in worlds where money is more valuable' then that is mission-correlated investing.
I agree examples are important. There are now more posts with examples so hopefully that helps.
Thank you Wayne and Michael for the helpful nudges and encouragement.
I agree that the table at the bottom of the post was at best ambiguous. I have now deleted it from this post, revised it and turned it into this new post with several examples.
This post, then, without the table, remains to make the point that 'mission hedging' is just a subset of 'mission-correlated investing', and that mission-correlation research needs to focus on forecasting cost-effectiveness, not on whether the world is 'good' or 'bad'.
Thanks for the kind words, Ramiro. Yes, it's on my to-do list both to write more short posts on the key ideas in that paper and to revise the paper itself to make it easier to follow (it's too ambitious).
(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)
Crowding in other funding
We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are happy to spend billions of dollars on the best projects we can find, we also want to include other funders and investors in the journey of helping these projects scale in ...
Investment strategies for longtermist funders
Research That Can Help Us Improve, Epistemic Institutions, Economic growth
Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out.
We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a...
I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up/modelled on the existing EA long term future fund (which I recall reading about but can't find now, sorry)
Edit - found it and some ideas - see this and top level post.
Also, if you combine $1/ton with the estimated lives saved per ton from Bressler's paper, then you get roughly $4,400 per life saved.
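For anyone checking the arithmetic, here is the back-of-the-envelope version (the tons-per-life figure is my recollection of Bressler's central estimate, roughly 4,434 tCO2 per excess death, and should be verified against the paper):

```python
# Back-of-the-envelope cost-per-life check.
cost_per_ton = 1.0       # USD per tonne of CO2 averted
tons_per_life = 4_434    # tonnes of CO2 per excess death (assumed central estimate)
cost_per_life = cost_per_ton * tons_per_life
print(round(cost_per_life, -2))  # rounds to the ~$4,400 per life figure above
```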
Yes, what I was trying to say was that in my opinion the word 'Scalability' is a good match for 80,000 Hours' stated definition of Solvability. In practice, Solvability and Tractability are not used as if they represent Scalability. I think this is a shame because: a) Scalability makes sense given the mathematical intuition for ITN developed by Owen Cotton-Barratt, and b) there is a risk of circular logic in how people use Solvability/Tractability (e.g. they judge them based on a sense of the marginal cost-effectiveness of work on a problem).
I ag...
Very well put!
I would add that Scalability is already implicitly there in the ITN/SSN framework. At least if you take 80,000 Hours' description of Solvability at face value (i.e. "if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?"). Albeit, this is just my observation and not a common opinion.
With limited investment, more scalable projects will tend to have higher cost-effectiveness because they will still have plenty of room for more funding.
What is happening with the 'modern' view is tha...
This is a nice post that touches on many important topics. One little note for future reference: I think the logic in the section 'Extended Ramsey model with estimated discount rate' isn't quite right. To start, it looks like the inequality is missing a factor of 'b' on the left-hand side. More importantly, the result here depends crucially on the context. The one used is log utility with initial wealth equal to 1. This leads to the large, negative values for small delta. It also makes cost-effectiveness become infinitely good as delta becomes small. Th...
I'm still not sure I understand your point(s). The payment of the customers was accounted for as a negligible (negative) contribution to the net impact per customer.
To put it another way: think of the highly anxious customers. Each will get $100 in benefits from the App plus 0.02 DALYs averted (for themselves) on top of this, the additional DALYs being discounted for the possibility that they could use another App.
Say the App fee is $100. This means that to unlock the additional DALYs the users as a group will pay $400 million over 8 years.
The investor puts in ...
Thanks for this comment and question, Paul.
It's absolutely true that the customers' wallets are potentially worth considering. An early reviewer of our analysis made a similar point. In the end we are fairly confident this turns out not to be a key consideration. The key reason is that mental health is generally found to be a service for which people's willingness to pay is far below the actual value (to them). Especially for the likely paying customer markets of e.g. high-income country iPhone users, the subscription costs were judged to be trivial compa...
Thanks Alex.
On Angel Investing, in case you haven't seen it, there is this case study. But much more to discuss.
On Technology Deployment, are there any links you can share as examples of what you have in mind?
Hi Derek, hope you are doing well. Thank you for sharing your views on this analysis that you completed while you were at Rethink Priorities.
The difference between your estimates and Hauke's certainly made our work more interesting.
A few points that may be of general interest:
Just to add that in the analysis we only assumed Mind Ease has impact on 'subscribers', meaning paying users in high-income countries (and active/committed users in low/middle-income countries). We came across this pricing analysis while preparing our report. It has very little to do with impact but it does a) highlight Brendon's point that Headspace/Calm are seen as meditation apps, and b) show that anxiety reduction looks to be among the highest willingness-to-pay / highest-value-to-the-customer segments into which Headspace/Calm could expand (e.g. by rel...
Just to add, for the record, that we released most of Hauke's work because it was a meta-analysis that we hope contributes to the public good. We haven't released either Hauke or Derek's analyses of Mind Ease's proprietary data. Though, of course, their estimates and conclusions based on their analyses are discussed at a high level in the case study.
To add two additional points to Brendon's comment.
The 1,000,000 active users is cumulative over the 8 years. So, just for example, it would be sufficient for Mind Ease to attract 125,000 users a year each year. Still very non-trivial, but not quite as high a bar as 1,000,000 MAU.
We were happy with the 25% chance of success, primarily because of the base rates Brendon mentioned. In addition, this can include the possibility that Mind Ease isn't commercially viable for reasons unconnected to its efficacy, so the IP could be spun out into a non-profit. We didn't ...
Thought provoking post, thanks Jackson.
You humbly note that creating an 'EA investment synthesis' is above your pay grade. I would add that synthesizing EA investment ideas into a coherent framework is a collective effort that is above any single person's pay grade. Also, that I would love to see more people from higher pay grades, both in EA and outside the community, making serious contributions to this set of issues. For example, top finance or economics researchers or related professionals. Finally, I'd also say that any EA with an altruistic strategy ...
Yes, Watson and Holmes definitely discuss other approaches that are more like explicitly considering alternative distributions. And I agree that the approach I've described does have the benefit that it can uncover potentially unknown biases and work for quite complicated models/simulations. Hence why I've found it useful to apply to my portfolio optimization with altruism paper (and actually to some practical work), along with using common-sense exploration of alternative models/distributions.
Great question and thanks for looking into this section. I've now added a bit on this to the next version of the paper I'll release.
Watson and Holmes investigate this issue :)
They propose several heuristic methods that use simple rules or visualization to rule out values where the robust distribution becomes 'degenerate' (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.
It seems to me that what seem like different techniques, like cross validation, are ultimately t...
Great points. You've inspired me to look at ways to put more emphasis on these ideas in the discussion section that I haven't yet added to the model paper.
Developing a stream of the finance literature that further develops and examines ideas from the EA community is one of the underlying goals with these papers. I believe these ideas are valid and interesting enough to attract top research talent. Also, that there is plenty of additional work to do to flesh these ideas out so having more researchers working on these topics would be valuable.
In this c...
Thanks Madhav. I'm a big fan of using simple language most of the time. In this case all of those words are pretty normal for my target audience.
@Neel Nanda. Quick update: I've now discussed this offline with a bunch of people who are considering potential strategies of this nature. It seems to me that 'mission-correlated investing' is a better umbrella term for these strategies that work with financial-mission correlations to enhance expected value. 'Mission hedging' strategies would be the subset of mission-correlated strategies that both increase expected value and reduce the variance of outcomes.
Thanks Sjir. Interesting thought to muse on.
Just quickly riffing on the example in this post, if you have a great business idea that will only work under one politician you might bet on them. Or if you think one politician will be good for your current job, but the other could make it optimal for you to retrain and change jobs, then bet on the other. Or if one will make you want to leave the country, then bet on them to help with your moving costs.
Great point and perhaps more interesting than you might have expected.
To repeat back what I think you meant: what I've called the mission hedging strategy for this case makes the two possible outcomes 15 vs 0, while for just donating the possible outcomes are 10 vs 1. So actually the variance of outcomes is higher. It's more like anti-hedging.
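A quick numerical sketch of that variance comparison (50/50 probabilities for the two election outcomes assumed, payoffs as in the example):

```python
from statistics import mean, pvariance

# Hypothetical payoffs from the example, 50/50 outcome odds assumed.
hedging = [15, 0]   # mission "hedging" outcomes
donating = [10, 1]  # just-donate outcomes
print(mean(hedging), pvariance(hedging))    # higher mean, higher variance
print(mean(donating), pvariance(donating))  # lower mean, lower variance
```

So the "hedge" buys a higher expected payoff at the cost of a wider spread of outcomes, which is why anti-hedging is the better label here.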
First, this depends on how happy you are about Biden v Trump for other reasons. If a Biden win is worth +100 in utility for you and Trump -100, then the mission hedging outcomes are 115 & -100, whereas for simply ...
Thank you jackva. Great points on this specific example.
In general, suppose we didn't think this was a special moment. Then essentially this means we think 'investing to give' also presents a good opportunity. If 'investing to give' is also 10x CCF under Trump, then indeed you would want to just wait and either give under Biden or invest to give. But if 'investing to give' is only 5x CCF, then we're in the scenario I discussed under 'More general context'. So, fair point, I have added a sentence to the main post to explicitly rule out 'investing to give' b...
Yeah, it seems we do have a semantic difference here. But, how you're using 'raw impact units' makes sense to me.
Nice, clear examples! I feel inspired by them to sketch out what I think the "correct" approach would look like. With plenty of room for anyone to choose their own parameters.
Let's simplify things a bit. Say the first round is as described above and its purpose is to fund the organization to test its intervention. Then let's lump all future rounds together and say they total $14m and fund the implementation of the intervention if the tests are s...