In this article I want to explain a phenomenon in the effective altruism community that might look a bit like the streetlight effect, propose an idea for a piece of software that might help to further optimize this area, and ask for your input. (Cross-posted from my blog.)
Open Phil–Type Interventions
Insightful quantitative analyses give me the warm-fuzzies, but there may well be highly effective interventions, maybe even interventions more cost-effective than GiveWell’s top charities, that are not easily quantifiable. The Open Philanthropy Project has set out to find them, but so far it has not published any hard and fast recommendations.
Meanwhile, outsiders seem to have mistaken our enthusiasm for certain more easily quantifiable interventions to mean that “effective altruism is about donating to easily quantifiable interventions” rather than “effective altruism is about doing the most good.” That’s weird. But in a comment on my article, David Moss noted that something is going on that does look like the streetlight effect. Still, I think this is the result of good judgment. That, however, doesn’t mean that the same good judgment could not also lead to different decisions given more information.
Expected Utility and Limited Diversification
The streetlight effect describes a scenario where you lose your wallet in the park and then go search for it up on the street, since there are streetlights there and you’d never find it in the dark of the park anyway. The reality is more like this: you haven’t lost any particular wallet, but there are potentially a whole bunch of wallets lying around on the street and in the park. Since the light cones are so small, you figure chances are that the biggest wallets are somewhere out in the dark. (Unrelated question: When you find a wallet with just $3,340 in it and a bunch of the owner’s business cards, should you return it or donate the money effectively?)
Translating the metaphor into reality, the darkness stands for areas of great uncertainty. You may, for example, be uncertain how important the cause is that an intervention is trying to address, how effective the intervention is at addressing said cause, whether the intervention is still cost-effective at the margin, and whether the charity implementing the intervention is any good at it. If these uncertainties were known probabilities, you’d have to multiply them all, but unfortunately you can’t. The result of all these pseudomultiplications is that the expected utility of interventions in the dark becomes pretty small.
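To make the pseudomultiplication concrete, here is a toy sketch. All the numbers are made up for illustration; the point is just how quickly the product of several independent discount factors shrinks the headline value.

```python
# Hypothetical illustration: four independent uncertainties about an
# intervention, each expressed as a probability-like discount factor.
p_cause_important = 0.5     # the cause matters as much as hoped
p_intervention_works = 0.4  # the intervention addresses the cause
p_marginal = 0.5            # it's still cost-effective at the margin
p_charity_competent = 0.6   # the implementing charity executes well

headline_value = 100.0  # value if everything goes right (arbitrary units)

expected_value = headline_value * (p_cause_important
                                   * p_intervention_works
                                   * p_marginal
                                   * p_charity_competent)
print(expected_value)  # 6.0 — a 94% haircut from four moderate uncertainties
```

Even though no single factor is lower than 0.4, the product already knocks the expected value down to 6% of the optimistic case, which is why interventions in the dark tend to look unattractive in expectation.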
All the while there are a few, pretty few, really cool interventions in the cones of the streetlights: those of the GiveWell and Animal Charity Evaluators top charities. There are many, many more interventions in the dark haze of uncertainty, but you can only expect the tiniest fraction of them to have any considerable cost-effectiveness at the margin, and even fewer of them to beat the known top charities.
Ordered by expected utility, the interventions will form something like a hyperbolic curve, with very few top interventions and a long, long tail of potentially interesting giving opportunities. This doesn’t include interventions that are fairly certain not to be cost-effective.
When we choose charities to donate to, we can’t diversify infinitely, and that wouldn’t be a good idea anyway. Some even argue against any diversification, but that seems unnecessarily restrictive. In any case, few effective altruists will donate to more than five charities, and they will mostly focus on the charities with the highest expected utility or some slight variation thereof. Most will agree on the high expected utility of the known top charities, but opinions on the long-tail charities will vary widely. One person may be very familiar with a certain long-tail charity and may hence think they can tell with above-average certainty that it’s a good buy, but someone else could worry that this very familiarity biases the first person’s judgment, and so donate to a different charity or to none at all. The result is that the hyperbolic curve becomes even more extreme when the y-axis is donations.
Expected Utility Auctions
This all seems perfectly logical to me, and I see no reason to criticize these people’s decisions. What would be very valuable, however, is what Open Phil does: trying to find the few good giving opportunities in the long tail and lift them out of it.
Open Phil, however, has to prioritize interventions that are very scalable because there are eight billion dollars waiting to be invested. The interventions at least have to maintain a comparable marginal cost-effectiveness for long enough to warrant the time and money invested in finding them.
Significantly, EA metacharities could profit from such prioritization. Goals such as educating the public about effective giving, fundraising for effective charities, collecting donation pledges, and conducting prioritization research might all conceivably be highly cost-effective, so long as the charities pursuing them haven’t exceeded the limits of their scalability or suffer from other hidden ailments. Worries about these latter problems are probably what’s holding back many EAs who would otherwise donate to metacharities.
Is there maybe a system that is less reliable than the proper Open Phil treatment but that might serve as a rough guide for these donors and as a training ground for prioritization research hobbyists? Impact certificates may develop into such a tool, but here’s another idea.
I envision an expected utility auction site a bit like Stack Exchange,
- where people can post their own estimates of the cost-effectiveness and scalability of their project and their reasoning and calculations behind it
- where other people can reply to such a bid with bids and calculations of their own
- where a widget on the side displays the current average of the bids weighted by their upvotes, the standard deviation, and some other metrics of the thread
- where a list gives a sorted overview of these metrics of all threads.
The unit could be something like 1 util/$ = what GiveDirectly can do for $1. The better estimates would influence the overall total more strongly, and the original poster would be incentivized to start out with a reasonable starting bid to earn upvotes and thus exposure for their project.
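As a minimal sketch of the widget’s math: here is one way the upvote-weighted average and standard deviation of a thread’s bids could be computed. The bid values, upvote counts, and the exact weighting scheme are my assumptions for illustration, not a spec from the proposal.

```python
import math

# Hypothetical thread: three cost-effectiveness bids in util/$
# (1 util/$ = what GiveDirectly can do for $1) with their upvotes.
bids = [1.8, 2.5, 0.9]
upvotes = [10, 3, 1]

# Normalize upvotes into weights so better-received bids count more.
total = sum(upvotes)
weights = [u / total for u in upvotes]

# Upvote-weighted mean and (population-style) standard deviation.
mean = sum(w * b for w, b in zip(weights, bids))
variance = sum(w * (b - mean) ** 2 for w, b in zip(weights, bids))
std_dev = math.sqrt(variance)

print(round(mean, 3), round(std_dev, 3))
```

A large standard deviation relative to the mean would flag a thread where the estimators disagree strongly, which is itself useful information for a donor browsing the sorted overview.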
Later a link to the donation register of the EA Hub might be useful so that people who read a review a few months after it was published can estimate how much room for more funding is still left for them.
Do you think this could work? Do you think it’s worthwhile? What other features would such a site need? Who would like to use such a site? Who can implement the MVP? (I’d do it myself, but I should really be working on my thesis.)
Thanks for reading!
I think that the impact purchase program has had more trouble finding good sellers of impact than buyers (Paul can correct me if I'm wrong). So I suspect the hard part of any such effort would be to get a large enough "deal flow" that the site actually gets used.
Separately, this sounds an awful lot like a two-sided market for certificates of impact except that the bids aren't binding and people's CE estimates are public to each other before they donate. Are there other relevant differences I'm missing?
Those differences are central; another one is that the projects don’t have to be completed. Eventually funders may be able or willing to buy some impact equity of charity startup founders, first funding them and eventually selling their share of the impact certificate at a profit, but so long as that’s not happening yet, it seems like another difference to me.
There is also the artificial unit of the person, in addition to that of the project, which makes it a bit more complicated to estimate where the impact really originated. The problem at the project level persists, but the person is not relevant. Then again, you could imagine charities as a whole funding their operations by selling impact certificates, so it may not be a difference in principle either.
I’ve been thinking about whether there are ways to hide the estimates temporarily, for example until two of them have been posted, but I haven’t come up with anything that I think would work.
We had a bigger group of applications this month (we'll post about it soon), along with significant unpurchased impact from last month, so now I think the balance is less clear. We'll see how the next round plays out, and how interested people are in funding the other opportunities.
In the long run and all else equal, it would be great to have additional non-binding estimates and public discussion for projects people are considering buying. The issue is the opportunity cost of the time spent discussing or thinking about them, especially for small projects. I don't think the character of the problem is fundamentally different from GiveWell's: you can either spend a long time on scalable interventions, or a little bit of time on non-scalable interventions. You care more about the ratio of (size of opportunity) / (effort).
My guess is that the implementation is not a bottleneck. For example, having a post in this forum and a thread for each project (which may contain links) seems like it could basically work.
Looking forward to the results!
And good idea! If we use this forum for it, the posts would have to follow a common format to make it easier for people to calculate the metrics. We’d also need a central place, like a tag, to collect them, so people can compare among them easily. That will be very important.
If you buy my benchmark certificate, we can also convert dollars to utils more easily and draw on certificate prices to inform the estimates.
This sounds awesome, and perhaps even the sort of thing we could use to assess the applications we get for EA Ventures (eaventures.org). I imagine the tough part will be acquiring and sustaining a user base of reviewers. Toward this end, you might first recruit an official board of dedicated reviewers while still allowing anyone to leave impact estimates.
The next couple weeks are going to be serious crunch time on EA Global, but feel free to ping me about this in ~2 weeks if you're interested in a potential EAV integration: tyler@centreforeffectivealtruism.org
+1. Tyler, has Telofy pinged you about this yet?
I have, but it must still have been EA Global crunch time. I still don’t know when I’ll have time to work on it…
Cool, noted. If we use this forum for it, or at least for the MVP, then the preparatory work will mostly involve codifying a common structure for the impact analyses, so that readers can glean the most important data quickly and writers are forced to include various important considerations. I also thought of EA Ventures here, since you already have a framework that could perhaps be adapted.