I'm seeking feedback on and collaborators for the creation, popularization, and maintenance of a list which ranks the top N living people by their positive impact via donations. The goal is to make the list popular enough to increase the status awarded to those who rank highly, bring more awareness to the importance of donation effectiveness, and ultimately cause people to donate more effectively and/or donate more money.
Details of the list
I still have a lot of uncertainty about how the list should work, but this is my current view:
- People are ranked by the total lifetime amount of money donated to altruistic causes, scaled by effectiveness as determined by Impact List researchers (via a mix of independent research and repurposing research from GiveWell and other organizations).
- The first version might contain somewhere between 100 and 1,000 people.
- Viewers can specify their values which are taken into account by the rankings: how much weight to give animals, people who will exist in the far future, etc. The default settings would attempt to capture the values of the median viewer.
- The following columns exist, which the viewer can rank by:
  - The raw (unscaled) total amount donated (which can be changed to the amount in a specific year)
  - Amount pledged
  - Net worth
- The list can be filtered by cause area.
- There's a drop-down to view alternate rankings provided by other organizations.
- Any organization submitting a new ranking metric must also submit the reasoning / calculations behind the scaling factors they assign to donation recipients.
- There's a page for each person showing every donation they've made, when it was made, and the scaled effectiveness of the donation according to Impact List and other organizations.
- There's a page for each donation recipient showing the research and calculations for why it has the effectiveness rating that it does, according to Impact List and any other organization that has provided their own research.
If Impact List becomes popular it could influence people's donations in the following ways:
- The list focuses attention on large (and potentially surprising to many viewers) differences in donation effectiveness, and on how much positive impact others are having. This may lead people to want to have more impact for its own sake.
- Attention on the list increases the status reward for giving effectively as well as the status penalty for giving ineffectively.
- An explicit ranking might make some donors want to rank higher than others for competitive reasons.
- The website could serve as an on-ramp for future EAs, with links to further reading. Billionaires may occasionally visit the website to check their ranking and the reasoning behind it, which could help us acquire more Sam Bankman-Frieds or Dustin Moskovitzs.
Note that reasons #1 and (the first part of) #4 could apply roughly equally to anyone who is aware of the list, not just those wealthy enough to potentially appear on it.
A large continuous auction
Because so many existing donations by the world’s wealthiest people are much less effective than they could be, spots on the list will initially be available for relatively low donation amounts to those who donate effectively. A list of size 1,000 might initially present an opportunity to hundreds of thousands of people to potentially appear on it (and push an ineffective donor off). This means that all four reasons above could be relevant for many more people than the size of the list.
To encourage list members' donations to become continuously more effective, the UI could include a tool showing people who aren't on the list how much money they'd need to donate to the current most effective recipient to claim a spot.
The power of popular lists
Popular lists like the Forbes Billionaire List, the US News College Rankings list, and the New York Times Best Seller list influence the status and behavior of those who appear (or want to appear) on them, and the behavior of those who view them.
This is well documented in the case of US News' list. Colleges regularly try to boost their US News ranking via strategies that have nothing to do with improving the services they offer. US News has a near-monopoly on attention paid to college rankings, which has a huge effect on US higher education. In this case the effect seems net negative, but it's a great example of the potential power of lists.
One takeaway is that it may be worthwhile to put an enormous amount of resources into making Impact List so popular that almost everyone is vaguely aware of the rankings and it dominates the attention paid to all philanthropy-related lists.
Expected value of solving this problem
Total donations in 2020 by US entities amounted to 471 billion dollars -- 324 billion by individuals, 88 billion by foundations, 41 billion by estates, and 17 billion by corporations. The top 50 individual donors in the US gave 33 billion dollars in 2021. The top ~1.2 million households in the US donate 1/3 of all individual donations, or about 108 billion dollars.
It's difficult to get good global donation data, but let's estimate global donations are twice the US amount, or 650 billion dollars. Assume the list has 1,000 slots. Given the above data we'll estimate that the top 1,000 global individuals donate 50 billion dollars per year.
It's unclear how much the list could realistically influence these donations, but the numbers are so large that even a small change would have a big impact. Shifting 1% of total individual donations into effective causes amounts to 6.5 billion dollars per year. Causing the top 1,000 most impactful donors to direct 10% of the currently donated amount to effective causes comes out to 5 billion dollars per year.
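The arithmetic behind these figures can be spelled out in a few lines (a sketch using only the estimates above; the 2x-US multiplier and the top-1,000 figure are the assumptions already stated, not data):

```python
# Back-of-the-envelope scale estimates, in billions of USD per year,
# using the assumptions stated above.
global_individual = 650  # assumed global individual donations (~2x the US's 324)
top_1000 = 50            # assumed annual giving of the top 1,000 global donors

shift_all = global_individual / 100  # shifting 1% of all individual donations
shift_top = top_1000 / 10            # redirecting 10% of top-1,000 donations

print(shift_all)  # 6.5 ($B/year)
print(shift_top)  # 5.0 ($B/year)
```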
A common response I get when telling EAs about Impact List is surprise that no one has done it yet. The general idea was suggested on Twitter by Kelsey Piper here and here and Nathan Young has since mentioned it a couple times in forum comments, but I don't think anyone has started significant work on it.
If this is as neglected as it appears, it raises the question of why. My best guess is that non-EAs find the prospect of rating the effectiveness of a large number of donation recipients in a defensible way to be too daunting. Another possibility is that there are flaws in the idea that I haven't discovered or am underestimating the magnitude of. It's also possible that people have tried and failed but not left much evidence of their attempts. (Please contact me if you're aware of any!)
Although I don't know of attempts to rank people by donation impact, ranking people based on their donation amounts is common. See the Million Dollar List's top US donors from 2000 to 2016, Business Insider's list of the top 20 lifetime donors, Wikipedia's list of the top 21 lifetime donors, The Forbes 400's list of the top 400 richest Americans in 2021 where they give each a 1-5 philanthropy score based on the percent of their net worth that they've donated, Philanthropy.com's list of the top 50 American donors of 2021, and Donation List Website's ranked list of 55 EA donors, among many others. Note that Forbes' philanthropy score is a measure of the virtue of the giver, not of donation effectiveness.
Aside from not attempting to rate donation impact, there are many features of these lists that prevent them from becoming influential:
- The global lists rank only ~20 people.
- The US lists show donations only over a limited time period.
- Philanthropy.com requires registering for an account to see their list (which disqualifies it from ever becoming popular).
- Aside from Wikipedia's list, they're snapshots rather than continuously updated.
- Their UIs are either below average or not built for mass appeal, aside from Forbes'.
I haven't found any examples where a lot of effort has gone into creating a popular philanthropy-focused ranking of people. The regular Forbes List does reflect a lot of effort, but the Forbes 400 list is just a version of the main list restricted to Americans, with a new low-effort philanthropy score.
TL;DR: This seems extremely neglected. I'm not aware of any effort going into producing a ranking that takes into account donation effectiveness. Efforts that ignore effectiveness don't appear to be trying hard to become influential, so they don't give us much information about Impact List's chances of success.
In general getting society to care about a new thing to the degree that people care about the Forbes List or the US News College Rankings seems extremely difficult. There are only so many things that can be that popular because attention is finite and the competition for attention is fierce.
If we can create a high quality list there are a couple factors which might help with making it popular:
- The EA movement as a whole is increasing in influence. If prominent EA organizations and individuals think this project is worthwhile they may be willing to use their influence to help the list get traction.
- Impact List might also partner with a popular EA-friendly organization like Our World In Data or The Economist.
- There may be high inherent demand for this data.
- Its high neglectedness means that even if we didn't think people would be that interested, we shouldn't be too confident in this belief. The value of information from trying this project seems high.
- People seem pretty interested in what billionaires are up to in general.
- Keeping track of who has helped others the most was an important concern throughout our evolutionary history.
- A few people have told me they like this idea because billionaires often donate to their own ineffective foundations for tax deductions, to provide cushy jobs for their families and friends, and to give the appearance of being impactful. There may be a populist appetite for a list which highlights that many such donations are much less effective than they could be.
There's also the question of whether the list is too hard to build well. I speculated in the previous section that this was the main reason why Impact List doesn't already exist. I expect this to be very difficult, but the knowledge and expertise that EAs have developed around effectiveness evaluations should make this more feasible for us than for non-EA groups. However, Impact List has a few disadvantages relative to GiveWell:
- GiveWell is trying to find the best recipients, so it can cut short its research when it determines a recipient won't be among the best. Impact List will have to do deeper research into ineffective recipients because wealthy donors often donate large amounts of money to them.
- Ineffective recipients are likely harder to research than effective ones -- effective recipients probably care about being effective, and so care about establishing to themselves and others that they're effective.
- Impact List will have to evaluate a lot of recipients -- in theory, anyone to whom those on the list (or within reach of it) have donated.
  - In practice we'll prioritize deep research into recipients by how much money has been donated to them and by their expected effectiveness, since those evaluations will have the most effect on the rankings.
  - We'll also need to develop good techniques for approximating effectiveness after relatively shallow evaluations.
    - For example, we might group recipients into similarity clusters, evaluate a small number of them, and tentatively extrapolate those evaluations to everything else in the cluster.
    - Recipients could also be put into a limited number of buckets corresponding to different orders of magnitude of effectiveness.
- The effectiveness of a donation to a recipient is a function of time. GiveWell can focus on current effectiveness, while Impact List will have to assess the effectiveness of donations made long ago.
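As a rough illustration of the order-of-magnitude bucketing idea mentioned above (the function and the multiplier values are hypothetical, for illustration only -- not actual Impact List methodology):

```python
import math

def effectiveness_bucket(multiplier):
    """Snap a shallow effectiveness estimate to the nearest order of magnitude.

    `multiplier` is a rough estimate of impact per dollar relative to some
    baseline recipient (a hypothetical unit, for illustration only).
    """
    return 10.0 ** round(math.log10(multiplier))

# A donation's scaled impact would then be amount * bucket:
print(effectiveness_bucket(3))    # 1.0  (within an order of magnitude of baseline)
print(effectiveness_bucket(40))   # 100.0
print(effectiveness_bucket(0.2))  # 0.1
```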
TL;DR: Both construction of the list and making it popular seem really hard. I'd guess that the combined difficulty of the outcome in the 'Scale' section (affecting 1% of total individual donations and 10% of top-1000 donations) is on the order of creating a billion dollar startup.
A very simple expected value calculation
I find calculating scores using the ITN framework for this problem less helpful than performing a direct expected value calculation.
If we consider the case where Impact List causes 1% of total individual donations to become effective, and 10% of top-1000 donations to become effective, then this results in ~11 billion extra dollars per year going to effective causes. Popular lists tend to remain popular for a while so if Impact List becomes influential we should expect benefits lasting longer than one year. If we estimate the duration of the list's influence at ten years, we get $110 billion moved to effective causes.
What is the probability of at least this level of impact if the project were funded with around a million dollars per year? This is where almost all of the uncertainty comes in. I would guess it's somewhere between 0.1% and 1%, which would result in a ten-year expected value between $110 million and $1.1 billion.
Impact List can still be very worthwhile if you think the amount of influence described above is unrealistic. If it affects 0.1% of total individual donations and 1% of top-1000 donations then the ten-year benefit is $11 billion. If the probability of this reduced level of influence is 1% then the resulting expected value is $110 million.
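Spelled out as a calculation (the annual figure, duration, and probability range are the guesses from the text above, not established values):

```python
annual_shift = 11  # $B/year moved to effective causes (~6.5 + ~5, rounded as above)
years = 10         # assumed duration of the list's influence
total_moved = annual_shift * years  # $110B over ten years

# Guessed probability of achieving at least this level of impact: 0.1% to 1%.
for p in (0.001, 0.01):
    print(f"P = {p:.1%}: ten-year EV ~= ${p * total_moved * 1000:,.0f}M")
```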
I'm very interested in seeing how others would estimate the expected value of this project.
Other risks and challenges
Rankings of people can feel arrogant and antagonistic
Publishing rankings of people has some inherent potential to rub people the wrong way. Done badly, we could come across as overly critical, arrogant know-it-alls who are claiming an authority to rank others which we don't deserve. Introducing a sense of competition into altruistic donations may also cause a negative reaction.
The marketing of the list and the messaging on its website should be thoughtfully crafted to avoid causing negative reactions. We might stress that:
- The list is primarily meant to celebrate people who are doing more good than almost everyone else in the world, and to inspire those who see it to increase their own impact.
- We don't claim our rankings are the final word and we welcome others to submit alternate metrics, which we'll make available to viewers of the list.
- Doing our best to figure out what's effective is better than not trying, despite the difficulty, because the stakes are high and differences in effectiveness can be huge.
- Wanting recognition for doing good is a common human desire, and if harnessing this desire helps create more good in the world then doing so seems better than the alternative.
The rankings may not seem credible enough to the public
Of course the rankings will seem less credible to the public than to EAs, but it's unclear whether they'd still seem credible enough to the public to be worth paying attention to and engaging with. There are several reasons the public might dismiss the list's rankings:
- They may be too weird -- for instance AI x-risk being judged as highly important.
- Comparing effectiveness across domains (climate change vs. global health vs. animal suffering vs. x-risk) may not seem legitimate to people.
- EAs are more willing than the general public to think it's legitimate to tentatively accept highly uncertain best-effort estimates.
- EAs will likely dominate early versions of the list because they're already donating to the causes that we think are the most effective and because effectiveness matters so much. Impact List might then strike people as partly motivated by a desire to congratulate ourselves for being better than other philanthropists.
Allowing other organizations to submit their own rankings is one attempt to mitigate this. A few other things that we could do to make Impact List feel more credible to the public:
- By default rank people by raw amount donated, and make our opinion of scaled impact just one of several columns that viewers could rank by if they chose to.
- Assign effectiveness ratings more conservatively than our actual best estimates, especially for longtermist causes.
- Initially separate out controversial cause areas into their own sections.
My guess is that we should not initially do any of these bulleted items, and should instead establish our credibility by publishing high quality explanations of our effectiveness research, but they're options that we could consider based on viewer feedback.
Customization options may prevent a canonical version of the list from being recognized
There's a tension between letting the viewer customize the list to match their preferences (about values and effectiveness-ranking providers) and having a single canonical list that becomes popular enough to influence donor behavior.
My low-confidence guess is that providing reasonable default values (possibly arrived at by collecting preference data from viewers) for these parameters would result in the default view being treated as canonical by most people. If there are multiple high quality effectiveness rankings we could use some blend of them as the default view.
Even if the default view of the list never becomes canonical, if the data is interesting enough then it might still put enough attention on donation effectiveness to have a significant impact.
Not all donations are public
The raw donation data for the list won't be fully accurate because some people donate anonymously. The popularity of the Forbes List (which suffers from a similar issue) suggests to me that this is probably not a big deal.
Rank people by net externalities
In the ideal version of this list people would be ranked by the value of their net externalities (positive minus negative) from all activities (donations, running companies, writing, etc.). Calculating this would be much more work and would raise more issues about the credibility of the list.
Make the list much larger
If the list had 100,000 slots it may be possible for a typical software developer to appear on it by donating 30% of their income very effectively. Evaluating the effectiveness of all donations made by 100,000 donors would be infeasible, but it would be easy to give people credit for donating to any already-evaluated recipient.
This could be done automatically if Impact List integrated with effective donation recipients. Imagine donating to the Long Term Future Fund through Impact List's UI: your ranking could be updated immediately without any human effort. Organizations could also have options on their own donation pages to "share this data with Impact List".
Feedback and collaboration
I'm very interested in feedback, especially on these questions:
- How much of the EA community's funds should go toward this project, assuming a high quality team can be assembled? Why?
- How would you calculate the expected value of this project?
- How would you make this proposal better? What did I miss?
- Can you think of low-cost ways that we could test whether this is a good idea?
Assuming this project is worthwhile, I'd like it to be led by the best person for the role, which may not be me. If you're interested in working on this in any capacity, including leading the project, let me know.
I'll be at EAG San Francisco from July 29th-31st if anyone wants to discuss this in person. I've also set up an Impact List discord server.
Thanks to Branimir Dolicki, Eric Jorgenson, Spencer Pearson, William Ehlhardt, Jack Stennett, Claire Barwise, Pasha Kamyshev, Vaidehi Agarwalla, and Issa Rice for providing feedback on drafts of this post.
As of 2022-06-08, the certificate of this article is owned by Elliot Olds (100%).
Given the amount of work involved this would not happen before Impact List got very popular.
My calculations using these scales for the ITN framework give the project a value of 26 -- 12 for importance, 12 for neglectedness, 2 for tractability. An uncertain assumption was that $5,000 donated effectively produces 50 QALYs. This table shows scores for other problems.
Assume inflation and expected growth of donation amounts will roughly cancel.
For a better approximation we'd subtract the impact of any counterfactual ineffective donations, but I'm assuming not doing so gives a good approximation since the most effective donations are much more effective than almost all others.
An idea suggested by Vaidehi Agarwalla is to publish a much smaller version of the list (maybe top-10) as an article in something like Vox Future Perfect to see how much attention it gets.
Nice!! This is pretty similar to a project Nuño Sempere and I are working on, inspired by this proposal:
I'm currently building the website for it while Nuño works on the data. I suspect these are compatible projects and there's an effective way to link up!
Also happy to give support on this if I can.
There's good discussion happening in the Discord if you want to hop in there!
Awesome! (Ideopunk and I are chatting on discord and likely having a call tomorrow.)
On the canonical list point.
I'd recommend identifying the key ways in which different orgs weight donations differently, then polling the public to get average moral weights, and showing those as the default. Then give people the option to shift the weights around.
E.g. sliders for AI risk, or for chicken lives relative to human lives. These two sliders alone would probably significantly affect the top spots.
It's a great idea but the devil is definitely in the details. You get at much of this, but maybe underestimate the challenge, especially for things like 'getting customization without overwhelming people and dissolving the impact'.
For traditional global development this could be somewhat tractable; CGDev's Commitment to Development Index took a stab at this at the country level.
... Although even there, there are still loads of moral weights and 'moonshot project' issues to consider.
But going further -- including animals, x-risk/s-risk, longtermism, cause prioritization itself -- opens up huge cans of worms. We may be able to debate this stuff intelligently, but it might look like a huge black box to outsiders, or "that's like, just your opinion, man".
A big challenge, but seems worth attempting to me. And the process of trying to do this itself, and engage wider audiences, seems valuable.
Is the CGD's Commitment to Development Index an expression of policy and spending quantifications within different sectors? I also wonder if 'Commitment' is the right name for it, since some countries are advantaged differently. For example, nations have limited control over how much, in absolute terms, they can spend on international peacekeeping (one of the Security component's three subcomponents) due to different income levels.
This index can, but does not have to, motivate countries to optimize for an 'ideal approach.' This can be valuable when countries make better choices (e.g. realize that they get a better score if they divest from arms trade and invest in fishing alternatives) but disvaluable when the intent of the index differs from the effect of the calculation (e.g. if it would be favorable to move large sums from subsidized organic agriculture to arms trade within the context of 'peacekeeping').
A similar consideration applies to this list: there should be no way to 'trick' the index. One way to address this is to enable adjustments based on feedback. But adjustability, along with the reduced perception of absoluteness, can enable partiality based on convenience or the strategic attractiveness of donors (for example, utility monsters could be weighted down if they would otherwise rank far above everyone else, or up if there is a large utility-monster donor who could be attracted to EA by their listing). This could be addressed by actually inviting the top people to converse about possible improvements to the metrics, implementing impartiality considerations as they apply courtesy to each other, maybe. Just some thoughts.
Yeah, I've lately been considering just three options for moral weights: 'humans only', 'including animals', and 'longtermist', with the first two being implicitly neartermist.
It seems like we don't need 'longtermist with humans only' and 'longtermist including animals' because if things go well the bulk of the beings that exist in the long run will be morally relevant (if they weren't we would have replaced them with more morally relevant beings).
but even within 'humans only' (say, weighted by 'probability of existing' ... or only those sure to exist).
There are still difficult moral parameters, such as:
(Similar questions 'within animals' too).
Agreed. I guess my intuition is that using WALYs for humans+animals (scaled for brain complexity), humans only, and longtermist beings will be a decent enough approximation for maybe 80% of EAs and over 90% of the general public. Not that it's the ideal metric for these people, but good enough that they'd treat the results as pretty important if they knew the calculations were done well.
Do you mean all three separately (humans, animals, potential people) or trying to combine them in the same rating?
My impression was that separate ratings could work, but that combining them means one of the three will overwhelm the others.
If you do a linear weighting, this is expected. But one approach to worldview diversification is that you can normalize.
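A minimal sketch of what that normalization could look like (all names and numbers here are hypothetical; the point is only that rescaling each worldview's scores before blending prevents the worldview with the largest raw units from swamping the others):

```python
def normalize(scores):
    """Rescale one worldview's raw scores so they sum to 1."""
    total = sum(scores.values())
    return {donor: s / total for donor, s in scores.items()}

def blend(per_worldview, weights):
    """Combine normalized per-worldview scores using the chosen weights."""
    combined = {}
    for worldview, scores in per_worldview.items():
        for donor, s in normalize(scores).items():
            combined[donor] = combined.get(donor, 0.0) + weights[worldview] * s
    return combined

# Longtermist raw scores are in vastly larger units, but after
# normalization neither worldview swamps the other.
scores = {
    "neartermist": {"A": 100.0, "B": 50.0},
    "longtermist": {"A": 1e9, "B": 2e9},
}
blended = blend(scores, {"neartermist": 0.5, "longtermist": 0.5})
for donor in sorted(blended):
    print(donor, round(blended[donor], 6))  # both 0.5; a naive sum would rank B ~2x A
```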
Linch, that sounds like a reasonable approach. I think something like that could work.
Ultimately, I guess a high value per dollar would then be assigned to anyone who donated ~'maximally impactfully' (per our best guess) in any of the three categories,
... and then the value would scale ~linearly with the amount donated 'max-impactfully' to any of the three categories.
Might be somewhat difficult to explain this to a smart popular audience, but I suspect it might be doable.
My suspicion is that there will only be a very narrow and “lucky” range of moral and belief parameters where the three cause areas will have cost effectivenesses in the same orders of magnitude.
But I should dig into this.
Doing credible cost-effectiveness estimates of all the world's top (by $ amount) philanthropists (who may plausibly make the list) seems very time-intensive.
Supposing the list became popular, I imagine people would commonly ask "Why is so-and-so not on the list?" and there'd be a need for a list of the most-asked-about-people-who-are-unexpectedly-not-on-the-list with justifications for why they are not on the list. After a few minutes of thinking about it, I'm still not sure how to avoid this. Figuring out how to celebrate top philanthropists (by impact) without claiming to be exhaustive and having people disagree with the rankings seems hard.
Yeah it will be very time intensive.
When we evaluate people who don't make the list, we can maintain pages for them on the site showing what we do know about their donations, so that a search would surface their page even if they're not on the list. Such a page would essentially explain why they're not on the list by showing the donations we know about and which recipients we've evaluated vs. those who we've assigned default effectiveness values for their category.
I think we can possibly offload some of the research work on people who think we're wrong about who is on the list, by being very willing to update our data if anyone sends us credible evidence about any donation that we missed, or persuasive evidence about the effectiveness of any org. The existence of donations seems way easier to verify than to discover. Maybe the potential list-members themselves would send us a lot of this data from alt accounts.
I think Impact List does want to present itself as a best-effort attempt at being comprehensive. We'll acknowledge that of course we've missed things, but that it's a hard problem and no one has come close to doing it better. Combined with our receptivity to submitted data, my guess is that most people would be OK with that (conditional on them being OK with how we rank people who are on the list).
Your estimate seems optimistic to me because:
(a) It seems likely that even in a wildly successful case of EA going more mainstream Impact List could only take a fraction of the credit for that. E.g. If 10 years from now the total amount of money committed to EA (in 2022 dollars) increased from its current ~$40B to ~$400B, I'd probably only assign about 10% or so of the credit for that growth to a $1M/year (2022 dollars) Impact List project, even in the case where it seemed like Impact List played a large role. So that's maybe $36B or so of donations the $10M investment in Impact List can take credit for.
(b) When we're talking hundreds of billions of dollars, there's significant diminishing marginal value of the money being committed to EA. So turn the $36B into $10B or something (not sure the appropriate discount). Then we're talking a 0.1%-1% chance of that. So that's $10M-$100M of value.
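For concreteness, the adjusted numbers in (a) and (b) work out as follows (all figures are the rough guesses from this comment, not data):

```python
growth = 400 - 40       # $B: hypothetical ten-year growth in EA-committed funds
credit = growth / 10    # ~10% of that growth credited to Impact List -> $36B
discounted = 10         # after a rough diminishing-marginal-value discount, $B

for p in (0.001, 0.01):  # the post's 0.1%-1% probability range
    print(f"P = {p:.1%}: EV ~= ${p * discounted * 1000:.0f}M")
```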
If a good team can be assembled, it does seem worth funding to me, but it doesn't seem as clear-cut as your estimate suggests.
Thanks for the feedback!
Regarding (a), it doesn't seem clear to me that conditional on Impact List being wildly successful (which I'm interpreting as roughly the $110B over ten years case), we shouldn't expect it to account for more than 10% of overall EA outreach impact. Conditional on Impact List accounting for $110B, I don't think I'd feel surprised to learn that EA controls only $400B (or even $200B) instead of ~$1T. Can you say more about why that would be surprising?
(I do think there's a ~5% chance that EA controls or has deployed $1T within ten years.)
I think (b) is a legit argument in general, although I have a lot of uncertainty about what the appropriate discount should be. This is also highlighting that using dollars for impact can be unclear, and that my EV calculation bucketed money as either 'ineffective' or 'effective' without spelling out the implications.
A few implications of that:
Given the bucketing and that "$X of value" doesn't mean "$X put into the most effective cause area", I think it may be reasonable to not have a discount. Not having a discount assumes that we'll find enough (or scalable enough) cause areas over the next ten years at least as effective as whatever threshold value we pick that they can soak up an extra ~110B. Although this is probably a lot more plausible to those who prioritize x-risk than to those who think global health will be the top cause area over that period.
Considerations in the opposite direction:
Twin challenges: constructing a metric that (1) we (EA) will not hate, and that (2) will not confuse the public
Comparability across causes and across outcomes is very difficult
What credit do I get for things like those below, and how can we compare these in a way that is satisfying to us, and understandable to the larger public, including billionaires?
I donate $1 billion ...
FIRST-WORLD: To elderly hospices in the US and the UK
GIVEWELL: To GiveWell charities,
GIVEWELL*: one of which is later found to have been funding a program whose impact is thrown in doubt
GIVEWELL-ESQUE: To a non-GiveWell charity working to prevent malaria. They use methods similar to AMF's, but GiveWell didn't have the resources to evaluate them and they were 'too similar' to be worth a separate evaluation
ANIMALS: to successfully promote legislation to end prawn eyestalk ablation in Chile
LONGTERMIST: to fund AI safety research, generating research papers deemed very interesting
GOOD-FAILED-BET: To fund research into a possible cure for Alzheimer's disease, which looked promising but turned out unsuccessful.
It would be very hard to reach a resolution, even amongst ourselves, on GIVEWELL vs. ANIMALS vs. LONGTERMIST.
So, should we limit it to 'GH&D only'? But that would drag attention away from animals and LT causes that many/most EAs value above all else.
Perhaps a good first pass would simply be to sum "donations to all plausibly-high-impact charities"... maybe "all of the above except FIRST-WORLD"? But then, we would probably want to discount OXFAM ... relative to the others, but by how much? And how can we claim to measure the GIVEWELL, ANIMALS, and LONGTERMIST benefits in the same units? Unless the importance of prawns' sentient life/suffering happens to add up to just the sweet spot where the expected good accomplished equals that of a top GiveWell charity, one will vastly outweigh the other.
The evidence synthesis base is thin
Even within global public health/development, we have basically a single source of public evaluations that we trust (GiveWell), or at most a handful (public OP reports? Founders Pledge?). These give rigorous assessments and accounting of a handful of interventions and charities, backed by strong academic evidence. I think we can be fairly confident that these interventions (like bednets, micronutrients) are in fact very likely to have strong positive effects.
But what do we do with charities such as Oxfam, Doctors Without Borders, etc., where, I guess, most of the GPHD giving goes? As far as I know there has been no credible comparative effectiveness rating of these, because it's very difficult: they do many things, and their theories of change involve some things that are harder to measure. GiveWell does not say that 'AMF is 10.6 times more impactful than Oxfam'. They just don't report on this.
For reasons including 'the ability to make an impact list', I've been advocating that we do more to try to come up with credible, reasoning-transparent metrics of the effectiveness of charities.
I was hopeful ImpactMatters would go in this direction, but I didn't see it. SoGive might still, if the funding is there. There are also some good initiatives coming out of QURI which would need alternate funding, and I think HLI is working in this area also.
I also mention this in my response to your other comment, but in case others didn't notice it: my current best guess for how we can reasonably compare across cause areas is to use something like WALYs. For animals, my guess is we'll adjust WALYs with some measure of brain complexity.
In general the rankings will be super sensitive to assumptions. Really high-quality research might reduce disagreements a little, but no matter what, there will still be lots of disagreement about assumptions.
I mentioned in the post that the default ranking might eventually become some blend of rankings from many EA orgs. Nathan has a good suggestion below about using surveys to do this blending. A key point is that you can factor out just the differences in assumptions between two rankings and survey people about which assumptions they find most credible.
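The blending described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual method: the org names, effectiveness multipliers, and survey weights are all invented placeholders.

```python
# Hypothetical sketch: blending effectiveness scalings from several orgs
# into one default ranking. All names and numbers below are invented for
# illustration; nothing here comes from real GiveWell/SoGive data.

# Each org assigns a multiplier ("dollars of value per dollar donated")
# to each recipient charity.
scalings = {
    "OrgA": {"AMF": 10.0, "GFI": 4.0, "Hospice": 0.5},
    "OrgB": {"AMF": 8.0, "GFI": 12.0, "Hospice": 0.3},
}

# Survey-derived credibility weights over the orgs' assumptions
# (must sum to 1).
weights = {"OrgA": 0.6, "OrgB": 0.4}

def blended_scaling(charity):
    """Weighted average of each org's effectiveness multiplier."""
    return sum(weights[org] * scalings[org][charity] for org in scalings)

def scaled_total(donations):
    """Effectiveness-scaled lifetime total; donations is a list of
    (charity, amount) pairs for one donor."""
    return sum(amount * blended_scaling(charity) for charity, amount in donations)

donor = [("AMF", 1_000_000), ("Hospice", 2_000_000)]
print(scaled_total(donor))
```

The survey step would supply the `weights`; everything downstream is just a weighted average of per-charity multipliers applied to each donor's donation history, so factoring out and polling individual assumptions slots in naturally.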
I think you highlight something really important at the end of your post about the benefit of making these assumptions explicit.
Some questions; you do not have to reply to most.
1) Have you considered showing, in addition to one's absolute impact, their relative impact (e.g. relative to their income or capacity)? These lists can be inspirational and insightful; for example, they could show that many extremely poor people compete with Bill Gates while other extremely poor people do not. This can motivate cooperation in inclusive global development, which can make everyone involved in this area feel great and make others want to join.
2) Since this is lifetime, younger people would be disadvantaged. Would you apply Bayesian updating?
3) Why are these lists of large philanthropists not so popular? Are there some regional/cause area/industry lists that are better known? Can they be aggregated? What are their current effects, and how could EA make them better at doing good, such as motivating people to donate or to develop market solutions that are more sustainable in the long term?
4) Are you considering the counterfactual impact of the people's alternative spending?
5) How are you going to make sure you do not forget anyone, while not discouraging people who are motivated by not being recognized for their charity? (If you start excluding them, maybe they will not stop donating but will get recognized?) Have you considered effects on networks, e.g. where some donations are a norm but no one individual donates a large sum? Is there a way to highlight them in a way that is not biased by the number of people?
6) Why would you include net worth? It seems inconsistent with the other columns, which celebrate donating rather than gaining status within the socioeconomic framework. Status is in impact, right?
7) I would also suggest deleting the amount donated so far/in a given year, since there should be intrinsic motivation of the donor, and trust that they will keep up with their pledge or, when they do not, that there are valid reasons. If this is public, it seems like they have to do it, which can be demotivating. Thus, I would just keep an updated estimate of a pledge, updated by external experts, possibly after consultations with the donor. If the experts are sufficiently cool and serious, everyone will be excited to increase their expected impact value, even asking how they can do it even better.
8) I would either enable granular filtering, almost by output or intermediate outcome, or no filtering, with everything converted to some sentience-adjusted, wellbeing-adjusted life year. Because if you want to compare who makes the most impact in animal welfare, for example, you have to measure which charity frees the most chickens from cages, or supports the implementation of policies or alternatives with the same effect on confined farm animals. If you just filter by cause area, then $1b to cute-puppy-mill awareness is the same as $1b to effective, dynamic policy advocacy and development considering animals' experiences.
The one metric I am talking about is measuring 'active neural complexity' (Being You: A New Science of Consciousness, Chapter 2) and multiplying this by a weighted adjustment in wellbeing. I am suggesting first eliminating suffering, weighting suffering by an exponent of 2.5. Of course, externalities should be considered, so one's contributions to changes in total weighted wellbeing in the universe should always be estimated. This should be a prediction into infinity. This can seem like a challenge, but if institutional inertia is considered, and the effects on the very long-term future of destructive actions are known well and of constructive actions known little, it can be possible. Here, not only donations but also decisions should be considered.
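To make the metric I'm describing concrete, a minimal sketch under the stated assumptions (a neural-complexity multiplier as a sentience proxy, and suffering up-weighted by an exponent of 2.5); all numbers are illustrative, not derived from any real data:

```python
# Hypothetical sketch of a "weighted WALY": a wellbeing change is scaled
# by a neural-complexity proxy, and suffering (negative changes) counts
# super-linearly via an exponent of 2.5. Values are invented placeholders.

SUFFERING_EXPONENT = 2.5

def weighted_waly(delta_wellbeing, neural_complexity):
    """delta_wellbeing: change in wellbeing-adjusted life years
    (negative = suffering). neural_complexity: proxy in [0, 1],
    e.g. 1.0 for humans, lower for simpler animals."""
    if delta_wellbeing < 0:
        # Suffering is weighted more heavily than equivalent gains.
        return -neural_complexity * (abs(delta_wellbeing) ** SUFFERING_EXPONENT)
    return neural_complexity * delta_wellbeing

# Under these invented numbers, preventing 2 units of suffering for an
# animal with complexity 0.5 outweighs adding 2 units of wellbeing.
gain = weighted_waly(2.0, 0.5)
loss = weighted_waly(-2.0, 0.5)
print(gain, loss)
```

This is only one possible formalization of the idea; the prioritization of suffering-elimination falls out of the exponent being greater than 1.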
9) Are you considering the counterfactual of what wealth and impact capacity people could have developed, within what they perceive as their free will?
10) Would you want to wait to heavily publicize this list until it is possible to make impact cool in public, competing with other media that compete for attention by other means (shaming, fear, impulsive appeal, ...)? Or are you planning to draft it in a way that supports the emergence of this environment?
11) Yes, I was talking to GWWC a while ago about enabling people to showcase their donations, and I sometimes keep mentioning it. Is this already occurring? Maybe it is not consistent with the spirit of the community to brag about donations. But a pilot list could just include a few volunteers disclosing their donations (especially more speculative ones) and arguing for their impact. Others could comment, in lieu of organizations submitting other calculations of impact.
This could provide valuable feedback on consideration of even more impactful donations, as well as support 'donation specializations' within EA. Then additional donors could use a donations ITN framework to research their best philanthropic investment options. Also, feedback prior to donating could be valuable.
12) Do you know of Charity Navigator's Impact Unit (previously ImpactMatters) list of top charities within causes? Maybe the organization will not like it, but perhaps one could list top effective donors by multiplying the amount they donated by the unit effectiveness listed (for some charities they list cost per unit of output, such as a tonne of CO2 sequestered).
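The multiplication suggested here is simple to sketch. The charity names and cost-per-unit figures below are invented placeholders, not real Charity Navigator data:

```python
# Hypothetical sketch: rank donors by units of output produced, using
# published cost-per-unit figures. All figures are invented placeholders.

cost_per_unit = {
    # charity: (unit of output, cost in USD per unit)
    "CarbonFund": ("tonne CO2 sequestered", 20.0),
    "BednetOrg": ("bednet delivered", 5.0),
}

donations = [
    ("Alice", "CarbonFund", 100_000),
    ("Bob", "BednetOrg", 50_000),
]

def units_of_output(charity, amount):
    """Convert a dollar amount into (units produced, unit name)."""
    unit, cost = cost_per_unit[charity]
    return amount / cost, unit

for donor, charity, amount in donations:
    units, unit = units_of_output(charity, amount)
    print(f"{donor}: {units:,.0f} x {unit}")
```

Note the caveat this thread raises elsewhere: the resulting units (tonnes of CO2, bednets) are not directly comparable across causes without a further conversion step.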
13) Trying to invite 'external' people by highlighting them on 'our' list can be ineffective. Either they are already effective, and so probably already using at least some EA-related resources or funding programs, so there is no need to invite them to list their profile somewhere; or there would be people who would like to make the list strategic, so that it includes some people who would pay the most attention if included and some if excluded, also depending on the changing relationships among billionaires. So it would either have to be well planned in advance, considering possible scenarios of credible changes to the metrics (though if billionaires just chat about improvements to the impact index under standards of impartiality, it can work), be a bit (tacitly) transparent about this objective of advancing a discourse, or not be done.
14) But the list could also discourage some people. Imagine that you were number 3 or 4 in your local group and you realize you are number 135,000 - maybe you forget malaria, or your local group, before you start realizing that systemic change cannot happen if you just throw nets and money at the problem. So I would maybe, in addition to considering impact relative to one's capacity and counterfactuals and including non-financial impact, conduct a network analysis of decisionmaking. This can further make people comfortable and want to join EA. Anyone can be competitive; they just have to have high positive impact.
15) Will you consider the counterfactual of whether one is preventing others from doing what they are doing better?
16) Would the entire list be more interesting if it were something like a 4D vector space? It could still be possible to comprehend, but slightly challenging, and all those colored images would attract one's attention. I would also make it VR compatible, because it can be more interesting to explore this space of philanthropy with a headset.
17) Before you spend many resources: what is the metric? Is it QALYs? WALYs? Some other metric we have yet to come up with or discover, such as the prevalent spirit of virtuous progress? Are there any conditions (progress, yes, but no suffering; or wellbeing, yes, and no suffering, but no specific drugs; etc.)?
18) Would you be interested in spending an amount on researching the complex interactions of the millions of charities in the world and their donors to see what support increases efficiencies the most, considering donor interactions, the impact of counterfactual spending, and the charity's existing capital that makes the marginal cost of various programs complementary in systemic change different?
19) How do billionaires get into EA? I trust that fewer and fewer will be interested in big numbers and more and more in sincere impact. Then an impact list should primarily facilitate cooperation in making impact but also, of course, highlight some billionaires by its structure.
20) The rankings have to be publicly acceptable. This is also a check on the use of narratives in EA, e.g. to attract donors. For example, if the public is skeptical about AI safety, then sound arguments relating to the effects of AI safety should be made understandable.
21) How would you engage representatives of public groups and networks who have to buy into this list in creating it without introducing partiality biases? This can prevent any perceptions of arrogance by actual learning.
Would the participation of the public discourage large donors, who would then perceive less exclusivity/something special for them developed by special people? Either the representatives of the public have to be special, e.g. in their ability to think about impact, or this list should not be public-public, but more like EA-related-networks public. But then this could lower epistemics in EA, or improve them, depending on the calculation.
22) How are you going to include impact due to market-based innovations, coordination, efficiencies, and policy negotiations?
23) Will you differentiate situations when real income increases and when it does not (redistribution takes place)?
To answer your questions:
A lot of the community's funds, really at least $100m per year in the first year, should go toward creating the environment that would make this idea seem plausible, aiming for sincerity, collaboration, and an impartial and thoughtful definition of metrics/calculations. This is because if you hire, like, 5 people for $100k, pay maybe for articles in newspapers, and popularize this list, and it becomes 'who gives the most bednets', then you end up with a world literally polluted by bednets (or with AI safety issues) and everyone thinking it is quite shameful to be thinking about stuff like systemic change or cooperation toward it.
I would sum the contributions toward a better impact trajectory of EA of all the sub-parts of this project, and update this sum as sub-parts occur and as alternatives appear in EA that can achieve the same objective in a different way. It is an updated difference of two integrals. I would use my weighted WALY, but I am biased.
In short, you should include externalities in an MVP. Also, consider making an actual spreadsheet.
Test it voluntarily with EAs. Do not publish it via Vox. That would deepen the hole of 'more bednets and OpenAI', no thinking, because as presented, people could experience negative emotions, such as fear, powerlessness/threat of submission, or anger, when seeing the list, which reduces critical thinking abilities, even within the environment they would be in due to that list.
Really glad this is happening!
I sympathise with wanting to be less contentious and so ignoring non-donation impact. But I expect most philanthropists to have most of their impact in their day job, so the list really is missing a huge amount without it. And then the incentives we're offering aren't right.
It's also not impossible to do well. Keyword is "consumer surplus estimation".
Either way, good luck
Interesting idea, but I don't think that's easy to do well, and I'm not sure it's merited:
Determining consumer surplus generated is very difficult
... Particularly as we are far from perfect competition in most industries (IMO). Did Bill Gates' Microsoft/Windows add consumer surplus? Did it add more welfare than his charitable work? Hard to say. Obviously the DOJ was claiming he cost the world a tremendous amount of innovation and surplus.
Incentives: Non-donation impact may be accidental
Was Gates trying to make the world better by building Windows? Was it his main aim? I doubt it. But you might say 'why does it matter'? Maybe it matters because the impacts of people trying to make money can be good, bad, or neutral.
Consumer surplus/income and distribution
In principle, all the income and consumer surplus that accrued to the global wealthy and upper-middle class as a result of Bill Gates (etc.) could be passed on to the neediest causes. But it wasn't/won't be. It's hard to know how much diminishing returns to income to put into the welfare function.
None of these seem fatal to me (but then I'm not the one proposing to do the heavy inference).
Yeah, Gates is an exception.
But if we look at the other big billionaires, like Bezos/Amazon, the same issues come up. How much value did Amazon bring? Probably a lot compared to 'no central web commerce site'. But if it weren't for them, presumably something else comparable would have arisen.
This seems really exciting!
I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.
So I think conditional on thinking this is a good idea at all, this may be an unusually good funding opportunity for smaller earning-to-givers. Unfortunately, the flip-side is that fundraising for this may be somewhat harder than for other EA projects.
SoGive has rated about 100 top UK charities with the goal of increasing the amount of effective donations via popularising impact ratings (note this is not SoGive's only project). I can see some similarities with what you are describing.
I encourage you to talk with Sanjay Joshi. I am also volunteering at SoGive, so we can also chat!
Eliot, is anyone working on this yet?
Yes, me and a few others but no one full time yet. I plan to start working roughly full time on it in a month.
I recently posted the work items that I need help with in the discord: https://discord.gg/6GNre8U2ta