As of 17 February 2024, the mean length of the main text of the write-ups of Open Philanthropy’s largest grants in each of its 30 focus areas was only 2.50 paragraphs, whereas the mean grant amount was 14.2 M 2022-$[1]. For 23 of the 30 largest grants, the main text was just 1 paragraph. The calculations and information about the grants are in this Sheet.
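For readers who want to reproduce the averages, here is a minimal sketch of the calculation, assuming a CSV export of the Sheet with hypothetical column names `paragraphs` and `amount_2022_usd` (the actual Sheet may be laid out differently):

```python
import csv

# Hypothetical CSV export of the Sheet: one row per focus area's largest grant.
with open("largest_grants.csv", newline="") as f:
    rows = list(csv.DictReader(f))

paragraphs = [float(r["paragraphs"]) for r in rows]
amounts = [float(r["amount_2022_usd"]) for r in rows]

mean_paragraphs = sum(paragraphs) / len(paragraphs)
mean_amount = sum(amounts) / len(amounts)
one_paragraph = sum(1 for p in paragraphs if p == 1)

print(f"Mean main-text length: {mean_paragraphs:.2f} paragraphs")
print(f"Mean grant amount: {mean_amount / 1e6:.1f} M 2022-$")
print(f"Grants with a 1-paragraph main text: {one_paragraph} of {len(rows)}")
```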
Should the main text of the write-ups of Open Philanthropy’s large grants (e.g. at least 1 M$) be longer than 1 paragraph? I think greater reasoning transparency would be good, so I would like it if Open Philanthropy had longer write-ups.
For comparison, among other grantmakers aligned with effective altruism[2]:
- Charity Entrepreneurship (CE) produces an in-depth report for each organisation it incubates (see CE’s research).
- Effective Altruism Funds has write-ups of 1 sentence for the vast majority of the grants of its 4 funds.
- Founders Pledge has write-ups of 1 sentence for the vast majority of the grants of its 4 funds.
- Future of Life Institute’s grants have write-ups roughly as long as Open Philanthropy’s.
- Longview Philanthropy’s grants have write-ups roughly as long as Open Philanthropy’s.
- Manifund's grants have write-ups (comments) of a few paragraphs.
- Survival and Flourishing Fund has write-ups of a few words for the vast majority of its grants.
I encourage all of the above except for CE to have longer write-ups. I focussed on Open Philanthropy in this post given that it accounts for the vast majority of the grants aligned with effective altruism.
Some context:
- In 2016, Holden Karnofsky posted about how Open Philanthropy was thinking about openness and information sharing.
- There was a discussion in early 2023 about whether Open Philanthropy should share a ranking of grants it produced then.
[1] Open Philanthropy has 17 broad focus areas: 9 under global health and wellbeing, 4 under global catastrophic risks (GCRs), and 4 under other areas. However, its grants are associated with 30 areas.
I define the main text as everything besides headings, excluding paragraphs of the following types:
- “Grant investigator: [name]”.
- “This page was reviewed but not written by the grant investigator. [Organisation] staff also reviewed this page prior to publication”.
- “This follows our [dates with links to previous grants to the organisation] support, and falls within our focus area of [area]”.
- “The grant amount was updated in [date(s)]”.
- “See [organisation's] page on this grant for more details”.
- “This grant is part of our Regranting Challenge. See the Regranting Challenge website for more details on this grant”.
- “This is a discretionary grant”.
I count lists of bullets as 1 paragraph.
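To illustrate the counting rule, here is a minimal sketch in Python. The `BOILERPLATE_PATTERNS` list and the `write_up` input format are hypothetical stand-ins for the checks I did by hand, not the actual procedure used for the Sheet:

```python
import re

# Hypothetical stand-ins for the excluded paragraph types listed above.
BOILERPLATE_PATTERNS = [
    r"Grant investigator:",
    r"This page was reviewed but not written",
    r"This follows our .* support",
    r"The grant amount was updated",
    r"See .* page on this grant",
    r"This grant is part of our Regranting Challenge",
    r"This is a discretionary grant",
]

def count_main_text_paragraphs(write_up: str) -> int:
    """Count main-text paragraphs in a grant write-up.

    Blocks are separated by blank lines, so a bulleted list whose items
    are separated by single newlines forms one block, matching the rule
    that a list of bullets counts as 1 paragraph.
    """
    blocks = [b.strip() for b in write_up.split("\n\n") if b.strip()]
    count = 0
    for block in blocks:
        if block.startswith("#"):  # Markdown-style heading, not main text
            continue
        if any(re.match(pattern, block) for pattern in BOILERPLATE_PATTERNS):
            continue
        count += 1
    return count
```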
[2] The grantmakers are ordered alphabetically.
I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.
If open phil actually were to start publishing the internal analyses behind each grant, I would bet you at good odds that the following scenario would play out on the EA Forum:
Several things would be true in that hypothetical scenario:
- Criticism shouldn’t have to warrant a response if responding takes time away from more important work.
- The internal analyses from open phil I’ve been privileged to see were pretty good.
- They were also made by humans, who make errors all the time.
In my ideal world, every one of these analyses would be open to the public. As with open-source software, people would be able to contribute to every analysis: fixing bugs, adding new insights, and updating old analyses as new evidence comes out.
But, as with an open-source project, there has to be an understanding that no repository is ever going to be bug-free or have every feature.
If open phil shared all their analyses and nobody was able to discover important omissions or errors, my main conclusion would be that they are spending far too much time on each analysis.
Some EA organizations are held to impossibly high standards. Whenever somebody points this out, a common response is: “But the EA community should be held to a higher standard!”. I’m not so sure! The bar is where it’s at because it takes significant effort to raise it. EA organizations are subject to the same constraints the rest of the world is subject to.
More openness requires a lowering of expectations. We should strive for a culture that is high in criticism, but low in judgement.
What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?