
Summary

This article asks whether EA grantmakers should publicly disclose the probability of success (p(success)) of their funded projects. It discusses potential benefits, such as improved community norms and accountability, as well as potential drawbacks and implementation considerations.

Introduction: 

Recently I’ve been thinking about the possibility of EA grantmakers publicly sharing the p(success) of their funded projects. This article is intended to start a discussion by exploring the potential benefits and drawbacks of this approach, and is by no means exhaustive or super detailed. 

This is an idea I had this week and initial conversations with people in Trajan House about the idea were interesting and positive enough that I thought it’d be worth opening up the conversation. I have no experience as a grantmaker, only as a grant applicant, and so I’m sure that I have a very poor understanding of how grantmaking actually works. Therefore I’m sure there are reasons I haven’t heard or thought about as to why this suggestion might have already been rejected, or wouldn’t be a good idea - which I’d like to read.  I tried to get some grantmakers to take a look/comment on a pre-post draft of this, but didn’t have much luck. In any case, thanks to the two people that did give brief feedback.

Suggested Implementation: 

I’d suggest that grantmakers publish a grantee-independent p(success) alongside the public grant disclosures that some of them already publish.[1][2] That is to say, I assume grantmakers are able to look at a project proposal independently of the grant applicant and assign a p(success). Once they’ve done that, they might then factor in their subjective belief in the aptitude/competence/track record of the applicant, adjust the p(success|grantee) up or down, and keep that private. To avoid worries about how this disclosure might affect grantees’ mentality regarding their proposed project pre-execution, the p(success) could be shared publicly only after the grant period is over.
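To make the split concrete, here is a minimal sketch of what such a record could look like. All names and numbers are hypothetical (nothing here reflects any real grantmaker’s process): the grantee-independent estimate is published only once the grant period ends, while the applicant-adjusted estimate stays private.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GrantAssessment:
    """Hypothetical record splitting the public and private estimates."""
    project: str
    p_success: float                 # grantee-independent estimate (published later)
    p_success_given_grantee: float   # adjusted for the applicant; stays private
    grant_end: date

    def public_disclosure(self, today: date) -> dict:
        """Only release p(success) once the grant period is over."""
        if today < self.grant_end:
            return {"project": self.project, "p_success": None}
        return {"project": self.project, "p_success": self.p_success}

# Hypothetical example: nothing is disclosed until after the grant period.
ga = GrantAssessment("Example project", 0.4, 0.55, date(2025, 6, 30))
print(ga.public_disclosure(date(2025, 1, 1)))  # {'project': 'Example project', 'p_success': None}
print(ga.public_disclosure(date(2026, 1, 1)))  # {'project': 'Example project', 'p_success': 0.4}
```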

Why this might be a good idea:

  • Improved community norms around failure: Although the EA community uses expected value for decision-making, success and failure still play a significant role in shaping people's reputation in the community.[3] Being transparent about the p(success) could help us better appreciate and acknowledge both ambitious projects with lower probabilities of success and those who work on them. It might also make it easier for people to be more open about their failures, and discuss ways to avoid similar pitfalls, if they’re able to point to the failed project not having a high p(success) in the first place.
  • Career stability: Sharing p(success) can also assist community members in obtaining funding or employment even after experiencing multiple project failures, as it helps clarify that the actual probability of success may have been lower than perceived. In a future scenario with multiple independent EA funders, if a grant proposal is rejected by one grantmaker, another grantmaker can fairly evaluate it with more information to properly assess the applicant's track record, without being influenced by the previous grant assessor's decision.
  • Greater grantmaker accountability and grant benchmarking: Making p(success) public could hold grantmakers accountable for their track records, particularly if they consistently overestimate or underestimate the success of certain classes of projects or grantees.
    • Publicizing p(success) can facilitate learning and benchmarking among grantmakers, allowing them to compare their success rates and identify areas for improvement. I can see that this could already be happening/ could happen in private channels, but I think this happening in the open would strengthen trust that the community has in grantmakers’ independence.  
    • Being transparent about p(success) might strengthen trust between grantmakers, grantees, and the wider EA community, by demonstrating their commitment to honesty and accountability, which can enhance their credibility and reputation.
  • Enhanced EA integrity and public reputation: Demonstrating transparency in how we use expected value when making individual grant decisions could strengthen the credibility of the EA community and convince other grantmakers to adopt similar decision-making frameworks.[4]
    • This could also lead to the EA community being more appealing to some external funders and potential partners. By demonstrating a data-driven and transparent approach to decision-making, we may attract additional resources and support from organizations and individuals who share similar values and goals.
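The accountability and benchmarking points above could be made concrete: if p(success) estimates were published alongside eventual outcomes, anyone could check a grantmaker’s calibration with a standard measure such as the Brier score. A minimal sketch, using hypothetical forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical published p(success) estimates vs. realized outcomes
forecasts = [0.8, 0.3, 0.6, 0.1]
outcomes = [1, 0, 1, 0]
print(brier_score(forecasts, outcomes))  # 0.075
```

A single number like this compresses a lot, but tracked over time and by project class it would support exactly the kind of public benchmarking described above.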

Potential reasons for caution:

  • Discouraging ambitious projects: Publicizing p(success) could deter people from applying for funding for ambitious projects with lower probabilities of success.
    • Because reputation is still built on success/failure in the community
  • People might not want to ‘waste their time’ (given that time is often our most valuable resource) on projects that they think won’t help them further their careers or have an impact
  • Lower morale: Applicants might feel demotivated or less excited if the disclosed p(success) is lower than they initially anticipated, potentially affecting their commitment to the project.
  • Distorted incentives: Public p(success) information might lead to strange social dynamics, where individuals may prefer to work on projects with very low or very high probabilities of success for the sake of prestige.
  • Complexity in separating grantee-independent p(success) from p(success|grantee): It might be challenging, a waste of time, or even counterproductive for grantmakers to separate the two probabilities, as the grantee's capabilities could be critical in determining the project's overall success.


 

What do you think? 

  1. ^

     This used to be on FTXFF’s website but it no longer exists.

  2. ^

Here are GiveWell’s - although I don’t know whether my p(success) suggestion would be as important for them.

  3. ^

i.e. People in the community who seem to have a good reputation/high status are people who seem to have a track record of success, and people with a track record of failure don’t get much fame or recognition, even though we don’t know anything about how ambitious/low-p either person's projects were.

  4. ^

     i.e. it shows that we put our money where our mouth is when it comes to how we make individual grant decisions, not just which causes we choose or how we choose causes. 


Comments (3)



I think most grant outcomes are not a binary "success" or "failure" but have a lot more granularity than that, so it would probably need to be a distribution of outcomes.

I think there are also often benefits to projects like value of information (it's useful for someone to try, even if the project doesn't seem like it will be "successful" as written), upskilling, and the like, which I expect many grantmakers try to evaluate and plan around (especially if you're thinking about things like small grants or funding a one-off project).

I think this would be good. One thing is that in many situations, if you can write p(success) in a meaningful way, then you should consider running a competition instead of grantmaking. This won't work in every situation, but I find it the most fair and transparent approach when possible.
