Cross-posted from: https://www.jamesnorris.org/effective-altruism-funding-guidelines/

Note: This was written a while back, so some of these issues may have since been addressed. I don't know the latest on what the community is like or what it collectively wants.

- - -

High-level thoughts:

  • Stewardship: Many people in the community view EA funders as stewards of EA money, much as citizens might view government officials as stewards of public resources. Without fair, transparent processes detailing how and why funding is distributed, many EAs may rethink their commitment to advancing EA causes, especially given the extraordinary interconnectedness among funders and fundees, which makes conflicts of interest (e.g., power grabs) and information cascades (e.g., unfounded gossip) extremely likely.
  • Maximizing EA Hours: An hour is an hour is an hour. Money matters primarily because it’s highly fungible with time. EA currently gauges “its” “success” by the number of organizations and employees, the number of dollars allocated, and other high-level metrics. A more useful metric is the total number of "EA" or "high-impact" hours allocated as valuably as possible, by altruists and non-altruists alike (including those outside the core EA community). Funding ought to maximize the number of hours people can dedicate to their highest-value work (e.g., launching or growing high-impact organizations), to working in less impactful organizations while transferring their salary to others doing higher-impact work (e.g., earning to give), or to self-development so they can then dedicate more of their discretionary resources to high-impact areas more effectively (e.g., strategic life optimization). EA funders currently seem to reject this framing and focus primarily on the value of their own hours rather than the hours of others. This seems to be “missing the forest for the trees”.
  • Collaborative Feedback: Providing no feedback to rejected applicants means many thousands of hours of EA time may be wasted. Many projects that fail to get funded should be closed or improved in ways the applicants likely don’t yet know how to carry out; we should help those applicants if we’re all to succeed as a community. Generally speaking, the feedback should come from the evaluator (or a dedicated third-party service). If an evaluator doesn’t have the time for this, and doesn’t consider the net savings of potentially thousands or tens of thousands of EA hours, plus the possibility of preventing unexpected catastrophes from misguided applicants, worth <5 minutes of their time, they may be valuing their time in a very non-standard way. As a heuristic, if an applicant spends >500 hours building their organization or initiative, an evaluator ought to spend 1-60 minutes providing feedback (for all reasonable applications, and only where the evaluators have meaningful insights to contribute). This feedback time should be pre-allocated into the evaluator’s standard time budget per applicant. In other words, less money should go to applicants themselves and more to evaluators.
  • Evaluator Evaluations: In many cases, it is not the applicant’s proposal that is flawed; it is the evaluator’s evaluation, due to lack of skill, lack of care, or conflicts of interest. To partly mitigate this, the evaluator’s colleagues ought to independently rate the same applicant. Each evaluator’s inter-rater reliability scores ought to be made public for most $1M+ allocations, as well as their track record on most $1M+ allocations with 1-, 5-, and 10-year follow-ups (a minimal sketch of such an inter-rater calculation appears after this list). The same would often, but not always, apply to “contended” evaluations; some of those would likely need to remain private. Providing feedback to applicants as suggested above also helps ensure the evaluator improves their skill and/or ethics. If an evaluator has a history of poorly reasoned explanations, or regularly disagrees with their fellow evaluators, that information ought to be public. Having highly divergent evaluators might be a good thing, but even so, their track record ought to be public so the community can come to its own conclusions.
    • Public Evaluation Example: “John Doe granted $1.5M to Jane Doe Inc. to do X for Y reasons with Z confidence that it would be deemed a reasonable grant in 10 years.”
    • Public Review Example: “Jane Doe Inc. is still operating at Y10 and is judged by our current evaluators in a shallow investigation as plausibly net positive.”
  • Status Quo Bias: In general, EA seems to have ossified around relatively conventional ways of thinking and operating. “EA gospel” has become a thing and a culture of status maximizing has become entrenched. Some of this is unavoidable, but much isn’t. The flow of capital is plausibly the single biggest contributor to EA’s status quo and culture. EA has become heavily allergic to innovation. 
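
As a concrete illustration of the inter-rater reliability idea above, here is a minimal sketch (in Python; the evaluator names, applications, and three-point rating scale are all hypothetical) of how a funder might compute pairwise agreement between evaluators and flag divergent ones for public reporting:

```python
# Minimal sketch (not from the original post): pairwise Cohen's kappa between
# evaluators who rated the same applications. Names, applications, and the
# three-point scale ("fund" / "borderline" / "reject") are hypothetical.
from collections import Counter
from itertools import combinations

ratings = {
    "evaluator_a": {"app1": "fund", "app2": "reject", "app3": "borderline", "app4": "fund"},
    "evaluator_b": {"app1": "fund", "app2": "reject", "app3": "reject", "app4": "fund"},
    "evaluator_c": {"app1": "reject", "app2": "reject", "app3": "reject", "app4": "borderline"},
}

def cohens_kappa(r1: dict, r2: dict) -> float:
    """Agreement between two raters on shared items, corrected for chance agreement."""
    shared = sorted(set(r1) & set(r2))
    n = len(shared)
    observed = sum(r1[a] == r2[a] for a in shared) / n
    # chance agreement from each rater's marginal rating frequencies
    c1 = Counter(r1[a] for a in shared)
    c2 = Counter(r2[a] for a in shared)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

for a, b in combinations(ratings, 2):
    kappa = cohens_kappa(ratings[a], ratings[b])
    note = "  <- divergent; consider publishing track record" if kappa < 0.4 else ""
    print(f"{a} vs {b}: kappa = {kappa:.2f}{note}")
```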

There are many different arrangements of funders in and around the EA ecosystem, plausibly requiring different norms for each. Here is a non-comprehensive taxonomy: 

  • Ownership: community “owned” (e.g., EA Funds), organizationally owned (e.g., shareholders, Boards of Directors), individually owned
  • Size: small organizations, moderate-sized organizations, large organizations
  • Governing Role: high level of governing responsibility, moderate level of governing responsibility, no governing responsibility
  • Capital Type: donations/grants, investments, loans, and other arrangements
  • Allocation Entity: direct, indirect (e.g., regrantors), crowdfunded
  • EA Involvement: active involvement in the effective altruism community, moderate involvement, no involvement

Here are a few ideas to consider and potentially experiment with in some parts of the EA funding ecosystem:

  • Tiered Application System: Funders could consider a four-tiered application system to reduce applicant friction:
    • Round 1: Ask applicants to submit a one-pager and/or business model canvas for their project. Applicants generally should have already created those documents in their initial ideation phase, before they consider fundraising. The funder could also ask the applicant to share any extenuating circumstances the funder ought to take into consideration with their application (e.g., potential conflicts of interest). Within 72 hours, funders could send promising applicants an automatic email letting them know they will continue to the next stage. Applicants who are less promising might also be given the option to continue, but with the generic response, “Based on a very cursory review, we don’t know if this proposal fits our funding criteria. That said, you’re welcome to continue to the next stage if you’d like. These entrepreneurial and writing coaches are available to help you craft application materials, sometimes at no cost to you. If you’d like to work on your application more before we review it deeply, please click this button to delay the process for 14 days. Resubmit your updated materials anytime before then. You will not be penalized at all for choosing to resubmit.” Applicant time: 1-20 hours; funder time: <1 hour.
    • Round 2: Ask applicants your 3-5 most decision-relevant questions in a standardized application. One question should ask for disclosure of conflicts of interest. Even after submission, the application should remain editable by the applicant, although the initial version should also be sent to the funder. Suggested turnaround time for applicant and evaluator: 7 days. Applicant time: 1-20 hours; funder time: 1-5 hours.
    • Round 3: If needed, ask applicants any other necessary questions or request a full proposal. Otherwise, skip this step. Suggested turnaround time for applicant and evaluator: 7 days. Applicant time: 1-20 hours; funder time: 1-10 hours.
    • Round 4: Have an open-ended verbal conversation on the project’s viability, strategic next steps, alternative funding channels, and any remaining open questions. Perform a final “gut” check on the decision to allocate or not. Suggested turnaround time for scheduling the call between applicant and evaluator: 3 days. Applicant time: 1 hour; funder time: 1 hour.
    • Confirmation: Run approved applicants through due diligence, legal review, and other internal processes to confirm the decision to allocate, then send the money. Suggested turnaround time: <4 weeks.
  • Standardized Application Forms: Funders could use a standardized application form, with different lengths for different purposes. Some have tried to standardize forms, but even subtle differences in language and length mean applicants usually spend hours re-editing their materials.
  • Example Applications: Two sample successful and two sample unsuccessful applications could be included on the funder’s website next to the application form, with the reasons for the acceptances and rejections included. These could be anonymized. At least one pair could be the before (rejected) and after (accepted) versions of the same project’s application.
  • Line By Line Feedback: Applications could be returned to applicants with anonymous positive and negative feedback on each line or section from the funders. A simple Google Doc with 1-3 comments would be exceptionally helpful.
  • Holistic Feedback: For many rejected applicants, funders could give written or verbal feedback on why the project wasn’t funded and the top three things the applicant could do to improve (a) their odds of acceptance next time and/or (b) the project in general. In either case, the reviewer could verbally explain their thinking very quickly and have it transcribed, or send the raw recording after it is masked by an anonymizing speech synthesizer. A clear disclaimer saying all feedback is rough, incomplete, and perhaps completely off the mark could be sent alongside it. Feedback could also, if the evaluator prefers, always be framed as questions for the applicant to consider rather than declarative statements. To receive feedback, the rejected applicant could be required to tick a box waiving any right to use it in any legal context; note this might amount to an honor system, given differences in enforceability across jurisdictions. Finally, as a heuristic, the time an evaluator timeboxes for providing feedback might be <5 minutes in most cases.
    • For poor submissions, the feedback could be a simple generic email with helpful broad suggestions.
    • For good submissions, the feedback could be a simple generic email with helpful broad suggestions and 30-90 seconds of tailored feedback.
      • Feedback Example #1: “This is the 15th similar proposal that I’ve seen and it doesn’t appear to have a differentiator yet. Perhaps ask 2-3 mentors if they think it’s a tarpit idea or not, then decide whether you’d like to continue working on it. You can apply again in 30 days.” (30 seconds to write)
      • Feedback Example #2: “Hey I’m really sorry, but I just don’t think I get this proposal. I asked two of my colleagues to review it as well, but we’re all a little stuck here. Could you rework it and potentially re-apply in 30 days? See EASE for experts that can help you update your materials.” (25 seconds to write)
    • For outstanding submissions that almost met the funding bar, the feedback could be a simple generic email stating they were very close and might consider applying again when more funds are available. It could also include 30-90 seconds of tailored feedback.
      • Feedback Example #1: “This was a great proposal and I think you could do a lot of good with it. You just missed our funding bar, so feel free to apply again later.” (20 seconds to write)
      • Feedback Example #2: “We’re a little more funding constrained than we expected to be, so we can’t fund this now. But we do expect this to change in 4-6 months. We’ll email you when we have more capital and see if you need support then. Perhaps try [Funder X] in the meantime.” (30 seconds to write)
  • Feedback Open Call: Funders could offer an optional one-hour call, open to all applicants after allocation decisions are made, where applicants can ask questions and receive answers. Funders can decline to answer specific questions if privacy is a key priority for their approach.
  • Application Statistics: Funders could provide application statistics on the primary application page. These could include the average or expected number of applications, the average acceptance rate, the average and median amounts granted, the smallest and largest amounts granted, a ranking of the most common reasons for rejection, and changes in these metrics over time (a minimal sketch of such a summary appears after this list). Note that putting statistics somewhere else, such as the EA Forum, makes them difficult for applicants to find.
  • Private Submissions: Funders could give applicants the option to submit a “for your eyes only” application that is not circulated among the funder’s formal or informal advisors. Ideally the applicant could selectively choose which evaluators they would be comfortable having as reviewers. The funder could acknowledge that this might lower the odds of the submission being successful. There is an enormous range of organizations doing good in the world, but many of them cannot follow the open-ended sharing that many funding processes require; conflicts of interest, IP control, and bad-faith actors among the funders or their network sometimes make this difficult. Allowing applicants to submit privately to only selected evaluators within the funding entity is especially important when a proposed organization would inherently critique people or organizations with links to some, but not necessarily all, of those evaluators.
  • Conflicts of Interest: Funders could declare any potential conflict of interest in their internal database for all applicants, in their public database for all successful applicants, and to every applicant in their acceptance or rejection communications. The internal database would be reviewed by the funding organization’s Board of Directors annually.
  • Kind Encouragement: Funders could emphasize that applicants can re-apply in the next funding round and that many of the best applicants in the past did exactly that (if applicable).
  • Advisory Support: An independent service offering advisory support for the EA funding ecosystem could be made available. 
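
To make the “Application Statistics” idea above concrete, here is a minimal sketch, assuming a hypothetical in-house list of past application records (the field names and sample values are illustrative, not from any real funder), of the summary numbers a funder could publish on its application page:

```python
# Minimal sketch (not from the original post): the summary statistics a funder
# could publish on its application page. Record fields and sample values are
# hypothetical.
from collections import Counter
from statistics import mean, median

applications = [
    {"accepted": True, "amount": 25_000, "rejection_reason": None},
    {"accepted": True, "amount": 150_000, "rejection_reason": None},
    {"accepted": False, "amount": None, "rejection_reason": "no clear theory of change"},
    {"accepted": False, "amount": None, "rejection_reason": "crowded space, no differentiator"},
    {"accepted": False, "amount": None, "rejection_reason": "no clear theory of change"},
]

grants = [a["amount"] for a in applications if a["accepted"]]
rejection_reasons = Counter(
    a["rejection_reason"] for a in applications if not a["accepted"]
)

print(f"Applications received: {len(applications)}")
print(f"Acceptance rate: {len(grants) / len(applications):.0%}")
print(f"Mean / median grant: ${mean(grants):,.0f} / ${median(grants):,.0f}")
print(f"Smallest / largest grant: ${min(grants):,} / ${max(grants):,}")
print("Most common rejection reasons:")
for reason, count in rejection_reasons.most_common(3):
    print(f"  {count}x {reason}")
```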

Potential application rounds:

  • Round 1 – Short Form Application
    • Please submit your Business Model Canvas (1 page) or one-pager (1 page)
    • Please submit your short-term plan (<300 words)
    • If needed, please let us know anything else that might be pertinent for us to review this application fairly (<300 words)
  • Round 2 – Medium Form Application
    • 3-5 decision-relevant additional questions
  • Round 3 – Long Form Application (Optional)
    • 3-10 decision-relevant additional questions
  • Round 4 – Conversation
    • A 20-minute (or longer) call exploring any relevant additional questions, plus a chance for the evaluator to offer strategic advice to the applicant, especially on their next steps
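
The rounds above, together with the applicant/funder time estimates and turnaround targets suggested in the tiered application system, could be encoded as one shared configuration so that forms, emails, and internal dashboards stay in sync. Below is a minimal sketch; the field names and dataclass layout are hypothetical, and the time figures simply restate the estimates suggested earlier.

```python
# Minimal sketch (not from the original post): the four rounds encoded as one
# shared configuration. Field names are hypothetical; time figures follow the
# estimates suggested in the tiered application system above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Round:
    name: str
    materials: str
    applicant_hours: tuple[int, int]  # (min, max) hours of applicant time
    funder_hours: tuple[int, int]     # (min, max) hours of evaluator time
    turnaround_days: int              # suggested evaluator turnaround

ROUNDS = [
    Round("Short Form", "one-pager or Business Model Canvas + short-term plan", (1, 20), (0, 1), 3),
    Round("Medium Form", "3-5 decision-relevant questions", (1, 20), (1, 5), 7),
    Round("Long Form (optional)", "3-10 decision-relevant questions or full proposal", (1, 20), (1, 10), 7),
    Round("Conversation", "20+ minute call and final gut check", (1, 1), (1, 1), 3),
]

# e.g., publish the worst case so applicants can budget their time up front
worst_case_applicant_hours = sum(r.applicant_hours[1] for r in ROUNDS)
worst_case_turnaround_days = sum(r.turnaround_days for r in ROUNDS)
print(f"Worst-case applicant time: {worst_case_applicant_hours} hours")
print(f"Worst-case evaluator turnaround across rounds: {worst_case_turnaround_days} days")
```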
