
(I'm erring strongly on the side of just putting this out there instead of spending days developing it, so please excuse any lack of detail.)

Introduction

I propose an organization that develops common applications by cause area (CACA). 

The existing environment is economically inefficient (as a note for this section, I am talking about funding as a whole, not just EA). Private foundation (PF) application processes are opaque, inequitable, and very costly to the organizations applying for grants. If you haven't seen it, Crappy Funding Practices points out a lot of bad behavior. Another philanthropic movement, Trust-based philanthropy, has also criticized a lot of PF practices (and led to substantial change!). Ultimately, this all leads to economic inefficiency: poor matching between funders and grantees, and application costs that are disproportionately high relative to grant size.

From my research into major US PF grantmaking, funders care a lot about improving the grantmaking process, though I do believe common applications are a difficult sell.

Existing common applications are typically organized by geographic area, and these appear to have had pretty limited success. A geographic common application is helpful for funders that only fund within a certain area, but across cause areas the information requested likely varies quite a bit: a funder of environmental causes and a funder of education have different needs, even if they both fund within a particular city. In other words, I believe the information needs of funders within a cause area are more similar than those of funders within a geographic area, and thus more amenable to a cause-area-specific common application.

Theory of Change

I believe widespread adoption of CACA would lead to:

  1. Better matching of funder grants to grantees.
  2. Reduced internal time that organizations spend applying for grants.
  3. Additionally, and importantly for EA, I believe CACA can put additional emphasis on outcome evaluation and evidence of effectiveness. Each cause area would have its own set of effectiveness metrics to choose from, developed by both funders and organizations. If non-EA funders start using common applications that include well-designed outcome/effectiveness measures, I believe they will make more effective (and more EA-aligned) grant decisions.

And I think the cost to develop and maintain would be fairly small; a small team could do this.

What would it actually look like?

It could be as simple as developing the common application and just putting it out there, but I also believe there is room for a database where each organization can manage its own common application and apply to funders through it.

Base Application:

  • The ask
  • Org information
  • Financial information (could be auto-filled with Form 990 data)

Cause-area-specific information:

  • Some qualitative information about the work done and how it aligns with the cause area
  • Outcome/effectiveness information
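
As a very rough sketch of this two-layer structure (a shared base plus cause-area-specific sections), here is what the data model might look like in Python. Every name here (BaseApplication, CauseAreaSection, fetch_990_financials, and so on) is hypothetical, and the Form 990 auto-fill is left as a stub rather than tied to any particular data source.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Financials:
    """Basic financial summary; could be auto-filled from public Form 990 data."""
    total_revenue: float
    total_expenses: float
    net_assets: float


@dataclass
class BaseApplication:
    """The base layer every funder sees, regardless of cause area."""
    org_name: str
    ein: str                    # employer identification number
    mission: str
    ask_amount: float           # "the ask"
    ask_description: str
    financials: Financials


@dataclass
class CauseAreaSection:
    """The cause-area-specific layer: qualitative fit plus outcome metrics."""
    cause_area: str                          # e.g. "animal welfare"
    alignment_narrative: str                 # how the work aligns with the cause area
    outcome_metrics: dict[str, float] = field(default_factory=dict)


@dataclass
class CommonApplication:
    """One record per organization: a single base plus one section per cause area."""
    base: BaseApplication
    sections: list[CauseAreaSection] = field(default_factory=list)


def fetch_990_financials(ein: str) -> Financials:
    """Hypothetical stub: in practice this could be backed by a public 990 dataset."""
    raise NotImplementedError
```

The idea is that an organization maintains one base record and attaches one cause-area section per area it works in; a funder only ever sees the base layer plus the section for its cause area.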

Why will this not work?

Overall, I think this is a long shot for a few reasons:

  1. Funders use obnoxious application processes as a screen. A common application may lead to more applicants for a foundation, which would increase the internal time needed to make grant decisions, limiting the impact. One compromise could be allowing each funder to add a limited set of additional questions, though this would add back some of the time organizations spend completing applications.
  2. A lot of funders do restrict their giving to a geographic area, so it would be annoying to receive applications outside your geographic scope. However, it would be quite easy to make those restrictions known and screen out ineligible applications automatically (a rough sketch of this, and of the question cap from point 1, follows this list).
  3. A critical mass of both funders and organizations using the common application is needed for it to be worthwhile for both groups. I suspect this is the main reason that existing common applications seem to have had limited success.
  4. I think having this whole idea be EA-coded would harm adoption on the funder side, so distancing it from EA would probably help.
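
To make points 1 and 2 concrete, here is a rough sketch of how a capped set of funder-specific questions and automatic geographic/cause-area screening might work. Everything here (FunderProfile, eligible_states, the cap of five questions) is a hypothetical illustration, not a proposal for specific rules.

```python
from __future__ import annotations

from dataclasses import dataclass, field

MAX_EXTRA_QUESTIONS = 5  # hypothetical cap on funder-specific add-on questions (point 1)


@dataclass
class FunderProfile:
    name: str
    cause_areas: set[str]
    eligible_states: set[str] | None = None    # None means no geographic restriction
    extra_questions: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Enforce the compromise from point 1: a small, bounded set of add-on questions.
        if len(self.extra_questions) > MAX_EXTRA_QUESTIONS:
            raise ValueError(f"At most {MAX_EXTRA_QUESTIONS} extra questions allowed")


def is_eligible(funder: FunderProfile, org_state: str, org_cause_areas: set[str]) -> bool:
    """Screen out applications the funder would never consider (point 2)."""
    if funder.eligible_states is not None and org_state not in funder.eligible_states:
        return False
    return bool(funder.cause_areas & org_cause_areas)
```

In practice the screening criteria would simply be whatever restrictions each funder publishes alongside its profile.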

Next steps:

I'm a researcher, so honestly this whole thing might just turn into a normative research paper where I examine existing common applications, where they go wrong, and make the case for CACA.

However, if people believe this could have legs...

  1. I'm interested in collaborators (especially folks with experience as a funder or grantwriter).
  2. I'd collect information from existing common applications to understand what has and hasn't worked for them.
  3. I'd talk to the folks at trust-based philanthropy about their perspective.
  4. I'd pick one cause area to focus on to start: for example, developing a solid animal welfare common application (by talking to funders and organizations), then advertising it to animal welfare funders (including those outside EA) and animal welfare organizations.

Comments



Executive summary: Creating standardized grant applications by cause area (CACA) could improve philanthropic efficiency and effectiveness by reducing application costs, improving funder-grantee matching, and encouraging evidence-based decision making.

Key points:

  1. Current philanthropic funding practices are inefficient and costly, with opaque processes and high application burdens relative to grant sizes.
  2. Cause area-specific common applications may be more successful than existing geographic-based ones, as funders within causes have more similar information needs.
  3. Benefits would include reduced application costs, better matching, and increased emphasis on outcome evaluation metrics specific to each cause area.
  4. Key challenges (cruxes) include: funders using difficult applications as intentional screens, need for critical mass adoption, and geographic restrictions by funders.
  5. Proposed next steps are to research existing common applications, consult with trust-based philanthropy experts, and pilot with one cause area (e.g., animal welfare).

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
