Introduction

(Written with extensive AI assistance, including drafting)

Effective altruism has moved billions of dollars toward evidence-based interventions and saved lives. The analytical machinery the community has built (cost-effectiveness models, uncertainty quantification, comparative frameworks) represents a real contribution to how philanthropy works. These achievements are what makes the problem I'm describing so costly.

EA assembled one of the most capable, motivated, and intellectually diverse groups of people on earth. Tens of thousands of people with backgrounds spanning medicine, technology, policy, behavioral science, philosophy, and more, drawn together by a shared commitment to figuring out how to do the most good.

Then it built a system that asks for their labor and money while largely ignoring what they know.

EA is exceptionally good at one epistemic task: given a cause area, identifying the most cost-effective interventions within it. But there's a different task, discovering entirely new categories of opportunity, that the community is structurally resistant to performing.[^1] The primary reason is that EA treats its members as resources to be deployed, not as sources of insight to be listened to.

This post argues that EA's most important epistemic resource is the distributed knowledge and diverse perspectives of its members, and that its infrastructure systematically wastes this resource. In EA's own terms, the community has locked itself into pure exploitation of known cause areas while investing almost nothing in exploration. This is the exact failure mode EA would identify instantly in any other system.
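The exploration/exploitation framing can be made concrete with a toy multi-armed-bandit simulation. Everything in it (the arm counts, the payoffs, the 5% exploration rate) is an invented illustration, not a model of EA's actual funding landscape; the point is only structural: a strategy that exclusively exploits its three best-known options can never find the better arms it has never sampled.

```python
import random

def simulate(explore_rate: float, rounds: int = 10_000, seed: int = 0) -> float:
    """Average per-round payoff for a community that splits effort between
    exploiting known cause areas and exploring unknown ones.
    All payoff numbers are invented placeholders."""
    rng = random.Random(seed)
    known = [1.0, 0.9, 0.8]                      # three established cause areas
    # Fifty unexplored arms: most are mediocre, a few are transformative.
    hidden = [3.0 if i < 3 else 0.2 for i in range(50)]
    discovered: dict[int, float] = {}            # hidden arms we've actually tried
    total = 0.0
    for _ in range(rounds):
        if rng.random() < explore_rate:
            arm = rng.randrange(len(hidden))     # point a dish at a new star
            payoff = hidden[arm]
            discovered[arm] = payoff             # noiseless observation, for simplicity
        else:
            # Exploit the best option we currently know about.
            payoff = max([max(known)] + list(discovered.values()))
        total += payoff
    return total / rounds

pure_exploitation = simulate(explore_rate=0.0)   # exactly 1.0: never finds the 3.0 arms
light_exploration = simulate(explore_rate=0.05)  # pays a small tax, discovers a better arm
print(pure_exploitation, light_exploration)
```

Even a small exploration budget dominates in this setup, because a single discovered high-payoff arm raises every subsequent exploitation round. That is the expected-value logic behind tolerating high miss rates, applied to the community's own search process.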

[^1]: I've spent years developing research in an area outside established EA cause areas, so I've experienced the dynamics described here firsthand. I've kept specifics out because the structural argument is independent of any particular case.

The Telescope Array

Imagine a massive telescope array. Thousands of dishes, each with a slightly different position, capable of seeing parts of the sky no other dish can see. The power comes from distributed coverage. Each dish contributes something unique.

Now imagine pointing every dish at the same three stars.

A doctor notices regulatory bottlenecks in drug distribution that policy people never see. An engineer working in manufacturing understands supply chain dynamics invisible to researchers. A social worker in a mid-sized city observes patterns in homelessness that don't show up in national datasets. Each person, through their unique combination of education, professional experience, and life circumstances, has visibility into some corner of the world that no one else in the community can see.

In aggregate, the EA community has an extraordinary distributed sensing network, capable in principle of noticing opportunities that no central planner, however brilliant, could identify from their position. But the system doesn't ask its members what they see. It tells them where to look: here are the approved cause areas, here is where you can be useful. The entire distributed intelligence of the community is funneled into predetermined priorities rather than being harnessed for discovery.

The system asks: Where should you work? What should you donate to? What should you think about? How can you be useful?

It never asks: What do you see that we don't? What did you notice in your unusual combination of professional experience and domain knowledge? What opportunities are visible from where you stand?

How Ideas Actually Enter EA Priorities

To understand why this matters, trace the actual pathways through which new ideas get incorporated into EA's priorities.

Path 1: Top-down adoption by major funders. Someone at a major funding institution becomes personally interested in a topic. They commission research, fund organizations, and the community follows the resources. The "discovery" happens in the minds of a handful of people with enormous capital allocation power.

Path 2: Prestigious external validation. An idea gains credibility in adjacent elite circles (academia, Silicon Valley, the policy world). EA notices once it's already been vetted by people the community considers credible. EA doesn't find these ideas. It adopts them after they've been legitimized elsewhere.

Path 3: Internal iteration on existing cause areas. Someone proposes a new intervention or sub-cause within an already-recognized area. A more cost-effective global health intervention. A refinement of AI governance strategy. This is what the Forum, EA Funds, and existing evaluation machinery are good at. But it's optimization, not discovery.

Path 4: Respected insiders pivot. Someone with existing credibility uses their social capital to draw attention to something new. This works occasionally but is bottlenecked by the fact that insiders have strong incentives not to pivot away from what's already working for them professionally. Slow-burn community consensus on topics like wild animal suffering also fits here; it still depends on insider champions and existing legibility to gain traction.

Path 5: The path with no throughput. Someone outside the existing network, maybe someone who came to EA with unique domain knowledge and saw an opportunity others couldn't, posts on the Forum. Applies to EA Funds. Tries to get meetings. The Forum post gets modest engagement from people without capital allocation power. The Funds evaluate against criteria shaped by existing cause areas and probably pass. The person has no way to reach actual decision-makers. The idea doesn't get rejected. It never gets seriously considered by anyone with the power to act on it.

Notice what Paths 1 through 4 have in common: they all require existing capital, existing prestige, or existing proximity to power. There is no reliable, high-bandwidth pathway that runs on the merits of the idea alone.

EA has intake mechanisms for Path 5: the Forum, EA Funds, idea contests. What it lacks is throughput. There is no standing process that takes rough, outsider-origin ideas, develops them, and routes them to people who can allocate real attention and money. People see the door and assume it leads somewhere. It mostly leads to a waiting room nobody checks.

This isn't speculation. The community has diagnosed the bottleneck itself. A 2019 Forum post argued that EA is "vetting-constrained," limited not by a lack of ideas but by the ability to evaluate them. Grantmakers are extremely busy, often part-time, and fall back on prestige because they lack resources to properly evaluate unfamiliar proposals. A 2024 post described the lack of feedback from grantmakers as "endemic," making the official channel feel like a dead end. One applicant in the 2019 discussion noted that they posted anonymously because they "didn't want to jeopardize the projects I'm associated with by criticizing those that might fund it." When critiquing the intake process itself carries career risk, the process is not just bandwidth-constrained. It's structurally insulated from correction.

The Exceptions That Prove the Rule

Someone will point out that EA has adopted new cause areas. Fair. But trace how they entered, and the pattern reinforces the critique.

AI safety is the obvious example. It's worth asking: if Eliezer Yudkowsky were posting his first fragile, unrigorous essays about AI risk on today's EA Forum (no Oxford affiliation, no Berkeley appointment, just an autodidact with an unfamiliar framework making conclusions that sounded absurd to most people) what would happen? He'd be aggressively red-teamed. Told his expected-value calculations lacked a tractable theory of change. Advised to get a PhD for "career capital." Rejected by EA Funds. The community's epistemic machinery would grind him up and feel virtuous doing it.

The reason AI safety is an EA priority today is not that Path 5 worked. The decisive steps in prioritization came from people with enormous institutional prestige (Bostrom at Oxford, Russell at Berkeley) and major funders who became personally convinced and directed hundreds of millions toward it. That's Path 1. Informal spaces with that early incubation character still exist (LessWrong, parts of Twitter), but EA's formal evaluation infrastructure (Funds, career advising, the Forum's status hierarchy) has no room for that mode, and there's no bridge from informal speculation to institutional action.

Abundance is even more telling. Coefficient Giving's announcement of its $120M Abundance and Growth Fund explicitly cites being "encouraged by the recent rise of the Abundance and Progress Studies movements," movements that originated entirely outside EA, in the tech and policy world. EA didn't discover this. EA imported it after it had already achieved critical mass and prestige in adjacent elite circles. That's Path 2.

The YIMBY movement illustrates the mechanism from the other direction. Multiple EA-adjacent people recognized the opportunity in housing reform. Some posted about it on the Forum. But the actual movement happened entirely outside EA's infrastructure, until Alexander Berger at Open Philanthropy funded Brian Hanlon to go full-time on YIMBY organizing. The insight existed within the community's own distributed sensing network. The system had no mechanism to act on it. It only became an EA priority because a funder exercised Path 1 judgment.

These aren't examples of a discovery engine working. They're examples of funders occasionally changing their minds (Path 1) or importing ideas already validated elsewhere (Path 2).

Berger himself, in a recent SSIR essay, wrote something revealing: "Your programs become the streetlight under which you look for the keys of your impact." He also noted that some of their largest regrets involved not investing in neglected areas sooner. When the head of the organization that effectively sets EA's funding priorities is publicly articulating the streetlight problem and acknowledging costly false negatives, that's the system telling you it knows it has a discovery problem and doesn't have the structural answer.

Why Path 5 Fails

The failure isn't one thing. It's several reinforcing dynamics.

The epistemic culture is almost entirely negative. EA has developed extraordinary skill at not being fooled: skepticism, scrutiny, demanding evidence, finding flaws. These are about rejecting false things, and EA is good at this. But there's a corresponding skill that's almost entirely underdeveloped: finding true and important things. Curiosity. Openness. Constructive engagement with unfamiliar ideas. Call it positive epistemics.

The community identifies "good epistemics" entirely with the negative side. Being "epistemically rigorous" means being skeptical and demanding evidence. It doesn't mean being curious or exploratory.

Consider the asymmetry: EA ran a formal $100,000 red-teaming contest in 2022, specifically to incentivize criticism. Where is the comparable investment in green-teaming, the dedicated effort to take unfamiliar ideas and make them as strong as possible before deciding whether to reject them?

Green-teaming is not steelmanning. Steelmanning is an adversarial debate tool: make the opponent's argument stronger before you destroy it. Green-teaming is an institutional practice: collaboratively resource and protect a fragile idea to see if it can survive contact with reality. It's incubation, not rhetoric. And it doesn't exist in EA. Not as a norm, not as a practice, not as a vocabulary.

Every novel idea is fragile at birth. It has obvious holes. If the only tool applied is red-teaming, every new idea dies every time, and everyone feels epistemically virtuous. The idea never develops to the point where it could survive scrutiny, because scrutiny comes before development. The goal isn't lower standards. It's better sequencing: green-team, then maturation, then red-team.

Status and legibility select against novelty. A smart, idealistic person discovers EA. They observe what gets rewarded: certain topics, certain framings, certain vocabulary. They adapt. They might have an original insight from their unique background, but pursuing it means years in the wilderness. So they shelve it and optimize for legibility within the system. Multiply across thousands of people over a decade: the community has systematically selected against its most original thinkers. The mavericks either conform or leave. And nobody notices, because the community never knew what it was missing.

The karma system accelerates this. A well-written post about AI risk gets easy upvotes. A novel idea drawing on domains most readers don't understand requires real cognitive work. Most people scroll past. Those who engage are more likely to critique than to constructively develop, because critique is the high-status move. The system registers this as quality control working properly.

Official channels create the appearance of openness without the substance. This is worse than having no channels at all. If there were no Forum, no idea contests, no EA Funds, people might recognize the system lacks discovery capacity and try to build it, or take ideas directly to people with capital. Instead, the existing infrastructure absorbs that impulse. Someone posts a novel idea. It gets some engagement. Then nothing happens. The poster feels heard. The community feels open. The idea dies quietly.

The Cause Exploration Prizes illustrate this well. Open Philanthropy ran the contest in 2022 and received over 150 good-faith submissions, evidence of enormous latent ideation supply. But the prizes were explicitly scoped to identifying new cause areas "within our Global Health and Wellbeing portfolio." Discovery constrained by existing portfolio boundaries. The organizers weren't sure whether they'd repeat it. This was an episodic, funder-bounded intake event, not a standing discovery pipeline. The Future Fund's project ideas competition generated nearly 1000 submissions. Same pattern: when someone pays for exploration, the community produces an explosion of ideas. The bottleneck was never ideation. It's durable conversion into institutional attention.

Why the Equilibrium Persists

Funding concentration amplifies everything. When a few funding sources dominate, every researcher and organization orients toward what gets funded. Nobody sends a memo. The signal does all the work. In a community that listened to its members, funding concentration would be partially offset by diverse perspectives bubbling up from below. But when the orientation toward members is unidirectional ("serve these priorities" rather than "tell us what you see") funding concentration becomes the sole determinant of what gets explored. The distributed sensing network is disabled.

Philanthropy lacks the market's correction mechanism. In business, a major overlooked opportunity is a gold rush: competitors find it, and you lose. In philanthropy, a foundation that misses a transformative opportunity suffers zero consequences. EA was supposed to be the correction. Instead, it has reproduced the same dynamics: a small group sets priorities and everyone else orients around them. It's worth asking whether post-FTX risk aversion has been correctly directed, toward greater structural scrutiny of insiders, or toward greater hostility to outsiders.

Good intentions make the critique socially costly. Everything described above is done by good people with good intentions. This is what makes the structural argument so difficult to voice. Criticizing the system feels like criticizing the people. And so the critique doesn't get voiced by anyone with standing, because they have career reasons not to, and people without standing get dismissed.

But this section isn't a concession. It's doing structural work. Without it, a reader could dismiss the entire post by saying "you're describing a conspiracy" or "you're saying EA leaders are bad." I'm not. These are good people operating inside a system with serious structural blind spots. Conflating the goodness of intentions with the quality of outcomes is exactly the error EA was founded to correct in other domains. The community should not exempt itself.

What Would a Functional Path 5 Look Like?

Other institutions have solved versions of this problem. DARPA hires program managers: eccentric domain experts given independent budget authority and mandated to fund non-consensus projects. Venture capital uses scout programs to extend its sensing network beyond partners' immediate social graphs. Y Combinator uses lightweight applications emphasizing founder insight over polished proposals, with quick decisions by people empowered to take risks.

The design principles these share:

Dedicated roles whose job is exploration, not evaluation of proposals that arrive through existing channels. People whose careers are measured by discovery, not by avoiding bad bets.

Small, fast money to buy information. Seed grants that let rough ideas develop to the point where they can be meaningfully assessed. The current system demands mature proposals from people with zero institutional support. That's like requiring a finished product before agreeing to look at a prototype.

A conversion pipeline from rough signal to serious evaluation. Not just intake, but throughput: someone's job is to take a weird signal from a community member with unusual domain knowledge, figure out whether there's something real there, and if so, develop it to the point where it can be presented to actual allocators.

Tolerance for high miss rates, because hits dominate expected value. EA thinks obsessively about expected value in every other context but has apparently never applied expected-value reasoning to its own discovery process. What is the expected cost of a false negative, of missing a transformative opportunity because the system couldn't see it?

Green-teaming as an institutional practice. Before "what's wrong with this?", invest effort in "what's the strongest version of this?" Make constructive engagement with novel proposals as prestigious as critique.
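The miss-rate principle above can be made concrete with a back-of-the-envelope sketch. Every number below (grant size, hit rate, value per hit) is an invented placeholder, not an estimate of any real portfolio; the structure of the calculation is the point:

```python
# Toy expected-value sketch of an exploratory seed-grant portfolio.
# All figures are invented for illustration, not empirical estimates.
n_grants = 100
cost_per_grant = 20_000          # small, fast money to buy information
hit_rate = 0.02                  # tolerate a ~98% miss rate
value_per_hit = 50_000_000       # impact of one discovered cause area, in $-equivalents

total_cost = n_grants * cost_per_grant
expected_hits = n_grants * hit_rate
expected_value = expected_hits * value_per_hit

print(f"portfolio cost: ${total_cost:,}")        # $2,000,000
print(f"expected hits:  {expected_hits}")        # 2.0
print(f"expected value: ${expected_value:,.0f}") # $100,000,000
print(f"EV/cost ratio:  {expected_value / total_cost:.0f}x")
```

Under these placeholder numbers, a portfolio that is wrong 98% of the time still returns fifty times its cost in expectation. One can dispute any input, but a system that funds zero exploratory grants is implicitly claiming the hit rate or the value per hit is near zero.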

Conclusion

There's one more cost of the current system that rarely gets discussed: people.

EA's implicit pitch right now is: "Join us and serve predetermined priorities set by people you'll never meet." That's not compelling for the most ambitious, most original thinkers, exactly the people EA needs most. The community currently selects for conscientiousness and agreeableness: people willing to follow directions well. It selects against the personality traits associated with breakthrough thinking.

Imagine a different pitch: "You have unique knowledge and perspective. We have resources and a community of brilliant people. Bring us what you see, and if it's important, we'll help you make it real."

That would be an extraordinary attractor. Every maverick who shows up, feels unheard, and leaves is a double loss: EA loses their potential contribution and loses a potential advocate who could have brought others in.

EA's greatest asset isn't its analytical frameworks or its major funders. It's its people. Tens of thousands of smart, motivated individuals, each with a unique vantage point on the world. The community has built a system that wastes this asset. It tells its people where to look instead of asking them what they see. It rewards skepticism toward novel ideas but not curiosity about them. It has built intake mechanisms that create the feeling of openness while routing all real decision-making power through a narrow set of established priorities.

The result is a community that excels at optimizing within known categories and is structurally resistant to discovery. A telescope array with every dish pointed at the same three stars.

The most important question isn't whether we're doing good work within existing cause areas. We probably are. The question is what we're missing, and whether we're willing to build a system capable of finding it.

That starts with listening to our own people.

Comments (4)

Interestingly I've been thinking about something similar myself (in the context of what to do with the EA Forum this year), though I do feel skeptical that "shut down the EA Forum" is the best solution. I appreciate a lot of the points you bring up, and I am always happy to hear critical feedback about the Forum, but I do find your post too overconfident[1] and I personally don't think the existing data warrants that level of confidence.

My best guess is that, although there is no perfect system for this right now, the Forum is close enough that it's worth trying to make it serve this function better. There are many examples of Forum posts of this sort getting significant attention, such as Policy advocacy for eradicating screwworm looks remarkably cost-effective (which I think contributed to Launching Screwworm-Free Future – Funding and Support Request), Interstellar travel will probably doom the long-term future (which I believe led to the author getting his current position at Forethought), and Frog Welfare (one of the highest-karma posts from 2025). I would not characterize the comments in those posts as "aggressively red-teaming" — in fact they skew quite supportive, more so than I would personally like (but I think I am unusually happy to receive criticism).

You also mentioned EA Funds — I'm not that familiar with their grantmaking processes, but just looking at their "Featured grants" list I see that Alfredo Parra was given a "6-month salary to do prioritization research and community building focused on reducing extreme pain in humans", plus he's written extensively about this work on the Forum and recently founded ClusterFree. This seems to me to be a success story for current systems.


In any case, I think a lot of people around EA have been successfully impactful because they do things rather than just post on the Forum. Since you think this is a problem, I would encourage you to take action to try to improve the situation, for example:

  1. You could post a thread on the Forum asking people to pitch potential Cause X candidates, to get a better sense of "what might we be missing". You're welcome to contact the Forum team if you'd like us to consider pinning it or promoting it via other channels. I would be excited for more Forum users to proactively try to make the Forum a more valuable community space.
  2. We're very open to Forum users running events, since our capacity is stretched pretty thin. If you want a green-teaming contest to happen on the Forum, especially if you're willing to put in the effort to help organize it, contact us!
  3. Pitch this as an activity for EA groups — I think this could both be valuable for the community and a valuable exercise for groups to work on together, to practice EA thinking. (And I'd love to see them post their results on the Forum! 😊)
  4. I would guess that, if you attended some EA events or conferences and talked to some people about this, you could find at least one other person who would be interested in starting a project to address this problem (especially if you had a list of "potential Cause X candidates" that they thought looked promising). I bet if you could show positive results from a small side-project, you'd have a significantly easier time getting funding to work on this more or convincing someone else to work on it.
  5. If you have specific suggestions for the EA Forum (including if you want to expand on your "shut it down" idea), I'd be happy to hear it! :)
[1] Which may just be an artifact of the AI-assistance

I think the key question is what portion of EA's total funding goes toward genuine discovery versus optimization within existing spotlights. Your examples may well be real successes. But if they represent a tiny fraction of total resource allocation, that's consistent with my argument rather than a counter to it.

Executive summary: The author argues that while effective altruism excels at optimizing within established cause areas, its funding structures and epistemic norms systematically suppress bottom-up discovery, causing it to overlook transformative opportunities visible within its own community.

Key points:

  1. EA is highly effective at evaluating interventions within predefined cause areas but lacks a reliable mechanism for discovering entirely new categories of opportunity.
  2. New priorities typically enter EA through top-down funder interest, external elite validation, internal iteration, or insider pivots, while outsider-origin ideas without prestige or proximity to power rarely receive serious consideration.
  3. Although EA has formal intake channels such as the Forum and EA Funds, these lack “throughput,” meaning rough or novel ideas are not developed or routed to decision-makers with real capital.
  4. The community’s epistemic culture overemphasizes skepticism and red-teaming while neglecting “green-teaming,” the institutional practice of nurturing fragile ideas before subjecting them to adversarial scrutiny.
  5. Funding concentration and status incentives orient researchers and organizations toward existing priorities, selecting against original thinkers and discouraging exploration outside established cause areas.
  6. The author proposes building a functional “Path 5” with dedicated exploration roles, small fast grants, structured development pipelines, and tolerance for high miss rates to better harness the distributed knowledge of EA members.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Spot on in your analysis, but I don't know if it's fixable. I have suggested many times on this forum that we need to bake moral anti-realism into the core of the movement (which, as you state, probably does nothing). Ironically, I think one of the core (but maybe not so novel) lessons of uppercase EA is that decentralization breeds fanaticism in a social movement if it financially exists in a larger, extremely unequal society (even if the members are insanely thoughtful and Bayesian). Some form of centralization is required to conform evolutionary value drift into something closer to ideal reflection.

There are many paths, but unfortunately all of them require state capacity and culture. We would need some sort of political system to enforce the financial regulations that stop the gravity of the wealth-weighted dominant aesthetics from consuming the meta idea of ea (lowercase ea). And probably a bunch of other things. But this is hard; there are three main camps of resistance.

(1) The pure: those who believe counting is not politics but math.

(2) The pragmatic: those who believe decentralization is good for the movement.

(3) The de jure: those who believe decentralization is good for their career, usually because it continues the default status quo of who currently has power.

Together this coalition is sizable. I'm not sure exactly how much, and maybe it's a vocal minority, but I'd reckon at least 30%. Let's assume the rest of the movement is at least weakly in favor of centralization. But I think that 30% is more like 50-70 percent in the hubs of Oxford, DC, and SF (just speculating here). These parts of the movement have not just money but better organization as well. The remaining 70% are spread throughout the world, and it's not clear how they might currently coordinate to force some sort of constitution.

Your functional Path 5s are good ideas, but again, who exactly is doing or paying for them? Maybe you can convince someone rich right now, or maybe you can go build these projects, but there is nothing legally or politically forced, and the Egregore will eat it up all the same. Anything short of a real politically binding set of laws and a delineation between members and non-members seems like window dressing to me. But increasingly I think, even if this would get passed, I wonder whether the EA infra is best left as is, with new young people just trying to start a more functionally agnostic version of the movement. That's at least some of the essence of post-rats, though they never meant for that to be a big-tent idea.
