Thanks to @Mathieu Duteil for all of his insights in writing this post!
TL;DR
EA has a strong infrastructure for inspiring people to pursue impactful careers. The infrastructure for helping them actually get there, especially in operations, is far less developed. After applying to dozens of operations roles over the past 6 months, I observed patterns suggesting inefficiencies in how EA organizations hire: they run parallel processes for similar roles and screen for overlapping competencies. I propose a shared hiring infrastructure, including a common application layer, candidate process history, and coordination between organizations scoping similar roles.
I. The Gap Between Motivation and Opportunity
EA has built excellent infrastructure for inspiration. Career bootcamps, introductory fellowships, and governance courses help people identify cause areas, understand the landscape, and commit to a direction. For someone transitioning from a different field, these programs are genuinely valuable. I have experienced this firsthand. Since encountering Effective Altruism, I have taken courses on AI governance and biosecurity, participated in the ML4Good governance bootcamp and the CEA Operations Career Bootcamp, and attended EAG and EAGx events that reinforced my motivation to work on these problems. The inspiration infrastructure works. The issue is that there is no structured path from “I know what I want to do” to “I am doing it.”
A researcher can suggest a paper idea. A policy analyst can pitch a brief on an emerging regulation. Operations work does not offer the same entry points. It is defined by organizational needs rather than by portable expertise. You cannot easily say, “I have this project I want to build for you,” because the work is inherently shaped by the organization’s existing context and gaps. This makes the path from motivation to contribution harder to navigate, and it makes the hiring process the primary bottleneck for operations talent entering the ecosystem.
The hiring challenge extends beyond operations. Abraham Rowe, in his post last November, argued that recruitment is the most undervalued function in high-impact organizations. He noted that organizations routinely struggle to find recruiting talent, rarely backtest their hiring practices against later performance, and rely on rudimentary tools for candidate evaluation. He concluded that almost no one is appropriately obsessed with hiring. This post argues something complementary from the candidate side. What follows describes what I have observed over months of navigating that bottleneck, and what might reduce it.
The difficulty of entering EA operations work is not a new observation. A 2024 survey by Julia Michaels of 91 job seekers found that 89% reported negative feelings about their job search, with employer hiring practices, a lack of feedback, and opaque decision-making cited as the primary barriers. In interviews, candidates described the process as a “black box,” even when individual steps were clearly communicated, suggesting a gap between explicit and implicit processes. That research focused broadly on EA roles; the operations track, in particular, has received less attention, which is part of why this post focuses on it.
II. What Dozens of Applications Taught Me
Over the last six months, I have applied to dozens of operations roles, mostly at AI governance and biosecurity organizations. The process has been instructive, though not in the ways I had hoped.
At least twice, I completed full application cycles for roles that were later cut. In one case, I had finished a work trial. In another, I had completed a final interview with the COO. Both times, I was told the decision stemmed from organizational pivots. I understand that organizations in this space face genuine uncertainty. Funding landscapes shift, priorities change, and what seemed essential in October may no longer make sense by December. While work tests themselves have value (they offer payment, something for your portfolio, and a chance to test fit), the issue is investing that effort in a role that may no longer exist by the time the process concludes. The organization’s staff, too, invested time that they will not recover.
Moreover, I noticed a pattern in the application questions. In the last quarter of 2025, at least five AI-adjacent organizations were simultaneously hiring for similar operations roles. Across the five roles, I answered variations of the same questions: describe your most impressive project, describe a system you built or improved, explain your relevant operations experience, and explain your interest in AI safety. The phrasing differed, but the substance was the same. The obvious objection is that candidates could simply copy and paste between applications. In practice, they cannot. Each application uses different word limits and slightly different framing, so candidates spend time reformatting the same answer rather than demonstrating anything new. These organizations were running parallel processes, screening for similar competencies, and evaluating overlapping candidate pools.
Consider the numbers, using conservative assumptions. If 300 people apply to a single operations role and each spends 30 minutes on the initial application, that is 150 hours of candidate time for one position. If five organizations run similar processes in the same quarter, that is 750 hours of applicant time on initial applications alone, before accounting for screening calls, work tests, or interviews. Under less conservative assumptions (400+ applicants per role, 45 minutes per application given tailored cover letters or written responses, and additional hours for screening calls and work tests), the total across five roles could easily reach several thousand hours. This does not include the staff hours spent reviewing those applications, conducting screening calls, and evaluating work trials. These numbers are rough estimates, but they illustrate the scale.
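For readers who want to adjust the assumptions, here is a minimal back-of-envelope sketch of the same arithmetic. All the inputs are the illustrative figures above, not measured data.

```python
# Back-of-envelope estimate of candidate time spent on initial applications.
# All inputs are illustrative assumptions from the paragraph above, not measured data.

def applicant_hours(applicants_per_role: int, minutes_per_application: int, roles: int) -> float:
    """Total candidate time spent on initial applications, in hours."""
    return applicants_per_role * minutes_per_application * roles / 60

# Conservative scenario: 300 applicants, 30 minutes each.
print(applicant_hours(300, 30, 1))  # 150.0 hours for a single role
print(applicant_hours(300, 30, 5))  # 750.0 hours across five parallel roles

# Less conservative scenario: 400 applicants, 45 minutes each, five roles,
# before counting screening calls, work tests, or staff review time.
print(applicant_hours(400, 45, 5))  # 1500.0 hours on initial applications alone
```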
Most of these applications included a checkbox asking applicants whether they consented to share their material with other organizations. I checked that box whenever it appeared. It never resulted in any contact from other organizations, even those that showed a sustained interest when I applied to their roles. This is worth pausing on. Organizations have already shown willingness to share candidate information. What is probably missing is the follow-through and the capacity to scale it.
This is not a new concern. Over the years, there have been accounts of the same issues I am describing: rejected candidates being ignored as a resource, consent to share applications leading nowhere, application processes that stretch over weeks. These accounts cover early-career researchers, mid-career operations candidates, and everyone in between. The pattern is consistent.
III. The Cumulative Cost
Through dozens of applications, organizations have evaluated my writing, judgment, and ability to perform under time pressure. That information now exists in many places, but there is no mechanism to carry it forward. It would feel awkward to ask an organization that rejected me for a role to recommend me, even though the evaluators may genuinely have positive things to say about my work. The social dynamics of hiring create a situation where useful information is generated and then lost. 80,000 Hours’ research suggests that work-sample tests are among the best predictors of job performance, which makes it worth asking why their results are not shared more widely (with candidate consent). Organizations may keep this information or share it internally. As a candidate, I walk away from each process with nothing to show for what I demonstrated.
This compounds over time. The emotional toll of repeated applications is real. So is the structural cost. Talented people with financial constraints cannot wait indefinitely. The process also selects against candidates strong enough to already be employed elsewhere: they do not have the time to navigate lengthy, parallel application cycles. The ecosystem loses people on both ends: those without the runway to keep searching and those too busy doing good work to jump through repeated hoops.
If the path to impact requires months of unpaid searching, networking, and speculative project work, those without a runway will leave. The ecosystem loses them not because they lacked commitment, but because they lacked resources. Programs like Open Philanthropy’s Career Development and Transition Funding exist precisely to address this gap, supporting career exploration periods and professional development. This is valuable infrastructure, and I am glad it exists. However, it is focused on certain cause areas, and the number of people facing this situation likely exceeds the program’s capacity.
I am not alone in this experience. Through EA career programs, I have met other mid-career professionals with relevant skills and a genuine commitment to these cause areas, who have described similar patterns. These are exactly the people the ecosystem should be absorbing, not losing to unnecessary process friction: repeated applications, roles that vanish, uncertainty about what organizations actually want from operations candidates (which tools matter most, what level of technical fluency is expected, whether project management experience outweighs domain knowledge…), and no mechanism to carry forward the vetting they have already undergone. None of this is an indictment of individual organizations: they are resource-constrained and doing their best with limited capacity. But the cumulative effect is worth examining, and there may be ways to reduce it.
IV. What Could Help
Several organizations already work on adjacent problems. Pineapple Operations maintains a candidate database for operations roles. Impact Ops helps individual organizations with hiring. These are useful, but the gap I am pointing at is specifically about coordination between organizations running parallel processes, and about preserving information generated during hiring.
Some organizations are already experimenting with different approaches. There are roles where the organization hired a consultant for a few months while running a longer search in parallel, allowing both sides to test fit without prolonged uncertainty. Other applications have detailed, role-specific questions rather than generic prompts, which signal seriousness and precision. These examples stayed with me because they felt different. There may be more to learn from what is already being tried. A fuller analysis of what exists and where gaps remain would benefit from conversations with these organizations, which I have not yet had.
There are also instructive precedents outside EA. University admissions faced a similar problem of parallel processes. The Common Application did more than reduce paperwork: evaluations found that participating institutions enrolled a more geographically diverse student body, including higher-performing candidates they would not have otherwise reached. Lowering friction expanded the talent pool.
In the tech industry, Triplebyte offered a shared technical assessment that allowed companies to skip redundant screening for software engineers, although the company shut down in 2023. A retrospective by its former Head of Product, after the company’s shutdown, identified a lesson relevant here: changing established hiring behavior is extremely difficult, even when the existing process is widely disliked. That risk applies to any version of what I am proposing. Still, she has since founded a new company pursuing a similar model, which suggests the underlying idea retains value even if the first attempt failed.
EA needs a shared hiring infrastructure, something that sits between organizations and helps them coordinate. The obvious question is: why does this not already exist? I see three likely reasons. First, coordination requires someone to build the infrastructure in the first place, and no single organization has the capacity to take that on alongside its core work. Second, even once built, there would be no one whose job it is to maintain such a system. Third, each organization may believe its operations needs are specific enough that shared screening has limited value. Others closer to organizational strategy may see additional factors.
The third objection deserves a direct response. Operations roles vary by context: the systems, tools, and team dynamics at an AI safety lab differ from those at a global development nonprofit. But the core competencies being screened for in initial rounds (clear writing, structured thinking, systems design instinct, project management ability) are remarkably similar across organizations. The point is not that organizations should skip bespoke evaluation entirely, but that the first layer of screening, which is where most of the duplicated effort occurs, could be shared. Organizations have already shown willingness to share candidate information: most applications include a consent checkbox for exactly this purpose. What is missing is not the intent but the infrastructure to act on it.
Concretely, what I am describing is a shared platform, maintained as a community resource, that sits between organizations and candidates:
Shared application infrastructure. The platform would have a private profile for each candidate. Common questions would be addressed once in the profile: background, motivation, and familiarity with a specific cause area. The key is that these would be the same questions across organizations, not merely similar ones that still require starting from scratch.
A shared layer would eliminate that repeated rewriting while still allowing organizations to add genuinely role-specific questions. It could also include candidates' reasoning on common operational scenarios, letting organizations see how they think through problems they would actually face rather than relying solely on descriptions of past experience. A given position may carry many specific responsibilities, such as organizing a workshop or managing visas, and asking about each of them would make the application process too long. So although an organization might not want to ask about every specific task, it may still appreciate seeing how an applicant answered a similar question for a different role.
Process history and work test evaluations. The platform could also track what hiring processes each candidate has completed and how far they progressed. Not “vetting” in the sense of endorsement, but transparency about what screening has already been done. Organizations could decide how much weight to give this information. When an organization finishes a hiring round with strong finalists it cannot hire, it could facilitate introductions to other organizations hiring for similar roles. This is where the consent checkboxes could finally lead somewhere.
Where both the applicant and the organization are willing, this record could also include brief evaluations from work tests. For confidentiality reasons, these would not necessarily contain the full work test. Instead, organizations could share something like “this person completed a task on systems building and performed well,” or “showed strengths in event logistics but less experience with financial administration.” Any such system would need strict access controls. Applicants would see only their own profiles, and organizations would need to be vetted before accessing the database.
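To make these two components a little more concrete, here is a minimal, purely hypothetical sketch of the records such a platform might store. All names and fields are my own illustration, not part of any existing system.

```python
# Hypothetical sketch of the candidate-owned records described above.
# Names and fields are illustrative only, not a specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommonApplication:
    """Answered once by the candidate and reused across organizations."""
    background: str
    motivation: str
    cause_area_familiarity: str
    scenario_responses: List[str] = field(default_factory=list)  # reasoning on common operational scenarios

@dataclass
class ProcessRecord:
    """What screening a candidate has already completed, shared only with consent."""
    organization: str
    role: str
    furthest_stage: str           # e.g. "work test", "final interview"
    work_test_summary: str = ""   # brief evaluation, not the full test
    consent_to_share: bool = False

@dataclass
class CandidateProfile:
    """Private to the candidate; visible only to vetted organizations."""
    candidate_id: str
    common_application: CommonApplication
    process_history: List[ProcessRecord] = field(default_factory=list)
```

The design choice this sketch is meant to illustrate is that the common application is written once and owned by the candidate, while each process record travels only where the existing consent checkboxes already indicate willingness.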
Other things worth considering:
Role scoping support. Is this role actually needed? Is another organization already hiring for the same function? Can the organization realistically complete this hiring process given current capacity and funding uncertainty? An outside perspective could help resource-constrained teams answer these questions before launching a process. This might reduce the number of roles that disappear mid-process.
Visa clarity. A smaller but real issue is the lack of clarity around visa sponsorship. Some organizations clearly state their position at the top of the posting, saving everyone time. Others leave it ambiguous until late in the process. For candidates outside the US or UK, a simple upfront disclosure would reduce unnecessary effort on both sides.
V. Similar Proposals
This topic has come up several times on this forum. In May 2022, Charles He published a detailed case for an EA Common Application, with founding team structures and funding estimates, and noted that a version had come close to receiving support from a major grantmaker before the prospective founder withdrew for personal reasons. In August 2022, Anya Hunt and Katie Glass published a systematic analysis of EA recruiting gaps that named the same three needs this post identifies: a shared candidate layer, a coordination mechanism for referrals between organizations, and a process for routing strong near-miss candidates to other open roles. They noted the consent checkbox and the same absence of infrastructure to act on it. Two years ago, Elizabeth Cooper mentioned that BERI had been looking into a Common Pre-Application for AI Safety. These posts are several years old, yet, as far as I know, none of what they proposed has been built. It would be extremely useful to have post-mortems on these projects to better understand their feasibility, the obstacles they encountered, what they achieved, and why these ideas were eventually dismissed.
Part of why those post-mortems matter is precisely that these proposals are several years old. Back then, 80,000 Hours had not yet pivoted to prioritize AI safety careers, and the number of AI safety organizations was much smaller. With more organizations now running parallel processes for similar roles, the case for a common screening layer is stronger than it was then. And the changing landscape is not the only reason this idea may be more urgent now.
Moreover, if rewriting slightly different versions of the same essays was already a Sisyphean task in 2022, today it mostly pushes applicants to let an LLM answer for them. That wastes recruiters' time or, conversely, pushes them to rely on LLMs themselves to handle the workload, making the whole exercise even more absurd. A shared first layer, answered once and owned by the candidate, at least concentrates the effort where it can be genuine. In other words, this could be an idea whose time has come.
VI. Where This Analysis May Fall Short
It is possible that the bottleneck is not coordination but demand. If there are simply fewer operations roles than qualified candidates, better application infrastructure does not change the ratio. It is also possible that the current system, despite its costs, is rationally optimized for fit rather than efficiency. A bad operations hire at a small, high-impact organization can do real damage, and organizations may be willing to pay the cost of redundant screening to avoid it. One could also argue that lengthy, bespoke applications filter for motivation, though this argument has weakened considerably in the age of LLMs.
I cannot rule out any of these, but even if demand is the deeper constraint, reducing duplicated effort still frees capacity on both sides. If intensive screening is worth the cost, it is worth asking whether it needs to restart from zero every time. A shared system does not mean a lower bar. It means organizations get the same signal without requiring candidates to regenerate it from scratch, and they may also gain access to information they would not have thought to ask for. Finally, the morale cost is worth restating. A process that repeatedly discards demonstrated competence does not just waste time. It wears down the people the ecosystem most needs to retain.
I would be interested in working on a project like this. I have the operational experience and the candidate-side perspective, but I do not have the HR expertise to do it alone. If you do, and this problem resonates with you, I would like to hear from you. I am also aware that this idea's history is one of enthusiasm followed by a lack of follow-through, and that noting interest in a forum post is not the same as building something. What I hope this post contributes that the earlier ones did not is a clearer account of the cumulative cost, a candidate-side perspective on where the duplication actually occurs, and enough specificity about the platform components that anyone with the relationships and organizational capacity to act on this has a concrete starting point rather than a general direction.
Note: The recent AMA with recruiters at impact-focused orgs offers perspectives from the other side of these processes as well.

Thank you so much for this. Commenting for reach and also because I want to re-read later in depth. Very much agree the system is broken, although the problem is more general and not EA focussed. However, I do agree with you that the EA ecosystem has huge potential for streamlining the process due to shared values and usually similar recruitment processes.
I'm preparing a piece about it and will DM you the draft - would love to get your input on this
Glad you like the idea. Looking forward to reading your draft!