Hi all,
I’m exploring an idea that came out of my own experience recruiting through the 80,000 Hours job board and wanted to sanity-check it with the community before going any further.
One thing I’ve noticed is that while EA job boards surface many strong candidates, a number of smaller EA-aligned organizations (e.g. early-stage AI safety, global priorities, or meta orgs) often don’t have the bandwidth to run a full recruiting process. Things like timely follow-ups, structured screening, proactive sourcing, and candidate communication can fall by the wayside—not due to lack of intent, but capacity constraints.
The result seems to be avoidable friction on both sides: candidates who are genuinely excited but left unsure where they stand, and orgs that miss out on good fits simply because they can't move quickly or consistently enough.
Because of that, I’m exploring whether there’s room for a small, opt-in recruiting / sourcing service that sits downstream of existing EA job boards, focused narrowly on execution and candidate experience for hard-to-fill roles at capacity-constrained orgs. The intent wouldn’t be to replace job boards, advising, or existing recruiting efforts—just to complement them by helping ensure promising matches don’t fall through due to operational bottlenecks.
At this stage, I’m mainly looking for a gut check from people with relevant perspectives:
- Is this a real problem? If you've been a hiring manager, candidate, or advisor in the EA ecosystem, does this resonate with your experience, or is the issue overstated?
- Is this a good use of marginal effort/funding? Compared to other talent or meta interventions, does this seem plausibly cost-effective, or are there better ways to address the same bottleneck?
- Would this be fundable? Does this feel like something that could fit within existing EA funding buckets (e.g. meta / infrastructure / talent), assuming a clearly scoped pilot with modest costs?
- What would you worry about most? For example: perverse incentives, overlap with existing services, ecosystem fragmentation, or unintended downstream effects.
I'm not committed to building this unless it seems clearly useful and aligned, and I'm especially interested in reasons not to pursue it. If helpful, I'm happy to clarify what I do (and don't) have in mind for scope, safeguards, or metrics.
Thanks in advance for any thoughts—critical or supportive.
