Across many EA projects, career development services, and organisations in general, there is a strong sentiment towards focusing on 'top talent'. For example, in AI safety there are a few very well funded but extremely competitive programmes for graduates who want to do research in the field. Naturally, their output is then limited to a relatively small group of people. An opposing trend seems to have gained traction in AI capabilities research, as the leaked "We Have No Moat" memo argued: a large share of progress comes from the sheer number of people working on the problem with a breadth-first approach. A corresponding opposite strategy for EA funds and career development services would be to spread their limited resources over a larger number of people.
This concentration of funds on the development of a small group of top talent, rather than distribution over a wider group of people, seems to me to be a sentiment quite prominent in the US economy and much less so in EU countries like Germany, the Netherlands, or the Scandinavian countries. I could imagine that EA's origins in the US/UK are a major reason for this structural focus.
Does anyone have pointers to research comparing the effectiveness of focusing on top talent vs. a broader set of people, ideally in the context of EA? Or any personal thoughts/anecdotes to share on this?
Relatedly, I hope someone is actively working on making sure people who weren't able to get funding still feel welcome and engaged in the community (whether they decided to get a non-EA job or are attempting to upskill). Rejection can be alienating.
Not only do we not want these folks working on capabilities, it also seems likely to me that there will be a burst of funding at some point (either because of new money, or because existing donors feel it's crunch time and accelerate their spending). If AI safety can bring on 100 people per year, and a funding burst increased that to 300 (made-up numbers), we'd very likely want to recruit some of the people who were ranked 101-200 in prior years.
To use a military metaphor, these people are in something vaguely like the AI Safety Reserves. Are we inviting them to conferences, keeping them engaged in the community, and giving them easy ways to keep their AI safety knowledge up to date? At that big surge moment, we'd probably be asking them to take big pay cuts and interrupt promising career trajectories in whatever they decided to do. People make those kinds of decisions in large part with their hearts, not just their heads, and a sense of belonging (or non-belonging) in the community is often critical to the decisions they make.