
Across many EA projects, career development services, and organisations in general, there is a strong sentiment towards focusing on 'top talent'. For example, in AI safety there are a few very well funded but extremely competitive programmes for graduates who want to do research in the field. Naturally, their output is then limited to a relatively small group of people. An opposing trend seems to have gained traction in AI capability research, as e.g. the leaked 'We have no moat' memo argued: a lot of output comes from the sheer mass of people working on the problem with a breadth-first approach. A corresponding opposite strategy for EA funds and career development services could be to spread the limited resources they have over a larger number of people.

This concentration of funds on the development of a small group of top talent, rather than distributing it over a wider group of people, seems to me to be a general sentiment quite prominent in the US economy and much less so in EU countries like Germany, Scandinavia, the Netherlands, etc. I could imagine that EA's origins in the US/UK are a major reason for this structural focus.

Does anyone have pointers to research comparing the effectiveness of focusing on top talent versus a broader set of people, ideally in the context of EA? Or any personal thoughts/anecdotes to share on this?

Comments (7)

Has anyone ever studied what happens to people who don't quite make the cut under the current funding bar, which is focused on "top talent"? Are they going into AI capabilities work because that's where they could find a job in AI? Are they earning to give? Or leaving EA (if they were in EA to start with)?

That could inform my reaction to this question.

I'm also curious about the answer to this question. Of the people I know in that category (which excludes anyone who simply stopped engaging with AI safety or EA entirely), many are working as software engineers or are on short-term grants to skill up. I'd expect more of them to do ML engineering if there were more jobs in that area relative to general software engineering. A couple of people I know, after being rejected from AI safety-relevant jobs or opportunities, have also decided to do master's degrees or PhDs in the expectation that this might help, an option that's more available to people who are younger.

Relatedly, I hope someone is actively working on keeping people who weren't able to get funding feeling welcome and engaged in the community (whether they decided to get a non-EA job or are attempting to upskill). Rejection can be alienating.

Not only do we not want these folks working on capabilities; it also seems likely to me that there will be a burst of funding at some point (either because of new money, or because existing donors feel it's crunch time and accelerate their spending). If AI safety can bring on 100 people per year, and a funding burst increased that to 300 (made-up numbers), it's likely that we'd really like to get some of the people who were ranked 101-200 in prior years.

To use a military metaphor, these people are in something vaguely like the AI Safety Reserves. Are we inviting them to conferences, keeping them engaged in the community, and giving them easy ways to keep their AI safety knowledge up to date? At that big surge moment, we'd likely be asking them to take big pay cuts and interrupt promising career trajectories in whatever they decided to do. People make those kinds of decisions in large part with their hearts, not just their heads, and a sense of belonging (or non-belonging) in the community is often critical to the decisions they make.

That sounds like it would be helpful, but I would also want people to have a healthier relationship with impact and intelligence than some EAs seem to have. It's also okay not to be the type of person who would be good at the types of jobs that EAs currently think are, or would be, most important for "saving the world". There's more to life than that.

Really good question!

Can you give more details on what "distributing resources over a wider group of people" means to you? Are you arguing that mentors should spend much less time per person and instead mentor three times as many people? Are you arguing that researchers should get half as much money so twice as many researchers can be funded?

A plausible hypothesis is that ordinary methods of distributing resources over a wider group of people don't unlock that many additional researchers. If the available infrastructure can only support a limited number of people, it is not very surprising to me that there is a focus on so-called 'top talent': all else being equal, you would rather have the more competent people. And there is probably no central EA decision that favors a small number of researchers over a large number of researchers.

A side remark:

For example, in AI safety there are a few very well funded but extremely competitive programmes for graduates who want to do research in the field. Naturally, their output is then limited to a relatively small group of people.

Naming the specific programmes might get you better answers here: people who want to answer will have to speculate less, and if you are lucky the organizers of those programmes might be inclined to respond.

An opposing trend seems to have gained traction in AI capability research, as e.g. the leaked 'We have no moat' memo argued: a lot of output comes from the sheer mass of people working on the problem with a breadth-first approach.

They have far more resources than we do.

Yes, indeed, that's what I am suggesting: if mentoring capacity is a strong bottleneck for an org, one approach to "more broadly distributing resources" might be for programmes to increase their student-to-staff ratio (meaning a bit more self-guided work for each participant, but more participants in total).

Prominent and very competitive programmes I was thinking of are SERI MATS and MLAB from Redwood, but I think that extreme applicant-to-participant ratios hold for pretty much all paid, and even many unpaid, EA fellowships, e.g. PIBBSS. Thanks for the hint that it may be helpful to mention some of them.

Re "they have more resources than us": why does that matter? If the question is "How can we achieve the most impact possible with the limited resources we have?", then given the extreme competitiveness of these programmes and the early career stage most applicants are at, a plausible hypothesis is that scaling up the quantity of these training programmes at the expense of quality would increase total output. And so far it seems to me that this is potentially neglected.
