
Across many EA projects, career development services, and organisations in general, there is a strong sentiment towards focusing on 'top talent'. For example, in AI safety there are a few very well-funded but extremely competitive programmes for graduates who want to do research in the field. Naturally, their output is then limited to a relatively small group of people. An opposing trend seems to have gained traction in AI capabilities research, as e.g. the "We have no moat" memo argued: a lot of output comes from the sheer mass of people working on the problem with a breadth-first approach. A corresponding opposite strategy for EA funds and career development services would be to spread the limited resources they have over a larger number of people.

This concentration of funds on developing a small group of top talent, rather than distributing them over a wider group of people, seems to me to be a general sentiment that is quite prominent in the US economy and much less so in EU countries like Germany, the Netherlands, or the Scandinavian countries. I could imagine that EA's origins in the US/UK are a major reason for this structural focus.

Does anyone have pointers to research comparing the effectiveness of focusing on top talent vs. a broader set of people, ideally in the context of EA? Or any personal thoughts/anecdotes to share on this?

Has anyone ever studied what happens to people who don't quite make the cut with the current funding bar that is focused on "top talent"? Are they going into AI capabilities work because that's where they could find a job in AI? Are they earning to give? Or leaving EA (if they were in EA to start with)?

That could inform my reaction to this question.

I'm also curious about the answer to this question. Of the people I know in that category (which excludes anyone who simply stopped engaging with AI safety or EA entirely), many are working as software engineers or are on short-term grants to skill up. I'd expect more of them to do ML engineering if there were more jobs in that relative to general software engineering. A couple of people I know, after getting rejected from AI safety-relevant jobs or opportunities, have also decided to do master's degrees or PhDs in the expectation that this might help, which is an option more available to people who are younger.

Relatedly, I hope someone is actively working on keeping people who weren't able to get funding feeling welcome and engaged in the community (whether they decided to get a non-EA job or are attempting to upskill). Rejection can be alienating.

Not only do we not want these folks working on capabilities; it also seems likely to me that there will be a burst of funding at some point (either because of new money, or because existing donors feel it's crunch time and accelerate their spending). If AI safety can bring on 100 people per year, and a funding burst increased that to 300 (made-up numbers), we'd likely really want some of the people who were ranked 101-200 in prior years.

To use a military metaphor, these people are in something vaguely like the AI Safety Reserves. Are we inviting them to conferences, keeping them engaged in the community, and giving them easy ways to keep their AI safety knowledge up to date? At that big surge moment, we'd likely be asking them to take big pay cuts and interrupt promising career trajectories in whatever field they had chosen. People make those kinds of decisions in large part with their hearts, not just their heads, and a sense of belonging (or non-belonging) in a community is often critical to the decisions they make.

That sounds like it would be helpful, but I would also want people to have a healthier relationship with impact and intelligence than I see in some EAs. It's also okay not to be the type of person who would be good at the jobs that EAs currently think are most important, or that would matter most for "saving the world". There's more to life than that.

Really good question!

Can you give more details on what "distributing resources over a wider group of people" means for you? Are you arguing that mentors should spend much less time per person and instead mentor three times as many people? Are you arguing that researchers should get half as much money so twice as many researchers can get funded?

A plausible hypothesis is that ordinary methods of distributing resources over a wider group of people don't unlock that many additional researchers. If the existing infrastructure can only support a limited number of people, then it is not very surprising to me that there is a focus on so-called 'top talent': all else being equal, you would rather have the more competent people. And there is probably no central EA decision that favours a small number of researchers over a large number of researchers.

A side remark:

> For example in AI safety there are a few very well funded but extremely competitive programmes for graduates who want to do research in the field. Naturally their output is then limited to relatively small groups of people.

Naming the specific programmes might get you better answers here. People who want to answer would have to speculate less, and if you are lucky, the organizers of those programmes might be inclined to respond.

> An opposing trend seems to have gained traction in AI capability research as e.g. the "We have no moat" paper argued, where a load of output comes from the sheer mass of people working on the problem with a breadth-first approach.

They have far more resources than we do.

Yes, indeed, that's what I am suggesting: if mentoring capacity is a strong bottleneck for an org, one approach to "distributing resources more broadly" might be for programmes to increase their student-to-staff ratio (meaning a bit more self-guided work for each participant, but more participants in total).

Prominent and very competitive programmes I was thinking of are SERI MATS and MLAB from Redwood Research, but I think that extreme applicant-to-participant ratios hold for pretty much all paid, and even many unpaid, EA fellowships, e.g. PIBBSS or . Thanks for the hint that it may be helpful to mention some of them.

@"they have more resources than us": Why does that matter? The question is "How can we achieve the most possible impact with the limited resources we have?" Given the extreme competitiveness of these programmes and the early career stage most applicants are in, a plausible hypothesis is that scaling up the quantity of these training programmes at the expense of quality is a way to increase total output. And so far it seems to me that this is potentially neglected.
