
Summary

I've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.

Below I've pasted some slightly edited excerpts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.

New 80,000 Hours estimate of the number of people working on reducing AI risk

Neglectedness estimate

We estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy.[1] We think there are around 800 people working in complementary roles, but we’re highly uncertain about this estimate.

Footnote on methodology

It’s difficult to estimate this number.

Ideally we want to estimate the number of FTE (“full-time equivalent”) working on the problem of reducing existential risks from AI.

But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:

  • I didn’t include people who might think of themselves as being on a career path that builds towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.
  • I included researchers, engineers, and other staff that seem to work directly on technical AI safety research or AI strategy and governance. But there’s an uncertain boundary between these people and others who I chose not to include. For example, I didn’t include machine learning engineers whose role is building AI systems that might be used for safety research but aren’t primarily designed for that purpose.
  • I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work that focuses on reducing other risks from AI seems relevant to reducing existential risks – this ‘indirect’ work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).
  • Relatedly, I didn’t include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.

With those decisions made, I estimated this in three different ways.

First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE. There’s a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.
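As an illustration of how a 90% confidence interval relates to a mean under a lognormal assumption, here is a minimal Python sketch (this is my reconstruction, not the author's actual spreadsheet). It fits a lognormal to the technical safety interval from this first method; the fitted mean won't exactly reproduce the 196 FTE above, since that figure was built up organisation by organisation rather than from a single fitted distribution.

```python
import numpy as np

# Illustrative only: fit a lognormal to a 90% CI and read off its mean.
lo, hi = 76, 536                      # 5th and 95th percentiles (FTE)
z = 1.6449                            # z-score for the 5th/95th percentiles
mu = (np.log(lo) + np.log(hi)) / 2    # log-space median
sigma = (np.log(hi) - np.log(lo)) / (2 * z)
mean = np.exp(mu + sigma**2 / 2)      # mean of a lognormal distribution
print(f"implied median ≈ {np.exp(mu):.0f} FTE, mean ≈ {mean:.0f} FTE")
```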

Second, I adapted the methodology used in Gavin Leech’s estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech’s estimate into technical safety and governance/strategy. I adapted Gavin’s figures for the proportion of computer science academic work relevant to the topic to fit my definitions above, and made a related estimate for relevant work outside computer science but within academia. Overall I estimated that there were 125 to 1,848 FTE working on technical AI safety (90% confidence), with a mean of 580 FTE. I estimated that there were 48 to 268 FTE working on AI governance and strategy (90% confidence), with a mean of 100 FTE.

Third, I looked at Stephen McAleese’s estimates of similar numbers. I made minor changes to McAleese’s categorisation of organisations to ensure the numbers were consistent with the previous two estimates. Overall I estimated that there were 110 to 552 FTE working on technical AI safety (90% confidence), with a mean of 267 FTE. I estimated that there were 36 to 193 FTE working on AI governance and strategy (90% confidence), with a mean of 81 FTE.

I took a geometric mean of the three estimates to form a final estimate, and combined confidence intervals by assuming that distributions were approximately lognormal.
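The post doesn't spell out the exact aggregation procedure, but one plausible reconstruction – assuming each method's estimate is lognormal and combining them sample-wise with a geometric mean – looks roughly like this (shown for the technical AI safety figures only; the numbers won't exactly match the published totals):

```python
import numpy as np

rng = np.random.default_rng(0)
z = 1.6449  # z-score for a 90% interval

# Hypothetical reconstruction: fit a lognormal to each method's 90% CI,
# sample from each, combine sample-wise with a geometric mean, and read
# off the combined mean and 90% interval.
cis = [(76, 536), (125, 1848), (110, 552)]   # the three methods' 90% CIs

samples = []
for lo, hi in cis:
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z)
    samples.append(rng.lognormal(mu, sigma, 100_000))

combined = np.prod(samples, axis=0) ** (1 / len(samples))   # geometric mean
print(f"mean ≈ {combined.mean():.0f} FTE, "
      f"90% CI ≈ ({np.percentile(combined, 5):.0f}, "
      f"{np.percentile(combined, 95):.0f})")
```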

Finally, I estimated the number of FTE in complementary roles using the AI Watch database. For relevant organisations, I identified those where there was enough data listed about the number of researchers at those organisations. I calculated the ratio between the number of researchers in 2022 and the number of staff in 2022, as recorded in the database. I calculated the mean of those ratios, and a confidence interval using the standard deviation. I used this ratio to calculate the overall number of support staff by assuming that estimates of the number of staff are lognormally distributed and that the estimate of this ratio is normally distributed. Overall I estimated that there were 2 to 2,357 FTE in complementary roles (90% confidence), with a mean of 770 FTE.
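A rough Monte Carlo sketch of that last step is below. The direct-FTE interval comes from the headline estimate above, but the researcher-to-staff ratio (mean 0.35, standard deviation 0.15) is a made-up placeholder rather than the value actually computed from AI Watch, so the output will not reproduce the 770 FTE figure.

```python
import numpy as np

rng = np.random.default_rng(0)
z = 1.6449

# Direct FTE modelled as lognormal, fitted to the headline 90% CI (200, 1000).
lo, hi = 200, 1000
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z)
direct = rng.lognormal(mu, sigma, 100_000)

# Researcher-to-total-staff ratio modelled as normal (placeholder parameters).
ratio = rng.normal(0.35, 0.15, 100_000)
keep = (ratio > 0.05) & (ratio < 1)          # drop implausible draws
ratio, direct = ratio[keep], direct[keep]

support = direct * (1 / ratio - 1)           # implied complementary-role FTE
print(f"mean ≈ {support.mean():.0f}, "
      f"90% CI ≈ ({np.percentile(support, 5):.0f}, "
      f"{np.percentile(support, 95):.0f})")
```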

There are likely many errors in this methodology, but I expect these errors are small compared to the uncertainty in the underlying data I’m using. Ultimately, I’m still highly uncertain about the overall FTE working on preventing an AI-related catastrophe, but I’m confident enough that the number is relatively small to say that the problem as a whole is highly neglected.

I’m very uncertain about this estimate. It involved a number of highly subjective judgement calls. You can see the (very rough) spreadsheet I worked off here. If you have any feedback, I’d really appreciate it if you could tell me what you think using this form.

Some extra thoughts from me

This number is extremely difficult to estimate.

As with any Fermi estimate, I'd expect there to be a number of mistakes. I think there will be two main types:

  • Bad judgement calls when estimating the number of people working at each organisation, e.g. based on "what counts as an FTE working directly on this issue", "how wrong is the AI Watch database on this organisation", etc.
  • Errors in calculation / estimating uncertainty, etc.

Again, as with any Fermi estimate, I'd hope that these errors roughly cancel out overall.

I didn't spend much time on this (maybe about 2 days of work). This is because I'd guess that more work won't improve the estimate by decision-relevant amounts. Some reasons for this:

  • A rougher version of this estimate that I'd used previously came to an answer of 300 FTE. That estimate took around 3-4 hours of work. While 300 FTE to 400 FTE is a large proportional change, it still represents a highly neglected field and doesn't seem substantially decision-relevant.
  • Errors in collecting data on this seem large in a way that couldn't be easily mitigated by doing more work.
  • There would still be substantial subjective judgement in an estimate that took more time. My uncertainty in this estimate includes uncertainty in whether these are the right judgement calls (on the criteria of "is it truthful, across a distribution of plausible definitions, to say that this is the number of FTE working directly on reducing existential risk from AI"), and it seems very difficult to reduce that uncertainty.

 

  1. ^

    Note that before 19 December 2022, this page gave a lower estimate of 300 FTE working on reducing existential risks from AI, of which around two thirds were working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy.

    This change represents a (hopefully!) improved estimate, rather than a notable change in the number of researchers.

Comments (3)



Thank you for your work on this. 

I'd be interested in your opinion on the number of people who should be working on this. 

I appreciate that this isn't a straightforward question to answer. The truth is probably that returns diminish as the number of people working on this increases, and there probably isn't an obvious way to delineate a clear cut-off point between "still useful to have another person" and "don't need any more people".

I think this is useful because I suspect your view is that there should be lots more people working on this, but from reading the problem profile, I don't think readers would know whether 80k would want the 400 to increase to 500 or 500,000. (I've only skimmed it, so sorry if it is explained.)

Knowing the difference between "the area is somewhat under-resourced" and "the area is extremely under-resourced" is useful for readers. 

Thanks, this seems useful! :) One suggestion: if there are similar estimates available for other causes, could you add at least one to the post as a comparison? I think this would make your numbers more easily interpretable.
