TL;DR
In a sentence:
We are shifting our strategic focus so that our proactive effort goes towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.
In more detail:
We think it’s plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that’s been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.
During 2025, we are prioritising:
- Deepening our organisational understanding of how to improve the chances that the development of AI goes well
- Communicating why and how people can contribute to reducing the risks
- Connecting our users with impactful roles in this field
- Fostering an internal culture which helps us to achieve these goals
We remain focused on impactful careers, and we plan to keep our existing written and audio content accessible to users. However, we are narrowing our focus as we think that most of the very best ways to have impact with one’s career now involve helping make the transition to a world with AGI go well.
This post goes into more detail on why we’ve updated our strategic direction, how we hope to achieve it, and what we think the community implications might be, and it answers some potential questions.
Why we’re updating our strategic direction
Since 2016, we've ranked ‘risks from artificial intelligence’ as our top pressing problem. Whilst we’ve provided research and support on how to work on reducing AI risks since that point (and before!), we’ve put in varying amounts of investment over time and between programmes.
We think we should consolidate our effort and focus because:
- We think that AGI by 2030 is plausible — and this is much sooner than most of us would have predicted 5 years ago. This is far from guaranteed, but we think the view is compelling based on analysis of the current flow of inputs into AI development and the speed of recent AI progress. We don’t aim to fully defend this claim here (though we plan to publish more on this topic soon in our upcoming AGI career guide), but the idea that something like AGI will plausibly be developed in the next several years is supported by:
- The aggregate forecast of predictions on Metaculus
- Analysis of the constraints to AI scaling from Epoch
- The views of insiders at top AI companies — see here and here for examples; see additional discussion of these views here
- In-depth discussion of the arguments for and against short timelines from Convergence Analysis (written by Zershaaneh, who will be joining our team soon)
- We are in a window of opportunity to influence AGI, before laws and norms are set in place.
- 80k has an opportunity to help more people take advantage of this window. We want our strategy to be responsive to changing events in the world, and we think that prioritising reducing risks from AI is probably the best way to achieve our high-level, cause-impartial goal of doing the most good for others over the long term by helping people have high-impact careers. We expect the landscape to move faster in the coming years, so we’ll need a faster-moving culture to keep up.
While many staff at 80k already regarded reducing risks from AI as our most important priority before this strategic update, our new strategic direction will help us coordinate efforts across the org, prioritise between different opportunities, and put in renewed effort to determine how we can best support our users in helping to make AGI go well.
How we hope to achieve it
At a high level, we are aiming to:
- Communicate more about the risks of advanced AI and how to mitigate them
- Identify key gaps in the AI space where more impactful work is needed
- Connect our users with key opportunities to positively contribute to this important work
To keep ourselves accountable to our high-level aims, we’ve made a more concrete plan. It’s centred around the following four goals:
- Develop deeper views about the biggest risks of advanced AI and how to mitigate them
- By increasing the capacity we put into learning and thinking about transformative AI, its evolving risks, and how to help make it go well.
- Communicate why and how people can help
- Develop and promote resources and information to help people understand the potential impacts of AI and how they can help.
- Contribute positively to the ongoing discourse around AI via our podcast and video programme to help people understand key debates and dispel misconceptions.
- Connect our users to impactful opportunities for mitigating the risks from advanced AI
- By growing our headhunting capacity, doing active outreach to people who seem promising for relevant roles, and driving more attention to impactful roles on our job board.
- Foster an internal culture which helps us to achieve these goals
- By moving quickly and efficiently, increasing automation where possible, and growing capacity. In particular, increasing our content capacity is a major priority.
Community implications
We think helping the transition to AGI go well is a really big deal — so much so that we think this strategic focusing is likely the right decision for us, even through our cause-impartial lens of aiming to do the most good for others over the long term.
We know that not everyone shares our views on this. Some may disagree with our strategic shift because:
- They have different expectations about AI timelines or views on how risky advanced AI might be.
- For example, one of our podcast episodes last year explored the question of why people disagree so much about AI risk.
- They’re more optimistic about 80,000 Hours’ historical strategy of covering many cause areas than about this narrower focus, irrespective of their views about AI.
We recognise that prioritising AI risk reduction comes with downsides and that we’re “taking a bet” here that might not end up paying off. But trying to do the most good involves making hard choices about what not to work on and making bets, and we think it is the right thing to do ex ante and in expectation — for 80k and perhaps for other orgs/individuals too.
If you are thinking about whether you should make analogous updates in your individual career or organisation, some things you might want to consider:
- Whether your current actions line up with your best-guess AI timelines
- Whether — irrespective of what cause you’re working in — it makes sense to update your strategy to shorten your impact-payoff horizons or update your theory of change to handle the possibility and implications of transformative AI (TAI)
- Whether to apply to speak to our advisors if you’re weighing up an AI-focused career change
- What impact-focused career decisions make sense for you, given your personal situation and fit
- While we think that most of the very best ways to have impact with one’s career now come from helping AGI go well, we still don’t think that everyone trying to maximise the impact of their career should be working on AI.
On the other hand, 80k will now be focusing less on broader EA community building and will do little to no investigation into impactful career options in non-AI-related cause areas. This means that these areas will be more neglected, even though we still plan to keep our existing content up. We think there is room for people to create new projects in this space, e.g. an organisation focused on biosecurity and/or nuclear security careers advice outside of where those areas intersect with AI. (Note that we still plan to advise on how to help biosecurity go well in a world of transformative AI, and other intersections of AI and other areas.) We are also glad that there are existing organisations in this space, such as Animal Advocacy Careers and Probably Good, as well as orgs like CEA focusing on EA community building.
Potential questions you might have
What does this mean for non-AI cause areas?
Our existing written and audio content isn’t going to disappear. We plan for it to still be accessible to users, though written content on non-AI topics may not be featured or promoted as prominently in the future. We expect that many users will still get value from our backlog of content, depending on their priorities, skills, and career stage. Our job board will continue listing roles which don’t focus on preventing risks from AI, but will raise its bar for these roles.
But we’ll be hugely raising our bar for producing new content on topics that aren’t relevant for making the transition to AGI go well. The topics we think are relevant here are relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity. When deciding what to work on, we’re asking ourselves “How much does this work help make AI go better?”, rather than “How AI-related is it?”
We’re doing this because we don’t currently have enough content and research capacity to cover AI safety well and want to do that as a first priority. Of course, there are a lot of judgement calls to make in this area: Which podcast guests might bring in a sufficiently large audience? What skills and cause-agnostic career advice are sufficiently relevant to making AGI go well? Which updates, like our recent mirror bio updates, are above the bar to make even if they’re not directly related to AI? One decision we’ve already made is going ahead with traditionally publishing our existing career guide, since the content is nearly ready, we have a book deal, and we think that it will increase our reach as well as help people develop an impact mindset about their careers — which is helpful for our new, more narrow goals as well.
We don't have precise answers to all of these questions. But as a general rule, it’s probably safe to assume 80k won’t be releasing new articles on topics which don’t relate to making AGI go well for the foreseeable future.
How big a shift is this from 80k’s status quo?
At the most zoomed-out level of “What does 80k do?”, this isn’t that big a change — we’re still focusing on helping people to use their careers to have an impact, we’re still taking the actions which we think will help us do the most good for sentient beings from a cause-impartial perspective, and we’re still ranking risks from AI as the top pressing problem.
But we’d like this strategic direction to cause real change at 80k — significantly shifting our priorities and organisational culture to focus more of our attention on helping AGI go well.
The extent to which that’ll cause noticeable changes to each programme's strategy and delivery depends on each team’s existing prioritisation and how costly it is for them to divide their attention between cause areas. For example:
- Advising has already been prioritising speaking to people interested in mitigating risks from AI, whereas the podcast has been covering a variety of topics.
- Continuing to add non-AGI jobs to our job board doesn’t significantly trade off against finding new AGI job postings, whereas writing non-AGI articles for our site would come at the expense of writing AGI-focused articles.
Are EA values still important?
Yes!
As mentioned, we’re still using EA values (e.g. those listed here and here) to determine what to prioritise, including in making this strategic shift.
And we still think it’s important for people to use EA values and ideas as they’re thinking about and pursuing high-impact careers. Some particular examples which feel salient to us:
- Scope sensitivity and thinking on the margin seem important for having an impact in any area, including helping AGI go well.
- We think there are some roles or areas of work where it’s especially important to continually use EA-style ideas and stay steadfastly pointed at having a positive impact in order for working in that area to be good at all. For example, roles where it’s possible to do a large amount of accidental harm, like working at an AI company, or roles where you have a lot of influence in steering an organisation's direction.
- There are also a variety of areas where EA-style thinking about issues like moral patienthood, neglectedness, and leverage is still incredibly useful, e.g. grand challenges humanity may face due to explosive progress from transformatively powerful AI.
We have also appreciated that EA’s focus on collaborativeness and truthseeking has meant that people encouraged us to interrogate whether our previous plans were in line with our beliefs about AI timelines. We also appreciate that it’ll mean that people will continue to challenge our assumptions and ideas, helping us to improve our thinking on this topic and to increase the chance we’ll learn if we’re wrong.
What would cause us to change our approach?
This is now our default strategic direction, and so we'll have a reasonably high threshold for changing the overall approach.
We care most about having a lot of positive impact, and while this strategic plan is our current guess of how we'll achieve that, we aim to be prepared to change our minds and plans if the evidence changes.
Concretely, we’re planning to identify the kinds of signs that would cause us to notice this strategic plan was going in the wrong direction in order to react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.
The goals mentioned above, and the actions towards them, are specific to 2025, though we intend the strategy to be effective for the foreseeable future. After 2025, we’ll revisit our priorities and see which goals and aims make sense going forward.
For context, I’m an AI safety researcher and I think the stance that AGI is by far the #1 issue is defensible, although not my personal view.
I would like to applaud 80,000 Hours for several things here.
1. Taking decisive action based on their convictions, even if it might be unpopular.
2. Announcing that action publicly and transparently.
3. Responding to comments on this post and engaging with people’s concerns.
However, several aspects of this move leave me feeling disappointed.
1. This feels like a step away from “Effective Altruism is a Question (not an ideology)”, which I think is something that makes EA special. If you’ll pardon the oversimplification, to me this decision has the vibe of “Good news everyone, we figured out how to do the most good and it’s working on AGI!” I’m not sure to what extent that is the actual belief of 80k staff, but that’s the vibe I get from this post.
2. For better or for worse, I think 80k wields tremendous influence in the EA community, and it seems likely to me that this decision will shift the overall tenor and composition of EA as a movement. Given that, it seems a bit weird to me that this decision was made based on the beliefs of a small subset of the community (80k staff), especially since my impression is that “AGI is by far the #1 issue” is not the median EA’s view (I could be wrong here though). 80k is a private organization, and I’m not saying there should have been a public vote or something, but I think the views of 80k staff are not the only relevant views for this type of decision.
Overall, there’s a crucial difference between (A) helping people do the most good according to *their* definition and views, or (B) helping people do the most good according to *your* definition and views. One could argue that (B) is always better, since after all, those are your views. But I think that neglects important second-order effects such as the value of a community.
It may be true that (B) is better in this specific case if the benefits outweigh those costs. It’s also not clear to me whether 80k fully subscribes to (B) or is just shifting in that direction. More broadly, I’m not claiming that 80k made the wrong decision: I think it's totally plausible that 80k is 100% correct and AGI is so pressing that, even given the above drawbacks, the shift is completely worth it. But I wanted to make sure these drawbacks were raised.
Questions for 80k staff members (if you’re still reading the comments):
1. Going forward, do you view your primary goal more as (A) helping people do the most good according to their own definition and views, or (B) helping people do the most good according to your definition and views? (Of course it can be some combination)
2. If you agree that your object-level stance on AGI differs from the median EA’s, do you have any hypotheses for why? Example reasons could be (A) you have access to information that other people don't, (B) you believe people are in semi-denial about the urgency of AGI, (C) you believe that your definition of positive impact differs significantly from the median EA’s.