TL;DR
In a sentence:
We are shifting our strategic focus to put our proactive effort towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.
In more detail:
We think it’s plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that’s been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.
During 2025, we are prioritising:
- Deepening our understanding as an organisation of how to improve the chances that the development of AI goes well
- Communicating why and how people can contribute to reducing the risks
- Connecting our users with impactful roles in this field
- And fostering an internal culture which helps us to achieve these goals
We remain focused on impactful careers, and we plan to keep our existing written and audio content accessible to users. However, we are narrowing our focus as we think that most of the very best ways to have impact with one’s career now involve helping make the transition to a world with AGI go well.
This post goes into more detail on why we’ve updated our strategic direction, how we hope to achieve it, and what the community implications might be; it also answers some potential questions.
Why we’re updating our strategic direction
Since 2016, we've ranked ‘risks from artificial intelligence’ as our top pressing problem. Whilst we’ve provided research and support on how to work on reducing AI risks since that point (and before!), we’ve put in varying amounts of investment over time and between programmes.
We think we should consolidate our effort and focus because:
- We think that AGI by 2030 is plausible — and this is much sooner than most of us would have predicted 5 years ago. This is far from guaranteed, but we think the view is compelling based on analysis of the current flow of inputs into AI development and the speed of recent AI progress. We don’t aim to fully defend this claim here (though we plan to publish more on this topic soon in our upcoming AGI career guide), but the idea that something like AGI will plausibly be developed in the next several years is supported by:
- The aggregate forecast of predictions on Metaculus
- Analysis of the constraints to AI scaling from Epoch
- The views of insiders at top AI companies — see here and here for examples; see additional discussion of these views here
- In-depth discussion of the arguments for and against short timelines from Convergence Analysis (written by Zershaaneh, who will be joining our team soon)
- We are in a window of opportunity to influence AGI, before laws and norms are set in place.
- 80k has an opportunity to help more people take advantage of this window. We want our strategy to be responsive to changing events in the world, and we think that prioritising reducing risks from AI is probably the best way to achieve our high-level, cause-impartial goal of doing the most good for others over the long term by helping people have high-impact careers. We expect the landscape to move faster in the coming years, so we’ll need a faster-moving culture to keep up.
While many staff at 80k already regarded reducing risks from AI as our most important priority before this strategic update, our new strategic direction will help us coordinate efforts across the org, prioritise between different opportunities, and put in renewed effort to determine how we can best support our users in helping to make AGI go well.
How we hope to achieve it
At a high level, we are aiming to:
- Communicate more about the risks of advanced AI and how to mitigate them
- Identify key gaps in the AI space where more impactful work is needed
- Connect our users with key opportunities to positively contribute to this important work
To keep us accountable to our high-level aims, we’ve made a more concrete plan. It’s centred around the following four goals:
- Develop deeper views about the biggest risks of advanced AI and how to mitigate them
- By increasing the capacity we put into learning and thinking about transformative AI, its evolving risks, and how to help make it go well.
- Communicate why and how people can help
- Develop and promote resources and information to help people understand the potential impacts of AI and how they can help.
- Contribute positively to the ongoing discourse around AI via our podcast and video programme to help people understand key debates and dispel misconceptions.
- Connect our users to impactful opportunities for mitigating the risks from advanced AI
- By growing our headhunting capacity, doing active outreach to people who seem promising for relevant roles, and driving more attention to impactful roles on our job board.
- Foster an internal culture which helps us to achieve these goals
- In particular, by moving quickly and efficiently, by increasing automation where possible, and by growing capacity. Increasing our content capacity is a major priority.
Community implications
We think helping the transition to AGI go well is a really big deal — so much so that we think this strategic focusing is likely the right decision for us, even through our cause-impartial lens of aiming to do the most good for others over the long term.
We know that not everyone shares our views on this. Some may disagree with our strategic shift because:
- They have different expectations about AI timelines or views on how risky advanced AI might be.
- For example, one of our podcast episodes last year explored the question of why people disagree so much about AI risk.
- They’re more optimistic about 80,000 Hours’ historical strategy of covering many cause areas than about this narrower focus, irrespective of their views about AI.
We recognise that prioritising AI risk reduction comes with downsides and that we’re “taking a bet” here that might not end up paying off. But trying to do the most good involves making hard choices about what not to work on and making bets, and we think it is the right thing to do ex ante and in expectation — for 80k and perhaps for other orgs/individuals too.
If you are thinking about whether you should make analogous updates in your individual career or organisation, some things you might want to consider:
- Whether your current actions line up with your best-guess AI timelines
- Whether — irrespective of what cause you’re working in — it makes sense to update your strategy to shorten your impact-payoff horizons or update your theory of change to handle the possibility and implications of TAI
- Whether to apply to speak to our advisors if you’re weighing up an AI-focused career change
- What impact-focused career decisions make sense for you, given your personal situation and fit
- While we think that most of the very best ways to have impact with one’s career now come from helping AGI go well, we still don’t think that everyone trying to maximise the impact of their career should be working on AI.
On the other hand, 80k will now be focusing less on broader EA community building and will do little to no investigation into impactful career options in non-AI-related cause areas. This means that these areas will be more neglected, even though we still plan to keep our existing content up. We think there is space for people to create new projects here, e.g. an organisation focused on careers advice for biosecurity and/or nuclear security outside of where those areas intersect with AI. (Note that we still plan to advise on how to help biosecurity go well in a world of transformative AI, and on other intersections between AI and other areas.) We are also glad that there are existing organisations in this space, such as Animal Advocacy Careers and Probably Good, as well as orgs like CEA focusing on EA community building.
Potential questions you might have
What does this mean for non-AI cause areas?
Our existing written and audio content isn’t going to disappear. We plan for it to still be accessible to users, though written content on non-AI topics may not be featured or promoted as prominently in the future. We expect that many users will still get value from our backlog of content, depending on their priorities, skills, and career stage. Our job board will continue listing roles which don’t focus on preventing risks from AI, but will raise its bar for these roles.
But we’ll be hugely raising our bar for producing new content on topics that aren’t relevant for making the transition to AGI go well. The topics we think are relevant here are relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity. When deciding what to work on, we’re asking ourselves “How much does this work help make AI go better?”, rather than “How AI-related is it?”
We’re doing this because we don’t currently have enough content and research capacity to cover AI safety well, and we want to do that as a first priority. Of course, there are a lot of judgement calls to make in this area: Which podcast guests might bring in a sufficiently large audience? What skills and cause-agnostic career advice are sufficiently relevant to making AGI go well? Which updates, like our recent mirror bio updates, are above the bar to make even if they’re not directly related to AI? One decision we’ve already made is to go ahead with traditionally publishing our existing career guide, since the content is nearly ready, we have a book deal, and we think that it will increase our reach as well as help people develop an impact mindset about their careers — which is helpful for our new, narrower goals as well.
We don't have precise answers to all of these questions. But as a general rule, it’s probably safe to assume that, for the foreseeable future, 80k won’t be releasing new articles on topics which don’t relate to making AGI go well.
How big a shift is this from 80k’s status quo?
At the most zoomed out level of “What does 80k do?”, this isn’t that big a change — we’re still focusing on helping people to use their careers to have an impact, we’re still taking the actions which we think will help us do the most good for sentient beings from a cause-impartial perspective, and we’re still ranking risks from AI as the top pressing problem.
But we’d like this strategic direction to cause real change at 80k — significantly shifting our priorities and organisational culture to focus more of our attention on helping AGI go well.
The extent to which that’ll cause noticeable changes to each programme's strategy and delivery depends on the team’s existing prioritisation and on how costly it is for that team to divide its attention between cause areas. For example:
- Advising has already been prioritising speaking to people interested in mitigating risks from AI, whereas the podcast has been covering a variety of topics.
- Continuing to add non-AGI jobs to our job board doesn’t significantly trade off with finding new AGI job postings, whereas writing non-AGI articles for our site would come at the expense of writing AGI-focused articles.
Are EA values still important?
Yes!
As mentioned, we’re still using EA values (e.g. those listed here and here) to determine what to prioritise, including in making this strategic shift.
And we still think it’s important for people to use EA values and ideas as they’re thinking about and pursuing high-impact careers. Some particular examples which feel salient to us:
- Scope sensitivity and thinking on the margin seem important for having an impact in any area, including helping AGI go well.
- We think there are some roles / areas of work where it’s especially important to continually use EA-style ideas and be steadfastly pointed at having a positive impact in order for working in the area to be good — for example, roles where it’s possible to do a large amount of accidental harm, like working at an AI company, or roles where you have a lot of influence in steering an organisation's direction.
- There are also a variety of areas where EA-style thinking about issues like moral patienthood, neglectedness, leverage, etc. is still incredibly useful – e.g. grand challenges humanity may face due to explosive progress from transformatively powerful AI.
We have also appreciated that EA’s focus on collaborativeness and truth-seeking has meant that people have encouraged us to interrogate whether our previous plans were in line with our beliefs about AI timelines. We also appreciate that it’ll mean people will continue to challenge our assumptions and ideas, helping us to improve our thinking on this topic and to increase the chance we’ll learn if we’re wrong.
What would cause us to change our approach?
This is now our default strategic direction, and so we'll have a reasonably high threshold for changing the overall approach.
We care most about having a lot of positive impact, and while this strategic plan is our current guess of how we'll achieve that, we aim to be prepared to change our minds and plans if the evidence changes.
Concretely, we’re planning to identify the kinds of signs that would indicate this strategic plan is heading in the wrong direction, so that we can react quickly if that happens. For example, we might get new information about the likely trajectory of AI, or about our ability to have an impact with our new strategy, that could cause us to re-evaluate our plans.
The goals, and actions towards them, mentioned above are specific to 2025, though we intend the strategy to be effective for the foreseeable future. After 2025, we’ll revisit our priorities and see which goals and aims make sense going forward.
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple comments on other parts of your post in case it’s helpful:
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we want to still communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prio feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clear a break from the past as you might.
I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.