TL;DR
In a sentence:
We are shifting our strategic focus to put our proactive effort towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.
In more detail:
We think it’s plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that’s been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.
During 2025, we are prioritising:
- Deepening our understanding as an organisation of how to improve the chances that the development of AI goes well
- Communicating why and how people can contribute to reducing the risks
- Connecting our users with impactful roles in this field
- And fostering an internal culture which helps us to achieve these goals
We remain focused on impactful careers, and we plan to keep our existing written and audio content accessible to users. However, we are narrowing our focus as we think that most of the very best ways to have impact with one’s career now involve helping make the transition to a world with AGI go well.
This post goes into more detail on why we’ve updated our strategic direction, how we hope to achieve it, what we think the community implications might be, and answers some potential questions.
Why we’re updating our strategic direction
Since 2016, we've ranked ‘risks from artificial intelligence’ as our top pressing problem. Whilst we’ve provided research and support on how to work on reducing AI risks since that point (and before!), we’ve put in varying amounts of investment over time and between programmes.
We think we should consolidate our effort and focus because:
- We think that AGI by 2030 is plausible — and this is much sooner than most of us would have predicted 5 years ago. This is far from guaranteed, but we think the view is compelling based on analysis of the current flow of inputs into AI development and the speed of recent AI progress. We don’t aim to fully defend this claim here (though we plan to publish more on this topic soon in our upcoming AGI career guide), but the idea that something like AGI will plausibly be developed in the next several years is supported by:
- The aggregate forecast of predictions on Metaculus
- Analysis of the constraints to AI scaling from Epoch
- The views of insiders at top AI companies — see here and here for examples; see additional discussion of these views here
- In-depth discussion of the arguments for and against short timelines from Convergence Analysis (written by Zershaaneh, who will be joining our team soon)
- We are in a window of opportunity to influence AGI, before laws and norms are set in place.
- 80k has an opportunity to help more people take advantage of this window. We want our strategy to be responsive to changing events in the world, and we think that prioritising reducing risks from AI is probably the best way to achieve our high-level, cause-impartial goal of doing the most good for others over the long term by helping people have high-impact careers. We expect the landscape to move faster in the coming years, so we’ll need a faster moving culture to keep up.
While many staff at 80k already regarded reducing risks from AI as our most important priority before this strategic update, our new strategic direction will help us coordinate efforts across the org, prioritise between different opportunities, and put in renewed effort to determine how we can best support our users in helping to make AGI go well.
How we hope to achieve it
At a high level, we are aiming to:
- Communicate more about the risks of advanced AI and how to mitigate them
- Identify key gaps in the AI space where more impactful work is needed
- Connect our users with key opportunities to positively contribute to this important work
To keep us accountable to our high level aims, we’ve made a more concrete plan. It’s centred around the following four goals:
- Develop deeper views about the biggest risks of advanced AI and how to mitigate them
- By increasing the capacity we put into learning and thinking about transformative AI, its evolving risks, and how to help make it go well.
- Communicate why and how people can help
- Develop and promote resources and information to help people understand the potential impacts of AI and how they can help.
- Contribute positively to the ongoing discourse around AI via our podcast and video programme to help people understand key debates and dispel misconceptions.
- Connect our users to impactful opportunities for mitigating the risks from advanced AI
- By growing our headhunting capacity, doing active outreach to people who seem promising for relevant roles, and driving more attention to impactful roles on our job board.
- Foster an internal culture which helps us to achieve these goals
- In particular, by moving quickly and efficiently, by increasing automation where possible, and by growing capacity. Increasing our content capacity is a major priority.
Community implications
We think helping the transition to AGI go well is a really big deal — so much so that we think this strategic focusing is likely the right decision for us, even through our cause-impartial lens of aiming to do the most good for others over the long term.
We know that not everyone shares our views on this. Some may disagree with our strategic shift because:
- They have different expectations about AI timelines or views on how risky advanced AI might be.
- For example, one of our podcast episodes last year explored the question of why people disagree so much about AI risk.
- They’re more optimistic about 80,000 Hours’ historical strategy of covering many cause areas rather than this narrower strategic shift, irrespective of their views about AI.
We recognise that prioritising AI risk reduction comes with downsides and that we’re “taking a bet” here that might not end up paying off. But trying to do the most good involves making hard choices about what not to work on and making bets, and we think it is the right thing to do ex ante and in expectation — for 80k and perhaps for other orgs/individuals too.
If you are thinking about whether you should make analogous updates in your individual career or organisation, some things you might want to consider:
- Whether the way you’re acting lines up with your best-guess timelines
- Whether — irrespective of what cause you’re working in — it makes sense to update your strategy to shorten your impact-payoff horizons or update your theory of change to handle the possibility and implications of TAI
- Applying to speak to our advisors if you’re weighing up an AI-focused career change
- What impact-focused career decisions make sense for you, given your personal situation and fit
- While we think that most of the very best ways to have impact with one’s career now come from helping AGI go well, we still don’t think that everyone trying to maximise the impact of their career should be working on AI.
On the other hand, 80k will now be focusing less on broader EA community building and will do little to no investigation into impactful career options in non-AI-related cause areas. This means that these areas will be more neglected, even though we still plan to keep our existing content up. We think there is room for people to create new projects in this space, e.g. an organisation focused on biosecurity and/or nuclear security careers advice outside of their intersections with AI. (Note that we still plan to advise on how to help biosecurity go well in a world of transformative AI, and on other intersections of AI with other areas.) We are also glad that there are existing organisations in this space, such as Animal Advocacy Careers and Probably Good, as well as orgs like CEA focusing on EA community building.
Potential questions you might have
What does this mean for non-AI cause areas?
Our existing written and audio content isn’t going to disappear. We plan for it to still be accessible to users, though written content on non-AI topics may not be featured or promoted as prominently in the future. We expect that many users will still get value from our backlog of content, depending on their priorities, skills, and career stage. Our job board will continue listing roles which don’t focus on preventing risks from AI, but will raise its bar for these roles.
But we’ll be hugely raising our bar for producing new content on topics that aren’t relevant for making the transition to AGI go well. The topics we think are relevant here are relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity. When deciding what to work on, we’re asking ourselves “How much does this work help make AI go better?”, rather than “How AI-related is it?”
We’re doing this because we don’t currently have enough content and research capacity to cover AI safety well and want to do that as a first priority. Of course, there are a lot of judgement calls to make in this area: which podcast guests might bring in a sufficiently large audience? What skills and cause-agnostic career advice is sufficiently relevant to making AGI go well? Which updates, like our recent mirror bio updates, are above the bar to make even if they’re not directly related to AI? One decision we’ve already made is going ahead with traditionally publishing our existing career guide, since the content is nearly ready, we have a book deal, and we think that it will increase our reach as well as help people develop an impact mindset about their careers — which is helpful for our new, more narrow goals as well.
We don't have a precise answer to all of these questions. But as a general rule, it’s probably safe to assume 80k won’t be releasing new articles on topics which don’t relate to making AGI go well for the foreseeable future.
How big a shift is this from 80k’s status quo?
At the most zoomed out level of “What does 80k do?”, this isn’t that big a change — we’re still focusing on helping people to use their careers to have an impact, we’re still taking the actions which we think will help us do the most good for sentient beings from a cause-impartial perspective, and we’re still ranking risks from AI as the top pressing problem.
But we’d like this strategic direction to cause real change at 80k — significantly shifting our priorities and organisational culture to focus more of our attention on helping AGI go well.
The extent to which that’ll cause noticeable changes to each programme's strategy and delivery depends on the team’s existing prioritisation and how costly dividing their attention between cause areas is. For example:
- Advising has already been prioritising speaking to people interested in mitigating risks from AI, whereas the podcast has been covering a variety of topics.
- Continuing to add non-AGI jobs to our job board doesn’t significantly trade off with finding new AGI job postings, whereas writing non-AGI articles for our site would need to be done at the expense of writing AGI-focused articles.
Are EA values still important?
Yes!
As mentioned, we’re still using EA values (e.g. those listed here and here) to determine what to prioritise, including in making this strategic shift.
And we still think it’s important for people to use EA values and ideas as they’re thinking about and pursuing high-impact careers. Some particular examples which feel salient to us:
- Scope sensitivity and thinking on the margin seem important for having an impact in any area, including helping AGI go well.
- We think there are some roles / areas of work where it’s especially important to continually use EA-style ideas and be steadfastly pointed at having a positive impact in order for it to be good to work in the area. For example, in roles where it’s possible to do a large amount of accidental harm, like working at an AI company, or roles where you have a lot of influence in steering an organisation's direction.
- There are also a variety of areas where EA-style thinking about issues like moral patienthood, neglectedness, leverage, etc. is still incredibly useful – e.g. grand challenges humanity may face due to explosive progress from transformatively powerful AI.
We have also appreciated that EA’s focus on collaborativeness and truthseeking has meant that people encouraged us to interrogate whether our previous plans were in line with our beliefs about AI timelines. We also appreciate that it’ll mean that people will continue to challenge our assumptions and ideas, helping us to improve our thinking on this topic and to increase the chance we’ll learn if we’re wrong.
What would cause us to change our approach?
This is now our default strategic direction, and so we'll have a reasonably high threshold for changing the overall approach.
We care most about having a lot of positive impact, and while this strategic plan is our current guess of how we'll achieve that, we aim to be prepared to change our minds and plans if the evidence changes.
Concretely, we’re planning to identify the kinds of signs that would cause us to notice this strategic plan was going in the wrong direction in order to react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.
The goals, and actions towards them, mentioned above are specific to 2025, though we intend the strategy to be effective for the foreseeable future. After 2025, we’ll revisit our priorities and see which goals and aims make sense going forward.
Zach wrote this last year in his first substantive post as CEO of CEA, announcing that CEA will continue to take a “principles-first” approach to EA. (I’m Zach’s Chief of Staff.) Our approach remains the same today: we’re as motivated as ever about stewarding the EA community and ensuring that together we live up to our full potential.
Collectively living up to our full potential ultimately requires making a direct impact. Even under our principles-first approach, impact is our north star, and we exist to serve the world, not the EA community itself. But Zach and I continue to believe there is no other set of principles that has the same transformative potential t...
I'm not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there's enough reason for uncertainty about the top cause and value in a broad community that central EA organizations should not go all-in on a single cause. This seems especially the case for 80,000 Hours, which brings people in by appealing to a general interest in doing good.
Some reasons for thinking cause diversification by the community/central orgs is good:
- From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
- Existential risk is not most self-identified EAs' top cause, and about 30% of self-identified EAs say they would not have gotten involved if it did not focus on their top cause (EA survey). So it does seem like you miss an audience here.
- Organizations like 80,000 Hours set the tone for the community, and I think there are good rule-of-thumb reasons to...
Hey Zach,
(Responding as an 80k team member, though I’m quite new)
I appreciate this take; I was until recently working at CEA, and was in a lot of ways very very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general. It says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era)
At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders and people that are live players, responding to the world as they see it, making sure they aren't missing the biggest thing currently happening (or, if like 80k one of their main jobs is communicating important things, making sure they aren't letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc.). And to the extent tha...
Thanks @ChanaMessinger, I appreciate this comment, and think that the tone you take here is healthier than the original announcement. This well-written sentence of yours captures many of the important issues:
"It could definitely be a mistake even within this framework (by causing 80k to not appeal parts of its potential audience) or empirically (on size of AI risk, or sizes of other problems) or long term (because of the damage it does to the EA community or intellectual lifeblood / eating the seed corn)."
FWIW I think a clear mistake is the poor communication here: the most obvious and serious potential community impacts have been missed, and the tone is poor. If this had been presented in a way that made it look like the most serious potential downsides were considered, I would both feel better about it and be more confident that 80k has done a deep SWOT analysis here, rather than the really basic framing of the post, which is more like...
"AI risk is really bad and urgent let's go all in"
This makes the decision seem not only insensitive but also poorly thought through, which I'm sure is not the case. I imagine the chief concerns of the commenters were discussed at the highest level.
I'm assuming there are comms people at 80k and it surprises me that this would slip through like this.
Thanks for the feedback here. I mostly want to just echo Niel's reply, which basically says what I would have wanted to. But I also want to add for transparency/accountability's sake that I reviewed this post before we published it with the aim of helping it communicate the shift well – I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I'd also done more to help it demonstrate the thought we've put into the tradeoffs involved and awareness of the costs. For what it's worth, we don't have dedicated comms staff at 80k - helping with comms is currently part of my role, which is to lead our web programme.
No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities look probably more cost-effective.
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, the links in my comment I think make the case pretty persuasively that you need longtermism.
The comment you replied to
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn't responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Adding a bit more to my other comment:
For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn't accept fanatical views to prioritise them (though it may require caring some about potential future beings). (We have a bit on this here)
...
Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. And we think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!
In particular, from a web-specific perspective, I feel that the website isn't consistent right now with the possibility of short AI timelines & the possibility that AI might not only pose risks from catastrophic misalignment, but also other risks, plus that it will probably affect many other cause areas. Given the size of ...
To the extent that this post helps me understand what 80,000 Hours will look like in six months or a year, I feel pretty convinced that the new direction is valuable—and I'm even excited about it. But I'm also deeply saddened that 80,000 Hours as I understood it five years ago—or even just yesterday—will no longer exist. I believe that organization should exist and be well-resourced, too.
Like others have noted, I would have much preferred to see this AGI-focused iteration launched as a spinout or sister organization, while preserving even a lean version of the original, big-tent strategy under the 80K banner, and not just through old content remaining online. A multi-cause career advising platform with thirteen years of refinement, SEO authority, community trust, and brand recognition is not something the EA ecosystem can easily replicate. Its exit from the meta EA space leaves a huge gap that newer and smaller projects simply can't fill in the short term.
I worry that this shift weakens the broader ecosystem, making it harder for promising people to find their path into non-AI cause areas—some of which may be essential to navigating a post-AGI world. Even from within an AGI-focused...
Hey Rocky —
Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.
We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point.
But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t really fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to honestly communicate about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.
That said, I've always really valued the fact that 80k can be useful to people who don't agree with all our views. If you're sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice ...
Minor point, but I've seen big tent EA as referring to applying effectiveness techniques to any charity. Then maybe broad current EA causes could be called the middle-sized tent. Then just GCR/longtermism could be called the small tent (which 80k already largely pivoted to years ago, at least considering their impact multipliers). Then just AI could be the very small tent.
I think this is going to be hard for university organizers (as an organizer at UChicago EA).
At the end of our fellowship, we always ask the participants to take some time to sign up for 1-1 career advice with 80k, and this past quarter other organizers and I agreed that we felt somewhat uncomfortable doing this given that we knew 80k was leaning a lot on AI -- since we had presented it as simply being very good for getting advice on all types of EA careers. This shift will probably make it so that we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising to somewhere else (not sure where this will be yet).
Given this, I wanted to know if 80k (or anyone else) has any recommendations on what EA University Organizers in a similar position should do (aside from the linked resources like Probably Good).
Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.
The advice we give during these sessions ends up being broader than just the top EA causes, although we are most helpful in cases where:
— someone is curious about EA/adjacent causes
— someone has graduate school related questions
— someone wants general "how to best navigate college, plan for internships, etc" advice
Do y'all have something similar set up?
As a (now ex-) UChicago organizer and current Organizer Support Program mentor (though this is all in my personal capacity), I share Noah's concerns here.
I see how reasonable actors in 80k's shoes could come to the conclusions they came to, but I think this is a net loss for university groups, which disappoints me — I think university groups are some of the best grounds we have to motivate talented young people to devote their careers to improving the world, and I think the best way to do this is by staying principles-first and building a community around the core ideas of scope sensitivity, scout mindset, impartiality, and recognition of tradeoffs.
I know 80k isn't disavowing these principles, but the pivot does mean 80k is de-emphasizing them.
All this makes me think that 80k will be much less useful to university groups, because it a) makes it much tougher for us to recommend 80k to interested intro fellows (personalized advising, even if it's infrequently granted, is a powerful carrot, and the exercises you have to complete to finish the advising are also very useful), and b) means that university groups will have to find a new advising source for their fresh members who haven't picked a cause-area yet.
Thanks for sharing this update. I appreciate the transparency and your engagement with the broader community!
I have a few questions about this strategic pivot:
On organizational structure: Did you consider alternative models that would preserve 80,000 Hours' established reputation as a more "neutral" career advisor while pursuing this AI-focused direction? For example, creating a separate brand or group dedicated to AI careers while maintaining the broader 80K platform for other cause areas? This might help avoid potential confusion where users encounter both your legacy content presenting multiple cause areas and your new AI-centric approach.
On the EA pathway: I'm curious about how this shift might affect the "EA funnel" - where people typically enter effective altruism through more intuitive cause areas like global health or animal welfare before gradually engaging with longtermist ideas like AI safety. By positioning 80,000 Hours primarily as an AI-focused organization, are you concerned this might make it harder for newcomers to find their way into the community if AI risk arguments initially seem abstract or speculative to them?
On reputational considerations: Have you weighed t...
Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website ("10,000 Hours?"), that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That's something we're still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we're not currently thinking about making an entire new organisation.
Why not?
For one thing, it'd be a lot of work and time, and we feel this shift is urgent.
Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (and telling our audience why we think that).
What would be the reason for keeping one 80k site instead of making a 2nd separate one?
I feel like this argument has been implicitly holding back a lot of EA focus on AI (for better or worse), so thanks for putting it so clearly. I always wonder about the asymmetry of it: what about the reputational benefits that accrue to 80K/EA for correctly calling the biggest cause ever? (If they're correct)
I'm a little sad and confused about this.
First I think it's a bit insensitive that a huge leading org like this would write such a significant post with almost no recognition that this decision is likely to hurt and alienate some people. It's unfortunate that the post is written in a warm and upbeat tone yet is largely bereft of emotional intelligence and recognition of potential harms of this decision. I'm sure this is unintentional but it still feels tone deaf. Why not acknowledge the potential emotional and community significance of this decision, and be a bit more humble in general? Something like...
"We realise this decision could be seen as sidelining the importance of many people's work and could hurt or confuse some people. We encourage you to keep working on what you believe is most important and we realize even after much painstaking thought we're still quite likely to be wrong here.'
I also struggle to understand how this is the best strategy as an onramp for people to EA - assuming that is still part of the purpose of 80k. Yes, there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as ma...
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple comments on other parts of your post in case it’s helpful:
Our purpose is not to get people into EA, but to help solve the world's most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k's relation...
Sorry to hear you found this saddening and confusing :/
Just to share another perspective: To me, the post did not come across as insensitive. I found the tone clear and sober, as I'm used to from 80k content, and I appreciated the explicit mention that there might now be space for another org to cover other cause areas like bio or nuclear.
These trade-offs are always difficult, but as any EA org, 80k should do what they consider highest expected impact overall rather than what's best for the EA community, and I'm glad they're doing that.
Morally, I am impressed that you are doing an in many ways socially awkward and uncomfortable thing because you think it is right.
BUT
I strongly object to you citing the Metaculus AGI question as significant evidence of AGI by 2030. I do not think that when people forecast that question, they are necessarily forecasting when AGI, as commonly understood or in the sense that's directly relevant to X-risk, will arrive. Yes, the title of the question mentions AGI. But if you look at the resolution criteria, all an AI model has to do in order to resolve the question 'yes' is pass a couple of benchmarks involving coding and general knowledge, put together a complicated model car, and imitate. None of that constitutes being AGI in the sense of "can replace any human knowledge worker in any job". For one thing, it doesn't involve any task that is carried out over a time span of days or weeks, but we know that memory and coherence over long time scales is something current models seem to be relatively bad at, compared to passing exam-style benchmarks. It also doesn't include any component that tests the ability of models to learn new tasks at human-like speed, which again, seems to be an is...
I've been very concerned that EA orgs, particularly the bigger ones, would be too slow to orient and react to changes in the urgency of AI risk, so I'm very happy that 80k is making this shift in focus.
Any change this size means a lot of work in restructuring teams, their priorities and what staff is working on, but I think this move ultimately plays to 80k's strengths. Props.
As an AI safety person who believes short timelines are very possible, I'm extremely glad to see this shift.
For those who are disappointed, I think it's worth mentioning that I just took a look at the Probably Good website and it seems much better than the last time I looked. I had previously been a bit reluctant to recommend it, but it now seems like a pretty good resource and I'm sure they'll be able to make it even better with more support.
Given that The 80,000 Hours Podcast is increasing its focus on AI, it's worth highlighting Asterisk Magazine as a good resource for exploring a broader set of EA-adjacent ideas.
I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.
I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.
To elaborate on the job board changes mentioned in the post:
Makes sense, seems like a good application of the principle of cause neutrality: being willing to update on information and focus on the most cost-effective cause areas.
I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.
However, I think it's catastrophically naive to frame the issue as 'helping the transition to AGI go well'. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.
I've seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not in the time frame we seem to have available.
So the key question is -- if there is actually NO credible path for 'helping the transition to AGI go well', should 80k Hours be pursuing a strategy that amounts to a whole lot of cope, and rearranging deck chairs on the Titanic, and gives a false sense of comfort and security to AI devs, and EA people, and politicians, and the general public?
I think 80k Hours has done a lot of harm in the past by encouraging smart young EAs to join AI companies to try to improve their safety cultures from within. As far as I've seen, that strategy has been a huge failure for AI safety, and a huge win for...
Hey Geoffrey,
Niel gave a response to a similar comment below -- I'll just add a few things from my POV:
I'd love to hear in more detail about what this shift will mean for the 80,000 Hours Podcast, specifically.
The Podcast is a much-loved and hugely important piece of infrastructure for the entire EA movement. (Kudos to everyone involved over the years in making it so awesome - you deserve huge credit for building such a valuable brand and asset!)
Having a guest appear on it to talk about a certain issue can make a massive real-world difference, in terms of boosting interest, talent, and donations for that issue. To pick just one example: Meghan Barrett's episode on insects seems to have been super influential. I'm sure that other people in the community will also be able to pick out specific episodes which have made a huge difference to interest in, and real-world action on, a particular issue.
My guess is that to a large extent this boosted activity and impact for non-AI issues does not "funge" massively against work on AI. The people taking action on these different issues would probably not have alternatively devoted a similar level of resources to AI safety-related stuff. (Presumably there is *some* funging going on, but my gut instinct is that it's probably ...
Thanks for your comment and appreciation of the podcast.
I think the short story is that yes, we’re going to be producing much less non-AI podcast content than we previously were — over the next two years, we tentatively expect ~80% of our releases to be AI/AGI focused. So we won’t entirely stop covering topics outside of AI, but those episodes will be rarer.
We realised that in 2024, only around 12 of the 38 episodes we released on our main podcast feed were focused on AI and its potentially transformative impacts. On reflection, we think that doesn’t match the urgency we feel about the issue or how much we should be focusing on it.
This decision involved very hard tradeoffs. It comes with major downsides, including limiting our ability to help motivate work on other pressing problems, along with the fact that some people will be less excited to listen to our podcast once it’s more narrowly focused. But we also think there’s a big upside: more effectively contributing to the conversation about what we believe is the most important issue of this decade.
On a personal level, I’ve really loved covering topics like invertebrate welfare, global health, and wild animal suffering, and I’m very sad we won’t be able to do as much of it. They’re still incredibly important and neglected problems. But I endorse the strategic shift we’re making and think it reflects our values. I’m also sorry it will disappoint some of our audience, but I hope they can understand the reasons we’re making this call.
Thanks for asking. Our definition of impact includes non-human sentient beings, and we don't plan to change that.
From the perspective of someone who thinks AI progress is real and might happen quickly over the next decade, I am happy about this update. Barring Ezra Klein and the Kevin guy from NYT, the majority of mainstream media publications are not taking AI progress seriously, so hopefully this brings some balance to the information ecosystem.
From the perspective of "what does this mean for the future of the EA movement," I feel somewhat negatively about this update. Non-AIS people within EA are already dissatisfied by the amount of attention, talent, and resources that are dedicated to AIS, and I believe this will only heighten that feeling.
I have a complicated reaction.
2. My assumption is that the direction change is motivated by factors like:
An assumption that there are / will be many more net positions to fill in AI safety for the next few years, especially to the extent one thinks that funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)
I would suggest that these kinds of views and assumptions don't imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K's primary target audience.
3. I'm generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I've taken a harder-line stance on this sort of thing to the ...
This seems a reasonable update, and I appreciate the decisiveness and clear communication. I'm excited to see what comes of it!
Is there a possible world in which the non-AI work of 80k could be "divested" or "spun out"? I understand that this in and of itself could be a huge haul and may defeat the purpose of the re-alignment of values -- but could the door remain open to this if someone/another org expressed interest?
Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I'm wondering with which premise 80k disagrees (and what you think about them!). They are indi...
But one could also reason:
(1) There should be (at least) one EA org focused on AI risk career advice; it is important that this org operate at a high level at the present time.
(2) If there should be such an org, it should be -- or maybe can only be -- 80K; it is more capable of meeting criterion (1) quickly than any other org that could try. It already has staff with significant experience in the area and organizational competence to deliver career advising services with moderately high throughput.
(3) Thus, 80K should focus on AI risk career advice.
If one generally accepts both your original three points and these three, I think they are left with a tradeoff to make, focusing on questions like:
Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How has 80k historically approached the mental health effects of exposing younger (i.e. likely to be a bit more neurotic) people to existential risks? I'm thinking in the vein of Here's the exit. Do you/could you recommend alternate paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps a similar thing for factory farming, too?)
For example, I...
Thanks for the update!
Where does this overall leave you in terms of your public association with EA? Many orgs (including ones that are not just focused on AIS) are trying to dissociate themselves from the EA brand due to reputational reasons.
80k is arguably the one org that has the largest audience from the "outside world", while also having close ties with the EA community. Are you guys going to keep the status quo?
I will add my two cents on this in this footnote[1] too, but I would be super curious to hear your thoughts!
- ^ I think in the short term a...
Thanks for the transparency! This is really helpful for coordination.
For anyone interested in what 80k is deprioritizing, this comment section might be a good space to pitch other EA career support ideas and offer support.
There might be space for an organization specifically focused on high school graduates, to help them decide whether, where, and what to study. This might be the most important decision in one's life, especially for people like me who grew up in the countryside without really any intellectual role models and are open to moving abroad ...
I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.
I think there are actions that look like "making AI go well" that actually are worse than not doing anything at all, because things like "keep humans in control over AI" can very easily lead to something like value lock-in, or at least leaving it in the hands of immoral stewards. It's plausible that if ASI is developed and still controlled by humans, hundreds of trillions of animals would suffer, because humans still want to eat meat from an a...
You’re shifting your resources, but should you change your branding?
Focusing on new articles and research about AGI is one thing, but choosing to brand yourselves as an AI-focused career organisation is another.
Personal story (causal thinking): I first discovered the EA principles while researching how to do good in my career, where, aside from 80k, all the well-ranked websites were non-impact focused. If the website had been specifically about AI or existential risk careers, I'm quite sure I would've skipped it and spent years not discovering EA principle...
Will this affect the 80k job board?
Will you continue to advertise jobs in all top cause areas equally, or will the bar for jobs not related to AI safety be higher now?
If the latter, is there space for an additional, cause-neutral job board that could feature all 80k-listed jobs and more from other cause areas?
Arden from 80k here -- just flagging that most of 80k is currently asleep (it's midnight in the UK), so we'll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.
I'm selfishly in favor of this change. My question is: will 80k rebrand itself, perhaps to "N k hours" (where 1 < N < 50)?
Ok, so in the spirit of
[about p(doom|AGI)], and
[is lacking], I ask if you have seriously considered whether
is even possible? (Let alone at all likely from where we stand.)
You (we all) should be devoting a significant fraction of resources toward slowing down/pausi...
Hey Greg! I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems. Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it'd be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.