The claim that "biosecurity often selects for people who already have a wealth of experience in their respective fields" doesn't seem that obvious to me. Looking at the number of biosecurity roles we've posted on the 80,000 Hours job board in 2025, broken down by experience tag, I see:
I agree with you that there aren't a ton of opportunities...
There are some great replies here from career advisors -- I'm not one, but I want to mention that I got into software engineering without a university degree. I'm hesitant to recommend software engineering as the safe and well-paying career it once was, but I think learning how to code is still a great way to quickly develop useful skills without requiring a four-year degree!
@Sudhanshu Kasewa has enlisted me for this one!
I think earning to give is a really strong option and indeed the best option for many people.
Lack of supply is definitely an issue, though it can be helped by looking for impactful opportunities outside of "EA orgs" per se -- I don't know if this is your scenario, but limiting the search to EA orgs is a common problem. Knowing nothing about a person's situation and location, I'd prompt:
A clarification: We would not post a role we judged to be net harmful in the hope that somebody would counterfactually do less harm in it. I think that would be too morally fraught to propose to a stranger.
Relatedly, we would not post a job where we thought that to have a positive impact, you'd have to do the job badly.
We might post roles if we thought the average entrant would make the world worse, but a job board user would make the world better (due to the EA context our applicants typically have!). No cases of this come to mind immediately though...
Hi Geoffrey,
I'm curious to know which roles we've posted which you consider to be capabilities development -- our policy is to not post capabilities roles at the frontier companies. We do aim to post jobs that are meaningfully able to contribute to safety and aren’t just safety-washing (and our views are discussed much more in depth here). Of course, we're not infallible, so if people see particular jobs they think are safety in name only, we always appreciate that being raised.
I strongly agree with @Bella's comment. I'd like to add:
If your strategy is to just apply to open hiring rounds, such as through job ads that are listed on the 80,000 Hours job boards, you are cutting your chances of landing a role by ~half. It’s hard to know the exact figure, but I wouldn’t be surprised if as many as 30-50% of paid roles in the movement aren’t filled through traditional open hiring rounds ...
This is my impression as well, though heavily skewed by experience level. I'd estimate that >80% of senior "hires" in the movement occur without a public posting, and something like 20% of jun...
Hey Manuel,
I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work -- we think all of these have important roles to play in a world with a short timeline to AGI.
In terms of where we’ll be raising the bar, this will mostly affect global health, animal welfare, and climate postings — specifically ...
I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.
I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.
To elaborate on the job board changes mentioned in the post:
One example I can think of regarding people "graduating" from philosophies is the idea that people can grow out of arguably "adolescent" political philosophies like libertarianism and socialism. Often this looks like people realizing that society is messy and that simple political philosophies don't do a good job of capturing and addressing this.
However, I think EA as a philosophy is more robust than the above: There are opportunities to address the immense suffering in the world and to address existential risk; some of these opportunities are much mo...
Hi there, I'd like to share some updates from the last month.
Text during last update (July 5):
Text as of today:
Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.
Re: whether OpenAI could create a role that isn't truly safety-focused: there have been, and continue to be, OpenAI safety-ish roles that we don’t list because we lack confidence they’re safety-focused.
For the alignment role in question, I think the team description given at the top of the post gives important context for the role’s responsibilities:
OpenAI’s Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to direct...
Thanks.
Fwiw, while writing the above, I did also think "hmm, I should also have some cruxes for what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.
It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but I do think this position doesn't make sense:
The arguments you give all sound like reasons OpenAI safety positions could be beneficial. But I find them completely swamped by all the evidence that they won't be, especially given how much evidence OpenAI has hidden via NDAs.
But let's assume we're in a world where certain people could do meaningful safety work at OpenAI. What are the chances those people need 80k to tell them about it? OpenAI is the biggest, most publicized AI company in the world; if Alice only finds out about OpenAI jobs via 80k, that's prima facie evidence she won't make a contributio...
Hi, I run the 80,000 Hours job board. Thanks for writing this out!
I agree that OpenAI has demonstrated a significant level of manipulativeness and have lost confidence in them prioritizing existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because th...
Insofar as you are recommending the jobs but not endorsing the organization, I think it would be good to be fairly explicit about this in the job listing. The current short description of OpenAI seems pretty positive to me:
OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. You can read more about considerations around working at a leading AI company in our career review on the topic. They are also currently the subject of news stories relating to their safety work.
I think this should say someth...
I think that given the 80k brand (which is about helping people to have a positive impact with their career), it's very hard for you to have a jobs board which isn't kinda taken by many readers as endorsement of the orgs. Disclaimers help a bit, but it's hard for them to address the core issue — because for many of the orgs you list, you basically do endorse the org (AFAICT).
I also think it's a pretty different experience for employees to turn up somewhere and think they can do good by engaging in a good faith way to help the org do whatever it's doing, an...
These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this!
I think given that these jobs involved being pressured via extensive legal blackmail into signing secret non-disparagement agreements that forced people to never criticize OpenAI, under great psychological stress and at substantial cost to many outsiders who were trying to assess OpenAI, I don't agree with this assessment.
Safety people have been substantially harmed by working at OpenAI, and safety work at OpenAI can have substantial negative externalities.
Hey Conor!
Regarding
we don’t conceptualize the board as endorsing organisations.
And
contribute to solving our top problems or build career capital to do so
It seems like EAs expect the 80k job board to suggest high impact roles, and this has been a misunderstanding for a long time (consider looking at that post if you haven't). The disclaimers were always there, but EAs (including myself) still regularly looked at the 80k job board as a concrete path to impact.
I don't have time for a long comment, just wanted to say I think this matters.
Nod, thanks for the reply.
I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.
The things I'd argue here:
I think this is a good policy and broadly agree with your position.
It's a bit awkward to mention, but since you've said that you've delisted other roles at OpenAI and that OpenAI has acted badly before, I think you should consider explicitly saying on the OpenAI job board cards that you don't necessarily endorse other roles at OpenAI and suspect that some of them may be harmful.
I'm a little worried about people seeing OpenAI listed on the board and inferring that the 80k recommendation somewhat transfers to other roles at OpenAI (which, imo, is a reasonable heuristic for most companies listed on the board - but fails in this specific case).
Hi Remmelt,
Just following up on this — I agree with Benjamin’s message above, but I want to add that we actually did add links to the “working at an AI lab” article in the org descriptions for leading AI companies after we published that article last June.
It turns out that the links to these got accidentally removed a few weeks ago when we made some related changes in Airtable, and we didn’t notice they were missing — thanks for bringing this to our attention. We’ve added them back in and think they give good context for job board users, and we’re certain...
I think this is a joke, but for those who have less-explicit feelings in this direction:
I strongly encourage you to not join a totalizing community. Totalizing communities are often quite harmful to members and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.
Rereading your post, I'd also strongly recommend prioritizing finding ways not to spend all your free time on it. Not only do I think that level of fixation is one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!
One thing I've seen suggested is dedicating a set time each day to researching your questions. This is a compromise that frees up the rest of your time for things that don't hurt your head. And hang out with friends who are good at distracting you!
I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, there are a few things I'd like to suggest:
It's pretty common in values-driven organisations to ask for a degree of value-alignment. The other day I helped a friend with a resume for an organisation that asked applicants to care about its feminist mission.
In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.
This isn't exactly what I'm looking for (though I do think that concept needs a word).
The way I'm conceptualizing it right now is that there are three non-existential outcomes:
1. Catastrophe
2. Sustenance / Survival
3. Flourishing
If you look at Toby Ord's prediction, he includes a number for flourishing, which is great. There isn't a matching prediction in the Ragnarok series, so I've squeezed 2 and 3 together as a "non-catastrophe" category.
Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percentage of the Ragnarok prediction (i.e. 100% minus the catastrophe probability) rather than having their own prediction. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).
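For anyone curious what that "implicit inverse" amounts to in practice, here's a minimal sketch of the idea. The names and data shape below are hypothetical illustrations, not the project's actual code; it just shows deriving a combined "non-catastrophe" (survival + flourishing) figure from a Ragnarok-style catastrophe probability:

```typescript
// Minimal sketch: derive the "non-catastrophe" (survival + flourishing) figure
// as the inverse of a Ragnarok-style catastrophe probability.
// All names here are hypothetical, not the project's actual code.

interface RagnarokPrediction {
  question: string;               // e.g. a Metaculus Ragnarok-series question title
  catastropheProbability: number; // community probability, in the range 0..1
}

function nonCatastropheProbability(prediction: RagnarokPrediction): number {
  // Green node value = 1 - P(catastrophe); there is no separate flourishing
  // forecast in the Ragnarok series to split this any further.
  return 1 - prediction.catastropheProbability;
}

// Example usage:
const example: RagnarokPrediction = {
  question: "Global catastrophe by 2100?",
  catastropheProbability: 0.2,
};
console.log(nonCatastropheProbability(example)); // 0.8
```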
Nice!! This is pretty similar to a project Nuño Sempere and I are working on, inspired by this proposal:
https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=vi7zALLALF39R6exF
I'm currently building the website for it while Nuño works on the data. I suspect these are compatible projects and there's an effective way to link up!
Location: Halifax, Canada
Remote: Yes
Willing to relocate: No
Skills:
- Tech: JavaScript/TypeScript, CSS, React, React Native, Node, Go, Rust
- Writing: Ex. https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life. Also see https://conorbarnes.com/blog
Resume: Portfolio with resume link! https://conorbarnes.com/work
Email: conorbarnes93@gmail.com
Notes:
- Preferably full-time.
- Cause neutral.
- Availability: Anytime!
- Role: Web dev / software engineering
- EA Background:
-- Following since 2015.
-- Giving What We Can pledge since 2019.
-- 1Day ...
Great points. I would love to see more of this too!