Hey Manuel,
I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work -- we think all of these have important roles to play in a world with a short timeline to AGI.
In terms of where we'll be raising the bar, this will mostly affect global health, animal welfare, and climate postings -- specifically in the effort we put into finding roles in these areas. With global health and animal welfare, we're lucky to have great evaluators like GiveWell and great programs like Charity Entrepreneurship to help us find promising orgs and teams. It's easy for us to share these roles, and I remain excited to do so. However, part of our work involves sourcing new roles and evaluating borderline roles. Much of this time will shift into more AIS-focused work.
Cause-neutral job board: It's possible! I think that our change makes space for other boards to expand. I also think that this creates something of a trifecta, to put it very roughly: The 80k job board with our existential risk focus, Probably Good with a more global health focus, and Animal Advocacy Careers with an animal welfare focus. It's possible that effort put into a cause-neutral board could be better put elsewhere, given that there's already coverage split between these three.
I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.
I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.
To elaborate on the job board changes mentioned in the post:
One example I can think of with regard to people "graduating" from philosophies is the idea that people can graduate out of arguably "adolescent" political philosophies like libertarianism and socialism. Often this looks like people realizing that society is messy and that simple political philosophies don't do a good job of capturing and addressing this.
However, I think EA as a philosophy is more robust than the above: there are opportunities to address the immense suffering in the world and to address existential risk; some of these opportunities are much more impactful than others; and it's worth looking for and then executing on these opportunities. I expect this to be true for a very long time.
In general I think effective giving is the best opportunity for most people. We often get fixated on the status of directly working on urgent problems, which I think is a huge mistake. Effective giving is a way to have a profound impact, and I don't like to think of it as something just "for mere mortals" -- I think there's something really amazing about people giving a portion of their income every year to save lives and improve health, and I think doing so makes you as much an EA as somebody whose job itself is impactful.
Hi there, I'd like to share some updates from the last month.
Text during the last update (July 5):
Text as of today:
The thinking behind these updates has been:
This is thanks to discussions within 80k and to some of the comments here. While I suspect, @Raemon, that we still don't align on important things, I nonetheless appreciate the prompt to think this through more, and I believe it has led to improvements!
Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.
Re: whether OpenAI could create a role that isn't truly safety-focused: there have been, and continue to be, OpenAI safety-ish roles that we don't list because we lack confidence they're safety-focused.
For the alignment role in question, I think the team description given at the top of the post gives important context for the role's responsibilities:
OpenAI's Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to directly supervise them.
With the above in mind, the role responsibilities seem fine to me. I think this is all pretty tricky, but in general, I've been moving toward looking at this in terms of the teams:
Alignment Science: Per the above team description, I'm excited for people to work there -- though, concerning the question of what evidence would shift me, this would change if the research they release doesn't match the team description.
Preparedness: I continue to think it's good for people to work on this team, as per the description: "This team … is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models."
Safety Systems: I think roles here depend on what they address. I think the problems listed in their team description include problems I definitely want people working on (detecting unknown classes of harm, red-teaming to discover novel failure cases, sharing learning across industry, etc.), but it's possible that we should be more restrictive in which roles we list from this team.
I don't feel confident giving a probability here, but I do think there's a crux here around me not expecting the above team descriptions to be straightforward lies. It's possible that the teams will have limited resources to achieve their goals, and with the Safety Systems team in particular, I think there's an extra risk of safety work blending into product work. However, my impression is that the teams will continue to work on their stated goals.
I do think it's worthwhile to think of some evidence that would shift me against listing roles from a team:
Other notes:
I really appreciated reading this. It captured a lot of how I feel when I think about having taken the pledge. It's astounding. I think it's worth celebrating, and assuming the numbers add up, I think it's worth grappling with the immensity of having saved a life.