Conor Barnes šŸ”¶

Job Board @ 80,000 Hours

Bio

Substack shill @ parhelia.substack.com

Comments

I really appreciated reading this. It captured a lot of how I feel when I think about having taken the pledge. It's astounding. I think it's worth celebrating, and assuming the numbers add up, I think it's worth grappling with the immensity of having saved a life.

Hey Manuel,

I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work -- we think all of these have important roles to play in a world with a short timeline to AGI.

In terms of where we'll be raising the bar, this will mostly affect global health, animal welfare, and climate postings -- specifically, the effort we put into finding roles in these areas. With global health and animal welfare, we're lucky to have great evaluators like GiveWell and great programs like Charity Entrepreneurship to help us find promising orgs and teams. It's easy for us to share these roles, and I remain excited to do so. However, part of our work involves sourcing new roles and evaluating borderline ones, and much of that time will shift into more AI-safety-focused work.

Cause-neutral job board: It's possible! I think our change makes space for other boards to expand. I also think it creates something of a trifecta, to put it very roughly: the 80k job board with our existential risk focus, Probably Good with a more global health focus, and Animal Advocacy Careers with an animal welfare focus. Given that coverage is already split between these three, effort put into a cause-neutral board might be better spent elsewhere.

I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change to figure out how the job board can best fit into it. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.

I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.

To elaborate on the job board changes mentioned in the post:

  • We will continue listing non-AI-related roles, but we will be raising our bar. Some cause areas we still consider relevant to AGI (for example, pandemic preparedness). For others, we still think the top roles could benefit from talented people with great fit, so we'll continue to post those roles.
  • We'll be highlighting some roles more prominently. Even among the roles we post, we think the best can be much more impactful than the rest. Based on conversations with experts, we have some guesses about which roles these are, and we want to feature them a little more strongly.
  1. Become conversational in Spanish so I can talk to my fiancée's family easily.
  2. Work out ten times per month (3x/week with leeway)
  3. Submit 12 short stories about transformative AI to publishers this year.

    More details here. Ongoing mission: get a literary agent for my novel!

One example I can think of with regard to people "graduating" from philosophies is the idea that people can grow out of arguably "adolescent" political philosophies like libertarianism and socialism. Often this looks like realizing that society is messy and that simple political philosophies don't do a good job of capturing and addressing that messiness.

However, I think EA as a philosophy is more robust than that: there are opportunities to address the immense suffering in the world and to reduce existential risk, some of these opportunities are much more impactful than others, and it's worth looking for and then executing on them. I expect this to be true for a very long time.

In general, I think effective giving is the best opportunity for most people. We often get fixated on the status of working directly on urgent problems, which I think is a huge mistake. Effective giving is a way to have a profound impact, and I don't like to think of it as something just "for mere mortals" -- I think there's something really amazing about people giving a portion of their income every year to save lives and improve health, and I think doing so makes you as much an EA as somebody whose job itself is impactful.

Hi there, I'd like to share some updates from the last month.

Text as of the last update (July 5):

  • OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. We recommend specific positions at OpenAI that we think may be high impact. We do not necessarily recommend working at other jobs at OpenAI. You can read more about considerations around working at a leading AI company in our career review on the topic.

Text as of today:

  • OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at OpenAI that we think may be high impact. We do not necessarily recommend working at other positions at OpenAI. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic. Note that there have also been concerns around OpenAI's HR practices.


The thinking behind these updates has been:

  • We continue to get negative updates concerning OpenAI, so it's good for us to update our guidance accordingly. 
  • While it's unclear exactly what's going on with the NDAs (are they cancelled or are they not?), it's pretty clear that it's in the interest of users to know there's something they should look into with regard to HR practices. 
  • We've tweaked the language to "concerns about doing harm" instead of "considerations" for all three frontier labs to indicate more strongly that these are potentially negative considerations to make before applying. 
  • We don't go into much detail, both to keep things short and so people's eyes don't glaze over -- my guess is that the current text is about the right length for people to notice it and then look into it further via our newly updated AI company article and the Washington Post link.

This is thanks to discussions within 80k and thanks to some of the comments here. While I suspect, @Raemon, that we still don't align on important things, I nonetheless appreciate the prompt to think this through more and I believe that it has led to improvements!

I interpreted the title to mean "Is it a good idea to take an unpaid UN internship?", and it took a bit to realize that isn't the point of the post. You might want to change the title to be clear about what part of the unpaid UN internship is the questionable part!

Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.

(Copied from reply to Raemon)

Yeah, I think this needs updating to something more concrete. We put it up while 'everything was happening' but I've neglected to change it, which is my mistake, and I'll probably prioritize fixing it over the next few days.

Re: whether OpenAI could create a role that isn't truly safety-focused: there have been, and continue to be, safety-ish roles at OpenAI that we don't list because we lack confidence that they're genuinely safety-focused.

For the alignment role in question, I think the team description at the top of the post provides important context for the role's responsibilities:

OpenAI's Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to directly supervise them.

With the above in mind, the role responsibilities seem fine to me. I think this is all pretty tricky, but in general, I've been moving toward looking at this in terms of the teams:

Alignment Science: Per the above team description, I'm excited for people to work there -- though, concerning the question of what evidence would shift me, this would change if the research they release doesn't match the team description.

Preparedness: I continue to think it's good for people to work on this team, as per the description: "This team ... is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models."

Safety Systems: I think roles here depend on what they address. The problems listed in their team description include ones I definitely want people working on (detecting unknown classes of harm, red-teaming to discover novel failure cases, sharing learning across industry, etc.), but it's possible that we should be more restrictive in which roles we list from this team.

I don't feel confident giving a probability here, but I do think there's a crux around me not expecting the above team descriptions to be straightforward lies. It's possible that the teams will have limited resources to achieve their goals, and with the Safety Systems team in particular, I think there's an extra risk of safety work blending into product work. However, my impression is that the teams will continue to work on their stated goals.

I do think it's worthwhile to spell out some evidence that would shift me against listing roles from a team:

  • If a team doesn't publish relevant safety research within something like a year.
  • If a team's stated goal is updated to have less safety focus.

Other notes:

  • We're actually in the process of updating the AI company article.
  • The top-level disclaimer: Yeah, I think this needs updating to something more concrete. We put it up while 'everything was happening' but I've neglected to change it, which is my mistake, and I'll probably prioritize fixing it over the next few days.
  • Thanks for diving into the implicit endorsement point. I acknowledge this could be a problem (and if so, I want to avoid it or at least mitigate it), so I'm going to think about what to do here.