Benjamin is a research analyst at 80,000 Hours. Before joining 80,000 Hours, he worked for the UK Government and did some economics and physics research.
Totally agree! Indeed, there's a classic 80k article about this.
When working out your next steps, we tend to recommend working forwards from what you know, and working backwards from where you might want to end up (see our article on finding your next career steps). We also think people should explore more with their careers (see our article on career exploration).
If there are areas where we're giving the opposite message, I'd love to know – shoot me an email or DM?
Hi Remmelt,
Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it’s really good to have open conversations about them.
Ultimately, we disagree with you that it's net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.
I argue for this position extensively in my article on the topic (and we only list roles consistent with the considerations in that article).
Some other things we’ve published on this topic in the last year or so:
Benjamin
Most of our advice on actually having an impact — rather than building career capital — is highly relevant to mid-career professionals. That's because they're entering their third career stage (https://80000hours.org/career-guide/career-planning/#three-career-stages), i.e. actually trying to have an impact. When you’re mid-career, it's much more important to appropriately:
So we hope mid-career people can get a lot out of reading our articles. In particular, I'd suggest reading our advanced series (https://80000hours.org/advanced-series/).
Thanks for this comment, Tyler!
To clarify what I mean by unknown unknowns, here's a climate-related example: We're uncertain about the strength of various feedback loops, like how much warming could be produced by cloud feedbacks. We'd then classify "cloud feedbacks" as a known unknown. But we're also uncertain about whether there are feedback loops we haven't identified. Since we don't know what these might be, these loops are unknown unknowns. As you say, the known feedback loops don't seem likely to warm the Earth enough to cause the complete destruction of civilisation, which means that if climate change were to lead to civilisational collapse, that would probably be because of something we failed to consider.
But here's the thing: generally we do know something about unknown unknowns.[1] In the case of these unknown feedback loops, we can place some constraints on them. For example:
In fact, we can gather a broad variety of evidence about these unknown unknowns, using various different lines of evidence. These lines of evidence include:
Accounting for these multiple lines of evidence is exactly what the 6th Assessment Report attempts to do when calculating climate sensitivity (how much Earth's surface will cool or warm after a specified factor causes a change in its climate system):[3]
In AR6 [the 6th Assessment report], the assessments of ECS [equilibrium climate sensitivity] and TCR [transient climate response] are made based on multiple lines of evidence, with ESMs [earth system models] representing only one of several sources of information. The constraints on these climate metrics are based on radiative forcing and climate feedbacks assessed from process understanding (Section 7.5.1), climate change and variability seen within the instrumental record (Section 7.5.2), paleoclimate evidence (Section 7.5.3), emergent constraints (Section 7.5.4), and a synthesis of all lines of evidence (Section 7.5.5). In AR5 [the 5th assessment report], these lines of evidence were not explicitly combined in the assessment of climate sensitivity, but as demonstrated by Sherwood et al. (2020) their combination narrows the uncertainty ranges of ECS compared to that assessed in AR5.
That is, as I mentioned in the main post "the IPCC's Sixth Assessment Report... attempts to account for structural uncertainty and unknown unknowns. Roughly, they find it’s unlikely that all the various lines of evidence are biased in just one direction — for every consideration that could increase warming, there are also considerations that could decrease it."
As a result, even when accounting for unknown unknowns, it looks extremely unlikely that anthropogenic warming could heat the earth enough to cause complete civilisational collapse (for a discussion of how hot that would need to be, see the first section of the main post!).
If you're interested in diving into this further, I'd suggest taking a look at the original paper "An Assessment of Earth's Climate Sensitivity Using Multiple Lines of Evidence" by Sherwood et al., or Why low-end 'climate sensitivity' can now be ruled out, a popular summary by the paper's authors.
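To illustrate the general statistical point (with made-up numbers, not the actual AR6 or Sherwood et al. figures): when several independent lines of evidence each give a noisy estimate of the same quantity, combining them yields a narrower uncertainty range than any single line on its own. A minimal sketch, treating each hypothetical line of evidence as a Gaussian estimate:

```python
import math

# Hypothetical estimates of climate sensitivity (degrees C) from three
# independent lines of evidence, each as (mean, standard deviation).
# These numbers are purely illustrative, NOT the actual AR6 values.
lines_of_evidence = [
    (3.2, 1.0),   # e.g. process understanding
    (2.8, 1.2),   # e.g. the instrumental record
    (3.4, 1.1),   # e.g. paleoclimate evidence
]

# For independent Gaussian likelihoods, precisions (1 / variance) add,
# and the combined mean is the precision-weighted average of the means.
total_precision = sum(1 / sd**2 for _, sd in lines_of_evidence)
combined_mean = sum(m / sd**2 for m, sd in lines_of_evidence) / total_precision
combined_sd = math.sqrt(1 / total_precision)

print(f"Combined estimate: {combined_mean:.2f} +/- {combined_sd:.2f} C")
# The combined standard deviation is smaller than that of any single line,
# which is the sense in which pooling evidence narrows the uncertainty range.
```

This toy model assumes the lines of evidence are independent and unbiased, which is roughly the structure of the argument above: it's unlikely that every line of evidence is biased in the same direction.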
It's of course true that there are some kinds of unknown unknowns that are impossible to account for — that is, things about which we have no information. But these are rarely particularly important unknown unknowns, in part because of that very lack of information: if we have no information about something, we necessarily have no evidence for its existence, so by Occam's razor it's inherently unlikely.
At least, in macroscopic systems. You can have negative absolute temperatures in systems with a population inversion (like a laser while it's lasing), although these systems are generally considered thermodynamically hotter than positive-temperature systems (because heat flows from the negative temperature system to the positive temperature system).
From the introduction to section 7.5 of the Working Group I contribution to the Sixth Assessment Report (p.993).
I don't currently have a confident view on this beyond "We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs."
But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!
Thanks, this is an interesting heuristic, but I don't find it as valuable as you do.
First, while I do think it'd probably be harmful in expectation to work at leading oil companies / at the Manhattan project, I'm not confident in that view — I just haven't thought about this very much.
Second, I think that AI labs are in a pretty different reference class from oil companies and the development of nuclear weapons.
Why? Roughly:
Because these issues are difficult and we don’t think we have all the answers, I also published a range of opinions about a related question in our anonymous advice series. Some of the respondents took a very sceptical view of any work that advances capabilities, but others disagreed.
Hi Yonatan,
I think that for many people (but not everyone) and for many roles they might work in (but not all roles), this is a reasonable plan.
Most importantly, I think it's true that working at a top AI lab as an engineer is one of the best ways to build technical skills (see the section above on "it's often excellent career capital").
I'm more sceptical about the ability to push towards safe decisions (see the section above on "you may be able to help labs reduce risks").
The right answer here depends a lot on the specific role. I think it's important to remember that not all AI capabilities work is necessarily harmful (see the section above on "you might advance AI capabilities, which could be (really) harmful"), and that top AI labs could be some of the most positive-impact organisations in the world (see the section above on "labs could be a huge force for good - or harm"). On the other hand, there are roles that seem harmful to me (see "how can you mitigate the downsides of this option").
I'm not sure of the relevance of "having a good understanding of how to do alignment" to your question. I'd guess that much of knowing "how to do alignment" is being very good at ML engineering or ML research in general, and that working at a top AI lab is one of the best ways to learn those skills.
Thanks Vasco! I'm working on a longer article on exactly this question (how pressing is nuclear risk). I'm not quite sure what I'll end up concluding yet, but your work is a really helpful input.