Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here
This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn't intended to represent an official position of Open Phil.
In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs).
If you'd like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit.
In brief, we think:
- Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity.
- More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused.
- We have thus far found it difficult to hire security professionals who aren't motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees.
- More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default.
- It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates who fit their needs (and would hire them now, if they found them).
- It’s plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.
- If people who try this don’t land a direct-work job but do gain the relevant skills, they could still end up in a highly lucrative career in which their skillset is in high demand.
We explain below.
Risks from Advanced AI
As AI capabilities improve, leading AI projects will likely become targets of increasingly sophisticated and well-resourced cyberattacks (by states and other actors) seeking to steal AI-related intellectual property. If such attacks are not mitigated by teams of highly skilled and experienced security professionals, they seem likely to (1) increase the odds that TAI / AGI is first deployed by malicious or incautious actors (who acquired world-leading AI technology by theft), and (2) exacerbate and destabilize potential AI technology races, which could lead to dangerously hasty deployment of TAI / AGI, leaving insufficient time for alignment research, robustness checks, etc.
As far as we know, this is a common view among those who have studied questions of TAI / AGI alignment and strategy for several years, though there remains much disagreement about the details, and about the relative magnitudes of different risks.
Given this, we think a member of such a security team could do a lot of good, if they are better than their replacement and/or they understand the full nature of the AI safety and security challenge better than their replacement (e.g. because they have spent many years thinking about AI from a GCR-reduction angle). Furthermore, being a member of such a team may be a good opportunity to have a more general positive influence on a leading AI project, for example by providing additional demand and capacity for addressing accident risks in addition to misuse risks.
Somewhat separately, there may be substantial use for security expertise in a research context (rather than implementation context). For example:
- Some researchers think that security expertise and/or a "security mindset" of the sort often possessed by security professionals (perhaps in part as a result of professional training and experience) is important for AI alignment research in a fairly general sense.
- Some researchers think that one of the most plausible pre-AGI paths by which AI might have "transformative"-scale impact is via the automation of cyber offense and cyber defense (and perhaps one more than the other), and GCR-focused researchers with security expertise could be especially useful for investigating this possibility and related strategic questions.
- Safe and beneficial development and deployment of TAI / AGI may require significant trust and cooperation between multiple AI projects and states. Some researchers think that such cooperative arrangements may benefit from (potentially novel) cryptographic solutions for demonstrating to others (and verifying for oneself) important properties of leading AI projects (e.g. how compute is being used). Potentially relevant techniques include zero knowledge proofs, secure multi-party computation, differential privacy methods, or smart contracts. (E.g. see the explorations in Martic et al. 2018.)
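As a toy illustration of one such technique, here is a minimal sketch of additive secret sharing, a basic building block of secure multi-party computation. The compute-reporting scenario and all numbers are hypothetical, and real verification schemes for AI projects would be far more involved:

```python
import secrets

# Toy additive secret sharing: each party splits its private value into
# random shares that sum to the value mod a large prime. Summing everyone's
# shares reveals only the total, never any individual input.

PRIME = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(value, n_parties):
    """Split `value` into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares):
    """Recover the sum of the shared values modulo PRIME."""
    return sum(shares) % PRIME

# Hypothetical example: three labs privately report compute usage,
# and only the total is learned.
inputs = [1200, 450, 830]
all_shares = [share(v, 3) for v in inputs]
# Each party i collects the i-th share from every lab and publishes the subtotal.
subtotals = [sum(col) % PRIME for col in zip(*all_shares)]
total = reconstruct(subtotals)
print(total)  # 2480
```

No party here learns another's input from its shares alone; production systems would additionally need to handle dishonest parties, which is where the heavier cryptographic machinery (e.g. zero-knowledge proofs) comes in.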
Biosecurity and biorisk
Efforts to reduce biorisks may involve working with information on particular potential risks and strategies for reducing them. In general, information generated for the purpose of predicting the actions of or thwarting a bad actor may be of interest to that actor. This information could cause harm if potential bioterrorists or states aiming to advance or initiate bioweapons programs obtain it. Concerns about these kinds of information hazards hamper our and our grantees’ ability to study important aspects of biorisk.
For example, someone studying countermeasure research and development for different types of pathogens might uncover and take note of vulnerabilities in existing systems for the purposes of patching those vulnerabilities, but could inadvertently inform a bad actor about weaknesses in the current system.
Our impression is that many people in the national security community who focus on biosecurity believe that some state bioweapon programs are currently operating, and we worry that these programs may expand as advances in synthetic biology facilitate the development of more sophisticated and/or inexpensive bioweapons (making these programs more appealing from a state's perspective). We also think state actors are the ones most likely to execute sophisticated cyberattacks.
Because of the above, we expect security work in this space to be very important but potentially very challenging.
In February 2018, Open Phil began a preliminary search for a full-time information security expert to help our grantees with the above issues. We hoped to find someone who could work on assessing the feasibility of different security measures and their plausible effect size as deterrents, assisting grantees in implementing security measures, and helping build up the field of infosec experts trying to reduce GCRs. So far, our search has been unsuccessful.
Why do we think our preliminary search has been challenging, and why do we expect that to continue, and apply to our grantees?
We’ve consistently heard, from relatively senior security professionals and candidates for our role, that it’s a “seller’s market”, and thus generally challenging and expensive (in funds and time) to attract top talent.
Specifically, our impression is that talented security experts often have many attractive job options to choose from, often involving managing large teams to handle security needs of very large-scale, intellectually engaging projects, and pay in the range of six to seven figures.
Our situation and needs (and those of some of our grantees) are unconventional, and working with us likely won’t confer as much prestige or career capital in the field as other options we’d expect a talented potential hire to have (e.g. taking a job at a large tech company).
Our needs are also varied, and may not cleanly map to a well-recognized job profile (e.g. Security Analyst or Chief Information Security Officer), making the option less attractive to risk-averse candidates.
Our connections in the field are limited, which makes attracting and evaluating candidates more challenging for us. (An additional benefit of more GCR-focused people entering the space is that we’d likely end up with trusted advisors who understand our situation and constraints and can help us assess the talent and fit of others.)
We’re particularly cautious about hiring someone we think is likely to end up with access to sensitive information and knowledge of the vulnerabilities of relevant systems.
And, as a funder, Open Phil runs the special risk of inadvertently pressuring grantees to interact with someone we hire, even if they have misgivings. This makes us want to be more cautious than if we were hiring someone that only we would work with on sensitive projects.
Potential fit for GCR-focused people
In brief, security experts may be able to address the concerns listed above by:
- Developing threat models to identify, e.g., probable attackers and their capabilities, potential attack vectors, and which assets are most vulnerable/desirable and in need of protection.
- Evaluating and prioritizing systems, policies, and practices to defend against potential threats.
- Assessing feasible levels of risk reduction to inform choices about lines of research to pursue for a given level of acceptable risk.
- Implementing, maintaining, and auditing those systems, policies, and practices.
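The threat-modeling and prioritization steps above can be sketched very roughly as scoring each (attacker, asset, vector) combination by estimated likelihood and impact, then ranking. All names and numbers below are hypothetical, and real threat models are much richer than a single product of two point estimates:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    attacker: str       # who might attack (e.g. a well-resourced state actor)
    asset: str          # what they would target
    vector: str         # how they might get in
    likelihood: float   # rough annual probability estimate, 0-1
    impact: float       # relative severity if the attack succeeds, 0-10

    def risk(self):
        # crude expected-harm score; real models would use distributions
        return self.likelihood * self.impact

# Hypothetical threats for a hypothetical research organization.
threats = [
    Threat("state actor", "model weights", "spearphishing of staff", 0.30, 9.0),
    Threat("criminal group", "donor records", "credential stuffing", 0.20, 4.0),
    Threat("insider", "research notes", "excessive access rights", 0.10, 7.0),
]

# Rank threats by risk score to decide where defenses matter most.
for t in sorted(threats, key=Threat.risk, reverse=True):
    print(f"{t.risk():.2f}  {t.attacker} -> {t.asset} via {t.vector}")
```

Even a crude ranking like this can make explicit which defenses are worth prioritizing, which is most of the value of the exercise.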
Additionally, we think GCR-focused people who enter the field for the purpose of direct work might be especially helpful, compared to potential hires with similar levels of experience and innate talent, but without preexisting interest in GCR reduction. For example:
- For both AI and bio, they might focus relatively more on strategies for resisting state actors.
- On AI, they might focus relatively more on issues of special relevance to TAI / AGI alignment and strategy.
- On biorisks, they might focus relatively more on working with academics and think tanks.
- They might be more familiar with and skilled at deploying epistemic tools like making predictions, calibration training, explicit cost-effectiveness analyses, adjustments for the unilateralist's curse, and scope-sensitive approaches to risk reduction, which might be useful on the object level as well as for interacting with some other staff at the relevant organizations.
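As a toy example of the explicit cost-effectiveness analyses mentioned above, one might compare candidate security measures by estimated risk reduction per dollar. Every measure and number here is made up purely for illustration:

```python
# Hypothetical security measures: (annual cost in $,
# estimated reduction in annual breach probability).
measures = {
    "hardware security keys for all staff": (20_000, 0.05),
    "full-time security engineer":          (250_000, 0.15),
    "annual external penetration test":     (60_000, 0.03),
}

# Expected breach-probability reduction per $100k spent, as a crude
# first-pass prioritization metric.
for name, (cost, risk_reduction) in measures.items():
    per_100k = risk_reduction / cost * 100_000
    print(f"{name}: {per_100k:.3f} per $100k")
```

In this made-up example the cheap measure dominates per dollar spent, though a real analysis would also account for diminishing returns and interactions between measures.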
We expect security work on GCR reduction to be more attractive to GCR-focused people with security expertise than to otherwise-similar security experts, and we expect the downsides above to weigh less heavily for them. We also expect the “seller’s market” for security professionals to work in favor of people who pursue this path after reading this post: even if they don’t find a role doing direct work on GCR reduction, they could find themselves in a lucrative profession doing intellectually engaging work.
We’re unsure how many roles requiring significant security expertise and experience will eventually be available in the GCR reduction space, but we think:
- There’s probably currently demand for ~3-15 such people (mostly in AI-related roles),
- It’s more likely than not that in 10 years, there will be demand for >25 security experts in GCR-reduction-focused roles, and
- It's at least "plausible" that in 10 years there will be demand for >75 security experts in GCR-reduction-focused roles, if TAI/AGI projects grow and cyberattacks against them intensify sharply and increase in sophistication.
We think it’s worth further exploring security as a potential career path for GCR-focused people, and if that investigation bears out the basic reasoning above, we hope people who think they might be a fit for this work seriously consider moving into the space. That said, we expect the training to be very challenging, and we’re unsure what it would involve or how many people would succeed (of those who try), so given our uncertainties we’re especially wary of making strong recommendations. We’ve discussed this reasoning with staff at 80,000 Hours, who are considering further research into this career path.
These roles seem most promising to consider for someone who already has a technical background, could train in information security relatively quickly, and might be interested in working in the field even if they don’t end up working directly in GCR reduction. Additional desiderata include a security mindset, discretion, and comfort doing confidential work for extended periods of time.
Our current best guess is that people who are interested should consider seeking security training in a top team in industry, such as by working on security at Google or another major tech company, or maybe in relevant roles in government (such as in the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.
However, please bear in mind that we haven’t done much investigation into the details of how best to pursue this path. If you’re considering making a switch, we’d suggest doing your own research into how best to do it and your likely degree of fit. We’d also only suggest making the switch if you’d be comfortable with the risk of not landing a job directly relevant to GCR reduction within the next couple of years.
[edit: the form is no longer open] If you’re interested in pursuing this career path, or already have experience in information security, you can fill out this form (managed by 80,000 Hours, and accessible to some staff at 80,000 Hours and Open Philanthropy), and 80,000 Hours may be able to provide additional advice or introductions at some point in the future.
Many thanks to staff at 80,000 Hours, CSET, FHI, MIRI, OpenAI, and Open Phil, as well as Ethan Alley, James Eaton-Lee, Jeffrey Ladish, Kevin Esvelt, and Paul Crowley, for their feedback on this post.
For example, even if an AI project has enough of a lead over its competitors to not be worried about being "scooped" (over some time frame, with respect to some set of capabilities), its leadership will probably be more willing to invest in extensive safety and validation checks if they are also confident the technology won't be stolen while those checks are conducted. ↩︎
This paragraph is especially inspired by some thinking on this topic by Miles Brundage. ↩︎
We’re here referring to deskwork, as opposed to bench research on biological agents, which seems to us to be substantially more risky overall and requires a different set of expertise (expertise in biosafety) to do safely, in addition to information security expertise. ↩︎
Information hazards aren’t a big concern for natural biorisks, but our work so far suggests that anthropogenic outbreaks, especially those generated by state actors, constitute much of the risk of a globally catastrophic biological event. ↩︎
See e.g. the Arms Control Association’s Chemical and Biological Weapons Status at a Glance and the September 18, 2018 Press Briefing on the National Biodefense Strategy (ctrl+f “convention” to find the relevant comments quickly) for public comments on this claim. We think our assertion here is not controversial within the national security community working on biosecurity, and conversations with people in that community were also important in persuading us that state BW programs are probably ongoing. ↩︎