
In the spirit of Career Conversations Week, I'm throwing out some quick questions that I hope are also relevant for others in a similar position!

I'm an early-career person aiming to have a positive impact on AI safety. For a couple of years, I've been building toward a career in technical AI safety research by:

  • Publishing ML safety research projects
  • Doing a Master's degree in machine learning at a top ML school
  • Generally focusing on building technical ML research skills and experience at the expense of other forms of career capital.

However, I'm now much more strongly considering paths to impact that route through AI governance, including AI policy, rather than pure technical alignment research. Since I still feel pretty junior, I think I have room to explore a bit. That said, I'm not junior enough to have a fresh degree ahead of me (e.g., the option to study public policy from scratch), and I feel I have strong technical ML skills and knowledge, including explaining technical concepts to non-technical audiences, that I want to leverage.

What are some of the best ways for people like me to transition from technical AI safety research roles into more explicit AI governance and policy? So far, I'm only really aware of:

  • Policy fellowships that might take technical researchers without policy experience, like the Horizon, STPI, PMF, or STPF fellowships
  • Policy positions in top AI labs, which are themselves important for AI governance and could transition well into other AI governance careers
  • Policy research positions that require significant technical knowledge at organizations like GovAI
  • Some vague notion of "being a trusted scientific advisor to key decision makers in DC or London," though I'm not sure what this practically looks like or how to get there.

Any other ideas? Or for those who have been in a similar situation, how have you thought about this?

This post is part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.



3 Answers

Here are some different options for people with technical backgrounds to pivot into policy careers:

  • Policy fellowships (see our database here) are a great entryway into DC policy careers for technical people (and for many non-technical people!). Fellowships are especially helpful for mid-career technical folks who would otherwise struggle to make the pivot because they are (1) too senior for the normal entryways (e.g., policy internships [such as in Congress], a policy-oriented (under)graduate degree, or junior policy jobs) and (2) lack the policy experience to qualify for mid-level or senior policy jobs. There are policy fellowships for people at all experience levels.
  • Check out our advice on policy internships (the linked post targets undergraduates, but the internship advice applies more widely), which are the most common way for junior people to test their fit for policy and build policy-relevant career capital, whether they have technical or non-technical backgrounds.
    • You might also conduct a policy-relevant research project during a summer/winter research fellowship offered by organizations like GovAI, ERA, CHERI, SERI, and XLab.
  • If you’re currently enrolled in a technical (under)graduate degree, try to gain policy-relevant knowledge, networks, and skills by taking some policy classes if you can (especially in science and technology policy) or choosing a policy-relevant thesis project.
  • Participate in policy-relevant online programs, like the AI Safety Fundamentals Course’s Governance Track, speaker series like this, or these AI policy and biosecurity policy online courses.
  • Consider doing a policy-relevant graduate degree, particularly a policy master’s or law degree. You can often get into these degree programs even if you have only done technical work in the past (ideally, you should be able to tell a narrative about how your interest in policy work is connected to your prior technical studies and/or work experience). Even if you already have a technical graduate degree, it might make sense to do another (short/part-time) policy degree if you’re serious about pivoting into policy but are otherwise struggling to make the switch.

One brief comment on mindset: Policy jobs typically don’t require people to have a particular subject background, though there are exceptions. There are plenty of people with STEM degrees and technical work experience who have pivoted into policy roles, often focused on science and technology (S&T) policy areas, where they can leverage their technical expertise for added credibility and impact. There are certain policy roles and institutions that prefer people with technical backgrounds, such as many roles in the White House OSTP, NSF, DOE, NIH, etc. So, you shouldn't feel like it's impossible to pivot from technical to policy work, and there are resources to help you with this pivot. We particularly recommend speaking with an 80,000 Hours career adviser about this. 

This is sublime, thank you!

(Mostly I don't know.)

On policy fellowships: also RAND TASP.

I think many reasonably important policy roles, such as working for key congressional committees or federal agencies, don't require prior policy experience.

Reposting an anonymous addition from someone who works in policy:

Your list of options mostly matches how I think about this. I would add:

  • Based on several anecdotal examples, the main path I’m aware of for becoming a trusted technical advisor is to start with a relevant job (like a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job) and gradually earn a reputation for being a helpful expert. To earn that reputation, some things you can do are: become one of the people who knows most about some niche but important area (anecdotally, “just” a few years of learning can be sufficient to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience; there are also generalists who serve as trusted technical advisors); take opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally be nice and respect confidentiality. You don’t need to be a US citizen to do this in the US context.
  • In addition to GovAI, other orgs where people can do technical research for AI policy include:
    • RAND and Epoch AI
    • Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
    • AI labs
1 Comment

Hmm, I’d be very keen to see what an answer to this might look like. I know some people I work with are interested in making a similar kind of switch.
