US Policy Careers


As someone who lives in DC and is part of the EA community here, I wholeheartedly agree with a lot of this!  

To add to a couple of points: 

Non-EAs living in DC tend to be pretty impact-oriented and ambitious. It’s not uncommon for my non-EA peers in DC to be really excited about the work they’re doing and to enjoy talking about their ambitions for impacting the world.

The article "Washington Is Not a Swamp" does a really nice job fleshing this out, describing how (in contrast to the "sharp elbows" stereotypes) most people in DC are very mission-oriented and kind. Similar to the EA community, DC attracts a large number of public-spirited and service-minded individuals, and conversations with (non-EA) friends and colleagues here are often very motivating and inspiring. And though folks in DC often joke about being in a bubble, one refreshing thing about the city is that it attracts people with a wide set of worldviews — in my experience, it's much more intellectually and professionally diverse than other hubs such as the Bay Area.  

You don’t have to work in government. This isn’t exactly a benefit — more of a PSA: it’s not crazy to move to DC, even if you don’t see yourself having a long career in government. It could still make sense to come to DC and work on policy issues from the outside. Jobs in lobbying and think tanks (especially the former) also offer better compensation and better hours than government work does.

I agree, and would go even further to say "you don't have to work in policy". There are lots of industries with large and prestigious presences in DC, including tech (especially in the NoVa area), health care (Bethesda and surroundings), journalism, arts and music, etc.   

 

Jeremy, thanks very much for sharing your path and this reflection. It resonates with much of what we heard from people practicing law in the U.S., or considering it: even the shortest path to a law degree is long and taxing, and it’s common to spend many years of a career doing work that likely isn’t as impactful as the best realistic alternatives. 

Exciting to hear that you’re thinking about a pivot—we’ve heard from other practitioners doing the same, and would love to compare notes and learn from your experience!

Luke, thanks so much for sharing these perspectives—so helpful to have a UK perspective for a topic like this one that comes with so many jurisdictional differences. (I suspect some U.S.-based contributors who are still making tuition or loan payments may have winced at your observation that “you can’t really go broke by going to law school” in the UK…) Cheers!

Molly, thanks so much for this feedback. You’re right to suggest that folks should try reading an edited opinion from a casebook, rather than a full-length original. Thank you! We’ve updated the post to link to an edited version of the Erie opinion that one professor used in a recent civil procedure class, and we’ve added a similar link to a criminal law case, McBoyle v. United States. The linked versions allow readers to expand the elided sections, so folks who would prefer to see the original text can still do so.

Thanks, too, for highlighting name recognition among non-lawyers as an important—if frustrating and arbitrary—consideration. As a note to folks considering this factor in the future: because it’s tricky and context-dependent to figure out how to weigh this factor against others, we’d be happy to try to connect you with someone in the community who can help you think it through; please reach out!

In case it’s helpful, this is from the FAQ document (linked on the OP page):

Writing sample: What is the intended purpose of the writing sample? Does it need to be related to AI or biotechnology? Are there other requirements?

The writing sample is primarily intended to display your ability to write clearly for non-specialist audiences such as policymakers. It is not necessary that the sample be related to AI, biotechnology, or another technology topic. The writing sample can be either published or unpublished work, either analytical or expository in genre, and does not necessarily require any particular type of citation or sourcing. We strongly prefer a single-authored piece, but feel free to submit whatever you think best represents your personal abilities (e.g. if you contributed almost all the writing, and a co-author contributed data analysis, that would be fine). Please do not write something new for the application; you may use older pieces (graduate school or college essays) if needed.

If you are interested in an international politics angle, a relevant recent release is The New Fire: War, Peace, and Democracy in the Age of AI. It covers some of the same basics but is more sophisticated on the geopolitical dimensions than the books you've listed, none of which were written by people with international security expertise.

I put some credence on the MIRI-type view, but we can't simply assume that is how it will go down. What if AGI gets developed in the context of an active international crisis or conflict? Could not a government — the US, China, the UK, etc. — come in and take over the tech, and race to get there first? To the extent that there is some "performance" penalty to a safety implementation, or that implementing safety measures takes time that an opponent could use to deploy first, there will be contexts where not all safety measures are adopted automatically. You could imagine similar dynamics, though less extreme, in an inter-company or inter-lab race situation, where (depending on the perceived stakes) a government might need to step in to prevent premature deployment-for-profit.

The MIRI-type view bakes in a bunch of assumptions about several dimensions of the strategic situation, including: (1) it's going to be clear to everyone that the AGI system will kill everyone without the safety solution, (2) the safety solution is trusted by everyone and not seen as a potential act of sabotage by an outside actor with its own interests, (3) the external context will allow for reasoned and lengthy conversation about these sorts of decisions. This view makes sense within one scenario in terms of the actors involved, their intentions and perceptions, the broader context, the nature of the tech, etc. It's not an impossible scenario, but to bet all your chips on it in terms of where the community focuses its effort (I've similarly witnessed some MIRI staff's "policy-skepticism") strikes me as naive and irresponsible.

Thanks!

In particular, the point about even fewer people (~25) doing applied policy work is super important, to the extent that I think I should edit the post to significantly weaken certain claims.

I appreciate you taking this seriously! I do want to emphasize I'm not very confident in the ~25 number, and I think people with more expansive definitions of "policy" would reach higher numbers (e.g. I wouldn't count people at FHI as doing "policy" work even if they do non-technical work, but my sense is that many EAs lump together all non-technical work under headings such as "governance"/"strategy" and implicitly treat this as synonymous with "policy"). To the extent that it feels crux-y to someone whether the true "policy" number is closer to 25 or 50 or 75, it might be worth doing a more thorough inventory. (I would be highly skeptical of any list that claims it's >75, if you limit it to people who do government policy-related and reasonably high-quality work, but I could be wrong.)

while I think the idea of technical people being in short supply and high demand in policy is generally overrated, that seems like it could be an important consideration

I certainly agree that sometimes technical credentials can provide a boost to policy careers. However, that typically involves formal technical credentials (e.g. a CS graduate degree), and "three months of AISTR self-study" won't be of much use as career capital (and may even be a negative if it requires you to have a strange-looking gap on your CV or an affiliation with a weird-looking organization). A technical job taken for fit-testing purposes at a mainstream-looking organization could indeed, in some scenarios, help open doors to, or provide a boost for, some types of policy jobs. But I don't think that effect is true (or large) often enough to really reduce my concerns about opportunity costs for most individuals.

I definitely don't have state funding for safety research in mind; what I mean is that since I think it's very unlikely that policy permanently stops AGI from being developed, success ultimately depends on the alignment problem being solved.

This question is worth a longer investigation at some point, but I don't see many risk scenarios where a technical solution to the AI alignment problem is sufficient to solve AGI-related risk. For accident-related risk models (in the sense of this framework), solving safety problems is necessary. But even when technical solutions are available, you still need all relevant actors to adopt those solutions, and we know from the history of nuclear safety that the gap between availability and adoption can be big — in that case, decades. In other words, even if technical AI alignment researchers somehow "solve" the alignment problem, government action may still be necessary to ensure adoption (whether government-affiliated labs or private sector actors are the developers). In that case, one could flip your argument: since it's very unlikely that AGI-related risks will be addressed solely through technical means, success ultimately depends on having thoughtful people in government working on these problems. In reality, for safety-related risks, both technical and policy solutions are necessary.

And for security-related risk models, i.e. bad actors using powerful AI systems to cause deliberate harm (potentially up to existential risks in certain capability scenarios), technical alignment research is neither a necessary nor a sufficient part of the solution, but policy is at least necessary. (In this scenario, other kinds of technical research may be more important, but my understanding is that most "safety" researchers are not focused on those sorts of problems.)

From a policy perspective, I think some of the claims here are too strong.

This post lays out some good arguments in favor of AISTR work, but I don't think it's super informative about the comparative value of AISTR versus other work (such as policy), nor does it convince me that spending months on AISTR-type work is relevant even for policy people who have high opportunity costs and many other things they could (or need to) be learning. As Linch commented: "there just aren't that many people doing longtermist EA work, so basically every problem will look understaffed, relative to the scale of the problem".

Based on my experience in the field for a few years, my gut estimate for the number of people doing relevant, high-quality AI policy work is in the low dozens (not counting people who do more high-level/academic "AI strategy" research, most of which is not designed for policy relevance). I'm not convinced that on the current margin, for a person who could do both, the right choice is to become the 200th person going into AISTR rather than the 25th person going into applied AI policy.

Specific claims:

For people who eventually decide to do AI policy/strategy research, early exploration in AI technical material seems clearly useful, in that it gives you a better sense of how and when different AI capabilities might develop and helps you distinguish useful and “fake-useful” AI safety research, which seems really important for this kind of work. (Holden Karnofsky says “I think the ideal [strategy] researcher would also be highly informed on, and comfortable with, the general state of AI research and AI alignment research, though they need not be as informed on these as for the previous section [about alignment].”)

Re: the Karnofsky quote, I also think there's a big difference between "strategy" and "policy". If you're doing strategy research to inform e.g. OP's priorities, that's pretty different from doing policy research to inform e.g. the US government's decision-making. This post seems to treat them as interchangeable but there's a pretty big distinction. I myself do policy so I'll focus on that here.

For policy folks, this kind of understanding might be useful, but I think in many cases the opportunity costs are too high. My prior (not having thought about this very much) is that maybe 25% of AI policy researchers/practitioners should spend significant time on this (especially if their policy work is related to AI safety R&D and related topics), so that there are people literate in both fields. But overall it's good to have a division of labor. I have not personally encountered a situation in 3+ years of policy work (not on AI safety R&D) where being able to personally distinguish between "useful and fake-useful" AI safety research would have been particularly helpful. And if I had, I would have just reached out to 3-5 people I trust instead of taking months to study the topic myself.

I argue that most ... policy professionals should build some fairly deep familiarity with the field in order to do their jobs effectively

Per the above, I think "most" is too strong. I also think sequencing matters here. I would only encourage policy professionals to take time to develop strong internal models of AISTR once it has proven useful for their policy work, instead of assuming ex ante (before someone starts their policy career) that it will prove useful and so should/could be done beforehand. There are ways to upskill in technical fields while you're already in policy.

Even people who decide to follow a path like “accumulate power in the government/private actors to spend it at critical AI junctures,” it seems very good to develop your views about timelines and key inputs; otherwise, I am concerned that they will not be focused on climbing the right ladders or will not know who to listen to. Spending a few months really getting familiar with the field, and then spending a few hours a week staying up to date, seems sufficient for this purpose.

Again, division of labor and a prudent approach to deference can get you the benefits without these opportunity costs. I also think that in many cases it's simply not realistic to expect successful policy professionals to spend "a few hours a week staying up to date" with an abstract and technical literature. Every week, the list of things I want to read that have some chance of being decision/strategy-relevant is already >3x as long as what I have time for.

While other careers in the AI space — policy work ... — can be very highly impactful, that impact is predicated on the technical researchers, at some point, solving the problems, and if a big fraction of our effort is not on the object-level problem, this seems likely to be a misallocation of resources.

I think this assumes a particular risk model from AI that isn't the only risk model. Unless I am misreading you, this assumes policy success looks like getting more funding for AI technical research. But it could also look like affecting the global distribution of AI capabilities, slowing down/speeding up general AI progress, targeting deliberate threat actors (i.e. "security" rather than "safety"), navigating second-order effects from AI (e.g. destabilizing great power relations or nuclear deterrence) rather than direct threats from misaligned AI, and many other mechanisms. Another reason to focus on policy careers is that you can flexibly pivot between problems depending on how our threat models and prioritization evolve (e.g. doing both AI and bio work at the same time, countering different AI threat models at different times).

Thanks for the detailed update!

There was one expectation/takeaway that surprised me.

Getting sympathetic founders from adjacent networks to launch new projects related to our areas of interest - Worse than expected. We thought that maybe there was a range of people who aren't on our radar yet (e.g., tech founder types who have read The Precipice) who would be interested in launching projects in our areas of interest if we had accessible explanations of what we were hoping for, distributed the call widely, and made the funding process easy. But we didn’t really get much of that. Instead, most of the applications we were interested in came from people who were already working in our areas of interest and/or from the effective altruism community. So this part of the experiment performed below our expectations.

You mentioned the call was open for three weeks. Would that have been sufficient for people who are not already deeply embedded in EA networks to formulate a coherent and fundable idea (especially if they currently have full-time jobs)? It seems likely that this kind of "get people to launch new projects" effect would require more runway. If so, the data from this round shouldn't update one's priors very much on this question.
