US Policy Careers



Great question! This is really up to the office. There are many examples of international students or other non-citizens interning in congressional offices, and there are no strict rules against it on an institutional level. So it's possible. But some offices may decide they only want US citizens, and even ones that don't have such a rule still prefer people with ties to their district/state, which might disqualify/disadvantage certain non-citizens. On the whole, I'd say that if you're an international student or other non-citizen, you'd probably require support from a structured university semester in DC program (discussed in the post) or very warm connections to insiders if you wanted to get an internship offer.  

There's also the question of the relative payoff of doing a congressional internship as a non-citizen. Many of the benefits of internships come from their being good stepping stones into full-time post-graduation jobs. But getting a full-time job in Congress, or elsewhere in government/policy, often does require citizenship or at least work authorization — while there's often flexibility with internships, full-time jobs can involve more of an immigration headache. We've written a bit about this here. Whether a congressional internship is still worth it for a non-citizen would vary case-by-case (depending on opportunity cost, future plans, how much time you'd need to invest to get the internship, etc.).

And just in case any green card holders read this: the most important distinction is probably not between citizens and non-citizens, but between "US persons" (citizens or permanent residents) and others. Once you have a green card (i.e. permanent residency), most Congressional and policy jobs will be open to you, and you'd have a relatively smooth path to citizenship.

All else equal, it is probably best to use a writing sample that is closer to the kind of work you'd be doing during the fellowship, which is more like research reports (you can read some of STPI's work products online). But if you have nothing like that that's high-quality, no time to edit or write something else, and the mock NSF grant demonstrates the qualities STPI is looking for, it could be an okay choice.

As someone who lives in DC and is part of the EA community here, I wholeheartedly agree with a lot of this!  

To add to a couple of points: 

Non-EAs living in DC tend to be pretty impact-oriented and ambitious. It’s not uncommon for my non-EA peers in DC to be really excited about the work they’re doing and enjoy talking about their ambitions for impacting the world.

The article "Washington Is Not a Swamp" does a really nice job fleshing this out, describing how (in contrast to the "sharp elbows" stereotypes) most people in DC are very mission-oriented and kind. Similar to the EA community, DC attracts a large number of public-spirited and service-minded individuals, and conversations with (non-EA) friends and colleagues here are often very motivating and inspiring. And though folks in DC often joke about being in a bubble, one refreshing thing about the city is that it attracts people with a wide set of worldviews — in my experience, it's much more intellectually and professionally diverse than other hubs such as the Bay Area.  

You don’t have to work in government. This isn’t exactly a benefit — more of a PSA: it’s not crazy to move to DC, even if you don’t see yourself having a long career in government. It could still make sense to come to DC and work on policy issues from the outside. Jobs in lobbying and think tanks (especially the former) also offer better compensation and better hours than government work does.

I agree, and would go even further to say "you don't have to work in policy". There are lots of industries with large and prestigious presences in DC, including tech (especially in the NoVa area), health care (Bethesda and surroundings), journalism, arts and music, etc.   


Jeremy, thanks very much for sharing your path and this reflection. It resonates with much of what we heard from people in or considering careers practicing law in the U.S.: Even the shortest path to a law degree is long and taxing, and it’s common to spend many years of a career doing work that likely isn’t as impactful as the best realistic alternatives. 

Exciting to hear that you’re thinking about a pivot—we’ve heard from other practitioners doing the same, and would love to compare notes and learn from your experience!

Luke, thanks so much for sharing these perspectives—so helpful to have a UK perspective for a topic like this one that comes with so many jurisdictional differences. (I suspect some U.S.-based contributors who are still making tuition or loan payments may have winced at your observation that “you can’t really go broke by going to law school” in the UK…) Cheers!

Molly, thanks so much for this feedback. You’re right to suggest that folks should try reading an edited opinion from a casebook, rather than a full-length original. Thank you! We’ve updated the post to link to an edited version of the Erie opinion that one professor used in a recent civil procedure class, and we’ve added a similar link to a criminal law case, McBoyle v. United States. The linked versions allow readers to expand the elided sections, so folks who would prefer to see the original text can still do so.

Thanks, too, for highlighting name recognition among non-lawyers as an important—if frustrating and arbitrary—consideration. As a note to folks considering this factor in the future: because it's tricky and context-dependent to figure out how to weigh this factor against others, we'd be happy to try to connect you with someone in the community who can help you think it through; please reach out!

In case helpful, this is from the FAQ document (linked on the OP page):

Writing sample: What is the intended purpose of the writing sample? Does it need to be related to AI or biotechnology? Are there other requirements? The writing sample is primarily intended to display your ability to write clearly for non-specialist audiences such as policymakers. It is not necessary that the sample be related to AI, biotechnology, or another technology topic. The writing sample can be either published or unpublished work, either analytical or expository in genre, and does not necessarily require any particular type of citation or sourcing. We strongly prefer a single-authored piece, but feel free to submit whatever you think best represents your personal abilities (e.g. if you contributed almost all the writing, and a co-author contributed data analysis, that would be fine). Please do not write something new for the application; you may use older pieces (graduate school or college essays) if needed.

Answer by US Policy Careers, Jul 13, 2022

If you are interested in an international politics angle, a relevant recent release is The New Fire: War, Peace, and Democracy in the Age of AI. It covers some of the same basics but is more sophisticated on the geopolitical dimensions than the books you've listed, none of which were written by people with international security expertise.

I put some credence on the MIRI-type view, but we can't simply assume that is how it will go down. What if AGI gets developed in the context of an active international crisis or conflict? Could not a government — the US, China, the UK, etc. — come in and take over the tech, and race to get there first? To the extent that there is some "performance" penalty to a safety implementation, or that implementing safety measures takes time that could be used by an opponent to deploy first, there are going to be contexts where not all safety measures are going to be adopted automatically. You could imagine similar dynamics, though less extreme, in an inter-company or inter-lab race situation, where (depending on the perceived stakes) a government might need to step in to prevent premature deployment-for-profit.

The MIRI-type view bakes in a bunch of assumptions about several dimensions of the strategic situation, including: (1) it's going to be clear to everyone that the AGI system will kill everyone without the safety solution, (2) the safety solution is trusted by everyone and not seen as a potential act of sabotage by an outside actor with its own interest, (3) the external context will allow for reasoned and lengthy conversation about these sorts of decisions. This view makes sense within one scenario in terms of the actors involved, their intentions and perceptions, the broader context, the nature of the tech, etc. It's not an impossible scenario, but to bet all your chips on it in terms of where the community focuses its effort (I've similarly witnessed some MIRI staff's "policy-skepticism") strikes me as naive and irresponsible.


In particular, the point about even fewer people (~25) doing applied policy work is super important, to the extent that I think I should edit the post to significantly weaken certain claims.

I appreciate you taking this seriously! I do want to emphasize I'm not very confident in the ~25 number, and I think people with more expansive definitions of "policy" would reach higher numbers (e.g. I wouldn't count people at FHI as doing "policy" work even if they do non-technical work, but my sense is that many EAs lump together all non-technical work under headings such as "governance"/"strategy" and implicitly treat this as synonymous with "policy"). To the extent that it feels crux-y to someone whether the true "policy" number is closer to 25 or 50 or 75, it might be worth doing a more thorough inventory. (I would be highly skeptical of any list that claims it's >75, if you limit it to people who do government policy-related and reasonably high-quality work, but I could be wrong.)

while I think the idea of technical people being in short supply and high demand in policy is generally overrated, that seems like it could be an important consideration

I certainly agree that technical credentials can sometimes provide a boost to policy careers. However, that typically involves formal technical credentials (e.g. a CS graduate degree), and "three months of AISTR self-study" won't be of much use as career capital (and may even be a negative if it requires you to have a strange-looking gap on your CV or an affiliation with a weird-looking organization). A technical job taken for fit-testing purposes at a mainstream-looking organization could indeed, in some scenarios, help open doors to, or provide a boost for, some types of policy jobs. But I don't think that effect is true (or large) often enough to really reduce my concerns about opportunity costs for most individuals.

I definitely don't have state funding for safety research in mind; what I mean is that since I think it's very unlikely that policy permanently stops AGI from being developed, success ultimately depends on the alignment problem being solved.

This question is worth a longer investigation at some point, but I don't see many risk scenarios where a technical solution to the AI alignment problem is sufficient to solve AGI-related risk. For accident-related risk models (in the sense of this framework) solving safety problems is necessary. But even when technical solutions are available, you still need all relevant actors to adopt those solutions, and we know from the history of nuclear safety that the gap between availability and adoption can be big — in that case decades. In other words, even if technical AI alignment researchers somehow "solve" the alignment problem, government action may still be necessary to ensure adoption (whether government-affiliated labs or private sector actors are the developers). In which case one could flip your argument: since it's very unlikely AGI-related risks will be addressed solely through technical means, success ultimately depends on having thoughtful people in government working on these problems. In reality, for safety-related risks, both technical and policy solutions are necessary.

And for security-related risk models, i.e. bad actors using powerful AI systems to cause deliberate harm (potentially up to existential risks in certain capability scenarios), technical alignment research is neither a necessary nor a sufficient part of the solution, but policy is at least necessary. (In this scenario, other kinds of technical research may be more important, but my understanding is that most "safety" researchers are not focused on those sorts of problems.)
