
We are excited to announce the new Open Philanthropy Technology Policy Fellowship. You can apply here until September 15th. This post will provide some background on the fellowship program and details on who we’d be excited to receive applications from.

Other resources for prospective applicants:

  • You can register for information sessions about the fellowship here.
  • We will be running an AMA on the EA Forum soon (here).

What is the fellowship?

The fellowship aims to help grow the community of people working at the intersection of US policy and Open Philanthropy’s longtermism-motivated[1] work, especially in AI and biosecurity. It provides selected applicants with policy-focused training and mentorship, and supports them in matching with a host organization for a full-time, fully-funded fellowship based in the Washington, DC area. Potential host organizations include executive branch offices, Congressional offices, and think tank programs.

Fellowship placements are expected to begin in early or mid-2022 and to last 6 or 12 months (depending on the fellowship category), with potential renewal for a second term (for a maximum duration of 12 or 24 months).

More details about the program can be found on the application page.

Such fellowship programs are fairly common in Washington, DC. To learn more about the model and partly analogous programs, you can check out AAAS’s Science and Technology Policy Fellowship (for executive branch and congressional fellowships), the RWJ Foundation’s Health Policy Fellowship (for executive branch and congressional fellowships), TechCongress (for congressional fellowships), CFR’s International Affairs Fellowship (for executive branch and think tank fellowships), and the Scoville Fellowship (for think tank junior fellowships).

Who should apply?

The fellowship program was designed to accommodate a broad talent pool. Opportunities are available for both entry-level and mid-career applicants, for technical and non-technical people, and for people both with and without prior policy experience.

  • Entry-level applicants can apply to be “junior fellows” at think tanks. This role will combine research assistance with operational support, for example organizing and taking notes at expert workshops. Junior fellows may also get opportunities for short-form independent writing (e.g. articles for popular outlets). The position will initially be for 6 months, with potential renewal for another 6 months. Current students who will complete their bachelor’s or master’s degree in Spring 2022 are eligible to apply, as are other recent graduates (see here for more on eligibility).

  • Early/mid-career applicants can apply to be fellows in the executive branch, Congress, or at think tanks. Applicants must have a minimum of several years of relevant experience, but it is not uncommon for policy fellowship programs to take in people at a more advanced career stage (e.g. someone in their 30s or 40s with 10+ years of experience).[2] We are open to applicants of all levels of seniority above our minimum requirements; we will work hard to support all fellows in matching with a host office and role where their background and expertise will be put to good use. (See here for more on eligibility.)

Besides the appropriate level of seniority, we will largely be looking for (a) sufficient alignment with Open Phil's longtermist interests and priorities, (b) evidence of “fit” for applied policy work, and (c) some expertise or experience relevant to emerging technology (broadly defined). What does this mean in practice?

  • Policy work often involves a great deal of communication and networking, generally with lots of different groups. Cross-cultural communication skills (both oral and written) are an important criterion.
  • Policy work is also generally collaborative and team-based, so you have to be able to collaborate well, including with people you might disagree with. Comfort and experience working in such environments are a strong plus.
  • People with prior policy experience are welcome to apply, but experience is not required. To the extent that we take such experience into account in the screening process, it will mostly be as evidence that policy work is a good “fit” for the applicant, and people without experience can show their “fit” in other ways.
    • As discussed in more detail below, we are excited to support people who plan to spend a significant part of their career working in or around the US government, but we also expect to accept applicants who are not sure that applied policy work is right for them and would like to use the fellowship program to assess that.
  • More generally, we expect to weigh “fit” and “soft skills” more heavily in the application process than knowledge. This is because, in our experience, knowledge (e.g. about the committee structure of Congress or the division of responsibility across federal agencies) can be taught fairly easily — and the program includes a significant training dimension on exactly those sorts of questions — whereas personality and communication abilities are harder to change.
  • This is a “technology policy” fellowship (focused especially on AI and bio), so host organizations will expect some relevant experience or expertise — but don’t disqualify yourself too quickly! “Technology expertise” is a looser concept in DC than it is in, say, San Francisco. Someone with professional experience in health tech can usually present themselves as having experience “relevant” to AI or biosecurity, even if their work did not directly involve building machine learning systems or pandemic preparedness. To the extent possible, we won’t disqualify applicants who we’re otherwise excited about on the basis of formal credentials.
    • Junior fellows especially will not be expected to have extensive expertise; some relevant classes or a relevant term paper may be sufficient, as long as you are a quick study and can learn on the job.
  • There are also some differences (e.g. in how broad your policy portfolio is) between our three organizational categories (executive branch, Congress, and think tanks). You may be a good fit for one type of organization but a poor fit for the others. More about these differences can be found on the application page.

When in doubt about your eligibility or fit, we encourage you to apply! We welcome explicit mention of particular concerns/questions in your application materials. You can also ask us questions about eligibility and fit at any time during the application process, and during our information sessions and the EA Forum AMA (see top of this post).

We aim to build a diverse cohort, and strongly encourage individuals with backgrounds and experiences underrepresented in science and technology policy to apply, especially women and people of color.

What does success look like?

We hope some of the fellows will continue doing policy work in some capacity, whether directly in government or at other organizations in Washington, DC. Fellows will be provided with mentorship and professional development opportunities explicitly aimed at helping them secure policy jobs, including introductions to helpful contacts and tailored workshops on job application cycles and norms across various institutions.

Some fellows will learn that working in government is not the right “fit” for them. For this group, we nonetheless think that the fellowship will serve as a useful learning experience. For example, some fellows may decide to pursue AI governance work at tech firms or EA organizations after their fellowship instead of continuing direct government work. We expect this type of work to benefit significantly from fellows having developed better models of the US policymaking process and the influence of various policy stakeholders. Continuing social ties will also play an important role here.[3]

More generally, we believe longtermist strategy and policy would benefit from having more “translators” who can bridge different communities (EA and non-EA, research and policy, technical and non-technical, etc.). Moving between different types of employers and cultures every few years is an excellent way to develop translation skills and cross-pollinate ideas. Even if you’re not sure you want to spend your entire career in the US government — or even if you’re pretty sure that you don’t — we still very much encourage you to apply to the fellowship.


    1. We expect readers on the EA Forum to be familiar with these concepts and won’t elaborate on them here, but see e.g. here and here for more details. Note that we welcome applicants who previously have not worked directly within longtermist cause areas, as long as they are interested in working on longtermist-related issues in the future. ↩︎

    2. See e.g. the lists of alumni of the Presidential Innovation Fellows program (which places people in executive branch offices) and the RWJ Health Policy Fellowship (Congress or executive branch), which include many people with 10-20 years of professional experience. ↩︎

    3. See e.g. here (point #2) for more thinking along these lines. ↩︎

Comments (6)



This seems like a great initiative; I'm excited to see where this goes!

Do people need to be US citizens (or green card holders etc) to apply for this?

The job posting states: 

"All participants must be eligible to work in the United States and willing to live in Washington, DC, for the duration of their fellowship. We are not able to sponsor US employment visas for participants; US permanent residents (green card holders) are eligible to apply, but fellows who are not US citizens may be ineligible for placements that require a security clearance."

So my impression is that it would be pretty difficult for non-US citizens who do not already live in the US to participate.

Hi Luke, could you describe a candidate who would inspire you to flex the bachelor's requirement for the Think Tank Jr. Fellow role? I took time off from credentialed institutions to do Lambda School and work (I didn't realize I wanted to be a researcher until I was already in industry), but I think my overall CS/ML experience is stronger than that of a ton of the applicants you're going to get (I worked on cooperative AI at AI Safety Camp 5 and I'm currently working on multi-multi delegation, hence my interest in AI governance). If possible, I'd like to hear from you how you're thinking about the college requirement before I invest the time into writing a cumulative 1,400 words.

Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, will direct my question there!


We're writing to let you know that the group you tried to contact (techpolicyfellowship) may not exist, or you may not have permission to post messages to the group. A few more details on why you weren't able to post:

* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.

If you have questions related to this or any other Google Group, visit the Help Center at https://support.google.com/a/openphilanthropy.org/bin/topic.py?topic=25838.

Thanks,

openphilanthropy.org admins
 

Oops! Should be fixed now.
