TL;DR. AI safety needs more people who understand the field and its gaps deeply enough to own problems end-to-end, found new projects and organizations, and shape the threat models that the rest of the field runs on. Astra is trying to cultivate more of these people. 

Astra's new Strategy and Governance stream is a fully-funded 5-month fellowship (Sept 2026 - Feb 2027) at Constellation, with 25+ mentors from organizations like Coefficient Giving, AI Futures Project, and IAPS. If you have strong strategic taste, are highly agentic, and are looking for the most impactful thing you can do in AI safety, we encourage you to apply by May 3rd. You could also apply to Constellation's Visiting Fellows or Incubator programs if you already have a strong pitch and full-time AI safety experience.

Strategy matters when someone implements it.

AI progress is moving fast. There are a ton of open problems that need to be addressed before society is ready to navigate transformative AI. The AI safety community ought to position people to plan for and respond to these problems with exceptional speed and strategic awareness. But we think the ecosystem could be doing better at cultivating these traits. 

Constellation, which runs the Astra fellowship, hosts many AI safety organizations, and those organizations consistently tell us about two important traits they look for when hiring: strong strategic thinking and high agency. Even candidates coming out of established AI safety programs frequently lack these skills at the level full-time roles require. Astra’s new Strategy and Governance stream is one of Constellation’s attempts to develop these skills in the AI safety talent pool more deliberately. Through this stream of Astra, Constellation aims to cultivate strategists who know how to get their work implemented and implementers who understand the strategy behind their actions. 

Strong strategic thinkers can interpret trends in the environment, infer the implications of those trends (e.g. develop threat models and theories of victory), and diagnose what the field of AI safety needs most. The work of “strategists” also needs to be tied back into the real world – someone needs to own the problems strategists identify and implement solutions. This practice can be referred to as AI governance. 

Sometimes AI strategists implement their own ideas; other times they build partnerships with the people best positioned to take their work forward. The key point is that, for strategy work to have impact, someone needs to interface with the constraints of the real world. This could mean starting an organization, working with frontier AI developers, briefing policymakers, popularizing an idea through field building, or doing something entirely new. For example, Redwood Research established the AI control agenda, popularized it through talks and conferences, and built partnerships to advise frontier AI developers directly. As another example, METR, an early mover in independent third-party analyses of the capabilities and risks of AI systems, helped Anthropic write its first Responsible Scaling Policy and provides technical assistance to the U.S. NIST AI Safety Institute Consortium, the UK AISI, and the EU AI Office. 

Astra provides a space where people can hone their understanding of AI strategy and its real-world applications, then take ownership of the hardest problems in the field. For example, in the first cohort of Constellation's Astra Fellowship, Eli Lifland and Romeo Dean worked with Daniel Kokotajlo, and the scenario they began writing during the program ultimately became AI 2027.

Through a combination of mentorship, feedback from senior AI safety strategists, structured practice articulating theories of impact, and exercises that pressure-test strategy against the real-world incentives of policymakers and frontier AI developers, we aim to give fellows the knowledge and network to pursue tough problems in AI safety. 

Astra is trying to cultivate people who can do both.

Astra is housed in Constellation, a home to many researchers and organization leaders who have identified problems in the AI safety space and owned them. People who come to Constellation consistently find that they are empowered to pursue the most impactful work they can. For example, Aryan Bhatt, a previous Astra fellow and current Member of Technical Staff at Redwood Research, shares: “Constellation is the single place in the world with the highest density of AI safety talent. Period.” Current Astra fellow Joe Kwon shares that, at Astra, he received "feedback from people with strong conceptual clarity", which helped him write a research agenda addressing the threat of secretly loyal AIs.

The Strategy and Governance stream aims to attract applicants who prioritize a career in AI safety, are high agency, and have or aim to develop excellent research taste. Applicants should have prior engagement with AI safety. We plan to select applicants from roughly three profiles: 

  • AI safety professionals with several years of full-time work in the field who want to address an under-scoped problem, such as concentration of power risks, AI verification, or post-AGI governance. Applicants in this group will be considered for an "independent" route, where they are not paired with a mentor but are supported by the Constellation team to pursue an important, neglected problem.
  • Junior AI safety professionals with demonstrated strategic thinking (as attested by previous mentors or evidenced through research samples) but limited full-time experience in the field (e.g., MATS alumni, recent graduates of AI safety fellowships, or those with under ~1 year of full-time AI safety work).
  • Mid-career professionals from relevant fields – including AI safety program development, policy, national security, law, academic research in relevant fields, or AI industry roles – who also have meaningful prior exposure to AI safety (e.g., through fellowships such as ERA or Pivotal).

If you are uncertain about whether Astra is the right program for you, we encourage you to apply anyway (by May 3rd). The application takes ~30 minutes to fill out. Filling out the application will put your name in the talent pool Constellation can draw from as we develop more programming on AI strategy. You could also apply to Constellation's Visiting Fellows or Incubator programs if you already have a strong pitch and full-time AI safety experience.

For strategists who want a strong grasp of the field of AI safety, working directly at an AI safety organization can provide great depth of experience and knowledge. Astra supplies this same depth through work and mentorship with an AI strategy organization, alongside an unmatched level of breadth through our programming. We expect that someone who is immersed in the Constellation ecosystem for 5 months will benefit more in their career (both in personal development and in impact) than they would doing the same work outside of Constellation. Additionally, we think our mentors and their organizations are doing some of the best work in the AI safety ecosystem. Over 25 strategy and governance mentors will work with Astra fellows; many work at grantmaking organizations such as Coefficient Giving, while others work at the intersection of strategy and policy at think tanks like IAPS and RAND, or are in the process of founding new organizations to close gaps in the AI safety ecosystem. 

We expect some fellows will exit Astra ready to own well-scoped problems in the AI safety space: founding new organizations, developing new research agendas, and inspiring more leaders. 

We don’t, however, expect that everyone who completes Astra should start something completely new. In fact, many of the skills we hope fellows will gain during Astra are the skills that employers in the AI safety space say they need in prospective hires: strong reasoning about AI safety threat models, the ability to work autonomously, and the ability to entrepreneurially jump on opportunities when they present themselves. 
