Sure, this is a very reasonable question. The decision to prioritize AI this year stems largely from our comparative advantage and ERA's established track record.
The Cambridge community has exceptional AI talent, and we have taken advantage of this by partnering closely with the Leverhulme Centre for the Future of Intelligence and the Krueger AI Safety Lab within the Cambridge University Engineering Department (alongside AI researchers at CSER). Furthermore, the Meridian Office, the base for the ERA team and Fellowship, is also home to the Cambridge AI Safety Hub (CAISH) and various independent AI safety projects. This is an ideal ecosystem for finding outstanding mentors and research managers with AI expertise.
An even more important factor in our focus on AI is the success of ERA alumni, particularly those from the AI safety and governance track, who have gone on to conduct significant research and build impactful careers. For instance, 4 of the 6 alumni stories highlighted here involve fellows engaged in AI safety projects beyond the fellowship. This is not a comment on fellows from other cause areas; rather, it suggests a unique opportunity for early-career researchers to make a significant impact in AI-related organizations, something that appears more difficult in well-established fields like nuclear risk or climate change.
Given the importance of AI safety and the timely opportunity to shape its development, both technically and in policy, focusing our resources on AI appears strategically sound, especially with the strong Cambridge AI community described above. One disclaimer is worth adding: our emphasis on AI does not diminish the importance of other X-risk or GCR research areas. It simply reflects our comparative strengths and track record, which suggest that an AI focus is likely to be the most effective use of our resources.