
(Edited: We've extended our application deadline until November 26th, 2024.)

We’re excited to announce that applications are now open for our 2025 Q1 Pivotal Research Fellowship, a 9-week program designed to enable promising researchers to produce impactful research and accelerate their careers in AI safety, AI governance, and biosecurity.

Apply now

About the Fellowship

The Pivotal Research Fellowship is hosted in London at the London Initiative for Safe AI (LISA). It offers a unique opportunity for early-career researchers to collaborate with experienced mentors, engage in workshops and seminars, and build a strong network within the AI safety research community in London and beyond.

Dates: February 3rd to April 4th, 2025
Application Deadline: November 26th, 2024 (extended from November 21st)
Apply here.

Fellows receive:

  • Direct mentorship from established researchers
  • Access to LISA, working alongside leading researchers in AI safety
  • £5000 stipend, plus meals, travel support, accommodation, and compute costs

This marks our 5th research fellowship, building on a strong track record of supporting researchers in tackling important questions about the safety and governance of emerging technology.

Looking back on our 2024 Research Fellowship

(We plan on releasing a more in-depth retrospective in the upcoming weeks.)

In 2024 we hosted 15 fellows:

  • 7 in AI governance
  • 6 in technical AI safety
  • 2 in biosecurity

Fellows rated the fellowship 9.17/10 for overall value and 9.33/10 for likelihood to recommend it (Net Promoter Score: 88).

Here’s what 2024 fellows said about their experience with the Pivotal Research Fellowship:

  • “The Fellowship has been transformative for my career and personal development. Most importantly, I had the incredible opportunity to be mentored by a leading expert and go from idea development to paper submission.”
  • “The fellowship allowed me to work with top AI safety researchers – a great privilege early in my career! People were surprised at what can be accomplished in two months, including me.”
  • “Pivotal Research Fellowship opened the door to AI governance, enabling me to conduct impactful research in this field and connecting me to the broader AI governance community and new opportunities.”
  • “Pivotal shifted my career: I'm now working on a startup for white box model access and will join GovAI as a Winter Fellow. Being at LISA connects you with top AI safety researchers and places you on the radar of leading organizations.”
  • “Pivotal’s approach expanded my AI safety perspective, illustrating the importance of governance and biosecurity challenges that complement technical safety, making me seriously consider AI policy roles.”

Looking Ahead to Q3 2025

In addition to the Q1 2025 Fellowship, we’re planning another cohort in Q3 2025. If you’d like to stay informed about future fellowship opportunities, please express your interest.

If you have any questions about the application process, please reach out to us.


