We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.
You will work on challenging research questions relevant to reducing suffering. You will be integrated into, and collaborate with, our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.
While this iteration retains the basic structure of previous rounds, there are several key differences:
- We are particularly interested in applicants who wish to engage in s-risk relevant empirical AI safety work (more details on our priority areas below).
- We encourage applications from individuals who may be less familiar with CLR’s work on s-risk reduction but are nonetheless interested in empirical AI safety research. Our empirical agenda focuses on understanding LLM personas, in particular how malicious traits might arise.
- We are especially looking for individuals seriously considering transitioning into s-risk research, whether to assess their fit or explore potential employment at CLR.
Apply here by 23:59 PT Sunday 22nd March.
We are also hiring for permanent research positions, for which you can apply through the same link.
About the Summer Research Fellowship
Purpose of the fellowship
In this iteration of the fellowship, we are primarily looking for people seriously considering transitioning to s-risk research, who want to assess their fit or explore potential employment at CLR.
That said, we welcome applicants with other motivations, though the bar for acceptance will likely be higher. In the past, we have often had fellows from the following backgrounds:
- People at the very start of their careers—such as undergraduates or even high school students—who are strongly focused on s-risk and want to explore research and assess their fit.
- People with a fair amount of research experience, e.g. from a partially or fully completed PhD, whose research interests significantly overlap with CLR's and who want to work on their own research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.
- People committed to s-risk who are pursuing a research or research-adjacent career outside CLR and want to develop a strong understanding of s-risk macrostrategy beforehand.
Additionally, there may be many other valuable reasons to participate in the fellowship. We encourage you to apply if you think you would benefit from the program. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. For many participants, the primary focus will be on learning and assessing their fit for s-risk research, rather than immediately producing valuable research output.
Priority areas
Moving forward, a significant focus of our work will be on s-risk-motivated empirical AI safety research through our Model Persona research agenda.
In this agenda, we aim to understand under which conditions AI personas develop malicious traits that motivate them to create suffering; examples of such traits include spitefulness, sadism, and punitiveness. We are also interested in building a general understanding of LLM psychology in order to develop interventions that make personas robustly avoid such traits.
Candidates for the empirical stream can work on one of our suggested research questions, their own proposal, or join an ongoing project of one of our researchers.
We are also looking forward to taking on fellows interested in working on:
Safe Pareto improvements (SPI). An SPI is (roughly) an intervention on how AIs approach bargaining that mitigates downsides from conflict, without changing their bargaining positions. We’re currently interested in both:
- empirical research on evals for failures in reasoning about SPI; and
- conceptual research on the conditions under which AIs individually prefer to do SPI, and on how to prepare for AI-assisted SPI research.
S-risk macrostrategy. We are interested in research on how to robustly reduce s-risk through interventions in AI development—in particular, understanding the conditions under which such interventions might backfire or have unintended effects, and developing frameworks for evaluating their robustness. Possible projects include:
- analysing how s-risk interventions interact with different AI development scenarios;
- identifying and modelling mechanisms by which interventions can fail; and
- developing recommendations for when and how to act.
We expect to take on at most one fellow in this area, and are particularly looking for candidates with a strong existing interest in s-risk reduction and familiarity with CLR's work.
What we look for in candidates
We don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.
- Curiosity and a drive to work on challenging and important problems;
- Ability to answer complex research questions related to the long-term future;
- Willingness to work in poorly-explored areas and to learn about new domains as needed;
- Independent thinking;
- A cautious approach to potential information hazards and other sensitive topics;
- Alignment with our mission or strong interest in one of our above priority areas.
In the empirical stream we are primarily looking for candidates with prior research experience, preferably involving LLMs. University projects, independent work, or work done at prior fellowships such as MATS all count, and other demonstrations of technical skills and interest in our focus areas can substitute for this.
We worry that some people won’t apply because they wrongly believe they are not a good fit for the program. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure if you are qualified or otherwise competent enough for the positions listed. We explicitly have no minimum requirements in terms of formal qualifications. Being rejected this year will not reduce your chances of being accepted in future hiring rounds.
Program details
We encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.
Program dates
The default start date is Monday 29th June. Exceptions may be possible and will be considered on a case-by-case basis.
Location & office space
CLR is a research organization based in London, UK. We prefer fellows to be based in London throughout the fellowship, where possible.
We expect to facilitate in-person participation in London in most cases, including support with necessary immigration permissions or visas.
That said, we encourage strong candidates to apply regardless of their situation, and are happy to discuss remote arrangements for those who would be inconvenienced by travel.
Compensation
Fellows will receive a stipend of £4,925 per month.
In addition to the base stipend, we will provide funding for travel or immigration costs for fellows who relocate to London for the program. Funding will also be available for expenses to facilitate your productivity during the program.
Program length & work quota
The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time participation, may be possible.
We’re also very happy for participants to take reasonable time out for other commitments such as holidays.
Application process
We value your time and we are aware that applications can be demanding, so we have thought carefully about making the application process time-efficient and transparent. Please let us know in your initial application if the timelines below definitely won't work for you, since we may be able to work something out; in some cases we might be able to give earlier decisions or expedite parts of the application process.
We plan to make the final decisions by Friday 23rd May, and unfortunately we can’t accept any late applications at any stage.
Stage 1
To start your application, please complete our short initial application form. We expect the form can be completed in as little as 5 minutes if you answer only the required questions, though there is also space for optional long-form answers.
The application deadline is 23:59 PT Sunday 22nd March.
Stage 2
By the end of Friday 28th March we will decide whether to invite you to the second stage. The second stage consists of answering long-form questions. We expect this stage to take 1-3 hours.
The deadline for submissions for this stage is Monday 7th April 23:59 PT.
Stage 3
By the end of Friday 11th April, we will decide whether to invite you to the third stage. The third stage consists of a paid research test, which we expect will take around 8 hours of work. Applicants will be compensated with £350 for their work at this stage.
The deadline for submissions for this stage is Sunday 27th April 23:59 PT.
Stage 4
By the end of Friday 2nd May, we will decide whether to invite you to interview by video call. Candidates for empirical roles who have completed stage 3 will present the results of their work test during their research interview.
All interviews will happen by the end of Friday 16th May.
We will send out final decisions to applicants by Friday 23rd May 23:59 PT.
Why work with CLR
We aim to combine the best aspects of academic research (depth, scholarship, mentorship) with an altruistic mission to prevent negative future scenarios. So we leave out the less productive features of academia, such as administrative burden and publish-or-perish incentives, while adding a focus on impact and application.
As part of our fellowship, you will enjoy:
- a program tailored to your qualifications and strengths;
- working toward a shared mission with dedicated and caring people;
- an interdisciplinary research environment, surrounded by friendly and intellectually curious people who will hold you to high standards and support you in your intellectual development;
- mentorship in longtermist macrostrategy, especially from the perspective of preventing s-risks;
- the support of a well-networked longtermist EA organization with substantial operational assistance instead of administrative burdens.
You will advance neglected research to reduce the most severe risks to our civilization in the long-term future. Depending on your specific project, your work may help inform impactful work across the s-risk and AI safety ecosystem, or any of CLR’s activities, including:
- Technical interventions: We aim to develop and communicate insights about the safe development of artificial intelligence to the relevant stakeholders (e.g. AI developers, key organizations in the longtermist effective altruism community). We are in regular contact with leading AI labs and AI safety research nonprofits.
- Research collaborations: CLR researchers have been involved in collaborations with researchers from Anthropic, UK AISI and TruthfulAI.
- Research community: in addition to the Summer Research Fellowship, CLR sometimes runs external research retreats, bringing together members of the research community to co-ordinate and make progress on problems.
Inquiries
If you have any questions about the process, please contact us at hiring@longtermrisk.org.
Diversity and equal opportunity employment: CLR is an equal opportunity employer, and we value diversity at our organization. We don't want to discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, social background/class, mental or physical health or disability, or any other basis for unreasonable discrimination, whether legally protected or not. If you're considering applying to this role and would like to discuss any personal needs that might require adjustments to our application process or workplace, please feel free to contact us.
Apply now