
The Legal Priorities Project (LPP) is excited to announce that applications for our Summer Research Fellowship in Law & AI 2023 are now open. For 8–12 weeks, participants will work with researchers at LPP on how the law can help to mitigate existential risks from artificial intelligence. Fellows will receive a stipend of $10,000.

If you are interested in carrying out research in this field and are considering using your career to mitigate existential risks, particularly those from AI, we invite you to apply. The application deadline is July 6 at 11:59 pm Anywhere on Earth; however, we will consider applications and select fellows on a rolling basis, so we encourage you to apply as early as possible. Current students are encouraged to check their academic calendars and apply with enough time to complete the fellowship, or as much of it as possible, before classes resume.

We look forward to receiving your application!

About the fellowship

You will take the lead on a research project, with mentorship and support from other LPP researchers. We will support you in deciding what project and output will be most valuable for you to work towards, for example, publishing a report, journal/law review article, or blog post. We also expect fellows to attend regular meetings, give occasional presentations on their research, and provide feedback on other research pieces.

Fellows will have the opportunity to select a research topic from a list prepared by LPP. Potential research topics for the summer may include:

  • Tort law liability, including strict liability for abnormally dangerous activities, for activities related to the development and dissemination of transformative AI.
  • Product liability law as a way to address harms from transformative AI.
  • The role of litigation in mitigating risks from transformative AI.
  • Potential obstacles for AI regulation presented by the major questions doctrine.
  • First Amendment issues related to AI regulation.
  • The design of a new international organization, similar to the IAEA or CERN, for the international governance of AI.
  • The legal authorities of agencies in the United States government to address risks from transformative AI. 
  • The influence of different jurisdictions on the development and dissemination of transformative AI.
  • Developing a syllabus for a course on law and transformative AI.

This list of topics is non-exhaustive, and is presented to give an overview of the types of research we are interested in. Fellows will further define the research question at the beginning of the fellowship.

In exceptional cases, we are open to research project proposals relevant to existential risk in one of our other focus areas.

Selection criteria

We are looking for graduate law students (JD or LLM), PhD candidates, and postdocs working in law. Students entering the final year of a 5-year undergraduate law degree are also welcome to apply. 

We strongly encourage you to apply if you have an interest in our work and are considering using your career to study or mitigate existential risks, particularly those from transformative AI. Candidates will be expected to apply their research capabilities and legal knowledge to AI governance, but are not required to have previous experience or expertise in AI.

In addition to a willingness to engage with existential risks from AI, the ideal candidate will have the following strengths:

  • Ability to carry out self-directed research with limited supervision.
  • Excellent written communication skills.
  • Excellent problem-solving and critical thinking skills.

If you're not sure about applying because you don't know if you're qualified or the right fit, we would encourage you to apply anyway. 

Further details

  • Funding: You will receive a stipend of $10,000 for the entire fellowship.
  • Duration: Fellows can choose a flexible period of 8–12 weeks between July and October 2023.
  • Work quota: This is a full-time role with flexible working hours. We will also consider exceptional candidates who can only join part-time over a longer period. Students whose classes resume during the fellowship may complete it part-time during the semester.
  • Location: Remote. We will consider applicants from all countries.
  • Diversity and equal opportunities: LPP is committed to providing an inclusive and equitable work environment, and we encourage individuals with diverse backgrounds and experiences to apply. We especially encourage applications from women, gender minorities, and Black, Brown, Indigenous, Latinx, and other people of the global majority who are excited about contributing to our mission. We are an equal opportunity employer and welcome applicants of any race, religion, age, origin, class, citizenship, parental status, disability status, sexual orientation, and gender.
  • Requests for accommodation: If you are unable or limited in your ability to apply for this fellowship as a result of a disability or incompatible assistive technology, please contact us at careers@legalpriorities.org to request reasonable accommodations.

Application process

We have done our best to make the application process as simple and time-efficient as possible. We plan to evaluate applications on a rolling basis, so we encourage you to apply as early as possible and by July 6 at 11:59 pm Anywhere on Earth at the latest.

First stage: Please complete this simple application form. The form asks you to:

  • Submit your CV.
  • Briefly answer the following four questions (max 750 characters each):
    • How familiar are you with the existing discourse around existential risk from AI?
    • What motivates you to research how the law can help to mitigate existential risk posed by AI? Discuss the potential implications and challenges associated with this area and how you believe your skills and background can contribute to addressing these risks.
    • Are there any topic(s) you would particularly like to work on during the fellowship, and if so, why?
    • What career paths are you considering, and how could the SRF further your career goals?
  • Optionally, share previous writing samples (which need not relate to our focus areas).

These responses can be completed quickly. We aren't looking for perfect essays! We're looking to get an impression of what you're thinking about, what you care about, and how you'd approach the program.

We will aim to send invitations to the interview stage within two weeks of receiving your application.

Second stage: This stage will consist of one or two short online interviews. We plan to make the final decision shortly after that. You can let us know if you need an earlier decision, for example in order to begin and complete the fellowship before classes resume.

In exceptional cases, we can consider fellows joining us off-season or during the winter.

If you have any questions about the process, please contact us at careers@legalpriorities.org. We very much look forward to receiving your application!

Comments

[anonymous]:

I'm a bit late to the party here; sorry!

I shared this opportunity with a law school friend, and their reaction was: law students at good schools will already have their summer plans set well before late June, so most won't be able to commit to working on something full-time for 8-12 weeks between July and October. Correspondingly, LPP may be significantly limiting its applicant pool (and possibly the quality of its applicants) by posting these kinds of opportunities so late. I flag this in part because I think something similar happened with LPP's cost-benefit writing competition last summer—the opportunity was posted in June and had a deadline in July.

LPP is throwing serious money at these (both cool-seeming!) projects, but I suspect is significantly undermining their effectiveness by only sharing the opportunities so late. (I also recognize LPP may not have that much control over when it gets its funding, so this comment may be a critique of whoever is funding these projects (OP?) as much as a critique of LPP.)

[anonymous]:

Thanks a lot for your comment, liilly! 

We agree with your assessment. We were also aware of the timelines for law students (especially in the US), but decided to take our chances. This was our reasoning:

  • The funder behind this project agreed to provide funding only if we found suitable applicants (given the late announcement). So in the scenario where we don't find any applicants after all, we will only have lost some time announcing and advertising the opportunity. We thought it was worth a try.
  • The fellowship is also open to applicants who are not US law school students, which significantly increases our applicant pool.
  • If the fellowship goes well despite the late announcement, it will be much easier for us to secure early funding for future fellowships. Having received 864 applications for this fellowship (122 of which are from the US), we're optimistic that this will be the case!

Multiple factors affected the timing of the announcement, some of which were outside of our control, but we hope to have more predictable schedules as our capacity (staff and funding) increases.

Thanks again!

This seems like a great opportunity. It is now live on the EA Opportunity Board!

[anonymous]:

Thank you so much!
