
This is a linkpost for https://grants.futureoflife.org/

Epistemic status: Describing the fellowship that we are a part of and sharing some suggestions and experiences.

The Future of Life Institute has opened applications for its PhD and postdoc fellowships in AI Existential Safety. As in the previous calls in 2022 and 2023, there are two separate opportunities:

  • Up-to-Five-Year PhD Fellowship: Apply by Nov 16, 2023. This fellowship covers "tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses."
  • Up-to-Three-Year Postdoc Fellowship: Apply by Jan 2, 2024. This fellowship supports "an annual $80,000 stipend and a fund of up to $10,000 that can be used for research-related expenses."

The main purpose of the fellowship is to nurture a cohort of rising-star researchers working on AI existential safety. Selected fellows will also participate in annual workshops and other activities organized to help them interact and network with other researchers in the field.

The eligibility criteria are broad and inclusive:

For (Prospective) PhDs:

To be eligible, applicants should either be graduate students or be applying to PhD programs. Funding is conditional on being accepted to a PhD program, working on AI existential safety research, and having an advisor who can confirm to us that they will support the student’s work on AI existential safety research. If a student has multiple advisors, these confirmations would be required from all advisors. There is an exception to this last requirement for first-year graduate students, where all that is required is an "existence proof". For example, in departments requiring rotations during the first year of a PhD, funding is contingent on only one of the professors making this confirmation. If a student changes advisor, this confirmation is required from the new advisor for the fellowship to continue.

An application from a current graduate student must address in the Research Statement how this fellowship would enable their AI existential safety research, either by letting them continue such research when no other funding is currently available, or by allowing them to switch into this area.

Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.

Continued funding is contingent on continued eligibility, demonstrated by submitting a brief (~1 page) progress report by July 1st of each year.

There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities. 

For Postdocs:

To be eligible, applicants should identify a mentor (normally a professor) at the host institution (normally a university) who commits in writing to mentor and support the applicant in their AI existential safety research if a Fellowship is awarded. This includes ensuring that the applicant has access to office space and is welcomed and integrated into the local research community. Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.

How to Apply

You can apply at grants.futureoflife.org, and if you know people who may be good fits, please help spread the word! Good luck!

Comments (5)



The post seems to confuse the postdoctoral fellowship and the PhD fellowship (assuming the text on the grant interface is correct). It's the postdoc fellowship that has an $80,000 stipend, whereas the PhD fellowship stipend is $40,000.

Thank you for spotting it! I just made the fix :).

Fantastic news. Note: don’t forget to share it on LessWrong too.

Good idea! Just made the other post to reach a wider audience!

Executive summary: The Future of Life Institute is offering PhD and postdoc fellowships in AI Existential Safety for 2024, aiming to foster a cohort of researchers in this field, with no geographic limitations on applicants or host universities.

Key points:

  1. PhD fellowships cover tuition, fees, and the PhD stipend up to $40,000, plus a $10,000 research fund; applications are open until November 16, 2023.
  2. Postdoc fellowships provide an annual $80,000 stipend plus a $10,000 research fund; applications are open until January 2, 2024.
  3. Fellows participate in workshops and activities to network with others in AI existential safety.
  4. Applications are inclusive - all backgrounds encouraged, especially women and minorities.
  5. Requirement is working on AI existential safety with an advisor's support.
  6. Application is through the FLI grants website - spread the word to potential applicants.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
