Recently, Elon Musk donated $10M to fund research on making AI more robust and beneficial, motivated in part by Nick Bostrom's book Superintelligence and by AI's links to existential risk.

Many EAs I know are interested in the relationship between artificial intelligence and existential risk, and there has been some discussion here of long-term AI safety as a focus area for long-run-oriented EA. Given this, I thought it'd make sense to post the request for proposals for research projects to be funded by Musk's donation. I'd be very happy to see some applications come out of the broader EA community, so do think about it yourself and pass it along to friends!

Here's the full Request for Proposals on the Future of Life Institute's website.

If you have questions, feel free to ask them in the comments or to contact me!

Here's the email FLI has been sending around:

Initial proposals (300–1000 words) due March 1, 2015

The Future of Life Institute, based in Cambridge, MA and headed by Max Tegmark (MIT), is seeking proposals for research projects aimed at maximizing the future societal benefit of artificial intelligence while avoiding potential hazards. Projects may fall within the fields of computer science, AI, machine learning, public policy, law, ethics, economics, or education and outreach. This 2015 grants competition will award funds totaling $6M USD.

This funding call is limited to research that explicitly focuses not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial; for example, research could focus on making machine learning systems more interpretable, on making high-confidence assertions about AI systems' behavior, or on ensuring that autonomous systems fail gracefully. Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly surpass current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems whose solutions are likely to be important first steps toward long-term solutions.

Please do forward this email to any colleagues and mailing lists that you think would be appropriate.

Proposals

Before applying, please read the complete RFP and list of example topics, which can be found online along with the application form:

    http://futureoflife.org/grants/large/initial

As explained there, most of the funding is for $100K–$500K project grants, each of which will support a small group of collaborators on a focused research project of up to three years' duration. For a list of suggested topics, see the complete RFP [1] and the Research Priorities document [2]. Initial proposals, which are intended to require only a modest amount of preparation time, must be received on our website [1] on or before March 1, 2015.

Initial proposals should include a brief project summary, a draft budget, the principal investigator's CV, and brief biographies of co-investigators. After initial proposals are reviewed, some projects will advance to the next round and will be asked to submit a Full Proposal by May 17, 2015. Public award recommendations will be made on or about July 1, 2015, and successful proposals will begin receiving funding in September 2015.

References and further resources

[1] Complete request for proposals and application form: http://futureoflife.org/grants/large/initial
[3] An open letter from AI scientists on research priorities for robust and beneficial AI: http://futureoflife.org/misc/open_letter
[4] Initial funding announcement: http://futureoflife.org/misc/AI
Questions about Project Grants: dewey@futureoflife.org
Media inquiries: tegmark@mit.edu

Comments (2)

Hi Daniel, for further reach, the X-Risk comm channels on this spreadsheet might help: https://docs.google.com/spreadsheets/d/1_EH3cpHUJw052iXNI1Q_b-FgHBBNuXe_a4ZjM6uqzpU/edit?usp=sharing

Thanks Tyler!