This is the first call for applications of the EAF Fund.
Logistical info
- Application deadline: Please submit your application before August 12, 2019. (EDIT: Note that the CLR Fund now accepts applications on a rolling basis.)
- Funding available: ~$800,000 (updated July 29)
  - previously $424,170
- Application form: https://forms.gle/82Px7J5L7f6H3tkY8
Projects we’d like to fund
We’d like to fund projects aligned with the mission of the fund: improving the quality of the long-term future by supporting efforts to reduce risks of astronomical suffering (s-risks) from advanced artificial intelligence. You can learn more about what this means in our article on Cause prioritization for downside-focused value systems (section: “AI Alignment”).
When launching the fund, we listed specific areas we consider to be particularly important. You can also take a look at the two grants we made last year. Below we provide additional guidance on grants we’d be excited to make.
Grants for capacity-building (e.g., workshops, community infrastructure, scholarships, funding for self-study). We’re particularly interested in increasing capacity in terms of top talent and quality (e.g., nuance, depth) of output as opposed to the size of the community in general. Examples of grantees might be:
- Somebody who is familiar with our research program and wants to learn about existing AI alignment approaches or pursue a degree relevant to AI governance.
- Somebody who wants to organize an event or provide a resource that addresses a current need in the community of people focused on s-risks.
Grants that enable research in priority areas (e.g., teaching buy-outs, funding for compute, funding for conference attendance, funding for independent research). Below are examples of research topics we currently find particularly relevant.
1. Surrogate goals are a promising avenue for reducing downside risks from threats. Ideally, they would transform threat games into games involving threats against “surrogate goals” which would not be catastrophic if carried out. However, surrogate goals are not well-studied, and a number of challenges need to be addressed. We expect additional research to illuminate whether and how they are a feasible intervention for reducing s-risk. (See the toy sketch after this list.)
2. Program games, and related ideas in machine learning such as learning with opponent-learning awareness, demonstrate how the ability of artificial agents to access one another’s source code (or policy parameters) can be used to achieve more cooperative outcomes. However, the technical study of program games, and of “open-source game theory” more generally, is in its infancy. There is also still the possibility of a “credibility gap” between the output of the program submitted in the program game and the actions carried out by the agents. Understanding how and when this gap can be closed is critical to understanding how program equilibrium can be used to reduce bargaining losses. (See the second sketch after this list.)
3. While behavioral game theory is now a well-established field, the behavioral game theory of human-AI interaction is almost entirely unexplored (but see Crandall et al. (2018) for a relevant study). Understanding how humans will interact with advanced AIs or AI-assisted humans in high-stakes scenarios may be crucial for preventing disvalue from threats, for instance, by informing the design of protocols for humans in an HCH-type scheme.
4. Technological changes are likely to affect the ability of powerful agents to make credible commitments. For instance, they might influence the likelihood of carrying out threats or of not giving in to threats. Cryptographic game theory, the game theory of ransomware, and the aforementioned work on program games are examples of research on the game-theoretic implications of various technological affordances. We are interested in work analyzing how various possible technological advancements could affect the analysis of threat scenarios via their effects on credibility.
5. Multiverse-wide cooperation via correlated decision-making and other forms of acausal trade may create opportunities for significant gains from trade. It is also possible that agents reasoning along these lines will be exposed to various risks, including s-risks. We are interested in better understanding the possibilities for gains and losses from acausal interaction. This includes both building out formal frameworks for, e.g., acausal bargaining, and investigating the physical and computational limits to the sorts of simulations that would be necessary to carry out some kinds of acausal interaction. (See our grant from last year to Daniel Kokotajlo for his PhD research on acausal trade.)
6. In order to improve our general picture of how advanced artificial intelligence might develop and the possible points of intervention in AI design for reducing s-risk, we are interested in foundational questions related to rational agency. See this overview for a treatment of the foundational questions several EAF-affiliated researchers are most interested in answering. More generally, we are interested in developing more satisfactory theories of rational decision-making for bounded and embedded agents.
7. We are also very interested in receiving applications for AI policy and strategy research. In most cases, we do not recommend that this type of research focus on s-risk reduction in particular because the field is still too young for that. However, we would be especially excited about applicants who are familiar with the state of research on s-risk. We are currently unable to provide further guidance on specific work we’re likely to fund in these areas.
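To make topic (1) above more concrete, here is a toy sketch of the surrogate-goal idea. The payoff numbers and the structure of the game are purely illustrative assumptions (they do not come from the fund’s materials or from any published analysis); the sketch only shows the intended effect of the transformation: the target commits to responding to threats against a low-stakes “surrogate” exactly as it would to the original threat, so the threatener’s incentives are meant to stay the same, while a carried-out threat is no longer catastrophic.

```python
# Toy threat game (entirely hypothetical payoffs): a threatener demands a concession;
# the target either gives in or refuses; if the target refuses, the threatener may
# carry out the threat. Outcomes are keyed by (target_action, threat_carried_out)
# and map to (target_payoff, threatener_payoff).
ORIGINAL_GAME = {
    ("give_in", False): (-10, 10),    # target concedes
    ("refuse",  False): (0, 0),       # threatener backs down
    ("refuse",  True):  (-1000, -5),  # threat executed: catastrophic for the target
}

# With a surrogate goal, the target commits to treating threats against a worthless
# "surrogate" (e.g., a token resource) exactly as it would treat the original threat.
# If the transformation works as intended, the threatener's incentives are unchanged
# and threats are redirected at the surrogate, so an executed threat is no longer
# catastrophic.
SURROGATE_GAME = {
    ("give_in", False): (-10, 10),
    ("refuse",  False): (0, 0),
    ("refuse",  True):  (-1, -5),     # only the surrogate is destroyed
}

def executed_threat_payoff(game):
    """Target's payoff in the worst branch: it refuses and the threat is carried out."""
    return game[("refuse", True)][0]

print(executed_threat_payoff(ORIGINAL_GAME))   # -1000
print(executed_threat_payoff(SURROGATE_GAME))  # -1
```

The open challenges mentioned above are largely about whether this “leave the threatener’s incentives unchanged” condition can actually be met by a real commitment mechanism.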
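Similarly, for topic (2), here is a minimal sketch of a program game in the spirit of the open-source game theory literature: each player submits a program that is shown the other program’s source code before choosing a move in a one-shot Prisoner’s Dilemma. The specific programs and payoffs are illustrative assumptions; the “cooperate only against an exact copy of myself” program is the simplest standard construction for achieving mutual cooperation.

```python
# Toy program game (illustrative only): a one-shot Prisoner's Dilemma in which each
# submitted program receives the source code of the opposing program before moving.
import inspect

# Payoffs for the row player: (my_move, their_move) -> my_payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def clique_bot(opponent_source: str) -> str:
    """Cooperate only if the opponent is running exactly this program; otherwise defect."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return "D"

def play(program_1, program_2):
    """Each program inspects the other's source, then both moves are played simultaneously."""
    move_1 = program_1(inspect.getsource(program_2))
    move_2 = program_2(inspect.getsource(program_1))
    return PAYOFFS[(move_1, move_2)], PAYOFFS[(move_2, move_1)]

print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation via source inspection
print(play(clique_bot, defect_bot))  # (1, 1): clique_bot defects against any other program
```

The “credibility gap” mentioned above corresponds to an assumption baked into `play()`: that the moves returned by the submitted programs are the moves the agents actually end up carrying out.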
Within these areas, we are open to funding both original research and, in cases like (3) and (4) that draw on well-established subfields, thorough literature reviews. Grantees are expected to display high levels of reasoning transparency and nuance in their work.
In general, we expect applications for this round to be most worthwhile for people who are already familiar with our work and prioritization. If you’re uncertain whether your project is in line with what we have in mind, feel free to reach out to us at fund@ea-foundation.org before investing the time in a complete application. We’re happy to provide feedback on brief descriptions of project ideas. We will also answer questions in the comment section.
Future application rounds
We are expecting to run another application round before the end of the year. By then, we expect to have published a research agenda which will provide more context and outline concrete research projects we’d like to see more work on. Before then, we expect to make only a limited number of grants to individuals who are already somewhat familiar with our work.
This is very exciting - it is really great to see another large-scale grantmaker with a public application process.
Nitpick: There is a typo in the choices for "What would you likely do if we decided not to fund your project?" in the application.
Nice to see this! I suppose the deadline to submit an application is before the 12th of August? (So latest acceptable date is the 11th)
Thanks! Clarified.