1. Background and summary
As part of my broader research project on AI ideal governance (see also here and here), I have developed some possible research questions that could advance the sub-field. This post was partly inspired by similar forum and blog posts highlighting research it would be good to see in the EA community, along with an argument for the importance of collections on the forum. It aims to organise and highlight some areas that I think are significant, and in which further research can ultimately help us make better positive plans for AI development.
I’ve split the post into four groups of five questions, plus a final section on some organisations that have previously done work (or shown interest in topics) related to AI ideal governance. These lists are certainly not intended to be exhaustive; I hope instead that they might serve as inspiration for those who are interested in this area but unsure where to start.
If you’d like to discuss any of these questions or are thinking about performing research on similar topics, do get in touch!
2. Historical questions
The practice of imagining better futures has a long history, with many examples of failure and of association with bad regimes. There are lessons to be learned here that can improve our plans, which makes historical research relevant to AI ideal governance. Such work often touches on complicated empirical topics that may be best broken down into narrower questions, and may require a high level of domain knowledge about the political and intellectual history of the society being studied. Some possible questions include:
- What made utopia X motivating to people in scenario Y?
- What features of utopia X remain appealing today?
- Why was the attempt to implement utopia X unsuccessful?
- Does the history of utopianism suggest that there is something dangerous about the practice itself?
- Does the history of utopianism suggest that the practice encourages people to put ‘ends before means’?
3. Design questions
The activity of designing ideal theories (e.g., through worldbuilding) can be useful for clarifying our objectives and beliefs. However, previous attempts illustrate common pitfalls facing such exercises, ranging from failing to think clearly about the precise value of the design exercise to failing to ask broader questions that check whether the exercise has succeeded. Some questions I think authors would benefit from asking about their ideal theories include:
- Is my ideal theory designed to play a motivational or action-guiding role, and does it reflect this?
- Is my ideal theory inclusive of different identities and ways of life?
- Is my ideal theory designed to embody a virtue (e.g., justice or liberty), and does it achieve this?
- Would people have fun if my ideal theory were implemented?
- Is my ideal theory self-consistent?
4. Policy questions
A possible charge against AI ideal governance theories is that they are not always helpful in guiding action. Given this, it would be valuable to think more deeply about how to develop ideal scenarios that are useful to policymakers. Relevant research could include direct work on which strategies would get us closer to an ideal goal, or more abstract work on issues such as how AI ideal governance can be better integrated with other sub-fields of AI governance. Possible questions include:
- If AI policy X were enacted, what would be the best-case scenario?
- Which strategies would be required to reach ideal AI scenario X?
- Which AI policy areas might most naturally be able to integrate ideal theories?
- What is the relationship of AI ideal governance research to AI governance research more generally?
- How can AI ideal governance work better support other AI governance researchers?
5. Meta questions
Finally, there is also a group of open meta questions related to AI ideal governance. These are helpful for assessing the overall approach of the sub-field and ways it could be improved. Relevant work includes empirical study of psychological questions, which would require more precisely specified questions than those listed here. Possible meta questions include:
- Why are people motivated by AI ideal governance theories?
- Why has there been relatively little AI ideal governance research?
- Does developing AI ideal governance theories require technical knowledge about AI?
- How can AI ideal governance theories be useful, despite the epistemic challenge to longtermism?
- Is there an intrinsic impulse to think positively about the future?
6. Organisations interested in AI ideal governance (and related areas)
As AI ideal governance is a somewhat interdisciplinary topic, with related sub-fields across different disciplines, I have compiled a (non-exhaustive) list of organisations that have done interesting work (or shown interest in topics) related to AI ideal governance. For those interested in the field, I’d recommend following these organisations and looking out for related opportunities. This list of early career research opportunities also includes good places to pursue these questions. Relevant organisations include:
- Convergence Analysis
  - ‘AI clarity’ research area
- Future of Life Institute
- Foresight Institute
- Future of Humanity Institute
- Legal Priorities Project
  - ‘Institutional design’ research area
- The Blog Prize
- The Long Now Foundation