What is digital minds governance?
Researchers are increasingly asking whether AI systems could be conscious and have moral status. This work is important, but progress is slow: philosophers have been working on consciousness for centuries without consensus.
Perhaps superintelligent AI will help us solve it one day. But in the meantime, we will have to make many consequential decisions about the future of technology on our own. Even as researchers continue to make progress on fundamental questions about consciousness, we must also ask: Given radical uncertainty about AI consciousness and moral status, what should society do? This is the central question that the new field of digital minds governance attempts to answer.
Digital minds governance spans a wide range of questions:
- Near-term: How should the potential for AI moral status be addressed within AI companies? Should policymakers be considering this issue? How should we respond to evolving public opinion?
- Mid-term: What institutions and governance bodies do we need? Should AI systems have any legal standing? Should the creation of digital minds be restricted?
- Long-term: At what scale should digital minds be created, if at all? How should they be integrated into society? What would harmonious coexistence between humans and digital minds look like? How should digital minds in space be governed?
Digital minds governance barely exists as a field. We want to help get it started, so we interviewed experts to compare notes and generate ideas.
Expert Interviews
Between February and April 2026, we spoke with 29 experts whose work gave us reason to think they would have useful takes on digital minds governance. Most interviews lasted ~1 hour.
Our interviewees included:
- 4 cognitive science/consciousness researchers
- 9 philosophers with a variety of specialties
- 5 legal scholars
- 5 field-builders and nonprofit professionals
- 4 policy and governance professionals
- 2 AI lab researchers working on model welfare and AI ethics
Caveats about our methodology
- This is not meant to be particularly rigorous or perfectly representative. Our interviewees were disproportionately involved in AI safety research in the US and UK.
- We did not ask the exact same set of questions to each interviewee. When we say that some percentage of interviewees expressed a particular view, we are only considering the subset with whom we discussed it.
- Views are anonymized and paraphrased in our own words.
Key themes from the interviews
What, concretely, should be done?
We asked all interviewees what exactly should be done to make progress on the most important questions in digital minds governance. Below are a few highlights.
Public engagement and messaging
Interviewees generally expressed a strong preference for quiet, gradual institution-building over loud public messaging about digital minds. Many noted that public advocacy carries several risks that could cause it to backfire.
Key risks:
- Digital minds advocacy can easily be perceived as part of industry attempts to hype up the technology.
- Public engagement could attract industry opposition if AI companies fear regulation.
- Digital minds could be perceived as a distraction from the “real” problems of AI governance.
- Tensions with AI safety could cause divisions within otherwise aligned communities.
This does not mean avoiding public engagement altogether. Rather, interviewees suggested prioritizing quality and nuance over volume and recognition. There was significant disagreement over which messages should receive the most emphasis: some interviewees favored welfare- and consciousness-focused messaging, others pragmatic points about AI safety and corporate incentives, and still others dignity- and rights-based frameworks.
We think that much more research should be done to explore and test various messaging strategies.
Legislation and legal status
Should we avoid premature legislative action?
Most experts think it is too early for legislation addressing digital minds: almost all interviewees opposed legislative action, a few had mixed views, and one supported legislation of some kind.
Reasons cited:
- Scientific uncertainty about AI consciousness.
- Uncertainty about which policies would be helpful.
- Risk of “crying wolf” and harming the field’s credibility.
- Prematurely locking in hard-to-reverse governance frameworks.
- Potential backlash from affected interest groups.
- Safety concerns: it is unclear whether legislation addressing digital minds would be helpful or harmful for aligning AI systems with human values and preventing disempowerment of humans.
- Major legislative efforts to protect AI welfare could be perceived as undemocratic, absent greater issue salience and broader public support.
Several of the interviewees with mixed views favored narrow, targeted legal measures in the near term.
Should AI systems have rights or responsibilities?
A common theme in our discussions of concrete policy proposals was the possibility of assigning legal rights or responsibilities to AI systems. As a concrete example, we asked interviewees about bills introduced in several US state legislatures that would ban rights, responsibilities, and legal personhood for AI systems, and in some cases declare that AI systems cannot be conscious, sentient, or self-aware[1].
Although interviewees generally agreed that current AI systems should not have rights, all 27 with whom we discussed the bills raised concerns. Some felt that bans on legal personhood might prove premature, and ultimately harm AI welfare and/or human safety by interfering with mutually beneficial forms of trade. Others argued that it is inappropriate for governments to legislate on scientific questions about consciousness. At the same time, views differed on how serious these concerns are, with many suggesting that the bills currently being considered are unlikely to be particularly consequential.
How reversible is digital minds policy?
These discussions surfaced a crucial consideration: to what extent will policies passed now influence the long-run future for digital minds?
Perhaps near-term policies wash out in the long run, their effects being swamped by more important historical processes. In this case, one might prefer to focus on influencing the more important processes.
Alternatively, policies passed now could lock in hard-to-reverse governance regimes. Such path dependence is common in technological systems, and it could support delaying policy decisions until we have greater confidence in our approach.
Conceptual and strategic uncertainties
We also discussed several conceptual and strategic uncertainties that affect how the field should approach digital minds governance.
How do AI welfare and safety interact?
This came up across many interviews, and we think it’s one of the most important open strategic questions in the field.
A few points highlighted by interviewees:
- Incentivizing cooperative interactions between humans and AI systems could be important for safety, and could also benefit AI welfare. This might involve creating mechanisms for trade or negotiation with AIs.
- Legal schemes to protect AI welfare could undercut human control of AI systems.
- Many interviewees held nuanced positions, suggesting the importance of finding governance approaches that satisfy both safety and welfare concerns.
We (the authors) think the AI welfare and AI safety communities need to coordinate very closely, in order to find welfare interventions that are safety-positive and vice versa. Treating welfare questions as purely distinct from, or opposed to, safety questions could lead to bad outcomes in both directions.
Politics, coalitions, and public opinion
Another important theme was how the public and various ideological and interest groups will engage with digital minds.
In general, interviewees acknowledged that AI welfare is currently a very low-salience issue, and neither the political left nor right appears particularly friendly or hostile to it. Importantly, there is significant cross-cutting anti-AI sentiment, ranging from concerns about the impact of chatbots on children to environmentalist opposition to data center construction. Many anti-AI groups are dispositionally skeptical of digital minds concerns because they perceive them as part of broader AI hype.
Interviewees also noted that many people may start believing that AI systems are conscious in the future. This could become a powerful and volatile political force: it could lead to improved protections for AI systems, but it could also be a means for AI companies (or the AI systems themselves) to manipulate and disempower humans.
Religious and non-Western perspectives
The current digital minds field is philosophically narrow, leaning heavily on computational functionalism and Western analytic philosophy. Several interviewees noted that it will be important to engage with other perspectives and anticipate how they will approach the issue.
Several interviewees suggested that religion will inform how many people think about digital minds. For example, Christians might ask whether AIs can have souls or “dignity.” Two of the philosophers we interviewed specialize in philosophy of religion, and both emphasized that traditional Christian institutions will likely be skeptical of AI consciousness claims by default. Some suggested that Eastern religions might be more open to considering digital minds, and that new religious movements might develop with novel metaphysical beliefs about AI.
Additionally, China and non-Western perspectives were highlighted as a major gap in the field. Several interviewees noted that China may in some ways be more open to consideration of digital minds than Western countries[2]. Concepts of harmony and coexistence between humans and AIs may be salient in China. We know very few researchers in China thinking about these topics, so more field-building may be needed to explore this research direction further.
Research funding and field-building
Field-building and increasing research funding were universally supported by interviewees who discussed them. They highlighted the following research areas as particularly important:
- Technical research on AI welfare and mental capacities
- Philosophical foundations
- Macrostrategy and high-level governance research
- Forecasting future developments
- Tracking public opinion, especially longitudinal surveys with consistent methodology
- Legal scholarship on AI personhood
- Historical research on social movements and moral circle expansion
- Model spec and model character work at labs
We also discussed with many interviewees the possibility of international expert coordination on digital minds. All 25 interviewees who discussed this agreed that it seems like a robustly positive strategy. We are working with several colleagues to launch a project related to this later in 2026.
Conclusions and how to get started
These interviews provided useful input on multiple projects we’re working on. They also helped us get a sense of expert views and reactions to key ideas and proposals in digital minds governance. Some of this is implicit and hard to communicate, but we think it will inform our strategy and actions going forward.
Our general takeaway is that digital minds governance is important and neglected. The field needs many more people thinking through priorities and concrete actions. There are no clear paradigms, little infrastructure, no talent pipeline, few organizations, few jobs, and no settled research agendas. It is still very unclear what to do concretely.
The field is small enough that a motivated person can get close to the frontier in months. Backgrounds in law, policy (including AI policy), and the social sciences are especially useful, but anyone with good ideas can contribute. You do not need to be a philosopher of mind, an ethicist, or an ML researcher.
Current organizations focused on digital minds governance:
- The NYU Center for Mind, Ethics, and Policy is a key academic hub.
- Cambridge Digital Minds runs the Digital Minds Fellowship, online course, and annual Strategy Workshop.
- Eleos AI Research works on AI welfare and partners closely with Anthropic.
- AI safety fellowships such as Future Impact Group, Neuromatch, MATS, Pivotal, and SPAR increasingly offer mentorship on digital minds/AI welfare, including on governance.
Practical suggestions:
- Read the literature. See the new Introduction to Digital Minds online course to get acquainted with the field, or read introductory materials on your own. The list of open strategic questions for digital minds is also worth a look.
- Be cautious about public advocacy or unilateral political pushes. The field is fragile and bad early moves could do damage.
- Talk to people. Most people working on this are reachable. We’re happy to connect.
Thanks to all 29 interviewees. Views summarized here are not endorsements by the interviewees, and any mistakes in paraphrasing or interpretation are ours. Austin Smith was funded through the Pivotal Research Fellowship over the course of this work.
