I am very pleased to share that I have been selected as a Research Fellow at the Future Impact Group.
Over the coming months, I will be working on questions of artificial sentience and artificial consciousness with Jeff Sebo (New York University) and a small interdisciplinary team composed of other Research Fellows at the Future Impact Group, as well as collaborators at the New York University Center for Mind, Ethics, and Policy. Our work lies at the intersection of ethics, philosophy of mind, and long-term governance.
Before turning to research projects, it is worth taking a moment to reflect on why artificial sentience and artificial consciousness deserve attention in the first place.
Why artificial sentience and artificial consciousness?
Much of today's conversation around AI ethics and adjacent areas focuses, rightly, on issues such as bias, misuse, and labour displacement. But there is a quieter question in the background, one that can feel speculative until suddenly it does not: What if some digital minds could one day have morally relevant experiences?
By digital minds, we refer to technological systems that may possess mental states and intelligence, such as AI, or may lack them altogether, such as computer simulations. By morally relevant experiences, we refer to the possibility, however uncertain or distant, that a digital mind might have subjective experiences, including pleasure or suffering, awareness, or interests of its own, as a result of acquiring sentience and/or consciousness. This does not require human-like properties, emotions, or self-conception. In moral philosophy, even relatively minimal forms of experience can matter a great deal.
History offers a sobering lesson here. Time and again, moral consideration has expanded only after prolonged delay, and often after immense harm, because we failed to take unfamiliar forms of vulnerability seriously. Enslaved people, animals, children, and other marginalised groups were once widely regarded as outside the circle of full moral concern, not because they did not have interests, but because their interests did not yet register as morally salient.
One of the motivations behind this fellowship is to ask whether we can do better this time, whether we can think ahead of ourselves, rather than retroactively justifying concern.
Our projects
Our team is working on three interconnected projects (the titles are provisional):
- A research ethics framework for digital minds, including sentient and/or conscious AI, drawing on well-established principles from human and animal research ethics.
- The individuation of digital minds, with particular attention to connected minds, collective systems, and questions of personal identity in AI.
- Digital embodiment, exploring the ways in which AI systems might be meaningfully embodied, even without biological bodies.
My focus will be on the first project: developing a research ethics framework that could guide researchers, policymakers, and institutions before we have high confidence about whether digital minds are sentient or conscious. I will also be providing feedback on the other two projects, which are closely related and philosophically rich. The expected outcome of this work is a public-facing, action-oriented report.
Work on digital minds can feel abstract, even uncomfortable. It asks us to imagine beings that we believe do not yet exist, and responsibilities we might prefer to postpone. But it is important to take these questions seriously, and to show a willingness to expand moral concern under uncertainty, rather than waiting for certainty to force our hand.
I am grateful for the opportunity to contribute to this work, and I look forward to sharing more as the projects develop.
About the Future Impact Group
The Future Impact Group is an organisation focused on anticipating and shaping the long-term consequences of AI. Its work spans AI policy, philosophy for safe AI, and AI sentience. It brings together researchers from philosophy, computer science, law, and policy to tackle questions that do not yet fit neatly into existing institutions.
Importantly, the Future Impact Group accepts fellowship applications in rounds that open throughout the year. If you are working on, or pivoting towards, research on high-impact questions at the frontier of ethics and emerging technology, this is well worth keeping an eye on.
Bonus
If you are interested in this field, I have a peer-reviewed article published in New-Techno Humanities, titled "Artificially Sentient Beings: Moral, Political, and Legal Issues".
—
Disclaimer: All views expressed here are my own and do not necessarily represent those of the Future Impact Group or the New York University Center for Mind, Ethics, and Policy.