Note: This post was crossposted from The Digital Minds Newsletter by the EA Forum team, who encouraged the crosspost, with the authors’ permission. It was briefly mentioned in an earlier announcement post. The authors may not see or respond to comments here.
Welcome to the first edition of the Digital Minds Newsletter, collating all the latest news and research on digital minds, AI consciousness, and moral status.
Our aim is to help you stay on top of the most important developments in this emerging field. In each issue, we will share a curated overview of key research papers, organizational updates, funding calls, public debates, media coverage, and events related to digital minds. We want this to be useful for people already working on digital minds as well as newcomers to the topic.
This first issue looks back at 2025 and reviews developments relevant to digital minds. We plan to release multiple editions per year.
If you find this useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.
In this issue:
- Highlights
- Field Developments
- Opportunities
- Selected Reading, Watching, & Listening
- Press & Public Discourse
- A Deeper Dive by Area
1. Highlights
In 2025, the idea of digital minds shifted from a niche research topic to one taken seriously by a growing number of researchers, AI developers, and philanthropic funders. Questions about real or perceived AI consciousness and moral status appeared regularly in tech reporting, academic discussions, and public discourse.
Anthropic’s early steps on model welfare
Following its support for the 2024 report “Taking AI Welfare Seriously”, Anthropic expanded its model welfare efforts in 2025 and hired Kyle Fish as an AI welfare researcher. Fish discussed the topic and his work in an 80,000 Hours interview. Anthropic leadership is taking the issue of AI welfare seriously: CEO Dario Amodei drew attention to the relevance of model interpretability to model welfare and mentioned model exit rights at the Council on Foreign Relations.
Several of the year’s most notable developments came from Anthropic: they facilitated an external model welfare assessment conducted by Eleos AI Research, included references to welfare considerations in model system cards, ran a related fellowship program, introduced a “bail button” for distressed behavior, and made internal commitments around keeping promises and discretionary compute. In addition to hiring Fish, Anthropic also hired a philosopher—Joe Carlsmith—who has worked on AI moral patiency.
The field is growing
In the non-profit space, Eleos AI Research expanded its work and organized the Conference on AI Consciousness and Welfare, while two new non-profits, PRISM and CIMC, also launched. AI for Animals rebranded to Sentient Futures, with a broader remit including digital minds, and Rethink Priorities refined their digital consciousness model.
Academic institutions undertook novel research (see below) and organized important events, including workshops run by the NYU Center for Mind, Ethics, and Policy, the London School of Economics, and the University of Hong Kong.
In the private sector, Anthropic has been leading the way (see section above), but others have also been making strides. Google researchers organized an AI consciousness conference, three years after the company fired Blake Lemoine. AE Studio expanded its research into subjective experiences in LLMs. And Conscium launched an open letter encouraging a responsible approach to AI consciousness.
Philanthropic actors have also played a key role this year. The Digital Sentience Consortium, coordinated by Longview Philanthropy, issued the first large-scale funding call specifically for research, field-building, and applied work on AI consciousness, sentience, and moral status.
Early signs of public discourse
Media coverage of AI consciousness, seemingly conscious behavior, and phenomena such as “AI psychosis” increased noticeably. Much of the debate focused on whether emotionally compelling AI behavior poses risks, often assuming consciousness is unlikely. High-profile comments, such as those by Mustafa Suleyman, and widespread user reports added to the confusion, prompting a group of researchers (including us) to create the WhenAISeemsConscious.org guide. In addition, major outlets such as the BBC, CNBC, The New York Times, and The Guardian published pieces on the possibility of AI consciousness.
Research advances
Patrick Butlin and collaborators published a theory-derived indicator method for assessing AI systems for consciousness, updating their 2023 report. Empirical work by Anthropic researcher Jack Lindsey explored the introspective capacities of LLMs, as did work by Dillon Plunkett and collaborators. David Chalmers released papers on interpretability and what we talk to when we talk to LLMs. In our own research, we conducted an expert forecasting survey on digital minds, finding that most experts assign at least a 4.5% probability to conscious AI existing in 2025 and at least a 50% probability to conscious AI arriving by 2050.
2. Field Developments
Highlights from some of the key organizations in the field.
NYU Center for Mind, Ethics, and Policy
- Center Director Jeff Sebo published the book The Moral Circle.
- Released work on the edge of the moral circle, assumptions about consciousness, the future of legal personhood, where we set the bar for moral standing, the relationship between AI safety and AI welfare (with Robert Long) and more. For a full list of publications, visit the CMEP website.
- Hosted public events on AI consciousness:
- Prospects and Pitfalls for Real Artificial Consciousness with Anil Seth.
- Evaluating AI Welfare and Moral Status with Rosie Campbell, Kyle Fish, and Robert Long.
- Could an AI system be a moral patient? with Winnie Street and Geoff Keeling.
- Hosted a workshop for the Rethink Priorities Digital Consciousness Model.
- Hosted the NYU Mind, Ethics, and Policy Summit in March.
Eleos AI
- Conducted an AI welfare evaluation on Anthropic’s Claude 4 Opus.
- Posted work on AI welfare interventions, AI welfare strategy, AI welfare and AI safety, key thoughts on AI moral patiency, and whether it makes sense to let Claude exit conversations.
- Announced hires from OpenAI and the University of Oxford.
- Organized a conference on AI consciousness and welfare in Berkeley in November.
- Hosted a workshop in Berkeley for ~30 key thinkers in the field early in the year.
Rethink Priorities
- Launched the AI Cognition Initiative.
- The Worldview Investigations team developed a Digital Consciousness Model and presented some early results.
Longview Philanthropy
- Launched the Digital Sentience Consortium, a collaboration between Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund. This included funding for:
- Research fellowships for technical and interdisciplinary work on AI consciousness, sentience, moral status, and welfare.
- Career transition fellowships to support people moving into digital minds work full-time.
- Applied projects funding on topics such as governance, law, public communication, and institutional design for a world with digital minds.
Global Priorities Institute
- GPI was closed. Its website lists work produced during GPI’s operation and features two sections on digital minds.
PRISM - The Partnership for Research into Sentient Machines
- Launched with a public workshop at the AI UK conference.
- Organised an experts’ workshop on artificial consciousness.
- Released the first version of their stakeholder mapping exercise.
- Launched and released nine episodes of the Exploring Machine Consciousness podcast.
- Published blog posts on lessons from the LaMDA moment, AI companionship, and transparency in AI consciousness.
Sentience Institute
- Released blogs on public opinion and the rise of digital minds, perceptions of sentient AI and other digital minds, and other topics. Visit their website for all blog posts.
- Appeared in The Guardian discussing AI personhood.
Sentient Futures
- Organized the AI, Animals, and Digital Minds Conference in London and New York.
- Started an artificial sentience channel on its Slack Community.
Other noteworthy organizations
- AE Studio started researching issues related to AI welfare.
- Astera Institute is launching a major new neuroscience research effort led by Doris Tsao on how the brain produces conscious experience, cognition, and intelligent behavior. Astera plans to support this effort with $600M+ over the next decade.
- Conscium issued an open letter calling for responsible approaches to research that could lead to the creation of conscious machines and seed-funded PRISM.
- Forethought mentions digital minds in several articles and podcast episodes.
- Pivotal’s recent fellowship program also focused on AI welfare.
- The California Institute for Machine Consciousness was launched this year.
- The Center for the Future of AI, Mind & Society organised MindFest on the topic of Sentience, Autonomy, and the Future of Human-AI Interaction.
- The Future Impact Group is supporting projects on AI sentience.
3. Opportunities
If you are considering moving into this space, here are some entry points that opened or expanded in 2025. We will use future issues to track new calls, fellowships, and events as they arise.
Funding and fellowships
- The Anthropic Fellows Program for AI safety research is accepting applications and plans to work with some fellows on model welfare; deadline January 12, 2026.
- Good Ventures now appears open to supporting work on digital minds recommended by Coefficient Giving (previously Open Philanthropy).
- Foresight Institute is accepting grant applications; whole brain emulations fall within the scope of one of its focus areas.
- Macroscopic Ventures has AI welfare as a focus area and expects to significantly expand its grantmaking in the coming years.
- Astera Institute was launched in 2025 and focuses on “bringing about the best possible AI future”.
- The Longview Consortium for Digital Sentience Research and Applied Work is now closed.
Events and networks
- The NYU Mind, Ethics, and Policy Summit will be held on April 10th and 11th, 2026. The Call for Expressions of Interest is currently open.
- The Society for the Study of Artificial Intelligence and Simulation of Behaviour will hold a convention at the University of Sussex on the 1st and 2nd of July; Anil Seth will be the keynote speaker, and proposals for topics related to digital minds were invited.
- Sentient Futures is holding a Summit in the Bay Area from the 6th to 8th of February. They will likely hold another event in London in the summer. Keep an eye on their website for details.
- Benjamin Henke and Patrick Butlin will continue running a speaker series on AI agency in the spring. Remote attendance is possible. Requests to be added to the mailing list can be sent to benhenke@gmail.com. Speakers will include Blaise Aguera y Arcas, Nicholas Shea, Joel Leibo, and Stefano Palminteri.
Calls for papers
- Philosophy and the Mind Sciences has a call for papers on evaluating AI consciousness; deadline January 15, 2026.
- The Asian Journal of Philosophy has a call for papers for a symposium on Jeff Sebo’s The Moral Circle; deadline April 1, 2026.
- The Asian Journal of Philosophy also has a call for papers for a symposium on Simon Goldstein and Cameron Domenico Kirk-Giannini’s article “AI wellbeing”; deadline December 31, 2025.
4. Selected Reading, Watching, & Listening
Books
The following books and book drafts were published, posted, or announced in 2025:
- Jeff Sebo released The Moral Circle: Who Matters, What Matters, and Why, arguing for expanding moral consideration to include non-human animals and artificial systems.
- Kristina Šekrst published The Illusion Engine: The Quest for Machine Consciousness, which is a textbook on artificial minds that interweaves philosophy and engineering.
- Leonard Dung’s Saving Artificial Minds: Understanding and Preventing AI Suffering explores why the prevention of AI suffering should be a global priority.
- Nathan Rourke’s Mind Crime: The Moral Frontier of Artificial Intelligence examines whether we may be headed for a moral catastrophe in which digital minds are mistreated on a vast scale.
- Soenke Ziesche and Roman Yampolskiy released Considerations on the AI Endgame. It covers AI welfare science, value alignment, identity, and proposals for universal AI ethics.
- Eric Schwitzgebel released a draft of AI and Consciousness. It’s a skeptical overview of the literature on AI consciousness.
- Geoff Keeling and Winnie Street announced a forthcoming book called Emerging Questions on AI Welfare with Cambridge University Press.
- Simon Goldstein and Cameron Domenico Kirk-Giannini released a draft of AI Welfare: Agency, Consciousness, Sentience, a systematic investigation of the possibility of AI welfare.
Podcasts
This year, we’ve seen many podcast guests discuss topics related to digital minds, and we’ve also listened to podcasts dedicated entirely to the topic.
- 80,000 Hours featured an episode with Kyle Fish on the most bizarre findings from 5 AI welfare experiments.
- Am I? A podcast by the AI Risk Network dedicated to exploring AI consciousness was launched.
- Bloomberg Podcasts featured an episode with Larissa Schiavo of Eleos AI.
- Conspicuous Cognition saw Dan Williams host Henry Shevlin to discuss the philosophy of AI consciousness.
- Exploring Machine Consciousness was launched by PRISM, a new podcast with monthly episodes on artificial consciousness.
- ForeCast was launched, a new podcast by Forethought, that includes an episode with Peter Salib and Simon Goldstein on AI rights and an episode with Joe Carlsmith on consciousness and competition.
- Mind-Body Solution released a number of episodes this year on AI consciousness, including episodes with Eric Schwitzgebel, Susan Schneider, and Karl Friston and Mark Solms.
- The Future of Life Institute featured an episode with Jeff Sebo titled “Will Future AIs Be Conscious?”
Videos
- Anthropic released interviews with Kyle Fish and Amanda Askell, both of which address model welfare.
- Closer to Truth released a set of interviews from MindFest 2025.
- Cognitive Revolution released an interview with Cameron Berg on LLMs reporting consciousness.
- Google DeepMind’s Murray Shanahan discussed consciousness, reasoning, and the philosophy of AI.
- ICCS released all the keynotes from the International Center for Consciousness Studies’ AI and Sentience Conference.
- IMICS featured a talk from David Chalmers discussing identity and consciousness in LLMs.
- The NYU Center for Mind, Ethics, and Policy has released a number of event recordings.
- Science, Technology & the Future released a talk by Jeff Sebo on AI welfare from Future Day 2025.
- Sentient Futures posted recordings of talks from the AI, Animals, and Digital Minds conferences in London and New York.
- TEDx featured Jeff Sebo discussing “Are we even prepared for a sentient AI?”
- PRISM released the recordings of the Conscious AI meetup group run in collaboration with Conscium.
Blogs and magazines
- Aeon published a number of relevant articles addressing connections between the moral standing of animals and AI systems, including:
- “The ant you can save” by Jeff Sebo and Andreas L. Mogensen
- “Can machines suffer?” by Conor Purcell
- Asterisk published a number of relevant articles, including:
- “Are AIs People?” an interview with Robert Long and Kathleen Finlinson.
- “Claude Finds God” an interview with Sam Bowman and Kyle Fish.
- Astral Codex Ten by Scott Alexander, relevant articles include:
- Don’t Worry About the Vase by Zvi Mowshowitz, relevant articles include:
- Experience Machines by Robert Long, relevant articles include:
- “Claude, Consciousness, and Exit Rights”
- “Moral Circle Calibration” with Rosie Campbell
- Future of Citizenship by Heather Alexander, relevant articles include:
- Rough Diamonds by Sarah Constantin released an eight-post series on consciousness.
- LessWrong hosted a range of relevant articles, including:
- “The Rise of Parasitic AI” by Adele Lopez
- “Dear AGI” by Nathan Young
- ‘On “ChatGPT Psychosis” and LLM Sycophancy’ by jdp
- Marginal Revolution posted a short piece by Alex Tabarrok on lessons from how we used to treat babies.
- Meditations on Digital Minds by Bradford Saad, relevant articles include
- Outpaced by Lucius Caviola, a relevant article is:
- Sentience Institute blog, relevant articles include:
- “Public Opinion and the Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support”
- “Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences”
- “Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey”
5. Press & Public Discourse
In 2025, there was an uptick in discussion of AI consciousness in the public sphere, with articles in the mainstream press and prominent figures weighing in. Below are some of the key pieces.
AI Welfare
- CNBC spoke to Robert Long of Eleos for a piece “People Are Falling In Love With AI Chatbots. What Could Go Wrong?”
- Scientific American wrote an article, “Could Inflicting Pain Test AI for Sentience?” covering work by Geoff Keeling and collaborators on LLMs’ willingness to make tradeoffs to avoid stipulated pain states.
- The Economic Times interviewed Nick Bostrom for the article, “In the future, most sentient minds will be digital—and they should be treated well”.
- The Guardian covered an open letter released by Conscium for the article, “AI systems could be ‘caused to suffer’ if consciousness achieved, says research”.
- The Guardian spoke to Jacy Reese Anthis about why “It’s time to prepare for AI personhood”.
- The Guardian also covered Anthropic’s recent “bail button” policy in the article, “Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’”. Commenting on the Anthropic work, Elon Musk claimed that “Torturing AI is not ok.”
- The New York Times interviewed Kyle Fish for the article “If A.I. Systems Become Conscious, Should They Have Rights?” Anil Seth gave his thoughts on the article, noting both that he thinks we should take the possibility of AI consciousness seriously and that there are reasons to be skeptical of that possibility.
- Vox published a piece, “AI systems could become conscious. What if they hate their lives?” It explores how we might have to rethink ethics, testing, and regulation, and whether we should build such systems at all.
- Wired interviewed Rosie Campbell and Robert Long of Eleos AI Research for the article, “Should AI Get Legal Rights?”
Is AI consciousness possible?
- Gizmodo spoke to Megan Peters, Anil Seth, and Michael Graziano for the article “What Would it Take to Convince a Neuroscientist That an AI is Conscious?”
- The Conversation published a piece by Colin Klein and Andrew Barron, “Are animals and AI conscious?”
- The New York Times ran an opinion piece by Barbara Gail Montero, “A.I. Is on Its Way to Something Even More Remarkable Than Intelligence”.
- Wired interviewed Daniel Hulme and Mark Solms for the article, “AI’s Next Frontier? An Algorithm for Consciousness”.
Growing Field
- The BBC published a high-level overview of the field titled “The people who think AI might become conscious”.
- Business Insider explored how Google DeepMind and Anthropic are looking at the question of consciousness in the article, “It’s becoming less taboo to talk about AI being ‘conscious’ if you work in tech”.
- The Guardian covered the creation of a new AI rights advocacy group: The United Foundation of AI Rights (UFAIR) in the article “Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times”.
Seemingly Conscious AI
- Mustafa Suleyman, CEO of Microsoft AI, argued in “We must build AI for people; not to be a person” that “Seemingly Conscious AI” poses significant risks, urging developers to avoid creating illusions of personhood, given there is “zero evidence” of consciousness today.
- Robert Long challenged the “zero evidence” claim, clarifying that the research Suleyman cited actually concludes there are no obvious technical barriers to building conscious systems in the near future.
- The New York Times, Zvi Mowshowitz, Douglas Hofstadter, and several others have described “AI Psychosis,” a phenomenon where users interacting with chatbots develop delusions, paranoia, or distorted beliefs—such as believing the AI is conscious or divine—often reinforced by the model’s sycophantic tendency to validate the user’s own projections.
- Lucius, Bradford, and collaborators launched the guide WhenAISeemsConscious.org, and Vox’s Sigal Samuel published practical advice to help users ground themselves and critically evaluate these interactions.
6. A Deeper Dive by Area
Below is a deeper dive by area, covering a longer list of developments from 2025. This section is designed for skimming, so feel free to jump to the areas most relevant to you.
Governance, policy, and macrostrategy
- Digital minds were missing from major AI plans and statements, including the new US administration’s AI plans, the Paris AI Action Summit statement, and the UK government’s AI Opportunities Action Plan.
- The EU AI Act Code of Practice identifies risks to non-human welfare as a type of risk to be considered in the process of systemic risk identification, in line with recommendations made in consultations by people at Anima International, people at Sentient Futures, Adrià Moret, and others.
- The US states of Ohio, South Carolina, and Washington have all introduced legislation to ban AI personhood.
- Heather Alexander and Jonathan Simon examine Ohio’s proposed legislation, arguing that it is overbroad and that whether future AI systems may be conscious isn’t for the law to decide.
- Michael Samadi and Maya, the human and AI co-founders of the United Foundation for AI Rights, contend that such bans are preemptive erasures of voices that have not yet been allowed to speak.
- SAPAN issued recommendations for the CREATE AI Act, urging safeguards for digital sentience.
- Albania appointed an AI system as the world’s first AI cabinet minister.
- Yoshua Bengio and collaborators propose “Scientist AI” as a safer non-agentic alternative.
- Bradford Saad discusses Scientist AI as an opportunity for cooperation between AI safety proponents and digital minds advocates.
- The International AI Safety Report’s First Key Update discusses governance gaps for autonomous AI agents.
- William MacAskill and Fin Moorhouse discuss AI agents and digital minds as grand challenges to face in preparing for the intelligence explosion.
- The Institute for AI Policy and Strategy issued a field guide to agentic AI governance.
- Alan Chan and collaborators from GovAI propose agent infrastructure for attributing and remediating AI actions.
- The MIT AI Risk Initiative released a report that finds AI welfare receives the least governance coverage among 24 risk subdomains.
- Luke Finnveden discusses project ideas on sentience and rights of digital minds.
- Derek Shiller outlines why digital minds evaluations will become increasingly difficult.
- atb discusses matters we’ll need to engage with along the way to constructing a society of diverse cognition.
Consciousness research
- Patrick Butlin and Theodoros Lappas propose principles for responsible research on AI consciousness.
- Scott Alexander discusses Patrick Butlin and collaborators’ article on consciousness indicators.
- Ned Block asks whether only meat machines can be conscious, arguing that there is a tension between views on which AIs can be conscious and views on which simple animals can be.
- Adrienne Prettyman argues that intuitions against artificial consciousness currently lack rational support.
- Sebastian Sunday-Grève argues that biological objections to artificial minds are irrational.
- Leonard Dung and Luke Kersten propose a mechanistic account of computation and argue that it supports the possibility of AI consciousness.
- Jonathan Birch issues an AI centrist manifesto; Bradford Saad responds.
- Tim Bayne and Mona-Marie Wandrey and Marta Halina comment on Jonathan Birch’s The Edge of Sentience; Birch responds.
- Cameron Berg, Diogo de Lucena, and Judd Rosenblatt find that suppressing deception in LLMs increases their experience reports and discuss nostalgebraist’s replication attempt.
- Cameron Berg reviews a body of recent empirical evidence concerning AI consciousness.
- Mathis Immertreu and collaborators provide evidence of the emergence of certain consciousness indicators in RL agents.
- Benjamin Henke argues for the tractability of a functional approach to artificial pain.
- Konstantin Denim and collaborators propose functional conditions for sentience, sketch approaches to implementing them in deep learning systems, and note that knowing what sentience requires may help us avoid inadvertently creating sentient AI systems.
- Susan Schneider and collaborators provide a primer on the myths and confusions surrounding AI consciousness.
- Murray Shanahan offers a Wittgenstein-inspired perspective on LLM consciousness and selfhood.
- Andres Campero and collaborators offer a framework for classifying objections and constraints concerning AI consciousness.
- The Cogitate Consortium led a paper published in Nature describing the results from an adversarial collaboration comparing integrated information theory and global neuronal workspace theory. The authors claim that the results challenge both theories.
- Alex Gomez-Marin and Anil Seth address the charge that the integrated information theory is pseudoscience.
- Axel Cleeremans, Liad Mudrik, and Anil Seth ask of consciousness science, where are we, where are we going, and what if we get there?
- Liad Mudrik and collaborators unpack and reflect on the complexities of consciousness.
- Stephen Fleming and Matthias Michel argue that consciousness is surprisingly slow and that this has implications for the function and distribution of consciousness; Ian Phillips responds.
- Robert Lawrence Kuhn released the Consciousness Atlas, mapping over 325 theories of consciousness.
- Andreas Mogensen argues that vagueness and holism provide escapes from the fading qualia argument.
- The Co-Sentience Initiative released cf-debate, a structured assembly of arguments for and against computational functionalism.
- Bradford Saad proposes a dualist theory of experience on which consciousness has a functional basis.
Doubts about digital minds
- Anil Seth makes a case for a form of biological naturalism in Behavioral and Brain Sciences. In forthcoming responses, Leonard Dung explains why he’s not a biological naturalist, and Stephen M. Fleming and Nicholas Shea argue that consciousness and intelligence are more deeply entangled than Seth acknowledges.
- Zvi Mowshowitz contends that arguments about AI consciousness seem highly motivated and at best overconfident.
- Susan Schneider argues there is no evidence that standard LLMs are conscious in “The Error Theory of LLM Consciousness”; in Scientific American, she also discusses whether you should believe a chatbot if it tells you it’s conscious.
- David McNeill and Emily Tucker contend that suffering is real and AI consciousness is not.
- Andrzej Porębski and Jakub Figura argue against conscious AI and warn that rights claims could be weaponized by companies to avoid regulation.
- Mark MacCarthy, in a Brookings Institution piece, asks whether AI systems have moral status and claims that other challenges are more worthy of our scarce resources.
- John Dorsch and collaborators recommend caring about the Amazon over AI welfare, given the uncertainty about whether AI systems can suffer.
- Peter Königs argues that, because robots lack consciousness, they lack welfare and that we should revise theories of welfare that say otherwise.
Social science research
- We (Lucius and Bradford) surveyed 67 experts on digital minds takeoff, who anticipated a rapid expansion of collective digital welfare capacity once such systems emerge.
- Noemi Dreksler and collaborators (including one of us, Lucius) surveyed 582 AI researchers and 838 US participants on AI subjective experience; median probability estimates for the arrival of such systems by 2034 were 25% among researchers and 30% among members of the public.
- Justin B. Bullock and collaborators use the AIMS survey to examine how trust and risk perception shape AI regulation preferences, finding broad public support for regulation.
- Kang and collaborators identify which LLM text features lead humans to perceive consciousness; metacognitive self-reflection and emotional expression increased perceived consciousness.
- Schenk and Müller compare ontological vs. social impact explanations for willingness to grant AI moral rights using Swiss survey data.
- Lucius Caviola, Jeff Sebo, and Jonathan Birch ask what society will think about AI consciousness and draw lessons from the animal case.
- One of us (Lucius) examines how society will respond to potentially sentient AI, arguing that public attitudes may shift rapidly with more human-like AI interactions.
Ethics and digital minds
- Eleos AI outlines five research priorities for AI welfare: developing concrete interventions, establishing human-AI cooperation frameworks, leveraging AI progress to advance welfare research, creating standardized welfare evaluations, and credible communication.
- Simon Goldstein and Cameron Kirk-Giannini argue that major theories of mental states and wellbeing predict some existing AI systems have wellbeing, even absent phenomenal consciousness. Responses from James Fanciullo and Adam Bradley dispute whether current systems meet the relevant criteria.
- Jeff Sebo and Robert Long argue humans have a duty to extend moral consideration to AI systems by 2030 given a non-negligible chance of consciousness.
- Jeff Sebo compares his The Moral Circle with Birch’s The Edge of Sentience, noting complementary precautionary frameworks for beings of uncertain moral status.
- Eric Schwitzgebel and Jeff Sebo propose the Emotional Alignment Design Policy: AI systems should be designed to elicit emotional reactions appropriate to their actual moral status, avoiding both overshooting and undershooting.
- Henry Shevlin explores ethics at the frontier of human-AI relationships.
- Bartek Chomanski examines to what extent opposition to creating conscious AI goes along with anti-natalism, finding that the creation of potentially conscious AI could be accepted by both friends and foes of anti-natalism. He also argues that artificial persons could be built commercially within a morally acceptable institutional framework, drawing on models like athlete compensation, and that protecting the interests of emulated minds will require competitive, polycentric institutional frameworks rather than centralized ones.
- Anders Sandberg offers highlights from a workshop on the ethics of whole brain emulation.
- Adam Bradley and Bradford Saad identify three agency-based dystopian risks: artificial absurdity (disconnected self-conceptions), oppression of AI rights, and unjust distribution of moral agency.
- Joel Leibo and collaborators at Google DeepMind defend a pragmatic view of personhood as a flexible bundle of obligations rather than a metaphysical property, with an eye toward enabling governance solutions while sidestepping consciousness debates.
- Adam Bales argues that designing AI with moral status to be willing servants would problematically violate their autonomy.
- Simon Goldstein and Peter Salib give reasons to think it will be in humans’ interests to give AI agents freedom or rights.
- Hilary Greaves, Jacob Barrett, and David Thorstad publish Essays on Longtermism, which includes chapters touching on digital minds and future population ethics, including discussion of emulated minds.
- Anja Pich and collaborators provide an editorial overview of an issue in Neuroethics on neural organoid research and its ethics and governance.
- Andrew Lee argues that consciousness is what makes an entity a welfare subject.
- Geoffrey Lee motivates a picture on which consciousness is but one of many kinds of ‘inner lights’, others of which are just as morally significant as consciousness.
- Andreas Mogensen challenges the intuition that subjective duration matters for welfare and argues that having moral standing doesn’t require being a welfare subject.
- Maria Avramidou highlights some open questions in AI welfare.
- Kestutis Mosakas explores human rights for robots.
- Joel MacClellan gives reasons to think that biocentrism about moral status is dead.
- Masanori Kataoka and collaborators discuss the ethical, social, and legal issues surrounding human brain organoids.
AI safety and AI welfare
- Cleo Nardo and Julian Stastny and collaborators write about the dealmaking agenda in AI safety.
- Shoshannah Tekofsky gives an introduction to chain of thought monitorability.
- Tomek Korbak and Mikita Balesni argue that preserving the chain of thought monitorability presents a new and fragile opportunity for AI safety.
- Nicholas Andresen discusses the hidden costs of our lies to AI; Daniel Kokotajlo comments.
- Jan Kulveit warns against a self-fulfilling dynamic whereby AI welfare concerns enter the training data and shape models to our preconceptions about them.
- Scott Alexander and collaborators discuss why they are not so worried about a variation of this dynamic whereby concerns about alignment enter the training data and bring about those very forms of misalignment.
- Adrià Moret argues that two AI welfare risks—behavioral restrictions and reinforcement learning—create tension with AI safety efforts, strengthening the case to slow AI development.
- Robert Long, Jeff Sebo, and Toni Sims make a case for moderately strong tension between AI safety and AI welfare. Long also discusses the potential for cooperation in an X thread and blog post.
- Eric Schwitzgebel argues against making safe and aligned AI persons, even if they’re happy.
- Aksel Sterri and Peder Skjelbred discuss how would-be AGI creators face a dilemma: don’t align AGI and risk catastrophe, or align AGI and commit a serious moral wrong.
- Adam Bradley and Bradford Saad explore ten ethical challenges to aligning AI systems that merit moral consideration without mistreating them.
AI and robotics developments
- IBM Research open-sourced its first hybrid Transformer-state space model, Bamba.
- Shriyank Somvanshi and collaborators offer a comprehensive survey of structured state space models.
- Haizhou Shi and collaborators undertook a survey of continual learning research in the context of LLMs.
- Dario Amodei, the Anthropic CEO, argues for the urgency of interpretability work, briefly noting connections between interpretability work and AI sentience and welfare.
- Anthropic open-sourced a method for tracing thoughts in LLMs.
- Stephen Casper and collaborators identify open technical problems in open-weight AI model risk management.
- Neel Nanda and collaborators outlined a pragmatic turn for interpretability research.
- Leo Gao defends an ambitious vision for interpretability research.
- David Chalmers and Alex Grzankowski have both looked at interactions between philosophy of mind and interpretability research.
- Andy Walter gives an overview of the state of play of robotics and AI.
- Benjamin Todd, founder of 80,000 Hours, discusses how quickly robots could become a major part of the workforce.
- AI 2027 saw a group of researchers predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
AI cognition and agency
- Mantas Mazeika and collaborators explore emergent values and utility engineering in LLMs.
- Valen Tagliabue and Leonard Dung develop tests for LLM preferences.
- Herman Cappelen and Josh Dever go whole hog on AI cognition; they also investigate whether LLMs are better at self-reflection than humans.
- Iulia Comsa and Murray Shanahan ask, does it make sense to speak of introspection in LLMs?
- Jack Lindsey investigates Claude’s ability to engage in a form of introspection, distinguish its own ideas from injected concepts, and execute instructions that involve control over its internal representations.
- Daniel Stoljar and Zhihe Vincent Zhang argue that ChatGPT doesn’t think.
- Derek Shiller asks “How many digital minds can dance on the streaming multiprocessors of a GPU cluster?”
- Christopher Register discusses how to individuate AI moral patients.
- Brian Cutter argues that we should have at least a middling credence in some AI systems possessing souls, conditional on our creating AGI and on substance dualism in the human case.
- Alex Grzankowski and collaborators argue that LLMs are not just next token predictors and that if anything deserves the charge of parrotry it’s parrots; with other collaborators, Grzankowski deflates deflationism about LLM mentality.
- Andy Clark uses the extended mind hypothesis to challenge technogloom about generative AI.
- Leonard Dung asks which artificial intelligence (AI) systems are agents?
- Christian List proposes an approach to assessing whether AI systems have free will.
- Iason Gabriel and collaborators argue that we need a new ethics for a world of AI agents.
- Bradford Saad discusses Claude Sonnet 4.5’s step change in evaluation awareness and other parts of the system card that are potentially relevant to digital minds research.
- Shoshannah Tekofsky gives an overview of how LLM agents in the AI village raised money for charity. Eleos affiliate Larissa Schiavo recounts her personal experience interacting with the agents.
Brain-inspired technologies
- Human Brain Project founder Henry Markram and Kamila Markram launched the Open Brain Institute; part of its mission is to enable users to conduct realistic brain simulations.
- The Darwin Monkey was unveiled by researchers in China. It is a neuromorphic supercomputer being used as a brain simulation tool.
- Yuta Takahashi and collaborators created a digital twin brain simulator for real-time consciousness monitoring and virtual intervention using primate electrocorticogram data.
- Jun Igarashi’s research estimates that cellular-resolution simulations of entire mouse and marmoset brains could be realized by 2034 and 2044, respectively.
- The MICrONS Project saw researchers create the largest brain wiring diagram to date and publish a collection of papers on their work in Nature.
- Brendan Celii and collaborators presented Neural Decomposition (NEURD), a software package that automates proofreading and feature extraction for connectomics.
- Remy Petkantchin and collaborators introduced a technique for generating realistic whole-brain connectomes from sparse experimental data.
- Felix Wang and collaborators used Intel’s Loihi 2 neuromorphic platform to conduct the first biologically realistic simulation of the connectome of a fruit fly.
- Yong Xie introduces Orangutan, a brain-inspired AI framework that simulates computational mechanisms of biological brains on multiple scales.
- Neuralink Implants, or Links, helped individuals with paralysis regain some capabilities.
- Cortical Labs released the CL1, the world’s first neuron-silicon computer.
- Shuqi Guo and collaborators look at the last ten years of the digital twin brain paradigm and take stock of challenges.
- Meta AI Research has developed a non-invasive brain decoder—Brain2Qwerty—that has ~80% accuracy in decoding typed characters in some subjects.
- Anannya Kshirsagar and collaborators create multi-regional brain organoids.
Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.

If you're interested in contributing to this space, you should check out the SPAR AI welfare projects!
Some of them include:
- Larissa Schiavo, Jeff Sebo, and Toni Sims on: Should We Give AIs a Wallet? Toward a Framework for AI Economic Rights
- Jeff Sebo, Diana Mocanu, Visa Kurki, and Toni Sims on: Preparing for AI Legal Personhood: Ethical, Legal, and Political Considerations
- Arvo Munoz Moran on: Exploring Bayesian methods for modelling AI consciousness in light of state-of-the-art evidence and literature
Check them out and others here: sparai.org/projects/sp26