
This report was written by Christian de Weerd for Rethink Priorities. It was primarily meant to inform our own thinking about the merits of biological naturalism about consciousness. However, as Christian did such an excellent job on the report, we thought it might be worth sharing more broadly. If you're working on this topic, please consider submitting an abstract for a conference that we're hosting at UCLA.

We should mention that, at least as far as we know, Christian won't be monitoring the comments.

 

Executive summary

  • Only a couple of years ago, the idea that AI systems might be conscious was widely regarded as a far-fetched and purely hypothetical scenario. Recent rapid developments in AI capabilities, however, have caused a surge of interest in the question of whether conscious AI may actually be realized sometime soon.
  • Most researchers agree that consciousness is relevant to moral status in one way or another. Therefore, finding out whether AI systems can be conscious is of great practical importance.
  • Researchers generally agree that the most promising scientifically-informed approach to answer questions about AI consciousness consists in assessing whether AI systems have the right kind of internal organization to realize consciousness.
    • One influential version of this approach assumes that computational functionalism is correct: The view according to which implementing the right computations is sufficient for consciousness.
    • Proponents of this computational approach have recently suggested that there are no obvious technological barriers to implementing consciousness in conventional AI systems in the foreseeable future.
  • In response to these developments, a number of researchers have advocated for an alternative and opposing view, called “biological naturalism”, which rejects the idea that implementing the right computations alone is sufficient to bring consciousness about.
    • Instead, biological naturalists argue in favour of the idea that biology is in one way or another necessary to bring consciousness about.
    • Most biological naturalists therefore reject the idea that consciousness in conventional AI systems is possible, since these systems cannot implement the kind of biological properties they associate with consciousness.
  • Even though a growing number of researchers are developing more explicit defenses of the view, biological naturalism as a research program arguably still remains in its infancy.
  • By analyzing recent work on the topic, this report aims to identify a number of research directions that can be pursued to further develop biological naturalism as a research program. These research directions are clarified by posing the following series of questions:
  • “How should biological naturalism be defined?”
    • Even though biological naturalism now has numerous explicit defenders, the way in which the view is understood appears to vary. An important task is to provide clarity on what biological naturalists are (supposed to be) committed to more generally and what the problem space that biological naturalists engage with should look like.
  • “How to resolve the (appearance of a) dialectical stalemate between biological naturalism and computational functionalism?”
    • A growing sentiment in the literature is that the debate between biological naturalists and computational functionalists might be stuck, with some even suggesting that agnosticism about AI consciousness is the most reasonable option. An important task is to figure out how this dialectical stalemate can be broken.
  • “How can biological naturalism be empirically supported?”
    • It remains relatively unclear what kind of empirical evidence would convincingly support biological naturalism. Even though some suggestions have been made, various challenges remain, and a more systematic discussion is lacking. More attention should therefore be devoted to analyzing how biological naturalism could or should be empirically supported.
  • “How can explanatory bridges between biological properties and consciousness shed light on questions about AI consciousness?”
    • An alternative way to motivate biological naturalism that can be gleaned from the literature is to show how biological properties explanatorily account for structural properties of conscious experiences. These insights can then be used to make assessments about the possibility of AI consciousness. More attention should be devoted to addressing the challenges that this approach faces.
  • “How can biological naturalists provide a deep explanation of consciousness?”
    • Some biological naturalists claim that biological naturalism is in a unique position to address the hard problem or to make progress on bridging the explanatory gap. More attention should be devoted to systematically analyzing this approach in more detail, including the way in which it aims to shed light on the possibility of AI consciousness.
  • “Which type of AI systems would be conscious if biological naturalism is correct?”
    • Biological naturalists have primarily focused on developing arguments against the idea that conventional AI systems can be conscious. What remains underexplored, and what should be given more attention, is a more systematic discussion of what kind of non-conventional architectures would be good consciousness candidates if biological naturalism is true.
  • The findings of this report suggest the following. What can be gleaned from the existing literature is that there are a number of interesting ways in which biological naturalism as a research program can be defended and developed. At the same time, there also exist a number of open questions and challenges that biological naturalists have to contend with moving forward. Despite these unresolved challenges, there are many indications that biological naturalism has the potential to become a thriving and widely pursued research program.

1. Introduction

Can machines ever become conscious? Are we on the precipice of bringing into existence artificial systems that are able to suffer? What would the conscious experiences of a Large Language Model be like, if they were possible at all? Only fairly recently have these questions come to be treated by many researchers as practically urgent rather than merely philosophically or scientifically intriguing. Given rapid developments in AI capabilities, with artificial systems exhibiting more and more conscious-like behaviours that, for many, provoke at least pre-theoretical intuitions or impressions that these systems may be conscious (Shevlin, 2024; Seth, 2025; Birch, 2025), these questions are now placed at the forefront of debates about the distribution of consciousness beyond the human case (see e.g., Butlin & Long et al., 2023; 2025; Chalmers, 2023; McClelland, 2025; Wiese, 2024; Dung, 2023, 2026; Birch, 2025; Aru et al., 2023; Shiller, 2024; Dung & Kersten, 2024; Milinkovic & Aru, 2026; Seth, 2025; Godfrey-Smith, 2024a; Block, 2025; Evers et al., 2024; de Weerd, 2026; Schneider, 2019; Shiller et al., 2026).

        Getting a grip on questions about the possibility of AI consciousness is viewed as urgent not just because AI consciousness may be a real, and not just far-fetched, possibility, but also because consciousness has a plausible link to moral standing. Specifically, even though it is hotly disputed whether consciousness is necessary for moral standing (Lee, 2019), or whether all kinds of conscious experiences have moral value (Chalmers, 2022, 2026), most scholars agree that at least some conscious experiences are sufficient for moral standing, with pain perhaps being the most pressing example (Birch, 2024).[1] With this in mind, it is perhaps not surprising that questions about AI consciousness play a prominent role in discussions about AI welfare (Long et al., 2024), with the biggest concern being that of AI suffering (Dung, 2026; Saad & Bradley, 2022; Metzinger, 2021).[2]

        A central question, then, is how credible and informed assessments about the possibility of AI consciousness can and should be made. How, for instance, should we determine the likelihood of any given artificial system being capable of suffering? Most scholars agree that scientific insights into consciousness in humans and animals will play a key role in answering such questions. A pressing issue, however, is that many indicators that have been developed to reliably indicate consciousness in the animal case cannot be straightforwardly brought to bear on questions about AI consciousness. The problem, more specifically, is that AI systems seem to be able (at least in principle) to replicate behaviours or cognitive capacities associated with consciousness in animals whilst lacking the kind of architecture that could plausibly bring consciousness about, an issue known as the gaming problem (Birch & Andrews, 2023; cf. Dung, 2023).

        In response to this issue, many (if not most) researchers suggest that the most promising scientifically-informed approach to answer questions about AI consciousness is to assess whether artificial systems have the right kind of architecture to support consciousness (Butlin et al., 2023, 2025). More specifically, on this approach, what researchers are looking for is whether the internal organization of artificial systems consists of the kind of properties that are also associated with consciousness in the human (or animal) case. What is hotly disputed and an ongoing debate, however, is which kinds of properties an artificial system needs to implement for consciousness to arise.

        One influential approach to assessing the possibility of AI consciousness assumes that computational functionalism is correct (see e.g., Butlin et al., 2023, 2025; Lau, 2022; Michel, forthcoming), which is roughly the view that implementing the right kinds of computations is sufficient for consciousness, and subsequently assesses whether AI systems indeed implement the right computations or not (see e.g., Butlin et al., 2023, 2025; Goldstein & Kirk-Giannini, 2024; Lau, 2022). On this approach, biology is assumed to be only contingently important in so far as it realizes the relevant computations in the biological case. So far, mainstream computational functionalists have tended to be relatively optimistic about the prospects of AI consciousness. Even though current AI systems do not appear to implement the relevant computations, it has been suggested that there are no obvious technological barriers to implementing consciousness in AI in the foreseeable future (Butlin et al., 2023, 2025; Lau, 2022).

        This approach to investigating artificial consciousness has recently received significant pushback from a sizable number of researchers who defend some version of biological naturalism: a family of views united in their commitment to biology being required for consciousness (see e.g., Searle, 2017; Seth, 2025; Milinkovic & Aru, 2026; Block, 2025; Feinberg & Mallatt, 2016; Godfrey-Smith, 2016, 2024b; Humphrey, 2023; Hunt & Jones, 2023; Aru et al., 2023; Lane, 2024). According to biological naturalists, biology does not merely contingently realize consciousness in the biological case but is (nomically) necessary for consciousness in general. Because of this, biological naturalists typically deny that we are on the precipice of bringing into existence conscious artificial systems, often suggesting that consciousness in conventional AI is impossible (Dung & Kersten, 2024), and instead claiming that consciousness in AI at the very least requires much more biologically-inspired architectures (Godfrey-Smith, 2024a; Brunet & Halina, 2020; McClelland & Halina, n.d.).

        Although biological naturalism is prominently and often invoked in debates about AI consciousness to deny the idea that AI consciousness (in conventional AI) is possible,[3] biological naturalism as a research program is arguably still in its infancy. The goal of this report is to investigate the prospects of biological naturalism as a research program. By analysing recent work on the topic, what this report shows is that there are a number of interesting ways in which biological naturalism as a research program can be defended and developed further, as well as there being various open questions and challenges that need to be contended with.

        The report is structured as follows. Each section is built around a question that clarifies which research directions seem fruitful to pursue moving forwards. Along the way, the open questions and challenges that need to be contended with are highlighted. Specifically, Section 2 focuses on the idea that biological naturalism has remained ill-defined. Section 3 focuses on the seeming dialectical stalemate between biological naturalism and computational functionalism and the different ways biological naturalism can be supported. Section 4 focuses on the implications of biological naturalism for consciousness in non-conventional architectures and a more general description problem for biological naturalism. Section 5 concludes the report.

2. How should biological naturalism about consciousness be defined?

A foundational issue is that biological naturalism has remained somewhat ill-defined. Although biological naturalism is sometimes treated as a monolithic view, upon closer examination it is not straightforward to pinpoint exactly what it is that biological naturalists (are supposed to) have in common, and little explicit attention has been paid to the question of how biological naturalism should be defined or characterized more broadly (although see Kleiner, 2025). Importantly, this does not seem to be a merely semantic problem. Instead, the way in which biological naturalism is defined can change the problem space that biological naturalists subsequently (ought to) engage with (see below).

        Even though little explicit attention has been given to this definitional issue, this doesn’t mean that no candidate definitions can be gleaned from the literature. For instance, biological naturalism is sometimes understood as the thesis that consciousness “is a property of only, but not necessarily all, living systems” (Seth, 2025, p. 2). Understood this way, a commitment to biological naturalism implies a commitment to the more general idea that it is necessary, for any system, to be alive for consciousness to come about. What’s more, given the qualification that not necessarily all living systems are conscious, the view does not entail “biopsychism” (Thompson, 2022): the view that all living systems, including cells, are conscious (Baluska & Reber, 2019), a position sometimes grounded in a life-mind continuity thesis. In that sense, the commitments of biological naturalism are less radical than those of biopsychism.

        Notice, however, that this way of defining biological naturalism pushes to the forefront a very specific set of questions that biological naturalists need to contend with. For instance, if being alive is necessary for consciousness, then questions about artificial consciousness are intimately tied to vexed questions about artificial life. What’s more, biological naturalists should then focus on arguments that specifically support the idea that being alive matters to consciousness. In addition, and relatedly, many of the more specific biological properties that biological naturalists often appeal to (like autopoiesis, metabolism, ephaptic coupling, and so on) do not obviously require a system to be alive in order to instantiate them (Kleiner, 2025; see also Halina & McClelland, n.d.). So the biological naturalist also needs to contend with the question of how their more specific proposals about which biological properties are relevant tie in with the more general commitment that being alive is necessary for consciousness.

        Biological naturalism is sometimes also understood as a more holistic approach to solving the mind-body problem (Searle, 2017; Feinberg & Mallatt, 2016; see also Godfrey-Smith, 2019, 2024b; Humphrey, 2012, 2025). Understood this way, the central aim of biological naturalism is to make sense more broadly of how consciousness is related to the physical world, and biology is taken to play an important role in this broader explanatory picture. Notice, again, that this characterization pushes a distinct set of questions to the forefront. Specifically, on this understanding of biological naturalism, it is a desideratum that hypotheses about the relevance of biology to consciousness also yield insights that are relevant to making progress on the explanatory gap (Levine, 1983; Godfrey-Smith, 2019) or the hard problem (Chalmers, 1996) (see e.g., Godfrey-Smith, 2019; Feinberg & Mallatt, 2019). Moreover, biological naturalists need to argue why making progress on the explanatory gap or the hard problem is required to make progress on questions about AI consciousness. In addition, not all hypotheses biological naturalists put forward have any clear relevance to issues related to the explanatory gap or the hard problem.

        Yet another way to understand biological naturalism is as a family of views that all argue in favour of the claim that at least some biological properties are nomologically necessary for consciousness to obtain (Saad, forthcoming; see also Seth, 2025, p. 15).[4] On this understanding of the view, biological naturalists claim that there is something special about certain aspects of biological processes in virtue of which consciousness comes about in the actual world. This is consistent with biological naturalists nevertheless having very different views amongst themselves about which specific biological properties are relevant to consciousness and why.[5] A biological naturalist may motivate this understanding of the view by suggesting that practically relevant questions about AI consciousness concern first and foremost nomological and not metaphysical possibility (Saad, forthcoming; Schlicht, 2025, p. 22; Chirimuuta, forthcoming).[6] [7]

        This way of understanding biological naturalism comes with its own problem space. Specifically, on this understanding of biological naturalism, a biological naturalist is neither required (as a matter of principle) to argue why being alive is necessary for consciousness nor are they required to show how hypotheses about the relevance of biology to consciousness address the explanatory gap or the hard problem. Accordingly, the biological naturalist may even end up remaining entirely neutral regarding the metaphysics of consciousness more broadly. For instance, a dualist and a reductive physicalist can both be biological naturalists in the sense discussed here, as long as they both accept the claim that certain biological properties are required for consciousness to come about in the actual world. A dualist, for instance, may say that consciousness is metaphysically distinct from the physical but tethered to biological states in the actual world (Saad, forthcoming, p. 11). A physicalist, on the other hand, may say that consciousness metaphysically reduces to the physical and is identical to certain biological states. Both positions are compatible with the claim that biology is nomologically necessary for consciousness.[8]

On this understanding of biological naturalism, then, the biological naturalist ought to focus on giving a satisfying explanation of why certain specific biological properties are nomologically necessary for consciousness. As it stands, however, little attention has been given to what such an explanation should look like. Many of the reasons why certain specific biological properties (like electrochemical signaling) are nomically relevant to consciousness are presented in very tentative ways (see e.g., Block, 2025), and it remains unclear what the contours of a robust explanation should ultimately look like.[9] So, on this understanding of biological naturalism, the central question seems to be what kind of explanations can be appealed to in order to support the idea that a certain biological property is nomically relevant for consciousness to obtain.

Lastly, biological naturalism is sometimes also understood in the following two-fold way. First, biological naturalism is initially understood via negativa, as the view that rejects, or is juxtaposed with, functionalism (see Michel, forthcoming). Second, on this understanding of the view, biological naturalists point to different kinds of non-functional biological properties in virtue of which consciousness is claimed to (at least partly) obtain, such as intrinsic properties of biological matter (Seth, 2025; Searle, 1992; Prinz, 2003), biological structures/substrates as opposed to biological functions (Overgaard & Kirkeby-Hinrup, 2024), or consciousness being type-identical to certain brain states (Sebo & Long, 2023; Dung & Kersten, 2024).

Again, this way of understanding biological naturalism raises its own (distinct) set of questions. For instance, does rejecting functionalism mean rejecting functionalism wholly, or rather preserving a more stringent, fine-grained version of functionalism (Harrison et al., 2022; Brunet & Halina, 2020; de Weerd, 2026)? If the latter, is there still a categorical or sharp distinction between biological naturalism and computational functionalism, or is the difference a matter of degree? Do supposed non-functional (e.g., intrinsic [Seth, 2025; Prinz, 2002]) properties of biological matter reduce to functional descriptions at different levels of analysis, or are they not reducible to functional descriptions at all? Can a view that posits the relevance of non-functional properties of biological material be empirically or theoretically justified (de Weerd, 2026)?

All of this highlights a more general point: the way in which biological naturalism is understood appears to vary, and it remains unclear what biological naturalism is supposed to be committed to with respect to its methodology and (metaphysical) assumptions. An important task is to clarify what biological naturalists are (supposed to be) committed to more generally, which questions should be pushed to the forefront, and which problem spaces are worth engaging with. It may turn out that it is too simplistic to think of biological naturalism as a monolithic view. Instead, it may be the case that biological naturalism is best understood as an umbrella term for a variety of approaches, all of which make different methodological and/or metaphysical assumptions about how the relevance of biology to AI consciousness should ultimately be supported and established (see also Section 3.2).

3. How to resolve the (appearance of a) dialectical stalemate between biological naturalism and computational functionalism?

3.1 The dialectical stalemate

An important and pressing issue concerns an (alleged) dialectical stalemate between biological naturalism and computational functionalism. As mentioned in the introduction, the dominant approach to assessing the possibility of AI consciousness assumes that computational functionalism is correct, which is roughly the idea that implementing the right kinds of computations is sufficient for consciousness, and subsequently assesses whether AI systems indeed implement the right computations (see e.g., Butlin et al., 2023, 2025; Goldstein & Kirk-Giannini, 2024; Lau, 2022). Crucially, computational functionalists typically argue that the nature of the relevant computations concerns certain coarse-grained functional properties (like global broadcasting [Dehaene, 2014] or perceptual reality monitoring [Lau, 2022])[10] and hold that biological properties are only contingently relevant to consciousness, as realizers of these computations in the biological case. Computational functionalists, understood in this way, deny that biological properties are necessary for consciousness, instead claiming that consciousness is medium-independent (Schlicht, 2025), at least to a significant extent (Seth, 2025).[11] 

        The problem, however, is that there is a perceived dialectical stalemate between the two sides about the necessity of biology for consciousness (McClelland, 2025), and neither biological naturalists nor computational functionalists have convincingly shown how this stalemate should ultimately be resolved. Therefore, an important task for biological naturalists is to address more explicitly the question of how progress on the stalemate can or should be made. In what follows, I will outline this stalemate before sketching how specific versions of biological naturalism could make progress on it, noting various challenges and open questions along the way.

        What reasons are there to think that there is currently a dialectical stalemate between these positions? To understand this, consider the kinds of considerations typically used to support either view. Computational functionalism and biological naturalism are traditionally supported by either philosophical or empirical considerations. Starting with the former, arguably one of the most widely invoked philosophical arguments for the idea that biology is not necessary for consciousness is the “fading qualia argument” (Chalmers, 1996; see also Mogensen, 2025).[12] Roughly, the fading qualia argument is a purported reductio ad absurdum that runs as follows. Suppose we imagine a scenario where a person’s neurons are gradually replaced with silicon chips that are ex hypothesi functionally isomorphic to neurons. Because functional isomorphism is preserved during this replacement procedure, there will be no change, detectable or otherwise, in the behavior of this person. They would still report being conscious, and they would still exhibit various other consciousness-related behaviors. In other words, nothing has changed.

However, if consciousness is supposed to be fundamentally tied to a biological substrate, this would mean either that consciousness gradually fades away as more and more neurons are replaced with silicon chips, or that consciousness abruptly disappears after enough neurons have been replaced. This would suggest that there is a radical disconnect between the consciousness-related behaviors of the person, including their verbal reports about consciousness, and their actual conscious experiences, a result the proponent of the fading qualia argument takes to be unacceptable (Chalmers, 2022, p. 290). Thus, biology does not matter to the implementation of consciousness (except for contingently realizing it in the biological case) and computational functionalism is better supported than biological naturalism.

        The issue, however, is that the fading qualia argument faces an “audience problem” (Udell & Schwitzgebel, 2021): It is primarily convincing to those who are already antecedently sympathetic to the idea that consciousness does not require biology. Biological naturalists themselves, however, remain unpersuaded and typically have various responses available to them. For instance, many biological naturalists deny that a silicon chip can be a functional isomorph of a neuron because their causal profiles inherently differ (Cao, 2022), instead suggesting that replacing neurons with silicon chips will change the functional organization of the person undergoing the replacement procedure (Godfrey-Smith, 2024a). Relatedly, some argue that the fading qualia argument targets a version of biological naturalism that biological naturalists wouldn’t and shouldn’t accept in the first place (de Weerd, 2026), or regard the argument as question-begging (Block, 2019). Mogensen (2025) has recently suggested that the unacceptable outcomes of the argument can be evaded by appealing to considerations about vagueness and holistic properties of the brain being necessary to consciousness. Setting aside the details of these responses, the more general point is that the fading qualia argument, at least so far, is unlikely to convincingly show that biology is irrelevant to consciousness.[13]

        What about the more general suggestion that endorsing biological naturalism is tantamount to endorsing an implausible version of an identity theory between consciousness and certain biological states? If there are independent considerations for favouring functionalism over an identity theory (Putnam, 1965), and biological naturalists implicitly assume such an identity theory, then computational functionalism can be preferred over biological naturalism on these more general grounds, because endorsing biological naturalism would amount to a dialectical regression towards a now widely (though not universally) abandoned metaphysics. Something like this argument frequently (implicitly or explicitly) appears in debates about artificial consciousness, for instance when the dispute is framed as reducing to biology versus functionalism (see e.g., Michel, forthcoming)[14] or as a dispute about preferring either functional roles or the realizers of those functional roles (Block, 2025). On this framing, biology is assigned, from the outset, a role subordinate to functional roles at a different level of functional organization.

        The problem with this argument, however, is that it only targets a very specific version of biological naturalism, and does not address the fact that most biological naturalist views can (at least in principle) readily be understood as fine-grained functionalist views. In fact, many proponents of biological naturalism explicitly or implicitly endorse some form of functionalism (Harrison et al., 2022; Godfrey-Smith, 2024a; Brunet & Halina, 2020; de Weerd, 2026). That is, most biological naturalists appear to endorse the idea that functions are ultimately what matters to constituting consciousness, but they put significant constraints on the realization base (Harrison et al., 2022). If anything, a biological naturalist may retort that fine-grained functionalism is a dialectical progression that further develops what functionalism plausibly amounts to (Godfrey-Smith, 2016), not a regression towards a widely abandoned identity theory. In other words, even if it is correct that functionalism is the only viable framework to operate in (but see Polger & Shapiro, 2016), no good reasons have been given why biological naturalists cannot operate in a functionalist paradigm just as much as computational functionalists can (de Weerd, 2026).

All of this suggests that there currently seem to be no convincing (a priori) philosophical considerations that clearly favour computational functionalism over biological naturalism (or vice versa). What about empirical considerations? One view that has gained traction is that empirical considerations do not convincingly favour either computational functionalism or biological naturalism. For instance, leading scientific theories of consciousness are often interpreted as computational theories of consciousness (Lau, 2022), suggesting that biology is not strictly required to satisfy the necessary and sufficient conditions spelled out by these theories. Other times, these theories are assumed to be computational theories of consciousness for pragmatic reasons, to investigate whether AI consciousness is possible assuming that computational functionalism is correct (Butlin et al., 2023). Yet this assumption itself is rarely explicitly argued for in these contexts (Butlin & Lappas, 2025; Seth, 2025).[15] However, even though many available scientific theories of consciousness (like GWT or PRM) can be interpreted computationally, the empirical evidence that these theories are based on does not mandate a computational interpretation. Various scholars have noted that the empirical evidence these theories appeal to underdetermines whether a biological or a non-biological interpretation is warranted (McClelland, 2025; Schlicht, 2025; Seth, 2025; de Weerd & Wiese, 2025; see also Block, 2002).

        Whilst some more cautiously suggest that current empirical evidence does not favour either side of the debate (Schlicht, 2025), McClelland (2025) has recently argued that there are more principled reasons for thinking that agnosticism about AI consciousness is a view worth taking very seriously (see also Block, 2002). Specifically, McClelland begins by distinguishing shallow explanations from deep explanations of consciousness. A shallow explanation of consciousness, roughly, merely identifies a systematic association between (some aspects of) consciousness and certain physical processes (e.g., global broadcasting) but does not explain why these associations occur. A deep explanation of consciousness, on the other hand, also reveals why consciousness is systematically associated with certain physical processes. That is, a deep explanation renders intelligible the connection between consciousness and certain physical processes.[16] A deep explanation of consciousness would entail a solution to the hard problem (Chalmers, 1996), or would entail closing the explanatory gap (Levine, 1983).

The problem, McClelland argues, is that both biological and non-biological scientific views about consciousness at best provide (in principle) a shallow understanding of consciousness. A shallow understanding of consciousness might warrant making inferences about consciousness in the biological case, but does not warrant extrapolating these insights to make inferences about consciousness in the artificial case because, he argues, “[e]vidence of GWT being correct in the biological case does not tell us whether non-biological global workspaces would also give rise to consciousness.” When it comes to artificial systems, McClelland argues, we simply hit an “epistemic wall”: absent a deep explanation of consciousness, we cannot know whether, for instance, non-biological workspaces also suffice for consciousness. Moreover, McClelland suggests, any suggestion on behalf of the biological naturalists that consciousness requires, for instance, GWT + X (e.g., metabolism) does nothing to advance the dialectic, but simply increases the number of available hypotheses that the empirical evidence cannot decide between.

Crucially, then, McClelland’s argument for agnosticism about AI consciousness assumes that, absent a deep explanation of consciousness, the dispute between biological naturalism and computational functionalism cannot be resolved. Put differently, McClelland demands a deep explanation of consciousness before evidence-based assessments of the possibility of AI consciousness can be made. His agnosticism about AI consciousness is a view worth taking seriously, and it nicely describes how the dialectic may appear to be stuck at this point.[17] 

Do we really find ourselves stuck in an inescapable and perpetual dialectical limbo, such that the only reasonable way out is to adopt agnosticism about AI consciousness? Although this is not inconceivable, an attractive alternative hypothesis is that it is currently simply too early to tell, because biological naturalism as a research program has been too underdeveloped to make any meaningful assessments of its overall prospects, or of the prospects of specific versions of it. In the next subsection, three different directions in which biological naturalism can be developed and defended will be distilled and discussed. Each of these approaches seems, at least prima facie, to have the potential to ultimately make progress on the dialectical dispute.

3.2 Addressing the dialectical stalemate

Jonathan Birch (2025, p. 22) has already noted that one reason why the current dialectic appears to be stuck and irresolvable is that biological hypotheses have not been properly or sufficiently worked out yet. According to Birch (2025), what this specifically means is that biological naturalists need to be more explicit about the empirical implications or predictions of their views, making it clearer how biological hypotheses can be empirically tested. The sentiment that biological naturalism as a research program is still in its infancy is arguably correct, and it also seems reasonable to suggest that more carefully working out the different ways in which the view can be supported or defended may reveal avenues for progress. However, there seem to be more strategies available to the biological naturalist than just the empirical approach Birch has in mind.

        More specifically, there appear to be at least two general strategies available. First, biological naturalists can assume that a deep explanation of consciousness is not required to convincingly argue that biology is necessary for consciousness. Following this strategy, they may (as Birch suggests) work on more carefully explicating what the empirical predictions of their hypotheses are and demonstrate that these predictions hold true (subsection 3.2.1). Alternatively, they may try to build explanatory bridges between properties of consciousness and certain biological properties (subsection 3.2.2). Second, biological naturalists can also assume that a deep explanation of consciousness is required to make progress on AI consciousness. On this approach, the biological naturalist aims to argue that their hypotheses about the relevance of biology to consciousness are (ultimately) well-positioned to provide such a deep explanation of consciousness (subsection 3.2.3).

        Even though all of these approaches are somewhat independent research directions that biological naturalists can focus on, they need not be mutually exclusive and can ultimately feed into each other. In what follows, these three research directions are discussed in turn, noting some challenges and obstacles along the way, and flagging some open questions that need to be grappled with in order to make progress.

3.2.1 How can biological naturalism be empirically supported?

An underexplored task is a more systematic analysis of how empirical evidence can, or should, be used to support biological naturalism while avoiding the underdetermination problem discussed earlier. One motivating worry behind taking agnosticism about AI consciousness seriously is that we have to stay within the biological realm to gather empirical evidence about consciousness (McClelland, 2025), since artificial systems cannot serve as a test case to derive insights (Schlicht, 2025, p. 23). However, and contrary to what McClelland claims, a biological naturalist can argue that it may not be true that all the evidence we can gather within this biological realm is, in principle, incapable of deciding whether certain candidate biological properties are necessary for consciousness.

        One recent suggestion pursuing this strategy comes from Jonathan Birch (2025), who argues in favour of the following cluster-approach to testing biological naturalism: If it is true that some candidate biological property X is necessary for consciousness then we should expect X to cluster with independent markers of consciousness across a wide range of species (Birch, 2025, p. 22). For instance, suppose that the hypothesis under consideration is that endothermy (warmbloodedness) is necessary for consciousness (Humphrey, 2023). Suppose furthermore that we have independent reasons to think that cognitive capacities like trace-conditioning, cross-modal learning, and rapid reversal learning are reliable indicators of conscious experiences in animals (Birch, 2022).[18] In that case, Birch argues, we should expect endothermy to cluster with these cognitive capacities such that no animal would exhibit these capacities without being warm-blooded.

In the long run, as enough evidence comes in, we would be able to tell whether a convincing empirical case can be made for the relevance of warmbloodedness to consciousness. Birch suggests that this point may generalize for many hypotheses proposed by biological naturalists:[19]

“Once specific biological naturalist hypotheses are properly fleshed-out, they often entail testable predictions. It’s just that the research program of trying to winnow down the list has barely begun – that’s why the list at present feels so unconstrained.”

Birch continues:

“Such a research program, over time, can gain traction on the bigger question of whether any version of biological naturalism is correct. It just requires a lot of work.”

The cluster-approach to testing biological naturalism fits with the broader idea that figuring out whether AI can be conscious requires first getting a clearer picture of the distribution of animal consciousness (Andrews & Birch, 2023). A further implication is that biological naturalists should focus on clarifying what empirical predictions (if any) their hypotheses entail and show how these can (or cannot) be amenable to empirical testing using approaches such as the cluster-approach.

It is important to recognize that empirically-driven approaches to testing biological naturalism can also in principle be developed in different ways, for instance by primarily focussing on evidence from neuroscience (see also Aru et al., 2023) or by primarily focussing on evolutionary evidence (Block, 2025). Any empirically-driven approach, however, will arguably face several further challenges that need to be addressed by biological naturalists to make progress. To demonstrate this, several challenges for Birch’s cluster-approach will be highlighted.

One challenge is that the cluster-approach does not explain why certain biological properties are necessary for consciousness, even if it shows that these properties are likely necessary for consciousness in the biological case (Wiese, 2026). Absent any explanatory story, this may lead to problematic assessments about the possibility of AI consciousness.

For instance, suppose a researcher indeed finds that endothermy systematically clusters across a wide range of animal species with independent indicators of consciousness (like trace conditioning, cross-modal learning, or rapid reversal learning [Birch, 2022]). Following the cluster-approach, this would support the idea that endothermy is necessary for consciousness, and Humphrey would be vindicated.[20] On this basis, the researcher argues that AI consciousness (in most cases) is not possible because AI systems are not warm-blooded (in the relevant sense).

However, as Humphrey himself argues, endothermy is necessary for consciousness insofar as it prepares the brain to deliver sentience. Specifically, Humphrey (2023) suggests that warmbloodedness enhances the conduction speed of nerve cells and decreases refractory periods, allowing the relevant feedback loops to bring about consciousness, and that it provides a kind of stability of these feedback loops that may be impossible in brains whose conduction velocities change all the time (Humphrey, 2023, p. 148). In other words, warmbloodedness is necessary for consciousness insofar as it enhances neural processing and maintains a kind of stability.[21] 

But if that is correct, then what matters is that artificial systems are capable of achieving and maintaining a certain processing speed, and it is unclear whether this really requires warmbloodedness in the sense found in biological beings. After all, there are various ways in which processing speed can be facilitated, with warmbloodedness being only one of them. If there are various non-biological alternatives available, then the suggestion that biological properties are required to bring about the relevant kind of processing speed and stability for consciousness in AI systems loses its force. The moral of this story is that even if Humphrey is correct about warmbloodedness being necessary for consciousness in the biological case, what the researcher should look for in artificial systems is not endothermy per se, but instead the kinds of processes that endothermy facilitates in biological beings.[22] 

This does not mean, of course, that the cluster-approach is inherently problematic. Instead, what this may suggest is that the cluster-approach needs to be supplemented with something like an explanatory account of why these biological properties are relevant to consciousness, or to the processes that bring consciousness about.[23] Alternatively, a strategy a biological naturalist can try to pursue is to develop arguments for why a purely empirically-driven version of the cluster-approach works after all, despite the challenges raised here.

Another challenge that biological naturalists sympathetic to the cluster-approach need to address is whether relatively uncontroversial independent indicators of consciousness can be identified. This challenge is not a more general sceptical argument that questions the validity of any indicator of animal consciousness (for a recent discussion, see Michel, forthcoming-b). Rather, the issue is that at least some prominent biological naturalists tend to favour very different indicators of consciousness than, say, the typical computational functionalist.

Specifically, many computational functionalists heavily rely on empirical evidence from human consciousness, derived in laboratory settings, to inform which indicators are indicative of consciousness in animals (Birch, 2022; Seth & Bayne, 2022). However, some biological naturalists reject these as indicative of consciousness in general (see e.g., Godfrey-Smith, 2020, p. 209; see also de Weerd & Wiese, 2025, p. 10 for a detailed discussion). Instead, they let other empirical considerations (e.g., evolutionary considerations) inform what are plausible independent indicators for consciousness. For instance, Humphrey (2023) argues that sensory play, and not a capacity like trace conditioning mentioned earlier, is a plausible independent indicator of consciousness in animals. The problem this raises is that someone like Humphrey may therefore reject that a lack of clustering of (say) warmbloodedness with cognitive capacities like trace conditioning constitutes good evidence against the idea that warmbloodedness is necessary for consciousness.

More generally, if it is the case that biological naturalists tend to favour other kinds of independent indicators of consciousness than proponents of the opposing view, then it is unclear how the cluster-approach can succeed in its aim to provide a neutral approach to testing biological naturalism. Therefore, it is crucial for biological naturalists (and computational functionalists) to clarify which indicators of animal consciousness are (1) plausible indicators of animal consciousness more generally and (2) relatively uncontroversial, insofar as the opposing side can also agree that they are plausible.

        Another challenge is that the cluster-approach may have limited applicability. Specifically, the approach seems very well-suited to test (in principle) biological properties like “warmbloodedness” (Humphrey, 2023) or “ephaptic coupling” (Hunt & Jones, 2023). This is because not all animals have these properties, or have them to the same degree: not all animals are warm-blooded and ephaptic coupling is inhibited by myelination in some animals (at least much more so than in other animals) (Birch, 2025, p. 23). However, this is not true for all biological candidate properties proposed by biological naturalists. That is, some popular biological candidate properties, a prime example being metabolism (Godfrey-Smith, 2016), are shared by all animals, with no obviously relevant variation that permits empirical testing via the cluster-approach. For such biological properties, the cluster-approach is in principle incapable of determining whether they are necessary for consciousness or not.

        Birch is aware of this, but suggests that if it turns out that testable hypotheses fail to support biological naturalism, and biological naturalists have to “retreat to properties common to all and only living things – such as the bare fact of having metabolism – where there is no variation to provide any basis for comparative testing”, then this is tantamount to “a rather desperate, last-ditch attempt to salvage a position that the evidence has backed into a tight corner” (Birch, 2025, p. 24). It is certainly true that retreating to untestable biological properties merely for the sake of preserving a dogmatic commitment to biological naturalism is problematic. However, it is not clear whether this analysis does justice to the views of those biological naturalists who claim that biological properties like metabolism are necessary for consciousness (notably Godfrey-Smith, 2016, 2024b; see de Weerd [forthcoming] for a detailed discussion). That is, biological naturalists can also argue in favour of independent reasons for certain biological properties being necessary for consciousness even if these biological properties (like metabolism) are not themselves amenable to empirical testing (in the way Birch envisions). This strategy is discussed in the next subsections.

3.2.2 How can ‘explanatory bridges’ between biological properties and consciousness shed light on questions about AI consciousness?

So far, the idea has been that biological naturalists can make progress by carefully working out the empirical predictions of their views (if there are any) and testing these predictions by methods such as (but not limited to) the cluster-approach discussed earlier. However, a biological naturalist can also argue for the necessity of a biological property for consciousness via an alternative route, namely by building explanatory bridges between that biological property and certain structural properties or aspects of conscious experience (Seth, 2009, 2025; Wiese, 2026; Milinkovic & Aru, 2026; de Weerd & Wiese, 2025; de Weerd, 2026).[24] On this approach, even if a certain biological property (such as metabolism) is not amenable to empirical testing (in the way described in the previous section), this explanatory route can still, at least in principle, convincingly show that such properties are necessary for consciousness.

        More specifically, the idea of this strategy is for a biological naturalist to argue (1) that a biological property X is explanatorily relevant to a structural property of conscious experience Y, and (2) that Y is a necessary property of conscious experience in general. If a biological naturalist makes a convincing argument for both (1) and (2), then the absence of X (or: the absence of features similar to X) in an artificial system would entail the absence of Y, and since Y is a necessary structural feature of conscious experiences, the absence of Y would mean that consciousness is not present.

        Take, for example, the recent suggestion by Milinkovic and Aru (2026) that the properties of scale-inseparability and hybrid dynamics that are characteristic of biological rather than digital computations are necessary for consciousness because they partially explain structural properties of consciousness itself (i.e. its unity, differentiation, temporal coherence, and context-sensitivity; for a detailed discussion see Milinkovic & Aru, 2026, p. 9). Milinkovic & Aru argue that since digital computations facilitate neither scale-inseparability nor hybrid dynamics, various structural properties of conscious experience (most notably unity) remain unaccounted for. And if these structural properties are themselves necessary properties of conscious experiences, as Milinkovic & Aru suggest, then the absence of the biological properties (i.e. biological computations) that do account for these properties entails the absence of consciousness in conventional AI systems.

        What is crucial here is that this strategy does not, at least not obviously, force the biological naturalist to provide a deep explanation of consciousness. After all, what the biological naturalist aims to do is account for certain structural features of conscious experiences. Even if a biological naturalist successfully explains certain structural features of conscious experiences, there may very well still be a lingering explanatory gap (see also Bayne, 2003; de Weerd, forthcoming). That is, even after successfully establishing certain explanatory bridges between consciousness and certain biological properties, it is perfectly reasonable to posit that it has still not been explained why there is any conscious experience at all.[25] 

Some biological naturalists claim that building such explanatory bridges will ultimately dissolve (or contribute to dissolving) the explanatory gap or the hard problem (Seth, 2021, 2025). But this stronger claim, at least on this version of the approach, is not assumed to be required for a biological naturalist to make progress on questions about AI consciousness. Even absent a deep explanation of consciousness, this explanatory strategy can still (in principle) make a convincing case for the necessity of biology for AI consciousness. The approach is therefore particularly interesting because it strikes something of a middle ground between the other approaches discussed: Compared to the empirically-driven approach it places more emphasis on the question why biological properties are relevant to conscious experiences without aiming to address the hard problem or explanatory gap directly. With that said, the approach also faces some serious challenges that arguably warrant more attention from biological naturalists wishing to pursue this strategy.

Ideally, a candidate biological proposal provides a crucial thesis (Saad, forthcoming) insofar as the proposal, if correct, should decrease the likelihood that AI consciousness is possible.[26] However, for this type of argument to work, the biological naturalist must, as mentioned earlier, not only convincingly show that their proposed candidate biological property explanatorily accounts for a certain structural property of conscious experiences, but also (independently) show that conscious experiences in general necessarily have this structural property. That is, they need to convincingly argue that consciousness cannot exist without this structural property being present.

        The problem, more generally, is that it is not clear why the kind of structural properties of conscious experiences (like unity) biological naturalists tie (or could tie) to biological properties specifically need to be involved in all possible conscious experiences. For instance, Milinkovic and Aru (2026) may well be right in suggesting that scale-inseparability of biological computations constitutively accounts (at least partly) for the fact that our conscious experiences are unified, or more specifically integrated into a unified whole. However, what is less clear, and at least much more controversial, is whether all conscious experiences need to be unified (Bayne, 2010). And if unity is not a necessary structural property of conscious experiences, then it still remains possible that AI systems which fail to implement the right kind of scale-inseparability simply have disunified conscious experiences.[27]

        Suppose for the moment that a biological naturalist succeeds in convincingly showing that a certain biological property explains (partly) a certain structural feature of conscious experiences (e.g., unity), but fails to convincingly argue that all conscious experiences necessarily need to have this structural feature. In that case, the biological naturalist is not left completely empty-handed when it comes to providing some interesting insight about the possibility of AI consciousness. After all, they can still say something interesting about the possible phenomenal character of AI consciousness irrespective of making any claims about whether AI consciousness is possible or not. That is, they can still make the conditional claim that if a certain AI system is conscious but lacks the relevant biological property (e.g., scale-inseparability) then it cannot have conscious experiences with the structural property accounted for by this biological property (e.g., unified conscious experiences).

This more modest result aligns with McClelland’s (2025, p. 18) suggestion that endorsing agnosticism about AI consciousness is consistent with claiming that we can still say something meaningful about the potential phenomenal character of the conscious experiences that an AI system would have if it were conscious (see also Chrisley, 2008, p. 132). Specifically, McClelland suggests that even if we need to be agnostic about whether AI systems can be conscious, we may still be able to make informed judgements about whether an AI system is sentient (i.e. has valenced conscious experiences that either feel good or bad). This is because, McClelland argues, it is easier to assess whether a system has valenced states even when we do not know whether those states are conscious or nonconscious. And if an AI system lacks such valenced states, then it cannot be sentient, since sentience requires both valenced states and consciousness. In a similar vein, biological naturalists may still be able to rule out that an AI system can have (say) unified conscious experiences (Milinkovic & Aru, 2026), or (say) that the conscious experiences of AI systems have a perspectival structure (Godfrey-Smith, 2016, 2024), even if they fail to show that these properties are necessary for all conscious experiences.

        It may be worth noting, however, that depending on which structural property a biological naturalist is trying to account for, their proposal may lose much of its moral significance. For instance, whereas valence has a straightforward link to conscious experiences that are morally relevant (i.e. by making conscious states feel good or bad), this is far less straightforwardly true of properties like unity. Suppose, for example, that an artificial system lacks scale-inseparability (Milinkovic & Aru, 2026) and therefore (ex hypothesi) lacks unified conscious experiences. In that case, assuming that the artificial system is conscious, it may still have disunified conscious experiences of suffering. Yet it’s hard to see why these experiences should be any less morally relevant than negatively valenced experiences integrated into a unified whole. On the other hand, if a biological naturalist successfully argues that (say) unity is necessary for all conscious experiences, then the absence of (say) scale-inseparability is morally relevant. After all, the absence of scale-inseparability in an artificial system would entail that the system has no conscious experiences in general, let alone any conscious experiences that are most straightforwardly morally relevant (like valenced conscious experiences).

        Suffice it to say that the approach of building explanatory bridges between properties of consciousness and biological properties is potentially promising and can be developed in greater detail.

3.2.3 How can biological naturalists provide a deep explanation of consciousness?

The previously discussed empirically-driven approach and the explanatory bridge-building strategy both operate under the assumption that a deep explanation of consciousness is not required to make progress on questions about AI consciousness. That is, they assume that proposals about the relevance of biology to AI consciousness need not entail anything about how the hard problem (Chalmers, 1996) or the explanatory gap (Levine, 1983) should be addressed.[28] However, some may insist that making progress on addressing the explanatory gap directly is required to make headway on questions about AI consciousness (McClelland, 2025). For those sympathetic to that idea, an alternative strategy that accommodates it can be gleaned from the literature and developed in more detail moving forward.

        For a start, according to the explanatory gap worry (Levine, 1983), there is a strong sense in which the physical facts that are typically invoked when explaining consciousness seem unable to account for (i.e. establish an intelligible link with) conscious experiences. That is, the worry is that when given an explanation of which physical processes underlie conscious experiences (e.g., global broadcasting), there is still a deep sense of puzzlement about why that kind of physical process should be accompanied by any conscious experience at all. One strategy that biological naturalists can pursue is to argue that biological naturalism is uniquely positioned to make progress on addressing the explanatory gap. This strategy aligns with Searle’s (1992; 2017) initial formulations of biological naturalism, namely as a more holistic approach to the mind-body problem.

        What reasons are there to think that biological naturalists are somehow well-positioned to address the explanatory gap? Biological naturalists can, and sometimes do, argue that the aforementioned sense of bewilderment or puzzlement stems (at least partly) from pervasive misunderstandings about which physical processes are supposed to be involved with conscious experiences (Godfrey-Smith, 2016; 2019) and pervasive fixations on the wrong explanatory targets (Humphrey, 2025; Godfrey-Smith, 2019).

For instance, as Godfrey-Smith puts it: “Computation, rather than life, became the crucial bridging concept between mental and physical” (Godfrey-Smith, 2016, p. 2; see also Seth, 2025). According to Godfrey-Smith, this focus on computational explanation has made the gulf between consciousness and the physical seem unnecessarily wide because computational explanations abstract away from various biological details that are (allegedly) relevant to explaining consciousness. The result is an impoverished understanding of the physical as something like “dead matter”. To remedy this, Godfrey-Smith suggests that progress on narrowing the explanatory gap can only be made once various intricate biological details are taken into consideration again (Godfrey-Smith, 2019). More generally, the strategy that a biological naturalist tries to pursue here is to show that certain biological properties are relevant to AI consciousness because these properties are relevant to addressing the explanatory gap.

Of course, this puts matters crudely. The inclusion of (fine-grained) biological properties into an enriched understanding of the physical does not magically make the explanatory gap seem smaller. Instead, the arguments that biological naturalists develop on this approach are (or should be) more subtle and comprehensive (de Weerd, forthcoming). For instance, one way to justify the inclusion of certain biological properties into an understanding of the physical is by tying this move into a broader story about what the explanatory targets of consciousness researchers ought to be.

For example, a central aspect of Godfrey-Smith’s broader argument is that the explanandum (i.e. consciousness) should be recharacterized in terms of subjectivity or points-of-view, which is claimed to be a more scientifically tractable phenomenon (Godfrey-Smith, 2024b; see also Keijzer, 2025). According to Godfrey-Smith, once it is recognized that consciousness is fundamentally about points-of-view, it should become much clearer how various properties of nervous systems, including their metabolic aspects and large-scale dynamical properties (Godfrey-Smith, 2016, 2024b), are explanatorily relevant to what it means to have a point of view.

In a similar spirit of target-shifting, Humphrey (2025) argues that the gulf between consciousness and the physical seems so wide because of a pervasive fixation on establishing correlations between consciousness and neural substrates (i.e. neural correlates of consciousness). Instead, Humphrey argues that the explanatory target should be how it is that the brain gives us the idea that phenomenal properties (like intrinsicality, being immediately apprehended, being radically private, and so on) exist (Humphrey, 2012, 2016, 2025). Once that explanatory target is placed front and centre, Humphrey suggests, the question of how various biological properties (including endothermy) are explanatorily relevant to that explanatory target becomes much more tractable (for a detailed discussion see Humphrey, 2012, 2023, 2025).

The approach is ambitious. It aims to make progress on questions about AI consciousness by first addressing what is arguably one of the hardest questions of all. However, perhaps that is precisely what is required to ultimately answer questions about AI consciousness. Despite there being some interesting existing work that tries to pursue this more ambitious version of biological naturalism (e.g., Godfrey-Smith, 2019; Humphrey, 2023; see also Feinberg & Mallatt, 2016, 2019; Ginsburg & Jablonka, 2019), it remains significantly underexplored. In addition, a broader debate and more systematic discussion of how this strategy is supposed to work and whether existing proposals are convincing is currently missing (for exceptions see de Weerd [forthcoming] and Birch [2021]). Because of this, the approach arguably deserves much more attention than it has received so far.

4. Which types of AI systems would be conscious if biological naturalism is correct?

Another underexplored area of research concerns the question of which AI systems would be promising consciousness candidates if biological naturalism is correct. Or, similarly, what would a blueprint for building conscious AI systems look like if biological naturalism is true? More broadly, biological naturalists have primarily focussed on rebutting the idea that consciousness in conventional AI systems is possible (in principle), as computational functionalists suggest (Butlin et al., 2023, 2025; Milinkovic & Aru, 2026). Conventional AI is implemented on non-organic, silicon-based hardware with established (e.g., von Neumann) architectures that perform digital computations. Of particular interest in this context is a specific subset of conventional AI systems sometimes referred to as “Challenger-AI”: conventional AIs that also satisfy a variety of (computational) markers that normally signal the presence of consciousness in humans or animals (McClelland, 2025).

Computational functionalists are inclined to suggest that Challenger-AIs would be conscious (Butlin et al., 2023, 2025; Lau, 2022; Goldstein & Kirk-Giannini, 2024).[29] Biological naturalists have primarily focussed on giving reasons to deny that conventional AIs can be conscious, pointing more specifically to the claim that digital computers cannot implement various biological properties that may be relevant for consciousness (Seth, 2025; Godfrey-Smith, 2016, 2024a; Aru et al., 2023). Interestingly, while many biological naturalists claim that implementing consciousness in conventional AI is impossible, they often do not completely rule out the idea that AI consciousness is possible. Instead, they claim that the architectures of AI systems that would be candidates for consciousness should at the very least be much more biologically-inspired. For instance, Seth suggests that:

“… real artificial consciousness will only be possible if we create machines that are also in some relevant sense alive” (Seth, 2025, p. 22).

“Conscious AI will need to share substrate properties with living systems” (Seth, 2025, p. 27).

In a similar spirit, Godfrey-Smith suggests that:

“[A]rtificial systems need to have something physically like the features of nervous systems that I’ve been discussing. The argument is not that no artificial system could be conscious, but that such a system would have to be set-up in a fairly brain-like way” (Godfrey-Smith, 2024a, p. 11).

The problem is that these remarks only suggest an openness to the idea that AI systems with more biologically-inspired architectures are better consciousness candidates. But more precise suggestions are lacking: what is currently missing are systematic and explicit analyses of which non-conventional, biologically-inspired architectures are capable of generating conscious experiences if biological naturalism is correct. One notable exception comes from work by Brunet and Halina (2020), who argue that molecular machines (such as Brownian computers) implement the fine-grained functional features associated with the metabolic aspects of nervous systems that Godfrey-Smith claims are relevant to consciousness.[30]

However, there remains a plethora of open questions in this domain. For instance, consider hybrid systems that consist of brain organoids interfacing with conventional computing devices or LLMs specifically (Schneider et al., 2025; Kob, 2025; Morales Pantoja et al., 2023; Wiese, 2026). These systems (at least potentially) have all the right components for consciousness: various biological properties that biological naturalists often associate with consciousness, as well as computational properties that are often associated with consciousness. Consider now Seth’s suggestion that “conscious AI will need to share substrate properties with living systems” (Seth, 2025, p. 27). Hybrid systems certainly share many substrate properties with living systems. However, these substrate properties are integrated with the computational properties in very different ways from how this is done in the human or animal case. So, the question that remains unaddressed is whether these differences in integration matter to conscious experience or not.

Similar questions arise for other kinds of non-conventional architectures. For instance, biological naturalists tend to be more sympathetic to the idea that neuromorphic or analogue hardware can in principle facilitate the implementation of consciousness (Seth, 2025; Milinkovic & Aru, 2026; see also Shiller, 2024). This is, as Seth (2025, p. 10) suggests, because, among other things, neuromorphic chips can use spikes to communicate between hardware elements (Hochstetter et al., 2021; Merolla et al., 2014) and use analogue instead of digital computations (Douglas et al., 1995). This makes neuromorphic computing (by design) mimic how information is processed in the brain much more closely than digital computation does, thereby making neuromorphic systems more promising consciousness candidates.

However, as Milinkovic and Aru (2026, pp. 11-12) discuss, neuromorphic systems come in a variety of different forms, each of which mimics (different) substrate properties of biological systems in its own way. Accordingly, it remains an open question how closely a neuromorphic system needs to mimic the biological properties a biological naturalist thinks are relevant to consciousness for it to (potentially) be conscious. As with the hybrid systems case, what biological naturalists (should) say more specifically about what it takes for a neuromorphic system to be potentially conscious remains heavily underexplored (for an exception see Milinkovic & Aru [2026]).

A more general problem that may prevent any clear-cut answers about what it takes for an artificial system to be conscious if biological naturalism is correct is the following “description problem”. Most biological naturalists are inclined to endorse some version of “fine-grained functionalism” about consciousness (Harrison et al., 2022; Brunet & Halina, 2020; de Weerd, 2026). This means that the biological properties which biological naturalists associate with consciousness (e.g., metabolism [Godfrey-Smith, 2016] or autopoiesis [Seth, 2025]) can ultimately be individuated by their functional role. However, this opens the door for a sceptic to pose the following challenge:

Description problem: For any candidate biological property X, there exists at least one formalism or functional description Y such that at least some conventional digitally computing AI systems can in principle satisfy Y.

So, the description problem is the problem that any candidate biological property can be functionally or formally described in a way that makes its implementation in artificial systems possible, and it is not immediately obvious, assuming functionalism, what is wrong with describing the relevant biological property in this way (see also Kleiner, 2025). For instance, Schwitzgebel (2026) poses a version of this problem specifically for Seth’s suggestion that autopoiesis may be necessary for consciousness, because the idea of autopoiesis can be understood in such a way that conventional AI systems count as autopoietic systems after all.[31]

The description problem seems to threaten most biological proposals. To take another example, Seth (2025, p. 10) points to the idea that different kinds of (non-Turing) computations may lie at the basis of consciousness, such as “mortal computations” (Kleiner, 2024; Ororbia & Friston, 2023; Hinton, 2022), “neural computations” (Piccinini, 2020), “analogue computations”, or “neuromorphic computations” (Mead, 2020; Schuman et al., 2022), none of which can be implemented on conventional silicon-based hardware; instead, they require unconventional neuromorphic or analogue hardware to be realized.[32] However, Seth also recognizes that standard Turing computation may still be able to simulate any of these computations, but suggests that “the computations themselves would be different, and the simulations could not be guaranteed to be exact” (Seth, 2025, p. 11). But this raises the question of why that difference would matter to consciousness, and why a very good approximation would not be sufficient if functionalism is correct.

The problem is not that there are no answers available here. Instead, the problem highlights that a relatively uncontroversial commitment to (fine-grained) functionalism is not enough for a biological naturalist to take a principled stance on the impossibility of consciousness in conventional AI. That is, if a biological naturalist aims at providing a principled argument that consciousness in conventional AI is impossible in general, convincingly showing (empirically or otherwise, see Section 3) that a certain biological property is necessary for consciousness may not be enough. Instead, for such an argument to work, they probably need to build in various substantive and controversial background assumptions about, for instance, constraints on functional implementation (see Shiller, 2024 and Dung & Kersten, 2024) or the nature of the computations underlying conscious experiences (see Kleiner, 2024). Such assumptions go well beyond a more general commitment to (fine-grained) functionalism being correct.[33]

It may be true that something like the description problem applies more broadly to non-biological views as well. That is, for any functionalist view, including computational functionalism, it may be possible to describe the relevant functions (e.g., global broadcasting) in a variety of different ways, and, for any functionalist view, it remains unclear which functional description is most apt and why (see Shevlin’s [2021] specificity problem). But even if the description problem is not unique to biological naturalism, that does not make it any less pressing for the biological naturalist specifically.

Taken together, an underexplored area of research is a more explicit analysis of what it would take to realize consciousness in non-conventional AI systems if biological naturalism is true, beyond general suggestions that neuromorphic architectures may be more promising. Figuring out more precisely what it takes for non-conventional AI systems to be conscious probably requires fleshing out biological proposals in much more detail beyond general claims about how biology is relevant to consciousness, making explicit which (metaphysical) background assumptions are required, and showing how the description problem should be addressed.

5. Conclusion

What can be gleaned from the existing literature is that there are a number of interesting ways in which biological naturalism as a research program can be defended and developed. At the same time, the program is arguably still in its infancy, and there are a number of open questions and challenges that biological naturalists have to deal with moving forward. This report aimed to identify at least some important (but non-exhaustive) topics and questions to which researchers can devote their attention. As it stands, it is probably too early to tell whether biological naturalism as a research program can ultimately provide convincing answers to vexed questions about the possibility of AI consciousness. However, there are enough signs that biological naturalism has the potential to become a thriving and widely pursued research program in the foreseeable future.


References

Aru, J., Larkum, M. E., & Shine, J. M. (2023). The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, 46(12), 1008–1017. https://doi.org/10.1016/j.tins.2023.09.009

Arvan, M., & Maley, C. J. (2022). Panpsychism and AI consciousness. Synthese, 200(3). https://doi.org/10.1007/s11229-022-03695-x

Baluška, F., & Reber, A. (2019). Sentience and Consciousness in Single Cells: How the First Minds Emerged in Unicellular Species. BioEssays, 41(3), e1800229. https://doi.org/10.1002/bies.201800229

Bayne, T. (2004). Closing the gap? Some questions for neurophenomenology. Phenomenology and the Cognitive Sciences, 3(4), 349–364. https://doi.org/10.1023/b:phen.0000048934.34397.ca

Birch, J. (2021). The hatching of consciousness. History and Philosophy of the Life Sciences, 43(4). https://doi.org/10.1007/s40656-021-00472-w

Birch, J. (2022). The search for invertebrate consciousness. Noûs, 56(1), 133–153. https://doi.org/10.1111/nous.12351

Birch, J. (2024). The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI (1st ed.). Oxford University Press.

Birch, J. (2025). AI consciousness: A centrist manifesto. PsyArXiv. https://doi.org/10.31234/osf.io/af7c9_v1

Birch, J., & Andrews, K. (2023). What has feelings. Aeon. https://aeon.co/essays/tounderstand-ai-sentience-first-understand-it-in-animals

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247. https://doi.org/10.1017/S0140525X00038188

Block, N. (2002). The Harder Problem of Consciousness. The Journal of Philosophy, 99(8), 391–425. https://doi.org/10.2307/3655621

Block, N. (2019). Fading qualia: A response to Michael Tye. In A. Pautz & D. Stoljar (Eds.), Blockheads! Essays on Ned Block’s philosophy of mind and consciousness. MIT Press.

Block, N. (2025). Can only meat machines be conscious? Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2025.08.009

Brunet, T. D. P., & Halina, M. (2020). Minds, Machines, and Molecules. Philosophical Topics, 48(1), 221-241.

Butlin, P., & Lappas, T. (2025). Principles for Responsible AI Consciousness Research. arXiv. https://doi.org/10.48550/arxiv.2501.07290

Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A. K., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (arXiv:2308.08708). arXiv. http://arxiv.org/abs/2308.08708

Butlin, P., Long, R., Bayne, T., Bengio, Y., Birch, J., Chalmers, D., Constant, A., Deane, G., Elmoznino, E., Fleming, S. M., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A., Schwitzgebel, E., Simon, J., & VanRullen, R. (2025). Identifying indicators of consciousness in AI systems. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2025.10.011

Cao, R. (2022). Multiple realizability and the spirit of functionalism. Synthese, 200(6), 506. https://doi.org/10.1007/s11229-022-03524-1

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Chalmers, D. J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. W. W. Norton

Chalmers, D. J. (2023). Could a Large Language Model be Conscious? PhilPapers. https://philarchive.org/rec/CHACAL-3

Chalmers, D. J. (2026). Sentience and Moral Status. In Geoffrey Lee & Adam Pautz, The Importance of Being Conscious. Oxford University Press.

Chirimuuta, M. (forthcoming). Neuromorphic computing and the significance of medium dependence.

Chrisley, R. (2008). Philosophical foundations of artificial consciousness. Artificial Intelligence in Medicine, 44(2), 119–137. https://doi.org/10.1016/j.artmed.2008.07.011

Degenaar, J., & O’Regan, J. K. (2017). Sensorimotor Theory and Enactivism. Topoi, 36, 393–407. https://doi.org/10.1007/s11245-015-9338-z

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Encodes Our Thoughts. New York: Viking Press.

Dennett, D. (1988). Quining Qualia. In Anthony J. Marcel & Edoardo Bisiach, Consciousness in Contemporary Science. New York: Oxford University Press.

de Weerd, C. R. (2024). A credence-based theory-heavy approach to non-human consciousness. Synthese, 203(5), 171. https://doi.org/10.1007/s11229-024-04539-6

de Weerd, C. R. (2026). What Matters Is Not What Lies Dormant Beneath: Why AI Consciousness Is Not About Biological Substrates. Synthese, 207(147), https://doi.org/10.1007/s11229-026-05534-9

de Weerd, C. R. (forthcoming). A Bridge Too Far? The Bottom-up Evolutionary Approach to Consciousness Science and Bridging the Explanatory Gap. Ergo: An Open Access Journal of Philosophy. https://philpapers.org/rec/DEWABT-2

de Weerd, C. R., & Wiese, W. (2025). Computation + X: A Constraint-based Approach to Investigating AI Consciousness. PhilArchive. https://philpapers.org/rec/DEWCXE

Douglas, R., Mahowald, M., & Mead, C. (1995). Neuromorphic analogue VLSI. Annual Review of  Neuroscience, 18, 255-281.

Dung, L. (2023). Tests of Animal Consciousness are Tests of Machine Consciousness. Erkenntnis. https://doi.org/10.1007/s10670-023-00753-9

Dung, L. (2025). Consciousness without biology: An argument from anticipating scientific progress. PhilPapers. https://philpapers.org/archive/DUNCWB.pdf

Dung, L. (2026). Saving Artificial Lives: Understanding and Preventing Artificial Suffering. Routledge.

Dung, L., & Kersten, L. (2024). Implementing artificial consciousness. Mind & Language. https://doi.org/10.1111/mila.12532

Evers, K., Farisco, M., Chatila, R., Earp, B., Freire, I., Hamker, F., Nemeth, E., Verschure, P., & Khamassi, M. (2025). Preliminaries to artificial consciousness: A multidimensional heuristic approach. Physics of Life Reviews, 52, 180–193. https://doi.org/10.1016/j.plrev.2025.01.002

Feinberg, T. E., & Mallatt, J. (2016). The nature of primary consciousness. A new synthesis. Consciousness And Cognition, 43, 113–127. https://doi.org/10.1016/j.concog.2016.05.009

Feinberg, T. E., & Mallatt, J. (2019). Consciousness Demystified. MIT Press.

Frankish, K. (2012). Quining diet qualia. Consciousness And Cognition, 21(2), 667–676. https://doi.org/10.1016/j.concog.2011.04.001

Frankish, K. (2016). Illusionism as a Theory of Consciousness. Journal of Consciousness Studies, 23(11-12), 11-39.

Frankish, K. (2021). Panpsychism and the Depsychologization of Consciousness. Aristotelian Society Supplementary Volume, 95(1), 51–70. https://doi.org/10.1093/arisup/akab012

Ginsburg, S., & Jablonka, E. (2019). The evolution of the sensitive soul: Learning and the origins of consciousness. MIT Press.

Godfrey-Smith, P. (2016). Mind, matter, and metabolism. Journal of Philosophy, 113(10), 481–506. https://doi.org/10.5840/jphil20161131034

Godfrey-Smith, P. (2019). Evolving Across the Explanatory Gap. Philosophy, Theory, and Practice in Biology, 11(20220112).

Godfrey-Smith, P. (2020). Gradualism and the evolution of experience. Philosophical Topics, 48(1), 201-220. https://doi.org/10.5840/philtopics202048110

Godfrey-Smith, P. (2024a). Nervous Systems, Functionalism, and Artificial Minds. https://petergodfreysmith.com/wp-content/uploads/2023/12/NYU-Oct-2023-AnimalsAI-Functionalism-paper-Post-C3.pdf

Godfrey-Smith, P. (2024b). Inferring Consciousness in Phylogenetically Distant Organisms. Journal of Cognitive Neuroscience, 36(8), 1660-1666.

Goldstein, S., & Kirk-Giannini, C. D. (2024). A Case for AI Consciousness: Language Agents and Global Workspace Theory. https://philpapers.org/archive/GOLACF-2.pdf

Halina, M., & McClelland, T. (n.d.). Biobot Consciousness. Unpublished manuscript.

Hameroff, S., & Penrose, R. (2014). Consciousness in the universe. Physics Of Life Reviews, 11(1), 39–78. https://doi.org/10.1016/j.plrev.2013.08.002

Harrison, D., Rorot, W., & Laukaityte, U. (2022). Mind the matter: Active matter, soft robotics, and the making of bio-inspired artificial intelligence. Frontiers in Neurorobotics, 16, 880724. https://doi.org/10.3389/fnbot.2022.880724

Hinton, G. (2022). The forward-forward algorithm: some preliminary investigations. arXiv. https://arxiv.org/abs/2212.13345

Hochstetter, J., Zhu, R., Loeffler, A., Diaz-Alvarez, A., Nakayama, T., & Kuncic, Z. (2021).  Avalanches and edge-of-chaos learning in neuromorphic nanowire networks. Nature Communications, 12(1), 4008. https://doi.org/10.1038/s41467-021-24260-z

Humphrey, N. (2012). Soul Dust: The Magic of Consciousness. Princeton University Press.

Humphrey, N. (2016). Redder than Red: Illusionism or Phenomenal Surrealism? Journal of Consciousness Studies, 23(11-12).

Humphrey, N. (2023). Sentience: the invention of consciousness. The MIT press.

Humphrey, N. (2025). Phenomenal consciousness: its scope and limits. Philosophical Transactions of the Royal Society B: Biological Sciences, 380(1939), 20240306. https://doi.org/10.1098/rstb.2024.0306

Hunt, T., & Jones, M. (2023). Fields or firings? Comparing the spike code and the electromagnetic field hypothesis. Frontiers in Psychology, 14, 1029715.

Kammerer, F. (2025). Defining consciousness and denying its existence. Sailing between Charybdis and Scylla. Philosophical Studies, 182(2), 541–565. https://doi.org/10.1007/s11098-025-02285-0

Kammerer, F. (2026). Moral significance in artificial systems: if not consciousness, then what? https://philarchive.org/rec/KAMMSI

Keijzer, F. (2025). Full naturalism: The objectivity of subjective points of view. Biological Theory. https://doi.org/10.1007/s13752-025-00493-9

Kob, L. (2025). Methodological structuralism and the two-factor approach: Implications for  consciousness science and AI. Philosophy and the Mind Sciences, 6. https://doi.org/10.33735/phimisci.2025.11760

Kleiner, J. (2024). Consciousness requires mortal computation. arXiv. https://arxiv.org/abs/2403.03925

Kleiner, J. (2025). Defining Biological Naturalism. Behavioral and Brain Sciences.

Lane, N., & Rodriguez, E. (2024). What is a feeling? A bioenergetic explanation. Biochimica et biophysica acta, 1865. 

Lau, H. (2022). In Consciousness we Trust: The Cognitive Neuroscience of Subjective Experience (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780198856771.001.0001

Lee, G. (2019). Alien Subjectivity and the Importance of Consciousness. In A. Pautz & D. Stoljar (Eds.), Blockheads! Essays on Ned Block's Philosophy of Mind and Consciousness (pp. 215–242). MIT Press.

Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354–361.

Mather, J. (2025). Consciousness of octopuses—on their own terms. Animal Sentience, 10, 1.

Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition. In Boston studies in the philosophy of science. https://doi.org/10.1007/978-94-009-8947-4

Milinkovic, B., & Aru, J. (2026). On biological and artificial consciousness: A case for biological computationalism. Neuroscience & Biobehavioral Reviews, 181, 106524. https://doi.org/10.1016/j.neubiorev.2025.106524

Milinkovic, B., Seth, A. K., Barnett, L., Carter, O., Andrillon, T. (2025). Dynamical independence reveals anaesthetic specific fragmentation of emergent structure in neural dynamics. bioRxiv. https://doi.org/10.1101/2025.07.16.664881

McClelland, T. (2025). Agnosticism about artificial consciousness. Mind & Language, 1–21. https://doi.org/10.1111/mila.70010

Mckilliam, A. (2025). Explanation, understanding, and the methodological problem in consciousness science. Synthese, 205(4), 1–21. https://doi.org/10.1007/s11229-025-05001-x

Mead, C. (2020). How we created neuromorphic engineering. Nature Electronics, 3, 434-435.

Metzinger, T. (2021). Artificial Suffering: An argument for a global moratorium on Synthetic phenomenology. Journal of Artificial Intelligence and Consciousness, 08(01), 43–66. https://doi.org/10.1142/s270507852150003x

Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., Jackson, B. L.,  Imam, N., Guo, C., Nakamura, Y., Brezzo, B., Vo, I., Esser, S. K., Appuswamy, R., Taba, B., Amir, A., Flickner, M. D., Risk, W. P., Manohar, R., & Modha, D. S. (2014). Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668-673. https://doi.org/10.1126/science.1254642

Michel, M. (forthcoming). Bet on functionalism – Commentary on Seth (2025). Behavioral and Brain Sciences.

Michel, M. (forthcoming-b). Consciousness doesn’t do that. Philosophy and Phenomenological Research.

Michel, M., & Lau, H. (2021). Higher-order theories do just fine. Cognitive Neuroscience, 12, 77–78.

Mogensen, A. L. (2025). How to resist the Fading Qualia Argument. Synthese, 206(5). https://doi.org/10.1007/s11229-025-05338-3

Morales Pantoja, I. E., Smirnova, L., Muotri, A. R., Wahlin, K. J., Kahn, J., Boyd, J. L., Gracias, D. H.,  Harris, T. D., Cohen-Karni, T., Caffo, B. S., Szalay, A. S., Han, F., Zack, D. J., Etienne Cummings, R., Akwaboah, A., Romero, J. C., Alam El Din, D. M., Plotkin, J. D., Paulhamus, B. L., . . . Hartung, T. (2023). First Organoid Intelligence (OI) workshop to form an OI community. Frontiers in Artificial Intelligence, 6, 1116870. https://doi.org/10.3389/frai.2023.1116870

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435-450.

Negro, N. (2020). Phenomenology-first versus third-person approaches in the science of consciousness: the case of the integrated information theory and the unfolding argument. Phenomenology and the Cognitive Sciences, 19, 979–996. https://doi.org/10.1007/s11097-020-09681-3

Ororbia, A., & Friston, K. J. (2023). Mortal computation: a foundation for biomimetic intelligence. arXiv. https://arxiv.org/abs/2311.09589

Overgaard, M., & Kirkeby-Hinrup, A. (2024). A clarification of the conditions under which Large language Models could be conscious. Humanities and Social Sciences Communications, 11(1), 1031. https://doi.org/10.1057/s41599-024-03553-w

Piccinini, G. (2020). Neurocognitive mechanisms. Oxford University Press.

Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. Oxford University Press.

Prinz, J. J. (2003). Level-headed mysterianism and artificial experience. Journal of Consciousness Studies,  10(4–5), 111–132.

Putnam, H. (1965). The nature of mental states. In Mind, Language, and Reality: Philosophical Papers, Vol. 2 (pp. 429–440). Cambridge University Press.

Richmond, A. (2025). How computation explains. Mind & Language, 40(1), 2–20. https://doi.org/10.1111/mila.12521

Saad, B. (forthcoming). In Search of a Biological Crux for AI Consciousness. Philosophy of AI.

Saad, B., & Bradley, A. (2022). Digital suffering: why it’s a problem and how to prevent it. Inquiry, 68(7), 2110–2145. https://doi.org/10.1080/0020174x.2022.2144442

Schlicht, T. (2025). Philosophical problems in the study of consciousness. In U. Olcese & L. Melloni (Eds.), The scientific study of consciousness: Experimental and theoretical approaches. Springer Nature.

Schneider, S., Sahner, D., Kuhn, R. L., Schwitzgebel, E., & Bailey, M. (2025). Is AI conscious?  A primer on the myths and confusions driving the debate.  https://philpapers.org/archive/SCHIAC-22.pdf

Schuman, C. D., Kulkarni, S. R., Parsa, M., Mitchell, J. P., Date, P., & Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications. Nature Computational Science, 2(1), 10-19. https://doi.org/10.1038/s43588-021-00184-y

Schwitzgebel, E. (2016). Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage. Journal of Consciousness Studies, 23(11-12), 224-235.

Searle, J. R. (1992). The Rediscovery of the Mind. MIT Press.

Searle, J. (2017). Biological naturalism. In S. Schneider & M. Velmans (Eds.), The Blackwell Companion to Consciousness. John Wiley and Sons Ltd

Sebo, J., & Long, R. (2023). Moral consideration for AI systems by 2030. AI and Ethics, 5(1), 591–606. https://doi.org/10.1007/s43681-023-00379-1

Seth, A. (2009). Explanatory correlates of consciousness: Theoretical and computational challenges. Cognitive Computation, 1(1), 50–63. https://doi.org/10.1007/s12559-009-9007-x

Seth, A. K. (2021). Being You: A New Science of Consciousness. Faber and Faber.

Seth, A. K. (2025). Conscious artificial intelligence and biological naturalism. Behavioral and Brain Sciences, 1–42. doi:10.1017/S0140525X25000032

Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4

Shevlin, H. (2021). Non‐human consciousness and the specificity problem: A modest theoretical proposal. Mind & Language, 36(2), 297–314. https://doi.org/10.1111/mila.12338

Shevlin, H. (2024). Consciousness, machines, and moral status. In A. Strasser (Ed.), Humans and smart machines as partners in thought.

Shiller, D. (2024). Functionalism, integrity, and digital consciousness. Synthese, 203(2), 47. https://doi.org/10.1007/s11229-023-04473-z

Shiller, D., Duffy, L., Morán, A. M., Moret, A., Percy, C., & Clatterbuck, H. (2026). Initial results of the Digital Consciousness Model. arXiv. https://doi.org/10.48550/arxiv.2601.17060

Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton University Press.

Thompson, E. (2022). Could All Life Be Sentient? Journal of Consciousness Studies, 29(3), 229–265. https://doi.org/10.53765/20512201.29.3.229

Tononi, G., & Koch, C. (2015). Consciousness: here, there and everywhere?. Phil. Trans. R. Soc. B, 370, 20140167, https://doi.org/10.1098/rstb.2014.0167

Udell, D. B., & Schwitzgebel, E. (2021). Susan Schneider’s Proposed Tests for AI Consciousness: Promising but Flawed.

Veit, W. (2023). A Philosophy for the Science of Animal Consciousness (1st ed.). Routledge. https://doi.org/10.4324/9781003321729

Wiese, W. (2024). Artificial consciousness: A perspective from the free energy principle. Philosophical Studies, 181(8), 1947–1970. https://doi.org/10.1007/s11098-024-02182-y

Wiese, W. (2026). Inferring the presence (or absence) of consciousness in artificial systems. PhilArchive. https://philarchive.org/rec/WIEITP

 


  1. ^

     For a recent discussion on different versions of sentientism see Chalmers (2026).

  2. ^

     For a dissenting opinion see Kammerer (2026).

  3. ^

     A notable alternative way of denying that consciousness in conventional AI systems is possible is by appealing to non-functional theories of consciousness like Integrated Information Theory (IIT) (Tononi & Koch, 2015; see also Negro, 2020). Whilst IIT does not rule out AI consciousness in principle, it does suggest that consciousness requires a certain kind of causal integration, and that feedforward systems are not conscious.

  4. ^

     Seth (2025) goes back and forth between defining biological naturalism as the view that being alive is necessary for consciousness (Seth, 2025, p. 2), and the more minimal claim that biological properties are necessary for consciousness (Seth, 2025, p. 15).

  5. ^

     To get an impression: a non-exhaustive list of candidate biological properties includes metabolism (Godfrey-Smith, 2016, 2024b), inter-cellular dynamics (Baluska & Reber, 2019; see also Lane, 2024), endothermy (Humphrey, 2023), scale-inseparability (Milinkovic & Aru, 2026), electro-chemical signalling (Block, 2025), autopoiesis (Maturana & Varela, 1980), and ephaptic coupling (Hunt & Jones, 2023).

  6. ^

     For instance, if AI consciousness is only metaphysically possible but not nomologically possible then we need not worry about (say) the potential of AI suffering in the actual world.

  7. ^

     Michel (forthcoming) suggests that Seth (2025) fails to make a convincing case for the metaphysical necessity of biology to consciousness and therefore does not threaten computational functionalism. If biological naturalism is understood as a commitment to the more modest claim that biology is (at least) nomologically necessary for consciousness, then this criticism arguably falls flat.

  8. ^

     However, see Arvan & Maley (2022) for the idea that the truth of dualism, and in particular panpsychism, may have consequences for the possibility of AI consciousness in the actual world.

  9. ^

     For instance, should it be based on evolutionary considerations (Block, 2025), empirical considerations (Seth, 2025), philosophical considerations (see also Milinkovic & Aru, 2026), or some mixture of all of these considerations?

  10. ^

     See Seth & Bayne (2022) more broadly for an overview.

  11. ^

     Even if computational functionalism is correct there may still be some constraints on realization bases. As Michel & Lau (2021) point out: “Swiss cheese cannot implement the relevant computations”.

  12. ^

     Or adjacent arguments like the dancing qualia argument. See Mogensen (2025) for a discussion.

  13. ^

     Something similar holds for other proposals. For example, Dung (2025) argues that computational functionalism is to be preferred on roughly explanatory and theoretical virtue grounds. But Dung’s argument assumes that the psychological duplication thesis holds true, something that many biological naturalists will deny (see de Weerd & Wiese, 2025; de Weerd, 2026).

  14. ^

     See de Weerd (2026) for a discussion.

  15. ^

     McClelland (2025) even goes so far as to say that endorsing computational functionalism amounts to taking a “leap of faith”.

  16. ^

     See McKilliam (2025) for a similar suggestion that a scientific explanation of consciousness need not necessarily make the connection between consciousness and the physical intelligible.

  17. ^

     For another discussion on agnosticism about AI consciousness see de Weerd & Wiese (2025).

  18. ^

     But see Michel (forthcoming-b) for a recent critical perspective.

  19. ^

     Another example Birch discusses is that of ephaptic coupling being relevant to consciousness (Hunt & Jones, 2023).

  20. ^

     This is, of course, a hypothetical scenario. For evidence against the idea that warmbloodedness is required for consciousness see Mather’s (2025) recent work on assessing markers of consciousness in octopuses.

  21. ^

     Which may even imply that warmbloodedness is only necessary for animals that live outside of the tropics.

  22. ^

     Something similar may also be true of Block’s (2025) suggestion that electrochemical signaling is necessary for consciousness. Block relies (at least partly) on the evolutionary claim that only electrochemical nervous systems allowed for the development of the cognitive sophistication that ultimately gave rise to conscious beings like us. But what exactly electrochemical signaling supports that is relevant to consciousness on this view remains unclear. It may therefore be that a researcher should not look for electrochemical signaling specifically, but rather for some other property relevant to consciousness that electrochemical signaling facilitates in the biological case. And again, it is not clear whether electrochemical signaling is the only kind of process that can facilitate this property.

  23. ^

     Importantly, note that the kind of explanation discussed in the previous paragraph is not necessarily a deep explanation of consciousness. It doesn’t address the explanatory gap or the hard problem in any immediate way.

  24. ^

     Some biological candidate properties might be amenable to empirical testing and be explanatorily relevant to explaining certain structural properties of conscious experiences (see e.g., Milinkovic & Aru, 2026; Milinkovic et al., 2025).

  25. ^

     Bayne (2003) uses this as a critique of neurophenomenology. Specifically, Bayne suggests that even if neurophenomenologists successfully account for certain structural aspects of conscious experiences, they still fail to address the explanatory gap more broadly because an “explanatory itch” remains. I agree with Bayne on this point, but the strategy under discussion suggests that biological naturalists do not need to address the explanatory gap in order to say something meaningful about the possibility of AI consciousness.

  26. ^

     Or, more carefully, that at least certain AI systems cannot be conscious.

  27. ^

     Something similar may be argued for the idea that consciousness necessarily requires a point-of-view (Godfrey-Smith, 2019, 2024). See, for instance, Veit (2023) for the claim that subjectivity in Godfrey-Smith’s sense may not be necessary for all conscious experiences in all animals.

  28. ^

     From here onwards I simply use these interchangeably as both being concerned with an alleged lack of intelligible link between explanans and explanandum.

  29. ^

     Note that Challenger-AIs (1) arguably do not exist yet, but (2) they may do so very soon, since there do not seem to be any obvious technological barriers that prevent all the right computations from being implemented in existing conventional architectures (Butlin et al., 2023).

  30. ^

     See Halina & McClelland (n.d.) for another noteworthy recent exception. Halina and McClelland argue that most prominent biological proposals suggest that what they call synthetic-organic systems, or biobots, (such as xenobots) can potentially implement consciousness.

  31. ^

     The idea that concepts like “autopoiesis” can be liberally applied is of course not unique to Schwitzgebel. See for instance Lenartowicz (2015) for the suggestion that universities can be understood as autopoietic systems.

  32. ^

     List and references adopted from Seth (2025, p. 10).

  33. ^

     Alternatively, a biological naturalist can also decide to drop functionalism altogether in favour of an alternative (but likely controversial) non-functionalist metaphysics such as identity theory. Another pragmatic strategy is to remain agnostic about how the description problem should be addressed, instead suggesting that we should manage uncertainty (Wiese, 2026) as follows: the more the implementation of the biological property resembles a biological/human implementation, the more certain we can be that the system satisfies the biological marker (de Weerd & Wiese, 2025).
