This is my first post here, so I apologize if I'm being too direct as a newcomer, but I feel that most of the discourse about AI suffers from a severe siloing problem. People who believe in LLM capacity often lack the background reading in philosophy, biology, and psychology needed to understand that selfhood, consciousness, and embodied awareness are not the same thing, and so they are prone to making callow claims, such as Anthropic's recent suggestion that LLMs are possibly conscious.
On the other side, it seems that many, if not most, LLM critics lack computing backgrounds and base their statements on experience with significantly less capable models. They also often rely on linguistic framings that beg the question (for example, claims about "communicative intent" that presuppose intent exists without ever proposing a mechanism for where it comes from).
To that end, I have been working on a new epistemology and ontology that explains AI interpretability difficulties and what biological minds do differently from machine minds, and that finds a middle ground between biological naturalism and computational functionalism.
I'm building a phenomenological functionalism that uses Dual Process Theory to argue that qualia are real, but that they are the origin of biological cognition rather than its product. On this view, LLMs do have minds, but they have abstract reasoning while lacking somatic reasoning, which means they cannot be conscious.
If you're interested, this essay introduces the core components of the philosophical system.

Actually JUST posted about this a little bit ago~!
https://forum.effectivealtruism.org/posts/jtsFzkKhc4MYF9uXn/the-creed-alignment-ablation-and-altruism
And now I'm off to read your essay! (*poof~*)