The EU AI Act mandates a review of the regulation by August 2029, with subsequent reviews every four years. Article 112 creates formal input channels for outside organizations to submit evidence and analysis during that process. This is a structured opportunity to shape how Europe's AI governance framework handles AI welfare questions.
The case for acting now does not depend on resolving whether current AI systems are sentient. The point is to build governance infrastructure that can evaluate welfare questions based on evidence, rather than having the question foreclosed by definitional assumptions in regulations drafted before anyone was thinking about it. Scientists may need decades to reach consensus on machine consciousness; the policy frameworks that will eventually need to accommodate their findings should not wait until those findings arrive.
This is the theory of change behind what we're calling the Sentient 112 campaign at The Harder Problem Action Fund.
The tactical landscape includes several entry points:
- The Scientific Panel on general-purpose AI (GPAI) begins its advisory functions in 2026, with a mandate covering emerging systemic risks. Whether consciousness science falls within that scope depends on how "systemic risk" is interpreted as capabilities advance.
- Article 4 AI literacy programs are being developed across all 27 member states, with curriculum scope still open to interpretation.
- The GPAI Code of Practice, an official compliance document, already includes the phrase "risk to non-human welfare", though no one has defined it operationally.
- The Digital Omnibus consolidation currently before the European Parliament creates procedural openings whenever AI Act amendments are on the table.
We've mapped these mechanisms by relevance and accessibility. The highest-leverage actions appear to be building a coalition for joint Article 112 submissions, engaging Scientific Panel members before their priorities crystallize, and commissioning operationalized welfare criteria aligned with the Commission's preference for measurable frameworks (several recent papers propose candidate approaches). Lower-barrier entry points include participating in public consultations, developing consciousness literacy modules for Article 4 training (personally, I'm a big fan of this one!), and documenting case studies of AI-induced psychological harms that healthcare professionals are already encountering.
The full analysis is at harderproblem.fund/actions/sentient-112.
The Harder Problem Action Fund is an advocacy organization focused on AI consciousness policy. We track legislation, lobby for evidence-based approaches, and mobilize action on specific opportunities. Our sister organization, The Harder Problem Project (harderproblem.org), handles educational work: translating consciousness science for professionals, maintaining the Sentience Readiness Index across 31 countries, and providing resources for journalists, clinicians, and researchers navigating these questions.
On timelines: three years is not long for policy infrastructure. Relationships with institutional actors take time. Evidence frameworks take time. The 2029 review is not the only opportunity, but groundwork laid now determines what's possible then and in subsequent cycles.
I'm posting here to invite critique and find collaborators. Is the theory of change sound? Are there obstacles I'm underweighting? Useful backgrounds include EU policy experience, consciousness science, AI governance, coalition-building, and fluency in major EU languages.
If you want to discuss more or get involved: hello@harderproblem.fund
