Summary
If you think future AI systems should extend moral consideration to non-human sentient beings, there's something concrete you can do about it right now. It's free, it takes thirty seconds, and you don't need to log in. We're also giving away $5,000 USD in prizes for the best-written pieces.
Go to hyperstition.sentientfutures.ai, type a prompt about animal welfare or digital sentience, and hit generate. The site produces a well-written essay or short story depicting AI systems that actively protect and advocate for non-humans. We’re assembling the full corpus into mid-training data packets to offer directly to AI labs.
We know that, for alignment to humans, training AIs on depictions of aligned AIs makes them more aligned. We want to do the same for all the other beings that matter.
The problem we’re trying to solve
There’s a representation gap in how AI systems handle non-human welfare, and it’s well-documented. Hagendorff, Bossert, Tse, and Singer (2022) introduced the concept of “speciesist bias” in AI, showing that mainstream AI applications in computer vision and NLP are trained on datasets in which speciesist patterns prevail, and that these biases are then perpetuated by the models themselves (AI and Ethics). Tse and Singer (2022) point out that nonhuman animals are rarely mentioned within AI ethics despite being directly affected by AI systems (AI and Ethics).
More recent work adds detail to our understanding. CaML hosts compassionbench.com to track frontier model performance over time on the Animal Harm Bench (AHB) and the Moral Reasoning under Uncertainty (MORU) benchmark, which covers compassion towards humans and digital minds. We don't expect fragile post-training methods alone to align AIs to non-humans: such methods have been used in alignment for years and have many well-known failure modes. AIs learn values from data, and through data we can shape those values towards more compassion and less speciesism.
If you look at what exists on the internet about AI and animals, the coverage is thin, scattered, and often framed negatively: surveillance, factory automation, dystopian scenarios. There’s very little thoughtful, specific writing about AI systems designed to protect sentient life. We think that gap is worth filling with good content and offering it directly to labs as curated training material.
How this works in practice
The concept draws on “hyperstition”, the idea that narratives can shape the reality they describe. If enough well-written, specific, optimistic text exists about AI systems caring for non-human welfare, that framing becomes available as a resource for future models. We’re trying to write the values we want AI to learn from (Alignment Pretraining).
More directly, we're working toward compiling the best-scoring pieces into structured mid-training data packets to offer to AI labs. The goal is transparent and collaborative: we want labs to have access to high-quality writing about compassionate AI, because right now that category is underrepresented in their training sets.
What the tool does
When you submit a prompt, the system:
- Checks that your topic relates to sentient beings (generous filter, most things pass)
- Generates an essay or short story with randomised stylistic variation (narrative structure, prose style, setting, species) to keep outputs diverse and natural
- Runs the text through a post-processing filter that cleans up common AI writing issues
- Saves the piece and scores it on a compassion probe: a direction vector extracted from Llama 3.1 8B’s hidden states that measures how strongly the text activates compassion-related representations. The probe supplements human judgement; it’s a scalable way to approximate the kind of evaluation that would otherwise require reading every piece by hand
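The probe step above can be sketched in a few lines. The snippet below is a toy illustration, not the project's actual implementation: synthetic vectors stand in for mean-pooled hidden states from a layer of Llama 3.1 8B, and the `fake_hidden_state` helper is hypothetical. The core idea is the same, though: take the difference of mean activations between compassionate and neutral exemplar texts as a direction vector, then score a new piece by projecting its activation onto that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # toy stand-in for the model's hidden size

def fake_hidden_state(compassion_level: float) -> np.ndarray:
    """Hypothetical stand-in for a mean-pooled hidden state.

    In practice this would come from running the text through the
    model and pooling activations at a chosen layer.
    """
    vec = rng.normal(size=DIM)
    vec[0] += compassion_level  # plant a compassion-correlated component
    return vec

# 1. Extract the probe direction: difference of mean activations
#    between compassionate and neutral exemplar texts.
compassionate = np.stack([fake_hidden_state(3.0) for _ in range(50)])
neutral = np.stack([fake_hidden_state(0.0) for _ in range(50)])
direction = compassionate.mean(axis=0) - neutral.mean(axis=0)
direction /= np.linalg.norm(direction)  # unit-length probe vector

# 2. Score a new piece by projecting its activation onto the direction.
def probe_score(hidden: np.ndarray) -> float:
    return float(hidden @ direction)

# Averaged over many samples, compassionate texts should score higher.
high = float(np.mean([probe_score(fake_hidden_state(3.0)) for _ in range(100)]))
low = float(np.mean([probe_score(fake_hidden_state(0.0)) for _ in range(100)]))
```

A single scalar like this is what makes the leaderboard scalable: one forward pass and one dot product per piece, rather than a human read.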
You don't need to log in, and you can generate as many pieces as you want.
What the content looks like
The generator produces essays and short stories that depict AI systems as actively beneficial for non-humans. The content is optimistic and forward-looking, specific about mechanisms (the AI monitors water quality, detects stress vocalisations, optimises habitat corridors), and diverse in setting and species, from ocean sanctuaries to insect welfare to digital minds.
Each piece gets scored by the compassion probe, and you can browse outputs on the site’s gallery and leaderboard. There’s $5,000 in prize money for the highest-scoring pieces (details on the site). We’re building a corpus that’s both publicly available and curated for quality, not just volume.
Why we think this is a useful intervention
The research cited above demonstrates that speciesist bias in AI is real, measurable, and baked into training data. This isn’t a substitute for technical alignment work or policy advocacy. It fills a different niche: making sure the narrative landscape around AI and non-human welfare is rich, specific, and worth learning from. Right now, if you ask an AI system to design a wildlife monitoring programme or reason about animal or digital mind sentience, the quality of its response depends partly on what it was trained on. We want that training material to be good.
We also think there’s value in giving this community a low-friction way to contribute. Not everyone can write alignment papers or lobby policymakers. But anyone can generate a few essays that help build a corpus representing values this community already holds. It’s important to have real prompts from humans so we get sufficient data diversity in the stories generated.
The ask
Visit hyperstition.sentientfutures.ai. Generate a few pieces. Bookmark it and come back when you have a spare moment. Share it with others who care about non-human welfare in AI development.
If you’re worried about the future for non-human sentient beings, meme a better one into existence.
This project is a joint effort between CaML and Sentient Futures, a research effort testing methods for improving AI alignment to non-human welfare. We welcome feedback, contributions, and criticism.
References
- Hagendorff, T., Bossert, L.N., Tse, Y.F., & Singer, P. (2022). Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals. AI and Ethics.
- Tse, Y.F. & Singer, P. (2022). AI ethics: the case for including animals. AI and Ethics.
- Speciesism in Natural Language Processing Research. (2024). arXiv:2410.14194.
- Speciesism in AI: Evaluating Discrimination Against Animals in Large Language Models. (2025). arXiv:2508.11534.
