
I published a book in early 2026 called Minds We Create: AI and a Future Still Being Written. I'm posting here because the EA and AI safety community is the most likely group to have substantive disagreements with it, and those disagreements would be more useful to me than praise.

What it is

It's a nonfiction book aimed at intelligent general readers — not researchers, not policymakers, not people already inside the field. Its twenty-one chapters move from concrete historical cases (Petrov, Arkhipov, Frances Kelsey, the original Luddites) through the technical substance of alignment, interpretability, and adversarial robustness, and into governance and the current regulatory landscape. It closes on the February 2026 Anthropic/DoD standoff, which was unfolding as I finished the manuscript.

The book takes AI risk seriously without claiming certainty about timelines or outcomes. I try to represent the range of probability estimates honestly — including the gap between economists who assign sub-1% probability to civilizational-scale catastrophe and safety researchers at frontier labs who put it at 10–50%.

What it isn't

It isn't a technical contribution. It doesn't add to the alignment literature. It's closer in spirit to something like The Precipice than to a paper on RLHF or interpretability — written for the person who is paying attention to the AI conversation but hasn't yet found a single book that maps the whole terrain without requiring a computer science background.

My background

I came to this through intelligence analysis and institutional governance rather than ML. I hold an MA in Intelligence and Security Studies from Brunel University London. I've completed programs with BlueDot Impact, the Center for AI Safety, ENAS, and Anthropic, and I'm a member of AI Safety Hungary and EA. I run an AI consulting firm in Strasbourg, France. None of that makes me a technical AI safety researcher — I want to be clear about that — but it does mean I've spent time stress-testing the arguments in this book against people who know the field deeply.

What I'm curious about

  • Whether the historical analogies I use hold up under scrutiny from people who know this literature. The Petrov framing in particular — I use it to argue that good outcomes require structural conditions, not just good intentions — is one I'd expect some pushback on.
  • Whether the governance chapters reflect the current state of the field accurately, or whether there are significant developments or frameworks I've underweighted.
  • Whether a book like this has value for the EA/AI safety community specifically — as something to give to the person in your life who isn't yet in the tent — or whether the gap it tries to fill is already better served by existing books.

The book is available at books2read.com/mindswecreate. Happy to send a free EPUB to anyone who'd like to read it and respond — just say so in the comments.
