Erik Bernath

Author & AI Consultant @ Furioso AI Consulting
4 karma · Joined · Working (15+ years) · Strasbourg, France
www.erikbernath.com

Bio


Erik Bernath is an AI consultant and author based in Strasbourg, France. His background is in intelligence analysis and institutional governance rather than machine learning: he holds an MA in Intelligence and Security Studies from Brunel University London and runs Furioso AI Consulting OÜ, where he is developing multi-agent orchestration systems. He has completed programs with BlueDot Impact, the Center for AI Safety, Anthropic, and the European Network for AI Safety, and is a member of AI Safety Hungary and the effective altruism community. Minds We Create is his first book.

How others can help me

  • Read and respond critically. If you work in AI safety, alignment, or governance and spot something I've gotten wrong (an analogy that doesn't hold, a framework I've misrepresented, a development I've missed), I'd genuinely rather know. Comments here or by email both work.
  • Share with people outside the field. The book is written for intelligent readers who aren't yet inside the AI safety conversation. If you know someone who is paying attention but hasn't found a foothold (a family member, a colleague, a policymaker), that's exactly who it's for.
  • Review copy requests. If you'd like to read it before deciding whether to recommend it, I'm happy to send a free EPUB. Just say so in the comments or message me directly.
  • Speaking and events. I'm available for EA group discussions, reading groups, university events, and podcast conversations. If your group is working through AI safety material and this book might be a useful addition, get in touch.
  • Connections. If you know journalists, librarians, or acquisitions staff at university libraries, especially at institutions with strong AI ethics or policy programs, who might be interested, an introduction would be welcome.

How I can help others

  • Institutional governance and organizational failure. If you're working on questions about how safety culture develops, or fails to develop, inside organizations under competitive pressure, this is terrain I've spent years on. Happy to think through it with you.
  • Intelligence analysis frameworks. My background in geopolitical risk assessment translates reasonably well to structured uncertainty reasoning about AI timelines and governance scenarios. If you're working on forecasting or scenario planning in the AI safety space, I can offer that lens.
  • Writing for general audiences. If you're a researcher whose important work isn't reaching people outside the field, I can help translate it, whether through editing, structural feedback, or co-authorship on accessible pieces.
  • European AI policy landscape. Based in Strasbourg, a short walk from the European Parliament, I follow EU AI governance closely. If you need a European perspective on regulatory developments or want to understand how the AI Act is landing in practice, I'm a useful contact.
  • Multi-agent systems. I'm developing patent-pending multi-agent orchestration technology. If your research touches on coordination problems in multi-agent AI architectures, I'm interested in the conversation.