
Hello, EA Forum!

I'm posting under the pseudonym Artem Rudnev. I'm a civil engineer from Dnipro, Ukraine. English is not my native language; this post was written in Russian and translated with AI (I double-checked all key claims).

Before early 2014 I regularly worked on infrastructure projects in Moscow and Siberia. Travelling between Ukraine and Russia felt as natural as moving between districts of the same city — deep economic, professional, and personal ties everywhere.

Over the next decade I watched those ties almost completely dissolve, not just because of the war, but because shared reality itself fractured. Information warfare and epistemic fragmentation made cooperation effectively impossible, even between close relatives with no personal stake in the conflict. Similar identity-driven fragmentation is now visible in the West too.

This experience gave me an unusual lens on existential risk: we already have promising technical approaches to AI alignment, pandemic preparedness, nuclear stability, and climate. Every one of them quietly assumes that humanity will still be able to coordinate when it matters.

My observation: that assumption is breaking, and fast. Coordination capacity looks like an under-weighted meta-risk — civilization’s immune system failing.

Just yesterday (24 Nov 2025) the UN human rights chief warned that, without safeguards, generative AI risks becoming a “modern-day Frankenstein's monster” [source](https://www.arabnews.com/node/2623768/world). The irony is painful: we can diagnose the threat, but coordinating a response is getting harder.

What I’m trying to build  

I’m prototyping training environments — “flight simulators” for defending coordination and shared reality under epistemic attack. The goal is skill-building through safe, high-fidelity practice rather than passive media literacy.

Very early stage. Looking for:

• Honest feedback: is coordination capacity actually a neglected meta-risk multiplier?

• Critical counter-arguments

• Pointers to related work I’m missing

• Potential collaborators (game designers, red-teaming experts, funders)

Coming in the next weeks:

• ~4000-word analysis of mechanisms and evidence

• ~3000-word concrete proposal with training-game concepts

Three questions for now:

1. Does the “coordination as meta-risk” framing resonate, or am I overgeneralising from one case?

2. For AI safety folks: even if we solve technical alignment, how much of the remaining x-risk comes down to coordination failure?

3. Has anyone tried scalable training for epistemic/coordination resilience? What worked?

Grateful for any pushback — better now than later.

Epistemic Status:

• High confidence in the existence and dynamics of the problem (20 years of direct observation)  

• Medium confidence in the timeline acceleration caused by AI

• Low-medium confidence in the proposed interventions (early-stage ideas)

Artem Rudnev
