The AI safety community doesn't agree on how worried to be. But most discussions flatten this into "doomers vs optimists." The actual disagreement is more interesting.

I built a semantic search tool that indexes 392 episodes from 80,000 Hours, AXRP, Dwarkesh Patel, The Inside View, and more. Searching "p(doom)" and related terms surfaces some striking contrasts:

Robert Miles puts his p(doom) at 90–99%: "My mainline prediction is doom. It doesn't look good."

Eliezer Yudkowsky on those who agree with his arguments but have lower p(doom): they "enact the ritual of the young optimistic scientist who charges forth with no ideas of the difficulties."

Scott Alexander and Daniel Kokotajlo both land around 20% — the lowest on the AI 2027 team. Alexander notes he's "not entirely convinced we won't get alignment by default."

Will MacAskill sits at 10–20%, calling himself "optimistic today," while noting this is among the lowest estimates in serious circles.

Sundar Pichai also estimates ~10%.

Zvi Mowshowitz has moved up to ~70%.

What's interesting isn't just the numbers; it's what drives the disagreement. Paul Christiano focuses on conflict scenarios: AI systems acting on their own goals, which he describes as "little green men getting ugly." Yudkowsky focuses on capability jumps that make intervention impossible. Yann LeCun thinks the whole framing is wrong.

These aren't just vibes — they're searchable, timestamped arguments from the primary sources.
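For the curious, the underlying pattern is straightforward. Below is a stripped-down sketch, not Leita's actual implementation: each transcript chunk keeps its episode and timestamp, gets embedded once, and queries are ranked by cosine similarity against those embeddings. The model choice and the sample chunks are purely illustrative.

```python
# Minimal sketch of semantic search over timestamped transcript chunks.
# Model and sample data are illustrative; this is not Leita's production code.
import numpy as np
from sentence_transformers import SentenceTransformer

# Each chunk keeps its episode and timestamp so results link back to the source.
# Episode numbers and timestamps here are made up for illustration.
chunks = [
    {"episode": "AXRP #31", "time": "00:42:10",
     "text": "My mainline prediction is doom. It doesn't look good."},
    {"episode": "80,000 Hours #191", "time": "01:05:33",
     "text": "I'm not entirely convinced we won't get alignment by default."},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# Unit-normalized embeddings make the dot product equal to cosine similarity.
corpus = model.encode([c["text"] for c in chunks], normalize_embeddings=True)

def search(query: str, top_k: int = 5):
    """Print the chunks most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus @ q
    for i in np.argsort(scores)[::-1][:top_k]:
        c = chunks[i]
        print(f"{scores[i]:.2f}  {c['episode']} @ {c['time']}: {c['text']}")

search("p(doom) estimates")
```

The same structure scales up with an approximate-nearest-neighbor index once you have hundreds of thousands of chunks, but brute-force dot products are plenty for a corpus of a few hundred episodes.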

You can explore the full disagreement here: AI Safety Search — Leita. Search "p(doom)", "existential risk", or any researcher by name.

Comments

I appreciate the initiative! AI Safety is rich with disagreements, and it's nice to have an opportunity to easily map out the range of existing views. Thanks for sharing!
