Jeffrey Ladish

I'm Jeffrey Ladish. I'm a security researcher and risk consultant focused on global catastrophic threats. My website is at

Wiki Contributions


EA Hangout Prisoners' Dilemma

I expect most people think that either AMF or MIRI is much more likely to do good. So from most agents' perspectives, unilateral defection is only better if their chosen org wins. If someone has more of a portfolio approach that weights longtermist and global poverty efforts similarly, then your point holds. I expect that's a minority position, though.

The $100trn opportunity: ESG investing should be a top priority for EA careers

I see you define it a few paragraphs down, but a definition at the top would be helpful, I think.

Nuclear war is unlikely to cause human extinction

Yeah, I would agree with that! I think radiological weapons are some of the most relevant nuclear capabilities / risks to consider from a long-term perspective, due to the risk of their being developed in the future.

Nuclear war is unlikely to cause human extinction

The part I added was:

"By a full-scale war, I mean a nuclear exchange between major world powers, such as the US, Russia, and China, using the complete arsenals of each country. The total number of warheads today (14,000) is significantly smaller than during the height of the cold war (70,000). While extinction from nuclear war is unlikely today, it may become more likely if significantly more warheads are deployed or if designs of weapons change significantly."

I also think indirect extinction from nuclear war is unlikely, but I would like to address this more in a future post. I disagree that additional clarifications are needed. I think people made these points clearly in the comments, and anyone motivated to investigate this area seriously can read those. If you want to try to double-crux on why we disagree here, I'd be up for that, though a call might be preferable to save time.

Nuclear war is unlikely to cause human extinction

I mean that the amount required to cover every part of the Earth's surface would serve no military purpose. Or rather, it might enhance one's deterrent a little bit, but it would:
1) kill all of one's own people, which is the opposite of a defense objective
2) not be a very cost-effective way to improve one's deterrent. In nearly all cases it would make more sense to expand second-strike capabilities by adding more submarines, mobile missile launchers, or other stealth second-strike weapons.

Which isn't to say this couldn't happen! Military research teams have proposed crazy plans like this before. I'm just arguing, as have many others at RAND and elsewhere, that a doomsday machine isn't a good deterrent compared to the other options that exist (and given the extraordinary downside risks).

Nuclear war is unlikely to cause human extinction

FWIW, my guess is that you're already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it'd be good to describe in detail "here is how this combination of different hazards could kill everyone"]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I'd be happy to review a post prior to publication, though I'm not sure if I'm particularly qualified.)

Yes, I was planning to get review prior to publishing this. In general, when it comes to risks from biotechnology, I'm trying to follow the principles we developed here. I'd be excited to see, or help workshop, better guidance for navigating information hazards in this space in the future.
