Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of EA or all of Open Phil, and this post is my personal view rather than an institutional one, since no single institutional view exists. For the record, though, my inside view since 2010 has been "If anyone builds superintelligence under anything close to current conditions, probably everyone dies (or is severely disempowered)." I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. with how confident one can reasonably be about the effects of poorly understood future technologies emerging in future circumstances that are themselves poorly understood. (My all-things-considered view, which incorporates various reference classes and partial deference to many others who think about the topic, is more agnostic and hasn't consistently been above the "probably" line.)
Moreover, I think those who believe some version of "If anyone builds superintelligence, everyone dies" should be encouraged to make their arguments loudly and repeatedly; the greatest barrier to genuinely risk-mitigating action right now is the lack of political will.
That said, I think people should keep in mind that:
* Public argumentation can only get us so far when the evidence for the risks and their mitigations is this unclear, when AI has automated so little of the economy, when AI failures have led to so few deaths, etc.
* Most concrete progress on worst-case AI risk