https://www.scottaaronson.com/blog/?p=4845 (a)
From the beginning:
Will He Go?, by legal scholar Lawrence Douglas, is, at 120 pages, a slim volume focused on a single question: what happens if the 2020 US election delivers a narrow or disputed result favoring Biden, and Trump refuses to concede? This question will, of course, either be answered or rendered irrelevant in half a year. And yet, in my estimation, there’s at least a 15% probability that Will He Go? will enter the ranks of the most important and prescient books ever written. You should read it right now (or at least read this Vox interview), if you want to think through the contours of a civilizational Singularity that seems at least as plausible to me as the AI Singularity, but whose fixed date of November 3, 2020 we’re now hurtling toward.
In one of the defining memes of the past few years, a sign in a bookstore reads “Dear customers: post-apocalyptic fiction has been moved to the Current Affairs section.” I was reminded of that as Douglas dryly lays out his horror scenario: imagine, hypothetically, that a President of the United States gets elected on a platform of racism and lies, with welcomed assistance from a foreign adversary. Suppose that his every outrage only endears him further to his millions of followers. Suppose that, as this president’s deepest (and perhaps only) principle, he never backs down, never apologizes, never acknowledges any inconvenient fact, and never accepts the legitimacy of any contest that he loses—and this is perfectly rational for him, as he’s been richly rewarded for this strategy his entire life. Suppose that, during the final presidential debate, he pointedly refuses to promise to respect the election outcome if he loses—a first in American history. And suppose that, after eking out a narrow win in the Electoral College, he then turns around and disputes the election anyway (!)—claiming, ludicrously, that he would’ve won the popular vote too, if not for millions of fraudulent voters. Suppose that, for their own sordid reasons, Republican majorities in the Senate and Supreme Court enable this president’s chaotic rule, block his impeachment, and acquiesce to his daily cruelties and lies.
Then what happens in the next election?
A Metaculus question on whether Trump will concede if he loses the election has just been posted:
https://www.metaculus.com/questions/4609/if-president-trump-loses-the-2020-election-will-he-concede/
Milan: I've categorized the post as "personal blog" for now. Can you say any more about how this relates to EA, or how readers might be able to take action if they want to find a way to help?
I thought "taking tail risks seriously" was kind of an EA thing...? In particular, we all agree that there probably won't be a coup or civil war in the USA in early 2021, but is it 1% likely? 0.001% likely? I won't try to guess, but it sure feels higher after reading that link (including the Vox interview) ... and plausibly high enough to warrant serious thought and contingency planning.
At least, that's what I got out of it. I gave it a bit of thought and decided that I'm not in a position that I can or should do anything about it, but I imagine that some readers might have an angle of attack, especially given that it's still 6 months out.
From the part I excerpted:
"You should read it right now (or at least read this Vox interview), if you want to think through the contours of a civilizational Singularity that seems at least as plausible to me as the AI Singularity, but whose fixed date of November 3, 2020 we’re now hurtling toward."
The EA implications of the 2020 US presidential election seem obvious?
See also Dustin & Cari's $20m grant to the 2016 Clinton campaign.
Thanks for sharing the last link, which I think provides useful context (that Open Philanthropy's funder has a history of donating to partisan political campaigns).
The very last line of the Vox interview is the only one I saw which suggests concrete action a person could take to reduce the chances of an electoral crisis (I assume that trying to get relevant laws changed within five months would be really hard):
Given these points, though, the upshot of this post is effectively an argument that supporting Biden's campaign should be thought of as an EA cause area: even though it's very hard to tell what impact political donations have, an unclear election result runs the risk of triggering a civil war, which is bad enough that even hard-to-quantify forms of risk reduction are very valuable here? With some bonus value because Biden donations make it more likely that a candidate with mostly better policy ideas wins (though the article doesn't really go into policy differences)?
Does that seem like the right takeaway to you? Did you mean to make a different point about the value of changing electoral laws?
(I realize that the above is me making a lot of assumptions, but that's another reason why it's helpful to summarize what you found valuable/actionable in a given crosspost; it saves readers from having to work through all of the implications themselves.)
Why is this context useful? It feels like the relevance of this post should not be particularly tied to Dustin and Cari's donation choices.
Is "X should be thought of as an EA cause area" distinct from "X would be good"? More generally, I'd like the forum to be a place where we can share important ideas without needing to include calls to action.
On the other hand, I also endorse holding political posts to a more stringent standard, so that we don't all get sucked in.
I should also mention that a post like this doesn't need to have expected-value calculations attached, or anything in that level of detail; it's just good to have a couple of sentences along the lines of "here's why I posted this, and why I think it demonstrates a chance to make an effective donation // take other effective actions," even if no math is involved.
(This kind of explanation seems more important the further removed a post is from "standard" EA content. When I crossposted Open Phil's 2019 year-in-review post, I didn't include a summary, because the material seemed to have very clear relevance for people who want to keep up with the EA community.)
I usually do link posts to improve the community's situational awareness.
This is upstream of advocating for specific actions, though it's definitely part of that causal chain.
I liked that post when it came out, but I had forgotten about it in the ensuing year-plus. Maybe you could link to this post when you make situational-awareness crossposts?