Indeed. It seems supported by a quantum suicide argument - no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.
It's worth adding that Stephen Bush and Jeremy Cliffe at the New Statesman both do prediction posts and review them at the end of each year. The meme is spreading! They're also two of the best journalists to follow on UK Labour politics (Bush) and EU politics (Cliffe) - if you're interested in those topics, as I am.
I think the closest things we've got to this are:
Luke Muehlhauser's work on 'amateur macrohistory' https://lukemuehlhauser.com/industrial-revolution/
The (more academic) Peter Turchin's Seshat database: http://seshatdatabank.info/
I would say more optimistic. I think there's a pretty big difference between emergence (a shift from authoritarianism to democracy) and democratic backsliding, that is, autocratisation (a shift from democracy to authoritarianism). Once that shift has consolidated, there are lots of changes that make it self-reinforcing/path-dependent: norms and identities shift, economic and political power shifts, political institutions shift, the role of the military shifts. Some factors are the same for emergence and persistence, like wealth/growth, but some aren't (which I would say are pretty key), like getting authoritarian elites to accept democratisation.
Two books on emergence that I've found particularly interesting are:
However, as I said, the impact of AI systems does raise uncertainty, and is super fascinating.
Something I'm very concerned about, which I don't believe you touched on, is the fate of democracies after a civilizational collapse. I've got a book chapter coming out on this later this year, and I hope to be able to share a preprint of it.
Interesting post! If you wanted to read into the comparative political science literature a little more, you might be interested in diving into the subfield of democratic backsliding (as opposed to emergence):
One of the common threads in this subfield is that once a democracy has 'consolidated', it seems to be fairly resilient to coups and perhaps incumbent takeover.
I certainly agree that how this interacts with new AI systems (automation, surveillance and targeting/profiling, and autonomous weapons systems) is absolutely fascinating. For one early stab, you might be interested in my colleagues':
That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon.
I'm really sorry to hear that from both of you, I agree it's a serious accusation.
For longtermism as a whole, as I argued in the post, I don't understand describing it as white supremacy - like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.
Sorry it's taking a while to get back to you!
In the meantime, you might be interested in this from our Catherine Richards: https://www.cser.ac.uk/resources/reframing-threat-global-warming/
Thanks for the comment and these very useful links - will check with our food expert colleague and get back to you, especially on the probability question.
Just personally, however, let me note that we say those four factors you mention are current 'sources of significant stress' for systems for the production and allocation of food. We also note that while 'global food productivity and production has increased dramatically', we are concerned about the 'vulnerability of our global food supply to rapid and global disruptions' and shocks. The three ways we describe climate change further reducing food security are growing conditions, agricultural pests and diseases, and the occurrence of extreme weather events.
Note also that the global catastrophe is the shock (hazard) plus how it cascades through interconnected systems with feedback. We're explicitly suggesting that the field move beyond 'is x a catastrophe?' to 'how does x affect critical systems, which can feed into one another, and may act more on our vulnerability and exposure than as a direct, single hazard?'.
Interesting! I would feel I had been quasi-randomly selected to allocate our shared pool of donations - and would definitely feel some obligation/responsibility.
As evidence that other people feel the same way, I would point to the extensive research and write-ups that previously selected allocators have done. A key explanation for why they've done that is a sense of obligation/responsibility for the group.