Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)
Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019
Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)
Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020
The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)
How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)
Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)
Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)
A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020
A grant applic...
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020
Thoughts on electoral reform - Tobias Baumann, 2020
Risk factors for s-risks - Tobias Baumann, 2019
(Perhaps some Slate Star Codex posts? I can't remember for sure.)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there has been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only EA analyses here are that:
I've written some posts on related themes.
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this limitation applies even more strongly to reading the EA Forum while logged in, to commenting, and to posting, which are presumably the activities there'd actually be data on.
But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
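To make the proposed analysis concrete, here's a minimal sketch of how one could compute year-over-year dropout rates from activity data, assuming one had a set of active user IDs per year. The yearly sets below are made-up placeholders, not real EA Forum data, and the real analysis would of course need CEA's actual logs.

```python
# Hypothetical sketch: yearly activity sets are invented placeholders.
active_users = {
    2015: {"a", "b", "c", "d"},
    2016: {"b", "c", "e"},
    2017: {"c", "e", "f", "g"},
}

def dropout_rates(active):
    """For each year, the fraction of that year's users
    who show no activity in the following year."""
    rates = {}
    for year in sorted(active):
        next_year_users = active.get(year + 1)
        if next_year_users is None:
            continue  # no data for the following year
        users = active[year]
        dropped = users - next_year_users
        rates[year] = len(dropped) / len(users)
    return rates

print(dropout_rates(active_users))
```

On these toy numbers, half of 2015's users show no activity in 2016, and a third of 2016's users show none in 2017. The same set-difference logic would work on real per-year activity data, though (per the caveat above) forum inactivity is only a noisy proxy for leaving the movement.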
Here I list all the EA-relevant books I've read - well, mainly listened to as audiobooks - since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm including all relevant books I've read (not just the top picks).
Let me know if you want more info on why I found something useful or not so useful, where you can find the book, etc.
(See also this list of EA-related podcasts and this list of sources of EA-related videos.)
Movement collapse scenarios - Rebecca Baron
Why do social movements fail: Two concrete examples. - NunoSempere
What the EA community can learn from the rise of the neoliberals - Kerry Vaughan
How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)
Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018
Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?
A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying
Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017
These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:
It appears Animal C...
I recently requested people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)
Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.
I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.
For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common?
(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)
Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ...
tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think that we should actually be more uncertain about that; I think it’s plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.
(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)
In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:
An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)
This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.
He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:
Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help protect...
This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.
In The Precipice, Ord writes:
[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn’t make it through with most of their technology (and institutions) intact.
(See also the relevant section of Ord's 80,000 Hours interview.)
I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos...
Epistemic status: Unimportant hot take on a paper I've only skimmed.
Watson and Watson write:
Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.
I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?
They go on to say:
Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing
tl;dr: I think it's "another million years", or slightly longer, but I'm not sure.
In The Precipice, Toby Ord writes:
How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million. If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.
(There are various extra details and caveats about these estimates in the footnotes.)
Ord also makes similar statements on the FLI Podcast, including the following:
If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we pla
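The arithmetic behind Ord's analogy can be made explicit. The sketch below just uses the figures Ord himself quotes (a ~1,000,000-year typical mammalian species lifespan and a ~200,000-year-old humanity), scaling them onto an 80-year human life:

```python
# Ord's analogy, made explicit: scale humanity's age against a
# typical mammalian species lifespan, mapped onto an 80-year life.
typical_species_lifespan = 1_000_000  # years (typical mammalian species)
humanity_age = 200_000                # years (approximate age of Homo sapiens)
human_life = 80                       # years (the analogy's "single life")

scaled_age = humanity_age / typical_species_lifespan * human_life
remaining = typical_species_lifespan - humanity_age

print(scaled_age)  # 16.0 -- the "sixteen years old" in the book's analogy
print(remaining)   # 800000 -- the "800,000 ... years ahead of us"
```

So "sixteen years old" follows directly from 200,000 being one fifth of 1,000,000, and the "800,000 or a million or more years ahead" corresponds to the remainder of that typical lifespan (or more, if humanity outlasts the typical mammalian species).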
I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)
But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
I'll now quote and...
Information hazards: a very simple typology - Will Bradshaw, 2020
Information hazards and downside risks - Michael Aird (me), 2020
Information hazards - EA concepts
Information Hazards in Biotechnology - Lewis et al., 2019
Bioinfohazards - Crawford, Adamson, Ladish, 2019
Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018
Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018
Mitigating catastrophic biorisks - Esvelt, 2020
The Precipice (particularly pages 135-137) - Ord, 2020
Information hazard - LW Wiki
Thoughts on The Weapon of Openness - Will Bradshaw, 2020
Exploring the Streisand Effect - Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018
A point of clarification on infohazard terminology - eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: ...
The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:
They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a
Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)
The Long-Term Future: An Attitude Survey - Vallinder, 2019
Older people may place less moral value on the far future - Sanjay, 2019
Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017
The Psychology of Existential Risk: Moral Judgments about Human Extin...
Works by the EA community or related communities
Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020
Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018
The Moral Circle is not a Circle - Grue_Slinky, 2019
The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)
Radical Empathy - Holden Karnofsky, 2017
Various works from the Sentience Institute, including:
Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?
I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of:
See also Venn diagrams of existential, global, and suffering catastrophes
Bostrom & Ćirković (pages 1 and 2):
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe.
Differential progress / intellectual progress / technological development - Michael Aird (me), 2020
Differential technological development - summarised introduction - james_aung, 2020
Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015
Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016
Differential progress - EA Concepts
Differential technological...
tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated.
This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics...
Things I’ve written
If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative.
And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.
(If you're interested in more info on why I'm running this survey, and some thoughts on whether other people should do something similar, I give that ...
Certificates of impact - Paul Christiano, 2014
The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)
The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020
Making Impact Purchases Viable - casebash, 2020
Plan for Impact Certificate MVP - lifelonglearner, 2020
Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019
Altruistic equity allocation - Paul Christiano, 2019
Social impact bond - Wikipe...
Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?
Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen...
Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018
How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and Jonas Vollmer, 2018
Sources that seem somewhat relevant
https://en.wikipedia.org/wiki/Unintended_consequences (in particular, "Unexpected drawbacks" and "...
Unilateralist's curse [EA Concepts]
Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)
The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original paper]
Hard-to-reverse decisions destroy option value [CEA]
Framing issues with the unilateralist's curse - Linch, 2020
Managing risk in the EA policy...
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017...
Review of 'value drift' estimates, and several new estimates - Ben Todd, 2020
EA Survey 2018 Series: How Long Do EAs Stay in EA? - Peter Hurford, 2019
Empirical data on value drift - Joey Savoie, 2018
Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift - Darius Meissner, 2018
A Qualitative Analysis of Value Drift in EA - Marisa Jurczyk, 2020
Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019
Keeping everyone motivated: a case for effective careers outside of the highest imp...
The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)
The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)
Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020
The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "ro...
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).
Overall, I very much like Ord’s list, and I don’t think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
Each of the following works shows, or can be read as showing, a different model/classification scheme/taxonomy:
(Will likely be expanded as I find and remember more)
On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:
a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.
I think we could flesh out this idea as the following argument:
Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020
Preliminary thoughts on moral weight - Luke Muehlhauser, 2018
Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020
2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)
As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time...
A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.
The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:
Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!