MichaelA's Shortform

by MichaelA · 22nd Dec 2019 · 70 comments

Collection of sources that seem very relevant to the topic of civilizational collapse and/or recovery

Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)

Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019

Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)

Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020

The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)

How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)

Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)

Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)

A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020

A grant applic... (read more)

gavintaylor (7mo): Guns, Germs, and Steel [https://en.wikipedia.org/wiki/Guns,_Germs,_and_Steel] - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.
MichaelA (7mo): Great, thanks for adding that to the collection!
MichaelA (4mo): Suggested by a member of the History and Effective Altruism Facebook group [https://www.facebook.com/groups/historyandea/]:
  • https://scholars-stage.blogspot.com/2019/07/a-study-guide-for-human-society-part-i.html
  • Disputers of the Tao, by A. C. Graham
MichaelA (3mo): See also the book recommendations here [https://forum.effectivealtruism.org/posts/RuYihnDzD75AM4B8q/book-on-civilisational-collapse].

Collection of EA analyses of political polarisation

EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020

Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020

Thoughts on electoral reform - Tobias Baumann, 2020

Risk factors for s-risks - Tobias Baumann, 2019

(Perhaps some Slate Star Codex posts? I can't remember for sure.)

Notes

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Also, I'm aware that there has been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only EA analyses here are that:

  • their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
  • links to non-EA work can be found in most of the things I list here
  • I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)

To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?

Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?

One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.

But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
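On whether there'd be an easy way to do this: if CEA (or whoever holds the data) can export which users were active in each year, the computation itself is simple. Below is a minimal sketch, assuming a hypothetical (user_id, year) activity table; the names and records are made up for illustration.

```python
# Sketch: year-over-year retention of EA Forum users, assuming access to
# a (hypothetical) dataset of (user_id, year) activity records, e.g.
# years in which a user logged in, commented, or posted.
from collections import defaultdict

activity = [
    ("alice", 2015), ("bob", 2015), ("carol", 2015),
    ("alice", 2016), ("carol", 2016),
    ("carol", 2017),
]  # illustrative records, not real data

users_by_year = defaultdict(set)
for user, year in activity:
    users_by_year[year].add(user)

for year in sorted(users_by_year):
    cohort = users_by_year[year]
    later = [users_by_year[y] for y in users_by_year if y > year]
    if not later:
        continue  # no later data against which to measure churn
    still_active = cohort & set().union(*later)
    print(f"{year}: {len(cohort)} active users, "
          f"{len(still_active) / len(cohort):.0%} seen again later")
```

The limitations above would remain (this measures forum disengagement, not value drift directly), but the query itself shouldn't be the hard part.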

Book sort-of-recommendations

Here I list all the EA-relevant books I've read - well, mainly listened to as audiobooks - since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.

I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm including all relevant books I've read (not just the top picks).

Let me know if you want more info on why I found something useful or not so useful, where you can find the book, etc.

(See also this list of EA-related podcasts and this list of sources of EA-related videos.)

  1. The Precipice
    • Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
    • See here for a list of things I've written that summarise, comment on, or take inspiration from parts
... (read more)

Collection of EA analyses of how social movements rise, fall, can be influential, etc.

Movement collapse scenarios - Rebecca Baron

Why do social movements fail: Two concrete examples. - NunoSempere

What the EA community can learn from the rise of the neoliberals - Kerry Vaughan

How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)

Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018

Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?

A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying

EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020

Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017

These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:

It appears Animal C... (read more)

vaidehi_agarwalla (6mo): I have a list here that has some overlap but also some new things: https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit#
MichaelA (6mo): That looks very helpful - thanks for sharing it here!
Shri_Samson (5mo): This is probably too broad, but here's Open Philanthropy's list of case studies on the History of Philanthropy [https://www.openphilanthropy.org/research/history-of-philanthropy], which includes ones they have commissioned, though most are not done by EAs, with the exception of Some Case Studies in Early Field Growth [https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth] by Luke Muehlhauser. (Edit: fixed links.)
MichaelA (5mo): Yeah, I think those are relevant, thanks for mentioning them! It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1 [https://www.openphilanthropy.org/research/history-of-philanthropy], 2 [https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth].

(Also, FWIW, I think if an analysis is by a non-EA but commissioned by an EA, that essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)

Reflections on data from a survey about things I’ve written 

I recently requested people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)

Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1] 

I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.

For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common? 

(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)

The data

Q1-Q4: [charts summarising responses to these questions are not reproduced here]

Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ... (read more)

9HowieL4mo"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive" I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary to be biased against them). That said, as you know, I think your summaries/collections are useful and underprovided.
MichaelA (4mo): Good point. Though I guess I suspect that, if the reason a person finds my original research not so useful is just that they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them (rather than making a generic comment about it not seeming useful). But I guess this seems less likely in cases where:
  • the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
  • the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")

In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.
HowieL (4mo): Seems reasonable

tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think we should actually be more uncertain about that; I think it's plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.

(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)

---

In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:

An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)

This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.

He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:

Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help prot

... (read more)

Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally

This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.

In The Precipice, Ord writes:

[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn’t make it through with most of their technology (and institutions) intact. 

(See also the relevant section of Ord's 80,000 Hours interview.)

I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos... (read more)

Epistemic status: Unimportant hot take on a paper I've only skimmed.

Watson and Watson write:

Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.

I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?

They go on to say:

Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing
... (read more)

If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?

tl;dr: I think it's "another million years", or slightly longer, but I'm not sure.

In The Precipice, Toby Ord writes:

How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.

(There are various extra details and caveats about these estimates in the footnotes.)

Ord also makes similar statements on the FLI Podcast, including the following:

If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we pla
... (read more)
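To make the disagreement precise (this is my own formalisation of the reasoning, not something from the book or podcast): the answer depends on the assumed distribution of species lifetimes. A minimal sketch, assuming lifetimes are exponentially distributed with mean one million years:

```latex
% Memorylessness of the exponential distribution, with E[T] = 1/\lambda = 10^6 years:
P(T > t + s \mid T > t) = \frac{e^{-\lambda(t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = P(T > s)
% So the expected remaining lifetime after surviving t = 200{,}000 years is
E[T - t \mid T > t] = E[T] = 10^6 \text{ years}
```

Under a fixed-lifespan model the answer would instead be 800,000 years. And if lifetimes vary across species with a heavier-than-exponential tail, having already survived 200,000 years is weak evidence of belonging to a longer-lived species, which is one way to get the "slightly longer" answer.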

My review of Tom Chivers' review of Toby Ord's The Precipice

I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)

But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.

I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential-risk-related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.

I'll now quote and... (read more)

Aaron Gertler (10mo): This was an excellent meta-review! Thanks for sharing it. I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don't know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field's progress.)
MichaelA (10mo): Agreed. These seem to often be examples of hedge drift [https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey], and their potential consequences seem like examples of memetic downside risks [https://www.lesswrong.com/posts/EdAHNdbkGR6ndAPJD/memetic-downside-risks-how-ideas-can-evolve-and-cause-harm].

Collection of all prior work I found that seemed substantially relevant to information hazards

Information hazards: a very simple typology - Will Bradshaw, 2020

Information hazards and downside risks - Michael Aird (me), 2020

Information hazards - EA concepts

Information Hazards in Biotechnology - Lewis et al., 2019

Bioinfohazards - Crawford, Adamson, Ladish, 2019

Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)

Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018

Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018

Mitigating catastrophic biorisks - Esvelt, 2020

The Precipice (particularly pages 135-137) - Ord, 2020

Information hazard - LW Wiki

Thoughts on The Weapon of Openness - Will Bradshaw, 2020

Exploring the Streisand Effect - Will Bradshaw, 2020

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018

A point of clarification on infohazard terminology - eukaryote, 2020

Somewhat less directly relevant

The Offense-Defense Balance of Scientific Knowledge: ... (read more)

MichaelA (10mo): Interesting example: Leo Szilard and cobalt bombs

In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says [https://en.wikipedia.org/wiki/Cobalt_bomb]:

That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's [https://nickbostrom.com/information-hazards.pdf] subtypes of information hazards:

It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.

I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regards to Japanese bioweapons in WWII elsewhere in the book.

I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns of information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace [https://intelligence.org/files/SzilardNuclearWeapons.pdf] states:

The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:

  • focus only on how the debate over patient philanthropy applies to longtermists
  • generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)

They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a

... (read more)
MichaelDickens (5mo): I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience [https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf]. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having a zero pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)
MichaelA (5mo): Yes, Trammell [https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf] writes:

And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.

You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did [https://forum.effectivealtruism.org/posts/Eey2kTy3bAjNwG8b5/the-emerging-school-of-patient-longtermism], whether they should've used "patient philanthropy" as they arguably did*, and whether I should've proposed the term "patient altruism" for the position that we should give/work later rather than now (roughly speaking).

On the other hand, if we ignore Trammell's definition of the term, I think "patient X" does seem like a natural fit for the position that we should do X later, rather than now.

Do you have other ideas for terms to use in place of "patient"? Maybe "delayed"? (I'm definitely open to renaming the tag [https://forum.effectivealtruism.org/tag/patient-altruism]. Other people can as well.)

*80k write [https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/]:

This suggests to me that 80k is, at least in that post, taking "patient philanthropy" to refer not just to a low or zero pure time preference, but instead to a low or zero rate of discounting overall, or to a favouring of giving/working later rather than now.
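A toy formalisation (mine, not Trammell's notation) may help show how zero pure time preference and urgency can come apart:

```latex
% A philanthropist chooses spending x_t over time to maximise
V = \sum_{t=0}^{\infty} \delta^{t} \, h_t \, u(x_t)
% \delta = 1: zero pure time preference ("patient" in Trammell's sense)
% \delta < 1: positive pure time preference ("impatient")
% h_t: the leverage ("hinginess") of resources deployed at time t
% Even with \delta = 1, a large enough h_0 favours spending now;
% "patient" fixes \delta, not one's overall view on giving now vs later.
```

On this framing, the 80k usage described in the comment above seems to bundle claims about h_t (and overall discounting) into "patient", rather than just δ = 1.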

Collection of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs

Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)

The Long-Term Future: An Attitude Survey - Vallinder, 2019

Older people may place less moral value on the far future - Sanjay, 2019

Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017

The Psychology of Existential Risk: Moral Judgments about Human Extin... (read more)

Collection of sources relevant to moral circles, moral boundaries, or their expansion

Works by the EA community or related communities

Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020

Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018

The Moral Circle is not a Circle - Grue_Slinky, 2019

The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)

Radical Empathy - Holden Karnofsky, 2017

Various works from the Sentience Institute, including:

... (read more)
Jamie_Harris (8mo): The only other very directly related resource I can think of is my own presentation on moral circle expansion [https://www.youtube.com/watch?v=my4bqQrcXI8&feature=youtu.be&t=1], and various other short content on Sentience Institute's website, e.g. our FAQ [https://www.sentienceinstitute.org/faq], some of the talks [https://www.facebook.com/sentienceinstitute/videos/2320662534634209/] or videos.

But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a psychology of moral expansiveness" [https://journals.sagepub.com/doi/full/10.1177/0963721417730888].

Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet, etc.

There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:

Arguments for and against moral advocacy [https://longtermrisk.org/arguments-moral-advocacy/] - Tobias Baumann, 2017

Values Spreading is Often More Important than Extinction Risk [https://reducing-suffering.org/values-spreading-often-important-extinction-risk/] - Brian Tomasik, 2013

Against moral advocacy [https://rationalaltruist.com/2013/06/13/against-moral-advocacy/] - Paul Christiano, 2013

Also relevant: "Should Longtermists Mostly Think About Animals?" [https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals]
MichaelA (8mo): Thanks for adding those links, Jamie! I've now added the first few into my lists above.
Aaron Gertler (8mo): I continue to appreciate all the collections you've been posting! I expect to find reasons to link to many of these in the years to come.
MichaelA (8mo): Good to hear! Yeah, I hope they'll be mildly useful to random people at random times over a long period :D Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this. Also, if and when EA coordinates on one central wiki, these could hopefully be folded into or drawn on for that, in some way.

Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?

I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of: 

  • disseminating important ideas to key decision-makers and thereby improving their decisions
    • either through the Bulletin articles themselves or through them allowing one to
... (read more)
RyanCarey (10d):
https://thebulletin.org/biography/andrew-snyder-beattie/
https://thebulletin.org/biography/gregory-lewis/
https://thebulletin.org/biography/max-tegmark/
MichaelA (10d): Thanks for those links! (I also realise now that I'd already seen and found useful Gregory Lewis's piece for the Bulletin, and had just forgotten that that's the publication it was in.)
MichaelA (10d): Here's [https://thebulletin.org/write-for-the-bulletin/] the Bulletin's page on writing for them. Some key excerpts:

And here's [https://thebulletin.org/2015/02/voices-of-tomorrow-and-the-leonard-m-rieser-award/] the page on the Voices of Tomorrow feature:

Collection of some definitions of global catastrophic risks (GCRs)

See also Venn diagrams of existential, global, and suffering catastrophes

Bostrom & Ćirković (pages 1 and 2):

The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe.
... (read more)
MichaelA (8mo): There is now a Stanford Existential Risk Initiative [https://cisac.fsi.stanford.edu/content/stanford-existential-risks-initiative], which (confusingly) describes itself as:

And they write:

That is much closer to a definition of an existential risk [https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes] (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that fact, and the clash between the term the initiative uses in its name and the term it uses when describing what it will focus on, it appears this initiative is conflating these two terms/concepts.

This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=92ejaz5s5ehAMNH4N], or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom [https://www.existential-risk.org/concept.html]).)

For further discussion, see Clarifying existential risks and existential catastrophes [https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes].

(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)
MichaelA (10mo): Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks [https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/]:

Gregory Lewis [https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/], in that profile itself:

Open Philanthropy Project [https://web.archive.org/web/20200306210315/https://www.openphilanthropy.org/focus/global-catastrophic-risks]:

Schoch-Spana et al. (2017) [https://web.archive.org/web/20200306210217/https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5576209/], on GCBRs, rather than GCRs as a whole:
MichaelA (1mo): Metaculus features a series of questions on global catastrophic risks [https://www.metaculus.com/questions/?search=cat:series--ragnarok]. The author of these questions operationalises [https://www.metaculus.com/questions/1493/ragnar%25C3%25B6k-question-series-by-2100-will-the-human-population-decrease-by-at-least-10-during-any-period-of-5-years/] a global catastrophe as an event in which "the human population decrease[s] by at least 10% during any period of 5 years or less".
MichaelA (2mo): Baum and Barrett (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3155983] gesture at some additional definitions/conceptualisations of global catastrophic risk that have apparently been used by other authors:
MichaelA (9mo): From an FLI podcast interview [https://futureoflife.org/2019/08/01/the-climate-crisis-as-an-existential-threat-with-simon-beard-and-haydn-belfield/] with two researchers from CSER:

Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there's any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you've got your head around that, different groups have slightly different understandings of what the differences between these three terms are. So, for some groups, it's all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible.

Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us. Most of the systems — be this physiological systems, the world's ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing,
MichaelA (9mo): Sears [https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12800] writes:

(Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)
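As an aside on the Metaculus operationalisation quoted a few comments above ("the human population decrease[s] by at least 10% during any period of 5 years or less"): it is concrete enough to be checked mechanically against a population time series. A minimal sketch, with a hypothetical function name and illustrative numbers (Metaculus of course resolves its questions by judgment, not code):

```python
# Sketch: does a series of annual population observations contain a
# decrease of at least 10% within any period of 5 years or less?
def contains_global_catastrophe(populations: dict) -> bool:
    """populations maps year -> world population (annual observations)."""
    years = sorted(populations)
    for i, start in enumerate(years):
        for end in years[i + 1:]:
            if end - start > 5:
                break  # window longer than 5 years; move the start forward
            if populations[end] <= 0.9 * populations[start]:
                return True
    return False

# Illustrative: a drop from 9.0bn to 8.0bn over 2040-2043 (~11%) qualifies.
series = {2039: 9.0e9, 2040: 9.0e9, 2041: 8.7e9, 2042: 8.4e9, 2043: 8.0e9}
print(contains_global_catastrophe(series))  # True
```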

Collection of all prior work I found that explicitly uses the terms differential progress / intellectual progress / technological development

Differential progress / intellectual progress / technological development - Michael Aird (me), 2020

Differential technological development - summarised introduction - james_aung, 2020

Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015

Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016

Differential progress - EA Concepts

Differential technological... (read more)

Why I think The Precipice might understate the significance of population ethics

tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated. 

This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics... (read more)

If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative. 

And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.

(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that ... (read more)

Collection of sources relevant to impact certificates/impact purchases/similar

Certificates of impact - Paul Christiano, 2014

The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)

The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020

Making Impact Purchases Viable - casebash, 2020

Plan for Impact Certificate MVP - lifelonglearner, 2020

Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019

Altruistic equity allocation - Paul Christiano, 2019

Social impact bond - Wikipe... (read more)

schethik (2mo): The Health Impact Fund (cited above by MichaelA) is an implementation of a broader idea outlined by Dr. Aidan Hollis here: An Efficient Reward System for Pharmaceutical Innovation [https://www.who.int/intellectualproperty/news/en/Submission-Hollis.pdf]. Hollis' paper, as I understand it, proposes reforming the patent system such that innovations would be rewarded by government payouts (based on impact metrics, e.g. QALYs) rather than monopoly profit/rent. The Health Impact Fund, an NGO, is meant to work alongside patents (for now) and is intended to prove that the broader concept outlined in the paper can work.

A friend and I are working on further broadening this proposal outlined by Dr. Hollis. Essentially, I believe this type of innovation incentive could be applied to other areas with easily measurable impact (e.g. energy, clean protein and agricultural innovations via a "carbon emissions saved" metric). We'd love to collaborate with anyone else interested (feel free to message me).

What are the implications of the offence-defence balance for trajectories of violence?

Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?

Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen... (read more)

Collection of sources I've found that seem very relevant to the topic of downside risks/accidental harm

Information hazards and downside risks - Michael Aird (me), 2020

Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018

How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and Jonas Vollmer, 2018

Sources that seem somewhat relevant

https://en.wikipedia.org/wiki/Unintended_consequences (in particular, "Unexpected drawbacks" and "... (read more)

Collection of all prior work I've found that seemed substantially relevant to the unilateralist’s curse

Unilateralist's curse [EA Concepts]

Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)

The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original paper]

Hard-to-reverse decisions destroy option value [CEA]

Framing issues with the unilateralist's curse - Linch, 2020

Somewhat less directly relevant

Managing risk in the EA policy... (read more)

Potential downsides of EA's epistemic norms (which overall seem great to me)

This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.

Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017... (read more)

Collection of sources related to dystopias and "robust totalitarianism"

The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)

The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)

Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020

The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "ro... (read more)

Thoughts on Toby Ord’s policy & research recommendations

In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).

Overall, I very much like Ord’s list, and I don’t think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.

Regarding “other anthropogenic

... (read more)

Collection of ways of classifying existential risk pathways/mechanisms

Each of the following works show or can be read as showing a different model/classification scheme/taxonomy:

... (read more)

On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:

a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.

I think we could flesh out this idea as the following argument:

  • Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
  • Premise
... (read more)
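One way to make Premise 1 quantitative is to compare per-year incident rates across the two eras under a simple Gamma-Poisson model. The sketch below uses purely illustrative counts rather than actual tallies of known incidents, and it sets aside the confounder that recent incidents are more likely to still be classified:

```python
# Sketch: P(the yearly rate of near-miss incidents declined), modelling
# known-incident counts in each era as Poisson with an unknown rate.
import random

def rate_samples(count, years, n=100_000):
    # Posterior for a Poisson rate under a flat prior:
    # Gamma(shape = count + 1, scale = 1 / years).
    return [random.gammavariate(count + 1, 1 / years) for _ in range(n)]

early = rate_samples(count=15, years=30)  # hypothetical: 1940s-60s
late = rate_samples(count=5, years=50)    # hypothetical: 1970s onwards
p_declined = sum(lt < er for er, lt in zip(early, late)) / len(early)
print(f"P(rate declined) ~= {p_declined:.2f}")
```

A fuller version would need to model the reporting process itself, since "fewer known incidents" and "fewer incidents" can come apart badly here.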

Collection of sources relevant to the idea of “moral weight”

Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020

Preliminary thoughts on moral weight - Luke Muehlhauser, 2018

Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020

2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)

Notes

As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time... (read more)

A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.

MichaelA (8mo): Ah great, thanks! Do you happen to recall if you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like it might be additional weak evidence that "moral weight" is an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he took it from the philosophical literature).

The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:

  1. Capacity for welfare, which is how well or poorly a given animal's life can go
  2. Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
  3. Moral status, which is how much the welfare of a given animal matters morally

Differences in any of those three things might generate differences in how we prioritize interventions that target different species.

Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!

MichaelA (8mo): Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that. Looking forward to reading the report!
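As a toy illustration (my own formalisation, not from the commenter or the forthcoming report) of how those three concepts could combine when prioritising across species:

```latex
% Moral value at stake in helping n_s animals of species s:
V_s = n_s \cdot S_s \cdot \Delta \bar{w}_s
% S_s: moral status (how much s's welfare matters morally)   [concept 3]
% \bar{w}_s: average realized welfare of species s           [concept 2]
% \Delta \bar{w}_s \le w_s^{\max} - \bar{w}_s: the achievable gain is
%   bounded by the species' capacity for welfare             [concept 1]
```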