Shortform Content [Beta]

MichaelA's Shortform

Collection of some definitions of global catastrophic risks (GCRs)

Bostrom & Ćirković (pages 1 and 2):

The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth [...]

There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:

a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction.

And they write:

What is a Global Catastrophic Risk?
We think of global [...]
MichaelA (1mo): From an FLI podcast interview [https://futureoflife.org/2019/08/01/the-climate-crisis-as-an-existential-threat-with-simon-beard-and-haydn-belfield/] with two researchers from CSER:

Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat, and if there's any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you've got your head around that, different groups have slightly different understandings of what the differences between these three terms are. So, for some groups, it's all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible.

Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us. Most of the systems — be this physiological systems, the world's ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, [...]
MichaelA (1mo): Sears [https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12800] writes: [...]

(Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)
gavintaylor's Shortform

Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars minus nuclear wars) (the values are on the GCR Wikipedia page). In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including, but not limited to, nanotechnology, and which I interpret as also covering non-nuclear wars) at 2% (1 in 50). (Note that, by definition, extinction risk is a subset of existential risk.)


Since starting to [...]
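To put the two sets of numbers side by side (a rough comparison on my part, which assumes the survey's nanotech and non-nuclear-war categories map loosely onto Ord's "other anthropogenic risks" bucket):

$$P_{\text{2008 survey}}(\text{nanotech}) + P_{\text{2008 survey}}(\text{non-nuclear wars}) \approx 5.5\% + 3\% = 8.5\%$$
$$P_{\text{Ord, 2020}}(\text{other anthropogenic risks}) \approx 2\%$$

Since simply summing the two survey figures ignores any overlap between the categories, 8.5% is best read as a rough upper bound on the combined 2008 estimate; even so, it sits well above Ord's 2%.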

I too find this an interesting topic. More specifically, I wonder why so little discussion of nanotech has been published in the last few years (as opposed to >10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism, though there I don't have reason to believe people recently held reasonably high x-risk estimates; I just feel I haven't yet seen good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion [...]

evelynciara's Shortform

I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.

edoarad's Shortform

Cochrane set up a team in 2011 to investigate better priority-setting methods.

MichaelA's Shortform

Collection of sources relevant to moral circles, moral boundaries, or their expansion

Works by the EA community or related communities

Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018

The Moral Circle is not a Circle - Grue_Slinky, 2019

The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)

Radical Empathy - Holden Karnofsky, 2017

Various works from the Sentience Institute, including:

[...]
Jamie_Harris (7d): The only other very directly related resource I can think of is my own presentation on moral circle expansion [https://docs.google.com/presentation/d/1p1wkzWCAGvDiFU2vMkUBW7jmPXjsWcoyUqplj1_nwyg/edit?usp=sharing], and various other short content on Sentience Institute's website, e.g. our FAQ [https://www.sentienceinstitute.org/faq], some of the talks [https://www.facebook.com/sentienceinstitute/videos/2320662534634209/] or videos. But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a psychology of moral expansiveness" [https://journals.sagepub.com/doi/full/10.1177/0963721417730888]. Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet, etc.

There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:

Arguments for and against moral advocacy [https://longtermrisk.org/arguments-moral-advocacy/] - Tobias Baumann, 2017

Values Spreading is Often More Important than Extinction Risk [https://reducing-suffering.org/values-spreading-often-important-extinction-risk/] - Brian Tomasik, 2013

Against moral advocacy [https://rationalaltruist.com/2013/06/13/against-moral-advocacy/] - Paul Christiano, 2013

Also relevant: "Should Longtermists Mostly Think About Animals?" [https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-lo...]

Thanks for adding those links, Jamie!

I've now added the first few into my lists above.

MichaelA (19d): Good to hear! Yeah, I hope they'll be mildly useful to random people at random times over a long period :D Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this. Also, if and when EA coordinates on one central wiki, these could hopefully be folded into, or drawn on for, that in some way.
Ramiro's Shortform

Does anyone have any idea / info on what proportion of infected cases are getting Covid-19 inside hospitals?

(Epistemic status: low, but I didn't find any research on this, so the hypothesis deserves a bit more attention)

1. Nosocomial infections are serious business. Hospitals are basically big buildings full of dying people and the stressed personnel who go from one bed to another trying to avoid it. Throw a deadly and very contagious virus in, and it becomes a slaughterhouse.

2. Previous coronaviruses spread rapidly in hospitals and other c[...]

Khorton (2mo): https://www.theguardian.com/world/2020/mar/24/woman-first-uk-victim-die-coronavirus-caught-hospital-marita-edwards

Did anyone see the spread of Covid through nursing homes coming? It seems quite obvious in hindsight, yet I didn't even mention it above. Some countries report that almost half of their deaths come from those environments.

(Would it have made any difference? I mean, would people have emphasised patient safety more, etc.? I think that's implausible, but has anyone tested whether this isn't just a statistical effect, due to the concentration of elderly people with chronic diseases?)

Ramiro's Shortform

Why wasn't there more advance alarm about the spread of Covid through care and nursing homes? Would it have made any difference? https://www.theguardian.com/world/2020/may/16/across-the-world-figures-reveal-horrific-covid-19-toll-of-care-home-deaths

[This comment is no longer endorsed by its author]
Ramiro's Shortform

Can Longtermists "profit" from short-term bias?

We often think of human short-term bias (and the associated hyperbolic discounting) and the uncertainty of the future as being among long-termism's main obstacles; i.e., people won't think about policies concerning the future because they can't appreciate or compute their value. However, those features may actually provide some advantages, too, by evoking something analogous to the effect of the veil of ignorance:

  1. They allow long-termism to provide some sort of focal point [...]
Ramiro's Shortform

Is 'donations as gifts' neglected?

I enjoy sending 'donations as gifts' - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn't actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.

I wonder if other EAs do that. Perhaps it seems very obvious (for some cultures where donations are common), but I haven't seen any remark or analysis about it (well, maybe I'[...]

Khorton (20d): I don't know what you mean by 'neglected'. I know a lot of people who say they want this and a similar number who are deeply offended by the concept. (Personally, I'm against the idea of giving charitable donations to my favourite charity as a gift, although I'd consider a donation to the recipient's favourite charity.)
Ramiro (19d): Thanks. Maybe it's just my blind spot. I couldn't find anyone discussing this for more than 5 minutes, except for this one [https://forum.effectivealtruism.org/posts/3Gs65Nesm6H4SvT5F/the-valentine-s-day-gift-that-saves-lives]. I googled it and found some blogs [https://www.telegraph.co.uk/christmas/0/best-charity-christmas-gifts-uk-2019-unusual-presents-give/] that are not about what I have in mind. I agree that donating to my favourite charity instead of my friend's favourite one would be impolite, at the least; however, I was thinking about friends who are not EAs, or who don't usually donate at all. It might be a better gift than a card or a lame souvenir, and it might interest the friend in EA charities (I try to think about which charity would interest the person most). Is there any reason against it?

If your friend doesn't normally donate, then the person they'd most prefer money be spent on is probably themselves. It still seems rude to me to say you're giving them a gift, which should be something they want, and instead give them something they don't want.

For example, my mother likes flowers. I normally get her flowers for mother's day. If I switch to giving her a donation to AMF instead of buying her flowers, she will be counterfactually worse off - she is no longer getting the flowers she enjoys. I don't think that kind of experience would make her more likely to start donating, either.

jacobpfau's Shortform

Medium term AI forecasting with Metaculus

I'm working on a collection of metaculus.com questions intended to generate AI-domain-specific forecasting insights. These questions are intended to resolve in the 1-15 year range, and my hope is that, if they're sufficiently independent, we'll get a range of positive and negative resolutions that will inform future forecasts.

I've already gotten a couple of them live, and am hoping for feedback on the rest:

1. When will AI out-perform humans on argument reasoning tasks?

2. When will multi-modal ML out-perform uni-modal [...]
Lukas_Gloor (20d): You might be familiar with https://ai.metaculus.com/questions/. It went dormant, unfortunately.

Yes, I recently asked a Metaculus mod about this, and they said they're hoping to bring back the ai.metaculus sub-domain eventually. For now, I'm submitting everything to the Metaculus main domain.

Mati_Roy's Shortform

Nuke insurance

Category: Intervention idea

Epistemic status: speculative; arm-chair thinking; non-expert idea; unfleshed idea

Proposal: Have nuclear powers insure each other that they won't nuke each other, creating a financial form of mutually assured destruction (i.e., destroying my infrastructure means destroying your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.

Ramiro (2mo): Also: what about just explicitly criminalizing a) a first strike, b) a nuclear attack? The idea is to make it more likely that the individuals who participated in a nuclear strike would be punished - even if they considered it to be morally justified. (Someone will certainly think this is "serious April Fool's stuff".)
Mati_Roy (2mo): Good point. My implicit idea was to have the money in an independent trust, so that the "punishment" is easier to enforce.

BTW, I recently learned that the ICJ missed an opportunity to explicitly state that using nukes (or at least a first strike) is a violation of international law.

Ramiro's Shortform

Did UNESCO's draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn't been leaked yet, and I haven't seen anything in the EA community - maybe my bubble is too small. https://en.unesco.org/artificial-intelligence

MichaelA's Shortform

Collection of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs

Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)

The Long-Term Future: An Attitude Survey - Vallinder, 2019

Older people may place less moral value on the far future - Sanjay, 2019

Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017

Psychology of Existential Risk and Long-Termism - Schubert, 2018 (spa [...]

MichaelA's Shortform

Collection of sources relevant to the idea of “moral weight”

Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020

Preliminary thoughts on moral weight - Luke Muehlhauser, 2018

Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020

2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)

Notes

As I'm sure you've noticed, this is a very small collection. I intend to add to it over time [...]

MichaelA (22d): Ah great, thanks! Do you happen to recall whether you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like it might be additional weak evidence that "moral weight" is an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he had taken it from the philosophical literature).
Jason Schukraft (22d): The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading of moral weight:

1. Capacity for welfare, which is how well or poorly a given animal's life can go
2. Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
3. Moral status, which is how much the welfare of a given animal matters morally

Differences in any of those three things might generate differences in how we prioritize interventions that target different species. Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!

Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that.

Looking forward to reading that!

Mati_Roy's Shortform

Community norm proposal: I wish all EA papers were posted on the EA Forum, so I could see what other EAs thought of them, which would help me decide whether I want to read them.

saulius's Shortform

A tip for writing EA Forum posts with footnotes

First, click your nickname in the top right corner, go to Edit Settings, and make sure the 'Activate Markdown Editor' checkbox is checked. Then write your post in Google Docs and use the 'Google Docs to Markdown' add-on to convert it to Markdown. If you then paste the resulting Markdown into the EA Forum editor and save it, you will see your text with footnotes. It might also contain some unnecessary text that you should delete.

Tables and images

If you have images in your posts, you have to upload them somewhere on the internet (e.g., https://imgur.com/) [...]

If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)

If you've put the images in a Google Doc and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copy-paste the images along with the text.

I'm not sure which of Google or Imgur I should expect to preserve image links for longer.

Aaron Gertler (1mo): You can also write "in-line" footnotes: see this guide to footnote syntax [https://forum.effectivealtruism.org/posts/fQ4HGx4AR2QXHR5RL/ea-forum-footnotes-are-live-and-other-updates].
G Gordon Worley III (1mo): Thanks for the Google Docs to Markdown tip. I didn't know I could do that, but it'll make writing posts for LW and EAF more convenient!
Ben_Snodin's Shortform

My Notes on Certificates of Impact

Introduction & purpose of post

This post contains some notes that I wrote after ~ 1 week of reading about Certificates of Impact as part of my work as a Research Scholar at the Future of Humanity Institute, and a bit of time after that thinking and talking about the idea here and there.

In this post, I

  • describe what Certificates of Impact are, including a concrete proposal,
  • provide some lists of ways that it might be good or bad, and reasons it might or might not work,
  • provide some other miscellaneous thoughts relevant to
[...]

FWIW I think you should make this a top level post.

Habryka (25d): Kind of surprised that this post doesn't link at all to Paul's post on altruistic equity: https://forum.effectivealtruism.org/posts/r7vmtHZKuosJZ3Xq5/altruistic-equity-allocation
Mati_Roy's Shortform

If the great filter is after sentience, but before technologically mature civilisations, the cosmos could be filled with lifeforms experiencing a lot of moral harm

Ramiro (1mo): Look on the bright side: they don't have factory farming ;) Or maybe the hidden premise of wildlife suffering is false: the net expected value of wild life is positive (there's probably some positive hedonic utility in basic vital functions) & something like the repugnant conclusion is true. (By the way, I thought you were more a sort of preference utilitarian.)

I am "more a sort of preference utilitarian" -- "moral harm" is a neutral term, and depending on your values can be "suffering" or "preference violation" or something else

Or maybe the hidden premise of wild life suffering is false: the net expected value of wild life is positive (there's probably some positive hedonic utility in basic vital functions) & something like the repugnant conclusion is true.

not for negative (hedonist/preference) utilitarians, maybe for total utilitarians

MichaelA's Shortform

To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users in each year have stopped being users by the next year? E.g., how many of 2015's users haven't used it since?

Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?

One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies even more strongly to reading the EA Forum while logged in, to commenting, and to posting, which are p[...]
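If CEA could export anonymised per-user activity data, the calculation itself would be simple. A minimal sketch of the "haven't used it since" version, assuming a hypothetical export with one row per user per active year (the file name and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical export: one row per (user_id, year) in which that user was active
# (posted, commented, or logged in -- whichever definition of "user" is chosen).
activity = pd.read_csv("forum_activity_by_year.csv")
active_by_year = activity.groupby("year")["user_id"].apply(set)
years = sorted(active_by_year.index)

for i, year in enumerate(years[:-1]):
    current = active_by_year[year]
    # Users seen in any later year of the export.
    later = set().union(*(active_by_year[y] for y in years[i + 1:]))
    dropped = current - later  # active this year, never active again
    print(f"{year}: {len(current)} active users, "
          f"{len(dropped) / len(current):.0%} never active again")
```

The "stopped being users the next year" variant is the same idea with `later` restricted to the single following year. As noted above, though, any measure based on logged-in activity will miss people who only read the Forum.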

gavintaylor's Shortform

At the start of Chapter 6 of The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others a
[...]