Bostrom & Ćirković (pages 1 and 2):
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic damage would count as a global catastrophe.
There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:
a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction.
And they write:
What is a Global Catastrophic Risk?
We think of globa
Participants in the 2008 FHI Global Catastrophic Risk conference estimated a 5.5% probability of extinction from nanotechnology (weapons + accident) and 3% from non-nuclear wars (all wars minus nuclear wars) (the values are on the GCR Wikipedia page). In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including but not limited to nanotechnology, and which I interpret as including non-nuclear wars) at 2% (1 in 50). (Note that, by definition, extinction risk is a subset of existential risk.)
Since starting to... (read more)
I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (rather than from >10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism - though there I don't have reason to believe people recently had reasonably high x-risk estimates; I just sort-of feel like I haven't yet seen good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussio... (read more)
I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.
Cochrane had a team set up in 2011 to investigate better Priority Setting Methods.
Works by the EA community or related communities
Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018
The Moral Circle is not a Circle - Grue_Slinky, 2019
The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)
Radical Empathy - Holden Karnofsky, 2017
Various works from the Sentience Institute, including:
Thanks for adding those links, Jamie!
I've now added the first few into my lists above.
Does anyone have any idea / info on what proportion of infected cases are getting Covid-19 inside hospitals?
(Epistemic status: low, but I didn't find any research on that, so the hypothesis deserves a bit more attention)
1. Nosocomial infections are serious business. Hospitals are basically big buildings full of dying people and the stressed personnel who go from one bed to another trying to avoid it. Throw a deadly and very contagious virus in, and it becomes a slaughterhouse.
2. Previous coronaviruses spread rapidly in hospitals and other c... (read more)
Did anyone see the spread of Covid through nursing homes coming before? It seems quite obvious in hindsight - yet, I didn't even mention it above. Some countries report that almost half of their deaths come from those environments.
(Would it have made any difference? I mean, would people have emphasized patient safety, etc.? I think it's implausible, but has anyone tested whether this isn't just some statistical effect, due to the concentration of old people with chronic diseases?)
Why didn't we have more previous alarm concerning the spread of Covid through care and nursing homes? Would it have made any difference?
Can Longtermists "profit" from short-term bias?
We often think of human short-term bias (and the associated hyperbolic discounting) and the uncertainty of the future as being among longtermism's main drawbacks; i.e., people won't think about policies concerning the future because they can't appreciate or compute their value. However, those features may actually provide some advantages, too – by evoking something analogous to the effect of the veil of ignorance:
Is 'donations as gifts' neglected?
I enjoy sending 'donations as gifts' - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn't actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.
I wonder if other EAs do that. Perhaps it seems very obvious (for some cultures where donations are common), but I haven't seen any remark or analysis about it (well, maybe I'... (read more)
If your friend doesn't donate normally, then probably their preferred person to spend money on is themself. It still seems rude to me to say you're giving them a gift, which should be something they want, and instead give them something they don't want.
For example, my mother likes flowers. I normally get her flowers for mother's day. If I switch to giving her a donation to AMF instead of buying her flowers, she will be counterfactually worse off - she is no longer getting the flowers she enjoys. I don't think that kind of experience would make her more likely to start donating, either.
Medium term AI forecasting with Metaculus
I'm working on a collection of metaculus.com questions intended to generate AI domain specific forecasting insights. These questions are intended to resolve in the 1-15 year range, and my hope is that if they're sufficiently independent, we'll get a range of positive and negative resolutions which will inform future forecasts.
I've already gotten a couple of them live, and am hoping for feedback on the rest:
1. When will AI out-perform humans on argument reasoning tasks?
2. When will multi-modal ML out-perform uni-modal ML?
Yes, I recently asked a Metaculus mod about this, and they said they're hoping to bring back the ai.metaculus subdomain eventually. For now, I'm submitting everything to the main Metaculus domain.
Category: Intervention idea
Epistemic status: speculative; arm-chair thinking; non-expert idea; unfleshed idea
Proposal: Have nuclear powers insure each other against nuclear attack, creating a form of mutually assured destruction (i.e., destroying my infrastructure means destroying your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.
BTW, I recently learned that the ICJ missed an opportunity to explicitly state that using nukes (or at least a first strike) is a violation of international law.
Did UNESCO's draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn't been leaked yet, and I didn't see anything in the EA community - maybe my bubble is too small.
Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)
The Long-Term Future: An Attitude Survey - Vallinder, 2019
Older people may place less moral value on the far future - Sanjay, 2019
Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017
Psychology of Existential Risk and Long-Termism - Schubert, 2018 (spa... (read more)
Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020
Preliminary thoughts on moral weight - Luke Muehlhauser, 2018
Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020
2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)
As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time... (read more)
Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that.
Looking forward to reading that!
Community norm proposal: I wish all EA papers were posted on the EA Forum so I could see what other EAs thought of them, which would help me decide whether I want to read them.
A tip for writing EA forum posts with footnotes
First click your nickname in the top right corner, go to Edit Settings, and make sure the checkbox "Activate Markdown Editor" is checked. Then write your post in Google Docs and use the "Google Docs to Markdown" add-on to convert it to Markdown. If you then paste the resulting Markdown into the EA Forum editor and save it, you will see your text with footnotes. It might also contain some unnecessary text that you should delete.
Tables and images
If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)
If you've put the images in a google doc, and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copypaste the images along with the text.
I'm not sure whether I should expect Google or Imgur to preserve their image links for longer.
This post contains some notes that I wrote after ~ 1 week of reading about Certificates of Impact as part of my work as a Research Scholar at the Future of Humanity Institute, and a bit of time after that thinking and talking about the idea here and there.
In this post, I
FWIW I think you should make this a top level post.
If the great filter is after sentience, but before technologically mature civilisations, the cosmos could be filled with lifeforms experiencing a lot of moral harm
I am "more a sort of preference utilitarian" -- "moral harm" is a neutral term, and depending on your values can be "suffering" or "preference violation" or something else
Or maybe the hidden premise of wild-animal suffering is false: the net expected value of wildlife is positive (there's probably some positive hedonic utility in basic vital functions) & something like the repugnant conclusion is true.
not for negative (hedonist/preference) utilitarians, maybe for total utilitarians
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are p... (read more)
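A first pass at the measurement itself could be sketched from activity timestamps, if CEA (or anyone with the data) wanted to try it. A minimal sketch in Python - the activity log and user names here are made up purely for illustration; the real input would be post/comment timestamps per account:

```python
from collections import defaultdict

def yearly_attrition(activity):
    """For each year except the last, return the fraction of that year's
    active users who never appear in any later year."""
    users_by_year = defaultdict(set)
    for user, year in activity:
        users_by_year[year].add(user)
    years = sorted(users_by_year)
    rates = {}
    for i, year in enumerate(years[:-1]):
        # Users seen in any year after this one.
        later = set().union(*(users_by_year[y] for y in years[i + 1:]))
        dropped = users_by_year[year] - later
        rates[year] = len(dropped) / len(users_by_year[year])
    return rates

# Hypothetical activity log: (user_id, year) pairs.
activity = [
    ("alice", 2015), ("alice", 2016),
    ("bob", 2015),
    ("carol", 2015), ("carol", 2016), ("carol", 2017),
    ("dan", 2016),
]

print({y: round(r, 2) for y, r in yearly_attrition(activity).items()})
# → {2015: 0.33, 2016: 0.67}
```

Note this only measures "never active again", which (per the caveat above) is at best a noisy proxy for leaving the movement, and recent years will look artificially bad because users simply haven't had time to return yet.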
At the start of Chapter 6 in The Precipice, Ord writes:
To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others a