All posts

Thursday, 11 April 2024


Quick takes

The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 minutes long. Very nice listening if you fancy a walk and want a primer on Parfit's work.

[1] Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.
In July 2022, Jeff Masters wrote an article (https://yaleclimateconnections.org/2022/07/the-future-of-global-catastrophic-risk-events-from-climate-change/) summarizing findings from a United Nations report on the increasing risk of global catastrophic risk (GCR) events due to climate change. The report defines GCRs as catastrophes that kill over 10 million people or cause over $10 trillion in damage. It warned that by pushing ever further beyond safe planetary boundaries, human activity is raising the odds of climate-related GCRs. The article argued that societies are more vulnerable to sudden collapse when multiple environmental shocks occur, and that the combined impacts of climate change pose a serious risk of total societal collapse if we continue business as usual.

Although the article and report are from mid-2022, the scientific community has continued to warn that climate change effects are arriving faster than models predicted. So I'm curious: what has the EA community been doing over the past year to understand, prepare for, and mitigate these climate-related GCRs? Some questions I have:

* What new work has been done in EA on these risks since mid-2022, and what are the key open problems?
* How much intellectual priority and how many resources is the EA community putting towards climate GCRs compared to other GCRs? Has this changed in the past year, and is it enough given the magnitude of the risks? I see this as distinct from investing in interventions that address GHGs and warming.
* How can we ensure these risks are getting adequate attention?

I'm very interested to hear others' thoughts. While a lot of great climate-related work is happening in EA, I worry that climate GCRs remain relatively neglected compared to other GCRs.
Resolved unresolved issues

One thing I find difficult about discussing problem solving with people is that they often stop at shallow causes. For example, if politician A's corruption is the problem, you can kick him out. Easy, problem solved! But that is exactly the problem: the immediate issue was resolved, yet the underlying one was not, and the natural expectation is that politician B will cause a similar problem again. Still, that is the advice people give: "Kick A out!!" Whatever the issue happens to be, whether it's your weird friends, your bad grades, or your weight. These are personal examples, but couldn't this be generalized to a broader problem of decision-making? Maybe it would have been better to post this on LessWrong. Either way, I'd like to hear your opinions.
In conversations about x-risk, one common mistake is to point out that we have yet to invent something that kills all people, and to conclude that the historical record is not on the side of "doomers." The mistake is survivorship bias, and Ćirković, Sandberg, and Bostrom (2010) call it the Anthropic Shadow: using base-rate frequencies to estimate the probability of events that reduce the number of people (observers) will produce a biased estimate.

If there are multiple possible timelines and AI p(doom) is very high (and soon), then we would expect a greater frequency of events that delay the creation of AGI (geopolitical issues, regulation, perhaps internal conflicts at AI companies, other disasters, etc.). It might be interesting to see whether superforecasters consistently underpredict events that would delay AGI, although figuring out how to interpret that information would be quite challenging unless the effect were blatantly obvious. More likely, I guess, is that I'm born in a universe with more people and everything goes fine anyway. This is quite speculative and roughly laid out, but something I've been thinking about for a while.
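As a rough illustration of the bias described above, here is a minimal Monte Carlo sketch in Python. The parameters (a 20% true per-century catastrophe rate, ten centuries, each catastrophe halving the observer population) are illustrative assumptions, not figures from the quick take or the cited paper; the point is only that an observer sampled in proportion to the surviving population sees a lower historical frequency than the true rate.

```python
import random

# Illustrative sketch of the anthropic-shadow / survivorship-bias point.
# All parameter values are assumptions chosen for demonstration.

TRUE_RATE = 0.2        # assumed true per-century catastrophe probability
CENTURIES = 10         # length of each simulated history
TIMELINES = 200_000    # number of simulated timelines

weighted_freq_sum = 0.0
weight_sum = 0.0
for _ in range(TIMELINES):
    # Count how many catastrophes occur in this timeline's history.
    events = sum(random.random() < TRUE_RATE for _ in range(CENTURIES))
    # Each catastrophe halves the observer population, so timelines with
    # fewer catastrophes contribute more observers.
    population = 0.5 ** events
    # The base rate an observer in this timeline would infer from its record.
    observed_freq = events / CENTURIES
    weighted_freq_sum += population * observed_freq
    weight_sum += population

print(f"True per-century rate:                  {TRUE_RATE}")
print(f"Average rate seen by a random observer: {weighted_freq_sum / weight_sum:.3f}")
```

Running this prints an observer-weighted average noticeably below the true rate, which is the sense in which naive base-rate estimates made by surviving observers understate the underlying risk.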
