"American UBI: for and against"
"A brief history of Rosicrucianism & the Invisible College"
"Were almost all the signers of the Declaration of Independence high-degree Freemasons?"
"Have malaria case rates gone down in areas where AMF did big bednet distributions?"
"What is the relationship between economic development and mental health? Is there a margin at which further development decreases mental health?"
"Literature review: Dunbar's number"
"Why is Rwanda outperforming other African nations?"
"The longtermist case for animal welfare"
"Philosopher-Kings: why wise governance is important for the longterm future"
"Case studies: when has democracy outperform technocracy? (and vice versa)"
"Examining the tradeoff between coordination and coercion"
"Spiritual practice as an EA cause area"
"Tools for thought as an EA cause area"
"Is strong, ubiquitous encryption a net positive?"
"How important are coral reefs to ocean health? How can they be protected?"
"What role does the Amazon rainforest play in regulating the North American biosphere?"
"What can the US do to protect the Amazon from Bolsonaro?"
"Can the Singaporean governance model scale?"
"Is EA complacent?"
"Flow-through effects of widespread addiction"
I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.
There still isn't a lot of writing explaining the case for existential misalignment risk. And a significant fraction of what's been produced since Superintelligence either (a) roughly summarizes the arguments in Superintelligence, (b) is pretty cursory, or (c) is written by people who are relative optimists and are in large part trying to explain their relative optimism.
Since I have the (possibly mistaken) impression that a decent number of people in the EA community are quite pessimistic regarding existential misalignment risk, on the basis of reasoning that goes significantly beyond what's in Superintelligence, I'd really like to understand this position a lot better and be in a position to evaluate the arguments for it.
(My ideal version of this post would probably assume some degree of familiarity with contemporary machine learning, and contemporary safety/robustness issues, but no previous familiarity with arguments that AI poses an existential risk.)
I see, thanks for the explanation!