Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
"EA getting swamped by normies with high inferential distances"
This seems like completely the wrong focus! We need huge numbers of normies involved to get the political pressure necessary to act on AI x-risk before it's too late. We've already tried the "EAs lobbying behind closed doors" approach, and it has failed (and/or been co-opted by the big AGI companies).
Of course, if we do somehow survive all this, people will accuse me and others like me of crying wolf. But 1-in-10 outcomes aren't that uncommon! I'm willing to take the reputational hit, though, whether or not it's justified.
I think a big problem with AI x-risk discourse in general is that there are a lot of innumerate people around who just don't understand what probability means (or at least act as though they don't, treating every claim as a confident assertion even when it is appropriately hedged).
Those doing so should caveat that such measures are designed to mitigate the possibility (and not the certainty) of catastrophic outcomes. This should be obvious, but given that people will be waiting in the wings to weaponise anything that could be called a regulatory overreaction, I think it's worth doing.
I think for a lot of people, it matters just how much of a possibility there is. From what I've seen, many people are (irrationally, imo!) willing to bite the bullet on yolo-ing ASI if there is "only a 10%" chance of extinction. For this reason I counter with my actual assessment: doom is the default outcome of AGI/ASI (~90% likely). Very few people are willing to bite that bullet! (Far more common is for people to fall back on dismissing the risk as "low" - e.g. experts saying it's "only" 1-25%.)
"Beyond capacity building, it's not completely clear to me that there are robustly good interventions in AI safety, and I think more work is needed to prioritize interventions."
I think it's pretty clear[1] that stopping further AI development (or Pausing) is a robustly good intervention in AI Safety (reducing AI x-risk).
"However, what happens if these tendencies resurface when 'shit hits the fan'?"
I don't think this could be pinned on PauseAI, when at no point has PauseAI advocated or condoned violence. Many (basically all?) political campaigns attract radical fringes. Non-violent moderates aren't responsible for them.
You mention S-risk. I tend to try not to think too much about this, but it needs to be considered in any EV estimate of working on AI Safety. I think factoring it in appropriately could be overwhelming in favour of the conclusion that preventing ASI from being built is the number one priority. The x-risks of space colonisation could be avoided by letting ASI drive everything extinct. But how likely is ASI-induced extinction relative to ASI-induced S-risk (ASI simulating, or physically creating, astronomical amounts of unimaginable suffering, on a scale far larger than human space colonisation could ever achieve)?
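To make the "overwhelming" point concrete, here is a minimal toy expected-value sketch in Python. All the probabilities and (dis)value magnitudes are illustrative placeholders, not my actual estimates; the only point it shows is that even a modest probability of astronomical-scale suffering can swamp every other term in the comparison.

```python
# Toy expected-value sketch. All numbers are illustrative placeholders, not real estimates.
# It compares two hypothetical futures to show how an astronomical-scale S-risk term
# can dominate an EV calculation even at a modest probability.

# Hypothetical scenario probabilities, conditional on ASI being built
p_extinction = 0.5   # placeholder: ASI-induced extinction
p_s_risk = 0.1       # placeholder: ASI-induced astronomical suffering
p_ok = 1 - p_extinction - p_s_risk

# Hypothetical (dis)value magnitudes on an arbitrary common scale
v_extinction = 0.0           # extinction: no further value or disvalue
v_s_risk = -1e9              # astronomical suffering, far larger in magnitude than...
v_space_colonisation = -1e3  # ...suffering from human space colonisation alone
v_ok = 1e3                   # a broadly good outcome

ev_build_asi = (p_extinction * v_extinction
                + p_s_risk * v_s_risk
                + p_ok * v_ok)
ev_no_asi = v_space_colonisation  # crude stand-in for a future without ASI

print(f"EV(build ASI): {ev_build_asi:,.0f}")
print(f"EV(no ASI):    {ev_no_asi:,.0f}")
# With these placeholder numbers the S-risk term swamps everything else,
# which is the sense in which factoring it in can be "overwhelming".
```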
I'm concerned that, given the nearness of AI x-risk and the high likelihood that we all die in the near future, to some extent you are trying to seek comfort in complex cluelessness and moral uncertainty. If we go extinct (along with all the rest of known life in the universe), maybe it would be for the best? I don't know; I think I would rather live to find out, and help steer the future toward more positive paths (we can end factory farming before space colonisation happens in earnest). I also kind of think "what's the point in doing all these other EA interventions if the world just ends in a few years?" Sure, there is some near-term benefit to those helped here and now, but everyone still ends up dead.
I don't think it's the discount rate (especially given short timelines); I think it's more that people haven't really thought about why their p(doom|ASI) is low. But people seem remarkably resistant to actually tackling the cruxes of the object-level arguments, or to fully extrapolating the implications of what they do agree on. When they do, they invariably come up short.