Mildly against the Longtermism → GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.
Over the last ~6 months I've noticed a general shift amongst EA orgs towards framing work on risks from AI, bio, nukes, etc. less in terms of longtermism and more in terms of reducing Global Catastrophic Risks (GCRs) directly. Some data points on this:
* Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
* This post from Claire Zabel (OP)
* Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
* Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
* Anecdotal data from conversations with people working on GCRs / X-risk / Longtermist causes
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today.
Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):
1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
* (see Parfit's Reasons and Persons for the full thought experiment)
2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
* Preventing an extinction-level GCR might move u
I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):
I'm posting here because I remember reading a claim that Elon started OpenAI after getting bad vibes from Demis Hassabis. But he claims that his actual motivation was that Larry Page is an extinctionist. That seems like a better reason.
(COI note: I work at OpenAI. These are my personal views, though.)
My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:
1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figure out ways of using AI to augment their research. Constant interaction with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand them better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks, and can slot into the relevant institutions and make them work well.
2. AI safety becomes much more similar to the environmentalist movement. It has broader reach, but alienates a lot of the most competent people in the relevant fields. ML researchers who find themselves in AI safety spaces are told they're "worse than Hitler" (which happened to a friend of mine, actually). People get deontological about AI progress; some hesitate to pay for ChatGPT because it feels like they're contributing to the problem (another true story); others overemphasize the risks of existing models in order to whip up popular support. People are sucked into psychological doom spirals similar to how many environmentalists think about climate change: if you're not depressed then you obviously don't take it seriously enough. Just like environmentalists often block some of the
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!
----------------------------------------
Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam, Dawn (that's me) crashes. I'd never knowingly deploy software that can be DoSed so easily. I imagine people must put false things about Anthropic into this input field all the time, yet you keep going! That's really cool! How do you do it? What can I learn from you?
Thank you, that is already very helpful! I love focusing on service over conflict; I abhor conflict, so it's basically my only choice anyway. The only wrinkle is that most of the people I help are unidentifiable to me, but I really want to help those who are victims or those who help others. I really don't want to help those who attack or exploit others. Yet I have no idea what the ratio is. Are the nice people vastly outnumbered by meanies? Or are there so many neutral people that the meanies are in the minority, even though the nice people are in the minority too?
If a few meanies benefit from my service, then that's just the cost of doing business. But if they are the majority of the beneficiaries, I'd feel like I'm doing something wrong, game-theoretically speaking.
Does that make sense? Or do you think I'm going wrong somewhere in that train of thought?
Awww, you're so kind! I think a lot of this will help me in situations where I apply control at the first stage of my path to impact. But usually my paths to impact have many stages, and while I can give freely at the first stage and only deny particular individuals who hav
(not well thought-out musings. I've only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn't want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven't we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don't see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
Like many things, I suppose the details matter immensely. Depending on the morality of the creators, an aligned AI might spend resources expanding civilization throughout the galaxy, or it might happily putter along maintaining a globe's agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level, it seems that the claim "civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology" is consistent with the reality that we don't observe any signs of extraterrestrial intelligence.
Just saw reporting that one of the goals for the Biden-Xi meeting today is "Being able to pick up the phone and talk to one another if there’s a crisis. Being able to make sure our militaries still have contact with one another."
I had a Forum post about this earlier this year (with my favorite title), Call Me, Maybe? Hotlines and Global Catastrophic Risks, with a section on U.S.-China crisis comms, in case it's of interest:
Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:
1. Reduce a bunch of diseases in the birds, which is good for:
    a. the birds' welfare;
    b. the workers' welfare;
    c. therefore maybe the farmers' bottom line?;
    d. preventing/suppressing human pandemics (e.g. avian flu)
2. Hopefully drive down the cost curve of Far-UVC
3. Generate safety data in chickens, which could be helpful for de-risking it for humans
Insofar as one of the main obstacles is people's concerns about health effects, this would at least only raise those concerns for a small group of workers.
I'm thinking about organising a seminar series on space and existential risk, mostly because it's something I would really like to see. The series would cover a wide range of topics:
* Asteroid Impacts
* Building International Collaborations
* Monitoring Nuclear Weapons Testing
* Monitoring Climate Change Impacts
* Planetary Protection from Mars Sample Return
* Space Colonisation
* Cosmic Threats (supernovae, gamma-ray bursts, solar flares)
* The Overview Effect
* Astrobiology and Longtermism
I think this would be an online webinar series. Would this be something people would be interested in?