AI safety has become a big deal in EA, and so I'm curious about how much "due diligence" on it has been done by the EA community as a whole. Obviously there have been many in-person discussions, but it's very difficult to evaluate whether these contain new or high-quality content. Probably a better metric is how much work has been done which:
1. Is publicly available;
2. Engages in detail with core arguments for why AI might be dangerous (type A), OR tries to evaluate the credibility of the arguments without directly engaging with them (type B);
3. Was motivated or instigated by EA.
I'm wary of focusing too much on credit assignment, but it seems important to be able to answer a question like "if EA hadn't ever formed, to what extent would it have been harder for an impartial observer in 2019 to evaluate whether working on AI safety is important?" The clearest evidence would be if there were much relevant work produced by people who were employed at EA orgs, funded by EA grants, or convinced to work on AI safety through their involvement with EA. Some such work comes to mind, and I've listed it below; what am I missing?
Type A work which meets my criteria above:
- A lot of writing by Holden Karnofsky
- A lot of writing by Paul Christiano
- This sequence by Rohin Shah
- These posts by Jeff Kaufman
- This agenda by Allan Dafoe
- This report by Tom Sittler
Type A work which only partially meets criterion 3 (or which I'm uncertain about):
- These two articles by Luke Muehlhauser
- This report by Eric Drexler
- This blog by Ben Hoffman
- AI Impacts
Type B work which meets my criteria above: none that I've found so far.
Things which don't meet those criteria:
- This 80,000 Hours report (which mentions the arguments, but doesn't thoroughly evaluate them)
- The AI Foom debate
Edited to add: Wei Dai asked why I didn't count Nick Bostrom as "part of EA", and I wrote quite a long answer which explains the motivations behind this question much better than my original post. So I've copied most of it below:
The three questions I am ultimately trying to answer are: a) how valuable is it to build up the EA movement? b) how much should I update when I learn that a given belief is a consensus in EA? and c) how much evidence do the opinions of other people provide in favour of AI safety being important?
To answer the first question, assuming that analysis of AI safety as a cause area is valuable, I should focus on contributions by people who were motivated or instigated by the EA movement itself. Here Nick doesn't count (except insofar as EA made his book come out sooner or better).
To answer the second question, it helps to know whether the focus on AI safety in EA came about because many people did comprehensive due diligence and shared their findings, or whether there wasn't much investigation and the ubiquity of the belief was instead driven by an information cascade. For this purpose, I should count work by people to the extent that they (or people like them) are likely to critically investigate other beliefs that are or will become widespread in EA. Being motivated to investigate AI safety by membership in the EA movement is the best evidence, but for the purpose of answering this question I probably should have used "motivated by the EA movement, or by very similar things to what EAs are motivated by", and should partially count Nick.
To answer the third question, it helps to know whether the people who have become convinced that AI safety is important are a relatively homogeneous group who might all have highly correlated biases and hidden motivations, or whether a wide range of people have become convinced. For this purpose, I should count work by people to the extent that they are dissimilar to the transhumanists and rationalists who came up with the original safety arguments, and also to the extent that they rederived the arguments for themselves rather than being influenced by the existing ones. Here EAs who started off not inclined towards transhumanism or rationalism at all count the most, and Nick counts very little.
Magnus Vinding's essay "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence" is one of the most thoughtful EA analyses against prioritizing AI safety that I'm aware of, even though I disagree with his conclusion and support FRI's approach to reducing AI s-risks. I'd say it fits into the "Type A, meets the OP's criteria" category.
Thanks for sharing and for the kind words. :-)
I'd like to clarify that I also support FRI's approach to reducing AI s-risks. The question is rather how large a fraction of our resources approaches of this kind deserve relative to other things. My view is that, relatively speaking, we significantly underinvest in addressing other risks, by which I roughly mean "risks not stemming primarily from FOOM or sub-optimally written software" (which can still involve AI plenty, of course). I would like to see greater investment in broad, exploratory research on s-risk scenarios and how we can reduce them.
As for what explains the (in my opinion) skewed focus, it seems to me that we mostly think about AI futures in far mode; see https://www.overcomingbias.com/2010/06/near-far-summary.html and https://www.overcomingbias.com/2010/10/the-future-seems-shiny.html. Perhaps the most significant way this shows up is that we intuitively think the future will be determined by a single agent, or a few agents, and what they want, as opposed to countless different agents cooperating and competing, with many factors that are non-intentional (for those future agents) influencing the outcomes.
I'm not aware of such summaries, but I'll take a stab at it here:
Even though it's possible for the expected disvalue of a very improbable outcome to be high if the outcome is sufficiently awful, the relatively large investment in AI safety work by the EA community today would only make sense if the probability of an AI-catalyzed global catastrophic risk (GCR) were decently high. This Open Phil post, for example, doesn't frame AI as a "yes, it's extremely unlikely, but the downsides could be massive, so in expectation it's worth working on" cause; many EAs give estimates of a non-negligible probability of very bad AI outcomes. Accordingly, AI is considered not only a viable cause to work on but indeed one of the top priorities.
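To make the expected-value logic here explicit (the numbers below are purely illustrative, not estimates anyone in this thread has given): writing $p$ for the probability of an AI-catalyzed catastrophe and $D$ for its disvalue, the expected disvalue is

$$
\mathbb{E}[\text{disvalue}] = p \cdot D
$$

A "Pascalian" case would take a tiny $p$ (say $10^{-9}$) and rely on an astronomically large $D$ to make the product dominate; the framing described above instead takes $p$ to be non-negligible (say, on the order of $10^{-2}$ or more), so the case for prioritizing AI safety doesn't hinge on extreme values of $D$.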
But arguably the scenarios in which AGI becomes a catastrophic threat rely on a conjunction of several improbable assumptions. One of these is that general "intelligence", in the sense of a capacity to achieve goals on a global scale (rather than a capacity merely to solve problems easily representable within e.g. a Markov decision process), is something that computers can develop without a long process of real-world t…