AI safety has become a major cause area in EA, and so I'm curious how much "due diligence" on it has been done by the EA community as a whole. Obviously there have been many in-person discussions, but it's very difficult to evaluate whether these contain new or high-quality content. Probably a better metric is how much work has been done which:
1. Is publicly available;
2. Engages in detail with core arguments for why AI might be dangerous (type A), OR tries to evaluate the credibility of the arguments without directly engaging with them (type B);
3. Was motivated or instigated by EA.
I'm wary of focusing too much on credit assignment, but it seems important to be able to answer a question like "if EA hadn't ever formed, to what extent would it have been harder for an impartial observer in 2019 to evaluate whether working on AI safety is important?" The clearest evidence would be if there were much relevant work produced by people who were employed at EA orgs, funded by EA grants, or convinced to work on AI safety through their involvement with EA. Some such work comes to mind, and I've listed it below; what am I missing?
Type A work which meets my criteria above:
- A lot of writing by Holden Karnofsky
- A lot of writing by Paul Christiano
- This sequence by Rohin Shah
- These posts by Jeff Kaufman
- This agenda by Allan Dafoe
- This report by Tom Sittler
Type A work which only partially meets criterion 3 (or which I'm uncertain about):
- These two articles by Luke Muehlhauser
- This report by Eric Drexler
- This blog by Ben Hoffman
- AI Impacts
Type B work which meets my criteria above:
- (None that I'm aware of)
Things which don't meet those criteria:
- This 80,000 Hours report (which mentions the arguments, but doesn't thoroughly evaluate them)
- Superintelligence
- The AI Foom debate
Edited to add: Wei Dai asked why I didn't count Nick Bostrom as "part of EA", and I wrote quite a long answer which explains the motivations behind this question much better than my original post did. So I've copied most of it below:
The three questions I am ultimately trying to answer are: a) how valuable is it to build up the EA movement? b) how much should I update when I learn that a given belief is a consensus in EA? and c) how much evidence do the opinions of other people provide in favour of AI safety being important?
To answer the first question, assuming that analysis of AI safety as a cause area is valuable, I should focus on contributions by people who were motivated or instigated by the EA movement itself. Here Nick doesn't count (except insofar as EA made his book come out sooner or better).
To answer the second question, it helps to know whether the focus on AI safety in EA came about because many people did comprehensive due diligence and shared their findings, or whether there wasn't much investigation and the ubiquity of the belief was driven via an information cascade. For this purpose, I should count work by people to the extent that they or people like them are likely to critically investigate other beliefs that are or will become widespread in EA. Being motivated to investigate AI safety by membership in the EA movement is the best evidence, but for the purpose of answering this question I probably should have used "motivated by the EA movement or motivated by very similar things to what EAs are motivated by", and should partially count Nick.
To answer the third question, it helps to know whether the people who have become convinced that AI safety is important are a relatively homogenous group who might all have highly correlated biases and hidden motivations, or whether a wide range of people have become convinced. For this purpose, I should count work by people to the extent that they are dissimilar to the transhumanists and rationalists who came up with the original safety arguments, and also to the extent that they rederived the arguments for themselves rather than being influenced by the existing arguments. Here EAs who started off not being inclined towards transhumanism or rationalism at all count the most, and Nick counts very little.
Interesting posts. Yet I don't see how they show that what I described is unlikely. In particular, I don't see how "easy coordination" is in tension with what I wrote.
To clarify: competition that determines outcomes can readily happen within a framework of shared goals, as something instrumental to an overarching final goal. If the final goal is, say, to maximize economic growth (or if that is an important instrumental goal), this would likely lead to specialization and competition among various agents that try out different things, and which, by the nature of specialization, have imperfect information about what other agents know (not having such specialization would be much less efficient). In this respect, a future AI economy would resemble ours more than far-mode thinking suggests (though this does not necessarily contradict your claim about easier coordination).
One reason I consider the scenario I described likely is that I expect future software systems to consist of a multitude of specialized systems with quite different designs, even in the presence of AGI, rather than almost everything being done by copies of some singular AGI system. This "one system will take over everything" picture strikes me as far-mode thinking, and unlikely not least given the history of technology and economic growth. I've outlined my view on this in the following e-book (though it's a bit dated in some ways): https://www.smashwords.com/books/view/655938 (short summary and review by Kaj Sotala: https://kajsotala.fi/2017/01/disjunctive-ai-scenarios-individual-or-collective-takeoff/)