Thanks for stating this publicly here, Will!
Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If it did, most of the key texts in x-risk would be poorly argued.
Importantly, breadth was necessary to make this critique. There are simply many interrelated matters worth critical analysis.
> Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.
As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed. We do not argue against the TUA, but highlight assumptions that may be incorrect or smuggle in values. Interestingly, it's hard to see how you can believe the piece is both a polemic and insufficiently direct in its critique of the TUA. Those two criticisms are in tension.
If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).
You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technological development?
Of course there are arguments for it, some of which are discussed on the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.
> I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.
Then it wouldn't be a critique of the TUA. It would be a piece on differential technological development or hazard-centrism.
This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He both said that we should zoom in and focus on one section, and said that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and only share the commonality of making sure we write a paper that isn't a critique.
We see many remaining problems in x-risk. This paper is an attempt to list those issues and point out their weaknesses and areas for improvement. It should be read similarly to a research agenda.
The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential technological development, etc. We highlight that the existing evidence base and arguments for them are weak.
We do not position ourselves in many cases, because we believe they require further detailed work and deliberation. In that sense I agree with you that we're covering too much, but only if the goal were to present clear positions on all these points. Since this was not the goal, I think it's fine to list many remaining questions and point out that they are indeed still questions that require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let's get on with the science.
Seán also called the paper a polemic several times (by definition: a strong verbal or written attack; hostile and critical). This is not necessarily an insult (Orwell's Animal Farm is considered a polemic against totalitarianism), but I'm guessing it's not meant in that way.
We are somewhat disappointed that one of the most upvoted responses to our piece on the forum is so vague and unhelpful. We would expect a community with such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.
Finally, the 'speaking abstractly' remark about funding. It is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawn funding. Again, this does not signal anything positive about the epistemics, or just the sheer civility, of the community.
fwiw I was not offended at all.
This is a great comment, thank you for writing it. I agree: I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil; obviously the broader conversation about funding should not focus only on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I would caution against centring the discussion only on the question of whether or not OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over the funding and research positions of more junior researchers in x-risk, thought this was the case and acted on that assumption. It may well be that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?
To clarify: all the reviewers we approached gave critical feedback, and we incorporated it and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was 'lacking in rigor', or that it wasn't 'loving enough', were EAs who were emotionally affected by reading the paper. My point here is that in an idealised, objective review process it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it's not surprising that a mixture of power, community, and research can produce biased scholarship.
Very happy to have a private chat Aaron!
Very happy to have a private chat and tell you about our experience then.
Here's a Q&A answering some of the questions raised by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of them will hopefully find a reply here.)
"Do you not think we should work on x-risk?"
"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"
"Do you hate longtermism?"
"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"
"Do you think longtermism is by nature techno-utopian?"
"Who is your target audience?"
"Do you think we should abandon the TUA entirely?"
"Why didn’t you cite paper X?"
"Why didn’t you cite blogpost X? "
"You critique we need to solve problem X but Y has already written a paper on X!"
"Why is your language so harsh? Or: Your language should have been more harsh!"
The paper never spoke about getting rid of experts or replacing experts with citizens. So no.
Many countries now run citizen assemblies on climate change, which I'm sure you're aware of. They do not aim to replace the role of IPCC.
EA or the field of existential risk cannot be equated with the IPCC.
To your second point: no, this does not follow at all. Democracy as a procedure is not to be equated with (and thus limited to) governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the paper's last section, which focuses on democracy.
That sounds cool. Happy to see that some of this work is going on, and glad to hear that you're specifically thinking about tail-risk climate change too. Looking at fungi as a food source is obviously only one of the dimensions of use I describe as relevant here, and in ALLFED's case, cost of production is surely only one relevant dimension from a longtermist perspective. In general, I'm happy to see that some of your interventions do seem to consider fixing existing vulnerabilities as much as treating the symptoms of a catastrophe. I'll go through the report you have online (2019 is the most recent one?) to check who you're already in contact with and whether I can recommend any other experts it might be useful for you to reach out to.
On a separate note, and because it's not in the Q&A on your website: are you indeed fully funded by EA orgs (BERI, EA Lottery, as per the report)? I found it surprising that, given your admirable attempts to connect with the relevant ecosystem of organisations, you would not have funding from other sources. Is this because you didn't try, or because it seems no one except EAs wants to grant money for the work you're trying to do?
Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.