CarlaZoeC


Democratising Risk - or how EA deals with critics

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.

Democratising Risk - or how EA deals with critics

Thanks for stating this publicly here, Will!

Democratising Risk - or how EA deals with critics

Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case, most of the key texts in x-risk would be poorly argued.

Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis. 

Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.

As David highlights in his response, we do not argue against the TUA; rather, we point out the unanswered questions we observed and highlight assumptions that may be incorrect or that smuggle in values. Interestingly, it is hard to see how you can believe the piece is both a polemic and yet insufficiently direct in its critique of the TUA. Those two criticisms are in tension.

If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research). 

You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?

Of course there are arguments for it, some of which are discussed on the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not to the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.

I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.

Then it wouldn't be a critique of the TUA. It would be a piece on differential tech development or hazard-centrism. 

This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He said both that we should zoom in and focus on one section, and that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and share only the commonality of making sure we write a paper that isn't a critique.

We see many remaining problems in x-risk research. This paper is an attempt to list those issues and point out weaknesses and areas for improvement. It should be read like a research agenda.

The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential tech development, etc. We highlight that the existing evidence base and arguments for them are weak.

In many cases we do not position ourselves, because we believe the questions require further detailed work and deliberation. In that sense I agree with you that we're covering too much - but only if the goal were to present clear positions on all these points. Since this was not the goal, I think it's fine to list many remaining questions and point out that they are indeed still questions that require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let's get on with the science.

Seán also called the paper a polemic several times (by definition: a strong written or verbal attack; hostile, critical). This is not necessarily an insult (Orwell's Animal Farm is considered a polemic against totalitarianism), but I'm guessing it was not meant in that way.

We are somewhat disappointed that one of the most upvoted responses to our piece on the forum is so vague and unhelpful. We would expect a community with such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.

Finally, the 'speaking abstractly' about funding: it is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawing funding. Again, this does not signal anything positive about the epistemics, or even the sheer civility, of the community.

Democratising Risk - or how EA deals with critics

This is a great comment, thank you for writing it. I agree: I too have not seen sufficient evidence that would warrant the reaction of these senior scholars. We tried to get evidence from them and to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus only on OpenPhil; obviously the broader conversation about funding should not centre on them alone) in fact suggests the opposite.

I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening. 

I would caution against centering the discussion only on the question of whether OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over the funding and research positions of more junior researchers in x-risk, thought this was the case and acted on that assumption. It may well be that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if it would not. How can that be prevented?

To clarify: all the reviewers we approached gave critical feedback, and we incorporated and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was 'lacking in rigor' or that it wasn't 'loving enough', were EAs who were emotionally affected by reading the paper. My point here is that in an idealised, objective review process it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it's also not surprising that this mixture of power, community, and research can produce biased scholarship.

Very happy to have a private chat, Aaron!

Democratising Risk - or how EA deals with critics

Very happy to have a private chat and tell you about our experience then. 

Democratising Risk - or how EA deals with critics

Here's a Q&A that answers some of the questions raised by reviewers of early drafts. (I planned to post it right away, but your comments came in so soon! Some of the comments hopefully find a reply here.)

"Do you not think we should work on x-risk?"

  • Of course we should work on x-risk

 

"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"

  • No. It’s not really them we’re criticising if at all. Everyone should be allowed to put out their ideas. 
  • But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.

 

"Do you hate longtermism?"

  • No. We are both longtermists (probs just not the techno-utopian kind).

 

"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"

  • It doesn't matter whether Nick Bostrom merely speculates about or actually wants to implement global surveillance. With respect to what we talk about (the justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There's some hedging in the article, but…
  • He published in a policy journal, with an opening 'policy implication' box.
  • He published an outreach article about it in Aeon, which ends with the sentence: "If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities."
  • In public-facing interviews, such as with Sam Harris and on TED, the idea of 'turnkey totalitarianism' was made the centrepiece. It was not framed as one hypothetical, possible future solution to a philosophical thought experiment.
  • The VWH was also published as a German book (why, I don't know…).
  • Seriously, if we're not allowed to criticise those choices, what are we allowed to criticise?

 

"Do you think longtermism is by nature techno-utopian?"

  • In theory, no. Intergenerational justice is an old idea. Clearly there are versions of longtermism that do not have to rely on the current set of assumptions. Longtermist thinking is a good idea. 
  • In practice, most longtermists tend to operate firmly under the TUA. This is seen in the visions they present of the future, the value placed on continued technological and economic growth, etc.


"Who is your target audience?"

  • Junior researchers who want to do something new and exciting in x-risk
  • External academics who have thus far felt repelled by the TUA framing of x-risk and might want to come into the field and bring in their own perspective
  • Anyone who really loves the TUA and wants to expose themselves to a different view 
  • Anyone who doubted the existing approaches but could not quite put a finger on why
  • Our audience is not: philosophers working on x-risk who are thinking about these issues day and night and who are well aware of some of the problems we raise.

 

"Do you think we should abandon the TUA entirely?"

  • No. Especially those who feel personally compelled to work on the TUA, or who have built expertise in it, are obviously free to work on it.
  • We just shouldn’t pressure everyone else to do that too.

 

"Why didn’t you cite paper X?"

  • Sorry, we probably missed it. We’re covering an enormous amount in this paper. 

 

"Why didn’t you cite blogpost X? "

  • We constrained our literature search to papers that have the ambition to get through academic peer review. We also don't read that many blog posts. That said, we appreciate that some people have raised concerns similar to ours on Twitter and on blogs. We don't think this renders a more formal listing of the concerns useless.

 

"You critique we need to solve problem X but Y has already written a paper on X!"

  • Great! Then we support Y having written that paper! We invite more people to do what Y did. Do you think that was enough and the problem is now solved? Do you think there are no valuable alternative papers to be written, such that it's ridiculous to say we need more work on X?

 

"Why is your language so harsh? Or: Your language should have been more harsh!"

  • Believe it or not, we got both perspectives: for some people the paper beats around the bush too much; for others it feels like a hostile attack. We could not please them all.
  • Maybe ask yourself what makes you, as a reader, fall into one of these categories.
Democratising Risk - or how EA deals with critics

The paper never spoke about getting rid of experts or replacing them with citizens. So no.

Many countries now run citizens' assemblies on climate change, as I'm sure you're aware. They do not aim to replace the role of the IPCC.

EA or the field of existential risk cannot be equated with the IPCC. 

To your second point: no, this does not follow at all. Democracy as a procedure is not to be equated with (and thus limited to) governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the paper's last section, which focusses on democracy.

Mushroom Thoughts on Existential Risk. No Magic.

That sounds cool. Happy to see that some of this work is going on, and glad to hear that you're specifically thinking about tail-risk climate change too. Looking at fungi as a food source is obviously only one of the dimensions of use I describe as relevant here, and in ALLFED's case, cost of production is surely only one relevant dimension from a longtermist perspective. In general, I'm happy to see that some of your interventions do seem to consider fixing existing vulnerabilities as much as treating the symptoms of a catastrophe. I'll go through the report you have online (2019 is the most recent one?) to check who you're already in contact with and whether I can recommend any other experts it might be useful for you to reach out to.

On a separate note, and because it's not in the Q&A on your website: are you indeed fully funded by EA orgs (BERI and the EA Lottery, as per the report)? I found it surprising that, given your admirable attempts to connect with the relevant ecosystem of organisations, you would not have funding from other sources. Is this because you didn't try, or because it seems no one except EAs wants to grant money for the work you're trying to do?
