[ Question ]

What are examples of EA work being reviewed by non-EA researchers?

by Aaron Gertler · 1 min read · 24th Mar 2020 · 15 comments


Criticism (EA Orgs) · Criticism (EA Movement)

I sometimes see people claim that EA research tends to be low-quality or "not taken seriously" by scholars in relevant fields.

There are cases where this clearly isn't true (e.g. AI alignment questions seem to have at least split the scholarly community, with a lot of people on both sides).

But I worry that, as a non-scientist, I'm living in a bubble where I don't see strong critique of GiveWell's methodology, FHI's policy papers, etc.

Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster? 

*I'm using this word to mean a combination of "regarded highly within their field" and "regarded reasonably well by EAs who care about their field"; if you're not sure whether someone counts, please share the example anyway!

Specifically, I'm looking for reviews of EA research that doesn't go through peer-reviewed channels, or that gets published in journals obscure enough to keep it from being seen as "mainstream" within its field. Examples include:

  • Eric Drexler's Comprehensive AI Services model
  • Wild animal suffering (especially attempts to estimate its magnitude or compare it to human suffering on a moral basis)
  • GiveWell's cost-effectiveness models
  • X-risk policy work from FHI, CSER, or other longtermist research orgs
  • Recent EA discussion of COVID-19

An example of feedback that fits what I'm looking for:

  • Judea Pearl, a renowned computer scientist, reviewing Stuart Russell's Human Compatible
    • "Human Compatible made me a convert to Russell's concerns with our ability to control our upcoming creation -- super-intelligent machines. Unlike outside alarmists and futurists, Russell is a leading authority on AI. His new book will educate the public about AI more than any book I can think of, and is a delightful and uplifting read." 



6 Answers

Here's a thread in which a World Bank economist critiques GiveWell on research/publication methods. (GiveWell responds here.)

In addition to Will MacAskill's critique of functional decision theory (which originated at MIRI and is intended to be relevant to AI alignment), there's this write-up by someone who refereed FDT's submission to a philosophy journal:

My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy.

Probably more informal than you want, but here's a Facebook thread debating AI safety involving some of the biggest names in AI.

As someone who has sometimes made a similar claim, I find that a lot of assessment of others' work, not just that of EAs, tends to be informal, off-the-record, and discussion-based. In fact, I think EAs quite frequently miss out on a wealth of knowledge because of a widespread, often insistent requirement that knowledge be citable in order to be meaningful. There are very strong reasons to prefer and put greater weight on citable knowledge, but there is A LOT of intelligence that people don't share in recorded formats, for reasons ranging from the effort involved to reputational risk.

So I believe some of the lack of answers here may be due to critiques of EA work being shared verbally, for example, rather than more formally. Personally, I've discussed EA work with at least 4 quite prominent economists, at least 2 of whom I believe had thoroughly reviewed some significant aspect of EA research and perspectives, but I haven't really shared these accounts. Sharing them properly would likely require more of these economists' time and attention than I can easily get, in order to ensure both a full and accurate explanation and a sufficient guarantee of anonymity.

The “Worm Wars” could arguably be an example (though the contentious research was not just from the EA community).

I've been involved (in some capacity) with most of the publications that have come out of the Centre for the Governance of AI at FHI over the past 1.5 years. I'd say that for most of our research, there is someone outside the EA community involved. Reasonably often, one or more of a piece's authors wouldn't identify as part of the EA community. As for input to the work: if a piece is academically published, we get input from reviewers. We also seek additional input on all our work from people we think will be able to provide it, which often includes academics we know in relevant fields. (This, of course, leads to a bit of a selection effect.)