
I sometimes see people claim that EA research tends to be low-quality or "not taken seriously" by scholars in relevant fields.

There are cases where this clearly isn't true (e.g. AI alignment questions seem to have at least split the scholarly community, with a lot of people on both sides).

But I worry that, as a non-scientist, I'm living in a bubble where I don't see strong critique of GiveWell's methodology, FHI's policy papers, etc.

Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster? 

*I'm using this word to mean a combination of "regarded highly within their field" and "regarded reasonably well by EAs who care about their field"; if you're not sure whether someone counts, please share the example anyway!

Specifically, I'm looking for reviews of EA research that doesn't go through peer-reviewed channels, or that gets published in journals so obscure that it never becomes "mainstream" within its field. Examples include:

  • Eric Drexler's Comprehensive AI Services model
  • Wild animal suffering (especially attempts to estimate its magnitude or compare it to human suffering on a moral basis)
  • GiveWell's cost-effectiveness models
  • X-risk policy work from FHI, CSER, or other longtermist research orgs
  • Recent EA discussion of COVID-19

An example of feedback that fits what I'm looking for:

  • Judea Pearl, a renowned computer scientist, reviewing Stuart Russell's Human Compatible
    • "Human Compatible made me a convert to Russell's concerns with our ability to control our upcoming creation -- super-intelligent machines. Unlike outside alarmists and futurists, Russell is a leading authority on AI. His new book will educate the public about AI more than any book I can think of, and is a delightful and uplifting read." 

6 Answers

Here's a thread in which a World Bank economist critiques GiveWell on research/publication methods. (GiveWell responds here.)

In addition to Will MacAskill's critique of functional decision theory (MIRI-originated and intended to be relevant for AI alignment), there's this write-up by someone who refereed FDT's submission to a philosophy journal:

My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy.

Since then, the related paper Cheating Death in Damascus has apparently been accepted by The Journal of Philosophy, though it doesn't seem to be published yet.

The Wolfgang Schwarz writeup is exactly the sort of thing I'm looking for; thank you! 

Will's critique is also a reasonable fit; I was hoping to avoid "EA people reviewing other EA people," but he seems to approach the topic in his capacity as a philosopher and shows no sign of soft-pedaling his critique.

Probably more informal than you want, but here's a Facebook thread debating AI safety involving some of the biggest names in AI.

As someone who has sometimes made a similar claim, I find that a lot of assessment of others' work, not just that of EAs, tends to be informal, off-the-record, and discussion-based. In fact, I think EAs frequently miss out on a wealth of knowledge due to a widespread and often insistent requirement that knowledge be citable in order to be meaningful. There are very strong reasons to greatly prefer and put greater weight on citable knowledge, but there is A LOT of intelligence that people do not share in recorded formats, for a variety of reasons such as effort and reputational risk.

So I believe the lack of answers here may be partly due to critiques of EA work being shared verbally, rather than more formally. Personally, I've discussed EA work with at least 4 quite prominent economists, at least 2 of whom I believed had thoroughly reviewed some significant aspect of EA research and perspective, but I have not really shared these accounts. Making them shareable would likely require more of these economists' time and attention than I can easily get, in order to ensure I gave a full and proper explanation with a sufficient guarantee of anonymity.

Do you feel comfortable giving some general impression of what the economists' views were (e.g. "one favorable, two mixed, one unfavorable")? If not, that's understandable!

I would expect EA to insist on citable knowledge less strongly than other academic fields do; do you think the insistence is actually stronger? (Or are most academic fields wrong, and EA isn't an exception?)

The “Worm Wars” could arguably be an example (though the contentious research was not just from the EA community)

How much of that research was from the EA community?

I've been involved (in some capacity) with most of the publications that have come out of the Centre for the Governance of AI at FHI over the past 1.5 years. I'd say that for most of our research, someone outside the EA community is involved. Reasonably often, one or more of the authors of a piece wouldn't identify as part of the EA community. As for input to the work: if it's academically published, we get input from reviewers. We also seek additional input on all our work from folks we think will be able to provide useful input, which often includes academics we know in relevant fields. (This of course leads to a bit of a selection effect.)

Likewise for publications at CSER. I'd add that for policy work, written policy submissions often provide summaries, key takeaways, and action-relevant points based on 'primary' work done by the centre and its collaborators, where the primary work is peer-reviewed.

We've received informal/private feedback from people in policy/government roles at various points that our submissions and presentations have been particularly useful or influential. And we'll have some confidential written testimony to support this for a few examples of... (read more)

6 Comments

Hopefully Wild Animal Initiative will have more answers for you soon! We recently assembled an Academic Advisory Panel in part to solicit feedback on our publications when they don't go through a formal peer-review process.

We're still growing the panel, so please let us know if you or anyone you know might be interested in joining. https://www.wildanimalinitiative.org/advisory-panel

I'd love a follow-up on this. Particularly interested in how this might offer lessons for Unjournal.

(The link above is dead.)

Good question!

> Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster?

Presumably you'd also be interested in examples where such scholars reviewed EA research and came to a conclusion in between high praise and finding it lackluster? I expect most academics find a lot of work in general to be somewhere around just "pretty good".

That's also good to see, and I'd appreciate examples! But I think it's a bit less interesting/useful to me because it's what I would expect in general.

I see a lot of people claiming that EA research is better than the norm, and others claiming it's worse, so I'm curious which opinion is actually more popular among scholars (vs. the neutral "yeah, this is fine, that's why the journal accepted it" reaction, which I'd expect to be more common than either).

Ah, that makes sense. I was thinking more about the detailed points reviewers might make about specifics from particular EA research, rather than getting data on the general quality of EA research to inform how seriously to take other such research (which also seems very/more valuable).

Data on "general quality" was my goal here, yes, albeit split up by source (since "EA research" includes everything from published journal articles to informal blog posts). 

Specifics are valuable too, but in my work, I often have to decide which recent research to share, and how widely; I don't expect experts to weigh in very quickly, but a general sense of quality from different sources may help me make better judgments around what to share.
