The GiveWell 'Change Our Mind' contest has generated some interesting critiques. Some of these can be cashed out into alternative estimates of a particular intervention's effect size or cost-effectiveness. I think it would be cool for someone to collate and compare these differing estimates in a forest plot, akin to a meta-analysis.
GiveWell's own estimates should obviously be included. If GiveWell has published information on how its staff differ in their estimates, that goes in too, as do any other external, non-contest estimates.
Normal meta-analyses weight studies by sample size (more precisely, by inverse variance). This plot couldn't do that. Maybe an alternative is to weight by 'quality'. There's no clean way to do this, so here are a few approaches to try as a cluster. First, no weighting. Second, weight by the prizes GiveWell itself awards (most informative, but only 3 entries are rewarded according to quality). Third, weight by EA Forum post karma (obvious biases, but it is some proxy of community support). Fourth, weight by the meta-analysis authors' own judgment of quality (useful and practical, but labour-intensive and subjective). And I'm sure there are more someone could think up.
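To make this concrete, here is a minimal sketch (Python with matplotlib) of how the weighting schemes above could be compared and plotted. Every entry, ratio and weight below is a made-up placeholder rather than real contest data, and the pooled figure is just a weighted geometric mean of the ratios.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each cashed-out critique is reduced to a ratio:
# (critique's cost-effectiveness estimate) / (GiveWell's published estimate).
# All values here are illustrative placeholders.
entries = [
    # (label, ratio, prize_weight, forum_karma, reviewer_quality_score)
    ("Critique A", 0.70, 1.0, 120, 4),
    ("Critique B", 0.85, 0.5,  60, 3),
    ("Critique C", 1.10, 0.0,  30, 2),
    ("Critique D", 0.60, 0.0,  90, 5),
]
labels = [e[0] for e in entries]
ratios = np.array([e[1] for e in entries])

def normalise(w):
    """Scale weights to sum to 1; fall back to equal weights if all are zero."""
    w = np.asarray(w, dtype=float)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1 / len(w))

schemes = {
    "Unweighted":        normalise(np.ones(len(entries))),
    "Prize-weighted":    normalise([e[2] for e in entries]),
    "Karma-weighted":    normalise([e[3] for e in entries]),
    "Reviewer-weighted": normalise([e[4] for e in entries]),
}

# Pooled estimate under each scheme: a weighted geometric mean,
# since the ratios multiply rather than add.
for name, w in schemes.items():
    pooled = np.exp(np.sum(w * np.log(ratios)))
    print(f"{name:18s} pooled ratio = {pooled:.2f}")

# Crude forest-style plot: one point per critique, with GiveWell's own
# estimate marked as the ratio = 1 reference line.
fig, ax = plt.subplots(figsize=(6, 3))
ax.scatter(ratios, range(len(entries)))
ax.axvline(1.0, linestyle="--", label="GiveWell's estimate (ratio = 1)")
ax.set_yticks(range(len(entries)), labels)
ax.set_xlabel("Critique's estimate / GiveWell's estimate")
ax.legend()
plt.tight_layout()
plt.show()
```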
This could give a useful overview of the general pattern and direction of critique for people who don't plan to read most (or any) of the entries.
This would be most useful for critiques of a specific intervention, but many interventions will not have enough relevant critiques to aggregate usefully. Still, an overall effect size across everything would be interesting - e.g. according to the contest, does GiveWell generally underestimate its effect sizes? Overestimate them? On balance get them right? What about for particular cause areas?
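Once each cashed-out critique is expressed as a ratio against GiveWell's own estimate, that overall question is cheap to answer. A sketch, again using placeholder data rather than real contest results:

```python
import numpy as np
import pandas as pd

# Placeholder data: one row per cashed-out critique.
critiques = pd.DataFrame({
    "intervention": ["AMF", "AMF", "Deworming", "Vitamin A"],
    "cause_area":   ["Malaria", "Malaria", "Deworming", "Nutrition"],
    "ratio":        [0.7, 0.9, 0.5, 1.2],  # critique estimate / GiveWell estimate
})

# Geometric mean of the ratios: < 1 suggests the contest thinks GiveWell
# overestimates effect sizes, > 1 that it underestimates them.
overall = np.exp(np.log(critiques["ratio"]).mean())
print(f"Overall ratio: {overall:.2f}")

# The same question broken out by cause area.
by_area = critiques.groupby("cause_area")["ratio"].apply(
    lambda r: np.exp(np.log(r).mean())
)
print(by_area)
```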
There are plenty of good critiques in the contest that can't be cashed out into a neat effect size adjustment, and for these there is no substitute for reading them directly (or at least a summary). But for the critiques that can be cashed out, a 'meta-analysis' might be a useful tool.
What do you think?