Hello! I'm Toni, ACE's new director of research. I've worked in ACE's research department for two and a half years, but I just stepped into the director role on July 31.
On behalf of ACE, I'd like to thank John Halstead for engaging so thoughtfully with our work and for his dedication to improving the field of animal advocacy research. We value honest feedback, which is precisely why, several months ago, we invited Mr. Halstead and six other individuals to act as consultants during our charity evaluation process this year. I'd also like to note my appreciation that Mr. Halstead shared his post with us prior to publishing it, which gave us the opportunity to consider his points and to draft this response for simultaneous publication.
ACE would like to take this opportunity for a public exchange about our work to:
- explain our position on our older intervention research,
- clarify the relationship between our cost-effectiveness estimates (CEEs) and our "all things considered" point of view,
- clarify the relationship between our intervention reports and our charity reviews, and
- outline some of our research priorities for next year.
My goal in this piece is not to address every point that Mr. Halstead raises, but rather to address what I believe is the underlying issue. It seems to me that Mr. Halstead's critique is largely based on: (i) a fundamental disagreement with ACE about the role that our CEEs should play in our decisions, and (ii) a misunderstanding about the role that our CEEs do play in our decisions. While we believe that our CEEs should (and do) play a very small role in our decisions, Mr. Halstead seems to believe that our CEEs should (and do) play a central role in our decisions. As a result, Mr. Halstead understandably has a very different idea than we do about which areas of our research we should prioritize.
Our Position on our Older Intervention Research
We are aware of some limitations of our older intervention research, and we agree with Mr. Halstead that ACE's 2014-2016 reports on corporate outreach, undercover investigations, humane education, and online ads are not up to our current standards. We recognize, for instance, that our previous use of "pessimistic," "realistic," and "optimistic" labels for our quantitative estimates was not ideal, and we did not sufficiently explain how we made or used those estimates. (In early 2017, we replaced our labeling strategy with the use of 90% subjective confidence intervals.) We are also aware that new research has become available since the publication of some of our older intervention reports. As such, they are in need of updating.
One question that's been on my mind since I became ACE's director of research (about five weeks ago) is whether or not we should archive some of our older intervention reports until we are able to update them. Since ACE's research department is currently hard at work on our charity reviews, my initial plan was to wait until our reviews are published in November before making any major decisions about our intervention work. However, Mr. Halstead's post has led us to consider the value of our older reports sooner than planned, and we've expedited our decision. We will be archiving our corporate outreach, undercover investigation, humane education, and online ads reports on September 14. This will allow interested readers to reference them easily for the next week—though even when they are archived, the reports will remain available on our site via the search tool.
We were glad to learn that Mr. Halstead agrees that our 2017 leafleting intervention report was of a good standard, and I'd add that our 2017 protest intervention report is of a similar standard. These are our two most recent intervention reports, and they both utilize the new intervention research methodology that we officially introduced last November. Our new methodology includes a more systematic literature search, facilitates more transparent communication about our reasoning, and allows for more rigorous statistical analysis, when appropriate. We will ensure that all of our future intervention work is as rigorous as our leafleting and protest reports, if not more so.
The Role of Cost-Effectiveness Estimates in our Work
Mr. Halstead is correct that we have multiple platforms through which we present our views. He divides these platforms into three categories: (i) intervention reports, (ii) CEEs, and (iii) the "all things considered" views expressed in our charity reviews. In fact—in our intervention reports as well as in our charity reviews—we present both (i) CEEs and (ii) "all things considered" views.
Mr. Halstead is also correct that the views expressed on these different platforms sometimes differ. "For example," he writes, "the view expressed in the intervention report on investigations is different to the view expressed in the cost-effectiveness analyses of investigations." Actually, I'd suggest that the views expressed in our intervention reports are often quite different from the views expressed in our charity reviews. Additionally—in both our intervention reports and our charity reviews—our cost-effectiveness estimates are often quite different from our overall (or "all things considered") views. These differences are intentional and justified, as I'll explain below.
The Relationship Between the Cost-Effectiveness Estimates and the Overall Views Expressed in our Intervention Reports
Our overall views of an intervention are informed by a number of factors other than our best estimate of the average cost-effectiveness of that intervention. After all, practical decisions about whether or not to devote further resources to an intervention should be made based on the intervention's marginal cost-effectiveness, and our CEEs estimate the intervention's average cost-effectiveness.
In order to develop a sense of an intervention's marginal cost-effectiveness, we consider how its average cost-effectiveness might change over time depending on the amount of resources invested in it, its interactions with other interventions, shifts in public opinion or political context, and so on. Even if we believe an intervention is currently highly cost-effective, we might think that investing further in it would have diminishing returns. Similarly, we consider whether an intervention might be necessary for the success of the animal advocacy movement. If so, we may recommend investing further in that intervention even if it doesn't currently seem to be accomplishing many tangible benefits. And of course, there are always costs and benefits of interventions that we simply don't include in our CEEs because they can't be quantified with any helpful degree of precision, though we discuss such costs and benefits elsewhere in our reports and they do factor into our overall views.
Mr. Halstead repeatedly claims that our "all things considered" view is that most forms of grassroots advocacy have "close to zero effect." That's not our view, and it's also not how we think about whether or not to recommend interventions. As quoted, we wrote in our THL review that "there is little evidence available" for the effects of the grassroots outreach that THL conducts, such as "leafleting, online ads, and education." We also wrote that we "do not currently recommend the use of leafleting or online ads as we suspect that they are not as effective as some other means of public outreach." It does not follow from these claims that our overall view is that grassroots advocacy has close to no effect. As we explain in the preceding paragraph of THL's review, "we still think it's important for the animal movement to target some outreach toward individuals, as a shift in public attitudes could lead to greater support for new animal-friendly policies. Public outreach might even be a necessary precursor to achieving institutional change." In other words, because of the possible necessity of grassroots outreach and because of its interactions with other interventions, our overall view of grassroots outreach is distinct from our CEEs for leafleting, online ads, and humane education.
The Relationship Between the Cost-Effectiveness Estimates and Overall Views Expressed in our Charity Reviews
The CEEs included in our charity reviews are very rough estimates of a charity's average cost-effectiveness. We emphasize strongly (with bold letters) that they should not be taken as our overall view of a charity's effectiveness. Our overall view of each charity is informed by all seven of our criteria.
Mr. Halstead seems to believe that our CEEs are the most important factor in our recommendation decisions. When we reminded him in conversation about our six other criteria, he argued that we wouldn't value factors like strong leadership or track record in a charity that wasn't cost-effective, and that therefore our CEEs must play a central role in our recommendation decisions. That seems like a fair assumption to make about an effective altruist organization. Once again, though, our CEEs are estimates of the average cost-effectiveness of the charity over the past year, and we make our recommendation decisions based on our beliefs about the marginal cost-effectiveness of each charity. We consider all seven of our criteria to be largely independent indications of marginal cost-effectiveness, as I'll explain below.
Suppose we learn that the director of a charity we're evaluating has been embezzling money (though this has never happened). Even if we believe the charity has a high average cost-effectiveness in its work for animals, we might believe that donations to the charity have low marginal cost-effectiveness because the charity is about to lose its director to prison. Therefore, we consider strong leadership to be an indication of a charity's marginal cost-effectiveness independently of the charity's average cost-effectiveness. Similarly, suppose we review a charity's track record and find that it accomplishes more every year on the same budget. Even if its average cost-effectiveness is currently low, we might be optimistic about the marginal cost-effectiveness of donations to that charity. Therefore, we consider track record to be an indication of a charity's marginal cost-effectiveness independently of the charity's average cost-effectiveness.
Mr. Halstead highlights some problems with our older intervention research and concludes that: "consequently ACE's research does not provide much reason to believe that their recommended charities actually improve animal welfare." If he had said that our cost-effectiveness estimates (on their own) don't provide much reason to believe that our recommended charities actually improve animal welfare, I might have agreed with him. However, as we explain in our reviews, we put limited weight on those estimates. Our research provides many reasons to believe that our recommended charities help animals. Those reasons can be found in all seven sections of our reviews.
The Relationship Between our Intervention Research and our Charity Reviews
Mr. Halstead notes some apparent inconsistencies between our intervention research and our charity reviews. For example, he points out that: "[ACE's] view as of August 2018 is that grassroots advocacy has close to no effect, though ACE does estimate that THL's online outreach is beneficial." As mentioned, we feel this is a misrepresentation of our overall view of grassroots outreach. However, the point I'd like to make now is that there are sometimes good reasons why our overall view of an intervention might differ from our assessment of that intervention as it is implemented by a particular charity.
Any given intervention can vary widely in its cost-effectiveness depending on how it is implemented. When we model the cost-effectiveness of an intervention, we have to make certain assumptions about how that intervention is implemented. For example, in our protest report, we modeled the cost-effectiveness of the types of protests implemented by THL. If we were writing a charity review for a group like Anonymous for the Voiceless, which uses a very different kind of protest, our model of the cost-effectiveness of their protests might look very different from the model in our protest report.
Readers may be wondering: what is the value of our intervention reports if they aren't necessarily the basis for the CEEs in our charity reviews? Our answer is two-fold. First, our overall views of each intervention do play some role in our reviews, particularly in Criterion 2 ("Does the charity engage in programs that seem likely to be highly impactful?"). Second, our new methodology for our intervention research was designed to make our intervention reports useful in other ways. For instance, we examine factors that might make an intervention more or less cost-effective, which we hope will be of use to other charities.
Further Thoughts on the Role of Cost-Effectiveness Estimates in our Charity Reviews
A discussion of the role of our CEEs in our charity reviews may appear to be tangential to the points that Mr. Halstead has raised. Because Mr. Halstead believes that our CEEs both should be and are the most important factor in our recommendation decisions, he may not perceive there to be any problem with the role that our CEEs play in our reviews and therefore didn't discuss it in his post. However, the role of our CEEs in our reviews is actually a key point in this exchange. It is the implicit assumption that allows Mr. Halstead to infer from the flaws in our corporate outreach report that our charity evaluation research "does not provide much reason to believe that their recommended charities actually improve animal welfare."
Mr. Halstead's chain of reasoning seems to be:
1. ACE's corporate outreach intervention report is flawed.
2. Corporate outreach accounts for 90% of THL's and Animal Equality's CEEs.
3. Each charity's CEE is the primary piece of evidence that the charity improves animal welfare.
4. ACE does not provide much reason to believe that their recommended charities improve animal welfare.
Even if we grant (1) and (2), we've now explained that premise (3) is false, and therefore Mr. Halstead's conclusion does not follow.
Our Plans for ACE's Research Department
ACE's specific research plans for next year have not yet been set; we have annual strategic planning sessions in December or January, after our charity reviews are released. However, as ACE's new director of research, I can make the following public commitments about our future work:
- We will archive our outdated intervention reports on September 14, 2018.
- After our reviews are published in November (and we therefore have more time), we will consider whether we should add any of the problems raised by Mr. Halstead to our public mistakes page.
- Our future intervention reports will meet or exceed the standards set by our protest and leafleting reports.
- We will continue working to clearly describe the role that CEEs play in our charity reviews, since this has been a recurring source of confusion in our community.
- In our 2018 reviews, we are no longer using our online ads report in our CEEs.
- We will make every effort to keep our readers apprised of our most current views on our research.
Updating our corporate outreach report is a particularly high priority for us, and Mr. Halstead is correct that this was also true in 2017. I spent the majority of my time this year working on an updated report, but my priorities shifted when I moved into my new position. When our charity reviews are complete, I will either finish the report myself or pass it on to another team member.
We are considering further updating our intervention research methodology by breaking our reports into smaller pieces that can be published individually. That way, we would be able to publish or update some pieces of the reports more quickly than we are currently able to publish or update full reports.
As Mr. Halstead points out, we have not yet conducted much original research on the impact of various welfare reforms. We've generally left this work to other groups. However, we are considering doing more welfare research in the future. In fact, we have a report on fish welfare that is currently being copy edited for publication. One of our goals is to better anticipate which welfare reforms charities will pursue each year so that we can research the effects of those reforms independently and before we evaluate the charities, rather than by relying on others' work while we evaluate charities. We think this will both improve our charity evaluations and allow us to be more useful to charities that are considering which reforms to pursue.
Comments on Some Miscellaneous Claims
"Corporate outreach accounts for the majority of the modelled impact of both charities in the cost-effectiveness analyses of THL and Animal Equality: for THL, ~90% of the modelled impact is from corporate outreach, and ~10% from online outreach; and for Animal Equality, >90% of the modelled impact is from corporate outreach."
Our team is unsure how Mr. Halstead arrived at these percentages, though we shared with him in conversation that we believe they are incorrect. We assume that when Mr. Halstead refers to the portion of the "modeled impact" of corporate outreach, he means the portion of the charity's modeled impact that is due to corporate outreach weighted by the portion of the charity's budget that supports corporate outreach. By our calculations, corporate outreach accounts for about 63% of THL's 2017 CEE and about 36% of Animal Equality's 2017 CEE.
"When I asked ACE about this in 2017, they said that the basis for the figure was another page on the impacts of media coverage on meat demand, which is not linked to or referenced at that point on the cost-effectiveness analysis of undercover investigations."
We should have made this reference much clearer, though for what it's worth, the page on the impacts of media coverage on meat demand is linked in Section V.3 and the figure Mr. Halstead is referencing appears in Section V.3.1.
"ACE does not have up to date research of sufficient quality on the welfare effects of corporate campaigns."
It's true that we have not conducted much original research on the impact of various corporate campaigns on animal welfare. However, we do not just rely on our own research when we estimate the impact of these campaigns. We also rely on relevant research produced by other groups (e.g., The Open Philanthropy Project's report on the welfare differences between cage and cage-free housing).
"ACE also does not check whether their recommended charities are genuinely causally responsible for the corporate policy successes that they claim," and "ACE does not check with third party news sources, experts or with the companies themselves on whether the claims of the charities are accurate."
We do search online for evidence in the news of each charity's achievements. The problem is: there usually is no such evidence, particularly in the field of corporate outreach. Of course, the absence of evidence of a charity's involvement in a corporate campaign is not evidence that the charity was not involved. We've also looked up corporations' press releases announcing their commitments, but these generally do not mention animal charities. (As far as I can remember, I've never seen one that does.) We have little reason to believe that a corporation could or would share detailed information about their decisions with us if we asked them. I don't know who Mr. Halstead has in mind when he mentions checking with "experts," though we've certainly spoken with many experts in corporate campaigning, if that is what he means.
We are always looking for ways to better assess charities' claims. In the meantime, when we can't corroborate a charity's claims with a third party, we are careful to state in our reviews that the charity "reports" or "claims" to have achieved X or Y, rather than that they have done so.
Edited 9/11/18: Please see this comment exchange between Avi Norowitz and me for some additional details regarding our evaluation of the extent to which charities cause corporate policy commitments. I think that the above two paragraphs generally hold, but there are some important cases where they don't. To be clear, there are a number of cases where charities are named in news coverage associated with a policy commitment, in a corporation's press release associated with a policy commitment, or in the policy commitment itself.
"My interactions with Ms Adleberg and other members of ACE's current research staff have been very positive..."
"Evaluating the impact of animal charities is generally more difficult than evaluating the impact of charities carrying out direct health interventions because evidence is sparse and much hinges on difficult questions about animal sentience."
1. We have reviewed the entire contents of his post as a team.
2. And we do grant (1), though (2) is false, as I discuss in the final section of this piece.
3. In THL's 2017 CEE, we modeled three of their six programs—corporate outreach, grassroots outreach, and online outreach—which together account for about 78% of their budget. We weight each estimate of program cost-effectiveness by the proportion of the charity's budget spent on that program. Corporate outreach accounted for an estimated 49% of THL's budget in 2017, which equates to 63% of the programs we modeled, and thus 63% of the total CEE. Following the same reasoning, corporate outreach accounted for just 36% of Animal Equality's 2017 CEE.
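To make the arithmetic concrete, here is a minimal sketch using the approximate THL figures quoted above. This is simple arithmetic only; the actual CEE is a Monte Carlo model in Guesstimate, so treat this purely as an illustration of how the weighting is derived.

```python
# Budget-weighted share of THL's 2017 CEE attributable to corporate
# outreach, using the approximate figures quoted in the text.
corporate_budget_share = 0.49  # corporate outreach as a share of THL's total budget
modeled_budget_share = 0.78    # the three modeled programs combined

# Weight applied to the corporate-outreach estimate within the CEE:
corporate_weight = corporate_budget_share / modeled_budget_share
print(f"{corporate_weight:.0%}")  # ~63% of the modeled CEE
```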
Thanks for this detailed response. I don't have much to add over what I said in my post. I have three comments.
Clarificatory question - I calculated the percentage of impact of the different interventions by multiplying the spending on the intervention by the cost-effectiveness of the intervention. Could you clarify how I should have calculated the impact? Also, I'm not sure I understand how the percentages you give can be right. You say "By our calculations, corporate outreach accounts for about 63% of THL's 2017 CEE and about 36% of Animal Equality's 2017 CEE". But you model the effect of only three interventions carried out by Animal Equality - grassroots outreach, investigations, and corporate outreach. As far as I can see from the CEE, your mean estimate of the effect of grassroots outreach and investigations is negative, i.e. they do harm. So I don't see how they could comprise 64% of the impact of Animal Equality. I would assume that similar things apply to the CEE of THL, though maybe I have misunderstood.
Clarifying my own position - I don't argue that the evidence on the impact of THL and Animal Equality comes from the CEE. My argument is that the arguments and evidence provided for the CEEs, and the views in the intervention reports and the charity reviews is not of the standard we should expect.
Corporate campaigns - it's fine to rely on the open phil report on cage-free welfare, but ACE's research doesn't tell the reader this or provide any acceptable justification for the view that corporate campaigns are beneficial.
1- Sure, happy to discuss this further. In the example we gave in footnote 3, we only used the proportional expenditure (PE) to calculate the weighting of each program's "animal years averted" (AYA) estimate (i.e., the weighting for AYA_1 = PE_1/Sum(PE_modelled)). This gives a weighting that we apply to each AYA estimate, and it is independent of the AYA estimate itself. Stopping here is not ideal; however, it is not as straightforward to use a similar method for the AYA estimates themselves, due to their distributions.
Including the mean values of the AYA estimates without the rest of their distributions introduces inconsistencies that make this approach of questionable use. If you consider example 1 in this model, we have two calculations for total AYA. They would be identical if it weren't for the distribution of the third AYA. The third AYAs would have the same result using your method of calculation, but they clearly impact the model differently (with 3a having a much larger impact on the overall result). In example 2, we have the issue of the mean being very small for one AYA. While the two distributions are of even size and have the same expenditure weighting, an estimate using the means would attribute 99% of the impact to program 2.
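The "example 2" problem above can be reproduced with a quick simulation. The distributions below are made up for illustration (they are not from ACE's models): two programs get equal expenditure weights and have equally wide AYA distributions, but because one mean sits near zero, mean-based attribution assigns nearly all of the impact to the other program.

```python
import random
import statistics

random.seed(0)
n = 100_000

# Two hypothetical programs with equal expenditure weights (0.5 each).
# Program 1's AYA distribution is wide but centered near zero; program 2's
# has the same width but a mean well above zero. Purely illustrative numbers.
aya_1 = [random.gauss(0.1, 10.0) for _ in range(n)]
aya_2 = [random.gauss(10.0, 10.0) for _ in range(n)]

# Mean-based attribution assigns nearly all of the "impact" to program 2...
share_2 = statistics.mean(aya_2) / (statistics.mean(aya_1) + statistics.mean(aya_2))
print(f"mean-based share attributed to program 2: {share_2:.0%}")

# ...even though the two programs contribute equally to the uncertainty in
# the total, since their distributions are the same width.
print(round(statistics.stdev(aya_1), 1), round(statistics.stdev(aya_2), 1))
```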
A different way of considering the impact of each part of the model is not to consider the proportional magnitude of each program but to use a sensitivity analysis (Guesstimate has one built in). This tests which parts of the model would have the biggest impact on the final result, should they be adjusted. Running this for both models indicates that the THL model is most sensitive to corporate outreach, while the Animal Equality model fluctuates between corporate and grassroots outreach, depending on how Guesstimate populates the model.
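For readers curious what such a sensitivity analysis looks like, here is a minimal sketch. The distributions and weights are made up for illustration, and the method (correlating each input's samples with the output) is only similar in spirit to Guesstimate's built-in analysis, not its exact algorithm.

```python
import random
from math import sqrt

random.seed(1)
n = 50_000

def pearson(xs, ys):
    """Plain Pearson correlation, written out to keep this self-contained."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical AYA samples for two programs (illustrative numbers only).
aya_corporate = [random.gauss(50.0, 30.0) for _ in range(n)]
aya_grassroots = [random.gauss(2.0, 5.0) for _ in range(n)]

# Budget weights roughly matching the ~63% / ~37% split discussed above.
total = [0.63 * c + 0.37 * g for c, g in zip(aya_corporate, aya_grassroots)]

# The input whose samples correlate most strongly with the output is the
# one the final result is most sensitive to.
for name, samples in [("corporate outreach", aya_corporate),
                      ("grassroots outreach", aya_grassroots)]:
    print(f"{name}: r = {pearson(samples, total):.2f}")
```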
2- That’s fair! I agree that we did not sufficiently explain all of the evidence we used in our CEEs, and I agree that our old intervention reports were not of our current standard. You did not state explicitly that the evidence for supporting THL and Animal Equality comes only from their CEEs. However, you seemed to conclude that our reviews provide only weak evidence for supporting each charity simply because our CEEs are weak evidence. My point is just that we provide a lot of other evidence, as well.
3- Agreed—we should have mentioned this! We are trying to do better this year, and we appreciate your insights as our Criterion 3 consultant : )
People at GiveWell state that they base their recommendations on four criteria: evidence of effectiveness, cost-effectiveness, room for more funding, and transparency. For ACE, as you reminded us here, there are seven criteria, including:
"6. Does the charity have strong leadership and a well-developed strategic vision? A charity that meets this criterion has leaders who seem competent and well-respected. The charity’s overall mission puts a strong emphasis on effectively reducing suffering, and the charity responds to new evidence with that goal in mind, revisiting their strategic plan regularly to ensure they stay aligned with that mission.
"7. Does the charity have a healthy culture and a sustainable structure? A charity that meets this criterion is stable and sustainable under ordinary conditions, and seems likely to survive the transition should some of the current leadership move on to other projects. The charity acts responsibly to stakeholders including staff, volunteers, donors, and others in the community. In particular, staff and volunteers develop and grow as advocates due to their relationship with the charity."
I have some questions about this.
1) Would you agree that your evaluation criteria are different from those at GiveWell? If so, do you think that one organization should update its criteria? Or is it that the criteria should be different depending on whether we consider human or non-human animals?
2) W.r. to point 6: if a charity does outstanding work, but happens to not emphasize effectively reducing suffering in its mission statement (e.g. they emphasize a proximal goal which turns out to be useful for the reduction of animal suffering), would that be a reason to downgrade its evaluation?
3) W.r. to point 7: if a charity does outstanding work, but staff and volunteers do not become advocates due to their relationship with the charity, would that be a reason to downgrade its evaluation?
1- Yes, our criteria are different from GiveWell’s. As John alluded to in his original post, our work is quite different from GiveWell’s in a number of ways. For one thing, there is generally much less evidence available about the cost-effectiveness of animal advocacy interventions than about the cost-effectiveness of direct health interventions. As a result, our models of average cost-effectiveness are much less certain than GiveWell’s, which is one reason why we rely more heavily on other indicators of marginal cost-effectiveness. It’s possible that GiveWell could also benefit from considering some of the other criteria we consider, but I’m not enough of an expert on their work to be comfortable drawing that conclusion.
2- We look for charities that emphasize effectively reducing suffering in their mission statement so that we can be confident that their future activities will still align with that goal. Suppose a charity does outstanding work influencing diet change/meat reduction, but they do it with the goal of improving human health. We would be concerned that such a charity could dramatically shift their activities if something caused their mission to be less aligned with ours (for instance, if new research suggested that meat is good for human health). This concern wouldn’t necessarily prevent us from recommending the charity, but it would factor into our decision.
3- As above, this is a concern that would factor into our decision but it wouldn’t necessarily prevent us from recommending a charity.
Thanks for the informative post, Toni!
With regard to:
I've found that the causal role of animal charities in corporate commitments is often supported by publicly available evidence. This evidence generally takes one of two forms:
1. Some corporations do name animal charities in their press releases. This often occurs when the charity secured the commitment through a cooperative approach, though it also sometimes occurs after a public campaign.
2. In cases of public campaigns, the timeline of events often provides some evidence of causality. I've found that the following pattern is typical: An animal charity launches a public campaign, leaving historical evidence in the form of a petition, tweets, media coverage, etc. Weeks or months later, the corporation publishes a press release agreeing to the commitment. (For reference, I've found Twitter to be a helpful resource for establishing these timelines.)
For example, here is a list of corporate commitments that CIWF USA was allegedly involved in from January 2016 to March 2017. (I originally compiled this back in March 2017.) In 15 of the 22 cases, I found that their causal role was supported by publicly available evidence.
Thanks for your comment!
I think you’re right that some corporations do name organizations in their press releases, and it seems more likely that groups will be named if they use a more collaborative approach. For what it's worth, in the paragraph you quoted, I now think I anchored too heavily on my impression that groups such as THL, Mercy For Animals, and Animal Equality are quite rarely (if ever) named in the news or press releases associated with welfare policy statements, or in the policy statements themselves. Still, since the majority of the organizations we evaluate use a less collaborative approach, I think the paragraph you quoted will usually hold for the groups that we evaluate.
Even in those cases, though, I think you’re also right that there should often be some indirect evidence available from the timeline: evidence of an organization campaigning at t1 and then, usually a short time later, evidence at t2 of a corporation committing to the related welfare standards. For particularly important commitments we do look at this evidence, but for the majority of commitments we don’t.
I think that your comment helps provide some important nuance to this discussion and I have left a link to this comment in the piece itself. Thank you again for the comment!
Thanks very much for posting this reply. And thanks a lot for all the work ACE does in general. Some clarifications were useful to have, e.g. "The Relationship Between our Intervention Research and our Charity Reviews" - I had felt confused about this when I first looked through the reviews in depth.
Here are some specific comments:
Reviews of existing literature
I agree that the new intervention reports are much better on this front. I'm especially keen on the clear tables summarising existing literature in the protest report. I suspect that there's still room for more depth here, especially since the articles summarized are probably just the most relevant parts of much wider debates within the social movement studies literature. For example, I notice a couple of items by S.A. Soule. Although I haven't read the book and analysis you (or whoever wrote the protest report) cite, I have read another article of hers, which was partially directed at considering the importance of the "political mediation" and "political opportunity structure" theories for assessing the impact of social movement organizations, and I suspect that some of the works you cite consider similar issues. I think the protest report goes into an appropriate amount of depth, given limited time and resources, etc., but I've recently gained the impression that a literature review of social movement impact theory in a broad sense, or more systematic reviews of some of the more specific sub-areas, is a high priority in EAA research. I'd be keen to hear views about how useful this would be, and I'm happy to share more specific thoughts if that would help.
Unclear sources of figures
With some older intervention reports, I agree with John Halstead that there are some confusing, unexplained numbers, although I think he exaggerates the extent of this (perhaps unintentionally), since some of the figures are explained. I don't think this needs further comment since, as noted, the new intervention report style is much clearer. My impression was that the Guesstimate models from more recent charity evaluations also had some slightly unexplained figures in them. E.g., in the THL Guesstimate model, “Rough estimate of number of farmed animals spared per dollar THL spent on campaigns” is -52 to 340. Tracking this back through the model takes you to a box which notes "THL did not provide estimates for the number of animals affected by cage-free campaigns they were involved with. We have roughly based this estimate on estimates from other groups active in promoting cage-free policies and have attempted to take into account the greater amount of resources THL dedicates towards this program area." I feel like some explanation of this (perhaps a link to an external Google sheet) might have been helpful? I don't think this is a big issue though. There's also a chance I've just missed something / don't fully understand Guesstimate yet.
General comment on use of CEEs
ACE does make very clear that it only sees CEEs as one part of a charity evaluation. I'd just suggest that, in spite of these warnings, individuals looking at the reports will naturally gravitate towards the CEEs as one of the more tangible/concrete/easily quotable areas of the report. E.g. when I've organised events and created resources for Effective Animal Altruism London, I've quoted some of the CEEs for charities (and pretty much nothing else from the report) to make broad points about the rough ballpark for cost effectiveness of different groups. Given this, it still makes sense to treat the CEEs as more important than some other parts of the report, and to try to be especially rigorous in these sections. So doing things like using a single disputed paper by De Mol et al. (2016) (although this example is from the old corporate campaigns intervention report) as a key part of a cost-effectiveness analysis seems inadvisable, if it is avoidable.
Thanks for those thoughts. I agree that there’s room for more depth in the literature review portion of our intervention reports. We’ve prioritized breadth over depth in our intervention research so far. That’s because there’s usually no existing survey of the literature on a given intervention, and beginning with a survey helps us identify the areas that we’d like to explore in more depth. (We usually identify “questions for further research” at the end of our reports.) I agree that a review of the literature on social movement impact theory would likely be very useful for the movement. I’m not sure whether ACE is the best-positioned group to do that kind of research, but we can certainly consider it!
Regarding the sources of the figures in our CEEs, I agree that this is an area where we can improve. I do think Guesstimate can be a little hard to read, and that might be part of it, but there are also some places where our 2017 CEEs did not include enough information. We are being more careful about this in 2018, and are publishing a separate “CEE metric library” that will explain the figures that crop up in every CEE.
Yes, we’ve definitely noticed that people naturally gravitate towards our CEEs : ) That corporate outreach report will be archived, and we are focusing on improving our research every year.
Thanks for the reply. Just wanted to note that I agree with ACE's breadth-over-depth strategy, and that ACE might not be best placed for a fuller review of the social movement impact literature. It's something I'm considering prioritizing personally in my work for Sentience Institute.
As I noted on the original post, I am grateful this dialogue is happening so respectfully this time around.
How does ACE intend the CEEs to be used, if they're not a major determinant of recommendations?
Good question! We discuss how we make and use our CEEs on this page: https://animalcharityevaluators.org/research/methodology/our-use-of-cost-effectiveness-estimates/#2
Hi Toni, thanks for posting. My apologies if the questions below have been answered elsewhere; I have not engaged very much with your research over the past year or so.
I'm wondering if you could provide a bit of clarification regarding the role of CEEs in the formation of your overall view of cost effectiveness. You describe CEEs and (e.g.) leadership quality as being independent features in determining the marginal effectiveness of donations. I understand "independent" here as meaning that each feature can vary as the other(s) are held constant (roughly, of course there will be some correlation between all of the criteria).
This makes sense, but I think it doesn't hit the core of Halstead's argument. The criteria seem to comprise a multiplicative model, where setting any one of the variables to a sufficiently low value is enough to bring the estimated marginal impact close to zero. If donations don't cash out in the implementation of cost-effective interventions, then the rest of the features don't matter; likewise if the leadership is so poor that the organization disbands. On this view, something like a CEE is critical even if it is not "core" in the sense of being much more important than the other criteria, such that a compelling argument against the evidence of cost-effectiveness is, per se, a compelling argument against the evidence for marginal cost-effectiveness.
While the criteria may vary independently, it doesn't seem that they independently contribute to animal welfare (i.e. contribute additively rather than multiplicatively).
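To make the additive-versus-multiplicative distinction concrete, here is a toy sketch (the criterion names and scores are entirely made up, not ACE's actual criteria or weights): under an additive model, one near-zero criterion only dents the total, whereas under a multiplicative model it drags the whole estimate toward zero.

```python
# Toy illustration with hypothetical scores in [0, 1].
# One criterion (cost-effectiveness) is near zero; the rest are strong.
scores = {
    "cost_effectiveness": 0.05,
    "leadership": 0.9,
    "strategy": 0.85,
    "culture": 0.9,
}

# Additive (independent contributions): the weak criterion only
# pulls the average down somewhat.
additive = sum(scores.values()) / len(scores)

# Multiplicative: any near-zero factor collapses the whole product.
multiplicative = 1.0
for s in scores.values():
    multiplicative *= s

print(f"additive mean:  {additive:.3f}")        # 0.675
print(f"multiplicative: {multiplicative:.4f}")  # 0.0344
```

The point of the sketch is just that, if the criteria combine multiplicatively, a decisive attack on the cost-effectiveness term is effectively an attack on the overall estimate, however strong the other criteria are.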
Also relevant are one's prior on the cost effectiveness of animal welfare charities, and one's confidence in arguments in the style of CEEs. If I think that most animal welfare interventions have little impact, then I need compelling evidence that a particular charity is doing better in order to form a positive overall view of the org. If my prior is more optimistic, then other considerations pointing to organizational quality will be sufficient to think the org is a worthwhile target for donations (especially if I think my ability to form accurate CEEs is weak).
Reviewing your paraphrase of Halstead's argument:
It seems to me that a compellingly positive CEE, primary evidence or no, is nonetheless a necessary component in the belief that an organization will improve animal welfare, particularly if one has a pessimistic prior. As such, effectively attacking the CEE is basically decisive. I'll note that my argument somewhat rests on a conceptual confusion: the CEE as you use it isn't actually your estimate of cost-effectiveness, just the subset that is easily quantifiable. The argument still seems to carry given a pessimistic prior and a lack of justification for the subset of impact that is hard to quantify.
I expect you have a more optimistic prior on the benefits of animal welfare charities, and I take your point (as I understand it) that there are benefits that are hard to capture in a CEE, such as movement capacity should we find some great intervention later on, and the maintenance of enthusiastic grassroots support for use as leverage in corporate campaigns. Does ACE have material explicating and justifying its understanding of this systemic/hard-to-quantify value? Is there something else I'm missing about your argument for independence, such that arguments for the value of something like strong leadership don't rely on/flow through arguments for the cost-effectiveness of programs?
Thanks for your post, and no worries about asking questions we’ve answered elsewhere; we have a lot of research on our website, so we don’t expect anyone to know about all of it!
When I said that we consider each criterion to be an indication of a charity's marginal cost-effectiveness “independently” of the charity's average cost-effectiveness, I meant that—regardless of whether the charity has a high average cost-effectiveness or not—we still consider our six other criteria to be indications of marginal cost-effectiveness. There’s no one or two (or three, or four…) criteria that we think are perfect indications of marginal cost-effectiveness, though we think that all seven of them together are a very good indication. We discuss this a bit in our page on cost-effectiveness estimates, here: https://animalcharityevaluators.org/research/methodology/our-use-of-cost-effectiveness-estimates/
I won’t write more about this right now because we actually have a forthcoming blog post about how we weigh our criteria against each other to make our recommendation decisions. It’s being edited now and then we’ll likely seek external feedback before publishing, so I’d expect it in a month or so.
We think it’s totally possible to make well-reasoned, evidence-based decisions about how to help animals, even in the absence of quantitative CEEs. After all, we don’t even publish quantitative CEEs for some charities that we review (especially if they are working towards long-term or difficult-to-measure outcomes). Take The Good Food Institute, for example. They are one of our Top Charities, but we have not published a quantitative CEE for them. It would be very difficult for us to quantitatively estimate the good they have done so far, since they are working to change the food system in a way that could take years or even decades. Still, we think they have excellent leadership, strong strategy, and a healthy culture, and that their programs are likely to have a high long-term impact. We explain why in their review, and we think we’ve provided a compelling case for donating to them based on their marginal cost-effectiveness.
Regarding your question about “material explicating and justifying [ACE’s] understanding of this systemic/hard-to-quantify value,” we explain some of our thinking about long-term outcomes on the page about our cost-effectiveness estimates, linked above. If you’re asking for explanations of our assessment of the long-term value of particular charities or interventions, that would be in each charity review (mostly discussed in the “high-impact” section with the theories of change) and in our specific intervention reports. For instance, our protest report discusses the importance of movement building.
Hope that helps to answer some of your questions, and watch our blog for the post on our weighing of each criterion!