
We have been conducting annual charity evaluations since 2014. Throughout this time, our goal has remained the same: to find and promote the most effective animal charities. Our seven evaluation criteria have also remained broadly consistent (though we’ve reworded, reordered, and tweaked them over the years). Our process for evaluating charities, however, continues to develop each year. For instance, in 2017, we began having conversations with employees at each charity and began offering charities small grants for participating in our evaluations. In 2018, we began conducting culture surveys at each charity, added new dimensions to some of our criteria, and made some logistical changes to increase our efficiency.

This year, we are making a number of changes, including:

Publishing Overall Ratings of Each Charity on Each Criterion

To allow readers to quickly form a general idea of how organizations perform on our seven evaluation criteria, this year we have included in our reviews an overall rating of each charity on each criterion. This decision also responds to feedback from readers who told us that, after skimming the reviews, it was not clear enough how charities were performing on each criterion. We believe these ratings give a better sense of our overall assessment: they are visual representations of each charity’s performance (weak, average, or strong) on each criterion, relative to the other charities under review, along with our confidence level (low, moderate, or high) in each case. We hope these ratings make it easier for our audience to compare charities’ performance by criterion and help us better express how confident we are in our appraisal, given the available evidence.

Increasing the Number of Visual Aids (e.g., Charts, Tables, and Images) in Each Review

This year, in order to make our charity evaluations more accessible to a wider audience, we have made an effort to represent more information visually rather than as blocks of text. In addition to the ratings described above, we added tables representing charities’ main programs, key results, estimated future expenses, and our assessment of their track records. We also added a table representing each charity’s human resources policies, with color-coded marks indicating the policies they have, the ones they lack, and the ones for which they have a partial policy, an informal or unwritten policy, or a policy that is not fully or consistently implemented. We think these changes will help our audience gather the most essential findings from our reviews more quickly and efficiently.

Making Changes to our Cost-Effectiveness Models

Since 2014, we have been creating quantitative cost-effectiveness models that compare a charity’s outcomes to its expenditures for each of its programs and attempt to estimate the number of animals spared per dollar spent. As these models developed each year, some recurring issues emerged:

  1. We were only able to model short-term, direct impact. Our attempts to model medium/long-term or indirect effects of interventions were too speculative to be useful. This meant that we could not produce models at all for charities that were focused mostly on medium/long-term or indirect outcomes, and we often had to omit programs from the charities for which we did produce models.
  2. The estimates produced by the models were too broad to be useful for making recommendation decisions. Ultimately, we want each criterion to support our recommendation decisions, and we found that we often were not confident enough in the models to give them weight in those decisions.
  3. While we appreciate the value of using numbers to communicate our estimates and uncertainty, we found that by using numbers, our estimates were often misinterpreted as being more certain than we intended.
  4. The variation in cost-effectiveness between charities depended more on which interventions a charity used than on how it implemented them. This suggests that, rather than modeling the cost-effectiveness of each charity, we would be better served by modeling the average cost-effectiveness of each intervention and incorporating that into our discussion of effectiveness in Criterion 1.

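As a purely illustrative sketch (the function and figures below are invented for this post, not ACE’s actual model), the quantitative approach described above amounts to dividing estimated animals spared by dollars spent, summed across a charity’s programs:

```python
def animals_spared_per_dollar(programs):
    """Estimate cost-effectiveness for a charity.

    programs: list of (estimated_animals_spared, dollars_spent) tuples,
    one per program. Returns total animals spared per dollar spent.
    """
    total_spared = sum(spared for spared, _ in programs)
    total_spent = sum(spent for _, spent in programs)
    return total_spared / total_spent

# Made-up figures for a hypothetical charity running two programs:
example = [(120_000, 50_000), (30_000, 25_000)]
print(animals_spared_per_dollar(example))  # 150000 / 75000 = 2.0
```

A single point estimate like this hides the uncertainty in the inputs, which is one reason such models were easy to over-interpret.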
We could not fully address these issues in a single review cycle, but we have taken significant steps toward a more useful assessment of cost-effectiveness. We have moved away from a fully quantitative model and transitioned to a qualitative approach that, for each intervention type, compares the resources used and outcomes achieved across all the charities being reviewed. In this discussion, we have also included aspects of each charity’s specific implementation of its interventions that seem likely to have influenced its cost-effectiveness, either positively or negatively.

This approach has its own limitations: focusing on qualitative comparisons can lead us to be overly confident in our assessment. As such, we have highlighted where this approach may not work, and we continue to put limited weight on this criterion as a whole when making decisions. That said, it has provided some insight into the cost-effectiveness of all reviewed charities regardless of the timescale or directness of their work, allowing us to make comparisons we were previously unable to make. We have focused on comparisons within interventions rather than between interventions, both to avoid overlapping with Criterion 1 and to provide insight into how cost-effective a charity might be in implementing new programs in the future.

We welcome feedback on this approach, which can be directed to Jamie Spurgeon.

Making our Culture Surveys Mandatory for Charities Receiving a Recommendation

Our evaluations of each charity’s culture have evolved every year. In 2016, we simply asked each organization’s leadership about the health of their organization’s culture. In 2017, we began reaching out to two randomly selected staff members at each charity to corroborate leadership’s claims. In 2018, we introduced culture surveys to our evaluation process and distributed them to each charity’s staff, with the agreement of their leadership. In some cases, a charity’s leadership preferred to send us the results of their internal surveys instead, which we also accepted in 2018.

We found that distributing our own culture survey to each charity under evaluation gave us a much fuller picture of the charity’s culture. We also found that distributing the same culture survey to every organization was essential, since charities’ internal surveys vary widely in content, relevance, and quality.

This year, we decided to make participation in our culture survey an eligibility requirement for receiving a recommendation from ACE. Our goal is not to uncover and report any small conflict or cultural problem at the charities we evaluate; rather, we only report general trends that bear upon the charity’s effectiveness. We view the distribution of our culture surveys as essential due diligence since we seek to promote charities that will contribute to the long-term health and sustainability of the animal advocacy movement.

Watch our blog for a forthcoming post with more information about our culture survey.

Hiring a Fact-Checker

ACE places high priority on using accurate and reliable evidence in our work. In order to improve our capacity to more deeply investigate empirical information, we have hired a Field Research Associate whose main role is to identify and verify the factual statements included in our research. These statements include claims made by charities under evaluation. We hope that this additional staff member will improve ACE’s decision-making by allowing us to better verify the information reported to us.
