Hi all,

Some of you may remember that a while back, Vetted Causes posted a quite critical review of Animal Charity Evaluators on this forum, which led to a lengthy discussion between the two in the comments.

Vetted Causes has now released their first review of one of the Top Charities according to Animal Charity Evaluators. Here are the two reviews:

Review of Sinergia Animal by Animal Charity Evaluators
Review of Sinergia Animal by Vetted Causes

As a long-time donor to Animal Charity Evaluators, I obviously find it troubling that one of the charities they recommend might be vastly overestimating its own impact, or even claiming successes it had no part in. At the same time, I am not sure how trustworthy Vetted Causes is: their initial review of ACE was, in my opinion, worded quite poorly, and their review of Sinergia Animal almost sounds, for lack of a better term, unbelievably negative, claiming problems with every single one (7 out of 7) of the pig welfare commitments Sinergia Animal achieved in 2023.

This leaves me in a difficult position where I don't really know whom to believe, or whether I should cancel my donations to Animal Charity Evaluators based on this.

That's why I wanted to ask for some additional opinions: do you find Vetted Causes' review trustworthy, and if so, whom should I donate to instead of ACE to help the most animals possible going forward?

(For transparency, I am not associated with ACE, Vetted Causes or Sinergia Animal, beyond my donation to ACE.)

Thank you!

Comments

It's EAG weekend. I would give it at least a week before rushing to a judgement.

My primary advice is to avoid rushing to any judgements. The criticism came out yesterday, and neither organization was aware of it in advance. I assume Sinergia and/or ACE will respond, but it makes sense that a response might take at least several days.

Thanks, yeah, this seems like a reasonable approach. Hoping for a statement from ACE or Sinergia.

I think it's pretty safe to assume that the reality of most charities' cost-effectiveness is less than they claim. 

I'd also advise skepticism of a critic who doesn't attempt to engage with the charity to make sure they're fully informed before releasing a scathing review. [I also saw signs of naive "cost-effectiveness analysis goes brrr" style thinking about charity evaluation from their ACE review, which makes me more doubtful of their work].

It's also worth noting that quantifying charity impact is messy work, especially in the animal cause area. We should expect people to come to quite different conclusions and be comfortable with that. FarmKind estimated the cost-effectiveness of Sinergia's pig work using the same data as ACE and came to a number of animals helped per dollar that was ~6x lower (but still a crazy number of pigs per dollar). Granted, the difference between ACE's and Vetted Causes' assessments is beyond the acceptable margin of error.
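For a sense of scale, here is a minimal worked comparison, assuming the ~6x gap is applied to the headline figure of 1,770 piglets per $1 that comes up later in this thread (pairing those two numbers is my assumption, not something FarmKind states):

\[ 1{,}770 \div 6 \approx 295 \ \text{piglets per \$1} \]

Still an enormous number per dollar, just far smaller than the headline claim.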

I love this wisdom and agree that most charities' cost-effectiveness will be less than they claim. I include our assessment of my own charity in that, as well as GiveWell's assessments, especially as causes become more saturated and less neglected. And yes, like you say, with animal charities there are more assumptions made and far wider error bars than with human-focused assessments.

I haven't looked (and won't look) into this in detail, but I hope some relatively unmotivated people will compare these analyses in detail.

Hi Aidan, thank you for providing your input to the community.

I think it's pretty safe to assume that the reality of most charities' cost-effectiveness is less than they claim. 

It appears we agree that Sinergia is making false claims about helping animals.

We are curious if you think this is proper grounds for not recommending them as a charity. 

It appears we agree that Sinergia is making false claims about helping animals.

I can't speak for Aidan, but the word "false" has certain connotations. Although it doesn't explicitly denote bad faith, it does carry a hint of that aroma in most contexts. I think that's particularly true when it is applied to estimates (cost-effectiveness estimates), opinions, and so on.

I think that phrasing should be used with caution. Generally, it would be better to characterize a cost-effectiveness estimate as overly optimistic, inaccurate, flawed, and so on, to avoid giving the connotation that comes with "false". "False" could be appropriate if the claims were outside the realm of what a charity could honestly -- but mistakenly -- believe. But I don't think that's what Aidan was saying (that reading would imply he thought "most charities[]" were guilty of promoting such claims rather than merely ones that were overly optimistic).

Thanks for the reply, Jason. 

If Sinergia had framed their claims as estimates, we would agree with you. 

However, Sinergia states that "every $1 you donate will spare 1,770 piglets from painful mutilations." If someone donates $1 to Sinergia based on this claim and Sinergia does not spare an additional 1,770 piglets from painful mutilations, Sinergia has made a false claim to the donor, and it is fair to state this is the case. 

The same applies to their claim that they help 113 million farmed animals every year.

Note: Sinergia could have avoided these issues by stating "we have estimates that state every $1 you donate will spare 1,770 piglets from painful mutilations" and "we have estimates that state we help 113 million farmed animals every year." However, these statements are likely not as effective at convincing people to donate to Sinergia.

In common English parlance, we don't preface everything with "I have estimates that state...". 

I don't think any reasonable person thinks they mean that if they got an extra $1, they'd somehow pay someone for 10 minutes of time to lobby some tiny backyard farm of about 1,770 pigs to take on certain practices. You get to these unit economics with a lot more nuance.

I think a reasonable reader would view these statements as assertions grounded in a cost-effectiveness estimate, rather than as some sort of guarantee that Sinergia could point to 1,770 piglets that were saved as a result of my $1 donation. The reader knows that the only plausible way to make this statement is to rely on cost-effectiveness estimates, so I don't think there's a meaningful risk that the reader is misled here.

I think a reasonable reader would view these statements as assertions grounded in a cost-effectiveness estimate, rather than as some sort of guarantee

If there are no advantages to framing these statements as factual claims, why didn't Sinergia just state that they are estimates?

I can't believe how often I have to explain this to people on the forum: Speaking with scientific precision makes for writing very few people are willing to read. Using colloquial, simple language is often appropriate, even if it's not maximally precise. In fact, maximally precise doesn't even exist -- we always have to decide how detailed and complete a picture to paint. 

If you're speaking to a PhD physicist, then say "electron transport occurs via quantum tunneling of delocalized wavefunctions through the crystalline lattice's conduction band, with drift velocities typically orders of magnitude below c", but if you're speaking to high-school students teetering on the edge of losing interest, it makes more sense to say "electrons flow through the wire at the speed of light!". This isn't deception -- it's good communication.

You can quibble that maybe charities should say "may" or "could" instead of "will". Fine. But to characterize it as a wilful deception is mistaken.

If charities only spoke the way some people on the forum wish they would, they would get a fraction of the attention, a fraction of the donations, and be able to have a fraction of the impact. You'll get fired as a copywriter very quickly if you try to have your snappy call to action say "we have estimates that state every $1 you donate will spare 1,770 piglets from painful mutilations". 

Using colloquial, simple language is often appropriate, even if it's not maximally precise. In fact, maximally precise doesn't even exist -- we always have to decide how detailed and complete a picture to paint. 

I tend to agree, but historically EA (especially GiveWell) has been critical of the "donor illusion" involved when mainstream charities offer things like "sponsorship" of children in areas the NGO has already decided to fund on a similar basis. More explicit statistical claims about future marginal outcomes, based on estimates of the outcomes of historic campaign spend, or claims about liberating animals from confinement and mutilation when it's really one or the other, seem harder to justify than some of the other stuff condemned as "donor illusion".

Even leaning towards the view that it's much better for charities to have effective marketing than statistical and semantic exactness, that debate is moot if estimates are based mainly on taking credit for decisions other parties had already made, as claimed by the Vetted Causes review. If it's true[1] that some of their figures come from commitments they should have known do not exist and laws they should have known were already changed, it would be absolutely fair to characterise those claims as "false", even if it comes from honest confusion (perhaps ACE - apparently the source of the figures - not understanding the local context of Sinergia's campaigns?).

  1. ^

    I would like to hear Sinergia's response, and am happy for them to take their time if they need to do more research to clarify.

Thanks for your input, David!

If it's true[1] that some of their figures come from commitments they should have known do not exist and laws they should have known were already changed, it would be absolutely fair to characterise those claims as "false", even if it comes from honest confusion

We would like to clarify something. Sinergia wrote a 2023 report that states "teeth clipping is prohibited" under Normative Instruction 113/2020. Teeth clipping has been illegal in Brazil since February 1, 2021[1]. In spite of this, Sinergia took credit for alleged commitments leading to alleged transitions away from teeth clipping (see Row 12 for an example).  

We prefer not to speculate about whether actions were intentional or not, so we didn't include this in our report. We actually did not include most of our analysis or evidence in the review we published, since brevity is a top priority for us when we write reviews. The published review covers only a small fraction of the problems we found.

 

  1. ^

    See Article 38 Section 2 and Article 54 of Normative Instruction 113/2020.

We actually did not include most of our analysis or evidence in the review we published, since brevity is a top priority for us when we write reviews. The published review covers only a small fraction of the problems we found.

I'd suggest publishing an appendix listing more of the problems you believe you identified, as well as more evidence. Brevity is a virtue, but I suspect much of your potential impact lies in moving grantmaker money (and money that flows through charity recommenders) from charities that allegedly inflate their outcomes to those that don't. Looking at Sinergia's 2023 financials, over half came from Open Phil, and significant chunks came from other foundations. Less than 1% was from direct individual donations, although there were likely some passthrough-like donations recommended by individuals but attributed to organizations.

Your review of Sinergia is ~1000 words and takes about four minutes to read. That may be an ideal tradeoff between brevity and thoroughness for a potential three-to-four figure donor, but I think the balance is significantly different for a professional grantmaker considering a six-to-seven figure investment. 

You can quibble that maybe charities should say "may" or "could" instead of "will". Fine.

We appreciate that you seem to acknowledge that saying "may" or "could" would be more accurate than saying "will", but we don’t see this as just a minor wording issue.

The key concern is donors being misled. It is not acceptable to use stronger wording to make impact sound certain when it isn't.

If charities only spoke the way some people on the forum wish they would, they would get a fraction of the attention, a fraction of the donations, and be able to have a fraction of the impact.

Perhaps the donations would instead go to charities that make true claims.

Hi everyone,

I’m Carolina, International Executive Director of Sinergia Animal.

I want to acknowledge that members of this community have shared this post with us, and we truly appreciate your engagement and interest in our work. A deep commitment to creating real change, transparency, and honesty has always been central to our approach, and we will address all concerns accordingly.

To clarify in advance, we have never taken credit for pre-existing or non-existent policies, and we will explain this in our response. We always strive to estimate our impact in good faith and will carefully review our methodology in light of this feedback to address any valid concerns.

This discussion comes at a particularly busy time for us, as we have been attending EA Global while continuing our critical work across eight countries. We appreciate your patience as we prepare a thorough response.

As a best practice, we believe organizations mentioned by others in posts should have the chance to respond before content is published. We take the principle of the right to reply so seriously that we even extend it to companies targeted in our campaigns or enforcement programs. In that spirit, we will share our response with Vetted Causes via the email provided on their website 24 hours (or as much time as Vetted Causes prefers) before publishing it on the Forum.

The EA community has been a vital supporter of our work, and we hope this serves as a constructive opportunity to provide further insight into our efforts and approach.

Best,
Carolina

Hi Carolina, thank you for the reply; looking forward to your more thorough response!
 

They posted about their review of Sinergia on the forum already: https://forum.effectivealtruism.org/posts/YYrC2ZR5pnrYCdSLt/sinergia-ace-top-charity-makes-false-claims-about-helping

I suggest we concentrate discussion there and not here.
