
tl;dr: I present relative estimates for animal suffering and 2022 top Animal Charity Evaluators (ACE) charities. I am doing this to showcase a new tool from the Quantified Uncertainty Research Institute (QURI) and to present an alternative to ACE’s current rubric-based approach.

Introduction and goals

At QURI, we’re experimenting with using relative values to estimate the worth of various items and interventions. Instead of basing value on a specific unit, we ask how valuable each item in a list is, compared to each other item. You can see an overview of this approach here.
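As a toy illustration of this framing (not taken from the actual model), a relative value can be expressed as the ratio between two uncertain item values, written here in Squiggle:

// Hypothetical toy example: how many units of item B is one unit of item A worth?
item_a = SampleSet.fromDist(1 to 10)
item_b = SampleSet.fromDist(5 to 50)
item_a / item_b // a distribution over the value of A expressed in units of B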

In this context, I thought it would be meaningful to estimate some items in animal welfare and suffering. I estimated the value of a few animal quality-adjusted life-years—fish, chicken, pigs, and cows—relative to each other. Then, using those, I estimated the value of the top and standout charities chosen by ACE (Animal Charity Evaluators) in 2022.

This exercise might be useful to ACE, not so much for the estimates themselves, which are admittedly mediocre, but as a potential template for evaluating the value of uncertain interventions outside of global health and development.

You can view these relative estimates here (a). The app in which they live has different views:

A view showing all of the estimates compared to each other:

A view showing items’ values compared to one reference item:

There is also a view plotting uncertainty vs value, and a view showing the underlying code, which is editable (though slow).

Discussion

Expected quality of the model

I expect these estimates to have numerous flaws. Previously, I worked on an aggregator for forecasts called Metaforecast, as part of which I assigned a “stars rating” to quickly signal the expected quality of probabilities from different platforms. If I applied that same rating here, these estimates would have a stars quality rating of one out of five possible stars, at most.

One key shortcoming of these estimates is that they capture what I personally value after a short amount of reflection. They don’t necessarily represent what the entire Effective Altruism community, or any particular philosophical viewpoint, might value after in-depth reflection. I chose this approach mainly for efficiency. Future iterations might adopt a more sophisticated approach, such as allowing users to input their own values, select from several philosophical perspectives, or aggregate across them.

Methodology

I came up with these estimates in three steps:

  1. Estimated the Quality-Adjusted Life Years (QALYs) value of a few animal species
  2. Mechanistically estimated the value of three reference charities, in terms of QALYs for the species valued in step 1
  3. Estimated the value of the remaining charities in terms of the reference charities in step 2.

Estimating relative value of animal QALYs

I started by estimating the relative value of a QALY for a few animal species. Then I derived estimates for QALY per kilogram and QALYs per calorie, which could later be useful for improving calculators like this one.

Here is an example for cows:

// Add human QALY as a reference point
one_human_qaly = {
  id: "one_human_qaly", 
  name: "1 human QALY (quality-adjusted life-year)",
  value: normal(1, 0.01)
}

// Cows
value_happy_cow_year = 0.05 to 0.3 
// ^ in human qalys
value_tortured_cow_year = -(0.1 to 2)
value_farmed_cow_year = normal({ p10: -0.2, p90: 0.1 })
// ^ purely subjective estimates
// the thing is, it doesn't seem that unlikely to me
// that cows do lead net positive lives
weight_cow = mixture([450 to 1800, 360 to 1100], [1/2,1/2])
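// ss is shorthand for SampleSet.fromDist, as explained later in the post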
non_wastage_proportion_cow = (0.5 to 0.7) -> ss // should be a beta. 
lifetime_cow = (30 to 42) / 12
calories_cow = mixture(0.8M to 1.4M, (500k to 700k) * (weight_cow * non_wastage_proportion_cow)/1000) 
// ^ kilocalories, averaging two estimates from
// <https://www.reddit.com/r/theydidthemonstermath/comments/a8ha9r/how_many_calories_are_in_a_whole_cow/>

cow_estimates = {
  name: "cow",
  value_year: value_farmed_cow_year -> ss,
  weight: weight_cow,
  calories: calories_cow,
  lifetime: lifetime_cow -> ss
}
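From these, the QALYs-per-kilogram and QALYs-per-calorie figures mentioned above can be derived along roughly the following lines (a sketch under the assumption that weight_cow is in kilograms; the published model may compute them differently):

// Sketch only: derived quantities, not necessarily the model's exact formulas
cow_qalys_per_lifetime = cow_estimates.value_year * cow_estimates.lifetime
cow_qalys_per_kg = cow_qalys_per_lifetime / (weight_cow * non_wastage_proportion_cow)
cow_qalys_per_kcal = cow_qalys_per_lifetime / calories_cow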

Coming up with mechanistic estimates for three reference projects

I then looked at three reference projects for which I thought a mechanistic estimate might be feasible: the Fish Welfare Initiative (FWI), Beyond Burgers, and the Open Wing Alliance. For each of those projects, I estimated how many animals they affected and by how much, and arrived at a wide subjective estimate of their impact.

For example, in the case of FWI I looked at their impact page for the number of animals they have probably helped. I then came up with an uncertain estimate of how much they had helped each animal. I took various shortcuts: for example, I pretended that the fish FWI helped were salmon, because details about their life expectancy and caloric content were quick to look up online. In fact, I expect the vast majority of the fish that FWI helps not to be salmon, but I don’t expect the difference to matter much when estimating total impact.

Here is how my estimate for the Fish Welfare Initiative looks:

fish_potentially_helped = 1M to 2M
shrimp_potentially_helped = 1M to 2M
improvement_as_proportion_of_lifetime = (0.05 to 0.5) -> ss
sign_flip_to_denote_improvement(x) = -x

value_fwi_fish = (
    fish_potentially_helped * 
    improvement_as_proportion_of_lifetime * 
    (
       salmon_estimates.value_year /
       salmon_estimates.lifetime
    )
  ) -> sign_flip_to_denote_improvement

value_of_shrimp_in_fish = (0.3 to 1)
// ^ very uncertain, subjective
value_fwi_shrimp = (
    shrimp_potentially_helped * 
    improvement_as_proportion_of_lifetime * 
    (
       salmon_estimates.value_year /
       salmon_estimates.lifetime
    ) *
    value_of_shrimp_in_fish
  ) -> sign_flip_to_denote_improvement

value_fwi_so_far = value_fwi_fish + value_fwi_shrimp
proportion_fwi_in_2022 = 1/4 to 1/2
value_fwi  = value_fwi_so_far * proportion_fwi_in_2022 

fwi_item = {
  name: "Fish Welfare Initiative",
  year: 2022, 
  slug: "fish_welfare_initiative",
  value: value_fwi -> ss 
}

Estimating other charities in terms of the reference projects

I estimated the value of the remaining projects in terms of the previous three. For example, here is my estimate of the Good Food Institute:

value_reference_top_animal_org = mixture(
  [
    fwi_value,
    open_wing_alliance_value, 
    beyond_meat_value/(10 to 1k)
    // ^ beyond meat seems significantly more scaled up than the avg org working to affect cows
  ], 
  [ 1/3, 1/3, 1/3 ]
) -> SampleSet.fromDist

beyond_meat_equivalents_gfi = 0.01 to 2
value_gfi = mixture(
  [ 
    beyond_meat_equivalents_gfi * beyond_meat_item.value, 
    value_reference_top_animal_org
  ], 
  [ 2/3, 1/3 ]
)
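Presumably, the result is then wrapped into a list item analogous to fwi_item above. A hypothetical wrapper mirroring that structure (the model’s actual identifiers may differ):

// Hypothetical item wrapper mirroring fwi_item; names here are illustrative only
gfi_item = {
  name: "Good Food Institute",
  year: 2022,
  slug: "good_food_institute",
  value: value_gfi -> ss
}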

And here is my estimate for Compassion USA:

value_compassion_usa = mixture(
  [
    open_wing_alliance_value * 
      truncateRight(0.05 to 10, 100), 
    value_reference_top_animal_org * 
      truncateRight(0.05 to 10, 100)
  ], 
  [ 1/2, 1/2 ]
) 

A comment on maintaining correlations

These estimates are written in Squiggle, which aims to make working with relative values easy through its functionality around sample sets. For example,

x = SampleSet.fromDist(1 to 100)
y = 2 * x
y / x
// ^ is a distribution which is 2 everywhere
or
x = SampleSet.fromDist(1 to 100)
y = SampleSet.fromDist(2 to 200)
z = x * y
(z / x) / y
// ^ is one everywhere

I’ve usually shortened SampleSet.fromDist to just “ss.”
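In the model, this is presumably just an alias defined near the top, something like:

ss(dist) = SampleSet.fromDist(dist) // shorthand used throughout, e.g. value_fwi -> ss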

As a note of caution, maintaining correlations when taking mixtures of different distributions is trickier.
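A minimal sketch of why (a hypothetical example, not from the model): once a mixture pulls in an independent distribution, ratios against the original variable are no longer constant:

x = SampleSet.fromDist(1 to 100)
y = SampleSet.fromDist(2 to 200)
m = mixture([2 * x, y], [1/2, 1/2]) -> SampleSet.fromDist
m / x // no longer 2 everywhere: half of m's mass comes from y, which is independent of x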

Conclusion

This post presents a model that starts with very rough estimates of the value of several types of animal suffering. It then uses these to build up mechanistic estimates of a few animal charities, and then uses those mechanistic estimates to give a guess as to the impact of all top ACE charities in 2022.

The motives for doing that were:

  • To showcase some tooling recently built at QURI
  • To show one possible path for having quantified estimates for speculative projects—as opposed to the rubric-based approach that organizations like ACE or Charity Entrepreneurship use.

Acknowledgements

This is a project of the Quantified Uncertainty Research Institute, from which I have since taken a leave of absence. Thanks to Ozzie Gooen and Holly Elmore for their feedback.

Comments



Hey Nuño, thanks for doing this! This is interesting to see.

Fwiw, your placement of FWI in the ranking here broadly tracks with my own impressions of it, specifically that we're currently about an order of magnitude less effective than what I view as some of the currently most effective organizations. (This is of course something we're working to improve.)

Thanks! If there are similar estimates that would be useful for you and wouldn't take too much time on my side, happy to be reached out to. 

One order of magnitude isn't that much, particularly given that these estimates are super speculative, and that it's relatively early days for FWI.
