
Summary

  • I think GiveWell’s top charities may be anything from very harmful to very beneficial once the effects on terrestrial arthropods (e.g. insects[1]) are accounted for.
  • This warrants further research on:
    • The moral weight of terrestrial arthropods.
    • The change in forest area caused by GiveWell’s top charities.
    • The sign of the welfare of terrestrial arthropods.
  • Cluelessness about the (short and long term) effects of GiveWell’s top charities may motivate one to prefer longtermist interventions[2], as Hilary Greaves suggests here, and Alex HT describes here.

Acknowledgements

Thanks to Anonymous Person, Michael Dickens, Michael St. Jules, and Ramiro.

Methods

I estimated the relative variation in the cost-effectiveness of GiveWell’s top charities due to terrestrial arthropods from the product between:

  • Net change in forest area per capita in 2015[3] (m2/person), in each of the countries analysed by GiveWell for its top charities here[4]. I calculated this from the ratio between:
    • Net change in forest area in 2015 by country (ha), based on these data from Our World in Data (OWID).
    • Population in 2015 by country, based on these data from OWID.
  • Variation in welfare of terrestrial arthropods as a fraction of that of a human due to deforestation (1/ha). I calculated this from the negative of the product between:
    • Welfare of a terrestrial arthropod as a fraction of that of a human.
    • Decrease in density of terrestrial arthropods due to deforestation (1/ha).
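A minimal numerical sketch of this product, using Cameroon’s values from the Results section under the linear moral-weight function (the variable names are mine, and the sign of the result depends on whether terrestrial arthropods have positive or negative lives):

```python
# Sketch of the headline product for one country (Cameroon), under the
# linear moral-weight function. Values are taken from the Results tables.
M2_PER_HA = 1e4  # m2 in one hectare

net_change_m2_per_person = -24.3   # net change in forest area per capita (m2/person)
welfare_variation_per_ha = 1.72e3  # variation in arthropod welfare as a fraction
                                   # of that of a human, per hectare deforested (1/ha)

relative_variation = abs(net_change_m2_per_person) / M2_PER_HA * welfare_variation_per_ha
print(relative_variation)  # ~4.2, matching the "Linear" entry for Cameroon
```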

Welfare of a terrestrial arthropod as a fraction of that of a human

I estimated this from the product between:

  • Moral weight of terrestrial arthropods relative to humans. I set this to:
    • MW(N_ta)/MW(N_h), where:
      • MW(N) is the moral weight as a function of the number of neurons[5]. I considered the ones available in foodimpacts.org:
        • Logarithmic: log(N).
        • Square root: N^0.5.
        • Linear: N.
        • Quadratic: N^2.
      • N_ta is the number of neurons of a terrestrial arthropod, which I set to 404 k. This is the mean of a lognormal distribution with 5th and 95th percentiles equal to the lower and upper bounds of 100 k and 1 M given here by Animal Ethics[6].
      • N_h is the number of neurons of a human, which I set to 86 G based on Herculano-Houzel 2012.
    • Rethink Priorities’ (RP’s) median welfare range estimates, given in this post from Bob Fischer, for:
      • Black soldier flies, 0.013.
      • Silkworms, 0.002.
  • Welfare of a terrestrial arthropod as a fraction of that of a human not adjusting for moral weight. I computed this from the ratio between the total welfare score of a “wild bug” and “human in a low middle-income country” according to the Weighted Animal Welfare Index (WAWI) of Charity Entrepreneurship (CE), which was presented here.
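The four neuron-count-based moral weights above can be reproduced with a short script (a sketch under my own naming; the lognormal-mean helper mirrors the calculation described in the footnotes):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def lognormal_mean(q05, q95):
    """Mean of a lognormal distribution fitted to its 5th and 95th percentiles."""
    z = NormalDist().inv_cdf(0.95)              # ~1.645
    mu = (log(q05) + log(q95)) / 2
    sigma = (log(q95) - log(q05)) / (2 * z)
    return exp(mu + sigma**2 / 2)

n_ta = lognormal_mean(100e3, 1e6)  # neurons of a terrestrial arthropod, ~404 k
n_h = 86e9                         # neurons of a human (Herculano-Houzel 2012)

moral_weight = {
    "logarithmic": log(n_ta) / log(n_h),   # ~0.513
    "square root": sqrt(n_ta / n_h),       # ~2.17e-3
    "linear":      n_ta / n_h,             # ~4.70e-6
    "quadratic":   (n_ta / n_h) ** 2,      # ~2.21e-11
}
```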

Decrease in density of terrestrial arthropods due to deforestation

I estimated this from the product between:

  • Decrease in density of terrestrial arthropods due to deforestation as a fraction of that in global land. I computed this from the difference between the densities of terrestrial arthropods in forested and deforested areas as a fraction of that in global land, which I set to 1.55 and 0.899, respectively. These are the means of lognormal distributions with 2.5th and 97.5th percentiles equal to the lower and upper bounds of the 95 % confidence intervals (CIs) given here by Brian Tomasik for:
    • Rainforest, 1.02 and 5.
    • Cerrado, 0.7 and 3.
  • Density of terrestrial arthropods in global land (1/ha). I computed this from the ratio between:
    • Number of terrestrial arthropods, which I set to 4.04*10^18. This is the mean of a lognormal distribution with 5th and 95th percentiles equal to the lower and upper bounds of 10^18 and 10^19 given here by Brian Tomasik.
    • Global land area (ha), which I set to 13.0 Gha based on these data from The World Bank.
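The density figures above can be sketched the same way (the helper repeats the lognormal-mean calculation from the footnotes; variable names are mine):

```python
from math import exp, log
from statistics import NormalDist

def lognormal_mean(q_lo, q_hi, p=0.95):
    """Mean of a lognormal fitted to symmetric quantiles (1 - p and p)."""
    z = NormalDist().inv_cdf(p)
    mu = (log(q_lo) + log(q_hi)) / 2
    sigma = (log(q_hi) - log(q_lo)) / (2 * z)
    return exp(mu + sigma**2 / 2)

n_arthropods = lognormal_mean(1e18, 1e19)    # ~4.04*10^18 terrestrial arthropods
land_area_ha = 13.0e9                        # global land area (ha)
density = n_arthropods / land_area_ha        # ~3.1*10^8 arthropods per ha

# Decrease in density due to deforestation: forested minus deforested
# density relative to the global mean (1.55 and 0.899 in the text).
decrease_per_ha = (1.55 - 0.899) * density   # ~2.0*10^8 arthropods per ha
```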

Results

The calculations and full results are in this Sheet. The tables below contain the moral weight of terrestrial arthropods relative to humans, the variation in welfare of terrestrial arthropods as a fraction of that of a human due to deforestation[7], the net change in forest area per capita in 2015, and the relative variation in cost-effectiveness of GiveWell’s top charities due to terrestrial arthropods. I present the results by country for simplicity, but they would ideally be shown for each of GiveWell’s top charities. N_ta and N_h are the numbers of neurons of terrestrial arthropods and humans.

Moral weight of terrestrial arthropods relative to humans

| Logarithmic | RP's median for black soldier flies | Square root | RP's median for silkworms | Linear | Quadratic |
|---|---|---|---|---|---|
| 0.513 | 0.013 | 2.17 m | 2 m | 4.70 μ | 22.1 p |

Variation in welfare of terrestrial arthropods as a fraction of that of a human due to deforestation[7] (1/ha), if the moral weight is the one in the column header

| Logarithmic | RP's median for black soldier flies | Square root | RP's median for silkworms | Linear | Quadratic |
|---|---|---|---|---|---|
| 188 M | 4.77 M | 795 k | 734 k | 1.72 k | 8.10 m |

Relative variation in cost-effectiveness of GiveWell's top charities due to terrestrial arthropods, if the moral weight is the one in the column header

| Country[8] | Net change in forest area per capita in 2015 (m2/person) | Logarithmic | RP's median for black soldier flies | Square root | RP's median for silkworms | Linear | Quadratic |
|---|---|---|---|---|---|---|---|
| Cameroon | -24.3 | 458 k | 11.6 k | 1.94 k | 1.79 k | 4.19 | 16.7 μ |
| Mali[9] | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Mozambique | -89.1 | 1.68 M | 42.5 k | 7.09 k | 6.54 k | 15.4 | 72.2 μ |
| Niger | -6.17 | 116 k | 2.94 k | 491 | 453 | 1.06 | 5.00 μ |
| Nigeria | -8.88 | 167 k | 4.23 k | 706 | 651 | 1.53 | 7.19 μ |
| Togo | -3.96 | 74.5 k | 1.89 k | 315 | 291 | 0.683 | 3.21 μ |
| Uganda | -11.0 | 207 k | 5.25 k | 875 | 808 | 1.90 | 8.91 μ |
| Mean[10] | -20.5 | 386 k | 9.78 k | 1.63 k | 1.50 k | 3.53 | 16.6 μ |

Discussion

The results suggest changes in the number of terrestrial arthropods may be anything from crucial to negligible. The relative variation due to terrestrial arthropods for the mean country is:

  • 386 k (crucial) if the moral weight function is logarithmic.
  • 16.6 μ (negligible) if the moral weight function is quadratic.

Terrestrial arthropods completely dominate the cost-effectiveness of GiveWell’s top charities for RP’s median welfare range estimates. The estimates for black soldier flies and silkworms lead to a cost-effectiveness 9.78 k and 1.50 k times as high, respectively.

I think RP’s median welfare range estimates are more accurate than those solely based on the number of neurons. As Adam Shriver put it here:

Neuron counts likely provide some useful insights about how much information can be processed at a particular time, but it seems unlikely that they would provide more useful information individually than a function that takes them into account along with other plausible markers of sentience and moral significance.

I have not accounted for the welfare of farmed animals, but I guess the effect on terrestrial arthropods dominates. Based on:

  • This, the total number of neurons of terrestrial arthropods is 20 k times that of farmed animals excluding arthropods.
  • CE’s WAWI, the ratio between the total welfare score of factory-farmed animals and the “wild bug” is much lower than 20 k. It ranges from 47.6 % (= 20/42) for beef cows, to 136 % (= 57/42) for both turkeys and United States’ laying hens.

The present analysis is very shallow, and therefore the results are not resilient. However, they point to the value of further research on:

  • The moral weight of terrestrial arthropods.
    • The estimates I provided based solely on the number of neurons suggest the effects on terrestrial arthropods can be anything from crucial to negligible.
    • RP’s median welfare range estimates for bees, black soldier flies, and silkworms imply terrestrial arthropods dominate, but the 5th percentile is 0 for all those species (see here).
  • The change in the number of terrestrial arthropods caused by GiveWell’s top charities.
    • I supposed it to be directly proportional to net change in forest area per capita, but this is simplistic.
    • For example, I guess the consumption of the beneficiaries of GiveWell’s top charities is lower than that of the mean citizens of their countries, so I arguably overestimated the magnitude of the net change[11].
  • The sign of the welfare of terrestrial arthropods.

The uncertainty about the sign of the welfare of terrestrial arthropods, and the possibility of effects on these dominating those on humans imply GiveWell’s top charities may be anything from very harmful to very beneficial from a neartermist point of view.

Even neglecting animal welfare, I am not confident GiveWell’s top charities are robustly good. For example, the effect of distributing bednets[12] on the population size is unclear. According to Wilde 2019, that increases fertility in the short term, but decreases it in the long term:

The effect on fertility is positive only temporarily – lasting only 1-3 years after the beginning of the ITN distribution programs – and then becomes negative. Taken together, these results suggest the ITN distribution campaigns may have caused fertility to increase unexpectedly and temporarily, or that these increases may just be a tempo effect – changes in fertility timing which do not lead to increased completed fertility.

I do not know whether increasing the population size is good or bad, but I think it may well be the major driver for both the nearterm (e.g. next 100 years) and total effect of GiveWell’s top charities. For example, it influences not only the change in forested area, but also greenhouse gas emissions[13], policy, and economic growth more broadly.

To be honest, I personally do not think the best solution to the cluelessness about the (short and long term) effects of GiveWell’s top charities is to make their analyses more sophisticated. I would say focussing on longtermist interventions is better, as their (longterm) effects are more predictable. Regardless of your preferred solution, I highly recommend watching this talk from Hilary Greaves, and reading this post from Alex HT[14] (if you have not done so).

  1. ^

     Fun fact: in Portuguese, my last name Grilo means cricket (which is a terrestrial arthropod).

  2. ^

     By which I mean increasing the share of resources going towards longtermist interventions, not that neartermist ones should not receive any resources. Relatedly, I liked this post from Jan Kulveit and Gavin Leech.

  3. ^

     This is only accurate to the extent the annual impact on net forest area of the people saved by GiveWell’s top charities is similar to that of the mean citizens of their countries in 2015.

  4. ^
  5. ^

     As illustrated by William MacAskill in What We Owe to the Future.

  6. ^

     I calculated the mean of the lognormal distributions using this Sheet, which is described here.

  7. ^

     A value of 1/ha means the suffering of terrestrial arthropods prevented by deforesting 1 ha equals the welfare of 1 person.

  8. ^

     Data about the net change in forest area for the other countries analysed by GiveWell for its top charities were not available.

  9. ^

     There was no net change in forest area for Mali, so the results are all 0.

  10. ^

     Ideally, the mean would be weighted by the funding GiveWell has directed to each of the countries.

  11. ^

     I believe this does not affect the conclusions. The uncertainty in the moral weight is much larger than that in the consumption of the beneficiaries, so the relative variation in cost-effectiveness would still be anything from crucial to negligible.

  12. ^

     This is done by Against Malaria Foundation, which is one of GiveWell’s top charities.

  13. ^

     These have short term effects such as increasing heat-related mortality (0.226 mlife/t based on Bressler 2021), apart from increasing the existential risk due to climate change (0.273 bp/Tt based on this).

  14. ^

     For pushback on longtermism, Michael St. Jules suggested checking the following:

    - The Future Might Not Be So Great by Jacy.

    - A longtermist critique of “The expected value of extinction risk reduction is positive” by Anthony DiGiovanni.

    - Why I am probably not a longtermist by Denise Melchin.

    - This thread started by Michael.

    - Which World Gets Saved by Philip Trammell.

    - Why does (any particular) AI safety work reduce s-risks more than it increases them? by Michael.

    - The motivated reasoning critique of effective altruism by Linchuan Zhang.

    - This thread started by Luke Muehlauser.

Comments

I think this post is really pretty good. What I'd like to see, either included here or in future work, is a plain-language description of your assumptions and conclusions. I tried to make one, which went something like this:

GiveWell's global health work saves lives, but the people they save go on to destroy insect habitat. The worth of a single insect is negligible relative to that of a human, but so many of them die when forests get cut down that global health efforts in certain locations could conceivably be bad overall. This side effect would be especially bad in countries where the most deforestation per capita is going on, and where habitat for insects is particularly lush.

Based on a shallow dive and plugging some numbers into a simple model, the results suggest we should reallocate human health resources out of a few countries and into other countries where less habitat destruction is happening, or reallocate some of those resources to preserving habitat. We really need to research this more carefully before taking these conclusions too seriously - but these early results suggest it would be worth the effort.

It would also be nice to move the acronyms (i.e. t_a) to the figure captions where they're presented, and work on better formatting the tables. You've done the research and thinking, so take a little more time to polish up the presentation so we can read it more easily :)

Thanks for the kind words! I like your summary. Just one note, since we are arguably so far from knowing whether insects have good or bad lives, I do not think we can take the conclusion below.

Based on a shallow dive and plugging some numbers into a simple model, the results suggest we should reallocate human health resources out of a few countries and into other countries where less habitat destruction is happening

I believe the best attitude is one of cluelessness, where we just know that insects may dominate (or not) the analysis, either making GiveWell's top charities much more harmful or beneficial. Moreover, we should beware surprising and suspicious convergence. If insects indeed went on to dominate the analysis (quite unclear), I would expect targeted wild animal interventions to be more effective than global health and development ones.

It would also be nice to move the acronyms (i.e. t_a) to the figure captions where they're presented, and work on better formatting the tables.

I have now restated the meaning of N_ta and N_h just before the tables, and improved the formatting of the headers of the table a little.

You've done the research and thinking, so take a little more time to polish up the presentation so we can read it more easily :)

Ah, you are right!

Can you say why you feel that longtermism suffers from less cluelessness than what you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves' talk, but it doesn't seem to address this. She refers to "reducing the chance of premature human extinction" but doesn't say how.

Hi Henry,

Thanks for engaging!

Assuming most of the expected value of the interventions of GiveWell's top charities is in the future (due to effects on the population size), we are clueless about their total cost-effectiveness. This limitation also applies to longtermist interventions.

However, if the goal is maximising longterm cost-effectiveness (because that is where most of the value is), explicitly focussing on the longterm effects will tend to be better than explicitly focussing on nearterm effects. This is informed by the heuristic that it is easier to achieve something when we are trying to achieve it. So longtermist interventions will tend to be more effective.

It would also be surprising and suspicious convergence if the best interventions to save lives in the present were also the best from a longtermist perspective. The post from Alex HT I linked in the Summary has more details.

Rethink Priorities’ (RP’s) median welfare range estimates, given in this post from Bob Fischer, for:

  • Black soldier flies, 0.013.
  • Silkworms, 0.002.

 

It's worth noting that most arthropods by population are significantly smaller, have significantly smaller brains and would probably have less sophisticated behaviour (at least compared to adult black soldier flies; I'm not familiar with silkworm and other larval behaviour), so would probably score lower on both probability of sentience and welfare range. So, if you're including all arthropods and using these figures for all arthropods, you should probably think of these numbers (or at least the BSF ones) as providing an overestimate of the arthropod welfare effects.

Hi Michael,

Thanks for pointing that out. I agree it is something worth having in mind.

However, the moral weight could still be much lower than those of black soldier flies and silkworms, and terrestrial arthropods would still dominate. Assuming the moral weight is directly proportional to the number of neurons, in which case it is 0.0361 % (= 4.70 μ / 0.013) of that of black soldier flies, and 0.235 % (= 4.70 μ / 0.002) of that of silkworms, the mean cost-effectiveness would increase/decrease by 353 % (assuming terrestrial arthropods have negative/positive lives).

It is true I may have overestimated the rate of deforestation, but I also expect the moral weight obtained by direct proportionality to the number of neurons to be an underestimate, so I think the analysis can go either way.

I think it would be really nice if Open Philanthropy, Rethink Priorities, Wild Animal Initiative, Faunalytics or others looked into considerations such as this.

Hi Vasco, thanks for writing this! I'm glad to see more cross-cause research, and this seems like a useful starting point.

Some quick thoughts on why the deforestation rate assumptions might be too high:

Net change in forest area per capita in 2015[3] (m2/person), in each of the countries analysed by GiveWell for its top charities here[4]. I calculated this from the ratio between:

  • Net change in forest area in 2015 by country (ha), based on these data from Our World in Data (OWID).
  • Population in 2015 by country, based on these data from OWID.

(...)

This is only accurate to the extent the annual impact on net forest area of the people saved by GiveWell’s top charities is similar to that of the mean citizens of their countries in 2015.

This assumption would not hold if some of the major causes of deforestation are limited by factors not very sensitive to population size. For example, some deforestation may be driven by international demand for products that are produced in those countries, so that the effects of more people willing to work on these products (by saving lives) should be tempered by elasticity effects. They could also be limited by capital, which GiveWell beneficiaries may be unlikely to provide, given their poverty and living situations.

Deforestation for agriculture for domestic consumption or for living area would be sensitive to the population size, but, again, GiveWell beneficiaries may be unrepresentative, a possibility you implicitly acknowledge by assuming it is not the case.

Furthermore, with increasing deforestation, there will be less land left to deforest, and that land may be harder to deforest (because of practical or political challenges). Each of these points towards the marginal effect of population being smaller than the average effect.

I haven't looked into any of this in detail or tried to verify any of these possibilities, though.

Hi Michael,

Thanks for the encouragement!

I agree I may well have overestimated the deforestation rate. That being said, even if the deforestation rate is only 1 % of what I assumed, the mean relative variation in cost-effectiveness would range from 3.86 k to 0.166 μ. We can narrow this down by focussing on the plausible moral weights, but without looking further it looks like the analysis could go either way.

Wow fascinating, thanks for this post Vasco!

I'd be inclined to take a Bayesian approach to this kind of cost-effectiveness modelling, where the "prior evidence" is the estimated impact on lives saved. This is something we have strong reason to believe is good under many world views. Then the "additional evidence" would be the reduction in insect welfare caused by deforestation. I'm just so very uncertain about whether the second one is really a negative effect that I think it would be swamped by the impact on lives saved. This is because we have several steps of major uncertainty: impact of GiveWell charities on deforestation, impact of deforestation on insect welfare, moral weight of insects, baseline welfare of insects (positive or negative). 

One issue here is that the same objection could potentially be applied to longtermist-focused charities, but I actually don't think this is true. I think (say) working in government to reduce the risk of biological weapons is actually far more robustly positive than trying to improve insect welfare by reducing deforestation. It also seems like the value of the far future could be far greater than the impact on present-day insects. 

What are your thoughts on this approach? 

Hi Lucas,

Thanks for engaging!

I think the approach you are suggesting is very much in line with the one of section "Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc." of this post from Holden Karnofsky.

The bottom line is that when one applies Bayes’s rule to obtain a distribution for cost-effectiveness based on (a) a normally distributed prior distribution (b) a normally distributed “estimate error,” one obtains a distribution with

  • Mean equal to the average of the two means weighted by their inverse variances
  • Variance equal to the harmonic sum of the two variances

I used to apply the above as (CE stands for cost-effectiveness, E for expected value, and V for variance):

  • E("CE") = "weight of modelled effects"*E("CE for modelled effects") + "weight of non-modelled effects"*E("CE for non-modelled effects").
  • "Weight of modelled effects" = 1/V("CE for modelled effects")/(1/V("CE for modelled effects") + 1/V("CE for non-modelled effects")). This tends to 1 as the uncertainty of the non-modelled effects increases.
  • "Weight of non-modelled effects" = 1/V("CE for non-modelled effects")/(1/V("CE for modelled effects") + 1/V("CE for non-modelled effects")). This tends to 0 as the uncertainty of the non-modelled effects increases.
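Concretely, the weighting in the bullets above amounts to precision weighting of two normal distributions. A minimal sketch with made-up numbers (function and variable names are mine):

```python
def posterior(mean_prior, var_prior, mean_est, var_est):
    """Combine a normal prior with a normally distributed estimate:
    the mean is the inverse-variance-weighted average of the two means,
    and the variance is the harmonic sum of the two variances."""
    precision = 1 / var_prior + 1 / var_est
    mean_post = (mean_prior / var_prior + mean_est / var_est) / precision
    var_post = 1 / precision
    return mean_post, var_post

# A very uncertain non-modelled effect gets almost no weight:
mean, var = posterior(mean_prior=1.0, var_prior=1.0, mean_est=1000.0, var_est=1e6)
print(mean)  # ~1.0: the huge-variance estimate barely moves the prior
```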

If the modelled effects are lives saved in the near term, and the non-modelled effects are the impact on the welfare of terrestrial arthropods (which are not modelled by GW), V("CE for modelled effects") << V("CE for non-modelled effects"). So, based on the above, you are saying that we should give much more weight to the lives saved in the near term, and therefore these are the driver for the cost-effectiveness.

I believe the formula of the 1st bullet is not correct. I will try to illustrate with a sort of reversed Pascal's mugging. Imagine there was one button which would destroy the whole universe with probability 50 % when pressed, and someone was considering whether to press it or not. For the sake of the argument, we can suppose the person would certainly (i.e. with probability of 100 %) be happy while pressing the button. Based on the formula of the 1st bullet, it looks like all weight would go to the pretty negligible effect on the person pressing the button, because it would be a certain effect. So the cost-effectiveness of pressing the button would be essentially driven by the effect on one single person as opposed to the consideration that the whole universe could end with likelihood 50 %. The argument works for any probability of universal destruction lower than 1 (e.g. 99.99 %), so the example also implies null value of information for learning more about the impact of pressing the button. All of this seems pretty wrong.

However, I still think priors are valuable. If 2 restaurants have a rating of 4.5/5, but one of the ratings is based on 1 review, and another on 1 k reviews, the restaurant with more reviews is most likely better (assuming a prior lower than 4.5).

So I think the formula is not right as I wrote it above, but is pointing to something valuable. I would say it can be corrected as follows:

  • E("CE") = "weight of method 1"*E("CE for method 1") + "weight of method 2"*E("CE for method 2").

I do not have a clear approach to estimate the weights, but I think they should account not only for uncertainty, but also for their scale. Inverse-variance weighting appears to be a good approach if all methods output estimates for the same variable (such as in a meta-analysis). For cost-effectiveness analyses, I suppose the relevant variable is total cost-effectiveness. This encompasses near term effects on people, but also near term effects on animals, and long term effects. Since the scope of GW's estimates for lives saved differs from that of my estimates for the impact on terrestrial arthropods, I believe we cannot directly apply inverse-variance weighting.

It is not reasonable to press a button which may well destroy the whole universe for the sake of being happy for certain. In the same way, but to a much smaller extent, I do not think we can conclude GW's top charities are robustly cost-effective just because we are pretty certain about their near-term effects on people. We arguably have to investigate (decrease uncertainty about, and increase the resilience of our views on) the other effects, such as those on animals, and the consequences of changing population size (which have apparently not been figured out; see comments here).

One issue here is that the same objection could potentially be applied to longtermist-focused charities, but I actually don't think this is true. I think (say) working in government to reduce the risk of biological weapons is actually far more robustly positive than trying to improve insect welfare by reducing deforestation. It also seems like the value of the far future could be far greater than the impact on present-day insects.

I agree efforts around pandemic preparedness are more robustly positive than those targeting insect welfare. 2 strong arguments come to mind:

  • It looks like at least some projects (e.g. developing affordable super PPE) are robustly good for decreasing extinction risks, and I think extinction is robustly bad.
  • Extinction risks are pretty large in scale, and so they will tend to be a more important driver of the total cost-effectiveness. This is not necessarily the case for efforts on improving insect welfare. They might e.g. unintentionally cause people to think that nature / wildlife is intrinsically good/bad, and this may plausibly shape how people think about spreading (or not) wildlife beyond Earth, which may be the driver of the total cost-effectiveness.

Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised.

Saying "further research would be good" is easy because it is always true. Doing that research or waiting for it to be done is not always practical. I think you are being extremely unreasonable if, before helping someone dying of malaria, you ask for research to be done on:

  • the long term impacts of bednets on population growth
  • the effects of population growth on deforestation
  • the effects of deforestation on insect populations and welfare
  • specific quantification of insect suffering

I have a general disdain for criticizing arguments as ivory-tower thinking without engaging with the content itself. I think it is an ineffective way of communicating which leaves room for quite a lot of the non-central fallacy. The same ivory-tower thinking you identified was also quite important in promoting moral progress through careful reflection. I don't think considering animals as deserving moral attention is inherently an insulting position. Perhaps a better way of approaching this question would be to actually consider whether or not this trade-off is worth it.

P.S. I don't think the post called for a stop to GiveWell's act of giving. The research questions you identified are important, decision-relevant, open-ended questions which will aid GiveWell's research. Perhaps not all of them can be solved, but it doesn't mean that we shouldn't consider devoting a reasonable amount of resources to researching these questions. I'm a firm believer in worldview diversification. The comparative probably isn't that GiveWell will stop helping someone dying of malaria, but they may lower their recommendations for said program, or offer recommendations to make existing interventions more effective in light of these new moral considerations.

I agree with you that criticising arguments without engaging with the content is bad. I do however probably agree with this statement. 

"Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised."

I think that living a rich lifestyle in a western country, while saying that GiveWell's projects which help lift people out of poverty could be very harmful because of potential harm to insects, is probably insulting to poor people, whether the argument is right or wrong. This also definitely gets the EA community heavily criticised.
 

And you say that the post doesn't call for a stop on GiveWell's act of giving, yet he suggests "I would say focussing on longtermist interventions is better, as their (longterm) effects are more predictable", which seems to lean in that direction.

I think a better approach, given the great uncertainty, is to research things like terrestrial arthropod suffering before referring to GiveWell or other types of giving. Why be potentially insulting or get the community criticised when you can encourage more research and thought without necessarily bringing global health and development into the question?

Thanks for commenting, Henry. I do feel you are pointing to something valuable. FWIW, I am confused about the implications of my analysis too. Somewhat relatedly, I liked this post from Michelle Hutchinson.