
(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)

Many people in the EA movement give their money to fund interventions that directly improve lives. Some give to corporate campaigns to reduce animal suffering. Others fund bednets to protect people from contracting malaria. Some even directly give their money to people living in poverty.

Organisations such as GiveWell and the Happier Lives Institute (HLI) advise on how one can fund the most effective interventions to improve the lives of humans. Interestingly, these organisations never advise that one should fund research into potentially even better interventions. They haven't necessarily determined that funding research is less valuable than funding existing interventions. Instead they rule out funding research a priori, restricting their analysis to evaluating existing interventions. This doesn't seem sensible to me.

(EDIT: I was too quick to judge that HLI rules out research - please see the comments. I may also have been too quick to judge this for GiveWell.)

How can we decide between funding existing interventions and funding research into potentially even better interventions? In trying to answer this question I found myself developing a simple model...

Model setup 

Say that, with a fixed amount of money, we can currently do $b$ units of good each time period by funding the best intervention we have available to us. Considering HLI's recommendations, we could think of $b$ as representing the number of Wellbeing-Adjusted Life-Years (WELLBYs) gained from giving 10% of an average annual salary to StrongMinds each year. Say that we have $T$ time periods in total in which to do good.

Now consider that, instead of giving to fund existing interventions, we can decide to spend $r$ time periods (where $r < T$) using our money to fund research into potentially better interventions. This could mean funding research into psychedelic treatments to improve mental health, which at least has the potential to uncover a more effective mental health intervention than the group talk therapy utilised by StrongMinds. In this research time we do no direct good, so there is an opportunity cost in that we could have been carrying out our best existing intervention instead. However, with probability $p$ we discover a better intervention than the best one we currently have, which allows us to do $b + g$ units of good each time period (so an improvement of $g$). If we discover a better intervention, we can carry out this new intervention for the rest of time. If we don't discover a better intervention, we can simply revert back to our original intervention.

Solving for the "yes research" condition

If the expected value of engaging in research and potentially finding a better intervention exceeds the value of simply carrying out our existing intervention, we should engage in research. I call this the "yes research" condition.

The value of carrying out the existing intervention is $bT$, as we do $b$ units of good in each of $T$ time periods.

The value of engaging in research is a bit more complex:

  • In the research period, lasting $r$ time periods, we do no good as we are engaging in research rather than actually improving lives.
  • After the research period, with probability $p$ we have found a better intervention, allowing us to deliver $b + g$ units of good for the remaining $T - r$ time periods. With probability $1 - p$ we didn't find a better intervention and so revert back to delivering $b$ units of good for the remaining $T - r$ time periods.
  • Therefore the expected value of engaging in research is $p(b+g)(T-r) + (1-p)b(T-r)$.

The "yes research" condition is when the expected value of engaging in research exceeds the value of carrying out our best existing intervention. So this is:

The "yes research condition"

Plugging in some (semi-)random numbers

Let me put some numbers in to bring this to life. Let's say that:

  • We want to fund research for 1 year ($r = 1$)
  • We're looking to do good over a period of 50 years ($T = 50$)
  • The probability of finding a better intervention is 10% ($p = 0.1$)
  • If we do find a better intervention, we expect to do 150 units more good every year ($g = 150$)
  • We currently do 500 units of good every year ($b = 500$)

With these (semi-)random numbers the value of carrying out our existing intervention is $bT = 500 \times 50 = 25{,}000$. The expected value of engaging in research is $p(b+g)(T-r) + (1-p)b(T-r) = 0.1 \times 650 \times 49 + 0.9 \times 500 \times 49 = 25{,}235$.

$25{,}235 > 25{,}000$, so "yes research" (just).

What can we learn from this model?

I just made up some numbers and plugged them in. In reality, estimating the probability of finding a better intervention ($p$) is likely to be quite difficult, as is estimating the amount of extra good that would be done through a better intervention if it is found ($g$). I don't think this means the model is useless though. One could have a good go at estimating these parameters by looking at, for example, the rate of progress being made in psychedelic research. I think having a go is better than ruling out further research a priori. After all, research has done wonders in the past (consider that mental health problems used to be treated by drilling holes in people's skulls - I'm pretty glad we didn't just fund that for the rest of time).

Even without estimated parameter values, the model can give us some useful insights. It is interesting to analyse the effect of changing some parameters whilst keeping the others constant. Remember the "yes research" condition: $p(b+g)(T-r) + (1-p)b(T-r) > bT$.

  • An increase in the probability of finding a better intervention through research ($p$) increases the left hand side of the condition, working in favour of research (see appendix for proof). Intuitively this is obvious.
    • Similarly obvious is that if we think it's impossible to find a better intervention ($p = 0$) the left hand side cannot be greater than the right hand side and research cannot be worth it.
  • An increase in the extra expected good done under a better intervention ($g$) also increases the left hand side of the condition, working in favour of research. This is obvious as there is more to potentially gain from research.
  • Slightly more subtly, an increase in the amount of time spent researching ($r$), whilst keeping all other things equal, decreases the left hand side of the condition, working in favour of not researching. This is because we don't do any good whilst researching, incurring an opportunity cost.
  • Even more subtly, an increase in the amount of time over which we can/are looking to do good ($T$) works in favour of research (this isn't immediately clear from the "yes research" condition - see appendix for proof). The intuition behind this is that if research finds us a better intervention, we can use this intervention for the rest of time. Even if the better intervention is only slightly better than the best one we currently have, given enough time this small improvement will make up for the fact that we did no good during our research period (see the numerical sketch after this list).
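To make that last bullet concrete, here is a small sweep over $T$ using the earlier sketch, holding the other (semi-)random parameters fixed; the threshold it finds is specific to those example numbers:

```python
b, r, p, g = 500, 1, 0.1, 150

# The condition (b + p*g)(T - r) > b*T rearranges to T > r*(b + p*g)/(p*g),
# i.e. T > 515/15 ≈ 34.3 with these numbers.
for T in range(2, 101):
    if yes_research(b, T, r, p, g):
        print(f"Research first wins in expectation at T = {T} periods")  # T = 35
        break
```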

Key takeaways

My last point in the previous section is really the key takeaway. The greater the amount of time over which we can/are looking to do good (the greater $T$ is), the more we have to gain from doing research into better interventions. This is because there is more time over which the better intervention can deliver greater benefit.

In a way, this is just classic longtermist reasoning. Finding a better intervention is kind of like moving into a better "persistent state", as once we have discovered a better intervention we can just fund that intervention for the rest of time. Then, given the expected vastness of the future, moving into that better persistent state can really do a phenomenal amount of total good.

It's worth noting of course that many people are not longtermists, and that these people are likely to be the ones who are donating to fund existing effective interventions, such as giving to StrongMinds. These people clearly have some reason for not being longtermist, so may not be convinced by the above longtermist reasoning. These people may only want to consider a short overall time period in which to do good (in my model choose small ), which would work against carrying out research. However, I think that there are people who are not longtermists, but who may still be interested in considering a large timeframe over which to do good (large ), which may mean they want to fund research over existing interventions. For example:

  1. People who reject longtermism due to fanaticism: Some may think that we can predictably influence the mid-far future, and may not heavily discount the future, but may not be longtermists due to concern that we can only succeed in influencing the mid-far future with very small probability. Not being comfortable with being fanatical in this way, these people may prefer the certainty of doing good by giving to StrongMinds over funding longtermist interventions. Such a person may still find funding research better than giving to StrongMinds however, as carrying out research into better interventions need not be fanatical. Even if the probability of finding a better intervention is small, it's unlikely to be crazy small. So such people may well find that they should switch from funding existing interventions to funding research.
  2. People who reject longtermism due to preferred population axiology: Some may not find it very important to reduce risks of extinction, as they have a person-affecting view of population ethics. This may cause them to reject longtermism if they think reducing the risks of extinction is the only surefire way to tractably influence the mid-far future. These people may therefore prefer giving to StrongMinds over, say, the Nuclear Threat Initiative. Such people may still find funding research better than giving to StrongMinds however, as improving the quality of mental health treatment remains highly valuable under most plausible person-affecting views. Most plausible person-affecting views will accept that it is important to improve the lives of people who are not alive now but who will necessarily live in the future. In this case, such a person may want to accept a very large time period over which to do good (a large $T$ in my model), which works in favour of research.

So, in short, I think there could be quite a few members of the Global Health and Wellbeing community who find that funding research into potentially better interventions has higher value, given their ethical and empirical views, than helping people through existing interventions. If this is true, the current distribution of funding in the GH&W community could be due a radical shift in the pursuit of doing the most good.

Appendices

Proof of the effect of increasing the probability of finding a better intervention through research

In the What can we learn from this model? section I claimed that increasing the probability of finding a better intervention through research ($p$) works in favour of research. This is intuitively obvious, but requires some work to demonstrate mathematically.

Let's start with the "yes research" condition:

$p(b+g)(T-r) + (1-p)b(T-r) > bT$

We can show that the left hand side of the condition increases with $p$ by manipulating it and differentiating with respect to $p$. Simple manipulation of the left hand side gives us:

$b(T-r) + pg(T-r)$

Differentiating with respect to $p$ gives:

$\frac{\partial}{\partial p}\left[b(T-r) + pg(T-r)\right]$

Which equals:

$g(T-r)$

This is strictly positive as $g > 0$ and $T > r$. Therefore increasing $p$ works in favour of research.
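As a sanity check, the same derivative can be computed symbolically (a minimal sketch using sympy; the variable names match the notation above):

```python
import sympy as sp

# Model parameters; positive=True encodes the assumptions b, g, r, T > 0
b, T, r, p, g = sp.symbols("b T r p g", positive=True)

# Left hand side of the "yes research" condition
lhs = p * (b + g) * (T - r) + (1 - p) * b * (T - r)

print(sp.simplify(sp.diff(lhs, p)))  # g*(T - r): positive whenever T > r
```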

Proof of the effect of having more time to do good

In the What can we learn from this model? section I claimed that increasing the amount of time over which we can/are looking to do good ($T$) works in favour of doing research. This isn't immediately clear from the "yes research" condition as $T$ features on both the left and right hand sides of the condition. After a few algebraic steps, however, my claim should become clear.

Let's start with the "yes research" condition:

$p(b+g)(T-r) + (1-p)b(T-r) > bT$

Dividing each side by $bT$ we get:

$p \frac{b+g}{b} \cdot \frac{T-r}{T} + (1-p)\frac{T-r}{T} > 1$

Both terms on the left hand side include the fraction $\frac{T-r}{T}$. We can divide the top and bottom of this fraction by $T$ to get $1 - \frac{r}{T}$. It is clear that this term increases as $T$ increases, and nothing else on either side of the condition depends on $T$. Therefore increasing $T$ works in favour of research.
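The same monotonicity can be checked symbolically (again a sketch with sympy):

```python
import sympy as sp

b, T, r, p, g = sp.symbols("b T r p g", positive=True)

# Left hand side of the condition after dividing both sides by b*T
lhs_over_bT = p * (b + g) / b * (T - r) / T + (1 - p) * (T - r) / T

# The derivative equals r*(b + g*p)/(b*T**2), which is positive,
# while the right hand side stays at 1 regardless of T
print(sp.simplify(sp.diff(lhs_over_bT, T)))
```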

Comments (4)



Note that often research is an intervention (e.g., alignment research), so I'm internally translating "research" in your usage to something like "prioritization research" (or maybe "meta research" or "intervention research").

Hello Jack,

Good to see you thinking about this! A couple of things. 

First, HLI doesn't claim that funding research is worse than funding existing interventions. Leaving aside that this would be a mad thing to believe a priori - clearly, there are empirical questions here - we do discuss that it would be promising to fund research in e.g. our cause report into global mental health (link to full report here).

HLI hasn't investigated the cost-effectiveness of research because it's been beyond our capacity to do so. We currently have a research team of 3 and have been focusing on evaluating the cost-effectiveness of global health and development interventions directly in terms of wellbeing, e.g. bednets, deworming, cash transfers. Our plan was/is to start by doing 'apples-to-apples' comparisons and then move on to less easily comparable things.

We are, as it happens, now hiring for a grant strategist role to look into other opportunities, which will include doing research, including very probably into the thing you flag - psychedelic-assisted mental health treatments, which I have long been interested in and spoke about back in 2017.

Obviously, I can't comment on how other organisations think about this.

Second, I slightly struggled to follow your analysis. There is an existing literature on how to think about the value of information, which you didn't mention. I wasn't sure if you were doing a conventional VOI analysis or something else.

Hi Michael, thanks for your reply! I apologise that I didn't check with you before saying that you have ruled out research a priori. I will put a note to say that this is inaccurate. Prioritising based on self-reports of wellbeing does preclude funding research, but I'm glad to hear that you may be open to assessing research in the future.

Sorry to hear you struggled to follow my analysis. I think I may have overcomplicated things, but it did help me to work through things in my own head! I haven't really looked at the literature on VOI.

In a nutshell, my model implies that the longer the time period you are willing to consider, the better further research is (all other things equal). This is because if you find a better intervention, you can fund it for the rest of time. So even a very slightly better intervention can deliver vastly more good than funding our best existing intervention. This effect is likely to dominate the opportunity cost of research (i.e. not improving mental health now), provided you're considering a long enough time period.

My tentative view is that someone who doesn't discount the future should almost certainly prefer funding research to funding existing interventions. So I personally would give to top research institutes over giving to StrongMinds. One might ask when one would ever want to stop giving to research. My model implies this might be the case when we're very sceptical we can do better than our best intervention, when we think the likely improvement we can achieve is negligible, when for some reason we're only interested in considering a short time period (e.g. perhaps we're near heat death), or some constellation of these factors. I don't think any of these are likely to be the case now, so I would fund research.

Hopefully that makes some sense! I doubt I’m saying anything ground-breaking here though…

I think the model setup, or at least the clarifications around it, needs tweaking. Namely, you're assuming that the main reason we may discontinue a researched-to-be-positive intervention is intrinsic time preference. But I think it's much more likely that over enough time there will be distributional shift/generalizability issues with old studies.

For one example, if we're all dead, a lot of studies are kind of useless. For another example, studies on the cost-effectiveness of (e.g.) malaria nets and deworming pills becomes increasingly out-of-distribution as (thankfully!) malarial and intestinal worm loads decrease worldwide, perhaps in the future approaching zero.
