TLDR: Developing new behavioral interventions can be highly cost-effective.

Previous research has shown that, in the aggregate, the social returns of investments in research and development (R&D) are very high for the first world (Jones & Summers, 2021) and even higher for the developing world (Pardey et al., 2016; Kremer et al., 2019; Broussard et al., 2022). Moreover, economic analyses of previous investments in agricultural research (Pardey et al., 2016) and R&D for global development (Kremer et al., 2019; Broussard et al., 2022) show that some R&D projects are much more cost-effective at generating social value than others.

Evaluating the cost-effectiveness of individual R&D projects is difficult because the social benefits of innovation are diffuse and uncertain. Properly accounting for the uncertainty and complexity of generating social value through R&D requires rigorous probabilistic modeling. To make it feasible for funders and cause prioritization researchers to estimate the cost-effectiveness of R&D projects and compare it to the cost-effectiveness of their favorite charities, we developed a method in the probabilistic modeling language Squiggle.

Depending on the context, funders, governments, and researchers need to answer two distinct questions about the cost-effectiveness of R&D (OECD, 2009). The first question concerns the impact of past R&D projects. The answer to that question is known as the project’s ex-post cost-effectiveness. The second question is how cost-effective it would be to fund a proposal for a future R&D project or initiative. The answer to that question is known as ex-ante cost-effectiveness. It is essential to distinguish between the ex-ante versus the ex-post cost-effectiveness of R&D projects. The ex-post cost-effectiveness of an R&D project is the retrospective assessment of how worthwhile the project was after it has delivered a new intervention. Such ex-post assessments can help funders and policymakers evaluate and learn from their past decisions. By contrast, the ex-ante cost-effectiveness of R&D is a prediction (or forecast) of how cost-effective it would be to attempt to develop a new intervention.

In this post, we describe a method for estimating the ex-post cost-effectiveness of R&D projects and provide proof-of-concept for how it can be applied. In the following post, we will do the same for predicting the ex-ante cost-effectiveness of future R&D projects. 

Defining the cost-effectiveness of R&D

The cost-effectiveness ($\text{CE}_{R\&D}$) of a research and development (R&D) project that creates a new intervention ($i_{\text{new}}$) with cost-effectiveness ($c_{\text{new}}$) and scalability ($s_{\text{new}}$) is the increase in moral value caused by the intervention(s) it creates divided by the cost of conducting the R&D project ($C_{R\&D}$), that is

$$\text{CE}_{R\&D} = \frac{\Delta V}{C_{R\&D}},\tag{1}$$

where $\Delta V$ is the increase in moral value and $b$ is the budget that would be available to deploy the new intervention if the R&D project is successful.

The moral value created by the R&D project is the difference between the amount of moral value that can be created when the new intervention is added to the set of interventions $I = \{i_1, \ldots, i_n\}$ that funds can be allocated to and the amount of moral value that can be created using the pre-existing set of interventions:

$$\Delta V = V\left(b;\, I \cup \{i_{\text{new}}\}\right) - V\left(b;\, I\right).\tag{2}$$
The amount of value that can be created with the interventions $i_1, \ldots, i_n$ depends on the available funds ($b$), the interventions' cost-effectiveness levels ($c_1, \ldots, c_n$), and their scalabilities ($s_1, \ldots, s_n$). To simplify the notation, let's assume these interventions are arranged in decreasing order of cost-effectiveness, such that $i_1$ is the most cost-effective intervention and $i_n$ is the least cost-effective intervention. In this case, the amount of good we can do with these interventions is given by the following equation:

$$V(b;\, I) = \sum_{i=1}^{n} c_i \cdot \min(b_i,\, s_i),\tag{3}$$

where $b_i$ is the budget available for the $i$-th intervention after all the more cost-effective interventions have been funded to the level at which their cost-effectiveness drops below the cost-effectiveness of the $i$-th intervention, that is, $b_i = \max\left(0,\, b - \sum_{j=1}^{i-1} s_j\right)$.
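The greedy allocation behind this value function, and the value of adding a new intervention to the option set, can be sketched in a few lines of Python (a minimal illustration with made-up numbers; the function names are ours, not from our Squiggle library):

```python
# Moral value V(b; I): fund interventions in decreasing order of
# cost-effectiveness, each up to its scalability, until the budget runs out.
def moral_value(budget, interventions):
    """interventions: list of (cost_effectiveness, scalability) pairs,
    where cost_effectiveness is value per dollar and scalability is the
    most money the intervention can absorb at that cost-effectiveness."""
    total, remaining = 0.0, budget
    for ce, scalability in sorted(interventions, reverse=True):
        spend = min(remaining, scalability)
        total += ce * spend
        remaining -= spend
        if remaining <= 0:
            break
    return total

# Equation 2: the value of developing a new intervention is the gain in
# achievable moral value when it is added to the option set.
def value_of_new_intervention(budget, existing, new):
    return moral_value(budget, existing + [new]) - moral_value(budget, existing)

# Illustrative example: two pre-existing interventions and a new, more
# cost-effective one that can absorb $30 of a $100 budget.
existing = [(2.0, 50.0), (1.0, 100.0)]
gain = value_of_new_intervention(100.0, existing, (3.0, 30.0))
```

With the numbers above, the new intervention displaces $30 of spending on the least cost-effective option, so the gain is (3 − 1) × 30 = 60 units of value.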

The cost-effectiveness of an R&D project primarily depends on two properties of the new intervention ($i_{\text{new}}$) – its cost-effectiveness ($c_{\text{new}}$) and its scalability ($s_{\text{new}}$) – and the cost of the R&D activities ($C_{R\&D}$). Putting together the equations for the cost-effectiveness of R&D and the value of developing a new intervention, we get the following equation for the ex-post cost-effectiveness of a completed R&D project:

$$\text{CE}_{R\&D} = \frac{V\left(b;\, I \cup \{i_{\text{new}}\}\right) - V\left(b;\, I\right)}{C_{R\&D}}.\tag{4}$$
The equations above assume that we have perfect knowledge of all relevant variables. In practice, there is always some uncertainty about the amount of money invested into the interventions, their cost-effectiveness, and their scalability. We have developed the rigorous probabilistic modeling method described in the following section to address this challenge.
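Concretely, each input becomes a probability distribution, and the cost-effectiveness ratio is estimated by simulation. The sketch below shows the idea in Python with a single displaced alternative; every distribution and number is an illustrative assumption of ours, not an estimate from this post:

```python
import random

random.seed(0)

# Simplified ex-post estimate: the new intervention displaces spending on a
# single alternative with known cost-effectiveness. All distributions are
# illustrative assumptions.
def expected_rd_cost_effectiveness(n_samples=50_000):
    budget = 100e6        # deployable funds in $ (assumed)
    ce_alt = 0.005        # value per $ of the best alternative (assumed)
    total = 0.0
    for _ in range(n_samples):
        ce_new = random.lognormvariate(-5.3, 1.0)   # value per $ (assumed)
        s_new = random.lognormvariate(17.0, 1.0)    # scalability in $ (assumed)
        rd_cost = random.uniform(30_000, 60_000)    # R&D cost in $ (assumed)
        # The project only adds value where the new intervention beats the
        # alternative it displaces.
        value_gain = max(ce_new - ce_alt, 0.0) * min(s_new, budget)
        total += value_gain / rd_cost
    return total / n_samples
```

Note that the expectation is taken over the ratio, sample by sample, rather than dividing the expected value gain by the expected cost.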

A general method for estimating the ex-post cost-effectiveness of completed R&D projects

Overview of the method

As summarized in Table 1, estimating the ex-post cost-effectiveness of a completed R&D project involves three steps. The first and most challenging step is estimating the moral value of the project’s outputs. The second step is calculating the cost of developing the new intervention. The third step combines the (probabilistic) estimates calculated in the first two steps to estimate the expected value of the project’s cost-effectiveness ratio.

Table 1. Procedure for calculating the ex-post cost-effectiveness of a completed R&D project.



Step 1: Estimate the cost-effectiveness of the new intervention.

  a. Estimate the intervention’s effectiveness from empirical data.
  b. Estimate the intervention’s cost.
  c. Combine the two estimates into an estimate of the intervention’s cost-effectiveness.

Step 2: Predict the project’s long-term impact (i.e., the total moral value created by the project).

  a. Estimate the scalability of the new intervention.
  b. Calculate the costs and benefits of evaluating the new intervention in an RCT.
  c. Calculate the expected increase in the moral value created.

Step 3: Calculate the project’s costs and cost-effectiveness.

  a. Obtain information about the project’s expenses (personnel costs, research expenses, and overhead costs) and estimate the total cost.
  b. Calculate the project’s expected benefit-cost ratio from the probability distributions calculated in Steps 2c and 3a.

Step 1: Estimate the cost-effectiveness of the new intervention

Step 1 estimates the intervention’s cost-effectiveness from the data collected in the R&D project. If the R&D project did not measure the intervention’s cost-effectiveness directly, Step 1a entails building a model of how the variables measured in the R&D project relate to the intervention’s effects on the world and then estimating or simulating those effects. Step 1b estimates the cost of deploying the intervention per person who completes it. This includes the cost of directing people to the intervention (e.g., online advertising) and the cost of conducting it.

Following Plant (2022), we recommend measuring cost-effectiveness in well-being adjusted life years (WELLBYs) per $1000. This entails quantifying the benefits of the intervention in terms of its direct and indirect effects on the well-being of people and animals. Our method can also be used with other measures of cost-effectiveness. For a detailed description and demonstration of our approach to estimating the cost-effectiveness of deploying an existing intervention, you can read the second post in this sequence.
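Step 1c can be sketched as a simple Monte Carlo division of a per-person benefit by a per-person cost. The distributions and the hours-per-WELLBY conversion below are illustrative assumptions of ours, not the estimates from this sequence:

```python
import random

random.seed(0)

# One assumed convention for converting hours of happiness into WELLBYs;
# the actual conversion depends on the well-being measure used.
HOURS_PER_WELLBY = 5_000

def sample_cost_effectiveness(n=10_000):
    """Return samples of the intervention's cost-effectiveness in
    WELLBYs per $1000 (Step 1c = Step 1a benefit / Step 1b cost)."""
    samples = []
    for _ in range(n):
        # Step 1a: benefit per person reached, in hours of happiness (assumed).
        benefit_hours = random.gauss(55.0, 50.0)
        # Step 1b: deployment cost per person reached, in $ (assumed).
        cost_per_person = random.lognormvariate(1.2, 0.6)
        wellbys_per_person = benefit_hours / HOURS_PER_WELLBY
        samples.append(wellbys_per_person / cost_per_person * 1000.0)
    return samples

samples = sample_cost_effectiveness()
expected_ce = sum(samples) / len(samples)
```

Because the benefit distribution includes negative values, the resulting cost-effectiveness distribution can be negative too, which is exactly why the whole distribution, not just the expectation, should be propagated to the later steps.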

Step 2: Predict the project’s long-term impact 

The second step usually answers the question, “How much additional social value will society create by deploying the new intervention, above the amount of social value it could create without it?” This is a question about what will happen in the future. Since the future is uncertain, it is answered with a probabilistic causal model of how the intervention might be used. This proceeds in the three sub-steps listed in Table 1.

Step 2a estimates how much money the new intervention can convert into moral value at the cost-effectiveness estimated in Step 1. This number can usually be estimated by multiplying an estimate of the size of the reachable target audience by an estimate of the intervention’s total deployment cost per person reached. The latter may include the cost of advertising the intervention and the cost of conducting it. Our library of reusable functions for performing cost-effectiveness analyses includes a function that performs this calculation for online interventions.
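A minimal sketch of that calculation (not the actual function from our library; all inputs are illustrative assumptions):

```python
import random

random.seed(1)

def sample_scalability(n=10_000):
    """Step 2a: dollars the intervention can absorb = reachable audience
    size x total deployment cost per person reached. All inputs assumed."""
    samples = []
    for _ in range(n):
        audience = random.lognormvariate(18.8, 0.5)   # reachable people (median ~1.5e8, assumed)
        ads = random.lognormvariate(0.0, 0.5)         # $ advertising per person reached (assumed)
        delivery = random.lognormvariate(-0.7, 0.5)   # $ for conducting the intervention per person (assumed)
        samples.append(audience * (ads + delivery))   # $ of scalability
    return samples

scalability = sample_scalability()
expected_scalability = sum(scalability) / len(scalability)
```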

Although good R&D projects estimate the cost-effectiveness of their outputs, there will generally remain considerable uncertainty about the cost-effectiveness of the new intervention. Therefore, Step 2b determines whether the intervention produced by the R&D project should be evaluated in a randomized controlled trial (RCT), what the outcomes could be, and how much it would cost. If an RCT is warranted, the value of having the new intervention is approximately equal to the conditional expected value of the new decision situation minus the cost of running the RCT ($c_{\text{RCT}}$):

$$V_{i_{\text{new}}} \approx \mathbb{E}_{D_{\text{RCT}} \sim P(D_{\text{RCT}} \mid d_{\text{new}})}\left[\, \mathbb{E}\left[ V\left(b;\, I \cup \{i_{\text{new}}\}\right) \mid D_{\text{RCT}},\, d_{\text{new}} \right] \right] - c_{\text{RCT}},$$

where $d_{\text{new}}$ and $D_{\text{RCT}}$ are the estimates of the new intervention’s effectiveness produced by the initial R&D project and the subsequent RCT, respectively, and $P(D_{\text{RCT}} \mid d_{\text{new}})$ refers to the posterior distribution of the unknown value of $D_{\text{RCT}}$ given the known value of $d_{\text{new}}$.[1][2] The third post in this sequence provides detailed information on how you can estimate the costs and benefits of an RCT and determine if it is warranted to conduct one.
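The decision of whether an RCT is warranted can be sketched as a value-of-information simulation: simulate the RCT's estimate, let the funder deploy whichever option then looks better, and compare the expected value with and without the trial. This is a simplified stand-in for the calculation in the third post; all priors and numbers below are our assumptions:

```python
import random

random.seed(2)

def expected_value_with_rct(budget, ce_alt, rct_cost,
                            prior_mu, prior_sigma, rct_sigma, n=20_000):
    """Simulate the RCT estimate from the prior implied by the R&D
    project's estimate, then deploy whichever option looks better."""
    total = 0.0
    for _ in range(n):
        ce_true = random.gauss(prior_mu, prior_sigma)   # unknown true CE (value per $)
        d_rct = random.gauss(ce_true, rct_sigma)        # noisy RCT estimate
        deploy_ce = ce_true if d_rct > ce_alt else ce_alt
        total += deploy_ce * (budget - rct_cost)        # RCT eats into the budget
    return total / n

def expected_value_without_rct(budget, ce_alt, prior_mu, prior_sigma, n=20_000):
    """Without new evidence, the funder deploys whichever option has the
    higher expected cost-effectiveness ex ante."""
    total = 0.0
    for _ in range(n):
        ce_true = random.gauss(prior_mu, prior_sigma)
        deploy_ce = ce_true if prior_mu > ce_alt else ce_alt
        total += deploy_ce * budget
    return total / n

# The RCT is warranted when the expected value with it exceeds the
# expected value without it (illustrative numbers).
with_rct = expected_value_with_rct(1e6, 1.0, 1e4, 0.9, 1.0, 0.3)
without_rct = expected_value_without_rct(1e6, 1.0, 0.9, 1.0)
rct_warranted = with_rct > without_rct
```

With these numbers the new intervention looks slightly worse than the alternative ex ante but is highly uncertain, so the information from the trial is worth far more than its cost.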

Step 2c uses Equations 2 and 3 to derive the expected gain in moral value produced by the innovation the R&D project created from the probability distributions derived in Steps 1a-1c. This requires first identifying the most cost-effective and most scalable alternative interventions. Given this information, we can then calculate the expected moral value (EMV) of R&D according to the following equation:

$$\mathrm{EMV}_{R\&D} = \mathbb{E}\left[ V\left(b;\, I \cup \{i_{\text{new}}\}\right) - V\left(b;\, I\right) \right],$$

where the expectation is taken over the probability distributions of the interventions’ cost-effectiveness levels and scalabilities.
Step 3: Estimate the project’s costs and cost-effectiveness

The third step in estimating the ex-post cost-effectiveness of an R&D project is to relate the benefits of having the intervention (Step 2) to how costly it was to develop it. This requires estimating and summing up all the expenses incurred by the project, including personnel, research, and overhead costs (Step 3a). If possible, this information should be obtained from the principal investigator who conducted the project. If the actual expenses are unavailable, you can use our library to estimate the project’s expenses from information about who worked on the project for how long and the description of the project’s activities.  The final step is calculating the probability distribution of the R&D project’s cost-effectiveness ratio and expected value (Step 3b). This can be done by dividing the moral value created by the project according to Step 2 by the cost estimate obtained in Step 3a. 
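Step 3 can be sketched as follows; the cost components, the 20% overhead rate, and the distributions are illustrative assumptions of ours:

```python
import random

random.seed(3)

def sample_benefit_cost_ratio(value_samples):
    """Step 3b: divide sampled moral value (Step 2c) by sampled total
    project cost (Step 3a). Note that the expected ratio E[value/cost]
    is not the same as E[value]/E[cost]."""
    ratios = []
    for value in value_samples:
        personnel = random.uniform(25_000, 40_000)   # $ (assumed)
        research = random.uniform(3_000, 8_000)      # $ (assumed)
        overhead = 0.2 * (personnel + research)      # 20% overhead rate (assumed)
        ratios.append(value / (personnel + research + overhead))
    return ratios

# Placeholder value samples in WELLBYs; Step 2c would supply these.
values = [random.lognormvariate(12.0, 1.2) for _ in range(10_000)]
ratios = sample_benefit_cost_ratio(values)
expected_ratio = sum(ratios) / len(ratios)
```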

Proof-of-concept application of the method to the R&D project by Baumsteiger (2019)

To illustrate how our method can be applied to real R&D projects and demonstrate that it works, we applied it to the running example of this sequence: Baumsteiger’s online intervention for promoting prosocial behavior (Baumsteiger, 2019). While the previous posts estimate the cost-effectiveness of deploying the intervention (Post 2) and evaluating it (Post 3) after it was developed, this section evaluates the cost-effectiveness of the R&D project that created the intervention. In this post, we briefly review how our method was applied to Baumsteiger’s R&D project and summarize the resulting estimates. If you want to see the details of how we applied our method to Baumsteiger’s R&D project, you are welcome to peruse Section 3.1 of the project’s Observable notebook.

Step 1: How cost-effective is Baumsteiger’s intervention?

In the second post of this sequence, we estimated the cost-effectiveness of deploying the intervention that Baumsteiger (2019) developed. We found that the intervention’s cost-effectiveness is highly uncertain (90% C.I. [-0.45 WELLBYs/$1000; 100 WELLBYs/$1000]) with an expected value of about 24.5 wellbeing-adjusted life years (WELLBYs) per $1000. We derived this estimate by dividing the increase in well-being the intervention might create per person reached by the cost of the large-scale online advertising campaign necessary to reach enough people to achieve those benefits. We modeled the amount of well-being created as the sum of the increase in the well-being of people who engage in more prosocial behavior because of the intervention and the increase in the well-being of those whom they are helping, comforting, or sharing resources with. The resulting estimate of the expected benefit of deploying the intervention was 55 hours of happiness per person reached (90% C.I. [-2.7, 190]). We estimated that the expected cost of achieving this benefit would be about $4.32 per person reached (90% C.I. [$0.64, $11]).

Step 2: How much extra moral value will the intervention create?

To estimate the moral value created by Baumsteiger’s R&D project, we applied Steps 2a-2c from Table 1. Baumsteiger’s online intervention targets emerging and young adults. We, therefore, estimated its scalability (Step 2a) by multiplying our probabilistic estimate of the number of internet users between the ages of 15 and 35 by our probabilistic estimate of the cost of online advertising per person reached. We found that the intervention can absorb about 340 million dollars at a high level of cost-effectiveness (90% C.I. [$6.2M; $1.3B]).

As documented in the third post of this sequence, we found that the uncertainty about the cost-effectiveness of Baumsteiger’s intervention warrants an RCT with 1200 participants (Step 2b). We reached this conclusion by comparing the expected moral value that can be created with versus without the RCT. We then used this information to calculate the probability distribution of the expected increase in moral value that future philanthropists will be able to create due to having access to Baumsteiger’s intervention and knowing its cost-effectiveness (Step 2c). We found that the expected moral value created by Baumsteiger’s R&D project is about 320,000 Wellbeing Adjusted Life Years if the total amount of influenceable funds for improving human well-being is between $10M and $1B. This calculation takes into account that running an RCT on Baumsteiger’s intervention would reduce the budget that could be used to deploy it by about $300k.

Step 3: How cost-effective was Baumsteiger’s R&D project?

To estimate the project’s costs (Step 3a), we asked Rachel Baumsteiger how much time she spent working on the project and her expenses for creating and testing the intervention. We found that the total cost of the entire R&D project was only about $43,000 (95% C.I. [$35k,$52k]).

When we combined the estimates of the project’s costs and benefits (Step 3b), we found that the expected ex-post cost-effectiveness of Baumsteiger's R&D project was about 9.2 Wellbeing-Adjusted Life Years per dollar (i.e., 9200 WELLBYs per $1000). This corresponds to about 9,400 hours of happiness per dollar.

Given that the project's primary cost was Rachel's time, it makes sense to think about its cost-effectiveness in terms of how many hours of happiness Rachel created per hour of working on her project. Our calculations suggest that Rachel created about 340 Wellbeing-Adjusted Life Years per hour. This is about 350,000 hours of happiness per hour of research.

Finally, we compared the ex-post cost-effectiveness of funding the R&D project by Baumsteiger (2019) to the cost-effectiveness of donating to the most cost-effective mental health charity (StrongMinds; Plant, 2022). We found that funding Rachel's R&D project was 130 times as cost-effective as donating to StrongMinds.


The results presented above constitute a proof of concept that our method can be used to estimate the ex-post cost-effectiveness of R&D projects in units of wellbeing-adjusted life years per dollar. According to the available data, the behavioral science R&D project reported by Baumsteiger (2019) was highly cost-effective. This suggests that developing psychological interventions to promote prosocial behavior could be substantially more cost-effective than deploying the best existing interventions to improve global health and well-being.

This interpretation should be taken with a grain of salt because our estimates rely on at least two optimistic assumptions: a) philanthropists will invest in the newly developed intervention if it is shown to be more cost-effective than the currently favored interventions, and b) the evaluation research is conducted correctly. Unfortunately, philanthropists are neither fully informed, entirely rational, nor wholly altruistic. For instance, many people in the EA community are biased against psychological interventions. This might prevent Baumsteiger's intervention from being deployed even if it is shown to be more cost-effective than the best alternative interventions, or it might reduce the amount of funding it receives compared to what pure effective altruism would recommend.

We can model this by putting a probability on the intervention being adopted if it is shown to be superior. The results plotted below show the cost-effectiveness of investing in the R&D project relative to donating to StrongMinds as a function of the probability that donors would respond to evidence showing that the new intervention is superior to previous interventions. If that probability is 10%, 20%, ..., or 100%, then, according to our calculations, investing in the R&D project was 4.4x, 19x, 34x, 48x, ..., or 140x as cost-effective as donating to StrongMinds. This suggests that investing in this R&D project was highly cost-effective, even under pessimistic assumptions about the uptake of psychological interventions.

All numbers concerning the ex-post cost-effectiveness of Baumsteiger's R&D project are estimates based on what we currently know and don't know. Additional information about the cost-effectiveness of Baumsteiger's intervention could fundamentally change these estimates. According to our analysis, the most likely outcome of evaluating her intervention in an RCT would be gaining high confidence that her intervention is less cost-effective than StrongMinds and the best GiveWell charities. In that case, our method's estimate of the cost-effectiveness of her R&D project might drop to almost zero. Conversely, there is a slight chance that the RCT might reveal that Baumsteiger's intervention is more cost-effective than the best existing interventions for promoting (mental) health and well-being. In that case, the estimated cost-effectiveness of Baumsteiger's project would be even higher. Should we change our assessment of the praiseworthiness of Baumsteiger's work depending on the outcomes of a potential RCT? I don't think so. Instead, we should evaluate it in terms of its ex-ante cost-effectiveness. That's precisely what we do in the next post in this sequence. So stay tuned.


  1. Baumsteiger, R. (2019). What the world needs now: An intervention for promoting prosocial behavior. Basic and Applied Social Psychology, 41(4), pp. 215-229.
  2. Broussard, N. H., Chomitz, K. M., Chowdhuri, R. N., Sturla, K., Ssentongo, J., & Zwane, A. P. (2022). Assessing the Social Returns to Innovation for Development: The Global Innovation Fund’s Impact to Date. Working Paper. 
  3. Jones, B. F., & Summers, L. H. (2021). A calculation of the social returns to innovation. In Goolsbee and Jones (Eds). Innovation and Public Policy, pp. 15-39. National Bureau of Economic Research. DOI: 10.3386/w27863. 
  4. Kremer, M., Gallant, S., Rostapshova, O., Thomas, M., Chomit, K., Carbonell, J., ... & Jaffe, A. (2019). Is development innovation a good investment? Which innovations scale? Evidence on social investing from USAID’s Development Innovation Ventures. Working paper.
  5. Lieder, F. (2022). Predicting how the effect of a psychological intervention would change over time. Technical Report.
  6. Lieder, F., & McGuire, J. (2022). Finding before funding: Why EA should probably invest more in research. Effective Altruism Forum.
  7. Pardey, P. G., Andrade, R. S., Hurley, T. M., Rao, X., & Liebenberg, F. G. (2016). Returns to food and agricultural R&D investments in Sub-Saharan Africa, 1975–2014. Food Policy, 65, pp. 1-8.
  8. Plant, M. (2022). Don’t just give well, give WELLBYs: HLI’s 2022 charity recommendation. Effective Altruism Forum.
  9. OECD Directorate for Science, Technology and Innovation (2009). Enhancing Public Research Performance Through Evaluation, Impact Assessment and Priority Setting. 
  1. ^

    While the estimate provided by the R&D project is already known (hence the small letter), the estimate that a future RCT might produce is still unknown, hence the capital letter. 

  2. ^

    Note that the inner expectation is taken with respect to the posterior distribution of $E_{\text{new}}$, given the data produced by the R&D project ($S_{\text{new}}$).