
Thanks for this, great work. I would like to see an analysis that explicitly takes the animal welfare impacts into account, just using the Rethink Priorities values for the value of a dog's lifespan saved and the welfare cost of the egg-based treat. It seems to me that the benefits of an animal vaccination campaign would largely be concentrated on the animals. Since dogs are omnivores, I would like to see them use plant-based lures instead; as a dog owner, I know that peanut butter or any number of other options would have worked well.

You cannot use the distribution for the expected value of an average therapy treatment as the prior distribution for a SPECIFIC therapy treatment, as there is a large amount of variation between possible therapy treatments that this misses. Your prior here implies a 99%+ chance that StrongMinds works better than GiveDirectly before looking at any actual StrongMinds results, which is a wildly implausible claim.

You also state: "If one holds that the evidence for something as well-studied as psychotherapy is too weak to justify any recommendations, charity evaluators could recommend very little." Nothing in Gregory's post suggests that he thinks anything like this: his meta-analysis, which doesn't remove outliers without good cause, gives a g of ~0.5, and a g of ~0.5 suggests that individuals suffering from depression would likely benefit greatly from seeking therapy. There is a massive difference between claiming "the evidence behind psychotherapy is too weak to justify any recommendations" and claiming "this particular form of therapy is not vastly better than GiveDirectly with probability higher than 99% before even looking at RCT results". Dismissing Gregory's claims here on the basis of a seemingly false statement about his beliefs seems pretty offensive to me.

Would be interesting to see an SEM analysis of this, looking at the impact on a latent IQ measure behind these test results, each of which tries to get at it separately. This model should have much better power to detect an effect and would avoid the multiple-hypothesis-testing issues of looking at the different tests separately. In lavaan-style syntax, the model would be of the form:

```
# latent variable definition
IQ =~ RAPM + BDS + OtherTests

# regression
IQ ~ Creatine + Age + Vegetarian + OtherCovariates
```

As much as I think family planning charities like MSI do good by preventing the pain of unwanted pregnancies for women, I do not think that we should factor animal welfare concerns into family planning funding. The analysis assumes that an extra human will have the same impact on meat consumption as an average human, but this isn't true. One extra meat consumer will raise the price of meat, reducing the amount that others eat, and meat production is far from perfectly elastic. One could argue there is some chance they might go on to work in the meat industry and raise supply that way, but at the current moment meat prices seem to me to depend more on available land than on labor supply, so that seems unlikely. Additionally, due to agglomeration effects, an extra human may reduce the time until farmed animal meat is fully replaced by plant-based or cultured meat. I do not think we should assume that bringing an extra human into the world has a net negative impact on farmed animal welfare in expectation.
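The price-feedback point can be made concrete with the standard cumulative-elasticity approximation: the net rise in production from one extra unit of demand is roughly e_s / (e_s + |e_d|). A minimal Python sketch, with elasticities that are illustrative assumptions rather than estimates from the post:

```python
# Assumed elasticities for illustration only; real values vary by product.
e_supply = 0.5   # supply elasticity of meat (assumed)
e_demand = -0.7  # demand elasticity of meat (assumed)

# Fraction of an extra consumer's demand that becomes new production,
# after higher prices push other consumers to cut back.
pass_through = e_supply / (e_supply + abs(e_demand))

print(f"fraction of extra demand met by new production: {pass_through:.2f}")
```

With these assumed numbers, only about 42% of the extra demand translates into additional meat production, rather than the 100% that an "average consumer" calculation implicitly assumes.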

It's a very interesting study and a compelling idea. I think the big issue is that we need to look at the marginal impact of extra dollars on cancer research, whereas this looks at the average impact of money spent on cancer drug trials. The expected effectiveness of additional money should be lower, as the most promising drugs are more likely to already have funding.

When running a meta-analysis, you can use either a fixed-effect assumption (that all variation between studies is just sampling error) or a random-effects assumption (that studies differ in their "true effects"). Therapy treatments differ greatly, so you have to use a random-effects model in this case. The prior you use for StrongMinds' impact should then have a variance that is the sum of the variance in the estimate of the average therapy treatment's effect AND the variance among different treatments' effects; both numbers should be available from a random-effects meta-analysis. I'm not quite sure what HLI did exactly to get their prior for StrongMinds here, but for some reason its variance seems WAY too low, and I suspect that they neglected the second type of variance, which they should have gotten from the random-effects meta-analysis.
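To make the two variance components concrete, here is a minimal Python sketch with made-up numbers (the pooled effect, its standard error, and the between-study SD tau are illustrative assumptions, not HLI's or Gregory's actual estimates):

```python
import math

# Illustrative numbers only (assumed, not actual estimates):
mean_g = 0.5    # pooled effect of psychotherapy (Hedges' g)
se_mean = 0.05  # standard error of the pooled mean
tau = 0.3       # between-study SD (heterogeneity) from the random-effects model

# Too-narrow prior for a SPECIFIC new therapy: uses only the
# uncertainty in the pooled mean.
sd_wrong = se_mean

# Correct prior: also add the between-treatment heterogeneity.
sd_right = math.sqrt(se_mean**2 + tau**2)

# Approximate 95% prior interval for a specific treatment's true effect.
lo = mean_g - 1.96 * sd_right
hi = mean_g + 1.96 * sd_right

print(f"prior SD ignoring heterogeneity:  {sd_wrong:.3f}")
print(f"prior SD including heterogeneity: {sd_right:.3f}")
print(f"95% prior interval for a specific treatment: ({lo:.2f}, {hi:.2f})")
```

With these assumed numbers, the correct prior interval spans roughly (-0.10, 1.10): wide enough that the prior alone cannot put 99%+ probability on any specific treatment beating a benchmark, which is exactly the point about StrongMinds above.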