
Joseph Richardson

Economics PhD student at Lancaster University and analyst at SoGive.
183 karma · Joined July 2021

Bio

Economics PhD student (Lancaster University) and analyst at SoGive.

Comments
17

Thank you for the reply. Indeed, I was referring to the studies engaging with butter-margarine substitution. However, I think it needs emphasising just how weak those studies are, and therefore that they cannot be trusted as a basis for any policy decision. Additionally, while relevant, I would not want to use Auer and Papies (2020) as a basis for policy decisions either, as it is unclear how comparable those estimates are: they pool across a wide range of markets that are potentially quite different from this one (is the degree of product differentiation the same? groceries could be different to pharmaceuticals or durables). Finally, there is also the issue that the papers they synthesise which do use instruments might not use good ones, but I do not have the time to check that.

Skimming the studies in the meta-analysis, I am rather sceptical that anything can be concluded from them. As far as I can tell, each study investigates how the quantity sold varies with price, but cannot distinguish between demand and supply shocks. If price changes are instead due to demand shocks, the estimated coefficient could be severely biased and could even have the wrong sign. Therefore, without a valid instrumental variable that affects prices only through supply-side factors, these coefficients will not be informative about consumers' true substitution patterns.
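
To illustrate the point (with a made-up data-generating process, not numbers from the studies), here is a minimal simulation of a log-linear demand and supply system. When demand shocks move prices, a naive regression of quantity on price can even get the sign of the elasticity wrong, whereas an instrument that shifts only supply recovers it:

```python
# Illustrative simulation only: why regressing quantity on price conflates
# demand and supply, and why a supply-side instrument is needed.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

beta = -0.8   # true demand elasticity (slope of the log demand curve)
delta = 1.2   # supply slope

u = rng.normal(0, 1.0, n)    # demand shocks
v = rng.normal(0, 0.3, n)    # supply shocks
z = rng.normal(0, 1.0, n)    # observed cost shifter, moves supply only

# Equilibrium of demand q = beta*p + u and supply q = delta*p - z + v
p = (u - v + z) / (delta - beta)
q = beta * p + u

ols = np.polyfit(p, q, 1)[0]                    # naive regression of quantity on price
iv = np.cov(z, q)[0, 1] / np.cov(z, p)[0, 1]    # Wald/IV estimate using z

print(f"true: {beta:.2f}  OLS: {ols:.2f}  IV: {iv:.2f}")
```

With these (arbitrary) parameters the naive estimate even comes out positive, so the whole exercise hinges on whether a candidate instrument really affects prices only through the supply side.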

Although I have no doubts that iron deficiency is a problem, I do not think the evidence linked here provides particularly strong backing for it having massive effects. In particular, estimating the cognitive impact of anything from one study that is only marginally statistically significant yet reports a massive effect size (0.5 standard deviations) seems likely to yield a wildly inaccurate estimate of the true effect. The study possesses all the hallmarks of low statistical power interacting with publication bias.

Furthermore, given that the other studies appear to have small sample sizes (note: I am an economist, not a medic) and the p-values are not far off 0.05, I would be worried about publication bias exaggerating any effects there as well, especially as I suspect studies conducted fifteen to twenty years ago were unlikely to be pre-registered.

To convince me of an effect size, I would want to see a study with p<<0.01 or a meta-analysis of RCTs that addresses the issue of publication bias.
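
To illustrate why I put so much weight on this (the numbers below are purely hypothetical, not taken from the iron trials): if the true effect is modest and the trial is small, the estimates that happen to clear p < 0.05 are systematically inflated, so a marginally significant 0.5 SD finding is exactly what low power plus publication bias would produce.

```python
# Illustrative "winner's curse" simulation with hypothetical numbers.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.15        # assumed true effect in standard-deviation units
n_per_arm = 40            # small trial
sims = 20_000

diffs, ses = [], []
for _ in range(sims):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    diffs.append(treat.mean() - control.mean())
    ses.append(np.sqrt(treat.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm))

diffs, ses = np.array(diffs), np.array(ses)
significant = np.abs(diffs / ses) > 1.96

print(f"power: {significant.mean():.0%}")
print(f"mean estimate overall: {diffs.mean():.2f}")
print(f"mean estimate among significant results: {diffs[significant].mean():.2f}")
```

Under these assumptions, power is only around 10%, and the average estimate among the significant results is roughly 0.5 SD despite a true effect of 0.15 SD.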

It's good to see an animal welfare organisation using serious analysis to guide its interventions, although I'm not entirely clear why the assessment is done on the basis of deaths rather than the integral of welfare over time.

Looking at the numbers, producing a kilogram of carp appears to involve significantly more time in factory farms than producing a kilogram of salmon (both fish spend around three years in a farm, but carp weigh half as much and suffer more premature deaths). Additionally, given the far higher mortality rates, it seems likely that carp welfare is significantly worse than salmon welfare. If both these factors hold, this intervention only backfires under specific (and potentially resolvable) assumptions about the badness of slaughtering wild fish, the welfare of wild fish, and the elasticity of wild fish populations with respect to farmed salmon demand.
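
As a rough back-of-the-envelope illustration (the weights here are hypothetical, chosen only to show the mechanics):

$$\text{farm-years per kg} \approx \frac{\text{years on farm per harvested fish}}{\text{kg per harvested fish}}, \qquad \text{salmon: } \frac{3}{4} = 0.75, \qquad \text{carp: } \frac{3}{2} = 1.5,$$

taking illustrative weights of 4 kg per salmon and 2 kg per carp and ignoring premature deaths, which would widen the gap further.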

Isn't that exactly what we'd expect when the marginal utility of consumption is diminishing? An additional pound in a developing country is probably more likely to be buying something important to a person's welfare (e.g., food or basic shelter) than an additional pound in a developed country (e.g., video games). Furthermore, some of these essentials could themselves be life-extending, which would bias the estimates. Finally, it's possible that life in poverty is bad enough that individuals are willing to forgo less to extend it (I put the least weight on this explanation, but it is plausible).

In each of these cases, the discrepancy in the GDP-adjusted value of a statistical life would be completely rational, and the underlying poverty driving the differences would be what needs addressing.
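
One stylised way to see the first point (an illustrative model, not anything from the post): treat the value of a statistical life as roughly the utility at stake divided by the marginal utility of consumption. With log utility normalised to zero at a subsistence level $s$ and ignoring bequests,

$$\mathrm{VSL} \approx \frac{u(c)}{u'(c)}, \qquad u(c)=\ln(c/s) \;\Rightarrow\; \mathrm{VSL}\approx c\ln(c/s), \qquad \frac{\mathrm{VSL}}{c}=\ln(c/s),$$

so the VSL-to-income ratio rises with consumption even when preferences are identical: someone near subsistence rationally gives up very little consumption for a small mortality-risk reduction.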

Thanks. I think the issue is the use of the word "effect", which in my field (economics) usually implies causality rather than mere association, when referring to the cross-sectional analysis, together with the fact that context was lost when the piece was edited down for the forum.

I'd like to add that related interventions have been successful with policymakers in the developing world, with econometrics training increasing reliance on RCT evidence in policymaking and instruction in Effective Altruism increasing politicians' altruism. Indeed, influencing policymakers may be cost-effective in a wider range of scenarios, as it could be far cheaper and is unlikely to require as much highly visible political messaging.

I think this is interesting research, but I would quibble with your interpretation of the top part of Figure 4 as a causal effect. As far as I can tell, that part is a cross-sectional analysis, which is only valid if individuals with greater knowledge of climate organisations are the same in all relevant ways as individuals with less knowledge who identify with Friends of the Earth to the same extent. This seems unlikely to be true, and indeed it does not have to hold for the fixed-effects analyses that make up the majority of this piece to be unbiased. If I have not misinterpreted something here, I would recommend being much clearer in future about when you are switching between fixed- and random-effects models, as they estimate very different parameters, with fixed effects usually being much more reliable at retrieving causal effects.
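
For anyone unfamiliar with the distinction, here is a minimal simulation (hypothetical data, not the paper's) of why the two approaches can diverge: when an unobserved individual trait drives both the regressor and the outcome, a pooled or cross-sectional regression is badly biased, while the within (fixed-effects) estimator recovers the true coefficient.

```python
# Illustrative simulation: pooled OLS vs. the within (fixed-effects) estimator
# when an unobserved person-level trait confounds the cross-sectional comparison.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_waves = 2_000, 4
beta = 0.5                                                # true within-person effect

a = rng.normal(0, 1, n_people)                            # unobserved trait
x = a[:, None] + rng.normal(0, 1, (n_people, n_waves))    # regressor correlated with the trait
y = beta * x + 2.0 * a[:, None] + rng.normal(0, 1, (n_people, n_waves))

# Pooled (cross-sectional style) OLS: confounded by the unobserved trait
pooled = np.polyfit(x.ravel(), y.ravel(), 1)[0]

# Fixed effects: demeaning within person removes the trait
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
within = np.polyfit(xd.ravel(), yd.ravel(), 1)[0]

print(f"true effect: {beta}, pooled OLS: {pooled:.2f}, fixed effects: {within:.2f}")
```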

Given that it is easier to migrate to developed countries as a qualified doctor, medicine may also be a promising earn-to-give strategy for those in developing countries, should they want to pursue that route.

If you want to become a research assistant to academic economists, I would recommend taking econometrics courses with a coding component. A course using Stata is probably best for this, but R might also be fine. The essential requirement for most of those jobs is being able to clean data and run econometric tests, usually within Stata.
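
For a flavour of the task, here is a minimal sketch of the same workflow in Python (file and variable names are hypothetical; in Stata the analogous steps would be done with import, data-cleaning commands, and regress):

```python
# Minimal sketch of the day-to-day task: load raw data, clean it, run a regression.
# The file and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_raw.csv")                     # hypothetical raw extract
df = df.dropna(subset=["wage", "schooling", "age"])    # drop incomplete records
df = df[df["wage"] > 0]                                # remove implausible values
df["log_wage"] = np.log(df["wage"])

model = smf.ols("log_wage ~ schooling + age + I(age**2)", data=df).fit(cov_type="HC1")
print(model.summary())
```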
