Jacob_Peacock


I think I agree with the central theses here, as I read them: indeed, ideally we would (1) measure what happens to people individually, rather than on average, due to taking psychiatric drugs, and (2) measure an outcome that reflects people's aggregate preference for their experience of life with the drug versus the counterfactual experience of life without the drug.

However, I think these problems are harder to resolve than the post suggests. Neither can be measured directly (outside circumscribed / assumption-laden situations) due to the fundamental problem of causal inference, which is not resolved by people's self-reported estimates of individual causal effects. There are better approaches to consider than comparing averages, but, in my opinion, this is the default for practical causal inference reasons, rather than a failure to take phenomenology seriously.

I agree that (2) is more tractable; however, these improvements are non-trivial to implement. Continuing your example, if we reanalyze a trial to focus on patients with high baseline akathisia, who may be most affected by either a benefit or a harm, we have far fewer patients to analyze. What was once an adequately powered trial to detect a moderate effect in the full sample is now under-powered. The same issue arises when analyzing complex interactions: precisely estimating interaction effects generally requires far larger sample sizes than estimating main effects. So a trial designed to measure a main effect of a drug is unlikely to be sufficiently powered to estimate several interaction effects.
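
To illustrate the power problem with a hypothetical simulation (not a reanalysis of any actual trial): a two-arm trial with 100 patients per arm has roughly 80% power to detect a standardized effect of 0.4, but a subgroup of 20 per arm does not. The numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_power(n_per_arm, effect, sims=2000, z_crit=1.96):
    """Fraction of simulated two-arm trials (Normal(0, 1) control vs.
    Normal(effect, 1) treatment) where a z-test on the difference in
    means is significant at roughly the 5% level."""
    se = np.sqrt(2.0 / n_per_arm)  # std. error of the mean difference (sigma = 1)
    rejections = 0
    for _ in range(sims):
        diff = (rng.normal(effect, 1.0, n_per_arm).mean()
                - rng.normal(0.0, 1.0, n_per_arm).mean())
        rejections += abs(diff) / se > z_crit
    return rejections / sims

full_sample = simulated_power(100, 0.4)  # roughly 0.8: adequately powered
subgroup = simulated_power(20, 0.4)      # roughly 0.25: badly underpowered
```

The same simulated effect size goes from detectable to mostly undetectable purely because the subgroup shrinks the sample; interaction effects face an even steeper version of this problem.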

For either issue, the data is not already there in my view. That said, I may not be fully understanding what exactly you propose doing; are there examples of "[using] criticality and complex systems modeling tools to deal with symptom interactions" in a healthcare context that illustrate this sort of analysis?

I think this article makes its case compellingly, and I appreciate that you spell out the sometimes subtle ways uncertainty gets handled.

Did the question "Why should justification standards be the same?" arise in a sociological / EA movement context? My interpretation (from the question wording alone) would be more epistemic, along the lines of the unity of science. In my view, standards for justification have to be standardized; otherwise, they wouldn't be standards: one could just offer an arbitrary justification to any given question.

Hi Wyatt, it's indeed confusing, but journals often plan their issues several months in advance. So this article is published online and will appear in a January 2026 issue of Appetite, thus the future publication date.

Hi Dorsal, thank you for your kind words about the work and thoughtful questions. I agree with Seth's reply and would add that, for (A), there is some evidence of this effect for plant-based foods in general, for example: Garnett 2019, Parkin 2021, and Pechey 2022. I don't know of any studies which have tested adding animal-based meats.

Thank you for writing this thoughtful piece! I especially appreciate the transparency in reasoning and the careful attention to empirical evidence (some of which I’ve contributed to).

I wanted to share a few notes on the displacement section—specifically, some important papers that weren’t mentioned and a few potential misinterpretations of others:

  1. Carlsson 2022 presents a hypothetical discrete choice experiment on lower prices for plant-based meat, finding that 30% of consumers in Sweden would not choose plant-based meat even if it were free. This highlights a key limitation of many other discrete choice experiments: they may not be testing sufficiently low prices and could be subject to floor effects. In other words, consumers' willingness to pay for plant-based meat may be even lower than some studies estimate.
  2. Lusk 2022 is strictly a modeling paper (as you note) and effectively assumes a cross-price elasticity between plant- and animal-based meats. The specific quantities used are derived from hypothetical discrete choice experiments, but Lusk 2022 itself does not adduce new empirical evidence of displacement to my knowledge. (However, it does provide a useful review of cross-price elasticities.)
  3. Mendez 2023 reviews cross-price elasticities between butter and margarine. It's important to note that cross-price elasticities do not directly measure displacement—they measure (in theory) how a price change in one product affects demand for another. While price displacement is one possible mechanism, other mechanisms could also be at play.
  4. Grundy 2022 finds evidence supporting some interventions using mycoprotein-based meat alternatives. However, if I recall correctly, these are intensive, multi-component interventions that go beyond simply making mycoprotein-based meat alternatives available. As a result, it’s unclear whether the meat alternative itself was the causal factor in the outcomes observed.
  5. Malan 2022 is, in my opinion, one of the strongest studies on this topic. It finds either null or very small effects under reasonable analyses (see Peacock 2024).
  6. Several observational studies use grocery store scanner data to measure behavioral displacement between plant- and animal-based meat. Neuhofer 2024 is one such study, cited for its findings on the proportion of consumers purchasing both plant- and animal-based meat. It also provides an observational estimate suggesting that displacement is either small or nonexistent. Additional studies in this vein include Cuffey 2022, Gordon 2023, and Meyer 2024, some of which explore potential sources of exogeneity. To my recollection, each finds either null or very small displacement effects. However, since some of these papers rely on the same data sources, they shouldn't necessarily be treated as independent pieces of evidence.
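
For readers less familiar with cross-price elasticities, here is a quick sketch of what such an estimate does and does not say. The numbers are hypothetical, not drawn from any of the papers above:

```python
# Hypothetical numbers, for illustration only.
cross_price_elasticity = 0.10   # % change in animal-meat demand per 1% change in plant-meat price
plant_price_change_pct = -20.0  # plant-based meat gets 20% cheaper

# Implied change in animal-based meat demand, holding all else equal:
animal_demand_change_pct = cross_price_elasticity * plant_price_change_pct
print(animal_demand_change_pct)  # -2.0, i.e. a 2% fall in demand
```

Even this implied effect runs through prices only; as noted above, a cross-price elasticity does not by itself establish that purchases of plant-based meat displace purchases of animal-based meat.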

Overall, I think this section may place too much emphasis on self-reported surveys that may tend toward finding effects, rather than studies that measure behavioral outcomes.

I'm thinking more about this interpretation, but I'm not sure it is correct because WFP's calculations are designed to be conservative in estimating the welfare improvements and exclude various welfare harms. For example, it looks like the broiler estimates exclude welfare harms from transport to slaughter. When these hours of suffering are added back in, the ratio between the two scenarios can go down.

As a hypothetical example, suppose BCC chickens are currently estimated to suffer 50 hours, while non-BCC chickens suffer 100 hours: a 50% reduction in suffering. If we add in 10 hours of suffering from transport for non-BCC chickens and only 2 hours for BCC chickens (as they are believed to be more heat tolerant), the suffering reduction increases to 53% (58 of 110 hours averted). So while excluding harms from transport to slaughter keeps the absolute difference in hours suffered conservative (50 = 100 − 50 < 58 = 110 − 52), it does not necessarily keep the ratio conservative (a 50% vs. 53% suffering reduction).
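
The arithmetic above can be made explicit. These are the same hypothetical numbers, not WFP's actual estimates:

```python
# Hypothetical hours of suffering per bird, from the example above.
bcc, non_bcc = 50, 100                # excluding transport to slaughter
bcc_t, non_bcc_t = 50 + 2, 100 + 10  # including transport (BCC assumed more heat tolerant)

abs_diff = non_bcc - bcc              # 50 hours averted (excl. transport)
abs_diff_t = non_bcc_t - bcc_t        # 58 hours averted (incl. transport)

reduction = 1 - bcc / non_bcc         # 0.50: a 50% suffering reduction
reduction_t = 1 - bcc_t / non_bcc_t   # ~0.53: the reduction grows once transport is included
```

The absolute difference only grows when the excluded harms are added back (conservative), but the ratio moves as well, so conservatism in one does not guarantee conservatism in the other.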

I think this is fine when comparing between different welfare levels for species, but I suspect it means they cannot be used to compare directly to non-existence?

[Tagging @saulius as well, since this seems relevant to whether cage-free is 'still pretty bad'.]

Thank you for writing this! It was very helpful to learn how these initiatives went, and I found myself agreeing with much of what you wrote.

I am curious to learn more about what costly signals you had in mind when you write:

> politicians wanting to make extremely costly signals to show how much they support animal agriculture — two states have already preemptively banned the sale of cultivated meat.

My initial thinking was that these signals were actually pretty low-cost for these politicians: cultivated meat isn't salient to the constituency, there are no sales in the state, and the industry is very small, so a ban inflicts little real cost. But I'm curious what else I should consider.

Hi Bruce, thank you for your questions. I’m leading this project and made the decision to recruit volunteers, so thought I’d be best positioned to respond. (And Ben’s busy protesting for shrimp welfare today anyway!)

  1. Did the team consider a paid/minimum wage position instead of an unpaid one? How did it decide on the unpaid positions?

Yes, we would prefer to offer additional paid positions. However, given the budget for this project, we were not able to offer such positions. We regularly receive unsolicited inquiries from people interested in volunteering for our research. There is not always a good fit, but since this project is highly modular, allowing people to contribute meaningfully with just a few hours of their time, we decided to provide a formal volunteer opportunity.

  2. Is the theory of change for impact here mainly an "upskill students/early career researchers" thing, or for the benefits to RP's research outputs?

The primary theory of change is to improve the evidence-base for interventions to reduce animal product usage, thus allowing more and better interventions to be implemented and reducing the numbers of animals harmed by factory farming. RP’s research outputs are a mediator in this theory of change. The volunteer opportunity itself also represents an opportunity to upskill, but ultimately the goal for all involved is to benefit non-human animals.

  3. What is RP's current policy on volunteers?

RP occasionally considers and engages with volunteers for some projects, especially where relatively small time-limited contributions are possible.

  4. Does RP expect to continue recruiting volunteers for research projects in the future?

In practice, this will depend on the project and whether there are other opportunities that would be an appropriate fit.
