Jacob_Peacock

Comments

New cause area: bivalve aquaculture

I don't find the case against bivalve sentience that strong, especially for the number of animals potentially involved and the diversity of the 10k bivalve species. (For example, scallops are motile and have hundreds of image-forming eyes—it'd be surprising to me if pain wasn't useful to such a lifestyle!)

I feel anxious that there is all this money around. Let's talk about it

I agree, pricing in impact seems reasonable. But do you think this is currently happening? If so, by what mechanism? I think the discrepancies between Redwood and ACE salaries are much more likely explained by norms at the respective orgs and funding constraints than by some explicit pricing of impact.

Effectiveness of a theory-informed documentary to reduce consumption of meat and animal products: three randomized controlled experiments

Thanks for these Peter! (Note that Peter and I both work at Rethink Priorities.)

Do you think your study is sufficiently well powered to detect very small effect sizes on meat consumption?

No, and this is by design as you point out. We did try to recruit a population that may be more predisposed to change in Study 3 and looked at even more predisposed subgroups.

substantially larger than the effects we usually find for animal interventions even on more moveable things

I think we were informed by the results of our meta-analysis, which generally found effects around this size for meat reduction interventions.

Their null result on effect on meat consumption was not at all tightly bounded: -0.3 oz [-6.12 oz to +5.46 oz]

Obviously, this is ultimately subjective, but this corresponds to plus or minus a burger per week, which seems reasonably precise to me. The standardized CI is [−0.17, 0.15], so bounded below a 'small effect'. And, as David points out, less stringent CIs would look even better. But to be clear, I don't have a substantive disagreement here—just a matter of interpretation.

For even more power, we could combine studies 1 & 3 in a meta-analysis (doubling the sample size). Study 3 found a treatment effect of −1.72 oz/week; 95% CI: [−8.84, 5.41], so the meta-analytic estimate would probably be very small but still in the correct direction, with tighter bounds of course.
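To make the combination concrete, here is a minimal inverse-variance (fixed-effect) pooling sketch. The −1.72 oz/week figure is Study 3's; I'm assuming the −0.3 oz [−6.12, 5.46] null result quoted earlier in this thread is Study 1's, which is an assumption, not something stated in the study.

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover a standard error from a 95% confidence interval."""
    return (upper - lower) / (2 * z)

# (estimate, standard error) pairs in oz/week
studies = [
    (-0.30, se_from_ci(-6.12, 5.46)),  # Study 1 (assumed, from the quote above)
    (-1.72, se_from_ci(-8.84, 5.41)),  # Study 3
]

# Fixed-effect pooling: weight each study by the inverse of its variance
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled estimate: {pooled:.2f} oz/week, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Under these assumptions the pooled estimate lands between the two study estimates with a narrower confidence interval than either study alone, matching the intuition in the comment.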

explained just by the fact that you could find effects on the moveable attitudes

Just to clarify, we measured attitudes in all 3 studies. We found an effect on intentions in Study 2, where there wasn't blinding and follow-up was immediate. Studies 3 & 4 (likely) didn't find effects on attitudes.

I'd be curious to estimate what effect size would we be looking at if say 3-5% of people stopped eating meat (an optimistic estimate IMO).

Just roughly taking David Reinstein's number of 80 oz per week (we could use our control group's mean for a better estimate) and assuming no other changes, 1% abstention would give a 0.8 oz effect size and 5% a 4 oz one. So definitely under-powered for the low end, but potentially closer to detectable at the high end. (And keep in mind this is at 12-day follow-up; we should expect that 1% to dwindle further at longer follow-up. With figures this low I would be pessimistic about the overall impact. That said, other successful meat reduction interventions don't seem to have worked mostly through a few individuals totally abstaining!)
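The arithmetic above can be written out directly: if a fraction p of the treatment group abstains entirely and everyone else is unchanged, the average treatment effect is just p times baseline consumption. The 80 oz/week baseline is David Reinstein's figure quoted above; the function name is mine.

```python
BASELINE_OZ_PER_WEEK = 80.0  # David Reinstein's rough figure

def implied_effect(abstention_rate, baseline=BASELINE_OZ_PER_WEEK):
    """Average reduction (oz/week) if `abstention_rate` of subjects quit meat entirely."""
    return abstention_rate * baseline

for p in (0.01, 0.03, 0.05):
    print(f"{p:.0%} abstention -> {implied_effect(p):.1f} oz/week effect")
# 1% abstention -> 0.8 oz/week, 5% -> 4.0 oz/week, as in the comment
```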

corresponds to what a t-test is assessing

I wouldn't expect issues in testing the difference in means given our sample sizes. But otherwise I'm not sure what you're suggesting here.
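The power question running through this thread can be made concrete with a standard minimum-detectable-effect (MDE) calculation for a two-sample comparison of means, using the normal approximation. The per-arm sample sizes and the 40 oz/week outcome SD below are hypothetical placeholders for illustration, not the study's actual figures.

```python
from statistics import NormalDist

def min_detectable_effect(n_per_arm, sd, alpha=0.05, power=0.80):
    """Smallest true difference in means detectable at the given alpha and power
    in a two-arm design with equal n per arm (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = NormalDist().inv_cdf(power)
    return (z_alpha + z_power) * sd * (2 / n_per_arm) ** 0.5

# e.g. with a hypothetical SD of 40 oz/week:
for n in (200, 500, 1000):
    print(f"n={n}/arm -> MDE ~ {min_detectable_effect(n, 40):.1f} oz/week")
```

Under these illustrative numbers the MDE stays well above the ~0.8 oz effect implied by 1% abstention, which is consistent with the "under-powered for very small effects by design" point made earlier in the thread.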

Effectiveness of a theory-informed documentary to reduce consumption of meat and animal products: three randomized controlled experiments

Yes, we did, and found no meaningful increases in interest in animal activism, including voting intentions. Full questions are available in the supplementary materials.

Effectiveness of a theory-informed documentary to reduce consumption of meat and animal products: three randomized controlled experiments

Thank you for taking the time to engage, much appreciated! Forgive my responding quickly, and feel free to ask for clarification if I miss anything:

  • Definitely, could be different results with different docs. But ours showed a much stronger effect than the average of similar interventions we found in a previous meta-analysis, suggesting Good for Us is pretty good. It is probably better than Cowspiracy on changing intentions, with longer studies of excerpts of Cowspiracy also finding no effect.
  • Agree, especially with your sub-point. We also tried to recruit populations more likely to be affected in Study 3. Also, see sources in my previous point.
  • Maybe, but it doesn't seem likely since there wasn't a change in importance of animal welfare or other measures of attitudes. I would generally expect effects to decay over time rather than get stronger; our meta-analysis (weakly) supports this hypothesis in that longer time points showed smaller effects. The usefulness of a 2-3 month time point would mostly depend on attrition, in my opinion.
  • I would vote for other interventions. Classroom education in colleges and universities seems good, as does increasing the availability of plant-based options in food service and restaurants.
How can we make Our World in Data more useful to the EA community?

+1 as well. I would emphasize that the number of animals alive at any given time is significantly more important than slaughter counts, as many animals die prior to slaughter.

Health Behavior Interventions Literature Review

Ah, I see—in that case, it makes a lot of sense for you to pursue these case studies. I appreciate the time you invested to get to a double crux here, thanks!

Health Behavior Interventions Literature Review

Thank you for your replies, Jamie, I appreciate the discussion. As a last point of clarification, when you say ~40%, does this mean, for example, that if a priori I was uninformed on momentum v. complacency and so put 50/50% credence on either possibility, a series of case studies might update me to 90/10%?

When I'm thinking about the value of social movement case studies compared to RCTs, I'm also thinking about their ability to provide evidence on the questions that I think are most important

I don't disagree—but my point with this intuition pump is the strength of inference a case study, or even series of case studies, might provide on any one of those questions.

Health Behavior Interventions Literature Review

To clarify, I suspect we have some agreement on (social movement) case studies: I do think they can provide evidence towards causation—literally that one should update their subjective Bayesian beliefs about causation based on social movement case studies. However, at least to my understanding of the current methods, they cannot provide causal identification, thus vastly limiting the magnitude of that update. (In my mind, to probably <10%.)

What I'm struggling to understand fundamentally is your conception of the quality of evidence. If you find the quality of evidence of the health behavior literature low, how does that compare to the quality of evidence of SI's social movement case studies? One intuition pump might be that the health behavior literature undoubtedly contains scores of cross-sectional studies, which themselves could be construed as each containing hundreds of case studies, and these cross-sectional studies are still regarded as much weaker evidence than the scores of RCTs in the health behavior literature. So where then must a single case study lie?

For what it's worth, in reflecting on an update which is fundamentally about how to make causal inferences, it seems like being unfamiliar with common tools for causal inference (e.g., instrumental variables) warrants updating towards an uninformed prior. I'm not sure if these tools will restore your confidence, but I'd be interested to hear.

Health Behavior Interventions Literature Review

Hi Jamie, I'm glad to see this work out and will look forward to reading it in more depth. Congratulations—I'm sure it was hugely labor intensive! In my quick read, I was confused by this point:

Weaknesses of the health behavior literature, despite decades of research and huge amounts of funding, suggest serious limitations of experimental and observational research in other contexts, such as the farmed animal movement.

I think this is too pessimistic and somewhat short-term thinking. Instead, I would explain the weakness of the current health behavior literature by a few factors:

  1. Foremost, I think this is a symptom of the extraordinary difficulty of empirical research. It's simply hard to do high-quality research and we are still very much actively discovering what it means to do high-quality research.
  2. Decades just aren't that long of a time to spend on a research subject, especially in light of the first point. Many contemporary research questions have been known and unanswered for millennia. For example, we have been studying how to extend human life, largely without success, since ancient times.
  3. Various cultural factors in academia inhibit the conduct of high-quality studies. As a few examples: funders sometimes simply won't cut a check big enough to fund a single high-quality study, but will fund several smaller, lower-quality ones; some subfields have simply accepted low-quality study designs as a fact of life and made only modest efforts to improve them; a publish-or-perish mentality incentivizes producing many small studies on diverse topics, rather than one high-quality study; and highly powered studies are more likely to return a null result, thus damaging publication prospects.

Of course, none of these are easy to surmount, but I don't see reason to give up on trying to conduct high-quality studies, especially with few alternatives available. Which brings me to my second question:

This makes other types of evidence, such as social movement case studies, relatively more promising.

To my (limited) understanding, case studies are by and large a type of observational research, since they rely on analyzing the observed outcomes of, for example, a social movement, without intervention. It seems like social movement case studies are then limited generally, like most observational research, to understanding correlations and motivating causal theories about those correlations, rather than measuring causation itself. Furthermore, case studies are usually regarded as low-quality evidence and form the base of the evidence pyramid in epidemiology. As such, I'm not sure how the difficulty of collecting high-quality evidence then implies we should collect more of what is usually regarded as low-quality evidence.

This also seems like a rather broad proclamation about the usefulness of experimental and observational studies—have you considered the merits of regression discontinuity designs, instrumental variables estimation, propensity score matching and prospective cohort studies, for example? All of these seem like designs worth considering for EAA research but don't seem broadly explored either here or in Sentience Institute's foundational question "EAA RCTs v intuition/speculation/anecdotes v case studies v external findings".
