There is an extensive research literature on the effectiveness of various interventions for changing health behaviors, such as dietary habits, physical activity, and smoking. Existing reviews and overviews tend to aggregate research on the effects of specific interventions on particular health behaviors. This review provides a largely qualitative, non-statistical summary of the evidence for the effectiveness of intervention types across health behaviors, by aggregating 706 research items (mostly systematic reviews and meta-analyses) based on the strength of evidence. This review makes a number of subjective judgment calls and uses novel methods of evaluation in order to quickly digest and summarize an extremely large evidence base. It focuses on applications to the farmed animal movement, especially to the behavioral change of reducing animal product consumption. In general, while the health behavior literature includes a very large number of studies, there is much inconsistency in wording, methodology, and subject matter, which makes it difficult to extract useful insights for behavior change advocates. However, some conclusions are warranted. Key findings include that almost all types of health behavior interventions targeted at individuals or small groups seem likely to have effect sizes conventionally interpreted as “small” or “very small,” that their effect sizes tend to be even smaller in the long term, and that interventions with educational and behavioral components outperform solely educational interventions.

See the full post here. (Shared as a linkpost because the bulk of the report consists of large tables that I thought might be difficult to reformat for the Forum.)


Hi Jamie, I'm glad to see this work come out and will look forward to reading it in more depth. Congratulations—I'm sure it was hugely labor-intensive! In my quick read, I was confused by this point:

Weaknesses of the health behavior literature, despite decades of research and huge amounts of funding, suggest serious limitations of experimental and observational research in other contexts, such as the farmed animal movement.

I think this is too pessimistic and reflects somewhat short-term thinking. Instead, I would attribute the weaknesses of the current health behavior literature to a few factors:

  1. Foremost, I think this is a symptom of the extraordinary difficulty of empirical research. It's simply hard to do high-quality research, and we are still very much in the process of discovering what it means to do it well.
  2. Decades just aren't that long a time to spend on a research subject, especially in light of the first point. Many contemporary research questions have been known and unanswered for millennia. For example, we have been studying how to extend human life, largely without success, since ancient times.
  3. Various cultural factors in academia inhibit the conduct of high-quality studies. As a few examples: funders sometimes won't cut a check big enough to fund a single high-quality study, but will fund several smaller, lower-quality ones; some subfields have simply accepted low-quality study designs as a fact of life and made only modest efforts to improve them; a publish-or-perish mentality incentivizes producing many small studies on diverse topics, rather than one high-quality study; and highly powered studies are more likely to return a null result, thus damaging publication prospects.

Of course, none of these are easy to surmount, but I don't see reason to give up on trying to conduct high-quality studies, especially with few alternatives available. Which brings me to my second question:

This makes other types of evidence, such as social movement case studies, relatively more promising.

To my (limited) understanding, case studies are by and large a type of observational research, since they rely on analyzing the observed outcomes of, for example, a social movement, without intervention. It seems like social movement case studies are then limited generally, like most observational research, to understanding correlations and motivating causal theories about those correlations, rather than measuring causation itself. Furthermore, case studies are usually regarded as low-quality evidence and form the base of the evidence pyramid in epidemiology. As such, I'm not sure how the difficulty of collecting high-quality evidence then implies we should collect more of what is usually regarded as low-quality evidence.

This also seems like a rather broad proclamation about the usefulness of experimental and observational studies—have you considered the merits of regression discontinuity designs, instrumental variables estimation, propensity score matching and prospective cohort studies, for example? All of these seem like designs worth considering for EAA research but don't seem broadly explored either here or in Sentience Institute's foundational question "EAA RCTs v intuition/speculation/anecdotes v case studies v external findings".

Thanks Jacob! It's great to see what was interesting / useful / confusing etc. for people, and it's generally quite hard to get detailed feedback, so I appreciate you taking the time to read and reply.

I'm sure we could debate these topics at length; that's a tempting prospect, but I'll just reply to some specific parts here.

I don't see reason to give up on trying to conduct high-quality studies

I still think RCTs have their uses. It's just that they can be limited in various ways and that other research methods have some advantages over them, as discussed in the "EAA RCTs v intuition/speculation/anecdotes v case studies v external findings" section you refer to.

To summarise my view update from this review in other terms:

Lots of money has gone into health behaviour research. I expected the health behaviour literature to come to some fairly strong conclusions about the value of some intervention types over others. This didn't seem to be the case, given various limitations and inconsistencies in the research. Hence, I'm less optimistic about the usefulness of conducting comparable research now, relative to other types of research that we could conduct.

It seems like social movement case studies are then limited generally, like most observational research, to understanding correlations and motivating causal theories about those correlations, rather than measuring causation itself.

I don't agree with this. I think that you can look for evidence that X caused Y in a particular case, rather than just that X preceded Y. (Of course, the evidence that X caused Y is often very weak or nonexistent.) I discuss that in more depth here. You then have the separate questions of how much weight we should place on strategic knowledge from individual historical cases, and how likely it is that correlations will replicate across movements. It's hard to describe answers to those questions in precise and unambiguous terms, but I'd answer them with something like "not a lot" and "quite likely," respectively.

have you considered the merits of regression discontinuity designs, instrumental variables estimation, propensity score matching and prospective cohort studies, for example

I have never heard of these things, let alone considered their merits! I don't think that invalidates the view update I describe above, though if I look into these things more, it might restore my confidence?

To clarify, I suspect we have some agreement on (social movement) case studies: I do think they can provide evidence towards causation—literally that one should update their subjective Bayesian beliefs about causation based on social movement case studies. However, at least to my understanding of the current methods, they cannot provide causal identification, thus vastly limiting the magnitude of that update. (In my mind, to probably <10%.)

What I'm struggling to understand fundamentally is your conception of the quality of evidence. If you find the quality of evidence of the health behavior literature low, how does that compare to the quality of evidence of SI's social movement case studies? One intuition pump might be that the health behavior literature undoubtedly contains scores of cross-sectional studies, which themselves could be construed as each containing hundreds of case studies, and these cross-sectional studies are still regarded as much weaker evidence than the scores of RCTs in the health behavior literature. So where then must a single case study lie?

For what it's worth, in reflecting on an update which is fundamentally about how to make causal inferences, it seems like being unfamiliar with common tools for causal inference (e.g., instrumental variables) warrants updating towards an uninformed prior. I'm not sure whether they'll restore your confidence, but I'd be interested to hear.

However, at least to my understanding of the current methods, they cannot provide causal identification, thus vastly limiting the magnitude of that update. (In my mind, to probably <10%.)

Interesting. Let's imagine a specific question that we might be interested in, e.g. "Do incremental improvements (e.g. to the welfare of animals or prisoners) encourage momentum for further change, or complacency?" ~10% sounds about right to me as an upper limit on an update from a single case study. But a case study will provide information on far more questions of interest than this single question. And as we look at several case studies and start to compare between them, I can imagine an update of more like ~40% from historical social movement evidence in general on any single question of interest.

One intuition pump might be that the health behavior literature undoubtedly contains scores of cross-sectional studies, which themselves could be construed as each containing hundreds of case studies

That may be so, but they would be providing evidence on very different types of cause-and-effect relationships: e.g. the effects of motivational interviews on dietary behaviour, versus the effects of incremental improvements (e.g. to the welfare of animals or prisoners) on a movement's momentum for further change. When I'm thinking about the value of social movement case studies compared to RCTs, I'm also thinking about their ability to provide evidence on the questions that I think are most important.

Thank you for your replies, Jamie; I appreciate the discussion. As a last point of clarification: when you say ~40%, does this mean, for example, that if I were a priori uninformed on momentum vs. complacency and so put 50/50% credence on either possibility, a series of case studies might potentially update that to 90/10%?

When I'm thinking about the value of social movement case studies compared to RCTs, I'm also thinking about their ability to provide evidence on the questions that I think are most important

I don't disagree—but my point with this intuition pump concerns the strength of inference a case study, or even a series of case studies, might provide on any one of those questions.

Yes to the first part! (I was also thinking something like: If you had read some of the other available evidence but not the historical case studies and had 70/30% credence, then reading the historical case studies might update your views to 30/70%. But that's a bit messier.)
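To make the arithmetic concrete, here's a rough sketch of the Bayes factors those two credence shifts would imply. This framing and the helper function are my own simplification, treating the two possibilities (e.g. momentum vs complacency) as exhaustive:

```python
# Rough sketch: just the odds arithmetic for a two-hypothesis credence shift,
# not a claim about how much weight case studies should actually get.

def bayes_factor(prior: float, posterior: float) -> float:
    """Bayes factor implied by moving a credence from `prior` to `posterior`,
    i.e. the ratio of posterior odds to prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

print(bayes_factor(0.5, 0.9))  # 50/50% -> 90/10%: a factor of 9
print(bayes_factor(0.7, 0.3))  # 70/30% -> 30/70%: ~0.18, i.e. ~5.4 in the other direction
```

Note that the two shifts imply different strengths of evidence (a factor of 9 versus roughly 5.4 the other way).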

And got it with the second; I think we mostly agree there.

Ah, I see—in that case, it makes a lot of sense for you to pursue these case studies. I appreciate the time you invested to get to a double crux here, thanks!
