
Edited by Michael Dello-Iacovo and Jacy Reese Anthis. Thanks to Spencer Case, Maryam Khan, and Jacob Peacock for their thoughtful comments.

Summary

Governments, farmed animal advocates, and AI safety advocates rely on social influence strategies to intentionally change others’ attitudes and behaviors, whether to encourage new behaviors or to disrupt existing socio-political structures. Large-scale strategies that rely on mass media to change public opinion or behavior en masse, such as radio propaganda, social media messaging, and TV campaigns, are often conducted by governments or large non-governmental organizations (NGOs) and have been labeled influence operations (or information operations). Courchesne et al. (2021) analyzed 82 empirical studies of influence operations and found that influence operations are effective. In this blog post, I review literature on persuasion and social influence that bears on similar questions, such as that reviewed in Sentience Institute’s “Health Behavior Interventions Literature Review” (Harris 2020), which found limited but arguably robust evidence of small effect sizes. I unpack details of Courchesne et al.’s methodology, conduct an expanded analysis of the same studies, and consider the limitations of current research. I find that their conclusion holds, though with important limitations: effect sizes are small; the studies mostly examine the effects of long-term (e.g., years) or short-term (e.g., days) exposure to traditional mass media; observational data can be unreliable; and the current literature may be heavily affected by publication bias. I conclude with suggestions to conduct meta-analyses, quantitatively assess publication bias, and examine epistemological assumptions such as homogeneous treatment effects and the correlation between individual data and real-world collective behavior.

The Social Science of Social Influence

Humans are highly social. We have a need to belong to esteem-building groups and to have those groups be important and favored relative to other groups. We want others to adopt our preferred view of the world so that we can share reality. We are also a species of change. We change physically throughout life, the world around us changes, and we strive to make changes so that the future is worth living. Those working in advocacy aim to promote change towards their position (e.g., increasing plant-based product consumption, increasing pro-environmental behaviors, increasing support for policies that address systemic inequalities). Governments try to limit the influence of other countries’ misinformation campaigns within their own country. Politicians seek to limit the influence of their opposition’s messages while increasing the influence of their own messages. Much of this is made possible by social influence. 

Social influence and persuasion have long been focal topics within the psychological and sociological sciences, meaning that many empirical studies have sought to explain how, when, and why people can be persuaded to change. Harris (2021) evaluated the effectiveness of persuasive strategies for changing public opinion, taking a positive view of social influence as a way to shift people towards a prosocial position.

The American Psychological Association (APA) defines social influence and persuasion:

Social Influence:

  • “any change in an individual’s thoughts, feelings, or behaviors caused by other people, who may be actually present or whose presence is imagined, expected, or only implied.”
  • “interpersonal processes that can cause individuals to change their thoughts, feelings, or behaviors.”

Persuasion:

  • “an active attempt by one person to change another person’s attitudes, beliefs, or emotions associated with some issue, person, concept, or object.”

Dual Process Model of Persuasion:

  • “any of various persuasion theories postulating that attitude change can occur as a result of strategies for processing attitude-relevant information that involve either a very high degree of effort or very little effort. The most prominent theories of this type are the elaboration-likelihood model and the heuristic-systematic model.”

Four reviews of the social scientific literature on persuasion and social influence illustrate the breadth of research on social influence, persuasion, and attitude change at the individual level. This research provides substantial mechanistic evidence explaining how, when, and why individual-level social influence and persuasion work. The reviews are summarized in Table 1 and cover research up to 2010. Reviews of the empirical research since 2010 have focused even more on mechanisms of influence like neural correlates, facial behavior, and emotion (e.g., this review on the neuroscience of persuasion).

Table 1: Summaries of Social Scientific Reviews of Social Influence, Persuasion, and Attitude Change

Attitudes and Attitude Change (Bohner & Dickel, 2011)
  • Attitudes vary in strength (i.e., durability, impactfulness), can be affected by situational cues, and can be ambivalent (i.e., positive and negative attitudes towards the same evaluative target).
  • People can be aware of their attitudes (i.e., explicit attitudes) or unaware of their attitudes (i.e., implicit attitudes).
  • Physical perception like feeling the temperature of a room can affect attitudes and cognition (i.e., embodied cognition).
  • Attitude change involves the retrieval of stored evaluations and the consideration of new evaluative information.
  • Attitude change can result from consideration of information available in different situations or from a change in the memory underlying the attitude.
  • The success of persuasive attempts depends on the processing resources available to persuasion targets. Processing resources are determined by persuasion targets’ motivation and ability to process a message.
  • Information presented earlier can bias the processing of subsequently presented information.
  • Thinking about one’s own thoughts (i.e., meta-cognition) about a persuasive message is likely to affect the persuasiveness of the message.
  • Attitudes can lead to biased information processing. People tune their messages to their audience’s attitudes which can lead to biased recall and evaluation. People are motivated to select attitude congruent information which can lead to selective information exposure.
  • Spontaneous behaviors are predicted better by implicit attitudes and deliberate behaviors are predicted better by explicit attitudes.
Google Scholar “cited by” (21 February 2023): 2,703
Attitudes and Persuasion (Crano & Prislin, 2006)
  • Attitudes can be formed outside of awareness with evaluative conditioning (associating a valenced attitude object with a nonvalenced attitude object) and mere exposure methods but existing attitudes are unlikely to be changed with these methods.
  • Attitudes can be changed through heuristics like source expertise or peripheral cues like source attractiveness when persuasion targets are likely to be unmotivated and have little opportunity to think about the message.
  • Attitudes can be changed through systematic or elaborative analysis of persuasive messages that are logical, well-reasoned, and data-based when the persuasion targets are likely to be motivated and have the opportunity to think about the message.
  • Social consensus can increase the value of certain attitudes or positions and the likelihood that they are considered thoughtfully which has implications for the effectiveness of messages coming from a minority position.
  • Minority influence is more likely to be considered when it is consistently advocated and is very distinctive.
  • Attitudes can be changed indirectly by minority dissent shaping divergent thinking and increasing the quality of information processing.
  • Negative reactions to a persuasive message can lead to resistance to the message. This resistance may be strong or weak, depending on the effort and quality of any counterarguments used and on the perception of successful resistance to expert sources.
  • Using affect as a part of persuasive messaging may only be effective when the desired change is an attitude based more on affect than cognition.
  • Mood and emotion can influence the degree to which persuasive messages are processed, especially when those messages are self-relevant (e.g., for health-related persuasion). Emotions are used as information and persuasive messages may be more effective when the message framing matches the affective state of the target.
Google Scholar “cited by” (21 February 2023): 1,095
Social Influence: Compliance and Conformity (Cialdini & Goldstein, 2004)
  • People have three key motivations that affect compliant and conforming behaviors: to have accurate perceptions of reality, to be socially connected, and to feel positive about themselves.
  • Affective states like moods, persuasive techniques that aim to disrupt and then re-frame, authority and the motivation to be obedient, and social norms can produce compliance in pursuit of accuracy.
  • Liking, reciprocity, and persuasive techniques that rely on making reciprocal concessions can produce compliance in pursuit of social connectedness.
  • Persuasive techniques that rely on self-perception and making commitments can produce compliance in pursuit of feeling positive about oneself.
  • Perceived social consensus, dynamic social systems, and goals automatically activated by a certain context can produce conformity in pursuit of accuracy.
  • Mimicry and attaining social approval can produce conformity in pursuit of social connectedness.
  • Social identification with a persuasive source, being aligned with the majority or minority, and deindividuation can produce conformity in pursuit of feeling positive about oneself.
Google Scholar “cited by” (21 February 2023): 6,541
Attitude Change: Persuasion and Social Influence (Wood, 2000)
  • Influence studied from a persuasion paradigm has focused on individuals’ processing of detailed argumentation in minimally social contexts.
  • Influence studied from a social influence paradigm has focused on the source of the message in complex social contexts with complex social interactions.
  • Influence and persuasion can take place in public contexts where people think their responses are witnessed by others or in private contexts where people think their responses are known only to themselves. Classic perspectives held that only private conformity (i.e., ‘true’ internal and external change) evidenced effective persuasion. Research in the 1990s showed this to be unsupported, with public conformity (i.e., external change to better fit in a group) also leading to attitude change persistence.
  • People can be motivated to change by dissonance between their attitudes and behaviors and by the persuasion function (e.g., a self-concept motive) matching the function underpinning the message recipient’s attitude (e.g., a concern to express their values).
  • People can have multiple attitudes towards one attitude object (e.g., a dietary choice could be viewed as positive because it helps relieve the suffering of animals and as negative because it increases tension with family members).
  • Framing is one strategy for influence operations targeted at objects with multiple attitudes. There are too many attitudes to change but if the object itself is perceived to change, the influence attempt can still succeed.
  • Influence and persuasion attempts that are in line with targets’ motives are evaluated more favorably and prompt more thoughtful consideration resulting in motivated biased processing.
  • Moods and emotions can serve as information that affects information processing (e.g., extremely scary messages reduce processing and persuasion but low to moderately scary messages increase processing and persuasion).
  • Social group membership affects influence operations (e.g., prototypical group members are more influential).
  • Social consensus can provide subjective information about the validity of persuasive messages.
Google Scholar “cited by” (21 February 2023): 1,671

Mass Media and Influence Operations

Influence operations are large-scale – often mass media-based – highly strategic, and usually organization-sponsored (e.g., by governments or large NGOs) social influence or persuasion tactics meant to change or disrupt attitudes, behaviors, and institutions. Influence operations often target individuals but can serve to change both individuals’ behaviors (e.g., dietary choices) and institutions (e.g., political norms). Influence operations can include any action or strategy (e.g., TV dramas, text messaging, radio propaganda) in any domain (e.g., social issues, politics, health) that is employed to change any outcome (e.g., public opinion, political power, prejudice, moral inclusion). They can be distinguished from small-scale interpersonal strategies to convince one or two individuals to change. However, large- and small-scale strategies are likely underpinned by the same social influence and psychological mechanisms.

The term “influence operations” has been used within political science and international relations to refer specifically to information operations as warfare, influences that advance national interests outside of national spaces, and competitive information (and misinformation) collection and dissemination particularly in governmental and military contexts. In this blog post, I follow Courchesne et al. (2021) and take a somewhat wider scope that includes non-military and non-government campaigns.

Attempts to change attitudes and behaviors are widespread amongst and between human social groups and are a part of human evolutionary history. Contemporary influence operations take many forms from messages passed through traditional mass media (e.g., radio and TV programs) to propaganda campaigns on social media to social influencers promoting brands online to bots digitally spreading misinformation. Below are some examples.

  • A TV network engages in influence operations when they use their nightly airtime to spread misinformation about the effects of a virus.
  • Political candidates use radio and TV ads to move voters towards their platform and away from opponents’ platforms.
  • Radical ideological groups pass messages through their social media networks to proselytize followers.
  • Influencers promote brands and behaviors on their social media platforms to encourage their followers to do the same thing.

Influence Operation Effectiveness

Courchesne et al. (2021) evaluated the state of research on influence operations by sourcing and summarizing 82 articles with empirical studies, primarily from political science and international relations. This report was summarized by Bateman et al. (2021) for the Carnegie Endowment for International Peace.

I summarize their key findings, conduct an expanded analysis of the same articles, evaluate the evidence, and consider additional research largely from the psychological and sociological sciences.

Courchesne et al. identified articles from 1995-2020 in a search using Google Scholar and Princeton University library’s Articles+ database with a list of keywords available in Appendix A of their report. They found an initial pool of 16 articles and looked at articles that cited or were cited by those to build their sample of 82 articles. To be included, articles had to have at least one study that empirically investigated an influence operation that targeted a specific population and had credible statistics comparing outcomes for those who were exposed to the influence operation with outcomes for those who were not exposed.

Findings from Courchesne et al. (2021) 

  • Since 2016, research on influence operations has rapidly increased (see this visualization).
  • Research is clustered by media type: traditional mass media (e.g., newspaper, radio) and social media (e.g., Twitter, Facebook).
    • Traditional mass media operations are characterized by extended exposure to the influence (between 4 days and 30 years).
    • Social media operations are characterized by briefer exposure times (limited to days).
    • Traditional and social media both shape attitudes, beliefs, and behaviors across many domains like politics (e.g., support for a political party), society (e.g., xenophobic attitudes), and healthcare (e.g., condom use).
  • Most research focuses on traditional media rather than social media.
  • Few studies examined influence operations on large online social networks and those that did studied Facebook and Twitter.
    • There were no comparisons of the effectiveness of specific influence operations (e.g., a specific piece of misinformation) on different social media platforms.
    • There were no investigations of multi-platform online influence operations (e.g., the effectiveness of the same campaign run concurrently on Facebook and Twitter).
  • Few studies examined the use of algorithms in social media-based influence operations.
  • Few studies compared the effects of influence operations using traditional media and social media.
  • Only a handful of studies investigated international influence operations, and most of those focused on Russia.
    • Studies of social media focused almost exclusively on Western influence operations.
  • Most studies focused either on very near-term (e.g., days) or long-term (e.g., years) exposures and effects, neglecting exposure and effects over weeks and months.
  • More systematic multi-stakeholder research including industry, academia, and government is needed to address these gaps.

Expanding the Empirical Analysis

I analyzed the 82 articles sourced and summarized by Courchesne et al. using a coding scheme I developed to probe additional features of the research on influence operations.

First, I used keywords and intuition to code studies by 1) the domain of the outcome, 2) the study context, and 3) the influence operation type.

  1. Outcome domain
    1. Definition: the intended area of attitude or behavior change
    2. Options:
      1. political
      2. social
      3. health
      4. basic science or knowledge-building
      5. consumer
  2. Study context
    1. Definition: the environment study participants were in
    2. Options:
      1. lab-based with a contrived situation
      2. real-world with a real context
  3. Influence type
    1. Definition: the type of influence studied
    2. Options:
      1. misinformation
      2. disinformation
      3. traditional media
      4. social media

Each study was coded with exactly one outcome domain, study context, and influence type. For example, a study could be coded as having a political outcome but not a social, health, basic science, or consumer outcome. Likewise, a study could be either contrived or real-world and have only one of the four influence types. This categorization of articles into one of four possible influence types is loose, building on Courchesne et al.’s framework separating traditional and social media. Some influence operations could be classified under multiple influence types. Misinformation and disinformation might be better considered as sub-types of traditional media and social media since misinformation and disinformation can be passed through traditional and social media. I classified each study under only one of these four types to emphasize the distinctions between these four areas of research and to increase clarity on whether the effect of the influence operation is due more to one influence type than another (e.g., the effect of the influence is due more to misinformation than to social media).
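
To make this first, exclusive pass of the coding concrete, here is a minimal sketch in Python of what keyword-based, single-label coding can look like. The keyword lists are abbreviated versions of those reported in Table 2, and the real coding also relied on my judgment and reading of each study rather than keyword matching alone; this is an illustration, not the actual script used.

```python
# Minimal sketch of keyword-based, single-label coding (illustrative only).
# Keyword lists are abbreviated versions of those reported in Table 2;
# the real coding also relied on the coder's judgment, not matching alone.

OUTCOME_DOMAIN_KEYWORDS = {
    "political": ["politic", "government", "populist", "vote", "voting"],
    "social": ["public opinion", "trust", "protest", "immigration", "crime"],
    "health": ["health", "vaccine", "condom", "covid", "aids", "pandemic"],
    "basic science": ["knowledge", "memory", "misinformation", "fake news"],
    "consumer": ["consum"],
}

def code_outcome_domain(text: str) -> str:
    """Assign exactly one outcome domain: the one with the most keyword hits."""
    text = text.lower()
    hits = {
        domain: sum(text.count(kw) for kw in keywords)
        for domain, keywords in OUTCOME_DOMAIN_KEYWORDS.items()
    }
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "unclassified"  # flag for manual review

# Example usage with an article title from the sample:
title = "The Effect of Fake News on Populist Voting: Evidence from a Natural Experiment in Italy"
print(code_outcome_domain(title))  # "political" wins on hit count here
```

The point of the sketch is only to show why a single, exclusive label falls out of this kind of scheme; ambiguous cases (ties or no hits) were resolved by reading the study rather than by any automatic rule.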

Second, I used keywords to code the studies based on three other features: data type, study features, and influence specifics.

  1. Data type
    1. Definition: special data types
    2. Options:
      1. historical or archival data
      2. big data
  2. Study features
    1. Definition: special study design features
    2. Options: if the study
      1. examined a mechanism of influence operations
      2. targeted individuals with messages
      3. was a natural experiment
      4. tried to actively change individuals’ behaviors
      5. had an institutional-level (i.e., macro or societal) outcome
  3. Influence specifics
    1. Definition: special features of the influence operation
    2. Options: if the influence
      1. involved bots or algorithms
      2. worked to expand the moral circle
      3. served to intentionally do more good on an issue
      4. served to actively shape power
      5. served to passively shape power

Studies did not have to qualify as having any of these features and they could qualify as having multiple, even within one category. For example, a study with 100 participants that had contemporary data collection would not qualify as having archival data or big data. A study that tested the effects of WhatsApp messaging in a political campaign to increase votes for a candidate would qualify for three options under “study features”: examined a mechanism of influence operations, targeted individuals with messages, and tried to actively change individuals’ behavior. That same study would also qualify as having an “influence specific” of serving to actively shape power.
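
Because these codes are non-exclusive, a study’s coding is better thought of as a set of flags than as a single label. Below is a minimal, illustrative sketch of that representation using the hypothetical WhatsApp example above; the flag names follow the scheme, but the code is a sketch, not the actual procedure used.

```python
# Illustrative sketch of non-exclusive feature coding: a study can carry
# zero, one, or several flags within and across categories.

from dataclasses import dataclass, field

@dataclass
class StudyCoding:
    title: str
    study_features: set = field(default_factory=set)      # e.g., "mechanistic"
    influence_specifics: set = field(default_factory=set)  # e.g., "active power"

# The hypothetical WhatsApp political-campaign example described above:
whatsapp_study = StudyCoding(
    title="Hypothetical WhatsApp voter-mobilization study",
    study_features={
        "mechanistic",             # examined a mechanism of influence operations
        "individual messaging",    # targeted individuals with messages
        "active behavior change",  # tried to change individuals' behavior (votes)
    },
    influence_specifics={"active power"},  # directly sought to shape political power
)

# A small contemporary study with no special data type qualifies for nothing extra:
small_study = StudyCoding(title="Hypothetical 100-participant lab study")

for study in (whatsapp_study, small_study):
    print(study.title, "->", study.study_features | study.influence_specifics or "no special features")
```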

This coding scheme reflects my judgments as to which categories a study belongs to rather than a purely objective assessment. For instance, an experiment on Twitter could be coded as a contrived situation if it included randomization to a treatment or control group and interaction with researchers (e.g., this political polarization study). The same experiment could be categorized as having a real-world context because it occurred in real time, in public, and on a real platform. I categorized this study as real-world rather than contrived because contact with the researchers was limited, the treatment was not isolated from normal Twitter usage, and the treatment did not entail substantially different Twitter usage norms than usual. For situations like this, I categorized based on my judgment of the degree to which the study expressed the relevant features. Some of these choices could have been made differently and additional interpretation should consider the specifics of the study.

Table 2 shows the coding scheme, keywords, and number of articles categorized under each feature. The complete codebook, including article titles, is in the Appendix.

Table 2: Summary of the 82 Articles Categorized by Outcome Domain, Study Context, Influence Types, Data Type, Study Features, and Influence Specifics

Each category below is listed with the number of studies, its description, an example article, and the coding keywords.

Outcome Domain
  • Political (31). Related to governance or the control of public affairs. Example: The Effect of Fake News on Populist Voting: Evidence from a Natural Experiment in Italy. Keywords: politic, government, populist, vote, voting.
  • Social (27). Related to the operations of people or social groups. Example: When and How Negative News Coverage Empowers Collective Action in Minorities. Keywords: public opinion, trust, protest, immigration, police, crime, Muslim.
  • Health (12). Related to physical or mental care and well-being. Example: Impact of a Local Newspaper Campaign on the Uptake of the Measles Mumps and Rubella Vaccine. Keywords: health, vaccine, autism, condom, COVID, AIDS, pandemic, birth.
  • Basic Science (11). Demonstration of a scientific principle to build knowledge on how an observable process works. Example: Brief Exposure to Misinformation Can Lead to Long-Term False Memories. Keywords: knowledge, affect, cognition, cognitive, memory, misinformation, fake news.
  • Consumer (1). Related to owning, purchasing, or using certain goods. Example: A Tear in the Iron Curtain: The Impact of Western Television on Consumption Behavior. Keywords: consum*.

Study Context
  • Lab-based or contrived (21). A situation contrived by the researchers to elicit a hypothesized observable response. Example: Does Emotional or Repeated Misinformation Increase Memory Distortion for a Trauma Analogue Event? Keywords: experiment, manipulated, Qualtrics, Mturk, random assignment.
  • Real-world (61). A situation occurring naturally, not created by the researchers. Example: Soap Operas and Fertility: Evidence from Brazil. Keywords: real-world, natural experiment.

Influence Type
  • Misinformation (11). Influence by means of false or inaccurate information that may be deliberately misleading. Example: Exposure to Health (Mis)information: Lagged Effects on Young Adults’ Health Behaviors and Potential Pathways. Keywords: misinform, fake news, conspiracy.
  • Disinformation (2). Researchers specifically call the influence “disinformation.” Example: Cognitive and Affective Responses to Political Disinformation in Facebook. Keywords: disinformation.
  • Traditional Media (51). Mass influence by means of traditional propaganda, often through TV, radio, print news, or speeches. Example: Propaganda and Protest: Evidence from Post-Cold War Africa. Keywords: traditional, propaganda, radio, TV, mass media, broadcast, speech, campaign, advertis*.
  • Social Media (18). Influence by means of online social media networks like Facebook, WhatsApp, and Twitter. Example: Exposure to Opposing Views on Social Media can Increase Political Polarization. Keywords: network, social media, Twitter, Facebook, WhatsApp.

Data Type
  • Historic or Archival (27). Data collected in the past and used to understand a historical influence operation. Example: Radio and the Rise of the Nazis in Prewar Germany. Keywords: Nazi, Cold War, East German*, archival, histor*, 199*, 198*, 197*, election, war.
  • Big (9). Complex data with greater than 1 million data points. Example: A 61-Million Person Experiment in Social Influence and Political Mobilization. Keywords: million, big data.

Study Features
  • Mechanistic (20). Examining how or why influence operations work in specific ways. Example: Rise of the Machines? Examining the Influence of Social Bots on a Political Discussion Network. Keywords: memory, emotion, cognit*, conspiracy, collective action, framing, polarization, uncertain.
  • Individual Messaging (18). Using messages directed at influencing individuals to initiate change. Example: Messages on COVID-19 Prevention in India Increased Symptoms Reporting and Adherence to Preventive Behaviors among 25 Million Recipients with Similar Effects on Non-Recipient Members of Their Communities. Keywords: message, induced, manipulated, manipulation, random assignment.
  • Institutional Outcome (37). An aggregated or collective outcome representing political or social institutions like political party power or social capital. Example: Politician Hate Speech and Domestic Terrorism. Keywords: vote share, trust, public opinion, social norm, norm, protest, riot, crime, institution, aggregate.
  • Natural Experiment (36). Individuals exposed to different real conditions determined by real events not controlled by the researchers. Example: Propaganda and Conflict: Evidence from the Rwandan Genocide. Keywords: natural experiment, natural, topograph*, geograph*, signal, exposure.
  • Active Behavior Change (8). Study actively trying to get individuals to act or change their behavior. Example: How the Pro-Beijing Media Influences Voters: Evidence from a Field Experiment. Keywords: intended action, change, behavior, intention.

Influence Specifics
  • Bots (6). Influence operations that mention or use bots and algorithms. Example: Digital Propaganda, Political Bots and Polarized Politics in India. Keywords: bots, algorithms, crawler.
  • Moral Circle Expansion (4). Influence could expand the boundaries of the moral circle. Example: Erasing Ethnicity? Propaganda, Nation Building, and Identity in Rwanda. Keywords: conflict, violen*, identity, polarization.
  • Intentional Good (6). Influence to intentionally do more good on an issue. Example: Evaluating the Impact of a National AIDS Prevention Radio Campaign in St. Vincent and the Grenadines. Keywords: health, prevention, immigration, race, change, social, conflict, voting.
  • Active Power (12). Influence actively or directly seeks to change access to power or control. Example: WhatsApp and Political Instability in Brazil: Targeted Messages and Political Radicalisation. Keywords: politic, power.
  • Passive Power (27). Influence passively or indirectly seeks to change access to power or control. Example: Electoral Effects of Biased Media: Russian Television in Ukraine. Keywords: politic, power, radio, TV, television.
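
The counts in Table 2 are simple tallies over the codebook in the Appendix. As an illustration, assuming a hypothetical CSV export of the codebook with one row per article and the column names shown below, such a summary could be reproduced with a few lines of pandas:

```python
# Minimal sketch: tallying category counts from a codebook, assuming a
# hypothetical CSV with one row per article and columns like those below.
import pandas as pd

codebook = pd.read_csv("influence_ops_codebook.csv")  # hypothetical file name

# Exclusive codes: each article has exactly one value, so value_counts()
# reproduces counts like "Political (31)" or "Real-world (61)".
print(codebook["outcome_domain"].value_counts())
print(codebook["study_context"].value_counts())
print(codebook["influence_type"].value_counts())

# Non-exclusive features can be stored as 0/1 flags, so column sums give
# counts like "Natural Experiment (36)" or "Bots (6)".
feature_flags = ["mechanistic", "individual_messaging", "institutional_outcome",
                 "natural_experiment", "active_behavior_change", "bots"]
print(codebook[feature_flags].sum())
```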

Effect Sizes

Most studies found that influence operations had a positive effect (i.e., the intended effect). There was little evidence of influence operations backfiring to produce change in the opposite direction. Three articles reported that influence operations backfired for a subset of people (i.e., the effect of influence operations was moderated by another factor). 

  1. Bail et al. (2018) found that conservative, Republican identification prompted a backfiring effect of exposure to liberal perspectives on Twitter.
  2. Kao (2021) found that pre-existing China-skeptical attitudes prompted a backfiring effect of exposure to pro-Beijing media in Taiwan.
  3. Schmuck et al. (2020) found that disagreement with anti-Muslim information in Austria prompted a backfiring effect.

Most of the positive effects were small (see Table 3). Small effects may be as meaningful in one context as medium or large effects are in another. For instance, a small effect of exposure to real-world populist Facebook messages on increases in anti-refugee crime may entail more statistical error and a smaller effect size because of the complexity and uncertainty of the real-world social media context, compared with a large effect in a contrived study of exposure to populist messages on perceived discrimination. Statistically significant small effects can also be artifacts of large sample sizes, and large effect estimates can result from small, noisy samples. Both small and large effects can be meaningful, and studies should be weighed in terms of impact and importance based on their methodology, sample size, context, and ecological validity in addition to effect size.

I evaluated effect sizes roughly based on standards within various fields (e.g., effect sizes in psychological science, persuasion rates in economics), my interpretation of statistics like % increase in vote share (e.g., 0% indicates no effect whereas 2% is larger than no effect but smaller than 20%), and whether the influence operation functioned independently of or was moderated by the presence of other factors (e.g., if the influence operation worked more strongly for men than women or if the influence operation depended on people holding a certain prior attitude or belief). Effects categorized as “none to small,” “small to medium,” or “medium to large” often entailed this sort of moderation. I was less confident in assigning effect sizes to influence operations that were primarily impactful in certain subgroups, preferring instead to indicate uncertainty.
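
My categorization was judgment-based rather than formulaic, but for readers who want a concrete anchor, the sketch below shows what a purely conventional binning of one common standardized statistic (Cohen's d) would look like. The 0.2/0.5/0.8 thresholds are Cohen's widely cited rules of thumb, the 0.1 cut-off for "none" is an arbitrary illustration, and the intermediate categories in Table 3 mostly reflect moderated effects rather than intermediate values of a single statistic.

```python
# Rough illustration only: conventional bins for Cohen's d (0.2 / 0.5 / 0.8
# are Cohen's rules of thumb; the 0.1 cut-off for "none" is arbitrary).
# The actual categorization combined several statistics, field-specific
# norms, and judgment about moderation by other factors.

def bin_cohens_d(d: float) -> str:
    d = abs(d)
    if d < 0.1:
        return "none"
    if d < 0.2:
        return "none to small"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

for d in (0.05, 0.15, 0.3, 0.6, 1.1):
    print(d, "->", bin_cohens_d(d))
```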

Comparing effect sizes across disciplines and study contexts is difficult given the different statistics reported (e.g., percentage increase, percentage point increase, standardized and unstandardized regression coefficients), the different norms for interpreting effect sizes, and the different implications of effect sizes in different contexts. It is unclear how to meaningfully compare effects like a 15% increase in willingness to interact across ethnic group boundaries in post-genocide Rwanda following exposure to radio programming and a 2.6 percentage point increase in the vote share for an extreme nationalist party in Croatia following exposure to Serbian radio programming.

Table 3: Effects of Influence Operations in the 82 Articles

Each row gives the share of all 82 studies that found a given effect size, followed by the percentage of studies within each study context, outcome domain, and influence type category that found that effect size.

  • None: 6% of all studies (5). Study context: 0% of contrived, 8.2% of real-world. Outcome domain: 0% of basic science, 0% of consumer, 0% of health, 6.5% of political, 11.1% of social. Influence type: 50% of disinformation, 0% of misinformation, 11.1% of social media, 3.9% of traditional media.
  • None to small: 6% of all studies (5). Study context: 4.8% of contrived, 6.6% of real-world. Outcome domain: 0% of basic science, 100% of consumer, 0% of health, 6.5% of political, 7.4% of social. Influence type: 0% of disinformation, 0% of misinformation, 0% of social media, 9.8% of traditional media.
  • Small: 41% of all studies (34). Study context: 28.6% of contrived, 45.9% of real-world. Outcome domain: 18.2% of basic science, 0% of consumer, 33.3% of health, 54.8% of political, 40.7% of social. Influence type: 0% of disinformation, 27.3% of misinformation, 38.9% of social media, 47.1% of traditional media.
  • Small to medium: 15% of all studies (12). Study context: 28.6% of contrived, 9.8% of real-world. Outcome domain: 36.4% of basic science, 0% of consumer, 25% of health, 9.7% of political, 7.4% of social. Influence type: 0% of disinformation, 36.4% of misinformation, 16.7% of social media, 9.8% of traditional media.
  • Medium: 16% of all studies (13). Study context: 14.3% of contrived, 16.4% of real-world. Outcome domain: 9.1% of basic science, 0% of consumer, 8.3% of health, 16.1% of political, 22.2% of social. Influence type: 0% of disinformation, 9.1% of misinformation, 11.1% of social media, 19.6% of traditional media.
  • Medium to large: 11% of all studies (9). Study context: 14.3% of contrived, 9.8% of real-world. Outcome domain: 27.3% of basic science, 0% of consumer, 25% of health, 3.2% of political, 7.4% of social. Influence type: 50% of disinformation, 9.1% of misinformation, 16.7% of social media, 7.8% of traditional media.
  • Large: 5% of all studies (4). Study context: 9.5% of contrived, 3.3% of real-world. Outcome domain: 9.1% of basic science, 0% of consumer, 8.3% of health, 3.2% of political, 3.7% of social. Influence type: 0% of disinformation, 18.2% of misinformation, 5.6% of social media, 2% of traditional media.

Note. These effects were found in the intended or positive direction.

Exemplar Studies

I highlight four studies that illustrate the complexity of influence operations and show how features of influence operations can co-occur.

  1. Blouin and Mukand’s (2019), “Erasing Ethnicity? Propaganda, Nation Building, and Identity in Rwanda,” took place in a real-world context and was a natural experiment featuring propaganda passed through traditional mass media with a social domain outcome. The research examined how radio propaganda targeted interethnic attitudes in post-genocide Rwanda to improve interethnic trust, increase willingness to interact across ethnic group lines, and decrease the salience of ethnicity. This research is one of only four articles featuring an influence operation interpreted as working to expand the moral circle and one of only six articles where the researchers studied an influence operation enacted to intentionally do good. This research also exemplifies the passive shaping of power.
  2. Yan et al.’s (2021), “Asymmetrical Perceptions of Partisan Political Bots,” took place in a contrived situation and was a natural experiment featuring social media influence operations with a basic science outcome. The research examined how Twitter users distinguished between partisan human users and partisan bots. This is an example of a mechanistic study because it focused on explaining how influence operations work. It is one of only six studies using or examining bots.
  3. González and Prem’s (2018), “Can Television Bring Down a Dictator? Evidence from Chile’s “No” Campaign,” took place in a real-world context and was a natural experiment featuring traditional mass media influence operations from a historical data set with a political domain outcome. The research examined the vote share won by Pinochet’s opposition in the 1988 Chilean plebiscite as a function of exposure to the opposition’s TV advertising campaign. The featured outcome was institutional and the study provided an example of an influence operation that passively shapes power.
  4. Banerjee et al.’s (2020), “Messages on COVID-19 Prevention in India Increased Symptoms Reporting and Adherence to Preventive Behaviors Among 25 Million Recipients with Similar Effects on Non-Recipient Members of their Communities,” took place in a contrived situation and featured a traditional mass media campaign with a health domain outcome. This research featured big data from a study where the researchers used individual messaging to actively change individuals’ behavior. This research was one of only six articles with an influence operation enacted intentionally to do good.

Limitations

  1. As with all empirical research, there is likely a publication bias or file drawer problem whereby influence operations appear more effective than if all the studies, including those with null or negative (i.e., backfiring) effects, were published. There is no commonly accepted way to judge the size of the existing file drawer. There are recommendations for combatting publication bias:
    1. Preregistration 
      1. This is useful particularly for mitigating the growth of the file drawer.
      2. Existing trial registries (e.g., for healthcare RCTs) provide evidence of whether or not study results were made public. 
    2. Results-free peer-review of studies
    3. Thorough literature and meta-analytic reviews to reduce the occurrence of pre- and post-publication bias
    4. Statistical models of publication bias as part of meta-analytic reviews (one such check is sketched after this list)
      1. Sensitivity analyses
      2. Bayesian analyses
  2. Many of the studies included by Courchesne et al. (2021) were observational and correlational but still attempted to infer causality, primarily by statistically controlling for confounding variables to increase confidence in the effect estimates. 
    1. Observational studies of influence operations often entailed coarse operationalization with methods like a scale of degree of exposure rather than a comparison between groups who were either exposed or not exposed. For example, Piazza (2020) operationalized the degree to which countries’ politicians used hate speech on an ordinal 0-4 scale. These approaches, though widely used, have limitations in that not every variable can be controlled for and the interpretation of the causal effect of the influence operation is limited. The strength of evidence from these studies is somewhat weaker than the evidence provided by randomized controlled trials (RCTs) or experiments with random assignment to exposure or no exposure.
    2. A large proportion (44%) of the studies were purported natural experiments. Natural experiments are important for their ecological validity especially in contexts where RCTs are impossible. However, they have drawbacks and any interpretation of causal effects needs to be tempered by the lack of random assignment and the potential lack of a true control condition.
      1. Natural experiments rely on naturally occurring variation, without random assignment of individuals to predetermined experimental treatment and control conditions. Without random assignment, other sources of variation could arise that produce what looks like a causal effect, but isn’t.
      2. Natural experiments might not have a true control condition to compare the effect of the treatment to. For instance, pre-post designs assume that changes occur within an individual after an intervention compared to their previous response (i.e., pre-response scores are the control). Other variables, like time, can affect the treatment-control comparison and obscure interpretation of the effect of the influence operation.
      3. Natural experiments are difficult to replicate. Replication over time and by different researchers is critical for establishing the veracity of influence operation effects on observed outcomes. Without replication, decision makers will have less confidence that their decisions are based on convergent evidence rather than one context-dependent example.
  3. Only 24% of studies examined mechanisms for how influence operations work. This leaves a large gap in the research on why people can be influenced, how influence operations persist over time, and in what contexts or situations various influence operations are more or less effective, to name a few of many potential mechanisms. A preference for natural experiments or real-world studies may lead to fewer resources spent on mechanistic studies that occur in lab-based or contrived situations. There are some examples of natural experiments in real-world contexts that are also mechanistic studies. Barfar’s (2019), “Cognitive and Affective Responses to Political Disinformation on Facebook,” found that exposure to fake news produced more anger and incivility than exposure to true news which produced more analytical thinking, positivity, and anxiety.
    1. Courchesne et al. highlighted a need for more mechanistic research, particularly on why misinformation is effective and how it persists. 
    2. Mechanistic studies offer important insights into how influence operations work and they can be conducted with individual-level or institutional-level outcomes. For example, mechanisms for how individuals remember misinformation may be different than mechanisms explaining how the source of misinformation changes social trust. Misinformation might have stronger effects or transfer at a faster rate on social media than traditional media. Insight into the mechanisms enabling influence operations is critical to developing the anti-influence intervention strategies that Courchesne et al. identified as important to policymakers. 
  4. The conceptual and methodological distinctions between misinformation and disinformation are unclear. This is a general problem. There are not yet clear, standard, and commonly referenced distinguishing characteristics of misinformation compared to disinformation despite some attempts to distinguish the two based on intentionality. The research on influence operations uses the terms seemingly interchangeably and this limits the conclusions and strategies that decision makers can take. If misinformation involves more subtle tactics than disinformation, different anti-influence strategies might be developed. If misinformation is more common than disinformation, decision makers might want to focus on combatting misinformation. If disinformation has larger, stronger, or more severe direct effects, decision makers might want to focus resources on combatting disinformation. 
  5. Influence operations may work differently for health outcomes than for other outcomes (e.g., political, social). Why, when, and how these differences occur is understudied. Courchesne et al. cited Wakefield et al.’s (2010) meta-analysis showing that influence operations conducted in the health domain (e.g., diet change, exercise change, vaccination behavior) are more effective for episodic or one-time health behaviors than for habitual behaviors. For example, influence operations were more successful at increasing yearly cancer screening or vaccination behaviors than habitual behaviors like choosing to routinely consume different food, floss daily, or increase daily physical activity. Furthermore, Wakefield et al. identified that repeating the same influence operation had only a small effect on habitual behavior change, consistent with some of the implications from Harris’ (2020) review of the research on health behavior change. This reinforces the point that using the same influence repeatedly may reap few rewards for habits like flossing or switching to a plant-based diet.
  6. Although Courchesne et al. intended to review interdisciplinary influence operations research, they implicitly took a predominantly international relations perspective. The implications are that influence operations are successful and dangerous, particularly in political and international arenas, and that decision makers need better anti-influence strategies informed by research. 
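
To make the publication bias check referenced in limitation 1 concrete, below is a minimal sketch of an Egger-style regression for funnel plot asymmetry, one simple and widely used diagnostic that a future meta-analysis of this literature could run. The effect estimates and standard errors are invented for illustration; nothing here is computed from the 82 reviewed articles.

```python
# Egger-style regression check for funnel plot asymmetry (illustration only:
# the effect estimates and standard errors below are invented, not taken
# from the 82 reviewed articles).
import numpy as np

effects = np.array([0.10, 0.25, 0.40, 0.15, 0.55, 0.30, 0.70, 0.20])  # hypothetical standardized effects
ses = np.array([0.05, 0.10, 0.20, 0.08, 0.30, 0.12, 0.35, 0.07])      # hypothetical standard errors

z = effects / ses        # each effect divided by its standard error
precision = 1.0 / ses    # inverse standard errors

# Ordinary least squares of z on precision; in Egger's test the intercept
# (not the slope) captures funnel plot asymmetry.
X = np.column_stack([np.ones_like(precision), precision])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
residuals = z - X @ beta
sigma2 = residuals @ residuals / (len(z) - 2)
intercept_se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])

print(f"Egger intercept = {beta[0]:.2f} (SE = {intercept_se:.2f})")
# An intercept far from zero relative to its SE (roughly |t| > 2) is
# conventional evidence of small-study effects consistent with publication bias.
```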

Of the 82 articles, 74% were conducted in the real world rather than in a contrived situation and 48% focused on influence operations that sought to change the balance of political power. Only 7% focused on influence operations intended to do good, meaning that 93% focused on control-oriented, harmful, or destructive outcomes. This could suggest that there are more real-world influence operations with destructive or destabilizing outcomes than real-world influence operations with prosocial or morally expansive outcomes.

These numbers could alternatively point to a negativity bias within the extant research on influence operations, where influence operations are framed as threatening, negative, or unwanted influences that need to be combatted. One implication of this is that all social influence may be perceived as harmful, undermining the idea that social influence is an inherent, long-existing element of human societies. Ignoring neutral, positive, or prosocial influence operations perpetuates a cycle of knowledge whereby social influence and influence operations are cast in a negative frame. This could limit the scope and productivity of research on social influence and influence operations.

More studies of influence operations with prosocial or morally expansive goals exist than are captured in this sample. Courchesne et al. acknowledged a robust literature on “‘pro-social’ influence campaigns” in pro-environmental, health, and traditional political advertising. Additionally, a large body of social psychological research exists on strategies to change prejudiced attitudes and discriminatory behaviors. These strategies are not traditionally labeled as “influence operations” but they are similar to those included in many of the articles reviewed by Courchesne et al. I summarize some of this research in the next section.

Additional Empirical Research

Reducing prejudice and discrimination is one domain with substantial research on social influence and influence operations. Paluck et al. (2021) reviewed empirical studies of a wide range of strategies to reduce prejudice and discrimination or increase prosocial attitudes and behaviors. One of the studies highlighted in this meta-analysis found that Twitter bots that sanctioned racist Twitter users led to fewer subsequent racist slurs over the two-month period of study when the Twitter bot posed as an ingroup White man with many followers compared to an ingroup White man with few followers or an outgroup Black man (regardless of number of followers). This study provided evidence for two mechanisms of social media influence to intentionally do good: ingroup membership and popularity. Another highlighted article found in three different experiments that providing arguments alongside a non-judgmental exchange of narratives during conversations compared to providing just arguments reduced exclusionary attitudes towards unauthorized immigrants and transgender people for up to four months.

Pro-environmental behavior is another domain in which there is substantial research. Clayton et al. (2015) reviewed social scientific research on climate change and emphasized change strategies as one of three key research areas. For example, a field experiment to influence energy reduction in buildings on a university campus found that buildings where occupants received informative emails about their building’s energy use showed 7% reduced energy use compared to buildings in a control condition where occupants received no feedback. Another condition where occupants of other buildings received training in energy reduction strategies showed a 4% reduction in energy use compared to the control condition buildings.

Health behavior is a third domain with substantial research on the effects of persuasive strategies. Of the 12 health domain articles included by Courchesne et al., four investigated vaccine uptake and five investigated COVID-19 behaviors. There is much research on influence operations to change other health behaviors like smoking, adopting a veg*n diet, and flossing. For example,

  1. GiveWell reviews organizations that use mass media to influence health behaviors in developing countries. The Population Media Center (PMC) and Development Media International (DMI) are two organizations focused on improving health and wellbeing outcomes using influence operations. PMC documents the effects of narrative storytelling (e.g., TV soap operas, radio programs, web content) to encourage contraceptive use, family planning, and the education of girls and women in countries throughout the developing world. DMI uses mass media storytelling (mostly radio programming) to effect health and wellbeing outcomes like family planning, hygiene, early childhood development, and infectious disease prevention in sub-Saharan Africa. 

DMI published two articles with evidence on the effectiveness of a radio campaign in Burkina Faso. One of these articles highlighted a cluster randomized trial testing the influence of a 32-month radio campaign about family behaviors, compared with no radio campaign, on post-neonatal under-5 child mortality. The radio program included short spots (i.e., 1 minute spots, 10 times a day) and longer interactive sessions (i.e., 2 hour sessions, 5 days per week) produced in the local language with several topics (e.g., promoting antenatal consultations with doctors, best breastfeeding practices, health-care seeking for illnesses like diarrhea and pneumonia). The authors found no effect of radio campaign exposure compared to no exposure. Both groups showed decreasing mortality rates over the multiyear study. The second article examined the impact of the same radio campaign on actual healthcare visits for reasons targeted by the campaign (e.g., antenatal care visits, under-5 care visits). The authors found that under-5 consultations for illness increased across the years for those in the campaign compared to those not in the campaign (malaria: 35-56%; lower respiratory infections: 11-39%; diarrhea: 60-107%).

  2. Cruwys et al. (2015) reviewed experimental studies of social influence in eating behavior and found that the social modeling of eating behavior was effective in 64 out of 69 reviewed studies. For example, one of the reviewed studies found that exposure to a message conveying a descriptive norm about healthy eating choices resulted in increased healthy eating reports compared to a message conveying a descriptive norm about unhealthy eating or no message.
  3. Xiaoming et al. (2000) conducted an RCT in semirural China in which 18-30 year olds were randomly assigned to either a 12-month multifaceted AIDS education campaign (i.e., educational text, videos, radio programs, small group discussions, home visits, individual counseling, and a free supply of condoms) or a control condition with no systematic exposure to educational materials. Those who received the campaign materials reported more knowledge of AIDS, more condom use after the campaign than before, and greater use of condoms as their primary birth control method; those who were not exposed to the influence operation showed no changes.
  4. Mathur et al. (2021) reviewed 100 studies that used individual messaging to influence meat consumption attitudes and behaviors. They found that these influence operations have a meaningful effect on the reduction of meat consumption and intentions to purchase meat products. The chronologically first study included in this meta-analysis found that 9-10 year old girls who watched an episode of The Simpsons promoting vegetarianism were less positive towards eating meat, more knowledgeable about nutrition, and intended to eat less meat following the episode than girls the same age who did not watch the episode.

Note. There is research on influence operations in public policy, consumer advertising, and biological conservation that I have not accounted for here. For example, Winter et al. (2015) found that negative argumentative and negative subjective comments on Facebook increased opposition to marijuana legalization compared to a control group that saw no comments. There was no effect of positive comments (whether argumentative or subjective) compared to the control group.

Future Directions

Of the research reviewed here, most studies found that influence operations are effective in producing some degree of intended change. This is consistent with the idea that social influence has been an important lever of change throughout human history. 

However, this research has notable limitations. The reviewed articles paint a rosier picture of the effectiveness of influence operations, whether as destructive forces that need to be combatted or as prosocial forces that can be capitalized on, than may be the reality of large-scale organizational strategies. The reviewed studies cover individual and institutional levels of influence operation strategies and effects, mixing foundational knowledge about social influence at different levels. This comprehensive coverage could be beneficial in broadening our understanding of how social influence mechanisms explain influence operations. It could also be detrimental if there are different mechanisms underpinning successful individual-level and institutional-level change.

To address these and other limitations, researchers could implement the following strategies:

  1. Assess publication bias in the existing literature
  2. Craft and implement interdisciplinary standards to reduce publication bias
  3. Conduct meta-analyses on the existing literature (a minimal pooling sketch follows this list)
  4. Synthesize from the existing empirical results to identify promising pathways for future research
  5. Conduct future object-level research 
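
As a minimal illustration of what the meta-analysis suggested in item 3 could involve, the sketch below pools a set of invented standardized effects with a DerSimonian-Laird random-effects model. An actual meta-analysis of this literature would first have to convert the heterogeneous statistics discussed earlier onto a common scale and would likely pair the pooled estimate with the publication bias checks sketched above.

```python
# Minimal DerSimonian-Laird random-effects meta-analysis (illustration only;
# the effects and standard errors are invented, not drawn from the review).
import numpy as np

y = np.array([0.10, 0.25, 0.40, 0.15, 0.55, 0.30])   # hypothetical standardized effects
se = np.array([0.05, 0.10, 0.20, 0.08, 0.30, 0.12])  # hypothetical standard errors
v = se**2

# Fixed-effect weights and Q statistic for between-study heterogeneity.
w = 1 / v
fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - fixed) ** 2)
df = len(y) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)  # DerSimonian-Laird estimate of between-study variance

# Random-effects pooled estimate and its standard error.
w_star = 1 / (v + tau2)
pooled = np.sum(w_star * y) / np.sum(w_star)
pooled_se = np.sqrt(1 / np.sum(w_star))

print(f"Pooled effect = {pooled:.2f} (SE = {pooled_se:.2f}), tau^2 = {tau2:.3f}")
```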

Future focus on the following topics would help to build the science of mass media, propaganda, and social influence:

  1. Systematic evaluation of epistemological assumptions
    1. that there is a meaningful difference between large-scale governmental influence operations framed as information warfare within and between countries and individual-level social influences and intentional strategies aimed at changing individuals’ pro-environmental, health, social, political, and consumer behavior.
    2. that we can make meaningful systemic change with mass media or propaganda targeted at individuals.
    3. that we can detect macro- or institutional-level change from studies of individual-level or aggregated individual-level attitudes and behaviors.
    4. about the differences between influence operations used for social good and influence operations used for destruction or control of power.
    5. about how we perceive and evaluate social influence, particularly any misperceptions or global evaluations of all social influence or influence operations as harmful.
    6. about our evidence-informed or intuitive confidence in
      1. our understanding of the social influence mechanisms underpinning influence operations and how these scale up for deployment in large-scale organized mass media approaches.
      2. the impactfulness and commonness of harmful influence operations (e.g., How sure are we that an influence operation is worth the resources to combat?).
      3. the implementation of impactful strategies (e.g., How sure are we that an influence operation or anti-influence intervention is effective at the scale and context it’s employed in?).
      4. the necessity of resisting certain operations (e.g., How tractable is it to resist? Should an influence operation be resisted?).
      5. the necessity of implementing certain operations (e.g., How necessary are some operations? Should we take certain actions to promote influence operations? How much can public opinion or behavior be changed en masse?).
  2. Studies on complexity and effect direction
    1. What are the personality, social, and cultural factors that independently and interactively shape the effectiveness of influence operations?
      1. What factors shape whether or not an influence operation will backfire?
      2. What factors prevent influence operations from succeeding in various domains, contexts, and for various outcomes?
        1. Are there different barriers for prosocial and destructive or power-seeking operations?
      3. Are there meaningful interactions between influence exposure time courses (e.g., one-time, many times per week, many times per year) and personality, social, or cultural factors on the direction and lastingness of effects?
    2. How are influence operations shaped by their exposure time course (e.g., days, months, or years of implementation) and their anticipated effects time course (e.g., a one time action like voting or a long-term attitude change)? 
      1. Are there discernible patterns of effects for influence operations with repeated or single exposures? 
        1. How domain specific are these patterns?
      2. What are the indirect effects of influence operations?
        1. How does influence propagate over time?
    3. What is the relationship between anti-influence interventions and prosocial influence operations?
    4. Are the effects of influence operations always linear – scaling from one individual to many – or are there non-linear or emergent collective effects of influence operations?
    5. How do academic, advocate, and professional experts in varying domains (e.g., politics, health, society) understand the efficacy and practice of influence operations?
      1. How do decision makers select specific influence operations and anti-influence interventions?
  3. Studies on impact
    1. Are there differences in the effectiveness of prosocial influence operations and destructive, controlling, or power-seeking operations?
    2. Are there differences in the effectiveness of influence operations targeting relatively more abstract (e.g., social trust, public opinion) and relatively more concrete (e.g., behaviors like voting or using condoms) outcomes?
      1. Are the same strategies (e.g., misinformation, disinformation, propaganda) used to shape any outcome in any domain?
      2. Are there common anti-influence strategies that are applied across domains?
      3. Can the same strategies be used effectively in any domain?
    3. What are the impacts of various operations on various outcomes in various domains? For example, 
      1. Is misinformation or disinformation more impactful on
        1. Health: vaccine uptake, early childhood doctor’s visits, etc.?
        2. Consumer: purchasing, brand identification, etc.?
        3. Social: disidentification with a social group, hate crimes, etc.?
        4. Political: voting behavior, support for policies, etc.?
      2. Are bots or algorithms on social media more impactful than traditional mass media on any outcomes or in any domains?
    4. How cost effective are large-scale governmental and NGO influence operations?
      1. Are there notable differences in the cost effectiveness of governmental and non-governmental operations?
      2. Are influence operation campaigns more or less cost effective than anti-influence interventions?
      3. Are specific types of operations (e.g, misinformation, disinformation, social media, traditional media) more cost effective?
      4. Is it more cost effective to target individual-level or institutional-level outcomes?
    5. How can evidence on the complexity, impact, and cost effectiveness of influence operations and anti-influence interventions be disseminated and communicated to decision makers across domains (e.g., politicians and advocates in governmental and non-governmental organizations)?
      1. Are there better or worse strategies to use when connecting disparate research areas?
      2. Are there clearer ways to synthesize and communicate the relevant information?