
Broad vs. narrow interventions


The philosopher Nick Beckstead has distinguished between two different ways of influencing the long-term future: broad interventions, which "focus on unforeseeable benefits from ripple effects", and narrow (or targeted) interventions, which "aim for more specific effects on the far future, or aim at a relatively narrow class of possible ripple effects." (Beckstead 2013a)

Clarifying the distinction

The chain of causation connecting an intervention with its intended effect can be analyzed along two separate dimensions. One dimension concerns the number of causal steps in the chain. Another dimension concerns the number of causal paths in the chain. In one sense of these terms, broad interventions involve both many steps and many paths, while narrow interventions involve both few steps and few paths. For example, the broad intervention of promoting peace can reduce existential risk in countless different ways, each of which involves a long sequence of events culminating in the risk reduction. By contrast, the narrow intervention of distributing bed nets saves lives in just one way (by protecting people from mosquito bites) and in just a few steps (distribution > installation > protection).

However, interventions with many causal steps may have few causal paths, and interventions with many causal paths may have few causal steps. It is therefore convenient to have separate terms for each of these dimensions of variation. Some effective altruists reserve the terms "narrow" and "broad" for interventions with few or many causal paths, and use the terms "direct" and "indirect" for interventions with few or many causal steps (Cotton-Barratt 2015).

Assessing broad and narrow interventions

A number of arguments in favor of either broad or narrow interventions have been offered (e.g. Beckstead 2013b). A commonly given consideration in favor of broad interventions is their apparently superior historical track record, a point made independently by several authors at around the same time.[1] Beckstead himself writes (Beckstead 2013a: 145):

Suppose that in 1500 CE, someone wrote a forward-looking novel that featured a technology from the present day, such as a telephone. And suppose another person read this novel and then set for himself the goal that, in the future, people utilized rapid long-distance communication as effectively as possible. He would know that if making telephones was actually a good idea, future people would be in a much better position to find a way to create telephones and use them effectively. He would know very little about telephones or how they might be discovered, so it would not make sense for him to do something very targeted, such as drafting potential telephone designs. It would make more sense, I believe, for him to help in very broad ways (such as becoming a teacher or fighting political and religious threats to the advance of science), thereby empowering future generations to discover and effectively utilize rapid long-distance communication.

Similarly, Brian Tomasik writes (2013):

[I]magine an effective altruist in the year 1800 trying to optimize his positive impact. He would not know most of modern economics, political science, game theory, physics, cosmology, biology, cognitive science, psychology, business, philosophy, probability theory, computation theory, or manifold other subjects that would have been crucial for him to consider. If he tried to place his bets on the most significant object-level issue that would be relevant centuries later, he'd almost certainly get it wrong. I doubt we would fare substantially better today at trying to guess a specific, concrete area of focus more than a few decades out. [...] What this 1800s effective altruist might have guessed correctly would have been the importance of world peace, philosophical reflection, positive-sum social institutions, and wisdom. Promoting those in 1800 may have been close to the best thing this person could have done, and this suggests that these may remain among the best options for us today.

And Gwern Branwen writes (Branwen 2014):

Imagine someone in England in 1500 who reasons the same way about x-risk: humanity might be destroyed, so preventing that is the most important task possible. He then spends the rest of his life researching the Devil and the Apocalypse. Such research is, unfortunately, of no value whatsoever unless it produces arguments for atheism demonstrating that that entire line of enquiry is useless and should not be pursued further. But as the Industrial and Scientific Revolutions were just beginning, with exponential increases in global wealth and science and technology and population, ultimately leading to vaccine technology, rockets and space programs, and enough wealth to fund all manner of investments in x-risk reduction, he could instead have made a perhaps small but real contribution by contributing to economic growth by work & investment or making scientific discoveries.

In response to these claims, Toby Ord argues that comparisons with previous centuries may be misleading, because the bulk of the existential risk to which humanity is currently exposed is anthropogenic in nature, and originates in technologies developed only since around the mid-20th century. Narrow interventions aimed specifically at mitigating the risks posed by such technologies should thus be expected to accomplish much more than similar efforts in previous centuries. Ord also points out that broad interventions receive tens of thousands of times more funding than do narrow interventions, so even people with reasonable differences about the relative merits of broad and targeted interventions should favor the latter, given their much higher neglectedness (Ord 2020: ch. 6).
