Arjun_Kh

51 karma · Joined Nov 2018

Comments (6)

There's an excellent old GiveWell blogpost by Holden Karnofsky on this topic called Sequence Thinking vs Cluster Thinking:

  • Sequence thinking involves making a decision based on a single model of the world: breaking down the decision into a set of key questions, taking one’s best guess on each question, and accepting the conclusion that is implied by the set of best guesses (an excellent example of this sort of thinking is Robin Hanson’s discussion of cryonics). It has the form: “A, and B, and C … and N; therefore X.” Sequence thinking has the advantage of making one’s assumptions and beliefs highly transparent, and as such it is often associated with finding ways to make counterintuitive comparisons.
  • Cluster thinking – generally the more common kind of thinking – involves approaching a decision from multiple perspectives (which might also be called “mental models”), observing which decision would be implied by each perspective, and weighing the perspectives in order to arrive at a final decision. Cluster thinking has the form: “Perspective 1 implies X; perspective 2 implies not-X; perspective 3 implies X; … therefore, weighing these different perspectives and taking into account how much uncertainty I have about each, X.” Each perspective might represent a relatively crude or limited pattern-match (e.g., “This plan seems similar to other plans that have had bad results”), or a highly complex model; the different perspectives are combined by weighing their conclusions against each other, rather than by constructing a single unified model that tries to account for all available information.

A key difference with “sequence thinking” is the handling of certainty/robustness (by which I mean the opposite of Knightian uncertainty) associated with each perspective. Perspectives associated with high uncertainty are in some sense “sandboxed” in cluster thinking: they are stopped from carrying strong weight in the final decision, even when such perspectives involve extreme claims (e.g., a low-certainty argument that “animal welfare is 100,000x as promising a cause as global poverty” receives no more weight than if it were an argument that “animal welfare is 10x as promising a cause as global poverty”).
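
To make the "sandboxing" idea concrete, here is a minimal sketch in Python. The weighting scheme, the cap value, and the example numbers are my own illustrative assumptions, not anything from Holden's post:

```python
import math

def cluster_aggregate(perspectives, cap=10.0):
    """perspectives: list of (multiplier, robustness) pairs, robustness in (0, 1].
    Returns a robustness-weighted geometric mean of the (capped) multipliers."""
    log_sum, weight_sum = 0.0, 0.0
    for multiplier, robustness in perspectives:
        if robustness < 0.5:                   # "sandbox" low-certainty perspectives:
            multiplier = min(multiplier, cap)  # cap how far they can move the answer
        log_sum += robustness * math.log(multiplier)
        weight_sum += robustness
    return math.exp(log_sum / weight_sum)

# A low-certainty "100,000x" claim gets no more pull than a low-certainty "10x" claim:
views = [(100_000, 0.1), (2.0, 0.9), (0.8, 0.8)]
print(f"{cluster_aggregate(views):.2f}")  # ~1.46: the extreme claim barely moves the result
```

Sequence thinking, by contrast, would simply multiply the chain of best guesses together, letting the shaky 100,000x claim dominate the conclusion.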

Holden also linked to other writing that heavily overlaps with this idea:

Before I continue, I wish to note that I make no claim to originality in the ideas advanced here. There is substantial overlap with the concepts of foxes and hedgehogs (discussed by Philip Tetlock); with the "model combination and adjustment" idea described by Luke Muehlhauser; with former GiveWell employee Jonah Sinick’s concept of many weak arguments vs. one relatively strong argument (and his post on Knightian uncertainty from a Bayesian perspective); with former GiveWell employee Nick Beckstead’s concept of common sense as a prior; with Brian Tomasik’s thoughts on cost-effectiveness in an uncertain world; with Paul Christiano’s Beware Brittle Arguments post; and probably much more.

You might be interested in checking out a GPI paper that argues the same thing as your second point: The Scope of Longtermism

Here's the full conclusion:

This paper assessed the fate of ex ante swamping ASL: the claim that the ex ante best thing we can do is often a swamping longtermist option that is near-best for the long-term future. I gave a two-part argument that swamping ASL holds in the special case of present-day cause-neutral philanthropy: the argument from strong swamping that a strong swamping option would witness the truth of ASL, and the argument from single-track dominance for the existence of a strong swamping option.

However, I also argued for the rarity thesis that swamping longtermist options are rare. I gave two arguments for the rarity thesis: the argument from rapid diminution that probabilities of large far-future benefits often diminish faster than those benefits increase; and the argument from washing out that probabilities of far-future benefits are often significantly cancelled by probabilities of far-future harms.

I argued that the rarity thesis does not challenge the case for swamping ASL in present-day, cause-neutral philanthropy, but showed how the rarity thesis generates two challenges to the scope of swamping ASL beyond this case. First, there is the area challenge that swamping ASL often fails when we restrict our attention to specific cause areas. Second, there is the challenge from option unawareness that swamping ASL often fails when we modify decision problems to incorporate agents’ unawareness of relevant options.

In some ways, this may be familiar and comforting news. For example, Greaves (2016) considers the cluelessness problem that we are often significantly clueless about the ex ante values of our actions because we are clueless about their long-term effects. Greaves suggests that although cluelessness may be correct as a description of some complex decisionmaking problems, we should not exaggerate the extent of mundane cluelessness in everyday decisionmaking. A natural way of explaining this result would be to argue for a strengthened form of the rarity thesis on which in most mundane decisionmaking, the expected long-term effects of our actions are swamped by their expected short-term effects. So in a sense, the rarity thesis is an expected and comforting result. 

In addition, this discussion leaves room for swamping ASL to be true and important in the case of present-day, cause-neutral philanthropy as well as in a limited number of other contexts. It also does not directly pronounce on the fate of ex-post versions of ASL, or on the fate of non-swamping, convergent ASL. However, it does suggest that swamping versions of ASL may have a more limited scope than otherwise supposed. 
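
The "rapid diminution" argument in the conclusion above lends itself to a quick numerical illustration. This is a toy sketch under my own assumptions (a power-law decay p(v) = k / v^alpha with alpha > 1; neither the functional form nor the numbers come from Thorstad's paper):

```python
def expected_value(v, k=1e-3, alpha=1.1):
    """EV of an option promising far-future benefit v, assuming p(v) = k / v**alpha."""
    return (k / v**alpha) * v  # simplifies to k * v**(1 - alpha)

for v in (1e3, 1e6, 1e9, 1e12):  # ever-larger claimed far-future payoffs
    print(f"benefit {v:.0e} -> EV {expected_value(v):.2e}")
# Because alpha > 1, the EV shrinks as v grows, so options whose far-future
# term swamps everything else are rare; with alpha < 1, EV would grow instead.
```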

Holden Karnofsky wrote in 2016 about how his personal thinking evolved on topics that heavily overlap with longtermism, and how that evolution was a major factor in Open Phil's decision to work on them:

I recently wrote up a relatively detailed discussion of how my personal thinking has changed about three interrelated topics: (1) the importance of potential risks from advanced artificial intelligence, particularly the value alignment problem; (2) the potential of many of the ideas and people associated with the effective altruism community; (3) the properties to look for when assessing an idea or intervention, and in particular how much weight to put on metrics and “feedback loops” compared to other properties. My views on these subjects have changed fairly dramatically over the past several years, contributing to a significant shift in how we approach them as an organization.

[...]

The changes discussed here have caused me to shift from being a skeptic of supporting work on potential risks from advanced AI and effective altruism organizations to being an advocate, which in turn has been a major factor in the Open Philanthropy Project’s taking on work in these areas.

The strongest academic critique of longtermism I know of is The Scope of Longtermism by GPI's David Thorstad. Here's the abstract:

Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of cause-neutral philanthropic decisionmaking, it is increasingly suggested that longtermism holds in many or most decision problems that humans face. By contrast, I suggest that the scope of longtermism may be more restricted than commonly supposed. After specifying my target, swamping axiological strong longtermism (swamping ASL), I give two arguments for the rarity thesis that the options needed to vindicate swamping ASL in a given decision problem are rare. I use the rarity thesis to pose two challenges to the scope of longtermism: the area challenge that swamping ASL often fails when we restrict our attention to specific cause areas, and the challenge from option unawareness that swamping ASL may fail when decision problems are modified to incorporate agents’ limited awareness of the options available to them.

Related: Open Phil's review of the evidence on the impacts of incarceration on crime:

The final report reaches two major conclusions:

  • I estimate that, at typical policy margins in the United States today, decarceration has zero net impact on crime outside of prison. That estimate is uncertain, but at least as much evidence suggests that decarceration reduces crime as increases it. The crux of the matter is that tougher sentences hardly deter crime, and that while imprisoning people temporarily stops them from committing crime outside prison walls, it also tends to increase their criminality after release. As a result, “tough-on-crime” initiatives can reduce crime in the short run but cause offsetting harm in the long run.
  • Empirical social science research—or at least non-experimental social science research—should not be taken at face value. Among three dozen studies I reviewed, I obtained or reconstructed the data and code for eight. Replication and reanalysis revealed significant methodological concerns in seven and led to major reinterpretations of four. These studies endured much tougher scrutiny from me than they did from peer reviewers in order to make it into academic journals. Yet given the stakes in lives and dollars, the added scrutiny was worth it. So from the point of view of decision makers who rely on academic research, today’s peer review processes fall well short of the optimal.
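
As a toy illustration of the first conclusion: the report's bottom line can be read as three channels summing to roughly zero at the margin. The decomposition follows the report's framing, but the numbers below are made up for illustration and are not Roodman's estimates:

```python
def net_crime_effect(deterrence, incapacitation, aftereffects):
    """Crimes added (+) or averted (-) per marginal person-year of incarceration."""
    return deterrence + incapacitation + aftereffects

# Deterrence ~0 (tougher sentences hardly deter); incapacitation averts crime
# while someone is inside; aftereffects add crime after release:
print(net_crime_effect(deterrence=-0.1, incapacitation=-3.0, aftereffects=+3.1))
# ~0.0: the short-run reduction is offset by the long-run harm
```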

Where do you think most of RP's short-term and long-term impact is going to come from?