[ Question ]

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

by Max_Daniel · 1 min read · 13th Aug 2020 · 47 comments


History of EA · Longtermism (Philosophy)

I've had interesting conversations with people based on this question, so I thought I'd ask it here. I'll follow up with some of my thoughts later to avoid priming.

By novel insights, I mean insights that were found for the first time. This excludes the diffusion of earlier insights throughout the community.

To gesture at the threshold I have in mind for major insights, here are some examples from the pre-2015 period:

  • Longtermism
  • Anthropogenic extinction risk is greater than natural extinction risk
  • AI could be a technology with impacts comparable to the Industrial Revolution, and those impacts may not be close-to-optimal by default

An example that feels borderline to me is the unilateralist's curse.


7 Answers

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn’t novel - Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790 - but I still think that it’s a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very “hidden” places, and that idea being sufficiently widespread in the community to have a real impact.)
  • Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here). Perhaps none of those meet your bar for being “major”, but I think that the “discovery” (scare quotes because probably none of those is the first mention) of causes such as Reducing long-term risks from malevolent actors, invertebrate welfare, or space governance constitutes significant progress. S-risks have also gained more traction, although again the basic idea is from before 2015.
  • Views on the future of artificial intelligence have become much more nuanced and diverse, compared to the relatively narrow focus on the “Bostrom-Yudkowsky view” that was more prevalent in 2015. I think this does meet the bar for “major”, although it is arguably not a single insight: relevant factors include takeoff speeds, whether AI is best thought of as a unified agent, or the likelihood of successful alignment by default. (And many critiques of the Bostrom-Yudkowsky view were written pre-2015, so it also isn't really novel.)

Thinking about insights that were particularly relevant for me / my values:

  • Reducing long-term risks from malevolent actors as a potentially promising cause area
  • The importance of developing (the precursors for) peaceful bargaining strategies
    • Related: Anti-realism about bargaining? (I don't know if people still believed this in 2015, but early discussions on LessWrong seemed to indicate a prevalent belief that there exists a single proper solution to bargaining that works best independently of the decision architecture of the other agents in the environment.)
  • Possible implications of correlated decision-making in large worlds
    • Arguably, some people were thinking along these lines before 2015. However, so many things fall under the heading of "acausal trade" that it's hard to tell; judging by conversations with people who thought they understood the idea but had actually mixed it up with something else, I assign a 40% probability to this having been relevantly novel.
  • Some insights on metaethics might qualify. For instance, the claim "being morally uncertain and being a confident moral realist are in tension" is arguably a macrostrategically relevant insight. It suggests that more discussion of the relevance of having underdetermined moral values (Stuart Armstrong wrote about this a lot) is warranted, and that, depending on the conclusions about how to think about underdetermined values, peer disagreement might work somewhat differently for moral questions than for empirical ones. (It's hard to categorise whether these are novel insights or not. I think it's likely that there were people who would have confidently agreed with these points in 2015 for the right reasons, but who perhaps lacked awareness that not everyone will address the underdetermination issue in the same way, and so "missed" part of the insight.)

Maybe: "We should give outsized attention to risks that manifest unexpectedly early, since we're the only people who can."

(I think this is borderline major? The earliest occurrence I know of was 2015 but it's sufficiently simple that I wouldn't be surprised if it was discovered multiple times and some of them were earlier.)

Patient philanthropy?

Completely out of my depth here, but I wonder whether Robin Hanson's Age of Em would count as a new insight for longtermists, along the lines of making the case that brain emulations could also "be a technology with impacts comparable to the Industrial Revolution, and [whose] impacts may not be close-to-optimal by default".

Greaves' cluelessness paper was published in 2016. My impression is that the broad argument has existed for 100+ years, but the formulation of cluelessness as arising from flow-through effects outweighing direct effects (combined with EAs' tendency to care quite a bit about flow-through effects) was a relatively novel and major reformulation (though probably still below your bar).

This post itself is a major insight since 2015! :P