My quick take:
- I agree with other answers that in terms of "discrete" insights, there probably wasn't anything that qualifies as "major" and "novel" according to the above definitions.
- I'd say the following were the three major broader developments, though it's unclear to what extent they were caused by macrostrategy research narrowly construed:
- Patient philanthropy: significant development of the theoretical foundations and some practical steps (e.g. the Founders Pledge research report on potentially setting up a long-term fund).
- Though the idea and some of the basic arguments probably aren't novel, see this comment thread below.
- Reduced emphasis on a very small list of "top cause areas". (Visible e.g. here and here, though of course there must have been significant research and discussion prior to such conclusions.)
- Diversification of AI risk concerns: less focus on "superintelligent AI kills everyone after rapid takeoff because of poorly specified values" and more research into other sources of AI risk.
- I used to think there was less actual change than publicly visible change, and that, to the extent there was change, less of it was due to new research. But it seems that a perception of significant change is more common.
In previous personal discussions, I think people have made fair points around my bar maybe being generally unreasonable. I.e., it's the default for any research field that major insights don't appear out of nowhere, and it's almost always possible to find similar previous ideas: in other words, research progress is the cumulative effect of many small new ideas and refinements of them.
I think this is largely correct, but that it's still reasonable to update negatively on the value of research if past progress has rated poorly on the dimensions of importance and novelty. However, overall I'm now most interested in the sort of question asked here to better understand what kind of progress we're aiming for, rather than to assess the total value of a field.
FWIW, here are some suggestions for potential "major and novel" insights others have made in personal communication (not necessarily with a strong claim by the source that they meet the bar; also, in some discussions I might have phrased my question a bit differently):
- Nanotech / atomically precise manufacturing / grey goo isn't a major x-risk
- [NB I'm not sure that I agree with APM not being a major x-risk, though 'grey goo' specifically may be a distraction. I do have the vague sense that some people in, say, the 90s or until the early 2010s were more concerned about APM than the typical longtermist is now.]
- My comments were:
- "Hmm, maybe though not sure. Particularly uncertain whether this was because new /insights/ were found or just due to broadly social effects and things like AI becoming more prominent?"
- "Also, to what extent did people ever believe this? Maybe this one FHI survey where nanotech was quite high up the x-risk list was just a fluke due to a weird sample?"
- Brian Tomasik pointed out: "I think the nanotech-risk orgs from the 2000s were mainly focused on non-grey goo stuff: http://www.crnano.org/dangers.htm"
- Climate change is an x-risk factor
- My comment was: "Agree it's important, but is it sufficiently non-obvious and new? My prediction (60%) is that if I asked Brian [Tomasik] when he first realized that this claim is true (even if perhaps not using that terminology) he'd point to a year before 2014."
- We should build an AI policy field
- My comment was: "[snarky] This is just extremely obvious unless you have unreasonably high credence in certain rapid-takeoff views, or are otherwise blinded by obviously insane strawman rationalist memes ('politics is the mind-killer' [aware that this referred to a quite different dynamic originally], policy work can't be heavy-tailed [cf. the recent Ben Pace vs. Richard Ngo thing]). [/snarky]
- I agree that this was an important development within the distribution of EA opinions, and has affected EA resource allocation quite dramatically. But it doesn't seem like an insight that was found by research narrowly construed, more like a strategic insight of the kind business CEOs will sometimes have, and like a reasonably obvious meme that has successfully propagated through the community."
- Surrogate goals research is important
- My comment was: "Okay, maaybe. But again 70% that if I asked Eliezer when he first realized that surrogate goals are a thing, he'd give a year prior to 2014."
- Acausal trade, acausal threats, MSR, probable environment hacking
- My comment was: "Aren't the basic ideas here much older than 5 years, and specifically have appeared in older writings by Paul Almond and have been part of 'LessWrong folklore' for a while? Possible that there's a more recent crisp insight around probable environment hacking -- don't really know what that is."
- Importance of the offense-defense balance and security
- My comment was: "Interesting candidate, thanks! Haven't sufficiently looked at this stuff to have a sense of whether it's really major/important. I am reasonably confident it's new."
- [Actually, I'm now a bit puzzled why I wrote the last thing. Seems new at most in terms of "popular/widely known within EA"?]
- Internal optimizers
- My comment was: "Also an interesting candidate. My impression is to put it more in the 'refinement' box, but that might be seriously wrong because I think I get very little about this stuff except probably a strawman of the basic concern."
- Bargaining/coordination failures being important
- My comment was: "This seems much older [...]? Or are you pointing to things that are very different from e.g. the Racing to the Precipice paper?"
- Two-step approaches to AI alignment
- My comment was: "This seems kind of plausible, thanks! It's also in some ways related to the thing that seems most like a counterexample to me so far, which is the idea of a 'Long Reflection'. (Where my main reservation is whether this actually makes sense / is desirable [...].)"
- More 'elite focus'
- My comment was: "Seems more like a business-CEO kind of insight, but maybe there's macrostrategy research it is based on which I'm not aware of?"