
I've had interesting conversations with people based on this question, so I thought I'd ask it here. I'll follow up with some of my thoughts later to avoid priming.

By novel insights, I mean insights that were found for the first time. This excludes the diffusion of earlier insights throughout the community.

To gesture at the threshold I have in mind for major insights, here are some examples from the pre-2015 period:

  • Longtermism
  • Anthropogenic extinction risk is greater than natural extinction risk
  • AI could be a technology with impacts comparable to the Industrial Revolution, and those impacts may not be close-to-optimal by default

An example that feels borderline to me is the unilateralist's curse.

10 Answers

Thinking about insights that were particularly relevant for me / my values:

  • Reducing long-term risks from malevolent actors as a potentially promising cause area
  • The importance of developing (the precursors for) peaceful bargaining strategies
    • Related: Anti-realism about bargaining? (I don't know if people still believed this in 2015, but early discussions on LessWrong seemed to indicate a prevalent belief that there exists a single correct solution to bargaining that works best independently of the decision architecture of other agents in the environment.)
  • Possible implications of correlated decision-making in large worlds
    • Arguably, some people were thinking along these lines before 2015. However, so many things fall under the heading of "acausal trade" that it's hard to tell, and judging by conversations with people who think they understood the idea but actually mixed it up with something else, I assign 40% to this having been relevantly novel.
  • Some insights on metaethics might qualify. For instance, the claim "Being morally uncertain and confidently a moral realist are in tension" is arguably a macrostrategically relevant insight. It suggests that more discussion of the relevance of having underdetermined moral values (Stuart Armstrong wrote about this a lot) seems warranted, and that, depending on the conclusions from how to think about underdetermined values, peer disagreement might work somewhat differently for moral questions than for empirical ones. (It's hard to categorise whether these are novel insights or not. I think it's likely that there were people who would have confidently agreed with these points in 2015 for the right reasons, but maybe lacked awareness that not everyone will agree on addressing the underdetermination issue in the same way, and so "missed" a part of the insight.)

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn’t novel - Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790 - but I still think that it’s a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very “hidden” places, and that idea being sufficiently widespread in the community to have a real impact.)
  • Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here). Perhaps none of those meet your bar for being “major”, but I think that the “discovery” (scare quotes because probably none of those is the first mention) of causes such as Reducing long-term risks from malevolent actors, invertebrate welfare, or space governance constitutes significant progress. S-risks have also gained more traction, although again the basic idea is from before 2015.
  • Views on the future of artificial intelligence have become much more nuanced and diverse, compared to the relatively narrow focus on the “Bostrom-Yudkowsky view” that was more prevalent in 2015. I think this does meet the bar for “major”, although it is arguably not a single insight: relevant factors include takeoff speeds, whether AI is best thought of as a unified agent, and the likelihood of successful alignment by default. (And many critiques of the Bostrom-Yudkowsky view were written pre-2015, so it also isn't really novel.)

The ideas behind patient altruism have received substantial discussion in academia.

But this literature doesn't seem well-known among EAs. I personally didn't know about any of it until Phil Trammell cited some of it in his paper on patient philanthropy. Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money now and instead investing to give later; as far as I know, this is a novel argument.

Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money now and instead investing to give later; as far as I know, this is a novel argument.

This has been much discussed from before the beginning of EA, Robin Hanson being a particularly devoted proponent.

trammell:
Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind? Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)
Owen Cotton-Barratt:
I don't know the provenance of the idea, but I recall Paul Christiano making the point about pure time preference during the debate on giving now vs later at the GWWC weekend away in (I think) 2014.
CarlShulman:
My recollection is that back in 2008-12, discussions would often cite the Stern Review, which used a pure time preference of 0.1% per year and thus concluded that massive climate investments would pay off; the critiques of it, which noted that by the same token it would call for immense savings rates (97.5% according to Dasgupta 2006); and the defenses by Stern and various philosophers arguing that a pure time preference of 0 was philosophically appropriate. In private discussions and correspondence it was used to make the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. I cited it for this in this 2012 blog post. People also discussed how this would go away if sufficient investment was applied patiently (whether for altruistic or other reasons), ending the era of dreamtime finance by driving pure time preference towards zero.
trammell:
Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?
CarlShulman:
The Stern discussion.

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.

In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )

"The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save."

That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time (in fact there wasn't one). That's one reason, in addition to your novel contributions, that I'm so happy about your work! GPI also has a big hopper of projects that add a lot of value by further developing and explicating ideas that are not radically novel, so that they have more impact and get more improvement and critical feedback.

If you would like to do further recorded discussions about your research, I'm happy to do so anytime.

trammell:
Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.
MichaelDickens:
It seems you're right. I did a little searching and found Hanson making that argument here: https://www.overcomingbias.com/2013/04/more-now-means-less-later.html

That post just makes the claim that "all we really need are positive interest rates". My own point, which you were referring to in the original comment, is that, at least in the context of poverty alleviation (or increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
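
(One way to make this condition explicit, assuming the standard Ramsey decomposition of the interest rate rather than anything stated in the thread:

r = δ + ηg

where δ is beneficiaries' pure time preference, η the elasticity of marginal utility of consumption, and g the growth rate of consumption. A philanthropist with zero pure time preference discounts beneficiaries' future consumption at only ηg, so investing to give later beats giving now exactly when r > ηg, i.e. when δ > 0. Since r can be positive while δ = 0, and r can exceed ηg even while negative when g < 0, a positive interest rate is indeed neither necessary nor sufficient for the condition.)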

Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.

Could you elaborate?

I liked this answer.

One thing I'd add: My guess is that part of why Max asked about novel insights is that he's wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that's a big part of why I find this question interesting.

So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that require orgs like FHI and CLR? Or could we do that without paid full-time researchers, just via a bunch of people blogging in their spare time?

I don't know about generating many smaller insights or refining ideas. But I'd guess that mere "diffusion" probably doesn't require full-time researchers, just good and well-respected communicators.  

But I'd also guess that there's another thing that happened: Active critique and screening of a large set of potentially important insights, to identify those that are actually important and correct (or sufficiently likely to be correct to warrant major shifts in decisions). And that proc

... (read more)
So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that require orgs like FHI and CLR? Or could we do that without paid full-time researchers, just via a bunch of people blogging in their spare time?

I think that's a very interesting question, and one I've sometimes wondered about.

Oversimplifying a bit, my answer is: We need neither just bloggers nor just orgs like FHI and CLR. Instead, we need to move from a model where epistemic progress is achieved by individuals to one where it is achieved by a system characterized by a diversification of epistemic tasks, specialization, and division of labor. (So in many ways I think: we need to become more like academia.)

Very roughly, it seems to me that early intellectual progress in EA often happened via distinct and actionable insights found by individuals. E.g. "AI alignment is super important" or "donating to the best as opposed to typical charities is really important" or "current charity evaluators don't help with finding impactful charities... (read more)

Alex HT:
This is really interesting and I'd like to hear more. Feel free to just answer the easiest questions:

  • Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia?
  • What kinds of specialisation do you think we'd want - subject knowledge? Along different subject lines to academia?
  • Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?
  • What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare and generally need research experience. Do you think this is a good model?

[Off the top of my head. I don't feel like my thoughts on this are very developed, so I'd probably say different things after thinking about it for 1-10 more hours.]

[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don't want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]

Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia? 

Things that immediately come to mind, not necessarily the most important levers:

  • Identify skills or bodies of knowledge that seem relevant for longtermist research, and where necessary design curricula for deliberate practice of these. In addition to having other downsides, I think our norms of single-dimensional evaluations of people (I feel like I hear much more often that someone is "promising" or "impressive" than that they're "good at <ability or skill>") are evidence of a harmful laziness that helps entrench the status quo.
  • Possibly something like a doubl
... (read more)
MichaelA:
Interesting, thanks for sharing! Could you say more about why you think that that shift at the margin would be good?

Several reasons:

  • In many cases, doing thorough work on a narrow question and providing immediately impactful findings is simply too hard. This used to work well in the early days of EA when more low-hanging fruit was available, but rarely works any more.
    • Instead of having 10 shallow takes on immediately actionable question X, I'd rather have 10 thorough takes on different subquestions Y_1, ..., Y_10, even if it's not immediately obvious how exactly they help with answering X (there should be some plausible relation, however). Maybe I expect 8 of these 10 takes to be useless, but unlike adding more shallow takes on X, the thorough takes on the 2 remaining subquestions enable incremental and distributed intellectual progress:
      • It may allow us to identify new subquestions we weren't able to find through doing shallow takes on X.
      • Someone else can build on the work, and e.g. do a thorough take on another subquestion that helps illuminate how it relates to X, what else we need to know to use the thorough findings to make progress on Y, etc.
      • The expected benefit from unknown unknowns is larger. Random examples: the economic historians who assembled data on historic GDP growth pre
... (read more)
MichaelA:
Interesting, thanks. I'm not sure whether I overall agree, but I think this glimpse of your models on this topic will be useful to me. One clarifying question: My first thought was "But wait, wouldn't 10 thorough takes take more time than 10 shallow takes, making this comparison unfair?" But now I think maybe you meant both sets of investigations to take a similar amount of time, but the former to be "shallow" in relation to the larger topic - i.e., the "shallow takes" involve the same amount of total analysis as the "thorough takes", but they're analysing such a big topic that they can only provide a shallow look at each component. Is that right?
Max_Daniel:
Yes, that's what I had in mind. Thanks for clarifying!
MichaelA:
I'm confused - did you make this comment in the wrong place?
Max_Daniel:
No, but there was a copy and paste error that made the comment unintelligible. Edited now. Thanks for flagging!
Alex HT:
Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!)  Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this),  but one economic model alone is very unlikely to resolve a big question.
Max_Daniel:
Glad it's helpful! I think you're very likely doing this anyway, but I'd recommend to get a range of perspectives on these questions. As I said, my own views here don't feel that resilient, and I also know that several epistemic peers disagree with me on some of the above.

Maybe: "We should give outsized attention to risks that manifest unexpectedly early, since we're the only people who can."

(I think this is borderline major? The earliest occurrence I know of was 2015 but it's sufficiently simple that I wouldn't be surprised if it was discovered multiple times and some of them were earlier.)

FWIW, I also haven't seen that idea mentioned before your 2015 paper. And I think there's a good chance I would've seen it if the idea had been decently widely discussed in EA before then, as I looked into this and related matters a bit for my recent post Crucial questions about optimal timing of work and donations.

(The relevant section is "What “windows of opportunity” might there be? When might those windows open and close? How important are they?")

This is a bit of a nitpick: Perhaps you mean the more general point you mentioned above rather than the specific claim about AI risk, but you published this report already in 2014, and I vaguely remember hearing a lot of discussion of those kinds of arguments in 2014 already.

[This comment is no longer endorsed by its author]

Elsewhere on the forum, I asked Ajeya Cotra of Open Phil some questions inspired by this post. What follows are my questions [in square brackets] and her answers.

[1. Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?]

I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously-accepted big picture claims count as significant progress. I think very early on, EA produced a number of somewhat arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).

[2. If so, what would you say are some of the main ones?]

Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs giving later and "hinge of history" vs "patient" long-termism, etc. None of these have provided definitive / authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.

[3. Do you think the progress has been at a good pace (however you want to interpret that)?]

I'm not sure how to answer this; I think taking into account the expected low-hanging fruit effect, and the relatively low investment in this research, progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.

[4. Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?]

I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck to this is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard for a large number of people to be quickly on-boarded to it.

[5. Do you think that this suggests we should change how we do this work, or emphasise some types of it more?]

Related to the above, I'd love for the work to become better-scoped over time -- this is one thing we prioritize highly at Open Phil.

(See also my response to Ajeya.)

My quick take:

  • I agree with other answers that in terms of "discrete" insights, there probably wasn't anything that qualifies as "major" and "novel" according to the above definitions.
  • I'd say the following were the three major broader developments, though unclear to what extent they were caused by macrostrategy research narrowly construed:
    • Patient philanthropy: significant development of the theoretical foundations and some practical steps (e.g. the Founders Pledge research report on potentially setting up a long-term fund).
      • Though the idea and some of the basic arguments probably aren't novel, see this comment thread below.
    • Reduced emphasis on a very small list of "top cause areas". (Visible e.g. here and here, though of course there must have been significant research and discussion prior to such conclusions.)
    • Diversification of AI risk concerns: less focus on "superintelligent AI kills everyone after rapid takeoff because of poorly specified values" and more research into other sources of AI risk.
      • I used to think that there was less actual (as opposed to publicly visible) change, and that, to the extent there was change, it was less due to new research. But it seems that a perception of significant change is more common.

In previous personal discussions, I think people have made fair points around my bar maybe being generally unreasonable. I.e. it's the default for any research field that major insights don't appear out of nowhere, and that it's almost always possible to find similar previous ideas: in other words, research progress being the cumulative effect of many small new ideas and refinements of them.

I think this is largely correct, but that it's still correct to update negatively on the value of research if past progress has been less good on the spectra of majority and novelty. However, overall I'm now most interested in the sort of question asked here to better understand what kind of progress we're aiming for rather than for assessing the total value of a field.

FWIW, here are some suggestions for potential "major and novel" insights that others have made in personal communication (not necessarily with a strong claim by the source that they meet the bar; also, in some discussions I might have phrased my questions a bit differently):

  • Nanotech / atomically precise manufacturing / grey goo isn't a major x-risk
    • [NB I'm not sure that I agree with APM not being a major x-risk, though 'grey goo' specifically may be a distraction. I do have the vague sense that some people in, say, the 90s or until the early 2010s were more concerned about APM than the typical longtermist is now.]
    • My comments were: 
      • "Hmm, maybe though not sure. Particularly uncertain whether this was because new /insights/ were found or just due to broadly social effects and things like AI becoming more prominent?"
      • "Also, to what extent did people ever believe this? Maybe this one FHI survey where nanotech was quite high up the x-risk list was just a fluke due to a weird sample?"
    • Brian Tomasik pointed out: "I think the nanotech-risk orgs from the 2000s were mainly focused on non-grey goo stuff: http://www.crnano.org/dangers.htm"
  • Climate change is an x-risk factor
    • My comment was: "Agree it's important, but is it sufficiently non-obvious and new? My prediction (60%) is that if I asked Brian [Tomasik] when he first realized that this claim is true (even if perhaps not using that terminology) he'd point to a year before 2014."
  • We should build an AI policy field
    • My comment was: "[snarky] This is just extremely obvious unless you have unreasonably high credence in certain rapid-takeoff views, or are otherwise blinded by obviously insane strawman rationalist memes ('politics is the mind-killer' [aware that this referred to a quite different dynamic originally], policy work can't be heavy-tailed [cf. the recent Ben Pace vs. Richard Ngo thing]). [/snarky]
    • I agree that this was an important development within the distribution of EA opinions, and has affected EA resource allocation quite dramatically. But it doesn't seem like an insight that was found by research narrowly construed, more like a strategic insight of the kind business CEOs will sometimes have, and like a reasonably obvious meme that has successfully propagated through the community."
  • Surrogate goals research is important
    • My comment was: "Okay, maaybe. But again 70% that if I asked Eliezer when he first realized that surrogate goals are a thing, he'd give a year prior to 2014."
  • Acausal trade, acausal threats, MSR, probable environment hacking
    • My comment was: "Aren't the basic ideas here much older than 5 years, and specifically have appeared in older writings by Paul Almond and have been part of 'LessWrong folklore' for a while?Possible that there's a more recent crisp insight around probable environment hacking -- don't really know what that is."
  • Importance of the offense-defense balance and security
    • My comment was: "Interesting candidate, thanks! Haven't sufficiently looked at this stuff to have a sense of whether it's really major/important. I am reasonably confident it's new."
      • [Actually, I'm now a bit puzzled why I wrote the last thing. Seems new at most in terms of "popular/widely known within EA"?]
  • Internal optimizers
    • My comment was: "Also an interesting candidate. My impression is to put it more in the 'refinement' box, but that might be seriously wrong because I think I get very little about this stuff except probably a strawman of the basic concern."
  • Bargaining/coordination failures being important
    • My comment was: "This seems much older [...]? Or are you pointing to things that are very different from e.g. the Racing to the Precipice paper?"
  • Two-step approaches to AI alignment
    • My comment was: "This seems kind of plausible, thanks! It's also in some ways related to the thing that seems most like a counterexample to me so far, which is the idea of a 'Long Reflection'. (Where my main reservation is whether this actually makes sense / is desirable [...].)"
  • More 'elite focus'
    • My comment was: "Seems more like a business-CEO kind of insight, but maybe there's macrostrategy research it is based on which I'm not aware of?"

Interesting thoughts, thanks :)

I think people have made fair points around my bar maybe being generally unreasonable. I.e. it's the default for any research field that major insights don't appear out of nowhere, and that it's almost always possible to find similar previous ideas [...]

I think this is largely correct, but that it's still correct to update negatively on the value of research if past progress has been less good on the spectra of majority and novelty.

I don't understand the last sentence there. In particular, I'm not sure what you mean "less goo... (read more)

Max_Daniel:
Yes, I meant "less than expected".  Among your three points, I believe something like 1 (for an appropriate reference class to determine "typical", probably something closer to 'early-stage fields' than 'all fields'). Though not by a lot, and I also haven't thought that much about how much to expect, and could relatively easily be convinced that I expected too much. I don't think I believe 2 or 3. I don't have much specific information about assumptions made by people who advocated for or funded macrostrategy research, but a priori I'd find it surprising if they had made these mistakes to a strong extent.
MichaelA:
I also haven't thought much about how much one should typically expect in a random field, how that should increase or decrease for this field in the last 5 years just because of how many people and dollars it got (compared to other fields), or how what was produced in the last 5 years in this field compares to that. But one thing that strikes me is that longtermist macrostrategy/GPR researchers over the past 5 years have probably had substantially less training and experience than researchers in most academic fields we'd probably compare this to. (I haven't really checked this, but I'd guess it's true.) So maybe, if the insights from this field were less novel or less major than we should typically expect of a field with the same amount of people and dollars, this can be explained by the people having less human capital, rather than by the field being intrinsically harder to make progress on? (It could also perhaps be explained if the unusual approaches that are decently often taken in this field tend to be less effective - e.g., more generalist/shallow work rather than deeper dives into narrower topics, and more blog-post-style work.)

Patient philanthropy?

Completely out of my depth here, but I wondered if Robin Hanson's "Age of Em" would be considered as a new insight for longtermists along the lines of making the case that brain emulations could also "be a technology with impacts comparable to the Industrial Revolution, and [whose] impacts may not be close-to-optimal by default"

Thanks for this suggestion!

Identifying whole-brain emulation (WBE) as a potentially transformative technology definitely meets my threshold for major. However, this happened well before 2015. E.g. WBE was discussed in Superintelligence (published 2014), the Hanson-Yudkowsky FOOM debate in 2008, and FHI's WBE roadmap is from 2008 as well. So it's not novel.

(I'm fairly sure the idea had been discussed even earlier by transhumanists, but don't know good sources off the top of my head.)

To be clear, I still think the marginal contribution of... (read more)

Hanson's If Uploads Come First is from 1994, his paper on economic growth given machine intelligence is from 2001, and uploads were much discussed in transhumanist circles in the 1990s and 2000s, with substantial earlier discussion (e.g. by Moravec in his 1988 book Mind Children). Age of Em added more details and has a number of interesting smaller points, but the biggest ideas (Malthusian population growth by copying and economic impacts of brain emulations) are definitely present in 1994. The general idea of uploads as a technology goes back even further.

Age of Em should be understood like Superintelligence, as a polished presentation and elaboration of a set of ideas already locally known.

This may sound really obvious in retrospect, but Evan G. Williams' 2015 paper (summarized here) felt pretty convincing to me that, conditional upon moral realism being broadly true, we're all almost certainly unknowingly guilty of large moral atrocities.

There are several steps here that I think are interesting:

  1. We may believe that this is a problem only for the "rest" of society; as enlightened vegan cosmopolitan longtermists, we see all the moral flaws that others do not.
    1. But this has a really bad historical track record and isn't very logically convincing; see below.
  2. The inductive argument: No society that thinks of itself as just historically has been devoid of what we now consider grave moral wrongs.
    1. Further, we're in a time of upheaval. A society that has everything right likely is only a generation removed from a society that has almost everything right.
  3. The disjunctive argument: There are so many different ways that we could be morally wrong that we may/may not be aware of, and many of them are in tension with each other.
  4. Hedging (e.g. not eating insects on the off chance insects have moral value) does not robustly work as a strategy to mitigate an unknown ongoing moral catastrophe.
    1. Because of the disjunction above, and the sheer number of ways we could be wrong.

Even though I'm not a moral realist, I feel like this paper had a substantial effect on how I view the demands of morality, and over the years I've slowly internalized the message that this type of thing is hard (I'm also maybe 15% less optimistic about moral hedging as a robust strategy than I otherwise would've been if I hadn't read this paper). 

These points feel so obvious in retrospect that I'd be surprised if they weren't all covered before 2015, so I'd be interested in whether philosophers and philosophy students here can point to earlier sources.

Greaves' cluelessness paper was published in 2016. My impression is that the broad argument has existed for 100+ years, but the formulation of cluelessness arising from flow-through effects outweighing direct effects (combined with EA's tending to care quite a bit about flow-through effects) was a relatively novel and major reformulation (though probably still below your bar).

Thanks for this suggestion! Like ems, I think this is major but not novel. For instance, the first version of Brian Tomasik's Charity Cost-Effectiveness in an Uncertain World was written in 2013. And here's a reply from Jess Riedel, also from 2013.

Again, I do think later work including Greaves's cluelessness paper was a valuable contribution. But the basic issue that impact may be dominated by flow-through effects on unintuitive variables, and the apparent sign flipping as new 'crucial considerations' are discovered, was clearly present in the 2013 and possibly earlier discussions.

This post itself is a major insight since 2015! :P

14 Comments

Do you have a sense of how long is typically the lag between an insight first being had, and being recognised as major? I think this might often be several years.

Maybe the dynamic I'm imagining is: "At time T0, someone suggests X as a joke. At time T1, someone seriously posits X: it makes sense to them but they haven't managed to explain it to anyone. At T2, they've explained it in conversation and a small fraction of other people believe it. At T3, there's a first blog post which kind of explains it but to many readers it doesn't feel that well supported. At T4, it's believed by 10% of the relevant community. At T5, someone else makes a better writeup, which sets out more of a solid basis for it. At T6, it's relatively widely accepted as a major insight."

Was it novel at T0 or T1? (or later?) When does it get to count as major? (Is this just in the eyes of the observer?)

Do you have a sense of how long is typically the lag between an insight first being had, and being recognised as major? I think this might often be several years.

As an aside, there are a few papers examining this in the case of academia. One interesting finding is that there are a few outliers that only get widely recognized after decades, much longer than for typical insights. The term for those is 'sleeping beauties'. In a review paper on the science of science, Clauset et al. (2016, p. 478) say:

A systematic analysis of nearly 25 million publications in the natural and social sciences over the past 100 years found that sleeping beauties occur in all fields of study (9). Examples include a now famous 1935 paper by Einstein, Podolsky, and Rosen on quantum mechanics; a 1936 paper by Wenzel on waterproofing materials; and a 1958 paper by Rosenblatt on artificial neural networks.

The main references appear to be van Raan (2004) and Ke et al. (2015).

there are a few outliers that only get widely recognized after decades, much longer than for typical insights

These sleeping beauties might happen more often the younger a field is. In particular, I don't particularly care that (perhaps lesser) insights are spread quickly once a field is producing a lot of papers.

Anyways, some other examples are Taylor polynomials (!) and various discoveries by Tsiolkovsky on the mechanics of space travel.

At time T0, someone suggests X as a joke.

Telling jokes as an EA cause.

I think for the purpose of this question I was imagining dating insights to roughly 3/4 of the way between T1 and T2.

I do agree that the lag time can be several years.

It seems that there haven't been that many major insights in macrostrategy/global priorities research recently.

One potential negative conclusion from that, that might seem natural, is that recent macrostrategy/global priorities research has been lacking in quality. 

But a more positive conclusion is that early macrostrategy/global priorities research had high quality, and that most of the major insights were therefore quickly identified. 

On this view, the recent lack of insights isn't a sign of recent lack of research quality, but rather a sign of early high research quality.

In my view, the positive conclusion is more warranted than the negative conclusion.

Do you still plan to add your own thoughts here, Max? I'd be keen to hear them :)

Done, thanks for reminding me :)

The examples feel weird to me, because the question asks for "insights from longtermist macrostrategy or global priorities research", but the insight about AI and the unilateralist's curse don't come from longtermist macrostrategy or global priorities research.

Interesting, it sounds like we're using these terms somewhat differently. I guess I'm thinking of (longtermist) macrostrategy and global priorities research as trying to find high-level answers to the questions "How can we do the most good?", "How can we best improve the long-term future?", and "How do we even think about these questions?".

The unilateralist's curse is relevant to the third question, and the insight about AI relevant to the second question.

Admittedly, while I'd count "AI may be an important cause area" as macrostrategy/GPR I'd probably exclude particulars on how best to align AI, and the boundary is fuzzy.

I share Max’s sense that those two examples fit, while particulars on how best to align AI wouldn’t.

It also seems worth noting that Bostrom/FHI were among the most prominent champions (though not the sole originator) of “AI may be an important cause area”, and Bostrom was the lead author on the unilateralist’s curse paper. Bostrom and FHI themselves describe a big part of what they do as macrostrategy. (I think they may also have coined the term, though I’m unsure.)

In that case, I stand corrected on the unilateralist's curse - I thought it was more mainstream.

I agree they're relevant to these areas! But I'm not sure that people had these areas in mind when they had these insights originally. The idea of AI as a 4th industrial revolution was pushed forward by economists, from what I can see? And then long-termists picked up the idea because of course it's relevant.

The idea of AI as a 4th industrial revolution was pushed forward by economists, from what I can see? And then long-termists picked up the idea because of course it's relevant.

My impression is that when most economists talk about AI as a 4th industrial revolution they're talking about impacts much smaller than what longtermists have in mind when they talk about "impacts at least as big as the Industrial Revolution". For example, in a public Google doc on What Open Philanthropy means by "transformative AI", Luke Muehlhauser says:

Unfortunately, in our experience, most people who encounter this definition (understandably) misunderstand what we mean by it. In part this may be due to the ubiquity of discussions about how AI (and perhaps other "transformative technologies") may usher in a "4th industrial revolution," which sounds similar to our definition of transformative AI, but (in our experience) typically denotes a much smaller magnitude of transformation than we have in mind when discussing "transformative AI."

To explain, I think the common belief is that the (first) Industrial Revolution caused a shift to a new 'growth mode' characterized by much higher growth rates of total economic output as well as other indicators relevant to well-being (e.g. life expectancy). It is said to be comparable to only the agricultural revolution (and perhaps earlier fundamental changes such as the arrival of humans or major transitions in evolution).

By contrast, the so-called second and third industrial revolution (electricity, computers, ...) merely sustained the new trend that was kicked off by the first. Hence the title of Luke Muehlhauser's influential blog post There was only one industrial revolution.

So e.g. in terms of the economic growth rate, I think economists talk about a roughly business-as-usual scenario, while longtermists talk about the economic doubling time falling from a decade to a month.
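
(As a rough back-of-the-envelope comparison, assuming constant exponential growth with doubling time T, the growth rate is about ln 2 / T: a ten-year doubling time corresponds to roughly 7% growth per year, while a one-month doubling time means output multiplies by 2^12 ≈ 4,000 over a single year.)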

Regarding timing, I also think that some versions of longtermist concerns about AI predate talk about a 4th industrial revolution by decades. (By this, I mean concerns that are of major relevance for the long-term future and meet the 'transformative AI' impact bar, not concerns by people who explicitly considered themselves longtermists or were explicitly comparing their concerns to the Industrial Revolution.) For example, the idea of an intelligence explosion was stated by I. J. Good in 1965, and people also often see concerns about AI risk expressed in statements by Norbert Wiener in 1960 (e.g. here, p. 4) or Alan Turing in 1951.

--

I'm less sure about this, but I think most longtermists wouldn't consider AI to be a competitive cause area if their beliefs about the impacts of AI were similar to those of economists talking about a 4th industrial revolution. Personally, in that case I'd probably put it below all of bio, nuclear, and climate change.
