Max_Daniel

Project Manager for the Research Scholars Programme at FHI. Previously part of the 2018-2020 cohort of that programme and Executive Director of the Foundational Research Institute (now Center on Long-Term Risk), a project by the Effective Altruism Foundation (but I don't endorse that organization's 'suffering-focused' view on ethics).

Comments

If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities rather than to GiveWell's top charities?

Also, I'm not sure the donor lottery is a good opportunity from a long-term impact perspective. If I were a pure longtermist, I would just trust the EA LTFF.

I agree that asking whether one expects to make higher-impact grants than EA Funds is a key question here.

However, note that you retain the option to give to EA Funds if you win the donor lottery. So in this sense the donor lottery can't be worse than giving to EA Funds directly, unless you think that winning itself impairs your judgment or similar (or causes you to waste time searching for alternatives, or ...).
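To illustrate this point with a minimal sketch (the notation is mine, and it assumes a fair lottery, i.e. win probability proportional to contribution, and risk-neutral valuation): if you contribute an amount c to a pot of total size P, you win with probability c/P and can then direct the whole pot at some per-dollar value v_win, which is at least the per-dollar value of EA Funds, v_EAF, because giving the pot to EA Funds remains an option. So in expectation:

$$
\mathbb{E}[\text{value of lottery donation}]
= \frac{c}{P}\cdot P \cdot v_{\text{win}}
= c \cdot v_{\text{win}}
\;\ge\; c \cdot v_{\text{EAF}}
= \mathbb{E}[\text{value of donating } c \text{ directly to EA Funds}].
$$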

Also, I do think that at least some donors will be able to make better grants than EA Funds. Yes, EA Fund managers have more grantmaking experience. However, they are also quite time-constrained, and so a donor lottery winner may be able to invest more time per grant or per dollar granted.

In addition, donors may possess idiosyncratic knowledge that would be too costly to transfer to fund managers. For example, suppose there was a great opportunity to fund biosecurity policy work in the Philippines - it might be more likely that a member of EA Philippines hears about, and is able to evaluate, this opportunity than an EA Funds manager (e.g. because this requires a lot of background knowledge about the country). [This is a hypothetical example to illustrate the idea; I don't mean to claim that this specific scenario is likely.]

These points are also explained in more detail in the post on donor lotteries I linked to.

If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities rather than to GiveWell's top charities?

Could you clarify what you mean by "narrow impact perspective"?

That was unclear, sorry. I again meant the impact from just the funded charity's work, as opposed to effects on the donor's motivation, ability to acquire resources, etc.

If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities rather than to GiveWell's top charities?

[Giving just my impression before updating on other people's views.]

Very briefly:

  • I think donating to GiveWell top charities (and more generally donating to charities that have been selected not primarily for their long-term effects) clearly doesn't maximize long-term impact, at least at first glance. I think this is shown by arguments such as the following:
  • In some cases, there may be reasons other than the long-term effect of the funded charity's work in favor of giving to GiveWell charities. For example, perhaps this better maintains someone's motivation and altruism, thus increasing the long-term impact of their non-donation activities. Or perhaps this will better allow them to share their excitement for effective altruism with others, thus allowing them to acquire more resources, including for long-term causes.
    • However, I'm skeptical that these reasons are often decisive, except maybe in some extremely idiosyncratic cases.
  • I don't have much of a view on FP's climate change charities in particular. My best guess is they are higher-impact than GiveWell charities from a long-term perspective. However, I'd also guess there are other options that are even better from just a narrow impact perspective. Examples include:
  • It's much more plausible to me that, among options that have been selected for having 'reasonably high long-term impacts', "secondary" considerations such as the ones mentioned above can be decisive (i.e. effects on motivation or ability to promote EA, etc.).
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Yes, I meant "less than expected". 

Among your three points, I believe something like 1 (for an appropriate reference class to determine "typical", probably something closer to 'early-stage fields' than 'all fields'). Though not by a lot, and I also haven't thought that much about how much to expect, and could relatively easily be convinced that I expected too much.

I don't think I believe 2 or 3. I don't have much specific information about assumptions made by people who advocated for or funded macrostrategy research, but a priori I'd find it surprising if they had made these mistakes to a strong extent.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

My quick take:

  • I agree with other answers that in terms of "discrete" insights, there probably wasn't anything that qualifies as "major" and "novel" according to the above definitions.
  • I'd say the following were the three major broader developments, though it's unclear to what extent they were caused by macrostrategy research narrowly construed:
    • Patient philanthropy: significant development of the theoretical foundations and some practical steps (e.g. the Founders Pledge research report on potentially setting up a long-term fund).
      • Though the idea and some of the basic arguments probably aren't novel, see this comment thread below.
    • Reduced emphasis on a very small list of "top cause areas". (Visible e.g. here and here, though of course there must have been significant research and discussion prior to such conclusions.)
    • Diversification of AI risk concerns: less focus on "superintelligent AI kills everyone after rapid takeoff because of poorly specified values" and more research into other sources of AI risk.
      • I used to think there was less actual (as opposed to publicly visible) change, and that, to the extent there was change, it was less due to new research. But it seems that a perception of significant change is more common.

In previous personal discussions, I think people have made fair points that my bar may be generally unreasonable: it's the default for any research field that major insights don't appear out of nowhere, and it's almost always possible to find similar previous ideas. In other words, research progress is the cumulative effect of many small new ideas and refinements of them.

I think this is largely correct, but that it's still correct to update negatively on the value of research if past progress has scored less well on the dimensions of being major and novel. However, overall I'm now most interested in the sort of question asked here as a way to better understand what kind of progress we're aiming for, rather than as a way to assess the total value of a field.

FWIW, here are some suggestions for potential "major and novel" insights others have made in personal communication (not necessarily with a strong claim by the source that they meet the bar; also, in some discussions I might have phrased my questions a bit differently):

  • Nanotech / atomically precise manufacturing / grey goo isn't a major x-risk
    • [NB I'm not sure that I agree with APM not being a major x-risk, though 'grey goo' specifically may be a distraction. I do have the vague sense that some people in, say, the 90s or until the early 2010s were more concerned about APM than the typical longtermist is now.]
    • My comments were: 
      • "Hmm, maybe though not sure. Particularly uncertain whether this was because new /insights/ were found or just due to broadly social effects and things like AI becoming more prominent?"
      • "Also, to what extent did people ever believe this? Maybe this one FHI survey where nanotech was quite high up the x-risk list was just a fluke due to a weird sample?"
    • Brian Tomasik pointed out: "I think the nanotech-risk orgs from the 2000s were mainly focused on non-grey goo stuff: http://www.crnano.org/dangers.htm"
  • Climate change is an x-risk factor
    • My comment was: "Agree it's important, but is it sufficiently non-obvious and new? My prediction (60%) is that if I asked Brian [Tomasik] when he first realized that this claim is true (even if perhaps not using that terminology) he'd point to a year before 2014."
  • We should build an AI policy field
    • My comment was: "[snarky] This is just extremely obvious unless you have unreasonably high credence in certain rapid-takeoff views, or are otherwise blinded by obviously insane strawman rationalist memes ('politics is the mind-killer' [aware that this referred to a quite different dynamic originally], policy work can't be heavy-tailed [cf. the recent Ben Pace vs. Richard Ngo thing]). [/snarky]
    • I agree that this was an important development within the distribution of EA opinions, and has affected EA resource allocation quite dramatically. But it doesn't seem like an insight that was found by research narrowly construed, more like a strategic insight of the kind business CEOs will sometimes have, and like a reasonably obvious meme that has successfully propagated through the community."
  • Surrogate goals research is important
    • My comment was: "Okay, maaybe. But again 70% that if I asked Eliezer when he first realized that surrogate goals are a thing, he'd give a year prior to 2014."
  • Acausal trade, acausal threats, MSR, probable environment hacking
    • My comment was: "Aren't the basic ideas here much older than 5 years, and specifically have appeared in older writings by Paul Almond and have been part of 'LessWrong folklore' for a while?Possible that there's a more recent crisp insight around probable environment hacking -- don't really know what that is."
  • Importance of the offense-defense balance and security
    • My comment was: "Interesting candidate, thanks! Haven't sufficiently looked at this stuff to have a sense of whether it's really major/important. I am reasonably confident it's new."
      • [Actually, I'm now a bit puzzled why I wrote that last sentence. Seems new at most in the sense of "popular/widely known within EA"?]
  • Internal optimizers
    • My comment was: "Also an interesting candidate. My impression is to put it more in the 'refinement' box, but that might be seriously wrong because I think I get very little about this stuff except probably a strawman of the basic concern."
  • Bargaining/coordination failures being important
    • My comment was: "This seems much older [...]? Or are you pointing to things that are very different from e.g. the Racing to the Precipice paper?"
  • Two-step approaches to AI alignment
    • My comment was: "This seems kind of plausible, thanks! It's also in some ways related to the thing that seems most like a counterexample to me so far, which is the idea of a 'Long Reflection'. (Where my main reservation is whether this actually makes sense / is desirable [...].)"
  • More 'elite focus'
    • My comment was: "Seems more like a business-CEO kind of insight, but maybe there's macrostrategy research it is based on which I'm not aware of?"
Nuclear war is unlikely to cause human extinction

The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically.

This agrees with my impression, and I do think it's valuable to correct this misconception. (Sorry, I think it would have been better and clearer if I had said this in my first comment.) This is why I favor work with somewhat changed messaging/emphasis over no work.

It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity's potential / eventual extinction.

I'm not sure we disagree. My current best guess is that most plausible kinds of civilizational collapse wouldn't be an existential risk, including collapse caused by nuclear war. (For basically the reasons you mention.) However, I feel way less confident about this than about the claim that nuclear war wouldn't immediately kill everyone. In any case, my point was not that I in fact think this is likely, but just that it's sufficiently non-obvious that it would be costly if people walked away with the impression that it's definitely not a problem.

I'm planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.

This sounds like a very valuable topic, and I'm excited to see more work on it. 

FWIW, my guess is that you're already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it'd be good to describe in detail "here is how this combination of different hazards could kill everyone"]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I'd be happy to review a post prior to publication, though I'm not sure if I'm particularly qualified.)

Nuclear war is unlikely to cause human extinction

I agree that nuclear war - and even nuclear winter - would be very unlikely to directly cause human extinction. My loose impression is that other EAs who have looked into this agree as well.

However, I'm not sure if it's good to publicize work on existential risk from nuclear war under this headline, and with this scope. Here is why:

  • You only discuss whether nuclear war would somewhat directly cause human extinction - i.e. by either immediately killing everyone, or causing everyone to starve within, say, the next 20 years. However, you don't discuss whether nuclear war could cause a trajectory change of human civilization that makes it more vulnerable to future existential risk. For example, if nuclear war caused an irrecoverable loss of post-industrial levels of technology, that would arguably constitute an existential catastrophe in itself (by basically removing the chance of close-to-optimal futures) and would also make humanity more vulnerable to natural extinction risk (e.g. it could no longer do asteroid deflection). FWIW, I think the example I just gave is fairly unlikely as well; my point here is just that your post doesn't tell us anything about such considerations. It would be entirely consistent with all the evidence you present to think that nuclear war is a major indirect existential risk (in the sense just discussed).
  • For this reason, I disagree in particular "that the greatest existential threat from nuclear war appears to be from climate impacts" (as you say in the conclusion). I think that the greatest existential threat from nuclear war may in fact be a negative trajectory change precipitated by 'the collapse of civilization', though we don't really know how likely that is or whether such a change would in fact be negative on extremely long timescales.
  • (Note I'm not intending for this to just be a special case of the true but somewhat vacuous general claim that, for all we know, literally any event could cause a negative or positive trajectory change. The point is that the unprecedented damage caused by large-scale nuclear war seems unusually likely to cause a trajectory change in either direction.)
  • [Less important:] I'm somewhat less optimistic than you about point 3C, i.e. nuclear war planners being aware of nuclear winter. I agree they are aware of the risk. However, I'm not sure if they have incentives to care. They might not care if they view "large-scale nuclear war that causes all our major cities to be destroyed" or "nuclear war that leads to a total defeat by an adversary" as essentially the worst possible outcomes, which seems at least plausible to me. Certainly I think they won't care about the risk as much as a typical longtermist - from an impartial perspective, even a, say, 1% risk of nuclear winter would be very concerning, whereas it could plausibly be a minor consideration when planning nuclear war from a more parochial perspective. Perhaps even more importantly, even if they did care as much as a longtermist, it's not clear to me if the strategic dynamics allow them to adjust their policies. For example, a nuclear war planner may well think that only a 'countervalue' strategy of targeting adversaries' population centers has a sufficient deterrence effect.

So overall I think our epistemic situation is: we know that one type of existential risk from nuclear war is very small, but we don't really have a good idea of how large the total existential risk from nuclear war is. It's of course fine, and often a good idea for tractability or presentation reasons, to focus on only one aspect of a problem. But given this epistemic situation, I think the costs of spreading a message that can easily be rounded off to "nuclear war isn't that dangerous [from a longtermist perspective]" are high, particularly since perceptions that nuclear war would be extremely bad may be partly causally responsible for the fact that we haven't yet seen one.

Note I'm not claiming that this post by itself has large negative consequences. No nuclear power is going to change its policies because of an EA Forum post. But I'd be concerned if there was a growing body of EA work with messaging like this. For future public work I'd feel better if the summary was more like "nuclear war wouldn't kill every last human within a few decades, but is still extremely concerning from both a longtermist and a present-generation perspective" + some constructive implications (e.g. perhaps a greater focus on how to make post-collapse recovery more likely and go well).

Nuclear war is unlikely to cause human extinction

There are four different spellings of 'Reisner' (which is correct) in this paragraph:

Alan Robock’s group published a paper in 2007 that found significant cooling effects even from a relatively limited regional war. A group from Los Alamos, Reisner et al, published a paper in 2018 that reexamined some of the assumptions that went into Robock et al’s model, and concluded that global cooling was unlikely in such a scenario. Robock et al. responded, and Riesner et al responded to the response. Both authors bring up good points, but I find Rieser’s position more compelling. This back and forth is worth reading for those who want to investigate deeper. Unfortunately Reiser’s group has not published an analysis on potential cooling effects from a modern full scale nuclear exchange, rather than a limited regional exchange. Even so, it’s not hard to extrapolate that Reiser’s model would result in far less cooling than Robock’s model in the equivalent situation.

Nuclear war is unlikely to cause human extinction

Two different analyses are required to calculate the chances of human extinction from nuclear winter. The first is the analysis of the climate change that could result from a nuclear war, and the second is the adaptive capacity of human groups to these climate changes. I have not seen an in depth analysis of the former, but I believe such an assessment would be worthwhile.  

Do you mean "I have not seen an in depth analysis of the latter"? I.e. humans' adaptive capacity?
