Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism

by Nick_Beckstead, 18th Aug 2014


Earlier this year, I had an email conversation with Holden Karnofsky, Eliezer Yudkowsky, and Luke Muehlhauser about future-oriented effective altruism, as a follow-up to an earlier conversation Holden had with Luke and Eliezer.

The conversation is now available here. My highlights from the conversation:


NICK: I think the case for “do the most good” coinciding with “do what is best in terms of very long-term considerations” rests on weaker normative premises than your conversation suggests it does. For example, I don’t believe you need the assumption that creating a life is as good as saving a life, or a constant fraction as good as that. I have discussed a more general kind of argument—as well as some of the most natural and common alternative moral frameworks I could think of—in my dissertation (especially ch. 3 and ch. 5). It may seem like a small point, but I think you can introduce a considerable amount of complicated holistic evaluation into the framework without undermining the argument for focusing primarily on long-term considerations.

For another point, you can have trajectory changes or more severe “flawed realizations” that don’t involve extinction. E.g., you could imagine a version of climate change where bad management of the problem results in the future being 1% worse forever or you could have a somewhat suboptimal AI that makes the future 1% worse than it could have been (just treat these as toy examples that illustrate a point rather than empirical claims). If you’ve got a big enough future civilization, these changes could plausibly outweigh short-term considerations (apart from their long-term consequences) even if you don’t think that creating a life is within some constant fraction of saving a life.
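
The arithmetic behind this point can be made explicit with toy numbers (all figures below are invented for illustration, in the spirit of the "toy examples" caveat above):

```python
# Toy numbers (hypothetical): a small permanent trajectory change can swamp a
# large short-term gain once the future population is assumed to be big enough.

future_lives = 1e16        # assumed total future lives if civilization goes well
trajectory_loss = 0.01     # the future ends up 1% worse, forever
short_term_gain = 1e7      # lives-equivalent from a very large near-term win

loss_from_trajectory = trajectory_loss * future_lives   # ~1e14 lives-equivalent
print(loss_from_trajectory > short_term_gain)           # prints True
```

The comparison goes through for any sufficiently large value of `future_lives`, which is why the argument doesn't need the creating/saving multiplier: it only needs the future to be big and the change to be persistent.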

HOLDEN: On your first point - I think you're right about the *far future* but I have more trouble seeing the connection to *x-risk* (even broadly defined). Placing a great deal of value on a 1% improvement seems to point more in the direction of working toward broad empowerment/improvement, and to weigh in favor of e.g. AMF. I think I need to accept the creating/saving multiplier to believe that "all the value comes from whether or not we colonize the stars."

NICK: The claim was explicitly meant to be about "very long-term considerations." I just mean to be speaking to your hesitations about the moral framework (rather than your hesitations about what the moral framework implies).

I agree that an increased emphasis on trajectory changes/flawed realizations (in comparison with creating extra people) supports putting more emphasis on factors like broad human empowerment relative to avoiding doomsday scenarios and other major global disruptions.

ELIEZER: How does AMF get us to a 1% better *long-term* future?  Are you envisioning something along the lines of "Starting with a 1% more prosperous Earth results in 1% more colonization and hence 1% more utility by the time the stars finally burn out"?

HOLDEN: I guess so. A 1% better Earth does a 1% better job in the SWH transition? I haven't thought about this much and don't feel strongly about what I said.

ELIEZER: SWH?

HOLDEN: Something Weird Happens - Eliezer's term for what I think he originally intended Singularity to mean (or how I interpret Singularity).

(will write more later)


NICK: I feel that the space between your take on astronomical waste and Bostrom’s take is smaller than you recognize in this discussion and in discussions we’ve had previously. In the grand scheme of things, it seems the position you articulated (under the assumption that future generations matter in the appropriate way) puts you closer to Bostrom than it does to (say) 99.9% of the population. I think most outsiders would see this dispute as analogous to a dispute between two highly specific factions of Marxism or something. As Eliezer said, I think your disagreement is more about how to apply maxipok than whether maxipok is right (in the abstract).[…]

I think there’s an interesting analogy with the animal rights people. Suppose you hadn’t considered the long-run consequences of helping people so much and you became convinced that animal suffering on factory farms is of comparable importance to billions of humans being tortured and killed each year, and that getting one person to be a vegetarian is like preventing many humans from being tortured and killed. Given that you accept this conclusion, I think it wouldn’t be unreasonable for you to update strongly in favor of factory farming being one of the highest-priority areas for doing good in the world, even if you didn’t know a great deal about RFMF and so on. Anyway, it does seem pretty analogous in some important ways. This looks to me like a case where some animal rights people did something analogous to the process you critiqued and thereby identified factory farming.

HOLDEN: Re: Bostrom's essay - I see things differently. I see "the far future is extremely important" as a reasonably mainstream position. There are a lot of mainstream people who place substantial value on funding and promoting science, for that exact reason. Certainly there are a lot of people who don't feel this way, and I have arguments with them, but read as simply agreeing with me, I don't feel Bostrom's essay tells us nearly as much. I'd say it gives us a framework that may or may not turn out to be useful.

So far I haven't found it to be particularly useful. I think valuing extinction prevention as equivalent to saving something like 5*N lives (N=current global population) leads to most of the same conclusions. Most of my experience with Bostrom's essay has been people pointing to it as a convincing defense of a much more substantive position.

I think non-climate-change x-risks are neglected because of how diffuse their constituencies are (the classic issue), not so much because of apathy toward the far future, particularly not from failure to value the far future at [huge number] instead of 5*N.

NICK: […] Though I'm not particularly excited about refuges, they might be a good test case. I think that if you had this 5*N view, refuges would be obviously dumb, but if you had the view that I defended in my dissertation then refuges would be interesting from a conceptual perspective.
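
The divergence here can be made concrete with a toy cost-effectiveness calculation (every number below is hypothetical and not from the conversation):

```python
# All numbers hypothetical: how a refuge looks under the "5*N" valuation of
# extinction vs. an astronomical-waste-style valuation.

N = 8e9                  # rough current global population
p_reduction = 1e-6       # assumed drop in extinction probability from a refuge
cost = 1e9               # assumed cost of the refuge, in dollars
benchmark = 5e3          # assumed dollars per life saved by a top charity

value_5N = p_reduction * 5 * N        # extinction valued as ~5*N deaths
value_astro = p_reduction * 1e16      # extinction valued by lost future lives

cost_per_life_5N = cost / value_5N        # 25,000 $/life: worse than the benchmark
cost_per_life_astro = cost / value_astro  # 0.1 $/life-equivalent: far better
```

Under these made-up numbers the refuge loses to the charity benchmark on the 5*N view and wins overwhelmingly on the astronomical view, which is why refuges can function as a test case that separates the two positions.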


HOLDEN: One of the things I'm hoping to clarify with my upcoming posts is that my comfort with a framework is not independent of what the framework implies. Many of the ways in which you try to break down arguments do not map well onto my actual process for generating conclusions.

NICK: I'm aware that this isn't how you operate. But doesn't this seem like an "in the trenches" case where we're trying to learn and clarify our reasoning, and therefore your post would suggest that now is a good time to engage in sequence thinking?

HOLDEN: Really good question that made me think and is going to make me edit my post. I concede that sequence thinking has important advantages for communication; I also think that it COULD be used to build a model of cluster thinking (this is basically what I tried to do in my post - define cluster thinking as a vaguely specified "formula"). One of the main goals of my post is to help sequence thinkers do a better job modeling and explicitly discussing what cluster thinking is doing.

What's frustrating to me is getting accused of being evasive, inconsistent, or indifferent about questions like this far future thing; I'd rather be accused of using a process that is hard to understand by its nature (and shouldn't be assumed to be either rational or irrational; it could be either or a mix).

Anyway, what I'd say in this case is:

  • I think we've hit diminishing returns on examining this particular model of the far future. I've named all the problems I see with it; I have no more. For the moment, I concede that this model has no holes other than the ones I've identified. I've been wrong before re: thinking we've hit diminishing returns before we have, so I'm open to more questions.
  • In terms of how I integrate the model into my decisions, I cap its signal and give it moderate weight. "Action X would be robustly better if I accepted this model of the far future" is an argument in favor of action X but not a decisive one. This is the bit that I've previously had trouble defending as a principled action, and hopefully I've made some progress on that front. I don't intend this statement to cut off discussion on the sequence thinking bit, because more argument along those lines could strengthen the robustness of the argument for me and increase its weight.

HOLDEN: Say that you buy Apple stock because "there's a 10% chance that they develop a wearable computer over the next 2 years and this sells over 10x as well as the iPad has." I short Apple stock because "I think their new CEO sucks." IMO, it is the case that you made a wild guess about the probability of the wearable computer thing, and it is not the case that I did.

NICK: I think I've understood your perspective for a while, I'm mainly talking about how to explain it to people.

I think this example clarifies the situation. If your P(Apple develops a wearable computer over the next 2 years and this sells over 10x as well as the iPad has) = 10%, then you'd want to buy Apple stock. So if you short Apple stock, you're committed to P(Apple develops a wearable computer over the next 2 years and this sells over 10x as well as the iPad has) < 10%. In this sense, you often can't get out of being committed to ranges of subjective probabilities.

The way you think about it, the cognitive procedure is more like: ask a bunch of questions, give answers to the questions, give weights to your question/answer pairs, make a decision as a result. You're "relying on an assumption" only if that assumption is your answer to one of the questions and you put a lot of weight on that question/answer pair. Since you just relied on the pair (How good is the CEO?, The CEO sucks), you didn't rely on a wild guess about P(Apple develops a wearable computer over the next 2 years and this sells over 10x as well as the iPad has). And, in this sense, you can often avoid being committed to subjective probabilities.
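
The first sense of "committed to a probability" can be sketched as a tiny expected-value model (the payoffs and probabilities below are invented for illustration, not from the conversation):

```python
# Hypothetical payoffs: holding Apple returns `upside` if the wearable-computer
# scenario happens and `downside` otherwise. A decision to buy or short then
# implicitly commits you to a range of subjective probabilities, whether or
# not you ever stated one.

def expected_return(p, upside=2.0, downside=-0.2):
    """Expected return of holding the stock given P(wearable scenario) = p."""
    return p * upside + (1 - p) * downside

# Buying at p = 10% is consistent: the expected return is positive.
print(expected_return(0.10) > 0)   # prints True

# The break-even probability: in the first, "committed" sense, shorting is
# only consistent with an implicit p below this threshold.
threshold = 0.2 / (2.0 + 0.2)      # ~9.1%
print(expected_return(0.05) < 0)   # prints True: a short is consistent with p = 5%
```

On the second, procedural sense Nick describes next, the shorter never ran this calculation at all, so no wild guess about `p` was ever an input to the decision.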

When I first heard you say, "You're relying on a wild guess," my initial reaction was something like, "Holden is making the mistake of thinking that his actions don't commit him to ranges of subjective probabilities (in the first sense). It looks like he hasn't thought through the Bayesian perspective on this." I do think this is a real mistake that people make, though they may (often?) be operating more on the kind of basis you have described. I started thinking you had a more interesting perspective when, as I was pressing you on this point, you said something like, "I'm committed to whatever subjective probability I'm committed to on the basis of the decision that's an outcome of this cognitive procedure."