Joe_Carlsmith

Research analyst at Open Philanthropy. Doctoral student in philosophy at the University of Oxford. Opinions my own.

Comments

The importance of how you weigh it

Glad to hear you found it helpful. Unfortunately, I don't think I have a lot to add at the moment re: how to actually pursue moral weighting research, beyond what I gestured at in the post (e.g., trying to solicit lots of your own/other people's intuitions across lots of cases, trying to make them consistent, that kind of thing). Re: articles/papers/posts, you could also take a look at GiveWell's process here, and the moral weight post from Luke Muehlhauser I mentioned has a few references at the end that might be helpful (though most of them I haven't engaged with myself). I'll also add, FWIW, that I actually think the central point in the post is more applicable outside of the EA community than inside it, as I think of EA as fairly "basic-set oriented" (though there are definitely some questions in EA where weightings matter).

Against neutrality about creating happy lives

Hi Michael — 

I meant, in the post, for the following paragraphs to address the general issue you mention: 

Some people don’t think that gratitude of this kind makes sense. Being created, we might say, can’t have been “better for” me, because if I hadn’t been created, I wouldn’t exist, and there would be no one that Wilbur’s choice was “worse for.” And if being created wasn’t better for me, the thought goes, then I shouldn’t be grateful to Wilbur for creating me.

Maybe the issues here are complicated, but at a high level: I don’t buy it. It seems to me very natural to see Wilbur as having done, for me, something incredibly significant — to have given me, on purpose, something that I value deeply. One option, for capturing this, is to say that something can be good for me, without being “better” for me (see e.g. McMahan (2009)). Another option is just to say that being created is better for me than not being created, even if I only exist — at least concretely — in one of the cases. Overall, I don’t feel especially invested in the metaphysics/semantics of “good for” and “better for” in this sort of case. I don’t have a worked out account of these issues, but neither do I see them as especially forceful reason not to be glad that I’m alive, or grateful to someone who caused me to be so.

That is, I don’t take myself to be advocating directly for comparativism here (though a few bits of the language in the post, in particular the reference to “better off dead,” do suggest that). As the quoted paragraphs note, comparativism is one option; another is to say that creating me is good for me, even if it’s not better for me (a la McMahan). 

FWIW, though, I do currently feel intuitively open/sympathetic to comparativism, partly because it seems plausible that we can truly say things like “Joe would prefer to live rather than not to live,” even if Joe doesn’t and never will exist; and it seems clear that we can truly say “Joe prefers to live” in worlds where he does exist; and I tend to think about treating people well as centrally about being responsive to what they care about/would care about. But I haven’t tried to dig in on this stuff, partly because I see things like being glad I’m alive, and grateful to someone who caused me to be so, as on more generally solid ground than things like “betterness for Joe is a relation that requires two concrete Joe lives as relata” (see e.g. the Menagerie argument in Hilary's powerpoint, p. 13, for the type of thing that makes me think that metaphysical premises like that aren't a "super solid ground" type area).

At a higher level, though: the point I’m arguing against is specifically that the neutrality intuition is directly intuitive. I don’t see it that way, and the point of “poetically tugging at people’s intuitions” was precisely to try to illustrate and make vivid the intuitive situation as I see it. But as I note at the end — e.g., “direct intuitions about neutrality aren’t the only data available” — it’s a further question whether there is more to be said for neutrality overall (indeed, I think there is — though metaphysical issues like the ones you mention aren’t very central for me here). That said, I tend to see much of person-affecting ethics as driven at least in substantial part by appeal to direct intuition, so I do think it would change the overall dialectical landscape a bit if people came in going “intuitively, we have strong reasons to create happy lives. But there are some metaphysical/semantic questions about how to make sense of this…”

Contact with reality

Thanks! Re: mental manipulation, do you have similar worries even granted that you’ve already been being manipulated in these ways? We can stipulate that there won’t be any increase in the manipulation in question, if you stay. One analogy might be: extreme cognitive biases that you’ve had all along. They just happen to be machine-imposed. 

That said, I don’t think this part is strictly necessary for the thought experiment, so I’m fine with folks leaving it out if it trips them up.

On clinging

Glad to hear you enjoyed it. 

I haven't engaged much with tranquilism. Glancing at that piece, I do think that the relevant notions of "craving" and "clinging" are similar; but I wouldn't say, for example, that an absence of clinging makes an experience as good as it can be for someone.

Actually possible: thoughts on Utopia

Thanks :). I haven't thought much about personal universes, but glancing at the paper, I'd expect resource-distribution, for example, to remain an issue.

Alienation and meta-ethics (or: is it possible you should maximize helium?)

Glad to hear it :)

Re: "my motivational system is broken, I'll try to fix it" as the thing to say as an externalist realist: I think this makes sense as a response. The main thing that seems weird to me is the idea that you're fundamentally "cut off" from seeing what's good about helium, even though there's nothing you don't understand about reality. But it's a weird case to imagine, and the relevant notions of "cut off" and "understanding" are tricky.

Alienation and meta-ethics (or: is it possible you should maximize helium?)

Thanks for reading. Re: your version of anti-realism: is "I should create flourishing (or whatever your endorsed theory says)" in your mouth/from your perspective true, or not truth-apt? 

To me Clippy's having or not having a moral theory doesn't seem very central. E.g., we can imagine versions in which Clippy (or some other human agent) is quite moralizing, non-specific, universal, etc about clipping, maximizing pain, or whatever.