After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do. I tried to steelman his opinions into their best version, and read his sources. For the third time, I was being pointed to Bernard Williams, so I conceded and read Bernard Williams's book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act, there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral recipients) to fare in the situation, and, once done, they issue a verdict on whether they would pull the lever or not.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well, I could analyse this from the perspective of physics, but that seems irrelevant.
I could analyse it from the perspective of biology, but that also doesn't seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. Huh, that's interesting: given that I don't know any of the minds involved, my preferences amount to a ranking of states of affairs from best to worst, where I prefer that 6 survive, then 5, and so on.
I could analyse what morality would have me do. This has two parts: 1) Does morality require of me that I do something in particular? and 2) Does morality permit me to do something from a specific (unique) set of actions?
It seems to me that morality certainly permits me to pull the lever, and possibly permits me not to as well. Does it require that I pull it? I'm not so sure. Let us assume for the time being that it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors here. So far, I have been assuming that I save them for moral reasons, so I trace those reasons back to the moral theory that would make that action permissible, and sometimes even demanded. I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian."
There is another factor, though, which is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian; maybe I just want the most minds to be happy.
I never tried to tell those apart, until Bernard Williams came knocking. He makes several distinctions that are far more fine-grained and deeper than my understanding of ethics, or than I could explain here; he writes well and knows how to play the philosopher game. Somehow, he made me see these confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "this is what is moral" and the part that says "I want there to be the most minds having the time of their lives."
After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for the most minds having the time of their lives supersedes my morals. When my mind is in conflict between those things, I will happily sacrifice the moral action and instead do the thing that makes the most minds best off.
So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:
I am an amoral Effective Altruist.
I do not help people (computers, animals, and aliens) because I think that is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty; maybe some 5% of me is a virtue ethicist or a Kantian, or holds some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, and the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
So Nakul Krishna and Bernard Williams may well have reasoned me, and in fact might already have reasoned me, out of the claim that "utilitarianism is the right way to reason morally." That deepened my understanding of morality a fair bit.
But I'd still pull that goddamn lever.
So much the worse for Morality.
Telofy: Trying to figure out the direction of the inferential gap here. Let me try to explain; I don't promise to succeed.
Aggregative consequentialist utilitarianism holds that people in general should value most minds having the time of their lives, where "in general" here actually translates into a "should" operator, a moral operator. There's a distinction between me wanting X and morality suggesting, requiring, or demanding X. Even if X is the same, different things can stand in a relation to it.
At the moment I hold both a personal preference relation and a moral one to you having a great time. But if the moral one were dropped (as Williams makes me drop several of my moral reasons), I'd still have the personal one, and it supersedes the moral considerations that could arise otherwise.
Moral Uncertainty: To confess, that was my bad, not disentangling uncertainty about my preferences that happen to be moral, my preferences that happen to coincide with preferences that are moral, and the preferences that morality would, say, require me to have. That was bad philosophy on my part, and I can see Lewis, Chalmers, and Muehlhauser blushing at my failure.
I meant the uncertainty I have, as an empirical subject, in determining which of the reasons for action I find are moral reasons or not, and, within those, which belong to which moral perspective. For instance, I assign a high credence that breaking a promise is bad from a Kantian standpoint, times a low credence that Kant was right about what is right. So not breaking a promise has a few votes in my parliament, but not nearly as many as giving a speech about EA at UC Berkeley has, because I'm confident that a virtuous person would do that, and I'm somewhat confident it is good from a utilitarian standpoint too; so, lots of votes.
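As a rough sketch of the arithmetic I have in mind here (the credences and endorsement scores below are made up for illustration, not numbers I actually hold), an action's votes in the parliament come out to something like my credence in a theory times how strongly that theory endorses the action, summed over the theories:

```python
# Hypothetical sketch of the moral-parliament vote weighting described above.
# All numbers are invented for illustration; they are not my actual credences.

# Credence that each moral theory is right.
theory_credence = {"kantian": 0.10, "virtue": 0.35, "utilitarian": 0.45}

def votes(endorsement_by_theory):
    """Weight each theory's endorsement of an action by my credence in that theory."""
    return sum(theory_credence[t] * e for t, e in endorsement_by_theory.items())

# Not breaking a promise: strongly endorsed by Kantianism, but Kantianism gets low credence.
keep_promise = votes({"kantian": 0.95, "virtue": 0.60, "utilitarian": 0.50})

# Giving an EA talk at UC Berkeley: endorsed by virtue ethics and, somewhat, by utilitarianism.
give_ea_talk = votes({"kantian": 0.50, "virtue": 0.90, "utilitarian": 0.80})

print(keep_promise, give_ea_talk)  # 0.53 vs 0.725: the talk collects more votes in this toy parliament
```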
I disagree that optimally satisfying your moral preferences equals doing what is moral. For one thing, you are not aware of all the moral preferences that, on reflection, you would agree with; for another, you could bias the intensity of your dedication such that, even though you are acting on moral preferences, the outcome is not what is moral all things considered. Furthermore, it is not obvious to me that a human is necessarily compelled to have all the moral preferences that are "given" to them. You could flat out reject 3 preferences, act on all the others, and, in virtue of that moral gap, you would not be doing what is moral, even though you are satisfying all the preferences in your moral preference class.
Nino: I'm not sure where I stand on moral realism (leaning against but weakly). The non-moral realist part of me replies:
Definitely not the same. First of all, to participate in the moral discussion, there is some element of intersubjectivity that kicks in, which outright excludes defining my moral values to a priori equate to my preferences. They may a posteriori do so, but the part where they are moral values involves clashing them against something, be it someone else, a society, your future self, a state of pain, or, in the case of moral realism, the moral reality out there.
To argue that my moral values equate to all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences, which would tile the universe with whiteboards, geniuses, ecstatic dance, cuddlepiles, orgasmium, freckles, and the feeling of water in your belly when bodysurfing a warm wave at 3pm, among other things. I don't see a problem with that, but I suppose you do, and that is why it is not moral.
If morality is intersubjective, there is a discussion to be had. If it is fully subjective, you still need to determine in which way it is subjective, what a subject is, which operations (if any) transfer moral content between subjects, what legitimizes you telling me that my morality is subjective, and finally why call it morality at all if you are just talking about subjective preferences.
Thanks for bridging the gap!
Yeah, that is my current perspective, and I’ve found no meaningful distinction that would allow me to distinguish moral from amoral preferences. What you call intersubjective is something that I consider a strategic concern that follows from wanting to realize my moral preferences. I’ve wondered whether I should count the implications of these strategic concerns into my moral category, but that seemed less parsimonious to me. I’m wary of subject...