On the first point, you're right, I should have phrased this differently: it's not that those passages imply that impartiality entails consequentialism ("an act is right iff it brings about the best consequences"). What I should have said is that they seem to imply that impartiality at a minimum entails strong forms of consequence-focused impartiality, i.e. the impartiality component of (certain forms of) consequentialism ("impartiality entails that we account for all moral patients, and all the most significant impacts"). My point was that that's not the case: there are forms of impartiality that don't entail this, both weaker consequence-focused notions of impartiality and more rule-based notions (and so on), and these can be relevant to, and potentially help guide, ethics in general and altruism in particular.
Can you say more why you think it's very strong?
I think it's an extremely strong claim for two reasons. First, there's a broad set of alternative views that could potentially justify varieties of impartial altruism and work on EA causes, other than very strong forms of consequence-focused impartiality that require us to account for ~all consequences till the end of time. Second, the claim isn't just that all those alternative views are somewhat implausible, but that they are all wholly implausible (as seems implied by their exclusion and dismissal in passages like "impartial altruism would lose action-guiding force").
One could perhaps make a strong case for that claim, and maybe most readers on the EA Forum endorse that strong claim. But I think it's an extremely strong claim nevertheless.
At a conceptual level, I think it's worth clarifying that "impartiality" and "impartial altruism" do not imply consequentialism. For example, the following passages seem to use these terms as though they must imply consequentialism. [Edit: Rather, these passages seem to use the terms as though "impartiality" and the like must be focused on consequences.]
impartiality entails that we account for all moral patients, and all the most significant impacts we could have on them. ...
Perhaps it’s simply indeterminate whether any act has better expected consequences than the alternatives. If so, impartial altruism would lose action-guiding force — not because of an exact balance among all strategies, but because of widespread indeterminacy.
Yet there are forms of impartiality and impartial altruism that are not consequentialist in nature [or focused on consequences]. For example, one can be a deontologist who applies the same principles impartially toward everyone (e.g. be an impartial judge in court, treat everyone you meet with the same respect and standards). Such impartiality does not require us to account for all the future impacts we could have on all beings. Likewise, one can be impartially altruistic in a distributive sense — e.g. distributing a given resource equally among reachable recipients — which again does not entail that we account for all future impacts.
I don't think this is merely a conceptual point. For example, most academic philosophers, including academic moral philosophers, are not consequentialists, and I believe many of them would disagree strongly with the claim that impartiality and impartial altruism imply consequentialism.[1] Similarly, while most people responding to the EA survey of 2019 leaned toward consequentialism, it seems that around 20 percent of them leaned toward non-consequentialism, and presumably many of them would also disagree with the above-mentioned claim.
Furthermore, as hinted in another comment, I think this point matters because it seems implied a number of times in this sequence that if we can't ground altruism in very strong forms of consequentialist impartiality, then we have no reason to be altruists and impartial altruism cannot guide us (e.g. "if my arguments hold up, our reason to work on EA causes is undermined"; "impartial altruism would lose action-guiding force"). Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
They'd probably also take issue with defining an "impartial perspective" as one that is consequentialist: "one that gives moral weight to all consequences, no matter how distant". That seems to define away other kinds of impartial perspectives.
"reliably" doesn't mean "perfectly"
Right, I guess within my intuitive conceptions and associations, it's more like a spectrum, with "perfectly" being the very strongest, "reliably" being somewhere in between, and something like "the tiniest bit better than chance" being the weakest. I suspect many would endorse ~the latter formulation without endorsing anything quite as strong as "reliably".
To be clear, I don't think this is a matter of outright misrepresenting others' views; I just suspect that many, maybe most, of those who hold a contrary view would say that those specific descriptions are not particularly faithful or accurate framings of their views, even if certain sections do frame and address things differently.
Yeah, my basic point was that just as I don't think we need to ground a value like "caring for those we love" in whether it has the best consequences across all time and space, I think the same applies to many other instances of caring for and helping individuals — not just those we love.
For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that's a reasonable practical stance, and I think the spirit of this stance applies to many ways in which we can and do benefit strangers, not just to rare emergencies.
In other words, I was just trying to say that when it comes to reasonable values aimed at helping others, I don't think it's a case of "it must be grounded in strong impartiality or bust". Descriptively, I don't think that reflects virtually anyone's actual values or revealed preferences, and I don't think it's reasonable from a prescriptive perspective either (e.g. I don't think it's reasonable or defensible to abstain from helping a tormented stranger based on cluelessness about the large-scale consequences).
Why would sentient beings' interests matter less intrinsically when those beings are more distant or harder to precisely foresee?
I agree with that sentiment :) But I don't think one would be committed to saying that distant beings' interests matter less intrinsically if one "practically cares/focuses" disproportionately on beings who are in some sense closer to us (e.g. as a kind of mid-level normative principle or stance). The latter view might simply reflect the fact that we inhabit a particular place in time and space, and that we can plausibly better help beings in our vicinity (e.g. over the next few thousand years) compared to those who might exist very far away (e.g. beyond a trillion years from now), without there being any sharp cut-off in our potential to help them.
FWIW, I don't think it's ad hoc or unmotivated. As an extreme example, one might consider a planet with sentient life that theoretically lies just inside our future light cone from time t_now, such that if we travelled out there today at the theoretical maximum speed, then we, or meaningful signals, could reach them just before cosmic expansion makes any further reach impossible. In theory, we could influence them, and in some sense merely wagging a finger right now has a theoretical influence on them. Yet it nevertheless seems to me quite defensible to practically disregard (or near-totally disregard, à la asymptotic discount) these effects given how remote they are (assuming a CDT framework).
Perhaps such a position can be viewed through the lens of an "applicability domain": to a first approximation, the ideal of total impartiality is plausibly "practically morally applicable" across all of Earth, and on and somewhat beyond our usual timescales. And we are right to strongly endorse it at this unusually large scale (i.e. unusual relative to prevailing values). But it also seems plausible that its applicability gradually breaks down as we approach more extreme scales.
Indeed, bracketing off "infinite ethics shenanigans" could be seen as an implicit acknowledgment of such a de facto breakdown or boundary in the practical scope of impartiality. After all, there is a non-zero probability of an infinite future with sentient life, even if that's not what our current cosmological models suggest (cf. Schwitzgebel's Washout Argument Against Longtermism). Thus, it seems that if we keep infinite outcomes from dominating everything, we have already set some kind of practical boundary (even if it's a boundary of asymptotic convergence toward zero across an in-theory infinite scope). If so, the question is how to clarify the nature and scope of that practical boundary, not whether there is one.
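To give a toy illustration of what such an asymptotic boundary could look like (this is just my own sketch, with $\tau$ an arbitrary timescale parameter, not anything proposed in the sequence): a discount weight can be strictly positive at every future time and yet have a finite total, so that no infinitely distant stretch of time ends up dominating:

$$w(t) = \frac{1}{(1 + t/\tau)^{2}}, \qquad w(t) > 0 \ \text{for all } t \ge 0, \qquad \int_{0}^{\infty} w(t)\, dt = \tau < \infty.$$

Here the boundary is not a sharp cutoff: every time receives some weight, but the weight converges toward zero quickly enough that the total remains bounded.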
One might then say that infinite ethics considerations indeed count as an additional, perhaps even devastating, challenge to any form of impartial altruism. But in that case, the core objection reduces to a fairly familiar objection about problems with infinities. If we instead assume that infinities can be set aside or practically limited, then it seems we have already de facto assumed some practical boundary.
I use "impartiality" loosely, in the sense in the first sentence of the intro: "gives moral weight to all consequences, no matter how distant".
Thanks for clarifying. :)
How about views that gradually discount at the normative level based on temporal distance, along the lines of the weight function sketched below? They would give weight to consequences no matter how distant, and still give non-trivial weight to fairly distant consequences (by ordinary standards), yet the weight would go to zero as the distance grows. If normative neartermism is largely immune to your arguments, might such "medium-termist" views largely withstand them as well?
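(As a rough sketch of the kind of shape I have in mind, with both the functional form and the timescale $\tau$ chosen purely for illustration:

$$w(t) = \frac{1}{1 + (t/\tau)^{2}}$$

With $\tau$ set to, say, a few thousand years, such a weight stays close to 1 for times much shorter than $\tau$, is still one half at $t = \tau$, and falls to roughly one percent at $t = 10\tau$, approaching zero only at far greater distances.)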
(FWIW, I think views of that kind might actually be reasonable, or at least deserve some weight, in terms of what one practically cares about and focuses on — in part for the very reasons you raise.)
I meant "the reason to work on such causes that my target audience actually endorses."
I suspect there are many people in your target audience who don't exclusively endorse, or strictly need to rely on, the views you critique as their reason to work on EA causes (I guess I'm among them).
Toward the very end, you write:
“But what should we do, then?” Well, we still have reason to respect other values we hold dear — those that were never grounded purely in the impartial good in the first place. Integrity, care for those we love, and generally not being a jerk, for starters. Beyond that, my honest answer is: I don’t know.
You obviously don't exclude the following, but I would strongly hope that — beyond just integrity, care for those we love, and not being a jerk — we can also at a minimum endorse a commitment to reducing overt and gratuitous suffering taking place around us, even if it might not be the single best thing we can do from the perspective of perfect impartiality across all space and time. This value seems to me to be on a similarly strong footing to the other three you mention, and it doesn't seem to stand or fall with perfect [or otherwise very strong] cosmic impartiality. I suspect you agree with its inclusion, but I feel it deserves emphasis in its own right.
Relatedly, in response to this:
Ask yourself: Does “this strategy seems good when I assume away my epistemic limitations” have the deep moral urgency that drew you to EA in the first place?
I would say "yes", e.g. if I replace "this strategy" with something like "reducing intense suffering around me seems good [even] when I assume away my epistemic limitations [about long-term cosmic impacts]”. That does at least carry much of the deep moral urgency that motivates me. I mean, just as I can care for those I love without needing to ground it in perfect cosmic impartiality, I can also seek to reduce the suffering of other sentient beings without needing to rely on a maximally impartial perspective.
I should probably have made it clearer that this isn't an objection, and maybe not even much of a substantive point, but more just a remark on something that stood out to me while reading: namely, that the views being critiqued often seemed phrased in much stronger terms than those who hold them would necessarily agree with.
Some of the examples that stood out were those I included in quotes above.
To clarify, what I object to here is not a claim like "very strong consequence-focused impartiality is most plausible all things considered", or "alternative views also have serious problems". What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of "it's either very strong consequence-focused impartiality or total bust" when it comes to working on EA causes/pursuing impartial altruism in some form.