Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
Karma: 2644 · Joined Dec 2018

Bio

Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/

Comments (174)

OK, thanks for clarifying! I guess there's a bit of ambiguity surrounding talk of "the goal of longtermists in the political sphere", so maybe worth distinguishing immediate policy goals that could be implemented right away, vs. external (e.g. "consciousness-raising") advocacy aimed at shifting values.

It's actually an interesting question when policymakers can reasonably go against public opinion. It doesn't seem necessarily objectionable (e.g. to push climate protection measures that most voters are too selfish or short-sighted to want to pay for). There's a reason we have representative rather than direct democracy. But the key thing about your definition of "democratically unacceptable" is that it specifies the policy could not possibly be maintained, which more naturally suggests a feasibility objection than a moral one, anyhow.

But I'm musing a bit far afield now.  Thanks for the thought-provoking paper!

I like the central points that (i) even weak assumptions suffice to support catastrophic risk reduction as a public policy priority, and (ii) it's generally better (more effective) to argue from widely-accepted assumptions than from widely-rejected ones.

But I worry about the following claim:

There are clear moral objections against pursuing democratically unacceptable policies

This seems objectionably conservative, and would seem to preclude any sort of "systemic change" that is not already popular. Closing down factory farms, for example, is clearly "democratically unacceptable" to the current electorate. But it would be ridiculous to claim that there are "clear moral objections" to vegan abolitionist political activism.

Obviously the point of such advocacy is to change what is (currently) regarded as "democratically (un)acceptable". If the advocacy succeeds, then the result is no longer democratically unacceptable.  If the advocacy fails, then it isn't implemented.  In neither case is there any obvious moral objection to advocating, within a democracy, for what you think is true and good.

Answer by Richard Y Chappell, Mar 13, 2023

Because I think positive well-being is good, rather than neutral. (This strikes me as more plausible in principle, and also when reflecting on specific cases, e.g. a fantastic life involving a few minor harms.)

Thanks, I appreciate the clarification. (I agree that a general advantage of having a more diverse/"big tent" coalition is that different ppl/perspectives may be more or less likely to pick up on different potential problems.)

This is a strange and unhelpful-seeming comment. Obviously nothing I wrote should be read as denying that EAs are politically diverse (generic references to "EAs" should always be read as implicitly preceded by the word "many").

I'd like to see more folks from across the political spectrum be happily involved in EA.

Things I don't like so much:*

  • Gratuitous disrespect, e.g. through deliberately mis-naming your interlocutors.
  • The apparent assumption that anyone who isn't a leftist must be a libertarian. (Is Joe Biden a libertarian too?)
  • Employing guilt-by-association tactics, and trying to pick a fight about which subgroups are collectively the worst.

The latter is the worst offense, IMO, and illustrates precisely the kind of tribal/politicized thinking that I strongly hope is never accepted in EA. I'd much prefer a "big tent" where folks with different views respectfully offer object-level arguments to try to persuade each other to change their minds, rather than this kind of rhetorical sniping. (Seriously, what good do you imagine the latter will achieve?)

Note that my complaint about "Doing EA Lefter" is not that I've anything against people trying to argue for views further left than mine -- by all means, feel free!  My concern was that their recommendations seemed to be presupposing leftism, and brutely commanding others to agree, rather than providing object-level arguments that might persuade the rest of us.

* = (I guess I also think it's bad form to create a burner account for the sole purpose of writing a comment with those other bad features.)

The paradox of open-mindedness

We want to be open-minded, but not so open-minded that our brains fall out.  So we should be open to high-quality critiques, but not waste our time on low quality ones.  My general worry with this post is that it doesn't distinguish between the two.  There seems a background assumption that EAs dismiss anti-capitalist or post-colonial critiques because we're just closed-minded, rather than because those critiques are bad.  I'm not so sure that you can just assume this!

Doing EA Lefter?

Another general worry I have about "Doing EA Better", and perhaps especially this post, is the extent to which it seems to be implicitly pushing an agenda of "be more generically leftist, and less analytical".  If my impression here is mistaken, feel free to clarify this (and maybe add more political diversity to your list of recommended "deep critiques" -- should we be as open to Hanania's "anti-woke" stuff as to Crary et al?).

Insofar as the general message is, in effect, "think in ways that are less distinctive of EA", whether this is good or bad advice will obviously depend on whether EA-style thinking is better or worse than the alternatives. Presumably most of us are here because we think it's better.  So that makes "be less distinctively EA" a hard sell, especially without firm evidence that the alternatives are better.

Some of this feels to me like, "Stop being you!  Be this other person instead." I don't like this advice at all.

I wonder if it's possible to separate out some of the more neutral advice/suggestions from the distracting "stop thinking in traditional analytic style" advice?

Thanks!  Yes, agreed it's an open empirical question how well people (in general, or particular individuals) can pull off the specified options.

I wouldn't be terribly surprised if something like (2) turned out to be best for most people most of the time. But I guess I'm sufficiently Aristotelian to think that if we're raised since childhood to abide by good norms, later learning that they're instrumentally justified shouldn't really undermine them that much. (They certainly haven't for me--my wife finds it funny how strongly averse I am to any kind of dishonesty, "despite" my utilitarian beliefs!)

Fair enough, but that's the purpose of the recent changes to the 'community' tab, right?

I would guess that they're people, and I always prefer for people to have a more accurate impression of valuable ideas (all else equal). Some might then decide to learn more about those ideas, and act upon them in valuable ways.

(I'm not suggesting that anyone prioritize this sort of community-building over other work that may be more pressing for them. But it seems weird to dismiss it entirely.)

Avoiding complicity (whatever that amounts to) also isn't literally the most important thing in the world. Note that even most deontologists reject "though the heavens fall" absolutism.
