Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
4709 karma · Joined Dec 2018

Bio

Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/

Comments (283)

Realistically, it is almost never in an academic's professional interest to write a reply paper (unless they are completely starved of original ideas). Referees are fickle, and if the reply isn't accepted at the original journal, very few other journals will even consider it, making it a bad time investment. (A real "right of reply" -- where the default expectation switches from 'rejection' to 'acceptance' -- might change the incentives here.)

Example: early in my career, I wrote a reply to an article that was published in Ethics. The referees agreed with my criticisms, and rejected my reply on the grounds that this was all obvious and the original paper never should have been published. I learned my lesson and now just post replies to my blog since that's much less time-intensive (and probably gets more readers anyway).

Here's how I picture the axiological anti-realist's internal monologue: 

"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."

By contrast, here's how I picture the axiological realist:

"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."

Right, this tendentious contrast is just what I was objecting to. I could just as easily spin the opposite picture:

(1) A possible anti-realist monologue: "I find myself with some liberal intuitions; I also have various axiological views. Upon reflection, I find that I care more about preventing suffering (etc.) than I do about abstract tolerance or respect for autonomy, and since I'm an anti-realist I don't feel compelled to abide by norms constraining my pursuit of what I most care about."

(2) A possible realist monologue: "The point of liberal norms is to prevent one person from imposing their beliefs on others. I'm confident about what the best outcomes would be, considered in abstraction from human choice and agency, but since it would be objectively wrong and objectionable to pursue these ends via oppressive or otherwise illicit means, I'll restrict myself to permissible means of promoting the good. There's no tension here."

The crucial question is just what practical norms one accepts (liberal or otherwise). Proposing correlations between other views and bad practical norms strikes me as an unhelpful -- and rather bias-prone -- distraction.

Thanks for writing this! I find it really striking how academic critics of longtermism (both Thorstad and Schwitzgebel spring to mind here) don't adequately consider model uncertainty. It's something I also tried to flag in my old post on 'X-risk agnosticism'.

Tarsney's epistemic challenge paper is so much better, precisely because he gets into higher-order uncertainty (over possible values for the crucial parameter "r", which includes the risk of extinction persisting into the far future despite our best efforts).

In general (whether realist or anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.

You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human beings evidently aren't! It's a notorious problem for anti-realism to explain how it doesn't just end up rubber-stamping any values whatsoever, even authoritarian ones.

Moral realists can hold that liberal tolerance is objectively required as a practical norm, which seems more robustly constraining than just holding it as a personal preference. So the suggestion that "moral realism" is "problematic" here strikes me as completely confused. You're implicitly comparing a realist authoritarian with an anti-realist liberal, but all the work is being done by the authoritarian/liberal contrast, not the realist/antirealist one. If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.

That said, I very much agree about the "weirdness" of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat. But I think that just reinforces my alternative response that empirical uncertainty vs overconfidence is the real issue here. (Either that, or -- in some conceivable cases, like an authoritarian AI -- a lack of sufficient respect for the value of others' autonomy. But the problem with someone who wrongly disregards others' autonomy is not that they ought to be "morally uncertain", but that they ought to positively recognize autonomy as a value. That is, they problematically lack sufficient confidence in the correct values. It's of course unsurprising that having bad moral views would be problematic!)

We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I'm much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.

I agree it'd be fun for us to explore the disagreement further sometime!

This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).

I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)

A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for trying out untested junior researchers. But you should be aware that they also filter out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a "careerist" in the derogatory sense.

I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.

I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.

I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative fears of this sort.

I just think it's very obvious that if you're worried about naive instrumentalism, the (morally and intellectually) correct response is to warn against naive instrumentalism, not other (intrinsically innocuous) views that you believe to be correlated with the mistake.

[See also: The Dangers of a Little Knowledge, esp. the "Should we lie?" section.]

fwiw, I wouldn't generally expect "high confidence in utilitarianism" per se to be any cause for concern. (I have high confidence in something close to utilitarianism -- in particular, I have near-zero credence in deontology -- but I can't imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior, along with warning against both "EA exceptionalism" and SBF-style empirical overconfidence (in his ability to navigate risk, secure lasting business success without professional accounting support or governance, etc.).

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

[See also: 'The Abusability Objection' at utilitarianism.net]

Yes, I agree it seems important to have marketers and PR people to craft persuasive messaging for mass audiences. That's not what I'm trying to do here, and nor do I think it would make any sense for me to shift into PR -- it wouldn't be a good personal fit. My target audience is academics and "academic-adjacent" readers, and as a philosopher my goal is to make clear what's philosophically justified, not to manipulate anyone through non-rational means. I think this is an important role, for reasons explained in some of the footnotes to my posts there. But I agree it's not the only important role, and it would plausibly be good for EA to additionally have more mass-market appeal. It takes all sorts.

fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by their bad views.

I did not mean the reference to Trace to function as a conversation opener. (Quite the opposite!) I've now edited my original comment to clarify the relevant portion of the tweet. But if anyone wants to disagree with Trace, maybe start a new thread for that rather than replying to me. Thanks!
