
mm6
46 karma · Joined Jun 2022
mandel.substack.com/

Posts (1)

35 · mm6 · 2y ago · 26m read

Comments (5)

This might be the best feedback I've ever gotten on a piece of writing (On the Philosophical Foundations of EA). Thanks for reading so many entries and helping make the contest happen!

Wow, that's a fascinating connection/parallel – thank you so much for sharing! Anything else you'd recommend reading in that literature? Am very curious about any other similarities between Madhyamaka Buddhism and Kantian thought.

Also, regarding persuading non-consequentialists on their own terms, I've long been meaning to write a post tentatively titled "Judicious Duty: Effective Altruism for Non-Consequentialists", so this is giving me additional motivation to eventually do so :)

That sounds super interesting – definitely write it! If you ever want someone to read a draft or something, shoot me a dm!

Thanks for reading – pretty cool to get an academic philosopher's perspective! (I also really enjoyed your piece on Nietzsche!)

These are problems of applied beneficence. Unless your moral theory says that consequences don't matter at all (which would be, as Rawls himself noted, "crazy"), you'll need answers to these questions no matter what ethical-theory tradition you're working within.

I think this is right, but I'd argue that though all of the theories have to deal with the same questions, the answers will depend on the specific infrastructure each theory offers. A Kantian, for example, has access to the intent/foresight distinction when weighing beneficence in a way the consequentialist does not. Similarly, whether your ethical theory treats personhood as an important concept might dictate whether death itself is a harm or only pain counts as one.

(3) re: Harsanyi / "Each of the starting questions I've imagined clearly load the deck in terms of the kinds of answers that are conceptually viable." This seems easily avoided by instead asking which world one would rationally prefer from behind the veil of ignorance. (Whole possible worlds build in all the details, so do not artificially limit the potential for moral assessment in any way.)

I like this response, but I think that in broadening the scope of the question, you make it harder to reach the conclusion. Without already accepting consequentialism, it's not clear that I'd optimize the world I'm designing primarily along welfare considerations rather than along any other possible value system.

(4) "Morality is at its core a guide for individuals to choose what to do." Agreed!  I'd add that noting the continuity between ethical choice and rational choice more broadly is something that strongly favours consequentialism.

And from the link:

As Scheffler (1985) argued, rational choice in general tends to be goal-directed, a conception which fits poorly with deontic constraints. A deontologist might claim that their goal is simply to avoid violating moral constraints themselves, which they can best achieve by not killing anyone, even if that results in more others being killed...

Scheffler's challenge remains that such a proposal makes moral norms puzzlingly divergent from other kinds of practical norms. If morality sometimes calls for respecting value rather than promoting it, why is the same not true of prudence?

I think a Kantian would respond that what constitutes succeeding at a goal in any practical domain is dictated by the nature of the activity. If your goal is to win a basketball game, you cannot simply manipulate the scoreboard at the end so that it says you have a higher score than your opponent: you must get the ball in the hoop more times than the opposing team (modulo the complexity of free throws and three-point shots) while abiding by the rules of the game. The ways in which you can successfully achieve the goal are inherently constrained.

Furthermore, prudence seems like a bad case to consider because we do not automatically take prudential reasoning to be normative. We can reason instrumentally about how to achieve an end, but the fact that certain means would help us get that end does not imply that we ought to take those means – we need a way to reason about ends themselves.

Thanks for the thoughtful response! :) 

Appreciate the kind words! 

Re: how EA considerations would change under different ethical theories: at the end of the piece I gesture towards the idea that a philosophically entrepreneurial EA might work out a system under which the numbers matter for Kantians when enacting the duty of beneficence. This Kantian EA would look a lot like any other EA in caring about maximizing QALYs when doing their charitable giving, except that they might philosophically object to ever deliberately inflicting harm in order to help others (though they might accept merely foreseen harms). So definitely no murderous organ harvesting or any similar scenario that would have you use people as a means to maximizing utility (obviously not something any EAs are advocating for, but something that straight consequentialism could theoretically require).

Conversely (and very speculatively), as I mention in the piece, Kantian EAs might prioritize meat production over the harvesting of animal products as a cause in light of the intent/foresight distinction.

And even Kantianism aside, I think that EAs could potentially make conversations around applied ethics more productive by grounding the conversation in a foundational ethical theory instead of merely exchanging intuition pumps.