Sure. I’ll use traditional total act-utilitarianism, defined as follows, as the example here so that it’s clear what we are talking about:
Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
I gather the metaethical position you describe is something like one of the following three:
(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I...
Very interesting :) I don’t mean to be assuming moral realism, and I don’t think of myself as a realist. Suppose I am an antirealist and I state some consequentialist criterion of rightness: ‘An act is right if and only if…’. In stating that, I do not mean or claim that it is true in a realist sense. I may be expressing my feelings, encouraging others to act according to the criterion of rightness, or whatever. At least I would not merely be talking about how I prefer to act. I would mean or express roughly ‘everyone, your actions and mine are right ...
Yeah, one can formulate many variants; I can’t recall seeing yours before. The following might seem like nitpicking, but I think it is quite important: in academia, it seems standard to formulate utilitarianism and other consequentialist theories so that they apply to everyone. For example,
Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
These theories ar...
Carl, you write that you are “more sympathetic to consequentialism than the vast majority of people.” Richard’s original post is about utilitarianism and replacement thought experiments, but I guess he is also interested in other forms of consequentialism, since the kind of objection he discusses can be made against those too.
The following passage of yours seems relevant to both utilitarianism and other forms of consequentialism:
I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic...
To bite the bullet here would be to accept that it would be morally right to kill everyone and replace them with other beings who, collectively, have a (possibly only slightly) greater sum of well-being, if someone could do that.
The following are two similar scenarios:
Traditional Utilitarian Elimination: The sum of positive and negative well-being in the future will be negative if humans or sentient life continues to exist. Traditional utilitarianism implies that it would be right to kill all humans or all sentient beings on Earth painlessly.
Suboptimal Pa...
If we are concerned with how vulnerable moral theories such as traditional total act-utilitarianism and various other forms of consequentialism are to replacement arguments, I think much more needs to be said. Here are some examples.
1. Suppose the agent is very powerful, say, the leader of a totalitarian society on Earth that can dominate the other people on Earth. This person has access to technology that could kill and replace either everyone on Earth or perhaps everyone except a cluster of the leader’s close, like-minded allies. Roughly, this person (or...
Hi Richard. You ask, “People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?”
Those are good questions, and I hope utilitarians or similar consequentialists reply.
It may be difficult to find out what utilitarians and consequentialists really think of such cases. Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pre...
Thanks for the informative reply! And also for writing the paper in the first place :)
"Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does."
I think we need to have high epistemic standards in this community, and would be dismayed if a significant number of people with strong moral views were hiding them in order to make a better impression on others. (See also https://fo
...Thank you cdc482 for raising the topic. I agree that describing EA as having only the goal of minimizing suffering would be inaccurate. As would saying that it has the goal of “maximizing the difference between happiness and suffering.” Both would be inaccurate simply because EAs disagree about what the goal should be. William MacAskill’s (a) is reasonable: “to ‘do the most good’ (leaving what ‘goodness’ is undefined).” But ‘do the most good’ would need to be understood broadly or perhaps rephrased into something roughly like ‘make things as much better a...
Hi, Your text mentions the importance of cause-neutrality but focuses on humanity, e.g. “maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity.” Why don’t you include any other species?
To explain where I’m coming from: to my knowledge, GiveWell and Good Ventures also focus on “humanity” and talk about “humanitarians,” but I’m not familiar with any argument that shows why that focus makes sense (I’d be grateful to be pointed to one). Of course, I don’t expect you to answer on behalf of GW or GV, and ...
A few updates: I have e-mailed the Open Philanthropy Project to ask about their activities, in particular whether anyone at the Open Philanthropy Project tries to influence which ideas about, for example, moral philosophy, value theory, or the value of the future a grant recipient or potential grant recipient talks or writes about in public. I have also asked whether I can share their replies in public, so hopefully there will be more public information about this. They have not replied yet, but I have elaborated on this issue in the following section: ... (read more)