All of Simon_Knutsson's Comments + Replies

A few updates: I have e-mailed the Open Philanthropy Project to ask about their activities, in particular about whether anyone at the Open Philanthropy Project has tried to influence which ideas about, for example, moral philosophy, value theory, or the value of the future a grant recipient or potential grant recipient talks or writes about in public. I have also asked whether I can share their replies in public, so hopefully there will be more public information about this. They have not replied yet, but I have elaborated on this issue in the following section: ... (read more)

Sure. I’ll use traditional total act-utilitarianism, defined as follows, as the example here so that it’s clear what we are talking about:

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
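
One way to put this criterion a bit more formally (a rough sketch in my own notation, where W(a) denotes the sum of positive well-being minus negative well-being resulting from act a, and A denotes the set of acts available to the agent):

\[
a \text{ is right} \iff W(a) \geq W(a') \text{ for all } a' \in A.
\]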

I gather the metaethical position you describe is something like one of the following three:

(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I... (read more)

Very interesting :) I don’t mean to be assuming moral realism, and I don’t think of myself as a realist. Suppose I am an antirealist and I state some consequentialist criterion of rightness: ‘An act is right if and only if…’. When stating that, I do not mean or claim that it is true in a realist sense. I may be expressing my feelings, I may be encouraging others to act according to the criterion of rightness, or whatever. At least I would not merely be talking about how I prefer to act. I would mean or express roughly ‘everyone, your actions and mine are right ... (read more)

4
Wei Dai
5y
I think there is at least one plausible meta-ethical position under which, when I say "I think utilitarianism is right," I just mean something like "I think that after I reach reflective equilibrium my preferences will be well-described by utilitarianism," and it is not intended to mean that I think utilitarianism is right for anyone else or applies to anyone else or should apply to anyone else (except insofar as they are sufficiently similar to myself in the relevant ways and therefore are likely to reach a similar reflective equilibrium). Do you agree this is a plausible meta-ethical position? If yes, does my sentence (or the new version that I gave in the parallel thread) make more sense in light of this? In either case, how would you suggest that I rephrase my sentence to make it better?

Yeah, one can formulate many variants. I can't recall seeing yours before. The following might seem like nitpicking, but I think it is quite important: in academia, it seems standard to formulate utilitarianism and other consequentialist theories so that they apply to everyone. For example,

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.

These theories ar... (read more)

3
Wei Dai
5y
You seem to be assuming moral realism, so that "Utilitarianism seems to imply ..." gets interpreted as "If utilitarianism is the true moral theory for everyone, then everyone should ..." whereas I'm uncertain about that. In particular if moral relativism, subjectivism, or anti-realism is true, then "Utilitarianism seems to imply ..." has to be interpreted as "If utilitarianism is the right moral theory for someone or represents their moral preferences, then that person should ..." So I think given my meta-ethical uncertainty, the way I phrased my statement actually does make sense. (Maybe it's skewed a bit towards anti-realism by implicature, but is at least correct in a literal sense even if realism is true.)

Carl, you write that you are “more sympathetic to consequentialism than the vast majority of people.” The original post by Richard is about utilitarianism and replacement thought experiments, but I guess he is also interested in other forms of consequentialism, since the kind of objection he talks about can be made against them too.

The following you write seems relevant to both utilitarianism and other forms of consequentialism:

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic
... (read more)

To bite the bullet here would be to accept that it would be morally right to kill and replace everyone with other beings who, collectively, have a (possibly only slightly) greater sum of well-being, if someone could do that.

The following are two similar scenarios:

Traditional Utilitarian Elimination: Suppose the sum of positive and negative well-being in the future will be negative if humans or sentient life continues to exist. Traditional utilitarianism then implies that it would be right to kill all humans or all sentient beings on Earth painlessly.

Suboptimal Pa... (read more)

6
[anonymous]
5y
If you actually think that the only thing that matters is well-being, then personhood doesn't matter, so it makes sense that you would endorse these conclusions in this thought experiment.

If we are concerned with how vulnerable moral theories such as traditional total act-utilitarianism and various other forms of consequentialism are to replacement arguments, I think much more needs to be said. Here are some examples.

1. Suppose the agent is very powerful, say, the leader of a totalitarian society on Earth that can dominate the other people on Earth. This person has access to technology that could kill and replace either everyone on Earth or perhaps everyone except a cluster of the leader’s close, like-minded allies. Roughly, this person (or... (read more)

7
CarlShulman
5y
The first words of my comment were "I don't identify as a utilitarian" (among other reasons because I reject the idea of things like feeding all existing beings to utility monsters for trivial proportional gains to the latter, even absent all the pragmatic reasons not to; even if I thought such things more plausible, it would require extreme certainty or non-pluralism to get such fanatical behavior).

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do, including for other ideologies, or if they contact alien life? But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.

That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation. That would go even more strongly for negative utilitarianism, since it doesn't treat any life or part of life as being intrinsically good, regardless of the being in question valuing it, and is therefore even more misaligned with the rest of the world (in valuation of the lives of everyone else, and in the lives of their descendants).

And such responses give reason even for utilitarian extremists to take actions that reduce such conflicts. Insofar as purely psychological self-binding is hard, there are still externally available actions, such as visibly refraining from pursuit of unaccountable power to harm others, and taking actions to make it more difficult to do so, such as transferring power to those with less radical ideologies,
3
Wei Dai
5y
Another variant along these lines: Suppose almost all humans adopt utilitarianism as their moral philosophy and fully colonize the universe, and then someone invents the technology to kill humans and replace them with beings of greater well-being. Utilitarianism seems to imply that humans who are utilitarians should commit mass suicide in order to bring the new beings into existence. EDIT: Utilitarianism endorses humans voluntarily replacing themselves with these new beings. (New wording suggested by Richard Ngo.)

Hi Richard. You ask, “People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?”

Those are good questions, and I hope utilitarians or similar consequentialists reply.

It may be difficult to find out what utilitarians and consequentialists really think of such cases. Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pre... (read more)

1
Omnizoid
8mo
I know this is very late, but I wrote a piece a while ago about this. I bite the bullet. https://benthams.substack.com/p/against-conservatism-about-value
3
JulianAbernethy
5y
I have to admit I only skimmed the paper. Can you explain to me what "bullet" I ought to bite? It seems like a neutral proposition to ask: what if we replace "us" with beings of greater well-being? My best guess is that our intuition for self-preservation and our intuition that killing sentient beings is bad most of the time should make me feel that it would be a horrible idea? I'd rather throw the intuitions out than the framework, so to me this seems like an easy question. But maybe you have other arguments that make me not want to replace us.

Thanks for the informative reply! And also for writing the paper in the first place :)

"Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does."

I think we need to have high epistemic standards in this community, and would be dismayed if a significant number of people with strong moral views were hiding them in order to make a better impression on others. (See also https://fo

... (read more)

Thank you, cdc482, for raising the topic. I agree that describing EA as having only the goal of minimizing suffering would be inaccurate. As would it be to say that it has the goal of “maximizing the difference between happiness and suffering.” Both would be inaccurate simply because EAs disagree about what the goal should be. William MacAskill’s (a) is reasonable: “to ‘do the most good’ (leaving what ‘goodness’ is undefined).” But ‘do the most good’ would need to be understood broadly or perhaps rephrased into something roughly like ‘make things as much better a... (read more)

Hi, your text mentions the importance of cause-neutrality but focuses on humanity, e.g. “maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity.” Why don’t you include any other species?

To explain where I’m coming from: to my knowledge, GiveWell and Good Ventures also focus on “humanity” and talk about “humanitarians,” but I’m not familiar with any argument that shows why that focus makes sense (I’d be grateful to be pointed to one). Of course, I don’t expect you to answer on behalf of GW or GV, and ... (read more)

1
Nick_Beckstead
10y
We think many non-human animals, artificial intelligence programs, and extraterrestrial species could all be of moral concern, to degrees varying based on their particular characteristics but without species membership as such being essential. "Humanity" is used interchangeably in the text with "civilization," a civilization for which humanity is currently in the driver's seat.