The more I think about value monism, the more confused I get about why some people want to cling to it, even though our everyday experience seems to tell us that we are in fact not value monists. We care about many different values, and we also care about what values other people hold. When we ask people who are dying, most of them will talk of friendship, love, and regrets. Does all of this count merely instrumentally toward one "super value" such as welfare, or are there some values we hold dear as ends in themselves?
I came up with a short experiment that might act as an intuition pump here. I would be interested in your thoughts!
Thought experiment: What do we care about at the end of time?
We are close to the end of time. Humanity has developed sophisticated technologies we can only imagine, yet only two very old humans remain alive: Alice and Bob. There also remain machines that can predict the effects of medicines on states of consciousness and lived experience.
It seems the last day for both Alice and Bob has come. Alice is terminally ill and in severe pain; Bob is simply old, but he too feels he is about to die peacefully soon. They have used up almost all of the medicine that was still around; only one dose of morphine remains.
The medical machines tell them that if Alice takes the morphine, her pain will be soothed, but the effect will be weaker than normal because her specific physiology dampens the effect of morphine. Bob, on the other hand, would have a really great time if he took it: his physiology is super receptive to morphine, and he would experience unimaginable heights and states of bliss. The medical machines are entirely sure that net happiness would be several times higher if Bob takes the morphine. If Alice takes it, the two would simply have one last conversation and both die peacefully.
How should Alice and Bob decide? What values are important in their decision?
Because my draft response was getting too long, I’m going to put it as a list of relevant arguments/points rather than in the conventional format; hopefully not much is lost in the process:
-Ethics does take things out there in the world as its subjects, but I don’t think the comparison to empirical science works here, because the methods of inquiry are more about discourse than empirical study. Empirical study comes in at the point of implementation, not of philosophy. The strong version of this point is rather controversial, but I do endorse it; I will return to it in a couple of bullets to expand on it.
-Even in the empirical sciences, the idea that theories are just rough models is not always relevant. It comes from both uncertainty and the positive view that the actual answer is far too complicated to model exactly. This is the difference between, say, economics and physics: theories in both will be tentative and accept that they are probably just approximations for now because of uncertainty, but in economics this is not just a matter of historical humility; it is also a positive belief about complexity in the world. Physics theories are both ways of getting good-enough-for-now answers and positive proposals for ways some aspect of reality might actually be, typically held with plurality but not majority credence.
-Fully defining what I mean by ethics is difficult, and of less interest to me than actually doing the ethics. Maybe this seems a bit strange if you think defining ethics is of supreme importance to doing it, but my feeling of disconnect between the two is probably part of why I’m an anti-realist. I’m not sure there’s any definition I could plug into a machine to make an ethics-o-meter whose word I would simply be satisfied to take on an answer (this is where the stronger version of the first bullet comes in). This is related to Brian Tomasik’s point that if moral realism were true, and it turned out that the true ethics was just torturing as many squirrels as you can, he would simply have learned that he didn’t care about ethics and that it wasn’t what he had been doing all along. I feel that my caring about ethics is constituted more by my understanding of how I got here than by extrapolating from exact definitions. I know it when I do it, and it is a project that, on my understanding of it, I care about deeply right now.
-I don’t think this answer quite fits any of Greenberg’s proposals exactly, but he is definitely confused, and fair enough, as he is confused about a confusing topic. I just want to note that it is meta-ethics that is confusing, not anti-realism. I think he blows past moral realism rather quickly, expecting that what realists who subscribe to theories like these are doing is perfectly understandable, but I think it is still extremely weird. Most initial approaches one can take to moral realism either start out apparently collapsing into normative ethical theories instead, or else require some extremely unlikely empirical assumption. To rescue realist theories, you need to move on to more complicated ideas that recognize these dilemmas. I originally wrote two example dialogues to get at this point, but they wound up going on too long for a comment, so I just want to start by positing that, in my experience, this is the case. The obvious first approaches either in some way posit one’s normative theory to be what “value” is, despite disagreement from other people who are using the same words, or else explain the disagreement away as coming from some source of irrationality that, if spelled out as an empirical prediction, amounts to a probably bad prediction. Meta-ethics always faces a foundational dilemma in spelling out what exactly moral disagreement is.
-Since this is getting long-winded, and it seems like it’s pretty much only us here at this point, I was wondering if you wanted to migrate this conversation somewhere else; for instance, we could chat via video call at some point. If not, I’m also fine with that; we could call it here or keep going in the comments. I just thought I would mention that I’m open to it.