Paal_SK

10 karma · Joined May 2020

Comments (6)

So I agree with you that we should apply expected value reasoning in most cases. The cases in which I don't think we should use it are hinge propositions: the propositions on which entire worldviews stand or fall, such as fundamental metaethical claims or scientific paradigms. These are special because the grounds for belief in them are themselves affected by believing them.

Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it seems clear to me that I shouldn't act in a way that would create suffering for the animal, if it is one, in exchange for just a small amount of pleasure for myself.
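(To make the arithmetic behind this explicit, here is a minimal sketch in Python. The utility numbers are purely illustrative assumptions on my part, not anything from the discussion itself; only the 49% credence comes from the example above.)

```python
# Expected value of an act that benefits me slightly but, if the
# animal is a moral patient, causes it serious suffering.
# All utility numbers are purely illustrative assumptions.
p_patient = 0.49           # credence that the animal is a moral patient
my_pleasure = 1.0          # small benefit to me, in arbitrary units
animal_suffering = -100.0  # harm to the animal, if it is a moral patient

expected_value = my_pleasure + p_patient * animal_suffering
print(expected_value)  # 1.0 + 0.49 * (-100.0) = -48.0, so don't do it
```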

Would you disagree with that? Maybe I'm misunderstanding your view?

I think we should apply expected value reasoning in ethics too. However, I don't think we should apply it to hinge propositions in ethics. The hinginess of a proposition is a matter of degree. The question of whether a particular animal is a moral patient does not seem very hingy to me, so if it were possible to assess the question in isolation, I would not object to the way of thinking about it that you sketch above.

However, logic binds questions like these into big bundles through the justifications we give for them. On the issue of animal moral patiency, I tend to think that there must be some property of human and non-human animals that justifies our moral attitudes towards them. Many think this should be the capacity to feel pain, and so, if I think this, and think there is a 49% chance that the animal feels pain, then I should apply expected value reasoning when considering how to relate to the animal. However, the question of whether the capacity to feel pain is the central property we should use to navigate our moral lives is hingier, and I think it is less reasonable to apply expected value reasoning to it (because this view and reasonable alternatives lead to contradictory implications).

I am sorry if this isn't expressed as clearly as one would hope. I'll have a proper look into your and MacAskill's views on moral uncertainty at some point; then I might try to articulate all of this more clearly, and revise it in light of the arguments I haven't considered yet.

Ah, good! Hmm, then this means that you really do find the arguments against normative realism convincing! That is quite interesting; I'll delve into those links you mentioned sometime to have a look. As is often the case in philosophy, though, I suspect the low credence is explained not so much by the strength of the arguments as by one's understanding of the target concept or theory (normative realism), especially in this case, as you say you are quite unsure what it even means. There are concepts of normativity that I would give a 0.01 credence to as well, but there are also concepts of normativity on which I think normative realism is trivially true. It seems to me that you could square your commitments and restore coherence to your belief set by some good old-fashioned conceptual analysis of the very notion of normativity itself. That is, anyway, what I would do in this epistemic state. I myself think that you can get most of the ethics in the column with quite modest concepts of normativity that are quite compatible with a modern scientific worldview!

I updated the links, thanks! 

Cool! Thank you for the candid reply, and for taking this seriously. Yes, for questions such as these I think one should act as though the most likely theory is true. That is, my current view is contrary to MacAskill's view on this (I think). However, I haven't read his book, and there might be arguments there that would convince me if I had.

The most forceful considerations driving my own thinking on this come from sceptical worries in epistemology. In typical 'brain in a vat' scenarios, there are some slight considerations that tip in favor of realism about everything you believe. Similar worries appear in the case of conspiracy theories, where the mainstream view tends to have more and stronger supporting reasons, but in some cases it isn't obvious that the conspiracy theory is false, even though, all things considered, one should believe that it is. These theories/propositions, as well as metaethical propositions, are sometimes called hinge propositions in philosophy, because entire worldviews hinge on them.

So empirically, I don't think there is a way to act and believe in accordance with multiple worldviews at the same time. One may switch between worldviews, but it isn't possible to inhabit many worlds at once. Rationally, I don't think one ought to act and believe in accordance with multiple worldviews, because they are likely to contradict each other in multiple ways and would yield absurd implications if taken seriously. That is, absurd implications relative to everything else you believe, which is the ultimate ground on which you judged the relative weights of the reasons bearing on the hinge proposition to start with. Thinking in this way is called epistemological coherentism, and it is a dominant view in contemporary epistemology. That does not mean it's true, but it does mean it should be taken seriously.

Thank you for this interesting post! In the spreadsheet with additional info concerning the basis for your credences, you wrote the following about the first crucial crux, about normativity itself: "Ultimately I feel extremely unsure what this claim means, how I should assess its probability, or what credence I should land on." Concerning the second question in this sentence, about how to assess the probability of philosophical propositions like this, I would like to advocate a view called 'contrastivism'. According to this view, the way to approach the claim "there are normative shoulds" is to consider the reasons that bear on it: what reasons count towards believing that there are normative shoulds, and what reasons count against? The way to judge the question is to weigh the reasons against each other, instead of assessing them in isolation. According to contrastivism, reasons have comparative weights, but not absolute weights. Read a short explanatory piece here, and a short paper here.

When I do this exercise, I find that the reasons to believe there are moral shoulds are more convincing than the reasons against so believing, if only slightly. I think this means that the rational metaethical choice for me is realism, and that I should believe it fully (unless I have reason to believe there is further evidence that I haven't considered or don't understand). If you were to consider the issue this way, do you still think your credence in normative realism would be 0.01?

Answer by Paal_SK · Mar 06, 2021

Interesting! The philosophical debates about the nature of morality in light of evolution form a great literature, which I very much recommend checking out further. However, the main point of contention in those debates is whether studies of the kind you allude to in fact show anything about morality itself. Indeed, the mainstream view in metaethics is that the conclusion included in your title question, that morality emerges from evolutionary and functional pressures, is false. What usually happens in those studies is that some evolutionary psychologist identifies morality with some trait they can easily measure, and then draws a bunch of conclusions. The SEP entry that Ikaxas mentions is an excellent introduction to these debates. I can also recommend the podcast 'Very Bad Wizards', which features a philosopher and a psychologist talking about issues such as these.

This is a really useful overview of crucial questions that have a ton of applications for conscientious longtermists!

The plan for future work seems even more interesting, though. Some measures have beneficial effects across a broad range of cause areas, and others less so. It would be very interesting to see how a set of interventions does in a cost-benefit analysis where such interconnections are taken into account.

It would also be super interesting to see the combined quantitative assessments of a thoughtful group of longtermists' answers to some of these questions. A series of surveys and some work in spreadsheets could go a long way towards giving us a better picture of where our aims should be.
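(For concreteness, here is a minimal sketch of how such survey credences could be pooled. The geometric mean of odds used here is just one common aggregation rule, not anything proposed in the post, and the respondent numbers are hypothetical.)

```python
import math

def pool_credences(credences):
    """Pool individual probabilities via the geometric mean of odds,
    one common rule for aggregating survey forecasts."""
    odds = [p / (1 - p) for p in credences]
    mean_odds = math.prod(odds) ** (1 / len(odds))
    return mean_odds / (1 + mean_odds)

# Hypothetical credences from three respondents on one crucial question:
print(pool_credences([0.2, 0.5, 0.7]))  # ≈ 0.455
```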

Looking forward to seeing more work in this area!