All of Paal_SK's Comments + Replies

So I agree with you that we should apply expected value reasoning in most cases. The cases in which I don't think we should use it are hinge propositions: the propositions on which entire worldviews stand or fall, such as fundamental metaethical propositions or scientific paradigms. These are special because the grounds for belief in them are themselves affected by believing them.

Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it s

... (read more)

Ah, good! Hmm, then this means that you really do find the arguments against normative realism convincing! That is quite interesting; I'll delve into those links you mentioned sometime to have a look. As is often the case in philosophy, though, I suspect the low credence is explained not so much by the strength of the arguments as by the understanding of the target concept or theory (normative realism), especially in this case, since you say that you are quite unsure what it even means. There are concepts of normativity that I would give a 0.01 cred... (read more)

Cool! Thank you for the candid reply, and for taking this seriously. Yes, for questions such as these I think one should act as though the most likely theory is true. That is, my current view is contrary to MacAskill's view on this (I think). However, I haven't read his book, and there might be arguments there that would convince me if I had.

The most forceful considerations driving my own thinking on this come from sceptical worries in epistemology. In typical 'brain in a vat' scenarios, there are usually some slight considerations that tip in favo... (read more)

MichaelA · 3y
Hmm, I guess at first glance it seems like that's making moral uncertainty seem much weirder and harder than it really is. I think moral uncertainty can be pretty usefully seen as similar to empirical uncertainty in many ways. And on empirical matters, we constantly have some degree of credence in each of multiple contradictory possibilities, and that's clearly how it should be (rather than us being certain on any given empirical matter, e.g. whether it'll rain tomorrow or what the population of France is).

Furthermore, we clearly shouldn't just act on what's most likely, but rather do something closer to expected value reasoning. There's debate over whether we should do precisely expected value reasoning in all cases, but it's clear, for example, that it'd be a bad idea to accept a 49% chance of being tortured for 10 years in exchange for a 100% chance of getting a dollar; it's clear we shouldn't think "Well, it's unlikely we'll get tortured, so we should totally ignore that risk." And I don't think it feels weird or leads to absurdities or incoherence to simultaneously think I might get a job offer due to an application but probably won't, or might die if I don't wear a seatbelt but probably won't, and take those chances of upsides or downsides into account when acting.

Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it seems clear to me that I shouldn't act in a way that would cause suffering to the animal if so, in exchange for just a small amount of pleasure for me. Would you disagree with that? Maybe I'm misunderstanding your view? I'm a bit less confident of this in the case of metaethics, but it sounded like you were against taking even just moral uncertainty into account?

You might enjoy some of the posts tagged moral uncertainty, for shorter versions of some of the explanations and arguments, including my attempt to summarise ideas from MacAskill's thesis (which was later adapted i
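To make the expected value point above concrete, here is a minimal sketch in Python. The utility numbers are purely illustrative assumptions chosen for the example, not figures from the comment; it simply contrasts "act only on the most likely hypothesis" with expected value reasoning for the 49%-moral-patient case.

```python
# A minimal, illustrative sketch of the expected value point above.
# All utility numbers are made up for illustration only.

def expected_value(outcomes):
    """Sum probability-weighted values over mutually exclusive outcomes."""
    return sum(p * v for p, v in outcomes)

p_moral_patient = 0.49   # credence that the animal is a moral patient
my_pleasure = 1          # small benefit to me from the act
animal_suffering = -100  # large harm if the animal really is a moral patient

# Option A: perform the act that harms the animal.
ev_act = expected_value([
    (p_moral_patient, my_pleasure + animal_suffering),  # animal is a moral patient
    (1 - p_moral_patient, my_pleasure),                 # animal is not
])

# Option B: refrain from the act.
ev_refrain = 0.0

# "Act on the most likely view" ignores the 49% branch and endorses the act
# (it nets +1 on the favoured view). Expected value reasoning does not:
print(f"EV of the harmful act: {ev_act:+.2f}")     # -48.00
print(f"EV of refraining:      {ev_refrain:+.2f}")  # +0.00
```

Under these arbitrary numbers, the act looks fine if one simply sides with the most probable hypothesis, but clearly bad once the 49% downside is weighted in, which is the contrast the comment is drawing.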

Thank you for this interesting post! In the spreadsheet with additional info concerning the basis for your credences, you wrote the following about the first crucial crux about normativity itself: "Ultimately I feel extremely unsure what this claim means, how I should assess its probability, or what credence I should land on." Concerning the second question in this sentence, about how to assess the probability of philosophical propositions like this, I would like to advocate a view called 'contrastivism'. According to this view, the way to approach the cla... (read more)

MichaelA · 3y
Are you saying you should act as though moral realism is 100% likely, even though you feel only slightly more convinced of it than of antirealism? That doesn't seem to make sense to me? It seems like the most reasonable approaches to metaethical uncertainty would involve considering not just "your favourite theory" but also other theories you assign nontrivial credence to, analogous to the most reasonable-seeming approaches to moral uncertainty.
MichaelA · 3y
I haven't read those links, but I think that that approach sounds pretty intuitive and like it's roughly what I would do anyway. So I think this would leave my credence at 0.01. (But it's hard to say, both because I haven't read those links and because, as noted, I feel unsure what the claim even means anyway.)

(Btw, I've previously tried to grapple with and lay out my views on the question Can we always assign, and make sense of, subjective probabilities?, including for "supernatural-type claims" such as "non-naturalistic moral realism". Though that was one of the first posts I wrote, so is lower on concision, structure, and informed-ness than my more recent posts tend to be.)

(Also, just a heads up that the links you shared don't work as given, since the Forum made the punctuation after the links part of the links themselves.)
Answer by Paal_SK · Mar 06, 2021

Interesting! The philosophical debates about the nature of morality in light of evolution make up a great literature, which I very much recommend checking out further. However, the main point of contention in those debates is whether studies of the kind you allude to in fact show anything about morality itself. Indeed, the mainstream view in metaethics is that the conclusion in your title question, that morality emerges from evolutionary and functional pressures, is false. What usually happens in those studies is that some evolutiona... (read more)

This is a really useful overview of crucial questions that have a ton of applications for conscientious longtermists!

The plan for future work seems even more interesting, though. Some measures have beneficial effects for a broad range of cause areas, and others less so. It would be very interesting to see how a set of interventions fares in a cost-benefit analysis where interconnections are taken into account.

It would also be super-interesting to see the combined quantitative assessments of a thoughtful group of longtermists' answers to some of these qu... (read more)

Max_Daniel · 4y
I used to think similarly, but now am more skeptical about quantitative information on longtermists' beliefs.

[ETA: On a second reading, maybe the tone of this comment is too negative. I still think there is value in some surveys, specifically if they focus on a small number of carefully selected questions for a carefully selected audience. Whereas before my view had been closer to "there are many low-hanging fruits in the space of possible surveys, and doing even quickly executed versions of most surveys will have a lot of value."]

I've run internal surveys on similar questions at both FRI (now Center on Long-Term Risk) and the Future of Humanity Institute. I've found it very hard to draw any object-level conclusions from the results, and certainly wouldn't feel comfortable for the results to directly influence personal or organizational goals. I feel like my main takeaways were:

* It's very hard to figure out what exactly to ask about. E.g. how to operationalize different types of AI risk?
* Even once you've settled on some operationalization, people will interpret it differently. It's very hard to avoid this.
* There usually is a very large amount of disagreement between people.
* Based on my own experience of filling in such surveys and anecdotal feedback, I'm not sure how much to trust the answers, if at all. I think many people simply don't have stable views on the quantitative values one wants to ask about, and essentially 'make up' an answer that may be mostly determined by psychological substitution.

(These are also sufficient reasons for why I've never published the results of such surveys, though sometimes there were also other reasons.)

On reflection, maybe this isn't that surprising: e.g. how to delineate different types of AI risk is an active topic of research, and people write long texts about it; some people have disagreed for years, and don't fully understand each other's views even though they've tried for dozens of hours. It would be fairl