All of LanceSBush's Comments + Replies

The descriptive task of determining what ordinary moral claims mean may be more relevant to questions about whether there are objective moral truths than is considered here. Are you familiar with Don Loeb's metaethical incoherentism? Or the empirical literature on metaethical variability? I recommend Loeb's article, "Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat." The title itself indicates what Loeb is up to.

1 · Lukas_Gloor · 6y
Inspired by another message of yours, I realize there's at least one important link here that I failed to mention: if moral discourse is about a, b, and c, and philosophers then say they want to make it about q and argue for realism about q, we can object that whatever they may have shown us regarding realism about q, it's certainly not moral realism. And it looks like the Loeb paper also argues that if moral discourse is about mutually incompatible things, that looks quite bad for moral realism? Those are good points!

Whoops. I can see how my responses didn't make my own position clear.

I am an anti-realist, and I think the prospects for identifying anything like moral truth are very low. I favor abandoning attempts to frame discussions of AI or pretty much anything else in terms of converging on or identifying moral truth.

I consider it likely futile to integrate important and substantive discussions into contemporary moral philosophy. If engaging with moral philosophy introduces unproductive digressions/confusions/misplaced priorities into the discussion, it ... (read more)

4 · Lukas_Gloor · 7y
I totally sympathize with your sentiment and feel the same way about incorporating other people's values in a superintelligent AI. If I just went with my own wish list for what the future should look like, I would not care about most other people's wishes. I feel as though many other people are not even trying to be altruistic in the relevant sense that I want to be altruistic, and I don't experience a lot of moral motivation to help accomplish people's weird notions of altruistic goals, let alone any goals that are clearly non-altruistically motivated. In the same way (admittedly even less so), I'd feel no strong motivation to help make the dreams of baby-eating aliens come true.

Having said that, I am confident that it would screw things up for everyone if I followed a decision policy that does not give weight to other people's strongly held moral beliefs. It is already hard enough to not mess up AI alignment in a way that makes things worse for everyone, and it would become much harder still if we had half a dozen or more competing teams who each wanted to get their idiosyncratic view of the future installed.

BTW, note that value differences are not the only thing that can get you into trouble. If you hold an important empirical belief that others do not share, and you cannot convince them of it, then it may appear to you as though you're justified in doing something radical about it, but that's even more likely to be a bad idea, because the reasons for taking peer disagreement seriously are stronger in empirical domains of dispute than in normative ones.

There is a sea of considerations from Kantianism, contractualism, norms for stable/civil societies and advanced decision theory that, while each line of argument seems tentative on its own and open to skepticism, all taken together point very strongly in the same direction, namely that things will be horrible if we fail to cooperate with each other and that cooperating is often the truly rational thing to do. You
2 · Kaj_Sotala · 7y
Ah, okay. Well, in that case you can just read my original comment as an argument for why one would want to use psychology to design an AI that was capable of correctly figuring out just a single person's values and implementing them, as that's obviously a prerequisite for figuring out everybody's values. The stuff that I had about social consensus was just an argument aimed at moral realists; if you're not one, then it's probably not relevant for you. (My values would still say that we should try to take everyone's values into account, but that disagreement is distinct from the whole "is psychology useful for value learning" question.)

Sorry, my mistake - I confused utilitronium with hedonium.

It's certainly possible that this is the case, but looking for the kind of solution that would satisfy as many people as possible certainly seems like the thing we should try first and only give it up if it seems impossible, no?

Sure. That isn't my primary objection, though. My main objection is that even if we pursue this project, it does not achieve the heavy metaethical lifting you were alluding to earlier. It neither demonstrates nor provides any particularly good reason to regard the outputs of this process as moral truth.

Well, the ideal case wo

... (read more)
0 · Kaj_Sotala · 7y
Well, what alternative would you propose? I don't see how it would even be possible to get any stronger evidence for the moral truth of a theory than the failure of everyone to come up with convincing objections to it even after extended investigation. Nor do I see a strategy for testing the truth which wouldn't at some point reduce to "test X gives us reason to disagree with the theory". I would understand your disagreement if you were a moral antirealist, but your comments seem to imply that you do believe that a moral truth exists, that it's possible to get information about it, and that it's possible to do "heavy metaethical lifting". But how? I think anything as specific as this sounds worryingly close to wanting an AI to implement [favorite political system].

Hi Kaj,

Even if we found the most widely agreeable set of moral principles available, the number of people who accept it may turn out not to constitute the vast majority. It may not even reach a majority at all. It is possible that there simply is no moral theory that is acceptable to most people. People may just have irreconcilable values. You state that:

“For empirical facts we can come up with objective tests, but for moral truths it looks to me unavoidable - due to the is-ought gap - that some degree of "truth by social consensus" is the only way of figuring out w... (read more)

1 · Kaj_Sotala · 7y
It's certainly possible that this is the case, but looking for the kind of solution that would satisfy as many people as possible certainly seems like the thing we should try first and only give it up if it seems impossible, no?

Well, the ideal case would be that the AI would show you a solution which it had found, and upon inspecting it and thinking it through you'd be convinced that this solution really does satisfy all the things you care about - and all the things that most other people care about, too.

From a more pragmatic perspective, you could try to insist on an AI which implemented your values specifically - but then everyone else would also have a reason to fight to get an AI which fulfilled their values specifically, and if it was you versus everyone else in the world, it seems like a pretty high probability that somebody else would win. Which means that your values would have a much higher chance of getting shafted than if everyone had agreed to go for a solution which tried to take everyone's preferences into account.

And of course, in the context of AI, everyone insisting on their own values and their values only means that we'll get arms races, meaning a higher probability of a worse outcome for everyone. See also Gains from Trade Through Compromise.

Thanks for the excellent reply.

Greene would probably not dispute that philosophers have generally agreed that the difference between the lever and footbridge cases is due to “apparently non-significant changes in the situation”.

However, what philosophers have typically done is either bite the bullet and say one ought to push, or deny that one ought to push in the footbridge case but then feel the need to defend commonsense intuitions by offering a principled justification for the distinction between the two. The trolley literature is rife with attempt... (read more)

I agree that defining human values is a philosophical issue, but I would not describe it as "not a psychological issue at all." It is in part a psychological issue insofar as understanding how people conceive of values is itself an empirical question. Questions about individual and intergroup differences in how people conceive of values, distinguish moral from nonmoral norms, etc. cannot be resolved by philosophy alone.

I am sympathetic to some of the criticisms of Greene's work, but I do not think Berker's critique is completely correct, though ... (read more)

1 · kbog · 7y
You can do that if you want, but (1) it's still a narrow case within a much larger philosophical framework and (2) such cases are usually pretty simple and don't require sophisticated knowledge of psychology.

To the contrary, Berker criticizes Greene precisely because his neuroscientific work is hardly relevant to the moral argument he's making. You don't need a complex account of neuroscience or psychology to know that people's intuitions in the trolley problem are changing merely because of an apparently non-significant change in the situation. Philosophers knew that a century ago. But nobody believes that judgements are correct or wrong merely because of the process that produces them. That just produces grounds for skepticism that the judgements are reliable - and it is skepticism of a sort that was already known without any reference to psychology, for instance through Plantinga's evolutionary argument against naturalism or evolutionary debunking arguments.

Also, it's worth clarifying that Greene only deals with a particular instance of a deontological judgement rather than deontological judgements in general. It's only a question of moral epistemology, so you could simply disagree on how he talks about intuitions or abandon the idea altogether (https://global.oup.com/academic/product/philosophy-without-intuitions-9780199644865?cc=us&lang=en&). Again, it's worth stressing that this is a fairly narrow and methodologically controversial area of moral philosophy.

There is a difference between giving an opinion on a novel approach to a subject, and telling a group of people what subject they need to study in order to be well-informed. Even if you do take the work of x-philers for granted, it's not the sort of thing that can be done merely with education in psychology and neuroscience, because people who understand that side of the story but not the actual philosophy are going to be unable to evaluate or make the substantive moral arguments which are necessary f

I am a psychology PhD student with a background in philosophy/evolutionary psychology. My current research focuses on two main areas: effective altruism, and the nature of morality, in particular the psychology of metaethics. My motivation for pursuing the former should be obvious, but my rationale for pursuing the latter is in part self-consciously about the third bullet point, "Defining just what it is that human values are." Even more basic than defining what those values are, I am interested in what people take values themselves to be. For ... (read more)

Tom, that isn't the only way the term "moral anti-realism" is used. Sometimes it is used to refer to any metaethical position which denies substantive moral realism. This can include noncognitivism, error theory, and various forms of subjectivism/constructivism. This is typically how I use it.

For one thing, since I endorse metaethical variability/indeterminacy, I do not believe traditional descriptive metaethical analyses provide accurate accounts of ordinary moral language anyway. I think error theory works best in some cases, noncognitivism (p... (read more)

0 · Owen Cotton-Barratt · 9y
I think we might get to something like moral realism as the result of acausal trade between possible agents.

Hi Evan,

I study philosophy and would identify as a moral anti-realist. Like you, I am generally inclined to regard attempts to describe moral statements as true or false as (in some cases) category mistakes, though in other cases I think they are better translated as cognitive but false (i.e. some moral discourse is captured by one or more error theories), and in still other cases as both coherent and true, but trivial - for instance, when a self-conscious subjectivist deliberately uses moral terms to convey their preferences. Unfortunately, I think... (read more)