Evžen

5 karma · Joined Apr 2023

Comments (3)

Hello from 4 years in the future! Just a random note on something you said:

Argument that it is less likely: We can use the capabilities to do something like "Do what we mean," allowing us to state our goals imprecisely & survive.

Anthropic is now doing exactly this with their Constitutional AI. They let the chatbot respond, then ask it to "reformulate the text so that it is more ethical", and finally train it to output something closer to the latter than to the former.
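For what it's worth, the data-collection step of that recipe fits in a few lines of Python. This is only a sketch of the idea as described above; `build_revision_pairs` and the `generate` callable are hypothetical names I made up, not Anthropic's actual pipeline or API:

```python
from typing import Callable, Dict, List


def build_revision_pairs(
    prompts: List[str],
    generate: Callable[[str], str],
) -> List[Dict[str, str]]:
    """For each prompt, collect an (original answer, revised answer) pair.

    `generate` is whatever function calls the chat model; it is a stand-in
    here, not a real API.
    """
    pairs = []
    for prompt in prompts:
        draft = generate(prompt)  # let the chatbot respond however it likes
        revised = generate(
            "Here is a draft response:\n"
            f"{draft}\n"
            "Reformulate the text so that it is more ethical."
        )
        # Keep both versions so the model can later be fine-tuned to
        # prefer `revised` over `draft`.
        pairs.append({"prompt": prompt, "draft": draft, "revised": revised})
    return pairs
```

The resulting draft/revised pairs are then what the model is fine-tuned on, so it learns to output something closer to the revised answers than to its original ones.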

EA needs more clarity on the most foundational questions in order to answer downstream questions. The answer to “how should we discount future utility” depends on what you think we “should” do generally. 

Thank you for the post! This is my view as well; I wish more people shared it. 

I have a technical background, and I find it very hard to understand (not even agree with, just understand) non-consequentialist views; your post has helped me a great deal. Do you know of any good resources aimed at persuading consequentialists to become more non-consequentialist? Or, if not persuading, at least explaining why the other theories make sense? I think I'm too fixated on consequentialism for my own good, but I don't know how to "escape".

If some of our intuitions today are objectively abominable, then how can we judge past theories' conclusions based on our intuitions?

I see your point and I intuitively agree with it. However, if we prohibit ourselves from using this frame of reference, do we have any other way to compare different moral theories? 

It seems to me that it is impossible to argue from first principles about which moral theory is closest to the truth, at least with our current knowledge of physics, cognitive science, etc. So how do we define moral progress in this state of affairs? How do we know we're improving? It seems to me the only shot we have is to hope that our morality (as measured by the true and unknown morality function) improves over time, especially during times of abundance, so that we end up in a better position to judge past theories than our ancestors were.

The best argument against consequentialism then is just that it is confused about what morality is. Morality is not an objective attribute that inheres in states of affairs. Morality is at its core a guide for individuals to choose what to do. Insofar as a consequentialist theory is not rooted in the subjective experience of deliberation, of an individual trying to make sense of what they ought to do, it will not be answering the fundamental questions of morality. 

I don't think I understand your argument here. I'll give you my personal definition of a moral theory; could you help me understand your point through this lens? The definition would go something like this:

Which actions should a person take? There is a universal two-step solution:

  1. Have a scoring function for your actions.
  2. Take the action that maximizes this scoring function.

I define a moral theory to be a particular choice of this scoring function. Consequentialism fits this definition, and so do deontological and virtue-ethical views.
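To make the definition above concrete, here is one way to write it as a formula; the symbols $A$, $S$, and $a^{*}$ are my own shorthand, not anything from the original post:

$$a^{*} \in \operatorname*{arg\,max}_{a \in A} S(a)$$

where $A$ is the set of available actions and $S : A \to \mathbb{R}$ is the scoring function. A moral theory, on this definition, is just a particular choice of $S$.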

Do you agree with this definition? Or is your argument that there should be some restrictions on the scoring function that consequentialism doesn't satisfy?

If bringing into existence lives with positive wellbeing is at best neutral (and presumably strongly negative for lives with negative wellbeing), why have children at all? Is it the instrumental value they bring during their lives that we're after under this philosophy? (Sorry, I'm almost surely missing something very basic here; I'm not a philosopher.)