Braxton

Tyler, thanks so much for writing this. I’ve been struggling to get more involved with EA precisely because I fear that accepting its premises would inexorably lead to the kind of dark night you’ve described. How could I possibly waste my time on leisure activities once I’ve seen the dark world? Even my work in the disability and autism communities would seem profligate when measured against worthier causes.

This post goes some way toward dispelling these notions, but I’m still struggling with the implications. For example, I have a hard time seeing music, art, dancing, poetry, and so forth as ends in themselves rather than as means to the end of generating utility in the form of quality of life. And it is hard for me not to see that quality of life as fungible with that of other people; why should my own QALYs count for more than the drowning child’s, except on account of my own “beastly” self-interest?

Maybe valuing EA ends is indeed just as beastly as valuing any other ends. Yet if that is so, one is not actually valuing the betterment of the world, but the neurological reward mechanism, the warm fuzzies, that it produces. Maybe these warm fuzzies perfectly align with doing the most good. But this proxy alignment doesn’t seem like a sound basis for a robust system of ethics.

Where I had once worried about sliding down the slippery slope of EA-style utilitarianism, I now fear a slope in the other direction: if I accept that “morality is relevant only because we care deeply about it”, does it not follow that those who care naught for it are under no moral obligation at all? Is our moral duty, if it exists, commensurate only with the degree of pleasure we derive from its practice?

There’s also a more practical aspect, best encapsulated by ThomasW’s reply to Eric Neyman’s bargain with the EA machine:

It seems pretty intuitive that our impact would be power law distributed … If your other personal factors are your ability to have fun, have good friendships, etc., you now have to make the claim that those things are also power-law distributed, and that your best life with respect to those other values is hundreds of times better than your impact maximizing life. If you don't make that claim, then either you have to give your other values an extremely high weight compared with impact, or you have to let impact guide every decision.

I don’t want to let impact guide my every decision. I want to believe that no trade-offs are required. But in building our moral framework, shouldn’t we assume the least convenient possible world? I’m not suggesting that there are certain answers here yet, but in the meantime I personally find it difficult to make moral decisions under such philosophical uncertainty. For now, I suppose the best heuristic remains:

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.