
HarryPeto

9 karma · Joined Oct 2015

Comments (4)

I cannot WAIT for the results. Doing now :D Thanks!!

I very much welcome the opening of this discussion.

Many utilitarian EAs claim that EA is "compatible" with most other forms of ethical thinking while continuing to frame their arguments in narrowly consequentialist terms.

I genuinely believe that most EA actions are in fact required of people who subscribe to other ethical systems, and I try my best to adapt my language to whatever the person I'm trying to convince cares about.

One example: many left-leaning students talk a lot about "privilege". I tell them that if they are serious about finding it "problematic" that so many of us are so privileged, the best thing they could do is give that privilege away!

Alternatively, people who care about justice are very receptive if you point out that globalization means we now have reciprocal relationships with most of the world, and that we elect governments that are utterly hypocritical on free trade, causing extreme poverty. Our riches often do, in some sense, come out of their poverty, and anyone who believes the global economy needs to change should refuse to submit to it by voting with their wallets as well as with their ballots.

Thanks for the post.

It might be worth saying, even while making clear that QALYs aren't the only thing EAs care about, that welfare maximisation doesn't have to be the only thing EAs care about either. This might vary based on one's conception of EA, but given that the movement at least currently accommodates non-utilitarians (and I hope it continues to do so!), we don't want to fall into a WALY-maximisation trap any more than a QALY-maximisation trap.

That is to say: this post tells us, "look, specifically in the realm of health, there do seem to be ways of measuring things, but what we actually care about measuring is welfare". I'd suggest we say instead: "look, specifically in the realm of health, there do seem to be ways of measuring things, but we might actually want to measure any given value we care about".

Interesting stuff!

I'd add to all this that I've seen some EAs pitching the idea for the first time who were openly amused that some people didn't immediately agree that an evidence-based approach was the best way to decide on their career.

I think we lose none of our critique of the way things are by appealing to the way people currently think. A concrete example: the "don't follow your passion" line sounds unromantic to most people; but if we talk about "meaning", and about how "making a difference" and "being altruistic" tend to make us happier and more satisfied with our work, we can win people over by convincing them that we're simply putting into practice wisdom they already take as a given.

Also, we probably need to try harder to come across as emotionally sensitive when talking about why we are EAs: rather than focusing on numbers all the time (which does work for some audiences, of course), we should talk about why we're altruistic in general, and it then follows that if one cares in a general sense, one should care about doing the best thing possible.