I think EA should be a value neutral movement. That is, it should be a large umbrella for folks seeking to do effective good based on what they value. This means that some folks in EA will want to be effective at doing things they think are good but you think are not, and vice versa. I think this is not only okay but desirable, because EA should be in the business of effectiveness and good doing, not deciding for others what they should think is good.

Not everyone agrees. Comments on a few recent posts come to mind that suggest there's a solid chunk of folks in EA who think the things they value are truly best, not just their best attempt at determining what is best.

On the one hand, it's good to ask whether the object-level work we think is good actually does good by our values, and it's natural to come up with theories that try to justify which things are good. On the other hand, in practice I find EA leaves out a lot of potential cause areas that people value and could pursue more effectively.

To get really specific about this, here are some cause areas that are outside the Overton window for EAs today but that matter to some people in the world, people who could reasonably want to pursue them more effectively:

  • spreading a religion like Christianity that teaches that those who don't convert will face extreme suffering for eternity
  • changing our systems of organizing labor to be more humane, e.g. creating a communist utopia
  • civilizing "barbarian" peoples
  • engaging in a multigenerational program to improve the human genome via selective breeding

All of these ideas, to my thinking, are well outside what most EAs would tolerate. If I were to write a post about how the most important cause area is spreading Buddhism to liberate all beings from suffering, I don't think anyone would take me very seriously. If I were to do the same but for spreading Islam to bring peace to all peoples, I'd likely get stronger opposition.

Why? Because EA is not in practice value neutral. This is not exactly a novel insight: many EAs, and especially some of the founding EAs, are explicitly utilitarians of one flavor or another. This is not a specific complaint about EAs, though: this is just how humans are by default. We get trapped by our own worldviews and values, suffer from biases like the typical mind fallacy, and are quick to oppose things that stand in opposition to our values because it means we, at least in the short term, might get less of what we want.

Taking what we think is good for granted is a heuristic that served our ancestors well, but I think it's bad for the movement. We should take things like metaethical uncertainty and the unilateralist's curse (and the meta-unilateralist's curse?) seriously. And if we do so, that means leaving open the possibility that we're fundamentally wrong about what would be best for the world, or what "best" even means, or what we would, in hindsight, have been satisfied with "best" having meant. Consequently, I think we should be more open to EAs working toward things they think are good because they value them, even though we might personally value exactly the opposite. This seems more consistent with a mission of doing good better rather than doing some specific good better.

The good news is people in EA already do this. For example, I think x-risks are really important and dominate all other concerns. If I had $1bn to allocate, I'd allocate all of it to x-risk reduction and none of it to anything else. Some people would think this is a tragedy because people alive today could have been saved with that money! I think the even greater tragedy is not saving the much larger number of potential future lives! But I can co-exist in the EA movement alongside people who prioritize global health and animal welfare, and if that is possible, we should be able to tolerate even more people who value things even more unlike what we value, so long as what they care about is effective marginal good doing, whatever they happen to think good is.

As I see it, my allies in this world aren't so much the people who value what I value. Sure, I like them. But my real allies are the people who are willing to apply the same sort of methods to achieve their ends, whatever their ends may be. Thus I want these people to be part of EA, even if I think what they care about is wrong. Therefore, I advocate for a more inclusive, more value neutral EA than the one we have today.

ETA: There's a point that I think is important but I didn't make explicit in the post. Elevating it from the comments:

It's not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to "help our 'enemies'" at the meta level even as we might oppose them at the object level.

To expand a bit, I analogize this to supporting free speech in a sort of maximalist way. That is, not only do I think we should have freedom of speech, but also that we should help people make the best arguments for things they want to say, even if we disagree with those things. We can disagree on the object level, but at the meta level we should all try to benefit from common improvements to processes, reasoning, etc.

I want disagreements over values to stay firmly rooted at the object level if possible, or maybe only one meta level up. Go up enough meta levels to the concept of doing effective good, for whatever you take good to be, and we become value neutral. For example, I want an EA where people help each other come up with the best case for their position, even if many find it revolting, and then disagree with that best case at the object level rather than trying to do an end run around actually engaging with it, sabotaging it by starving it at the meta level. As far as I'm concerned, elevating the conflict past the object level is cheating and epistemically dishonest.

Comments

I like the post. Well-written and well-reasoned. Unfortunately, I don't agree — not at all.

A (hopefully) useful example, inspired by Worley's thoughts, my mother, and Richard's stinging question, respectively. Look at the following causes:

  • X-risk Prevention
  • Susan G. Komen Foundation
  • The Nazi Party

All three of the above would happily accept donations. Those who donate only to the first would probably view the values of the second cause as merely different from their own values, but they'd probably view the values of the third cause as opposing their own set of values.

Someone who donates to x-risk prevention might think that breast cancer awareness isn't a very cost-effective form of charity. Someone who values breast cancer awareness might think that extinction from artificial intelligence is absurd. They wouldn't mind the other "wasting their money" on x-risk prevention/breast cancer awareness — but both would (hopefully) find that the values of the Nazi Party are in direct opposition to their own values, not merely adjacently different.

The dogma that "one should fulfill the values one holds as effectively as possible" ignores the fundamental question: what values should one hold? Since ethics isn't a completed field, EA sticks to — and should stick to — things that are almost unquestionably good: animals shouldn't suffer, humanity shouldn't go extinct, people shouldn't have to die from malaria, etc. Not too many philosophers question the ethical value of preventing extinction or animal suffering. Benatar might be one of the few who disagrees, but even he would still probably say that relieving an animal's pain is a good thing.

 

TL;DR: Doing something bad effectively... is still bad. In fact, it's not just bad; it's worse. I'd rather the Nazi Party were an ineffective mess of an institution than a data-driven, streamlined organization. This post seems to emphasize the E and ignore the A.

Agreed entirely. There is a large difference between "We should coexist alongside causes that aren't maximally effective" and "We should coexist with causes we actively oppose." I think a good test for this would be:

You have one million dollars, and you can only do one of two things with it - you can donate it to Cause A, or you can set it on fire. Which would you prefer to do?

I think we should be happy to coexist with (and encourage effectiveness for) any cause to which we would choose to donate the money. A longtermist would obviously prefer a million dollars go to animal welfare than be wasted. Given this choice, I'd rather a million dollars go to supporting the arts, feeding local homeless people, or improving my local churches, even though I'm not religious. But I wouldn't donate this money to the Effective Nazism idea that other people have mentioned; I'd rather it just be destroyed. Every dollar donated to them would be a net bad for the world, in my opinion.

Hmm, I think these arguments comparing causes are missing two key things:

  • they aren't sensitive to scope
  • they aren't considering opportunity cost

Here's an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else; the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective, because it's all missed opportunity, in expectation, to save more future lives.

But I don't actually go around deriding people who donate to breast cancer research as if they had donated to Nazis, even though, measured against the scope of x-risk mitigation and the opportunity cost of the x-risk mitigation forgone, they did approximately similarly "bad" things from my perspective. Why?

I take their values seriously. I don't agree with them, but they have a right to value what they want. I don't personally have to help them, but I also won't oppose them unless they come into object-level conflict with my own values.

Actually, that last sentence makes me realize a point I failed to make in the post! It's not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to "help our 'enemies'" at the meta level even as we might oppose them at the object level.

"As I see it, my allies in this world aren't so much the people who value what I value. Sure, I like them. But my real allies are the people who are willing to apply the same sort of methods to achieve their ends, whatever their ends may be." 

This to me seems absurd. Let us imagine two armies, one from country A and the other from country B, both of which use air raids as their methodology. The army from country A wants to invade country B and vice versa. Do you view these armies as allies because they use the same methodology?

In an important sense, yes!

To take an example of opposing armies, consider the European powers between, say, 1000 CE and 1950 CE. They were often at war with each other. Yet they were clearly allies in the sense that they agreed the European way was best and that some European power, and not others, should win in various conflicts. This was clear during, for example, the various wars between powers to preserve monarchy and Catholic rule. If I'm Austria, I still want to fight the neighboring Catholic powers ruled by kings to gain land, but I'd rather be fighting them than Protestant republics!

As I see it, an object-level battle does not necessarily make someone my enemy; they may in fact be my willing ally when we step back from object-level concerns. Phrased in terms of ideas: every time, I'd prefer to make friends with folks who apply similar methods of rationality and epistemology, even if we disagree on object-level conclusions, rather than with people who happen to agree with me but don't share my methods, because I can talk and reason with people who share my methods. If the object-level-agreeing, method-disagreeing "allies" turn on me, I have no recourse to shared methods.

Can you define methodology? If you are defining the term so broadly that monarchy, Catholic rule, and republicanism count as methodologies, then you don't have to bite the bullet on the "effective Nazi" objection. You can simply say, "fascism is a methodology I oppose." But at that point the term is so broad that your objection to EA fails to have meaning.

I don't think this example holds up to historical scrutiny, but it's so broad I don't know how to argue on that front, so I'm simply going to agree to disagree.

If the object-level-agreeing, method-disagreeing "allies" turn on me, I have no recourse to shared methods.

You can work to understand other people's philosophical assumptions and work within those parameters.

Would you really want to ally with Effective Nazism?

Strict value neutrality means not caring about the difference between good and evil.  I think the "altruism" part of EA is important: it needs to be directed at ends that are genuinely good.  Of course, there's plenty of room for people to disagree about how to prioritize between different good things. We don't all need to have the exact same rank ordering of priorities. But that's a very different thing from value neutrality.

Thank you for this. I think it's worth discussing which kinds of moral views are compatible with EA. For example, in chapter 2 of The Precipice, Toby Ord enumerates 5 moral foundations for caring about existential risk (also discussed in this presentation):

1. Our concern could be rooted in the present — the immediate toll such a catastrophe would take on everyone alive at the time it struck. (common-sense ethics)

2. It could be rooted in the future, stretching so much further than our own moment — everything that would be lost. (longtermism)

3. It could be rooted in the past, on how we would fail every generation that came before us. (Burkean "partnership of generations" conservatism)

4. We could also make a case based on virtue, on how by risking our entire future, humanity itself displays a staggering deficiency of patience, prudence, and wisdom. (virtue ethics)

5. We could make a case based on our cosmic significance, on how this might be the only place in the universe where there's intelligent life, the only chance for the universe to understand itself, on how we are the only beings who can deliberately shape the future toward what is good or just.

So I find it strange and disappointing that we make little effort to promote longtermism to people who don't share the EA mainstream's utilitarian foundations.

Similarly, I think it's worth helping conservationists figure out how to conserve biodiversity as efficiently as possible, perhaps alongside other values such as human and animal welfare, even though it is not something inherently valued by utilitarianism and seems to conflict with improving wild animal welfare. I have moral uncertainty as to the relative importance of biodiversity and WAW, so I'd like to see society try to optimize both and come to a consensus about how to navigate the tradeoffs between the two.
