
[Thanks to Max Dalton and Harry Peto for extensive comments, corrections and additions.]

I am a committed EA and am thrilled that the movement exists. Having spoken to many non-EAs, however, I’m convinced that the movement has an image problem: outsiders associate it and its members with negative things. This is partly unavoidable: the movement does and should challenge strongly held and deeply personal beliefs. However, I think we as a movement can do more to overcome the problem.

I first list some criticisms of the movement I’ve heard. Then I discuss, one by one, four key causes of these criticisms and suggest things we can do to combat them.

I think some of the key causes give people legitimate reasons to criticise the movement and others don’t. However, all criticisms damage EA’s image and should be avoided if possible.

 

Criticisms of EA

·      Smug and Arrogant – EAs have a strong conviction that they’re right and that they’re great people; similarly that their movement is correct and more important than all others

·      Cold Hearted – EAs aren’t sufficiently empathetic

·      Rich-Person Morality – EA provides a way for rich, powerful and privileged people to feel like they’re moral people

·      Privileged Access – EA is only accessible to those with a large amount of educational and socio-economic privilege

The latter two are particularly worrisome for those who, like me, want EA to change the attitudes of society as a whole. The movement will struggle to do this while it’s perceived as being elitist. More worryingly, I think the latter two are legitimate concerns to have about EA.

 

 

Key Cause 1 - Obscuring the distinction between being a good person and doing good

 

There is a distinction

Suppose Clare is on £30K and gives away £15K to AMF, while Flo is on £300K and gives away £30K. Clare is arguably a more virtuous person because she has made a much bigger personal sacrifice for others (half her income against a tenth), despite the fact that Flo does more absolute good.

Now suppose Clare mistakenly believes that the most moral action possible is to give the money to disaster relief. Plausibly, Clare is still a more virtuous person than Flo because she has made a huge personal sacrifice for what she believed was right, and Flo has only made a small sacrifice by comparison.

 

In a similar way, people who make serious sacrifices to help the homeless in their area may be better people than EAs who do more absolute good by donating.

 

 

The EA movement does obscure this distinction

-The distinction is ignored completely whenever an EA assesses an agent by calculating the good produced by their actions (or, more specifically, by calculating the difference between what did happen and what would have happened if the person hadn’t lived). For example, William MacAskill claimed to have identified “The Best Person Who Ever Lived” using this method[1].

 

-The distinction is obscured when EAs say ambiguous things like “It’s better to become a banker and give away 10% of your income than to become a social worker”. The sentence could mean that becoming a banker produces better outcomes, or that a person who becomes a banker is more moral. The former claim is true but the latter depends on the banker’s motivations and may be false. They may have wanted to go into banking anyway; and giving away 10% may only be a small sacrifice for them.

 

 

-We can also confuse the two sides of the distinction when we think about our own moral aims. We want to do as much good as possible, and we evaluate the extent to which we succeed. However, it’s easy to confuse failure in achieving our moral aims with failure to be a good person. This is a common confusion. For example, the father who can’t adequately feed his children, despite working as hard as he can, has done nothing wrong yet still feels very guilty.

 

 

-Some of the discourse within the movement appears to imply that those who do the most good are, because of this, the best people. For example, EAs who earn lots of money, or are successful more generally, are held in very high regard.
(See the third post down here: https://www.facebook.com/groups/effective.altruists/search/?query=matt%20wage)

 

How does obscuring this distinction contribute to EA’s image problem?

Suppose non-EAs are not aware of the distinction, or think EAs are not aware. Then the EA movement will seem to be committed to the claim that EAs are, in general, significantly better people than non-EAs. But EAs often live comfortable lives with relatively low levels of personal sacrifice, so this is bound to make non-EAs angry. Even slight confusion over this issue can be very damaging.

More specifically:

·      Smug and Arrogant – if EAs appear to think that they’re much better people than everyone else, then they seem smug and arrogant

·      Cold Hearted – if EAs appear to think people who have a bigger impact are more moral than those who make significant personal sacrifices, this might come across as cold-hearted. If EAs assess how good a person someone is by calculating their impact, this may also appear cold-hearted

·      Rich-Person Morality – The rich and powerful can do much more good much more easily. If people confuse doing good with being good then they may think EAs believe the rich and powerful can be good much more easily.

I think the criticisms here are partly unfair – EAs often do recognise the distinction I’ve been talking about. The problem arises largely because EA’s outcome-focussed discourse makes confusions about the distinction more likely to occur.

 

How can we improve the situation?

·      Distinguish between being good and doing good whenever there’s a risk of confusion

·      Be wary of posting material that implies that EAs are all amazing people. (For example, the description of EA Hangout reads “Just to chill, have fun, and socialize with other EAs while we're not busy saving the world.” I know this is funny, but I think it’s potentially damaging if it feeds into negative stereotypes.)

·      Conceptualise debates with opponents not as debates about whether EAs are better people, but about whether EAs have the correct moral views

·      Consider explicitly saying that EA’s key claim is about which actions are better, not which agents are better.
E.g. “EAs do think that giving to AMF is a better thing to do than giving to the local homeless. But they don’t think that someone who gives to AMF is necessarily a better person than someone who gives to the homeless. It’s just that even bad people can do loads of good things if they give money away effectively.”

 

Key Cause 2 - Core parts of the EA movement are much easier for rich, powerful and privileged people to engage with

 

What aspects, and why are they easier?

GWWC’s pledge
Getting the certificate for making the pledge is a signifier of being a good person. It is a sign that the pledger is making a significant sacrifice to help others. It is sometimes used as a way of determining whether someone is an effective altruist. But the pledge is much, much easier to make if you are rich or otherwise financially secure. For example, it’s harder to commit to giving away 10% if your income is unreliable, or you have large debts. So it’s easier to gain moral credentials in the EA movement if you’re rich.

 

 

The careers advice of 80,000 Hours
It’s focussed on people at top universities, who have huge educational privilege. Following the advice is an important way of engaging with the community’s account of what to do, but many people can’t follow it at all.

 

 

Privilege-friendly values.
It’s much easier for someone who hasn’t experienced sexism to accept that they should not devote their resources to fighting sexism, but to the most effective cause. It’s much easier for someone who hasn’t been depressed, or devoted a lot of time to helping a depressed friend, to accept that it’s better to give to AMF than to charities that help the mentally ill.

More generally, being privileged makes it much easier to select causes based only on their effectiveness. I think there’s a danger that people think we aren’t prioritising their cause because we’re not fully empathising with a problem they’ve had first-hand experience of.

 

How does this contribute to EA’s image problem?

·      Rich-Person Morality – EA-related moral accolades are much easier for the rich and powerful to earn than for anyone else.

·      Privileged Access – many parts of the movement are harder for non-privileged people to engage in. They understandably feel that the movement is not accessible to them.

I think the criticisms here are legitimate. EA is much more accessible and appealing to those who are rich, powerful and privileged. The image problem here is exacerbated by the fact that the demographic of EA is hugely privileged.

 

How can we improve the situation?

·      Continue to stress how much good anyone can do

·      Be sensitive to your wealth, as compared with that of your conversation partner, when talking about EA. Urging someone on £30K to give 10% when you have £50K after donating may lead them to question why you’re asking them to live on less than you live on.

·      Be sensitive to your privilege, as compared with that of your conversation partner, when talking about EA. Be aware that it may have been much easier for you to change your cause prioritisation than it is for them.

·      Adjust the pledge so that, below a certain income threshold, one can give less than 10% (a rough sketch of one way this could work follows this list)

·      Consider redressing the balance of career advice
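To make the tiered-pledge suggestion concrete, here is a minimal sketch (in Python) of how a suggested donation could be computed under such a scheme. The £20K threshold and the linear taper are invented purely for illustration; they are not GWWC policy or a worked-out proposal, just one way the idea could be made precise.

```python
def suggested_pledge(income: float, threshold: float = 20_000) -> float:
    """Suggested annual donation under a hypothetical tiered pledge.

    At or above the (illustrative) threshold, the standard 10% applies.
    Below it, the rate tapers linearly from 1% (at zero income) to 10%.
    """
    if income <= 0:
        return 0.0
    if income >= threshold:
        return 0.10 * income
    rate = 0.01 + 0.09 * (income / threshold)  # linear taper below threshold
    return rate * income

# Someone on £12K would be asked for ~6.4% (£768) rather than the full 10%.
print(f"£{suggested_pledge(12_000):,.0f}")
```

The appeal of a taper like this is that the placard message (“give 10%”) survives intact, while anyone below the threshold has a principled, smaller ask rather than an all-or-nothing choice.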

 

An objection

It might be in the interests of the EA movement to carry on celebrating high-impact individuals more than they deserve. For a culture where being good is equated with doing good incentivises people to do more good. These incentives help the movement achieve its aims. It wants to do the most good, not to accurately evaluate how good people are.

For example, it might be false to write an article saying that some Ukrainian man is the best person who ever lived. But the article might encourage its readers to do the most good they can. Sometimes, when speaking to wealthy people, I pretend that I think they’d be moral heroes if they gave away 10% of their income.

Similarly it might be in the interests of the movement to be particularly accessible to the rich, powerful and privileged. For these people have the potential to do the most good. Many will feel uncomfortable with this type of reasoning though.

These considerations potentially provide arguments against my suggested courses of action. We should think carefully about the arguments on both sides so that we can decide what to do.

 

Key Cause 3 - EAs are believed to be narrow consequentialists

 

What is a narrow consequentialist?

A consequentialist holds that the goodness of an action is determined by its consequences. They compare the consequences that actually happened with those that would have happened if the action hadn’t been performed. A narrow consequentialist only pays attention to a small number of types of consequences, typically things like pleasure, pain or preference satisfaction. A broad consequentialist might also pay attention to things like equality, justice, integrity, promises kept and honest relationships.

 

Why do people believe that EAs are narrow consequentialists?

It’s true!

Many effective altruists, especially those who are vocal and those in leadership positions in the movement, are narrow consequentialists. This isn’t surprising. It’s very obvious from the perspective of narrow consequentialism that EA is amazing (FWIW I believe it’s amazing from many other perspectives as well!). It’s also obvious from this perspective that furthering the EA movement is a really great thing to do. People with other ethical codes might have other considerations that compete with their commitment to furthering the EA movement.

 

The popularity of the consequences-based argument for EA

This argument runs as follows: “You could do a huge amount of good for others at a tiny cost to yourself. So do it!”
This is exactly the argument a narrow consequentialist would make, so people infer that EAs are consequentialists. This inference is somewhat unfair, as it’s a powerful argument even if one isn’t a consequentialist.

 

EA discourse

Some discussion on EA forums implicitly presupposes consequentialism. Sometimes EAs attack moral commitments that a narrow consequentialist doesn’t hold: they equate them to “caring about abstract principles”, claim that those who hold them are simply rationalising immoral behaviour, and even assert that philosophers aren’t narrow consequentialists only because they want to keep their jobs!

 

How does this belief about EAs contribute to EA’s image problem?

·      Smug and Arrogant – EAs are committed narrow consequentialists even though the vast majority of experts dismiss the view. Without independent reason to think the experts are wrong (independent, that is, of the moral arguments for and against narrow consequentialism), this looks dogmatic and arrogant.

·      Cold Hearted – if EAs are narrow consequentialists then they will oversimplify moral issues by ignoring relevant considerations, like justice or human rights. It might seem like it is this oversimplification that allows EAs to figure out what to do by calculating the answer.

o   This plays into the idea that EAs make calculations because they are cold-hearted

o   At worst, people think that EA’s strong line on which charities one should donate to is a result of their narrow consequentialism, and thus of their cold-heartedness

I think both criticisms here are too harsh on EA, but I do find it surprising that many avowedly rational EAs are so strongly committed to narrow consequentialism.

 

How can we improve the situation?

·      Don’t see EA as a moral theory where there is “something that an EA would do” in every situation. Rather, see EA’s appeal as independent of whatever underlying moral theory you agree with

·      Refrain from posting things that assume that consequentialism is true

·      Don’t just use the consequences-based argument for EA

 

This last point is particularly important because the other arguments are really strong as well:

·      The drowning child argument first asserts that you should save the drowning child (it needn’t say why). Then it asserts that various considerations aren’t morally relevant. This argument isn’t consequentialist, but should appeal to all ethical positions.

·      The argument from justice makes use of the fact that many causes recommended by EA help those who
i) are in desperate poverty
ii) are in this position through no fault of their own
iii) are often poor for the same reason we’re rich
The argument points out that this state of affairs is terribly unjust, and infers that we have a very strong reason to change it. I think it’s a hugely powerful argument, but it’s not clear that it can even be made by a narrow consequentialist.

 

Key Cause 4 - EA discourse is alienating

 

How?

Terminology from economics and philosophy is often used even when it’s not strictly needed. This makes the conversation inaccessible to many.

 

EAs are very keen to be rational when they write, and to be regarded as such. This can create an intimidating atmosphere to post in.

 

EAs often celebrate the fact that the movement is “rational”. But “rational” is a normative word, meaning something like “the correct view to have given the evidence”. Thus we appear to be patting ourselves on the back a lot. This is alienating and annoying to someone who doesn’t yet agree with us.

 

How does this contribute to EA’s image problem?

·      Smug and Arrogant – claiming that your position is rational is smug because it’s like saying that you have great views. This can come across as arrogant in conversation because it might seem like you’re not seriously entertaining the possibility that you’re wrong.

·      Cold Hearted – use of technical language makes the movement appear impersonal

·      Privileged Access – EA is less accessible to those who don’t know the relevant vocabulary or who aren’t confident. These tend to be people without educational and socio-economic privilege.

I think these criticisms are mostly legitimate, but also think that EAs are very friendly and inclusive in general.

 

How can we improve the situation?

·      Avoid alienating discourse. Follow George Orwell’s writing rules: never use a longer word when a shorter one will do; if it’s possible to cut out a word, then do; and never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

·      Avoid words like “rational” which imply the superiority of EAs over others

Conclusion

Be sceptical of my arguments here, and reply with your objections, but also be sceptical of your current practices. Think about some ways in which you could better combat EA’s image problem.



[1] http://swarajyamag.com/culture/the-best-person-who-ever-lived-is-an-unknown-ukrainian-man/

Comments (60)

Just a few remarks about 80,000 Hours.

Our intention is to eventually provide career advice to all graduates.

However, for the next 1-2 years, it seems far better for us to focus on especially talented graduates in their 20s. For startups the usual advice is to start by having strong appeal in a small market, and this audience is the best fit for us (it's where we've had most success in the past, where we have the strongest advantage over existing advice, and where we can have the largest impact with a small number of users).

Unfortunately, this has the negative side effect of making effective altruism look more elitist, and I don't see any easy way to avoid that.

Another thing to bear in mind is that 2/3 of the sections of our guide apply to everyone: https://80000hours.org/career-guide/basics/ https://80000hours.org/career-guide/how-to-choose/

We also have this article, which we link to in the intro material: https://80000hours.org/articles/how-to-make-a-difference-in-any-career/

The main bit that's targeted at talented students is the career profiles: https://80000hours.org/career-guide/profiles/ and the career recommender, which is based on them. Even here, we'd like to expand these to include a wider range of careers within the next 6 months.

If you're talking to someone at 80,000 Hours who might be put off by it seeming overly elitist, stress the general principles ('learn the basics' section), approach to choosing a career ('make a decision' tool) and broad pathways to impact (building skills, etg, direct work, advocacy), since these apply to everyone.

Ever thought about reviewing high-earning or direct-work careers for non-graduates? If someone did the work, would it fit with the rest of the content to put it up? Do you have any considerations in the way of 'we're pretty sure no one would take notice' etc.?

Hi Tom, we have considered doing it, but it's some way away from our target market and we have to specialise for now. If 80,000 Hours succeeds we'll get to that eventually but I wouldn't count on us doing it soon, if you were thinking of doing it yourself!

I see a tension at the heart of your piece. On the one hand, you say you "want EA to change the attitudes of society as a whole". But you seem willing to backpedal on the goal of changing societal attitudes as soon as you encounter any resistance. Yes, society as a whole believes that "it's the thought that counts" and that you should "do something you're passionate about". These are the sort of attitudes we're trying to change. If EA is watered down to the point where everyone can agree with it, it won't mean anything anymore. (Indeed, we've been criticized from the other direction for being too watered down already: one economist called EA "vacuous".)

You criticize EAs for believing that "their movement is correct and more important than all others". But the implicit premise of your post is that EA should seek to improve its image in order to increase its influence and membership, almost necessarily at the expense of other movements. The implication being that EA is more correct and/or important than those other movements.

I'm skeptical of your implicit premise. I think that EA should play to its strengths and not try to be everything to everyone. We're passionate about doing the most good, not passionate about problems that affect ourselves and our friends. We focus on evidence and reason, which sometimes comes across as cold-hearted (arguably due to cultural conditioning). Many of us are privileged and see EA as a way to give back. If someone doesn't like any of this, they are more than welcome to borrow from EA as a philosophy--but EA as a movement may not be a good fit for them, and we should be honest and upfront about that.

I'm tempted to say this post is itself a symptom of the "EA is the last social movement we'll ever need" type arrogance it criticizes :P

My vision of EA is not EA being the last social movement we ever need. It's a vision of getting a bunch of smart, wealthy, influential critical thinkers in the same room together, trying to figure out what the world's most important and neglected problems are and how they can most effectively be solved. If it's not a candidate for one of the world's most important and neglected problems, we should leave it to some other movement, and I don't think we need to apologize for that.

I'm not even sure I want EA to add more smart, wealthy, influential critical thinkers. EA has already captured a lot of mindshare and I'm in favor of intellectual diversity broadly construed. Additionally, EAs have (correctly in my view) pointed out that expanding the EA movement is not an evidence-based intervention... for example, for all we know most people who have recently pledged to donate 10% will burn out in a few years.

Final note: I'm doubtful that we can successfully split the "doing good" and "being a good person" semantic hair. And even if it was possible, I'm doubtful that it's a good idea. As you suggest, I think we should set up incentives so the way to gain status in the EA movement is to do a lot of good, not have the appearance of being a good person.

I'm not even sure I want EA to add more smart, wealthy, influential critical thinkers. EA has already captured a lot of mindshare and I'm in favor of intellectual diversity broadly construed.

I was quite surprised by this, so I was wondering what your reference class was? It seems to me that -- while much bigger than a few years ago -- effective altruism is still an extremely small proportion of society, and a larger but still very small part of influential thinkers.

In the early days of the EA movement, when it was uncertain whether expansion was even possible, I can see "try and expand like crazy, see what happens" as being a sensible option. But now we know that expansion is very possible and there's a large population of EA-amenable people out there. The benefits of reaching these people a bit sooner than we would otherwise seem marginal to me. So at this point I think we can afford to move the focus off of movement growth for a bit and think more deeply about exactly what we are trying to achieve. Brain dump incoming...

  • Does hearing about EA in its current form actually seem to increase folks' effective altruist output? (Why are so many EAs on the survey not donating anything?)

  • Claiming to be "effective altruists" amounts to a sort of holier-than-thou claim. Mild unethical behavior from prominent EAs that would be basically fine in any other context could be easy tabloid fodder for journalists due to the target EA has painted on its own back. There have already been a few controversies along these lines (not gonna link to them). EA's holier-than-thou attitude leads to unfavorable contrasts with giving to help family members etc.

  • EA has neglectedness as one of its areas of focus. But if a cause is neglected, it's neglected for a reason. Sometimes it's neglected because it's a bad cause. Other times it's neglected because it sounds like a bad cause but there are complicated reasons why it might actually be a good cause. EA's failure to communicate neglectedness well leads to people saying things like "Worrying about sentient AI as the ice caps melt is like standing on the tracks as the train rushes in, worrying about being hit by lightning". Which is just a terrible misunderstanding--EAs mostly think that global warming is a problem that needs to be addressed, but that AI risk is receiving much less funding and might be a better use of funds on the margin. The problem is that by branding itself as "effective altruism", EA is implicitly claiming that any causes EA isn't working on are ineffective ones. Which gets interpreted as a holier than thou attitude and riles anyone who's working on a different cause (even if we actually agree it's a pretty good one).

  • Some EAs cheered for the Dylan Matthews Vox article that prompted the tweet I linked to above, presumably because they agree with Matthews. But finding a reporter to broadcast your criticisms of the EA movement to a huge readership in order to gain leverage and give your cause more movement mindshare is a terrible defect/defect equilibrium. This is a similar conflict to the one at the heart of Tom_Davidson's piece. EA is always going to have problems with journalists due to the neglectedness point I made above. Doing good and looking good are not the same thing and it's not clear how to manage this tradeoff. It's not clear how best to spend our "weirdness points".

  • In line with this, you can imagine an alternate branding for EA that focuses on the weakest links in our ideological platform... for example, the "neglected causes movement" ("Neglected Causes Global"?), or the "thoughtful discussion movement"/"incremental political experimentation movement" if we decided to have a systemic change focus. (Willingness is not the limiting factor on doing effective systemic change! Unlike philanthropy, many people are extremely interested in doing systemic change. The limiting factor is people forming evidence filter bubbles and working at cross purposes to one another. As far as I can tell EA as a movement is not significantly good at avoiding filter bubbles. "Donate 10% of your time/energy towards systemic change" fails to solve the systemic problems with systemic change.) As far as I can tell, none of these alternate brandings have been explored. There hasn't been any discussion of whether EA is better as a single EA tentpole or as multiple tentpoles, with an annual conference for neglected causes, an annual conference for avoiding filter bubbles, etc. etc.

  • There's no procedure in place for resolving large-scale disagreement within the EA movement. EA is currently a "do-ocracy", which leads to the unilateralist's curse and other problems. In the limit of growth, we risk resolving our disagreements the same way society at large does: with shouting and/or fists. Ideally there would be some kind of group rationality best practices baked in to the EA movement. (These could even be a core branding focus.) The most important disagreement to resolve in a cooperative way may be how to spend our weirdness points.

  • EA is trying to be a "big tent", but they don't realize how difficult this is. The most diverse groups are the ones that are able to engineer their diversity: universities and corporations can hold up a degree/job carrot and choose people in order to get a representative cross section of the population. In the absence of such engineering, groups tend to get less diverse over time. Even Occupy Wall Street was disproportionately white. That's why people who say "I like the idea of altruistic effectiveness, but not the EA movement's implementation" don't hang around--it's stressful to have persistent important disagreements with everyone who's around you. (EA's definitional confusion might also eventually result in EA becoming a pernicious meme that's defined itself to be great. I'm somewhat in favor of trying to make sure we really have identified the world's most high impact causes before doing further expansion. People like Paul Christiano have argued, convincingly IMO, that there are likely to be high-impact causes still not yet on EA movement radar. And focusing on funneling people towards a particular cause also helps address "meta trap" issues.) EA is trying to appeal to people of all ages, races, genders, political orientations, religions, etc. with very little capability for diversity engineering. It's difficult to imagine any other group in society that's being this ambitious.

Thanks.

There's no procedure in place for resolving large-scale disagreement within the EA movement. EA is currently a "do-ocracy", which leads to the unilateralist's curse and other problems. In the limit of growth, we risk resolving our disagreements the same way society at large does: with shouting and/or fists. Ideally there would be some kind of group rationality best practices baked in to the EA movement. (These could even be a core branding focus.)

This seems particularly important to me. I'd love to hear more in-depth thoughts if you have any. Even if not, I think it might be worth a top-level post to spur discussion.

One category of solutions is the various voting and governing systems. Score voting seems pretty solid based on my limited reading. There are also more exotic proposals like futarchy/prediction markets and eigendemocracy. The downside of systems like this is once you give people a way to keep score, they sometimes become focused on increasing their score (through forming coalitions, etc.) at the expense of figuring out what's true.
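To make the score-voting idea concrete, here is a minimal sketch of the mechanism as described above: each voter scores every option, and the option with the highest total wins. The cause areas and scores below are invented purely for illustration.

```python
from typing import Dict, List

def score_voting_winner(ballots: List[Dict[str, int]]) -> str:
    """Winner under simple score voting: each voter scores every
    option (e.g. 0-5) and the option with the highest total wins."""
    totals: Dict[str, int] = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    return max(totals, key=totals.get)

# Hypothetical ballots over three cause areas:
ballots = [
    {"global health": 5, "animal welfare": 3, "AI risk": 1},
    {"global health": 4, "animal welfare": 2, "AI risk": 5},
    {"global health": 3, "animal welfare": 5, "AI risk": 2},
]
print(score_voting_winner(ballots))  # "global health" (total 12 vs 10 and 8)
```

Note how this illustrates the failure mode mentioned above: once totals like these are visible, coalitions can optimise the score itself rather than the underlying question.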

There are also "softer" solutions like trying to spread beneficial social norms. Maybe worrying about this is overkill in a group made up of do-gooders anyway, as long as moral trade is emphasized enough that people with very different value systems can still find ways to cooperate.

You're more than welcome to think things over and write a top level post.

Why are so many EAs on the survey not donating anything?

This I can answer at least. The vast majority of the EAs who were down as giving 0 in the survey matched at least one (and often more) of these criteria: i) full-time student, ii) donated a large amount in the past already (even if not in that particular year), iii) pledged to give a substantial amount. The same applied for EAs merely giving 'low' amounts, e.g. <$500. I give the figures in a comment somewhere on an earlier thread where this was raised (probably the survey thread).

Some EAs cheered for the Dylan Matthews Vox article that prompted the tweet I linked to above, presumably because they agree with Matthews. But finding a reporter to broadcast your criticisms of the EA movement to a huge readership in order to gain leverage and give your cause more movement mindshare is a terrible defect/defect equilibrium.

Matthews is an EA, and identifies as one in that piece. This wasn't about finding someone to broadcast things, this was someone within the movement trying to shape it.

(I do agree with you that we shouldn't be trying to enlist the greater public to take sides in internal disagreements over cause prioritization within EA.)

Thanks for the reply! I would like to pick you up on a few points though...

"On the one hand, you say you "want EA to change the attitudes of society as a whole". But you seem willing to backpedal on the goal of changing societal attitudes as soon as you encounter any resistance... If EA is watered down to the point where everyone can agree with it, it won't mean anything anymore."

I think all the changes I suggested can be made without the movement losing the things that currently makes it distinctive and challenging in a good way. Which of my suggested changes do you think are in danger of watering EA down too much? Do you take issue with the other changes I've suggested?

"Yes, society as a whole believes that "it's the thought that counts" and that you should "do something you're passionate about". These are the sort of attitudes we're trying to change."

I completely agree we should try to change people's attitudes about both these things. I argued that we should say "An action that makes a difference is much better than one that doesn't, regardless of intention" rather than "An agent that makes a difference is much better than one who doesn't" because the latter turns people against the movement and the former says everything we need to say. Again, I'm interested to know which of my suggested changes you think would stop the movement challenging society in ways that it should be?

"I think that EA should play to its strengths and not try to be everything to everyone. We're passionate about doing the most good, not passionate about problems that affect ourselves and our friends. We focus on evidence and reason, which sometimes comes across as cold-hearted (arguably due to cultural conditioning)."

Again, I completely agree. The things you mention are essential parts of the movement. In my post I was trying to suggest ways in which we can minimise the negative image that is easily associated with these things.

"But the implicit premise of your post is that EA should seek to improve its image in order to increase its influence and membership, almost necessarily at the expense of other movements... I'm skeptical of your implicit premise."

You're right, although it's not implicit - I say explicitly that I want EA to change the attitudes of society as a whole. This is because I think EA is a great movement and, therefore, that if it has more appeal and influence it will be able to accomplish more. FWIW I don't think it's the last social movement we'll ever need.

"It's a vision of getting a bunch of smart, wealthy, influential critical thinkers in the same room together, trying to figure out what the world's most important & neglected problems are and how they can most effectively be solved."

I think comments like these make the movement seem inaccessible to outsiders who aren't rich or privileged. It seems like we disagree over whether that's a problem or not though.

Overall it seems like you think that paying attention to our image in the ways I suggest would harm the movement by making it less distinctive. But I don't know why you think the things I suggest would do that. I'm also interested to hear more about why you don't think getting more members and being more influential would be a good thing.

I guess I'm not totally sure what concrete suggestions you're trying to make. You do imply that we should stop saying things like "It’s better to become a banker and give away 10% of your income than to become a social worker" and stop holding EAs who earn and donate lots of money in high regard. So I guess I'll run with that.

High-earning jobs are often unpleasant and/or difficult to obtain. Not everyone is willing to get one or capable of getting one. Insofar as we de-emphasize earning to give, we are more appealing to people who can't get one or don't want one. But we'll also be encouraging fewer people to jump through the hoops necessary to achieve a high-earning job, meaning more self-proclaimed "EAs" will be in "do what you're passionate about" type jobs like going to grad school for pure math or trying to become a professional musician. Should Matt Wage have gone on to philosophy academia like his peers or not? You can't have it both ways.

I don't think high-earning jobs are the be all and end all of EA. I have more respect for people who work for EA organizations, because I expect they're mostly capable of getting high-paying jobs but they chose to forgo that extra income while working almost as hard. I guess I'm kind of confused about what exactly you are proposing... are we still supposed to evaluate careers based on impact, or not? As long as we evaluate careers based on impact, we're going to have the problem that highly capable people are able to produce a greater impact. I agree this is a problem, but I doubt there is an easy solution. Insofar as your post presents a solution, it seems like it trades off almost directly against encouraging people to pursue high-impact careers. We might be able to soften the blow a little bit but the fundamental problem still remains.

Just in terms of the "wealthy & privileged" image problem, I guess maybe making workers at highly effective nonprofits more the stars of the movement could help some? (And also help compensate for their forgone income.)

I think comments like these make the movement seem inaccessible to outsiders who aren't rich or privileged. It seems like we disagree over whether that's a problem or not though.

EA has its roots in philanthropy. As you say, philanthropy (e.g. in the form of giving 10% of your income) is fundamentally more accessible to rich people. It's not clear to me that a campaign to make philanthropy seem more accessible to people who are just scraping by is ever going to be successful on a large scale. No matter what you do you are going to risk coming across as demeaning and/or condescending.

I discuss more about why I'm skeptical of movement growth in this comment. Note that some of the less philanthropy-focused brandings of the EA movement that I suggest could be a good way to include people who don't have high-paying jobs.

Thanks a lot, this cleared up a lot of things.

I think we're talking past each other a little bit. I'm all for EtG and didn't mean to suggest otherwise. I think we should absolutely keep evaluating career impacts; Matt Wage made the right choice. When I said we should stop glorifying high earners I was referring to the way that they're hero-worshipped, not our recommending EtG as a career path.

Most of my suggested changes are about the way we relate to other EAs and to outsiders, though I had a couple of more concrete suggestions about the pledge and the careers advice. I do take your point that glorifying high earners might be consequentially beneficial though: there is a bit of a trade-off here.

As long as we evaluate careers based on impact, we're going to have the problem that highly capable people are able to produce a greater impact... Insofar as your post presents a solution, it seems like it trades off almost directly against encouraging people to pursue high-impact careers.

I hope my suggestions are compatible with encouraging people to pursue high-impact careers, but would reduce the image problem currently associated with it. One hope is that by distinguishing between doing good and being good we can encourage everyone to do good by high earning (or whatever) without alienating those who can't by implying they are less virtuous, or less good people. We could also try and make the movement more inclusive to those who are less rich in other ways: e.g. campaigning for EA causes is more accessible to all.

I guess maybe making workers at highly effective nonprofits more the stars of the movement could help some?

This seem like a good idea.

Good to hear we're mostly on the same page.

When I said we should stop glorifying high earners I was referring to the way that they're hero-worshipped

Hm, maybe I just haven't seen much of this?

Regarding the pledge, I'm inclined to agree with this quote:

I recently read a critique of the Giving What We Can pledge as classist. The GWWC pledge requires everyone with an income to donate 10% of their income. This disproportionately affects poor people: if you made $20,000 last year, giving 10% means potentially going hungry; if you made a million dollars last year, giving 10% means that instead of a yacht you will have to have a slightly smaller yacht. This is a true critique.

Of course, there’s another pledge that doesn’t have this problem. It was invented by the world’s most famous effective altruist. It even comes with a calculator. And I bet you half the people reading this haven’t heard of it.

The problem is that the Giving What We Can pledge is easy to remember. “Pledge to give 10% of your income” is a slogan. You can write it on a placard. “Pledge to give 1% of your before-tax income, unless charitable donations aren’t tax-deductible in your country in which case give 1% of your after-tax income, as long as you make less than $100,000/year adjusted for purchasing power parity, and after that gradually increase the amount you donate in accordance with these guidelines” is, um, not.

So, I'm inclined to think that preserving the simplicity of the current GWWC pledge is valuable. If someone doesn't feel like they're in a financial position to make that pledge, there's always the Life You Can Save pledge, or they can skip pledging altogether. Also, note that religions have been asking their members for 10% of their income for thousands of years, for many hundreds of which people were much poorer than they typically are today.

I don't think the existence of another pledge does much to negate the harm done by the GWWC pledge being classist.

I agree there's value in simplicity. But we already have an exception to the rule: students only pay 1%. There are two points here. Firstly, it doesn't seem to harm our placard-credentials. We still advertise as "give 10%", but on further investigation there's a sensible exception. I think something similar could accommodate low earners. Secondly, even if you want to keep it at one exception, students are in a much better position to give than many adults. So we should change the exception to a financial one.

Do you agree that, all things equal, the suggestions I make about how to relate to each other and other EAs are good?

Great post! To address Rich-Person Morality, I wonder if it would make sense to support political movements that advocate for increased foreign aid for effective programs in the developing world. Government agencies like USAID and DFID are already some of the largest donors to many effective programs (e.g. malaria control and deworming). Yet the USAID budget is less than one percent of the federal budget, so it seems there is room to give more.

One nice thing about this type of advocacy is that it would be inclusive of people of all income levels, since we can all vote for candidates who would support increasing the foreign aid budget for effective programs.

Examples of this type of advocacy include the ONE Campaign and the END7 campaign.

We could also advocate for less restrictive immigration laws and government policies to support reduced meat consumption. We could even create an "EA" legislative scorecard to endorse candidates running for public office.

[Update: edited post to reflect Owen's feedback that we should be supporting existing efforts]

Why 'build a political movement within EA', rather than just effectively supporting existing projects working towards these goals? This gave me the "smug and arrogant" impression the opening post eloquently warned against.

Yes thanks for pointing that out. It might be best to support an existing project like ONE Campaign. No need to reinvent the wheel. I updated the original comment.

I agree. I think more EAs need to specialise into very specific areas like foreign policy, politics and health (and many others) to work on such issues. I'm concerned that EAs are a society of generalists!

Very interesting ... with respect to the distinction between being a good person and doing good, I tend to think we underestimate the value of doing good. The archetypal example is Bill Gates, who built a $100 million house but is still (in Peter Singer's view, at least) the largest effective altruist of all time.

I do think the wealthy have a greater moral imperative to give money, but I also think we tend to undervalue people's practical impact in favor of their level of martyrdom. If I'm at risk of dying of malaria, I'd much rather have Gates come to my rescue than someone making $50,000 and giving half to charity. I certainly don't think that makes Gates morally better in any way, but he has made life decisions that have increased his giving ability (not to mention being exceptionally fortunate to be born into an affluent family at the dawn of the personal computer age, of course).

I generally think we (EAs, but everyone else, too) could use a dose of humility in acknowledging that no one really knows the best way to change the world. We're all guessing, and there is value in other approaches as well (such as making zillions, buying a yacht, and giving some to charity; running for office or supporting a political campaign, spending your time bringing food to your elderly neighbor across the street, building a socially responsible company that hires thousands of people, etc.).

Definitely agree with the point about Gates.

Hey David, Thanks very much for this article. Definitely gives us plenty of food for thought. I think the point about distinguishing judgements about characters from ones about actions is particularly interesting. I think in many ways one of the things effective altruism is often trying to do is get away from judgements about characters, and instead focus on judgements about actions.

Thanks for your suggestions on the pledge. This is definitely something we’ve considered in the past. One important thing to say is that the Pledge should absolutely not be used to distinguish ‘good people’. As you say, giving 10% of your income is just more viable for some people than others. I think it’s really important that we move away from judging people as good or bad in general. You might say a similar thing about veg*nism: it might be tempting to classify people as good or bad based on whether they’re veg*n or not; but some people have health reasons for not being veg*n, some people believe they can do more good by not being, etc.

At the same time, one thing that GWWC is trying to highlight is how well off people in many rich countries are compared to people globally, even if they aren’t well off by their country’s standards. The fact that someone on the median wage in the UK is in the top 5% of incomes worldwide (PPP adjusted) does indicate that many of us are better off than we might have thought, which might make us feel more able to donate than we previously had. But how able people are to donate will depend on their individual circumstances and feelings, so it doesn’t seem sensible for us to pick a cut-off.

I also think the simplicity of the message is pretty crucial. A large part of what GWWC is trying to achieve is to show people that donating 10% of one’s income isn’t an unachievably high bar, but something that many people do. The power of this comes from being able to say ‘we are a community of x people who all actually give 10% of their wages’ – that is much more powerful than saying ‘we are a community of people who would give 10% of our wage if we were above £x’. This is all the more so because we actually even get objections along the lines of ‘but giving away 10% is easier for people less wealthy than me, because it means they’re giving away less’ (my parents are sympathetic to that line of reasoning – they’re members, but they wouldn’t be if they earned more).

I’m not convinced that your characterisation of the ethical views of effective altruists is accurate, and I think it could be harmful to simplify in the way that you do. While it’s true that quite a few of the people who started the effective altruism movement put more credence in welfarist consequentialism than in many other normative theories, that has two caveats. There was a big selection effect at the start – Toby Ord and Will MacAskill met through studying Ethics, so it’s unsurprising they would have similar views. Also, many of the people I imagine you’re talking about not only have a great deal of moral uncertainty, they’re actually leaders in the field of moral uncertainty. That both means that it’s incorrect to ascribe ‘narrow consequentialism’ to them, and that the description of being arrogant and dogmatic is less true of them than of most other ethicists.

The view of many of the people around at the start of GWWC would probably be most accurately described as something like ‘welfarist with constraints’. In some cases (like me), that means having the most credence in some form of scalar utilitarianism, some in prioritarianism, some in a deontological framework incorporating human rights (plus a bunch of uncertainty). Because deontological theories put huge weight on not breaking rights, while consequentialism would typically only imply on the balance of probabilities it was somewhat better to break them in extreme cases, the credences I described should be summarised as welfarism with constraints. In other cases (like Andreas Mogensen, one of the other founders of GWWC and Assistant Director up until last month) people placed most credence in some form of deontology, which nonetheless holds consequences in terms of welfare to be important.

I definitely agree with the idea that we should be trying to make our writing accessible and clear, rather than technical. I would guess that’s something most people already have as an ideal, just not one they always manage to achieve (that’s definitely how I feel!). I also think it’s crucial we recognise the importance of causes like fighting discrimination, and in particular the benefit we get from people who have experienced it speaking up, despite it being difficult (and often frustratingly repetitive) to do so. (Sorry this ended up rather long, speaking of writing succinctly!)

Thanks so much for this! Really good and persuasive points.

One important thing to say is that the Pledge should absolutely not be used to distinguish ‘good people’.

My worry is that this isn't realistic, even if ideally we wouldn't distinguish people like this. For example, having taken the pledge myself and told people about it, I was congratulated (especially by other EAs). This simple and unavoidable kind of interaction rewards pledgers and shows that their moral status in the eyes of others has gone up. To me, it seems a real problem that this kind of status and reward is so much harder for the poor to attain.

Further, making the Pledge is bound to be an important part of engaging with the movement, even if we don't use it to distinguish virtuous people. To me, again, this feels like a serious issue.

so it doesn’t seem sensible for us to pick a cut-off. I also think the simplicity of the message is pretty crucial... [it's powerful that we can] say ‘we are a community of x people who all actually give 10% of their wages’

Great point! I'm interested to know how we currently accommodate the existing exception to the rule: students. Could we do the same thing for an income clause as well? To me an income exception seems better motivated because i) it's a more important access issue, and ii) the pledge is already about lifetime earnings, so it wouldn't be particularly harder for a student to make than for a non-student (they can just give a little later).

I’m not convinced that your characterisation of the ethical views of effective altruists is accurate, and I think it could be harmful to simplify in the way that you do... the description of being arrogant and dogmatic is less true of them than most other ethicists.

This is a really good point and I'll keep it in mind, especially about the uncertainty. [To be clear, neither Toby nor William MacAskill have ever done any of the things I objected to.] It's not clear to me that calling them narrow utilitarians is misleading though (unless they're deontologists).

To me, it seems a real problem that this kind of status and reward is so much harder for the poor to attain.

Why, do you believe we should redistribute moral virtue?

The Pledge is trying to encourage people to donate more, so it assigns status on that basis. We don't want to reduce that incentive, it is already weak enough.

Why, do you believe we should redistribute moral virtue?

No, but it's unfair that it's harder for the poor to attain the status. That has negative effects which I talked about in the article.

I mentioned this on Facebook before (I hope I don't sound like a broken record!), but the feelings of fellow aspiring EAs, while no doubt important, completely pales in comparison to that of the population we're trying to serve. Here's an analogy from GiveDirectly: https://www.givedirectly.org/blog-post.html?id=1960644650098330671

"through my interactions with the organization, it's become clear that their commitment is not just to evidence – it's to the poor. Most international charities' websites prominently feature photos of relatable smiling children, but not GiveDirectly, because of respect for beneficiaries' privacy and security. Many charities seem to resign themselves to a certain degree of corruption among their staff, but GiveDirectly is willing to install intrusive internal controls to actively prevent corruption."

Are intrusive internal controls "unfair" to GiveDirectly's staff members? In some sense, of course... other NGOs don't do this. In another, more important sense, however, GiveDirectly workers are still way better off than the people they're transferring money to.

In a similar sense, while "the poor" (by that, I assume you mean people around the 80th percentile of income) will find it more difficult to meet the GWWC pledge, and maybe it's less "fair" for them to feel altruistic, it's even less fair to die from malaria. Ultimately my greatest priority isn't fellow EAs. Paul Farmer said that his duty is [paraphrasing] "first to the sick, second to prisoners, and third to students." I think this is the right model to have. Conventional models of morality radiate outwards from our class and social standing, whereas a more universalist ethic will triage.

If this is not obvious to you, imagine, behind the veil of ignorance, the following two scenarios:

1) You're making minimum wage in the US. You heard about the Giving What We Can pledge. You would like to contribute but know that you have a greater obligation to your family. You feel bad about the situation in Africa and wished that those elitist EAs didn't shove this into your face.

2) Your child, your second child, has convulsions from a fever. You don't know why, but you suspect that it's due to malaria. Your first child has already died of diarrhea. You didn't work today so you could take care of your child, but you know your family has very little savings left for food, never mind medicine. You're crying and crying and crying but you know you shouldn't cry because it's a waste of resources and anyway the world isn't fair and nobody cares.

I apologize for the pathos, but it seems blatantly clear to me that 2) is a substantially greater issue than 1). I suspect that my usual M.O of arguing rationally isn't getting this across clearly.

I agree with this. Let me explain why I stand by the point that you quote me on. Tl;dr: by "negative effects" I wasn't talking about the hurt feelings of potential EAs.

My point wasn't the following: "It's unfair on relatively poor potential EAs, therefore it's bad, therefore let's change the movement." As you stress, this consideration is outweighed by the considerations of those the movement is trying to help. I accept explicitly in the article that such considerations might justify us making EA elitist.

My point was rather that people criticise us for being elitist etc. Having an elitist pledge reinforces this image and prevents people from joining - not just those in relative poverty. This reduces our ability to help those in absolute poverty. You don't seem to have acknowledged this point in your criticisms.

"Also, many of the people I imagine you’re talking about not only have a great deal of moral uncertainty, they’re actually leaders in the field of moral uncertainty. That both means that it’s incorrect to ascribe ‘narrow consequentialism’ to them, and that the description of being arrogant and dogmatic is less true of them than most other ethicists. The view many of the people around at the start of the GWWC would probably be most accurately described as something like ‘welfarist with constraints’."

I'm not sure you are actually talking about the same groups of people. I read that section as focusing on the LW/Rationalist segment of EA, rather than the Oxford Philosophy contingent. Unsurprisingly, the people who have studied philosophy are indeed closer to the combined views of most philosophers.

I read that section as focusing on the LW/Rationalist segment of EA, rather than the Oxford Philosophy contingent.

But the LW segment believes that value is fragile and that the ends don't justify the means!

Ah, that makes sense. Thanks!

This is a really well-thought-out piece, with some excellent suggestions!

I am especially worried about the points about coming across as "cold-hearted," "rationalist," and "smug" - I agree that this is how the movement comes across to others. It's what I myself have found in my project of spreading rational thinking, including about philanthropy and thus Effective Altruism, to a broad audience.

Here's what I found helpful in my own outreach efforts to non-EAs.

First, to focus much more on speaking to people's emotions rather than their cognition. Non-EAs usually give because of the pull of their heartstrings, not because of raw data on QALYs. So we as EAs need to do a much better job of using stories and emotions to convey the benefits of Effective Altruism. We should tell stories about the children saved from malaria, about the benefits people gained from GiveDirectly, etc. Then we should support these stories with numbers and metrics. This will definitely help reduce the image of us as smug, arrogant, and cold-hearted.

Second, we need to be much more intentional - dare I say "rational" - about our communication with non-EAs. We need to develop guidelines for how to communicate with people who are not intuitively rational about their donations. We need to remember that we suffer from the typical mind fallacy: most EAs are much more data-driven than the typical person. Moreover, once we're inside the EA movement, we forget how weird it looks from the outside - we suffer from the curse of knowledge.

Third, we need to put much more effort into developing outreach and communication skills (this is why I am trying to fill the gap here with my own project). We haven't done nearly enough research or experimentation on how to grow the movement most effectively by communicating effectively with outsiders. Investing resources in this area would be low-hanging fruit with very high returns, I think. If anyone is interested in learning more about my experience here and wants to talk about collaborating, my email is gleb@intentionalinsights.org

Interesting stuff!

I'd add to all this that I've seen some EAs pitching the idea for the first time who were actively amused that some people didn't immediately agree that an evidence-based approach was the best way to decide on their career.

I think we don't lose any of our critique of the way things are by appealing to the way people currently think. A concrete example: the "don't follow your passion" line sounds unromantic to most; but if we talk a lot about "meaning", and about how "making a difference" and "being altruistic" tend to make us happier and more satisfied with our work, we can win people over by convincing them that we're simply putting into practice various bits of wisdom that people already tend to take as a given.

We also probably need to try harder to seem emotionally sensitive when talking about why we are EAs: rather than focusing on numbers all the time (which does work for some audiences, of course), we should talk about why we're altruistic in general, and then it will flow from this that if one cares in a general sense, one should care about doing the best thing possible.

Excellent article. The simple language worked well for me, and some of the points you raised had already been on my mind. Thanks for writing such a thoughtful piece.

Thanks, nice post. I agree with most of this. I'd particularly like to see more of an emphasis on the distinction between the goodness of acts and of people.

There are a couple of your recommendations which I think have significant costs, which I'll point out. That's not to say that I think it would be wrong to move in the direction of the recommendations, but I think it may well be right to disregard them some reasonable proportion of the time. I'm also arguing in defence of things that I sometimes do myself: I like to think this is mostly because I think they are reasonable, but there may be an element of my defending them simply because I behave that way.

-Refrain from posting things that assume that consequentialism is true & Terminology from economics and philosophy is often used even when it’s not strictly needed. This makes the conversation inaccessible to many. [...] Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

In both cases I see the argument that these can be alienating, and that's a cost which it's important to keep in mind. But both can be useful for faster progress and dialogue when exploring ideas. Thinking just of consequences gives a focus to conversation; technical terms often give you a precision that can be hard to achieve with everyday terms. So I think it's reasonable to keep these tools available, but generally for private conversations where you know the people who might be reading.

I agree with you on technical language - we have to judge cases on an individual basis and be reasonable.

Less sure about the consequentialism, unless you know you're talking to a consequentialist! If you want to evaluate an action from a narrow consequentialist perspective, can't you just say so at the start?

I think this is an excellent post. The point about unnecessary terminology from philosophy and economics is certainly one I've thought about before, and I like the suggestion about following Orwell's rules.

On the use of the term rational, I think it can be used in different ways. If we're celebrating Effective Altruism as a movement which proceeds from the assumption that reason and evidence should be used to put our moral beliefs into action, then I think the use of the term is fine, and indeed is one of the movement's strong points which will attract people to it.

But, if we're saying something along the lines of "effective altruists are rational because they're donating to AMF (or other popular charities among EAs)", then I suppose it could be interpreted as saying we have all the answers already. So, perhaps it should be stressed that the fact that effective altruism is based on the principle that we should engage in rational inquiry does not always mean that effective altruists will be rational. From what I've read, the EA movement seems to be good at welcoming criticism, but it may not seem that way to others not associated with the movement.

On the point about narrow consequentialism, I agree with using other arguments, such as the drowning child argument, to counter this accusation. It may be harder to counter it with people you personally know, though: my non-EA friends know I assign a lot of weight to utilitarianism, so even if I discuss EA without using narrow consequentialist arguments, they may still see it through the lens of narrow consequentialism, because they'll associate EA with me and therefore with utilitarianism. Hopefully, though, by focusing on the arguments for EA that don't rely on consequentialism, this association can be dealt with.

Starting a long debate about moral philosophy would be relevant here, but also out of place, so I'll refrain.

But what do you mean by "Refrain from posting things that assume that consequentialism is true"? That it's best to refrain from posting things that assume that values like e.g. justice aren't ends in themselves, or to refrain from posting things that assume that consequences and their quantity are important?

If it is something more like the latter, I would ask whether this would be pursuing popularity by diminishing a part of the movement that is among the main foundations of what makes it valuable.

Would you e.g. suggest that people refrain from referring to scope insensitivity as if it's a cognitive bias? See: http://lesswrong.com/lw/hw/scope_insensitivity/, http://lesswrong.com/lw/hx/one_life_against_the_world/

Lots of things are philosophically controversial. The question of whether slavery is a bad thing has had renowned philosophers on both sides. I haven't looked into it much, but I suppose that the anti-slavery movement at some point went against the majority opinion of the "experts", with nothing speaking in favour of its view except specific arguments concerning the issue in question. Likewise, I suppose that if being uncontroversial among "experts" is a good measure of reasonableness, then even today we should be more open to the possible importance of acting in accordance with theistic holy texts.

Don't get me wrong: I am aware that there is a plurality of ethical theories motivating EAs. I appreciate people motivated by ethical assumptions other than my own, and their good deeds, and I wouldn't want EA to be considered a narrow-consequentialism-only movement where non-consequentialists aren't welcome. That being said: while parts of EA's appeal are independent of the moral theory I agree with, other parts that I consider important are very much not. It's hard to think of any assumptions more fundamental to the reasoning behind e.g. why far-future concerns are important.

While I try to make decisions that aren't deontologically outrageous, and that make sense both from the perspective of "broad" and "not-so-broad" utilitarianism, it's clearly the case that if Immanuel Kant is right then a lot of the EA-relevant decisions I make are pointless. Kantians who care about EA should be welcomed into the movement, and we shouldn't rely on only consequentialist reasoning when it's not necessary; but I think that encouraging all EAs to speak as if Kant and other philosophers with a complete disregard for consequentialism might be correct would be asking a lot.

While avoiding unnecessary alienation is good, I observe that the way for a movement to succeed isn't always to cave in (although it sometimes may be). Proponents of evolutionary theory don't concede that some species may have been created by God, people arguing in favour of vaccines don't concede that the scientific method may be useless, etc.

I also honestly think that the word "rational" is a good description of the approach EA takes to doing good, in a way that clearly isn't the case for many other ways of going about it (by most reasonable definitions of the word). The effective altruist way of going about things IS far superior to a lot of alternatives, and while tactfulness is a good thing, avoiding saying things that imply this does not seem to me like a good strategy. At least not in all cases.

You raise some interesting perspectives on an important topic, and my comment only concerns a fraction of your post. Many of the suggestions you make seem good and wouldn't come at the expense of anything important :) I'm not at all certain about any of the strategic concerns I comment upon here, so take this only as my vague and possibly wrong perspective.

The first talk of this video feels relevant: https://vimeo.com/136877104

Thanks for a thoughtful response.

But what do you mean by "Refrain from posting things that assume that consequentialism is true"? That it's best to refrain from posting things that assume that values like e.g. justice aren't ends in themselves, or to refrain from posting things that assume that consequences and their quantity are important?

Definitely the former. I find it hard to get my head round people who deny the latter. I suspect only people committed to weird philosophical theories would do so. I thought modern Kantians were more moderate. Let's remember that most people don't have a "moral theory" but care about consequences and a cluster of other concerns: it's these people I don't want to alienate.

I think that encouraging all EAs to speak as if Kant and other philosophers with a complete disregard for consequentialism might be correct would be asking a lot.

I think philosophers who reject consequentialism (as the claim that consequences are the only morally relevant thing) might be correct, and I personally find it annoying when everyone speaks as if any such philosopher is obviously mistaken. I certainly agree there's no need to talk as if consequences might be irrelevant!

I'm sympathetic with your comments about rationality. I wonder if an equally informative way of phrasing it would be "carefully investigating which actions help the most people". For people who disagree, reading EAs describe themselves as "rational" will be annoying, because it implies that they are irrational.

I suppose that if being uncontroversial among "experts" is a good measure of reasonableness, then even today we should be more open to the possible importance of acting in accordance with theistic holy texts.

This is a really interesting point. We could see history as a reductio on the claim that academic experts reach even roughly true moral conclusions. So maybe the academics are wrong. My worry is with the idea that we can get round this problem by evaluating the arguments ourselves. We're not special. Academics evaluate the arguments just as we would, but understand them better. The only way I can see myself being justified in rejecting their views is by showing they're biased. So maybe my point wasn't "the academics are right, so narrow consequentialism is wrong" but "most people who know much more about this than us don't think narrow consequentialism is right, so we don't know it's right".

Thanks for a thoughtful response.

Likewise :)

My worry is with the idea that we can get round this problem by evaluating the arguments ourselves. We're not special. Academics evaluate the arguments just as we would, but understand them better. The only way I can see myself being justified in rejecting their views is by showing they're biased. So maybe my point wasn't "the academics are right, so narrow consequentialism is wrong" but "most people who know much more about this than us don't think narrow consequentialism is right, so we don't know it's right".

That's a reasonable worry, but as far as the field of ethics as a whole is concerned, I would be much more worried about trusting the judgment of the average ethicist over ours.

I would also agree that the "we are not special" assumption seems like a reasonable best guess for how things are in the absence of evidence either way (although, at the risk of violating your not-coming-across-as-smug-and-arrogant recommendation, I'm genuinely unsure about whether it's correct or not).

I've also thought a lot about ethics; I've been doing so since childhood. Admittedly, I haven't read most of the philosophical texts written about these topics (nor, I suppose, have most professional ethicists, though I've certainly read far less than them). I have read a significant amount, though - enough to have heard most or all of the memorable arguments repeated several times. Also, perhaps more surprisingly, I'm somewhat confident that I've never heard an argument against my opinions about ethics (the abstract issues, that is, not specific ones) that was both (1) not based on axiomatic assumptions/intuitions I disagree with and (2) something I hadn't already thought of (of course, I may have forgotten one, but it seems like something that would have been memorable). Examples where criterion #2 was met but #1 wasn't include things like "the repugnant conclusion" (it doesn't seem repugnant to me at all, so it never occurred to me that it should be seen as a possible counterargument). Philosophy class was a lot of "oh... so that argument has a name" (and also a lot of "what? do people find that a convincing argument against utilitarianism?").

For all I know this could also be the experience of many with opinions different from mine; if so, it suggests that intuitions and/or base assumptions may be the determining factor for many people, as opposed to knowledge and understanding of the arguments presented by the differing sides. My suspicion is that the main contributor to the current stalemate in philosophical debates is that people have different intuitions and commitments. Some ethicists realize that utilitarianism would in some circumstances require us to prioritise other children to the extent that we let our own children starve, and say "reductio ad absurdum". I realize the same thing, and say "yes, of course" (and if I don't act on that, it's because I have other urges and commitments beyond doing what I think is best, not because I don't think doing so could be the best thing from an impartial point of view).

My best guess would be that most ethicists don't understand the arguments surrounding my views better than I do, but that they know a lot more than I do about views based on assumptions I don't agree with or am unconfident about (and about the specific non-abstract issues they work on). But I'm not 100% sure about this, and it would be interesting to test.

In the short story Three Worlds Collide, one of the species the space travellers meet has evolved to see the eating of children as a terminal value. This doesn't seem to me like something that's necessarily implausible (after all, evolution doesn't pass the ethical intuitions it gives us through an ethics review board). I can absolutely imagine alien ethicists viewing hedonistic utilitarianism as a reductio ad absurdum because it doesn't allow for the eating of conscious children.

While we have turned out much better than the hypothetical baby-eating aliens, I don't think it's a ridiculous example to bring up. I once talked on Facebook with a person doing a PhD in ethics who disagreed that we should care about the suffering of wild animals (my impression was that I was backing him into a corner where he would have to either change previously stated positions or admit that he didn't fully believe in logic, but at some point he stopped replying). And you'll find ethicists who see the punishment of wrongdoers as a terminal value (I obviously see punishment as only an instrumental value).

A reasonable question to ask of me would be: if you think people's ethical intuitions are unreliable, isn't that also true of your own?

Well, that's the thing. The views that I'm confident in are the ones that aren't based on core ethical intuitions (although they overlap with my ethical intuitions), but can be deduced from things that aren't ethical intuitions, as well as principles such as logical consistency and impartiality (I know I'm being unspecific here, and can expand on this if anyone wants me to). I could have reasoned my way to these views even if I were a complete psychopath. And the views I'm most confident in are the ones that don't even rely on my beliefs about what I want for myself (that is, I'm much more sure that the conscious experience I have if tortured is inherently bad than I am about e.g. whether it inherently matters that my beliefs about reality correspond with reality). My impression is that this commitment to being sceptical of ethical intuitions in this way is not shared by all (or even the majority of?) ethicists.

Anyway, I think it would be stupid of me to go on much longer, since this is a comment and not something that will be read by a lot of people, but I felt an urge to give at least some account of why I think as I do. To summarise: I'm not so sure that the average ethicist understands the relevant arguments better than the EAs who have reflected most about this, and I would be very unsurprised if the opposite were the case. And I think ethicists holding opinions other than 'narrow consequentialism' is more about them being committed to other ethical intuitions, and lacking some of the commitment to "impartiality" that I suspect narrow consequentialists often have, than about them having arguments that narrow consequentialist EAs haven't considered or don't understand. But I'm really not sure about this - if people think I'm wrong I'm interested in hearing about it, and looking more into this is definitely on my to-do list.

It would be interesting if comprehensive studies were done, or tools were made, to identify what differences of opinion are caused by, the degree to which philosophers belonging to one branch of ethical theory are logically consistent, the degree to which they understand the arguments of other branches, etc. Debates about these kinds of things can often be frustrating and inefficient, so I hope that in the future we will be able to make progress.

Thanks for that.

My basic worries are:

- Academics must gain something from spending ages thinking about and studying ethics, be it understanding of the arguments, knowledge of more arguments, or something else. I think this puts them in a better position than others, and should make others tentative in saying that they're wrong.

- Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise or abandon intuitions because of broader considerations? And even if you're right, why do you think your intuitions are more reliable than theirs?

The views that I'm confident in are the ones that aren't based on core ethical intuitions (although they overlap with my ethical intuitions), but can be deduced from things that aren't ethical intuitions, as well as principles such as logical consistency and impartiality... I can expand on this if anyone wants me to

I'd definitely be interested to hear more :)

Academics must gain something from spending ages thinking about and studying ethics, be it understanding of the arguments, knowledge of more arguments, or something else. I think this puts them in a better position than others, and should make others tentative in saying that they're wrong.

Btw, I agree with this in the sense that I'd rather have a random ethicist make decisions about an ethical question than a random person.

I'd definitely be interested to hear more :)

Great! I'm writing a text about this, and I'll add a comment with a reference to it when the first draft is finished :)

Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise or abandon intuitions because of broader considerations? And even if you're right, why do you think your intuitions are more reliable than theirs?

A reasonable question, and I'll try to give a better account of my reasons in my next comment, since the text may help give a picture of where I'm coming from. I will say in my defence, though, that I do have at least some epistemic modesty in regard to this - although not as much as you would probably consider reasonable. While what I think of as probably the best outcomes from an "objective" perspective corresponds to some sort of hedonistic utilitarianism, I do not, and do not intend to ever, work towards outcomes that don't also take other ethical concerns into account, and I hope to achieve a future that is very good from the perspective of many ethical viewpoints (rights of persons, fairness, etc.) - partly because of epistemic modesty.

Thanks for posting this, Tom; it resonates with some concerns I've had recently. In my opinion, EA does need somewhat of an image review, if not an overhaul. A few comments:

"It’s much easier for someone who hasn’t been depressed, or devoted a lot of time to helping a depressed friend, to accept that it’s better to give to AMF than to charities that help the mentally ill."

If I can offer a personal experience: I was deeply depressed earlier in my life, and have lost family members to cancer. Interestingly, this motivated me to try to do the most good objectively, rather than to focus on these causes specifically - and that was before I even discovered EA. I had experienced 'suffering', and became fixated on reducing 'suffering'. I wonder why this is, as I know it isn't the case for many. I hope this doesn't come across as smug; I'm just thinking out loud.

"Adjust the pledge so that, below a certain income threshold, one can give less than 10%"

I prefer Peter Singer's calculator, which suggests a percentage based on your income. And I always tell people, "Try to do a little more good than you did last year." So telling people to start with a small and easily achievable goal, like donating 1% to an effective charity, and to increase that every year, might be effective.
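To make the "start small and scale with income" idea concrete, here is a minimal sketch of what such a sliding-scale calculator could look like. The brackets and rates below are invented purely for illustration - they are not Peter Singer's actual schedule, nor anything GWWC has proposed.

```python
# A purely hypothetical sliding-scale pledge calculator. The brackets
# and rates are invented for illustration; they are not Peter Singer's
# actual schedule, nor an actual GWWC proposal.

def suggested_pledge_percent(income: float) -> float:
    """Return a suggested donation percentage for a given annual income."""
    brackets = [
        (15_000, 1.0),    # below £15k: start with an achievable 1%
        (30_000, 5.0),    # £15k-£30k: step up to 5%
        (100_000, 10.0),  # £30k-£100k: the standard 10%
    ]
    for threshold, percent in brackets:
        if income < threshold:
            return percent
    return 15.0           # above £100k: suggest more than 10%

for income in (12_000, 25_000, 50_000, 300_000):
    pct = suggested_pledge_percent(income)
    print(f"£{income:,}: give {pct:.0f}% = £{income * pct / 100:,.0f}")
```

The point of a schedule like this is just that the ask grows with ability to pay, so the pledge stays achievable at the bottom of the income range while still asking 10% or more of those who can afford it.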

"EAs are committed narrow consequentialists even when the vast majority of experts dismiss it."

By this do you mean they propose a broader consequentialism including justice etc.? I confess I didn't know this, and would appreciate some further reading or philosophers who say so.

"By this do you mean they propose a broader consequentialism including justice etc.?"

More than that: most academic philosophers working in ethics are not consequentialist. Among ethicists surveyed, the proportions who accept or lean towards each view were:

- Deontology: 35.3% (49/139)
- Other: 29.5% (41/139)
- Consequentialism: 23.0% (32/139)
- Virtue ethics: 12.2% (17/139)

Since these numbers are "accept or lean towards", a greater number of respondents will no doubt not be strictly or purely utilitarian. Philosophers overall were not much more consequentialist, though they were a bit less deontological and more 'other'.

Fwiw, I went my whole philosophical career without knowingly meeting any other person who was a utilitarian (and I was actively seeking them out), which was pretty isolating to say the least.

When I was at grad school for legal and political science, the main way I encountered utilitarianism was as a bogeyman in legal/political/social science papers. Though this is limited to my own experience and the universities I visited, my overwhelming impression is that in most policy-connected academic disciplines not housed in dedicated philosophy departments, utilitarianism is mostly used as a signalling slur, in a similar way to a word like "neoliberalism", and is not considered a respectable "thing" to identify as.

From David Chalmers' site, a guide to philosophical terms:

  • Utilitarian: one who believes that the morally right action is the one with the best consequences, so far as the distribution of happiness is concerned; a creature generally believed to be endowed with the propensity to ignore their own drowning children in order to push buttons which will cause mild sexual gratification in a warehouse full of rabbits

Urging someone on £30K to give 10% when you have £50K after donating may lead them to question why you’re asking them to live on less than you live on.

Because they make less money than you do! I think "people should live within their means" is about as uncontroversial as advice can get. Obviously poor people have to live on less than rich people - that's true whether or not both donate 10%.

This is far from obvious to me. I retracted a suggestion for someone to donate after I learned that their income is lower than my consumption...

As a rough heuristic, it is not reasonable to ask others to commit to a higher ethical standard than myself.

I retracted a suggestion for someone to donate after I learned that their income is lower than my consumption

Suppose Bill Gates spends $1m on personal consumption a year and donates $100m. Is it wrong for him to suggest that others donate to charity too?

As a rough heuristic, it is not reasonable to ask others to commit to a higher ethical standard than myself.

I agree with this entirely. But sacrifice is not virtue! There's nothing virtuous about limiting your own standard of living; virtue consists in living well and helping others. It is the donating 20% that measures (in part) my ethical standard, so I wouldn't ask others to donate more than 20% - but I'd be happy to suggest they do the same.

It is the donating 20% that measures (in part) my ethical standard

You seem to be somewhat contradicting yourself: you're criticising others for equating sacrifice with virtue, but then measuring virtue by the percentage that you sacrifice! What matters is how much you help people. If you donate $3,500 to buy bed nets, you've (in expectation) saved a life. It doesn't matter whether that was 10% of your income or 1% or 0.1%. The important thing isn't the percentage donated; it's the total amount donated. By asking someone earning less than you to donate 20% (or whatever it is you donate), you are asking them to do less good than you do. To ask the same of them as you ask of yourself, you would have to ask that they donate a higher percentage, or increase their income.
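To spell out the arithmetic here, a minimal sketch, taking the commenter's rough $3,500-per-life figure as given rather than as a precise cost:

```python
# The expected number of lives saved depends on the absolute amount
# donated, not on the percentage of income it represents. The $3,500
# figure is the commenter's rough expected cost per life via bed nets;
# treat it as illustrative, not precise.

COST_PER_LIFE = 3_500  # dollars per life saved, in expectation

def expected_lives_saved(donation_dollars: float) -> float:
    return donation_dollars / COST_PER_LIFE

# 10% of a $35,000 income and 1% of a $350,000 income buy exactly the
# same expected benefit, even though the sacrifices differ enormously:
print(expected_lives_saved(0.10 * 35_000))   # -> 1.0
print(expected_lives_saved(0.01 * 350_000))  # -> 1.0
```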

Suppose Clare is on £30K and gives away £15K to AMF, while Flo is on £300K and gives away £30K. Clare is arguably a more virtuous person because she has made a much bigger personal sacrifice for others, despite the fact that Flo does more absolute good.

This argument seems to treat the decision to donate as morally significant, but one's income as having no merit. That's simply not the case; people can change their income! Choosing to study a liberal arts degree, or to work for a not-for-profit, or not to ask for a raise because it's scary, are all choices. Similarly, changing your degree, aggressively pushing for more money, and taking a job in finance that doesn't make you feel emotionally fulfilled are all choices. In the same way that giving a large % is a property of Clare that she deserves credit for, so too is earning a lot a property of Flo that she deserves credit for.

Now suppose Clare mistakenly believes that the most moral action possible is to give the money to disaster relief. Plausibly, Clare is still a more virtuous person than Flo because she has made a huge personal sacrifice for what she believed was right, and Flo has only made a small sacrifice by comparison.

In a similar way people who make serious sacrifices to help the homeless in their area may be better people than EAs who do more absolute good by donating.

You seem to associate virtue with self-sacrifice. I think this is a very unhealthy idea - the purpose of life is to live, not to die! EA offers a positive view of morality, on which we have a great opportunity to improve the world. The height of morality is not a wastrel who, never having sought to improve their lot, sacrificed their life to achieve some tiny goal. Far better to be a striving Elon Musk, living a full life that massively helps others.

Choosing to study a liberal arts degree, or to work for a not-for-profit, or not to ask for a raise because it's scary, are all choices. Similarly, changing your degree, aggressively pushing for more money, and taking a job in finance that doesn't make you feel emotionally fulfilled are all choices. In the same way that giving a large % is a property of Clare that she deserves credit for, so too is earning a lot a property of Flo that she deserves credit for.

Of course this is correct, and I think it's important to treat it as something of a virtue. But we typically have less control over the amount we earn than over the proportion we give, so it seems sensible to give less credit for it. Unfortunately that's an extremely nuanced position which is hard to communicate effectively.

I guess one way of thinking about it, then, is that changing the % is easier than changing the income, and we should allocate our scarce status resources to the low-hanging fruit.

I think you make a good point about virtue not being self-sacrifice, and I definitely see your first point too, particularly for lots of people currently involved in effective altruism.

However, of course people can only vary their income within certain limits. There are lots of people who may be earning as much as they possibly can, and yet still be earning something close to £15k, through no fault of their own. I'd aspire to an effective altruism that can accommodate these people too, and I think it's for people like this that Tom's point comes into play. However, I think that your caveat is really important for the many other people who have a higher upper limit on their earnings.

There are lots of people who may be earning as much as they possibly can, and yet still be earning something close to £15k, through no fault of their own.

It seems unlikely there would be many EAs in this situation. EAs are generally very intelligent and very educated - something would have to be very wrong to leave them capped out at £15k. Even people with only a high school education can earn six figures if they are committed - working on an oil rig or driving trucks in Alaska pays very well, and being a nurse is a very achievable career for most people. Even without changing career, most people can substantially boost their income by asking for a raise each year.

I think this comment could be improved by removing the false suggestion that some of those professions are not open to certain genders (even if they have skewed gender ratios, the fields are open to all genders).

Good idea, I made the edit you suggested.

My concern was that people might accuse me of overstating my case. It's true that these professions are open to women but I would not feel comfortable recommending them. Certainly if someone suggested I work on a rig I would be rather nonplussed! We can recommend people change career but I think some options are sufficiently beyond the Overton window that it is unreasonable to ask it of people.

However, that's simply not the case; people can change their income!

I think this is the wrong point to focus on. Both our income and our generosity are largely determined by factors beyond our control, like our genes and prenatal environment. But they're still part of us! We can give people credit for inherent parts of themselves. Being born intelligent or generous are both great things, and we should praise people for them.

What I worry about is that EA relies heavily on well-known statistics alone, and not enough on as-yet-unrejected hypotheses that follow from well-founded creative reasoning - hypotheses that would require a little financial support and effort to verify and to generate data for.