If I were to self-identify as an effective altruist, I might take myself merely to have committed to (*).

(*) To be a good altruist, you should use evidence and reason to do the most good with your altruistic actions.

Sometimes, however, effective altruism also seems to involve a second, potentially more demanding commitment. That is, it seems to involve a commitment about how much of my productive energy I should allocate to altruistic pursuits. 

If effective altruism does and should involve a commitment of this kind, then its proponents should try to come up with a reasonable (acceptably inclusive, flexible, etc.) statement of it. Neglecting to do so has several downsides. I'll flag three:

1. It leaves effective altruism and the effective altruism community open to being misunderstood, particularly by outsiders. For example, one might think that effective altruism claims that:

(**) To be a good person, you should use evidence and reason to do the most good with all your actions.

This seems impossibly demanding. (Of course, a weaker and more carefully formulated version of (**) could be plausible and may enjoy widespread assent within the EA community at present.)

2. Vagueness on this topic seems to create fertile ground for confusion, stress or even acrimony within the community.

3. Some people (e.g. me) will be reluctant to self-identify as an effective altruist due to uncertainty about what, if anything, such an identification entails beyond (*).

Disclosure: I’m a member of GWWC and I work for 80,000 Hours, but I don’t self-identify as an effective altruist.


I think this post is long overdue. People often get stressed about how to be more effective, and if we made it clear that we were including people who demand less severe sacrifices from themselves, then we might have more allies.

And as EA increasingly becomes a political and scientific movement, allies will be increasingly important. Getting prominent politicians and scientists like Stephen Hawking or Bill Gates to affiliate with the philosophy is much more important than getting non-prominent people to offer to meet more extreme demands. If we need to recruit allies to make a societal change, this will be easier if we define EA in a way that is not extremely demanding.

Of course, more extreme generosity is still better. But there's a cap - once you give away about half or three-quarters of your funds, you will run out. Whereas how effectively or cleverly we can donate has no obvious upper bound. If we gather greater insights, we can always start newer and better projects.

As any philosophical movement gains widespread support, its idea gets watered down. As with women's suffrage, African-American civil rights or environmentalism, the wider public will take hold of the central message of the idea, but some of the details will be lost and some of the message will be diluted. So it's important for us to think about which issues have room for compromise and which don't. The whole idea of extreme self-sacrifice has pretty mixed effects, so we don't need it in a bigger effective altruism movement. The importance of evidence and reason in altruistic actions is what's new and indispensable. So it would be nice if we could all relax our expectations of demandingness for a while.

I don't think it's clear whether we should insist on anything in particular; I don't see it as a no-brainer either way.

"Getting prominent politicians and scientists like Stephen Hawking or Bill Gates to affiliate..."

People respect and are impressed by those who are making big sacrifices for the benefit of others. Note how much attention and respect from important figures Toby Ord has gotten by pledging a large fraction of his income. Also the kudos generally given to doctors, soldiers, firefighters, Mother Teresa, etc.

"But there's a cap - once you give away about half or three-quarters of your funds, you will run out. Whereas how effectively or cleverly we can donate has no obvious upper bound. If we gather greater insights, we can always start newer and better projects."

We might be able to get people to give 50 times more than they do now (from an average of 1% of income to 50%). Do you think we can persuade many people, who wouldn't be motivated to give more, to give to a charity that is, ex ante, 50x better than the ones they give to now on average (keeping in mind that the mean of a log-normal distribution is already much higher than the median due to the right tail)?

"As any philosophical movement gains widespread support, its idea gets watered down."

This seems like an argument in favour of very high expectations to start with, knowing they will be diluted later on anyway as more people get involved.

"The whole idea of extreme self-sacrifice has pretty mixed effects"

A standard doesn't have to and shouldn't embody extreme self-sacrifice, it can just ask for something like 10%, which is not extreme - indeed it used to be the norm.

Strong rules can make for stronger communities: http://slatestarcodex.com/2014/12/24/there-are-rules-here/

Also note the empirical regularity that churches that place high demands on their members tend to last longer and have more internal cooperation (e.g. http://ccr.sagepub.com/content/37/2/211.abstract).

Quote relating to this:

"Which is too bad, because the theology of liberal Protestantism is pretty admirable. Openness to the validity of other traditions, respect for doubters and for skeptical thinkers, acceptance of the findings of science, pro-environmentalism – if I had to pick a church off a sheet of paper, I’d choose a liberal denomination like the United Church of Christ or the Episcopalians any day. But their openness and refusal to be exclusive – to demand standards for belonging – is also their downfall. By agreeing not to erect any high threshold for belonging, the liberal Protestant churches make their boundaries so porous that everything of substance leaks out, mingling with the secular culture around them.

So what if liberal Protestants kept their open-minded, tolerant theology, but started being strict about it – kicking people out for not showing up, or for not volunteering enough? Liberals have historically been wary of authority and its abuses, and so are hesitant about being strict. But strictness matters, if for no other reason because conservatives are so good at it: most of the strict, costly requirements for belonging to Christian churches in American today have to do with believing theologies that contradict science, or see non-Christians as damned. What if liberal Protestantism flexed its muscle, stood up straight, and demanded its own standards of commitment – to service of God and other people, to the dignity of women, and to radical environmental protection? Parishioners would have to make real sacrifices in these areas, or they’d risk exclusion. They couldn’t just talk the talk. By being strict about the important things, could liberal Protestant churches make their followers walk the walk of their faith – and save their denominations in the process?"

http://www.patheos.com/blogs/scienceonreligion/2013/07/why-is-liberal-protestantism-dying-anyway/

"Do you think we can persuade many people, who wouldn't be motivated to give more, to give to a charity that is, ex ante, 50x better than they do now on average (keeping in mind the mean of a log-normal distribution is already quite high due to the right tail)?"

This is a backwards interpretation of the dynamics of log-normal distributions.

The (rough) equivalent operation of moving everyone's donations from 1% to 50% would be moving everyone's donations from the (dollar-weighted) mean charity to the best charity. Although (as you noted) the heavier tail of a log-normal distribution means that the sample mean is higher relative to the mode or median, it has an even stronger effect on the sample maximum.

This means that overall, a log-normal has a higher, not lower, maximum:mean ratio than a thinner-tailed distribution like the normal, holding (say) the median and standard deviation fixed. For instance, in numerical simulations I just ran with 100 samples each from a log-normal and a normal distribution, both with median 2 and variance approximately 4.6, the average ratio of sample maximum to sample mean was 5.5 for the log-normal and 3.7 for the normal.
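Something like the following reproduces that simulation (a minimal sketch, not the exact code I ran; the log-normal parameters are back-derived from the stated median of 2 and variance of roughly 4.6):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_trials = 100, 10_000

# Log-normal with median 2 => mu = ln(2); sigma ~ 0.72 then gives
# variance ~ 4.6, since var = (e^(sigma^2) - 1) * e^(2*mu + sigma^2).
mu, sigma = np.log(2), 0.72

# Normal with the same median (= mean) of 2 and variance ~ 4.6.
norm_mean, norm_sd = 2.0, np.sqrt(4.6)

log_ratios, norm_ratios = [], []
for _ in range(n_trials):
    x = rng.lognormal(mu, sigma, n_samples)
    y = rng.normal(norm_mean, norm_sd, n_samples)
    log_ratios.append(x.max() / x.mean())
    norm_ratios.append(y.max() / y.mean())

# Averages come out around 5.5 and 3.7 respectively.
print(f"log-normal max:mean ~ {np.mean(log_ratios):.1f}")
print(f"normal     max:mean ~ {np.mean(norm_ratios):.1f}")
```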

Yes, but ex ante. The higher up the distribution they are, the harder they will be to identify, because 'if it's transformative it's not measurable; if it's measurable it's not transformative'. The weakness of the measurements means you're going to be hit with a lot of regression to the mean.

Also, that stuff is likely to be weird (it must be extreme on neglectedness if it's so important and still useful to put more money into), and so just as it's hard to get someone to give enormous amounts, it will probably also be hard to move donations there.

I'm not talking about in-practice difficulties like convincing people to donate. I'm just talking about statistics.

Can you point to actual parameters for a toy model where changing the distribution from normal to log-normal (holding median and variance constant) decreases the benefits you can get from convincing people to switch charities? The model can include things like regression to the mean and weirdness penalties. My intuition is that the parameter space of such models is very small, if it exists at all.

If we thought that the charity they were switching to were only at the 95th percentile, it could be worse in a log-normal case than a normal case (indeed it could be worse than not getting them to switch).

However that would be an unusual belief for us. More reasonably we might think it were uniformly drawn from the top ten percent (or something). And then log-normal is again much better than normal. I agree with the thrust of your intuition.
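To make that concrete, here is a small sketch using the closed-form log-normal ratios rather than sampling (the sigma values are illustrative assumptions; mu cancels out of both ratios):

```python
import numpy as np
from scipy.stats import norm

z95, z90 = norm.ppf(0.95), norm.ppf(0.90)

# Both quantities are ratios to the distribution's mean, so mu cancels
# and only sigma (the spread in log-space) matters.
for sigma in [0.72, 2.0, 3.5]:
    # Known 95th-percentile charity vs the mean:
    # exp(z95*sigma) / exp(sigma^2/2); this dips below 1 once sigma > 2*z95.
    p95_over_mean = np.exp(z95 * sigma - sigma**2 / 2)
    # Mean of a uniform draw from the top decile vs the overall mean
    # (closed form): E[X | X > q90] / E[X] = Phi(sigma - z90) / 0.10.
    top_decile_over_mean = norm.cdf(sigma - z90) / 0.10
    print(f"sigma={sigma}: p95/mean = {p95_over_mean:.2f}, "
          f"top decile mean/mean = {top_decile_over_mean:.2f}")
```

At small sigma the known 95th-percentile charity comfortably beats the mean, but once sigma exceeds 2*z95 ~ 3.29 it falls below it (ratio ~0.7 at sigma = 3.5), while a uniform draw from the top decile still beats the mean by nearly 10x.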

Yeah, in GiveWell classic, you're generally going to estimate that a high-impact charity is on the 95th percentile but with uncertainty around that. Which is in-between the cases that you describe.

I can imagine that knowing that something is on the 95th percentile with high certainty might be worse than guessing that something is on the 95th percentile with high uncertainty, if you have a prior that is some mixture of log-normal, normal and power-law. That's what we'd have to show to really question the classic GiveWell model.

I'm sure you're right about the math, but I am concerned with the in-practice difficulties.

My point about the mean was simply that one shouldn't compare the max to the median of the log-normal - which would be natural if you visualise where 'typical' donations go - but rather the max to the mean, which is a less extreme ratio. I wasn't drawing a contrast with a normal or any other distribution.

I'm also not sure about the answer to my question to Ryan - maybe the effectiveness is still the better focus, but I'm prompting him with possible counterarguments.

Ah, from your first comment it sounded like you were comparing the mean of the log-normal to the mean of a less-skewed distribution, rather than to the median of the log-normal. That sounds more reasonable.

Yep, my bad for making the original comment ambiguous.

In practice, isn't it relatively easy to identify whether someone is already up the right tail in their giving? I don't recall struggling with this in initial conversations. You can just ask whether they give abroad for instance, which will seemingly get you most of the way since most people don't (i.e. it is true that those who do vastly skew the mean, but you can just exclude almost all of them ex ante...).

What you're saying might be appropriate for mass marketing I suppose where you can't cut off the right tail.

Sorry Alex, I can't quite follow what you're arguing here. Are you saying you can just focus on people who already give fairly effectively?

Sorry, pretty unclear post on my part. Owen basically got it right though; if we're talking practically rather than theoretically, you don't have to decide to always focus on effectiveness or always focus on quantity. You can choose, and your choice can be influenced by the information you have about your audience.

Since most individuals are around the median/mode and a long way below the mean, talking about effectiveness is correct for most of them. There are a few exceptions (those 'up the right tail'), and with them you can talk about amount... or just accept that you aren't going to achieve that much there and find more people with whom you can talk about effectiveness!

This is obviously dependent on how much ability you have to discriminate based on your audience, which in turn depends on context, hence my 'mass marketing' point.

I think it's the reverse - if you exclude the people who already give effectively, then you've brought the mean of those who remain down closer to the median.

Another speculative argument in favor of big asks: most charities seem to focus on making small asks, because people think that getting people onto the first step towards making a difference is the crucial bottleneck (e.g. see this guy), so the space of 'making big asks' is neglected. This means it's unusually effective to work in this space, even if you appeal to many fewer people.

This seems to be one of the main reasons GWWC has been much more effective than ordinary fundraising techniques.

The downside is that we're concerned with total scale as well as cost-effectiveness, and a 'big ask' approach probably has less total growth potential in the long-run.

The GWWC pledge isn't really an 'ask' - people may make particular donations because they're asked to, but no one commits to donating 10% of their income every year until they retire because someone asked them to. Instead they make this commitment because they want to do it anyway, and the pledge provides a way for them to declare this publicly to influence others. So it would be interesting to find examples of more typical big asks working - e.g. fundraising teams which highball potential donors. Does anyone know of these?

This would, if true, imply that GWWC does not actually result in any additional donations on the part of its members. It might still result in more donations on the part of non-members.

"but no one commits to donating 10% of their income every year until they retire because someone asked them to"

We do in our outreach efforts!

Ah sure, but I'm saying that no one gives this sort of money just because they've been asked to - it's too large and long-lasting a commitment, and being asked is not a powerful enough reason or prompt. Asking them to sign the GWWC pledge may prompt them to make this public declaration, but only if they were already happy to give that sort of money.

In my experience, some of the people I've asked to take the pledge would have donated 10% "eventually", but the pledge actually made them follow through in at least that particular year, when I'm confident they otherwise would not have.

Some would have done so anyway, but I think the example set by hundreds of others, including some they know personally, normalises giving a large amount and makes them more likely to copy. Also making a public declaration makes people more likely to follow through.

We also obviously make arguments in favour of doing so. It rarely convinces people immediately but it contributes to moving them in that direction.

"People respect and are impressed by those who are making big sacrifices for the benefit of others. Note how much attention and respect from important figures Toby Ord has gotten by pledging a large fraction of his income. Also the kudos generally given to doctors, soldiers, firefighters, Mother Teresa, etc."

Attention and admiration, absolutely. But how much copying? I would expect people to be most drawn to people who seem to be only 'one step' more self-sacrificing than they are, in a conceptual framework that I suppose is analogous to the idea of inferential distance. For instance, I think Toby benefits in this from having an 'ordinary' income. Anything further away than that rapidly becomes too weird to be taken seriously. Note Jeff Kaufman has written about pretty much exactly this:

http://www.jefftk.com/p/optimizing-looks-weird

One weak piece of support for the idea that these examples are far enough away for most people to dismiss them without much thought is the very fact that the media happily talks about them; this makes them less challenging and less cognitive-dissonance-inducing, so readers' dominant impression is to be impressed rather than unsettled.

The fact that each person is less impacted has to be weighed against the fact that about 100,000-1,000,000 times as many people have heard about what Toby is doing as would have heard about someone who just privately gave a few percent of their income.

Of course we need examples of more typical people to refer to as well.

Although Toby's story is only about 2-10x more widely known than it would be if he had given 1% and still founded GWWC.

No way, the main media interest was in the extremeness of the amount. The press wouldn't care about or cover Giving What We Can if the ask was 1% and Toby was giving 1%.

Most of GWWC's exposure is through word of mouth. This is even truer for the exposure that matters - the kind that leads to people signing up - much of which comes through meeting Giving What We Can members.

The idea that 99.99% of the exposure of Giving What We Can would have disappeared if they couldn't focus on Toby's generosity could only result from very unclear thinking. The press coverage would just have looked different. Trivially, other members would have given a large fraction even if Toby hadn't told them to, and the press could have focussed on that. The example here is The Life You Can Save. Granted, it had Peter Singer, but I hope we're not going to debate whether, without Peter, it could still have attracted at least 0.1% of the same press. I mean, EA Melbourne has been able to give talks, go on community radio et cetera without citing the 10% figure...

Like, I know things like press coverage can have weird power-law distributions, but we're really short-changing the exposure of the rest of GWWC's message - the parts that are about the quality of donations - if we say that all but a hundred-thousandth of its exposure came purely from the quantity given.

So although the benefits for exposure of giving 10% are presumably there, and they might matter a lot, they're more like 2-10x.

"The idea that 99.99% of the exposure of Giving What We Can would have disappeared if they couldn't focus on Toby's generosity could only result from very unclear thinking."

Yes, but this is what you said: "Although Toby's story is only about 2-10x more widely known compared to if he gave 1% and still founded GWWC."

You may think GWWC would have a similar number of members today because of how it grows person-to-person, but his story would be much less known because there would basically be no story to report on. Someone gives 1% to charity and encourages others to do the same? It's not newsworthy. As it was, it got into the most-read stories on BBC News and other similar outlets, and literally millions (maybe tens of millions) of people heard about him.

Furthermore, I think that media exposure was necessary to turn Giving What We Can from just a group of friends in Oxford into the going concern it is today. That story isn't as valuable today, but it was the main asset we had in the early days.

Haha Rob it's Christmas, can't we stop fighting? Because I'm right, and you should convert to my point of view. :|

But seriously, you're saying that if Toby had given 1% instead of 50%, then rather than 10 million people knowing his story, only 10-100 people would? That's simply not reasonable. Without press, even if each of the handful of academics who signed up for GWWC had mentioned it in their opening lectures that semester, you would already have a thousand people who had heard the story.

Haha, let's split the difference. Maybe 100,000 people would have heard of Toby, so 1-2 orders of magnitude? I think that's enough for me to make my point that it could be better overall, even if each person takes the example less seriously. :P

Scott Alexander makes my point, that strong rules make for strong communities, better than I did here: http://slatestarcodex.com/2014/12/24/there-are-rules-here/

I think there are two questions here:

  1. How much of my time should I allocate to altruistic endeavour?
  2. How should I use the time I’ve allocated to altruistic endeavour?

Effective altruism clearly has a lot to say about (2). It could also say some things about (1), but I don’t think it is obliged to. These look like questions that can be addressed (fairly) independently of one another.

An aside: a weakness of the unqualified phrase “do the most good” is that it blurs these two questions. If you characterise the effective altruist as someone who wants to “do the most good”, it’s easy to give the impression that they are committed to maximising both the effectiveness of their altruistic endeavour and the amount of time they allocate to altruistic endeavour.

I’m quite keen on Rob’s proposed characterisation of an effective altruist, which remains fairly quiet on (1):

Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good.

This strikes me as a substantive and inclusive idea. Complementary communities or sub-groups could form around the idea of giving 10%, giving 50%, etc, and effective altruists might be encouraged - but not obliged - to join them.

Much of the discussion in this thread has focussed on the question of which characterisation of effective altruism would have the greater impact potential in the long-run. In particular, whether a more demanding characterisation, likely to limit appeal, might nonetheless have a greater overall impact. I don't have much to add to what's been said, except to flag that an inclusive characterisation is likely to bring more diversity to the community - a quality it's somewhat lacking at present.

For better or for worse, I think it may be difficult to "police the door" on who should and shouldn't call themselves an effective altruist. For example, a whole lot of people call themselves environmentalists, even if they're doing little or nothing for the environment besides holding opinions positive to environmentalism. On the flip side, there are people doing more for the environment than the typical environmentalist.

In practice, I think that what words people use to describe themselves has more to do with what words their friends use to describe themselves. This applies to me too - like Peter, I'm a GWWC member, but I don't self-identify as an effective altruist, and I think this is because I don't feel very connected to the community.

I think this works in reverse too. "Queer" is a word that naively seems well defined to exclude some people, but I know people who self-identify as queer even though they are both straight and cisgender. I'm not criticizing - these people are also usually careful to communicate clearly about what this means. I say this to point out how difficult it can be to clearly define group membership.

GWWC has a well defined criterion for membership, and there could be other similar organizations with well defined criteria, but I'm not sure that we could give the movement itself a well defined criterion even if we wanted to.

I don't know how to attract people to something, but I certainly know a surefire way to turn them off: make them feel judged. I find that nothing will make someone like you more than making them feel validated, and nothing will make someone hate you (or reject your position) more than feeling that you're judging them. Look at videos of Singer's lectures to universities about EA and utilitarianism. Most of the students' questions afterwards are negative, sometimes strongly so. It's because they feel like he is judging them for being selfish. That's also why people tend to be negative towards vegans: they feel like the vegans probably think they're bad people for eating meat, so they are resentful towards them.

Part of me likes the 10% standard, but part of me thinks that people who don't plan on giving that much will feel judged and therefore develop animosity against the movement, dismissing the whole effectiveness thing outright. I think that since people think so little about their impact in the world, a "the more good you do the better" attitude will probably be most productive. An all-or-nothing "if you donate less than 10%, or not to a 'top' charity, you're not a real effective altruist, or you're a moral failure" attitude will probably just result in people rejecting EA altogether, just like an 'abolitionist' you're-a-horrible-person-if-you-consume-any-animal-products vegan stance results in most people simply dismissing changing their diets in any way.

Having said that, it's good to have an achievable goal, and people are driven by aspiring to achieve something or be greater than they are, so I think the 10% standard would be net positive as long as it's considered an ideal (the low end of the ideal) without any stigma attached to falling short of it.

The problem with this definition is that someone who did absolutely nothing to help others could hold that belief and qualify. That seems quite strange and confusing. At the EA Summit this year I proposed the following on my slides:

Possible standards

  • ‘Significant’ altruism. One of:
      • Money: 10% of income or more?
      • Time: 10% of hours or more?
      • A ‘significant’ change in career path?

  • Open-mindedness:
      • Willing to change beliefs in response to evidence
      • Cause neutral: if given good reasons to believe that a different cause area will do more good, will switch to that
      • Must hold a ‘reasonable’ view of what is good (no Nazi clause)

Read more: https://drive.google.com/file/d/0B8_48dde-9C3WUVkTGdoUEliQ0E/view?usp=sharing

"The problem with this definition is that someone who did absolutely nothing to help others could hold that belief and qualify."

Well zero can often be an awkward edge case but we don't really need a definition to tell us that someone who does nothing for others isn't an effective altruist. However, when someone does a small amount for others, if they're giving a small amount to highly effective causes, they can be a very important part of the extended altruism community. Take Peter Thiel, who seems to give <1%, or think of Richard Posner or Martin Rees, who have made huge contributions to the GCR-reduction space over many years, using a small fraction of their working hours.

On a related note, a lot of people think like effective altruists but don't act on it. I've found that it can be dangerous to write these kinds of people off because often you come back and meet them again in a few years and they take concrete actions by donating their time or other resources to help others.

Last, I just worry about the whole notion of applying 'standards' of effective altruism. The whole approach seems really wrong-headed. It doesn't feel useful to try to appraise whether people are sufficiently generous or "open-minded" or "willing to update" to "count" as "one of us". It's pretty common for people to say to me that they're not sure whether they "count" as an effective altruist. But that's obviously not what it's about. And I think we should be taking a loud and clear message from these kinds of cases that we're doing something wrong.

I think this is exactly right. Encouraging people to do more is of course great, but while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it's likely to result in a much smaller movement. Positive feedback for taking increasingly significant actions seems like a better motivator.

If we did spread the idea of effectiveness very widely but it didn't have a high level of altruism attached to it, I think that would already achieve a lot, and I think it would also be a world in which it was easier to persuade many people to be more altruistic.

"while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it's likely to result in a much smaller movement."

What makes you think that? I just have no idea which of the effects (encouraging people to do more; discouraging them from taking a greater interest) dominates.

Thanks for asking this question. I found it helpful to introspect on my reasons for thinking this.

Roughly, I picture a model where I have huge uncertainty over how far the movement will spread (~5 orders of magnitude), and merely large uncertainty over how much the average person involved will do (<2 orders of magnitude). This makes it more important right now to pursue percentage improvements in movement breadth than in commitment. Additionally, the growth model likely includes an exponential component, so nudging up the growth rate has compounding effects.

To put that another way, I see a lot of the expected value of the movement coming from scenarios where it gets very big (even though these are unlikely), so it's worth trying to maximise the chance of that happening. If we get to a point where it seems with high likelihood that it will become very big, it seems more worthwhile to start optimising value/person.

Two caveats here:

(i) It might be that demanding standards will help growth rather than hinder it. My intuition says not and that it's important to make drivers feel positive rather than negative, but I'm not certain.

(ii) My reasoning suggests paying a lot more attention at the margin to effects on growth than to effects on individual altruism, but it doesn't say the ratio is infinite. We should err in both directions, taking at least some actions which push people to do more even if they hinder growth. The question is where we are on the spectrum right now. My perception is that we're already making noticeable trade-offs in this direction, so perhaps going too far, but I might be persuadable otherwise.

I have a different reason for thinking this is true, which involves fewer numbers and more personal experience and intuition.

Having a high standard - either you make major changes in your life or you're not an effective altruist - will probably fail because people aren't used to making, and aren't willing to make, big, sudden changes in their lives. It's hard to imagine donating half your income from the point of view of someone currently donating nothing; it's much easier to imagine doing that if you're already donating 20% or 30%. When I was first exposed to EA, I found it very weird and vaguely threatening, and I could definitely not have jumped from that state to earning to give. Not that I have since gone that far, but I do donate 10%, and the idea of donating more is at least contemplatable. Even if you mostly care about the number of people who end up very highly committed, having low or medium standards gives people plausible first steps on a ladder towards that state.

As an analogy, take Catholics and nuns. There are many Catholics and very few nuns, and even fewer of those nuns were people who converted to Catholicism and then immediately became nuns. If there was no way to be Catholic except being a nun, the only people who could possibly be nuns would be the people who converted and then immediately became nuns.

Giving What We Can finds that the 10% bar is tough for people but not unimaginable. Certainly we shouldn't be unfriendly to people who don't do that - I'm not unfriendly even to people who don't do anything to help strangers - but we could set it as the bar people should aspire to, and a bar most people in the community are achieving most of the time.

Yeah, it's also a useful observation that when they talk to the general public, most charities ask for a small regular commitment of funds, like $30 per month. If you're asking people who already identify as effective altruists, it might make sense to ask for more, but if you're approaching new people, this would seem like a sensible starting point.

Whether it's good to have a 'standard' is certainly unclear. But if we do have one, I don't think it can relate only to beliefs rather than actions.

Compare: "A Christian is someone who believes that in order to be a good Christian you should do X, Y and Z." "An environmentalist is someone who thinks that if you wanted to help the environment, you would do X, Y, Z."

Well, an effective environmentalist can be someone whose environmentalism is effective. Likewise, an evangelical Christian could be someone who is evangelical in their Christianity. You could argue that an evangelical Christian only counts as such if they spend 2% of their time on a soapbox or knock on fifty doors per week but that would be extreme. Can't an (aspiring) effective altruist just be someone whose altruism is as effective as they can make it?

A stronger option is:

"Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good"

Your examples don't track my statement of what is required - they involve merely having a belief about the definition of a term like 'good Christian'.

"(*) To be a good altruist, you should use evidence and reason to do the most good with your altruistic actions."

What about someone who believes this but engages only in ineffective altruism because they don't care much about being a 'good altruist'? I can see there being many people like this. They realise that to be a 'good altruist' they should maximise their cost-effectiveness, and they find it an interesting research area, but all of their actual altruism is directed at people they know, causes they personally are invested in but that aren't terribly helpful, etc.

Ah, I see. You were thinking about the kind of attributes involved in affiliation: e.g. self-identification, belief, general action or specific stipulated actions.

I was arguing along a different axis - whether it would be better to restrict the standard to the domain of altruism or make it unrestricted.

Interesting topic Peter!

If someone gives 10% of their income to effective charities, I don't think anyone would say that they don't count as an EA because they're not devoting all their actions and resources to altruism. (This is not to say that giving 10% of your income is required, only that it's sufficient.)

I don't think we'd want effective altruism to make claims about what it takes to "be a good person". EA says that you have the opportunity to do a lot of good with your resources, and that there's a strong moral case for devoting a large portion of them to doing so. But there's no non-arbitrary portion of your resources that you're required to devote to this to count as a good person.

I think that many people count someone as an EA if they subscribe to (*), regardless of what actions they take on the basis of it - perhaps even if they don't take any actions at all. I'd be curious as to others' views of this.

One reasonable starting point for this would be to get a list of 'sufficient' rather than 'necessary' conditions, which provide definition without being necessarily exclusionary. For instance, I think being in GWWC and keeping your pledge is a clear sufficient condition that we're unlikely to want to change or contest. What are some others?

Worth reminding everyone of the most upvoted post on this forum to date: "Effective Altruism is a Question (Not an Ideology)".
