
[This is a rough write-up, mainly based on my experiences in EA and previous reading (I didn't do specific reading/research for this post). I think it's possible there are important points I'm missing or explaining poorly. I'm posting it anyway in the spirit of trying to overcome perfectionism, and because I mentioned it to a couple of people who were interested in it.]

I think that EA as a worldview contains many different claims and views, and sometimes we may not realise that all these distinct claims are bundled into our normal picture of "an EA", and instead might think EA is just "maximise positive impact". I initially brainstormed a list of claims I think could be important parts of the EA worldview and then tried to categorise them into themes. What I present below is the arrangement that feels most intuitive to me, although I list multiple complexities/issues with it below. I tried to use an overall typology of claims about morality, claims about empirical facts about the world, and claims about how to reason. Again, this is just based on some short intuitions I have, and is not a well-defined typology.

I think this is an interesting exercise for a couple of reasons:

  • It helps us consider which ideas are most core to EA, which informs how we pitch it and how we define the community, e.g. which claims do we focus on when first explaining EA?
  • It demonstrates the wide variety of reasons people might disagree with the common “EA worldview”
  • It demonstrates that there are some empirical claims EAs tend to believe that most people outside the community don't, and that aren't direct implications of the moral claims (e.g. AI poses a large threat; there's a large variation in the impact of different charities). We might expect EA to be defined by a single key insight rather than several unrelated ones (it's one thing to notice the world is getting something wrong in one way, but it feels less likely that we'd be the only ones to notice several independent flaws). However, I do think these independent empirical claims can be explained by how EA values draw the community's attention to specific areas and give it an incentive to try to reason accurately about them.

(I’ve bolded the specific claims, and the other bullet points are my thoughts on these)

I’d be interested if there are important claims I’ve missed, if some of the claims below could be separated out, or if there’s a clearer path through the different claims. A lot of my thinking on this was informed by Will MacAskill’s paper and Ben Todd’s podcast.

Moral Claims

Claims about what is good and what we ought to do.

  • Defining good
    • People in EA often have very similar definitions of what good means:
    • The impact of our actions is an important factor in what makes them good or not.
    • We should define good in a roughly impartial, welfarist way.
      • I roughly understand this as: the definition of good does not depend much on who you are, and roughly depends on the impact of your actions on the net value of the relevant lives in the world.
      • This definition of good then helps lead us to thinking in a scope-sensitive way.
    • When considering the relevant lives, this includes all humans, animals and future people. We generally do not discount the lives of future people intrinsically at all.
      • This longtermist claim is common but not absolute in EA, and I’m brushing over multiple population ethics questions here (e.g. several EAs might hold person-affecting views).
  • Moral obligations
    • We should devote a large amount of our resources to trying to do good in the world
      • I think this is often missed and is not really included in common definitions of EA, which instead focus on maximising impartial impact with whatever resources you choose to devote to doing good. But I think this misses something important. Someone who donated £2 a year based on maximising impact, and acted in their own interests the rest of the year, would probably be quite out of place in the current EA community.
      • This is an important theme in Peter Singer’s work (I think), and aligns with a lot of common ideas of being a good person that involve being selfless. I think it may get less attention in EA at the moment because many people actually think that choosing a high-impact career is the most important thing for you to do, and that this can often align quite a lot with your own interests. I highlight this below as a separate empirical claim.
  • Maximisation
    • When trying to do good we should seek to maximise our positive impact
      • This is perhaps the most fundamental part of the EA worldview. It’s useful to note that it doesn’t include a definition of good: one could seek to maximise positive impact with “good” defined only by impact on people in one’s own country. There is also some sense in which this naturally arises from any consequentialist definition of good, where the more positive impact you have the better, so I sometimes struggle to disentangle this as a separate claim. Maybe when people disagree with this claim, it’s often because they view positive impact as not a large factor in whether an action is good or bad.

Empirical Claims

  • There are also multiple parts of the EA worldview that are empirical claims about the world, and not just questions of morality. It’s interesting to think about how someone with the above moral views would act if they didn’t hold some of these empirical views (it could be quite different from how the EA community looks at the moment).
  • Differences in impact
    • There is a large variation in the impact of different approaches to doing good
      • I think this really is a core EA claim that is sometimes neglected. The reason we spend time thinking about impact and getting other people to think about it is because there are huge differences. If we had the above moral views but thought there wasn’t much variation in impact, we wouldn’t spend anywhere near as much time thinking about how to have an impact or encouraging other people to.
      • This also helps show that one doesn’t have to view impartial impact as the only thing that matters: some people would argue that even if you care about other things as well, this huge variation in impartial impact should still be a major concern.
  • How people normally try to do good
    • Another key part of the empirical EA worldview is that people don’t often make decisions about doing good based on maximising impact; if everybody did, there’d be no need for a separate EA movement. It’s an interesting question how much this is because people don’t hold the above moral views, don’t realise the differences in impact, or just have bad intuitions/reasoning about which options are high impact.
    • Our intuitions about what is high impact are often wrong
    • People often do not base their decisions on maximising impact
  • Size of maximum impact
    • An individual in a rich country can at least save many lives through their actions
    • Not necessarily a core claim, but I think many people in EA are motivated by how large their potential impact could be. This is also perhaps different from differences in impact being large: one could think there is large variation in impact, but that even the best option for doing good achieves much less than saving one person’s life. A consequence of this would be to spend much more time on improving your own life.
  • Facts about the world
    • I think there are also several somewhat miscellaneous claims that play very important roles in EA’s current prioritisation.
    • Existential risk is “high”
      • A lot of people in EA think the risk of human extinction or civilisational collapse is higher than people outside EA do. Although this might not be very clear-cut: EAs devote a lot of attention to existential risk, but this might just be because they view extinction as much worse than non-EAs do (due to a longtermist worldview), in which case they need not have a higher estimate of the risk itself.
    • The human world has got better
      • Not necessarily a key claim in most EA cause prioritisation, but I think it’s an important background claim that humans today live better lives than they did in the past.
      • I changed this from “the world has got better” because the effect of factory farming on the net value of the world is uncertain for some EAs.
    • Potentially huge future
      • A key claim in much longtermist prioritisation is that there could be an astronomically large number of people in the future. A separate claim could also be that the expected number of people in the future is astronomically large.
    • Long-term effects persist
      • Another key claim in the longtermist worldview is that our actions today have persistent effects long into the future, as opposed to “washing out” (in which case we should base our prioritisation on short-term effects).
    • Animals in factory farms have net negative lives
      • Not a fundamental claim for EA, and not a view unique to EA, but still an important/common view in an unusually veggie/vegan community. And this is a distinct claim from “animal lives matter”: one could care about animal lives but think animals live good lives in factory farms, and therefore not think factory farming is bad.
      • It might be the case though that most people outside EA don’t value animal lives and so don’t really consider this point.
    • Most humans in the world have net positive lives
      • Again not a fundamental claim, but I think our prioritisation would look different if this weren’t true. For example, if you believed large numbers of people had net negative lives, and either didn’t expect this to improve or didn’t value future lives, you would maybe not view certain global catastrophic or extinction risks as that bad.
    • Sentience is not limited to humans/biological beings
      • It is perhaps a semantic issue where you draw the line between moral claims about sentience and sentient beings, and empirical claims about sentience. But I think the view that a “biological” body is not required for sentience, and that e.g. digital minds could be sentient, is an important consideration and relevant in a lot of longtermist EA prioritisation.
    • We live in an “unusual” time in history
      • This is quite a vague claim, and isn’t necessarily equating unusual with important/hingey. However, I think most(?) EAs share the view that the industrial revolution was historically abnormal, that the current world is quite unusual, and that the future could be very different from the past.

Claims about Reasoning

  • I feel most unsure how to differentiate claims under this heading.
  • But there is some sense in which EAs all agree on trying to reason critically, based on evidence and honest inquiry, and on being truth-seeking.
  • We should be truth-seeking.
    • I think for some people this comes downstream from wanting to have a large positive impact: seeing the world as it is is instrumentally useful for having a large positive impact. However, some people (perhaps those with more of a rationalist bent) might view this as intrinsically valuable.

 

I’ve been quite vague in my descriptions above and am likely missing a lot of nuance. For me personally, many of these claims are downstream of the idea of feeling morally obligated to try to improve the world as much as possible, and an impartial and welfarist definition of good.

Comments

When considering the relevant lives, this includes all humans, animals and future people. We generally do not discount the lives of future people intrinsically at all. This longtermist claim is common but not absolute in EA, and I’m brushing over multiple population ethics questions here (e.g. several EAs might hold person-affecting views).

I don't think this is a longtermist claim, nor does it preclude person-affecting views.

You can still value future people equally with present people, and not discount them at all insofar as they are sure to exist. If they are less likely to exist, you could discount them by one minus that probability (i.e. weight them by their probability of existing) in an expected value computation. OK, the math of this does get challenging for the person-affecting-view-er, insofar as they cannot just consider their impact on the sum of this value: they only care about improving welfare holding the number of people constant, not about the component of the effect of their choice that comes through changing the expected number of people in existence.

I actually think total-population-ethics-ers would do that probability discounting too; however, they would value their impact on the number of people likely to exist.
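To make the distinction above a bit more concrete, here is one rough way to formalise it (my own sketch, not the commenter's; the notation is purely illustrative). Let $p_i(a)$ be the probability that person $i$ exists if we take action $a$, and $w_i(a)$ their welfare if they do exist. A total-view expected value is then roughly

$$\mathrm{EV}_{\mathrm{total}}(a) \;=\; \sum_i p_i(a)\, w_i(a),$$

so an action can score well either by improving welfare $w_i(a)$ or by increasing the probability $p_i(a)$ that (happy) people come to exist. A person-affecting evaluation, very roughly, only counts the first channel, e.g. something like $\sum_i p_i \,[\,w_i(a) - w_i(a')\,]$ taken over people whose existence doesn't depend on the choice between $a$ and $a'$, which is why the math gets more awkward for the person-affecting view, as the comment notes.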

I've been thinking of distilling some of the criticism of EA that I hear into similar, clearly attackable foundational claims.

One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact. This means things like founding an org, choosing your career, spending time deciding where your money goes. Collective action would mean empowering community-controlled institutions that make decisions by going through a democratic process of consensus-building. Instead our coordination mechanisms rely on trusting a few decision-makers that direct large amounts of funding. This is a consequence of the EA movement having been really small in the past.

Also, it seems we are obsessed with the measurable. That goes as far as defining "good" in a way that does not directly include complex relationships. Strict QALY maximizers would be okay with eugenics. I don't even know how to approach a topic like ecosystem conservation from an EA perspective.

I think in general we should be aware that our foundational assumptions are only a simplified model of what we actually want. They can serve us fine for directly comparing interventions, but when they lead to surprising conclusions, we should take a step back and examine if we just found a weak spot of the model.

One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact. 

Personally I wouldn't agree with that. Effective altruists have been at pains to emphasise that we "do good together" - that was even the theme of a past EA Global, if I don't misremember.

80,000 hours had a long article on this theme already in 2018: Doing good together: how to coordinate effectively, and avoid single-player thinking. There was also a 2016 piece called The value of coordination on similar themes.

Also, it seems we are obsessed with the measurable.

I take a different view on that, too. For instance, Katja Grace wrote a post already in 2014 arguing that we shouldn't refrain from interventions that are high-impact but hard to measure. That article was included in the first version of the EA Handbook (2015).

In fact, many of the causes currently popular with effective altruists, like AI safety and biosecurity, seem hard to measure.

Thanks for the very useful links, Stefan!
I think the usefulness of coordination is widely agreed upon, but we're still not working together as well as possible. The 80,000 Hours article you linked even states:

Instead, especially in effective altruism, people engage in “single-player” thinking. They work out what would be the best course of action if others weren’t responding to what they do.

I'll go and spend some time with these topics.

I expect most EAs would be self-critical enough to see these both as frequently occurring flaws in the movement, but I'd dispute the claim that they're foundational. For the first criticism, some people track personal impact, and 80k talks a lot about your individual career impact, but people working for EA orgs are surely thinking of their collective impact as an org rather than anything individual. In the same way, 'core EAs' have the privilege of actually identifying with the movement enough that they can internalise the impact of the EA community as a whole. 

As for measurability, I agree that it is a bias in the movement, albeit probably a necessary one. The ecosystem example is an interesting one: I'd argue that it's not that difficult to approach ecosystem conservation from an EA perspective. We generally understand how ecosystems work and how they provide measurable, valuable services to humans. A cost-effectiveness calculation would provide the human value of ecosystem services (which environmental economists usually do) and, if you want to give inherent value to species diversity, add the number of species within a given area, the number of individuals of these species, the rarity/external value of species, etc. Then add weights according to various criteria to give something like an 'ecosystem value per square metre', and you'd get a value that you could compare to other ecosystems. Calculate the price it costs to conserve various ecosystems around the world, and voila, you have a cost-effectiveness analysis that feels at home on an EA platform. The reason this process doesn't feel 100% EA is not that it's difficult to measure, but that it can include value judgements that aren't related to the welfare of conscious beings.
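As a purely illustrative sketch of the kind of calculation described above (the terms and weights here are hypothetical, not taken from the comment), the comparison could look something like

$$\text{cost-effectiveness} \;=\; \frac{V_{\text{services}} + \sum_j \beta_j B_j}{C_{\text{conservation}}},$$

where $V_{\text{services}}$ is the monetised value of the area's ecosystem services, the $B_j$ are biodiversity indicators (species count, number of individuals, rarity, etc.), the $\beta_j$ are whatever weights you choose to reflect the inherent value of diversity, and $C_{\text{conservation}}$ is the cost of conserving that area. Dividing by area would give the 'ecosystem value per square metre' figure mentioned above.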

I think you get a lot right, but some of these claims, especially the empirical ones, seem to apply only to certain (perhaps longtermist) segments.

I'd agree with/focus on:

  1. Altruism, willingness to substantially give (money, time) from one's own resources, and the goodness of this (but not necessarily an 'obligation')

  2. Utilitarianism/consequentialism

(Corollary): The importance of maximization and prioritization in making choices about doing good.

  3. A wide moral circle

  4. Truth-seeking and reasoning transparency

I think these four things are fairly universal and core among EAs -- longtermist and non -- and they bring us together. I also suspect that what we learn about how to promote these things will transfer across the various cause areas and branches of EA.

I sort of disagree with the idea of us 'agreeing on a set of facts'. It seems somewhat at odds with the truth-seeking part. I would say "it is bad for our epistemic norms"... but I'm not sure I use that terminology correctly.

Aside from that, I think some of the empirics you mentioned probably have a bit less consensus in EA than you suggest... such as

We live in an “unusual” time in history

My impression was that even among longtermists the 'hinge of history' thing is greatly contested.

Most humans in the world have net positive lives

Maybe they do now, but I don't think we can have great confidence about the future. Also, the 'most' does a lot of work here. It seems plausible to me that at least 1 billion people in this world have net negative lives.

Sentience is not limited to humans/biological beings

Most EAs (and most humans?) surely believe at least some animals are sentient. But for non-biological beings, I'm not sure how widespread this belief is. At least I don't think there is any consensus that we 'know of non-bios who are currently sentient', nor do we have consensus that 'there is a way to know what direction the valence of the non-bios goes'.

e.g. digital minds could be sentient is an important consideration and relevant in a lot of longtermist EA prioritisation.

I'm not sure that's been fully taken on board. In what ways? Are we prioritizing 'create the maximum number of super-happy algorithms'? (Maybe I'm missing something though; this is a legit question.)

I was just thinking about this the other day. In terms of pitching effective altruism, I think it's best to keep things simple instead of overwhelming people with different concepts. I think we can boil down your moral claims to essentially 3 core beliefs of EA: 

  1. Doing good is good. (Defining good)
  2. It is more good to do more good. (Maximization)
  3. Therefore, we ought to do more good. (Moral obligation)

If you buy these three beliefs, great! You can probably consider yourself an effective altruist or at least aligned with effective altruism. Everything else is downstream of these 3 beliefs and up for debate (and EAs excel at debating!).  

Probably it would be worthwhile to cross-reference your post with sources such as:

https://www.centreforeffectivealtruism.org/ceas-guiding-principles

https://resources.eagroups.org/running-a-group/communicating-about-ea/what-to-say-pitch-guide

These sources seem to encapsulate key claims of EA nicely, so points raised there could serve as additional points for your analysis, and maybe clear some things up (I haven't thought about it much, just dropping the links).

Possibly relevant: "Effective Justice" paper.

Reminds me of this spreadsheet made by Adam S, which I generally really like.

I agree that it would be nice to have a more detailed, up-to-date typology of EA's core & common principles, as the latter seems like the more controversial class (e.g., should longtermism be considered the natural consequence of impartial welfarism?).
