Over the past two months, staff at the Centre for Effective Altruism have updated the curriculum for our introductory effective altruism programs. We now have a unified Effective Altruism Handbook (the fourth edition), which we will use as our suggested introductory fellowship curriculum for virtual programs and EA groups.

We are excited about the improvements that have been made, and think that this version will provide a better experience for people new to EA ideas and principles. But it's still not perfect, and we're eager to hear your input and feedback to help us improve it.

Here is the updated version.  

Major Changes

The main changes we made were:

  1. To focus more on core EA concepts and thinking tools: We added reasoning on these topics, and also highlighted these tools in the introduction to each chapter, because we think that core concepts and thinking tools are the most important things we want to leave people with as we introduce them to EA. We still have a lot of content on specific cause areas, but it is relatively de-emphasized.
  2. To update readings with content published in the last two years.

Additionally, we: 

  • rearranged the sections on longtermism and X-risk;
  • added criticism sections to each session in the ‘More to Explore’ posts;
  • turned section 7 into an opportunity for people to reflect on specific uncertainties and concerns about EA ideas and practices;
  • made section 8 more clearly focused on next steps (by shifting some reflection back to week 7);
  • expanded the introduction to give more context on goals;
  • updated the UX/UI and added the handbook to the Forum.

Process

Max Dalton and Lizka Vaintrob did an initial review of the old curriculum, made quick changes, got feedback from group organizers and EA Virtual Program facilitators, and integrated these comments.

After this, we shared the updated handbook with about 15 stakeholders in the EA community (including people from all major cause areas, and critics of previous editions of the handbook). Max Dalton and Jesse Rothman integrated their comments into the updated handbook. We then posted the handbook on the Forum in order to improve the UX/UI, and integrated the handbook into broader EA conversations on the Forum.

We then did another set of user interviews with experienced facilitators and continued to make edits to the handbook based on their input. 

Our approach to cause area selection

Our primary goal with this version of the handbook was to share certain core principles or tools of EA (things like “making tradeoffs”, “truth seeking”, “scope sensitivity”). These concepts are now highlighted in the curriculum. I am personally excited to share and focus on these ideas, and CEA as an organization is focused on sharing these principles and helping people to work through their implications (rather than on promoting any particular cause area).

However, we also wanted to share the arguments for some of the key things that people in effective altruism are working on, and give examples of that work. We think this is important because it's a lot of what the community is about, and it also makes the introduction much more concrete (rather than being really philosophy-heavy, which we think would not give a good sense of what most people in effective altruism work on).

When talking about specific areas, our core goal was to share the arguments for some of the main areas, highlight that there are other areas that one could work on, make clear that there is disagreement in the community about what the right split between areas is, and encourage people to make up their own mind (which is the focus of the seventh chapter).

In terms of process, we emphasize in our introduction that we had to make these judgement calls and others would likely disagree with the calls we’ve made. We also consulted experts from all corners of the community, including previous critics of the curriculum and several people who focus on global health and wellbeing. Feedback from these groups was generally positive.

The overall split of the content is (very) roughly 50% on core principles, 30% on longtermism/x-risk, 10% on animal welfare, and 10% on global health and wellbeing. 

When deciding on this split we balanced a few different factors:

  • Main thing: wanting to give a high-quality explanation of each area. (This pushes somewhat toward giving more space to harder-to-explain areas like AI, relative to bednets.)
  • Wanting to be roughly representative of the views of people who have been involved in EA for a long while (placing some weight on “EA founders”, highly engaged community members, and cause prioritization experts, and not too much weight on the full sample of people who filled out the EA survey).
    • My current impression from rough research is that all of these groups currently would on average assign >60% of EA’s future resources to longtermist-related causes, though of course there is much disagreement.
  • Not wanting to emphasize any one area so much that people could “read the room”, and think “OK, they’re not saying it, but I’m meant to believe X”.
    • While the pure split still leans relatively longtermist (influenced by the other points) and risks this happening, we tried to mitigate this by having the 7th chapter be focused on encouraging people to develop their own views, by providing criticism for each cause area, and by trying to present a variety of framings in the final “what to do” chapter.

Overall the split that we decided on is roughly in line with the average views of the most engaged community members. I think that people will disagree about both the correct split and the correct process for deciding on the split, but ultimately we could only include so many articles and had to make a call.

Going Forward and Feedback

We are excited about the new version of the EA Handbook and think there are a bunch of ways it represents improvements from the previous one. Still, it’s clear that this is not the perfect version and we intend to continue to make edits and improvements. 

Going forward, we intend to make small (and occasionally larger) revisions to the program based on (1) feedback from facilitators and participants running online and in-person sessions; (2) feedback from EA stakeholders; and (3) CEA staff judgement.   

We are very eager for your feedback on the content, design, and usability of this new version. Please share your experience, thoughts, and ideas in the comments or through this dedicated feedback form.

Comments

I'm not sure about the reasoning in Four Ideas You Already Believe In. The four ideas are the following:

  1. It's important to help others 
  2. People are equal 
  3. Helping more is better than helping less 
  4. Our resources are limited

The argument in the post seems to be the following.


Premise A: You already believe in 1-4.

Premise B: Effective altruism follows from 1-4.

Conclusion: Your own views entail the principles of effective altruism.


But I don't think the conclusion follows from Premises A and B. Even if effective altruism follows from 1-4, and even if someone believes in 1-4, it need not follow from their whole set of beliefs. They may have additional beliefs that are in conflict with effective altruism (e.g. a strong preference for a certain cause, or perceived special obligations towards a certain group of beneficiaries), and may only be willing to apply 1-4 as long as they aren't overridden by those additional beliefs. If so, the conclusion doesn't seem to follow.

There’s a small selfish part of me which is happy that my “Why I am probably not a longtermist” post is shared as the critical piece on longtermism.

There’s a much bigger part which would wish that someone had written up something much more substantial though! I am a bit appalled that my post seems to be the best we as a movement have to offer to newcomers on critical perspectives.

Honestly, I kind of agree! I think your piece is good, but I think there hasn't been enough really high-quality and well-presented criticism of longtermism from an EA perspective. (If I've missed anything, please let me know, but I've asked around a bit already.)

I'm afraid I don't know of anything. While I still like my piece, it wasn't intended to provide a strong case against longtermism, only to briefly explore my personal disagreements. In such a piece I would want to see the case against longtermism from different value systems, as well as actual engagement with the empirics around cause prioritisation, and, obviously, it would need to be a lot more thorough than mine was.

Thanks for giving everyone the opportunity to provide feedback!

I'm unsure how I feel about the section on global poverty and wellbeing. As of now, the section mostly just repeats the claim that some charities are more effective than others, without much rigorous discussion of why that might be.

There's a ton of great material under the final 'differences in impact' post that I would love to see as part of the main sequence. Right now, I'm worried that people new to global health and development will leave this section feeling way overconfident about how sure we are about all of this charity stuff. If I were a person with experience working in the aid sector and decided to go through the curriculum as it is, I think I would be left thinking that EAs are way overconfident despite barely knowing a thing about global poverty.

Here is an example of a potential exercise you could include that I think might go a long way to convey just how difficult it is to gain certainty about this stuff:

Read and evaluate two RCTs on vaccine distribution in two southern Indian states. What might these RCTs tell us about vaccine distribution in India? Have the reader try to assess which aspects of these RCTs will generalise to the rest of India and which won't. They could, for example, make predictions (practicing another relevant EA skill!) on the results of an RCT for a northern Indian state.

You only have to do one deep dive on a topic to gain an appreciation for how little we know.

I recommend an exercise at some point encouraging the reader/participant to write up their criticisms of EA ideas.

Thanks! The exercise in Session 7 is meant to encourage people to write their critiques: https://forum.effectivealtruism.org/s/32FKXByGNgHLPaHnj/p/SYxBpdthYWcd6eQhF

Perhaps it's useful to make the writing suggestion more explicit/specific.

Suggestion: provide an EPUB version.

Thanks for the suggestion!  This is on our list to look into in the future.

Have you tested this handbook with potential users yet?

Yes! We shared earlier drafts of this handbook with a number of potential users (especially uni group organizers) and the handbook is now being piloted in an EAVP cohort, which is providing in-depth feedback.

Thanks!

The following is mostly my initial reactions and concerns, but I don't have enough context, and haven't spent enough time with the syllabus, to know whether these risks are big.

I don't think this is necessarily an actual concern, but reading your first sentence I would be a bit worried about oversampling from group organizers vs. group members, because organizers are likely to be biased and nonrepresentative (I think this could be bad even if your target audience is potential future organizers).

That being said, I'm glad you're piloting with a cohort of actual users. Do you know the size of the EAVP cohort and how they were selected?

I'd also be curious about how the focus on uni groups in general could affect the overall content, as opposed to an older group. One potential risk (based on my skim of the syllabus, I don't think it's a big risk, but wanted to flag it) is that the content is in some ways off-putting or less useful. This seems important since the branding is "EA Handbook" rather than "EA Students Handbook" or something similar.

Thanks Vaidehi! I appreciate the concern about over-optimizing for organizer (and especially student organizer) experience. We're hoping to get feedback from many folks and integrate accordingly. The EAVP cohort is small and was selected based on facilitator capacity; it isn't meant to be purely representative, and it won't be the last time we get feedback from those types of users.

Where can one find a PDF download of this handbook? That would be helpful for those who cannot always be online.

Is there a PDF version available for the Handbook?

In https://forum.effectivealtruism.org/s/B79ro5zkhndbBKRRX/p/ZhNaizQgYY9dXdQkM the whitespace following the headers under "What are some examples of effective altruism in practice?" seems out of place. Suggestion: remove the whitespace or make them proper headers.