4 min read
What the program is about[1]

Effective altruism (EA) is an ongoing project to find the best ways to do good, and put them into practice. 

Our core goal with this program is to introduce you to some of the principles and thinking tools behind effective altruism. We hope that these tools can help you as you think through how you can best help the world.

We also want to share some of the arguments for working on specific problems, like global health or biosecurity. People involved in effective altruism tend to agree that, partly due to uncertainty about which cause is best, we should split our resources between problems. But they don’t agree on what that split should be. Even though we’ve learned a lot over the last decade, people in the effective altruism community actively discuss and disagree about which causes to prioritize and how. We hope that you will take these ideas seriously and think for yourself about which ways to help are most effective.

Finally, we give you some time at the end of the program to begin to reflect on how you personally can help to solve these problems. We don’t expect you’ll have an answer by the end of the eight weeks, but we hope you’re better prepared to explore this further.

What the program involves

Each part of the program has a set of core posts and sometimes an exercise. 

We think that the core posts take most people about 1-2 hours to get through, and the exercise another 30-60 minutes. We have matched the readings and exercises so that, in total, we think it will take around 2-2.5 hours per week to prepare for the weekly session.

The exercises help you put the concepts from the reading into practice.

Beyond the core posts, there are more materials each week in ‘More to Explore’ — these are all optional and explore the themes of the week in more depth and breadth. 

Approximate reading times are given for each of the posts. Generally, we’d prefer you to take your time and think through the readings instead of rushing.

This curriculum was drawn up by staff from the Centre for Effective Altruism, incorporating feedback from others. Ultimately we had to make many judgement calls, and other people would have drawn up a different curriculum.[2]

How we hope you’ll approach the program

Taking ideas seriously

Often, conversations about ideas are recreational: we enjoy batting around interesting thoughts and saying smart things, and then go back to doing whatever we were already doing in our lives. This is a fine thing to do — but at least sometimes, we think we should be asking ourselves questions like: 

  • “How could I tell if this idea was true?”
  • “What evidence would it take to convince me that I was wrong about an idea?”
  • “If it is true, what does that imply I should be doing differently in my life? What else does it imply I’m wrong about?”
  • “How might this impact my plans for my career/life?”

And, zooming out: 

  • “Where are my blind spots?”
  • “Which important questions should I be thinking about that I’m not?”
  • “Do I really know if this idea/plan will help make things better or not?”

Answering these questions can help make our worldviews as accurate and full as possible and, by extension, help us make better decisions about things that we care about.

Disagreements are useful

When thoughtful people with access to the same information reach very different conclusions from each other, we should be curious about why, and we should actively encourage people to voice and investigate where those disagreements are coming from. If, for example, a medical community is divided on whether Treatment A or B does a better job of curing some disease, they should want to get to the bottom of that disagreement, because the right answer matters — lives are at stake. If you start off disagreeing with someone and then change your mind, that can be hard to admit, but we think it should be celebrated. Changing your mind in response to arguments you find compelling makes conversations clearer and helps the community act to save lives more effectively. Even if you don’t expect to end up agreeing with the other person, you’ll learn more if you acknowledge the disagreement and try to understand exactly how and why their views differ from yours.

Be aware of our privilege and the seriousness of these issues

We shouldn’t lose sight of our privilege in being able to read and discuss these ideas, or that we are talking about real lives. We’re lucky to be in a position where we can have such a large impact, and this opportunity for impact is the consequence of a profoundly unequal world. Also, be conscious of the fact that people in this program come to these discussions with different ideas, backgrounds, and knowledge. Some of these topics can be uncomfortable to talk about — which is one of the reasons they’re so neglected, and so important to talk about — especially when we may have personal ties to some of these areas.

Explore further

This handbook aims to introduce people to effective altruism in a structured manner. There are far too many relevant topics, ideas, and pieces of research for more than a small fraction of them to fit into this short program. If you are interested in these topics, you may find it very useful to dive into the linked websites, the websites those sites link to, and so on.


This work is licensed under a Creative Commons Attribution 4.0 International License.

  1. ^

    This handbook is also accessible as a Google Doc version here.

  2. ^

     Our goal is to introduce people to some of the core principles of effective altruism, to share the arguments for different problems that people in effective altruism work on, and to encourage you to think about what you want to do on the basis of those ideas. We also tried to give a balance of materials that is in line with the (significant) diversity of views on these topics within effective altruism. 

    In drawing up the curriculum, we consulted community members, subject matter experts, and program facilitators.

    We think that these readings are interesting and give a good introduction, but we hope that you engage with them critically, rather than taking them all at face value. Once you’ve read this curriculum, we encourage you to explore other EA writings (e.g. on this wiki).

Comments

Thank you for putting this together! I was struck by this sentence: "We’re lucky to be in a position where we can have such a large impact, and this opportunity for impact is the consequence of a profoundly unequal world." I have been thinking about this a lot, and whether, by eliminating inequalities we could reach a time where effective altruism might be substantially less relevant (or, in other words, in an ideal world, effective altruism wouldn't exist). Lots of food for thought. 

Very well structured, and a big thank you from our auditory learners! :)

This is really cool, love how structured and well laid out this is with a journey for people like me who don't quite have time to commit to a course but are happy to tune in and out as available. 

This statement has really stuck with me and shall be a guiding factor in many of the conversations I hold with people henceforth: "When thoughtful people with access to the same information reach very different conclusions from each other, we should be curious about why and we should actively encourage people to voice and investigate where those disagreements are coming from"

Thanks for a very structured and specific approach, with materials to help late starters like me begin to do good better. Thanks to the EA team!

I appreciate these writings!

I like the wording of "Be aware of our privilege and the seriousness of these issues". I think that section encapsulates what this is about and what attracted me to the movement.

I am excited by the prospects of this vision and I am curious as I learn more how different cultures internalize these values in reference to their ancestral wisdom and indigenous beliefs. There are countless human philosophies around the earth that live in harmony with their land and sustainably and abundantly provide for their communities. It appears that cultures and nations who recently dominated and decimated similar sustainable ways of living, in search for profit, expansion of power, etc. are the cultures trying to provide support and charity toward the people and lands whom they actively extract value from. I am uncertain if this power dynamic is effectively captured in the concepts of "privilege and seriousness of these issues". I am seeking to find the truth and looking forward to conversations with those who disagree and those who recognize the seriousness of unconditional reparations as a form of decolonization. 

Thank you, Anthony, for your very thoughtful comment. I live in a white community in the western US where there was a forced march of Indigenous people from their ancestral homeland to the reservation. Recently the local government has been renaming landmarks, including a bridge over which they were forced, with names in that language. This strikes me as something to assuage white guilt and excuse us from taking real action, like the unconditional reparations you mention. I wonder if this is in the same vein as the question you raise here.

We shouldn’t lose sight of our privilege in being able to read and discuss these ideas, or that we are talking about real lives. We’re lucky to be in a position where we can have such a large impact, and this opportunity for impact is the consequence of a profoundly unequal world.

 

Indeed very grateful to embark on this journey and extremely excited to go through EA in a structured manner :)

 

If you start off disagreeing with someone then change your mind, that can be hard to admit, but we think that should be celebrated. Helping conversations become clearer by changing your mind in response to arguments you find compelling will help the community act to save lives more effectively.

 

To add to this (for those who think Charity Entrepreneurship could be a career option), this is from page 35 of the book How to Launch a High-Impact Nonprofit:

Feedback loops and openness to criticism allow you to change your mind and grow your organization. This requires a rare willingness to admit mistakes. Cultivate a "scout mindset," trying to understand situations and concepts as honestly and accurately as possible, even when inconvenient. Remember, changing your mind is the ultimate victory, because in those moments you are improving your model of the world and making your charity better.

Thanks, EA team, for the information — looking forward to learning more.

This is my first time ever engaging with the concept of EA. I am keeping an open mind and have come to this programme to learn. I will be open to questioning truths and knowledge I have known and held close to my heart over time. I would be open to new ideas, fresh thinking, and even hard truths. I like the way the programme is structured to allow freshers like us to build our confidence with this new thinking and philosophy.

"We shouldn’t lose sight of our privilege in being able to read and discuss these ideas, or that we are talking about real lives. We’re lucky to be in a position where we can have such a large impact..." Sometimes I feel as though I am one to lose sight of the privilege I have to ponder and discuss these ideas. The EA team makes me genuinely excited about the change and impact I can potentially make, and for that, I give them a big thanks!

Awesome. One thing is always very key to life, which is: we shouldn't trivialize any opportunity we find to make a positive impact. Thank you all @ EA team.

This handbook alone reveals how exciting the program is going to be: a chance to explore and reflect on the world's biggest causes with others will develop critical thinking skills and introduce new insights and ideas.

I personally enjoy the part of this reading that calls us to take all presented ideas seriously and not take anything to heart without intentional forethought.

Sometimes we think there are no other ways to do something, but we must ask ourselves and try to change in order to help people.

"If you start off disagreeing with someone then change your mind, that can be hard to admit, but we think that should be celebrated."

This is hard and scary to do depending on the situation. I am starting to notice more when I change my mind and how it feels.

Thank you for taking your precious time to organize this handbook and make other effective altruists' lives easier! May all sentient beings be directly or indirectly benefited! 

Specifically under "Be aware of our privilege and the seriousness of these issues": I'm inclined to believe that this section encapsulates what this is about, and that's one of the core reasons why I got interested.

The handbook demonstrates the immense thrill that the program is about to offer. It is a great opportunity to take a journey to learn about effective altruism and start the practical work of applying what we discuss to our lives. It presents an opportunity to collectively delve into and contemplate the most significant global issues. This experience will foster the growth of essential analytical abilities while also presenting novel perspectives and concepts.

I would like to extend my warmest thanks to the EA team for all of their work, and for providing such excellent content.
