Effective altruism (EA) is an ongoing project to find the best ways to do good, and put them into practice.
This series of articles will introduce you to some of the core thinking tools behind effective altruism, share some of the arguments about which global problems are most pressing, and help you to reflect on how you personally can contribute.
The handbook is structured into eight chapters, with exercises to help you reflect on your reading throughout. If you'd like to discuss these ideas with other people who are interested in improving the lives of others, you may be interested in our free introductory EA program, which is based on this handbook.
If you want to use your time or money to help others, you probably want to help as many people as you can. But your time and money are limited, so you can have a much bigger impact if you focus on the interventions that help more people rather than fewer.
But finding such interventions is incredibly difficult: it requires a "scout mindset" - seeking the truth, rather than defending our current ideas.
Around 700 million people still live in poverty, mostly in low-income countries. Efforts to help them - through policy reform, cash transfers, or the provision of health services - can be incredibly effective.
Alongside investigating this issue, we also discuss how much more effective some interventions are than others, and we introduce a simple tool for estimating important figures.
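The estimating tool isn't named here, but estimates like these are often made as back-of-the-envelope (Fermi) calculations: breaking an unknown figure into rough factors you can guess at individually. Here is a minimal sketch; the scenario and every number in it are invented for illustration, not taken from the handbook.

```python
# Back-of-the-envelope (Fermi) estimate: break an unknown figure into
# rough factors you can estimate individually. All numbers are illustrative.

def fermi_estimate(numerators, denominators):
    """Multiply the rough factors together, then divide by the denominators."""
    result = 1.0
    for n in numerators:
        result *= n
    for d in denominators:
        result /= d
    return result

# Hypothetical question: how many lives might a $1M bednet campaign save?
budget = 1_000_000      # dollars available (assumed)
cost_per_net = 5        # rough dollars per net delivered (assumed)
nets_per_life = 500     # very rough nets distributed per life saved (assumed)

estimate = fermi_estimate([budget], [cost_per_net, nets_per_life])
print(estimate)  # 400.0
```

Even when each factor is only right to within a factor of two or three, combining them like this usually gets you within an order of magnitude, which is often enough to compare interventions.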
Should we care about non-human animals? We'll show how it can be important to care impartially, rather than ignoring weird topics or unusual beneficiaries.
We'll also cover expected value theory (which helps when we're uncertain about the impact of an intervention), and give some ideas for how we could improve the lives of animals that suffer in factory farms.
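The core of expected value theory fits in a few lines of arithmetic: weight each possible outcome by its probability and sum. A minimal sketch follows; the interventions, probabilities, and impact figures are invented for illustration.

```python
# Expected value: the probability-weighted average of an action's outcomes.
# All probabilities and impact figures below are made up for illustration.

def expected_value(outcomes):
    """Sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# A hypothetical risky intervention: 25% chance of helping 400 animals,
# 75% chance of helping none.
risky = expected_value([(0.25, 400), (0.75, 0)])

# A hypothetical safe alternative: certain to help 50 animals.
safe = expected_value([(1.0, 50)])

print(risky)  # 100.0
print(safe)   # 50.0
```

Although the risky intervention usually achieves nothing, it helps twice as many animals in expectation, which is one reason expected value is useful when we're uncertain about an intervention's impact.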
Humanity appears to face existential risks: risks that we'll destroy our long-term potential. We’ll examine why existential risks might be a moral priority, and explore why they are so neglected by society. We’ll also look into one of the major risks that we might face: a human-made pandemic, worse than COVID-19.
Alongside this, we'll introduce you to the concepts of neglectedness and marginal thinking, and explore whether you could lose all of your impact by missing one crucial consideration.
"Longtermism" is the view that improving the long-term future is a key moral priority of our time. This can bolster arguments for working on reducing some of the extinction risks that we covered in the last section.
We’ll also explore some views on what our future could look like, and why it might be pretty different from the present. And we'll introduce forecasting: a set of methods for improving and learning from our attempts to predict the future.
Transformative artificial intelligence may well be developed this century. If it is, it may begin to make many significant decisions for us, and rapidly accelerate changes like economic growth. Are we set up to deal with this new technology safely?
As we try to think through these and other difficult questions, how should we update our views? Bayes' rule is a tool designed for exactly this: it tells us how to revise our beliefs in the light of new evidence.
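Bayes' rule itself can be stated and applied in a few lines: the posterior probability of a hypothesis H given evidence E is P(E|H)·P(H) / P(E). Here is a minimal sketch using the classic disease-test example; the scenario and its numbers are illustrative assumptions, not from the handbook.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# Illustrative scenario: updating on a positive result from an imperfect test.

def bayes_posterior(prior, likelihood, false_positive_rate):
    """Posterior probability of H after observing a positive result E.

    P(E) is expanded via the law of total probability:
    P(E) = P(E|H)P(H) + P(E|not H)P(not H).
    """
    p_e = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_e

# 1% base rate, 95% true-positive rate, 5% false-positive rate (all assumed).
posterior = bayes_posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.161
```

The result is often surprising: even after a positive test, the hypothesis is still more likely false than true, because the low prior dominates. That interplay between priors and evidence is exactly what Bayes' rule makes explicit.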
It's really important to think for yourself and reflect on the arguments you've heard in previous weeks: you might uncover places where you disagree, or mistakes in the reasoning. And even if you don't, you'll probably understand the ideas more deeply once you've thought about their weakest points.
So this week, we encourage you to take some time to reflect on your confusions and concerns about the ideas so far, and to read up on some of the strongest counterarguments.
In this final section, we hope to help you apply the principles of effective altruism to your own life and career.
You probably won’t be ready to make a major change just yet - you might want to read and reflect more before you do that. So instead we’ll help you to think through some of your key uncertainties, generate tests for those uncertainties, and plan out how you can make sure you follow through on your intentions.