For the exercise in this chapter, we will take some time to reflect on the ideas we’ve engaged with over the past chapters. Our goal is to take stock and to identify our concerns and uncertainties about EA ideas.
What are your concerns about EA? (15 mins.)
We’ve covered a lot over the last few chapters: the philosophical foundations of effective altruism, how to compare causes and allocate resources, and a look at some top priority causes using the EA framework.
What are your biggest questions, concerns, and criticisms based on what we’ve discussed so far? These can be about the EA framework/community, specific ideas or causes, or anything you’d like!
Reflecting back (45 mins.)
You’ve covered a lot so far! We hope you found it an interesting and enjoyable experience. There are lots of major considerations to take into account when trying to do the most good you can, and lots of ideas may have been new and unfamiliar to you. In this chapter we’d like you to reflect back on the program with a skeptical and curious mindset.
To recapitulate what we’ve covered:
Chapter 1: The Effectiveness mindset
Over the course of Chapters 1 and 2, we aim to introduce you to the core principles of effective altruism. We illustrate these principles using global health interventions, a key focus area for effective altruism, partly because we have unusually good data in this cause area.
Chapter 2: Differences in impact
In Chapter 2 we continue to explore the core principles of effective altruism, particularly through the lens of global health interventions because they are especially concrete and well-studied. We focus on giving you tools to quantify and evaluate how much good an intervention can achieve; introduce expected value reasoning; and investigate differences in expected cost-effectiveness between interventions.
Chapter 3: Radical empathy
The next section focuses on your own values and their practical implications. During Chapter 3 we explore who our moral consideration should include. We focus especially on farmed animals as an important example of this question.
Chapter 4: Our final century?
In this chapter we’ll focus on existential risks: risks that threaten the destruction of humanity’s long-term potential. We’ll examine why existential risks might be a moral priority, and explore why existential risks are so neglected by society. We’ll also look into one of the major risks that we might face: a human-made pandemic, worse than COVID-19.
Chapter 5: What could the future hold? And why care?
In this chapter we explore what the future might be like, and why it might matter. We'll explore arguments for "longtermism" - the view that improving the long-term future is a key moral priority. This can bolster arguments for working on reducing some of the extinction risks that we covered in the previous chapters. We'll also explore some views on what our future could look like, and why it might be pretty different from the present.
Chapter 6: Risks from artificial intelligence
Transformative artificial intelligence may well be developed this century. If it is, it may begin to make many significant decisions for us, and rapidly accelerate changes like economic growth. Are we set up to deal with this new technology safely?
Now, try answering the following questions:
What topics or ideas from the program do you most feel like you don’t understand?
What seems most confusing to you about each one? (Go back to that topic/idea and see if there are any further readings you can do that would help you address your uncertainties and explore any concerns. Do those readings. Consider writing notes on your confusion, stream-of-consciousness style.)
List one idea from the program that you found surprising at first, but which you now think more or less makes sense and is important. How could this idea be wrong? What's the strongest case against it?
List one idea from the program that you found surprising at first, and which you now think probably isn't right, or about which you have reservations. What's the strongest case for this idea? What are your key hesitations about that case?
Concerns about EA
The focus on creating as much value as possible via expected value calculations is one I am not very sure about. I understand the concept, but as someone who works with data, I would be more drawn to causes that have already been proven to deliver high value. Speculative, fringe causes would therefore not be among my immediate concerns; I always feel there are more pertinent issues, visible and measurable, that we should focus on right now. The same thought process lies behind my not being an advocate of longtermism: I believe we already have very many important causes today that I would rather we focus on.
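For concreteness, the tension here can be made explicit with a toy expected-value comparison. All numbers below are hypothetical, chosen only to illustrate why EV reasoning can end up favouring speculative causes over well-evidenced ones:

```python
def expected_value(probability_of_success, value_if_successful):
    """Expected value = probability of success times the value achieved if it succeeds."""
    return probability_of_success * value_if_successful

# A well-evidenced intervention: near-certain to work, moderate value.
# (Hypothetical numbers, purely illustrative.)
proven = expected_value(0.95, 1_000)

# A speculative cause: small chance of success, but very large value if it works.
speculative = expected_value(0.01, 1_000_000)

print(proven)       # 950.0
print(speculative)  # 10000.0
```

Under pure expected-value reasoning the speculative cause wins, even though the proven one is far more certain; the disagreement above is partly about how much weight to put on that certainty.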
Another area I'm a bit skeptical about is the expansion of the circle of empathy. For most of my life I have not had much empathy towards non-human animals. I would not want them tortured, but I would not say they deserve as much empathy as human beings. Again, I feel our circle of empathy still has a long way to go in how it treats humans, so expanding it to other sentient beings (and stretching the idea as far as algorithms) is not a cause that holds much importance for me.
Finally, one of my biggest concerns is the lack of urgency placed on climate change in the community. When talking about the most important causes, I feel that climate change is under-ranked as a massive danger to the future of our world. I find it misleading to place speculative causes such as AI risks, catastrophic pandemics, nuclear war, and great power conflicts ahead of climate change in importance, because climate change is something we are experiencing right now and it is only bound to get worse. In my opinion, climate change is the top risk: it promises to destabilise society much sooner than some of these other risks (the warming targets we have set are on course to be missed as soon as 2030). The further we slide into worse climate scenarios, the more social issues will arise, and these could very possibly give rise to all those other risks: AI, nuclear, or biological conflict triggered by the race for resources such as water, higher land away from the rising oceans, productive soil, and more. I therefore believe the cause prioritisation should be reconsidered to account for these second-order effects of climate change.
You might want to review the idea of neglectedness when assessing impact. The idea isn't necessarily that climate change is less important than other causes, just that a ton of resources are already going into work on climate change, so adding more resources there will have less marginal impact.
This article addresses neglectedness among other things.
https://forum.effectivealtruism.org/s/x3KXkiAQ6NH8WLbkW/p/ER4gAtS5LAx2T3Y98