
For the exercise in this chapter, we will take some time to reflect on the ideas we’ve engaged with over the past chapters. Our goal is to take stock and to identify our concerns and uncertainties about EA ideas. 

What are your concerns about EA? (15 mins.)

We’ve covered a lot over the last few chapters: the philosophical foundations of effective altruism, how to compare causes and allocate resources, and a look at some top priority causes using the EA framework. 

What are your biggest questions, concerns, and criticisms based on what we’ve discussed so far? These can be about the EA framework/community, specific ideas or causes, or anything you’d like!

Reflecting back (45 mins.)

You’ve covered a lot so far! We hope you found it an interesting and enjoyable experience. There are lots of major considerations to take into account when trying to do the most good you can, and lots of ideas may have been new and unfamiliar to you. In this chapter we’d like you to reflect back on the program with a skeptical and curious mindset.

To recapitulate what we’ve covered:

Chapter 1: The Effectiveness mindset

Over the course of Chapters 1 and 2, we aim to introduce you to the core principles of effective altruism. We use global health interventions, which have been a key focus area for effective altruism, to illustrate these principles, partly because we have unusually good data for this cause area.

Chapter 2: Differences in impact

In Chapter 2 we continue to explore the core principles of effective altruism, particularly through the lens of global health interventions because they are especially concrete and well-studied. We focus on giving you tools to quantify and evaluate how much good an intervention can achieve; introduce expected value reasoning; and investigate differences in expected cost-effectiveness between interventions. 

Chapter 3: Radical empathy

The next section focuses on your own values and their practical implications. During Chapter 3 we explore whom our moral consideration should include. We focus especially on farmed animals as an important example of this question.

Chapter 4: Our final century?

In this chapter we’ll focus on existential risks: risks that threaten the destruction of humanity’s long-term potential. We’ll examine why existential risks might be a moral priority, and explore why existential risks are so neglected by society. We’ll also look into one of the major risks that we might face: a human-made pandemic, worse than COVID-19.

Chapter 5: What could the future hold? And why care? 

In this chapter we explore what the future might be like, and why it might matter. We’ll explore arguments for “longtermism” - the view that improving the long-term future is a key moral priority. This can bolster arguments for working on reducing some of the extinction risks that we covered in the last two chapters. We’ll also explore some views on what our future could look like, and why it might be pretty different from the present.

Chapter 6: Risks from artificial intelligence

Transformative artificial intelligence may well be developed this century. If it is, it may begin to make many significant decisions for us, and rapidly accelerate changes like economic growth. Are we set up to deal with this new technology safely?

Now, try answering the following questions:

What topics or ideas from the program do you most feel like you don’t understand?

What seems most confusing to you about each one? (Go back to that topic/idea and see if there are any further readings you can do that would help you address your uncertainties and explore any concerns. Do those readings. Consider writing notes on your confusion, stream-of-consciousness style.)

List one idea from the program that you found surprising at first, and which you now think more or less makes sense and is important. How could this idea be wrong? What’s the strongest case against it?

List one idea from the program that you found surprising at first, but which you now think probably isn’t right, or about which you have reservations. What’s the strongest case for this idea? What are your key hesitations about that case?


Answers

My criticisms of EA:

As a negative utilitarian, I’m bitter about all the X-risk prevention enthusiasts trying to stop me from pushing the big red button.

Jokes aside: I got very excited about EA when I learned about it. At some point I became aware of the excitement, and a concern popped up that it all sounds too good to be true, almost like a cult. I consider myself rather impressionable/easy to manipulate, so I’ve learned that when I feel very hyped about something, I should be healthily suspicious.

I'm grateful for the article earlier in the chapter that presented some good-faith criticism, and I agree with some of its points.

Some thoughts:

  • EA may feel alienating to people who aren't top-of-their-field, 150-IQ professionals. I very much relate to this post: https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people . Maybe it's for the better and results in higher talent density and a better reputation for the movement; maybe we're missing out on some skilled people, potential donors, or critical mass.
  • I'd love to see some statistics on why people leave the movement, and what the rate is. I suspect that moral perfectionism leading to self-neglect and burnout is an occupational hazard among EAs (like it is among animal advocates).
  • It's somewhat difficult to talk about EA with regular people. Look, there's this movement that can literally save the world from apocalypse (cultish), and we also believe that shrimp welfare is important (insane). On the other hand, maybe I shouldn't start my conversations like that.

Concerns about EA

The focus on creating as much value as possible in the expected value calculation is one I am not very sure about. I understand the concept, but as someone who works with data, I’d be more drawn to causes that have already been proven to have high value. Thus, speculative or fringe ideas would not be among my immediate concerns. I always feel that there are more pertinent issues, visible and measurable, that we should focus on right now. This is the same thought process behind my not being an advocate of longtermism: I believe we already have very many important causes right now that I’d rather we focus on.
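
As a rough illustration of the tension I have in mind (hypothetical numbers of my own, not from the handbook): expected value can favor a speculative cause even when it almost never pays off.

```python
# Hypothetical numbers illustrating the trade-off described above:
# a proven intervention with a near-certain payoff versus a speculative
# one whose expected value is higher but whose payoff almost never arrives.

proven = {"p_success": 0.95, "value_if_success": 100}
speculative = {"p_success": 0.001, "value_if_success": 200_000}

def expected_value(intervention: dict) -> float:
    """EV = probability of success times the value if it succeeds."""
    return intervention["p_success"] * intervention["value_if_success"]

print(expected_value(proven))       # 95.0
print(expected_value(speculative))  # 200.0
# Expected value favors the speculative cause, yet 999 times out of 1000
# it delivers nothing; that is exactly the intuition behind my preference
# for causes with measurable, proven impact.
```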

Another area I’m a bit skeptical about is the expansion of the empathy circle. For most of my life I’ve not had a lot of empathy towards non-human animals. I would not want them tortured, but I would not say that they deserve as much empathy as human beings. Again, I feel that our circle of empathy still has a long way to go in dealing with humans, so expanding it to other sentient beings (and stretching the idea as far as algorithms) is not a cause that holds much importance for me.

Finally, one of my biggest concerns is the low urgency placed on climate change in the community. When talking of the most important causes, I feel that climate change is under-ranked as a massive danger to the future of our world. I find it misleading to place speculative causes such as AI risks, catastrophic pandemics, nuclear war, and great power conflicts ahead of climate change in importance, because climate change is something we are currently experiencing, and it’s only bound to get worse. In my opinion, climate change is the top risk, as it promises to destabilise society within the next few decades, sooner than some other risks (the warming targets we have set are on course to be missed as soon as 2030). The more we continue sliding into worse climate scenarios, the more social issues will arise, and they may very possibly lead to the rise of all these other risks: AI/nuclear/biological wars triggered by the race for resources such as water, higher land away from the rising oceans, productive soil, and many more. Thus, I believe the cause prioritization should be reconsidered to account for these second-order effects of climate change.

You might want to review the idea of neglectedness to assess impact. The idea isn't necessarily that climate change is less important than other causes, just that there are already a ton of resources being put into work on climate change, so adding more resources there will have less marginal impact.

This article addresses neglectedness among other things.

https://forum.effectivealtruism.org/s/x3KXkiAQ6NH8WLbkW/p/ER4gAtS5LAx2T3Y98

I enjoyed this well-thought-out EA handbook, covering as many EA topics as one can think of, down to the details, and providing practical links and solutions to support it.

I think it would be a wonderful idea to create two further introductory EA handbooks: one for kids and teens (ages 12-18), and one in everyday, informal English for adults who would like to get involved in EA but are intimidated by the complexity of the language used throughout (e.g. me; I confess I skimmed a lot of information to get this far!).

These would introduce the ideas with the most condensed and relevant information. I see comments below suggesting the same thing.

Maybe the quickest solution is to have an AI read this well-put-together EA handbook and create the two versions (perhaps more, for different types or categories of people), which you could then proofread to check they are to your liking!

But thank you for collecting the most important material in the first place. It is a wonderful start!

The Introduction to Effective Altruism course is well-designed and accessible to participants from all over the world. Its interface is optimized for low data usage, making it easily accessible on mobile phones and other low-spec devices. The content is clear and understandable for anyone with basic reading and writing skills. Additionally, it is well-summarized and enriched with engaging facts, making the learning experience both informative and enjoyable.

Now, on criticisms: EA Everywhere on Slack feels overwhelmingly centered on Europe/America. It is filled with opportunities that are primarily accessible to people from Europe, along with event announcements that are often restricted to the region. For someone like me, coming from Uganda and seeking a like-minded community in which to grow and develop my skills while staying committed to my country, this exclusion is disheartening. It creates a sense of isolation and limitation.

A more inclusive approach is needed: EA opportunities should be accessible to everyone, regardless of location. There should be strong support structures to uplift individuals from underrepresented and less developed regions. After all, the essence of EA is to find the most effective ways to do good, using evidence. That mission should extend to creating equal opportunities for all who are worthy of it.

Many of the examples presented in the course are heavily focused on Europe. However, if we are truly committed to solving global problems, we must incorporate diverse contexts from different parts of the world. This exposure would help participants understand a broader range of challenges and design interventions that are effective and scalable across various regions. Perhaps an example from Africa could provide valuable solutions in Europe, just as a European example might offer insights applicable to Africa. By incorporating diverse perspectives, we can foster cross-regional learning and design interventions that are more adaptable and effective on a global scale. Africa, for instance, offers valuable insights on moral philosophy and ethical considerations given its diversity, yet it is barely mentioned. The course gives the impression that all interventions are meant for the U.S. or Europe, overlooking the rich perspectives and pressing issues faced in other parts of the world. A more inclusive approach would ensure that effective altruism remains truly global in its impact.

All the personalities highlighted in the course are white and from either Europe or America. (This is not about race, but rather an observation on representation.) Have Black individuals or people from outside Europe and America not contributed to this movement? Have their efforts gone unrecognized, or is this course unintentionally reinforcing the idea that intelligence and philanthropy are primarily Western traits? If effective altruism is truly a global movement, it should acknowledge and celebrate contributions from diverse backgrounds. Representation matters—not just for inclusivity, but for inspiring a broader audience to engage in meaningful change.

Overall, the course feels like it was designed primarily for a European/American audience, with supporting structures that task them with finding solutions to the world's biggest problems. However, these "global" challenges seem to focus mainly on issues affecting the developed world, which, while important, represent only a small fraction of the broader global landscape. This approach risks overlooking critical problems faced by the majority of the world's population, and it reinforces a narrow view that the world's most pressing challenges are only those identified in first-world countries. Try putting this in the ITN framework, but from the perspective of someone coming from the underdeveloped world. I acknowledge that my perspective on this has been shaped by a neartermist lens rather than a longtermist one.

Disclaimer: These are simply my observations, and there is a considerable possibility that I may be wrong. Please take them with a pinch of salt.

I have no criticisms so far, but my concerns relate to the topics and reading materials. Some of the reading materials are too detailed; maybe some of them should be summarized.

Part 2

At first I had a problem understanding radical empathy (the animal welfare part specifically), but after doing some readings I have started to get a clearer understanding.

The artificial intelligence ideas were surprising, as it was the first time I was encountering the topic. But after the readings and the session, it now makes a lot of sense, and I think it is important to learn and research more about AI: its advantages, its disadvantages, and how it will shape the future or destroy it.

The idea that AI poses a bigger risk than climate change doesn't sit well with me. I feel that for AI we can understand and learn how to mitigate the risks, but for climate change it is more challenging, because it affects all aspects of human well-being: health, animals and plants; it destroys places, and so forth. Climate change should be given first priority, and we should be worried about it because we are already seeing climate crises happening. For such crises, we had better work on prevention rather than waiting for them to happen and then responding.

I'm going to share my answers. Please keep in mind that they might have already been tackled by other people elsewhere. In any case, these are the critiques I have so far.

Superficial references problem:
The handbook almost never recommends books on the subjects (except those written by MacAskill, Ord, Singer, etc.), but instead tends to recommend blog posts, Wikipedia, other EA-aligned webpages, or, at best, philosophy papers. In my opinion, there could be recommendations of textbooks on cost-effectiveness analysis, cause prioritization, economics, ethics, statistics, cognitive biases, etc. Since webpages and standalone papers are not nearly as good as textbooks for learning a subject, I believe recommendations of books are definitely warranted; otherwise we can get the impression that all the theoretical background EAs have consists of those shallow references.

The neglectedness problem:
First, it's not clear how to distinguish between these two scenarios: (1) The cause is unfairly neglected, that is, much more neglected than it ought to be, considering its scale and tractability; and (2) The cause is neglected because it's really a bad cause to work for (due, for instance, to low scale or low tractability), in which case it being neglected is actually a sign that we shouldn't work on it. In order to help us sort out what's the underlying scenario, I think we should see whether other institutions/researchers have attempted to work on the issue in the past, and not just look at the absolute numbers of funding/researchers that are going to that cause in the present. I don't remember seeing this historical analysis being done. And maybe we should employ other strategies besides this historical analysis to sort things out.

Second, there's another shortcoming of just assessing neglectedness by looking at the amount of dollars being poured into a cause. People might be working to solve a problem and pouring lots of money into it using an inefficient method. For instance, suppose that we lived in a world where hundreds of billions of dollars were being spent on leafletting about the animal cause, and suppose that it is the case that leafletting is a very inefficient method to promote concrete changes to animal well-being. Then even though there are hundreds of billions of dollars being put into the Animal Cause, there would still be a low-hanging fruit, if we assume that, for instance, corporate campaigns are a hundred times more efficient than leafletting. So just looking at the sheer number of donated dollars to the Animal Welfare cause could be very misleading because even though it's not "numerically" neglected, we're not making use of the most effective methods.

Third, I'm not convinced that the curve of improvement as a function of funding/research has a log shape (or any shape that implies diminishing marginal returns).

Fourth, even if it has a log shape, in order to infer that an additional person/dollar of funding would have a greater impact on cause A compared to cause B, we would need to know the parameters of the log curve for cause A and cause B, which we don't. For example, see www.desmos.com/calculator/ohwiagg7zi. Here we have two causes with log curves but different parameters (hence, different shapes), and we can see that even though cause red is receiving more funding than cause green, the marginal return of cause red is still higher than the marginal return of cause green. This makes comparisons between different causes with regard to neglectedness very hard, if not impossible.
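
To make this concrete, here is a minimal sketch under the same assumption (log-shaped returns, with made-up parameters rather than the exact ones from the Desmos link). If a cause's returns are f(x) = a·ln(1 + x) for funding x, the marginal return is f'(x) = a/(1 + x), so the comparison depends on the unknown parameter a, not just on how much funding each cause currently receives:

```python
# A minimal sketch of the point above, assuming log-shaped returns
# f(x) = a * ln(1 + x), whose marginal return is f'(x) = a / (1 + x).
# The parameters below are made up for illustration.

def marginal_return(a: float, funding: float) -> float:
    """Derivative of a * log(1 + funding): the value of one extra dollar."""
    return a / (1.0 + funding)

# Cause "red" has a steeper curve (larger a) and already receives
# ten times more funding than cause "green".
a_red, funding_red = 10.0, 50.0
a_green, funding_green = 1.0, 5.0

print(marginal_return(a_red, funding_red))      # ~0.196
print(marginal_return(a_green, funding_green))  # ~0.167
# Despite being far better funded, "red" still offers the higher marginal
# return, so funding levels alone (i.e. neglectedness) cannot settle
# which cause an extra dollar helps more; we would need to know a.
```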

Blindspots: By "blindspots", I mean arguments that I've never seen being raised in the given discussions, though they seem to be crucial.

[The logic of the larder] blindspot in the Animal Welfare discussion: It's not crystal clear what the net value of the lives of each factory-farmed species is. For instance, if some species have net-positive lives, then interventions that aim to reduce the number of factory-farmed animals will cause a loss of total value. Another thing to consider is that, because of the crops grown to sustain factory-farmed animals, they have a negative impact on the number of wild animals; and if we consider that the lives of wild animals are worse than the lives of some factory-farmed animals, then abolishing factory farms will have this other source of disvalue as well, by creating lives whose quality is even worse.

[Intelligence restart] blindspot in the Extinction Risk discussion: If only humanity goes extinct, couldn't some other species as intelligent as (or even more intelligent than) humans eventually evolve from other animals, say, from the surviving primates?

Going by the foundational exposition that introduced me to the EA community, I must state that I appreciate its basis in reasoning and facts. These remain the foundational tools for doing good by prioritizing the matters that will yield the most benefit, and they are one of the striking features of EA that attracted me to joining this community. My concerns have to do with certain moral truths, which can be objective or subjectively inclined, and are therefore relative to location, time, and setting: how can the EA community set standards for handling such issues?

  • My criticism so far, from what we've discussed, concerns the approach to climate issues: the EA community gives them only fair attention, when they should be given top priority, despite the global spending and engagement on them by CSOs and several other institutions. Finally, what I like about EA is its incorporation of individuals from various categories of professions; I see strength harnessed in that diversity. I subscribe to its underpinning ideals because they help balance inequalities and give voice to the marginalized, while respectfully engaging all and sundry.
  • Based on the materials and study engagements with our facilitator, I really understood what we discussed and the studies I undertook, despite grey areas, which were subsequently made clear. After each topic's discussions and engagements, I will gladly say that, for me personally, enlightenment was attained: even on issues I might not personally be comfortable with, the facts, reasoning, and discussions ushered in awareness.
  • One thing I found surprising was the issue of farmed animals, which the EA community is really engaging on. Based on in-depth studies, I came to see how it makes sense, alongside the need to use plant-based sources as alternative protein. The point is, given my background as an African, I initially viewed it as a waste: in Africa, the bush is one of our food sources, where we kill different species of animals for food, and I felt that animal farming would help minimize such hunting. But as it stands, the evidence I came across in the studies about the multiplying effects of animal farming on both the environment and the animals themselves changed my perception. One issue that was not surprising, but about which I have reservations, is the same issue of farmed animals stated above: my hesitation comes from reasoning about it from an African perspective, in which animals are viewed simply as food. Nonetheless, as stated above, the exposition via evidence and reasoning in the EA program, alongside other materials, has given me a different perception and opened me to a better way of viewing the issue: these animals need to be protected.
Comments

I feel questions 1 and 2 are essentially the same, with the second having a more partitioned approach. Did I overlook some important difference between them?

Hi! Just want to point out a typo: "This chapter we’d like you (...)". Thanks! :-)
