
Key takeaways

  • Apply for a 9-week online Longtermism Fellowship here by September 15th.
  • The course is based on a graduate course at LMU Munich, and we hope to decrease the uncertainty about your belief in longtermism.




 

EA Munich will run a Longtermism Fellowship for 8-9 consecutive weeks starting in the first week of October. For each meeting, you should plan at least 60 minutes of preparation time for the readings (mostly academic philosophy papers) and 90 minutes for the discussion itself. If you cannot attend a meeting for whatever reason, that's fine, but to receive the participation certificate you will have to submit a write-up for any sessions you miss (as in the Virtual Programs). Upon graduation, we will offer LinkedIn certificates to active participants. The course is held online in English and is based on a philosophical graduate course. The instructor of the Fellowship will be Bill D’Alessandro, a Postdoctoral Fellow in philosophy at Ludwig-Maximilians University (LMU) Munich. Besides moderating the discussion group, Bill will also offer office hours. We will most likely run one large session per week, with breakout rooms for discussion and Bill as moderator for clarifying questions.


 

Preliminary primary readings from the syllabus


 

Week 1: Introduction to Longtermism

  • Nick Beckstead, “The case for shaping the far future”


 

Week 2: Future people

  • Elizabeth Harman, “Can we harm and benefit in creating?”


 

Week 3: Population axiology

  • Hilary Greaves, “Population axiology”


 

Week 4: AI risk

  • Nick Bostrom, “Is the default outcome doom?”, “The control problem” (from Superintelligence)


 

Week 5: Cluelessness

  • Christian Tarsney, “The epistemic challenge to longtermism”


 

Week 6: Predicting and deciding

  • Andreas Mogensen and Will MacAskill, “The paralysis argument”


 

Week 7: If longtermism is true, then what?

  • Will MacAskill, “What to do” (from What We Owe the Future)


 

Week 8: Criticisms of longtermism

  • Phil Torres, “The case against longtermism”


 

Week 9: Participant presentations (optional)

  • ~30 min for presentation and discussion for each participant



 

Who should apply?

We expect this Fellowship to be most helpful for people interested in exploring and critically engaging with longtermism. Whether you believe longtermism is true could have an immense impact on your career or donation decisions. Since we know many EAs who are deeply unsure about longtermism, this Fellowship could have a massive impact if the discussions help you form an inside view of longtermism. There are no formal requirements for applying, such as a philosophy degree, but an (informal) interest in philosophy is desirable, since you will be reading and discussing academic papers. If you’ve read this far, we strongly recommend applying for the Longtermism Fellowship by September 15th. You can participate from anywhere, but be aware of the different time zones. Feel free to ask questions in the comments.




 

Apply as a Fellow by September 15th!



 

Thanks to Jaime and Bill, with whom I am jointly organizing the Fellowship.

Comments (10)



I'm surprised by the reading assigned for week 8. The article is very low quality and highly intellectually dishonest. I very much doubt that you would have included a defense of longtermism of comparably poor standards of quality and honesty (and none of the articles you did include are remotely comparable along those two dimensions), so it is perplexing to see it as part of your syllabus. This does a disservice to longtermism and also, and especially, to its critics, who deserve to be better represented.

We agree that the Torres piece is annoying and, to some extent, irresponsible and unfair. And it’s certainly true that there are more sober, thoughtful, penetrating criticisms of specific aspects of longtermism.

Our reasons for including the piece anyway are: (1) It’s probably the single most well-known attack on longtermism, and it’s helpful to know what kinds of objections have made it out into the world and gotten traction outside the EA bubble. (2) It bundles together many criticisms in one place, so we don’t have to read four or five different essays. (3) We think it’s healthy to hear outside-view criticisms that don’t shy away from denouncing the whole longtermist program. (4) Although Torres focuses too much on (particular interpretations of) Bostrom as a stand-in for longtermism generally, some of his worries about Bostromism do a good job raising tricky and essential questions.

We wish there were a piece that did a similar amount of useful things without so many flaws, but we don’t know of any!

That said, we think Scott Alexander’s recent piece on longtermism is pretty good, and maybe we should read that too.

I'm glad you're running this! It seems valuable to have reading programs focused on such an important question.

But I wonder whether a better goal for the program might be to help people to engage with the ideas and figure out their views either way, rather than to increase people's confidence that longtermism is correct[1].

I think that this is better even if your motivation is to increase the number of people who agree with longtermism - convincing people of specific conclusions seems worse for community epistemics, and might seem offputtingly dogmatic to some people.

  1. ^

    I understood "we hope to decrease the uncertainty about your belief in longtermism" to mean "we hope to increase your confidence in longtermism being correct", although it is a bit ambiguous.

You are right that we could have phrased it better. However, it is not about convincing people of specific conclusions but about engaging more deeply with the topic. Every week there will be open discussions, and the last week deals explicitly with criticisms of longtermism.

Sounds great!

Hi! I'm interested in applying, but I'm just a little concerned about the 6-hour difference between our timezones (I'm from the Philippines) since I'll be having in-person classes around that period. Wanted to ask around what time(s) the discussions would likely be taking place? Thank you!

This also depends on demand. You can apply now, and it's OK if you can't come because the discussion time is terrible for you. After looking at the timetable, I think afternoons are the best option at the moment. Sadly, that's pretty late in the Philippines :(

Just checked the timetable and I think some of the times are doable for me if the discussions will only be held once a week. :) I'll apply in the meantime. Thanks!

Hi, I'm interested in this topic but have never engaged with EA until now; your LMU newsletter email brought me into contact with this philosophy for the first time. What would be my chances of participating? Do you limit the number of participants and, if so, is it even worth applying? Kind regards

I think it's worth applying if you have some interest in philosophy, and we expect to accept most applicants. You can also look at EA Virtual Programs (https://www.effectivealtruism.org/virtual-programs), but I hope I have convinced you to apply :-)
