Since CEA is looking for a new executive director and they are open to that person pursuing a dramatically different strategy, now seems like a good time to suggest possible strategic directions for the candidates to consider (I’d love to see posts suggesting a variety of directional shifts).

I suggest CEA increase its focus on helping altruists determine how to have the greatest impact in a world where AI capabilities are progressing rapidly. I’m proposing a discussion that would be much broader than AI safety, for instance considering short-term interventions like malaria nets vs. long-term interventions like economic growth, or developing scalable AI products for good. However, we should not assume current progress will continue unabated; we should also discuss the possibility that we are in an ‘AI bubble’[1].

This would be an additional stream of activity; I’m definitely not suggesting a complete pivot. For example, virtual programs could offer a course on this alongside Intro and In-Depth. Local groups would mostly continue as they are, but new content on these topics would be made available for them to use if they wished.

Other activities that could contribute to such an agenda would include: talks/discussions at EA conferences, online debates, essay contests, and/or training movement builders on how to assist others working through these questions.

Another alternative: CEA could pick a “yearly theme”, with advanced AI technologies as the theme for just one year.

Topics to explore include:

  • The AI Landscape: The current and potential future state of AI, including capabilities, recent progress, timelines, and whether we might just be in a ‘bubble’.
  • AI X-risks: Could superintelligence pose an existential threat? If so, how likely is it, and how can we help (technical, policy, community-building, etc.)? What are the main arguments of skeptics?
  • Mindcrimes: Is AI likely to be sentient? If so, could running certain AI systems constitute a mindcrime?
  • AI and other X-risks: Might advanced AI aid or worsen other existential risks (bio, nano)?
  • Responsible AI: Even if AI doesn't pose an x-risk, how well we transition to AI could be the main determinant of our future. How can we responsibly manage this transition?
  • AI and Democracy: How worried should we be about people using AI to undermine democracy or spread misinformation?
  • AI and Global Poverty: How does the advance of AI affect our global poverty efforts? Does it render them irrelevant? Should we be integrating AI into our efforts? Should we be focusing more on short-term interventions rather than long-term interventions, as there is less chance of them being made irrelevant?
  • Animal rights: How do increasing AI capabilities affect the development of alternative proteins? Should the animal arm of EA be more focused on ensuring that the transition to advanced AI goes well for animals (such as through moral circle expansion) rather than focusing on the near term? What opportunities could advanced AI technologies open up for relieving wild animal suffering?
  • Applications of AI: Can we develop AI to improve education, wisdom, and mental health? Do any such applications have unforeseen consequences?

Why focus here?

  • There's been a rapid advance in AI capabilities recently, with little sign of slowing. Meanwhile, AI is becoming an increasing focus among people in EA, with many considering an AI safety pivot.
  • My proposal provides a way for CEA to adapt to these changes whilst also running activities that would be relevant to most of the community. This matters because, while much of the community is sold on AI x-risks, a significant portion isn’t, so it’s important to help people who are skeptical of x-risk arguments figure out the broader implications of AI.
  • There's a lot of interest in AI in society more generally. This content may attract new members who would not want to do a general EA fellowship.
  • Facilitating discussions on new considerations is one way to prevent a community from intellectually stagnating[2].
  1. ^

    I strongly disagree with this, but it is a discussion worth having.

  2. ^

    This is another reason why I am suggesting exploring a theme that would include a broad range of discussions rather than just focusing on x-risk.

Comments



Could you say a little more about why these functions would be best housed at CEA specifically? I think there are fairly clear costs to a central organization moving strongly in the direction of a single cause area, so I think it's important to identify a case for them being at CEA specifically.

I guess I find the framing of this question somewhat strange because my proposal suggests that there should be discussions about how AI intersects with various cause areas. That is, it is not the same as an AI safety pivot from my perspective (though it may result in more people pursuing AI safety). So I can try to answer your question as written as best I can, but I might be able to answer better if you tried reframing your question first.

To add some more backstory: what CEA does has an effect on people's sense of belonging -- such as the extent to which people feel their work and interests are valued by the community. For instance, seeing a meaningful amount of subject-matter-specific content related to your cause area and the kind of work you do improves that sense of belonging. Detracting from it would have negative effects on recruiting and retention, and thus overall impact. So there's an added cost to anything that would detract from non-AI people's sense of belonging, a cost that wouldn't be borne if you stood up the new AI-focused initiative in a Center for Effective AI Epistemics or something.

I think your proposal would incur some costs to belonging. To take your proposal for GH&D as an example:

AI and Global Poverty: How does the advance of AI affect our global poverty efforts? Does it render them irrelevant? Should we be integrating AI into our efforts? Should we be focusing more on short-term interventions rather than long-term interventions, as there is less chance of them being made irrelevant?

I don't think suggesting that the type of work people are predominantly doing in GH&D may be "irrelevant" is likely to promote their sense of belonging. I also don't see much evidence (in terms of grantmaking activity, Forum posts, etc.) that the types of questions you're suggesting are what people in global health & development currently find important. That's not to say they aren't worth asking, but they don't seem to be representative of the GH&D community.

So I do think a focus shift by CEA would have some costs for other parts of the EA movement. (There would, of course, be opportunity costs and the like as well.) So the question is: what is the argument that CEA is particularly well-suited to do the work you're proposing? You've made, on the whole, a good case for why this work would be helpful, but I don't think you've clearly linked it to CEA's place in the ecosystem.

Your suggestion that there could be rotating yearly topics is interesting, but producing new high-quality content each year would consume resources. And if there were a cycle (year of AI, of infectious diseases, of the chicken, of meta?) the mostly fixed costs of content creation and updating would be spread out over fewer uses. Plus someone who was interested in one of the topics might have to wait three years for the repeat. So I'd have the same kind of question: is the advantage of having CEA specifically run this work worth the increased resource utilization and inability to access the content 75% of the time?

I agree that some of the points I listed could have been better framed.

In terms of why CEA, I guess I see it as a core function of CEA to try to ensure that members of the EA community have access to the information they need in order to make up their own minds about how to have as great an impact as possible. I don't necessarily think that CEA should always follow; I think it's okay for it to lead as well. But if it were to run a course and most people who took it didn't find that it was helping them develop their views, then I would see that as a failure.

Regarding belonging, I don't see that as the primary thing that CEA should optimise for, particularly when it comes at the expense of epistemics. It's worth thinking through how to frame things in order to ensure as much belonging as possible, which is part of why I was suggesting a course that would cover considerations relevant to people from various cause areas, but it isn't the number one priority.

I agree that if they went with the plan of yearly topics (I wasn't suggesting a rotation, but rather that the topic would be different every year unless exceptional circumstances caused us to repeat a topic) it would require significant resources. On the other hand, I believe that it would be well worth it in order to significantly reduce the chance of intellectual stagnation within the community.

It's certainly possible that a version of this content could address the belongingness concerns I identified.

About belongingness more generally: when the question of splitting up EA (e.g., into neartermist and longtermist branches) has arisen, people have generally been opposed. But I think a consequence of that position is that certain central organizations need to reflect a rough balance of the different cause areas and neartermist/longtermist perspectives within the movement. Stated differently, I don't think it is plausible for both of the following conditions to be true: "CEA is a broad-based organization for promoting effective altruism" and "CEA clearly gives the impression that certain key methodologies, cause areas, or philosophical views that are prominent within the community are second-rate." There are arguments for giving up the first statement to free CEA from the constraints it imposes, but doing so would impose costs. In my view, any argument that "CEA should do X," where X creates risks of causing disunity, needs to acknowledge the downsides and explain why the marginal benefit of housing the work at CEA outweighs them.

As far as epistemics, I tend to prefer decentralized epistemic institutions to the extent practicable. Maybe that's a bias from my professional training (as a lawyer), but in general I'd rather have a robust epistemic marketplace in which almost everyone can promote their ideas without having to compromise on belongingness grounds, rather than setting up CEA (or any similar organization) as promoter of views that do not reflect broad community consensus. EAs, EA-adjacent people, and EA-interested people can evaluate epistemic claims for themselves, and centralizing epistemics creates the usual risks of any system with a single point of failure.

I think a case should be made for important intersections with the various cause areas before CEA commits to pushing potentially distracting discussion onto others, and I don’t think CEA is the right organization to do this research, because they don't focus on research. I think Open Phil and/or Rethink Priorities (maybe others, too) could do this kind of research, because they research AI safety as well as global health and development and animal welfare.

More research would definitely be useful to help us make these decisions and I suppose it would be hard to run such a course without high quality content to include in it. So it might be better to focus on the other ideas for now.

One point I want to raise though: some of these discussions seem like discussions that should be happening in EA anyway and I don’t think we should only start having these discussions once we have all of the answers.

It’s also less of a commitment if CEA were only to adopt it as the first “yearly theme”.

Rather than a course, a single (maybe optional) meeting in a standard EA course seems like it would make more sense to me at this point. Group discussion events or talks at EAG(x) could also make sense.

I'd probably lean towards suggesting that CEA leaves the standard EA course as is and just organises some talks/discussions online or at EAGs, or creates some materials for local EA groups to potentially run such a discussion themselves.

I think CEA needs to get behind the push for a global moratorium on AGI. Everything else is downstream of that (i.e. without such a moratorium there likely won't even be a world to do good in, or any sentient beings to help.)

I think the push for a global moratorium (which I agree with) would be better served by having CEA take as small a role as possible.

From the perspective of the people you need to actually convince -- politicians and voters -- the CEA and EVF brands are still way too linked to SBF right now. That's an easy attack surface to gift to those whose economic and other interests would be opposed to a moratorium. "One of the major organizations behind this is that group that SBF was treasurer of a few years ago / that group that had two of the five board members heavily involved with giving away SBF's fraudulent money" may not be a convincing argument against a moratorium to the readers of this forum, but I think it would carry significant weight among some important groups.

Ok, fair point. Maybe OpenPhil then? Or Rethink Priorities? I think in general the EA community and its leadership are asleep at the wheel here. We're in the midst of an unprecedented global emergency and the stakes couldn't be higher, yet there is very little movement apart from amongst a rag-tag bunch of the rank and file (AGI Moratorium HQ Slack -- please join if you want to help).
