
TL;DR: We’re[1] not really comfortable calling ourselves “EAs.” Moreover, we know that this is true for a lot of people in the EA community, the eclectic group of people trying to make the world better who happen to use the Forum. So we’re renaming the “Effective Altruism Forum” the “EA-Adjacent Forum” (“EA Forum” for short).

We have some deep disagreements with EA

Look, we run a forum focused on discussions about how to do the most good we can, and we work at the "Centre for Effective Altruism," but we're not really members of the EA community. We have some deep disagreements with many parts of the movement.[2]

(We don’t even always agree with each other about our disagreements, we don’t always think that the EA thing is the right thing (see also), and we even hosted an EA criticism contest to surface disagreements.)

It’s not just us

We know that others who use the Forum also prefer to call themselves “EA-adjacent.” We’re also somewhat worried that anything that someone posts on the EA Forum can be interpreted as representative of effective altruism. 

We think it’s important to preserve nuance and be clear about the facts listed here, so we’re rebranding. 

Impact of the rebrand, next steps

It’s already the case that “EA” often stands for “EA-Adjacent,” and we don’t think the rebrand will change much in terms of how the Forum functions.

As always, we’d love to hear your feedback. You can comment here or contact us directly.

(Thanks to [unnamed people] for suggesting this rebrand. We’d credit them directly, but some of them prefer to not associate so closely with EA.)

The new logo / header text
  1. ^

    The EA-Adjacent Forum team. Please note that not all teammates agree with everything written here (probably). 

  2. ^

    Some example disagreements: 

    1) We disagree with a lot of people in the EA community about styling and font choices. 
    2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.
    3) We disagree with CEA about the spelling of “Centre” in “Centre for Effective Altruism.” It should be spelled “center” as Noah Webster intended.
    4) Many EAs appear to focus on scope sensitivity, but we think scope specificity is more neglected. 
    5) We think QALYs should be converted to their metric-system equivalent, such that 1 metric QALY is the amount of quality-adjusted life that can be supported by 1 joule of energy within a 1-cubic-meter box over 1 year at 0 degrees Celsius.

Comments



I'm sorry, but I consider myself EA-adjacent-adjacent. 

Isn't that a bit self-aggrandising? I prefer "aspiring EA-adjacent"

This is great, thanks for the change. As someone who aspires to use evidence and careful reasoning to determine how to best use my altruistic resources, I sometimes get uncomfortable when people call me an effective altruist.

2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.

Hahahaha 🤦🏻‍♀️

I think @Ollie Etherington will be personally offended that we posted this.

I think the question of which decision theory to use (functional or object oriented) is moot, since I don't have free will anyway.

I use copilot to make decisions

It feels like the more I proclaim myself an EA the less others want to. Fortunately, I don't think much about correlation, so I'm not going to worry about it.

On behalf of all fools, I really appreciate the "April Fools' Day" tag.

[anonymous]

To my mind, “EA Forum” will canonically stand for “EA-Adjacent Forum” from now on. (This was really good.)
