Just a general note, I think adding some framing of the piece, maybe key quotes, and perhaps your own thoughts as well would improve this from a bare link-post? As for the post itself:

It seems Bregman views EA as:

a misguided movement that sought to weaponize the country’s capitalist engines to protect the planet and the human race

Not really sure how donating ~10% of my income to Global Health and Animal Welfare charities matches that framework tbqh. But yeah, 'weaponize' is highly aggressive language here; if you take it out, there's not much wrong with it. Maybe Rutger or the interviewer think Capitalism is inherently bad or something?

effective altruism encourages talented, ambitious young people to embrace their inner capitalist, maximize profits, and then donate those profits to accomplish the maximum amount of good.

Are we really doing the earn-to-give thing again here? But apart from the snark, there isn't really an argument here, beyond again implicitly associating capitalism with badness. EA people have also warned about the dangers of maximisation before, so this isn't unknown to the movement.

Bregman saw EA’s demise long before the downfall of the movement’s poster child, Sam Bankman-Fried

Is this implying that EA is dead (news to me) or that it is in terminal decline (arguable, but knowledge of the future is difficult etc etc)?

he [Rutger] says the movement [EA] ultimately “always felt like moral blackmailing to me: you’re immoral if you don’t save the proverbial child. We’re trying to build a movement that’s grounded not in guilt but enthusiasm, compassion, and problem-solving.

I mean, this doesn't sound like an argument against EA or EA ideas? It's perhaps why Rutger felt put off by the movement, but if you want a movement based on 'enthusiasm, compassion, and problem-solving' (which are still very EA traits to me, btw), presumably that's because it would do more good than a movement wracked by guilt. This just falls victim to classic EA Judo, we win by ippon.

I don't know, maybe Rutger has written up his criticism more thoroughly somewhere. This article feels like such a weak summary of it, though, and just leaves me feeling frustrated. And in a bunch of places, it's really EA! See:

  • Using Rob Mather founding AMF as a case study (and who has a better EA story than AMF?)
  • Pointing towards reducing consumption of animals via less meat-eating
  • Even explicitly admiring EA's support for "non-profit charity entrepreneurship"

So where's the EA hate coming from? I think 'EA hate' is too strong and is mostly/actually coming from the interviewer, maybe more than Rutger. Seems Rutger is very disillusioned with the state of EA, but many EAs feel that way too! Pinging @Rutger Bregman or anyone else from the EA Netherlands scene for thoughts, comments, and responses.

In general, I think the article's main point was to promote Moral Ambition, not to be a criticism of EA, so it's not surprising that it's not great as a criticism of EA.

 

Not really sure how donating ~10% of my income to Global Health and Animal Welfare charities matches that framework tbqh. But yeah, 'weaponize' is highly aggressive language here; if you take it out, there's not much wrong with it. Maybe Rutger or the interviewer think Capitalism is inherently bad or something?

For what it's worth, Rutger has been donating 10% to effective charities for a while and has advocated for the GWWC pledge many times.

So I don't think he's against that, and lots of people have taken the 10% pledge specifically because of his advocacy.

Is this implying that EA is dead (news to me) or that it is in terminal decline (arguable, but knowledge of the future is difficult etc etc)?

I think sadly this is a relatively common view; see e.g. the deaths of effective altruism, good riddance to effective altruism, and EA is no longer in ascendancy.

I mean, this doesn't sound like an argument against EA or EA ideas?

I think this is also a common criticism of the movement, though (e.g. Emmett Shear on why he doesn't sign the 10% pledge).

 

This just falls victim to classic EA Judo, we win by ippon.

I think this mixes effective altruism's ideals/goals (which everyone agrees with) with EA's specific implementation, movement, culture and community. Also, arguments and alternatives are not really about "winning" and "losing".

 

So where's the EA hate coming from? I think 'EA hate' is too strong and is mostly/actually coming from the interviewer, maybe more than Rutger. Seems Rutger is very disillusioned with the state of EA, but many EAs feel that way too!

Then you probably agree that it's great that they're starting a new movement with similar ideals! Personally, I think it has a huge potential, if nothing else because of this:

If we want millions of people to e.g. give effectively, I think we need to have multiple "movements", "flavours" or "interpretations" of EA projects.

You might also be interested in this previous thread on the difference between EA and Moral Ambition.

Feels like you've slightly misunderstood my point of view here, Lorenzo? Maybe that's on me for not communicating it clearly enough, though.

For what it's worth, Rutger has been donating 10% to effective charities for a while and has advocated for the GWWC pledge many times...So I don't think he's against that, and lots of people have taken the 10% pledge specifically because of his advocacy

That's great! Sounds like very 'EA' to me 🤷

I think this mixes effective altruism ideals/goals (which everyone agrees with) with EA's specific implementation, movement, culture and community.

I'm not sure everyone does agree, really; some people have foundational moral differences. But that aside, I think effective altruism is best understood as a set of ideas/ideals/goals. I've been arguing that on the Forum for a while and will continue to do so. So I don't think I'm mixing them up; I think the critics are.

This doesn't mean that they're not pointing out very real problems with the movement/community. I still strongly think that the movement has a lot of growing pains/reforms/reckonings to go through before we can heal the damage of FTX and onwards.

The 'win by ippon' was just a jokey reference to Michael Nielsen's 'EA judo' phrase, not me advocating for soldier over scout mindset.

If we want millions of people to e.g. give effectively, I think we need to have multiple "movements", "flavours" or "interpretations" of EA projects.

I completely agree! Like 100000% agree! But that's still 'EA'? I just don't understand trying to draw such a big distinction between SMA and EA in the case where they reference a lot of the same underlying ideas.

So I don't know, feels like we're violently agreeing here or something? I didn't mean to suggest anything otherwise in my original comment, and I even edited it to make it clearer that I was more frustrated at the interviewer than anything Rutger said or did (it's possible that a lot of the non-quoted phrasing was put in his mouth).

feels like we're violently agreeing here or something

Yes, I think this is a great summary. Hopefully not too violently?

I mostly wanted to share my (outsider) understanding of MA and its relationship with EA

No, I really appreciated your perspective, both on SMA and what we mean when we talk about 'EA'. It's definitely given me some good food for thought :)

Was going to post this too! It's good for the community to know about these critiques and alternatives to EA. However, as JWS has already pointed out, the critiques are weak or based on a strawman version of EA.

But overall, I like the sound of the 'Moral Ambition' project, given its principles align so well with EA's. Though there is a risk of confusing outsiders given how similar the goals are, and also a risk of people being unfairly put off EA if they get such a biased perspective.
