I'm very happy to see effective altruism community members write public posts about EA organizations, where they point out errors, discuss questionable choices, and ask hard questions. I'd like to see more cases where someone digs deeply into an org's work and writes about what they find; a simple "this checked out" or a "critical feedback that helps the org improve" are both good outcomes. Even a "this org is completely incompetent" is a good outcome: I'd rather orgs be competent of course, but in the cases where they aren't we want to know so we can explore alternatives.

When posting critical things publicly, however, unless it's very time-sensitive we should generally be letting orgs review a draft first. This allows the org to prepare a response if they want, which they can post right when your post goes out, usually as a comment. It's very common that there are important additional details that you don't have as someone outside the org, and it's good for people to be able to review those details alongside your post. If you don't give the org a heads-up, they need to choose between:

  • Scrambling to respond as soon as possible, including working on weekends or after hours and potentially dropping other commitments, or

  • Accepting that with a late reply many people will see your post, some will downgrade their view of the org, and most will never see the follow-up.

If you're thinking tactically, surprising them works to your advantage. But within the community we're working towards the same goals: you're not trying to win a fight, you're trying to help us all get closer to the truth.

In general I think a week is a good amount of time to give for review. I often say something like "I was planning to publish this on Tuesday, but let me know if you'd like another day or two to review?" If a key person is out I think it's polite to wait a bit longer (and this likely gets you a more knowledgeable response) but if the org keeps trying to push the date out you've done your part and it's fine to say no.

Sometimes orgs will respond with requests for changes, or try to engage you in private back-and-forth. While you're welcome to make edits in response to what you learn from them, you don't have an obligation to: it's fine to just say "I'm planning to publish this as-is, and I'd be happy to discuss your concerns publicly in the comments."

[EDIT: I'm not advocating this for cases where you're worried that the org will retaliate or otherwise behave badly if you give them advance warning, or for cases where you've had a bad experience with an org and don't want any further interaction. For example, I expect Curzi didn't give Leverage an opportunity to prepare a response to My Experience with Leverage Research, and that's fine.]

For orgs, when someone does do this it's good to thank them in your response. Not only is it polite to acknowledge it when someone does you a favor, it also helps remind people that sharing drafts is good practice.

As a positive example, I think the recent critical post, Why I don't agree with HLI's estimate of household spillovers from therapy, handled this well: if James had published it publicly on a Sunday night with no warning, HLI would have been scrambling to put together a response. Instead, James shared it in advance and we got a much more detailed response from HLI, published at the same time as the rest of the piece, which was really helpful for outsiders trying to make sense of the situation.

The biggest risk here, as Ben points out, is that faced with the burden of sharing a draft and waiting for a response some good posts won't happen. To some people this sounds a bit silly (if you have something important to say and it's not time sensitive, is it really so bad to send a draft and set a reminder to publish in a bit?) but not to me. I think this depends a lot on how people's brains work, but for some of us a short (or no!) gap between writing and publishing is an incredibly strong motivator. I'm writing this post in one sitting, and while I think I'd still be able to write it up if I knew I had to wait a week, I know from experience this isn't always the case.

This is a strong reason to keep reviews low-friction: orgs should not be guilting people into making changes, or (in the typical case) pushing for more time. Even if the process is as frictionless as possible, there's the unavoidable issue of delay being unpleasant, and I expect this norm does lose us a few good posts. Given how stressful it is to rush out responses, however, and the lower quality of such responses, I think it's a good norm on balance.

Comments (13)



The discrepancy between this post's net karma here (171) and on LessWrong (19) is striking.

So is the number of comments here (5 at time of this comment) vs. there (69).

The EA Forum has recently had some very painful experiences where members of the community jumped to conclusions and tried to oust people on very flimsy evidence, and now we're seeing upvotes from people who are sick of that dynamic.

LessWrong commenters did a better job of navigating accusations, waiting for evidence, and downvoting low-quality combativeness. Running off half-cocked hasn't had such disastrous effects there, so fewer people are currently sick of it.

As many have noted, this recommendation will usually yield good results when the org responds cooperatively and bad results when the org responds defensively. It is an org’s responsibility to demonstrate that they will respond cooperatively, not a critic’s responsibility to assume. Defensive responses aren’t, like, rare.

To be more concrete, I personally would write to GiveWell before posting a critique of their work, because they have responded to past critiques with deep technical engagement, blog posts celebrating the critics, large cash prizes, etc. I would not write to CEA before posting a critique of their work, because they have responded to exactly this situation by breaking a confidentiality request in order to better prepare an adversarial public response to the critic's upcoming post. People who aren't familiar with deep EA lore won't know all this stuff and shouldn't be expected to take a leap of faith.

This does mean that posts with half-cocked accusations will get more attention than they deserve. This is certainly a problem! My own preferred solution to this would be to stop trusting unverifiable accusations from burner accounts. Any solution will face tradeoffs.

(For someone in OP’s situation, where he has extensive and long-time knowledge of many key EA figures, and further is protected from most retaliation because he’s married to Julia Wise, who is a very influential community leader, I do indeed think that running critical posts by EA orgs will often be the right decision.)

Just came across @Raemon saying something similar in 2017:

Running critical pieces by the people you're criticizing is necessary, if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think "wait a week" is probably a reasonable norm)


This seems mostly reasonable, but also seems like it has some unstated (rare!) exceptions that maybe seem too obvious to state, but that I think would be good to state anyway.

E.g. if you already have reason to believe an organization isn't engaging in good faith, or is inclined to take retribution, then giving them more time to plan that response doesn't necessarily make sense.

Maybe some other less extreme examples along the same lines.

I wouldn't be writing this comment if the language in the post hedged a bit more / left more room for exceptions, but reading a sentence like this makes me want to talk about exceptions:

When posting critical things publicly, however, unless it's very time-sensitive we should be letting orgs review a draft first.

I'd go a bit further. The proposed norm has several intended benefits: promoting fairness to the criticized organization by not blindsiding the organization, generating higher-quality responses, minimizing fire drills for organizations and their employees, etc. I think it is a good norm in most cases.

However, there are some circumstances in which the norm would not significantly achieve its intended goals. For instance, the rationale behind the norm will often have less force where the poster is commenting on the topic of a fresh news story. The organization already feels pressure to respond to the news story on a news-cycle timetable; the marginal burden of additionally having a discussion of the issue on the Forum is likely modest. If the media outlet gave the org a chance to comment on the story, the org should also not be blindsided by the issue.

Likewise, criticism in response to a recent statement or action by the organization may or may not trigger some of the same concerns as more out-of-the-blue criticism. Where the nature of the statement/action is such that the criticism was easily foreseeable, the organization should already be in a place to address it (and was not caught unawares by its own statement/action). This assumes, of course, that the criticism is not dependent on speculation about factual matters or the like.

Also, I think the point about a delayed statement being less effective at conveying a message goes both ways: if an organization says or does something today, people will care less about a poster's critical reaction published eight days later than a reaction posted shortly after the organization's action/statement.

Finally, there may also be countervailing reasons that outweigh the norm's benefits in specific cases.

Makes sense.

Edited to add something covering this, thanks!

within the community we're working towards the same goals: you're not trying to win a fight, you're trying to help us all get closer to the truth.

This is an aside, but it’s an important one:

Sometimes we're fighting! Very often it's a fight over methods between people who share goals, e.g. fights about whether or not to emphasize unobjectionable global health interventions and downplay the weird stuff in official communication. Occasionally it's a good-faith fight between people with explicit value differences, e.g. fights about whether to serve meat at EA conferences. Sometimes it's a boring old struggle for power, e.g. SBF's response to the EAs who attempted to oust him from Alameda in ~2018.

Personally I think that some amount of fighting is critical for any healthy community. Maybe you disagree. Maybe you wish EA didn’t have any fighting. But acting as if this were descriptively true rather than aspirational is clearly incorrect.

In case it's not obvious, the importance of previewing a critique also depends on the nature of the critique and the relative position of the critic and the critiqued. I think those who are arguably "punching down" should be more generous and careful than those "punching up".

The same goes for the implications of the critique “if true”, whether it’s picking nits or questioning whether the organisation is causing net harm. 

That said, I think these considerations only make a difference between waiting one or two weeks for a response and sending one versus several emails to a couple of people if there’s no response the first time. 

I think these considerations only make a difference between waiting one or two weeks for a response and sending one versus several emails to a couple of people if there’s no response the first time.

I'm not sure I understand this part?

If you're sending a draft as a heads up and don't get a response, I don't think politeness requires sending several emails or waiting more than a week?

In general, I agree politeness doesn't require that — but I'd encourage following up in case something got lost in a spam folder, especially if the critique could be quite damaging to its subject.
