within the community we're working towards the same goals: you're not trying to win a fight, you're trying to help us all get closer to the truth.
This is an aside, but it’s an important one:
Sometimes we're fighting! Very often it's a fight over methods between people who share goals, e.g. fights about whether or not to emphasize unobjectionable global health interventions and downplay the weird stuff in official communication. Occasionally it's a good-faith fight between people with explicit value differences, e.g. fights about whether to serve meat at EA conferences. Sometimes it's a boring old struggle for power, e.g. SBF's response to the EAs who attempted to oust him from Alameda in ~2018.
Personally I think that some amount of fighting is critical for any healthy community. Maybe you disagree. Maybe you wish EA didn't have any fighting. But acting as if the absence of fighting were descriptively true rather than aspirational is clearly incorrect.
As many have noted, this recommendation will usually yield good results when the org responds cooperatively and bad results when the org responds defensively. It is an org's responsibility to demonstrate that it will respond cooperatively, not a critic's responsibility to assume that it will. Defensive responses aren't, like, rare.
To be more concrete, I personally would write to GiveWell before posting a critique of their work because they have responded to past critiques with deep technical engagement, blog posts celebrating the critics, large cash prizes, etc. I would not write to CEA before posting a critique of their work because they have responded to exactly this situation by breaking a confidentiality request in order to better prepare an adversarial public response to the critic's upcoming post. People who aren't familiar with deep EA lore won't know all this stuff and shouldn't be expected to take a leap of faith.
This does mean that posts with half-cocked accusations will get more attention than they deserve. This is certainly a problem! My own preferred solution to this would be to stop trusting unverifiable accusations from burner accounts. Any solution will face tradeoffs.
(For someone in OP’s situation, where he has extensive, long-standing knowledge of many key EA figures, and further is protected from most retaliation because he’s married to Julia Wise, who is a very influential community leader, I do indeed think that running critical posts by EA orgs will often be the right decision.)
I actually do know the real names of the people who wrote about Brent. It’s one of those “community insiders know who they were but it’s hard to tell from the outside” situations, like the one I described with pre-doxxing Scott Alexander. If the authors had been anonymous for real then I don’t think it would’ve worked anywhere near as well. This approach avoids most of the downsides of actually-unknown-and-unaccountable burner accounts and I do not object to it.
Can you name examples of this working? Because I've seen a good number of anonymous public accusations on this forum and I don't recall any that led to the outcome you describe. I understand this theory of change but it sure doesn't seem to work that way in real life.
In contrast, I know of many cases where backchannel reporting to trusted third parties has led to results. If someone is not willing to speak up publicly, then using whisper networks or official reporting channels has a much better track record than making burner accusations on the EA Forum. I am somewhat worried about people making an ineffective burner account post and feeling like they've done their job, when otherwise they would've mustered up their courage and told the conference organizer.
We will also never know about serious issues if people are too afraid to speak up in a way that can be trusted and acted on. Creating a burner account out of fear might be a psychologically understandable reaction (although I suspect its prevalence is overstated), but it is not an effective or tactically appropriate one. Burner account accusations get upvotes and public sympathy, but they don't accomplish much else. Actual change requires someone to stick their neck out, whether in a public post or through influential backchannels. There is no substitute for courage.
There was one incorrect claim ("AI safetyists encourage work at AGI companies")
"AI safetyists" absolutely do encourage work at AGI companies. To take one of many examples, 80,000 Hours are "AI safetyists", and their job board currently encourages work at OpenAI, Deepmind, and Anthropic, which are AGI companies.
(I haven't watched the video.)
Ultimately, my point is that one reason for using a burner account (as in my case) is that if you don't belong to the "inner circle" of funders and grantees, then I believe different rules apply to you. If you want to join that inner circle, you had better not question grants by directly emailing OP. And once you're inside the inner circle but want to criticise grants, you must use a burner account or risk being de-funded or blacklisted.
Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter or remain in an inner ring which you believe to be deeply unethical. Which would be bad! Don't do that! Normally I don’t go around demanding that people be willing to make personal sacrifices for the greater good if they want to be taken seriously, but this is literally a forum for self-declared altruists.
Several times I’ve received lucrative offers and overtures from sources (including one EA fund) that seemed corrupt in ways that resemble how you think your funder is corrupt. Each time my reaction has been “I’ve gotta end this relationship ASAP. This will be used to pressure me into going along with corruption. Better to remove their power over me on my terms.” This was clearly correct in hindsight; it saved me and my team from some entanglements that would have made it harder to pursue our mission, and it left me free to talk about the bad stuff I saw as much as I want to. While I did pass up a lot of money for myself and my organization, we’re doing fine now. None of this was some crazy-advanced Sun Tzu maneuver; it's common knowledge that refusing dirty money is the right thing to do. The catch is that you actually have to give up the money.
I dunno, a lot of these burner account accusations just strike me as trying to provoke a fight that the poster themselves lacks the courage and conviction to actually participate in, and I have very little patience for “let’s you and him fight”. I assume that the point of posting this stuff is to advocate for some sort of change, but that can’t happen unless specific people lead the charge. And if you’re not willing to bear any costs at all, then why should anyone else pick up your banner? Even if I wanted to, how would I lead the charge against “my friend who I won’t name got the impression that someone else who I won’t name did something bad, based on circumstantial evidence that you can’t check”? Questions of right and wrong aside, this plan just won’t work; you can’t actually lead from the rear like this.
Given your stated beliefs, your moral duty is either to become a “troublemaker”, even if the risk to your career is real, or else to cut yourself off from the dirty money and go do something that’s not compromised. Personally I’ve usually chosen the latter option when I’ve faced similar dilemmas, but I have a ton of respect for good-faith troublemakers.
Yeah, pseudonyms are great. There have been recent debates about people using one-off burner accounts to make accusations, but those don't reflect at all on the merits of using durable pseudonyms for general conversation.
The degree of reputation and accountability that durable pseudonyms provide might be less than that of a wallet name, but it's still substantial, and in practice it's a perfectly sufficient foundation for good discourse.
Looking back five months later, can you say anything about whether this program ended up matching people with new jobs or opportunities, and if so how many? Thanks!